Search Results

Search found 6769 results on 271 pages for 'django sessions'.

  • CodePlex Daily Summary for Friday, July 06, 2012

    CodePlex Daily Summary for Friday, July 06, 2012Popular ReleasesTaskScheduler ASP.NET: Release 3 - 1.2.0.0: Release 3 - Version 1.2.0.0 That version was altered only the library: In TaskScheduler was added new properties: UseBackgroundThreads Enables the use of separate threads for each task. StoreThreadsInPool Manager enables to store in the Pool threads that are performing the tasks. OnStopSchedulerAutoCancelThreads Scheduler allows aborting threads when it is stopped. false if the scheduler is not aborted the threads that are running. AutoDeletedExecutedTasks Allows Manager Delete Task afte...xUnit.net Contrib: xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1): xunitcontrib release 0.6 (ReSharper runner) This release provides a test runner plugin for Resharper 7.0 (EAP build 82) and 6.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1) Also note that all builds work against ALL VERSIONS...Umbraco CMS: Umbraco 4.8.0 Beta: Whats newuComponents in the core Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists Easier Lucene searching built in IFile providers for easier file handling Updated 3rd party libraries Applications / Trees moved out of the database SQL Azure support added Various bug fixes Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...CODE Framework: 4.0.20704.0: See CODE Framework (.NET) Change Log for changes in this version.?????????? - ????????: All-In-One Code Framework ??? 2012-07-04: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ???OneCode??????,??????????10????Microsoft OneCode Sample,????4?Windows Base Sample,2?XML Sample?4?ASP.NET Sample。???????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Windows Base Sample CSCheckOSBitness VBCheckOSBitness CSCheckOSVersion VBCheckOSVersion XML Sample CSXPath VBXPath ASP.NET Sample CSASPNETDataPager VBASPNET...sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.0: The first release of sheetengine. See sheetengine.codeplex.com for a list of features and examples.AssaultCube Reloaded: 2.5.1 Intrepid Fixed: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, download the Linux package. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) If you use the default maprot or any maprot, you need to fix it Well, 2.5 was...xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1Build #1600 Important note for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. 
Important note for VS2012 users: The VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. The move operation can be undone like the insert, update and delete operatio...D3API.Net: TESTING TOOLS (PRE-BLIZZARD API RELEASE): PLEASE NOTE: This release is COMPLETELY SEPARATE from the API. It is intended only so development with this API can begin. This will not be a maintained part of the project. (The Test Application might evolve, but the Test API will not) This release is to address the issue that since Blizzard hasn't released the API, you cannot test this API. Because of this, I decided to create a Test Application AND a Test API that includes the following: Test Application: --Has built in examples from Bl...RTF DOM Parser: 2012-7-3 Relasese: Fix some bug when parse RTFBlackJumboDog: Ver5.6.6: 2012.07.03 Ver5.6.6 (1) ???????????ftp://?????????、????LIST?????Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...CommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsnopCommerce. 
Open source shopping cart (ASP.NET MVC): nopcommerce 2.60: Highlight features & improvements: • Significant performance optimization. • Use AJAX for adding products to the cart. • New flyout mini-shopping cart. • Auto complete suggestions for product searching. • Full-Text support. • EU cookie law support. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).THE NVL Maker: The NVL Maker Ver 3.51: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 ????:http://115.com/file/beoef05k#THE-NVL-Maker-ver3.51-sim.7z ????:http://www.mediafire.com/file/6tqdwj9jr6eb9qj/THENVLMakerver3.51tra.7z ======================================== ???? ======================================== 3.51 beta ???: ·?????????????????????? ·?????????,?????????0,?????????????????????? ·??????????????????????????? ·?????????????TJS????(EXP??) ·??4:3???,???????????????,??????????? ·?????????...????: ????2.0.3: 1、???????????。 2、????????。 3、????????????。 4、bug??,????。New Projects40FINGERS DotNetNuke Demo Skins: Collection of DotNetNuke Demo Skins, create for you by Timo Breumelhof of 40FINGERS. Check out the individual downloads for more informationASP.NET Virtual Templates: This project allows you to provide files like views, stylsheets and scripts embedded in an assembly to any web application by using the virtual file system.Bauble: Bauble is a dock launcher written in C# utilizing WPF. As a launcher, it contains an animated list application icons, and will open their program on click.BBQ Assistant: Project to create and maintain a Windows Phone application to allow users to enter BBQ events and record timelines.CharmFlyout - A Metro Flyout Custom Control: A custom control for displaying flyouts from the settings charm in Windows Metro style (WinRT) applications written in C# / XAML.dotNetDR_Auth2????API????: This is SUMMARYEntacts: Entacts app is a contact app for electronic contacts.FlMML customized: FlMML customized ?、FlMML?MML?????????????????????。 FlMML?Flash?????????????????。 MML????????????????????????????。 FluidDb: FluidDb is a better microORM. Unique features, excellent performance, and cleaner code in as few trips to the database as possible.Gabe's gubb Framework (GGF): GGF is a set of classes built to help you work with the REST-based gubb API(http://gubb.net). gubb is a list management site similar to (better than?) Remember the Milk. The core of GGF allows for object/transaction modeling and facilitation of HTTP-based requests. Written in C#.Grandshot 2: Grandshot 2 is an awesome 2D shooter with a great ragdoll and animation system, vehicles and lots of blood and gore. Written in VB.NetJason's CG: This is jason's CGJQS: This is a simple WriterService.Lincoln Wood: An evolutionary implementation of the next gen Lincoln Wood Community environment using MVC2 and other good stuff.Microsoft CRM PluginQuickDeploy: Small tool for deploy your CRM 2011 plugin very fast, especially in the development process. 
It can also be added into the build event in Visual Studio 2010.Morus: socialMouseBot: Prevent a PC from sleeping the silly way: move the mouse cursor on a timer!QIF AS9102 Form Design Study: This is a Visual Studio 2010 C# Winform application that uses a simple AS9102 form as a design study for consuming (C3) and producing (C2) QIF sample xml files.RaveIt: Windows phone 7 drumm machine appRESTFunctoids: RESTFunctoids for BizTalk 2010 allows you to consume REST Services directly from your map.Secure(): Secure() MS Repo This repository will host Microsoft-oriented code from my site Secure() at http://nathanv.comSharpMik: SharpMik is a library to play Amiga music using C#SharpSyslog: Syslog server lib for .Net/C# (v3.5) implementing RFC 5424 The Syslog Protocol. SuperMetroQuiz: O SuperQuiz é um jogo de Quiz para Windows 8, desenvolvido em C#, que utilizou como base o template grid, disponível no Visual Studio 2012 RC. takela: An ASP.Net MVC Razor Project. TaskScheduler ASP.NET: Simple Example of how to schedule tasks in ASP.NET. works in WebForms, MVC and others, dont need requests or infinite loops. provides full control over the taskTeenyGrab: Take screenshots and upload them to FTP server at the touch of a button.testdd07052012git01: cxvtestdd07052012git1: xzctestdd07052012hg01: cvtestddhg0705201201: xzctesttfs07052012tfs01: zxtesttom07052012git02: rfeThTa7Maged: Its Point of sale project TX264: A GUI for x264, ffmpeg, lame, faac, qaac, neroaacenc, oggenc, aften, lame, flac, mp4box and mkvtoolnix.Visual Studio Extension - Collapse Solution: Visual Studio extension that collapses every item in the Solution Explorer tool window at the solution or project level.Visual Studio Extension - Enable Code Analysis: Visual Studio extension that turns Code Analysis On or Off for all projects in the solution.VocalsBase: not foundWPF Active Directory Explorer: Robust and Extensible Active Directory Explorer and Editoryeg: . Net deneme

  • CodePlex Daily Summary for Tuesday, November 23, 2010

    CodePlex Daily Summary for Tuesday, November 23, 2010Popular ReleasesSilverlight and WP7 Exception Handling and Logging building block: first version: Zipped full source code.Wii Backup Fusion: Wii Backup Fusion 0.8.2 Beta: New in this release: - Update titles after language change - Tool tips for name/title - Transfer DVD to a specific image file - Download titles from wiitdb.com - Save Settings geometry - Titles and Cover language global in settings - Convert Files (images) to another format - Format WBFS partition - Create WBFS file - WIT path configurable in settings - Save last path in Files/Load - Sort game lists - Save column width - Sequenz of columns changeable - Set indicated columns in settings - Bus...TweetSharp: TweetSharp v2.0.0.0 - Preview 2: Preview 2 ChangesIntroduced support for .NET Framework 2.0 Supported Platforms: .NET 2.0, .NET 3.5 SP1, .NET 4.0 Client Profile, Windows Phone 7 Twitter API coverage: accounts blocks direct messages favorites friendships notifications saved searches timelines tweets users Preview 1 ChangesIntroduced support for OAuth Echo Supported Platforms: .NET 4.0, Windows Phone 7 Twitter API coverage: timelines tweets Version 2.0 Overview / RoadmapMajor rewrite for simplic...SQL Monitor: SQL Monitor 1.3: 1. change sys.sysprocesses to DMV: http://msdn.microsoft.com/en-us/library/ms187997.aspx select * from sys.dm_exec_connections select * from sys.dm_exec_requests select * from sys.dm_exec_sessions 2. adjust columns to fit without scrollingMiniTwitter: 1.60: MiniTwitter 1.60 ???? ?? Twitter ?????????????????????????????? ??????????????????、?????????????????? ?????????????????????????????? ???????????????????????????????????????Minemapper: Minemapper v0.1.1: Fixed 'Generate entire world image', wasn't working. Improved Java detection for biome support. Biomes button is automatically unchecked if Java cannot be found in either the path environment variable or the Windows Registry. Fixed problem where Height + and - buttons would change the height more than one each click. Added keyboard shortcuts for navigation controls. You can now press and hold navigation buttons to continuously pan/zoom.VFPX: FoxBarcode v.0.11: FoxBarcode v.0.11 - Released 2010.11.22 FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies to be used in VFP forms and reports, or exported to other applications. Its use and distribution is free for all Visual FoxPro Community. Whats is new? Added a third parameter to the BarcodeImage() method Fixed some minor bugs History FoxBarcode v.0.10 - Released 2010.11.19 - 85 Downloads Project page: FoxBarcodeDotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 1.1.0.5: What is new in DotNetAge 1.1.0.5 ?Document Library features and template added. Resolve issues of templates Improving publishing service performance Opml support added. What is new in DotNetAge 1.1 ? D.N.A Core updatesImprove runtime performance , more stabilize. The DNA core objects model added. Personalization features added that allows users create the personal website, manage their resources, store personal data DynamicUIFixed the PageManager could not move page node bug. ...ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. 
These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager tested on mozilla, safari, chrome, opera, ie 9b/8/7/6DotSpatial: DotSpatial 11-21-2010: This release introduces the following Fixed bugs related to dispose, which caused issues when reordering layers in the legend Fixed bugs related to assigning categories where NULL values are in the fields New fast-acting resize using a bitmap "prediction" of what the final resize content will look like. ImageData.ReadBlock, ImageData.WriteBlock These allow direct file access for reading or writing a rectangular window. Bitmaps are used for holding the values. Removed the need to stor...Sincorde Silverlight Library: 2010 - November: Silverlight 4 Visual Studio 2010 Microsoft Expression dependencies removed Design-Time bug fixedMDownloader: MDownloader-0.15.24.6966: Fixed Updater; Fixed minor bugs;WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.1: Version: 2.0.0.1 (Milestone 1): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ....NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.01: Added new extensions for - object.CountLoopsToNull Added new extensions for DateTime: - DateTime.IsWeekend - DateTime.AddWeeks Added new extensions for string: - string.Repeat - string.IsNumeric - string.ExtractDigits - string.ConcatWith - string.ToGuid - string.ToGuidSave Added new extensions for Exception: - Exception.GetOriginalException Added new extensions for Stream: - Stream.Write (overload) And other new methods ... Release as of dotnetpro 01/2011Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-19: Code samples for Visual Studio 2010Prism Training Kit: Prism Training Kit 4.0: Release NotesThis is an updated version of the Prism training Kit that targets Prism 4.0 and added labs for some of the new features of Prism 4.0. This release consists of a Training Kit with Labs on the following topics Modularity Dependency Injection Bootstrapper UI Composition Communication MEF Navigation Note: Take into account that this is a Beta version. If you find any bugs please report them in the Issue Tracker PrerequisitesVisual Studio 2010 Microsoft Word 2...Free language translator and file converter: Free Language Translator 2.2: Starting with version 2.0, the translator encountered a major redesign that uses MEF based plugins and .net 4.0. I've also fixed some bugs and added support for translating subtitles that can show up in video media players. Version 2.1 shows the context menu 'Translate' in Windows Explorer on right click. Version 2.2 has links to start the media file with its associated subtitle. Download the zip file and expand it in a temporary location on your local disk. At a minimum , you should uninstal...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.4 Released: Hi, Today we are releasing Visifire 3.6.4 with few bug fixes: * Multi-line Labels were getting clipped while exploding last DataPoint in Funnel and Pyramid chart. * ClosestPlotDistance property in Axis was not behaving as expected. 
* In DateTime Axis, Chart threw exception on mouse click over PlotArea if there were no DataPoints present in Chart. * ToolTip was not disappearing while changing the DataSource property of the DataSeries at real-time. * Chart threw exception ...Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 SR1: Sample Databases for Microsoft SQL Server 2008R2 (SR1)This release is dedicated to the sample databases that ship for Microsoft SQL Server 2008R2. See Database Prerequisites for SQL Server 2008R2 for feature configurations required for installing the sample databases. See Installing SQL Server 2008R2 Databases for step by step installation instructions. The SR1 release contains minor bug fixes to the installer used to create the sample databases. There are no changes to the databases them...VidCoder: 0.7.2: Fixed duplicated subtitles when running multiple encodes off of the same title.New ProjectsAlien Terminator: Alien TerminatorArchWeb 2.0: ArchWeb è un'applicazione che unita al sito web permette di gestire documenti aziendali e mail preimpostate. ArchWeb supporta file di qualsiasi estensione (di dimensini variabili fino a 1.5 GB), tutti i tipi di documenti del pacchetto Microsoft Office o del pacchetto Adobe.Azure Table Sync Lib: Developing custom sync providers for Windows Azure Table Storage to support the following sync scenarios 1. Azure Table <-> SQL Server 2. Azure Table <-> SQL CE 3. Azure Table <-> SQL AzureColecaoFilmes: Prova de conceito usando DDD, ASP.NET MVC 3, NHibernate, Fluent Nhibernate e Ninject.Coleotrack: Coleotrack: ASP.NET MVC Issue Tracking.dmgbois: Personal repo.DNN Social Bookmark: DotNetNuke plugin for adding social media components.docNET: docNET grew out of my own battles creating system documentation from inline comments in code files. Using the XML documentation file as base to generate documentation in different formats. As I'm kinda biased towards my own needs I'm starting from C#/ASP.NET => web output. Event Trivia: A Silverlight trivia application which can be played in between sessions at an event. It has been used at PDC09, PDC10 and MIX10 so far. Written by Pete Brown at Microsoft.F# MathProvider: F# Math Provider wrappers native Blas+Lapack runtimes for F# users to perform linear algebra easily. Frets Terminator: Frets TerminatorGurfing projects: Few summaryHexa.xText: Hexa.xText is a .Net command line tool to extract text from source files for later translation, just like the GNU xgettext does. Extracted text are placed inside po files that must be compiled into satellite assemblies through the included PO2Assembly MSBuild task.Hold Popup for Touch: Hold Popup offers an optional solution to get an event while having pressed a mouse button or have a touchdown over time. Instead of handling everything by yourself (such as timer for holding or handling events), hold popup handles all itself. Developed in .Net 4.0 for WPF.HumanResourceManagementSystem: Project NIIT MMS901 - Quater 5 Human Resource Management System People Management ModuleLive Messenger: Live Messenger is a Windows Live Messenger module for DNNMtechSystem: MultipointMouse Technology Education ChainNerdOnRails.ActiveQuery: an object-relational mapper for url-parameter written in .net for .netPHMS: Publishing House Management SystemRabbitEars: Thread safe event driven message notification for RabbitMQ using the .net API. 
Implmenet in vb.net project is a Visual Studio 2010 solution with two project the RabbitEars project and a demo windows form to allow users to test drive RabbitMQScreen Snapshot Manager: Screen Snapshot Manager is a desktop winforms application, which runs in background, and keeps taking snapshots of current screen after every 30 seconds for tracking purpose. :)SharePoint 2010 Workflow Actions IntelliSense: This project hosts a SharePoint 2010 Workflow Actions Schema file to assist developers in building Workflow Actions File for SharePoint Designer. Show Reader for MSDN Shows: Show Reader is a viewer for MSDN Shows based on .NET Framweork. In a single Window you can view your favorite shows with transcripts. Moreover, in the same window you can browse Microsoft web-sites where you can find Shows about .NET technologies, like the .NET Show, Channel9 and MSDN TV. ShowReader is available in two editions. A Windows Forms edition, written in Visual Basic 2005 and a Windows Presentation Foundation, written in Visual Basic 2008 It was developed by the Italian developer ...Subtitles Matcher: WPF application using prism and MEF automaticlly find and download subtitles for you media fileTweakSP2010: "Because SharePoint 2010 needs some Tweak'n" SharePoint Administration and Development extensions.TweetsSaver: ?????????zlibnet - c# zlib wrapper library: zlibnet - c# zlib wrapper library Features: -zip -unzip -compression/decompression stream -fast, since using the unmanaged zlib library -64bit support ZupSch: ZupSch is a little cms for student

  • How to safely reboot via First Boot script

    - by unixman
    With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11. I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11. While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system. One of these requirements is to go through a cycle of reboots after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in place. While Solaris 10 introduced a facility that helps here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of it, proceeding instead with logic that, while functional, does not take full advantage of the niceties bundled into Solaris 11 at no extra cost.

    When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between getting the operating system image payload delivered and the customized banking software installed is the notion of a First Boot script. I had a working example of this at one of the Oracle OpenWorld sessions a few years ago; we've since improved our documentation and have introduced sections where this is described in better detail. If you're looking at this for the first time and you've not worked with IPS and SMF previously, you might get the sense that the tasks are daunting. There is a set of technologies involved that are jointly engineered to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you'll need to wrap it into an SMF service and then package it into an IPS package. The IPS package then needs to be placed onto your IPS repository, so that it can subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems that you're installing Solaris and your software onto).

    With this blog post, I wanted to create a single place that outlines the entire process (simplistically), and provide a hint of how a good old "at" command can make the requirement of forcing an initial reboot handy. The syntax and references to commands here are based on running this on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1).

    Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "Ok, I've got some logic that I need executed AFTER Solaris is deployed and I need my own little script that would make that happen. How do I go about hooking that script into the Solaris 11 AI framework?" You might start here, in Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot". And as you do, you'll be confronted with commands that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle. svcbundle is an aid to creating manifests and profiles. It is awesome, but don't let its awesomeness overwhelm you.
    (See this How To article by my colleague Glynn Foster for a nice working example.) In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script into two manifests: an SMF service manifest and an IPS package manifest. ...and if you're new to XML, well then -- buckle up. We have some examples of small first boot scripts shown here, as templates to build upon. The structure of the script, particularly in how it leverages the SMF interfaces, is key. I won't go into that here as it is covered nicely in the doc link above. Let's say your script ends up looking like this (by the way: if things appear to be cut off in your browser, just select them, copy and paste into your editor and they'll be grabbed; the source gets captured even though the browser may not render it "correctly" - ah, computers).

#!/bin/sh

# Load SMF shell support definitions
. /lib/svc/share/smf_include.sh

# If nothing to do, exit with temporary disable
completed=`svcprop -p config/completed site/first-boot-script-svc:default`
[ "${completed}" = "true" ] && \
    smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed"

# Obtain the active BE name from beadm: The active BE on reboot has an R in
# the third column of 'beadm list' output. Its name is in column one.
bename=`beadm list -Hd | nawk -F ';' '$3 ~ /R/ {print $1}'`
beadm create ${bename}.orig
echo "Original boot environment saved as ${bename}.orig"

# ---- Place your one-time configuration tasks here ----
# For example, if you have to pull some files from your own pre-existing system:
/usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM
/usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE
# Clearly the above 2 lines represent some logic that you'd have to customize to fit your needs.
#
# Perhaps additional things you may want to do here might be of use, like
# (gasp!) configuring the ssh server for root login and X11 forwarding (for testing), and the like...
#
# Oh and by the way, after we're done executing all of our proprietary scripts we need to reboot
# the system in accordance with our operational software requirements to ensure all layered bits
# get initialized properly and pull in their own modules and components in the right sequence,
# subsequently.
# We need to set a "time bomb" reboot that takes place upon completion of this script.
# We already know that *this* script depends on the multi-user-server SMF milestone, so it should be
# safe for us to schedule a reboot for 5 minutes from now. The "at" job gets scheduled in the queue
# while our little script continues through the rest of the logic.
/usr/bin/at now + 5 minutes <<REBOOT
/usr/bin/sync
/usr/sbin/reboot
REBOOT

# ---- End of your customizations ----

# Record that this script's work is done
svccfg -s site/first-boot-script-svc:default setprop config/completed = true
svcadm refresh site/first-boot-script-svc:default
smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"

    ...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running the logic of your script constitutes a service, you need to create a service manifest. This is described here, in the middle of Chapter 13, under "Creating an IPS package for the script and service".
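    A quick way to convince yourself the script behaved as intended, once a test client has been installed, is to query the service from the running system. The sketch below is not part of the original article; it is a minimal, hypothetical smoke test that assumes the service name site/first-boot-script-svc used above and relies only on standard Solaris 11 commands (svcprop, svcs, atq).

#!/bin/sh
# Hypothetical smoke test (not from the original article); run on the client
# after the first-boot cycle has completed. Assumes the example service name above.

# The method script sets config/completed to true once its work is done.
svcprop -p config/completed site/first-boot-script-svc:default

# The service is expected to end up (temporarily) disabled after a successful run.
svcs -l site/first-boot-script-svc

# If the delayed reboot has not fired yet, the pending "at" job shows up in the queue.
atq

    If the property is still false, or the service sits in maintenance, running svcs -xv on the client usually points at the failing method.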
    Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following:

$ cd some_working_directory_for_this_project
$ mkdir -p proto/lib/svc/manifest/site
$ mkdir -p proto/opt/site
$ cp first-boot-script.sh proto/opt/site

    Then you would create the service manifest file like so:

$ svcbundle -s service-name=site/first-boot-script-svc \
    -s start-method=/opt/site/first-boot-script.sh \
    -s instance-property=config:completed:boolean:false \
    -o first-boot-script-svc-manifest.xml

    ...as described here, and place it into the directory hierarchy above. But before you place it into the directory, make sure to inspect the manifest and adjust the appropriate service dependencies. That is to say, you want to properly specify what milestone should be reached before your service runs. There's a <dependency> section that looks like this before you modify it:

<dependency restart_on="none" type="service" name="multi_user_dependency" grouping="require_all">
    <service_fmri value="svc:/milestone/multi-user"/>
</dependency>

    So if you'd like to have your service run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies than multi-user, and our intent to reboot the system may have significant ramifications if done prematurely), you would modify that section to read:

<dependency restart_on="none" type="service" name="multi_user_server_dependency" grouping="require_all">
    <service_fmri value="svc:/milestone/multi-user-server"/>
</dependency>

    Save the file and validate it:

$ svccfg validate first-boot-script-svc-manifest.xml

    Assuming there are no errors returned, copy the file over into the directory hierarchy:

$ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site

    Now that we've created the service manifest (.xml), create the package manifest (.p5m) file, named first-boot-script.p5m. Populate it as follows:

set name=pkg.fmri value=first-boot-script@1.0,5.11-0
set name=pkg.summary value="AI first-boot script"
set name=pkg.description value="Script that runs at first boot after AI installation"
set name=info.classification value=\
    "org.opensolaris.category.2008:System/Administration and Configuration"
file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \
    path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \
    group=sys mode=0444
dir path=opt/site owner=root group=sys mode=0755
file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \
    owner=root group=sys mode=0555

    Now we are going to publish this package into an IPS repository. If you don't have one yet, don't worry; you have two choices. You can either publish this package into your mirror of the Oracle Solaris IPS repo or create your own customized repo. The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched. From this point, you have two choices as well: you can create a repo that will be accessible by your clients either via HTTP or via NFS. Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo. This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11. We'll zero in on the basic elements for our needs here: we'll create the IPS repo directory structure hanging off a separate ZFS file system, and we'll tie it into an instance of pkg.depotd.
    We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows:

# zfs create rpool/export/MyIPSrepo
# pkgrepo create /export/MyIPSrepo
# svccfg -s pkg/server add MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpg pkg application
# svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081
# svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpg general framework
# svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true
# svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true
# svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200
# svcadm refresh application/pkg/server:MyIPSrepo
# svcadm enable application/pkg/server:MyIPSrepo

    Now that the IPS repo is created, we need to publish our package into it:

# pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m

    If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest) and re-publish the IPS package. Next, you need to go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package. We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands up above):

<publisher name="firstboot">
    <origin name="http://192.168.1.222:10081"/>
</publisher>

    Further down, in the <software_data action="install"> section, add:

<name>pkg:/first-boot-script</name>

    Make sure to update your Automated Install service with the new AI manifest via the installadm update-manifest command. Don't forget to boot your client from the network to watch the entire process unfold and your script get tested. Once the system makes the initial reboot, the first boot script will be executed, whatever logic you've specified in it will run, and a nice reboot will follow. When the system comes up, your service should stay in a disabled state, as specified by the tailing lines of your SMF script; this is normal and should be left as is, since it helps provide an auditing trail for you. Because the reboot is quite a significant action for the system, you may want to add additional logic to the script that places, and then checks for the presence of, certain lock files in order to avoid doing a reboot unnecessarily. You may also want to, alternatively, remove the SMF service entirely, if you're unsure of the potential for someone to try and accidentally enable that service, even though its role in life is to only run once upon the system's first boot. That is how I spent a good chunk of my pre-Halloween time this week; hope yours was just as SPARCkly^H^H^H^H fun!
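    For completeness, here is a small, hedged sketch (not from the original post) of how one might verify the custom repo and the published package before kicking off an AI install. It assumes the repo path /export/MyIPSrepo, the depot port 10081, the package name first-boot-script and the example IP address used above; the AI service name my-ai-service is purely hypothetical, so substitute your own values throughout.

#!/bin/sh
# Hypothetical verification helper (not from the original article); assumes the
# repo path, depot port, IP address and package name from the example above.

# 1. Is the package present in the local repository?
pkgrepo list -s /export/MyIPSrepo first-boot-script

# 2. Does the HTTP depot expose it to clients?
pkg list -g http://192.168.1.222:10081 first-boot-script

# 3. Is the updated AI manifest attached to the install service?
#    (replace my-ai-service with the name of your AI install service)
installadm list -m -n my-ai-service

    If all three steps report the expected package and manifest, the network boot of the client is the remaining end-to-end test.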

  • CodePlex Daily Summary for Monday, July 09, 2012

    CodePlex Daily Summary for Monday, July 09, 2012Popular ReleasesDbDiff: Database Diff and Database Scripting: 1.1.3.3: Sql 2005, Sql 2012 fixes Removed dbdiff recommended default exe because it was a wrong build.re-linq: 1.13.158: This is build 1.13.158 of re-linq. Find the complete release notes for the build here: Release NotesMVVM Light Toolkit: MVVM Light Toolkit V4 RTM: The issue with the installer is fixed, sorry for the problems folks This version supports Silverlight 3, Silverlight 4, Silverlight 5, WPF 3.5 SP1, WPF4, Windows Phone 7.0 and 7.5, WinRT (Windows 8). Support for Visual Studio 2010 and Visual Studio 2012 RC.BlackJumboDog: Ver5.6.7: 2012.07.08 Ver5.6.7 (1) ????????????????「????? Request.Receve()」?????????? (2) Web???????????TaskScheduler ASP.NET: Release 3 - 1.2.0.0: Release 3 - Version 1.2.0.0 That version was altered only the library: In TaskScheduler was added new properties: UseBackgroundThreads Enables the use of separate threads for each task. StoreThreadsInPool Manager enables to store in the Pool threads that are performing the tasks. OnStopSchedulerAutoCancelThreads Scheduler allows aborting threads when it is stopped. false if the scheduler is not aborted the threads that are running. AutoDeletedExecutedTasks Allows Manager Delete Task afte...DotNetNuke Persian Packages: ??? ?? ???? ????? ???? 6.2.0: *????? ???? ??? ?? ???? 6.2.0 ? ??????? ???? ????? ???? ??? ????? *????? ????? ????? ??? ??? ???? ??? ??????? ??????? - ???? *?????? ???? ??? ?????? ?? ???? ???? ????? ? ?? ??? ?? ???? ???? ?? *????? ????? ????? ????? ????? / ??????? ???? ?? ???? ??? ??? - ???? *???? ???? ???? ????? ? ??????? ??? ??? ??? ?? ???? *????? ????? ???????? ??? ? ??????? ?? ?? ?????? ????? ????????? ????? ?????? - ???? *????? ????? ?????? ????? ?? ???? ?? ?? ?? ???????? ????? ????? ????????? ????? ?????? *???? ?...xUnit.net Contrib: xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1): xunitcontrib release 0.6 (ReSharper runner) This release provides a test runner plugin for Resharper 7.0 (EAP build 82) and 6.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1) Also note that all builds work against ALL VERSIONS...Umbraco CMS: Umbraco 4.8.0 Beta: Whats newuComponents in the core Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists Easier Lucene searching built in IFile providers for easier file handling Updated 3rd party libraries Applications / Trees moved out of the database SQL Azure support added Various bug fixes Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...40FINGERS DotNetNuke StyleHelper: 40FINGERS StyleHelper Skin Object 02.06.00: Version 02.06.00 Beta If you upgrade to this version and have used redirecting in your existing skins, make sure to test them! Changes to Redirect: Intro: Redirect Logic changes The way redirect were handled in a previous version was quite basic. I mostly used it to redirect Mobile devices fromthe Home page of a regular website to a mobile site I came a cross a more complex problem for a client were certain page should be redirected to the matching mobile pages. 
To suppor...CODE Framework: 4.0.20704.0: See CODE Framework (.NET) Change Log for changes in this version.xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1Build #1600 Important note for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: The VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. The move operation can be undone like the insert, update and delete operatio...IronPython: 2.7.3: On behalf of the IronPython team, I'm happy to announce the final release of IronPython 2.7.3. This release includes everything from IronPython 54498, 62475, and 74478 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. The incompatibility with IronRuby has been resolved, and they can once again be installed side-by-side. The biggest improvements in IronPython 2.7.3 are: the...Media Assistant: Media Assistant 3.0 Alpha: Migrate Database from SQL Compact Framework to SQLite database.Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. 
tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...Office Integration Pack: Office Integration Pack 1.03: Version 1.03 add a couple of new capabilities including: Support for exporting images to Word content controls and tables The ability to import all data from Sheet1 with a column mapping, without specifying a range or worksheet.Enterprise Library 5 Caching with ProtoBuf.NET: Initial Release: Initial Release of implementationSharePoint Common Framework: SharePointCommon 1.1: - added lazy loading of any entity property - improved performance on big listsNew ProjectsAdf: wewfAttilaxToPinyin: ???????????? pinyinConvert.v20120709 。。?????????????,????,???,????. ---------------??------------ ??????,???Facebook???????????Twitter?????。????,Twitter????Build Time: Este es un proyecto software para la realización de horarios entres Profesores - Aulas - Cursos.ChainOfResponsibility: Exemple en C# d'implémentation du pattern Chain of ResponsibilityDauphine SmartControls for K2 blackpearl: Dauphine SmartControls for K2 blackpearl simplifies the integration of K2 blackpearl processes with ASP.NET web forms.Excel add-in for range databases: Range database for Excel.OhHai Browser: OhHai Browser is a opensource web browser running webkit.net OhHai Browser has just been released after a year of beta testing PHP File System: A PHP class library to facilitate working with file system.QScanLibrary: A simple anti-malware libraryRGraphWebApi: This is a project about the MVC4 Web Api Service Layers for the RGraph javascript charts library.Rhomble: RhombleShipping.ByTotal plugin for nopCommerce: This nopCommerce plugin is a shipping rate computation method that provides preconfigured shipping rates based on an order's subtotal.SideWinder Strategic Commander Revival: Provides an editor for the "Microsoft SideWinder Strategic Commander" that is compatible with the latest Windows edition.Software Testing Suite PowerShell: This is a suite project embracing several individual PowerShell modules. The project provides unified reporting and test management.TPM: C# program that edit and mix Texture Packs for game Minecraft.WCF with XML Serialization: This is simple WCF Web Service created with Web Service Software Factory PatternWindows 8 Minesweeper: A minesweeper clone in the Windows 8 Metro style, developed in JavaScript and HTML5.XWin Server: The Codeplex site for XWin Server/GUI??Nop???: ??Nop???,??Nop??

  • CodePlex Daily Summary for Saturday, July 07, 2012

    CodePlex Daily Summary for Saturday, July 07, 2012Popular ReleasesHigLabo: HigLabo_20120706: Breaking change Now HigLabo.Mail require reference to HigLabo.Net. ProtocolType change name to HttpProtocolType in HigLabo.Net project. AsyncCallErrorEventArgs change name to AsyncHttpCallErrorEventArgs. Delete command class in Pop3,Smtp that may not used. Other change Add HigLabo.Net.Ftp project.(Not complete) Create SocketClient that can easily communicate to server by Socket object.ecBlog: ecBlog 0.2: ecBlog alpha realaseTaskScheduler ASP.NET: Release 3 - 1.2.0.0: Release 3 - Version 1.2.0.0 That version was altered only the library: In TaskScheduler was added new properties: UseBackgroundThreads Enables the use of separate threads for each task. StoreThreadsInPool Manager enables to store in the Pool threads that are performing the tasks. OnStopSchedulerAutoCancelThreads Scheduler allows aborting threads when it is stopped. false if the scheduler is not aborted the threads that are running. AutoDeletedExecutedTasks Allows Manager Delete Task afte...DotNetNuke Persian Packages: ??? ?? ???? ????? ???? 6.2.0: *????? ???? ??? ?? ???? 6.2.0 ? ??????? ???? ????? ???? ??? ????? *????? ????? ????? ??? ??? ???? ??? ??????? ??????? - ???? *?????? ???? ??? ?????? ?? ???? ???? ????? ? ?? ??? ?? ???? ???? ?? *????? ????? ????? ????? ????? / ??????? ???? ?? ???? ??? ??? - ???? *???? ???? ???? ????? ? ??????? ??? ??? ??? ?? ???? *????? ????? ???????? ??? ? ??????? ?? ?? ?????? ????? ????????? ????? ?????? - ???? *????? ????? ?????? ????? ?? ???? ?? ?? ?? ???????? ????? ????? ????????? ????? ?????? *???? ?...xUnit.net Contrib: xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1): xunitcontrib release 0.6 (ReSharper runner) This release provides a test runner plugin for Resharper 7.0 (EAP build 82) and 6.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1) Also note that all builds work against ALL VERSIONS...Umbraco CMS: Umbraco 4.8.0 Beta: Whats newuComponents in the core Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists Easier Lucene searching built in IFile providers for easier file handling Updated 3rd party libraries Applications / Trees moved out of the database SQL Azure support added Various bug fixes Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...CODE Framework: 4.0.20704.0: See CODE Framework (.NET) Change Log for changes in this version.?????????? - ????????: All-In-One Code Framework ??? 
2012-07-04: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ???OneCode??????,??????????10????Microsoft OneCode Sample,????4?Windows Base Sample,2?XML Sample?4?ASP.NET Sample。???????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Windows Base Sample CSCheckOSBitness VBCheckOSBitness CSCheckOSVersion VBCheckOSVersion XML Sample CSXPath VBXPath ASP.NET Sample CSASPNETDataPager VBASPNET...xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1Build #1600 Important note for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: The VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. The move operation can be undone like the insert, update and delete operatio...IronPython: 2.7.3: On behalf of the IronPython team, I'm happy to announce the final release of IronPython 2.7.3. This release includes everything from IronPython 54498, 62475, and 74478 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. The incompatibility with IronRuby has been resolved, and they can once again be installed side-by-side. The biggest improvements in IronPython 2.7.3 are: the...BlackJumboDog: Ver5.6.6: 2012.07.03 Ver5.6.6 (1) ???????????ftp://?????????、????LIST?????Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. 
tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...CommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsnopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.60: Highlight features & improvements: • Significant performance optimization. • Use AJAX for adding products to the cart. • New flyout mini-shopping cart. • Auto complete suggestions for product searching. • Full-Text support. • EU cookie law support. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).THE NVL Maker: The NVL Maker Ver 3.51: http://download.codeplex.com/Download?ProjectName=nvlmaker&DownloadId=371510 ????:http://115.com/file/beoef05k#THE-NVL-Maker-ver3.51-sim.7z ????:http://www.mediafire.com/file/6tqdwj9jr6eb9qj/THENVLMakerver3.51tra.7z ======================================== ???? ======================================== 3.51 beta ???: ·?????????????????????? ·?????????,?????????0,?????????????????????? ·??????????????????????????? ·?????????????TJS????(EXP??) ·??4:3???,???????????????,??????????? ·?????????...????: ????2.0.3: 1、???????????。 2、????????。 3、????????????。 4、bug??,????。New ProjectsCode Bits: Set of useful code blocks that can be included in your code. Includes NuGet support.Critr: A personal project that takes formatted Excel show logs, parses them and uploads them to small local database for analytics.kb.net: An Open Source Knowledge Base based on SQL Server Express 2012 and .Net 4.0LyncServerExtension: L’objectif de ce projet est l’ajout de la fonctionnalité de délégation patron/secrétaire à Microsoft Lync Server 2010. MVC Web Api 4 Flot: MVC4 Web Api Service Layers for the Flot project on http://code.google.com/p/flot. Until now implemented only the GET method.ostests: testif is web and mobile assessment software. Create interactive tests easily and share them with your colleagues, employees and friends.Pegasus Attack: Pegasus Attack will be a simple shmup style game in the style of Truxton Basic features Multiple levels (text document written, just stores location of enemies) Basic enemies with basic AI (hard-coded, or from a text document) Various bullet types Title screen / Help screen / Control window / In-game game-states / two playable Characters Rainbow Dash and Fluttershy Basic effects (explosion animation) Items (powerups, guns, ...)proLearningEnglish: Apps RDF to build a software for learning English. 
Users are teachers and pupils in grades 6.Pusher .Net Client: This is a .Net client for Pusher (http://www.pusher.com) allowing .Net clients such as WinForms and Console applications to receive websocket messages.RadEditor Lite for AJAX: RadEditor Lite for AJAX modified from the open source Telerik Free Tool: RadEditor Lite for MOSS 2010. RconLibrary: Battlefield 3 RCON communication library.SharePoint Notes: Simple visual webpart to show list items as notes. Easy to modify, and not really complex.Software Manager: Software Manager is a software package that will help with distribution and licensing of programs that are developed with VB.NET or C#.StoreFramework: this project is a test framework about the codefirst and pocoTwitterRt - Tweet from Windows Metro Apps: Add the ability to tweet from your Metro style (WinRT) application. Binaries at nuget.org/packages/TwitterRt. Discussion at w8isms.blogspot.com.YucadagBlog: e

    Read the article

  • CodePlex Daily Summary for Sunday, July 08, 2012

    CodePlex Daily Summary for Sunday, July 08, 2012Popular ReleasesBlackJumboDog: Ver5.6.7: 2012.07.08 Ver5.6.7 (1) ????????????????「????? Request.Receve()」?????????? (2) Web???????????FlMML customized: FlMML customized ??: FlMML customized ????。 ??、PCM??????????、??????。ecBlog: ecBlog 0.2: ecBlog alpha realaseTaskScheduler ASP.NET: Release 3 - 1.2.0.0: Release 3 - Version 1.2.0.0 That version was altered only the library: In TaskScheduler was added new properties: UseBackgroundThreads Enables the use of separate threads for each task. StoreThreadsInPool Manager enables to store in the Pool threads that are performing the tasks. OnStopSchedulerAutoCancelThreads Scheduler allows aborting threads when it is stopped. false if the scheduler is not aborted the threads that are running. AutoDeletedExecutedTasks Allows Manager Delete Task afte...DotNetNuke Persian Packages: ??? ?? ???? ????? ???? 6.2.0: *????? ???? ??? ?? ???? 6.2.0 ? ??????? ???? ????? ???? ??? ????? *????? ????? ????? ??? ??? ???? ??? ??????? ??????? - ???? *?????? ???? ??? ?????? ?? ???? ???? ????? ? ?? ??? ?? ???? ???? ?? *????? ????? ????? ????? ????? / ??????? ???? ?? ???? ??? ??? - ???? *???? ???? ???? ????? ? ??????? ??? ??? ??? ?? ???? *????? ????? ???????? ??? ? ??????? ?? ?? ?????? ????? ????????? ????? ?????? - ???? *????? ????? ?????? ????? ?? ???? ?? ?? ?? ???????? ????? ????? ????????? ????? ?????? *???? ?...testtom07052012git02: r1: r1Cypher Bot: Cypher Bot 4.1: Cypher Bot is the most advanced encryption tool on the planet.... and now it actually works. That's right we fixed the bugs! For a full program summary go to the Home Page or visit www.iRareMedia.com So what's new? We've pretty much fixed all the bugs, but here's a run down if you wanna know exactly what's different: Fixed Installation / Setup Error, where an error message would display: "No Internet Connection, Try Again Later" Fixed File Encryption / Decryption error where the file exten...Coding4Fun Kinect Service: Coding4Fun Kinect Service v1.5: Requires Kinect for Windows SDK v1.5 Minor bug fixes + Kinect for Windows SDK v1.5 Aligning version with the Kinect for Windows SDK requiredtedplay: tedplay 1.1: tedplay 1.1 source and Win32 binary is out now. Changes are: SID card support Commodore 64 PSID music format support optimized FIR filter global hotkeys for skipping tracks (Windows only) module properties window (Windows only) mutable noise channel via GUI button (Windows only) disable SID card from the menu (Windows only) bugfixes PSID tunes are played on the C64 clock frequency but in a Commodore plus/4 virtual machine. The purpose is not to have yet another SID player, but t...xUnit.net Contrib: xunitcontrib-resharper 0.6 (RS 7.0, 6.1.1): xunitcontrib release 0.6 (ReSharper runner) This release provides a test runner plugin for Resharper 7.0 (EAP build 82) and 6.1, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) Copies of the plugin that support previous verions of ReSharper can be downloaded from this release. 
The plan is to support the latest revisions of the last two paid-for major versions of ReSharper (namely 7.0 and 6.1) Also note that all builds work against ALL VERSIONS...Umbraco CMS: Umbraco 4.8.0 Beta: Whats newuComponents in the core Multi-Node Tree Picker, Multiple Textstring, Slider and XPath Lists Easier Lucene searching built in IFile providers for easier file handling Updated 3rd party libraries Applications / Trees moved out of the database SQL Azure support added Various bug fixes Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to...CODE Framework: 4.0.20704.0: See CODE Framework (.NET) Change Log for changes in this version.xUnit.net - Unit testing framework for C# and .NET (a successor to NUnit): xUnit.net 1.9.1: xUnit.net release 1.9.1Build #1600 Important note for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. Important note for VS2012 users: The VS2012 runner is in the Visual Studio Gallery now, and should be installed via Tools | Extension Manager from insi...MVC Controls Toolkit: Mvc Controls Toolkit 2.2.0: Added Modified all Mv4 related features to conform with the Mvc4 RC Now all items controls accept any IEnumerable<T>(before just List<T> were accepted by most of controls) retrievalManager class that retrieves automatically data from a data source whenever it catchs events triggered by filtering, sorting, and paging controls move method to the updatesManager to move one child objects from a father to another. The move operation can be undone like the insert, update and delete operatio...IronPython: 2.7.3: On behalf of the IronPython team, I'm happy to announce the final release of IronPython 2.7.3. This release includes everything from IronPython 54498, 62475, and 74478 as well. Like all IronPython 2.7-series releases, .NET 4 is required to install it. Installing this release will replace any existing IronPython 2.7-series installation. The incompatibility with IronRuby has been resolved, and they can once again be installed side-by-side. The biggest improvements in IronPython 2.7.3 are: the...Mini SQL Query: Mini SQL Query (v1.0.68.441): Just a bug fix release for when the connections try to refresh after an edit. Make sure you read the Quickstart for an introduction.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.58: Fix for Issue #18296: provide "ALL" value to the -ignore switch to ignore all error and warning messages. Fix for issue #18293: if encountering EOF before a function declaration or expression is properly closed, throw an appropriate error and don't crash. Adjust the variable-renaming algorithm so it's very specific when renaming variables with the same number of references so a single source file ends up with the same minified names on different platforms. add the ability to specify kno...LogExpert: 1.4 build 4566: This release for the 1.4 version line contains various fixes which have been made some times ago. Until now these fixes were only available in the 1.5 alpha versions. It also contains a fix for: 710. 
Column finder (press F8 to show) Terminal server issues: Multiple sessions with same user should work now Settings Export/Import available via Settings Dialog still incomple (e.g. tab colors are not saved) maybe I change the file format one day no command line support yet (for importin...CommonLibrary.NET: CommonLibrary.NET 0.9.8.5 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. FluentscriptCommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Releases notes for FluentScript located at http://fluentscript.codeplex.com/wikipage?action=Edit&title=Release%20Notes&referringTitle=Documentation Fluentscript - 0.9.8.5 - Final ReleaseApplication: FluentScript Versio...SharePoint 2010 Metro UI: SharePoint 2010 Metro UI8: Please review the documentation link for how to install. Installation takes some basic knowledge of how to upload and edit SharePoint Artifact files. Please view the discussions tab for ongoing FAQsNew ProjectsAdventures of Adventure Land: Adventures of Adventure Land is a new text based adventure. You will be able to level up, fight challenging enemies, use magical spells, and simply adventure.AdventureWorks Silverlight samples: AdventureWorks Silverlight samplesAFS.Collab.Duplex: my duplexarmmychan: ????????C# - WPF - .Net - MSSQL - Open Source Restaurant POS System: A C# .Net / WPF / MSSQL Restaurant Point of Sale (POS) Software that is PCI-DSS 2.0 Compliant running stable in many restaurants / integrated with MercuryPay.dcview: dcinside.com? ??? ? ?? ?? ??? ???? ???.DotNetNuke Contact Form: simple yet effective DotNetNuke contact formISBNdb.com Helper Library: A .net helper library that encapsulates all the functionality of the API at www.isbndb.com in .NET CLR objects.JPO Class Register: Simple class register.NACHA C# Class with WPF Test App: Do you need to generate a NACHA PPD file? This is great starting point for you! Actually, it's a great, almost finished, point for you! More info below.NotificationsWidget In ASP MVC: SummaryPayPal Manager: I decided to make a basic (for now) desktop client for PayPal to get better at WebRequests in VB.NET. I will be adding much more such as sending payments, etc.Pomodoro Timer Count Down App: Pomodoro count down timer Application Features ------------------------------- Pomodoro Mode Count down timer mode Start stop pause Notification Powershell HTML Highlight: Powershell html syntax highlightingProject F10_P1: F10 p1Razor-sharp your skills: This project will have details about the C# 2.0 C# 3.0 C# 4.0 C# 5.0 Salaria: Bienvenue sur notre projet "Salaria".SharePoint 2010 - Unlock SP Files: Unlock any file in SharePoint or get lock information of a SharePoint file.( Compatible with office 365) ""The file "" is locked for exclusive use by""SharePoint 2010 Metro Masterpage: This project will give you a full metro masterpage for sharepoint 2010SharePoint Cache Refresh Framework: This project's aim is build a small and easy to use framework for SharePoint developers to be able to control cached objects across servers in a SharePoint FarmTFSProjectTest: A test project.uhimania test project: testUpgrade SPSolution: This is a Sharepoint 2010 Management Shell cmdlet, which upgrades a sharepoint solution and installs/activates any new features added in the package.Video Frame Explorer: Video Frame Explorer allows you to make thumbnails (caps, previews) of video files. 
It supports of practically any videos-formats (even MP4, MKV, MOV if you havWML: WML

    Read the article

  • NHibernate, and odd "Session is Closed!" errors

    - by Sekhat
    Note: Now that I've typed this out, I have to apologize for the super long question, however, I think all the code and information presented here is in some way relevant. Okay, I'm getting odd "Session Is Closed" errors, at random points in my ASP.NET webforms application. Today, however, it's finally happening in the same place over and over again. I am near certain that nothing is disposing or closing the session in my code, as the bits of code that use are well contained away from all other code as you'll see below. I'm also using ninject as my IOC, which may / may not be important. Okay, so, First my SessionFactoryProvider and SessionProvider classes: SessionFactoryProvider public class SessionFactoryProvider : IDisposable { ISessionFactory sessionFactory; public ISessionFactory GetSessionFactory() { if (sessionFactory == null) sessionFactory = Fluently.Configure() .Database( MsSqlConfiguration.MsSql2005.ConnectionString(p => p.FromConnectionStringWithKey("QoiSqlConnection"))) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<JobMapping>()) .BuildSessionFactory(); return sessionFactory; } public void Dispose() { if (sessionFactory != null) sessionFactory.Dispose(); } } SessionProvider public class SessionProvider : IDisposable { ISessionFactory sessionFactory; ISession session; public SessionProvider(SessionFactoryProvider sessionFactoryProvider) { this.sessionFactory = sessionFactoryProvider.GetSessionFactory(); } public ISession GetCurrentSession() { if (session == null) session = sessionFactory.OpenSession(); return session; } public void Dispose() { if (session != null) { session.Dispose(); } } } These two classes are wired up with Ninject as so: NHibernateModule public class NHibernateModule : StandardModule { public override void Load() { Bind<SessionFactoryProvider>().ToSelf().Using<SingletonBehavior>(); Bind<SessionProvider>().ToSelf().Using<OnePerRequestBehavior>(); } } and as far as I can tell work as expected. Now my BaseDao<T> class: BaseDao public class BaseDao<T> : IDao<T> where T : EntityBase { private SessionProvider sessionManager; protected ISession session { get { return sessionManager.GetCurrentSession(); } } public BaseDao(SessionProvider sessionManager) { this.sessionManager = sessionManager; } public T GetBy(int id) { return session.Get<T>(id); } public void Save(T item) { using (var transaction = session.BeginTransaction()) { session.SaveOrUpdate(item); transaction.Commit(); } } public void Delete(T item) { using (var transaction = session.BeginTransaction()) { session.Delete(item); transaction.Commit(); } } public IList<T> GetAll() { return session.CreateCriteria<T>().List<T>(); } public IQueryable<T> Query() { return session.Linq<T>(); } } Which is bound in Ninject like so: DaoModule public class DaoModule : StandardModule { public override void Load() { Bind(typeof(IDao<>)).To(typeof(BaseDao<>)) .Using<OnePerRequestBehavior>(); } } Now the web request that is causing this is when I'm saving an object, it didn't occur till I made some model changes today, however the changes to my model has not changed the data access code in anyway. Though it changed a few NHibernate mappings (I can post these too if anyone is interested) From as far as I can tell, BaseDao<SomeClass>.Get is called then BaseDao<SomeOtherClass>.Get is called then BaseDao<TypeImTryingToSave>.Save is called. it's the third call at the line in Save() using (var transaction = session.BeginTransaction()) that fails with "Session is Closed!" or rather the exception: Session is closed! 
Object name: 'ISession'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ObjectDisposedException: Session is closed! Object name: 'ISession'. And indeed following through on the Debugger shows the third time the session is requested from the SessionProvider it is indeed closed and not connected. I have verified that Dispose on my SessionFactoryProvider and on my SessionProvider are called at the end of the request and not before the Save call is made on my Dao. So now I'm a little stuck. A few things pop to mind. Am I doing anything obviously wrong? Does NHibernate ever close sessions without me asking to? Any workarounds or ideas on what I might do? Thanks in advance
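A note appended for readers: the symptom described (the third GetCurrentSession() call returning a closed session) is typical when the request-scoped container disposes the SessionProvider, or something else closes the ISession, before the last DAO call runs. The C# sketch below is a defensive variant of the SessionProvider from the question, not the poster's actual fix - it simply reopens a session that is no longer open, which at least localises the failure. It assumes the same SessionFactoryProvider shown above.

public class SessionProvider : IDisposable
{
    private readonly ISessionFactory sessionFactory;
    private ISession session;

    public SessionProvider(SessionFactoryProvider sessionFactoryProvider)
    {
        // Reuse the singleton factory; sessions are cheap to open, factories are not.
        this.sessionFactory = sessionFactoryProvider.GetSessionFactory();
    }

    public ISession GetCurrentSession()
    {
        // Reopen if the container (or any other caller) already closed the session
        // earlier in the request - this is the condition that throws "Session is closed!".
        if (session == null || !session.IsOpen)
            session = sessionFactory.OpenSession();
        return session;
    }

    public void Dispose()
    {
        if (session != null && session.IsOpen)
            session.Close();
    }
}

If the reopened session makes the error disappear, the next step is usually to check which behaviour Ninject really applies to SessionProvider on the failing request, since OnePerRequestBehavior disposing it early would explain all three symptoms.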

    Read the article

  • Facebook Connect login button not rendering

    - by tloflin
    I'm trying to implement a Facebook Connect Single Sign-on site. I originally just had a Connect button (<fb:login-button>), which the user had to click every time they wanted to sign in. I now have the auto login and logout features working. That is, my site will detect a logged-in Facebook account and automatically authenticate them if it can find a match to one of my site's user accounts, and automatically deauthenticate if the Facebook session is lost. I also have a manual logout button that will log the user out of both my site and Facebook. All of those are working correctly, but now my original Connect button is intermittently not being rendered correctly. It just shows up as the plain XHTML (ie., it looks like plain text--not a button--and is unclickable), and no XFBML is applied. Here is the basic code: On every page: <body> {...} <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> <script type="text/javascript> FB.init('APIKey', '/xd_receiver.htm'); var isAuth; // isAuth determines if the user is authenticated on my site // it should be true on the logout page, false on the login page FB.ensureInit(function(){ var session = FB.Facebook.apiClient.get_session(); if (session && !isAuth) { PageMethods.FacebookLogin(session.uid, session.session_key, FBLogin, FBLoginFail); // This is an AJAX call that authenticates the user on my site. } else if(!session && isFBAuth) { PageMethods.FacebookLogout(FBLogout, FBLogoutFail); // This is an AJAX call that deauthenticates the user on my site. } // the callback functions do nothing at the moment }); </script> {...} </body> On the login page: (this page is not visible to logged in users) <body> {...} <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> <script type="text/javascript> FB.init('APIKey', '/xd_receiver.htm'); {...} // auto-auth code as on every page </script> <!-- This is the button that fails to render --> <fb:login-button v="2" onlogin="UserSignedIntoFB();" size="large" autologoutlink="true"><fb:intl>Login With Your Facebook Account</fb:intl></fb:login-button> <script type="text/javascript"> function UserSignedIntoFB() { {...} // posts back to the server which authenticates the user on my site & redirects } </script> {...} </body> On the logout page: (this page is not visible to logged out users) <body> {...} <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script> <script type="text/javascript> FB.init('APIKey', '/xd_receiver.htm'); {...} // auto-auth code as on every page </script> <script type="text/javascript"> function FBLoggedOut() { {...} // posts back to the server which deauthenticates the user on my site & redirects to login } function Logout() { if (FB.Facebook.apiClient.get_session()) { FB.Connect.logout(FBLoggedOut); // logs out of Facebook if it's a Facebook account return false; } else { return true; // posts back if it's a non-Facebook account & redirects to login } } </script> <a onclick="return Logout();" href="postback_url">Sign Out</a> {...} </body> Some things I've already looked at: The automatic login and logout are working great. I can log in and out of Facebook on another tab and my site will notice the session changes and update accordingly. The logout seems to be working fine: when clicked, it deauthenticates the user and logs them out of Facebook, as intended. 
The issue seems to usually happen after a Facebook user is logged out, but it happens somewhat intermittently; it might happen before they ever login, and it goes away after a few minutes/refreshes. Some cookies are left over after the login/logout process, but deleting them does not fix the issue. Restarting the browser does not fix the issue. The user is definitely logged out of Facebook and my site when the problem occurs. I've checked for Facebook sessions and site authentication. All external script calls are being served up correctly. I have a suspicion that there's something else I need to be doing upon logout (like clearing session or cookies), but according to everything I've read about Facebook Connect, all I need to do is call the logout function (and deauthenticate on my server-side). I'm really at a loss; does anybody have any ideas what could be wrong?

    Read the article

  • Windows 2008 RenderFarm Service: CreateProcessAsUser "Session 0 Isolation" and OpenGL

    - by holtavolt
    Hello, I have a legacy Windows server service and (spawned) application that works fine in XP-64 and W2K3, but fails on W2K8. I believe it is because of the new "Session 0 isolation " feature. (Note: As a StackOverflow newbie I'm being limited to one link in this post, so you'll need to scroll to bottom to lookup the links for '' items)* Consequently, I'm looking for code samples/security settings mojo that let you create a new process from a windows service for Windows 2008 Server such that I can restore (and possibly surpass) the previous behavior. I need a solution that: Creates the new process in a non-zero session to get around session-0 isolation restrictions (no access to graphics hardware from session 0) - the official MS line on this is: Because Session 0 is no longer a user session, services that are running in Session 0 do not have access to the video driver. This means that any attempt that a service makes to render graphics fails. Querying the display resolution and color depth in Session 0 reports the correct results for the system up to a maximum of 1920x1200 at 32 bits per pixel. The new process gets a windows station/desktop (e.g. winsta0/default) that can be used to create windows DCs. I've found a solution (that launches OK in an interactive session) for this here: *(Starting an Interactive Client Process in C++ - 2) The windows DC, when used as the basis for an *(OpenGL DescribePixelFormat enumeration - 3), is able to find and use the hardware-accelerated format (on a system appropriately equipped with OpenGL hardware.) Note that our current solution works OK on XP-64 and W2K3, except if a terminal services session is running (VNC works fine.) A solution that also allowed the process to work (i.e. run with OpenGL hardware acceleration even when a terminal services session is open) would be fanastic, although not required. I'm stuck at item #1 currently, and although there are some similar postings that discuss this (like *(this -4), and *(this - 5) - they are not suitable solutions, as there is no guarantee of a user session logged in already to "take" a session id from, nor am I running from a LocalSystem account (I'm running from a domain account for the service, for which I can adjust the privileges of, within reason, although I'd prefer to not have to escalate priorities to include SeTcbPrivileges.) For instance - here's a stub that I think should work, but always returns an error 1314 on the SetTokenInformation call (even though the AdjustTokenPrivileges returned no errors) I've used some alternate strategies involving "LogonUser" as well (instead of opening the existing process token), but I can't seem to swap out the session id. I'm also dubious about using the WTSActiveConsoleSessionId in all cases (for instance, if no interactive user is logged in) - although a quick test of the service running with no sessions logged in seemed to return a reasonable session value (1). I’ve removed error handling for ease of reading (still a bit messy - apologies) //Also tried using LogonUser(..) here OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID | TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE, &hToken) GetTokenInformation( hToken, TokenSessionId, &logonSessionId, sizeof(DWORD), &dwTokenLength ) DWORD consoleSessionId = WTSGetActiveConsoleSessionId(); /* Can't use this - requires very elevated privileges (LOCAL only, SeTcbPrivileges as well) if( !WTSQueryUserToken(consoleSessionId, &hToken)) ... 
*/ DuplicateTokenEx(hToken, (TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID | TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE), NULL, SecurityIdentification, TokenPrimary, &hDupToken)) // Look up the LUID for the TCB Name privilege. LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid)) // Enable the TCB Name privilege in the token. tp.PrivilegeCount = 1; tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; if (!AdjustTokenPrivileges(hDupToken, FALSE, &tp, sizeof(TOKEN_PRIVILEGES), NULL, 0)) { DisplayError("AdjustTokenPrivileges"); ... } if (GetLastError() == ERROR_NOT_ALL_ASSIGNED) { DEBUG( "Token does not have the necessary privilege.\n"); } else { DEBUG( "No error reported from AdjustTokenPrivileges!\n"); } // Never errors here DEBUG(LM_INFO, "Attempting setting of sessionId to: %d\n", consoleSessionId ); if (!SetTokenInformation(hDupToken, TokenSessionId, &consoleSessionId, sizeof(DWORD))) *** ALWAYS FAILS WITH 1314 HERE *** All the debug output looks fine up until the SetTokenInformation call - I see session 0 is my current process session, and in my case, it's trying to set session 1 (the result of the WTSGetActiveConsoleSessionId). (Note that I'm logged into the W2K8 box via VNC, not RDC) So - a the questions: Is this approach valid, or are all service-initiated processes restricted to session 0 intentionally? Is there a better approach (short of "Launch on logon" and auto-logon for the servers?) Is there something wrong with this code, or a different way to create a process token where I can swap out the session id to indicate I want to spawn the process in a new session? I did try using LogonUser instead of OpenProcessToken, but that didn't work either. (I don't care if all spawned processes share the same non-zero session or not at this point.) Any help much appreciated - thanks! (You need to replace the 'zttp' with 'http' - StackOverflow restriction on one link in my newbie post) 2: http://msdn.microsoft.com/en-us/library/aa379608(VS.85).aspx 3: http://www.opengl.org/resources/faq/technical/mswindows.htm 4: http://stackoverflow.com/questions/2237696/creating-a-process-in-a-non-zero-session-from-a-service-in-windows-2008-server 5: http://stackoverflow.com/questions/1602996/how-can-i-lauch-a-process-which-has-a-ui-from-windows-service
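A note on the failing call: error 1314 is ERROR_PRIVILEGE_NOT_HELD, and SetTokenInformation with TokenSessionId requires SeTcbPrivilege to be enabled on the token of the calling process, not on the token being modified, so adjusting hDupToken (as in the stub above) has no effect. The C++ sketch below only rearranges calls already used in the question: it enables the privilege on the process token first. It assumes the service account has actually been granted "Act as part of the operating system" and omits error handling.

// Sketch only: enable SeTcbPrivilege on the *process* token, then retarget the
// duplicated token at the console session. Error handling omitted for brevity.
HANDLE hProcToken = NULL;
OpenProcessToken(GetCurrentProcess(),
                 TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
                 &hProcToken);

TOKEN_PRIVILEGES tp = { 0 };
tp.PrivilegeCount = 1;
tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid);
AdjustTokenPrivileges(hProcToken, FALSE, &tp, sizeof(tp), NULL, NULL);
// If GetLastError() returns ERROR_NOT_ALL_ASSIGNED here, the account does not
// hold SeTcbPrivilege at all and SetTokenInformation will keep failing with 1314.

DWORD consoleSessionId = WTSGetActiveConsoleSessionId();
SetTokenInformation(hDupToken, TokenSessionId,
                    &consoleSessionId, sizeof(consoleSessionId));

If granting SeTcbPrivilege to the domain account is not acceptable, the usual supported route is running the service as LocalSystem and calling WTSQueryUserToken for the target session, which is exactly the restriction the poster is trying to avoid.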

    Read the article

  • ASP.NET MVC RememberMe(It's large, please don't quit reading. Have explained the problem in detail a

    - by nccsbim071
    After searching a lot i did not get any answers and finally i had to get back to you. Below i am explaining my problem in detail. It's too long, so please don't quit reading. I have explained my problem in simple language. I have been developing an asp.net mvc project. I am using standard ASP.NET roles and membership. Everything is working fine but the remember me functionality doesn't work at all. I am listing all the details of work. Hope you guys can help me out solve this problem. I simply need this: I need user to login to web application. During login they can either login with remember me or without it. If user logs in with remember me, i want browser to remember them for long time, let's say atleast one year or considerably long time. The way they do it in www.dotnetspider.com,www.codeproject.com,www.daniweb.com and many other sites. If user logs in without remember me, then browser should allow access to website for some 20 -30 minutes and after that their session should expire. Their session should also expire when user logs in and shuts down the browser without logging out. Note: I have succesfully implemented above functionality without using standard asp.net roles and membership by creating my own talbes for user and authenticating against my database table, setting cookie and sessions in my other projects. But for this project we starting from the beginning used standard asp.net roles and membership. We thought it will work and after everything was build at the time of testing it just didn't work. and now we cannot replace the existing functionality with standard asp.net roles and membership with my own custom user tables and all the stuff, you understand what i am taling about. Either there is some kind of bug with standard asp.net roles and membership functionality or i have the whole concept of standard asp.net roles and membership wrong. i have stated what i want above. I think it's very simple and reasonable. What i did Login form with username,password and remember me field. My setting in web.config: <authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="2880"/> </authentication> in My controller action, i have this: FormsAuth.SignIn(userName, rememberMe); public void SignIn(string userName, bool createPersistentCookie) { FormsAuthentication.SetAuthCookie(userName, createPersistentCookie); } Now the problems are following: I have already stated in above section "I simply need this". user can successfully log in to the system. Their session exists for as much minutes as specified in timeout value in web.config. I have also given a sample of my web.config. In my samplem if i set the timeout to 5 minutes,then user session expires after 5 minutes, that's ok. But if user closes the browser and reopen the browser, user can still enter the website without loggin in untill time specified in "timeout" has not passed out. The sliding expiration for timeout value is also working fine. Now if user logs in to the system with remember me checked, user session still expires after 5 minutes. This is not good behaviour, is it?. I mean to say that if user logs in to the system with remember me checked he should be remembered for a long time untill he doesn't logs out of the system or user doesn't manually deletes all the cookies from the browser. If user logs in to the system without remember me checked his session should expire after the timeout period values specified in web.config and also if users closes the browser. 
The problem is that if the user closes the browser and reopens it, he can still enter the website without logging in. I searched the internet a lot on this topic, but I could not find a solution. Scott Gu wrote a blog post (http://weblogs.asp.net/scottgu/archive/2005/11/08/430011.aspx) on exactly this topic, and users are complaining about the same thing in the comments, but there is no easy solution given by Mr. Scott. I read about it in the following places: http://weblogs.asp.net/scottgu/archive/2005/11/08/430011.aspx http://geekswithblogs.net/vivek/archive/2006/09/14/91191.aspx I guess this is a problem for lots of users, as seen from the blog post made by Mr. Scott Gu. Your help will be really appreciated. Thanks in advance.
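One pattern worth trying (a sketch, not a confirmed fix for this poster's setup): instead of relying on FormsAuthentication.SetAuthCookie, build the FormsAuthenticationTicket yourself so the "remember me" cookie carries its own long expiry, while a normal login gets a browser-session cookie that dies when the browser closes. The one-year window and 30-minute fallback below are illustrative values, and the helper assumes System.Web and System.Web.Security are referenced.

// Sketch: a replacement for the SignIn helper from the question.
public void SignIn(string userName, bool createPersistentCookie)
{
    DateTime expires = createPersistentCookie
        ? DateTime.Now.AddYears(1)        // remembered users: long-lived ticket
        : DateTime.Now.AddMinutes(30);    // normal logins: short ticket

    var ticket = new FormsAuthenticationTicket(
        1, userName, DateTime.Now, expires, createPersistentCookie, string.Empty);

    var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                FormsAuthentication.Encrypt(ticket));
    cookie.HttpOnly = true;               // keep the ticket away from script
    if (createPersistentCookie)
        cookie.Expires = expires;         // no Expires => cookie dropped when the browser closes

    HttpContext.Current.Response.Cookies.Add(cookie);
}

Because a cookie without an Expires value is discarded when the browser shuts down, this also gives the "session ends on browser close" behaviour wanted for the non-remembered case, independent of the timeout attribute in web.config.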

    Read the article

  • ASP.NET RememberMe(It's large, please don't quit reading. Have explained the problem in detail and s

    - by nccsbim071
    After searching a lot i did not get any answers and finally i had to get back to you. Below i am explaining my problem in detail. It's too long, so please don't quit reading. I have explained my problem in simple language. I have been developing an asp.net mvc project. I am using standard ASP.NET roles and membership. Everything is working fine but the remember me functionality doesn't work at all. I am listing all the details of work. Hope you guys can help me out solve this problem. I simply need this: I need user to login to web application. During login they can either login with remember me or without it. If user logs in with remember me, i want browser to remember them for long time, let's say atleast one year or considerably long time. The way they do it in www.dotnetspider.com,www.codeproject.com,www.daniweb.com and many other sites. If user logs in without remember me, then browser should allow access to website for some 20 -30 minutes and after that their session should expire. Their session should also expire when user logs in and shuts down the browser without logging out. Note: I have succesfully implemented above functionality without using standard asp.net roles and membership by creating my own talbes for user and authenticating against my database table, setting cookie and sessions in my other projects. But for this project we starting from the beginning used standard asp.net roles and membership. We thought it will work and after everything was build at the time of testing it just didn't work. and now we cannot replace the existing functionality with standard asp.net roles and membership with my own custom user tables and all the stuff, you understand what i am taling about. Either there is some kind of bug with standard asp.net roles and membership functionality or i have the whole concept of standard asp.net roles and membership wrong. i have stated what i want above. I think it's very simple and reasonable. What i did Login form with username,password and remember me field. My setting in web.config: in My controller action, i have this: FormsAuth.SignIn(userName, rememberMe); public void SignIn(string userName, bool createPersistentCookie) { FormsAuthentication.SetAuthCookie(userName, createPersistentCookie); } Now the problems are following: I have already stated in above section "I simply need this". user can successfully log in to the system. Their session exists for as much minutes as specified in timeout value in web.config. I have also given a sample of my web.config. In my samplem if i set the timeout to 5 minutes,then user session expires after 5 minutes, that's ok. But if user closes the browser and reopen the browser, user can still enter the website without loggin in untill time specified in "timeout" has not passed out. The sliding expiration for timeout value is also working fine. Now if user logs in to the system with remember me checked, user session still expires after 5 minutes. This is not good behaviour, is it?. I mean to say that if user logs in to the system with remember me checked he should be remembered for a long time untill he doesn't logs out of the system or user doesn't manually deletes all the cookies from the browser. If user logs in to the system without remember me checked his session should expire after the timeout period values specified in web.config and also if users closes the browser. The problem is that if user closes the browser and reopens it he can still enter the website without logging in. 
I searched the internet a lot on this topic, but I could not find a solution. Scott Gu wrote a blog post (http://weblogs.asp.net/scottgu/archive/2005/11/08/430011.aspx) on exactly this topic, and users are complaining about the same thing in the comments, but there is no easy solution given by Mr. Scott. I read about it in the following places: http://weblogs.asp.net/scottgu/archive/2005/11/08/430011.aspx http://geekswithblogs.net/vivek/archive/2006/09/14/91191.aspx I guess this is a problem for lots of users, as seen from the blog post made by Mr. Scott Gu. Your help will be really appreciated. Thanks in advance.

    Read the article

  • PHP MVC Framework Structure

    - by bigstylee
    I am sorry about the amount of code here. I have tried to show enough for understanding while avoiding confusion (I hope). I have included a second copy of the code at Pastebin. (The code does execute without error/notice/warning.) I am currently creating a Content Management System while trying to implement the idea of Model View Controller. I have only recently come across the concept of MVC (within the last week) and trying to implement this into my current project. One of the features of the CMS is dynamic/customisable menu areas and each feature will be represented by a controller. Therefore there will be multiple versions of the Controller Class, each with specific extended functionality. I have looked at a number of tutorials and read some open source solutions to the MVC Framework. I am now trying to create a lightweight solution for my specific requirements. I am not interested in backwards compatibility, I am using PHP 5.3. An advantage of the Base class is not having to use global and can directly access any loaded class using $this->Obj['ClassName']->property/function();. Hoping to get some feedback using the basic structure outlined (with performance in mind). Specifically; a) Have I understood/implemented the concept of MVC correctly? b) Have I understood/implemented Object Orientated techniques with PHP 5 correctly? c) Should the class propertise of Base be static? d) Improvements? Thank you very much in advance! <?php /* A "Super Class" that creates/stores all object instances */ class Base { public static $Obj = array(); // Not sure this is the correct use of the "static" keyword? public static $var; static public function load_class($directory, $class) { echo count(self::$Obj)."\n"; // This does show the array is getting updated and not creating a new array :) if (!isset(self::$Obj[$class]) && !is_object(self::$Obj[$class])) //dont want to load it twice { /* Locate and include the class file based upon name ($class) */ return self::$Obj[$class] = new $class(); } return TRUE; } } /* Loads general configuration objects into the "Super Class" */ class Libraries extends Base { public function __construct(){ $this->load_class('library', 'Database'); $this->load_class('library', 'Session'); self::$var = 'Hello World!'; //testing visibility /* Other general funciton classes */ } } class Database extends Base { /* Connects to the the database and executes all queries */ public function query(){} } class Session extends Base { /* Implements Sessions in database (read/write) */ } /* General functionality of controllers */ abstract class Controller extends Base { protected function load_model($class, $method) { /* Locate and include the model file */ $this->load_class('model', $class); call_user_func(array(self::$Obj[$class], $method)); } protected function load_view($name) { /* Locate and include the view file */ #include('views/'.$name.'.php'); } } abstract class View extends Base { /* ... */ } abstract class Model extends Base { /* ... */ } class News extends Controller { public function index() { /* Displays the 5 most recent News articles and displays with Content Area */ $this->load_model('NewsModel', 'index'); $this->load_view('news', 'index'); echo $this->var; } public function menu() { /* Displays the News Title of the 5 most recent News articles and displays within the Menu Area */ $this->load_model('news/index'); $this->load_view('news/index'); } } class ChatBox extends Controller { /* ... 
*/ } /* Lots of different features extending the controller/view/model class depending upon request and layout */ class NewsModel extends Model { public function index() { echo $this->var; self::$Obj['Database']->query(/*SELECT 5 most recent news articles*/); } public function menu() { /* ... */ } } $Libraries = new Libraries; $controller = 'News'; // Would be determined from Query String $method = 'index'; // Would be determined from Query String $Content = $Libraries->load_class('controller', $controller); //create the controller for the specific page if (in_array($method, get_class_methods($Content))) { call_user_func(array($Content, $method)); } else { die('Bad Request'. $method); } $Content::$var = 'Goodbye World'; echo $Libraries::$var . ' - ' . $Content::$var; ?> /* Ouput */ 0 1 2 3 Goodbye World! - Goodbye World
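On question (c): one commonly suggested alternative to static properties on the Base "super class" is a small instance registry handed to controllers and models, so nothing shares global static state. The PHP sketch below is illustrative only - the Registry and NewsController names are made up for the example, and it is not a drop-in replacement for the code above.

<?php
// Minimal registry sketch: dependencies live on an instance and are injected,
// instead of being reached through static properties on a shared base class.
class Registry
{
    private $objects = array();

    public function set($name, $object)
    {
        $this->objects[$name] = $object;
    }

    public function get($name)
    {
        if (!isset($this->objects[$name])) {
            throw new InvalidArgumentException("Nothing registered as '$name'");
        }
        return $this->objects[$name];
    }
}

class NewsController
{
    private $registry;

    public function __construct(Registry $registry)
    {
        $this->registry = $registry;
    }

    public function index()
    {
        // Same idea as $this->Obj['Database']->query(...), but testable and non-static.
        $db = $this->registry->get('Database');
        // $db->query('SELECT ...');
    }
}

$registry = new Registry();
$registry->set('Database', new Database());   // Database class as defined in the question
$controller = new NewsController($registry);
$controller->index();

The trade-off is a little more wiring in the front controller, but each class only sees what it is given, which tends to make the MVC pieces easier to test in isolation.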

    Read the article

  • jQuery Serialize data

    - by Richard
    So I have several text boxes, drop down menus, and radio options on this form. When the user clicks submit i want to save ALL that information so I can put it into a database. So this is all the form's inputs <div id="reg2a"> First Name: <br/><input type="text" name="fname" /> <br/> Last Name: <br/><input type="text" name="lname" /> <br/> Address: <br/><input type="text" name="address" /> <br/> City: <br/><input type="text" name="city" /> <br/> State: <br/><input type="text" name="state" /> <br/> Zip Code: <br/><input type="text" name="zip" /> <br/> Phone Number: <br/><input type="text" name="phone" /> <br/> Fax: <br/><input type="text" name="fax" /> <br/> Email: <br/><input type="text" name="email" /> <br/> Ethnicity: <i>Used only for grant reporting purposes</i> <br/><input type="text" name="ethnicity" /> <br/><br/> Instutional Information Type (select the best option) <br/> <select name="iitype"> <option value="none">None</option> <option value="uni">University</option> <option value="commorg">Community Organization</option> </select> <br/><br/> Number of sessions willing to present: <select id="vennum_select" name="vnum"> <?php for($i=0;$i<=3;$i++) { ?> <option value="<?php echo $i ?>"><?php echo $i ?></option> <?php } ?><br/> </select><br/> Number of tables requested: <select id="tabnum_select" name="tnum"> <?php for($i=1;$i<=3;$i++) { ?> <option value="<?php echo $i ?>"><?php echo $i ?></option> <?php } ?> </select><br/><br/> Awarding of a door prize during the conference will result in a reduction in the cost of your first table. <br/><br/> I am providing a door prize for delivery during the conference of $75 or more <select id="prize_select" name="pnum"> <option value="0">No</option> <option value="1">Yes</option> </select><br/> Prize name: <input type="text" name="prize_name" /><br/> Description: <input type="text" name="descr" /><br/> Value: <input type="text" name="retail" /><br/><br/> Name of Institution: <br/><input type="text" name="institution" /> <br/><br/> Type (select the best option) <br/> <select name="type"> <option value="none">None</option> <option value="k5">K-5</option> <option value="k8">K-8</option> <option value="68">6-8</option> <option value="912">9-12</option> </select> <br/><br/> Address: <br/><input type="text" name="address_sch" /> <br/> City: <br/><input type="text" name="city_sch" /> <br/> State: <br/><input type="text" name="state_sch" /> <br/> Zip Code: <br/><input type="text" name="zip_sch" /> <br/> <form name="frm2sub" id="frm2sub" action="page3.php" method="post"> <input type="submit" name="submit" value="Submit" id="submit" /> </form> </div> This is my jquery function: $("#frm2sub").submit( function() { var values = {}; values["fname"] = $("#fname").val(); }); I can do this for each one of the input boxes but I want to give all this data to the next page. So how do I put this array into $_POST? Btw, I've tried doing var data = $("#reg2a").serialize(); and var data = $(document).serialize(); Data ended up being empty. Any ideas? Thanks
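The empty serialize() results are consistent with the markup shown: the form #frm2sub only wraps the submit button, and inputs that sit outside a form neither submit nor serialize with it. A small JavaScript sketch, assuming the markup stays as posted:

// Either move the <form> so it wraps all of #reg2a, or serialize the controls directly:
$('#frm2sub').submit(function (e) {
    e.preventDefault();                          // stop the normal (empty) POST
    var data = $('#reg2a :input').serialize();   // .serialize() also works on individual controls
    $.post('page3.php', data, function (response) {
        // page3.php now receives $_POST['fname'], $_POST['lname'], and so on
        console.log(response);
    });
});

The cleaner fix is simply to open the <form> tag before the first input in #reg2a and close it after the last one, so a plain submit carries every named field without any script.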

    Read the article

  • Ruby on Rails has_many :through relationship

    - by BennyB
    Hi i'm having a little trouble with a has_many through relationship for my app and was hoping to find some help. So i've got Users & Lectures. Lectures are created by one user but then other users can then "join" the Lectures that have been created. Users have their own profile feed of the Lectures they have created & also have a feed of Lectures friends have created. This question however is not about creating a lecture but rather "Joining" a lecture that has been created already. I've created a "lecturerelationships" model & controller to handle this relationship between Lectures & the Users who have Joined (which i call "actives"). Users also then MUST "Exit" the Lecture (either by clicking "Exit" or navigating to one of the header navigation links). I'm grateful if anyone can work through some of this with me... I've got: Users.rb model Lectures.rb model Users_controller Lectures_controller then the following model lecturerelationship.rb class lecturerelationship < ActiveRecord::Base attr_accessible :active_id, :joinedlecture_id belongs_to :active, :class_name => "User" belongs_to :joinedlecture, :class_name => "Lecture" validates :active_id, :presence => true validates :joinedlecture_id, :presence => true end lecturerelationships_controller.rb class LecturerelationshipsController < ApplicationController before_filter :signed_in_user def create @lecture = Lecture.find(params[:lecturerelationship][:joinedlecture_id]) current_user.join!(@lecture) redirect_to @lecture end def destroy @lecture = Lecturerelationship.find(params[:id]).joinedlecture current_user.exit!(@user) redirect_to @user end end Lectures that have been created (by friends) show up on a users feed in the following file _activity_item.html.erb <li id="<%= activity_item.id %>"> <%= link_to gravatar_for(activity_item.user, :size => 200), activity_item.user %><br clear="all"> <%= render :partial => 'shared/join', :locals => {:activity_item => activity_item} %> <span class="title"><%= link_to activity_item.title, lecture_url(activity_item) %></span><br clear="all"> <span class="user"> Joined by <%= link_to activity_item.user.name, activity_item.user %> </span><br clear="all"> <span class="timestamp"> <%= time_ago_in_words(activity_item.created_at) %> ago. </span> <% if current_user?(activity_item.user) %> <%= link_to "delete", activity_item, :method => :delete, :confirm => "Are you sure?", :title => activity_item.content %> <% end %> </li> Then you see I link to the the 'shared/join' partial above which can be seen in the file below _join.html.erb <%= form_for(current_user.lecturerelationships.build(:joinedlecture_id => activity_item.id)) do |f| %> <div> <%= f.hidden_field :joinedlecture_id %> </div> <%= f.submit "Join", :class => "btn btn-large btn-info" %> <% end %> Some more files that might be needed: config/routes.rb SampleApp::Application.routes.draw do resources :users do member do get :following, :followers, :joined_lectures end end resources :sessions, :only => [:new, :create, :destroy] resources :lectures, :only => [:create, :destroy, :show] resources :relationships, :only => [:create, :destroy] #for users following each other resources :lecturerelationships, :only => [:create, :destroy] #users joining existing lectures So what happens is the lecture comes in my activity_feed with a Join button option at the bottom...which should create a lecturerelationship of an "active" & "joinedlecture" (which obviously are supposed to be coming from the user & lecture classes. 
But the error i get when i click the join button is as follows: ActiveRecord::StatementInvalid in LecturerelationshipsController#create SQLite3::ConstraintException: constraint failed: INSERT INTO "lecturerelationships" ("active_id", "created_at", "joinedlecture_id", "updated_at") VALUES (?, ?, ?, ?) Also i've included my user model (seems the error is referring to it) user.rb class User < ActiveRecord::Base attr_accessible :email, :name, :password, :password_confirmation has_secure_password has_many :lectures, :dependent => :destroy has_many :lecturerelationships, :foreign_key => "active_id", :dependent => :destroy has_many :joined_lectures, :through => :lecturerelationships, :source => :joinedlecture before_save { |user| user.email = email.downcase } before_save :create_remember_token validates :name, :presence => true, :length => { :maximum => 50 } VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i validates :email, :presence => true, :format => { :with => VALID_EMAIL_REGEX }, :uniqueness => { :case_sensitive => false } validates :password, :presence => true, :length => { :minimum => 6 } validates :password_confirmation, :presence => true def activity # This feed is for "My Activity" - basically lectures i've started Lecture.where("user_id = ?", id) end def friendactivity Lecture.from_users_followed_by(self) end # lECTURE TO USER (JOINING) RELATIONSHIPS def joined?(selected_lecture) lecturerelationships.find_by_joinedlecture_id(selected_lecture.id) end def join!(selected_lecture) lecturerelationships.create!(:joinedlecture_id => selected_lecture.id) end def exit!(selected_lecture) lecturerelationships.find_by_joinedlecture_id(selected_lecture.id).destroy end end Thanks for any and all help - i'll be on here for a while so as mentioned i'd GREATLY appreciate someone who may have the time to work through my issues with me...
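Two things stand out, offered as a sketch rather than a confirmed diagnosis. First, the model file declares class lecturerelationship in lowercase, while Ruby class names (and the Lecturerelationship constant used in the controller) must be capitalised. Second, a SQLite3::ConstraintException on that INSERT usually means a NOT NULL or unique index on lecturerelationships is being violated, for example when the same user joins the same lecture twice. The Rails 3-style Ruby sketch below keeps the names from the question; first_or_create! assumes Rails 3.2 or later.

# Sketch: validate the pair and make joining idempotent so a second click
# reuses the existing row instead of violating a unique index.
class Lecturerelationship < ActiveRecord::Base
  attr_accessible :active_id, :joinedlecture_id

  belongs_to :active,        :class_name => "User"
  belongs_to :joinedlecture, :class_name => "Lecture"

  validates :active_id,        :presence => true
  validates :joinedlecture_id, :presence => true,
            :uniqueness => { :scope => :active_id }   # one join per user per lecture
end

class User < ActiveRecord::Base
  # ... associations and validations as in the question ...
  def join!(selected_lecture)
    lecturerelationships.where(:joinedlecture_id => selected_lecture.id).first_or_create!
  end
end

It is also worth checking the migration for lecturerelationships: if active_id or joinedlecture_id is declared :null => false and the form posts an id for a record that no longer exists, the same constraint error appears.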

    Read the article

  • How does OCaml decide precedence for user-defined operators?

    - by forefinger
    I want nice operators for complex arithmetic to make my code more readable. Ocaml has a Complex module, so I just want to add operators that call those functions. The most intuitive way for me is to make a new complex operator from all of the usual operators by appending '&' to the operator symbol. Thus +& and *& will be complex addition and multiplication. I would also like ~& to be complex conjugation. If I'm going to use these operators, I want them to associate the same way that normal arithmetic associates. Based on the following sessions, they are automatically behaving the way I want, but I would like to understand why, so that I don't get horrible bugs when I introduce more operators. My current guess is that their precedence is done by lexically sorting the operator symbols according to an ordering that is consistent with normal arithmetic precedence. But I cannot confirm this. Session one: # open Complex;; # let (+&) a b = add a b;; val ( +& ) : Complex.t -> Complex.t -> Complex.t = <fun> # let ( *&) a b = mul a b;; val ( *& ) : Complex.t -> Complex.t -> Complex.t = <fun> # one +& zero *& one +& zero *& one;; - : Complex.t = {re = 1.; im = 0.} # zero +& one *& zero +& one *& zero;; - : Complex.t = {re = 0.; im = 0.} # i +& i *& i +& i *& i *& i;; - : Complex.t = {re = -1.; im = 0.} Session two: # open Complex;; # let ( *&) a b = mul a b;; val ( *& ) : Complex.t -> Complex.t -> Complex.t = <fun> # let (+&) a b = add a b;; val ( +& ) : Complex.t -> Complex.t -> Complex.t = <fun> # one +& zero *& one +& zero *& one;; - : Complex.t = {re = 1.; im = 0.} # zero +& one *& zero +& one *& zero;; - : Complex.t = {re = 0.; im = 0.} # i +& i *& i +& i *& i *& i;; - : Complex.t = {re = -1.; im = 0.} # let (~&) a = conj a;; val ( ~& ) : Complex.t -> Complex.t = <fun> # (one +& i) *& ~& (one +& i);; - : Complex.t = {re = 2.; im = 0.}
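What the sessions show is documented behaviour rather than luck: OCaml derives the precedence and associativity of a user-defined infix operator from its leading character(s), so +& groups like +, *& groups like *, and ~& is a prefix operator because it starts with ~. A short OCaml illustration using the same operators as above:

(* Operators starting with '*' bind tighter than ones starting with '+',
   exactly as for the built-in * and +. *)
let ( +& ) a b = Complex.add a b
let ( *& ) a b = Complex.mul a b
let ( ~& ) a   = Complex.conj a        (* leading '~' makes this a prefix operator *)

(* Both of these parse as  one +& (i *& i), i.e. 1 + (i * i) = 0: *)
let explicit = Complex.one +& (Complex.i *& Complex.i)
let implicit = Complex.one +& Complex.i *& Complex.i

So as long as every complex operator keeps the first character of its ordinary counterpart, the expressions will keep associating the way the built-in arithmetic operators do.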

    Read the article

  • $.post is not working

    - by BEBO
    i am trying to post data to Mysql using jquery $.post and php page. my code is not running and nothing is added to the mysql table. I am not sure if the path i am creating is wrong but any help would be appreciated. Jquery location: f_js/tasks/TaskTest.js <script type="text/javascript"> $(document).ready(function(){ $("#AddTask").click(function(){ var acct = $('#acct').val(); var quicktask = $('#quicktask').val(); var user = $('#user').val(); $.post('addTask.php',{acct:acct,quicktask:quicktask,user:user}, function(data){ $('#result').fadeIn('slow').html(data); }); }); }); </script> addTask.php (runs the jqeury code) <?php include 'dbconnect.php'; include 'sessions.php'; $acct = $_POST['acct']; $task = $_POST['quicktask']; $taskstatus = 'Active'; //get task Creator $user = $_POST['user']; //query task creator from users table $allusers = mysql_query("SELECT * FROM users WHERE username = '$user'"); while ($rows = mysql_fetch_array($allusers)) { //get first and last name for task creator $taskOwner = $rows['user_firstname']; $taskOwnerLast = $rows['user_lastname']; $taskOwnerFull = $taskOwner." ".$taskOwnerLast; mysql_query("INSERT INTO tasks (taskresource, tasktitle, taskdetail, taskstatus, taskowner, taskOwnerFullName) VALUES ('$acct', '$task', '$task', '$taskstatus', '$user', '$taskOwnerFull' )"); echo "inserted"; } ?> Accountview.php finally the front page <html> <div class="input-cont "> <input type="text" class="form-control col-lg-12" placeholder="Add a quick Task..." name ="quicktask" id="quicktask"> </div> <div class="form-group"> <div class="pull-right chat-features"> <a href="javascript:;"> <i class="icon-camera"></i> </a> <a href="javascript:;"> <i class="icon-link"></i> </a> <input type="button" class="btn btn-danger" name="AddTask" id="AddTask" value="Add" /> <input type="hidden" name="acct" id="acct" value="<?php echo $_REQUEST['acctname']?>"/> <input type="hidden" name="user" id="user" value="<?php $username = $_SESSION['username']; echo $username?>"/> <div id="result">result</div> </div> </div> <!-- js placed at the end of the document so the pages load faster --> <script src="js/jquery.js"></script> <script src="f_js/tasks/TaskTest.js"></script> <!--common script for all pages--> <script src="js/common-scripts.js"></script> <script type="text/javascript" src="assets/gritter/js/jquery.gritter.js"></script> <script src="js/gritter.js" type="text/javascript"></script> <script> </html> Firebug reponse: Response Headers Connection Keep-Alive Content-Length 0 Content-Type text/html Date Fri, 08 Nov 2013 21:48:50 GMT Keep-Alive timeout=5, max=100 Server Apache/2.4.4 (Win32) OpenSSL/0.9.8y PHP/5.4.16 X-Powered-By PHP/5.4.16 refresh 5; URL=index.php Request Headers Accept */* Accept-Encoding gzip, deflate Accept-Language en-US,en;q=0.5 Content-Length 13 Content-Type application/x-www-form-urlencoded; charset=UTF-8 Cookie PHPSESSID=6gufl3guiiddreg8cdlc0htnc6 Host localhost Referer http://localhost/betahtml/AccountView.php?acctname=client%201 User-Agent Mozilla/5.0 (Windows NT 6.3; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 X-Requested-With XMLHttpRequest
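The captured response (Content-Length 0 plus a "refresh 5; URL=index.php" header) suggests addTask.php, probably via sessions.php, is refusing or redirecting the AJAX request before the INSERT ever runs, so the success callback fires with an empty body. A JavaScript sketch of the same click handler with explicit handlers makes that visible; the file names and element ids are the ones from the question.

// Same POST as in TaskTest.js, but with done/fail handlers so an empty or
// redirected response from addTask.php is obvious in the page.
$('#AddTask').click(function () {
    var payload = {
        acct:      $('#acct').val(),
        quicktask: $('#quicktask').val(),
        user:      $('#user').val()
    };

    $.post('addTask.php', payload)
        .done(function (data) {
            if (!data) {
                // 200 with an empty body: the script exited (or redirected) before inserting.
                $('#result').html('addTask.php returned no output - check sessions.php');
            } else {
                $('#result').fadeIn('slow').html(data);
            }
        })
        .fail(function (jqXHR) {
            $('#result').html('Request failed with status ' + jqXHR.status);
        });
});

Separately, the insert concatenates raw $_POST values into SQL; once the request actually reaches the query, switching to mysqli or PDO with bound parameters avoids both quoting bugs and SQL injection.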

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I currently set up a script to restart my http servers + php5 fpm but can't get it to work. I have googled and have found that mostly permissions are the problems of my error but can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself it runs fine (but asks for a password) (nagios user) This are the script permission and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line to visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on. 
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!
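For reference, a minimal way to reproduce what NRPE is actually doing is sketched below. It is a debugging aid, not part of the original setup; the only assumptions are that su and visudo are available on the node and that the script path matches the one above.

    # Run the exact command line NRPE logs, as the nagios user and without a terminal;
    # "sudo -n" fails instead of prompting, which is what happens behind NRPE's back.
    su -s /bin/bash -c '/usr/bin/sudo -n /usr/lib/nagios/plugins/http-restart; echo "exit code: $?"' nagios

    # A sudoers rule scoped to the single script (edit with visudo). Note the comment in
    # the sudoers file itself: later entries override earlier ones, so if the nagios user
    # also happens to be in the sudo group, the "%sudo ALL=(ALL) ALL" rule further down
    # wins and sudo prompts for a password again, leaving NRPE with no output to read.
    nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/http-restart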

    Read the article

  • Failure connecting to Dell MD3200i from XenServer 6.2 pool

    - by Tom Sparrow
    This question also asked at Citrix Forums http://forums.citrix.com/thread.jspa?threadID=332289 I have a MD3200i that is currently working fine with my Xen5.6 pool, but I cannot get a connection to the new 6.2 pool to work. I previously had a problem with a 6.0 upgrade (which is why the old pool is still on 5.6), but rolled back rather than fix it as it wasn't urgent at the time. This install is on new machines - I tried 6.1 first (which had the same problems) then 6.2 was released the second day after installation so I switched to that. I've not installed anything from the Dell resource DVD at this point - I can't find anything saying I should, and everything I have read suggests it shouldn't be necessary. I can ping all 8 ip addresses from both servers in the pool, iscsiadm -m discovery works fine, I can login to the nodes and iscsiadm reports the sessions active correctly. I've added the required sections to multipath.conf, but multipath -ll reports DM multipath kernel driver not loaded immediately after boot. The following is a log of a test session immediately after boot. root@xen3 ~]# iscsiadm -m node --loginall=all Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.101,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.101,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.104,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.102,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.103,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.104,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.102,3260] Logging in to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.103,3260] Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.101,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.101,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.104,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.102,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.103,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.104,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.130.102,3260]: successful Login to [iface: default, target: iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91, portal: 192.168.131.103,3260]: successful [root@xen3 ~]# iscsiadm -m session tcp: [1] 192.168.130.101:3260,1 
iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [2] 192.168.131.101:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [3] 192.168.131.104:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [4] 192.168.131.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [5] 192.168.130.103:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [6] 192.168.130.104:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [7] 192.168.130.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [8] 192.168.131.103:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 [root@xen3 ~]# service multipathd restart ok Stopping multipathd daemon: [ OK ] Starting multipathd daemon: [ OK ] [root@xen3 ~]# multipath Jul 04 09:58:47 | DM multipath kernel driver not loaded Jul 04 09:58:47 | DM multipath kernel driver not loaded [root@xen3 ~]# multipath -ll Jul 04 09:59:03 | DM multipath kernel driver not loaded Jul 04 09:59:03 | DM multipath kernel driver not loaded [ root@xen3 ~]# modprobe dm_multipath [root@xen3 ~]# multipath Jul 04 10:19:50 | 36b8ca3a0e7024800194a0bd11891cd14: ignoring map create: 1Dell_Internal_Dual_SD_0123456789AB undef Dell,Internal Dual SD size=1.9G features='0' hwhandler='0' wp=undef `-+- policy='round-robin 0' prio=1 status=undef `- 7:0:0:0 sdb 8:16 undef ready running [root@xen3 ~]# multipath -ll 1Dell_Internal_Dual_SD_0123456789AB dm-1 Dell,Internal Dual SD size=1.9G features='0' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=1 status=enabled `- 7:0:0:0 sdb 8:16 active ready running [root@xen3 ~]# iscsiadm -m session tcp: [1] 192.168.130.101:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [2] 192.168.131.101:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [3] 192.168.131.104:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [4] 192.168.131.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [5] 192.168.130.103:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [6] 192.168.130.104:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [7] 192.168.130.102:3260,2 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 tcp: [8] 192.168.131.103:3260,1 iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 [root@xen3 ~]# dmesg | tail -n 50 [ 1161.881010] sd 8:0:0:0: [sdf] Unhandled error code [ 1161.881013] sd 8:0:0:0: [sdf] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1161.881017] sd 8:0:0:0: [sdf] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1161.881024] end_request: I/O error, dev sdf, sector 0 [ 1161.881031] Buffer I/O error on device sdf, logical block 0 [ 1161.881045] sd 15:0:0:0: [sdi] Unhandled error code [ 1161.881048] sd 15:0:0:0: [sdi] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1161.881052] sd 15:0:0:0: [sdi] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1161.881058] end_request: I/O error, dev sdi, sector 0 [ 1161.881065] Buffer I/O error on device sdi, logical block 0 [ 1161.881122] sd 9:0:0:0: [sdg] Unhandled error code [ 1161.881124] sd 9:0:0:0: [sdg] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1161.881126] sd 9:0:0:0: [sdg] CDB: 
Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1161.881132] end_request: I/O error, dev sdg, sector 0 [ 1161.881140] Buffer I/O error on device sdg, logical block 0 [ 1168.220951] connection6:0: ping timeout of 15 secs expired, recv timeout 10, last rx 84060, last ping 85060, now 86560 [ 1168.220957] connection7:0: ping timeout of 15 secs expired, recv timeout 10, last rx 84060, last ping 85060, now 86560 [ 1168.220967] connection7:0: detected conn error (1011) [ 1168.220969] connection4:0: ping timeout of 15 secs expired, recv timeout 10, last rx 84060, last ping 85060, now 86560 [ 1168.220973] connection4:0: detected conn error (1011) [ 1168.220975] connection3:0: ping timeout of 15 secs expired, recv timeout 10, last rx 84060, last ping 85060, now 86560 [ 1168.220978] connection3:0: detected conn error (1011) [ 1168.220985] connection6:0: detected conn error (1011) [ 1168.480994] sd 14:0:0:0: [sde] Unhandled error code [ 1168.480998] sd 14:0:0:0: [sde] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1168.481001] sd 14:0:0:0: [sde] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1168.481009] end_request: I/O error, dev sde, sector 0 [ 1168.481015] Buffer I/O error on device sde, logical block 0 [ 1168.481076] sd 11:0:0:0: [sdc] Unhandled error code [ 1168.481078] sd 11:0:0:0: [sdc] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1168.481080] sd 11:0:0:0: [sdc] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1168.481087] end_request: I/O error, dev sdc, sector 0 [ 1168.481092] Buffer I/O error on device sdc, logical block 0 [ 1168.481144] sd 10:0:0:0: [sdd] Unhandled error code [ 1168.481147] sd 10:0:0:0: [sdd] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1168.481150] sd 10:0:0:0: [sdd] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1168.481156] end_request: I/O error, dev sdd, sector 0 [ 1168.481163] Buffer I/O error on device sdd, logical block 0 [ 1168.481168] sd 13:0:0:0: [sdj] Unhandled error code [ 1168.481170] sd 13:0:0:0: [sdj] Result: hostbyte=DID_TRANSPORT_DISRUPTED driverbyte=DRIVER_OK [ 1168.481172] sd 13:0:0:0: [sdj] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 1168.481178] end_request: I/O error, dev sdj, sector 0 [ 1168.481184] Buffer I/O error on device sdj, logical block 0 [ 1457.105996] device-mapper: multipath round-robin: version 1.0.0 loaded [ 1457.106155] device-mapper: multipath: Cannot access device path 8:0: -16 [ 1457.106164] device-mapper: table: 252:1: multipath: error getting device [ 1457.106172] device-mapper: ioctl: error adding target to table [ 1457.171292] device-mapper: multipath: Cannot access device path 8:0: -16 [ 1457.171299] device-mapper: table: 252:1: multipath: error getting device [ 1457.171304] device-mapper: ioctl: error adding target to table [root@xen3 ~]# fdisk -l Disk /dev/sda: 299.4 GB, 299439751168 bytes 255 heads, 63 sectors/track, 36404 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 5 40131 de Dell Utility /dev/sda2 * 6 528 4194304 83 Linux Partition 2 does not end on cylinder boundary. 
/dev/sda3 528 1050 4194304 83 Linux /dev/sda4 1050 36404 283986359+ 8e Linux LVM Disk /dev/sdb: 2040 MB, 2040528896 bytes 255 heads, 63 sectors/track, 248 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 248 1992028+ 83 Linux Disk /dev/dm-1: 2040 MB, 2040528896 bytes 255 heads, 63 sectors/track, 248 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/dm-1p1 1 248 1992028+ 83 Linux [root@xen3 ~]# xe sr-probe type=lvmoiscsi device-config:target=192.168.130.101 device-config:targetIQN=iqn.1984-05.com.dell:powervault.md3200i.6782bcb0006bd850000000004ed88b91 Error code: SR_BACKEND_FAILURE_107 Error parameters: , The SCSIid parameter is missing or incorrect, <?xml version="1.0" ?> <iscsi-target/> Note: the xml ends there correctly on the last line - it doesn't ever return a list of LUNs (and there is one in the group on the SAN for those servers.
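For reference, a small sketch of the first thing the log points at: the dm_multipath driver is not loaded until it is modprobed by hand. The commands below assume a CentOS-based dom0 where an executable /etc/rc.modules is run at boot; treat them as a starting point rather than the fix.

    modprobe dm_multipath
    modprobe dm_round_robin
    lsmod | grep dm_                            # confirm the drivers are resident
    echo "modprobe dm_multipath"   >> /etc/rc.modules
    echo "modprobe dm_round_robin" >> /etc/rc.modules
    chmod +x /etc/rc.modules                    # so they load on every boot
    service multipathd restart
    multipath -ll                               # should no longer report "kernel driver not loaded"

Separately, on XenServer the SR machinery (xe sr-probe / sr-create, or XenCenter with multipathing enabled) normally owns the iSCSI sessions, so sessions logged in by hand with iscsiadm can linger and produce exactly the ping-timeout and buffer I/O errors shown in dmesg above.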

    Read the article

  • CentOS Client - Unable to Establish iSCSI connection with multiple interfaces on the initiator

    - by slashdot
    So after upgrading to CentOS 6.2, I am seemingly no longer able to login into my iSCSI targets. I have multiple interfaces on different subnets on the system, and I first thought that it had to do with the fact that I may not be binding correct interfaces, which seems to be the case when looking at netstat, as this is clearly wrong: [root]? netstat -na|grep .90 tcp 0 1 10.10.100.60:42354 10.10.8.90:3260 SYN_SENT tcp 0 1 10.10.100.60:40777 10.10.9.90:3260 SYN_SENT I then went ahead and disabled all but one interface, and so as a result netstat appears to be correct, but the issue with login remains. I am positive that the target never sees a packet, because I see nothing by SYN_SENT. I know the problem is on my client, because the target is servicing multiple systems, none of which are CentOS 6.2. At this point I am pretty confident that some things changed between CentOS 6.0/6.1 and 6.2. So, if anyone have any thoughts, or ran into this, I would very much like to hear your thoughts. [root]? iscsiadm --mode node --targetname iqn.2011-12.dom.homer:01:lab-centos-servers-00001 --portal 10.10.8.90:3260,2 --interface=sw-iscsi-0 --login Logging in to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260] (multiple) iscsiadm: Could not login to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not log into all portals [root]? netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.10.8.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2.7 10.10.9.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3.7 10.10.100.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth2 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth2.7 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3.7 0.0.0.0 10.10.100.1 0.0.0.0 UG 0 0 0 eth0 Output of ip addr show for the two interfaces involved: [root]? for i in 2.7 3.7; do ip addr show eth$i; done 6: eth2.7@eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:0c:29:94:5b:8d brd ff:ff:ff:ff:ff:ff inet 10.10.8.60/24 brd 10.10.8.255 scope global eth2.7 inet6 fe80::20c:29ff:fe94:5b8d/64 scope link valid_lft forever preferred_lft forever 7: eth3.7@eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:0c:29:94:5b:97 brd ff:ff:ff:ff:ff:ff inet 10.10.9.60/24 brd 10.10.9.255 scope global eth3.7 inet6 fe80::20c:29ff:fe94:5b97/64 scope link valid_lft forever preferred_lft forever Update 01/06/2012: This issue is getting even more interesting by the day it seems. I went a few weeks back and grabbed a snapshot of this system from before upgrading to 6.2. I spun up a new system from the snapshot, and reconfigured interface info and host keys, as well as iSCSI initiator and iscsi interface info to match new MACs. Changed nothing else. Then, I attempted to connect to my targets, and no issues at all. I cannot say that this was unexpected. I then went ahead and compared sysctl settings from both systems and there were differences after the upgrade, but nothing seemingly relevant to iSCSI or IP that could contribute to this. I also noticed that by default now two sessions per connection were enabled after the upgrade, but I changed it back to 1 session in /etc/iscsi/iscsid.conf. 
On the problematic system we can see that source interface is seemingly wrong, but even when I disable the 10.10.100 interface, problems persist. So, while this may be relevant, I could not validate it for certain. Needless to say, further research is necessary. Something is clearly different between releases. Working system is on 6.1, and non-working is 6.2. ::Working System:: tcp 0 0 10.10.8.210:39566 10.10.8.90:3260 ESTABLISHED tcp 0 0 10.10.9.210:46518 10.10.9.90:3260 ESTABLISHED [root]? ip route show 10.10.8.0/24 dev eth2.6 proto kernel scope link src 10.10.8.210 10.10.9.0/24 dev eth3.7 proto kernel scope link src 10.10.9.210 10.10.100.0/22 dev eth0 proto kernel scope link src 10.10.100.210 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth2.6 scope link metric 1006 169.254.0.0/16 dev eth3.7 scope link metric 1007 default via 10.10.100.1 dev eth0 ::Non-working System:: tcp 0 1 10.10.100.60:44737 10.10.9.90:3260 SYN_SENT tcp 0 1 10.10.100.60:55479 10.10.8.90:3260 SYN_SENT [root]? ip route show 10.10.8.0/24 dev eth2.6 proto kernel scope link src 10.10.8.60 10.10.9.0/24 dev eth3.7 proto kernel scope link src 10.10.9.60 10.10.100.0/22 dev eth0 proto kernel scope link src 10.10.100.60 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth2.6 scope link metric 1006 169.254.0.0/16 dev eth3.7 scope link metric 1007 default via 10.10.100.1 dev eth0 And the result is still same: [root]? iscsiadm: Could not login to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not login to [iface: sw-iscsi-1, target: iqn.2011-12.dom.homer:02:lab-centos-servers-00001, portal: 10.10.9.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not log into all portals Update 01/08/2012: I believe I have been able to figure out the answer to my issue. It is quite obscure and I doubt this will happen to anyone else any time soon. It turns out that setting iface.iscsi_ifacename and iface.hwaddress in the interfaces configuration file is not legal. When one manually adds an iscsi target, such as below, all settings from the interface config file are copied into the node config file, that gets created by the below command. Result is parameters iface.iscsi_ifacename and iface.hwaddress together in the same config file. These parameters are seemingly mutually exclusive, which does not exactly make sense, or there is perhaps an oversight in the codepath. Perhaps I will investigate further. # iscsiadm -m node --op new -T iqn.2011-12.dom.homer:01:lab-centos-servers-00001 -p 10.10.8.90,3260,2 -I sw-iscsi-0 # iscsiadm -m node --op new -T iqn.2011-12.dom.homer:02:lab-centos-servers-00001 -p 10.10.9.90,3260,2 -I sw-iscsi-1 Notice, below I commented out iface.hwaddress and iface.ipaddress, after which I re-added targets, with same command as above. All works just fine. [root]? 
cat * # BEGIN RECORD 2.0-872.33.el6 iface.iscsi_ifacename = sw-iscsi-0 iface.net_ifacename = eth2.6 #iface.hwaddress = XX:XX:XX:XX:XX:XX #iface.ipaddress = 10.10.8.60 iface.transport_name = tcp iface.vlan_id = 6 iface.vlan_priority = 0 iface.iface_num = 0 iface.mtu = 0 iface.port = 0 # END RECORD # BEGIN RECORD 2.0-872.33.el6 iface.iscsi_ifacename = sw-iscsi-1 iface.net_ifacename = eth3.7 #iface.hwaddress = XX:XX:XX:XX:XX:XX #iface.ipaddress = 10.10.9.60 iface.transport_name = tcp iface.vlan_id = 7 iface.vlan_priority = 0 iface.iface_num = 0 iface.mtu = 0 iface.port = 0 # END RECORD Again, chances of this happening to someone else are slim to none, so likely waste of time typing this up. But, if someone does encounter this issue, I hope this post will help.
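For anyone who lands here with the same symptom, a compact sketch of the working configuration described above, built with iscsiadm itself instead of hand-editing the iface files (interface names, IQNs and portals are the ones from this post):

    iscsiadm -m iface -o new -I sw-iscsi-0
    iscsiadm -m iface -o update -I sw-iscsi-0 -n iface.net_ifacename -v eth2.6
    iscsiadm -m iface -o new -I sw-iscsi-1
    iscsiadm -m iface -o update -I sw-iscsi-1 -n iface.net_ifacename -v eth3.7
    # re-create the node records so the (now minimal) iface settings are copied in
    iscsiadm -m node --op new -T iqn.2011-12.dom.homer:01:lab-centos-servers-00001 -p 10.10.8.90,3260,2 -I sw-iscsi-0
    iscsiadm -m node --op new -T iqn.2011-12.dom.homer:02:lab-centos-servers-00001 -p 10.10.9.90,3260,2 -I sw-iscsi-1
    iscsiadm -m node --loginall=all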

    Read the article

  • How to get full control of umask/PAM/permissions?

    - by plua
    OUR SITUATION Several people from our company log in to a server and upload files. They all need to be able to upload and overwrite the same files. They have different usernames, but are all part of the same group. However, this is an internet server, so the "other" users should have (in general) just read-only access. So what I want to have is these standard permissions: files: 664 directories: 771 My goal is that all users do not need to worry about permissions. The server should be configured in such a way that these permissions apply to all files and directories, newly created, copied, or over-written. Only when we need some special permissions we'd manually change this. We upload files to the server by SFTP-ing in Nautilus, by mounting the server using sshfs and accessing it in Nautilus as if it were a local folder, and by SCP-ing in the command line. That basically covers our situation and what we aim to do. Now, I have read many things about the beautiful umask functionality. From what I understand umask (together with PAM) should allow me to do exactly what I want: set standard permissions for new files and directories. However, after many many hours of reading and trial-and-error, I still do not get this to work. I get many unexpected results. I really like to get a solid grasp of umask and have many question unanswered. I will post these questions below, together with my findings and an explanation of my trials that led to these questions. Given that many things appear to go wrong, I think that I am doing several things wrong. So therefore, there are many questions. NOTE: I am using Ubuntu 9.10 and therefore can not change the sshd_config to set the umask for the SFTP server. Installed SSH OpenSSH_5.1p1 Debian-6ubuntu2 < required OpenSSH 5.4p1. So here go the questions. 1. DO I NEED TO RESTART FOR PAM CHANGS TO TAKE EFFECT? Let's start with this. There were so many files involved and I was unable to figure out what does and what does not affect things, also because I did not know whether or not I have to restart the whole system for PAM changes to take effect. I did do so after not seeing the expected results, but is this really necessary? Or can I just log out from the server and log back in, and should new PAM policies be effective? Or is there some 'PAM' program to reload? 2. IS THERE ONE SINGLE FILE TO CHANGE THAT AFFECTS ALL USERS FOR ALL SESSIONS? So I ended up changing MANY files, as I read MANY different things. I ended up setting the umask in the following files: ~/.profile -> umask=0002 ~/.bashrc -> umask=0002 /etc/profile -> umask=0002 /etc/pam.d/common-session -> umask=0002 /etc/pam.d/sshd -> umask=0002 /etc/pam.d/login -> umask=0002 I want this change to apply to all users, so some sort of system-wide change would be best. Can it be achieved? 3. AFTER ALL, THIS UMASK THING, DOES IT WORK? So after changing umask to 0002 at every possible place, I run tests. 
------------SCP----------- TEST 1: scp testfile (which has 777 permissions for testing purposes) server:/home/ testfile 100% 4 0.0KB/s 00:00 Let's check permissions: user@server:/home$ ls -l total 4 -rwx--x--x 1 user uploaders 4 2011-02-05 17:59 testfile (711) ---------SSH------------ TEST 2: ssh server user@server:/home$ touch anotherfile user@server:/home$ ls -l total 4 -rw-rw-r-- 1 user uploaders 0 2011-02-05 18:03 anotherfile (664) --------SFTP----------- Nautilus: sftp://server/home/ Copy and paste newfile from client to server (777 on client) TEST 3: user@server:/home$ ls -l total 4 -rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 newfile (777) Create a new file through Nautilus. Check file permissions in terminal: TEST 4: user@server:/home$ ls -l total 4 -rw------- 1 user uploaders 0 2011-02-05 18:06 newfile (600) I mean... WHAT just happened here?! We should get 644 every single time. Instead I get 711, 777, 600, and then once 644. And the 644 is only achieved when creating a new, blank file through SSH, which is the least probable scenario. So I am asking, does umask/pam work after all? 4. SO WHAT DOES IT MEAN TO UMASK SSHFS? Sometimes we mount a server locally, using sshfs. Very useful. But again, we have permissions issues. Here is how we mount: sshfs -o idmap=user -o umask=0113 user@server:/home/ /mnt NOTE: we use umask = 113 because apparently, sshfs starts from 777 instead of 666, so with 113 we get 664 which is the desired file permission. But what now happens is that we see all files and directories as if they are 664. We browse in Nautilus to /mnt and: Right click - New File (newfile) --- TEST 5 Right click - New Folder (newfolder) --- TEST 6 Copy and paste a 777 file from our local client --- TEST 7 So let's check on the command line: user@client:/mnt$ ls -l total 8 -rw-rw-r-- 1 user 1007 3 Feb 5 18:05 copyfile (664) -rw-rw-r-- 1 user 1007 0 Feb 5 18:15 newfile (664) drw-rw-r-- 1 user 1007 4096 Feb 5 18:15 newfolder (664) But hey, let's check this same folder on the server-side: user@server:/home$ ls -l total 8 -rwxrwxrwx 1 user uploaders 3 2011-02-05 18:05 copyfile (777) -rw------- 1 user uploaders 0 2011-02-05 18:15 newfile (600) drwx--x--x 2 user uploaders 4096 2011-02-05 18:15 newfolder (711) What?! The REAL file permissions are very different from what we see in Nautilus. So does this umask on sshfs just create a 'filter' that shows unreal file permissions? And I tried to open a file from another user but the same group that had real 600 permissions but 644 'fake' permissions, and I could still not read this, so what good is this filter?? 5. UMASK IS ALL ABOUT FILES. BUT WHAT ABOUT DIRECTORIES? From my tests I can see that the umask that is being applied also somehow influences the directory permissions. However, I want my files to be 664 (002) and my directories to be 771 (006). So is it possible to have a different umask for directories? 6. PERHAPS UMASK/PAM IS REALLY COOL, BUT UBUNTU IS JUST BUGGY? On the one hand, I have read topics of people that have had success with PAM/UMASK and Ubuntu. On the other hand, I have found many older and newer bugs regarding umask/PAM/fuse on Ubuntu: https://bugs.launchpad.net/ubuntu/+source/gdm/+bug/241198 https://bugs.launchpad.net/ubuntu/+source/fuse/+bug/239792 https://bugs.launchpad.net/ubuntu/+source/pam/+bug/253096 https://bugs.launchpad.net/ubuntu/+source/sudo/+bug/549172 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=314796 So I do not know what to believe anymore. Should I just give up? Would ACL solve all my problems? 
Or am I simply running into Ubuntu problems again? One word of caution about backups with tar: Red Hat/CentOS builds of tar support ACLs, but Ubuntu's tar does not, so any ACLs will be lost when you create a backup. I am very willing to upgrade to Ubuntu 10.04 if that would solve my problems, but first I want to understand what is happening.
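For context, a condensed sketch of the two approaches that usually get close to "files 664, directories 771, regardless of client": a single system-wide umask via PAM, or making the permissions a property of the upload tree itself with a setgid bit plus default ACLs. The group name uploaders is taken from the listings above; the path /srv/upload is only a placeholder.

    # 1) One place for the umask instead of many shell rc files: in /etc/pam.d/common-session add
    #      session optional pam_umask.so umask=0002
    #    (sshd must use PAM, i.e. "UsePAM yes" in /etc/ssh/sshd_config, and a fresh login is needed.)

    # 2) Or let the tree enforce itself, independent of each client's umask:
    chgrp -R uploaders /srv/upload
    find /srv/upload -type d -exec chmod 2771 {} \;            # setgid: new entries inherit the group
    setfacl -R    -m g:uploaders:rwX,o::rX /srv/upload         # fix what is already there
    setfacl -R -d -m u::rwX,g:uploaders:rwX,o::rX /srv/upload  # defaults applied to new files/dirs

Note the caveat already mentioned above: Ubuntu's tar does not preserve ACLs, so if you go the ACL route, back the tree up with something ACL-aware (for example star, or rsync -A).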

    Read the article

  • Windows periodically disconnects, reconnects to the network

    - by einpoklum
    My setup: I have a PC with a Gigabyte GA-MA78S2H motherboard (Realtek Gigabit wired Ethernet on-board). I have the latest drivers (at least the latest driver for the NIC. I'm connecting via an Edimax BR-6216Mg (again, wired connection). For some reason I experience short periodic disconnects and reconnects. Specifically, Skype disconnects, tries to connect, succeeds after a short while; incoming SFTP sessions get dropped; using a browser, I sometime get stuck in the DNS lookup or connection to the website and a page won't load. A couple of seconds later, a reload works. All this happens with Windows XP SP3. With Windows 7, it also happens. (When I initially wrote this question I didn't notice it.) ipconfig for my adapter: Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller Physical Address. . . . . . . . . : 00-1D-7D-E9-72-9E Dhcp Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 192.168.0.2 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.0.254 DHCP Server . . . . . . . . . . . : 192.168.0.254 DNS Servers . . . . . . . . . . . : 192.117.235.235 62.219.186.7 Lease Obtained. . . . . . . . . . : Saturday, March 10, 2012 8:28:20 AM Lease Expires . . . . . . . . . . : Friday, January 26, 1906 2:00:04 AM A result of some tests a couple of the disconnects: C:\Documents and Settings\eyalroz.BAKNUNIN>nslookup google.com DNS request timed out. timeout was 2 seconds. *** Can't find server name for address 192.117.235.235: Timed out DNS request timed out. timeout was 2 seconds. *** Can't find server name for address 62.219.186.7: Timed out *** Default servers are not available Server: UnKnown Address: 192.117.235.235 DNS request timed out. timeout was 2 seconds. DNS request timed out. timeout was 2 seconds. *** Request to UnKnown timed-out C:\Documents and Settings\eyalroz.BAKNUNIN>ping 194.90.1.5 Pinging 194.90.1.5 with 32 bytes of data: Control-C ^C C:\Documents and Settings\eyalroz.BAKNUNIN>tracert -d 194.90.1.5 Tracing route to 194.90.1.5 over a maximum of 30 hops 1 <1 ms <1 ms <1 ms 192.168.0.254 2 * * 11 ms 10.168.128.1 3 14 ms 13 ms 14 ms 212.179.160.142 4 * * * Request timed out. 5 * * * Request timed out. 6 * * 47 ms 62.219.189.169 7 31 ms 27 ms 32 ms 62.219.189.150 8 15 ms 14 ms 16 ms 192.114.65.202 9 15 ms 15 ms 11 ms 212.143.10.66 10 13 ms 29 ms 31 ms 212.143.12.234 11 35 ms 15 ms 18 ms 212.143.8.72 12 22 ms 22 ms 16 ms 194.90.1.5 I usually ping 194.90.1.5 (which is not at my ISP) with 15ms response time and no losses. Things I've done/tried: [2012-03-26] I replaced the cable; I thought that made a difference, but the disconnects were back a while later, so that wasn't it. Updated the NIC driver. Tried reducing the MTU (used a utility called Dr. TCP); there was no effect. I updated my board BIOS revision (which caused all the HW to be "reinstalled" or re-identified - successfully). I installed another NIC, and tried switching to it - same effect with the on-board NIC. A while back I tried another router (although it was an Edimax model) - same problem. Connected the computer directly, with no router. Same problem. ping -t to the router (192.168.0.254) gives pongs, nothing is lost, and time is < 1 ms almost always (sometimes it says 1 or 2 ms). This is the case also during the disconnects.

    Read the article

  • Optimize php-fpm and Varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?
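As a first step for the tuning question itself, a small sketch of where to look before changing anything (Varnish 3.x counter names; the example thread-pool numbers are illustrative assumptions, not recommendations):

    varnishstat -1 | egrep 'n_wrk|client_drop|backend_(busy|fail|unhealthy)'
    # n_wrk_queued / n_wrk_drop climbing during the blitz.io run means the worker
    # threads, not the CPU, are the ceiling; in that case raise the pools in DAEMON_OPTS:
    #   -p thread_pools=4 -p thread_pool_min=200 -p thread_pool_max=4000
    # then re-run the benchmark while watching the same counters:
    varnishstat -1 | grep n_wrk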

    Read the article

  • rsync over ssh is not working anymore, while ssh itself is working fine (Write failed: broken pipe)

    - by brazorf
    This issue started happening after i changed router. This is the scenario: Windows7 Host Ubuntu 10.04 Guest (VirtualBox) Ubuntu 10.04 remote server What i used to do is run a very basic rsync command: rsync -avz --delete /local/path/ username@host:/path/to/remote/directory This worked perfect until i did change adsl provider, and i changed router aswell: now, this happens: rsync on Ubuntu Guest is not working anymore (to any random server), if using this new router rsync on Ubuntu Guest is WORKING, if i switch back to old router i tried a new virtual box ubuntu install, and the command is WORKING with both the routers So, the not-working-combo is oldUbuntu + newRouter. To get things worst, i can state that (on the not-working ubuntu) i ping the remote host plain ssh connection to the remote host is working fine (i can auth, connect, and do stuff on the remote host) scp is NOT working (this is just a further thing i tried) This is the console output of the execution, with ssh verbose set to vvvv: root@client:~# rsync -ae 'ssh -vvvv' /root/test-rsync/ {username}@{hostname}:/home/{username}/test/ OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /root/.ssh/config debug1: Applying options for {hostname} debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to {hostname} [{ip.add.re.ss}] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug3: Not a RSA1 key file /root/.ssh/{private_key}. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /root/.ssh/{private_key} type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu7 debug1: match: OpenSSH_5.3p1 Debian-3ubuntu7 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug3: Wrote 792 bytes for a total of 831 debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: [email protected],zlib,none debug2: kex_parse_kexinit: [email protected],zlib,none debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 [email protected] debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 [email protected] debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug3: Wrote 24 bytes for a total of 855 debug2: dh_gen_key: priv key bits set: 125/256 debug2: bits set: 525/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: Wrote 144 bytes for a total of 999 debug3: check_host_in_hostfile: filename /root/.ssh/known_hosts debug3: check_host_in_hostfile: match line 4 debug3: check_host_in_hostfile: filename /root/.ssh/known_hosts debug3: check_host_in_hostfile: match line 5 debug1: Host '{hostname}' is known and matches the RSA host key. 
debug1: Found key in /root/.ssh/known_hosts:4 debug2: bits set: 512/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: Wrote 16 bytes for a total of 1015 debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug3: Wrote 48 bytes for a total of 1063 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /root/.ssh/{private_key} (0x7f3ad0e7f9b0) debug3: Wrote 80 bytes for a total of 1143 debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /root/.ssh/{private_key} debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug3: Wrote 368 bytes for a total of 1511 debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: fp 1b:65:36:92:59:b3:12:3e:8c:c6:03:28:d4:81:09:dc debug3: sign_and_send_pubkey debug1: read PEM private key done: type RSA debug3: Wrote 656 bytes for a total of 2167 debug1: Enabling compression at level 6. debug1: Authentication succeeded (publickey). debug2: fd 4 setting O_NONBLOCK debug3: fd 5 is O_NONBLOCK debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug3: Wrote 112 bytes for a total of 2279 debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending environment. debug3: Ignored env TERM debug3: Ignored env SHELL debug3: Ignored env SSH_CLIENT debug3: Ignored env SSH_TTY debug1: Sending env LC_ALL = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env MAIL debug3: Ignored env PATH debug3: Ignored env PWD debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env SHLVL debug3: Ignored env HOME debug3: Ignored env LANGUAGE debug3: Ignored env LOGNAME debug3: Ignored env SSH_CONNECTION debug3: Ignored env LESSOPEN debug3: Ignored env LESSCLOSE debug3: Ignored env _ debug1: Sending command: rsync --server -logDtpre.iLsf . /home/{username}/test/ debug2: channel 0: request exec confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug3: Wrote 208 bytes for a total of 2487 At this point everything freeze for lots of minutes, ending in Write failed: Broken pipe rsync: connection unexpectedly closed (0 bytes received so far) [sender] rsync error: unexplained error (code 255) at io.c(601) [sender=3.0.7] Any suggestion? Thank You F. Edit 2012/09/13: i am changing title and issue definition, since i made some TINY step ahead and i think i can give more detailed clues.
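Since small interactive ssh sessions work while bulk transfers (rsync, scp) stall after the router swap, one hedged line of investigation is a path-MTU / large-packet problem introduced by the new router. A quick sketch (the host name, interface and sizes below are placeholders):

    ping -c 3 -M do -s 1472 remote.example.org   # 1472 + 28 bytes of headers = 1500; shrink -s until it passes
    ifconfig eth0 mtu 1400                       # temporarily lower the guest's MTU, then retry the rsync
    # or leave the MTU alone and clamp the MSS advertised by outgoing connections:
    iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

If the transfer works with the smaller MTU, the router (or its ADSL/PPPoE link) is dropping full-size packets and the fix belongs there.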

    Read the article

  • How to improve Varnish performance?

    - by Darkseal
    We're experiencing a strange problem with our current Varnish configuration. 4x Web Servers (IIS 6.5 on Windows 2003 Server, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) 3x Varnish Servers (varnish-3.0.3 revision 9e6a70f on Ubuntu 12.04.2 LTS - 64 bit/precise, Kernel Linux 3.2.0-29-generic, each installed on a Intel(R) Xeon(R) CPU E5450 @ 3.00GHz Quad Core, 4GB RAM) The Varnish Servers performance are awfully bad in general, to the point that if we shut down one of them the other two are unable to fullfill all the requests and start to skip beats resulting in pending requests, timeouts, 404, etc. What can we do to improve our Varnish performance? Considering that we're getting less than 5k request per seconds during our max peak, we should be able to serve our pages even with a single one of them without any problem. We use a standard, vanilla CFG, as shown by this varnishadm param.show output: acceptor_sleep_decay 0.900000 [] acceptor_sleep_incr 0.001000 [s] acceptor_sleep_max 0.050000 [s] auto_restart on [bool] ban_dups on [bool] ban_lurker_sleep 0.010000 [s] between_bytes_timeout 60.000000 [s] cc_command "exec gcc -std=gnu99 -g -O2 -pthread -fpic -shared - Wl,-x -o %o %s" cli_buffer 8192 [bytes] cli_timeout 20 [seconds] clock_skew 10 [s] connect_timeout 0.700000 [s] critbit_cooloff 180.000000 [s] default_grace 10.000000 [seconds] default_keep 0.000000 [seconds] default_ttl 120.000000 [seconds] diag_bitmap 0x0 [bitmap] esi_syntax 0 [bitmap] expiry_sleep 1.000000 [seconds] fetch_chunksize 128 [kilobytes] fetch_maxchunksize 262144 [kilobytes] first_byte_timeout 60.000000 [s] group varnish (113) gzip_level 6 [] gzip_memlevel 8 [] gzip_stack_buffer 32768 [Bytes] gzip_tmp_space 0 [] gzip_window 15 [] http_gzip_support off [bool] http_max_hdr 64 [header lines] http_range_support on [bool] http_req_hdr_len 8192 [bytes] http_req_size 32768 [bytes] http_resp_hdr_len 8192 [bytes] http_resp_size 32768 [bytes] idle_send_timeout 60 [seconds] listen_address :80 listen_depth 1024 [connections] log_hashstring on [bool] log_local_address off [bool] lru_interval 2 [seconds] max_esi_depth 5 [levels] max_restarts 4 [restarts] nuke_limit 50 [allocations] pcre_match_limit 10000 [] pcre_match_limit_recursion 10000 [] ping_interval 3 [seconds] pipe_timeout 60 [seconds] prefer_ipv6 off [bool] queue_max 100 [%] rush_exponent 3 [requests per request] saintmode_threshold 10 [objects] send_timeout 600 [seconds] sess_timeout 5 [seconds] sess_workspace 16384 [bytes] session_linger 50 [ms] session_max 100000 [sessions] shm_reclen 255 [bytes] shm_workspace 8192 [bytes] shortlived 10.000000 [s] syslog_cli_traffic on [bool] thread_pool_add_delay 2 [milliseconds] thread_pool_add_threshold 2 [requests] thread_pool_fail_delay 200 [milliseconds] thread_pool_max 2000 [threads] thread_pool_min 5 [threads] thread_pool_purge_delay 1000 [milliseconds] thread_pool_stack unlimited [bytes] thread_pool_timeout 300 [seconds] thread_pool_workspace 65536 [bytes] thread_pools 2 [pools] thread_stats_rate 10 [requests] user varnish (106) vcc_err_unref on [bool] vcl_dir /etc/varnish vcl_trace off [bool] vmod_dir /usr/lib/varnish/vmods waiter default (epoll, poll) This is our default.vcl file: LINK sub vcl_recv { # BASIC recv COMMANDS: # # lookup -> search the item in the cache # pass -> always serve a fresh item (no-caching) # pipe -> like pass but ensures a direct-connection with the backend (no-cache AND no-proxy) # Allow the backend to serve up stale content if it is responding slow. 
# This defines when Varnish should use a stale object if it has one in the cache. set req.grace = 30s; if (client.ip == "127.0.0.1") { # request from NGINX - do not alter X-Forwarded-For set req.http.HTTPS = "on"; } else { # Add an X-Forwarded-For to keep track of original request unset req.http.HTTPS; unset req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; } set req.backend = www_director; # Strip all cookies to force an anonymous request when the back-end servers are down. if (!req.backend.healthy) { unset req.http.Cookie; } ## HHTP Accept-Encoding if (req.http.Accept-Encoding) { if (req.http.Accept-Encoding ~ "gzip") { set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { set req.http.Accept-Encoding = "deflate"; } else { unset req.http.Accept-Encoding; } } if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { /* non-RFC2616 or CONNECT */ return (pipe); } if (req.request != "GET" && req.request != "HEAD") { /* only deal with GET and HEAD by default */ return (pass); } if (req.http.Authorization) { return (pass); } if (req.http.HTTPS ~ "on") { return (pass); } ###################################################### # COOKIE HANDLING ###################################################### # METHOD 1: do not remove cookies, but pass the page if they contain TB_NC if (!(req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$")) { if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { return (pass); } } return (lookup); } # Code determining what to do when serving items from the IIS Server sub vcl_fetch { unset beresp.http.Server; set beresp.http.Server = "Server-1"; # Allow items to be stale if needed. This is the maximum time Varnish should keep an object. set beresp.grace = 1h; if (req.url ~ "(?i)\.(png|gif|ipeg|jpg|ico|swf|css|js)(\?[a-z0-9]+)?$") { unset beresp.http.set-cookie; } # Default Varnish VCL logic if (!beresp.cacheable || beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has specific TB_NC no-caching cookie if (req.http.Cookie && req.http.Cookie ~ "TB_NC") { set beresp.http.X-Cacheable = "NO:Got Cookie"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control private else if (beresp.http.Cache-Control ~ "private") { set beresp.http.X-Cacheable = "NO:Cache-Control=private"; set beresp.ttl = 120 s; return(hit_for_pass); } # Not Cacheable if it has Cache-Control no-cache or Pragma no-cache else if (beresp.http.Cache-Control ~ "no-cache" || beresp.http.Pragma ~ "no-cache") { set beresp.http.X-Cacheable = "NO:Cache-Control=no-cache (or pragma no-cache)"; set beresp.ttl = 120 s; return(hit_for_pass); } # If we reach to this point, the object is cacheable. # Cacheable but with not enough ttl: we need to extend the lifetime of the object artificially # NOTE: Varnish default TTL is set in /etc/sysconfig/varnish # and can be checked using the following command: # varnishadm param.show default_ttl else if (beresp.ttl < 1s) { set beresp.ttl = 5s; set beresp.grace = 5s; set beresp.http.X-Cacheable = "YES:FORCED"; } # Cacheable and with valid TTL. 
else { set beresp.http.X-Cacheable = "YES"; } # DEBUG INFO (Cookies) # set beresp.http.X-Cookie-Debug = "Request cookie: " + req.http.Cookie; return(deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; if (obj.status == 404) { synthetic {" <!-- Markup for the 404 page goes here --> "}; } else if (obj.status == 500) { synthetic {" <!-- Markup for the 500 page goes here --> "}; } else if (obj.status == 503) { if (req.restarts < 4) { return(restart); } else { synthetic {" <!-- Markup for the 503 page goes here --> "}; } } else { synthetic {" <!-- Markup for a generic error page goes here --> "}; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } Thanks in advance,
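For reference, a short sketch of how to check whether the three Varnish boxes are actually caching or mostly passing through to IIS (Varnish 3.x tooling; the grep pattern is only a starting point):

    varnishstat -1 | egrep 'cache_(hit|miss)|s_pass|backend_(busy|fail|unhealthy)'
    varnishtop -b -i TxURL                 # the URLs most often fetched from the backends
    varnishlog -c -m 'TxStatus:503'        # watch 503s being synthesized in real time
    # A hit rate near zero (for example because most requests carry the TB_NC or
    # session cookies handled above) means the caches act as plain proxies and the
    # four IIS servers, not Varnish, set the ceiling.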

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables:

SELECT TOP (150)
    T.id,
    T.padding
FROM dbo.Test AS T
ORDER BY NEWID()
OPTION (MAXDOP 1);

Test 1 – CHAR(3999)

Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially.

Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below captures STATISTICS IO output and measures the amount of tempdb space used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution.

DECLARE @read BIGINT,
        @write BIGINT;

SELECT  @read = SUM(num_of_bytes_read),
        @write = SUM(num_of_bytes_written)
FROM    tempdb.sys.database_files AS DBF
JOIN    sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
WHERE   DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    TC.id,
    TC.padding
FROM dbo.TestCHAR AS TC
ORDER BY NEWID()
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

SELECT  tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
        tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
        internal_use_MB =
        (
            SELECT  internal_objects_alloc_page_count / 128.0
            FROM    sys.dm_db_task_space_usage
            WHERE   session_id = @@SPID
        )
FROM    tempdb.sys.database_files AS DBF
JOIN    sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
WHERE   DBF.type_desc = 'ROWS';

Let’s take a closer look at the statistics and the query plan this produces.  Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to predict the sort space needed exactly, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
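The grant figures quoted above come from the execution plan, but a grant can also be watched while the query runs.  The query below is a minimal sketch of that idea and is not part of the original monitoring script: run it from a second session while the 5-second test query executes, and it reports requested, granted, and used workspace memory from sys.dm_exec_query_memory_grants (the DMV reports KB; the division converts to MB).

-- Hedged sketch (not from the original monitoring script): observe workspace
-- memory grants from a second session while the test query is running.
SELECT  MG.session_id,
        requested_MB = MG.requested_memory_kb / 1024.,
        granted_MB   = MG.granted_memory_kb / 1024.,
        used_MB      = MG.used_memory_kb / 1024.,
        ideal_MB     = MG.ideal_memory_kb / 1024.,
        MG.wait_time_ms
FROM    sys.dm_exec_query_memory_grants AS MG
WHERE   MG.session_id <> @@SPID
ORDER BY MG.granted_memory_kb DESC;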
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

CHAR(3999) Performance Summary:
- 5 seconds elapsed time
- 243MB memory grant
- 195MB tempdb usage
- 192MB estimated sort set
- 25,043 logical reads
- Sort Warning

Test 2 – VARCHAR(MAX)

We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

DECLARE @read BIGINT,
        @write BIGINT;

SELECT  @read = SUM(num_of_bytes_read),
        @write = SUM(num_of_bytes_written)
FROM    tempdb.sys.database_files AS DBF
JOIN    sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
WHERE   DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    TM.id,
    TM.padding
FROM dbo.TestMAX AS TM
ORDER BY NEWID()
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

SELECT  tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
        tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
        internal_use_MB =
        (
            SELECT  internal_objects_alloc_page_count / 128.0
            FROM    sys.dm_db_task_space_usage
            WHERE   session_id = @@SPID
        )
FROM    tempdb.sys.database_files AS DBF
JOIN    sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
WHERE   DBF.type_desc = 'ROWS';

This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

MAX Performance Summary:
- 8 seconds elapsed time
- 245MB memory grant
- 391MB tempdb usage
- 193MB estimated sort set
- 25,043 logical reads
- Sort Warning

Test 3 – TEXT

The same test again, but using the deprecated TEXT data type for the padding column:

DECLARE @read BIGINT,
        @write BIGINT;

SELECT  @read = SUM(num_of_bytes_read),
        @write = SUM(num_of_bytes_written)
FROM    tempdb.sys.database_files AS DBF
JOIN    sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
WHERE   DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    TT.id,
    TT.padding
FROM dbo.TestTEXT AS TT
ORDER BY NEWID()
OPTION (MAXDOP 1, RECOMPILE);

SET STATISTICS IO OFF;

SELECT  tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
        tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
        internal_use_MB =
        (
            SELECT  internal_objects_alloc_page_count / 128.0
            FROM    sys.dm_db_task_space_usage
            WHERE   session_id = @@SPID
        )
FROM    tempdb.sys.database_files AS DBF
JOIN    sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
WHERE   DBF.type_desc = 'ROWS';

This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why:

TEXT Performance Summary:
- 0.5 seconds elapsed time
- 9MB memory grant
- 5MB tempdb usage
- 5MB estimated sort set
- 207 logical reads
- 596 LOB logical reads
- Sort Warning

SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

TEXT versus MAX Storage

The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now).

OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to store the data off-row, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data.

The MAXOOR Table

The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
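Before building that table, a brief aside on the TEXT side of things: the ‘text in row’ option mentioned above is set with sp_tableoption in the same way as the MAX option used below.  The fragment that follows is a minimal illustrative sketch only – it is not part of the test scripts (and actually running it would of course change the TEXT test results); the 256-byte limit is an arbitrary example value.

-- Hedged illustration only (not part of the tests): allow TEXT values of up to
-- 256 bytes to be stored in-row, then check the current setting.
EXECUTE sys.sp_tableoption
    @TableNamePattern = N'dbo.TestTEXT',
    @OptionName = 'text in row',
    @OptionValue = '256';

SELECT  name, text_in_row_limit
FROM    sys.tables
WHERE   [schema_id] = SCHEMA_ID(N'dbo')
AND     name = N'TestTEXT';

As with ‘large value types out of row’, turning the option on does not immediately relocate existing values; they move in-row only as they are modified.  Now back to storing the VARCHAR(MAX) data off-row.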
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

CREATE TABLE dbo.TestMAXOOR
(
    id      INTEGER IDENTITY (1,1)  NOT NULL,
    padding VARCHAR(MAX)            NOT NULL,

    CONSTRAINT [PK dbo.TestMAXOOR (id)]
        PRIMARY KEY CLUSTERED (id)
);

EXECUTE sys.sp_tableoption
    @TableNamePattern = N'dbo.TestMAXOOR',
    @OptionName = 'large value types out of row',
    @OptionValue = 'true';

SELECT  large_value_types_out_of_row
FROM    sys.tables
WHERE   [schema_id] = SCHEMA_ID(N'dbo')
AND     name = N'TestMAXOOR';

INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX)
(
    padding
)
SELECT  SPACE(0)
FROM    dbo.TestCHAR
ORDER BY id;

UPDATE  TM WITH (TABLOCK)
SET     padding.WRITE (TC.padding, NULL, NULL)
FROM    dbo.TestMAXOOR AS TM
JOIN    dbo.TestCHAR AS TC
        ON TC.id = TM.id;

EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';

CHECKPOINT;

Test 4 – MAXOOR

We can now re-run our test on the MAXOOR (MAX out of row) table:

DECLARE @read BIGINT,
        @write BIGINT;

SELECT  @read = SUM(num_of_bytes_read),
        @write = SUM(num_of_bytes_written)
FROM    tempdb.sys.database_files AS DBF
JOIN    sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
WHERE   DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150)
    MO.id,
    MO.padding
FROM dbo.TestMAXOOR AS MO
ORDER BY NEWID()
OPTION (MAXDOP 1, RECOMPILE);

SET STATISTICS IO OFF;

SELECT  tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
        tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
        internal_use_MB =
        (
            SELECT  internal_objects_alloc_page_count / 128.0
            FROM    sys.dm_db_task_space_usage
            WHERE   session_id = @@SPID
        )
FROM    tempdb.sys.database_files AS DBF
JOIN    sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
WHERE   DBF.type_desc = 'ROWS';

MAXOOR Performance Summary:
- 0.3 seconds elapsed time
- 245MB memory grant
- 0MB tempdb usage
- 193MB estimated sort set
- 207 logical reads
- 446 LOB logical reads
- No sort warning

The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

The Hidden Problem

There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in Test 2 (where the MAX data was stored in-row).

The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately move existing MAX data in or out of row; values are only relocated when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need (a quick way to check for this kind of queuing is sketched below).  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
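To see whether large grants are actually making sessions queue, one option is to look at the workspace memory semaphores and at any requests still waiting for their grant.  The queries below are a minimal sketch along those lines, not taken from the article; they use sys.dm_exec_query_resource_semaphores and sys.dm_exec_query_memory_grants (grant_time is NULL while a request is queued on RESOURCE_SEMAPHORE).

-- Hedged sketch (not from the original article): check workspace memory pressure.
-- Waiter counts above zero mean queries are queuing on RESOURCE_SEMAPHORE.
SELECT  RS.resource_semaphore_id,
        RS.pool_id,
        target_MB    = RS.target_memory_kb / 1024.,
        available_MB = RS.available_memory_kb / 1024.,
        granted_MB   = RS.granted_memory_kb / 1024.,
        RS.grantee_count,
        RS.waiter_count
FROM    sys.dm_exec_query_resource_semaphores AS RS;

-- Requests still waiting for a memory grant.
SELECT  MG.session_id,
        requested_MB = MG.requested_memory_kb / 1024.,
        MG.wait_time_ms
FROM    sys.dm_exec_query_memory_grants AS MG
WHERE   MG.grant_time IS NULL
ORDER BY MG.wait_time_ms DESC;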
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use.

The Solution

The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables:

SET STATISTICS IO ON;

WITH TestTable AS
(
    SELECT * FROM dbo.TestCHAR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT  T1.id,
        T1.padding
FROM    TestTable AS T1
WHERE   T1.id = ANY (SELECT id FROM TopKeys)
OPTION  (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAX
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT  T1.id,
        T1.padding
FROM    TestTable AS T1
WHERE   T1.id IN (SELECT id FROM TopKeys)
OPTION  (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestTEXT
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT  T1.id,
        T1.padding
FROM    TestTable AS T1
WHERE   T1.id IN (SELECT id FROM TopKeys)
OPTION  (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAXOOR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT  T1.id,
        T1.padding
FROM    TestTable AS T1
WHERE   T1.id IN (SELECT id FROM TopKeys)
OPTION  (MAXDOP 1);

SET STATISTICS IO OFF;

All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries:

Table 'TestCHAR'   logical reads 25511  lob logical reads   0
Table 'TestMAX'    logical reads 25511  lob logical reads   0
Table 'TestTEXT'   logical reads   412  lob logical reads 597
Table 'TestMAXOOR' logical reads   413  lob logical reads 446

We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are:

Table 'TestCHAR'   logical reads 835  lob logical reads   0
Table 'TestMAX'    logical reads 835  lob logical reads   0
Table 'TestTEXT'   logical reads 686  lob logical reads 597
Table 'TestMAXOOR' logical reads 686  lob logical reads 448

With the new index, all four queries use the same query plan.

Performance Summary:
- 0.3 seconds elapsed time
- 6MB memory grant
- 0MB tempdb usage
- 1MB sort set
- 835 logical reads (CHAR, MAX)
- 686 logical reads (TEXT, MAXOOR)
- 597 LOB logical reads (TEXT)
- 448 LOB logical reads (MAXOOR)
- No sort warning

I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

Conclusion

This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else:

Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production, that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either.

The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query), do it before you head out for lunch…

This post is dedicated to the people of Christchurch, New Zealand.

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi

