
  • The SSIS tuning tip that everyone misses

    - by Rob Farley
I know that everyone misses this, because I'm yet to find someone who doesn't have a bit of an epiphany when I describe it.

When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don't consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It's wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I'm not suggesting that people don't tune their queries – there's plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it's not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data – in particular, those first 10,000 rows that populate the first buffer, ready to be passed down through the rest of the transformations on the way to the Destination.

Let's look at a very simple query as an example, using the AdventureWorks database. We're picking the distinct Weight values out of the Product table, and the plan does this by scanning the table and doing a Sort. It's a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later.

Before I explain the problem here, let me jump back into the SSIS world... If you've investigated how to tune an SSIS flow, then you'll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I'm using a larger data set for this, because my small list of Weights won't demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained data that would be sorted into the first buffer. This is a blocking operation.

Back in the land of T-SQL, consider our Distinct Sort operator. It's also blocking. It won't let data through until it's seen all of it. If you weren't okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it's being pulled. Picture it like this... the data flows from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I'm taking some liberties here, because the two queries aren't the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow.

Luckily, T-SQL gives us a brilliant query hint to help avoid this: OPTION (FAST 10000). This hint tells the Query Optimizer to choose a plan that is optimised for returning the first 10,000 rows – the default SSIS buffer size – as quickly as possible. And the effect can be quite significant.
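To make that concrete, here is a minimal sketch of the hint applied to a source query. The table and column follow the AdventureWorks example above; treat it as an illustration rather than the exact query from the post:

SELECT DISTINCT Weight
FROM Production.Product
OPTION (FAST 10000); -- ask the optimizer for a plan that returns the first 10,000 rows quickly

The rest of the query is unchanged – the hint only influences which plan the Query Optimizer picks.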
First let's consider a simple example, then we'll look at a larger one. Consider our weights. We don't have 10,000, so I'm going to use OPTION (FAST 1) instead. You'll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator consumes 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row can be returned quicker – a Flow Distinct operator is non-blocking.

The data here isn't sorted, of course. It's in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn't come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it's seen so far, but by handling the data one row at a time, it can push rows through quicker. Overall, it's a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that's exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: this 1-row estimation is caused by the Query Optimizer imagining the SELECT operation saying "Give me one row" first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator.

Now I'm going to show you an example on a much larger data set. This data flow was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be sorted, to support further SSIS operations that required it. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I'm sure you'll agree. In case you're curious, those arrows in the top one are 780,000 rows in size. In the second, they're estimated to be 10,000, although the Actual figures end up being 780,000.

The top one definitely runs faster – it finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it's doing Nested Loops across 780,000 rows! That's not generally recommended at all. That's "go and make yourself a coffee" time. In this case, it was about six or seven minutes. The faster one finished in about a minute.

But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn't care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

When debugging SSIS, you run the package and sit there waiting for the Debug information to start appearing. You look for the numbers on the data flow, and watch for operators going Yellow and Green. Without the hint, I'd sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. Adding this hint felt like waving a magic wand across the query to make it run several times faster. That wasn't the case at all – but it felt like it to SSIS.


  • The SPARC SuperCluster

    - by Karoly Vegh
Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:

• Build a compute platform optimized to run your technologies
• Develop application-aware, intelligently caching storage components
• Take an impressively fast network technology interconnecting it with the compute nodes
• Tune the application to scale with the nodes to yet unseen performance
• Reduce the amount of data moving via compression
• Provide this all in a pre-integrated single product with a single-pane management interface

All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies – and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:

• The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
• While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar for single-threaded performance too – a humble 5x improvement over their ancestors. Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
• Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
• A PCI controller right on the chip for customers who need I/O performance.
• Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated, independent of each other, within a physical server.

II. For database performance, it includes Exadata Storage Cells – one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for the Oracle Databases you run on the SPARC SuperCluster; that is what they are built and tuned for: DB performance. These storage cells are SQL-aware.
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw datablocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply – it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to prefilter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing – as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.

• They don't only pre-filter, but also provide data preprocessing features – e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
• They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
• Of course one can't simply rely on disks for performance, so there is Flash Storage included for caching.

III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:

• Real high speed: 40 Gbit/s, full duplex, of course. Oh, and a really low latency.
• RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that – remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
• You can also run IP over InfiniBand if you please – that's the way the compute nodes communicate with each other.

IV. It includes general-purpose storage too: the ZFSSA, a unified storage system providing both NAS and SAN access, with the following features:

• NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
• All the ZFS features onboard: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
• Storage heads in an HA-cluster configuration providing availability of the data.
• DTrace Live Analytics in a web-based administration UI.
• It serves as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting and cloning data for them.

There's a lot of great technology included in Oracle's SPARC SuperCluster; we have walked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks – that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems – this is the Oracle-on-Oracle advantage. But you can run other software environments, such as SAP, if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
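To make that last point a little more concrete, carving out a Solaris Zone for a single application takes only a few commands – a rough sketch, with a made-up zone name and path:

# configure a new zone (names and paths are examples)
zonecfg -z appzone1 'create; set zonepath=/zones/appzone1; commit'
# install and boot it
zoneadm -z appzone1 install
zoneadm -z appzone1 boot
# log in to the freshly booted zone
zlogin appzone1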
Here are the key takeaways once again. The SPARC SuperCluster:

• Is a pre-integrated Engineered System
• Contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading
• Contains Exadata storage cells that intelligently offload the burden of the DB nodes
• Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
• Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
• Can grow from a single half-rack to several full racks in size
• Supports the consolidation of hundreds of applications

To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts – and a careful and actually very time-consuming integration process is necessary to orchestrate them all for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work – testing and evaluating – has been done. Now go, provide services.

-- charlie


  • Using Apache FOP from .NET level

    - by Lukasz Kurylo
In one of my previous posts I was talking about FO.NET, which I was using to generate pdf documents from XSL-FO. FO.NET is one of the .NET ports of Apache FOP. Unfortunately, it is no longer maintained. I knew that when I decided to use it, because there is a lack of available (free) choices for .NET to render a pdf from XSL-FO. I hoped that in this implementation I would find everything I needed to create a pdf file for my really simple requirements. FO.NET is a port of some old version of Apache FOP, and I found really quickly that it lacks some features I needed, like dotted borders, double borders or support for margins. So I started to look for some alternatives. I didn't try NFOP, another port of Apache FOP, because I found something I think is much better: the IKVM.NET project.

IKVM.NET is not a pdf renderer. So what is it? From the project site:

IKVM.NET is an implementation of Java for Mono and the Microsoft .NET Framework. It includes the following components: a Java Virtual Machine implemented in .NET; a .NET implementation of the Java class libraries; tools that enable Java and .NET interoperability.

In the simplest terms, IKVM.NET allows you to use a Java code library in C# code and vice versa.

I tried to use Apache FOP – the best open source XSL-FO to pdf renderer I know of, written in Java – from my project written in C#, using IKVM.NET, and it worked like a charm. In the rest of the post I want to show how to prepare a .NET *.dll class library from the Apache FOP *.jar's with IKVM.NET, and how to generate a simple Hello World pdf document.

To start playing with IKVM.NET and Apache FOP we need to download their packages – IKVM.NET and Apache FOP – and then unpack them.

From the FOP directory copy all the *.jar files from the lib and build catalogs to some location, e.g. d:\fop. The second step is to build the *.dll library from these files. On the console, execute the following command:

ikvmc -target:library -out:d:\fop\fop.dll -recurse:d:\fop

The ikvmc tool is located in the bin subdirectory where you unpacked IKVM.NET. You must execute this command from that catalog, add the path to the global PATH variable, or specify the full path to the bin subdirectory.

If no error occurred during this process, the fop.dll library should be created. Now we can create a simple project to test whether we can create a pdf file. So let's create a simple console application project and add references to fop.dll and the IKVM dll's: IKVM.OpenJDK.Core and IKVM.OpenJDK.XML.API.
Full code to generate a pdf file from an XSL-FO template:

static void Main(string[] args)
{
    //initialize the Apache FOP
    FopFactory fopFactory = FopFactory.newInstance();

    //in this stream we will get the generated pdf file
    OutputStream o = new DotNetOutputMemoryStream();
    try
    {
        Fop fop = fopFactory.newFop("application/pdf", o);
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer();

        //read the template from disc
        Source src = new StreamSource(new File("HelloWorld.fo"));
        Result res = new SAXResult(fop.getDefaultHandler());
        transformer.transform(src, res);
    }
    finally
    {
        o.close();
    }
    using (System.IO.FileStream fs = System.IO.File.Create("HelloWorld.pdf"))
    {
        //write the generated pdf from the .NET MemoryStream to disc
        var data = ((DotNetOutputMemoryStream)o).Stream.GetBuffer();
        fs.Write(data, 0, data.Length);
    }
    Process.Start("HelloWorld.pdf");
    System.Console.ReadLine();
}

Apache FOP by default uses Java's Xalan to work with XML files. I didn't find a way to replace this piece of code with an equivalent from the .NET standard library. If any error or warning occurs while generating the pdf file, it will be shown on the console; that's why I inserted the last line in the sample above. DotNetOutputMemoryStream is my wrapper for the Java OutputStream. I created it to be able to exchange data between the .NET <-> Java objects. Its implementation:

class DotNetOutputMemoryStream : OutputStream
{
    private System.IO.MemoryStream ms = new System.IO.MemoryStream();
    public System.IO.MemoryStream Stream
    {
        get
        {
            return ms;
        }
    }
    public override void write(int i)
    {
        ms.WriteByte((byte)i);
    }
    public override void write(byte[] b, int off, int len)
    {
        ms.Write(b, off, len);
    }
    public override void write(byte[] b)
    {
        ms.Write(b, 0, b.Length);
    }
    public override void close()
    {
        ms.Close();
    }
    public override void flush()
    {
        ms.Flush();
    }
}

The last thing we need is the HelloWorld.fo template:

<?xml version="1.0" encoding="utf-8"?>
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="simple"
                  page-height="29.7cm"
                  page-width="21cm"
                  margin-top="1.8cm"
                  margin-bottom="0.8cm"
                  margin-left="1.6cm"
                  margin-right="1.2cm">
      <fo:region-body margin-top="3cm"/>
      <fo:region-before extent="3cm"/>
      <fo:region-after extent="1.5cm"/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <fo:page-sequence master-reference="simple">
    <fo:flow flow-name="xsl-region-body">
      <fo:block font-size="18pt" color="black" text-align="center">
        Hello, World!
      </fo:block>
    </fo:flow>
  </fo:page-sequence>
</fo:root>

I'm not going to explain how this template is created, because that will be covered in future posts. The generated pdf file is simply a page with a centered "Hello, World!" heading.


  • CodePlex Daily Summary for Saturday, November 27, 2010

Popular Releases:

• MahTweets for Windows Phone: Nightly 69 – Latest nightly for build 69.
• XamlQuery/WPF – The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source) – The first release of the popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.
• Math.NET Numerics: Beta 1 – First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.
• WatchersNET.SiteMap: WatchersNET.SiteMap 01.03.02 – What's new: a new Tax Filter; you can now select which Terms you want to use.
• Minecraft GPS: Minecraft GPS 1.1 – New features: a compass, a new style, and opacity on the main window to allow overlaying Minecraft.
• Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25 – Code samples for Visual Studio 2010.
• Typps (formerly jiffycms) wysiwyg rich text HTML editor for ASP.NET AJAX: Typps 2.9 – When uploading files (not images) through the file uploader and the multi-file uploader, the FileUploaded and MultiFileUploaded event handlers were reporting an empty event argument; this is now fixed. Also fixed the url field not updating when uploading a file (not an image).
• Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta – WBFS repair (default) options fixed, transfer to image fixed, settings UI widget names fixed, some little bug fixes. You need to reset the settings! Delete WiiBaFu's config file or registry entries: Linux: ~/.config/WiiBaFu/wiibafu.conf; Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu; Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist. Caution: this is a BETA version! Errors, crashes and data loss are not impossible! Use in test environments only, not on productive systems.
• Minemapper: Minemapper v0.1.3 – Added process count and world size calculation progress to the status bar. Added a View -> 'Status Bar' menu item to show/hide the status bar; the status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.
• SQL Monitor: SQL Monitor 1.4 – Added automatic loading of SQL Server instances, added a friendly wait cursor, fixed a problem with the 4.0 fx, added exception handling.
• Sexy Select: sexy select v0.4 – Added method elements, which returns all the option elements currently added to the select list. Added method selectOption, which accepts two values: the element to be modified and the selected state (true/false).
• Deep Zoom for WPF: First Release – This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).
• Simple Service Locator: Simple Service Locator v0.12 – An easy-to-use Inversion of Control library that is a complete implementation of the Common Service Locator interface. It solely supports code-based configuration and is an ideal starting point for developers unfamiliar with larger IoC / DI libraries. New in this release: collections registered using RegisterAll<T> can now be injected using automatic constructor injection, and a new RegisterAll<T>(params T[]) method overload has been added.
• BlogEngine.NET: BlogEngine.NET 2.0 RC – A Release Candidate version for BlogEngine.NET 2.0. The most current stable version of BlogEngine.NET is 1.6. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out the installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions.
• NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156 – The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. This release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please note: there is a new option in the setup program to install for "Just Me" or "Everyone."
• VFPX: FoxBarcode v.0.11 – Released 2010.11.22. FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies, to be used in VFP forms and reports or exported to other applications. Its use and distribution is free for the whole Visual FoxPro community. What's new: added a third parameter to the BarcodeImage() method and fixed some minor bugs. History: FoxBarcode v.0.10, released 2010.11.19, 85 downloads.
• DotNetAge – a lightweight Mvc jQuery CMS: DotNetAge 1.1.0.5 – Document Library features and templates added, template issues resolved, publishing service performance improved, OPML support added. New in DotNetAge 1.1: core updates improving runtime performance and stability, the DNA core objects model, and personalization features that let users create personal websites, manage their resources and store personal data. DynamicUI: fixed the bug where the PageManager could not move page nodes.
• ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos – A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager. Tested on Mozilla, Safari, Chrome, Opera, and IE 9b/8/7/6.
• MDownloader: MDownloader-0.15.24.6966 – Fixed the updater; fixed minor bugs.
• WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.1 – Version 2.0.0.1 (Milestone 1). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Remark: the sample applications use Microsoft's IoC container MEF; however, WAF doesn't force you to use the same IoC container in your application.

New Projects:

• Community Megaphone for Windows Phone: The official Community Megaphone application for Windows Phone. Releases are in XAP format, allowing you to upload the compiled application onto a device using the Windows Phone Application Deployment tool. Download the source code to learn how to build Windows Phone applications.
• Group positioning GPS: A project for a university course on geographic information systems – a group of gadgets that can know the position of the other devices in the same group.
• IIS HTTPS Binder: On IIS 7 you cannot create more than one HTTPS binding, even if you have more than one SSL certificate and need HTTPS bindings on different hosted websites; with one IP address you can bind only one SSL certificate to a chosen website. This small application can fix this.
• Inspired Faith Now Playing: A ClickOnce-deployed desktop application that sits in your application tray and notifies you when any of your favorite Messianic Jewish and Christian artists or songs play on the Inspired Faith online radio station (http://inspiredfaith.org).
• MA Manager: The goal of this software is to help manage a martial arts school through multiple clients and platforms, allowing instructors to easily track students' progress over time.
• MathModels: Commonly used algorithm implementations in C# .NET.
• MyFlickr API: A library that allows developers to call the Flickr API from any .NET application.
• NerdOnRails.DynamicProxy: A small dynamic-proxy implementation using the dynamic object introduced with .NET 4.
• NerdOnRails.Injector: A small dependency injection framework for .NET.
• NHyperV: A C# Hyper-V programming model.
• Power Scheme Switcher: A very simple utility that exposes an icon in the system tray and allows you to quickly change the Power Plan Scheme from there. Developed in C# with VS 2010 (WinForms).
• rcforms: Custom SharePoint forms example.
• Robo Commander: A Lego NXT commander developed under Visual Studio. The console will have basic commands and uses the NXT++ libraries.
• Shared Genomics Project – Workbench Codebase: The Shared Genomics workbench enables a diverse user group of researchers to explore the associations between genetic and other factors in their datasets. It provides a graphical user interface to the analysis functions published in a sister CodePlex project, the MPI Codebase.
• SharePoint 2010 PowerShell v2 reference: A complete PowerShell v2 reference for SharePoint 2010.
• Social PowerFlow: Experiments with PowerWF and the social network – integrating Windows PowerShell and Windows Workflow Foundation with Facebook and Twitter.
• TweetSayings: Twitter sayings.
• XamlQuery/WPF – The Write Less, Do More, WPF Library: A lightweight yet powerful library that enables rapid development in WPF. It simplifies several tasks like page/window traversing; finding controls by name, type, style, property value or position in the control tree; event handling; animating; and much more.
• XmlDSigEx XML Digital Signature Library: An alternative to the SignedXml classes in the .NET Framework, addressing some of the shortcomings of the standard types, particularly in regards to canonicalisation and the enveloped transform. Currently under development.
• Yoi's Online Project: Online storage for Yovi's projects.


  • Pluggable Rules for Entity Framework Code First

    - by Ricardo Peres
Suppose you want a system that lets you plug custom validation rules into your Entity Framework context. The rules would control whether an entity can be saved, updated or deleted, and would be implemented in plain .NET. Yes, I know I already talked about pluggable validation in Entity Framework Code First, but this is a different approach. An example API is in order. First, a ruleset, which will hold the collection of rules:

public interface IRuleset : IDisposable
{
    void AddRule<T>(IRule<T> rule);
    IEnumerable<IRule<T>> GetRules<T>();
}

Next, a rule:

public interface IRule<T>
{
    Boolean CanSave(T entity, DbContext ctx);
    Boolean CanUpdate(T entity, DbContext ctx);
    Boolean CanDelete(T entity, DbContext ctx);
    String Name
    {
        get;
    }
}

Let's analyze what we have, starting with the ruleset:

• It only has methods for adding a rule, specific to an entity type, and for listing all rules of that entity type;
• By implementing IDisposable, we allow it to be cancelled, by disposing of it when we no longer want its rules to be applied.

A rule, on the other hand:

• Has discrete methods for checking whether a given entity can be saved, updated or deleted, which receive as parameters the entity itself and a pointer to the DbContext to which the ruleset was applied;
• Has a Name property to help us identify what failed.

A ruleset really doesn't need a public implementation; all we need is its interface. The private (internal) implementation might look like this:

sealed class Ruleset : IRuleset
{
    private readonly IDictionary<Type, HashSet<Object>> rules = new Dictionary<Type, HashSet<Object>>();
    private ObjectContext octx = null;

    internal Ruleset(ObjectContext octx)
    {
        this.octx = octx;
    }

    public void AddRule<T>(IRule<T> rule)
    {
        if (this.rules.ContainsKey(typeof(T)) == false)
        {
            this.rules[typeof(T)] = new HashSet<Object>();
        }

        this.rules[typeof(T)].Add(rule);
    }

    public IEnumerable<IRule<T>> GetRules<T>()
    {
        if (this.rules.ContainsKey(typeof(T)) == true)
        {
            foreach (IRule<T> rule in this.rules[typeof(T)])
            {
                yield return (rule);
            }
        }
    }

    public void Dispose()
    {
        this.octx.SavingChanges -= RulesExtensions.OnSaving;
        RulesExtensions.rulesets.Remove(this.octx);
        this.octx = null;

        this.rules.Clear();
    }
}

Basically, this implementation:

• Stores the ObjectContext of the DbContext for which it was created, so that later we can remove the association;
• Has a collection – a set, actually, which does not allow duplicates – of rules indexed by the real Type of an entity (because of proxying, an entity may be of a type that inherits from the class that we declared);
• Has generic methods for adding and enumerating rules of a given type;
• Has a Dispose method for cancelling the enforcement of the rules.
A (really dumb) rule applied to Product might look like this – note the check in CanSave: products priced 10,000 or over cannot be saved, so the example further down will trip it:

class ProductRule : IRule<Product>
{
    #region IRule<Product> Members

    public String Name
    {
        get
        {
            return ("Rule 1");
        }
    }

    public Boolean CanSave(Product entity, DbContext ctx)
    {
        //only allow saving products cheaper than 10000
        return (entity.Price < 10000);
    }

    public Boolean CanUpdate(Product entity, DbContext ctx)
    {
        return (true);
    }

    public Boolean CanDelete(Product entity, DbContext ctx)
    {
        return (true);
    }

    #endregion
}

The DbContext parameter is there because we may need to check something else in the database before deciding whether to allow an operation. And here's how to apply this mechanism to any DbContext, without requiring a subclass, by means of an extension method:

public static class RulesExtensions
{
    private static readonly MethodInfo getRulesMethod = typeof(IRuleset).GetMethod("GetRules");
    internal static readonly IDictionary<ObjectContext, Tuple<IRuleset, DbContext>> rulesets = new Dictionary<ObjectContext, Tuple<IRuleset, DbContext>>();

    private static Type GetRealType(Object entity)
    {
        return (entity.GetType().Assembly.IsDynamic == true ? entity.GetType().BaseType : entity.GetType());
    }

    internal static void OnSaving(Object sender, EventArgs e)
    {
        ObjectContext octx = sender as ObjectContext;
        IRuleset ruleset = rulesets[octx].Item1;
        DbContext ctx = rulesets[octx].Item2;

        foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Added))
        {
            Object entity = entry.Entity;
            Type realType = GetRealType(entity);

            foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable))
            {
                if (rule.CanSave(entity, ctx) == false)
                {
                    throw (new Exception(String.Format("Cannot save entity {0} due to rule {1}", entity, rule.Name)));
                }
            }
        }

        foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Deleted))
        {
            Object entity = entry.Entity;
            Type realType = GetRealType(entity);

            foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable))
            {
                if (rule.CanDelete(entity, ctx) == false)
                {
                    throw (new Exception(String.Format("Cannot delete entity {0} due to rule {1}", entity, rule.Name)));
                }
            }
        }

        foreach (ObjectStateEntry entry in octx.ObjectStateManager.GetObjectStateEntries(EntityState.Modified))
        {
            Object entity = entry.Entity;
            Type realType = GetRealType(entity);

            foreach (dynamic rule in (getRulesMethod.MakeGenericMethod(realType).Invoke(ruleset, null) as IEnumerable))
            {
                if (rule.CanUpdate(entity, ctx) == false)
                {
                    throw (new Exception(String.Format("Cannot update entity {0} due to rule {1}", entity, rule.Name)));
                }
            }
        }
    }

    public static IRuleset CreateRuleset(this DbContext context)
    {
        Tuple<IRuleset, DbContext> ruleset = null;
        ObjectContext octx = (context as IObjectContextAdapter).ObjectContext;

        if (rulesets.TryGetValue(octx, out ruleset) == false)
        {
            ruleset = rulesets[octx] = new Tuple<IRuleset, DbContext>(new Ruleset(octx), context);

            octx.SavingChanges += OnSaving;
        }

        return (ruleset.Item1);
    }
}

It relies on the SavingChanges event of the ObjectContext to intercept save operations before they are actually issued.
Yes, it uses a bit of dynamic magic! Very handy, by the way. So, let's put it all together:

using (MyContext ctx = new MyContext())
{
    IRuleset rules = ctx.CreateRuleset();
    rules.AddRule(new ProductRule());

    ctx.Products.Add(new Product() { Name = "xyz", Price = 50000 });

    ctx.SaveChanges(); //an exception is fired here

    //when we no longer need to apply the rules
    rules.Dispose();
}

Feel free to use it and extend it any way you like, and do give me your feedback! As a final note, this can be easily changed to support plain old Entity Framework (not Code First, that is), if that is what you are using.
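As a rough illustration of that final remark, here is a minimal sketch (not from the original post) of the same extension method re-targeted at a plain ObjectContext – assuming the rulesets dictionary is changed to map ObjectContext to IRuleset, and that the rules are changed to receive an ObjectContext instead of a DbContext:

public static IRuleset CreateRuleset(this ObjectContext octx)
{
    IRuleset ruleset = null;

    //same bookkeeping as before, just keyed directly on the ObjectContext
    if (rulesets.TryGetValue(octx, out ruleset) == false)
    {
        ruleset = rulesets[octx] = new Ruleset(octx);
        octx.SavingChanges += OnSaving;
    }

    return (ruleset);
}

The OnSaving handler would need a matching adjustment, since there is no DbContext to pass to the rules in this scenario.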


  • squid3 auth thru samba using ntlm to AD doesn't work

    - by derty
Some users here are spending too much time exploring the WWW, so the big boss wants to get this under control. We use squid3 for some security reasons and for the cache benefits, and now I'm trying to set up a new proxy on a different server (Debian 6). Permissions are defined in AD, and squid3 should get the auth through samba/winbind using the ntlm protocol. But I get "Access Denied" all the time. It only works using LDAP, but that's not the way I need it. Here are some logs and confs.

squid access.log:

1326878095.784 1 192.168.15.27 TCP_DENIED/407 4049 GET http://at.msn.com/? - NONE/- text/html
1326878095.791 1 192.168.15.27 TCP_DENIED/407 4294 GET http://at.msn.com/? - NONE/- text/html
1326878095.803 9 192.168.15.27 TCP_DENIED/403 4028 GET http://at.msn.com/? kavan NONE/- text/html
1326878095.848 0 192.168.15.27 TCP_DENIED/403 3881 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html
1326878100.279 0 192.168.15.27 TCP_DENIED/403 3735 GET http://www.google.at/ kavan NONE/- text/html
1326878100.296 0 192.168.15.27 TCP_DENIED/403 3870 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html
1326878155.700 0 192.168.15.27 TCP_DENIED/407 4072 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html
1326878155.705 2 192.168.15.27 TCP_DENIED/407 4317 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html
1326878155.709 3 192.168.15.27 TCP_DENIED/403 4026 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml kavan NONE/- text/html

squid cache.log:

2012/01/18 10:12:49| Creating Swap Directories
2012/01/18 10:12:49| Starting Squid Cache version 3.1.6 for x86_64-pc-linux-gnu...
2012/01/18 10:12:49| Process ID 17236
2012/01/18 10:12:49| With 65535 file descriptors available
2012/01/18 10:12:49| Initializing IP Cache...
2012/01/18 10:12:49| DNS Socket created at [::], FD 7
2012/01/18 10:12:49| DNS Socket created at 0.0.0.0, FD 8
2012/01/18 10:12:49| Adding nameserver 192.168.15.2 from /etc/resolv.conf
2012/01/18 10:12:49| Adding nameserver 192.168.15.19 from /etc/resolv.conf
2012/01/18 10:12:49| Adding nameserver 192.168.15.1 from /etc/resolv.conf
2012/01/18 10:12:49| Adding domain schoenbrunn.local from /etc/resolv.conf
2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_auth' processes
2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'ntlm_auth' processes
2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'squid_kerb_auth' processes
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_group' processes
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
2012/01/18 10:12:49| Unlinkd pipe opened on FD 73
2012/01/18 10:12:49| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2012/01/18 10:12:49| Store logging disabled
2012/01/18 10:12:49| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2012/01/18 10:12:49| Target number of buckets: 1008
2012/01/18 10:12:49| Using 8192 Store buckets
2012/01/18 10:12:49| Max Mem size: 262144 KB
2012/01/18 10:12:49| Max Swap size: 0 KB
2012/01/18 10:12:49| Using Least Load store dir selection
2012/01/18 10:12:49| Set Current Directory to /var/spool/squid3
2012/01/18 10:12:49| Loaded Icons.
2012/01/18 10:12:49| Accepting HTTP connections at [::]:3128, FD 74.
2012/01/18 10:12:49| HTCP Disabled.
2012/01/18 10:12:49| Squid modules loaded: 0
2012/01/18 10:12:49| Adaptation support is off.
2012/01/18 10:12:49| Ready to serve requests.
2012/01/18 10:12:50| storeLateRelease: released 0 objects

smb.conf:

# Domain Authentication Settings
workgroup = <WORKGROUP>
security = ads
password server = <DOMAINNAME>.LOCAL
realm = <DOMAINNAME>.LOCAL
ldap ssl = no

# logging
log level = 5
max log size = 50
# logs split per machine
log file = /var/log/samba/%m.log
# max 50KB per log file, then rotate
; max log size = 50

# User settings
username map = /etc/samba/smbusers
idmap uid = 10000-20000000
idmap gid = 10000-20000000
idmap backend = ad
; template primary group = <ad group>
template shell = /sbin/nologin

# Winbind Settings
winbind separator = +
winbind enum users = Yes
winbind enum groups = Yes
winbind netsted groups = Yes
winbind nested groups = Yes
winbind cache time = 10
winbind use default domain = Yes

# Other Globals
unix charset = LOCALE
server string = <SERVERNAME>
load printers = no
printing = cups
cups options = raw
; printcap name = /etc/printcap
# obtain list of printers automatically on SystemV
; printcap name = lpstat
; printing = cups

squid.conf:

auth_param ntlm program /usr/bin/ntlm_auth --require-membership-of=<DOMAINNAME>\\INTERNETZ --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b "dc=<dcname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f sAMAccountName=%s -h 192.168.15.19:3268
auth_param basic realm "Proxy Authentifizierung. Bitte geben Sie Ihren Benutzername und Ihr Passwort ein!"
# (German for: "Proxy authentication. Please enter your username and password!")
external_acl_type InetGroup %LOGIN /usr/lib/squid3/squid_ldap_group -R -b "dc=<domainname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,cn=internetz,dc=<domainname>,dc=local))" -h 192.168.15.19:3268
auth_param negotiate program /usr/lib/squid3/squid_kerb_auth -d
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl localnet proxy_auth REQUIRED
acl InetAccess external InetGroup Internetz
http_access allow InetAccess
http_access deny all
acl auth proxy_auth REQUIRED
http_access allow auth

Something very suspicious: after adding the proxy server to the domain I see 2 new entries in AD – one with the original computer name, leopoldine, and one named leopoldine CNF:f8efa4c4-ff0e-4217-939d-f1523b43464d ?!?

I have tried a lot, really... but I'm stuck on this problem. I even reinstalled all dependent programs and reconfigured them from their defaults. The group exists and has me in it. Firefox is running against the old proxy, and I use IE for testing the new one. But I get Access Denied all the time. To be honest, I'm quite a beginner, so please don't be too harsh. I'm interested in improving, and I'll get whatever information we need to fix this, but I started working 2 months ago, got only 1 1/2 years' training, and not a single second of it was in Linux ;)
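For anyone reproducing this: the winbind side of the setup can be sanity-checked from the shell before involving squid at all – a rough sketch (the username is just an example, and I'm assuming the same group restriction as in squid.conf):

# check the machine trust account against the domain
wbinfo -t
# list the domain users and groups winbind can see
wbinfo -u
wbinfo -g
# test the ntlm_auth helper directly, including the group restriction
/usr/bin/ntlm_auth --username=kavan --require-membership-of=<DOMAINNAME>\\INTERNETZ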


  • Can you help me fix my broken packages?

    - by Andreas Hartmann
I would like to upgrade from 13.04 to 13.10, but some broken packages are preventing the upgrade from succeeding.

Output of grep Broken /var/log/dist-upgrade/apt.log:

Broken libwayland-client0:amd64 Conflicts on libwayland0 [ amd64 ] < 1.0.5-0ubuntu1 > ( libs ) (< 1.1.0)
Broken libunity9:amd64 Breaks on unity-common [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 > ( gnome ) (< 7.1.2)
Broken cups-filters:amd64 Conflicts on ghostscript-cups [ amd64 ] < 9.07~dfsg2-0ubuntu3.1 > ( text )
Broken libpam-systemd:amd64 Conflicts on libpam-xdg-support [ amd64 ] < 0.2-0ubuntu2 > ( admin )
Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ amd64 ] < 0.9.13-1 > ( libs )
Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ i386 ] < 0.9.13-1 > ( libs )
Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ amd64 ] < 6.90.2daily13.04.05-0ubuntu1 > ( gnome ) (< 7.0.7)
Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ i386 ] < none > ( none ) (< 7.0.7)
Broken libaccount-plugin-generic-oauth:amd64 Conflicts on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30)
Broken libaccount-plugin-generic-oauth:amd64 Breaks on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30)
Broken libmutter0b:amd64 Breaks on libmutter0a [ amd64 ] < 3.6.3-0ubuntu2 > ( libs )
Broken python3-aptdaemon.pkcompat:amd64 Breaks on libpackagekit-glib2-14 [ amd64 ] < 0.7.6-3ubuntu1 > ( libs ) (<= 0.7.6-4)
Broken apache2:amd64 Conflicts on apache2.2-common [ amd64 ] < 2.2.22-6ubuntu5.1 > ( httpd )
Broken chromium-codecs-ffmpeg-extra:amd64 Conflicts on chromium-codecs-ffmpeg [ amd64 ] < 28.0.1500.71-0ubuntu1.13.04.1 -> 29.0.1547.65-0ubuntu2 > ( universe/web )
Broken unity-scope-home:amd64 Conflicts on unity-lens-shopping [ amd64 ] < 6.8.0daily13.03.04-0ubuntu1 > ( gnome )
Broken libsnmp30:amd64 Breaks on libsnmp15 [ amd64 ] < 5.4.3~dfsg-2.7ubuntu1 > ( libs )
Broken apache2.2-bin:amd64 Breaks on gnome-user-share [ amd64 ] < 3.0.4-0ubuntu1 > ( gnome ) (< 3.8.0-2~)
Broken libgjs0d:amd64 Conflicts on libgjs0c [ amd64 ] < 1.34.0-0ubuntu1 > ( libs )
Broken unity-gtk2-module:amd64 Conflicts on appmenu-gtk [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs )
Broken lib32asound2:amd64 Depends on libasound2 [ amd64 ] < 1.0.25-4ubuntu3.1 -> 1.0.27.2-1ubuntu6 > ( libs ) (= 1.0.25-4ubuntu3.1)
Broken unity-gtk3-module:amd64 Conflicts on appmenu-gtk3 [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs )
Broken activity-log-manager:amd64 Conflicts on activity-log-manager-common [ amd64 ] < 0.9.4-0ubuntu6.2 > ( utils )
Broken libgtksourceview-3.0-0:amd64 Depends on libgtksourceview-3.0-common [ amd64 ] < 3.6.3-0ubuntu1 -> 3.8.2-0ubuntu1 > ( libs ) (< 3.7)
Broken icaclient:amd64 Depends on lib32asound2 [ amd64 ] < 1.0.25-4ubuntu3.1 > ( libs )
Broken libunity-core-6.0-5:amd64 Depends on unity-services [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 -> 7.1.2+13.10.20131014.1-0ubuntu1 > ( gnome ) (= 7.0.0daily13.06.19~13.04-0ubuntu1)
Broken libbamf3-1:amd64 Depends on bamfdaemon [ amd64 ] < 0.4.0daily13.06.19~13.04-0ubuntu1 -> 0.5.1+13.10.20131011-0ubuntu1 > ( libs ) (= 0.4.0daily13.06.19~13.04-0ubuntu1)
Broken apache2-bin:amd64 Conflicts on apache2.2-bin [ amd64 ] < 2.2.22-6ubuntu5.1 -> 2.4.6-2ubuntu2 > ( httpd ) (< 2.3~)

Output of cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list:

# deb cdrom:[Ubuntu 13.04 _Raring Ringtail_ - Release amd64 (20130424)]/ raring main restricted

# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://de.archive.ubuntu.com/ubuntu/ raring main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://de.archive.ubuntu.com/ubuntu/ raring-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://de.archive.ubuntu.com/ubuntu/ raring universe
deb http://de.archive.ubuntu.com/ubuntu/ raring-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://de.archive.ubuntu.com/ubuntu/ raring multiverse
deb http://de.archive.ubuntu.com/ubuntu/ raring-updates multiverse

## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://security.ubuntu.com/ubuntu raring-security main restricted
deb http://security.ubuntu.com/ubuntu raring-security universe
deb http://security.ubuntu.com/ubuntu raring-security multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
deb http://archive.canonical.com/ubuntu raring partner
# deb-src http://archive.canonical.com/ubuntu raring partner

## This software is not part of Ubuntu, but is offered by third-party
## developers who want to ship their latest software.
deb http://extras.ubuntu.com/ubuntu raring main
# deb-src http://extras.ubuntu.com/ubuntu raring main

# deb http://linux.dropbox.com/ubuntu precise main

Output of sudo dpkg -l | grep -e "^iU" -e "^rc":

rc ibm-lotus-cae 8.5.2-20100805.0821 i386 IBM Lotus Composite Application Editor
rc ibm-lotus-cae-nl1 8.5.2-20100805.0821 i386 IBM Lotus CAE NL1
rc ibm-lotus-feedreader 8.5.2-20100805.0821 i386 Feeds for IBM Lotus Notes 8.5.2
rc ibm-lotus-feedreader-nl1 8.5.2-20100805.0821 i386 IBM Lotus Feed Reader NL1
rc ibm-lotus-notes 8.5.2-20100805.0821 i386 IBM Lotus Notes
rc ibm-lotus-notes-core-de 8.5.2-20100805.0821 i386 IBM Lotus Notes Native German (de)
rc ibm-lotus-notes-nl1 8.5.2-20100805.0821 i386 IBM Lotus Notes Java NL1
rc ibm-lotus-sametime 8.5.2-20100805.0821 i386 IBM Lotus Sametime
rc ibm-lotus-symphony 8.5.2-20100805.0821 i386 IBM Lotus Symphony
rc ibm-lotus-symphony-nl1 8.5.2-20100805.0821 i386 IBM Lotus Symphony NL1
rc libapache2-mod-php5filter 5.4.9-4ubuntu2.2 amd64 server-side, HTML-embedded scripting language (apache 2 filter module)
rc libavcodec53:amd64 6:0.8.6-1ubuntu2 amd64 Libav codec library
rc libavutil51:amd64 6:0.8.6-1ubuntu2 amd64 Libav utility library
rc libmotif4:amd64 2.3.3-7ubuntu1 amd64 Open Motif - shared libraries
rc linux-image-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP
rc linux-image-extra-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP


  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

Installing the driver

Info on how to install the driver can be found in an earlier blog post here.

Establishing connections

As you click on the "Add Connection" link in the left pane, you will notice that it's now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated, you need to pick an application from the server. The control is editable, so you can create a new application if you don't want to make changes to an existing one. If you choose a new application name, you will be prompted for confirmation before it gets created. Once you click OK, the connection is created and you can start issuing queries against the remote server. If there's any connectivity error, the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

The context for remote servers

Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you're writing. If you're connecting to a live server, the context will contain the application object itself and all entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon, which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as follows: the first name is the name of the property on the context class, and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.

Remoting is not supported

As you play with the entities exposed by the context, you will notice that you can't read and write directly to/from them. If, for instance, you try to dump the content of an entity, you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server, and dumping its content means reading the events produced by this entity into the local process.

ObservableSource.Dump();

will yield the following error:

Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.
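To illustrate what that message is pointing at, here is a hedged sketch of binding the source to a sink so that everything runs on the server. The sink definition and the names here are assumptions for illustration, not part of the original post:

// define and deploy an observer (sink) on the remote server
var sink = Application.DefineObserver(() => Observer.Create<int>(i => Console.WriteLine(i)))
                      .Deploy("ConsoleSink");

// bind the deployed source to the deployed sink and run the process remotely
var process = ObservableSource.Bind(sink).Run("SampleProcess");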
You can't bring the results to the LinqPad window unless you write code specifically for that.

Compose queries
You may ask – what's the purpose of all that? After all, the same information is present in the Event Flow Debugger, so why bother showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let's say that this code creates an entity:

using (var server = Server.Connect(...))
{
    var a = server.CreateApplication("WhiteFish");
    var src = a
        .DefineObservable<int>(() => Observable.Range(0, 3))
        .Deploy("ObservableSource");
}

If later we want to compose with the source, we have to fetch it and then we can bind something to it:

a.GetObservable<int>("ObservableSource").Bind(...

This means that we had to know a bunch of things about this entity: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it's much easier to make sense of what's available. Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we get a source that produces events, and the type of those events is int. In the particular case of my example I had the source produce a range of integers, and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet; then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

Proxy Types
Here's an interesting scenario – what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:

Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type.
var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
var filter = from i in O1
             where i > threshold
             select i;
filter.Deploy("O2");

You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it can't, a new type will be generated that approximates the type that exists on the server.

Control metadata
In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

Runtime information
So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).

Regards,
The StreamInsight Team

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
Background
One of the clients I work with had been experiencing issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting which don't seem to have been discussed much in the community, so I thought I'd share my findings.

In the scenario we have BizTalk trying to make calls to a .NET web service which was exposed as a WSE 2 endpoint. In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <connectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application. The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of correlating a message from BizTalk to the IIS log and the custom logs in the .NET application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time, I know, but that's a different story!) and to establish that it had timed out on the BizTalk side. Based on the normal 2-minute timeout we knew something unexpected was going on. From here I decided to do some experimentation, and I wanted to start outside of BizTalk, because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

Server-side - Sample Web Service
To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is just a hello-world-style web service, as shown in the below picture. The only key feature is that the server-side web method has a 30-second sleep in it and will trace out some information before and after the thread is set to sleep. In the configuration for this web service there again is nothing special; it's pretty much the plainest, simplest web service you could build.

Client-Side
To begin looking at what was happening with our example I created a number of different ways to consume the web service.

SoapHttpClientProtocol Example
I created a small application which would use a normal generated proxy to call the web service. It would iterate around a loop and make calls using the begin/end methods so I could do this asynchronously. I would do a loop of 20 calls with the connectionManagement configuration section supporting only 5 concurrent connections to the server:

<system.net>
  <connectionManagement>
    <remove address="*"/>
    <add address="*" maxconnection="12"/>
    <add address="http://<ServerName>" maxconnection="5"/>
  </connectionManagement>
</system.net>

The below picture shows an example of the service calling code (a sketch of it is reproduced after this paragraph); the key points are that I have configured a timeout of 40 seconds for the proxy and that I am using the asynchronous methods on the proxy to call the web service.

The Test
I would run the client and execute 21 calls to the web service.
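Since the original screenshot of the calling code isn't reproduced here, a rough sketch of what it describes might look like the following. The proxy class and web method names are assumptions for illustration; any WSDL-generated ASMX proxy exposes this shape:

using System;
using System.Net;

class Client
{
    static void Main()
    {
        // Hypothetical proxy generated from the sample service's WSDL.
        HelloWorldService proxy = new HelloWorldService();
        proxy.Timeout = 40000; // 40-second timeout, expressed in milliseconds

        for (int i = 0; i < 21; i++)
        {
            int callNumber = i;
            // The Begin/End pattern issues all 21 calls asynchronously; the
            // connectionManagement setting then limits how many are actually
            // in flight against the server at any one time.
            proxy.BeginHelloWorld(delegate(IAsyncResult ar)
            {
                try
                {
                    string result = proxy.EndHelloWorld(ar);
                    Console.WriteLine("Call {0} completed: {1}", callNumber, result);
                }
                catch (WebException ex)
                {
                    // An expired timeout surfaces here as a WebException.
                    Console.WriteLine("Call {0} failed: {1}", callNumber, ex.Message);
                }
            }, null);
        }

        Console.ReadLine(); // keep the process alive while the calls complete
    }
}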
The Results
Below is the client-side trace showing what's happening on the client, and below that the web service side trace showing what's happening on the server. Some observations on the results:

All of the calls were successful from the client's perspective. You could see the next call starting on the server as soon as the previous one had completed. Calls took significantly longer than 40 seconds from the start of the call to the return; in fact, call 20 took 2 minutes and 30 seconds to execute from the perspective of my code, even though I had set the timeout to 40 seconds.

WSE 2 Sample
In the second example I used the exact same code to call the web service again, with a single exception: I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3); a two-line sketch of this change follows below. The below picture shows the basic code, and the key points are again that I have configured a timeout of 40 seconds for the proxy and that I am using the asynchronous methods on the proxy to call the web service.

The Test
This test would execute 21 calls from the client to the web service.

The Results
The below traces are from the client side and the server side respectively. Some observations on the trace results for this scenario:

With call 4, if you look at the server-side trace, it did not start executing on the server until a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times but not on most runs, so at this point I'm just putting it down to something unexpected happening on the development machine, and we will leave this observation out of scope of this article. You can see that the client-side trace statement executed almost immediately in all cases. All calls after the initial few would time out. On the client side, the calls that did time out took longer than the 40 seconds we had set as the timeout. You can see that as calls were completing on the server the next calls were starting to come through. The calls that timed out on the client did actually connect to the server, and their server-side execution completed successfully.

Elaboration on the findings
Based on the above observations I have drawn the below sequence diagram to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters. From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy, and the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution. One interesting observation is that if we rerun the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we see a similar pattern where everything after the first few calls times out on the client as soon as it makes a connection to the server, whereas the Soap proxy will happily plug away and process all of the calls without a single timeout. I actually set the sample running overnight and this did happen. At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour, which is right, and why they are different.
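For clarity, the only difference between the two samples is the proxy's base class. As a sketch (the proxy class name is illustrative), the change amounts to:

// ASMX sample: the generated proxy derives from the standard stack.
// public partial class HelloWorldService : System.Web.Services.Protocols.SoapHttpClientProtocol { /* generated members */ }

// WSE 2 sample: same generated code, but the base class is swapped for the WSE 2 stack.
public partial class HelloWorldService : Microsoft.Web.Services2.WebServicesClientProtocol
{
    /* generated members unchanged */
}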
I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to accept that they are different and that they can have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests never reach the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk, where you often have high throughput requirements.

Some of the considerations you should make
Based on this behaviour you should be aware of the following:

In a .NET application making lots of concurrent web service calls in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client defaults to 2 connections to remote servers, so you should bear this in mind. When you are developing a BizTalk solution, or a .NET solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the maxconnection element in the configuration file will not help you. For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.

Our Work Around
In the short term, for our specific scenario, we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client-side wait, and on the odd occasion where we do get a timeout the BizTalk send port retry will handle it. What was causing our original problem was that for that short window we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

Longer Term Solution
As a longer-term solution this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout (see the sketch below).

Summary
I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people, if only to understand this behaviour and possibly to help with some performance issues you may have. I do not believe there is much in the way of documentation around WSE 2 and ASMX in this area, so again, another bit of ammunition for migrating to WCF. I'll try to do a follow-up post with the sample for WCF to show how this changes things.
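To illustrate that finer-grained control, here is a minimal sketch of setting WCF's separate timeout values programmatically; the values are illustrative, not recommendations:

using System;
using System.ServiceModel;

// In WCF, each phase of a call gets its own timeout, rather than the single
// Timeout property exposed by the ASMX and WSE 2 proxies.
var binding = new BasicHttpBinding
{
    OpenTimeout = TimeSpan.FromSeconds(10),    // establishing the channel/connection
    SendTimeout = TimeSpan.FromSeconds(40),    // sending the request and waiting for the reply
    ReceiveTimeout = TimeSpan.FromMinutes(10), // idle time allowed on the receiving side
    CloseTimeout = TimeSpan.FromSeconds(10)    // shutting the channel down
};

The same four values can also be set declaratively on the binding element in the configuration file.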

    Read the article

  • CodePlex Daily Summary for Saturday, October 05, 2013

CodePlex Daily Summary for Saturday, October 05, 2013

Popular Releases

Event-Based Components AppBuilder: AB3.Iteration.53: Iteration 53 (Feature): Allow drag&drop of existing component (flow, step) from component list to chart. Duplicate names are automatically recognized and solved. By the color of the draged component you can see what kind of component (flow or step) is currently draged. New: AddExistingComponentFlow, PartDragDropEventHandler, ExistingStepPreparer

Learning JQuery 1.3 And Above Examples: jQuery Demo whats got Added in 1.8: Getting Started http://jqdemos.darshanmarathe.com Getting JQuery JQuery CDN Google Click Microsoft Click Media Temple Click Creating your first page Show/Hide JavaScript fundamentals JavaScript as a Scripting language JavaScript as a functional programming language JavaScript as a dynamic programming language Selectors CSS Selectors Attribute Selectors Custom Selectors Form Selectors Events Page Load/Ready Events Binding Events Compund Events Tricks and other fundas Effects Inline css Modi...

Pulse: Pulse 0.6.7.3: Pulse is now accepting donations. To donate by Bitcoin or PayPal see https://pulse.codeplex.com/wikipage?title=Donations Lots of updates in v0.6.7.3: (Feature) New option allows you to disable wallpaper changing when a full screen application is running. This way Pulse doesn't slow down/lag your videos and games :) (Fix) Some users were getting Wallbase errors when logging in. This has been fixed. (Feature) Right click a provider and you can now make a copy of it by selecting the "Dupl...

Upida.Net: Upida.Net 2.2: Example "MyClients" fixed and updated Redundant libraries removed in examples References fixed in examples

Compare .NET Objects: Version 1.7.3.0: Fix for problem with enum showing type in the breadcrumb Changed skip of same class from GetHashCode to use ReferenceEquals Applied patch 15082 from Farris

MoreTerra (Terraria World Viewer): MoreTerra 1.11.1: Release 1.11.1 =Bug Fixes= Added more tile blocks (Clouds, crimstone) Added items (binoculars, rope, Pirahna Gun) Added ores (Lead, Tin) Chests now work, I broke them yesterday. =Known Issues= I am having trouble with new background walls. So you will see a red outline for crimson then a pink inside. Same with where I think the queen bee lives.

VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest NEW: Added Menu Option to delete an Forum Account NEW: Added Support for "ImageTeam.org" links FIXED: Fixed Ripping of http://forum.babeunion.com Forums

Beetle.js: Beetle.js v0.9: Beetle.js Beta v0.9

DNN® Form and List: DNN Form and List 06.00.07: DotNetNuke Form and List 06.00.06 Changes to 6.0.7 • Fixed an error in datatypes.config that caused calculated fields to be missing in 6.0.6 Changes to 6.0.6 • Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. • Add new azureCompatible element to manifest. • Added a fix for importing templates.
Changes to 6.0.2 • Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 • Change: Data is now stored in nvarchar(max) instead of ntext C...

Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.10.3): Fix a bug when the first character of a description line is '[' Add search feature

SimpleExcelReportMaker: Serm 0.03: SourceCode and Sample .Net Framework 3.5 AnyCPU compile.

Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.

BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1) SMTP (2) WinAPI (3) Web/CGI [release notes in Japanese, garbled in transfer]

Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. Added -nosize switch to turn off the size- and gzip-calculations done after minification. Removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...

AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated) Version 7.1002. September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Important Update: This release has been updated to fix two issues: Upda...

WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanks

Visual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0

AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features: list of words (mp3 files) is available upon typing when a download path is defined; list of download paths added; paths history; settings added. Bugs fixed: case mismatch in word search field; path-not-exist bug when history has been used; path, when filled from dialog, not stored; autocomplete list not refreshed after path change; sought word deleted when path is changed; word list not refreshed when download ends; word lis...

Wsus Package Publisher: Release v1.3.1309.28: Fix a bug where WPP crashed when running on a computer where Windows was installed in a language other than Fr, En or De and launching the Update Creation Wizard. Fix a bug where WPP crashed if some multi-thread jobs were launched with more than 64 items. Add a button to abort the "Install This Update" wizard. Allow WPP to remember which columns were shown last time. Make URLs clickable on the Update Information tab. Add a new feature: when double-clicking on an update, the default action exec...

Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 puts emphasis on the FilteredStream and on easing how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013.
FilteredStream: Features provided by the Twitter Stream API - ability to track specific keywords - ability to track specific users - ability to track specific locations. Additional features - detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...

New Projects

C# GUI Oscillocope: c# oscilloscope gui programming
Definitive Business Management: Business Management
Enough Connectivity: Enough Connectivity eases the access to Bluetooth and other devices that are connected via the SerialPort on Netduino, .NET Gadgeteer or .NET Micro Framework.
Excel add-in for the Intel Math Kernel Library: Expose Intel MKL functionality in Excel.
Fix SP 2013 Multitenant Missing BCS and SSS Links: This project fixes 3 issues in SP 2013 Enterprise Multitenancy
garagemanagement: Auto Garage Management Software
Host: Yibushanren
IRSystem: it's a development project for user activities
KSP Vessel Viewer: A tiny tool to display craft-file information
LerniXml: Learning XML (+ related technologies)
lkasdjlkaslkdljkasd: a
Personal Accountant: Simple and flexible web service for personal finance accounting.
Quan Ly Nha Hang: Restaurant management (Vietnamese: Quản lý nhà hàng)
TicTacToe Ultimate: Score board services for a Tic-Tac-Toe-like game, but harder and more interesting.
Unconfused Bills .net: Unconfuse your bills with this Google Calendar integrated budgeting tool
Visual Studio 2010 Extensions - Mike Parks & Cory Cissell: Source code to the extensions Cory and I made a few years back.

    Read the article

  • How to use ULS in SharePoint 2010 for Custom Code Exception Logging?

    - by venkatx5
What is ULS in SharePoint 2010?
ULS stands for Unified Logging Service, which captures and writes exceptions/logs to a log file (a plain text file with a .log extension). SharePoint logs each and every exception with ULS. SharePoint administrators should know ULS; it's very useful when anything goes wrong. But ask any SharePoint 2007 administrator to check the log file and most of them will kill you, because reading and understanding the log file is not easy. Imagine opening a 20 MB plain text file in Notepad and going through it line by line. Microsoft developed a tool, "ULS Viewer", to view those log files in an easily readable format. This tool also helps to filter events based on exception priority. You can read this blog to learn about ULS Viewer in detail.

Where to get ULS Viewer?
ULS Viewer was developed by Microsoft and is available to download for free. URL: http://code.msdn.microsoft.com/ULSViewer/Release/ProjectReleases.aspx?ReleaseId=3308 Note: even though this tool was developed by Microsoft, it is not supported by Microsoft. That means you can't get support for this tool from Microsoft, and you use it at your own risk. By the way, what's the risk in viewing log files?!

How to use ULS in SharePoint 2010 Custom Code?
ULS can be extended for use in user solutions to log exceptions. In detail: a developer can use ULS to log his own application's errors and exceptions to the SharePoint log files, so everything is in a single place (that's why it's called "Unified Logging"). In this article I am going to use Waldek's code (reference link). His article covers the core; I am writing a container around it (basically, how to implement the code in detail). Let's see the steps.

Open Visual Studio 2010 -> File -> New Project -> Visual C# -> Windows -> Class Library -> Name: ULSLogger (make sure you've selected .NET Framework 3.5).
In the Solution Explorer panel, rename Class1.cs to LoggingService.cs.
Right-click on References -> Add Reference -> under the .NET tab select "Microsoft.SharePoint".
Right-click on the project -> Properties. Select the "Signing" tab -> check "Sign the assembly".
In the drop-down below, select <New> and enter "ULSLogger"; uncheck the "Protect my key with a password" option.
Now copy the code below and paste it in (or just use it as a reference):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
using System.Runtime.InteropServices;

namespace ULSLogger
{
    public class LoggingService : SPDiagnosticsServiceBase
    {
        public static string vsDiagnosticAreaName = "Venkats SharePoint Logging Service";
        public static string CategoryName = "vsProject";
        public static uint uintEventID = 700; // Event ID

        private static LoggingService _Current;
        public static LoggingService Current
        {
            get
            {
                if (_Current == null)
                {
                    _Current = new LoggingService();
                }
                return _Current;
            }
        }

        private LoggingService()
            : base("Venkats SharePoint Logging Service", SPFarm.Local)
        {
        }

        protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
        {
            List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>
            {
                new SPDiagnosticsArea(vsDiagnosticAreaName, new List<SPDiagnosticsCategory>
                {
                    new SPDiagnosticsCategory(CategoryName, TraceSeverity.Medium, EventSeverity.Error)
                })
            };
            return areas;
        }

        // Logs with a default severity of Unexpected.
        public static string LogErrorInULS(string errorMessage)
        {
            string strExecutionResult = "Message Not Logged in ULS. ";
            try
            {
                SPDiagnosticsCategory category = LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];
                LoggingService.Current.WriteTrace(uintEventID, category, TraceSeverity.Unexpected, errorMessage);
                strExecutionResult = "Message Logged";
            }
            catch (Exception ex)
            {
                strExecutionResult += ex.Message;
            }
            return strExecutionResult;
        }

        // Logs with a caller-supplied severity.
        public static string LogErrorInULS(string errorMessage, TraceSeverity tsSeverity)
        {
            string strExecutionResult = "Message Not Logged in ULS. ";
            try
            {
                SPDiagnosticsCategory category = LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];
                LoggingService.Current.WriteTrace(uintEventID, category, tsSeverity, errorMessage);
                strExecutionResult = "Message Logged";
            }
            catch (Exception ex)
            {
                strExecutionResult += ex.Message;
            }
            return strExecutionResult;
        }
    }
}

Just build the solution and it's ready to use. This ULS solution can be used in SharePoint web parts or in a console application. Let's see how to use it in a console application. SharePoint Server 2010 must be installed on the same server, or the application must be hosted in a SharePoint Server 2010 environment, and the console application must be set to the "x64" platform target.

Create a new console application (Visual Studio -> File -> New Project -> C# -> Windows -> Console Application).
Right-click on References -> Add Reference -> under the .NET tab select "Microsoft.SharePoint".
Open Program.cs and add "using Microsoft.SharePoint.Administration;".
Right-click on References -> Add Reference -> under the "Browse" tab select the "ULSLogger.dll" which we created first (path: ULSLogger\ULSLogger\bin\Debug\).
Right-click on the project -> Properties -> select the "Build" tab -> under "Platform Target" select "x64".
Open Program.cs and paste the code below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SharePoint.Administration;
using ULSLogger;

namespace ULSLoggerClient
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("ULS Logging Started.");

            string strResult = LoggingService.LogErrorInULS("My Application is Working Fine.");
            Console.WriteLine("ULS Logging Info Result : " + strResult);

            strResult = LoggingService.LogErrorInULS("My Application got an Exception.", TraceSeverity.High);
            Console.WriteLine("ULS Logging Warning Result : " + strResult);

            Console.WriteLine("ULS Logging Completed.");
            Console.ReadLine();
        }
    }
}

Just build the solution and execute; it will log the messages to the log file. Make sure you are using a farm administrator user ID. You can play with the message and TraceSeverity as required. Now open ULS Viewer -> File -> Open From -> ULS -> select the first option to open the default ULS log. It's ULS real-time mode and will show all log entries in a readable table format. Right-click on a row and select "Filter By This Item". Select "Event ID" and enter the value "700" that we used in the application. Click OK and now you'll see the exceptions/logs which were logged by our application. If you want to see only high-priority messages, click the icons on the toolbar (other than the red cross); the tooltips tell you what each icon is for.

    Read the article

  • My sound stopped working today, how can I fix it?

    - by Oli
    This seems to be a problem with pulseaudio. I was logged in over VNC on my phone and started playing a video this caused X to crash (as sometimes happens). I restarted and suddenly the sound doesn't work. I have a Intel HDA/Realtek ALC889 00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller alsamixer is detecting this just fine. PulseAudio doesn't detect this alsa device so is using auto_null as the default sink (logs below). When I properly kill PulseAudio (tell it not to auto-start) direct ALSA communication with the sound card works just fine. speaker-test, for example, works. So the hardware and ALSA layers are fine IMO. In the logs, it seems that the card might be "busy" but I really don't know how or why it would be now (and never before). Is there an ALSA lock file somewhere that it still there because of my crash? I just ran sudo fuser /dev/snd/* and saw this: oli@bert:~$ sudo fuser /dev/snd/* /dev/snd/controlC0: 1884 /dev/snd/pcmC0D0c: 1884m /dev/snd/timer: 1884 A look at the process list (ps aux | grep 1884) tells me process 1884 is arecord -c 1 -f S16_LE -r 8000 -t raw. No idea what this is or why it's running. When I try and kill arecord (as root), it just respawns and rebinds on the hardware. I'm in a very annoying situation where I don't know what is going on and don't know how to find out. I'm open to all suggestions to get this working again. Fire away. And here's what I get when I stop PA auto-loading, kill it and then start it with -vvvv. oli@bert:~$ pulseaudio -vvvvv I: main.c: setrlimit(RLIMIT_NICE, (31, 31)) failed: Operation not permitted I: main.c: setrlimit(RLIMIT_RTPRIO, (9, 9)) failed: Operation not permitted D: core-rtclock.c: Timer slack is set to 50 us. D: core-util.c: RealtimeKit worked. I: core-util.c: Successfully gained nice level -11. I: main.c: This is PulseAudio 0.9.21-63-gd3efa-dirty D: main.c: Compilation host: x86_64-pc-linux-gnu D: main.c: Compilation CFLAGS: -g -O2 -g -Wall -O3 -Wall -W -Wextra -pipe -Wno-long-long -Winline -Wvla -Wno-overlength-strings -Wunsafe-loop-optimizations -Wundef -Wformat=2 -Wlogical-op -Wsign-compare -Wformat-security -Wmissing-include-dirs -Wformat-nonliteral -Wold-style-definition -Wpointer-arith -Winit-self -Wdeclaration-after-statement -Wfloat-equal -Wmissing-prototypes -Wstrict-prototypes -Wredundant-decls -Wmissing-declarations -Wmissing-noreturn -Wshadow -Wendif-labels -Wcast-align -Wstrict-aliasing=2 -Wwrite-strings -Wno-unused-parameter -ffast-math -Wp,-D_FORTIFY_SOURCE=2 -fno-common -fdiagnostics-show-option D: main.c: Running on host: Linux x86_64 2.6.38-rc3 #1 SMP Tue Feb 1 10:53:04 GMT 2011 D: main.c: Found 8 CPUs. I: main.c: Page size is 4096 bytes D: main.c: Compiled with Valgrind support: no D: main.c: Running in valgrind mode: no D: main.c: Running in VM: no D: main.c: Optimised build: yes D: main.c: All asserts enabled. I: main.c: Machine ID is 8310740c4729ef474fe5ecec4bbf5a6b. I: main.c: Session ID is 8310740c4729ef474fe5ecec4bbf5a6b-1297338553.571075-1050119523. I: main.c: Using runtime directory /home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-runtime. I: main.c: Using state directory /home/oli/.pulse. I: main.c: Using modules directory /usr/lib/pulse-0.9.21/modules. I: main.c: Running in system mode: no I: main.c: Fresh high-resolution timers available! Enjoy ol' chap! I: cpu-x86.c: CPU flags: CMOV MMX SSE SSE2 SSE3 SSSE3 SSE4_1 SSE4_2 I: svolume_mmx.c: Initialising MMX optimized functions. I: remap_mmx.c: Initialising MMX optimized remappers. 
I: svolume_sse.c: Initialising SSE2 optimized functions. I: remap_sse.c: Initialising SSE2 optimized remappers. I: sconv_sse.c: Initialising SSE2 optimized conversions. D: memblock.c: Using shared memory pool with 1024 slots of size 64.0 KiB each, total size is 64.0 MiB, maximum usable slot size is 65472 D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes.tdb' I: module-device-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes'. I: module.c: Loaded "module-device-restore" (index: #0; argument: ""). D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes.tdb' I: module-stream-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes'. I: module.c: Loaded "module-stream-restore" (index: #1; argument: ""). D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database.tdb' I: module-card-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database'. I: module.c: Loaded "module-card-restore" (index: #2; argument: ""). I: module.c: Loaded "module-augment-properties" (index: #3; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes I: module-udev-detect.c: Found 1 cards. I: module.c: Loaded "module-udev-detect" (index: #4; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-bluetooth-discover.so': success D: dbus-util.c: Successfully connected to D-Bus system bus ba7c9a1f90b3d49d930bca2100000015 as :1.62 D: bluetooth-util.c: dbus: interface=org.freedesktop.DBus, path=/org/freedesktop/DBus, member=NameAcquired D: bluetooth-util.c: Bluetooth daemon is apparently not available. I: module.c: Loaded "module-bluetooth-discover" (index: #5; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-esound-protocol-unix.so': success I: module.c: Loaded "module-esound-protocol-unix" (index: #6; argument: ""). I: module.c: Loaded "module-native-protocol-unix" (index: #7; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-gconf.so': success I: module.c: Loaded "module-gconf" (index: #8; argument: ""). I: module-default-device-restore.c: Saved default sink 'auto_null' not existant, not restoring default sink setting. I: module-default-device-restore.c: Saved default source 'auto_null.monitor' not existant, not restoring default source setting. I: module.c: Loaded "module-default-device-restore" (index: #9; argument: ""). I: module.c: Loaded "module-rescue-streams" (index: #10; argument: ""). D: module-always-sink.c: Autoloading null-sink as no other sinks detected. I: sink.c: Created sink 0 "auto_null" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center I: sink.c: device.description = "Dummy Output" I: sink.c: device.class = "abstract" I: sink.c: device.icon_name = "audio-card" D: core-subscribe.c: Dropped redundant event due to change event. 
I: source.c: Created source 0 "auto_null.monitor" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center I: source.c: device.description = "Monitor of Dummy Output" I: source.c: device.class = "monitor" I: source.c: device.icon_name = "audio-input-microphone" D: module-null-sink.c: Thread starting up I: module.c: Loaded "module-null-sink" (index: #11; argument: "sink_name=auto_null sink_properties='device.description="Dummy Output"'"). I: module.c: Loaded "module-always-sink" (index: #12; argument: ""). I: module.c: Loaded "module-intended-roles" (index: #13; argument: ""). D: module-suspend-on-idle.c: Sink auto_null becomes idle, timeout in 5 seconds. I: module.c: Loaded "module-suspend-on-idle" (index: #14; argument: ""). I: client.c: Created 0 "ConsoleKit Session /org/freedesktop/ConsoleKit/Session1" D: module-console-kit.c: Added new session /org/freedesktop/ConsoleKit/Session1 I: module.c: Loaded "module-console-kit" (index: #15; argument: ""). I: module.c: Loaded "module-position-event-sounds" (index: #16; argument: ""). D: dbus-util.c: Successfully connected to D-Bus session bus efbffc6788fad56cfd64d40c00000018 as :1.182 D: main.c: Got org.pulseaudio.Server! I: main.c: Daemon startup complete. I: client.c: Created 1 "Native client (UNIX socket client)" I: client.c: Created 2 "Native client (UNIX socket client)" D: protocol-native.c: Protocol version: remote 16, local 16 I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1 D: protocol-native.c: SHM possible: yes D: protocol-native.c: Negotiated SHM: yes D: protocol-native.c: Protocol version: remote 16, local 16 I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1 D: protocol-native.c: SHM possible: yes D: protocol-native.c: Negotiated SHM: yes D: module-augment-properties.c: Looking for .desktop file for gnome-volume-control-applet D: module-augment-properties.c: Looking for .desktop file for gnome-settings-daemon D: core-subscribe.c: Dropped redundant event due to change event. I: module-suspend-on-idle.c: Sink auto_null idle for too long, suspending ... D: sink.c: Suspend cause of sink auto_null is 0x0004, suspending Note the one section that seems to find the hardware but says it's busy (no idea if this is relevant). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes I: module-udev-detect.c: Found 1 cards.

    Read the article

  • vsftpd not allowing uploads. 550 response.

    - by Josh
    I've set vsftpd up on a centos box. I keep trying to upload files but I keep getting "550 Failed to change directory" and "550 Could not get file size." Here's my vsftpd.conf # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=YES # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. anon_mkdir_write_enable=YES anon_other_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log #xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=NO # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. 
Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd whith two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES log_ftp_protocol=YES banner_file=/etc/vsftpd/issue local_root=/var/www guest_enable=YES guest_username=ftpusr ftp_username=nobody

    Read the article

  • Windows installation repair option not showing up

    - by Carl
    I'm trying to repair an existing Windows XP installation. Following the instructions from http://www.microsoft.com/windowsxp/using/helpandsupport/learnmore/tips/doug92.mspx this should work: When the Press any key to boot from CD message is displayed on your screen, press a key to start your computer from the Windows XP CD. Press ENTER when you see the message To setup Windows XP now, and then press ENTER displayed on the Welcome to Setup screen. Do not choose the option to press R to use the Recovery Console. In the Windows XP Licensing Agreement, press F8 to agree to the license agreement. Make sure that your current installation of Windows XP is selected in the box, and then press R to repair Windows XP. Follow the instructions on the screen to complete Setup. On step 5 pressing R does nothing and there is nothing on the screen saying it would. When I just select to install I get a message that a previous installation is there and proceeding will destroy it and installed applications, I can optionally select a directory other than c:\windows, and I can optionally format before continuing. I had tried to go from SP2-SP3. It failed, and then I couldn't get to Safe Mode. I put the SP1 disk back in to do a repair, and I don't see that option. (I don't have an SP2 boot/install disk, I just have the non-boot upgrade package.) UPDATE: Upon loading the Recovery Console, I get a message saying The system registry does not appear to have an active ControlSet key. The system registry may be damaged. You can try restarting it with the Last Known Good configuration or you can try repairing the installation of Windows using the setup program's repair and recovery options. I then did bootcfg /scan - "successful" ... Total installs: 1 ... [1] c:\windows - with the c:\windows command prompt below it. bootcfg /list gives [1] Windows XP Pro; OS Load Options /noexecute=optin /fastdetect; OS Location: c:\windows I followed the instructions at http://michaelstevenstech.com/XPrepairinstall.htm - "Warning 2" link copy E:\i386\ntldr C:\ copy E:\i386\ntdetect.com C:\ attrib -h -r -s C:\boot.ini del C:\boot.ini BootCfg /Rebuild I added /fastdetect when it asked for options. I re-ran Windows setup - no change - no repair option. UPDATE: I followed the procedure at http://support.microsoft.com/default.aspx?scid=kb;en-us;307545 I rebooted. I now get a quick message on bootup to select the boot - 1: [blank] ; Windows XP Professional ; Windows Recover Console. The "1: " is new. The rest is the way it was when all was okay. Selecting 1: and the next one gives the same result - I get to a login icon, and then it asks for a password, with the blinking cursor, but I can't type anything. I reboot with the Windows CD. Now I see a repair option for installation "1: " I selected R on that, and it did "Setup is copying files..." and rebooted when it was done. Then it booted, and I got a window saying "Setup will complete in approximately 39 minutes." That's where I am now. I wasn't expecting this last part - I did a repair several months ago and I don't recall that. UPDATE: Booted up. Asked if I wanted to register Windows online. All my icons are there, and the old desktop documents. Good. All the applications I tried from the Start Menu work (tested a few), except Corel Photopaint - I get registry entry not found errors. Windows ran for a while, then froze. The mouse and keyboard don't work. Pressing the power button got Windows to shut down. 
I probably need to put SP2 on it, and then all the updates for my laptop for XP Pro SP2 (drivers); there's a bunch. The mouse and keyboard quit working again; that wasn't a problem when I first set up this laptop. I've run it 4 times now: two mouse/keyboard hangs came from pressing Ctrl-C (to copy text from a Notepad document), and two from selecting Start-Run (I wasn't able to type anything in the box).

    Read the article

  • Tracing Silex from PHP to the OS with DTrace

    - by cj
In this blog post I show the full stack tracing of Brendan Gregg's php_syscolors.d script in the DTrace Toolkit. The Toolkit contains a dozen very useful PHP DTrace scripts and many more scripts for other languages and the OS. For this example, I'll trace the PHP micro framework Silex, which was the topic of the second of two talks by Dustin Whittle at a recent SF PHP Meetup. His slides are at Silex: From Micro to Full Stack.

Installing DTrace and PHP
The php_syscolors.d script uses some static PHP probes and some kernel probes. For Oracle Linux I discussed installing DTrace and PHP in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. On other platforms with DTrace support, follow your standard procedures to enable DTrace and load the correct providers. The sdt and systrace providers are required in addition to fasttrap. On Oracle Linux, I loaded the DTrace modules like:

# modprobe fasttrap
# modprobe sdt
# modprobe systrace
# chmod 666 /dev/dtrace/helper

Installing the DTrace Toolkit
I downloaded DTraceToolkit-0.99.tar.gz and extracted it:

$ tar -zxf DTraceToolkit-0.99.tar.gz

The PHP scripts are in the Php directory and examples in the Examples directory.

Installing Silex
I downloaded the "fat" Silex .tgz file from the download page and extracted it:

$ tar -zxf silex_fat.tgz

I changed the demonstration silex/web/index.php so I could use the PHP development web server:

<?php
// web/index.php
$filename = __DIR__.preg_replace('#(\?.*)$#', '', $_SERVER['REQUEST_URI']);
if (php_sapi_name() === 'cli-server' && is_file($filename)) {
    return false;
}
require_once __DIR__.'/../vendor/autoload.php';
$app = new Silex\Application();
//$app['debug'] = true;
$app->get('/hello', function() {
    return 'Hello!';
});
$app->run();
?>

Running DTrace
The php_syscolors.d script uses the -Z option to dtrace, so it can be started before PHP, i.e. when there are zero of the requested probes available to be traced. I ran DTrace like:

# cd DTraceToolkit-0.99/Php
# ./php_syscolors.d

Next, I started the PHP development web server in a second terminal:

$ cd silex
$ php -S localhost:8080 -t web web/index.php

At this point, the web server is idle, waiting for requests. DTrace is also idle, waiting for the probes in php_syscolors.d to fire, at which time the action associated with each probe will run. I then loaded the demonstration page in a browser: http://localhost:8080/hello

When the request was fulfilled and the simple output of "Hello" was displayed, I ^C'd php and dtrace in their terminals to stop them. Over a thousand lines of DTrace output had been generated. Here is one snippet from when run() was invoked:

C PID/TID DELTA(us) FILE:LINE TYPE -- NAME
...
1 4765/4765    21 Application.php:487  func  -> run
1 4765/4765    29 ClassLoader.php:182  func  -> loadClass
1 4765/4765    17 ClassLoader.php:198  func  -> findFile
1 4765/4765    31 ":-                  syscall -> access
1 4765/4765    26 ":-                  syscall <- access
1 4765/4765    16 ClassLoader.php:198  func  <- findFile
1 4765/4765    25 ":-                  syscall -> newlstat
1 4765/4765    15 ":-                  syscall <- newlstat
1 4765/4765    13 ":-                  syscall -> newlstat
1 4765/4765    13 ":-                  syscall <- newlstat
1 4765/4765    22 ":-                  syscall -> newlstat
1 4765/4765    14 ":-                  syscall <- newlstat
1 4765/4765    15 ":-                  syscall -> newlstat
1 4765/4765    60 ":-                  syscall <- newlstat
1 4765/4765    13 ":-                  syscall -> newlstat
1 4765/4765    13 ":-                  syscall <- newlstat
1 4765/4765    20 ":-                  syscall -> open
1 4765/4765    16 ":-                  syscall <- open
1 4765/4765    26 ":-                  syscall -> newfstat
1 4765/4765    12 ":-                  syscall <- newfstat
1 4765/4765    17 ":-                  syscall -> newfstat
1 4765/4765    12 ":-                  syscall <- newfstat
1 4765/4765    12 ":-                  syscall -> newfstat
1 4765/4765    12 ":-                  syscall <- newfstat
1 4765/4765    20 ":-                  syscall -> mmap
1 4765/4765    14 ":-                  syscall <- mmap
1 4765/4765  3201 ":-                  syscall -> mmap
1 4765/4765    27 ":-                  syscall <- mmap
1 4765/4765  1233 ":-                  syscall -> munmap
1 4765/4765    53 ":-                  syscall <- munmap
1 4765/4765    15 ":-                  syscall -> close
1 4765/4765    13 ":-                  syscall <- close
1 4765/4765    34 Request.php:32       func  -> main
1 4765/4765    22 Request.php:32       func  <- main
1 4765/4765    31 ClassLoader.php:182  func  <- loadClass
1 4765/4765    33 Request.php:249      func  -> createFromGlobals
1 4765/4765    29 Request.php:198      func  -> __construct
1 4765/4765    24 Request.php:218      func  -> initialize
1 4765/4765    26 ClassLoader.php:182  func  -> loadClass
1 4765/4765    89 ClassLoader.php:198  func  -> findFile
1 4765/4765    43 ":-                  syscall -> access
...

The output shows PHP functions being called and returning (and where they are located) and which system calls the PHP functions in turn invoked. The time each line took from the previous one is displayed in the third column. The first column is the CPU number. In this example, the process was always on CPU 1, so the output is naturally ordered without requiring post-processing or modifying the D script to display a time stamp. On a terminal, the output of php_syscolors.d is color-coded according to whether each function is a PHP or system one, hence the file name.

Summary
With one tool, I was able to trace the interaction of a user application with the operating system, and I was able to do this to an application running "live" in a web context. The DTrace Toolkit provides a very handy repository of DTrace information. Even though the PHP scripts were created in the time frame of the original PHP DTrace PECL extension, which only had PHP function entry and return probes, the scripts provide core examples for custom investigation and resolution scripts. You can easily adapt the ideas and create scripts using the other PHP static probes, which are listed in the PHP Manual. Because DTrace is "always on", you can take advantage of it to resolve development questions or fix production situations.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details; I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically into the bin or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to the Helvetica font to look more ‘Bauhaus’.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible. The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by a young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to hack into one’s own source control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipe dream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?
Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

  • How do I prevent missing network from slowing down boot-up?

    - by Ravi S Ghosh
    I have been having rather slow boot on Ubuntu 12.04. Lately, I tried to figure out the reason and it seems to be the network connection which does not get connected and requires multiple attempts. Here is part of dmesg [ 2.174349] EXT4-fs (sda2): INFO: recovery required on readonly filesystem [ 2.174352] EXT4-fs (sda2): write access will be enabled during recovery [ 2.308172] firewire_core: created device fw0: GUID 384fc00005198d58, S400 [ 2.333457] usb 7-1.2: new low-speed USB device number 3 using uhci_hcd [ 2.465896] EXT4-fs (sda2): recovery complete [ 2.466406] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null) [ 2.589440] usb 7-1.3: new low-speed USB device number 4 using uhci_hcd **[ 18.292029] ADDRCONF(NETDEV_UP): eth0: link is not ready** [ 18.458958] udevd[377]: starting version 175 [ 18.639482] Adding 4200960k swap on /dev/sda5. Priority:-1 extents:1 across:4200960k [ 19.314127] wmi: Mapper loaded [ 19.426602] r592 0000:09:01.2: PCI INT B -> GSI 18 (level, low) -> IRQ 18 [ 19.426739] r592: driver successfully loaded [ 19.460105] input: Dell WMI hotkeys as /devices/virtual/input/input5 [ 19.493629] lp: driver loaded but no devices found [ 19.497012] cfg80211: Calling CRDA to update world regulatory domain [ 19.535523] ACPI Warning: _BQC returned an invalid level (20110623/video-480) [ 19.539457] acpi device:03: registered as cooling_device2 [ 19.539520] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/device:01/LNXVIDEO:00/input/input6 [ 19.539568] ACPI: Video Device [M86] (multi-head: yes rom: no post: no) [ 19.578060] Linux video capture interface: v2.00 [ 19.667708] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [ 19.763171] r852 0000:09:01.3: PCI INT B -> GSI 18 (level, low) -> IRQ 18 [ 19.763258] r852: driver loaded successfully [ 19.854769] input: Microsoft Comfort Curve Keyboard 2000 as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.2/7-1.2:1.0/input/input7 [ 19.854864] generic-usb 0003:045E:00DD.0001: input,hidraw0: USB HID v1.11 Keyboard [Microsoft Comfort Curve Keyboard 2000] on usb-0000:00:1d.1-1.2/input0 [ 19.878605] input: Microsoft Comfort Curve Keyboard 2000 as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.2/7-1.2:1.1/input/input8 [ 19.878698] generic-usb 0003:045E:00DD.0002: input,hidraw1: USB HID v1.11 Device [Microsoft Comfort Curve Keyboard 2000] on usb-0000:00:1d.1-1.2/input1 [ 19.902779] input: DELL DELL USB Laser Mouse as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.3/7-1.3:1.0/input/input9 [ 19.925034] generic-usb 0003:046D:C063.0003: input,hidraw2: USB HID v1.10 Mouse [DELL DELL USB Laser Mouse] on usb-0000:00:1d.1-1.3/input0 [ 19.925057] usbcore: registered new interface driver usbhid [ 19.925059] usbhid: USB HID core driver [ 19.942362] uvcvideo: Found UVC 1.00 device Laptop_Integrated_Webcam_2M (0c45:63ea) [ 19.947004] input: Laptop_Integrated_Webcam_2M as /devices/pci0000:00/0000:00:1a.7/usb1/1-6/1-6:1.0/input/input10 [ 19.947075] usbcore: registered new interface driver uvcvideo [ 19.947077] USB Video Class driver (1.1.1) [ 20.145232] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree: [ 20.145235] Copyright(c) 2003-2011 Intel Corporation [ 20.145327] iwlwifi 0000:04:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 20.145357] iwlwifi 0000:04:00.0: setting latency timer to 64 [ 20.145402] iwlwifi 0000:04:00.0: pci_resource_len = 0x00002000 [ 20.145404] iwlwifi 0000:04:00.0: pci_resource_base = ffffc90000674000 [ 20.145407] iwlwifi 0000:04:00.0: HW Revision ID = 0x0 [ 20.145531] 
iwlwifi 0000:04:00.0: irq 46 for MSI/MSI-X [ 20.145613] iwlwifi 0000:04:00.0: Detected Intel(R) WiFi Link 5100 AGN, REV=0x54 [ 20.145720] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S [ 20.167535] iwlwifi 0000:04:00.0: device EEPROM VER=0x11f, CALIB=0x4 [ 20.167538] iwlwifi 0000:04:00.0: Device SKU: 0Xf0 [ 20.167567] iwlwifi 0000:04:00.0: Tunable channels: 13 802.11bg, 24 802.11a channels [ 20.172779] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [ 20.172783] Disabling lock debugging due to kernel taint [ 20.250115] [fglrx] Maximum main memory to use for locked dma buffers: 3759 MBytes. [ 20.250567] [fglrx] vendor: 1002 device: 9553 count: 1 [ 20.251256] [fglrx] ioport: bar 1, base 0x2000, size: 0x100 [ 20.251271] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 20.251277] pci 0000:01:00.0: setting latency timer to 64 [ 20.251559] [fglrx] Kernel PAT support is enabled [ 20.251578] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [ 20.310385] iwlwifi 0000:04:00.0: loaded firmware version 8.83.5.1 build 33692 [ 20.310598] Registered led device: phy0-led [ 20.310628] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain [ 20.372306] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs' [ 20.411015] psmouse serio1: synaptics: Touchpad model: 1, fw: 7.2, id: 0x1c0b1, caps: 0xd04733/0xa40000/0xa0000 [ 20.454232] input: SynPS/2 Synaptics TouchPad as /devices/platform/i8042/serio1/input/input11 [ 20.545636] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain [ 20.545640] cfg80211: World regulatory domain updated: [ 20.545642] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 20.545644] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.545647] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 20.545649] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 20.545652] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.545654] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.609484] type=1400 audit(1340502633.160:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=693 comm="apparmor_parser" [ 20.609494] type=1400 audit(1340502633.160:3): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=642 comm="apparmor_parser" [ 20.609843] type=1400 audit(1340502633.160:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=693 comm="apparmor_parser" [ 20.609852] type=1400 audit(1340502633.160:5): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=642 comm="apparmor_parser" [ 20.610047] type=1400 audit(1340502633.160:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=693 comm="apparmor_parser" [ 20.610060] type=1400 audit(1340502633.160:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=642 comm="apparmor_parser" [ 20.610476] type=1400 audit(1340502633.160:8): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=814 comm="apparmor_parser" [ 20.610829] type=1400 audit(1340502633.160:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=814 comm="apparmor_parser" [ 
20.611035] type=1400 audit(1340502633.160:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=814 comm="apparmor_parser" [ 20.661912] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22 [ 20.661982] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [ 20.662013] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 20.770289] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12 [ 20.770689] snd_hda_intel 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 20.770786] snd_hda_intel 0000:01:00.1: irq 48 for MSI/MSI-X [ 20.770815] snd_hda_intel 0000:01:00.1: setting latency timer to 64 [ 20.994040] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0 [ 20.994189] input: HDA ATI HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input13 [ 21.554799] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [ 21.554802] vesafb: scrolling: redraw [ 21.554804] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [ 21.557342] vesafb: framebuffer at 0xd0000000, mapped to 0xffffc90011800000, using 3072k, total 3072k [ 21.557498] Console: switching to colour frame buffer device 128x48 [ 21.557516] fb0: VESA VGA frame buffer device [ 21.987338] EXT4-fs (sda2): re-mounted. Opts: errors=remount-ro [ 22.184693] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [ 27.362440] iwlwifi 0000:04:00.0: RF_KILL bit toggled to disable radio. [ 27.436988] init: failsafe main process (986) killed by TERM signal [ 27.970112] ppdev: user-space parallel port driver [ 28.198917] Bluetooth: Core ver 2.16 [ 28.198935] NET: Registered protocol family 31 [ 28.198937] Bluetooth: HCI device and connection manager initialized [ 28.198940] Bluetooth: HCI socket layer initialized [ 28.198941] Bluetooth: L2CAP socket layer initialized [ 28.198947] Bluetooth: SCO socket layer initialized [ 28.226135] Bluetooth: RFCOMM TTY layer initialized [ 28.226141] Bluetooth: RFCOMM socket layer initialized [ 28.226143] Bluetooth: RFCOMM ver 1.11 [ 28.445620] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 28.445623] Bluetooth: BNEP filters: protocol multicast [ 28.524578] type=1400 audit(1340502641.076:11): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=1052 comm="apparmor_parser" [ 28.525018] type=1400 audit(1340502641.076:12): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=1052 comm="apparmor_parser" [ 28.629957] type=1400 audit(1340502641.180:13): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=1105 comm="apparmor_parser" [ 28.630325] type=1400 audit(1340502641.180:14): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1105 comm="apparmor_parser" [ 28.630535] type=1400 audit(1340502641.180:15): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=1105 comm="apparmor_parser" [ 28.645266] type=1400 audit(1340502641.196:16): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1104 comm="apparmor_parser" **[ 28.751922] ADDRCONF(NETDEV_UP): wlan0: link is not ready** [ 28.753653] tg3 0000:08:00.0: irq 49 for MSI/MSI-X **[ 28.856127] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 28.857034] ADDRCONF(NETDEV_UP): eth0: link is not ready** [ 28.871080] type=1400 audit(1340502641.420:17): apparmor="STATUS" operation="profile_load" 
name="/usr/lib/telepathy/mission-control-5" pid=1108 comm="apparmor_parser" [ 28.871519] type=1400 audit(1340502641.420:18): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1108 comm="apparmor_parser" [ 28.874905] type=1400 audit(1340502641.424:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1113 comm="apparmor_parser" [ 28.875354] type=1400 audit(1340502641.424:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1113 comm="apparmor_parser" [ 30.477976] tg3 0000:08:00.0: eth0: Link is up at 100 Mbps, full duplex [ 30.477979] tg3 0000:08:00.0: eth0: Flow control is on for TX and on for RX **[ 30.478390] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready** [ 31.110269] fglrx_pci 0000:01:00.0: irq 50 for MSI/MSI-X [ 31.110859] [fglrx] Firegl kernel thread PID: 1327 [ 31.111021] [fglrx] Firegl kernel thread PID: 1329 [ 31.111408] [fglrx] Firegl kernel thread PID: 1330 [ 31.111543] [fglrx] IRQ 50 Enabled [ 31.712938] [fglrx] Gart USWC size:1224 M. [ 31.712941] [fglrx] Gart cacheable size:486 M. [ 31.712945] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [ 31.712948] [fglrx] Reserved FB block: Unshared offset:fc2b000, size:3d5000 [ 31.712950] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [ 41.312020] eth0: no IPv6 routers present As you can see I get multiple instances of [ 28.856127] ADDRCONF(NETDEV_UP): eth0: link is not ready and then finally it becomes read and I get the message [ 30.478390] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready. I searched askubuntun, ubuntuforum, and the web but couldn't find a solution. Any help would be very much appreciated. Here is the bootchart

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
    A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month. Earlier in the year, I (well, LobsterPot Solutions, although I’d been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started. With no errors anywhere – not in the Windows Event Log, the SQL Agent logs, not anywhere. We’d managed to get the system working again, but didn’t have a good explanation of what had happened, and last week, the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn’t end up touching my computer the whole time – just tapping on my phone via Twitter and Live Msgr. You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We’d done that last time, and it had started to work again – eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we’d been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared. So I got a message saying that a replication problem had occurred again. Reinitialising wasn’t going to be an option this time either. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine, just this one in a different domain had the problem. Part of the problem seemed to be a log file that wasn’t being backed up properly. They’d been trying to back up to a backup device that had a corruption, and the log file was growing. Turned out, this wasn’t related to the problem, but of course, any time you’re troubleshooting and you see something untoward, you wonder. Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might’ve expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn’t a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd -E -S servername command I recommended, it failed with a Named Pipes error. I’ve seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. Turned out, he had a typo in the servername of the sqlcmd command. That particular red herring must’ve been reflected in his cheeks as he told me. During the time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger (@onpnt)’s name came up. Ted (and thanks again, Ted – really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggested bumping up the logging to level 4 (2 is Verbose, 4 is undocumented ridiculousness).
I’d just told the client to push the logging up to level 2, but the log file wasn’t appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors. Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either – it was all working fine. Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005. Around this time, the client tried an idea that I’d shown him a few years ago – using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2, when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn’t need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?), showed that the problem from six months earlier could well have been down to this too. Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it’s been running happily since. The funny thing is that I didn’t solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I’d asked Ted too, and although he’d given some useful information, nothing that he’d come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don’t end up getting the help from the person you asked – the sounding board is actually what you need. @rob_farley
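If you want to check whether a publication is exposed to the same KB2539378 issue before it bites, one option (my sketch, not something from the post) is to list the stored procedures whose definitions exceed 4000 characters, using the standard sys.procedures and sys.sql_modules catalog views. The connection string is a placeholder:

    using System;
    using System.Data.SqlClient;

    class LongProcFinder
    {
        static void Main()
        {
            // Placeholder: point this at the publication database.
            const string connStr = "Server=.;Database=MyPublishedDb;Integrated Security=true";
            const string sql = @"
                SELECT p.name, CAST(LEN(m.definition) AS int) AS DefinitionLength
                FROM sys.procedures AS p
                JOIN sys.sql_modules AS m ON m.object_id = p.object_id
                WHERE LEN(m.definition) > 4000
                ORDER BY DefinitionLength DESC;";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1} characters", reader.GetString(0), reader.GetInt32(1));
                }
            }
        }
    }

Any procedure it lists is a candidate for tripping the bug described above when its definition changes on a SQL 2008 R2 (pre-SP2) publisher.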

    Read the article

  • Wireless internet connection connects but internet does not work (no packets received). Wired does.

    - by Rodney
    When I connect my PC via ethernet cable to my ADSL router it works fine. When I connect via wireless it connects, and the internet will work for a random amount of time and then stop working. It stays connected with a strong signal but no packets are received. My laptop/iPhone are right next to it and wireless works fine. If I open the wireless USB status, it says it is connected to my SSID with full strength (54 Mbps - I am 3 metres away from my router) and the activity shows as Packets 594 SENT and 105 RECEIVED (this goes up VERY slowly). I have tried the following: Turned off antivirus and firewall completely. Tested the wifi signal - I am writing this on my laptop which is next to my PC and also has full wifi strength. Tried a different wireless adapter - I dug out an old PCI wireless card - it does the exact same thing. Compared all wireless settings to my laptop. I can ping google.com and it replies (sometimes with packet loss). When I reboot the PC it will connect for a minute or two (random time) and then just stop again. I tried Firefox, IE, etc. - no joy. I have updated all latest versions (Netgear WG111v2) and drivers. Checked Event Log - nothing unusual. Ping the router (and even connect as admin for the few minutes when the internet does work). Changed the MTU down to 1200 using DrTCP. Checked Device Manager for conflicts - none. I ping the router from the PC (192.168.0.10 - 192.168.0.1) and it replies with 4 packets. BUT, on my router admin page (which I access via http on my laptop wirelessly) - if I ping 192.168.0.10 all packets time out (pinging my laptop 192.168.0.12 works fine). My router admin page shows the leased IP address for 192.168.0.10 (ie it is definitely talking to the router initially). Now I am out of ideas - please help. I think it is an OS/software issue, as I have tried 2 different wireless adapters (PCI and USB) with the same result, but all other wireless devices around mine work fine. It's not the firewall. It is getting assigned an IP address correctly (my PC gets 192.168.0.10, my laptop is .12). It is assigned by DHCP. As soon as I plug in the ethernet cable it all works fine. Repairing the adapter sometimes helps but it will always stop working after a random time. The wireless adapter always shows as connected with Excellent signal but the internet does not work. I am running Windows XP SP3 and have tried a Netgear WG111v2 USB adapter. Thanks in advance! UPDATE: The internet seems to be working; it is just either sending packets too small or too slowly to work (some small pages load bits of them very slowly but then hang).
XP seems to have a networking diagnostic app - here is the output: Last diagnostic run time: 08/30/10 08:16:38 IP Configuration Diagnostic Invalid IP address info Valid IP address detected: 192.168.0.10 IP Layer Diagnostic Corrupted IP routing table info The default route is valid info The loopback route is valid info The local host route is valid info The local subnet route is valid Invalid ARP cache entries action The ARP cache has been flushed Gateway Diagnostic Gateway info The following proxy configuration is being used by IE: Automatically Detect Settings:Disabled Automatic Configuration Script: Proxy Server: Proxy Bypass list: info This computer has the following default gateway entry(ies): 192.168.0.1 info This computer has the following IP address(es): 192.168.0.10 info The default gateway is in the same subnet as this computer info The default gateway entry is a valid unicast address info The default gateway address was resolved via ARP in 1 try(ies) info The default gateway was reached via ICMP Ping in 1 try(ies) info TCP port 80 on host 65.55.12.249 was successfully reached info The Internet host www.microsoft.com was successfully reached info The default gateway is OK DNS Client Diagnostic DNS - Not a home user scenario info Using Web Proxy: no info Resolving name ok for (www.microsoft.com): yes No DNS servers DNS failure HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Passive): Successfully connected to ftp.microsoft.com. info HTTP: Successfully connected to www.microsoft.com. warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out error Could not make an HTTPS connection. info Redirecting user to support call WinSock Diagnostic WinSock status info All base service provider entries are present in the Winsock catalog. info The Winsock Service provider chains are valid. info Provider entry MSAFD Tcpip [TCP/IP] passed the loopback communication test. info Provider entry MSAFD Tcpip [UDP/IP] passed the loopback communication test. info Provider entry RSVP UDP Service Provider passed the loopback communication test. info Provider entry RSVP TCP Service Provider passed the loopback communication test. info Connectivity is valid for all Winsock service providers. Wireless Diagnostic Wireless - Service disabled Wireless - User SSID action User input required: Specify network name or SSID Wireless - First time setup info The Wireless Network name (SSID) to which the user would like to connect = RodSof Wifi. 
Wireless - Radio off info Valid IP address detected: 192.168.0.10 Wireless - Out of range Wireless - Hardware issue Wireless - Novice user Wireless - Ad-hoc network Wireless - Less preferred Wireless - 802.1x enabled Wireless - Configuration mismatch Wireless - Low SNR Network Adapter Diagnostic Network location detection info Using home Internet connection Network adapter identification info Network connection: Name=Local Area Connection 2, Device=Realtek RTL8168C(P)/8111C(P) PCI-E Gigabit Ethernet NIC, MediaType=LAN, SubMediaType=LAN info Network connection: Name=Wireless USB, Device=NETGEAR WG111v2 54Mbps Wireless USB 2.0 Adapter, MediaType=LAN, SubMediaType=WIRELESS info Both Ethernet and Wireless connections available, prompting user for selection action User input required: Select network connection info Wireless connection selected Network adapter status info Network connection status: Connected HTTP, HTTPS, FTP Diagnostic HTTP, HTTPS, FTP connectivity info FTP (Active): Successfully connected to ftp.microsoft.com. warn HTTP: Error 12007 connecting to www.microsoft.com: The server name or address could not be resolved warn HTTP: Error 12002 connecting to www.hotmail.com: The operation timed out warn HTTPS: Error 12002 connecting to www.passport.net: The operation timed out warn HTTPS: Error 12002 connecting to www.microsoft.com: The operation timed out error Could not make an HTTP connection. error Could not make an HTTPS connection.
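Given that the adapter stays "connected" while packets quietly stop, it can help to timestamp exactly when replies stop coming back. This is a small diagnostic sketch of my own (not from the question), runnable on the affected XP machine with .NET 2.0 or later; the router address 192.168.0.1 comes from the question:

    using System;
    using System.Net.NetworkInformation;
    using System.Threading;

    class WifiDropLogger
    {
        static void Main()
        {
            var ping = new Ping();
            while (true)
            {
                try
                {
                    // 1000 ms timeout against the router.
                    PingReply reply = ping.Send("192.168.0.1", 1000);
                    Console.WriteLine("{0:HH:mm:ss}  {1}  {2} ms",
                        DateTime.Now, reply.Status, reply.RoundtripTime);
                }
                catch (PingException ex)
                {
                    Console.WriteLine("{0:HH:mm:ss}  ping failed: {1}", DateTime.Now, ex.Message);
                }
                Thread.Sleep(2000);
            }
        }
    }

Correlating the first TimedOut entries with anything in the router log or Event Viewer would at least narrow down whether the drop is time-based or traffic-based.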

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .NET web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added on a slew of great (and sometimes overdue) features. Where it was once just an endpoint to host a solution, developers today have tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have, or are in the process of, migrating existing applications out of their own data centers into the Azure cloud. Whether new application development or migrating legacy, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options.

Azure Diagnostics

Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high level steps are:

Step 1: Import the Diagnostics
Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy.

Step 2: Configure the Diagnostics (and multiple sub-steps)
Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter, I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume it myself?

Step 3: (Optional) Permanently store diagnostic data
Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well.

Step 4: (Optional) View stored diagnostic data
Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based and they all just give you access to raw data, and very little charting or real-time intelligence. Just… data. Never mind that one product seems to have gotten stale since a recent acquisition, and doesn’t even have screenshots!

So, let’s summarize: lots of diagnostics data is available, but think realistically. Think Dev Ops. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is this spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold, your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn’t tell you anything about where the problem is.

The Better Way – Stackify

Stackify supports application and server monitoring in real time, all through a great web interface.
All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises. It’s a Windows Server (or Linux) in the cloud. It’s behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps.

Step 1
Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a Powershell script to download / install Stackify.

Step 2
Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want, and see the results instantly. WMI? It’s there. Event Viewer? You’ve got it. File System Access? Yes, please! Would love to make sure my web.config is correct. IIS / App Pool Info? Yep. You can even restart it. Running Services? All of them. Start and stop them to your heart’s content. SQL Database access? You bet’cha. Alerts and Notification? Of course! You should know before your customers let you know. … and so much more.

Conclusion

Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times where we need to get data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then have to find a friendlier way to consume it… well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications than writing code to monitor and diagnose.
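For reference, the OnStart() wiring that Step 2 of the Azure Diagnostics path demands looks roughly like this. It is a sketch against the classic Azure SDK 1.x diagnostics API (DiagnosticMonitor and friends); exact names vary by SDK version, so treat the specifics as assumptions:

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Start from the defaults and add each counter by hand.
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
            config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(30)
            });
            // Nothing is queryable until it ships to storage on this schedule.
            config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
            DiagnosticMonitor.Start(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
            return base.OnStart();
        }
    }

Every counter you forget here means another compile-and-redeploy cycle, which is exactly the friction described above.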

    Read the article

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows: Boot the workstation from the NIC. The workstation sends a DHCP request. The DHCP server responds with an IP address and the location of the PXE server. The workstation downloads the WinPE image file from the PXE server via TFTP. The workstation stores the WinPE image file in memory and executes it. Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter. The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages. This process works very well for our workstations and I would now like to use it for building our servers too. The vast majority of our servers are HP Proliant DL360 G6, DL380 G5 and DL380 G6. They’re running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP Proliant servers using the SmartStart CD provided. SmartStart does three useful things for us: it sets up RAID with the HP Array Configuration Utility (ACU); it installs and configures SNMP; and it installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP Proliant Integrated Management Log Viewer, etc). Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server I would have to do the following: As I won't have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 (during the boot process) to access Option ROM Configuration for Arrays (ORCA). SNMP and the HP tools will have to be installed once the Windows installation is complete, using the Proliant Support Pack. Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks. UPDATE 05/01/12 I’ve been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command line tools which work within WinPE and can do such things as configure BIOS settings, configure an array and set up iLO. I’m personally not too bothered about configuring BIOS settings as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I’m not too fussed about being able to configure the array from within WinPE, as I’m happy to just press F8 and use Option ROM Configuration for Arrays (ORCA). Although, if it’s easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE. One of the nice features all the tools possess is that you can pass input files to them. E.g.
Configure one server to your requirements, capture its configuration to a file (using the appropriate tool), and you can then use the tool on other servers, passing the input file with the captured configuration. Array controller drivers appear to be included with the toolkit, along with examples of how to incorporate them within a WinPE build. I suppose WinPE won’t be able to see logical volumes (i.e. 2x physical disks in a RAID 1 configuration) without the array controller drivers? I mentioned in my post that SmartStart normally installs a bunch of Windows HP tools for you. I’ve had a look today, and if you run the SmartStart CD from within Windows all the tools can be installed. Therefore I can do this after the Windows installation is complete. The SmartStart CD appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different to most drivers. I believe that you have to provide the driver during the very early stages of the Windows setup. I’m working through the Scripting Toolkit documentation to try and work this out...
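The custom deployment scripts described above are not shown, so purely as an illustration of the answer-file templating step, here is a minimal sketch; the template path and the {HOSTNAME} token are hypothetical:

    using System;
    using System.IO;

    class UnattendBuilder
    {
        static void Main()
        {
            // Hypothetical template: an unattend XML containing a {HOSTNAME} token.
            string template = File.ReadAllText(@"X:\templates\unattend-template.xml");

            Console.Write("Hostname to assign: ");
            string hostname = Console.ReadLine();

            // Write the per-machine answer file that setup will be pointed at.
            File.WriteAllText(@"X:\unattend.xml", template.Replace("{HOSTNAME}", hostname));
            Console.WriteLine("Wrote unattend.xml for {0}", hostname);
        }
    }

The same token-replacement idea extends to any other per-server value (IP address, product key, OU) collected before setup is launched.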

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around (probably debugging).

1. Create certificate
1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”.
1.2 In the drop-down list of certificates, choose <create> at the bottom.
1.3 Follow the instructions; you can set it up with default values.
1.4 When done, choose the certificate and click “Copy to File…” as seen in the left of the picture above.
1.5 Save the file with any name you want. Now we will save it to local storage to be able to import it to our solution through the Azure configuration manager in step 3.

2. Save certificate to local storage
Now we need to attach it to our local certificate storage to be able to reach it from our configuration manager in Visual Studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137 In order to view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close, and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use. Now that you have access to the Certificates snap-in, you can import the server certificate into your computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed. If they are not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist). Choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key. Click Next. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, they will also be added to this store. Click Next. You should see a summary screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate). Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on.
Right-click on the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing the contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.

3. Import the certificate to your Visual Studio project.
3.1 Now right-click your equivalent to the MvcWebRole1 (as seen in the first picture under the red oval) and choose Properties.
3.2 Choose Certificates. Click the ellipsis to the right of the “thumbprint” and you should be able to select your newly created certificate here. After selecting it, save the file.

4. Upload the certificate to your Azure subscription.
4.1 Go to the Azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.

5. Connect to server.
Since I tried to use account settings (I have to use another name), we have to set up a new name for the connection. No biggie.
5.1 Go to the Azure management portal, select your service and, in the bottom menu, choose “REMOTE”. This will display the configuration for remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change it here it might be good to choose download and replace the one in your project. Set a name that is not your Windows Azure account name and not Administrator.
5.2 Go to Visual Studio and click Server Explorer. Choose as selected in the picture below and click “Connect using remote desktop”.
5.3 You will now be able to log in with the name and password set up in step 5.1, and voila! Windows Server 2012, IIS and other nice stuff!

To do this one I’ve been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx, where you can find some of this information and more.
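Steps 2 and 3 can also be scripted rather than clicked through MMC. A hedged C# sketch using the standard X509Certificate2 and X509Store APIs; the PFX path and password are placeholders for the file exported in step 1.5:

    using System;
    using System.Security.Cryptography.X509Certificates;

    class CertImporter
    {
        static void Main()
        {
            // Placeholders for the PFX exported in step 1.5.
            var cert = new X509Certificate2(@"C:\certs\remote-desktop.pfx", "pfx-password",
                X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

            // This is the value Visual Studio matches in step 3.2 and the
            // portal shows after the upload in step 4.
            Console.WriteLine("Thumbprint: {0}", cert.Thumbprint);

            // Equivalent of the MMC import: Local Computer \ Personal store.
            var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadWrite);
            store.Add(cert);
            store.Close();
        }
    }

Running it once on the development machine gives you the thumbprint to verify against what the management portal displays.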

    Read the article

  • SSH as root using public key still prompts for password on RHEL 6.1

    - by Dean Schulze
    I've generated rsa keys with cygwin ssh-keygen and copied them to the server with ssh-copy-id -i id_rsa.pub [email protected] I've got the following settings in my /etc/ssh/sshd_config file RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys PermitRootLogin yes When I ssh [email protected] it still prompts for a password. The output below from /usr/sbin/sshd -d says that a matching keys was found in the .ssh/authorized_keys file, but it still requires a password from the client. I've read a bunch of web postings about permissions on files and directories, but nothing works. Is it possible to ssh with keys in RHEL 6.1 or is this forbidden? The debug output from ssh and sshd is below. $ ssh -v [email protected] OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012 debug1: Connecting to my.ip.address [my.ip.address] port 22. debug1: Connection established. debug1: identity file /home/dschulze/.ssh/id_rsa type 1 debug1: identity file /home/dschulze/.ssh/id_rsa-cert type -1 debug1: identity file /home/dschulze/.ssh/id_dsa type 2 debug1: identity file /home/dschulze/.ssh/id_dsa-cert type -1 debug1: identity file /home/dschulze/.ssh/id_ecdsa type -1 debug1: identity file /home/dschulze/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 9f:00:e0:1e:a2:cd:05:53:c8:21:d5:69:25:80:39:92 debug1: Host 'my.ip.address' is known and matches the RSA host key. debug1: Found key in /home/dschulze/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/dschulze/.ssh/id_rsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Offering DSA public key: /home/dschulze/.ssh/id_dsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Trying private key: /home/dschulze/.ssh/id_ecdsa debug1: Next authentication method: password Here is the server output from /usr/sbin/sshd -d [root@ga2-lab .ssh]# /usr/sbin/sshd -d debug1: sshd version OpenSSH_5.3p1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: rexec_argv[0]='/usr/sbin/sshd' debug1: rexec_argv[1]='-d' debug1: Bind to port 22 on 0.0.0.0. Server listening on 0.0.0.0 port 22. debug1: Bind to port 22 on ::. Server listening on :: port 22. debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8 debug1: inetd sockets after dupping: 3, 3 Connection from 172.60.254.24 port 53401 debug1: Client protocol version 2.0; client software version OpenSSH_6.1 debug1: match: OpenSSH_6.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: permanently_set_uid: 74/74 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: client->server aes128-ctr hmac-md5 none debug1: kex: server->client aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user root service ssh-connection method none debug1: attempt 0 failures 0 debug1: PAM: initializing for "root" debug1: userauth-request for user root service ssh-connection method publickey debug1: attempt 1 failures 0 debug1: test whether pkalg/pkblob are acceptable debug1: PAM: setting PAM_RHOST to "172.60.254.24" debug1: PAM: setting PAM_TTY to "ssh" debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file /root/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: matching key found: file /root/.ssh/authorized_keys, line 1 Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c debug1: restore_uid: 0/0 Postponed publickey for root from 172.60.254.24 port 53401 ssh2 debug1: userauth-request for user root service ssh-connection method publickey debug1: attempt 2 failures 0 debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file /root/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: matching key found: file /root/.ssh/authorized_keys, line 1 Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c debug1: restore_uid: 0/0 debug1: ssh_rsa_verify: signature correct debug1: do_pam_account: called Accepted publickey for root from 172.60.254.24 port 53401 ssh2 debug1: monitor_child_preauth: root has been authenticated by privileged process debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: ssh_gssapi_storecreds: Not a GSSAPI mechanism debug1: restore_uid: 0/0 debug1: SELinux support enabled debug1: PAM: establishing credentials PAM: pam_open_session(): Authentication failure debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_pty_req: session 0 alloc /dev/pts/1 ssh_selinux_setup_pty: security_compute_relabel: Invalid argument debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. debug1: Received SIGCHLD. 
debug1: session_by_pid: pid 17323 debug1: session_exit_message: session 0 channel 0 pid 17323 debug1: session_exit_message: release channel 0 debug1: session_pty_cleanup: session 0 release /dev/pts/1 debug1: session_by_channel: session 0 channel 0 debug1: session_close_by_channel: channel 0 child 0 debug1: session_close: session 0 pid 0 debug1: channel 0: free: server-session, nchannels 1 Received disconnect from 172.60.254.24: 11: disconnected by user debug1: do_cleanup debug1: PAM: cleanup debug1: PAM: deleting credentials

    Read the article

  • apache tomcat loadbalancing clustering on ubuntu

    - by user740010
    I am facing a problem clustering Tomcat with Apache as a load balancer using mod_jk on Ubuntu. I have installed apache2 on Ubuntu 11.04 and downloaded Tomcat 7, created two copies, and kept them at two different locations: the 1st at /home/net4u/vishal/test/tomcatA and the 2nd at /home/net4u/vishal/test1/tomcatB. I have made the following changes to the server.xml file in the /conf folder:
1. <Server port="8205" shutdown="SHUTDOWN">
2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
3. <Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB">
4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
I have similarly modified the other Tomcat, i.e. tomcatA. The content of the server.xml is as follows:
<!--The connectors can use a shared executor, you can define one or more named thread pools-->
<!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> -->
<!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> -->
<!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation -->
<!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> -->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8109" protocol="AJP/1.3" redirectPort="8443" />
<!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html -->
<!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB">
<!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) -->
<!-- uncomment for clustering-->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
<!-- Use the LockOutRealm to prevent attempts to guess user passwords via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm.
--> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> </Host> </Engine> i have install libapache2-mod-jk step 1. i have Created jk.load file in /etc/apache2/mods-enabled/jk.load content is as follows: LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so Create /etc/apache2/mods-enabled/jk.conf: JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/jk.log JkMount /ecommerce/* worker1 JkMount /images/* worker1 JkMount /content/* worker1 step 2. Created workers.properties file in /etc/apache2/workers.properties content is as follows: workers.tomcat_home=/home/vishal/Desktop/test/tomcatA workers.java_home=/usr/lib/jvm/default-java ps=/ worker.list=tomcatA,tomcatB,loadbalancer   worker.tomcatA.port=8109 worker.tomcatA.host=localhost worker.tomcatA.type=ajp13 worker.tomcatA.lbfactor=1   worker.tomcatB.port=8209 worker.tomcatB.host=localhost worker.tomcatB.type=ajp13 worker.tomcatB.lbfactor=1 worker.loadbalancer.type=lb worker.loadbalancer.balanced_workers=tomcatA,tomcatB worker.loadbalancer.sticky_session=1 i tried the same thing on the windows machine it is working.
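
    One thing worth noting here, as a hedged illustration only: the jk.conf above mounts /ecommerce, /images and /content to a worker named worker1, but the workers.properties defines only tomcatA, tomcatB and loadbalancer in worker.list. A jk.conf that actually routes those URLs through the balancer worker might look like the sketch below (JkWorkersFile, JkLogFile and JkMount are standard mod_jk directives; the URL patterns are simply the ones from the question):

        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile     /var/log/apache2/jk.log
        # route the application URLs through the lb worker defined in worker.list
        JkMount /ecommerce/* loadbalancer
        JkMount /images/*    loadbalancer
        JkMount /content/*   loadbalancer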

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
    Back when I was primarily a C++ developer, I loved C++ templates. The power of writing very reusable generic classes brought the art of programming to a brand new level. Unfortunately, when .NET 1.0 came about, it didn’t have a template equivalent. With .NET 2.0, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET. However, C# generics behave in some ways very differently from their C++ template cousins. There is a handy clause, though, that helps you navigate these waters and make your generics more powerful.

    The Problem – C# Assumes the Lowest Common Denominator

    In C++, you can create a template and do nearly anything syntactically possible on the template parameter, and C++ will not check whether the methods, fields, or operations invoked are valid until you declare a realization of the type. Let me illustrate with a C++ example:

        // compiles fine, C++ makes no assumptions as to T
        template <typename T>
        class ReverseComparer
        {
        public:
            int Compare(const T& lhs, const T& rhs)
            {
                return rhs.CompareTo(lhs);
            }
        };

    Notice that we are invoking a method CompareTo() on the template type T. Because we don’t know at this point what type T is, C++ makes no assumptions and there are no errors. C++ tends to take the path of not checking the template type usage until the method is actually invoked with a specific type, which differs from the behavior of C#:

        // this will NOT compile! C# assumes the lowest common denominator.
        public class ReverseComparer<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    So why does C# give us a compiler error even when we don’t yet know what type T is? This is because C# took a different path in designing generics. Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn’t say T is an object). That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available on the lowest common denominator type: object. Now, while object has the broadest applicability, it also has the fewest specifics. So how do we allow our generic type placeholder to do more than just what object can do?

    Solution: Constrain the Type with the Where Clause

    So how do we get around this in C#? The answer is to constrain the generic type placeholder with the where clause. Basically, the where clause allows you to specify additional constraints on what the actual type used to fill the generic type placeholder must support.

    You might think that narrowing the scope of a generic means a weaker generic. In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types. In effect, these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint with the generic placeholders.

    Constraining a Generic Type to an Interface or Superclass

    One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or inherit from a certain base class. For example, you can’t call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can:

        public class ReverseComparer<T>
            where T : IComparable<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return lhs.CompareTo(rhs);
            }
        }

    Now that we’ve constrained T to an implementation of IComparable<T>, our variables of generic type T may call any members specified in IComparable<T> as well. This means that the call to CompareTo() is now legal. Also, if you constrain your type, you will get compile-time errors if you attempt to use a type that doesn’t meet the constraint. This is much better than the syntax error you would get within the C++ template code itself when you used a type not supported by a C++ template.
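
    As a quick usage sketch of my own (not from the original post): int implements IComparable<int>, so it satisfies the constraint, and the Compare method group converts directly to a Comparison<int> delegate:

        // int satisfies the IComparable<T> constraint
        var reverse = new ReverseComparer<int>();
        Console.WriteLine(reverse.Compare(1, 2));   // positive: 2 orders before 1

        // method group conversion to Comparison<int> plugs into List<T>.Sort
        var numbers = new List<int> { 3, 1, 2 };
        numbers.Sort(reverse.Compare);              // numbers is now { 3, 2, 1 }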
    Constraining a Generic Type to Only Reference Types

    Sometimes you want to assign an instance of a generic type to null, but you can’t do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, where null is meaningless. We can fix this by specifying the class constraint in the where clause. By declaring that a generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type:

        public static class ObjectExtensions
        {
            public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor)
                where TOut : class
                where TIn : class
            {
                return (value != null) ? accessor(value) : null;
            }
        }

    In the example above, we want to be able to access a property off of a reference and, if that reference is null, pass the null on down the line. To do this, both the input type and the output type must be reference types (yes, nullable value types could also be considered applicable at a logical level, but there’s no direct constraint for those).

    Constraining a Generic Type to Only Value Types

    Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type. To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc.). Consider the following method, which will convert anything that is IConvertible (int, double, string, etc.) to the value type you specify, or to null if the instance is null:

        public static T? ConvertToNullable<T>(IConvertible value)
            where T : struct
        {
            T? result = null;

            if (value != null)
            {
                result = (T)Convert.ChangeType(value, typeof(T));
            }

            return result;
        }

    Because T is constrained to be a value type, we can use T? (System.Nullable<T>), which we could not do if T were a reference type.
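
    To make these two constraints concrete, here is a small usage sketch; the Person and Address types are hypothetical, invented purely for illustration:

        // Hypothetical types: Person has an Address property, Address has a City property.
        Person person = null;

        // class constraint in action: safe navigation over a null chain
        string city = person.Maybe(p => p.Address).Maybe(a => a.City);   // null, no exception

        // struct constraint in action: T? is only legal because T is a value type
        int? parsed  = ConvertToNullable<int>("42");    // 42
        int? missing = ConvertToNullable<int>(null);    // null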
    Constraining a Generic Type to Require a Default Constructor

    You can also constrain a type to require the existence of a default constructor. Because by default C# doesn’t know what constructors a generic type placeholder does or does not have available, it can’t typically allow you to call one. That said, if you give it the new() constraint, the type used to realize the generic type must have a default (no argument) constructor.

    Let’s assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo. Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor:

        // Given a set of Action<TFrom, TTo> mappings, will map TFrom to TTo
        public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>>
            where TTo : class, new()
        {
            // The list of translations from TFrom to TTo
            public List<Action<TFrom, TTo>> Translations { get; private set; }

            // Construct with an empty translation set.
            public Adapter()
            {
                // did this instead of auto-properties to allow simple use of initializers
                Translations = new List<Action<TFrom, TTo>>();
            }

            // Add a translator to the collection, useful for initializer list
            public void Add(Action<TFrom, TTo> translation)
            {
                Translations.Add(translation);
            }

            // Add a translator that first checks a predicate to determine if the translation
            // should be performed, then translates if the predicate returns true
            public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation)
            {
                Translations.Add((from, to) =>
                    {
                        if (conditional(from))
                        {
                            translation(from, to);
                        }
                    });
            }

            // Translates an object forward from a TFrom object to a TTo object.
            public TTo Adapt(TFrom sourceObject)
            {
                var resultObject = new TTo();

                // Process each translation
                Translations.ForEach(t => t(sourceObject, resultObject));

                return resultObject;
            }

            // Returns an enumerator that iterates through the collection.
            public IEnumerator<Action<TFrom, TTo>> GetEnumerator()
            {
                return Translations.GetEnumerator();
            }

            // Returns an enumerator that iterates through a collection.
            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    Notice, however, that you can’t specify any other constructor; you can only specify that the type has a default (no argument) constructor.
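
    Here is a usage sketch under the same caveat (Person and PersonDto are hypothetical types): because Adapter implements IEnumerable and exposes both Add overloads, collection-initializer syntax works, and the new() constraint is what lets Adapt construct the PersonDto:

        // Hypothetical types: Person { Name, Age }, PersonDto { Name, AgeText }
        var adapter = new Adapter<Person, PersonDto>
        {
            // unconditional translation
            (from, to) => to.Name = from.Name,

            // conditional translation: only map Age when it is non-negative
            { p => p.Age >= 0, (from, to) => to.AgeText = from.Age.ToString() }
        };

        PersonDto dto = adapter.Adapt(new Person { Name = "Jane", Age = 42 });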
    Summary

    The where clause is an excellent tool that gives your .NET generics even more power, beyond just the base "object level" behavior. There are a few things you cannot (currently) specify with constraints, though:

        - You cannot specify that the generic type must be an enum.
        - You cannot specify that the generic type must have a certain property or method without specifying a base class or interface – that is, you can’t say that the generic must have a Start() method.
        - You cannot specify that the generic type allows arithmetic operations.
        - You cannot specify that the generic type requires a specific non-default constructor.

    In addition, you cannot overload a generic type definition with different, opposing constraints. For example, you can’t define an Adapter<T> where T : struct and an Adapter<T> where T : class. Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user friendly and more powerful!

    Technorati Tags: C#, .NET, Little Wonders, BlackRabbitCoder, where, generics

    Read the article
