Search Results

Search found 749 results on 30 pages for 'alan spark'.


  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .NET web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution now gives developers tremendous flexibility and options. Organizations are building new solutions and offerings on the platform, and others have migrated, or are in the process of migrating, existing applications out of their own data centers into the Azure cloud. Whether they are developing new applications or migrating legacy ones, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options.

    Azure Diagnostics

    Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high-level steps are:

    Step 1: Import the diagnostics module. Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I add it and re-deploy.

    Step 2: Configure the diagnostics (and multiple sub-steps). Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter; I guess I’ll have to deploy again. Wait a minute... I have to specifically code these performance counters into my role’s OnStart() method, compile, and deploy again? And query and consume the data myself?

    Step 3: (Optional) Permanently store diagnostic data. Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well.

    Step 4: (Optional) View stored diagnostic data. Optional? Of course I want to see it. Conveniently, Microsoft recommends three tools to do this with. Un-conveniently, none of these are web based, and they all just give you access to raw data with very little charting or real-time intelligence. Just... data. Never mind that one product seems to have gone stale since a recent acquisition and doesn’t even have screenshots!

    So, let’s summarize: lots of diagnostics data is available, but think realistically. Think DevOps. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is a sparkline in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold, your customers have likely already let you know. And high CPU, memory, I/O, or network doesn’t tell you anything about where the problem is.
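
    For a sense of what the Step 2 complaint above means in practice, wiring a single performance counter into a role's OnStart() with the classic Azure Diagnostics API looks roughly like the sketch below. This is a from-memory illustration of the SDK 1.x pattern, not code from this article, so treat the names and intervals as assumptions to verify:

        using System;
        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Start from the default configuration and add each counter by hand;
                // forgetting one here means another compile-and-redeploy cycle.
                DiagnosticMonitorConfiguration config =
                    DiagnosticMonitor.GetDefaultInitialConfiguration();

                config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
                {
                    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                    SampleRate = TimeSpan.FromSeconds(30)
                });

                // Samples only reach storage on this schedule -- no real-time view.
                config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

                DiagnosticMonitor.Start(
                    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

                return base.OnStart();
            }
        }
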
    The Better Way – Stackify

    Stackify supports application and server monitoring in real time, all through a great web interface. Everything that Azure Diagnostics provides, Stackify provides for your Azure deployments just as it does for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises: it’s a Windows Server (or Linux) in the cloud, behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps.

    Step 1: Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a PowerShell script to download and install Stackify.

    Step 2: Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want, and see the results instantly. WMI? It’s there. Event Viewer? You’ve got it. File system access? Yes, please! I would love to make sure my web.config is correct. IIS / app pool info? Yep. You can even restart an app pool. Running services? All of them. Start and stop them to your heart’s content. SQL database access? You bet’cha. Alerts and notifications? Of course! You should know before your customers let you know. ... and so much more.

    Conclusion

    Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility, and what you get out of the box is, at best, bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times when we need data that just isn’t available. Having to go through a lot of cumbersome steps to get that data, and then having to find a friendlier way to consume it... well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and fixing bugs in my applications than writing code to monitor and diagnose them.

    Read the article

  • Microsoft Enterprise Library Caching Application Block not thread safe?!

    - by AlanR
    Good afternoon, I created a super simple console app to test out the Enterprise Library Caching Application Block, and the behavior is baffling. I'm hoping I screwed something up that's easy to fix in the setup. I have each item expire after 5 seconds for testing purposes. The basic setup: every second, pick a number between 0 and 2. If the cache doesn't already have it, put it in there; otherwise just grab it from the cache. Do this inside a lock statement to ensure thread safety.

    APP.CONFIG:

        <configuration>
          <configSections>
            <section name="cachingConfiguration"
                     type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </configSections>
          <cachingConfiguration defaultCacheManager="Cache Manager">
            <cacheManagers>
              <add expirationPollFrequencyInSeconds="1"
                   maximumElementsInCacheBeforeScavenging="1000"
                   numberToRemoveWhenScavenging="10"
                   backingStoreName="Null Storage"
                   type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                   name="Cache Manager" />
            </cacheManagers>
            <backingStores>
              <add encryptionProviderName=""
                   type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                   name="Null Storage" />
            </backingStores>
          </cachingConfiguration>
        </configuration>

    C#:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Practices.EnterpriseLibrary.Common;
        using Microsoft.Practices.EnterpriseLibrary.Caching;
        using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

        namespace ConsoleApplication1
        {
            class Program
            {
                public static ICacheManager cache = CacheFactory.GetCacheManager("Cache Manager");

                static void Main(string[] args)
                {
                    while (true)
                    {
                        System.Threading.Thread.Sleep(1000); // sleep for one second
                        var key = new Random().Next(3).ToString();
                        string value;
                        lock (cache)
                        {
                            if (!cache.Contains(key))
                            {
                                cache.Add(key, key, CacheItemPriority.Normal, null,
                                          new SlidingTime(TimeSpan.FromSeconds(5)));
                            }
                            value = (string)cache.GetData(key);
                        }
                        Console.WriteLine("{0} --> '{1}'", key, value);
                        //if (null == value) throw new Exception();
                    }
                }
            }
        }

    OUTPUT (how can I prevent the cache from returning nulls?):

        2 --> '2'
        1 --> '1'
        2 --> '2'
        0 --> '0'
        2 --> '2'
        0 --> '0'
        1 --> ''
        0 --> '0'
        1 --> '1'
        2 --> ''
        0 --> '0'
        2 --> '2'
        0 --> '0'
        1 --> ''
        2 --> '2'
        1 --> '1'
        Press any key to continue . . .

    Thanks in advance, -Alan.
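
    A likely explanation for the blanks: in Enterprise Library 4.1, an item that has expired but has not yet been removed by the expiration poll can still report Contains(key) == true while GetData(key) returns null, so the Contains-then-GetData pair races against the 1-second poll no matter how the lock is held. Assuming that is the cause, a safer pattern treats GetData() as the single source of truth; a sketch, replacing the lock block above:

        lock (cache)
        {
            // GetData() returns null both for missing items and for
            // expired-but-not-yet-scavenged items, so test its result directly.
            value = (string)cache.GetData(key);
            if (value == null)
            {
                value = key; // recompute or reload the real value here
                cache.Add(key, value, CacheItemPriority.Normal, null,
                          new SlidingTime(TimeSpan.FromSeconds(5)));
            }
        }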

    Read the article

  • How to make sliding button sidebar in Flex

    - by Tam
    Hi, I'm fairly new to Flex. I want to make a button (icon) on the far right in the middle of the page that displays a sliding sidebar with multiple buttons in it when you hover over it. When the user hovers out of the button bar, I want it to slide back again. Conceptually I got the basics of that to work. The issue I'm having is that when the user moves the mouse between the buttons in the sidebar, it kicks in a state change and the sidebar slides back again. I tried using different types of containers and I got the same results. Any advice? Thanks, Tam

    Here is the code:

        <?xml version="1.0" encoding="utf-8"?>
        <s:VGroup xmlns:fx="http://ns.adobe.com/mxml/2009"
                  xmlns:s="library://ns.adobe.com/flex/spark"
                  xmlns:mx="library://ns.adobe.com/flex/halo"
                  xmlns:vld="com.lal.validators.*"
                  xmlns:effect="com.lal.effects.*"
                  width="150" horizontalAlign="right" gap="0">
            <fx:Script>
                <![CDATA[
                    import com.lal.model.LalModelLocator;
                    var _model:LalModelLocator = LalModelLocator.getInstance();
                ]]>
            </fx:Script>
            <fx:Declarations>
                <mx:ArrayCollection id="someData">
                </mx:ArrayCollection>
            </fx:Declarations>
            <s:states>
                <s:State name="normal" />
                <s:State name="expanded" />
            </s:states>
            <fx:Style source="/styles.css"/>
            <s:transitions>
                <s:Transition fromState="normal" toState="expanded">
                    <s:Sequence>
                        <s:Wipe direction="left" duration="250" target="{buttonsGroup}" />
                    </s:Sequence>
                </s:Transition>
                <s:Transition fromState="expanded" toState="normal">
                    <s:Sequence>
                        <s:Wipe direction="right" duration="250" target="{buttonsGroup}" />
                    </s:Sequence>
                </s:Transition>
            </s:transitions>
            <s:Button skinClass="com.lal.skins.CalendarButtonSkin" id="calendarIconButton"
                      includeIn="normal" verticalCenter="0"
                      mouseOver="currentState = 'expanded'"/>
            <s:Panel includeIn="expanded" id="buttonsGroup"
                     mouseOut="currentState = 'normal'"
                     width="150" height="490">
                <s:layout>
                    <s:VerticalLayout gap="0" paddingRight="0" />
                </s:layout>
                <s:Button id="mondayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/>
                <s:Button id="tuesdayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/>
                <s:Button id="wednesdayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/>
                <s:Button id="thursdayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/>
                <s:Button id="fridayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/>
                <s:Button id="saturdayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/>
                <s:Button id="sundayButton" width="120" height="70" mouseOver="currentState = 'expanded'"/>
            </s:Panel>
        </s:VGroup>
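
    A plausible culprit here, for anyone with the same symptom: mouseOut fires on the Panel as soon as the pointer moves onto one of its child buttons, whereas rollOut fires only when the pointer leaves the component and all of its children. Assuming that is what is happening, a minimal sketch of the change (the same code as above with only the events swapped):

        <!-- rollOver/rollOut ignore pointer moves between a component and its children -->
        <s:Button skinClass="com.lal.skins.CalendarButtonSkin" id="calendarIconButton"
                  includeIn="normal" verticalCenter="0"
                  rollOver="currentState = 'expanded'"/>
        <s:Panel includeIn="expanded" id="buttonsGroup"
                 rollOut="currentState = 'normal'"
                 width="150" height="490">
            <!-- the per-button mouseOver handlers are then no longer needed -->
        </s:Panel>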

    Read the article

  • Flex list control itemrenderer issue

    - by Jerry
    Hi guys, I am working on a small photo gallery. I created an XML file and am trying to link it to my List control with an item renderer. However, when I try to save the file, I get "access of undefined property data". Here is my code... and thanks a lot!

        <?xml version="1.0" encoding="utf-8"?>
        <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:s="library://ns.adobe.com/flex/spark"
                       xmlns:mx="library://ns.adobe.com/flex/mx"
                       minWidth="955" minHeight="600">
            <fx:Declarations>
                <fx:Model id="pictureXML" source="data/pictures.xml"/>
                <s:ArrayList id="pictureArray" source="{pictureXML.picture}"/>
            </fx:Declarations>
            <s:List id="pictureGrid" dataProvider="{pictureArray}" horizontalCenter="0" top="20">
                <s:itemRenderer>
                    <fx:Component>
                        <s:HGroup>
                            <mx:Image source="images/big/{data.source}" />
                            <s:Label text="{data.caption}"/>
                        </s:HGroup>
                    </fx:Component>
                </s:itemRenderer>
            </s:List>
        </s:Application>

    My XML:

        <?xml version="1.0"?>
        <album>
            <picture>
                <source>city1.jpg</source>
                <caption>City View 1</caption>
            </picture>
            <picture>
                <source>city2.jpg</source>
                <caption>City View 2</caption>
            </picture>
            <picture>
                <source>city3.jpg</source>
                <caption>City View 3</caption>
            </picture>
            <picture>
                <source>city4.jpg</source>
                <caption>City View 4</caption>
            </picture>
            <picture>
                <source>city5.jpg</source>
                <caption>City View 5</caption>
            </picture>
            <picture>
                <source>city6.jpg</source>
                <caption>City View 6</caption>
            </picture>
            <picture>
                <source>city7.jpg</source>
                <caption>City View 7</caption>
            </picture>
            <picture>
                <source>city8.jpg</source>
                <caption>City View 8</caption>
            </picture>
            <picture>
                <source>city9.jpg</source>
                <caption>City View 9</caption>
            </picture>
            <picture>
                <source>city10.jpg</source>
                <caption>City View 10</caption>
            </picture>
        </album>

    I appreciate any help!!!
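
    One probable fix, assuming the compiler error points at the {data.source} and {data.caption} bindings: in Flex 4, an inline renderer whose root is a plain HGroup has no data property, so the component's root needs to be s:ItemRenderer (which defines data) for those bindings to compile. A sketch of the adjusted renderer:

        <s:itemRenderer>
            <fx:Component>
                <s:ItemRenderer>
                    <s:HGroup verticalAlign="middle">
                        <mx:Image source="images/big/{data.source}" />
                        <s:Label text="{data.caption}"/>
                    </s:HGroup>
                </s:ItemRenderer>
            </fx:Component>
        </s:itemRenderer>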

    Read the article

  • Module Adminhtml blocks not loading

    - by David Tay
    I was working on a Magento module and it was working fine. At some point, I was trying to enable WYSIWYG on an edit form 'content' field and suddenly my adminhtml grid and edit blocks stopped being generated. On my system are the TinyMCE and Fontis FCKeditor WYSIWYG editor extensions. I'm not sure what I did wrong, but my adminhtml blocks will no longer generate. Here's a dump of all the blocks from my module's adminhtml layout:

        array(17) {
          [0]=> string(4) "root" [1]=> string(4) "head" [2]=> string(13) "head.calendar" [3]=> string(14) "global_notices"
          [4]=> string(6) "header" [5]=> string(4) "menu" [6]=> string(11) "breadcrumbs" [7]=> string(7) "formkey"
          [8]=> string(12) "js_translate" [9]=> string(4) "left" [10]=> string(7) "content" [11]=> string(8) "messages"
          [12]=> string(2) "js" [13]=> string(6) "footer" [14]=> string(8) "profiler" [15]=> string(15) "before_body_end"
          [16]=> string(7) "wysiwyg"
        }

    As you can see, the last item is "wysiwyg", but in the layout output of other Magento modules there are more blocks. For example, in MathieuF's calendar extension, these are all the layout blocks:

        array(26) {
          [0]=> string(4) "root" [1]=> string(4) "head" [2]=> string(13) "head.calendar" [3]=> string(14) "global_notices"
          [4]=> string(6) "header" [5]=> string(4) "menu" [6]=> string(11) "breadcrumbs" [7]=> string(7) "formkey"
          [8]=> string(12) "js_translate" [9]=> string(4) "left" [10]=> string(7) "content" [11]=> string(8) "messages"
          [12]=> string(2) "js" [13]=> string(6) "footer" [14]=> string(8) "profiler" [15]=> string(15) "before_body_end"
          [16]=> string(7) "wysiwyg" [17]=> string(27) "adminhtml_event.grid.child0" [18]=> string(12) "ANONYMOUS_19"
          [19]=> string(27) "adminhtml_event.grid.child1" [20]=> string(12) "ANONYMOUS_21" [21]=> string(27) "adminhtml_event.grid.child2"
          [22]=> string(20) "adminhtml_event.grid" [23]=> string(12) "ANONYMOUS_24" [24]=> string(19) "ANONYMOUS_17.child1"
          [25]=> string(14) "content.child0"
        }

    Does anyone have any idea what's wrong? I've already tried Alan Storm's Layout and Config Viewers and cannot find any clues as to what I did wrong. Any help would be greatly appreciated.
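
    One thing worth ruling out first, since the handle-specific blocks are exactly the ones missing while the default handles still render: confirm that the module's adminhtml layout update file is still declared (and correctly spelled) in the module's config.xml. If that declaration is broken, the grid and edit blocks silently disappear just like this. A sketch of the usual Magento 1.x declaration, with the module and file names as hypothetical placeholders:

        <config>
            <adminhtml>
                <layout>
                    <updates>
                        <mymodule>
                            <!-- resolves to app/design/adminhtml/default/default/layout/mymodule.xml -->
                            <file>mymodule.xml</file>
                        </mymodule>
                    </updates>
                </layout>
            </adminhtml>
        </config>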

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me.

    At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.

    My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.
    The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.

    DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and is not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

    I won't provide step-by-step instructions in this post, as there are other resources that provide those details, but I will give an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.

    Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. For the same reason, the instructions in Part 2 won't work exactly as written. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:

    You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client (see the sketch just after this list).
    You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install, along with a command-line instruction to execute which enables required Windows features.
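
    For reference, the config.xml edit mentioned in the first item is a single added setting. A minimal sketch, using the element documented in the MSDN article (verify it against the config.xml in your own installation media):

        <Configuration>
            <!-- ...existing Package and Setting elements... -->
            <!-- Allows setup to proceed on a client OS such as Windows 7 -->
            <Setting Id="AllowWindowsClientInstall" Value="True"/>
        </Configuration>
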
    I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error:

        Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs as part of the prerequisites automatically.

    When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:

    On the Choose the installation you want page of the installation wizard, I chose Server Farm.
    On the Server Type page, I chose Complete.
    At the end of the installation, I did not run the configuration wizard.

    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, while investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have it installed. Continue reading to see how I accomplished my objective.

    I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again, approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

    Install SQL Server 2008 R2 to get a database engine instance installed.
    Run the SharePoint configuration wizard to set up the SharePoint databases.
    In Central Administration, create a Web application using classic mode authentication, as per a TechNet article on PowerPivot Authentication and Authorization.
    Then follow the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note: you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

    I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application (after you have successfully created the target application), you select the target application and then click Set credentials in the ribbon.

    Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server.
    However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community who will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in T-SQL, in late 2006 he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. During all this time, he has not lost his focus on helping the larger community. I am honored that he has accepted to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So-and-so application is slow, what’s wrong with the SQL Server?” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then my initial quick checks turn up something that makes me start looking at things further.

    The SQL Server is slow, and we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_os_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() shows a high write stall rate for tempdb, while our user databases show high read stall rates on the data files. A quick check of some performance counters, and Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, yet Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a server with two dual-core processors and 8GB RAM, running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB, and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside this system, and they all relate to the root cause of the performance problem.

    If we were to query sys.dm_exec_query_stats for the TOP 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() to see the statement text, and CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache.
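
    To make that first check concrete, the query Jonathan describes looks roughly like this; a sketch of the DMV pattern, not code from the original post:

        SELECT TOP (10)
               qs.max_physical_reads,
               qs.max_logical_reads,
               qs.max_worker_time,
               st.text       AS statement_text,
               qp.query_plan AS cached_plan
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
        ORDER BY qs.max_physical_reads DESC;
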
    OK, quick check: the plans are pretty big; I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks as optimized as it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking, by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly: it says 75% of the memory under 8GB, but you don’t have to take my word for it; you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely ad hoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, and the 2.8GB of data movement in this expensive plan completely flushes the buffer cache, not just once initially, but again during the query’s execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember that the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is constantly fluctuating between 50 and 150 is a sign that the buffer cache is experiencing constant churn of data, once every one to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free Pages, along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may be that our disk subsystem is not the problem, but is instead an innocent bystander and victim.

    Side note: the Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers. As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were published. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is excessively consuming memory and bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single-use plans waste space in the plan cache, especially when they are ad hoc plans for statements with concatenated filter criteria that are not likely to reoccur with any frequency. SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? OK, well, you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB, the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Packs 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our server admin and SAN admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it.

    SQL Server is constantly telling a story to anyone that will listen. As the DBA, you have to sit back and listen to all that it’s telling you, and then evaluate the big picture and how all the data you can gather from SQL about performance relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you.

    When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O-related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM to point this out and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog post about it. I am honored to be a guest blogger and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem.

    This blog post is written by Jonathan Kehayias (Blog | Twitter).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been in meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally presenting intellectual programs; Andrew, master of the ukulele, with his pioneering local history work; and Tony, with his marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken the art of photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.

    Have a look at some of the Flickr pages:

    Uncle Spike
    The B-Men – Woolverine
    The 2011 BoyZ iN Sink reunion tour turned out to be their last
    Error 404 – Flibble not found

    Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series of noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material by several photography courses. Although his work was for a while ignored by the more conventional world of ‘art’ photography, it became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the sequence of images has its maximum effect when seen in order. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event, and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project, and I can quote from the report by the excellent Cabume website, the source of tech news from the ‘Cambridge cluster’:

    Put really simply, Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however, is that Mr Flibble is supported in his endeavour by Reed’s top-notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: in addition to his full-time role at Cambridge software house Red Gate Software as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response, it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 from 37 backers toward the £10,000 target it needs to reach by the Friday 30 November deadline.

    Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more, to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp of the challenges he faces on the road to immortalisation on 150gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page, his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present a slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full:

    Firstly, how did you manage it – holding down a full-time job and also conceiving and executing these ideas on a daily basis?

    I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensible rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or do something else equally stupid, the resulting photo would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late back from work, tired, or doing something socially on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending two to five hours taking and processing the photos every day.

    Are there any overlaps between software development and creative thinking?

    Software is an inherently creative business, and the speed at which it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty, I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration.

    How specifically will this Kickstarter project allow you to test the commercial appeal of your work, and do you plan to get the book into shops?

    It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments, and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move on to the next piece of content if you happen to ask for some support?

    A book has been the single most requested thing that people have asked me to produce, and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry right now to take any kind of risk – they’ve been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear message that publishers might hear, or it gives me the confidence to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping and recognising that it’d be just the perfect gift for their difficult-to-buy-for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing about having a physical thing to hold in your hands.

    If the project is successful, is there a chance that it could become a full-time job?

    At the moment that seems like a distant dream, as, should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick-start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably.

    So there it is: an opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light in the format it deserves.

    Read the article

  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It’s been more than a month since SharePoint 2010 RTMed, and a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. Quite a few people have written blog posts about setting up good development environments; there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén. Make sure that you check these out as well. One of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem with this is that what is considered “best” is relative. That is precisely the reason why you are reading this post now. Most of the posts that I read are outdated or need updating, use the wrong OSes or virtualization solutions (again, opinions vary), or use them the wrong way. Here’s a developer’s view of building the ultimate SharePoint 2010 development rig. If you are a sales guy, it’s time to close this window.

    Confusion 1: Which host operating system and virtualization solution should I use?

    This point has been beaten to death in numerous blog posts in the past; if you have time to invest, read this excellent post by our very own SharePoint Joel on the subject. But if you are planning to build the ultimate development rig, then Windows Server 2008 R2 with Hyper-V is the option that you should be looking at. I have been using this as my primary OS for about 6-7 months now, and I haven’t had any driver or application compatibility issues. In my experience, all the Windows 7 drivers work fine with Win2008 R2 as well. You can enable Aero for eye candy (and the Windows 7 look and feel), and except for a few things like hibernation support (which can be enabled if you really want it), Windows Server 2008 R2 is the best workstation OS that I have used to date. But frankly, the answer to the question of which OS to use depends primarily on one thing: are you willing to change your primary OS? If the answer is ‘Yes’, then Windows Server 2008 R2 with Hyper-V is the best option; if not, look at VMware or VirtualBox, both equally good. Those coming from a Virtual PC background might prefer Sun VirtualBox. Besides, these provide support for running 64-bit guest machines on 32-bit hosts if the underlying hardware is truly 64-bit. See my earlier post on this. Since we are going to make the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for the reasons mentioned above.

    Confusion 2: Should I use a multi-(virtual)-server setup?

    A lot of people use multiple servers for their development environments - like Wictor Wilén suggests - one server hosting Active Directory, one hosting SharePoint Server and another one for SQL Server. True, this mimics the production environment the best possible way, but as somebody who has fallen for this setup before, I can tell you that you don’t really gain anything by doing it. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple server-class machine instances in parallel, a lot of processor cycles are wasted to no good use. In my personal experience, as somebody who needs to switch between MOSS 2007 and SharePoint 2010 environments from time to time, the best possible solution is to:

    Make the host Windows Server 2008 R2 machine your domain controller (AD server).
    Make all your virtual guest OSes join this domain.
    Have each individual guest OS image run its own local SQL Server instance.

    The advantages: you can reuse the users and groups in each of the guest operating systems, you can manage the users in one place, AD is lightweight and doesn’t take too many resources on your host machine, and having separate SQL instances for each of the development images gives you maximum flexibility in terms of configuration; for example, your SharePoint rigs can have simpler DB configurations compared to your MS BI blast pits.

    Confusion 3: Which operating system should I use to run SharePoint 2010?

    Now that’s a no-brainer. Use Windows Server 2008 R2 as your guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your guest OS instead, there are a few patches that you need to install at different times during the installation; for that, follow the steps mentioned here.

    Okay, now that we have made our choices, let’s get to the interesting part of building the rig.

    Step 1: Prepare the host machine - install Windows Server 2008 R2

    Install Windows Server 2008 R2 on your best desktop/laptop. If you have read this far, I am quite sure that you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you go ahead and nuke your current OS. There are plenty of blogs telling you how to make a good Windows 2008 R2 workstation that feels and behaves like a Windows 7 machine; follow one, and once you are done, head to Step 2.

    Step 2: Configure the host machine as a domain controller

    Before we begin, let me tell you that this step is completely optional; you don’t really need to do it, and you can simply use local users on the guest machines instead. But this is a much cleaner approach to managing users and groups if you run multiple guest operating systems. This post neatly explains how to configure your Windows Server 2008 R2 host machine as a domain controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this.

    Step 3: Prepare the guest machine - install Windows Server 2008 R2

    Open Hyper-V Manager.
    Choose to create a new guest operating system.
    Allocate at least 2 GB of memory to the guest OS.
    Choose the Windows 2008 R2 installation media.
    Start the virtual machine to commence installation.
    Once the installation is done, activate the OS.

    Step 4: Make the guest operating systems join the domain

    This step is quite simple; just follow the steps below (a scripted alternative appears at the end of this post).

    Fire up Hyper-V Manager and open your guest OS.
    Click on Start, right-click on ‘Computer’ and choose ‘Properties’.
    On the window that pops up, click on ‘Change settings’.
    On the ‘System Properties’ window that comes up, click on the ‘Change’ button.
    A window named ‘Computer Name/Domain Changes’ opens up. In the text box titled Domain, type in the domain name from Step 2.
    Click OK; Windows will show you the welcome-to-domain message and ask you to restart the machine. Click OK to restart.

    If joining the domain fails, that means you have not set up networking in Hyper-V for the guest OS to communicate with the host. To enable it, follow the steps I mentioned in this earlier post.

    Step 5: Install SQL Server 2008 R2 on the guest machine

    SQL Server 2008 R2 installs without hassle on Windows Server 2008 R2. (SQL Server 2008 needs SP2 to work properly on Win2008 R2.) SQL Server 2008 R2 also allows you to directly add PowerPivot support to SharePoint. Choose to install in SharePoint Integrated Mode in the Reporting Server configuration.

    Step 6: Install KB971831 and the SharePoint 2010 prerequisites

    Now install the WCF hotfix for Microsoft Windows (KB971831) from this location, and the SharePoint 2010 prerequisites from the SP2010 installation media.

    Step 7: Install and configure SharePoint 2010

    Install SharePoint 2010 from the installation media. After the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008, or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, do the following:

    Install SQL Server 2008 KB 970315 x64; after its installation is finished, complete the wizard.
    Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed-installation dialog box. Install SQL Server 2008 KB 970315 x64, and then manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\psconfigui.exe

    The SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but not connected to a domain controller.

    Step 8: Install Visual Studio 2010 and the SharePoint 2010 SDK

    Install Visual Studio 2010.
    Download and install the Microsoft SharePoint 2010 SDK.

    Step 9: Install PowerPivot for SharePoint and configure Reporting Services

    Pop in the SQL Server 2008 R2 installation media once again and install PowerPivot for SharePoint. This will be added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here; if you want the details of how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, an excellent article by Alan Le Marquand.

    Step 10: Download and install the sample databases for Microsoft SQL Server 2008 R2

    SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS, but if you want to try these out, you need data in your databases. So if you want to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install the sample databases for Microsoft SQL Server 2008 R2 from CodePlex.

    And you are done! Fire up your Visual Studio 2010 and start coding away!!
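
    As a scripted alternative to the GUI domain-join walk-through in Step 4, something along these lines should work from an elevated PowerShell prompt inside the guest. This is a sketch, not from the original post, and the domain name and account are hypothetical placeholders for whatever you created in Step 2:

        # Join the guest to the development domain created in Step 2
        # ("dev.local" and "DEV\Administrator" are placeholders).
        Add-Computer -DomainName "dev.local" -Credential (Get-Credential "DEV\Administrator")

        # The domain join takes effect after a reboot.
        Restart-Computer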

    Read the article

  • CodePlex Daily Summary for Wednesday, April 07, 2010

    CodePlex Daily Summary for Wednesday, April 07, 2010New ProjectsAStar.net: AStar.net is a project for compute the A* path finding algorithm. It expose classes and interface that can be used for all purpose with multi-threa...Auto Complete for MOSS 2007: Auto Complete for MOSS 2007 (WSS 3.0) makes it easier for all user to fill list of nessesary string in the field. AutoPoco: AutoPoco is a framework with the purpose of fluently building test data from Plain Old CLR ObjectsBlueProject: BlueProjectColor Picker for MOSS 2007: Color picker for MOSS 2007 (WSS 3.0) makes it easier for some user to choose color in the field. ComBrowser2: ComBrowser2DCommunication: Communication Components and Software By Delphi On Windows.DirST: Allows one to replicate a DIRectory STructure without copying files. Written in C#.Discussion column for MOSS 2007: Discussion column for MOSS 2007/ (WSS 3.0) is a field column for different type lists such as Custom List, Document Library, Issue Tracking, not on...Effect Custom Tool for Visual Studio: Effect Custom Tool for Visual Studio is a visual studio 2008 extension that helps you generate c# classes from effect (*.fx) files for use with Xna...Energon: Energon is a framework to create and run software energy consumption tests. Requires a couple of PC with a TCP connections, a phidgets ammeter and ...Fiansa: Proyecto FiansaFirstSpark: FirstSpark is a sample Spark View Engine projectISOM: Internetowy System Obsługi Magisterek La Carta Mas Alta: La Carta Mas Alta is an open source card game totally written in PHP and HTML. This cross-platform and cross-browser game was tested under BeOS, Li...Near forums - ASP.NET MVC forum engine: Open source SEO friendly ASP.NET MVC forum engine. Features: Navigation in forums, topics and tags; login with Facebook Connect and Single sign-on;...PEC: Still editingPowershell Zip Export/Import Cmdlet Module: Powershellzip is a powershell module with a set of Cmdlets for zip file export, import and processingProgress bar for MOSS 2007: Progress bar for MOSS 2007 (WSS 3.0) makes it easier for some user to display progress in the field in percent (0-100%). SharePoint Accelerators: This project delivers a number of SharePoint accelerators that make your every-day SharePoint life easier.Shweet: SharePoint 2010 Team Messaging built with Pex: Shweet is a simple SharePoint Foundation 2010 application that allows teams to do messaging and subscriptions in the style similar to twitter. De...SilverlightEncoder: Video and audio encoder for Silverlight 4 Out-of-BrowserStarMath: Static Array Math Library: While there are already countless math libraries for performing common matrix/array functions, StarMath is distinguished by its simplicity and flex...StringToNumber: StringToNumber is a .NET library for parsing numbers in their written form into their numeric equivalent. It's developed in C#. It was created to...SubstitutionITIS: <project name="Substitution" language="c-sharp" to="myschool" whatdo="manageSubstitutionTeachers" />TamTam.SharePoint2010.LinkedIn: SP2010 and LinkedIn working togetherThink And Explore: Think my own wayusenet-o-matic: small mobile optimised webscript to search various NZB sources like newzbin and NZBmatrix and send donwloads to your SABnzbd server. Valid HTML5 Templates for VisualStudio: This project will host valid HTML templates for Visual Studio. 
The primary focus however will be on the newer standard HTML5 and Visual Studio 2010XSS Attack: This tool will simulate an attack on your database and update up to 5000 rows in every table and replace your strings in your database with random ...Yazgelistir Beta: Yazgelistir'in beta sitesiNew ReleasesAStar.net: AStar.net 1.0 downloads: AStar.net 1.0 downloadsAvalaible downloads:Astar.net dll - Runtime library ready to be included in a project. Astar.net source project - The Visu...AutoPoco: 0.1 - Initial Release: This is the initial release, changes pending, lots left to do, check it out though!Bag of Tricks: Bag of Tricks - WPF - Libraries and Sample App: Here is a drop of the Bag of Tricks. It targets the 3.5 Client Profile.Boxee Launcher: Boxee Launcher 1.0.1.4: Taskbar will now be hiddenDesignit Umbraco Newsletter Package: ver 1.0.0 beta1: Please remember this is a beta version. If you have any problems or issues, don't hesitate to use the issue tracker and/or forum on this site. Ple...Effect Custom Tool for Visual Studio: Effect Custom Tool for Visual Studio 2008: Effect Custom Tool for Visual Studio is a visual studio 2008 extension that helps you generate c# classes from effect (*.fx) files for use with Xna...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.1: Fluent Ribbon Control Suite 1.1 Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples Foundation (Tabs, Groups, Contextual Tab...GsGrid: gsgrid 1.6.5: gsgrid 1.6.5Home Access Plus+: v3.2.5.2: v3.2.5.1 Release Change Log: Attempt to fix Domain Admin Lookup box File Changes: ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdbHulu Launcher: Hulu Launcher 1.0.1.4: Will now hide taskbar.Icarus Scene Engine: Icarus Professional 2 Alpha 2a v 1.10.404.936: Alpha release 2 of Icarus Professional. This release includes: IcarusX: The ActiveX-based browser control for rendering IPX projects online. Icaru...kdar: KDAR 0.0.19: KDAR - Kernel Debugger Anti Rootkit - thread start notifier routine check added - registry callback сheck added - NDIS6 checks added - some bug i...Mobile Device Browser File: Mobile Device Browser File (2010-04-07): The Mobile Browser Definition File contains definitions for individual mobile devices and browsers. At run time, ASP.NET uses the information in th...MvcContrib: a Codeplex Foundation project: Portable Area Template: Use this Visual Studio 2008 Project template to create new portable areas. Drop this file in your Documents\Visual Studio 2008\Templates\ProjectTe...Office Apps: 0.8.8: new ui for Document.Viewer bug fix'sProtoforma | Tactica Adversa: Skilful 0.3.4.477: RC1ROOT Builder: ROOT Builder 1.41: Simplifies building of ROOT on the windows platform by generating a Visual Studio C make project that will build and run (and debug ROOT). See Inst...SharePoint Accelerators: Search AutoComplete for SharePoint Lists: Search AutoComplete for SharePoint List is a Web Part that allows you to search contents of a SharePoint. Search is interactive and offers you resu...SharePoint Labs: SPLab4002A-FRA-Level100: SPLab4002A-FRA-Level100 This SharePoint Lab will teach you the 2nd best practice you should apply when writing code with the SharePoint API. Lab La...SharePoint Labs: SPLab4003A-FRA-Level100: SPLab4003A-FRA-Level100 This SharePoint Lab will teach you the 3rd best practice you should apply when writing code with the SharePoint API. 
Lab La...SQL Compact data and schema script utility: Version 3.0: This release contains 4 downloadable files: - SSMS 2008 scripting add-in - SQL Server 2005/2008 command line utility to generate a script with sch...StarMath: Static Array Math Library: StarMath Source Files: The source file package includes the main library StarMath.dll (and it's source files), and an example exe project to invoke commands from StarMath.stefvanhooijdonk.com: Powershell Solution Install Script: Powershell Example script to install a SP2010 Solution, which actually waits for the Retraction and Deployment jobs.TamTam.SharePoint2010.LinkedIn: 0.0.0.1: How to use/install Small instruction to use this code/solution step 1 Add the Farm Solution to your SP2010 installation step 2 Go to the MySite H...usenet-o-matic: V 0.2: supports Newzbin and NZBmatrix as index sources and SABnzbd as download serverVCC: Latest build, v2.1.30406.0: Automatic drop of latest buildVisual Studio DSite: E-Z Image To PDF Converter Beta: This simple little program can convert all common image formats into pdfs and can even encrypt the pdf with an encrpytion key.Web and Load Test Plugins for Visual Studio Team Test: Release 2.0: Release 2.0 is targeted at VS 2010. VS 2010 exposes major new extensibility points: 1) Recorder plugins enable you to do custom correlations and o...Wicked Compression ASP.NET HTTP Module: WickedCompressionModule 4.0 Alpha: New Features, New Enhancements! - Visual Studio 2010 Projects! - Support for ASP.NET 2.0, 3.5, and 4.0 - AJAX Support for ASP.NET 3.5 and 4.0 - Bu...x5s - test encodings and character transformations to find XSS hotspots: x5s v1.0.0 beta: This is the v1.0 beta release of x5s. All feedback welcome in planning for the next release. Make sure Fiddler is installed prior to running th...xvanneste: Silverlight SharePoint: Fichiers du webcast sur silverlight: Silverlight OM Silverlight Webpart Silverlight Embedded ressource Silverlight List ViewYet Another Web App Monitoring Tool (YAWAMT): yawamt v0.5: A new release has seen the light :-) I've lowered the release to version 0.5 but changed some major things: - deletion of URLS works - settings can...Zinc Launcher: Zinc Launcher 1.0.1.1: Taskbar will now be hidden Delay to show Zinc was reduced to improve responsiveness.Most Popular ProjectsRawrWBFS ManagerMicrosoft SQL Server Product Samples: DatabaseASP.NET Ajax LibrarySilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsGraffiti CMSnopCommerce. Open Source online shop e-commerce solution.RawrFacebook Developer ToolkitShweet: SharePoint 2010 Team Messaging built with Pexpatterns & practices – Enterprise LibraryNcqrs Framework - The CQRS framework for .NETjQuery Library for SharePoint Web ServicesIonics Isapi Rewrite FilterAcadsys

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours. Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums. Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:

A Programmer's Tale
A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in). This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do". He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques. They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the "elite" in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario. Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers. Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers. I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of "developer" that has helped to give our industry a bad name. Here is why:

One Who Knows Nothing, Understands Nothing
On one hand, you have the cognoscenti of the .NET development world. Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery - all well-respected and noted programmers who are pretty much our version of celebrities. These guys write blogs and books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable, extensible, and a joy to work with. They tout the virtues of the SOLID principles, or of using TDD/BDD, or of using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss. Joe's been with Initrode for 10 years, starting as the company's very first programmer and over the years building up a little fiefdom of his own, until at present he's in charge of all Initrode's software development. Joe writes code the same way he always has, without bothering to learn much, if anything.
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better-case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrewed solution has never occurred to him. He doesn't understand TDD and considers "testing" to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works. He doesn't really understand SOLID, and he doesn't care to. He's worked as a programmer for years, and that's all that counts. Right? WRONG. Who would you rather trust? Someone with years of experience who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability? Joe Everyman may have years of experience at Initrode as a programmer and say to do things "his way", but people like Jeremy Miller or Ayende Rahien have years of experience at companies just like Initrode; THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way". Here's another way of thinking about it: if you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA, or Barack Obama? One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice. NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field. Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice.

DIY Considered Harmful
I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside its own little box - no useful skill outside of the small pond. If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy. However if, like most of us, you want to advance your career outside a very narrow space, you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else, because you are aware of alternative and, in almost all cases, better tools for the job. A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/a well-known ORM versus some in-house one, knows better than a senior developer with 20 years' experience who doesn't understand any of that, plain and simple. Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve.
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a "junior" could know more than them; after all, they've spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software. And here comes a newbie who hasn't spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as "proof" that he's good, or that he might have to learn something new to improve; in most cases the problem is that Joe Everyman, and by extension Initrode itself, has a mentality of just being "good enough", and mediocrity is the rule of the day.

A Thorn Bush is No Place for a Phoenix
My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say are the "right" way to do things (and that have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company. Find a company where they DO care about quality and craftsmanship; otherwise you will never be happy. There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care for craftsmanship. In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman, because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, replacing someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question. Find a company that has people smart enough to listen to the "best and brightest", and be happy. Do not, I repeat, DO NOT waste away in a job working for ignorant people. At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED for any serious professional. When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity. Work for a company that uses REAL software engineering techniques and really cares about craftsmanship. The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is that hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are. These modern-day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them. To sum this up another way: if you surround yourself with learned people, you will learn. Surround yourself with ignorant people who can't, as the saying goes, see the forest for the trees, and you'll learn nothing of any real value.
There is more to software development than just writing code, and the end goal should not be just "shipping software"; it should be shipping software that is extensible, maintainable, and, above all else, software whose creation has broadened your knowledge in some capacity, even if a minor one. An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line. This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and fewer Joe Everymans who are unwilling to adapt or foster new ways of thinking.

Conclusion - I Cast "Protection from Fire"
I am fairly certain this post will spark some controversy and might even invite the flames. Please keep in mind these are opinions and nothing more. A little healthy rant and subsequent flamewar can be good for the soul once in a while. To paraphrase The Godfather: it helps to get rid of the bad blood.

    Read the article

  • This is the End of Business as Usual...

    - by Michael Snow
This week, we'll be hosting our last Social Business Thought Leader Series webcast for 2012. Our featured guest this week will be Brian Solis of Altimeter Group. As we've been going through the preparations for Brian's webcast, it became very clear that an hour's time barely scrapes the surface of the depth of Brian's insights and analysis. Accordingly, in the spirit of sharing Brian's perspective with all of our readers, we'll be featuring guest posts all this week pulled from Brian's larger collection of blog postings on his own website. If you like what you've read here this week, we highly recommend digging deeper into his tome of wisdom. Guest post by Brian Solis, Analyst, Altimeter Group, as originally featured on his site, with the minor change of the video addition at the beginning of the post.

This is the End of Business as Usual and the Beginning of a New Era of Relevance - Brian Solis, Principal Analyst, Altimeter Group

The Times They Are A-Changin'
Come gather 'round people
Wherever you roam
And admit that the waters
Around you have grown
And accept it that soon
You'll be drenched to the bone
If your time to you
Is worth savin'
Then you better start swimmin'
Or you'll sink like a stone
For the times they are a-changin'.
- Bob Dylan

I'm sure you are wondering why I chose lyrics to open this article. If you skimmed through them, stop here for a moment. Go back through Dylan's words and take your time. Carefully read, and feel, what it is he's saying, and savor the moment to connect the meaning of his words to the challenges you face today. His message is as important and true today as it was when the words were first written in 1964. The tide is indeed once again turning. And even though the 60s now live in the history books, right here, right now, Dylan is telling us once again that this is our time to not only sink or swim, but to do something amazing. This is your time. This is our time. But these times are different, and what comes next is difficult to grasp. How people communicate. How people learn and share. How people make decisions. Everything is different now. Think about this… you're reading this article because it was sent to you via email. Yet more people spend their online time in social networks than they do in email. Duh. According to Nielsen, 22.5% of total time spent online goes to connecting and communicating in social networks. To put that in perspective, the time spent in the likes of Facebook, Twitter, and YouTube is greater than online gaming at 9.8%, email at 7.6% and search at 4%. Imagine for a moment if you and I were connected to one another in Facebook, which just so happens to be the largest social network in the world. How big? Well, Facebook today is the size of the entire Internet in 2004. There are over 1 billion people friending, Liking, commenting, sharing, and engaging in Facebook… that's roughly 12% of the world's population. Twitter has over 200 million users. Ever hear of Tumblr? More time is spent on this popular microblogging community than on Twitter. The point is that the landscape for communication, and all that's affected by human interaction, is profoundly different from how you and I learned, shared or talked to one another yesterday. This transformation is only becoming more pervasive, and it's not going back.

Survival of the Fitting
But social media is just one of the channels we can use to reach people. I must be honest. I'm as much a part of tomorrow as I am of yesteryear.
It's why I spend all of my time researching the evolution of media and its impact on business and culture. Because of you, I share everything I learn in newsletters, emails, blogs, YouTube videos, and also traditional books. I'm dedicated to helping everyone not only understand, but grasp, the change that's before you. Technologies such as social, mobile, virtual, augmented, et al. compel us to adapt our story and value proposition and extend our reach to be part of communities we don't realize exist. The people who will keep you in business or running tomorrow are the very people you're not reaching today. Before you continue to read on, allow me to clarify my point of view. My inspiration for writing this is to help you augment, not necessarily replace, the programs you're running today. We must still reach those who matter to us in the ways they prefer to be engaged. And to reach what I call the connected consumer of Generation-C, we must also reach them in the ways they wish to be engaged. In all of my work, how they connect, talk to one another, influence others, and make decisions is not at all like the traditional consumers of the past. Nor are they merely the kids… the Millennials. Connected consumers are represented across every age group and demographic. As you can see, use of social networks, media sharing sites, microblogs, blogs, etc. spans equally across Gen Y, Gen X, and Baby Boomers. The DNA of connected customers is indiscriminate of age or any other demographic, for that matter. This is more about psychographics - the linkage of people through common interests - than it is about their age, gender, education, nationality or level of income. Once someone is introduced to the marvels of connectedness, the sensation becomes a contagion. It touches and affects everyone. And that's why this isn't going anywhere but normalcy. Social networking isn't just about telling people what you're doing. Nor is it just about generic, meaningless conversation. Today's connected consumer is incredibly influential. They're connected to hundreds and even thousands of other like-minded people. What they experience and what they support is shared throughout these networks, and as the information travels, it shapes and steers the impressions, decisions, and experiences of others. For example, if we revisit the Nielsen research, we get an idea of just how big this is becoming. 75% spend heavily on music. How does that translate to the arts? I'd imagine the number is equally impressive. If 53% follow their favorite brand or organization, imagine what's possible. Just like this email list that connects us, connections in social networks are powerful. The difference, however, is that people spend more time in social networks than they do in email. Everything begins with an understanding of the "5 W's and H.E." - Who, What, When, Where, How, and to What Extent? The data that comes back tells you which networks are important to the people you're trying to reach, how they connect, what they share, what they value, and how to connect with them. From there, your next steps are to create a community strategy that extends your mission, vision, and value, and align it with the interests, behavior, and values of those you wish to reach and galvanize. To help, I've prepared an action list for you, otherwise known as the 10 Steps Toward New Relevance:

1. Answer why you should engage in social networks and why anyone would want to engage with you.
2. Observe what brings them together and define how you can add value to the conversation.
3. Identify the influential voices that matter to your world, recognize what's important to them, and find a way to start a dialogue that can foster a meaningful and mutually beneficial relationship.
4. Study the best practices of not just organizations like yours, but also those who are successfully reaching the type of people you're trying to reach - it's benchmarking against competitors and benchmarking against undefined opportunities.
5. Translate all you've learned into a convincing presentation written to demonstrate tangible opportunity to your executive board; make the case through numbers, trends, data, insights - understanding they have no idea what's going on out there and you are both the scout and the navigator (start with a recommended pilot so everyone can learn together).
6. Listen to what they're saying and develop a process to learn from activity, adapt to interests, and steer engagement based on insights.
7. Recognize how they use social media and innovate based on what you observe to captivate their attention.
8. Align your objectives with their objectives. If you're unsure of what they're looking for… ask.
9. Invest in the development of content and engagement.
10. Build a community, invest in values, spark meaningful dialogue, and offer tangible value… the kind of value they can't get anywhere else. Take advantage of the medium and the opportunity!

The reality is that we live and compete in a perpetual era of Digital Darwinism - the evolution of consumer behavior when society and technology evolve faster than our ability to adapt. This is why it's our time to alter our course. We must connect with those who are defining the future of engagement, commerce, business, and how the arts are appreciated and supported. Even though it is the end of business as usual, it is the beginning of a new age of opportunity. The consumer revolution is already underway, and the question is: how do you better understand the role you play in this production as a connected or social consumer as well as a business professional? Again, this is your time to define a new era of engagement and relevance. Originally written for The National Arts Marketing Project. Connect with Brian via: Twitter | LinkedIn | Facebook | Google+

---

Note from Michael: If you really like this post above, check out Brian's TEDTalk and his thought process for preparing it in this post: http://www.briansolis.com/2012/10/tedtalk-reinventing-consumer-capitalism-screw-business-as-usual/

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system. The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me. At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server, and I could function reasonably well with this approach. Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements. My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't. Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010, and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.
The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way. DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help. I won't provide the step-by-step instructions in this post, as there are other resources that provide those details, but I will give an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal. Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1, because those steps are applicable only to a server operating system, which I am not running on my laptop. Likewise, the instructions in Part 2 won't work exactly as written, for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info: You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client. You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install, along with a command-line instruction to execute which enables required Windows features. I will digress for a moment to save you some grief about the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error: Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs as part of the prerequisites automatically.
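For reference, the config.xml edit mentioned above is a one-line addition. The sketch below is abridged and hedged: the setting name comes from the MSDN article referenced above, the file lives under files\Setup\ on the copied installation media, and your actual file will contain more elements than shown here.

```xml
<!-- files\Setup\config.xml (abridged) -->
<Configuration>
  <!-- ...keep the existing Package, Logging, and Setting elements as-is... -->

  <!-- Added line: lets setup run on a client OS such as Windows 7 -->
  <Setting Id="AllowWindowsClientInstall" Value="True"/>
</Configuration>
```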
When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions: On the Choose the installation you want page of the installation wizard, I chose Server Farm. On the Server Type page, I chose Complete. At the end of the installation, I did not run the configuration wizard. Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have installed it. Continue reading to see how I accomplished my objective. I uninstalled SQL Server 2008 R2 and started again. I had different problems, which I don't recollect now. However, I uninstalled again, approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:
- Install SQL Server 2008 R2 to get a database engine instance installed.
- Run the SharePoint configuration wizard to set up the SharePoint databases.
- In Central Administration, create a Web application using classic mode authentication, as per a TechNet article on PowerPivot Authentication and Authorization.
- Follow the steps at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator.
I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment. I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon. Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information.
I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community who will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.
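If you want a quick programmatic check that the farm pieces are in place after the wizard and the manual PowerPivot steps, a short console program against the SharePoint administration object model can list what was created. This is a hedged sketch, not from the original post; it assumes you run it on the machine itself, compiled for x64 / .NET 3.5 with a reference to Microsoft.SharePoint.dll.

```csharp
using System;
using Microsoft.SharePoint.Administration;

class FarmCheck
{
    static void Main()
    {
        // SPFarm.Local is null when the box has not been joined to a farm yet
        SPFarm farm = SPFarm.Local;
        if (farm == null)
        {
            Console.WriteLine("No farm found - re-run the configuration wizard.");
            return;
        }

        Console.WriteLine("Farm build: " + farm.BuildVersion);

        // Enumerate the servers the configuration wizard registered
        foreach (SPServer server in farm.Servers)
            Console.WriteLine("  Server: " + server.Name + " (" + server.Role + ")");
    }
}
```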

    Read the article

  • Guest Post: Using IronRuby and .NET to produce the 'Hello World of WPF'

    - by Eric Nelson
[You might want to also read other GuestPosts on my blog – or contribute one?] On the 26th and 27th of March (2010), myself and Edd Morgan of Microsoft will be popping along to the Scottish Ruby Conference. I dabble with Ruby and am a huge fan, whilst Edd is a "proper Ruby developer". Hence I asked Edd if he was interested in creating a guest post or two for my blog on IronRuby. This is the second of those posts. If you should stumble across this post and happen to be attending the Scottish Ruby Conference, then please do keep a look out for myself and Edd. We would both love to chat about all things Ruby and IronRuby. And… we should have (if Amazon is kind) a few books on IronRuby with us at the conference which will need to find a good home. This is me and Edd and… the book: Order on Amazon: http://bit.ly/ironrubyunleashed

Using IronRuby and .NET to produce the 'Hello World of WPF'
In my previous post I introduced, to a minor extent, IronRuby. I expanded a little on the basics by getting a Rails app up and running on this .NET implementation of the Ruby language — but there wasn't much to it! So now I would like to go from simply running a pre-existing project under IronRuby to developing a whole new application demonstrating the seamless interoperability between IronRuby and .NET. In particular, we'll be using WPF (Windows Presentation Foundation) — the component of the .NET Framework stack used to create rich media and graphical interfaces.

Foundations of WPF
To reiterate, WPF is the engine in the .NET Framework responsible for rendering rich user interfaces and other media. It's not the only collection of libraries in the framework with the power to do this — Windows Forms does the trick, too — but it is the most powerful and flexible. Put simply, WPF really excels when you need to employ eye candy. It's all about creating impact. Whether you're presenting a document, video, a data entry form, some kind of data visualisation (which I am most hopeful for, especially in terms of IronRuby - more on that later) or chaining all of the above with some flashy animations, you're likely to find that WPF gives you the most power when developing any of these for a Windows target. Let's demonstrate this with an example. I give you what I like to consider the 'hello, world' of WPF applications: the analogue clock.

Today, over my lunch break, I created a WPF-based analogue clock using IronRuby... Any normal person would have just looked at their watch. - Twitter

The Sample Application: Click here to see this sample in full on GitHub. (The original post shows, at this point, the code listing that uses Windows Presentation Foundation from IronRuby to create a Clock class, the snippet that invokes the Clock class, and a screenshot of the resulting clock.) The above is by no means perfect (it was a lunch break), but I think it does the job of illustrating IronRuby's interoperability with WPF using a familiar data visualisation. I'm sure you'll want to dissect the code yourself, but allow me to step through the important bits. (By the way, feel free to run this through ir first to see what actually happens.) Now we're using IronRuby - unlike my previous post, where we took pure Ruby code and ran it through ir, the IronRuby interpreter, to demonstrate compatibility. The main thing of note is the very distinct parallels between .NET namespaces and Ruby modules, and between .NET classes and Ruby classes. I guess there's not much to say about it other than, at this point, you may as well be working with a purely Ruby graphics-drawing library.
You're instantiating .NET objects, but you're doing it with the standard .new method you know from Ruby as Object#new — although the root object of all your IronRuby objects isn't actually Object, it's System.Object. You're calling methods on these objects (and classes, for example in the call to System.Windows.Controls.Canvas.SetZIndex()) using the underscored, lowercase convention established for the Ruby language. The integration is so seamless. The fact that you're using a dynamic language on top of .NET's CLR is completely abstracted from you, allowing you to just build your software.

A Brief Note on Events
Events are a big part of developing client applications in .NET, as well as under every other environment I can think of. In case you aren't aware, event-driven programming is essentially the practice of telling your code to call a particular method, or other chunk of code (a delegate), when something happens at an unpredictable time. You can never predict when a user is going to click a button, move their mouse or perform any other kind of input, so the advent of the GUI is what necessitated event-driven programming. This is where one of my favourite aspects of the Ruby language, blocks, can really help us. In traditional C#, for instance, you may subscribe to an event (assign a block of code to execute when an event occurs) in one of two ways: by passing a reference to a named method, or by providing an anonymous code block. You'd be right for seeing the parallel here with Ruby's concept of blocks, Procs and lambdas. As demonstrated at the very end of this rather basic script, we are using .NET's System.Timers.Timer to (attempt to) update the clock every second (I know it's probably not the best way of doing this, but for example's sake). Note: Diverting a little from what I said above, the ticking of a clock is very predictable, yet we still use the event our Timer throws to do this updating, as one of many ways to perform that task outside of the main thread. You'll see that all that's needed to assign a block of code to be triggered on an event is to provide that block to the method named after the event as it is known to the CLR. The drawback to this is that it only allows the delegation of one code block to each event. You may use the add method to subscribe multiple handlers to that event, pushing each onto the end of a queue. Like so:

    def tick
      puts "tick tock"
    end

    timer.elapsed.add method(:tick)
    timer.elapsed.add proc { puts "tick tock" }

    tick_handler = lambda { puts "tick tock" }
    timer.elapsed.add(tick_handler)

The ability to just provide a block of code as an event handler helps IronRuby towards that very important term I keep throwing around: low ceremony. Anonymous methods are, of course, available in other more conventional .NET languages such as C# and VB but, as usual, feel ever so much more elegant and natural in IronRuby. Note: Whether it's a named method or an anonymous chunk o' code, the block you delegate to the handling of an event can take arguments - commonly, a sender object and some args.
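As a side-by-side for the events discussion above, here is roughly the equivalent C# for the Timer subscription - a hedged sketch, not code from the original post - showing the named-method and anonymous-delegate forms that IronRuby's blocks, procs and lambdas stand in for:

```csharp
using System;
using System.Timers;

class TickDemo
{
    // Named handler - the C# analogue of timer.elapsed.add method(:tick)
    static void Tick(object sender, ElapsedEventArgs e)
    {
        Console.WriteLine("tick tock");
    }

    static void Main()
    {
        Timer timer = new Timer(1000);  // fire roughly once per second

        timer.Elapsed += Tick;                                          // named method
        timer.Elapsed += delegate { Console.WriteLine("tick tock"); };  // anonymous block

        timer.Start();
        Console.ReadLine();             // keep the process alive while the timer ticks
    }
}
```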
Another Brief Note on Verbosity
Personally, I don't mind verbose chaining of references in my code as long as it doesn't interfere with performance - as evidenced in the example above. While I love clean code, there's a certain feeling of safety that comes with the terse explicitness of long-winded addressing and the describing of objects, as opposed to ambiguity (not unlike this sentence). However, when working with IronRuby, even I grow tired of typing System::Whatever::Something. Some people enjoy simply assuming namespaces and forgetting about them, regardless of the language they're using. Don't worry, IronRuby has you covered. It is completely possible, with a call to include, to bring the contents of a .NET-converted module into the context of your IronRuby code - just as you would if you wanted to bring in an 'organic' Ruby module. To refactor the style of the above example, I could place the following at the top of my Clock class:

    class Clock
      include System::Windows::Shapes
      include System::Windows::Media
      include System::Windows::Threading
      # and so on...

And by doing so, reduce calls to System::Windows::Shapes::Ellipse.new to simply Ellipse.new, or references to System::Windows::Threading::DispatcherPriority.Render to a friendlier DispatcherPriority.Render.

Conclusion
I hope by now you can understand better how IronRuby interoperates with .NET and how you can harness the power of the .NET Framework with the dynamic nature and elegant idioms of the Ruby language. The manner and parlance of Ruby that makes it a joy to work with sets of data is, of course, present in IronRuby — couple that with WPF's capability to produce great graphics quickly and easily, and I hope you can visualise the possibilities of data visualisation using these two things. Using IronRuby and WPF together to create visual representations of data and infographics is very exciting to me. Although today, with this project, we're only presenting one simple piece of information - the time - the potential is much grander. My day-to-day job is centred around software development and UI design, specifically in the realm of healthcare, and if you were to pay a visit to our office you would behold, directly above my desk, a large plasma TV with a constantly rotating, animated slideshow of charts and infographics that help members of our team do their jobs. It's an app powered by WPF which never fails to spark some conversation with visitors whose gaze has been hooked. If only it were written in IronRuby, the pleasantly low ceremony and reduced pre-processing time for my brain would have helped greatly. Edd Morgan's blog Related Links: Getting PHP and Ruby working on Windows Azure and SQL Azure

    Read the article

  • CodePlex Daily Summary for Thursday, May 22, 2014

CodePlex Daily Summary for Thursday, May 22, 2014New ProjectsPopular ReleasesTerraMap (Terraria World Map Viewer): TerraMap 1.0.5: Added support for the new Terraria v1.2.4 update. New items, walls, and tiles Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors. Added ability to find Enchanted Swords (in the stone) and Water Bolt books Fixed Issue 35206: Hightlight/Find doesn't work for Demon Altars Fixed finding Demon Hearts/Shadow Orbs Fixed installer not uninstalling older versions T...MDX Parser,Builder,DOM and OLAP visual controls with Writeback for Silverlight: Ranet.UILibrary.Olap-2.5.434.0: Issue hot fixed: 102.127666 - Incorrectly formed command EXCEPT () in MDX query for the exclusion of several elements among of the subordinates 102.132877 - Incorrectly generate MDX query with using VisualTotals, the function HIERARCHIZE - should be inside 102.132887 - If in the MemberChoice select an item with child, and then delete one of the child, the parent misses the result build the test project (...\Users\Public\Documents\Ranet.UILibrary.Olap-2.5 Samples\UILibrary.Olap\Cs\Ranet...R.NET: R.NET 1.5.13: R.NET 1.5.13 is a beta release towards R.NET 1.6. You are encouraged to use it now and give feedback. See the documentation for setup and usage instructions. Main changes for R.NET 1.5.13 Changed the nuget packaging to distribute via nuget.org at R.NET Community and R.NET FSharp Utility. Without entering into details, this was necessary to facilitate the distribution of the packages. You are strongly encouraged to use nuget to manage the dependency of your work on R.NET, rather than the bin...Adaptive Access Layers: AAL 2.0: Major rework with breaking changes. Much more flexible registration of implementation strategy and support for methods, properties and events.Google Analytics SDK for Windows 8 and Windows Phone: Google Analytics SDK 1.2.08: Recommended for Xaml/C# developers: Download the package through NuGet. Recommended for JS and C++ developers: Download the new native vsix (Visual Studio SDK) package above. NEW FEATURES & FIXESSee the full list of changes since the last public release SHOUT OUTSDacianMujdar for the pull request to add support for campaigns. aclassen for the pull request to add support for resolved phone models in the WP7 & 8 Silverlight versions. Alan Mendelevich for great open source library PhoneNam...DbSharp: DbSharpApplication (Binary files): A zip file that include DbSharpApplication.exe. Initial release.Multiwfn: Multiwfn 3.3.3: Multiwfn 3.3.3WebExtras: v1.4.0-Beta-1: Enh: Adding support for jQuery UI framework Enh: Adding support for jqPlot charting library Dropping dependency on MoreLinq library Note: Html.LabelForV2(...) extension method has now been deprecated. You should use Html.RequiredFieldLabelFor(...) extension method instead. This extension method will be removed in future versions.MISAO: Ver. 5.4: Fix bugs (Nicovideo viwer add-in) Add Masakari option (Nicovideo viwer add-in)QuickMon: Version 3.11: This release adds some major changes to the core monitoring engine. 1. Polling overrides: Each collector entry can specify a minimum time updating is allowed for it and dependent collector entries. 2.
Polling frequency sliding: Additional to polling overrides a collector entry can specify 'sliding' polling frequency if the state remains the same. This means the frequency slows down reducing overhead of polling on a stagnant resource. 3. The monitor pack has an overriding frequency. If used...Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed and also a template data cache issue.WordMat: WordMat v. 1.06: Check WordMat.blogspot.com for a complete description of new features.Wsus Package Publisher: Release v1.3.1405.17: Add Russian translation (thanks to VSharmanov) Fix a bug that make WPP to crash if the user click on "Connect/Reload" while the Report Tab is loading. Enhance the way WPP store the password for remote computers command.MoreTerra (Terraria World Viewer): More Terra 1.12.9: =========== = Compatibility = =========== Updated to account for new format 1.2.4.1 =========== = Issues = =========== all items have not been added. Some colors for new tiles may be off. I wanted to get this out so people have a usable program.LINQ to Twitter: LINQ to Twitter v3.0.3: Supports .NET 4.5x, Windows Phone 8.x, Windows 8.x, Windows Azure, Xamarin.Android, and Xamarin.iOS. New features include Status/Lookup, Mute APIs, and bug fixes. 100% Twitter API v1.1 coverage, Async, Portable Class Library (PCL).CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.26.0: Added access to the Release Notes during 'Check for Updates...'' Debug panels Added support for generic types members Members are grouped into 'Raw View' and 'Non-Public members' categories Implemented dedicated (array-like) view for Lists and Dictionaries http://download-codeplex.sec.s-msft.com/Download?ProjectName=csscriptnpp&DownloadId=846498ClosedXML - The easy way to OpenXML: ClosedXML 0.70.0: A lot of fixes. See history.SFDL.NET: SFDL.NET (2.2.9.2): Changelog: Neues Icon Xup.in CnL Plugin BugfixSEToolbox: SEToolbox 01.030.008 Release 1: Fixed cube editor failing to apply color to cubes. Added to cube editor, replace cube dialog, and Build Percent dialog. Corrected for hidden asteroid ore, allowing rare ore to show when importing an asteroid, or converting a 3d model to an asteroid (still appears to be limitations on rare ore in small asteroids). Allowed ore selection to Asteroid file import. (Can copy/import and convert existing asteroid to another ore). Added progress bars to common long running operations. Fixed ...New Projects<a href="jAvAsCrIpT&colon;alert&lpar;69&rpar;">CLICK HERE TO GET FREE MONEY</a>: <a href="jAvAsCrIpT&colon;alert&lpar;69&rpar;">CLICK HERE TO GET FREE MONEY</a> canopyazure: canopyazureCI&T ULS Log Viewer: Ferramenta para ajudar o desenvolvedor Sharepoint analisar os arquivos de log.EmissorCTE: E uma DLL que ira enviar, receber e cancelar o CTE, também ira fazer geração do DACTEF5 BIG-IP Local Traffic Manager: A .NET wrapper for F5 iControl service-enabled management API.Hydrodesktop Excel-Addin: HydroDesktop ExcelPage Manifest Extractor: Extract a Sharepoint Page Manifest/XMLProject Euler Solutions By multiple1902: Project Euler Solutions in FSharp. 
By Weisi Dai (multiple1902) <weisi@x-research.com>VerySimpleBackup: Simple command line tool for backup any directory (using Volume Shadow Copy), using archiving (ZIP) with optional password and copy it to FTP (cycle supported).Waf Music Manager: The Waf Music Manager is a simple and fast application that makes fun to manage the local music collection.?????-?????【??】?????????: ??????????????????,???,??????????、???????????????????。??????,????、????,??????! ?????-?????【??】?????????: ????????,???????????,??????????,????:??,????,???????? ??????????,????????。??????! ?????-?????【??】?????????: ?????????????????????,??????,???????????,????????????????,????????.??????. ?????-?????【??】?????????: ????????????,????,?????、???、?????,???????,?????,???????????100%。??????! ?????-?????【??】?????????: ???????????????????????,????,????“???、???、???”?????,?????,?????????????????。??????! ?????-?????【??】?????????: ??????、??????????????????,???????.??????????,????????。 ?????-?????【??】?????????: ??????????????,????????????,????????,???,???????????,????,????。?????,??????. ?????-?????【??】?????????: ??????、??????????????????,???????.??????????,????????。 ?????-?????【??】?????????: ????????,???????????,??????????,????:??,????,???????? ??????????,????????。??????! ??????-??????【??】??????????: ??????????????,????????????,????????,???,???????????,????,????。?????,??????. ?????-?????【??】?????????: ?????????????????????,??????,???????????,????????????????,????????.??????. ??????-??????【??】??????????: ???????????????:??????!?????!???:????、????、????、????。??,??????????!??????. ?????-?????【??】?????????: ???????????????,??????,??????,??????、??????,??????、??,????,??????! ?????-?????【??】?????????: ????????????,????,?????、???、?????,???????,?????,???????????100%。??????! ?????-?????【??】?????????: ?????1992?,????????????????。??????????????????????。????????????,????,????????! ?????-?????【??】?????????: ???????????????????????,???????????,??????,??????????????...????????。??????!

    Read the article

  • Flow-Design Cheat Sheet – Part I, Notation

    - by Ralf Westphal
    You want to avoid the pitfalls of object-oriented design? Then this is the right place to start. Use Flow-Oriented Analysis (FOA) and Flow-Oriented Design (FOD, or just FD for Flow-Design) to understand a problem domain and design a software solution. Flow-Orientation as described here is related to Flow-Based Programming, Event-Based Programming, Business Process Modelling, and even Event-Driven Architectures. But even though "thinking in flows" is not new, I found it helpful to deviate from those precursors for several reasons. Some aim at systems too big for the average programmer, some are concerned only with asynchronous processing, some are hardly concerned with programming at all. What I was looking for was a design method to help in software projects of any size, be they large or tiny, involving synchronous or asynchronous processing, local or distributed, running on the web or on the desktop or on a smartphone. That's why I took ideas from all of the above sources, and some others, and came up with Event-Based Components, which later got repositioned and renamed to Flow-Design.

    In the meantime this has generated some discussion (in the German developer community), and several teams have started to work with Flow-Design. I've also conducted quite a few trainings using Flow-Orientation for design. The results are very promising: developers find it much easier to design software using Flow-Orientation than OOAD-based object orientation. Since Flow-Orientation is moving fast and is not covered completely by a single source like a book, demand has increased for at least an overview of the current state of its notation. This page tries to answer that demand by briefly introducing/describing every notational element as well as its translation into C# source code. Take this as a cheat sheet to put next to your whiteboard when designing software. However, please do not expect any explanation as to the reasons behind Flow-Design elements. Details on why Flow-Design at all, and why in this specific way, can be found in the literature covering the topic. Here's a resource page on Flow-Design/Event-Based Components, if you're able to read German.

    Notation

    Connected Functional Units

    The basic element of any FOD is the functional unit (FU). Think of FUs as some kind of software code block processing data. For the moment, forget about classes, methods, "components", assemblies or whatever; see a FU as an abstract piece of code. Software then consists of just collaborating FUs. I'm using circles/ellipses to draw FUs, but if you like, use rectangles - whatever suits your whiteboard needs best. The purpose of FUs is to process input and produce output; FUs are transformational. However, FUs are not called and do not call other FUs. There is no dependency between FUs. Data just flows into a FU (input) and out of it (output); from where and where to is of no concern to a FU. This way FUs can be concatenated in arbitrary ways, and each FU can accept input from many sources and produce output for many sinks.

    Flows

    Connected FUs form a flow with a start and an end. Data enters a flow at a source, and leaves it through a sink. Think of sources and sinks as special FUs which connect wires to the environment of a network of FUs.

    Wiring Details

    Data flows into/out of FUs through wires. This is to allude to electrical engineering, which has long been working with composable parts. Wires are attached to FUs using pins. They are the entry/exit points for the data flowing along the wires. Input/output pins currently need not be drawn explicitly; this is to keep designing on a whiteboard simple and quick. Data flowing is of some type, so wires have a type attached to them. And pins have names. If there is only one input pin and one output pin on a FU, though, you don't need to mention them. The default is Process for a single input pin, and Result for a single output pin. But you're free to give even single pins different names. There is a shortcut in use to address a certain pin on a destination FU. The type of the wire is put in parentheses for two reasons: first, a "no-type" wire can easily be denoted this way; second, it is a natural way to describe tuples of data. To describe how much data is flowing, a star can be put next to the wire type.

    Nesting – Boards and Parts

    If more than 5 to 10 FUs need to be put in a flow, an FD starts to become hard to understand. To keep diagrams clutter-free, they can be nested: you can turn any FU into a flow. This leads to Flow-Designs with different levels of abstraction. A in the above illustration is a high-level functional unit; A.1 and A.2 are lower-level functional units. One of the purposes of Flow-Design is to be able to describe systems on different levels of abstraction and thus make it easier to understand them. Humans use abstraction/decomposition to get a grip on complexity. Flow-Design strives to support this and make levels of abstraction first-class citizens for programming. You can read the above illustration like this: functional units A.1 and A.2 detail what A is supposed to do. The whole of A's responsibility is decomposed into the smaller responsibilities A.1 and A.2. FU A thus does not do anything itself anymore! All A is responsible for is actually accomplished by the collaboration between A.1 and A.2.

    Since A now does nothing anymore except contain A.1 and A.2, functional units are divided into two categories: boards and parts. Boards just contain other functional units; their sole responsibility is to wire them up. A is a board. Boards thus depend on the functional units nested within them. This dependency is not of a functional nature, though. Boards are not dependent on services provided by nested functional units; they are just concerned with their interfaces, to be able to plug them together. Parts are the workhorses of flows. They contain the real domain logic. They actually transform input into output. However, they do not depend on other functional units. Please note the usage of source and sink in boards: they correspond to the input-pins and output-pins of the board.

    Implicit Dependencies

    Nesting functional units leads to a dependency tree. Boards depend on nested functional units; they are the inner nodes of the tree. Parts are independent; they are the leaves. Even though dependencies are the bane of software development, Flow-Design does not usually draw these dependencies. They are implicitly created by visually nesting functional units. And they are harmless: boards are so simple in their functionality that they are little affected by changes in the functional units they depend on. But functional units are implicitly dependent on more than nested functional units. They are also dependent on the data types of the wires attached to them. This is also natural and thus does not need to be made explicit. And it pertains mainly to parts being dependent. Since boards don't do anything with regard to a problem domain, they don't care much about data types; their infrastructural purpose just needs the types of input/output-pins to match.

    Explicit Dependencies

    You could say Flow-Orientation is about tackling complexity at its root cause: dependencies. "Natural" dependencies are depicted naturally, i.e. implicitly. And wherever possible, dependencies are not even created. Functional units don't know their collaborators within a flow. This is core to Flow-Orientation. That makes for high composability of functional units. A part is as independent of other functional units as a motor is from the rest of the car. And a board is as dependent on nested functional units as a motor is on a spark plug or a crankshaft. With Flow-Design, software development moves closer to how hardware is constructed. Implicit dependencies are not enough, though. Sometimes explicit dependencies make designs easier - as counterintuitive as this might sound. So FD notation needs a way to denote explicit dependencies. Data flows along wires, but data does not flow along dependency relations. Instead, dependency relations represent service calls: functional unit C depends on/calls services on functional unit S. If you want to be more specific, name the services next to the dependency relation. Although you should try to steer clear of explicit dependencies, they are fundamentally ok. See them as a way to add another dimension to a flow. Usually the functionality of the independent FU ("Customer repository" above) is orthogonal to the domain of the flow it is referenced by. If you like, emphasize this by using different shapes for dependent and independent FUs, like above. Such dependencies can be used to link in resources like databases or shared in-memory state. FUs can not only produce output but also have side effects. A common pattern for using such explicit dependencies is to hook a GUI into a flow as the source and/or the sink of data, which can be shortened. Treat FUs others depend on as boards (with a special non-FD API the dependent part is connected to), but do not embed them in the flow diagrams that depend on them.

    Attributes of Functional Units

    Creation and usage of functional units can be modified with attributes. So far the following have proven helpful:

    Singleton: FUs are by default multitons. FUs in the same or different flows with the same name refer to the same functionality, but to different instances. Think of functional units as objects that get instantiated anew wherever they appear in a design. Sometimes, though, it's helpful to reuse the same instance of a functional unit; this is always due to valuable state it holds. Signify this by annotating the FU with an "(S)".

    Multiton: FUs on which others depend are singletons by default. This is because they are usually introduced where shared state comes into play. If you want them to be multitons instead, mark them with an "(M)".

    Configurable: Some parts need to be configured before they can do their work in a flow. Annotate them with a "(C)" to have them initialized before any data items to be processed by them arrive. Do not assume any order in which FUs are configured; how such configuration happens is an implementation detail.

    Entry point: In each design there needs to be a single part where "it all starts" - the entry point for all processing, like Program.Main() in C# programs. Mark the entry point part with an "(E)". Quite often this will be the GUI part. How the entry point is started is an implementation detail; just consider it the first FU to start doing its job.

    Patterns / Standard Parts

    If more than a single wire is attached to an output-pin, that's called a split (or fork). The same data flows on all of the wires. Remember: Flow-Designs are synchronous by default. So a split does not mean data is processed in parallel afterwards. Processing still happens synchronously, and thus one branch after another. Do not assume any specific order of the processing on the different branches after the split. It is common to do a split and let only parts of the original data flow on through the branches. This effectively means a map is needed after a split. This map can be implicit or explicit. Although FUs can have multiple input-pins, it is preferable in most cases to combine input data from different branches using an explicit join. The default output of a join is a tuple of its input values. The default behavior of a join is to output a value whenever a new input is received; however, to produce its first output, a join needs an input on all of its input-pins. Other join behaviors can be: reset all inputs after an output, or only produce output if data arrives on certain input-pins.
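    To make the cheat sheet concrete, here is a minimal C# sketch of how the notation above is commonly translated into code in Event-Based Components: an input pin becomes a method (default name Process), an output pin becomes an event (default name Result), and a board wires pins together. The class names are illustrative only, and this is one possible mapping rather than the single official translation.

        using System;

        // Part: contains the actual domain logic.
        public class Reverse
        {
            // Input pin (default name: Process).
            public void Process(string text)
            {
                char[] chars = text.ToCharArray();
                Array.Reverse(chars);
                Result?.Invoke(new string(chars)); // data leaves through the output pin
            }

            // Output pin (default name: Result).
            public event Action<string> Result;
        }

        // Another part, used here as a sink.
        public class Print
        {
            public void Process(string text) => Console.WriteLine(text);
        }

        // Board: contains no domain logic; its sole responsibility is wiring.
        public class A
        {
            private readonly Reverse a1 = new Reverse();
            private readonly Print a2 = new Print();

            public A()
            {
                // Wire A.1's output pin to A.2's input pin. Subscribing a second
                // handler to a1.Result would be a split: the same data would then
                // flow into both branches, one after another.
                a1.Result += a2.Process;
            }

            // The board's input pin just forwards into the nested flow.
            public void Process(string text) => a1.Process(text);
        }

    Note how the wiring keeps parts ignorant of their collaborators: Reverse never references Print; only the board knows both, which is exactly the independence of parts described above.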

    Read the article

  • DotNetNuke + XPath = Custom navigation menu DNNMenu HTML render

    - by Rui Santos
    I'm developing a skin for DotNetNuke 5 using the DNN Done Right menu component by Mark Alan, which uses XSL-T to convert the XML sitemap into an HTML navigation. The XML sitemap outputs the following structure:

    <Root > <root > <node id="40" text="Home" url="http://localhost/dnn/Home.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="0" > <node id="58" text="Child1" url="http://localhost/dnn/Home/Child1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="1" > <keywords >Child1</keywords> <description >Child1</description> <node id="59" text="Child1 SubItem1" url="http://localhost/dnn/Home/Child1/Child1SubItem1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="2" > <keywords >Child1 SubItem1</keywords> <description >Child1 SubItem1</description> </node> <node id="60" text="Child1 SubItem2" url="http://localhost/dnn/Home/Child1/Child1SubItem2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="0" only="0" depth="2" > <keywords >Child1 SubItem2</keywords> <description >Child1 SubItem2</description> </node> <node id="61" text="Child1 SubItem3" url="http://localhost/dnn/Home/Child1/Child1SubItem3.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="2" > <keywords >Child1 SubItem3</keywords> <description >Child1 SubItem3</description> </node> </node> <node id="65" text="Child2" url="http://localhost/dnn/Home/Child2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="1" > <keywords >Child2</keywords> <description >Child2</description> <node id="66" text="Child2 SubItem1" url="http://localhost/dnn/Home/Child2/Child2SubItem1.aspx" enabled="1" selected="0" breadcrumb="0" first="1" last="0" only="0" depth="2" > <keywords >Child2 SubItem1</keywords> <description >Child2 SubItem1</description> </node> <node id="67" text="Child2 SubItem2" url="http://localhost/dnn/Home/Child2/Child2SubItem2.aspx" enabled="1" selected="0" breadcrumb="0" first="0" last="1" only="0" depth="2" > <keywords >Child2 SubItem2</keywords> <description >Child2 SubItem2</description> </node> </node> </node> </root> </Root>

    My goal is to render this XML block into the following HTML navigation structure, using only ULs, LIs, and so on:

    <ul id="topnav"> <li> <a href="#" class="home">Home</a> <!-- Parent Node - Depth0 --> <div class="sub"> <ul> <li><h2><a href="#">Child1</a></h2></li> <!-- Parent Node 1 - Depth1 --> <li><a href="#">Child1 SubItem1</a></li> <!-- ChildNode - Depth2 --> <li><a href="#">Child1 SubItem2</a></li> <!-- ChildNode - Depth2 --> <li><a href="#">Child1 SubItem3</a></li> <!-- ChildNode - Depth2 --> </ul> <ul> <li><h2><a href="#">Child2</a></h2></li> <!-- Parent Node 2 - Depth1 --> <li><a href="#">Child2 SubItem1</a></li> <!-- ChildNode - Depth2 --> <li><a href="#">Child2 SubItem2</a></li> <!-- ChildNode - Depth2 --> </ul> </div> </li> </ul>

    Can anyone help with the XSL coding? I'm just starting out with XSL.
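    The recursion such a stylesheet needs is easier to see in procedural code first. Below is a rough C# LINQ to XML sketch (the MenuRenderer class and its methods are illustrative inventions, not part of DNN or the DNN Done Right component) that walks the <node> tree above and emits plain nested ULs; the h2/div.sub decoration from the target markup is left out for brevity. An XSL-T solution would mirror the same shape: a template matching "node" that outputs an li and applies itself to the child node elements, wrapped in a ul.

        using System.Linq;
        using System.Text;
        using System.Xml.Linq;

        static class MenuRenderer
        {
            // Appends a nested <ul> for the <node> children of the given element, if any.
            static void RenderChildren(XElement node, StringBuilder html)
            {
                var children = node.Elements("node").ToList();
                if (children.Count == 0) return;

                html.Append("<ul>");
                foreach (var child in children)
                {
                    html.Append("<li>")
                        .Append($"<a href=\"{child.Attribute("url")?.Value}\">{child.Attribute("text")?.Value}</a>");
                    RenderChildren(child, html); // recurse for submenus
                    html.Append("</li>");
                }
                html.Append("</ul>");
            }

            // The sitemap nests <Root><root><node>...</node></root></Root>.
            public static string Render(XDocument sitemap)
            {
                var html = new StringBuilder("<ul id=\"topnav\">");
                foreach (var top in sitemap.Root.Element("root").Elements("node"))
                {
                    html.Append($"<li><a href=\"{top.Attribute("url")?.Value}\">{top.Attribute("text")?.Value}</a>");
                    RenderChildren(top, html);
                    html.Append("</li>");
                }
                return html.Append("</ul>").ToString();
            }
        }

    Calling MenuRenderer.Render(XDocument.Parse(sitemapXml)) on the sitemap above produces the Home/Child1/Child2 hierarchy as nested lists, which CSS can then style into a drop-down menu.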

    Read the article

  • Allowing pop-ups and downloads with Flex

    - by ShadowVariable
    I'm building a Flex app that is basically a wrapper for a few websites. One of them is a Google Docs site, and I'm trying to get Flex to allow downloads or pop-ups or something that will let me do that. I've tried a whole bunch of solutions online and none of them have worked out. Here's the code so far:

    <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="100%" height="100%" creationComplete="onCreationComplete()"> <s:layout> <s:HorizontalLayout/> </s:layout> <fx:Style source="style.css"/> <fx:Script> <![CDATA[ // MyHTMLHost (below) is compiled as its own class, so no include is needed here. private function onCreationComplete():void { // ... other stuff ... } ]]> </fx:Script> <fx:Declarations> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <s:BorderContainer width="100%" height="100%" backgroundColor="#87BED0" styleName="container"> <s:Panel x="188" y="17" width="826" height="112" borderAlpha="0.15" chromeColor="#0C5A74" color="#FFFFFF" cornerRadius="20" dropShadowVisible="false" enabled="true" title="Customer Service Control Panel"> <s:controlBarContent/> <s:Button id="home" x="13" y="10" height="44" label="Phones" click="myViewStack.selectedChild=Home;" enabled="true" icon="@Embed('assets/iconmonstr-mobile-phone-6-icon-32.png')"/> <s:Button id="liveagent" x="131" y="10" height="44" label="Live Agent" click="myViewStack.selectedChild=live_agent;" icon="@Embed('assets/iconmonstr-speech-bubble-11-icon-32.png')"/> <s:Button id="bigcommerce" x="260" y="10" width="158" height="44" label="Big Commerce" click="myViewStack.selectedChild=bigcommerce_home;" icon="@Embed('assets/iconmonstr-coin-6-icon-48.png')"/> <s:Button id="faq" x="436" y="10" width="88" height="44" label="FAQ" click="myViewStack.selectedChild=freqaskquestions;" fontFamily="Arial" icon="@Embed('assets/iconmonstr-help-4-icon-32.png')"/> <s:Button id="call" x="540" y="10" width="131" height="44" label="Google Docs" click="myViewStack.selectedChild=call_notes;" icon="@Embed('assets/iconmonstr-text-file-4-icon-32.png')"/> <s:Button id="hoot" x="684" y="10" width="122" height="44" label="HootSuite" click="myViewStack.selectedChild=hoot_suite;" icon="@Embed('assets/iconmonstr-facebook-icon-32.png')"/> </s:Panel> <mx:ViewStack id="myViewStack" x="0" y="140" width="100%" height="100%" borderStyle="solid"> <s:NavigatorContent id="Home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/vbx/messages/inbox" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="bigcommerce_home"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://www.mtbakervapor.com/admin" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="live_agent"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/liveagent/agent/#login" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="freqaskquestions"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="http://mbvphone.mtbakervapor.org/liveagent/" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="call_notes"> <s:BorderContainer width="100%" height="100%"> <mx:HTML id="html" x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://drive.google.com/a/mtbakervapor.com/" htmlHost="{new MyHTMLHost()}" /> </s:BorderContainer> </s:NavigatorContent> <s:NavigatorContent id="hoot_suite"> <s:BorderContainer width="100%" height="100%"> <mx:HTML x="0" y="0" width="100%" height="100%" borderVisible="false" horizontalScrollPolicy="off" location="https://hootsuite.com/login" /> </s:BorderContainer> </s:NavigatorContent> </mx:ViewStack> <s:Image x="0" y="0" width="180" height="140" scaleMode="letterbox" smooth="false" source="assets/mbvlogo_black.png"/> </s:BorderContainer> </s:WindowedApplication>

    and the custom class code:

    package { import flash.html.HTMLHost; import flash.html.HTMLWindowCreateOptions; import flash.html.HTMLLoader; public class MyHTMLHost extends HTMLHost { public function MyHTMLHost(defaultBehaviors:Boolean=true) { super(defaultBehaviors); } override public function createWindow(windowCreateOptions:HTMLWindowCreateOptions):HTMLLoader { // give JS calls and HREFs that open a new window their own native window instead of blocking them return HTMLLoader.createRootWindow(); } } }

    Any help would be appreciated.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds. We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.

    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach. To analyze an object, you have to build it first — a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code; it should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. And by definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.

    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub: the possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. And if the real object has more than one symbol pointing at the same data item (we call these aliased symbols), all data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows: A command-line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack; the entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.

    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues. The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product, and the idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command-line option; any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. And a stub object should be identifiable as such: in the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive undertaking, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general-purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.

    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real 0.019708910
        user 0.010101680
        sys 0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high-level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top-level setup rule creates the proto area and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product; in contrast, the stub proto is used to build the product, and then thrown away. A new target called stub is added to library Makefiles; this rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto; these rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub-object-related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.

    And so, I was very surprised to find that the wall-clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub-object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down. Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so the threads would be mostly idle; that's a plausible explanation, but system stats didn't really support it, and besides, the timings between the stub and non-stub cases were just too suspiciously identical. Were our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall-clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • How to shoot yourself in the foot (DO NOT Read in the office)

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/21/how-to-shoot-yourself-in-the-foot-do-not-read.aspx

    Let me make it absolutely clear - the following is merely collated by your Geek from http://www.codeproject.com/Lounge.aspx?msg=3917012#xx3917012xx, and it is very, very, very funny, so you read it in the presence of others at your own risk. So here is the list - you have been warned!

    C You shoot yourself in the foot.   C++ You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying "That's me, over there."   FORTRAN You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you have no exception-handling facility.   Modula-2 After realizing that you can't actually accomplish anything in this language, you shoot yourself in the head.   COBOL USEing a COLT 45 HANDGUN, AIM gun at LEG.FOOT, THEN place ARM.HAND.FINGER on HANDGUN.TRIGGER and SQUEEZE. THEN return HANDGUN to HOLSTER. CHECK whether shoelace needs to be retied.   Lisp You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds...   BASIC Shoot yourself in the foot with a water pistol. On big systems, continue until entire lower body is waterlogged.   Forth Foot yourself in the shoot.   APL You shoot yourself in the foot; then spend all day figuring out how to do it in fewer characters.   Pascal The compiler won't let you shoot yourself in the foot.   Snobol If you succeed, shoot yourself in the left foot. If you fail, shoot yourself in the right foot.   HyperTalk Put the first bullet of the gun into foot left of leg of you. Answer the result.   Prolog You tell your program you want to be shot in the foot. The program figures out how to do it, but the syntax doesn't allow it to explain.   370 JCL You send your foot down to MIS with a 4000-page document explaining how you want it to be shot. Three years later, your foot comes back deep-fried.   FORTRAN-77 You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you still can't do exception-processing.   Modula-2 (alternative) You perform a shooting on what might be currently a foot with what might be currently a bullet shot by what might currently be a gun.   BASIC (compiled) You shoot yourself in the foot with a BB using a SCUD missile launcher.   Visual Basic You'll really only appear to have shot yourself in the foot, but you'll have so much fun doing it that you won't care.   Forth (alternative) BULLET DUP3 * GUN LOAD FOOT AIM TRIGGER PULL BANG! EMIT DEAD IF DROP ROT THEN (This takes about five bytes of memory, executes in two to ten clock cycles on any processor and can be used to replace any existing function of the language as well as in any future words). (Welcome to bottom up programming - where you, too, can perform compiler pre-processing instead of writing code)   APL (alternative) You hear a gunshot and there's a hole in your foot, but you don't remember enough linear algebra to understand what happened. or @#&^$%&%^ foot   Pascal (alternative) Same as Modula-2 except that the bullet is not the right type for the gun and your hand is blown off.   
Snobol (alternative) You grab your foot with your hand, then rewrite your hand to be a bullet. The act of shooting the original foot then changes your hand/bullet into yet another foot (a left foot).   Prolog (alternative) You attempt to shoot yourself in the foot, but the bullet, failing to find its mark, backtracks to the gun, which then explodes in your face.   COMAL You attempt to shoot yourself in the foot with a water pistol, but the bore is clogged, and the pressure build-up blows apart both the pistol and your hand. or draw_pistol aim_at_foot(left) pull_trigger hop(swearing)   Scheme As Lisp, but none of the other appendages are aware of this happening.   Algol You shoot yourself in the foot with a musket. The musket is aesthetically fascinating and the wound baffles the adolescent medic in the emergency room.   Ada If you are dumb enough to actually use this language, the United States Department of Defense will kidnap you, stand you up in front of a firing squad and tell the soldiers, "Shoot at the feet." or The Department of Defense shoots you in the foot after offering you a blindfold and a last cigarette. or After correctly packaging your foot, you attempt to concurrently load the gun, pull the trigger, scream and shoot yourself in the foot. When you try, however, you discover that your foot is of the wrong type. or After correctly packing your foot, you attempt to concurrently load the gun, pull the trigger, scream, and confidently aim at your foot knowing it is safe. However the cordite in the round does an Unchecked Conversion, fires and shoots you in the foot anyway.   Eiffel   You create a GUN object, two FOOT objects and a BULLET object. The GUN passes both the FOOT objects a reference to the BULLET. The FOOT objects increment their hole counts and forget about the BULLET. A little demon then drives a garbage truck over your feet and grabs the bullet (both of it) on the way. Smalltalk You spend so much time playing with the graphics and windowing system that your boss shoots you in the foot, takes away your workstation and makes you develop in COBOL on a character terminal. or You send the message shoot to gun, with selectors bullet and myFoot. A window pops up saying Gunpowder doesNotUnderstand: spark. After several fruitless hours spent browsing the methods for Trigger, FiringPin and IdealGas, you take the easy way out and create ShotFoot, a subclass of Foot with an additional instance variable bulletHole. Object Oriented Pascal You perform a shooting on what might currently be a foot with what might currently be a bullet fired from what might currently be a gun.   PL/I You consume all available system resources, including all the offline bullets. The Data Processing & Payroll Department doubles its size, triples its budget, acquires four new mainframes and drops the original one on your foot. Postscript foot bullets 6 locate loadgun aim gun shoot showpage or It takes the bullet ten minutes to travel from the gun to your foot, by which time you're long since gone out to lunch. The text comes out great, though.   PERL You stab yourself in the foot repeatedly with an incredibly large and very heavy Swiss Army knife. or You pick up the gun and begin to load it. The gun and your foot begin to grow to huge proportions and the world around you slows down, until the gun fires. It makes a tiny hole, which you don't feel. Assembly Language You crash the OS and overwrite the root disk. The system administrator arrives and shoots you in the foot. 
    After a moment of contemplation, the administrator shoots himself in the foot and then hops around the room rabidly shooting at everyone in sight. or You try to shoot yourself in the foot only to discover you must first reinvent the gun, the bullet, and your foot. or The bullet travels to your foot instantly, but it took you three weeks to load the round and aim the gun.   BCPL You shoot yourself somewhere in the leg -- you can't get any finer resolution than that. Concurrent Euclid You shoot yourself in somebody else's foot.   Motif You spend days writing a UIL description of your foot, the trajectory, the bullet and the intricate scrollwork on the ivory handles of the gun. When you finally get around to pulling the trigger, the gun jams.   Powerbuilder While attempting to load the gun you discover that the LoadGun system function is buggy; as a workaround you tape the bullet to the outside of the gun and unsuccessfully attempt to fire it with a nail. In frustration you club your foot with the butt of the gun and explain to your client that this approximates the functionality of shooting yourself in the foot and that the next version of Powerbuilder will fix it.   Standard ML By the time you get your code to typecheck, you're using a shoot to foot yourself in the gun.   MUMPS You shoot 583149 AK-47 teflon-tipped, hollow-point, armour-piercing bullets into even-numbered toes on odd-numbered feet of everyone in the building -- with one line of code. Three weeks later you shoot yourself in the head rather than try to modify that line.   Java You locate the Gun class, but discover that the Bullet class is abstract, so you extend it and write the missing part of the implementation. Then you implement the ShootAble interface for your foot, and recompile the Foot class. The interface lets the bullet call the doDamage method on the Foot, so the Foot can damage itself in the most effective way. Now you run the program, and call the doShoot method on the instance of the Gun class. First the Gun creates an instance of Bullet, which calls the doFire method on the Gun. The Gun calls the hit(Bullet) method on the Foot, and the instance of Bullet is passed to the Foot. But this causes an IllegalHitByBullet exception to be thrown, and you die.   Unix You shoot yourself in the foot or % ls foot.c foot.h foot.o toe.c toe.o % rm * .o rm: .o: No such file or directory % ls %   370 JCL (alternative) You shoot yourself in the head just thinking about it.   DOS JCL You first find the building you're in in the phone book, then find your office number in the corporate phone book. Then you have to write this down, then describe, in cubits, your exact location, in relation to the door (right hand side thereof). Then you need to write down the location of the gun (loading it is a proprietary utility), then you load it, and the COBOL program, and run them, and, with luck, it may be run tonight.   VMS   $ MOUNT/DENSITY=.45/LABEL=BULLET/MESSAGE="BYE" BULLET::BULLET$GUN SYS$BULLET $ SET GUN/LOAD/SAFETY=OFF/SIGHT=NONE/HAND=LEFT/CHAMBER=1/ACTION=AUTOMATIC/ LOG/ALL/FULL SYS$GUN_3$DUA3:[000000]GUN.GNU $ SHOOT/LOG/AUTO SYS$GUN SYS$SYSTEM:[FOOT]FOOT.FOOT   %DCL-W-ACTIMAGE, error activating image GUN -CLI-E-IMGNAME, image file $3$DUA240:[GUN]GUN.EXE;1 -IMGACT-F-NOTNATIVE, image is not an OpenVMS Alpha AXP image or %SYS-F-FTSHT, foot shot (fifty lines of traceback omitted) sh, csh, etc. You can't remember the syntax for anything, so you spend five hours reading manual pages, then your foot falls asleep. 
You shoot the computer and switch to C.

Apple System 7
Double click the gun icon and a window opens giving a selection for guns, target areas, plus balloon help with medical remedies, and assorted sound effects. Click the "shoot" button and a small bomb appears with the note "Error of Type 1 has occurred."

Windows 3.1
Double click the gun icon and wait. Eventually a window opens giving a selection for guns, target areas, plus balloon help with medical remedies, and assorted sound effects. Click the "shoot" button and a small box appears with the note "Unable to open Shoot.dll, check that path is correct."

Windows 95
Your gun is not compatible with this OS and you must buy an upgrade and install it before you can continue. Then you will be informed that you don't have enough memory.

CP/M
I remember when shooting yourself in the foot with a BB gun was a big deal.

DOS
You finally found the gun, but can't locate the file with the foot for the life of you.

MSDOS
You shoot yourself in the foot, but can unshoot yourself with add-on software.

Access
You try to point the gun at your foot, but it shoots holes in all your Borland distribution diskettes instead.

Paradox
Not only can you shoot yourself in the foot, your users can too.

dBase
You squeeze the trigger, but the bullet moves so slowly that by the time your foot feels the pain, you've forgotten why you shot yourself anyway.
or
You buy a gun. Bullets are only available from another company and are promised to work so you buy them. Then you find out that the next version of the gun is the one scheduled to actually shoot bullets.

DBase IV, V1.0
You pull the trigger, but it turns out that the gun was a poorly designed hand grenade and the whole building blows up.

SQL
You cut your foot off, send it out to a service bureau and when it returns, it has a hole in it but will no longer fit the attachment at the end of your leg.
or
Insert into Foot
Select Bullet
From Gun.Hand
Where Chamber = 'LOADED'
And Trigger = 'PULLED'

Clipper
You grab a bullet, get ready to insert it in the gun so that you can shoot yourself in the foot, and discover that the gun that the bullet fits has not yet been built, but should be arriving in the mail _REAL_SOON_NOW_.

Oracle
The menus for coding foot_shooting have not been implemented yet, and you can't do foot shooting in SQL.

English
You put your foot in your mouth, then bite it off. (For those who don't know, English is a McDonnell Douglas/PICK query language which allegedly requires 110% of system resources to run happily.)

Revelation [an implementation of the PICK Operating System]
You'll be able to shoot yourself in the foot just as soon as you figure out what all these bullets are for.

FlagShip
Starting at the top of your head, you aim the gun at yourself repeatedly until, half an hour later, the gun is finally pointing at your foot and you pull the trigger. A new foot with a hole in it appears, but you can't work out how to get rid of the old one, and your gun doesn't work anymore.

FidoNet
You put your foot in your mouth, then echo it internationally.

PicoSpan [a UNIX-based computer conferencing system]
You can't shoot yourself in the foot because you're not a host.
or (host variation)
Whenever you shoot yourself in the foot, someone opens a topic in policy about it.

Internet
You put your foot in your mouth, shoot it, then spam the bullet so that everybody gets shot in the foot.

troff
troff -ms -Hdrwp | lpr -Pwp2 &
.*place bullet in footer
.B
.NR FT +3i
.in 4
.bu
Shoot!
.br
.sp
.in -4
.br
.bp
.NR HD -2i
.*

Genetic Algorithms
You create 10,000 strings describing the best way to shoot yourself in the foot. By the time the program produces the optimal solution, humans have evolved wings and the problem is moot.

CSP (Communicating Sequential Processes)
You only fail to shoot everything that isn't your foot.

MS-SQL Server
MS-SQL Server's gun comes pre-loaded with an unlimited supply of Teflon-coated bullets, and it only has two discernible features: the muzzle and the trigger. If that wasn't enough, MS-SQL Server also puts the gun in your hand, applies local anesthetic to the skin of your forefinger and stitches it to the gun's trigger. Meanwhile, another process has set up a spinal block to numb your lower body. It will then proceed to surgically remove your foot, cryogenically freeze it for preservation, and attach it to the muzzle of the gun so that no matter where you aim, you will shoot your foot. In order to avoid shooting yourself in the foot, you need to unstitch your trigger finger, remove your foot from the muzzle of the gun, and have it surgically reattached. Then you probably want to get some crutches and go out to buy a book on SQL Server Performance Tuning.

Sybase
Sybase's gun requires assembly, and you need to go out and purchase your own clip and bullets to load the gun. Assembly is complicated by the fact that Sybase has hidden the gun behind a big stack of reference manuals, but it hasn't told you where that stack is. While you were off finding the gun, assembling it, buying bullets, etc., Sybase was also busy surgically removing your foot and cryogenically freezing it for preservation. Instead of attaching it to the muzzle of the gun, though, it packed your foot on dry ice and sent it UPS-Ground to an unnamed hookah bar somewhere in the Middle East. In order to shoot your foot, you must modify your gun with a GPS system for targeting and hire some guy named "Indy" to find the hookah bar and wire the coordinates back to you. By this time, you've probably become so daunted at the tasks that stand between you and shooting your foot that you hire a guy who's read all the books on Sybase to help you shoot your foot. If you're lucky, he'll be smart enough both to find your foot and to stop you from shooting it.

Magic software
You spend 1 week looking up the correct syntax for GUN. When you find it, you realise that GUN will not let you shoot your own foot. It will allow you to shoot almost anything but your foot. You then decide to build your own gun. You can't use the standard barrel since this will only allow for standard bullets, which will not fire if the barrel is pointed at your foot. After four weeks, you have created your own custom gun. It blows up in your hand without warning, because you failed to initialise the safety catch and it doesn't know whether the initial state is "0", 0, NULL, "ZERO", 0.0, 0,0, "0.0", or "0,00". You fix the problem with your remaining hand by nesting 12 safety catches, and then decide to build the gun without a safety catch. You then shoot the management and retire to a happy life where you code in languages that will allow you to shoot your foot in under 10 days.

Firefox
Lets you shoot yourself in as many feet as you'd like, while using multiple great addons!

IE
A moving target in terms of standard ammunition size, and it doesn't always work properly with non-Microsoft ammunition, so sometimes you shoot something other than your foot. However, it's the corporate world's standard foot-shooting apparatus.
Hackers seem to enjoy rigging websites up to trigger cascading foot-shooting failures.

Windows 98
About the same as Windows 95 in terms of overall bullet capacity and triggering mechanisms. Includes the updated DirectShot API. A new version, Windows 98 SE, was released later on to support USB guns.

WPF
You get your baseball glove and a ball and you head out to your backyard, where you throw balls to your pitchback. Then your unkempt-haired-cargo-shorts-and-sandals-with-white-socks-wearing neighbor uses XAML to sculpt your arm into a gun, the ball into a bullet and the pitchback into your foot. By now, however, only the neighbor can get it to work, and he's only around from 6:30 PM - 3:30 AM.

LOGO
You very carefully lay out the trajectory of the bullet. Then you start the gun, which fires very slowly. You walk precisely to the point where the bullet will travel and wait, but just before it gets to you, your class time is up and one of the other kids has already used the system to hack into Sony's PS3 network.

Flash
Someone has designed a beautiful-looking gun that anyone can shoot their feet with for free. It weighs six hundred pounds. All kinds of people are shooting themselves in the feet, and sending the link to everyone else so that they can too. That is, except for the criminals, who are all stealing iOS devices that the gun won't work with.

APL
It's (mostly) all Greek to me.

Lisp
Place ((gun in ((hand sight (foot then shoot))))) (Lots of Insipid Stupid Parentheses)

Apple OS/X and iOS
Once a year, Steve Jobs returns from sick leave to tell millions of unwavering fans how they will be able to shoot themselves in the foot differently this year. They retweet and blog about it ad nauseam, and wait in line to be the first to experience "shoot different".

Windows ME
Usually fails, even at shooting you in the foot.
or
Yo dawg, I heard you like shooting yourself in the foot. So I put a gun in your gun, so you can shoot yourself in the foot while you shoot yourself in the foot. (Okay, I'm not especially proud of this joke.)

Windows 2000
Now you really do have to log in before you are allowed to shoot yourself in the foot.

Windows XP
You thought you learned your lesson: don't use Windows ME. Then, along came this new creature, built on top of Windows NT! So you spend the next couple of days installing antivirus software, patches and service packs, just so you can get that driver to install, and then proceed to shoot yourself in the foot.

Windows Vista
Newer! Glossier! Shootier!

Windows 7
The bullets come out a lot smoother.

Active Directory
Each bullet now has an attached Bullet Identifier, and can be uniquely identified. Policies can be applied to dictate fragmentation, and the gun will occasionally have a confusing delay after the trigger has been pulled.

Python
You try to use import foot; foot.shoot() only to realize that's only available in 3.0, to which you can't yet upgrade from 2.7 because of all those extension libs lacking support.

Solaris
Shoots best when used on SPARC hardware, but still runs the trigger GUI under Java. After weeks of learning the appropriate STOP command to prevent the trigger from automatically being pressed on boot, you think you've got it under control. Then the one time you ever use dtrace, it hits a bug that fires the gun.

MySQL
The feature that allows you to shoot yourself in the foot has been in development for about 6 years, and they are adding it into the next version, which is coming out REAL SOON NOW, promise!
But you can always check it out of source control and try it yourself (just not in any environment where data integrity is important, because it will probably explode).

PostgreSQL
Allows you to have a smug look on your face while you shoot yourself in the foot, because those MySQL guys STILL don't have that feature.

NoSQL
Barrel? Who needs a barrel? Just put the bullet on your foot, and strike it with a hammer. See? It's so much simpler and more efficient that way. You can even strike multiple bullets in one swing if you swing with a good enough arc, because hammers are easy to use. Getting them to synchronize is a little difficult, though.

Eclipse
There are about a dozen different packages for shooting yourself in the foot, with weird interdependencies on outdated components. Once you finally navigate the morass and get one installed, you then have something to look at while you shoot yourself in the foot with that package: you can watch the screen redraw.

Outlook
Makes it really easy to let everyone know you shot yourself in the foot!

Shooting yourself in the foot using delegates
You really need to shoot yourself in the foot but you hate firearms (you don't want any dependency on the specifics of shooting), so you delegate it to somebody else. You don't care how it is done as long as it shoots your foot. You can do it asynchronously in case you know you may faint, so you are called back/slapped in the face by your shooter/friend (or background worker) when everything is done.

C#
You prepare the gun and the bullet, carefully modeling all of the physics of a bullet traveling through a foot. Just before you're about to pull the trigger, you stumble on System.Windows.BodyParts.Foot.ShootAt(System.Windows.Firearms.IGun gun) in the extended framework, realize you just wasted the entire afternoon, and shoot yourself in the head.

PHP
<?php require("foot_safety_check.php"); ?>
<!DOCTYPE HTML>
<html>
<head> <!--Lower!-->
<title>Shooting me in the foot</title>
</head>
<body> <!--LOWER!!!-->
<leg> <!--OK, I made this one up...-->
<footer>
<?php echo (dungSift($_SERVER['HTTP_USER_AGENT'], "ie"))?("Your foot is safe, but you might want to wear a hard hat!"):("<div class=\"shot\">BANG!</div>"); ?>
</footer>
</leg>
</body>
</html>


  • How to remove the hint in the terminal?

    - by jiangchengwu
    As a normal user, when I run commands like ps or netstat, the terminal prints this hint: (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) I know that redirecting STDERR to /dev/null removes the hint, but I want to know whether there is another way to remove it, such as editing a configuration file.
    [deploy@storage2 ~]$ ps -V
    (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
    procps version 3.2.7
    [deploy@storage2 ~]$ ps -V 2>/dev/null
    procps version 3.2.7
    My OS info:
    [deploy@storage2 ~]$ uname -a
    Linux storage2 2.6.18-243.el5 #1 SMP Mon Feb 7 18:47:27 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
    [deploy@storage2 ~]$ lsb_release
    LSB Version: :core-3.1-amd64:core-3.1-ia32:core-3.1-noarch:graphics-3.1-amd64:graphics-3.1-ia32:graphics-3.1-noarch
    [deploy@storage2 ~]$ netstat -V
    (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
    net-tools 1.60
    netstat 1.42 (2001-04-15) Fred Baumgarten, Alan Cox, Bernd Eckenfels, Phil Blundell, Tuan Hoang and others
    +NEW_ADDRT +RTF_IRTT +RTF_REJECT +FW_MASQUERADE +I18N
    AF: (inet) +UNIX +INET +INET6 +IPX +AX25 +NETROM +X25 +ATALK +ECONET +ROSE
    HW: +ETHER +ARC +SLIP +PPP +TUNNEL +TR +AX25 +NETROM +X25 +FR +ROSE +ASH +SIT +FDDI +HIPPI +HDLC/LAPB
    There is more info from strace:
    [deploy@storage2 ~]$ strace ps -V
    execve("/bin/ps", ["ps", "-V"], [/* 27 vars */]) = 0 brk(0) = 0x929a000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=99752, ...}) = 0 mmap2(NULL, 99752, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7fde000 close(3) = 0 open("/lib/libnsl.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0 \241\210\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=101404, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fdd000 mmap2(0x887000, 92104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x887000 mmap2(0x89a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x12) = 0x89a000 mmap2(0x89c000, 6088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x89c000 close(3) = 0 open("/lib/libdl.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0Pzt\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=16428, ...}) = 0 mmap2(0x747000, 12408, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x747000 mmap2(0x749000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0x749000 close(3) = 0 open("/lib/libm.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\20\204p\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=208352, ...}) = 0 mmap2(0x705000, 155760, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x705000 mmap2(0x72a000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x24) = 0x72a000 close(3) = 0 open("/lib/libcrypt.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340\246q\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=45288, ...}) = 0 mmap2(0x71a000, 201020, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7fab000 mmap2(0xf7fb4000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8) = 
0xfffffffff7fb4000 mmap2(0xf7fb6000, 155964, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fb6000 close(3) = 0 open("/lib/libutil.so.1", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0 \n\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=13420, ...}) = 0 mmap2(NULL, 12428, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7fa7000 mmap2(0xf7fa9000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1) = 0xfffffffff7fa9000 close(3) = 0 open("/lib/libpthread.so.0", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0@(s\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=129716, ...}) = 0 mmap2(0x72e000, 90596, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x72e000 mmap2(0x741000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13) = 0x741000 mmap2(0x743000, 4580, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x743000 close(3) = 0 open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\340?]\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=1611564, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fa6000 mmap2(0x5be000, 1328580, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x5be000 mmap2(0x6fd000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13f) = 0x6fd000 mmap2(0x700000, 9668, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x700000 close(3) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7fa5000 set_thread_area(0xffd61bb4) = 0 mprotect(0x6fd000, 8192, PROT_READ) = 0 mprotect(0x741000, 4096, PROT_READ) = 0 mprotect(0xf7fa9000, 4096, PROT_READ) = 0 mprotect(0xf7fb4000, 4096, PROT_READ) = 0 mprotect(0x72a000, 4096, PROT_READ) = 0 mprotect(0x749000, 4096, PROT_READ) = 0 mprotect(0x89a000, 4096, PROT_READ) = 0 mprotect(0x5ba000, 4096, PROT_READ) = 0 munmap(0xf7fde000, 99752) = 0 set_tid_address(0xf7fa5708) = 20214 set_robust_list(0xf7fa5710, 0xc) = 0 futex(0xffd61f74, FUTEX_WAKE_PRIVATE, 1) = 0 rt_sigaction(SIGRTMIN, {0x4007323d0, [], 0}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x10000004007322e0, [], 0}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=-4284481536, rlim_max=67108864*1024}) = 0 uname({sys="Linux", node="storage2", ...}) = 0 readlink("/proc/self/exe", "/bin/ps"..., 260) = 7 brk(0) = 0x929a000 brk(0x92bb000) = 0x92bb000 open("/bin/ps", O_RDONLY|O_LARGEFILE) = 3 _llseek(3, -12, [711660], SEEK_END) = 0 read(3, "\274U!\253\2\0\0\0\224\237\t\0", 12) = 12 mmap2(NULL, 634880, PROT_READ, MAP_SHARED, 3, 0x13) = 0xfffffffff7f0a000 mmap2(NULL, 630784, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7e70000 close(3) = 0 futex(0x74a06c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 geteuid32() = 501 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 socket(PF_FILE, SOCK_STREAM, 0) = 3 fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"...}, 110) = -1 ENOENT (No such file or directory) close(3) = 0 open("/etc/nsswitch.conf", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=1696, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7ff6000 read(3, "#\n# /etc/nsswitch.conf\n#\n# An ex"..., 4096) = 1696 read(3, "", 4096) = 0 close(3) = 0 munmap(0xf7ff6000, 4096) = 0 open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=99752, ...}) = 0 mmap2(NULL, 99752, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7fde000 close(3) = 0 open("/lib/libnss_files.so.2", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300\30\0\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=46680, ...}) = 0 mmap2(NULL, 41616, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7e65000 mmap2(0xf7e6e000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8) = 0xfffffffff7e6e000 close(3) = 0 mprotect(0xf7e6e000, 4096, PROT_READ) = 0 munmap(0xf7fde000, 99752) = 0 open("/etc/passwd", O_RDONLY) = 3 fcntl64(3, F_GETFD) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=2166, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7ff6000 read(3, "root:x:0:0:root:/root:/bin/bash\n"..., 4096) = 2166 close(3) = 0 munmap(0xf7ff6000, 4096) = 0 mkdir("/tmp/pdk-deploy/", 0755) = -1 EEXIST (File exists) mkdir("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a", 0755) = -1 EEXIST (File exists) open("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a/libperl.so", O_RDONLY|O_LARGEFILE) = 3 close(3) = 0 open("/tmp/pdk-deploy/fcb734befe617ec3ae1edc38da810a5a/libperl.so", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\300!\2\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0664, st_size=1264090, ...}) = 0 mmap2(NULL, 1140104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xfffffffff7d4e000 mmap2(0xf7e5a000, 45056, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x10b) = 0xfffffffff7e5a000 close(3) = 0 rt_sigaction(SIGFPE, {0x1000000000000001, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|SA_SIGINFO|0x3d61cb8, (nil)}, {SIG_DFL, ~[HUP INT ILL ABRT BUS SEGV USR2 PIPE ALRM TERM STOP TSTP TTIN TTOU XCPU WINCH IO PWR SYS RTMIN RT_1 RT_2 RT_4 RT_5 RT_8 RT_9 RT_11 RT_12 RT_13 RT_16 RT_17 RT_18 RT_22 RT_24 RT_25 RT_26 RT_27 RT_28 RT_29 RT_30 RT_31], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 getuid32() = 501 geteuid32() = 501 getgid32() = 502 getegid32() = 502 open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=56454896, ...}) = 0 mmap2(NULL, 2097152, PROT_READ, MAP_PRIVATE, 3, 0) = 0xfffffffff7b4e000 mmap2(NULL, 241664, PROT_READ, MAP_PRIVATE, 3, 0x13ec) = 0xfffffffff7b13000 mmap2(NULL, 4096, PROT_READ, MAP_PRIVATE, 3, 0x1466) = 0xfffffffff7b12000 close(3) = 0 mmap2(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xfffffffff7af1000 time(NULL) = 1348210009 readlink("/proc/self/exe", "/bin/ps"..., 4095) = 7 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 _llseek(0, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd618a8) = -1 EINVAL (Invalid argument) _llseek(1, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) ioctl(2, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd618a8) = -1 EINVAL (Invalid argument) _llseek(2, 0, 0xffd618d0, SEEK_CUR) = -1 ESPIPE (Illegal seek) open("/dev/null", O_RDONLY|O_LARGEFILE) = 3 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61978) = -1 ENOTTY (Inappropriate ioctl for device) _llseek(3, 0, [0], SEEK_CUR) = 0 fcntl64(3, 
F_SETFD, FD_CLOEXEC) = 0 rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 brk(0x92dc000) = 0x92dc000 getppid() = 20212 stat64("/opt/ActivePerl-5.8/site/lib/sitecustomize.pl", 0xffd61560) = -1 ENOENT (No such file or directory) close(3) = 0 open("/usr/lib/.khostd/.hostconf", O_RDONLY|O_LARGEFILE) = 3 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61828) = -1 ENOTTY (Inappropriate ioctl for device) _llseek(3, 0, [0], SEEK_CUR) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=334, ...}) = 0 fcntl64(3, F_SETFD, FD_CLOEXEC) = 0 read(3, "bindport=9001\ntrustip=221.122.57"..., 4096) = 334 read(3, "", 4096) = 0 close(3) = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20215 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) fstat64(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0 read(3, (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) "tcp 0 0 0.0.0.0:9001"..., 4096) = 109 read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- fstat64(3, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0 close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [TRAP BUS FPE USR1 CHLD CONT TTOU VTALRM IO RTMIN], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_RESTART|SA_RESETHAND|0x22302d0}, 8) = 0 waitpid(20215, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20215 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [TRAP BUS FPE USR1 CHLD CONT TTOU VTALRM IO RTMIN], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 chdir("/usr/lib/.khostd") = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20218 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|0x3d61850, (nil)}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 
RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 waitpid(20218, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20218 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 chdir("/home/deploy") = 0 stat64("/etc/cron.hourly/hichina", {st_mode=S_IFREG|0755, st_size=711660, ...}) = 0 pipe([3, 4]) = 0 pipe([5, 6]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0) = 20230 close(6) = 0 close(4) = 0 read(5, "", 4) = 0 close(5) = 0 ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xffd61868) = -1 EINVAL (Invalid argument) _llseek(3, 0, 0xffd61890, SEEK_CUR) = -1 ESPIPE (Illegal seek) read(3, "procps version 3.2.7\n", 4096) = 21 read(3, "", 4096) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- close(3) = 0 rt_sigaction(SIGHUP, {0x1, [], SA_RESTORER|SA_STACK|SA_RESTART|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|0x3d61850, (nil)}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGINT, {0x1, [], SA_STACK|0x129b3d8}, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 0}, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, 8) = 0 waitpid(20230, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0) = 20230 rt_sigaction(SIGHUP, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGINT, {SIG_DFL, [HUP INT], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 rt_sigaction(SIGQUIT, {SIG_DFL, ~[HUP INT ILL TRAP KILL SEGV ALRM TERM STKFLT CHLD TSTP TTOU RT_1 RT_2 RT_3 RT_6 RT_9 RT_11 RT_14 RT_15 RT_16 RT_17 RT_20 RT_22], SA_NOCLDSTOP|SA_NOCLDWAIT}, NULL, 8) = 0 write(1, "procps version 3.2.7\n", 21procps version 3.2.7 ) = 21 munmap(0xf7af1000, 135168) = 0 munmap(0xf7e70000, 630784) = 0 munmap(0xf7f0a000, 634880) = 0 munmap(0xf7d4e000, 1140104) = 0 exit_group(0) = ? [ Process PID=20214 runs in 32 bit mode. ] Thank you very much.
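
    Since the hint goes to STDERR (the ps -V 2>/dev/null run above already proves that), one low-effort workaround is to wrap the two commands in shell functions in ~/.bashrc. This is a sketch of that idea, not something from the original question, and note the trade-off: it discards everything the tools print on stderr, including genuine errors.

    # ~/.bashrc (sketch): silence the non-root hint that ps and netstat print on stderr.
    # "command" bypasses the function itself, so these wrappers don't recurse.
    ps()      { command ps "$@" 2>/dev/null; }
    netstat() { command netstat "$@" 2>/dev/null; }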


  • StackOverflowException thrown often when a .NET application is built in Debug mode

    - by user1487950
    I have an application that calls an external web service frequently. When I try to debug it in Visual Studio, it often throws a StackOverflowException at the web service call point. When built in Release mode, the exception is thrown only occasionally. I checked the call stack, and there appears to be no recursive call. Can you please suggest what to look at? Thank you very much. Call stack attached.
    [In a sleep, wait, or join] mscorlib.dll!System.Threading.WaitHandle.InternalWaitOne(System.Runtime.InteropServices.SafeHandle waitableSafeHandle, long millisecondsTimeout, bool hasThreadAffinity, bool exitContext) + 0x2b bytes mscorlib.dll!System.Threading.WaitHandle.WaitOne(int millisecondsTimeout, bool exitContext) + 0x2d bytes System.dll!System.Net.NetworkAddressChangePolled.CheckAndReset() + 0x9d bytes System.dll!System.Net.NclUtilities.LocalAddresses.get() + 0x49 bytes System.dll!System.Net.WebProxyScriptHelper.myIpAddress() + 0x27 bytes [Native to Managed Transition] System.dll!System.Net.WebProxyScriptHelper.MyMethodInfo.Invoke(object target, System.Reflection.BindingFlags bindingAttr, System.Reflection.Binder binder, object[] args, System.Globalization.CultureInfo culture) + 0x6b bytes MTOqoHCT.dll!JScript 0.myIpAddress(object this, Microsoft.JScript.Vsa.VsaEngine vsa Engine, object arguments) + 0x91 bytes MTOqoHCT.dll!JScript 0.FindProxyForURL(object this, Microsoft.JScript.Vsa.VsaEngine vsa Engine, object arguments, object url, object host) + 0x3c6e bytes MTOqoHCT.dll!__WebProxyScript.__WebProxyScript.ExecuteFindProxyForURL(object url, object host) + 0x11d bytes [Native to Managed Transition] Microsoft.JScript.dll!System.Net.VsaWebProxyScript.CallMethod(object targetObject, string name, object[] args) + 0x11a bytes Microsoft.JScript.dll!System.Net.VsaWebProxyScript.Run(string url, string host) + 0x74 bytes [Native to Managed Transition] [Managed to Native Transition] mscorlib.dll!System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage msg, int methodPtr, bool fExecuteInContext) + 0x1ef bytes mscorlib.dll!System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage msg) + 0xf bytes mscorlib.dll!System.Runtime.Remoting.Messaging.ServerObjectTerminatorSink.SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage reqMsg) + 0x66 bytes mscorlib.dll!System.Runtime.Remoting.Messaging.ServerContextTerminatorSink.SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage reqMsg) + 0x8a bytes mscorlib.dll!System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessageCallback(object[] args) + 0x94 bytes mscorlib.dll!System.Threading.Thread.CompleteCrossContextCallback(System.Threading.InternalCrossContextDelegate ftnToCall, object[] args) + 0x8 bytes [Native to Managed Transition] [Managed to Native Transition] mscorlib.dll!System.Runtime.Remoting.Channels.CrossContextChannel.SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage reqMsg) + 0xa7 bytes mscorlib.dll!System.Runtime.Remoting.Channels.ChannelServices.SyncDispatchMessage(System.Runtime.Remoting.Messaging.IMessage msg) + 0x92 bytes mscorlib.dll!System.Runtime.Remoting.Channels.CrossAppDomainSink.DoDispatch(byte[] reqStmBuff, System.Runtime.Remoting.Messaging.SmuggledMethodCallMessage smuggledMcm, out System.Runtime.Remoting.Messaging.SmuggledMethodReturnMessage smuggledMrm) + 0xed bytes mscorlib.dll!System.Runtime.Remoting.Channels.CrossAppDomainSink.DoTransitionDispatchCallback(object[]
args) + 0x8a bytes mscorlib.dll!System.Threading.Thread.CompleteCrossContextCallback(System.Threading.InternalCrossContextDelegate ftnToCall, object[] args) + 0x8 bytes [Appdomain Transition] mscorlib.dll!System.Runtime.Remoting.Channels.CrossAppDomainSink.DoTransitionDispatch(byte[] reqStmBuff, System.Runtime.Remoting.Messaging.SmuggledMethodCallMessage smuggledMcm, out System.Runtime.Remoting.Messaging.SmuggledMethodReturnMessage smuggledMrm) + 0x74 bytes mscorlib.dll!System.Runtime.Remoting.Channels.CrossAppDomainSink.SyncProcessMessage(System.Runtime.Remoting.Messaging.IMessage reqMsg) + 0xa3 bytes mscorlib.dll!System.Runtime.Remoting.Proxies.RemotingProxy.CallProcessMessage(System.Runtime.Remoting.Messaging.IMessageSink ms, System.Runtime.Remoting.Messaging.IMessage reqMsg, System.Runtime.Remoting.Contexts.ArrayWithSize proxySinks, System.Threading.Thread currentThread, System.Runtime.Remoting.Contexts.Context currentContext, bool bSkippingContextChain) + 0x50 bytes mscorlib.dll!System.Runtime.Remoting.Proxies.RemotingProxy.InternalInvoke(System.Runtime.Remoting.Messaging.IMethodCallMessage reqMcmMsg, bool useDispatchMessage, int callType) + 0x1d5 bytes mscorlib.dll!System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(System.Runtime.Remoting.Messaging.IMessage reqMsg) + 0x66 bytes mscorlib.dll!System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(ref System.Runtime.Remoting.Proxies.MessageData msgData, int type) + 0xee bytes System.dll!System.Net.NetWebProxyFinder.GetProxies(System.Uri destination, out System.Collections.Generic.IList<string> proxyList) + 0x83 bytes System.dll!System.Net.AutoWebProxyScriptEngine.GetProxies(System.Uri destination, out System.Collections.Generic.IList<string> proxyList, ref int syncStatus) + 0x84 bytes System.dll!System.Net.WebProxy.GetProxiesAuto(System.Uri destination, ref int syncStatus) + 0x2e bytes System.dll!System.Net.ProxyScriptChain.GetNextProxy(out System.Uri proxy) + 0x2e bytes System.dll!System.Net.ProxyChain.ProxyEnumerator.MoveNext() + 0x98 bytes System.dll!System.Net.ServicePointManager.FindServicePoint(System.Uri address, System.Net.IWebProxy proxy, out System.Net.ProxyChain chain, ref System.Net.HttpAbortDelegate abortDelegate, ref int abortState) + 0x120 bytes System.dll!System.Net.HttpWebRequest.FindServicePoint(bool forceFind) + 0xb1 bytes System.dll!System.Net.HttpWebRequest.GetRequestStream(out System.Net.TransportContext context) + 0x247 bytes System.dll!System.Net.HttpWebRequest.GetRequestStream() + 0xe bytes System.Web.Services.dll!System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(string methodName, object[] parameters) + 0xc0 bytes Gfinet.Config.dll!Gfinet.Config.Service.cfg_webservice.addOrUpdateProperties(string string, int intVal, Gfinet.Config.Service.PropertiesDataM[] propertiesDataMs) + 0xa3 bytes Gfinet.Config.dll!Gfinet.Config.Service.WSServiceImpl.AddOrUpdateProperties(int setId, Gfinet.Config.Service.PropertiesDataM[] properties) + 0x46 bytes [Native to Managed Transition] Gfinet.Config.dll!Gfinet.Config.Service.ServiceAspect.InvocationHandler(object target, System.Reflection.MethodBase method, object[] parameters) + 0x49e bytes Gfinet.Config.dll!Gfinet.Config.DynamicProxy.DynamicProxyImpl.Invoke(System.Runtime.Remoting.Messaging.IMessage message) + 0x110 bytes mscorlib.dll!System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(ref System.Runtime.Remoting.Proxies.MessageData msgData, int type) + 0xee bytes Tici.Kraps.Services.dll!Tici.Kraps.Services.Configuration.GFINetConfiguration.StoreElement(string 
application, string category, string id, string elementValue, bool save) Line 303 + 0x55 bytes C# Tici.Kraps.Services.dll!Tici.Kraps.Services.Configuration.GFINetConfiguration.SaveAllInternal() Line 582 + 0x6e bytes C# Tici.Kraps.Services.dll!Tici.Kraps.Services.Configuration.GFINetConfiguration.SaveAll(bool async) Line 434 + 0x8 bytes C# Tici.Kraps.Services.dll!Tici.Kraps.Services.Configuration.GFINetConfiguration.SaveAll() Line 406 + 0xa bytes C# Tici.Kraps.Services.dll!Tici.Kraps.Services.Container.Persistor.Save() Line 59 + 0xc bytes C# Spark.exe!Tici.Kraps.RibbonShell.OnBtnSaveWorkspaceItemClick(object sender, DevExpress.XtraBars.ItemClickEventArgs e) Line 642 + 0xf bytes C# DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.BarItem.OnClick(DevExpress.XtraBars.BarItemLink link) + 0x108 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.BarBaseButtonItem.OnClick(DevExpress.XtraBars.BarItemLink link) + 0x47 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.BarItemLink.OnLinkClick() + 0x245 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.BarItemLink.OnLinkAction(DevExpress.XtraBars.BarLinkAction action, object actionArgs) + 0xb3 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.BarButtonItemLink.OnLinkAction(DevExpress.XtraBars.BarLinkAction action, object actionArgs) + 0x47e bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.BarItemLink.OnLinkActionCore(DevExpress.XtraBars.BarLinkAction action, object actionArgs) + 0x82 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.ViewInfo.BarSelectionInfo.ClickLink(DevExpress.XtraBars.BarItemLink link) + 0x85 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.ViewInfo.BarSelectionInfo.UnPressLink(DevExpress.XtraBars.BarItemLink link) + 0x1e5 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.Ribbon.Handler.BaseRibbonHandler.OnUnPressItem(DevExpress.Utils.DXMouseEventArgs e, DevExpress.XtraBars.Ribbon.ViewInfo.RibbonHitInfo hitInfo) + 0xa7 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.Ribbon.Handler.BaseRibbonHandler.OnUnPress(DevExpress.Utils.DXMouseEventArgs e, DevExpress.XtraBars.Ribbon.ViewInfo.RibbonHitInfo hitInfo) + 0x5f bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.Ribbon.Handler.BaseRibbonHandler.OnMouseUp(DevExpress.Utils.DXMouseEventArgs e) + 0x19a bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.Ribbon.Handler.RibbonHandler.OnMouseUp(DevExpress.Utils.DXMouseEventArgs e) + 0x47 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.Ribbon.RibbonControl.OnMouseUp(System.Windows.Forms.MouseEventArgs e) + 0x95 bytes System.Windows.Forms.dll!System.Windows.Forms.Control.WmMouseUp(ref System.Windows.Forms.Message m, System.Windows.Forms.MouseButtons button, int clicks) + 0x2d1 bytes System.Windows.Forms.dll!System.Windows.Forms.Control.WndProc(ref System.Windows.Forms.Message m) + 0x93a bytes DevExpress.Utils.v11.2.dll!DevExpress.Utils.Controls.ControlBase.WndProc(ref System.Windows.Forms.Message m) + 0x81 bytes DevExpress.XtraBars.v11.2.dll!DevExpress.XtraBars.Ribbon.RibbonControl.WndProc(ref System.Windows.Forms.Message m) + 0x85 bytes System.Windows.Forms.dll!System.Windows.Forms.Control.ControlNativeWindow.OnMessage(ref System.Windows.Forms.Message m) + 0x13 bytes System.Windows.Forms.dll!System.Windows.Forms.Control.ControlNativeWindow.WndProc(ref System.Windows.Forms.Message m) + 0x31 bytes System.Windows.Forms.dll!System.Windows.Forms.NativeWindow.Callback(System.IntPtr hWnd, int msg, System.IntPtr wparam, System.IntPtr lparam) + 0x96 bytes [Native to Managed 
Transition] [Managed to Native Transition] DevExpress.Utils.v11.2.dll!DevExpress.Utils.Win.Hook.ControlWndHook.WindowProc(System.IntPtr hWnd, int message, System.IntPtr wParam, System.IntPtr lParam) + 0x159 bytes [Native to Managed Transition] [Managed to Native Transition] System.Windows.Forms.dll!System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(System.IntPtr dwComponentID, int reason, int pvLoopData) + 0x287 bytes System.Windows.Forms.dll!System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(int reason, System.Windows.Forms.ApplicationContext context) + 0x16c bytes System.Windows.Forms.dll!System.Windows.Forms.Application.ThreadContext.RunMessageLoop(int reason, System.Windows.Forms.ApplicationContext context) + 0x61 bytes System.Windows.Forms.dll!System.Windows.Forms.Application.Run(System.Windows.Forms.Form mainForm) + 0x31 bytes Tici.Kraps.Services.dll!Tici.Kraps.Services.Container.DefaultApplicationRunner.Run() Line 41 + 0x17 bytes C# Kraps.exe!Tici.Kraps.Program.Main() Line 105 + 0x9 bytes C# [Native to Managed Transition] [Managed to Native Transition] mscorlib.dll!System.AppDomain.ExecuteAssembly(string assemblyFile, System.Security.Policy.Evidence assemblySecurity, string[] args) + 0x6d bytes Microsoft.VisualStudio.HostingProcess.Utilities.dll!Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() + 0x2a bytes mscorlib.dll!System.Threading.ThreadHelper.ThreadStart_Context(object state) + 0x63 bytes mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool ignoreSyncCtx) + 0xb0 bytes mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state) + 0x2c bytes mscorlib.dll!System.Threading.ThreadHelper.ThreadStart() + 0x44 bytes [Native to Managed Transition]
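
    For what it's worth, the frames near the top of the stack (System.Net.AutoWebProxyScriptEngine.GetProxies, WebProxyScriptHelper.myIpAddress, and the JScript FindProxyForURL) all belong to .NET's automatic proxy-script (PAC/WPAD) discovery, which runs before the request's service point is resolved. A hedged experiment rather than a confirmed fix: bypass that machinery for the web-service client and see whether the overflow disappears. "CfgWebService" below is a placeholder for the generated SoapHttpClientProtocol-derived class, not a name from the question:

    // Sketch: skip PAC/WPAD proxy-script evaluation for one client.
    var service = new CfgWebService();
    service.Proxy = new System.Net.WebProxy(); // empty proxy = direct connection, no script lookup

    The same can be tried application-wide in app.config with <system.net><defaultProxy enabled="false"/></system.net>.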


  • YouTube API Security Error in Flex

    - by 23tux
    Hi, I've tried to use the YouTube API within a Flex project, but I got this error:
    *** Security Sandbox Violation *** SecurityDomain 'http://www.youtube.com/apiplayer?version=3' tried to access incompatible context 'file:///Users/YouTubePlayer/bin-debug/YouTubePlayer.html'
    Here are the two files:
    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/halo" minWidth="1024" minHeight="768" xmlns:youtube="youtube.*" creationComplete="init();">
    <fx:Script>
    <![CDATA[
    [Bindable] private var ready:Boolean = true;
    private function init():void { Security.allowInsecureDomain("*"); Security.allowDomain("*"); Security.allowDomain('www.youtube.com'); Security.allowDomain('youtube.com'); Security.allowDomain('s.ytimg.com'); Security.allowDomain('i.ytimg.com'); }
    private function changing():void { /* trace("currentTime: " + player.getCurrentTime()); trace("startTime: " + player.startTime); trace("stopTime: " + player.stopTime); timeSlider.value = player.getCurrentTime() */ }
    private function startPlaying():void { player.play(); }
    private function checkStartSlider():void { if(startSlider.value > stopSlider.value) stopSlider.value = startSlider.value + 1; }
    private function checkStopSlider():void { if(stopSlider.value < startSlider.value) startSlider.value = stopSlider.value - 1; }
    ]]>
    </fx:Script>
    <s:VGroup>
    <youtube:Player id="player" videoID="DVFvcVuWyfE" change="changing();" ready="ready=true"/>
    <s:HGroup> <s:Button label="play" click="startPlaying();" /> </s:HGroup>
    <s:HGroup> <s:HSlider id="timeSlider" width="250" minimum="0" maximum="{player.stopTime}" snapInterval=".01" enabled="{ready}"/> <s:Label id="currentTimeLbl" text="current time: 0" /> </s:HGroup>
    <s:HGroup> <s:HSlider id="startSlider" width="250" minimum="0" maximum="{player.stopTime}" snapInterval=".01" change="checkStartSlider();" enabled="{ready}" value="0"/> <s:Label id="startTimeLbl" text="start time: {player.startTime}" /> </s:HGroup>
    <s:HGroup> <s:HSlider id="stopSlider" width="250" minimum="0" maximum="{player.stopTime}" snapInterval=".01" change="checkStopSlider();" enabled="{ready}" value="{player.stopTime}"/> <s:Label id="stopTimeLbl" text="stop time: {player.stopTime}" /> </s:HGroup>
    </s:VGroup>
    </s:Application>
    Here is the player package:
    package youtube {
    import flash.display.Loader;
    import flash.events.Event;
    import flash.events.TimerEvent;
    import flash.net.URLRequest;
    import flash.system.Security;
    import flash.utils.Timer;
    import mx.core.UIComponent;
    [Event(name="change", type="flash.events.Event")]
    [Event(name="ready", type="flash.events.Event")]
    public class Player extends UIComponent {
    private var player:Object;
    private var loader:Loader;
    private var _startTime:Number = 0;
    private var _stopTime:Number = 0;
    private var _videoID:String;
    private var metadataTimer:Timer = new Timer(200);
    private var playTimer:Timer = new Timer(200);
    public function Player() {
    // The player SWF file on www.youtube.com needs to communicate with your host
    // SWF file. Your code must call Security.allowDomain() to allow this
    // communication.
    Security.allowInsecureDomain("*");
    Security.allowDomain("*");
    // This will hold the API player instance once it is initialized.
    loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.INIT, onLoaderInit);
    loader.load(new URLRequest("http://www.youtube.com/apiplayer?version=3"));
    }
    private function onLoaderInit(event:Event):void {
    addChild(loader);
    loader.content.addEventListener("onReady", onPlayerReady);
    loader.content.addEventListener("onError", onPlayerError);
    loader.content.addEventListener("onStateChange", onPlayerStateChange);
    loader.content.addEventListener("onPlaybackQualityChange", onVideoPlaybackQualityChange);
    }
    private function onPlayerReady(event:Event):void {
    // Event.data contains the event parameter, which is the Player API ID
    trace("player ready:", Object(event).data);
    // Once this event has been dispatched by the player, we can use
    // cueVideoById, loadVideoById, cueVideoByUrl and loadVideoByUrl
    // to load a particular YouTube video.
    player = loader.content;
    // Set appropriate player dimensions for your application
    player.setSize(0, 0);
    }
    private function onPlayerError(event:Event):void {
    // Event.data contains the event parameter, which is the error code
    trace("player error:", Object(event).data);
    }
    private function onPlayerStateChange(event:Event):void {
    // Event.data contains the event parameter, which is the new player state
    trace("player state:", Object(event).data);
    }
    private function onVideoPlaybackQualityChange(event:Event):void {
    // Event.data contains the event parameter, which is the new video quality
    trace("video quality:", Object(event).data);
    }
    [Bindable] public function get videoID():String { return _videoID; }
    public function set videoID(value:String):void { _videoID = value; }
    [Bindable] public function get stopTime():Number { return _stopTime; }
    public function set stopTime(value:Number):void { _stopTime = value; }
    [Bindable] public function get startTime():Number { return _startTime; }
    public function set startTime(value:Number):void { _startTime = value; }
    public function play():void {
    if(_videoID!="") {
    player.loadVideoById(_videoID, 0);
    // add the event listener, so that an event is dispatched every 200 milliseconds
    metadataTimer.addEventListener(TimerEvent.TIMER, metadataTimeHandler);
    // if the timer is running, stop and reset it
    if(metadataTimer.running) metadataTimer.reset(); else metadataTimer.start();
    }
    }
    private function metadataTimeHandler(e:TimerEvent):void {
    if(player.getDuration() > 0) {
    startTime = 0;
    stopTime = player.getDuration();
    metadataTimer.reset();
    metadataTimer.stop();
    metadataTimer.removeEventListener(TimerEvent.TIMER, metadataTimeHandler);
    player.playVideo();
    playTimer.addEventListener(TimerEvent.TIMER, playTimerHandler);
    dispatchEvent(new Event("ready"));
    }
    }
    private function playTimerHandler(e:TimerEvent):void {
    if(getCurrentTime() > _stopTime) { seekTo(startTime); }
    dispatchEvent(new Event(Event.CHANGE));
    }
    public function getCurrentTime():Number {
    if(!player.getCurrentTime()) return 0; else return player.getCurrentTime();
    }
    public function seekTo(time:uint):void { player.seekTo(time); }
    }
    }
    Hope someone can help. thx, tux
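
    A note on the two sandboxes named in the error message: the API player is loaded from http://www.youtube.com, while the host SWF runs from file:///Users/YouTubePlayer/bin-debug/, and Security.allowDomain() calls cannot bridge the local-filesystem/remote boundary. Two things worth trying (suggestions, not a confirmed fix): run the app from a web server instead of opening the bin-debug HTML file directly, or register the output folder as trusted with a Flash Player trust file. On Mac OS X the per-user trust directory is conventionally ~/Library/Preferences/Macromedia/Flash Player/#Security/FlashPlayerTrust/; a file there, say youtube.cfg (the file name is arbitrary), lists one trusted path per line:

    /Users/YouTubePlayer/bin-debug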


  • Magento My Account Layout XML Problem

    - by Remy
    Hi there, I'm having issues getting the customer.xml layout file to work properly for the customer's "my account" pages. The navigation links and the previously ordered items that are usually on the left hand side of the page won't show up on the page, but if I change the reference name to "content" in the xml file, it shows up (except it's obviously then on the right hand side). I've checked the template it's referencing (2columns-left.phtml), and the getChildHtml('left') is there in the correct position. The block that's causing the problem:
    <customer_account> <!-- Mage_Customer -->
        <reference name="root">
            <action method="setTemplate"><template>page/2columns-left.phtml</template></action>
        </reference>
        <reference name="left">
            <action method="unsetChild"><name>catalog.navigation.all</name></action>
            <action method="unsetChild"><name>callout.sendcard</name></action>
            <action method="unsetChild"><name>callout.specialorder</name></action>
            <block type="customer/account_navigation" name="customer_account_navigation" before="-" template="customer/account/navigation.phtml">
                <action method="addLink" translate="label" module="customer"><name>account</name><path>customer/account/</path><label>Account Dashboard</label></action>
                <action method="addLink" translate="label" module="customer"><name>account_edit</name><path>customer/account/edit/</path><label>Account Information</label></action>
                <action method="addLink" translate="label" module="customer"><name>address_book</name><path>customer/address/</path><label>Address Book</label></action>
            </block>
            <block type="sales/reorder_sidebar" name="sale.reorder.sidebar" as="reorder" template="sales/reorder/sidebar.phtml"/>
            <remove name="tags_popular"/>
        </reference>
    </customer_account>
    This was basically copied straight over from another one of our sites where this works 100%. I've tried everything I can think of (changing the name of the reference in both the template and the layout xml, for example) to no avail. The templates that the layout is referencing are obviously working because they do show up when put into the "content" area. This installation of Magento is version 1.3.1.1. I appreciate any advice you have to give me...
    Update: I tried changing the reference to "global_messages", and it doesn't show there either. It only seems to work in the "content" section.
    Update 2: These are the results of using the "showLayout=page" query string on the page when used with Alan Storm's very handy debugging module (which you'll find in his answer below).
<?xml version="1.0"?> <layout><block type="page/html" name="root" output="toHtml" template="page/3columns.phtml"> <block type="page/html_head" name="head" as="head"> <action method="addJs"> <script>prototype/prototype.js</script> </action> <action method="addJs"> <script>prototype/validation.js</script> </action> <action method="addJs"> <script>paypoint/validation.js</script> </action> <action method="addJs"> <script>scriptaculous/builder.js</script> </action> <action method="addJs"> <script>scriptaculous/effects.js</script> </action> <action method="addJs"> <script>scriptaculous/dragdrop.js</script> </action> <action method="addJs"> <script>scriptaculous/controls.js</script> </action> <action method="addJs"> <script>scriptaculous/slider.js</script> </action> <action method="addJs"> <script>varien/js.js</script> </action> <action method="addJs"> <script>varien/form.js</script> </action> <action method="addJs"> <script>varien/menu.js</script> </action> <action method="addJs"> <script>mage/translate.js</script> </action> <action method="addJs"> <script>mage/cookies.js</script> </action> <action method="addCss"> <stylesheet>css/reset.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/boxes.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/clears.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/menu.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/calendar-blue.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/styles.css</stylesheet> </action> <action method="addItem"> <type>skin_css</type> <name>css/iestyles.css</name> <params/> <if>IE</if> </action> <action method="addItem"> <type>skin_css</type> <name>css/ie7.css</name> <params/> <if>IE 7</if> </action> <action method="addItem"> <type>skin_css</type> <name>css/ie7minus.css</name> <params/> <if>lt IE 7</if> </action> <action method="addItem"> <type>js</type> <name>lib/ds-sleight.js</name> <params/> <if>lt IE 7</if> </action> <action method="addItem"> <type>js</type> <name>varien/iehover-fix.js</name> <params/> <if>lt IE 7</if> </action> <action method="addCss"> <stylesheet>css/print.css</stylesheet> <params>media="print"</params> </action> </block> <block type="page/html_header" name="header" as="header"> <block type="page/template_links" name="top.links" as="topLinks"/> <block type="page/switch" name="store_language" as="store_language" template="page/switch/languages.phtml"/> <block type="core/template" name="top.nav" template="page/html/top.nav.phtml"/> </block> <block type="core/messages" name="global_messages" as="global_messages"/> <block type="core/messages" name="messages" as="messages"/> <block type="core/text_list" name="content" as="content"/> <block type="core/text_list" name="right" as="right"/> <block type="page/html_footer" name="footer" as="footer" template="page/html/footer.phtml"/> <block type="core/text_list" name="before_body_end" as="before_body_end"/> </block> <block type="core/profiler" output="toHtml"/> <reference name="top.links"> <action method="addLink" translate="label title" module="customer"> <label>My Account</label> <url helper="customer/getAccountUrl"/> <title>My Account</title> <prepare/> <urlParams/> <position>10</position> </action> </reference> <reference name="root"> <action method="setTemplate"> <template>page/2columns-left.phtml</template> </action> </reference> <reference name="top.menu"> <block type="catalog/navigation" name="catalog.topnav" template="catalog/navigation/top.phtml"/> </reference> <reference 
name="footer_links"> <action method="addLink" translate="label title" module="catalog" ifconfig="catalog/seo/site_map"> <label>Site Map</label> <url helper="catalog/map/getCategoryUrl"/> <title>Site Map</title> </action> </reference> <reference name="footer_links"> <action method="addLink" translate="label title" module="catalogsearch" ifconfig="catalog/seo/search_terms"> <label>Search Terms</label> <url helper="catalogsearch/getSearchTermUrl"/> <title>Search Terms</title> </action> <action method="addLink" translate="label title" module="catalogsearch"> <label>Advanced Search</label> <url helper="catalogsearch/getAdvancedSearchUrl"/> <title>Advanced Search</title> </action> </reference> <reference name="top.links"> <block type="checkout/links" name="checkout_cart_link"> <action method="addCartLink"/> <action method="addCheckoutLink"/> </block> </reference> <reference name="footer"> <block type="cms/block" name="cms_footer_links" before="footer_links"> <action method="setBlockId"> <block_id>footer_links</block_id> </action> </block> </reference> <reference name="left"> <block type="tag/popular" name="tags_popular" template="tag/popular.phtm" ignore="1"> <action method="setTemplate"> <template>tag/popular.phtml</template> </action> </block> </reference> <reference name="left"> </reference> <reference name="before_body_end"> <block type="googleanalytics/ga" name="google_analytics" as="google_analytics"/> </reference> <reference name="footer_links"> <action method="addLink" translate="label title" module="contacts" ifconfig="contacts/contacts/enabled"> <label>Contact Us</label> <url>contact-us</url> <title>Contact Us</title> <prepare>true</prepare> </action> </reference> <reference name="footer_links"> <action method="addLink" translate="label title" module="rss" ifconfig="rss/config/active"> <label>RSS</label> <url>rss</url> <title>RSS testing</title> <prepare>true</prepare> <urlParams/> <position/> <li/> <a>class="link-feed"</a> </action> </reference> <reference name="wishlist_sidebar"> <action method="addPriceBlockType"> <type>bundle</type> <block>bundle/catalog_product_price</block> <template>bundle/catalog/product/price.phtml</template> </action> </reference> <reference name="cart_sidebar"> <action method="addItemRender"> <type>bundle</type> <block>bundle/checkout_cart_item_renderer</block> <template>checkout/cart/sidebar/default.phtml</template> </action> </reference> <reference name="root"> <action method="setTemplate"> <template>page/2columns-left.phtml</template> </action> </reference> <reference name="left"> <action method="unsetChild"> <name>catalog.navigation.all</name> </action> <action method="unsetChild"> <name>callout.sendcard</name> </action> <action method="unsetChild"> <name>callout.specialorder</name> </action> <block type="customer/account_navigation" name="customer_account_navigation" before="-" template="customer/account/navigation.phtml"> <action method="addLink" translate="label" module="customer"> <name>account</name> <path>customer/account/</path> <label>Account Dashboard</label> </action> <action method="addLink" translate="label" module="customer"> <name>account_edit</name> <path>customer/account/edit/</path> <label>Account Information</label> </action> <action method="addLink" translate="label" module="customer"> <name>address_book</name> <path>customer/address/</path> <label>Address Book</label> </action> </block> <block type="sales/reorder_sidebar" name="sale.reorder.sidebar" as="reorder" template="sales/reorder/sidebar.phtml"/> <remove name="tags_popular"/> 
</reference> <reference name="customer_account_navigation"> <action method="addLink" translate="label" module="sales"> <name>orders</name> <path>sales/order/history/</path> <label>My Orders</label> </action> </reference> <reference name="customer_account_navigation"> <action method="addLink" translate="label" module="tag"> <name>tags</name> <path>tag/customer/</path> <label>My Tags</label> </action> </reference> <reference name="customer_account_navigation"> <action method="addLink" translate="label" module="newsletter"> <name>newsletter</name> <path>newsletter/manage/</path> <label>Newsletter Subscriptions</label> </action> </reference> <reference name="cart_sidebar"> <action method="addItemRender"> <type>bundle</type> <block>bundle/checkout_cart_item_renderer</block> <template>checkout/cart/sidebar/default.phtml</template> </action> </reference> <update handle="customer_account"/> <reference name="content"> <block type="customer/account_dashboard" name="customer_account_dashboard" template="customer/account/dashboard.phtml"> <block type="customer/account_dashboard_hello" name="customer_account_dashboard_hello" as="hello" template="customer/account/dashboard/hello.phtml"/> <block type="core/template" name="customer_account_dashboard_top" as="top"/> <block type="customer/account_dashboard_info" name="customer_account_dashboard_info" as="info" template="customer/account/dashboard/info.phtml"/> <block type="customer/account_dashboard_newsletter" name="customer_account_dashboard_newsletter" as="newsletter" template="customer/account/dashboard/newsletter.phtml"/> <block type="customer/account_dashboard_address" name="customer_account_dashboard_address" as="address" template="customer/account/dashboard/address.phtml"/> <block type="core/template" name="customer_account_dashboard_info1" as="info1"/> <block type="core/template" name="customer_account_dashboard_info2" as="info2"/> </block> </reference> <reference name="right"> <action method="unsetChild"> <name>catalog_compare_sidebar</name> </action> </reference> <reference name="customer_account_dashboard"> <action method="unsetChild"> <name>top</name> </action> <block type="sales/order_recent" name="customer_account_dashboard_top" as="top" template="sales/order/recent.phtml"/> </reference> <reference name="right"> <action method="unsetChild"> <name>right.poll</name> </action> </reference> <reference name="customer_account_dashboard"> <action method="unsetChild"> <name>customer_account_dashboard_info2</name> </action> <block type="tag/customer_recent" name="customer_account_dashboard_info2" as="info2" template="tag/customer/recent.phtml"/> </reference> <reference name="right"> <action method="unsetChild"> <name>right.newsletter</name> </action> </reference> <reference name="top.links"> <action method="addLink" translate="label title" module="customer"> <label>Log Out</label> <url helper="customer/getLogoutUrl"/> <title>Log Out</title> <prepare/> <urlParams/> <position>100</position> </action> </reference></layout>
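
    One detail stands out in the merged layout above: the root block declares core/text_list children named "content" and "right", but no structural block named "left", so every <reference name="left"> in customer.xml has nothing to attach to. A hedged sketch of a fix, assuming the active theme's page.xml lost that block at some point, is to re-declare the container (in page.xml's default handle, or in a local.xml override):

    <reference name="root">
        <block type="core/text_list" name="left" as="left"/>
    </reference>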

