Search Results


  • Exploring the Excel Services REST API

    - by jamiet
    Over the last few years Analysis Services guru Chris Webb and I have been on something of a crusade to enable better access to data that is locked up in countless Excel workbooks that litter the hard drives of enterprise PCs. The most prominent manifestation of that crusade up to now has been a forum thread that Chris began on Microsoft Answers entitled Excel Web App API? Chris began that thread with:

    "I was wondering whether there was an API for the Excel Web App? Specifically, I was wondering if it was possible (or if it will be possible in the future) to expose data in a spreadsheet in the Excel Web App as an OData feed, in the way that it is possible with Excel Services?"

    Until recently the last ten words of that paragraph, "in the way that it is possible with Excel Services", had completely washed over me. However, a comment on my recent blog post Thoughts on ExcelMashup.com (and a rant) by Josh Booker changed that. Josh said:

    "Excel Services is a service application built for SharePoint 2010 which exposes a REST API for Excel documents. We're looking forward to pros like you giving it a try now that Office 365 makes SharePoint more easily accessible. Can't wait for your future blog about using the REST API to load data from Excel on Office 365 in SSIS."

    That made me think that perhaps the Excel Services REST API is something I should be looking into, and that is exactly what I have been doing over the past few days. And you know what? I'm rather impressed with some of what Excel Services' REST API has to offer.

    Unfortunately Excel Services' REST API also has one debilitating aspect that renders this blog post much less useful than it otherwise would be; namely, it is not publicly available from the Excel Web App on SkyDrive. Therefore all I can do in this blog post is show you screenshots of what the REST API provides in SharePoint rather than linking you directly to those REST resources; that's a great shame because one of the benefits of a REST API is that it is easily and ubiquitously demonstrable from a web browser. Instead I am hosting a workbook on SharePoint in Office 365 because that does include Excel Services' REST API but, again, all I can do is show you screenshots. N.B. If anyone out there knows how to make Office-365-hosted spreadsheets publicly accessible (i.e. without requiring a username/password) please do let me know (because knowing which forum on which to ask the question is an exercise in futility).

    In order to demonstrate Excel Services' REST API I needed some decent data and for that I used the World Tourism Organization Statistics Database and Yearbook - United Nations World Tourism Organization dataset hosted on Azure Datamarket (it's free, by the way); this dataset "provides comprehensive information on international tourism worldwide and offers a selection of the latest available statistics on international tourist arrivals, tourism receipts and expenditure" and you can explore the data for yourself here. If you want to play along at home by viewing the data as it exists in Excel then it can be viewed here. Let's dive in.

    The root of Excel Services' REST API is the model resource, which resides at:

    http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model

    Note that this is true for every workbook hosted in a SharePoint document library - each Excel workbook is a RESTful resource. (Update: Mark Stacey on Twitter tells me that "It's turned off by default in onpremise Sharepoint (1 tickbox to turn on though)". Thanks Mark!)
    The data is provided as an ATOM feed, but I have Firefox's feed-reading ability turned on so you don't see the underlying XML goo. As you can see there are four top-level resources - Ranges, Charts, Tables and PivotTables - and exploring one of those resources is where things get interesting. Let's take a look at the Tables resource:

    http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables

    Our workbook contains only one table, called 'Table1' (to reiterate, you can explore this table yourself here). Viewing that table via the REST API is pretty easy; we simply append the name of the table onto our previous URI:

    http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')

    As you can see, that quite simply gives us a representation of the data in that table. What you cannot see from this screenshot is that this is pure HTML being served up; that is all well and good, but actually we can do more interesting things. If we specify that the data should be returned not as HTML but as an image:

    http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')?$format=image

    then that data comes back as a pure image and can be used in any web page where you would ordinarily use images. This is the thing that I really like about Excel Services' REST API - we can embed an image in any web page, but instead of being a copy of the data, that image is actually live; if the underlying data in the workbook were to change then hitting refresh will show a new image. Pretty cool, no? The same is true of any Charts or PivotTables in your workbook - those can be embedded as images too, and if the underlying data changes, boom, the image in your web page changes too.

    There is a lot of data in the workbook so the image returned by that previous URI is too large to show here, so instead let's take a look at a different resource, this time a range:

    http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')

    That URI returns cells A1 to C15 from a worksheet called "Data". And if we ask for that as an image again:

    http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')?$format=image

    Were this image resource not behind a username/password then this would be a live image of the data in the workbook, as opposed to one that I had to copy and upload elsewhere. Nonetheless I hope this little wrinkle doesn't detract from the innate value of what I am trying to articulate here: that an existing image in a web page can be changed on-the-fly simply by inserting some data into an Excel workbook. I for one think that that is very cool indeed!
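    Incidentally, consuming these resources from code is just as simple as from a browser. Here is a minimal C# sketch of downloading that range image; the credentials and output path are placeholders, and it assumes the SharePoint site accepts the integrated/NTLM authentication that WebClient supplies:

        using System.IO;
        using System.Net;

        class ExcelRestImageDemo
        {
            static void Main()
            {
                // Same placeholder URI as above - substitute your own server and workbook.
                const string uri =
                    "http://server/_vti_bin/ExcelRest.aspx/Documents/"
                    + "TourismExpenditureInMillionsOfUSD.xlsx/model/"
                    + "Ranges('Data!A1|C15')?$format=image";

                using (var client = new WebClient())
                {
                    // The API sits behind SharePoint authentication.
                    client.Credentials = new NetworkCredential("user", "password", "domain");

                    // $format=image returns a rendered picture of the live range.
                    byte[] image = client.DownloadData(uri);
                    File.WriteAllBytes("range.png", image);
                }
            }
        }

    Drop the same URI straight into an img tag and you get the live-image embedding described above.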
    I think that's enough in the way of demo for now, as this shows what is possible using Excel Services' REST API. Of course, not all features work quite how I would like, so here is a bulleted list of some of my more negative feedback:

    - The URIs are pig-ugly. Are "_vti_bin" & "ExcelRest.aspx" really necessary as part of the URI? Would this not be better: http://server/Documents/TourismExpenditureInMillionsOfUSD.xlsx/Model/Tables('Table1')? That URI provides the necessary addressability and is a lot easier to remember.
    - Discoverability of these resources is not easy; we essentially have to handcrank a URI ourselves. Take the example of embedding a chart into a blog post: would it not be better if I could browse first through the document library to an Excel workbook and THEN through the workbook to the chart/range/table that I am interested in? Call it a wizard if you like. That would be really cool and would, I am sure, promote this feature and cut down on the copy-and-paste disease that the REST API is meant to alleviate.
    - The resources that I demonstrated can be returned as feeds as well as images or HTML simply by changing the format parameter to ?$format=atom, yet for some inexplicable reason they don't return OData and no-one on the Excel Services team can tell me why (believe me, I have asked).
    - $format is an OData parameter, however other useful parameters such as $top and $filter are not supported. It would be nice if they were.
    - Although I haven't demonstrated it here, Excel Services' REST API does provide a makeshift way of altering the data by changing the value of specific cells; what it does not allow you to do is add new data into the workbook. Google Docs allows this, and that was one of the motivating factors for Chris Webb's forum post that I linked to above.
    - None of this works for Excel workbooks hosted on SkyDrive.

    This blog post is as long as it needs to be for a short introduction, so I'll stop now. If you want to know more then I recommend checking out a few links:

    - Excel Services REST API documentation on MSDN
    - So what does REST on Excel Services look like??? by Shahar Prish
    - Excel Services in SharePoint 2010 REST API Syntax by Christian Stich

    Any thoughts? Let's hear them in the comments section below!

    @Jamiet

    Read the article

  • CodePlex Daily Summary for Thursday, June 17, 2010

    New Projects

    - Astalanumerator: A JavaScript-based recursive DOM/JS object inspector. Uses a simple tree menu to enumerate all properties of an object.
    - BDD Log Converter: A simple .NET class and console application that will convert BDD logs (MDT) into XML format.
    - CastleInvestProj: Castle investigating project.
    - Easy Callback: This library facilitates the use of multiple asynchronous calls on the same page, and asynchronous calls from a user control also have a clean cod...
    - Easy Wings: Small web app to manage aircraft booking in a flying club. French only for the moment.
    - EPiServer Template Foundation: EPiServer Template Foundation builds on top of Page Type Builder to provide a framework for common site features such as basic page type properties...
    - guidebook: A project to plan your road trip.
    - Look into documents for e-discovery: Search, browse, tag, annotate documents such as MS Word, PDF, e-mail, etc. Good for legal professionals doing e-discovery.
    - One Bus Away for Windows Phone: A Windows Phone 7 application written in Silverlight for the OneBusAway (www.onebusaway.org) website. Allows mobile users to search for public tra...
    - OneBusAway for Windows Phone 7: OneBusAway is a service with transit information for the Seattle, WA region. We are creating a mobile application for Windows Phone 7 utilizing th...
    - PoFabLab - Poetry Generation Library and Editor in .NET: PoFabLab is an open source library and word processor designed for digital poets. The library can scan lines, perform Markov analysis, filter text...
    - Project Axure: More details coming soon.
    - Чат кутежа 2.0: An IRC chat built specifically for the ENE forum.
    - 简易代码生成器: First time using CodePlex; this is just a test project. The plan is a simple code generator built with WPF that also doubles as a SQL Server client, developed in C# on .NET 4.0.
    - 运营工作系统: TRAS (Team resource assist system) is a toolkit that helps the studio manage and distribute the daily work, like publish the news, GM broadcast a...

    New Releases

    - Amuse - A New MU* Client For Windows: 2010 June: Important notice to testers: please uninstall any previous versions of Amuse prior to this one before installing. Changes and information: First relea...
    - ASP.NET Generic Data Source Control: V1.0: GenericDataSource - Version 1.0 binary. This is the first official binary release of the GenericDataSource for ASP.NET - stable and ready for product...
    - Astalanumerator: Astalanumerator 0.7: I wanted to map all properties in JavaScript and inspect them regardless of whether they were objects or not. IE doesn't support for(i in..) for native pro...
    - BDD Log Converter: BDD Log Converter 0.1.0: First release (0.1.0).
    - DVD Swarm: 0.8.10.616: Major update with improvements to encoding speed.
    - Easy Callback: Easy Callback 1.0.0.0: Easy Callback library 1.0.0.0.
    - Facebook Connect Authentication for ASP.NET: Facebook Connect Authentication for ASP.NET - v1.0: Now supporting Facebook's new Open Graph API JavaScript SDK, this release of FBConnectAuth also adds support for running in partially trusted envir...
    - FlickrNet API Library: 3.0 Beta 3: Another small beta. Changed parsing code so exceptions aren't raised when new attributes are added by Flickr. This affects searches where you are ...
    - Infragistics Analytics Framework: Infragistics Analytics Framework 10.2: An updated version of Infragistics Analytics Framework, which utilizes the newest version (v.1.4.4) of MSAF as well as the newest release (v.10.2) ...
    - NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 1: Version 1.0 build 1: [change] Test run failure notification now disappears automatically.
    - Open Source PLM Activities: 3dxml player integration for Aras Innovator: This is just a simple HTML file you need to add to your Aras Innovator install directory. It loads the 3Dxml player for your 3dxml files. Tested o...
    - patterns & practices - Windows Azure Guidance: WAAG - Part 2 - Drop 1: First code and docs drop for Part 2 of the Windows Azure Architecture Guide. Part 1 of the Guide is released here. Highlights of this release are:...
    - Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (June 2010): Installer of the latest binaries of Phalanger 2.0 (June 2010) and its integration into Visual Studio 2008 SP1. Improved compatibility with P...
    - RIA Services Essentials: Book Club Application (June 16, 2010): Added some XAML to hide/show the link to the BookShelf page based on whether the user is logged in or not. Updated IsBookOwner authorization rule implement...
    - secs4net: Release 1.01: Version 1.01 release.
    - sELedit: sELedit v1.1c: Added: tool for exporting the NPC/Mob database file that is used by sNPCedit.
    - SharePoint Ad Rotator: SPAdRotator 2.0 Beta 2: Added: open tool pane link to default Web Part text. Made all images except the first hidden by default, so the Web Part will degrade gracefully w...
    - sMAPtool: sMAPtool v0.7f (without Maps): Added: 3rd-party magnifier software.
    - sNPCedit: sNPCedit v0.9c: Added: NPC/mob names and corresponding database.
    - SolidWorks Addin Development: GenericAddinFrameworkR1-06.17.2010.
    - sTASKedit: sTASKedit v0.8: Important bug fix: there was a mistake in the structure; the team-member block and get-items block were swapped internally. Tasks that contain both blo...
    - stefvanhooijdonk.com: UnitTesting-SP2010-TFS2010: Files for my post on TFS2010 and NUnit testing with SP2010 projects. See the post here: http://wp.me/pMnlQ-88 The XSLT here is from http://nunit4t...
    - Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2010.1.10.504: What's new in v2010.1.0610 (Beta): the RadDocking component has been replaced with the latest RadDock control. Requirements: Visual Studio 2005+ Tele...
    - TFS Buddy: TFS Buddy 1.2: Fixes a problem with notifications.
    - Thales Simulator Library: Version 0.9: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptogra...
    - Triton Application Framework: Tools - Code Generator - Build 1.0: This is the first release of the Generator. This is buggy but works.
    - VCC: Latest build, v2.1.30616.0: Automatic drop of latest build.
    - XsltDb - DotNetNuke Module Builder: 01.01.27: Code completion for XsltDb, HTML and XSL stuff!! Full-screen editing. Some bugs are still in the EditArea component and object lists in code completi...
    - Чат кутежа 2.0: 0.9a build 2: The second build of the first alpha version of the IRC client.

    Most Popular Projects: WBFS Manager; Rawr; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; PHPExcel; Microsoft SQL Server Community & Samples; ASP.NET

    Most Active Projects: dotSpatial; patterns & practices: Enterprise Library Contrib; patterns & practices – Enterprise Library; BlogEngine.NET; Lightweight Fluent Workflow; Rhyduino - Arduino and Managed Code; Sunlit World Scheme; NB_Store - Free DotNetNuke Ecommerce Catalog Module; SolidWorks Addin Development; N2 CMS

    Read the article

  • CodePlex Daily Summary for Monday, May 17, 2010

    New Projects

    - .NET Essentials Course: .NET Essentials course @ Telerik Academy. Training project for the students.
    - AU/NZ Office 2010 Launch Demos: The AU/NZ Office 2010 Launch Demos are a collection of code samples that were used as part of the Office/SharePoint 2010 launch parties in Australi...
    - CybennyCMS: Very simple CMS system for building sites with ASP.NET, with templates for layout, content pages with only HTML content and an XML file for the site...
    - essionPIM: essionPIM.
    - GIStance: A library for finding "nearest neighbor" among an in-memory set of positions, in C# and F#. A radius must be specified for making a meaningful s...
    - IP Informer: IP Informer is IP Informer.
    - Kurumsal Ofis Paketi: Kurumsal Ofis Paketi (KOP) is add-in software developed for Microsoft Office 2010 products. KOP extends the functions found in Word and Excel...
    - Mockup to XAML: Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls are included wit...
    - Open XML Validator: This WPF app gives you a brief summary of errors in your Open XML documents.
    - Paint.NET Bulk Image Processor: PDNBulkUpdater is a plug-in for Paint.NET that allows you to efficiently perform operations such as resizing and converting multiple images at the ...
    - PiPiBugNet: PiPiBugNet is a brand-new open source bug management system.
    - Roleplay character generator: The roleplay character generator allows the creation of characters for different roleplaying games.
    - SharePoint User Search WebParts: This project contains SharePoint webparts which provide advanced search configuration and experience for SharePoint 2007. It will be upgrade in few...
    - Spodi: Spodi was created on 22-04-2010.
    - TfsPolicyPack: This project will provide a few checkin policies for VS 2010.
    - vccodesandobx: vccodesandobx.
    - WhiteNile: Test project using CodePlex.

    New Releases

    - AnimeStore.Net: 1.0.3.0: Build 1.0.3.0. Changes: move some functionality to features (MEF); filter / search functionality; anime hard-copy records storage (e.g Disk Storage ...
    - AU/NZ Office 2010 Launch Demos: Twitter map web part: This is the main twitter map web part download; see the Twitter Map web part page for all the information.
    - Blueset Studio Opensource Projects: 推来: Stable release.
    - BUtil: BUtil 5.0 Alpha2: The initial implementation of multitasking (except ghost).
    - CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta: Beta 2 is released here: http://cassinidev.codeplex.com/releases/view/45456 New in CassiniDev v3.5.1.0/v4.0.1.0: Added .Net 4 / VS10 build. ...
    - CBM-Command: 2010-05-16: Release Notes - 2010-05-16. New features: new navigation options: Page Up, Page Down, Top of Directory, Bottom of Directory. See documentation (http:...
    - CCNet Conditional Plugin: CCNet Conditional for CCNet 1.5: A (quick) build of the plugin for CCNet 1.5 to fix the 17365 bug reported by Beakster. This also adds a new condition "timeCondition".
    - CybennyCMS: Cybenny CMS beta 1: The first beta. Includes a small demo site.
    - Data Extracting SDK: Data Extracting SDK v.1.1 RTM: RTM version of Data Extracting SDK.
    - Duckworth Lewis Professional Edition Calculator: DLcalc 2.0: This software can perform all D/L calculations 100% accurately. From version 2.0 onwards, tables for par scores can also be produced.
    - EPiServer CMS Page Type Builder: Page Type Builder 1.2: Release notes can be found in this blog post.
    - Floe IRC Client: Floe IRC Client 2010-05 R5: Many new context menu options for @s; ability to select multiple users in the nick list for some operations (kick, ban); bunch of minor bug fix...
    - Graffiti CMS Events Plugin: Version 1.0.1: Minor update to previous version to fix a bug where deleted posts were still showing in the calendar.
    - Microsoft Research Boogie: 2010-05-16: Binary release of Boogie and Dafny. (Note, Chalice is not pre-built as part of this binary release. To obtain it, you need to build it yourself f...
    - MSBuild Launch Pad (mPad): 1.0 Beta 2: Basic support for sln, csproj, vbproj, vcxproj, shfbproj, ccproj, oxygene and proj files are added. Basic settings (Show Prompt, and Auto Hide) are...
    - Multi-Language Words Memorizer: Memorizer 1.1: Issue fixes, XML db update with new words.
    - NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.1: New release of NShader! New: a Visual Studio 2010 port can be installed through the new extension manager; you just have to download NShaderV...
    - PHPExcel: PHPExcel 1.7.3 Production: Want to contribute? Please refer to the Contribute page. Donations: donate via PayPal. If you want to, we can also add your name / company on our Donati...
    - Rollback - A social backup tool.: Rollback Setup 0.5.1.2 Build 48360: Bug fixes for backing up files which are hidden/system. Changes to make builds on 64-bit Windows 7 using VS 2010 Express edition.
    - Rollback - A social backup tool.: Rollback Setup 0.5.1.3: Updated version number.
    - Shake - C# Make: Shake v0.1.20: New: simple console logger. Changes: command line params helper writes out syntax and samples (like msbuild). Fixes: assembly info, file task and r...
    - SharePoint User Search WebParts: v0.1 Friendly MOSS 2007 Search WebPart: Very first version of this webpart. A more stabilized version will follow in a few days.
    - Team Deploy: Team Deploy 2010 Beta 1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comp...
    - Team Foundation Server Administration Tool: 2.0: TFS Administration Tool 2.0 is built on top of the Team Foundation Server 2008 object model and in order to connect to...
    - The Ping Master: v0.9.0.0: Installer for The Ping Master binaries.
    - Useful Office Macros: All Macro Downloads: Please find above the downloads related to this project. Each Excel workbook below works independently of the others, so you only need to download...
    - VCC: Latest build, v2.1.30516.0: Automatic drop of latest build.
    - Visual Studio DSite: Advanced Digital Board Game (Visual C++ 2008): An advanced digital board game made in Visual C++ 2008.
    - YUI Compressor Custom Tool for Visual Studio: YUI Compressor Custom Tool Full Version: Version 1.0. The following changes have been made: merged classes to automatically sense if the target file is Javascript or CSS. Cleaned up setu...

    Most Popular Projects: Rawr; WBFS Manager; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); patterns & practices – Enterprise Library; Microsoft SQL Server Community & Samples; PHPExcel; ASP.NET

    Most Active Projects: patterns & practices – Enterprise Library; PHPExcel; BlogEngine.NET; Rawr; Microsoft Biology Foundation; Customer Portal Accelerator for Microsoft Dynamics CRM; Windows Azure Command-line Tools for PHP Developers; DotNetZip Library; Caliburn: An Application Framework for WPF and Silverlight; SQL Server PowerShell Extensions

    Read the article

  • Charms and the App Bar

    - by Dennis Vroegop
    Ok. I admit. I made a mistake in the last post about our planespotter app. I dedicated a full part of the hub to Social. I also had a section called Friends, but that made sense since I said that "Friends" is a special group of people that connect to each other through our app and only our app. Social, however, is sharing our spots with Twitter, Facebook and so on. Now, we could write that functionality into a different section of our app, but there is one small problem with that: users don't expect it there.

    Ok, I admit. The mistake was quite deliberate, to give me an excuse to write this part. But still: the mistake is one I see a lot. People are trying to do stuff in their application that they shouldn't be doing. This always strikes me as slightly odd: why do some work when others have already done it for you and you can just use it? After all, good developers are lazy (lazy people will always try to find the easiest way to do something, and in development land this usually means the cleanest and easiest-to-support way…).

    So what is that part that Microsoft has done for us and we don't have to do ourselves? The answer lies on the right-hand side of your Win8 screen. This is a screenshot of my tablet (as you can see, I am writing this right now…). When I swipe my finger in from outside the right edge of the screen (or move the mouse to the upper-right corner) this menu will appear. Next to Settings and the Start menu button we'll find the Search and the Share charms. These are two ways that your app can share the information it contains with the rest of the world, or at least the rest of your system.

    So don't write a Search feature in your app. Don't write a Share feature in your app. It's here already. Users, once they are used to Windows 8, will use these features and expect them to work. If they don't, users won't like your app and you can kiss your dreams of everlasting fame goodbye. So use these two.

    What are they? Well, simply put, they are parts of a contract. Somewhere in code your app says that it supports Search and Share. So when the user selects Share, the system will interrogate the app currently in the foreground to see if it supports this feature. Your app will say "But why, yes, I do!" Then the system will say "Ok then, wisecrack, then share!" and you will have to provide the system with some information about the format. Other applications have subscribed to be at the receiving end of the Share contract. They have told the system that they support sharing (receiving) and which formats they understand. If one or more of them support the formats you specify, the user will see them. The user clicks / taps on the app of their choice and the data is moved from your app to the new one. So if you say you support Facebook and Twitter, users can post data from your app to these networks by selecting Share.

    The same applies to Search. Don't make a "search" button in your app; use the contract to tell the system that you support search, and let users use that instead. Users will be grateful (remember that bar with men/women/creatures that are waiting for you?). The more people get to know Windows 8, the more they will use this. And if you are one of the people who wrote an app that helped them learn the system, well, that's even better.
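    To make the contract idea concrete, here is a minimal C# sketch of what registering as a Share source looks like in a WinRT app. The Spot class and CurrentSpot property are stand-ins invented for illustration, not part of the actual planespotter app:

        using Windows.ApplicationModel.DataTransfer;

        // Hypothetical model type, standing in for whatever the real app uses.
        public class Spot
        {
            public string Description { get; set; }
        }

        public class ShareSupport
        {
            // Hypothetical property holding the currently selected spot.
            public Spot CurrentSpot { get; set; }

            // Call once, e.g. from a page's constructor or OnNavigatedTo.
            public void Register()
            {
                // Tell the system this app is a Share source.
                DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
            }

            void OnDataRequested(DataTransferManager sender, DataRequestedEventArgs args)
            {
                // Fill the DataPackage; any installed app registered as a
                // Share target for plain text will be offered to the user.
                args.Request.Data.Properties.Title = "Plane spotted!";
                args.Request.Data.SetText(CurrentSpot.Description);
            }
        }

    The point is how little code this is: the Share pane UI, the list of target apps and the data hand-off are all owned by Windows.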
    So. We don't have a Share or a Search button. We do have other buttons. Most important: we probably need a "New Spot" button. And a "Filter" might be useful. Or some way to open the camera so you can add a picture to the spot. Where will we put those?

    The answer is the "AppBar". This is an application- and context-aware menu that slides up from the bottom of the screen when you move your finger / mouse from below the screen into it (from above downwards works just as well). Here you see an example of the app bar from the People app (click on it for a larger version). This appears whenever you slide your finger up from below or down from above. This is where you put your commands.

    Remember, this is context aware, so this menu will change when you are in different parts of your app or when you have selected different items. There are a few conventions when you create this app bar. First, the items on the right are "general" items, meaning they have little to do with what is on the screen right now. I think this would be a great place to add our "New Spot" icon. On the far left are items associated with the currently selected item or screen. So if you have a spot selected, the button for Add Photo should be visible there, on the left-hand side. Not everything is as clear-cut as this, but this is what you should strive for: group items together. And please note: this is the only place in Metro design where we are allowed to use lines as separators. So when you want to separate a group of icons from another group, add a line.

    Also note the simplicity of the buttons. No colors, no lights or shadows, no 3D. After a couple of years of fancy, almost realistic-looking icons, people have finally decided that hey, this is a virtual world: it's ok to look virtual as well. So make things as readable and clear as possible and don't try to duplicate nature. It's all about the information, remember? (If you don't remember, I'd like to point you to an older blog post of mine about the what and why of Metro.)

    So… think about the buttons a bit, and think about Share and Search. What will you put there? Remember: this is the way users interact with your apps, and while you shouldn't judge a book by its cover when it comes to people, this isn't entirely so when it comes to apps. People DO judge an app by its looks and the way it feels. Take advantage of that. History has shown that a crappy app with a GREAT user interface gets better reviews than a GREAT app with a lousy UI… I know: developers will find this extremely unfair, but that's the world we live in. (No, I am not saying you should deliver rubbish apps.)

    Next time: we'll start by building the darn thing!

    Read the article

  • Working With Extended Events

    - by Fatherjack
    SQL Server 2012 has made working with Extended Events (XE) pretty simple when it comes to seeing what sessions you have on your servers and what options you have selected, but if you are like me then you still have some SQL Server instances that are 2008 or 2008 R2. For those servers there is no built-in way to view the Extended Event sessions in SSMS. I keep coming up against the same questions: Where are the .xel log files? What events, actions or predicates are set for the sessions on the server? What sessions are there on the server already? I got tired of this being a perpetual question and wrote some TSQL to save as a snippet in SQL Prompt so that these details are permanently only a couple of clicks away.

    First, some history. If you just came here for the code, skip down a few paragraphs and it's all there. If you want a little time to reminisce about SQL Server then stick with me through the next paragraph or two.

    We are in a bit of a cross-over period currently; there are many versions of SQL Server, but I would guess that SQL Server 2008, 2008 R2 and 2012 comprise the majority of installations. With each of these comes a set of management tools, of which SQL Server Management Studio (SSMS) is one. In 2008 and 2008 R2, Extended Events made their first appearance, and there was no way to work with them in the SSMS interface. At some point the Extended Events guru Jonathan Kehayias (http://www.sqlskills.com/blogs/jonathan/) created the SQL Server 2008 Extended Events SSMS Addin, which is really an excellent tool to ease XE session administration. This addin will install in SSMS 2008 or 2008 R2 but not SSMS 2012. If you use a compatible version of SSMS then I wholly recommend downloading and using it to make your work with XE much easier. If you have SSMS 2012 installed - and there is no reason not to, as it will let you work with all versions of SQL Server - then you cannot install this addin. If you are working with SQL Server 2012 then SSMS 2012 has built-in functionality to manage XE sessions; that functionality does not apply to 2008 or 2008 R2 instances, though. This means you are somewhat restricted and have to use TSQL to manage XE sessions on older versions of SQL Server.

    OK, those of you that skipped ahead for the code, you need to start from here. So, you are working with SSMS 2012 but have a SQL Server that is an earlier version that needs an XE session created, or you think there is a session created but you aren't sure, or you know it's there but can't remember if it is running and where the output is going. How do you find out? Well, none of the information is hidden as such, but it is a bit of a wrangle to locate it, and it isn't the sort of code that sticks in your memory.

    I have created two pieces of code. The first examines the sys.server_event_... catalog views in combination with the sys.dm_xe_... management views to give the name of every session that exists on the server, regardless of whether it is running or not, along with two pieces of generated TSQL. One piece will alter the state of the session: if the session is running then the code will stop the session if executed, and vice versa. The other piece of code will drop the selected session; if the session is running then the code will stop it first. Do not execute the DROP code unless you are sure you have the CREATE code to hand. The session will be dropped from the server without a second chance to change your mind.
    /**************************************************************/
    /***   To locate and describe event sessions on a server   ***/
    /***                                                        ***/
    /***   Generates TSQL to start/stop/drop sessions          ***/
    /***                                                        ***/
    /***        Jonathan Allen - @fatherjack                    ***/
    /***                 June 2013                              ***/
    /***                                                        ***/
    /**************************************************************/
    SELECT  [EES].[name] AS [Session Name - all sessions] ,
            CASE WHEN [MXS].[name] IS NULL THEN ISNULL([MXS].[name], 'Stopped')
                 ELSE 'Running'
            END AS SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN ISNULL([MXS].[name],
                             'ALTER EVENT SESSION [' + [EES].[name]
                             + '] ON SERVER STATE = START;')
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP;'
            END AS ALTER_SessionState ,
            CASE WHEN [MXS].[name] IS NULL
                 THEN ISNULL([MXS].[name],
                             'DROP EVENT SESSION [' + [EES].[name]
                             + '] ON SERVER; -- This WILL drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.')
                 ELSE 'ALTER EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER STATE = STOP; ' + CHAR(10)
                      + '-- DROP EVENT SESSION [' + [EES].[name]
                      + '] ON SERVER; -- This WILL stop and drop the session. It will no longer exist. Don''t do it unless you are certain you can recreate it if you need it.'
            END AS DROP_Session
    FROM    [sys].[server_event_sessions] AS EES
            LEFT JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' )
    ORDER BY SessionState
    GO

    I have excluded the system_health and AlwaysOn sessions as I don't want to accidentally execute the drop script for these sessions that are created as part of the SQL Server installation. It is possible to recreate them, but that is a whole lot of aggravation I'd rather avoid. The second piece of code gathers details of running XE sessions only and provides information on the events being collected, any predicates that are set on those events, the actions that are set to be collected, where the collected information is being logged and, if that logging is to a file target, where that file is located.
    /**********************************************/
    /***   Running Session summary              ***/
    /***                                        ***/
    /***   Details key values of XE sessions   ***/
    /***   that are in a running state         ***/
    /***                                        ***/
    /***      Jonathan Allen - @fatherjack     ***/
    /***             June 2013                  ***/
    /***                                        ***/
    /**********************************************/
    SELECT  [EES].[name] AS [Session Name - running sessions] ,
            [EESE].[name] AS [Event Name] ,
            COALESCE([EESE].[predicate], 'unfiltered') AS [Event Predicate Filter(s)] ,
            [EESA].[Action] AS [Event Action(s)] ,
            [EEST].[Target] AS [Session Target(s)] ,
            ISNULL([EESF].[value], 'No file target in use') AS [File_Target_UNC] -- select *
    FROM    [sys].[server_event_sessions] AS EES
            INNER JOIN [sys].[dm_xe_sessions] AS MXS ON [EES].[name] = [MXS].[name]
            INNER JOIN [sys].[server_event_session_events] AS [EESE] ON [EES].[event_session_id] = [EESE].[event_session_id]
            LEFT JOIN [sys].[server_event_session_fields] AS EESF ON ( [EES].[event_session_id] = [EESF].[event_session_id]
                                                                       AND [EESF].[name] = 'filename' )
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + sest.name
                                         FROM   [sys].[server_event_session_targets] AS SEST
                                         WHERE  [EES].[event_session_id] = [SEST].[event_session_id]
                                         FOR XML PATH('')
                                       ), 1, 2, '') AS [Target]
                        ) AS EEST
            CROSS APPLY ( SELECT STUFF(( SELECT ', ' + [sesa].NAME
                                         FROM   [sys].[server_event_session_actions] AS sesa
                                         WHERE  [sesa].[event_session_id] = [EES].[event_session_id]
                                         FOR XML PATH('')
                                       ), 1, 2, '') AS [Action]
                        ) AS EESA
    WHERE   [EES].[name] NOT IN ( 'system_health', 'AlwaysOn_health' ) /* Optional: exclude the 'out-of-the-box' sessions */

    I hope that these scripts are useful to you, and I would be obliged if you would keep my name in the script comments. I have no problem with you using them in production or personal circumstances; however, they come with no warranty or guarantee. Don't use them unless you understand them and are happy with what they are going to do. I am not ever responsible for the consequences of executing these scripts on your servers.

    Read the article

  • Five Ways Enterprise 2.0 Can Transform Your Business - Q&A from the Webcast

    - by [email protected]
    A few weeks ago, Vince Casarez and I presented with KMWorld on the Five Ways Enterprise 2.0 Can Transform Your Business. It was an enjoyable, interactive webcast in which Vince and I discussed the ways Enterprise 2.0 can transform your business and, more importantly, highlighted key customer examples of how to do so. If you missed the webcast, you can catch a replay here. We had a lot of audience participation in some of the polls we conducted and in the Q&A session. We weren't able to address all of the questions during the broadcast, so we attempted to answer them here:

    Q: Which area within your firm focuses on Web 2.0? Meaning, do you find new departments developing just to manage the Web 2.0 (Twitter, Facebook, etc.) user experience, or are you structuring current departments?

    A: There are three distinct efforts within Oracle. The first is around delivery of these Web 2.0 services for enterprise deployments. This is the focus of the WebCenter team. The second effort is injecting these Web 2.0 services into use cases that drive the different enterprise applications. This effort is focused on how to manage these external services and bring them into a cohesive flow for marketing programs, customer care, and purchasing. The third effort is how we consume these services internally to enhance Oracle's business delivery. It leverages the technologies and use cases of the first two but also pushes the envelope with regards to future directions of these other two areas.

    Q: In a business, Web 2.0 is mostly like action logs. How can we leverage the official process practice versus the logs of a recent action? Example: a system configuration modified last night on a call-out versus the official practice that everybody would use in the morning.

    A: The key thing to remember is that most Web 2.0 actions / activity streams today are based on collaboration and communication type actions, at least with public social sites like Facebook and Twitter. What we're delivering as part of the WebCenter Suite are not just these types of activities but also enterprise application activities. These enterprise application activities come from different application modules: purchasing, HR, order entry, sales opportunity, etc. The actions within these systems are normally tied to a business object or process: purchase order/customer, employee or department, customer and supplier, customer and product, respectively. Therefore, the activities or "logs", as you name them, are able to be "typed" so that as a viewer, you can filter or decide to see only certain types of information. In your example, you could have a view that only showed you recent "configuration" changes, and this could be right next to a view that showed the items to be watched every morning.

    Q: It's great to hear about customers using the software, but is there any plan for future webinars to show what the products/installs look like? That would be very helpful.

    A: We don't have a webinar planned to show off the install process. However, we have a viewlet that's posted on Oracle Technology Network. You can see it here:
    http://www.oracle.com/technetwork/testcontent/wcs-install-098014.html
    And we've got excellent documentation that walks you through the steps here:
    http://download.oracle.com/docs/cd/E14571_01/install.1111/e12001/install.htm
    And there's a whole set of demos and examples of what WebCenter can do at this URL:
    http://www.oracle.com/technetwork/middleware/webcenter/release11-demos-097468.html

    Q: How do you anticipate managing metadata across the enterprise to make content findable?

    A: We need to first make sure we are all talking about the same thing when we use a word like "metadata". Here's why:
    - For a developer, metadata means information that describes key elements of the portal or application and what the portal or application can do.
    - For content systems, metadata means key terms that provide a taxonomy or folksonomy about the information that is being indexed, ordered, and managed.
    - For business intelligence systems, metadata means key terms that provide labels to groups of data that most non-mathematicians need to understand.
    - And for SOA, metadata means labels for parts of the processes that business owners should understand and that connect development terminology.
    There are also additional requirements for metadata to be available to the team building these new solutions, as well as requirements to make this metadata available to the running system. These requirements are often separated as "design time" and "run time" respectively. So clearly, a general goal of managing metadata across the enterprise is very challenging. We've invested a huge amount of resources around Oracle Metadata Services (MDS) to be able to provide a more generic system for all of these elements. No other vendor has anything like this technology foundation in their products. This provides a huge benefit to our customers, as they will now be able to find content, processes, people, and information from a common set of search interfaces with consistent enterprise-wide results.

    Q: Can you give your definition of terms as to document and content, please?

    A: Content applies to a broad category of information, from Word documents, presentations and reports through attachments to invoices and/or purchase orders. Content is essentially any type of digital asset, including images, video, and voice. A document is just one type of content.

    Q: Do you have special integration tools to realize an interaction between UCM and WebCenter Spaces/Services?

    A: Yes, we've dedicated a whole team of engineers to exploit the key features of Oracle UCM within WebCenter. While ensuring that WebCenter can connect to other non-Oracle systems, we've made sure that with the combined set of Oracle technology, no other solution can match the combined power and integration. This is part of the Oracle Fusion Middleware strategy, which is to provide best-in-class capabilities for content and portals. When combined, the synergy between the two products enables users to quickly add capabilities when they are needed. For example, simple document sharing is part of the combined product offering, but if legal discovery or archiving is required, the Oracle UCM product includes these capabilities, which can be quickly added. There's no need to move content around or add another system to support this; it's just a feature that gets turned on within Oracle UCM.

    Q: All customers have some interaction with their applications and have many older versions. How do you see some of these new Enterprise 2.0 capabilities adding value to existing enterprise application deployments?

    A: Just as Service Oriented Architectures allowed the processes of different application systems to work together, there's a need for a similar approach with regards to these Enterprise 2.0 capabilities. Oracle WebCenter is built on a core architecture that allows for SOA of these Enterprise 2.0 services so that one set of scalable services can be used and integrated directly into any type of application. In this way, users can get immediate value out of the Enterprise 2.0 capabilities without having to wait for the next major release or upgrade. These centrally managed WebCenter services expose a set of standard interfaces that make it extremely easy to add them into existing applications, no matter what technology the application has been implemented in.

    Q: We've heard about Oracle's next-generation applications called "Fusion Applications". Can you tell me how all this works together?

    A: Oracle WebCenter powers the core collaboration and social computing services found within Fusion Applications. It is the core user experience technology for how all the application screens have been implemented. And the core concept of task flows allows all the Fusion Applications modules to be adaptable and composable by business users and IT without needing to be a professional developer. Oracle WebCenter is at the heart of the new Fusion Applications. In addition, the same patterns and technologies are now being added to the existing applications, including JD Edwards, Siebel, PeopleSoft, and E-Business Suite. The core technology enables all these customers to have a much smoother upgrade path to Fusion Applications. They get immediate benefits of injecting new user interactions into their existing applications without having to completely move to Fusion Applications. And then, when the time comes, their users will already be well versed in how the new capabilities work.

    Q: Does any of this work with non-Oracle software? Other databases? Other application servers? etc.

    A: We have made sure that Oracle WebCenter delivers the broadest set of development choices so that no matter what technology your developers are using, WebCenter capabilities can be quickly and easily added to the site or application. In addition, we have certified Oracle WebCenter to run against non-Oracle databases like DB2 and SQL Server. We have stated plans for certification against MySQL as well. Later in CY 2011, Oracle will provide certification on non-Oracle application servers such as WebSphere and JBoss.

    Q: How do we balance user and IT requirements in regards to Enterprise 2.0 technologies?

    A: Wrong decisions are often made because employee knowledge is not tapped efficiently, and opportunities to innovate are often missed because the right people do not work together. Collaboration amongst workers in the right business context is critical for success. While standalone Enterprise 2.0 technologies can improve collaboration for collaboration's sake, using social collaboration tools in the context of business applications and processes will improve business responsiveness and lead companies to a more competitive position. As these systems become more mission critical, it is essential that they maintain the highest level of performance and availability while scaling to support larger communities.

    Q: What are the ways in which Enterprise 2.0 can improve business responsiveness?

    A: With a wide range of Enterprise 2.0 tools in the marketplace, CIOs need to deploy solutions that will meet the requirements from users as well as address the requirements from IT. Workers want a next-generation user experience that is personalized and aggregates their daily tools and tasks, while IT needs to ensure the solution is secure, scalable, flexible, reliable and easily integrated with existing systems. An open and integrated approach to deploying portals, content management, and collaboration can enhance your business by addressing both the needs of knowledge workers for better information and the IT mandate to conserve resources by simplifying, consolidating and centralizing infrastructure and administration.

    Read the article

  • Recap: Oracle Fusion Middleware Strategies Driving Business Innovation

    - by Harish Gaur
    Hasan Rizvi, Executive Vice President of Oracle Fusion Middleware & Java, took the stage on Tuesday to discuss how Oracle Fusion Middleware helps enable business innovation. Through a series of product demos and customer showcases, Hasan demonstrated how Oracle Fusion Middleware is a complete platform to harness the latest technological innovations (cloud, mobile, social and Fast Data) throughout the application lifecycle.

    Fig 1: Oracle Fusion Middleware is the foundation of business innovation

    This session included four demonstrations to illustrate these strategies:
    1. Build and deploy native mobile applications using Oracle ADF Mobile.
    2. Empower business users to model processes, design user interfaces and have a rich mobile experience for process interaction using Oracle BPM Suite PS6.
    3. Create collaborative user experiences and integrate social sign-on using Oracle WebCenter Portal, Oracle WebCenter Content, Oracle Social Network & Oracle Identity Management 11g R2.
    4. Deploy and manage business applications on Oracle Exalogic.

    Nike, LA Department of Water & Power and Nintendo joined Hasan on stage to share how their organizations are leveraging Oracle Fusion Middleware to enable business innovation.

    Managing Performance in the World of Social and Mobile

    How do you provide predictable scalability and performance for an application that monitors the active lifestyle of 8 million users on a daily basis? Nike's answer is Oracle Coherence, a component of Oracle Fusion Middleware, together with Oracle Exadata.

    Fig 2: Oracle Coherence enabled data grid improves performance of Nike+ Digital Sports Platform

    Nicole Otto, Sr. Director of Consumer Digital Technology, discussed the vision of the Nike+ platform, a platform which represents a shift for Nike from a "product" to a "product +" experience. There are currently nearly 8 million users in the Nike+ system using digitally-enabled Nike+ devices. Once data from a Nike+ device is transmitted to the Nike+ application, users access the Nike+ website or the Nike+ mobile application to see metrics around their daily active lifestyle, and they can even engage in socially compelling experiences to compare, compete or collaborate on their data with their friends. Nike expects the number of users to grow significantly this year, which will drive an explosion of data and potential new experiences.

    To deal with this challenge, Nike envisioned building a shared platform that would drive a consumer-centric model for the company. Nike built this new platform using Oracle Coherence and Oracle Exadata. Using Coherence, Nike built a data grid tier as a distributed cache, thereby providing low-latency access to the most recent and relevant data for consumers. Nicole discussed how the Nike+ Digital Sports Platform is unique in the way that it utilizes the Coherence grid: Nike takes advantage of Coherence as a traditional cache using both cache-aside and cache-through patterns. This new tier has enabled Nike to create a horizontally scalable, distributed, event-driven processing architecture. Current data grid volume is approximately 150,000 requests per minute, with about 40 million objects on the grid at any given time.

    Improving Customer Experience Across Multiple Channels

    Customer experience is on top of every CIO's mind. Customer experience needs to be consistent and secure across the multiple devices consumers may use. This is the challenge Matt Lampe, CIO of Los Angeles Department of Water & Power (LADWP), was faced with.
    Despite being the largest utilities company in the country, LADWP had been relying on a 38-year-old customer information system to serve its customers. Their prior system had been unable to keep up with growing customer demands. Last year, LADWP embarked on a journey to improve customer experience for 1.6 million LADWP customers using the Oracle WebCenter platform.

    Figure 3: Multi-channel & multi-lingual LADWP.com built using Oracle WebCenter & Oracle Identity Management platform

    Matt shed light on his efforts to drive customer self-service across three dimensions: a new website, a new IVR platform and a new bill payment service. LADWP has built a new portal to increase customer self-service while reducing the transactions handled via IVR. LADWP's website is powered by Oracle WebCenter Portal and is accessible from desktop and mobile devices. By leveraging Oracle WebCenter, LADWP eliminated the need to build, format, and maintain individual mobile applications or websites for different devices. Their entire content is managed using Oracle WebCenter Content and secured using Oracle Identity Management. This new portal turned their paper-based processes into web-based workflows for customers. This includes automation of self-service implemented through My Account, like Bill Pay, Payment History, Bill History and Usage Analysis. LADWP's solution went live in April 2012. Matt indicated that LADWP's self-service portal has greatly improved customer satisfaction: in a JD Power Associates website satisfaction survey, rankings have climbed by 25+ points, marking a remarkable increase in user experience.

    Bolstering Performance and Simplifying Manageability of Business Applications

    Ingvar Petursson, Senior Vice President of IT at Nintendo America, joined Hasan on stage to discuss their choice of Exalogic. Nintendo had significant new requirements coming their way for business systems, both internal and external, in the years to come, especially with new products like the WiiU on the horizon this holiday season. Nintendo needed a platform that could give them performance, availability and ease of management as they deploy business systems. Ingvar selected Engineered Systems for two reasons: 1. high performance; 2. ease of management.

    Figure 4: Nintendo relies on Oracle Exalogic to run ATG eCommerce, Oracle E-Business Suite and several business applications

    Nintendo made a decision to run their business applications (ATG eCommerce, E-Business Suite) and several Fusion Middleware components on the Exalogic platform. What impressed Ingvar were the "stress" testing results during evaluation: Oracle Exalogic could handle their three-year load estimates for many functions, which was better than Nintendo expected, without any hardware expansion.

    Faster Processing of Big Data

    Middleware plays an increasingly important role in Big Data. Last year at OpenWorld we announced the introduction of Oracle Data Integrator for Hadoop and Oracle Loader for Hadoop, which help move, transform and load data to and from Big Data Appliance and Exadata. This year, we've added new capabilities to find, filter, and focus data using Oracle Event Processing. This product can natively integrate with Big Data Appliance or run standalone. Hasan briefly discussed how NTT Docomo, the largest mobile operator in Japan, leverages Oracle Event Processing & Oracle Coherence to process mobile data (from 13 million smartphone users) at a speed of 700K events per second before feeding it to Hadoop for distributed processing of big data.
Figure 5: Mobile traffic data processing at NTT Docomo with Oracle Event Processing & Oracle Coherence    

    Read the article

  • How to block the ASP.NET page while the AJAX UpdateProgress is being displayed

    Step 1: Copy the following styles to your aspx page.

        <style type="text/css">
            .hide
            {
                display: none;
            }
            .show
            {
                display: inherit;
            }
            .progressBackgroundFilter
            {
                position: absolute;
                top: 0px;
                bottom: 0px;
                left: 0px;
                right: 0px;
                overflow: hidden;
                padding: 0;
                margin: 0;
                background-color: #000;
                filter: alpha(opacity=50);
                opacity: 0.5;
                z-index: 1000;
            }
            .processMessage
            {
                position: absolute;
                font-family: Verdana;
                font-size: 12px;
                font-weight: normal;
                color: #000066;
                top: 30%;
                left: 43%;
                padding: 10px;
                width: 18%;
                z-index: 1001;
                background-color: #fff;
            }
        </style>

    Step 2: Put the divs as shown below in the UpdateProgress control.

        <asp:UpdateProgress ID="updPrgsBaselineTab" runat="server">
            <ProgressTemplate>
                <div id="progressBackgroundFilter" class="progressBackgroundFilter">
                </div>
                <div id="processMessage" class="processMessage">
                    <table width="100%">
                        <tr style="width: 100%">
                            <td style="width: 100%">
                                Please Wait..........
                            </td>
                        </tr>
                        <tr style="width: 100%">
                            <td style="width: 100%" align="center">
                                <img src="../Images/Update_Progress.gif" />
                            </td>
                        </tr>
                    </table>
                </div>
            </ProgressTemplate>
        </asp:UpdateProgress>

    Read the article

  • More Fun with C# Iterators and Generators

    - by James Michael Hare
    In my last post, I talked quite a bit about iterators and how they can be really powerful tools for filtering a list of items down to a subset of items. This had both pros and cons over returning a full collection, which, in summary, were:

Pros:
- If traversal is only partial, it does not have to visit the rest of the collection.
- If the evaluation method is costly, it only incurs that cost on elements visited.
- Adds little to no garbage collection pressure.

Cons:
- Very slight performance impact if you know the caller will always consume all items in the collection.

And as we saw in the last post, that cost was very, very small and only really became evident on very tight loops consuming very large lists completely.

One of the key items to note, though, is the garbage! In the traditional (return a new collection) method, if you have a 1,000,000 element collection and wish to transform or filter it in some way, you have to allocate space for that copy of the collection. That is, say you have a collection of 1,000,000 items and you want to double every item in the collection. Well, that means you have to allocate a collection to hold those 1,000,000 items to return, which is a lot, especially if you are just going to use it once and toss it.

Iterators, though, don't have this problem. Each time you visit the node, it would return the doubled value of the node (in this example) and not allocate a second collection of 1,000,000 doubled items. Do you see the distinction? In both cases, we're consuming 1,000,000 items. But in one case we pass back each doubled item, which is just an int (for example's sake) on the stack, and in the other case, we allocate a list containing 1,000,000 items which then must be garbage collected.

So iterators in C# are pretty cool, eh? Well, here's one more thing a C# iterator can do that a traditional "return a new collection" transformation can't: it can return unbounded collections!

I know, I know, that smells a lot like an infinite loop, eh? Yes and no. Basically, you're relying on the caller to put the bounds on the list, and as long as the caller doesn't, you keep going. Consider this example:

    public static class Fibonacci
    {
        // returns the infinite fibonacci sequence
        public static IEnumerable<int> Sequence()
        {
            int iteration = 0;
            int first = 1;
            int second = 1;
            int current = 0;

            while (true)
            {
                if (iteration++ < 2)
                {
                    current = 1;
                }
                else
                {
                    current = first + second;
                    second = first;
                    first = current;
                }

                yield return current;
            }
        }
    }

Whoa, you say! Yes, that's an infinite loop! What the heck is going on there? Yes, that was intentional. Would it be better to have a fibonacci sequence that returns only a specific number of items? Perhaps, but that wouldn't give you the power to defer the execution to the caller.

The beauty of this function is that it is as infinite as the sequence itself! The fibonacci sequence is unbounded, and so is this method. It will continue to return fibonacci numbers for as long as you ask for them. Now that's not something you can do with a traditional method that would return a collection of ints representing each number. In that case you would eventually run out of memory as you got to higher and higher numbers. This method, though, never runs out of memory.
    Now, that said, you do have to know when you use it that it is an infinite collection and bound it appropriately. Fortunately, LINQ provides a lot of these extension methods for you!

Let's say you only want the first 10 fibonacci numbers:

    foreach (var fib in Fibonacci.Sequence().Take(10))
    {
        Console.WriteLine(fib);
    }

Or let's say you only want the fibonacci numbers that are less than 100:

    foreach (var fib in Fibonacci.Sequence().TakeWhile(f => f < 100))
    {
        Console.WriteLine(fib);
    }

So, you see, one of the nice things about iterators is their power to work with virtually any size (even infinite) collection without adding the garbage collection overhead of making new collections.

You can also do fun things like this to make a more "fluent" interface for for loops:

    // A set of integer generator extension methods
    public static class IntExtensions
    {
        // Begins counting to infinity, use To() to range this.
        public static IEnumerable<int> Every(this int start)
        {
            // deliberately avoiding a condition because this keeps going
            // to infinity for as long as values are pulled.
            for (var i = start; ; ++i)
            {
                yield return i;
            }
        }

        // Begins counting to infinity by the given step value, use To() to range this.
        public static IEnumerable<int> Every(this int start, int byEvery)
        {
            // deliberately avoiding a condition because this keeps going
            // to infinity for as long as values are pulled.
            for (var i = start; ; i += byEvery)
            {
                yield return i;
            }
        }

        // Counts from start up to and including end.
        public static IEnumerable<int> To(this int start, int end)
        {
            for (var i = start; i <= end; ++i)
            {
                yield return i;
            }
        }

        // Ranges the count by specifying the upper range of the count.
        public static IEnumerable<int> To(this IEnumerable<int> collection, int end)
        {
            return collection.TakeWhile(item => item <= end);
        }
    }

Note that there are two versions of each method: one that starts with an int and one that starts with an IEnumerable<int>. This is to allow more power in chaining from either an existing collection or from an int. This lets you do things like:

    // count from 1 to 30
    foreach (var i in 1.To(30))
    {
        Console.WriteLine(i);
    }

    // count from 0 to 10 by 2s
    foreach (var i in 0.Every(2).To(10))
    {
        Console.WriteLine(i);
    }

    // or, if you want an infinite sequence counting by 5s until something inside breaks you out...
    foreach (var i in 0.Every(5))
    {
        if (someCondition)
        {
            break;
        }
        ...
    }

Yes, those are kinda play functions and not particularly useful, but they show some of the power of generators and extension methods to form a fluid interface.

So what do you think? What are some of your favorite generators and iterators?
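    To answer that closing question with a quick sketch of my own (an illustration, not part of the original post): the same yield return pattern gives you an unbounded prime-number sequence, and just like the fibonacci example, the caller supplies the bound with Take().

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class Primes
    {
        // Lazily yields the unbounded sequence of prime numbers.
        // Like Fibonacci.Sequence(), it never allocates a backing
        // collection; the caller decides when to stop pulling.
        public static IEnumerable<int> Sequence()
        {
            for (var candidate = 2; ; candidate++)
            {
                var isPrime = true;
                for (var divisor = 2; divisor * divisor <= candidate; divisor++)
                {
                    if (candidate % divisor == 0)
                    {
                        isPrime = false;
                        break;
                    }
                }
                if (isPrime)
                {
                    yield return candidate;
                }
            }
        }
    }

    // usage: the caller supplies the bound, e.g. the first 10 primes
    // foreach (var p in Primes.Sequence().Take(10)) { Console.WriteLine(p); }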

    Read the article

  • PowerShell Script to Enumerate SharePoint 2010 or 2013 Permissions and Active Directory Group Membership

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2013/07/01/powershell-script-to-enumerate-sharepoint-2010-or-2013-permissions-and.aspx

In this post I will present a script to enumerate SharePoint 2010 or 2013 permissions across the entire farm down to the site (SPWeb) level. As a bonus this script also recursively expands the membership of any Active Directory (AD) group, including nested groups, which you wouldn't be able to find through the SharePoint UI.

History

Back in 2009 (over 4 years ago now) I published one of my most read blog posts about enumerating SharePoint 2007 permissions. I finally got around to updating that script to remove deprecated APIs, support the SharePoint 2010 cmdlets, and fix a few bugs. There are 2 things the old script did that I had to remove due to major architectural or procedural changes in the script:
- Indenting the XML output
- Ability to search for a specific user

I plan to add back the ability to search for a specific user but wanted to get this version published first. As for indenting the XML, that could be added but would take some effort. If there is user demand for it (let me know in the comments or email me using the contact button at the top of the blog) I'll move it up in priorities.

As a side note you may also notice that I'm not using the Active Directory cmdlets. This was a conscious decision since not all environments have them available. Instead I'm relying on the older [ADSI] type accelerator and APIs. It does add a significant amount of code to the script but it is necessary for compatibility. Hopefully in a few years if I need to update again I can remove that legacy code.

Solution

Below is the script to enumerate SharePoint 2010 and 2013 permissions down to the site level. You can also download it from my SkyDrive account or from my posting on the TechNet Script Center Repository: http://gallery.technet.microsoft.com/scriptcenter/Enumerate-SharePoint-2010-35976bdb

    ###########################################################
    #DisplaySPWebApp8.ps1
    #
    #Author: Brian T. Jackett
    #Last Modified Date: 2013-07-01
    #
    #Traverse the entire web app site by site to display
    # hierarchy and users with permissions to site.
    ###########################################################

    function Expand-ADGroupMembership
    {
        Param
        (
            [Parameter(Mandatory=$true,
                       Position=0)]
            [string]
            $ADGroupName,
            [Parameter(Position=1)]
            [string]
            $RoleBinding
        )
        Process
        {
            $roleBindingText = ""
            if(-not [string]::IsNullOrEmpty($RoleBinding))
            {
                $roleBindingText = " RoleBindings=`"$RoleBinding`""
            }
            Write-Output "<ADGroup Name=`"$($ADGroupName)`"$roleBindingText>"
            $domain = $ADGroupName.Substring(0, $ADGroupName.IndexOf("\") + 1)
            $groupName = $ADGroupName.Remove(0, $ADGroupName.IndexOf("\") + 1)

            #BEGIN - CODE ADAPTED FROM SCRIPT CENTER SAMPLE CODE REPOSITORY
            #http://www.microsoft.com/technet/scriptcenter/scripts/powershell/search/users/srch106.mspx
            #GET AD GROUP FROM DIRECTORY SERVICES SEARCH
            $strFilter = "(&(objectCategory=Group)(name="+($groupName)+"))"
            $objDomain = New-Object System.DirectoryServices.DirectoryEntry
            $objSearcher = New-Object System.DirectoryServices.DirectorySearcher
            $objSearcher.SearchRoot = $objDomain
            $objSearcher.Filter = $strFilter
            # specify properties to be returned
            $colProplist = ("name","member","objectclass")
            foreach ($i in $colPropList)
            {
                $catcher = $objSearcher.PropertiesToLoad.Add($i)
            }
            $colResults = $objSearcher.FindAll()
            #END - CODE ADAPTED FROM SCRIPT CENTER SAMPLE CODE REPOSITORY

            foreach ($objResult in $colResults)
            {
                if($objResult.Properties["Member"] -ne $null)
                {
                    foreach ($member in $objResult.Properties["Member"])
                    {
                        $indMember = [adsi] "LDAP://$member"
                        $fullMemberName = $domain + ($indMember.Name)
                        # if child AD group continue down chain
                        if(($indMember | Select-Object -ExpandProperty objectclass) -contains "group")
                        {
                            Expand-ADGroupMembership -ADGroupName $fullMemberName
                        }
                        elseif(($indMember | Select-Object -ExpandProperty objectclass) -contains "user")
                        {
                            Write-Output "<ADUser>$fullMemberName</ADUser>"
                        }
                    }
                }
            }
            Write-Output "</ADGroup>"
        }
    } #end Expand-ADGroupMembership

    # main portion of script
    if((Get-PSSnapin -Name microsoft.sharepoint.powershell) -eq $null)
    {
        Add-PSSnapin Microsoft.SharePoint.PowerShell
    }
    $farm = Get-SPFarm
    Write-Output "<Farm Guid=`"$($farm.Id)`">"
    $webApps = Get-SPWebApplication
    foreach($webApp in $webApps)
    {
        Write-Output "<WebApplication URL=`"$($webApp.URL)`" Name=`"$($webApp.Name)`">"
        foreach($site in $webApp.Sites)
        {
            Write-Output "<SiteCollection URL=`"$($site.URL)`">"
            foreach($web in $site.AllWebs)
            {
                Write-Output "<Site URL=`"$($web.URL)`">"
                # if site inherits permissions from parent then stop processing
                if($web.HasUniqueRoleAssignments -eq $false)
                {
                    Write-Output "<!-- Inherits role assignments from parent -->"
                }
                # else site has unique permissions
                else
                {
                    foreach($assignment in $web.RoleAssignments)
                    {
                        if(-not [string]::IsNullOrEmpty($assignment.Member.Xml))
                        {
                            $roleBindings = ($assignment.RoleDefinitionBindings | Select-Object -ExpandProperty name) -join ","
                            # check if assignment is SharePoint Group
                            if($assignment.Member.XML.StartsWith('<Group') -eq "True")
                            {
                                Write-Output "<SPGroup Name=`"$($assignment.Member.Name)`" RoleBindings=`"$roleBindings`">"
                                foreach($SPGroupMember in $assignment.Member.Users)
                                {
                                    # if SharePoint group member is an AD Group
                                    if($SPGroupMember.IsDomainGroup)
                                    {
                                        Expand-ADGroupMembership -ADGroupName $SPGroupMember.Name
                                    }
                                    # else SharePoint group member is an AD User
                                    else
                                    {
                                        # remove claim portion of user login
                                        #Write-Output "<ADUser>$($SPGroupMember.UserLogin.Remove(0,$SPGroupMember.UserLogin.IndexOf("|") + 1))</ADUser>"
                                        Write-Output "<ADUser>$($SPGroupMember.UserLogin)</ADUser>"
                                    }
                                }
                                Write-Output "</SPGroup>"
                            }
                            # else an individually listed AD group or user
                            else
                            {
                                if($assignment.Member.IsDomainGroup)
                                {
                                    Expand-ADGroupMembership -ADGroupName $assignment.Member.Name -RoleBinding $roleBindings
                                }
                                else
                                {
                                    # remove claim portion of user login
                                    #Write-Output "<ADUser>$($assignment.Member.UserLogin.Remove(0,$assignment.Member.UserLogin.IndexOf("|") + 1))</ADUser>"
                                    Write-Output "<ADUser RoleBindings=`"$roleBindings`">$($assignment.Member.UserLogin)</ADUser>"
                                }
                            }
                        }
                    }
                }
                Write-Output "</Site>"
                $web.Dispose()
            }
            Write-Output "</SiteCollection>"
            $site.Dispose()
        }
        Write-Output "</WebApplication>"
    }
    Write-Output "</Farm>"

The output from the script can be sent to an XML file which you can then explore using the [XML] type accelerator. This lets you explore the XML structure however you see fit. See the screenshot below for an example.

If you do view the XML output through a text editor (Notepad++ for me) notice the format. Below we see a SharePoint site that has a SharePoint group Demo Members with Edit permissions assigned. Demo Members has an AD group corp\developers as a member. corp\developers has a child AD group called corp\DevelopersSub with 1 AD user in that sub group. As you can see, the script recursively expands the AD hierarchy.

Conclusion

It took me 4 years to finally update this script but I'm happy to get this published.
I was able to fix a number of errors and smooth out some rough edges. I plan to develop this into a more full-fledged tool over the next year with more features and flexibility (copy permissions, search for an individual user or group, optionally enumerate lists / items, etc.). If you have any feedback, feature requests, or issues running it, please let me know. Enjoy the script!

-Frog Out

    Read the article

  • Exception Handling Differences Between 32/64 Bit

    - by Alois Kraus
    I do quite a bit of debugging of .NET applications, but from time to time I see things that are impossible (at first look). I may ask you, dear reader, what your mental exception handling model is. Exception handling is easy after all, right? Let's suppose the following code:

    private void F1(object sender, EventArgs e)
    {
        try
        {
            F2();
        }
        catch (Exception ex)
        {
            throw new Exception("even worse Exception");
        }
    }

    private void F2()
    {
        try
        {
            F3();
        }
        finally
        {
            throw new Exception("other exception");
        }
    }

    private void F3()
    {
        throw new NotImplementedException();
    }

What will the call stack look like when you break into the catch (Exception) clause in Windbg (32 and 64 bit on .NET 3.5 SP1)? The mental model I have is that when an exception is thrown, the stack frames are unwound until the catch handler can execute. An exception does propagate the call chain upwards. So when F3 throws an exception, the control flow will resume at the finally handler in F2, which throws another exception hiding the original one (that is nasty), and then the new exception will be caught in F1 where the catch handler is executed. So in the catch handler in F1 we should see only the F1 stack frame as the call stack, right? Well, let's try it out in Windbg. For this I created a simple Windows Forms application with one button which executes the F1 method in its click handler. When you compile the application for 64 bit and the catch handler is reached, you will find the following with these commands in Windbg:

    Load sos extension from the same path where mscorwks was loaded in the current process
    .loadby sos mscorwks

    Break on CLR exceptions
    sxe clr

    Continue execution
    g

    Dump mixed call stack (C++ and .NET stacks interleaved)
    0:000> !DumpStack
    OS Thread Id: 0x1d8 (0)
    Child-SP         RetAddr          Call Site
    00000000002c88c0 000007fefa68f0bd KERNELBASE!RaiseException+0x39
    00000000002c8990 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295
    00000000002c8a60 000007ff005dd7f4 mscorwks!JIT_Throw+0x130
    00000000002c8c10 000007fefa6942e1 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)+0xb4
    00000000002c8c60 000007fefa661012 mscorwks!ExceptionTracker::CallHandler+0x145
    00000000002c8d60 000007fefa711a72 mscorwks!ExceptionTracker::CallCatchHandler+0x9e
    00000000002c8df0 0000000077b055cd mscorwks!ProcessCLRException+0x25e
    00000000002c8e90 0000000077ae55f8 ntdll!RtlpExecuteHandlerForUnwind+0xd
    00000000002c8ec0 000007fefa637c1a ntdll!RtlUnwindEx+0x539
    00000000002c9560 000007fefa711a21 mscorwks!ClrUnwindEx+0x36
    00000000002c9a70 0000000077b0554d mscorwks!ProcessCLRException+0x20d
    00000000002c9b10 0000000077ae5d1c ntdll!RtlpExecuteHandlerForException+0xd
    00000000002c9b40 0000000077b1fe48 ntdll!RtlDispatchException+0x3cb
    00000000002ca220 000007fefdaeaa7d ntdll!KiUserExceptionDispatcher+0x2e
    00000000002ca7e0 000007fefa68f0bd KERNELBASE!RaiseException+0x39
    00000000002ca8b0 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295
    00000000002ca980 000007ff005dd8df mscorwks!JIT_Throw+0x130
    00000000002cab30 000007fefa6942e1 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F2()+0x9f
    00000000002cab80 000007fefa71b5b3 mscorwks!ExceptionTracker::CallHandler+0x145
    00000000002cac80 000007fefa70dcd0 mscorwks!ExceptionTracker::ProcessManagedCallFrame+0x683
    00000000002caed0 000007fefa7119af mscorwks!ExceptionTracker::ProcessOSExceptionNotification+0x430
    00000000002cbd90 0000000077b055cd mscorwks!ProcessCLRException+0x19b
    00000000002cbe30 0000000077ae55f8 ntdll!RtlpExecuteHandlerForUnwind+0xd
    00000000002cbe60 000007fefa637c1a ntdll!RtlUnwindEx+0x539
    00000000002cc500 000007fefa711a21 mscorwks!ClrUnwindEx+0x36
    00000000002cca10 0000000077b0554d mscorwks!ProcessCLRException+0x20d
    00000000002ccab0 0000000077ae5d1c ntdll!RtlpExecuteHandlerForException+0xd
    00000000002ccae0 0000000077b1fe48 ntdll!RtlDispatchException+0x3cb
    00000000002cd1c0 000007fefdaeaa7d ntdll!KiUserExceptionDispatcher+0x2e
    00000000002cd780 000007fefa68f0bd KERNELBASE!RaiseException+0x39
    00000000002cd850 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295
    00000000002cd920 000007ff005dd968 mscorwks!JIT_Throw+0x130
    00000000002cdad0 000007ff005dd875 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F3()+0x48
    00000000002cdb10 000007ff005dd786 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F2()+0x35
    00000000002cdb60 000007ff005dbe6a WindowsFormsApplication1!WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)+0x46
    00000000002cdbc0 000007ff005dd452 System_Windows_Forms!System.Windows.Forms.Control.OnClick(System.EventArgs)+0x5a

Hm, okaaay. I see my method F1 two times in this call stack. Looks like we got some recursion bug. But that can't be, given the obvious code above. Let's try the same thing in a 32 bit process.

    0:000> !DumpStack
    OS Thread Id: 0x33e4 (0)
    Current frame: KERNELBASE!RaiseException+0x58
    ChildEBP RetAddr  Caller,Callee
    0028ed38 767db727 KERNELBASE!RaiseException+0x58, calling ntdll!RtlRaiseException
    0028ed4c 68b9008c mscorwks!Binder::RawGetClass+0x20, calling mscorwks!Module::LookupTypeDef
    0028ed5c 68b904ff mscorwks!Binder::IsClass+0x23, calling mscorwks!Binder::RawGetClass
    0028ed68 68bfb96f mscorwks!Binder::IsException+0x14, calling mscorwks!Binder::IsClass
    0028ed78 68bfb996 mscorwks!IsExceptionOfType+0x23, calling mscorwks!Binder::IsException
    0028ed80 68bfbb1c mscorwks!RaiseTheExceptionInternalOnly+0x2a8, calling KERNEL32!RaiseExceptionStub
    0028eda8 68ba0713 mscorwks!Module::ResolveStringRef+0xe0, calling mscorwks!BaseDomain::GetStringObjRefPtrFromUnicodeString
    0028edc8 68b91e8d mscorwks!SetObjectReferenceUnchecked+0x19
    0028ede0 68c8e910 mscorwks!JIT_Throw+0xfc, calling mscorwks!RaiseTheExceptionInternalOnly
    0028ee44 68c8e734 mscorwks!JIT_StrCns+0x22, calling mscorwks!LazyMachStateCaptureState
    0028ee54 68c8e865 mscorwks!JIT_Throw+0x1e, calling mscorwks!LazyMachStateCaptureState
    0028eea4 02ffaecd (MethodDesc 0x7af08c +0x7d WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)), calling mscorwks!JIT_Throw
    0028eeec 02ffaf19 (MethodDesc 0x7af098 +0x29 WindowsFormsApplication1.Form1.F2()), calling 06370634
    0028ef58 02ffae37 (MethodDesc 0x7a7bb0 +0x4f System.Windows.Forms.Control.OnClick(System.EventArgs))

That does look more familiar. The call stack has been unwound and we see only some frames into the history where the debugger was smart enough to find out that we have called F2 from F1.
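As a present-day aside of my own (not from the original post): C# 6 exception filters make the two-pass mechanics explained next directly observable from managed code, because a filter runs during the first pass, before any finally block unwinds the stack. A minimal sketch:

    using System;

    class TwoPassDemo
    {
        static bool Observe()
        {
            // Runs during the FIRST pass, before F2's finally executes.
            Console.WriteLine("filter probed");
            return true; // commit to catching; the unwind starts now
        }

        static void F2()
        {
            try { throw new NotImplementedException(); }
            finally { Console.WriteLine("finally runs during the second pass"); }
        }

        static void Main()
        {
            try { F2(); }
            catch (Exception) when (Observe())
            {
                Console.WriteLine("catch handler runs last");
            }
        }
    }

Running this prints "filter probed" before the finally message, the same first-pass/second-pass ordering that the 64 bit stack above makes visible.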
The exception handling on 64 bit systems works quite differently, and it has the nice property of remembering the called methods not only during the first pass through the exception filter clauses (during the first pass, all catch handlers are asked whether they are going to catch the exception that is about to be thrown) but also after the actual stack unwind has taken place. This makes it possible to follow not only the call stack right at the moment but also to look into the "history" of the catch/finally clauses. In a 64 bit process you only need to look at the ExceptionTracker to find out if a catch or finally handler was called. The two frames ProcessManagedCallFrame/CallHandler indicate a finally clause, whereas CallCatchHandler/CallHandler indicates a catch clause. That was an interesting one.

Oh, and by the way: if you manage to load the Microsoft symbols you can also find out the hidden exception. When you encounter in the call stack a line like

    0016eb34 75b79617 KERNELBASE!RaiseException+0x58 ====> Exception Code e0434f4d cxr@16e850 exr@16e838

then it is a good idea to execute .exr 16e838 and !analyze -v to find out more. In the managed world it is even easier, since we can dump the objects allocated on the stack which have not yet been garbage collected to look at former method parameters. The command !dso, which is the abbreviation for dump stack objects, will give you:

    0:000> !dso
    OS Thread Id: 0x46c (0)
    ESP/REG  Object   Name
    0016dd4c 020737f0 System.Exception
    0016dd98 020737f0 System.Exception
    0016dda8 01f5c6cc System.Windows.Forms.Button
    0016ddac 01f5d2b8 System.EventHandler
    0016ddb0 02071744 System.Windows.Forms.MouseEventArgs
    0016ddc0 01f5d2b8 System.EventHandler
    0016ddcc 01f5c6cc System.Windows.Forms.Button
    0016dddc 020737f0 System.Exception
    0016dde4 01f5d2b8 System.EventHandler
    0016ddec 02071744 System.Windows.Forms.MouseEventArgs
    0016de40 020737f0 System.Exception
    0016de80 02071744 System.Windows.Forms.MouseEventArgs
    0016de8c 01f5d2b8 System.EventHandler
    0016de90 01f5c6cc System.Windows.Forms.Button
    0016df10 02073784 System.SByte[]
    0016df5c 02073684 System.NotImplementedException
    0016e2a0 02073684 System.NotImplementedException
    0016e2e8 01ed69f4 System.Resources.ResourceManager

From there it is easy to do

    0:000> !pe 02073684
    Exception object: 02073684
    Exception type: System.NotImplementedException
    Message: Die Methode oder der Vorgang sind nicht implementiert. ("The method or operation is not implemented.")
    InnerException: <none>
    StackTrace (generated):
        SP       IP       Function
        0016ECB0 006904AD WindowsFormsApplication2!WindowsFormsApplication2.Form1.F3()+0x35
        0016ECC0 00690411 WindowsFormsApplication2!WindowsFormsApplication2.Form1.F2()+0x29
        0016ECF0 0069038F WindowsFormsApplication2!WindowsFormsApplication2.Form1.F1(System.Object, System.EventArgs)+0x3f
    StackTraceString: <none>
    HResult: 80004001

to see the former exception. That's all for today.

    Read the article

  • What more a Business Service can do?

    - by Rajesh Sharma
    Business services can be accessed from outside the application via XAI inbound service, or from within the application via scripting, Java, or info zones. Below is an example of what you can do with a business service wrapping an info zone.

Generally, a business service is specific to a page service program which references a maintenance object; that means one business service = one service program = one maintenance object. There have been quite a few threads in the forum around this topic where the business service is misconstrued to perform services only on a single object, e.g. only for CILCSVAP - SA Page Maintenance, CILCPRMP - Premise Page Maintenance, CILCACCP - Account Page Maintenance, etc.

So what do you do when you want to retrieve some "non-persistent" field or information associated with some object/entity? Consider a few business requirements:
- Retrieve all the field activities associated to an account.
- Retrieve the last bill date for an account.
- Retrieve the next bill date for an account.

It can be as simple as described below. For this post we'll use the first scenario - retrieve all the field activities associated to an account. To achieve this we'll have to do the following:

Step 1: Define an info zone

(A basic zone of type F1-DE-SINGLE - Info Data Explorer - Single SQL has been used; you can use F1-DE - Info Data Explorer - Multiple SQLs for more complex scenarios.)

    Parameter                  Value To Enter
    User Filter 1              F1
    Initial Display Columns    C1 C2 C3
    SQL Condition              F1
    SQL Statement              SELECT FA_ID, FA_STATUS_FLG, CRE_DTTM
                               FROM CI_FA
                               WHERE SP_ID IN
                                   (SELECT SP_ID
                                    FROM CI_SA_SP
                                    WHERE SA_ID IN
                                        (SELECT SA_ID
                                         FROM CI_SA
                                         WHERE ACCT_ID = :F1))
    Column 1                   source=SQLCOL sqlcol=FA_ID
    Column 2                   source=SQLCOL sqlcol=FA_STATUS_FLG
    Column 3                   type=TIME source=SQLCOL sqlcol=CRE_DTTM order=DESC

Note: The zone code specified was 'CM_ACCTFA'.

Step 2: Define a business service

Create a business service linked to 'Service Name' FWLZDEXP - Data Explorer. The schema will look like this:

    <schema>
        <zoneCd mapField="ZONE_CD" default="CM_ACCTFA"/>
        <accountId mapField="F1_VALUE"/>
        <rowCount mapField="ROW_CNT"/>
        <result type="group">
            <selectList type="list" mapList="DE">
                <faId mapField="COL_VALUE">
                    <row mapList="DE_VAL">
                        <SEQNO is="1"/>
                    </row>
                </faId>
                <status mapField="COL_VALUE">
                    <row mapList="DE_VAL">
                        <SEQNO is="2"/>
                    </row>
                </status>
                <createdDateTime mapField="COL_VALUE">
                    <row mapList="DE_VAL">
                        <SEQNO is="3"/>
                    </row>
                </createdDateTime>
            </selectList>
        </result>
    </schema>

What's next? As mentioned above, you can invoke this business service from an outside application via XAI inbound service, or call this business service from within a script.

Step 3: Create a XAI inbound service for the above created business service

Step 4: Test the inbound service

Go to XAI Submission and test the newly created service:

    <RXS_AccountFA>
        <accountId>5922116763</accountId>
    </RXS_AccountFA>
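To round out step 4 with a sketch of my own (not from the original post): outside of the XAI Submission page, any HTTP client can post that same XML to the XAI servlet. The host, port, path, and the absent authentication below are assumptions for illustration - check your environment's XAI configuration for the real endpoint.

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class XaiSubmissionClient
    {
        static async Task Main()
        {
            // Hypothetical XAI servlet endpoint - substitute your
            // environment's host and port from the XAI configuration.
            const string url = "http://ouaf-server:6500/XAIApp/xaiserver";
            const string payload =
                "<RXS_AccountFA><accountId>5922116763</accountId></RXS_AccountFA>";

            using (var client = new HttpClient())
            {
                var content = new StringContent(payload, Encoding.UTF8, "text/xml");
                HttpResponseMessage response = await client.PostAsync(url, content);
                // The response body carries the populated schema (faId,
                // status, createdDateTime rows) for the given account.
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }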

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we'll be talking about database testing and refactoring. In 3 posts we'll cover:
- SQLU part 1 - What and why of database testing
- SQLU part 2 - What and why of database refactoring
- SQLU part 3 - Tools of the trade

With that out of the way let us sharpen our pencils and get going.

Why test a database

The sad state of the industry today is that there is very little emphasis on testing in general. Test driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment they are mostly viewed as a waste of time. This is because the average person (let's not fool ourselves, we're all average) is unable to think about lower future costs in relation to a little more current work. It's orders of magnitude easier to know about the current costs in relation to the current amount of work. That's why programmers convince themselves testing is a waste of time.

However we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we're likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we'll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests we'd know instantly what broke where.

Imagine we fix a reported bug. We check in the code, deploy it and the users are happy. Until we get a call 2 weeks later that a certain monthly process has stopped working. What we don't know is that this process was developed by a long gone coworker and for some reason it relied on that same bug we've happily fixed. There's no way we could've known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there's this ETL job that relied on data from that monthly process. Now that we've fixed the process it's giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there's this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the "Loop of maintenance horror". With the loop eventually comes blame. Here's a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it.

All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?
What to test in a database

Now that we've decided we'll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on!

There are 2 main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that the internal changes to the system haven't caused the input/output behavior of the system to change. The most important things to test here are the edge conditions. It's where most programs break. Having good edge condition tests we can be more confident that the system's changes won't break. White box testing has full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc. White and black box tests should be complementary to each other as they are very much interconnected.

Testing database routines includes testing stored procedures, views, user defined functions and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for 2 things: data and schema. When testing schema we only care about the columns and the data types they're returning. After all, the schema is the contract to the outside systems: if it changes, we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells SQL Server to return only empty result sets, which speeds up tests because no data is returned to the client. After we've validated the schema we have to test the returned data. There is no other way to do this but to have the expected data known before the test executes and compare that data to the database routine output.

Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well. But the biggest problem here are web apps: they usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad.

Load testing ensures that our database can handle peak loads. One often overlooked tool for load testing is Microsoft's OSTRESS tool. It's part of the RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by looking at why certain queries are slow and what to do to fix them.

One particular problem to think about is how to begin testing existing databases. The first thing we have to do is to get to know those databases. We can't test something when we don't know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc. The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we've done that we move on to the next grouping and so on until we've covered the whole database. Then we move on to the next one.
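To make the schema-contract idea concrete, here is a small client-side sketch of my own (the connection string and query are placeholders, not from the original post). CommandBehavior.SchemaOnly asks SQL Server for metadata only; the classic SqlClient provider implements it with the same SET FMTONLY ON trick mentioned above, so no rows travel over the wire.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class SchemaContractTest
    {
        static void Main()
        {
            // Placeholders - point these at the routine whose contract you test.
            const string connStr = "Server=.;Database=MyDb;Integrated Security=true";
            const string routine = "SELECT * FROM dbo.SomeView";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(routine, conn))
            {
                conn.Open();
                // SchemaOnly returns column metadata without any data rows.
                using (var reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
                {
                    DataTable schema = reader.GetSchemaTable();
                    foreach (DataRow column in schema.Rows)
                    {
                        // In a real test, compare these against the expected contract.
                        Console.WriteLine("{0} ({1})",
                            column["ColumnName"], column["DataType"]);
                    }
                }
            }
        }
    }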
Database testing is a topic that we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems.

Allow me to summarize the idea. It is:
- Build a compute platform optimized to run your technologies
- Develop application-aware, intelligently caching storage components
- Take an impressively fast network technology interconnecting it with the compute nodes
- Tune the application to scale with the nodes to yet unseen performance
- Reduce the amount of data moving via compression
- Provide this all in a pre-integrated single product with a single-pane management interface

All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put these all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we learned to appreciate and respect for their features:
- The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
- While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar with single-threaded performance too - a humble 5x better than their ancestors.
- Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between these two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
- Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
- A PCI controller right on the chip for customers who need I/O performance.
- Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated, independent of each other, within a physical server.

II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide the DB backend storage for the Oracle Databases running on the SPARC SuperCluster; that is what they are built and tuned for: DB performance. These storage cells are SQL-aware.
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw datablocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to prefilter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.
- They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
- They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
- Of course one can't simply rely on disks for performance; there is Flash storage included for caching.

III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:
- Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.
- RDMA: Remote Direct Memory Access. This technology allows the DB nodes to do exactly that - remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
- You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.

IV. It includes a general-purpose storage too: the ZFSSA, which is a unified storage providing NAS and SAN access, with the following features:
- NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
- All the ZFS features onboard: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
- Storage heads in an HA-cluster configuration, providing availability of the data.
- DTrace Live Analytics in a web-based administration UI.
- Being a general-purpose application data store for your non-database applications running on the SPARC SuperCluster over whichever protocol they prefer, easily replicating, snapshotting, and cloning data for them.

There's a lot of great technology included in Oracle's SPARC SuperCluster; we have walked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages to run Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments such as SAP if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
Here are the key takeaways once again. The SPARC SuperCluster:
- Is a pre-integrated Engineered System
- Contains SPARC T4-4 servers with built-in virtualization, cryptography, and dynamic threading
- Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
- Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
- Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
- Can grow from a single half-rack to several full racks in size
- Supports the consolidation of hundreds of applications

To summarize: all these technologies are great by themselves, but the real value is like in every other Oracle Engineered System: integration. All these technologies are tuned to perform together. Together they are way more than the sum of all - and a careful and actually very time-consuming integration process is necessary to orchestrate all these for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and most importantly time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

-- charlie

    Read the article

  • More Great Improvements to the Windows Azure Management Portal

    - by ScottGu
    Over the last 3 weeks we've released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:
- Localization support for 6 languages
- Operation log support
- Support for SQL Database metrics
- Virtual Machine enhancements (quick create Windows + Linux VMs)
- Web Site enhancements (support for creating sites in all regions, private GitHub repo deployment)
- Cloud Service improvements (deploy from storage account, configuration support of dedicated cache)
- Media Services enhancements (upload, encode, publish, stream all from within the portal)
- Virtual Networking usability enhancements
- Custom CNAME support with storage accounts

All of these improvements are now live in production and available to start using immediately. Below are more details on them.

Localization Support

The Windows Azure Portal now supports 6 languages: English, German, Spanish, French, Italian and Japanese. You can easily switch between languages by clicking on the avatar bar on the top right corner of the Portal. Selecting a different language will automatically refresh the UI within the portal in the selected language.

Operation Log Support

The Windows Azure Portal now supports the ability for administrators to review the "operation logs" of the services they manage, making it easy to see exactly what management operations were performed on them. You can query for these by selecting the "Settings" tab within the Portal and then choosing the "Operation Logs" tab within it. This displays a filter UI that enables you to query for operations by date and time. As of the most recent release we now show logs for all operations performed on Cloud Services and Storage Accounts. You can click on any operation in the list and click the "Details" button in the command bar to retrieve detailed status about it. This now makes it possible to retrieve details about every management operation performed. In future updates you'll see us extend the operation log capability to apply to all Windows Azure services, which will enable great post-mortem and audit support.

Support for SQL Database Metrics

You can now monitor the number of successful connections, failed connections and deadlocks in your SQL databases using the new "Dashboard" view provided on each SQL Database resource. Additionally, if the database is added as a "linked resource" to a Web Site or Cloud Service, monitoring metrics for the linked SQL database are shown along with the Web Site or Cloud Service metrics in the dashboard. This helps with viewing and managing aggregated information across both resources in your application.

Enhancements to Virtual Machines

The most recent Windows Azure Portal release brings with it some nice usability improvements to Virtual Machines:

Integrated Quick Create experience for Windows and Linux VMs

Creating a new Windows or Linux VM is now easy using the new "Quick Create" experience in the Portal. In addition to Windows VM templates you can also now select Linux image templates in the quick create UI. This makes it incredibly easy to create a new Virtual Machine in only a few seconds.

Enhancements to Web Sites

Prior to this past month's release, users were forced to choose a single geographical region when creating their first site. After that, subsequent sites could only be created in that same region.
This restriction has now been removed, and you can now create sites in any region at any time and have up to 10 free sites in each supported region. One of the new regions we've recently opened up is the "East Asia" region. This allows you to now deploy sites to North America, Europe and Asia simultaneously.

Private GitHub Repository Support

This past week we also enabled Git-based continuous deployment support for Web Sites from private GitHub and BitBucket repositories (previous to this you could only enable this with public repositories).

Enhancements to Cloud Services Experience

The most recent Windows Azure Portal release brings with it some nice usability improvements to Cloud Services:

Deploy a Cloud Service from a Windows Azure Storage Account

The Windows Azure Portal now supports deploying an application package and configuration file stored in a blob container in Windows Azure Storage. The ability to upload an application package from storage is available when you custom create, upload to, or update a cloud service deployment. To upload an application package and configuration, create a Cloud Service, then select the file upload dialog, and choose to upload from a Windows Azure Storage Account. To upload an application package from storage, click the "FROM STORAGE" button and select the application package and configuration file to use from the new blob storage explorer in the portal.

Configure Windows Azure Caching in a caching enabled cloud service

If you have deployed the new dedicated cache within a cloud service role, you can also now configure the cache settings in the portal by navigating to the configuration tab for your Cloud Service deployment. The configuration experience is similar to the one in Visual Studio when you create a cloud service and add a caching role. The portal now allows you to add or remove named caches and change the settings for the named caches, all from within the Portal and without needing to redeploy your application.

Enhancements to Media Services

You can now upload, encode, publish, and play your video content directly from within the Windows Azure Portal. This makes it incredibly easy to get started with Windows Azure Media Services and perform common tasks without having to write any code. Simply navigate to your media service and then click on the "Content" tab. All of the media content within your media service account will be listed here. Clicking the "Upload" button within the portal now allows you to upload a media file directly from your computer. This will cause the video file you chose from your local file-system to be uploaded into Windows Azure. Once uploaded, you can select the file within the content tab of the Portal and click the "Encode" button to transcode it into different streaming formats. The portal includes a number of pre-set encoding formats that you can easily convert media content into. Once you select an encoding and click the OK button, Windows Azure Media Services will kick off an encoding job that will happen in the cloud (no need for you to stand up or configure a custom encoding server). When it's finished, you can select the video in the "Content" tab and then click PUBLISH in the command bar to set up an origin streaming endpoint for it. Once the media file is published you can point apps against the public URL and play the content using Windows Azure Media Services, with no need to set up or run your own streaming server.
You can also now select the file and click the "Play" button in the command bar to play it using the streaming endpoint directly within the Portal. This makes it incredibly easy to try out and use Windows Azure Media Services and test out an end-to-end workflow without having to write any code. Once you test things out you can of course automate it using script or code, providing you with an incredibly powerful cloud media platform that you can use.

Enhancements to Virtual Network Experience

Over the last few months, we have received feedback on the complexity of the Virtual Network creation experience. With these most recent Portal updates, we have added a Quick Create experience that makes the creation experience very simple. All that an administrator now needs to do is to provide a VNET name, choose an address space and the size of the VNET address space. They no longer need to understand the intricacies of the CIDR format or walk through a 4-page wizard to create a VNET / subnet. This makes creating virtual networks really simple. The portal also now has a "Register DNS Server" task that makes it easy to register DNS servers and associate them with a virtual network.

Enhancements to Storage Experience

The portal now lets you register custom domain names for your Windows Azure Storage Accounts. To enable this, select a storage resource, go to the CONFIGURE tab for the storage account, and then click MANAGE DOMAIN on the command bar. Clicking "Manage Domain" will bring up a dialog that allows you to register any CNAME you want.

Summary

The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. One of the other cool features that is now live within the portal is our new Windows Azure Store, which makes it incredibly easy to try and purchase developer services from a variety of partners. It is an incredibly awesome new capability, and something I'll be doing a dedicated post about shortly.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Elegance, thy Name is jQuery

    - by SGWellens
    So, I'm browsing through some questions over on the Stack Overflow website and I found a good jQuery question just a few minutes old. Here is a link to it. It was a tough question; I knew that by answering it, I could learn new stuff and reinforce what I already knew: reading is good, doing is better. Maybe I could help someone in the process too. I cut and pasted the HTML from the question into my Visual Studio IDE and went back to Stack Overflow to reread the question. Dang, someone had already answered it! And it was a great answer. I never even had a chance to start analyzing the issue. Now I know what a one-legged man feels like in an ass-kicking contest. Nevertheless, since the question and answer were so interesting, I decided to dissect them and learn as much as possible.

The HTML consisted of some divs separated by h3 headings. Note the elements are laid out sequentially with no programmatic grouping:

    <h3 class="heading">Heading 1</h3>
    <div>Content</div>
    <div>More content</div>
    <div>Even more content</div>
    <h3 class="heading">Heading 2</h3>
    <div>some content</div>
    <div>some more content</div>
    <h3 class="heading">Heading 3</h3>
    <div>other content</div>
    </form>
    </body>

The requirement was to wrap a div around each h3 heading and the subsequent divs, grouping them into sections. Why? I don't know; I suppose if you screen-scraped some HTML from another site, you might want to reformat it before displaying it on your own. Anyways… Here is the marvelously succinct posted answer:

    $('.heading').each(function(){
        $(this).nextUntil('.heading').andSelf().wrapAll('<div class="section">');
    });

I was familiar with all the parts except for nextUntil and andSelf. But I'll analyze the whole answer for completeness. I'll do this by rewriting the posted answer in a different style and adding a boat-load of comments:

    function Test()
    {
        // $Sections is a jQuery object and it will contain three elements
        var $Sections = $('.heading');

        // use each to iterate over each of the three elements
        $Sections.each(function () {
            // $this is a jQuery object containing the current element
            // being iterated
            var $this = $(this);

            // nextUntil gets the following sibling elements until it reaches
            // an element with the CSS class 'heading'
            // andSelf adds in the source element (this) to the collection
            $this = $this.nextUntil('.heading').andSelf();

            // wrap the elements with a div
            $this.wrapAll('<div class="section" >');
        });
    }

The code here doesn't look nearly as concise and elegant as the original answer. However, unless you and your staff are jQuery masters, during development it really helps to work through algorithms step by step. You can step through this code in the debugger and examine the jQuery objects to make sure one step is working before proceeding on to the next. It's much easier to debug and troubleshoot when each logical coding step is a separate line of code.

Note: You may think the original code runs much faster than this version. However, the time difference is trivial: not enough to worry about: less than 1 millisecond (tested in IE and FF).

Note: You may want to jam everything into one line because it results in less traffic being sent to the client. That is true. However, most Internet servers now compress HTML and JavaScript by stripping out comments and white space (go to Bing or Google and view the source). This feature should be enabled on your server: let the server compress your code, you don't need to do it.
    Free Career Advice: Creating maintainable code is Job One—Maximum Priority—The Prime Directive. If you find yourself suddenly transferred to customer support, it may be that the code you are writing is not as readable as it could be and not as readable as it should be. Moving on… I created a CSS class to enhance the results:

        .section
        {
            background-color: yellow;
            border: 2px solid black;
            margin: 5px;
        }

    Here is the rendered output before… and after the jQuery code runs. Pretty cool! But while playing with this code, the logic of nextUntil began to bother me: What happens in the last section? What stops elements from being collected, since there are no more elements with the .heading class? The answer is: nothing. In this case it stopped collecting elements because it was at the end of the page. But what if there were additional HTML elements? I added an anchor tag and another div to the HTML:

        <h3 class="heading">Heading 1</h3>
        <div>Content</div>
        <div>More content</div>
        <div>Even more content</div>
        <h3 class="heading">Heading 2</h3>
        <div>some content</div>
        <div>some more content</div>
        <h3 class="heading">Heading 3</h3>
        <div>other content</div>
        <a>this is a link</a>
        <div>unrelated div</div>
        </form></body>

    The code as-is will include both the anchor and the unrelated div. This isn't what we want. My first attempt to correct this used the filter parameter of the nextUntil function:

        nextUntil('.heading', 'div')

    This will only collect div elements. But it merely skipped the anchor tag, and it still collected the unrelated div. The problem is we need a way to tell the nextUntil function when to stop. CSS selectors to the rescue!

        nextUntil('.heading, a')

    This tells nextUntil to stop collecting elements when it gets to an element with a .heading class OR when it gets to an anchor tag. In this case it solved the problem. FYI: The comma operator in a CSS selector allows multiple criteria. Bingo! One final note: we could have broken the code down even more. We could have replaced the andSelf function here:

        $this = $this.nextUntil('.heading, a').andSelf();

    With this:

        // get all the following siblings and then add the current item
        $this = $this.nextUntil('.heading, a');
        $this = $this.add(this);

    But in this case, the andSelf function reads real nice. In my opinion. Here's a link to a jsFiddle if you want to play with it. I hope someone finds this useful. Steve Wellens CodeProject

    Read the article

  • Can you help me fix my broken packages?

    - by Andreas Hartmann
    I would like to upgrade from 13.04 to 13.10, but some broken packages are preventing upgrade success: grep Broken /var/log/dist-upgrade/apt.log output: Broken libwayland-client0:amd64 Conflicts on libwayland0 [ amd64 ] < 1.0.5-0ubuntu1 > ( libs ) (< 1.1.0) Broken libunity9:amd64 Breaks on unity-common [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 > ( gnome ) (< 7.1.2) Broken cups-filters:amd64 Conflicts on ghostscript-cups [ amd64 ] < 9.07~dfsg2-0ubuntu3.1 > ( text ) Broken libpam-systemd:amd64 Conflicts on libpam-xdg-support [ amd64 ] < 0.2-0ubuntu2 > ( admin ) Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ amd64 ] < 0.9.13-1 > ( libs ) Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ i386 ] < 0.9.13-1 > ( libs ) Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ amd64 ] < 6.90.2daily13.04.05-0ubuntu1 > ( gnome ) (< 7.0.7) Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ i386 ] < none > ( none ) (< 7.0.7) Broken libaccount-plugin-generic-oauth:amd64 Conflicts on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30) Broken libaccount-plugin-generic-oauth:amd64 Breaks on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30) Broken libmutter0b:amd64 Breaks on libmutter0a [ amd64 ] < 3.6.3-0ubuntu2 > ( libs ) Broken python3-aptdaemon.pkcompat:amd64 Breaks on libpackagekit-glib2-14 [ amd64 ] < 0.7.6-3ubuntu1 > ( libs ) (<= 0.7.6-4) Broken apache2:amd64 Conflicts on apache2.2-common [ amd64 ] < 2.2.22-6ubuntu5.1 > ( httpd ) Broken chromium-codecs-ffmpeg-extra:amd64 Conflicts on chromium-codecs-ffmpeg [ amd64 ] < 28.0.1500.71-0ubuntu1.13.04.1 -> 29.0.1547.65-0ubuntu2 > ( universe/web ) Broken unity-scope-home:amd64 Conflicts on unity-lens-shopping [ amd64 ] < 6.8.0daily13.03.04-0ubuntu1 > ( gnome ) Broken libsnmp30:amd64 Breaks on libsnmp15 [ amd64 ] < 5.4.3~dfsg-2.7ubuntu1 > ( libs ) Broken apache2.2-bin:amd64 Breaks on gnome-user-share [ amd64 ] < 3.0.4-0ubuntu1 > ( gnome ) (< 3.8.0-2~) Broken libgjs0d:amd64 Conflicts on libgjs0c [ amd64 ] < 1.34.0-0ubuntu1 > ( libs ) Broken unity-gtk2-module:amd64 Conflicts on appmenu-gtk [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs ) Broken lib32asound2:amd64 Depends on libasound2 [ amd64 ] < 1.0.25-4ubuntu3.1 -> 1.0.27.2-1ubuntu6 > ( libs ) (= 1.0.25-4ubuntu3.1) Broken unity-gtk3-module:amd64 Conflicts on appmenu-gtk3 [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs ) Broken activity-log-manager:amd64 Conflicts on activity-log-manager-common [ amd64 ] < 0.9.4-0ubuntu6.2 > ( utils ) Broken libgtksourceview-3.0-0:amd64 Depends on libgtksourceview-3.0-common [ amd64 ] < 3.6.3-0ubuntu1 -> 3.8.2-0ubuntu1 > ( libs ) (< 3.7) Broken icaclient:amd64 Depends on lib32asound2 [ amd64 ] < 1.0.25-4ubuntu3.1 > ( libs ) Broken libunity-core-6.0-5:amd64 Depends on unity-services [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 -> 7.1.2+13.10.20131014.1-0ubuntu1 > ( gnome ) (= 7.0.0daily13.06.19~13.04-0ubuntu1) Broken libbamf3-1:amd64 Depends on bamfdaemon [ amd64 ] < 0.4.0daily13.06.19~13.04-0ubuntu1 -> 0.5.1+13.10.20131011-0ubuntu1 > ( libs ) (= 0.4.0daily13.06.19~13.04-0ubuntu1) Broken apache2-bin:amd64 Conflicts on apache2.2-bin [ amd64 ] < 2.2.22-6ubuntu5.1 -> 2.4.6-2ubuntu2 > ( httpd ) (< 2.3~) Output for cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list # deb cdrom:[Ubuntu 13.04 _Raring Ringtail_ - Release amd64 (20130424)]/ raring main restricted # See http://help.ubuntu.com/community/UpgradeNotes 
for how to upgrade to # newer versions of the distribution. deb http://de.archive.ubuntu.com/ubuntu/ raring main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://de.archive.ubuntu.com/ubuntu/ raring-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://de.archive.ubuntu.com/ubuntu/ raring universe deb http://de.archive.ubuntu.com/ubuntu/ raring-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://de.archive.ubuntu.com/ubuntu/ raring multiverse deb http://de.archive.ubuntu.com/ubuntu/ raring-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://security.ubuntu.com/ubuntu raring-security main restricted deb http://security.ubuntu.com/ubuntu raring-security universe deb http://security.ubuntu.com/ubuntu raring-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu raring partner # deb-src http://archive.canonical.com/ubuntu raring partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu raring main # deb-src http://extras.ubuntu.com/ubuntu raring main # deb http://linux.dropbox.com/ubuntu precise main output for sudo dpkg -l | grep -e "^iU" -e "^rc": rc ibm-lotus-cae 8.5.2-20100805.0821 i386 IBM Lotus Composite Application Editor rc ibm-lotus-cae-nl1 8.5.2-20100805.0821 i386 IBM Lotus CAE NL1 rc ibm-lotus-feedreader 8.5.2-20100805.0821 i386 Feeds for IBM Lotus Notes 8.5.2 rc ibm-lotus-feedreader-nl1 8.5.2-20100805.0821 i386 IBM Lotus Feed Reader NL1 rc ibm-lotus-notes 8.5.2-20100805.0821 i386 IBM Lotus Notes rc ibm-lotus-notes-core-de 8.5.2-20100805.0821 i386 IBM Lotus Notes Native German (de) rc ibm-lotus-notes-nl1 8.5.2-20100805.0821 i386 IBM Lotus Notes Java NL1 rc ibm-lotus-sametime 8.5.2-20100805.0821 i386 IBM Lotus Sametime rc ibm-lotus-symphony 8.5.2-20100805.0821 i386 IBM Lotus Symphony rc ibm-lotus-symphony-nl1 8.5.2-20100805.0821 i386 IBM Lotus Symphony NL1 rc libapache2-mod-php5filter 5.4.9-4ubuntu2.2 amd64 server-side, HTML-embedded scripting language (apache 2 filter module) rc libavcodec53:amd64 6:0.8.6-1ubuntu2 amd64 Libav codec library rc libavutil51:amd64 6:0.8.6-1ubuntu2 amd64 Libav utility library rc libmotif4:amd64 2.3.3-7ubuntu1 amd64 Open Motif - shared libraries rc linux-image-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP
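    The excerpt ends without an accepted answer. As a hedged sketch of the usual first-aid steps for this class of problem (standard apt/dpkg commands, not taken from the thread):

        sudo dpkg --configure -a        # finish any half-configured packages
        sudo apt-get -f install         # let apt try to repair broken dependencies
        # purge the leftover 'rc' (removed, config-files-only) packages listed above
        dpkg -l | awk '/^rc/ {print $2}' | xargs -r sudo apt-get -y purge
        sudo apt-get update && sudo apt-get dist-upgrade

    Third-party packages such as icaclient, which the log shows depends on the obsolete lib32asound2, typically have to be removed before do-release-upgrade will proceed.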

    Read the article

  • ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages

    - by DigiMortal
    If you are using AppFabric Access Control Services to authenticate users when they log in to your community site using Live ID, Google or some other popular identity provider, you need more than AuthorizeAttribute to make sure that users can access the content that is there for authenticated users only. In this posting I will show you how to extend the AuthorizeAttribute so that users must also have a filled-in user profile.

    Semi-authorized users

    When a user is authenticated through an external identity provider, not all identity providers give us the user name or the other information we ask of users when they join our site. What all identity providers have in common is a unique ID that helps you identify the user. For example, users authenticated through Windows Live ID by AppFabric ACS have no name specified. Google's identity provider is able to provide you with the user's name and e-mail address if the user agrees to publish this information to you. Both give you a unique ID for the user when he or she is successfully authenticated in their service. There is a logical shift between ASP.NET and my site in what counts as an authorized user. For ASP.NET MVC, a user is authorized when the user has an identity. For my site, a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system, and he or she is always identified by this username by other users. My solution is simple: I created my own action filter attribute that makes sure the user has a profile before a given method can be accessed; if the user has no profile, the browser is redirected to the join page.

    Illustrating the problem

    Usually we restrict access to a page using AuthorizeAttribute. The code is something like this:

        [Authorize]
        public ActionResult Details(string id)
        {
            var profile = _userRepository.GetUserByUserName(id);
            return View(profile);
        }

    If this page is only for site users and we have user profiles, then all users, the ones that have a profile and all the others that are merely authenticated, can access the information. That is expected, because all these users have successfully logged in to some service that is supported by AppFabric ACS. On my site, the users with no profile are in a grey area. They are halfway to being users because they have no username and profile on my site yet. So we need something that adds a profile-existence condition to user-only content:

        [ProfileRequired]
        public ActionResult Details(string id)
        {
            var profile = _userRepository.GetUserByUserName(id);
            return View(profile);
        }

    This attribute will solve our problem as soon as we implement it.

    ProfileRequiredAttribute: profiles are required to be fully authorized

    Here is my implementation of ProfileRequiredAttribute. It is pretty new and right now it is more of a working draft, but you can already play with it.
    public class ProfileRequiredAttribute : AuthorizeAttribute
    {
        private readonly string _redirectUrl;

        public ProfileRequiredAttribute()
        {
            _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];
            if (string.IsNullOrWhiteSpace(_redirectUrl))
                _redirectUrl = "~/";
        }

        public override void OnAuthorization(AuthorizationContext filterContext)
        {
            base.OnAuthorization(filterContext);

            var httpContext = filterContext.HttpContext;
            var identity = httpContext.User.Identity;

            if (!identity.IsAuthenticated || identity.GetProfile() == null)
                if (filterContext.Result == null)
                    httpContext.Response.Redirect(_redirectUrl);
        }
    }

    All methods with this attribute work as follows: if the user is not authenticated, he or she is redirected to the AppFabric ACS identity provider selection page; if the user is authenticated but has no profile, the user is by default redirected to the main page of the site, but if you have an application setting named JoinUrl, the user is redirected to that URL. The first case is handled by AuthorizeAttribute and the second one is handled by the custom logic in the ProfileRequiredAttribute class.

    GetProfile() extension method

    To get the user profile with less code in the places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read the user and identity provider identifiers out of the claims; based on this information, the user profile is read from the database. If you simply copy and paste this code it will probably not work for you, but you get the idea.

        public static User GetProfile(this IIdentity identity)
        {
            if (identity == null)
                return null;

            var context = HttpContext.Current;
            if (context.Items["UserProfile"] != null)
                return context.Items["UserProfile"] as User;

            var provider = identity.GetIdentityProvider();
            var nameId = identity.GetNameIdentifier();

            var rep = ObjectFactory.GetInstance<IUserRepository>();
            var profile = rep.GetUserByProviderAndNameId(provider, nameId);

            context.Items["UserProfile"] = profile;

            return profile;
        }

    To avoid round trips to the database I cache the user profile in the current request, because the chance that the profile changes in the meantime is minimal. The other reason is perhaps trickier: profile objects come from an Entity Framework context, and the context's lifetime is also scoped to the HTTP request.

    Conclusion

    This posting gave you some ideas about how to finish the user profile work when you use AppFabric ACS as an external authentication provider. Although there was a small shift between our interpretation of "authorized" and ASP.NET MVC's, we were easily able to solve the problem by extending AuthorizeAttribute to get all our requirements fulfilled. We also wrote an extension method for IIdentity that returns the user profile based on the claims and caches the profile in HTTP request scope.
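    One refinement worth considering (my suggestion, not part of the original draft): in ASP.NET MVC it is usually cleaner to short-circuit an authorization filter by setting filterContext.Result rather than calling Response.Redirect directly, since the rest of the pipeline then stops on its own:

        public override void OnAuthorization(AuthorizationContext filterContext)
        {
            base.OnAuthorization(filterContext);

            // If AuthorizeAttribute already produced a result (e.g. a 401), keep it
            if (filterContext.Result != null)
                return;

            var identity = filterContext.HttpContext.User.Identity;
            if (!identity.IsAuthenticated || identity.GetProfile() == null)
                filterContext.Result = new RedirectResult(_redirectUrl);
        }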

    Read the article

  • SQL SERVER – Create a Very First Report with the Report Wizard

    - by Pinal Dave
    This example is from the book Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site.

    What is the Report Wizard?

    In today's world automation is all around you. Henry Ford began building his Model T automobiles on a moving assembly line a century ago and changed the world. The moving assembly line allowed Ford to build identical cars quickly and cheaply. Henry Ford said in his autobiography, "Any customer can have a car painted any color that he wants so long as it is black." Today you can buy a car straight from the factory with your choice of several colors and with many options like back-up cameras, built-in navigation systems and heated leather seats. The assembly lines now use robots to perform some tasks along with human workers. When you order your new car, if you want something special that is not offered by the manufacturer, you will have to find a way to add it later. In computer software, we also have "assembly lines" called wizards. A wizard will ask you a series of questions, often branching to specific questions based on earlier answers, until you get to the end of the wizard. These wizards are used for many things, from something simple like setting up a rule in Outlook to performing administrative tasks on a server. Often, a wizard will get you part of the way to the end result, enough to get much of the tedious work out of the way. Once you get the product from the wizard, if the wizard is not capable of doing something you need, you can tweak the results.

    Create a Report with the Report Wizard

    Let's get started with your first report! Launch SQL Server Data Tools (SSDT) from the Start menu under SQL Server 2012. Once SSDT is running, click New Project to launch the New Project dialog box. On the left side of the screen expand Business Intelligence and select Reporting Services. Configure the properties, being sure to select Report Server Project Wizard as the type of report and to save the project in the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Project folder. Click OK and wait for the Report Wizard to launch. Click Next on the Welcome screen. On the Select the Data Source screen, make sure that New data source is selected. Type JProCo as the data source name. Make sure that Microsoft SQL Server is selected in the Type dropdown. Click Edit to configure the connection string on the Connection Properties dialog box. If your SQL Server database server is installed on your local computer, type in localhost for the Server name and select the JProCo database from the Select or enter a database name dropdown. Click OK to dismiss the Connection Properties dialog box. Check Make this a shared data source and click Next. On the Design the Query screen, you can use the query builder to build a query if you wish. Since this post is not meant to teach you T-SQL queries, you will copy all queries from files that have been provided for you. In the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Resources folder open the sales by employee.sql file. Copy and paste the code from the file into the Query string text box. Click Next. On the Select the Report Type screen, choose Tabular and click Next. On the Design the Table screen, you have to figure out the groupings of the report. How do you do this? Well, you often need to know a bit about the data and the report requirements. I often draw the report out on paper first to help me determine the groups. In the case of this report, I could group the data several ways.
    Do I want to see the data grouped by Year and Month? Do I want to see the data grouped by Employee or Category? The only thing I know for sure ahead of time is that TotalSales goes in the Details section. Let's assume that the CIO asked to see the data grouped first by Year and Month, then by Category. Move the fields to the right-hand side by selecting each field and clicking Page >, Group >, or Details >, and then click Next. On the Choose the Table Layout screen, select Stepped and check Include subtotals and Enable drilldown. On the Choose the Style screen, choose any color scheme you wish (unlike the Model T) and click Next; I chose the default, Slate. On the Choose the Deployment Location screen, change the deployment folder to Chapter 3 and click Next. At the Completing the Wizard screen, name your report Employee Sales and click Finish. After clicking Finish, the report and a shared data source will appear in the Solution Explorer, and the report will also be visible in Design view. Click the Preview tab at the top. This report expects the user to supply a year, which the report will then use as a filter. Type in a year between 2006 and 2013 and click View Report. Click the plus sign next to the Sales Year to expand the report to see the months, then expand again to see the categories and finally the details. You now have the assembly-line report completed, and you probably already have some ideas on how to improve it.

    Tomorrow's Post

    Tomorrow's blog post will show how to create your own data sources and data sets in SSRS. If you want to learn SSRS in simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
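    The contents of sales by employee.sql are not reproduced in this excerpt; a hypothetical query shaped like the one the wizard steps imply (table and column names are invented, the real JProCo schema may differ) would be:

        SELECT YEAR(so.OrderDate)   AS SalesYear,
               MONTH(so.OrderDate)  AS SalesMonth,
               c.CategoryName       AS Category,
               e.LastName           AS Employee,
               SUM(so.SaleAmount)   AS TotalSales
        FROM   SalesOrders so
               JOIN Employees  e ON e.EmployeeID = so.EmployeeID
               JOIN Categories c ON c.CategoryID = so.CategoryID
        GROUP BY YEAR(so.OrderDate), MONTH(so.OrderDate), c.CategoryName, e.LastName;

    A query of this shape yields exactly the Year/Month, Category, Employee and TotalSales fields used in the grouping steps above.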

    Read the article

  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
    1xx - Informational These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes: 100 - Continue. 101 - Switching protocols. 2xx - Success These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes: 200 - OK. The client request has succeeded. 201 - Created. 202 - Accepted. 203 - Nonauthoritative information. 204 - No content. 205 - Reset content. 206 - Partial content. 3xx - Redirection These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server. Or, the client browser may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes: 301 - Moved permanently. 302 - Object moved. 304 - Not modified. 307 - Temporary redirect. 4xx - Client error These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist. Or, the client browser may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes: 400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error: 400.1 - Invalid Destination Header. 400.2 - Invalid Depth Header. 400.3 - Invalid If Header. 400.4 - Invalid Overwrite Header. 400.5 - Invalid Translate Header. 400.6 - Invalid Request Body. 400.7 - Invalid Content Length. 400.8 - Invalid Timeout. 400.9 - Invalid Lock Token. 401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log: 401.1 - Logon failed. 401.2 - Logon failed due to server configuration. 401.3 - Unauthorized due to ACL on resource. 401.4 - Authorization failed by filter. 401.5 - Authorization failed by ISAPI/CGI application. 403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error: 403.1 - Execute access forbidden. 403.2 - Read access forbidden. 403.3 - Write access forbidden. 403.4 - SSL required. 403.5 - SSL 128 required. 403.6 - IP address rejected. 403.7 - Client certificate required. 403.8 - Site access denied. 403.9 - Forbidden: Too many clients are trying to connect to the Web server. 403.10 - Forbidden: Web server is configured to deny Execute access. 403.11 - Forbidden: Password has been changed. 403.12 - Mapper denied access. 403.13 - Client certificate revoked. 403.14 - Directory listing denied. 403.15 - Forbidden: Client access licenses have exceeded limits on the Web server. 403.16 - Client certificate is untrusted or invalid. 403.17 - Client certificate has expired or is not yet valid. 403.18 - Cannot execute requested URL in the current application pool. 403.19 - Cannot execute CGI applications for the client in this application pool. 403.20 - Forbidden: Passport logon failed. 403.21 - Forbidden: Source access denied. 403.22 - Forbidden: Infinite depth is denied. 404 - Not found. 
IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error: 404.0 - Not found. 404.1 - Site Not Found. 404.2 - ISAPI or CGI restriction. 404.3 - MIME type restriction. 404.4 - No handler configured. 404.5 - Denied by request filtering configuration. 404.6 - Verb denied. 404.7 - File extension denied. 404.8 - Hidden namespace. 404.9 - File attribute hidden. 404.10 - Request header too long. 404.11 - Request contains double escape sequence. 404.12 - Request contains high-bit characters. 404.13 - Content length too large. 404.14 - Request URL too long. 404.15 - Query string too long. 404.16 - DAV request sent to the static file handler. 404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping. 404.18 - Querystring sequence denied. 404.19 - Denied by filtering rule. 405 - Method Not Allowed. 406 - Client browser does not accept the MIME type of the requested page. 408 - Request timed out. 412 - Precondition failed. 5xx - Server error These HTTP status codes indicate that the server cannot complete the request because the server encounters an error. IIS 7.0 uses the following server error HTTP status codes: 500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error: 500.0 - Module or ISAPI error occurred. 500.11 - Application is shutting down on the Web server. 500.12 - Application is busy restarting on the Web server. 500.13 - Web server is too busy. 500.15 - Direct requests for Global.asax are not allowed. 500.19 - Configuration data is invalid. 500.21 - Module not recognized. 500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode. 500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode. 500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode. 500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. Note Here is where the distributed rules configuration is read for both inbound and outbound rules. 500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. Note Here is where the global rules configuration is read. 500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred. 500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated. 500.100 - Internal ASP error. 501 - Header values specify a configuration that is not implemented. 502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error: 502.1 - CGI application timeout. 502.2 - Bad gateway. 503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error: 503.0 - Application pool unavailable. 503.2 - Concurrent request limit exceeded.
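    As a practical footnote (an illustration, not part of the status-code list): in IIS 7.0 the response sent for any of these codes can be customized through the httpErrors section of web.config. A minimal sketch:

        <configuration>
          <system.webServer>
            <httpErrors errorMode="Custom">
              <!-- replace the default 404 family of responses with a friendly page -->
              <remove statusCode="404" subStatusCode="-1" />
              <error statusCode="404" path="/errors/404.html" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
        </configuration>

    The sub-status codes (404.1, 404.2, and so on) still appear in the IIS logs even when a single custom page is served for the whole 404 family.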

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss by locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonable well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node. But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed so likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought. Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case and that NUMA placement of the communication variables was actually quite important. For background an X4800 system consists of 8 Intel X7560 Xeons . Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's not central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant. 
    Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip time and throughput rate between the client and server. In this benchmark the server simply acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needs to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link, since the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probes go directly to the home node and are not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies.
    There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree across vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it also may require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings.) And the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly of the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
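    The benchmark harness itself isn't shown in the post; what follows is a minimal sketch of the split-mailbox pattern it describes, assuming Linux and libnuma (the names and structure are mine, not Dave's):

        #include <numa.h>
        #include <stdatomic.h>

        /* One-way mailboxes, each homed on its writer's node -- the placement
           that measured best in the experiments described above. */
        static _Atomic long *request;    /* written by the client, read by the server */
        static _Atomic long *response;   /* written by the server, read by the client */

        void setup(int client_node, int server_node) {
            request  = numa_alloc_onnode(sizeof *request,  client_node);
            response = numa_alloc_onnode(sizeof *response, server_node);
        }

        long client_roundtrip(long value) {
            atomic_store(request, value);              /* publish the request     */
            while (atomic_load(response) != value + 1)
                ;                                      /* busy-wait for the reply */
            return value + 1;
        }

        void server_loop(void) {
            long last = 0;
            for (;;) {
                long req;
                while ((req = atomic_load(request)) == last)
                    ;                                  /* poll the request mailbox   */
                atomic_store(response, req + 1);       /* transponder: reply = req+1 */
                last = req;
            }
        }

    Compile with -lnuma; the client and server threads would additionally be pinned to their respective nodes, for example with numa_run_on_node().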

    Read the article

  • CodePlex Daily Summary for Sunday, August 03, 2014

    CodePlex Daily Summary for Sunday, August 03, 2014Popular ReleasesBoxStarter: Boxstarter 2.4.76: Running the Setup.bat file will install Chocolatey if not present and then install the Boxstarter modules.GMare: GMare Beta 1.2: Features Added: - Instance painting by holding the alt key down while pressing the left mouse button - Functionality to the binary exporter so that backgrounds from image files can be used - On the binary exporter background information can be edited manually now - Update to the GMare binary read GML script - Game Maker Studio export - Import from GMare project. Multiple options to import desired properties of a .gmpx - 10 undo/redo levels instead of 5 is now the default - New preferences dia...Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON New feature - Added JValue.CreateNull and JValue.CreateUndefined New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly New feature - Added OverrideCreator to JsonObjectContract New feature - Added support for overriding the creation of interfaces and abstract types New feature - Added support for reading UUID BSON binary values as a Guid New feature - Added MetadataPropertyHandling.Ignore New feature - Improv...SQL Server Dialog: SQL Server Dialog: Input server, user and password Show folder and file in treeview Customize icon Filter file extension Skip system generate folder and fileAitso-a platform for spatial optimization and based on artificial immune systems: Aitso_0.14.08.01: Aitso0.14.08.01Installer.zipVidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. Added support for new url field for 64 bit application setup. AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Now developer can handle update logic herself using event suggested by ricorx7. Added italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6 Breaking changes: - Changed arguments for the Map method of MagickImage. 
    - QuantizeSettings uses Riemersma by default.Multiple Threads TCP Server: Project: this Project is based on VS 2013, .NET Framework 4.0; you can open it with VS 2010 or laterAricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 - Release Notes New: Expression Builder to design Flee Expressions New: Cryptographic helpers and configuration classes Improvement: Many fixes and improvements with property editor Improvement: Token Replace Property explorer now has a restricted mode for additional security Improvement: Better variables, types and object manipulation Fixed: smart file and flee bugs Fixed: Removed Exception while trying to read unsupported files Improvement: several performance twe...Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module in order to download series. It uses the RSS feed: http://www.divxtotal.com/rss.php DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .Net 4.0+. It has a clear and easy programming interface for ORM and SQL directly, and supports Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby On Rails style MVC framework, an Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1 This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New? Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated, may apply to all computers.)VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes NEW: Added Support for 'ImgMega.com' links NEW: Added Support for 'ImgCandy.net' links NEW: Added Support for 'ImgPit.com' links NEW: Added Support for 'Img.yt' links FIXED: 'Radikal.ru' links FIXED: 'ImageTeam.org' links FIXED: 'ImgSee.com' links FIXED: 'Img.yt' linksAsp.Net MVC-4,Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4,Entity Framework and JQGrid Demo: Asp.Net MVC-4,Entity Framework and JQGrid Demo with simple Todo List WebApplication, Overview TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, which comprises the following fields (Task Name, Task Description, Severity, Target Date, Task Status).
TodoList web application is created using MVC - 4 architecture, code-first Entity Framework (ORM) and Jqgrid for displaying the data.Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0 Experimental support for WebCL New features in Firefox 31.0:New Add the search field to the new tab page Support of Prefer:Safe http header for parental control mozilla::pkix as default certificate verifier Block malware from downloaded files Block malware from downloaded files audio/video .ogg and .pdf files handled by Firefox if no application specified Changed Removal of the CAPS infrastructure for specifying site-sp...SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collect a server's status but it has been stopped fixed a bug that can cause an exception in case of sending data when the connection dropped already fixed the log4net missing issue for a QuickStart project fixed a warning in a QuickStart projectYnote Classic: Ynote Classic 2.8.5 Beta: Several Changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved StabilityNew ProjectsCreek: Creek is a Collection of many C# Frameworks and my ownSpeaking Speedometer (android): Simple speaking speedometerT125Protocol { Alpha version }: implement T125 Protocol for communicate with a mainframe.Unix Time: This library provides a System.UnixTime as a new Type providing conversion between Unix Time and .NET DateTime.

    Read the article

  • BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains

    - by Leonid Ganeline
    The Partner Direct Port is one of BizTalk's hidden gems. It opens simple routes to several messaging patterns. This article is based on Kevin Lam's blog article. That article is pretty detailed, but it still leaves several pieces unclear, so I have created a sample and will show how it works from different perspectives.

    Requirements

    We should create an orchestration chain where messages are routed from the first stage to the second stage. The messages should not be modified, and all messages have the same message type.

    Common artifacts

    Source code can be downloaded here. Interestingly, all orchestrations use only one port type. This is possible because all ports are one-way ports and use only one operation. I have added a B orchestration; it helps to test the sample by showing all test messages in the channel. The Receive shape Filter is empty. A Receive Port (R_Shema1Direct) is a plain Direct Port. As you can see, the subscription expression of this direct port has only one part, the MessageType for our test schema. The Filter is empty but, as you know, a link from the Receive shape to the Port creates this MessageType expression. I use only one Physical Receive File port to send a message to all processes. Each orchestration outputs a Trace.WriteLine(“<Orchestration Name>”).

    Forward Binding

    This sample has three orchestrations: A_1, A_21 and A_22. A_1 is a sender; A_21 and A_22 are receivers. Here is the subscription of the A_1 orchestration. It has two parts: a MessageType, the same as for the B orchestration, and a ReceivePortID. There was no such parameter for the B orchestration; it was created because I bound the orchestration port to the Physical Receive File port, and this binding adds the PortID parameter to the subscription.

    How to set up the ports? All ports involved in the message exchange must be of the same port type. This forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the sending orchestration, A_1. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port. It is not a port from the partner, receiving orchestrations, A_21 or A_22; it is the A_1 orchestration and its S_SentFromA_1 port. Now we have to choose a Partner Orchestration parameter for the receiving orchestrations, A_21 and A_22. Nothing is strange here except the parameter name: we choose the port of the sender, the A_1 orchestration and its S_SentFromA_1 port. As you can see, the Partner Orchestration parameter for the sender and receiver orchestrations is the same.

    Testing

    I dropped a test file in a file folder. There we go: the dropped file was received by B and by A_1; A_1 sent a message forward; that message was received by B, A_21 and A_22. Let's look at the context of the message sent by A_1 on the second step: a MessageType part, which is quite expected, plus a PartnerService, a PartnerPort and an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports. Now let's see the subscriptions of the A_21 and A_22 orchestrations. Now it makes sense: that's why we chose such a strange value for the Partner Orchestration parameter of the sending orchestration.

    Inverse Binding

    This sample has three orchestrations: A_11, A_12 and A_2. A_11 and A_12 are senders; A_2 is the receiver. How to set up the ports?
    All ports involved in the message exchange must be of the same port type. This forces us to use the same operation and the same message type for the bound ports. This step is absolutely counter-intuitive. We have to choose a Partner Orchestration parameter for the receiving orchestration, A_2. The first strange thing is that it is not a partner orchestration we have to choose but an orchestration port. But the strangest thing is that we have to choose exactly this orchestration and exactly this port. It is not a port from the partner, sending orchestrations, A_11 or A_12; it is the A_2 orchestration and its R_SentToA_2 port. Now we have to choose a Partner Orchestration parameter for the sending orchestrations, A_11 and A_12. Nothing is strange here except the parameter name: we choose the port of the receiver, the A_2 orchestration and its R_SentToA_2 port.

    Testing

    I dropped a test file in a file folder. There we go: the dropped file was received by B, A_11 and A_12; A_11 and A_12 sent two messages forward; the messages were received by B and A_2. The context of the messages sent on the second step holds a MessageType part, which is quite expected, plus a PartnerService, a PartnerPort and an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports. Here is the subscription of the A_2 orchestration.

    Models

    I had a hard time trying to explain the Partner Direct Ports in simple terms. I ended up with this model:

    Forward Binding:
    Receivers know the Sender; the Sender doesn't know the Receivers.
    Subscribers know the Publisher; the Publisher doesn't know the Subscribers.
    1 –> 1 or 1 –> M

    Inverse Binding:
    Senders know the Receiver; the Receiver doesn't know the Senders.
    Publishers know the Subscriber; the Subscriber doesn't know the Publishers.
    1 –> 1 or M –> 1

    Notes

    Orchestration chain: it's worth noting that the Partner Direct Port Binding creates a chain open on one side and closed on the other. With the Forward Binding, a new Receiver can be added at run-time, but the Sender cannot be changed without design-time changes in the Receivers. With the Inverse Binding, a new Sender can be added at run-time, but the Receiver cannot be changed without design-time changes in the Senders.
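    To make the routing tangible, here is a rough reconstruction of the kind of subscription the Forward Binding produces for a receiver such as A_21 (the property names follow the BTS system-properties namespace the post mentions; the exact message type is schema-specific, so treat this as a sketch rather than copied output):

        http://schemas.microsoft.com/BizTalk/2003/system-properties.MessageType == <your schema's message type>
        AND http://schemas.microsoft.com/BizTalk/2003/system-properties.PartnerService == <A_1's service name>
        AND http://schemas.microsoft.com/BizTalk/2003/system-properties.PartnerPort == S_SentFromA_1
        AND http://schemas.microsoft.com/BizTalk/2003/system-properties.Operation == <the operation name>

    This is why the Partner Orchestration parameter must point at the same orchestration port on both sides: the PartnerService/PartnerPort pair is what stitches the chain together.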

    Read the article

  • CodePlex Daily Summary for Sunday, September 29, 2013

    CodePlex Daily Summary for Sunday, September 29, 2013Popular ReleasesAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features -------- list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed ----- case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download end...Activity Viewer 2012: Activity Viewer 2012 V 5.0.0.3: Planning to add new features: 1. Import/Export rules 2. Tabular mode multi servers connections.Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 puts the emphasis on the FilteredStream and on easier management of Exceptions that can occur due to the network or any other issue you might encounter. Will be available through NuGet on 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown Downloader: AcDown Downloader v4.5: (the release notes for this Chinese downloader were garbled during extraction; the legible fragments mention support for Acfun, Bilibili, YouTube, SF, GoodManga.net and SfAcg, parsing via flvcd, an AcPlay player component updated to v3.5, and a C# implementation targeting .NET Framework 2.0)...OfflineBrowser: Release v1.2: This release includes some multi-threading support, a better progress bar, more JavaScript fixes, and a help system. This release is also portable (can run with no issues from a flash drive).CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of ...CrmSvcUtil Generate Attribute Constants: Generate Attribute Constants (1.0.5018.28159): Built against version 5.0.15 of the CRM SDK Fixed issue where constant for primary key attribute was being duplicated in all entity classes Added ability to override base class for entity classesC# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.SimpleExcelReportMaker: Serm 0.02: SourceCode and SampleMagick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0.
- MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen v2.1.3.api release: This is for the brave at heart, this is the maint release to update to the new movie api. please send feedback on fix requests.FFXIV Crafting Simulator: Crafting Simulator 2.3: - Major refactoring of the code behind. - Added a current durability and a current CP textbox.DNN CMS Platform: 07.01.02: Major HighlightsAdded the ability to manage the Vanity URL prefix Added the ability to filter members in the member directory by role Fixed issue where the user could inadvertently click the login button multiple times Fixed issues where core classes could not be used in out of process cache provider Fixed issue where profile visibility submenu was not displayed correctly Fixed issue where the member directory was broken when Convert URL to lowercase setting was enabled Fixed issu...Rawr: Rawr 5.4.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: Signal R sample is added.CODE Framework: 4.0.30923.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.JayData -The unified data access library for JavaScript: JayData 1.3.2 - Indian Summer Edition: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video KendoUI examples: JayData example site Examples for map integration JayData example site What's new in JayData 1.3.2 - Indian Summer Edition For detai...ZXing.Net: ZXing.Net 0.12.0.0: sync with rev. 
2892 of the java version new PDF417 decoder improved Aztec decoder global speed improvements direct Kinect support for ColorImageFrame better Structured Append support many other small bug fixes and improvementsNew ProjectsCACHEDB: CLIENT-DATABASE || CLIENT_CACHEDB-DATABASEClassic WiX Burn Theme: A WiX Burn theme inspired by the classic WiX wizard user interface.CryptStr.Fody: A post-build weaver that encrypts literal strings in your .NET assemblies without breaking ClickOnce.Easy Code: A setting framework.EduSoft: This is a school eg.GameStuff: GameStuff is a library of Physics and Geometrics concepts for video game. Nekora Test Project: Nekora test projectPopCorn Console Game: Simple console gameRadioController: This project started from people installing Tablets in Mustangs. You would typically loose most control of the radio. This projects brings that back!Random searcher i pochodne: Wyszukiwarka plików multimedialnych i czego dusza zapragnie.SporkRandom: A .NET (C#, Visual Basic) interface for the true random number generator service of random.org
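
    A short illustration of the Tweetinvi FilteredStream entry above: the sketch below shows keyword tracking, written against the stream surface Tweetinvi exposed in its later releases (Auth.SetUserCredentials, Stream.CreateFilteredStream, AddTrack, MatchingTweetReceived). The names in the 0.8.3.0 alpha may differ, and the credential strings and tracked keywords are placeholders.

        using System;
        using Tweetinvi;

        class FilteredStreamSketch
        {
            static void Main()
            {
                // Placeholder credentials: substitute your own application keys.
                Auth.SetUserCredentials("CONSUMER_KEY", "CONSUMER_SECRET",
                                        "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET");

                // A filtered stream only pushes tweets matching registered conditions.
                var stream = Stream.CreateFilteredStream();
                stream.AddTrack("excel services");   // track a keyword
                stream.AddTrack("odata");            // track a second keyword

                // Raised for each tweet matching at least one condition.
                stream.MatchingTweetReceived += (sender, args) =>
                {
                    Console.WriteLine(args.Tweet.Text);
                };

                // Blocks and keeps the connection to the streaming endpoint open.
                stream.StartStreamMatchingAnyCondition();
            }
        }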

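    The Magick.NET breaking change around ToBitmap is easy to miss in code that round-trips images, so here is a minimal sketch of the call the notes refer to; the input file name is a placeholder and the comments simply restate the release notes.

        using System;
        using System.Drawing.Imaging;
        using ImageMagick;

        class ToBitmapSketch
        {
            static void Main()
            {
                // Placeholder input file.
                using (var image = new MagickImage("input.jpg"))
                using (var bitmap = image.ToBitmap())
                {
                    // Per the 6.8.7.001 notes, the returned System.Drawing.Bitmap
                    // is now PNG-encoded rather than BMP-encoded.
                    Console.WriteLine(bitmap.RawFormat.Equals(ImageFormat.Png)
                        ? "PNG-backed bitmap"
                        : "other format");
                }
            }
        }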
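
    Finally, the ZXing.Net release is mainly about decoders, so a small decoding sketch may help; it assumes the BarcodeReader and DecodingOptions surface of contemporary ZXing.Net builds and a hypothetical input file name.

        using System;
        using System.Collections.Generic;
        using System.Drawing;
        using ZXing;

        class DecodeSketch
        {
            static void Main()
            {
                // Hypothetical input: any image file containing a barcode.
                using (var bitmap = (Bitmap)Image.FromFile("barcode.png"))
                {
                    var reader = new BarcodeReader();
                    // Limit the search to the two symbologies this release touches.
                    reader.Options.PossibleFormats = new List<BarcodeFormat>
                    {
                        BarcodeFormat.PDF_417,
                        BarcodeFormat.AZTEC
                    };
                    Result result = reader.Decode(bitmap);
                    Console.WriteLine(result == null ? "No barcode found." : result.Text);
                }
            }
        }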
