Search Results

Search found 3518 results on 141 pages for 'technical'.


  • CodePlex Daily Summary for Tuesday, October 23, 2012

    CodePlex Daily Summary for Tuesday, October 23, 2012Popular ReleasesDNN Module Creator: 01.01.00: Updated templates for DNN7 ( ie. DAL2, Web Service API ). Numerous bug fixes and enhancements.WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.390: Version 2.5.0.390 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Fix recent file list remove issue. WAF: Minor code improvements. BookLibrary: Fix Blend design time support o...ltxml.js - LINQ to XML for JavaScript: 1.0 - Beta 1: First release!EvoGame: EvoGame PreAlpha v0.0.2_c InDev: Yup, a new update, go the sprites working.Ficharts.Net: 1.0 Alpha: ??Ficharts????、???????????,?? ??/???、??/???、???、??/???ZXMAK2: Version 2.6.6.0: + fix refresh debugger after open RZX file + add NoFlic video filterSQLLib: Alpha release 17: Added CLR UDFs: * clr.fn_regex_instr - similar to Oracle REGEX_INSTR * clr.fn_regex_substr - similar to Oracle REGEX_SUBSTR To deploy CLR objects copy ClrAgg.dll and ClrRegEx.dll to a folder of you choice (currently deployment script points to C:\Program Files\Microsoft SQL Server\100\CLR\ClrAgg.dll) and execute deployment scripts InstallCLRAggregates.sql and InstallCLRRegEx.sql Thank you for rating the download and/or your feedback.EPiServer CMS ElencySolutions.MultipleProperty: ElencySolutions.MultipleProperty v1.6.3: The ElencySolutions.MulitpleProperty property controls have been developed by Lee Crowe a technical developer at Fortune Cookie (London). Installation notes The property copy page can be locked down by adding the following location element, the path of this will be different depending on whether you use the embedded or non embedded resource version. When installing the nuget package these will be added automatically, examples below: Embedded: <location path="util/ElencySolutionsMultipleP...Fiskalizacija za developere: FiskalizacijaDev 1.1: Ovo je prva nadogradnja ovog projekta nakon inicijalnog predstavljanja - dodali smo nekoliko feature-a, bilo zato što smo sami primijetili da bi ih bilo dobro dodati, bilo na osnovu vaših sugestija - hvala svima koji su se ukljucili :) Ovo su stvari riješene u v1.1.: 1. Bilo bi dobro da se XML dokument koji se šalje u CIS može snimiti u datoteku (http://fiskalizacija.codeplex.com/workitem/612) 2. Podrška za COM DLL (VB6) (http://fiskalizacija.codeplex.com/workitem/613) 3. Podrška za DOS (unu...MCEBuddy 2.x: MCEBuddy 2.3.4: Changelog for 2.3.4 (32bit and 64bit) 1. Fixed a bug introduced in 2.3.3 that would cause HD recordings and recordings with multiple audio channels to fail. 2. Updated <encoder-unsupported> option to compare with all Audio tracks for videos with multiple audio tracks. 3. Fixed a bug with SRT and EDL files, when input and output directory are the same the files are not preserved.BlogEngine.NET: BlogEngine.NET 2.7 RC: Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! dot This is a Release Candidate version for BlogEngine.NET 2.7. The most current, stable version of BlogEngine.NET is version 2.6. Find out more about the BlogEngine.NET 2.7 RC here. To get started, be sure to check out our installation documentation. 
If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.7 instructions...Pulse: Pulse 0.6.3.0: Fixed a number of bugs that showed up since my update yesterday. Fixes included are for: - Weird issue where the initial "Nature" wallbase.cc search would duplicate itself - After changing a providers settings it wouldn't take affect until you restarted Pulse (removing or adding a provider entirely did take effect though) - Another small issue with the regex for the wallbase.cc wallpapers that I tweaked yesterday, seems good now though.Liberty: v3.4.0.0 Release 20th October 2012: Change Log -Added -Halo 4 support (invincibility, ammo editing) -Reach A warning dialog now shows up when you first attempt to swap a weapon -Fixed -A few minor bugsClosedXML - The easy way to OpenXML: ClosedXML 0.68.1: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Orchard Project: Orchard 1.6 RC: RELEASE NOTES This is the Release Candidate version of Orchard 1.6. You should use this version to prepare your current developments to the upcoming final release, and report problems. Please read our release notes for Orchard 1.6 RC: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you wil...Rawr: Rawr 5.0.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Yahoo! UI Library: YUI Compressor for .Net: Version 2.1.1.0 - Sartha (BugFix): - Revered back the embedding of the 2x assemblies.Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012: Welcome to the Branching and Merging Guide What is new? The Version Control specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide. The Branching and Merging Guide and the Advanced Version Control Guide have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details Documentatio...D3 Loot Tracker: 1.5.5: Compatible with 1.05.Write Once, Play Everywhere: MonoGame 3.0 (BETA): This is a beta release of the up coming MonoGame 3.0. It contains an Installer which will install a binary release of MonoGame on windows boxes with the following platforms. Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time as the installers for those platforms are not available yet. 
The installer will also install a bunch of Project templates for Visual Studio 2010 , 2012 and MonoDevleop. For those of you wish...New ProjectsAddition of two numbers: Addition of two integer numbersAddTwoNumbers: Add two numbersASP_BANMAYTINH: Xây d?ng web bán máy tính b?ng ASPAvalon MVC: Do not use, still in alphaCaio Proiete's HG Playground: Simple test project to leverage Mercurial features using CodePlexCaio Proiete's TFS Playground: Simple test project to leverage TFS features using CodePlexcodeplexaddproject: Task 1 adding two numbers.Compresor markov orden 1 shannon: Compresor de fuentes basado en el algoritmo de shannon con markov orden 1Cricket Mania: addd39 grid system: A web-based combat grid system for use in play-by-post DnD (or similar) role playing games.DarkSky Tagit: An Orchard module that exposes the jQuery Tagit plugin written by Hailwood as a script resource.DnnExpert: ??? ????? ?? ???? ???? ????? ? ???? ??? ?????? ??? ?? ???? ?? ???? ?? ???????? ???? ?? ???? ??? ???? ?? ?? ? ?????? ????? ???? ? ?????? ? ???? ???? ?? ????.Expandable Text/HTML for DotNetNuke by IowaComputerGurus Inc.: The DNN Expandable Text/HTML module allows you to display multiple text items with the ability to expand and collapse individual items.FarajsWeb2Project: This project is intended to design a Web2.0 website for 7COM0203 ModuleGeminorum Software Contacts for DotNetNuke: A simple contact manager for DotNetNuke.GitText: Test olyGoDarting by Harsh Maurya: Darts Game developed in WPF. requires .Net Framework 4.0Info Gempa BMKG: Aplikasi pembaca informasi gempa BMKGMassive encryption of files: "Massive Encrypt" allows you to encrypt or rename many files at once. Of course you can decrypt later encrypted files!Metal Player: Simple and easy to use, Metal Player has basic multimedia player functions, and some new functions that will enjoy you.NAntDefineTasks: NAnt Define Tasks allows you to define NAnt tasks in terms of other NAnt tasks, instead of having to write any C# code. Nebulosa: Nebulosa is a complete engine to create a complete websitesNigeria Single Mothers: This website project helps single mothers in Nigeria share ideas on how to raise children given the socio-economic and cultural challenges they face.PCV_Clinic_Pro: PCVClinicPro is a software proRendering.NET: Rendering.NET is an abstraction for any visualization device and over several APIs like OpenGL, DirectX, XNA, WebGL, WPF, Silverlight, Mobile DirectX, etc.RLA: A template for illustrate a MVC2 websiteSecure Password Recovery for DotNetNuke by IowaComputerGurus Inc.: IowaComputerGurus's Secure Password Recovery module is the next step in preventing user passwords from being sent via e-mail!SimpleSum: This calculates a simple sum using Visual Basic.SiteCetic: Sitio de CETIC Social Learning: Social Learning Project for BC 2012Towards a generic DSL for modeling page types in WCMSs: An exploration of creating DSLs to facilitate the creation of page models in WCMSs using VMSDK. The concepts of PIM, PSM, DSL, M2M, and M2T will be explored.TrafficArchives: TrafficArchives is a two people group of TrafficArchives team, in this project ,we will use asp.net do Traffic Archives information manager system.UnivDevs: university test developmentUppityUp: UppityUp is a simple and light-weight tray application which monitors a remote server and shows a notification when it comes online. 
    This is useful when you need to connect to a server that is currently down and you want to be notified the moment it becomes available.
    uurrooster: This is still a work in progress.
    UWE Computer Science: A collection of all work submitted and completed during my course at UWE - Bristol.
    ViewMyDeals: This site is all about sharing dramatic deals and offers on several products using promotional codes and vouchers.
    WebUntis4Win8: WebUntis4Win8
    X.MetaWeblog.Model: This is a model for the MetaWeblog API. Detailed info at: http://xmlrpc.scripting.com/metaWeblogApi.html http://en.wikipedia.org/wiki/MetaWeblog
    XPath execution utility: CommonXPath is a utility to execute an XPath expression on some XML and see the result.
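    The ClosedXML 0.68.1 release note above says that reading cell.Value now evaluates any formula in the cell, but the inline example is cut off mid-line. Below is a minimal C# sketch of that feature, reusing the calls shown in the note (XLWorkbook, AddWorksheet, SetValue, CellBelow, FormulaA1); the surrounding Main method and the final Console.WriteLine are my own framing, not part of the release note.

```csharp
using System;
using ClosedXML.Excel;

class FormulaDemo
{
    static void Main()
    {
        // Build a small workbook in memory.
        var wb = new XLWorkbook();
        var ws = wb.AddWorksheet("Sheet1");

        // Put the value 1 into A1, A2, B1 and B2.
        ws.Cell("A1").SetValue(1).CellBelow().SetValue(1);
        ws.Cell("B1").SetValue(1).CellBelow().SetValue(1);

        // C1 concatenates a label with SUM(A1:B2).
        ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)";

        // Reading Value triggers formula evaluation in ClosedXML 0.68+,
        // so this should print: The total value is: 4
        Console.WriteLine(ws.Cell("C1").Value);
    }
}
```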

    Read the article

  • The Changing Face of PASS

    - by Bill Graziano
    I’m starting my sixth year on the PASS Board. I served two years as the Program Director, two years as the Vice-President of Marketing and I’m starting my second year as the Executive Vice-President of Finance. There’s a pretty good chance that if PASS has done something you don’t like, or is doing something you don’t like, I’m involved in one way or another. Andy Leonard asked in a comment on his blog if the Board had ever reversed itself based on community input. He asserted that it hadn’t. I disagree. I’m not going to try to list all the changes we make inside portfolios based on feedback from and meetings with the community. I’m going to focus on major governance issues since I was elected to the Board.

    Management Company

    The first big change was our management company. Our old management company had a standard approach to running a non-profit. It worked well when PASS was launched. Having a ready-made structure and process to run the organization enabled the organization to grow quickly. As time went on we were limited in some of the things we wanted to do. The more involved you were with PASS, the more you saw these limitations. Key volunteers were regularly providing feedback that they wanted certain changes that were difficult for us to accomplish. The Board at that time wanted changes that were difficult or impossible to accomplish under that structure. This was not a simple change. Imagine a $2.5 million company letting all its employees go on a Friday and starting with a new staff on Monday. We also had a very narrow window to accomplish that so that we wouldn’t affect the Summit – our only source of revenue. We spent the year after the change rebuilding processes and putting on the Summit in Denver. That’s a concrete example of a huge change that PASS made to better serve its members. And it was a change that many in the community were telling us we needed to make.

    Financials

    We heard regularly from our members that they wanted our financials posted. Today on our web site you can find audited financials going back to 2004. We publish our budget at the start of each year. If you ask a question about the financials on the PASS site I do my best to answer it. I’m also trying to do a better job answering financial questions posted in other locations. (And yes, I know I owe a few of you some blog posts.) That’s another concrete example of a change that our members asked for that the Board agreed was a good decision.

    Minutes

    When I started on the Board the meeting minutes were very limited. The minutes from a two-day Board meeting might fit on one page. I think we did the bare minimum we were legally required to do. Today Board meeting minutes run from 5 to 12 pages and go into incredible detail on what we talk about. Certain topics are under NDA, but where possible we list the topic we discussed and note that the actual discussion was under NDA. We also publish the agenda of Board meetings ahead of time. This is another specific example where input from the community influenced the decision. It was certainly easier to have limited minutes, but I think the extra effort helps our members understand what’s going on.

    Board Q&A

    At the 2009 Summit the Board held its first public Q&A with our members. We’d always been available individually to answer questions. There’s a benefit to getting us all in one room and asking the really hard questions to watch us squirm. We learn what questions we don’t have good answers for.
    We get to see how many people in the crowd look interested in the various questions and answers. I don’t recall the genesis of how this came about. I’m fairly certain there was some community pressure, though.

    Board Votes

    Until last November, the Board only reported the vote totals and not how individual Board members voted. That was one of the topics at a great lunch I had with Tim Mitchell and Kendal van Dyke at the Summit. That was also the topic of the first question asked at the Board Q&A by Kendal. Kendal expressed his opposition to anonymous votes clearly and passionately and without trying to paint anyone into a corner. Less than 24 hours later the PASS Board voted to make individual votes public unless the topic was under NDA. That’s another area where the Board decided to change based on feedback from our members.

    Summit Location

    While this isn’t actually a governance issue, it is one of the more public decisions we make and it has taken some public criticism. There is a significant portion of our members that want the Summit near them. There is a significant portion of our members that like the Summit in Seattle. There is a significant portion of our members that think it should move around the country. I was one that felt strongly that there were significant, tangible benefits to our attendees of being in Seattle every year. I’m also one that has been swayed by some very compelling arguments that we need to have at least one Summit outside Seattle and then revisit the decision. I can’t tell you how the Board will vote, but I know the opinion of our members weighs heavily on the decision.

    Elections

    And that brings us to the grand-daddy of all governance issues. My thesis for this blog post is that the PASS Board has implemented policy changes in response to member feedback. It isn’t to defend or criticize our election process. It’s just to say that it has been undergoing continuous change since I’ve been on the Board. I ran for the Board in the fall of 2005. I don’t know much about what happened before then. I was actively volunteering for PASS for four years prior to that as a chapter leader and on the program committee. I don’t recall any complaints about elections, but that doesn’t mean they didn’t occur. The questions from the Nominating Committee (NomCom) were trivial and the selection process rudimentary (for example, “Tell us about your accomplishments”). I don’t even remember who I ran against or how many other people ran. I ran for the VP of Marketing in the fall of 2007. I don’t recall any significant changes the Board made in the election process for that election. I think a lot of the changes in 2007 came from us asking the management company to work on the election process. I was expecting a similar set of puffball questions to those from my previous election. Boy, was I in for a shock. The NomCom had found a much better set of questions and really made the interview portion difficult. The questions were much more behavioral in nature. I’d already written about my vision for PASS and my goals. They wanted to know how I handled adversity, how I handled criticism, how I handled conflict, how I handled troublesome volunteers, how I motivated people and how I responded to motivation. And many, many other things. They grilled me for over an hour. I’ve done a fair bit of technical sales in my time. I feel I speak well under pressure addressing pointed questions. This interview intentionally put me under pressure.
    In addition to wanting to know about my interpersonal skills, my work experience, my volunteer experience and my supervisory experience, they wanted to see how I’d do under pressure. They wanted to see who would respond under pressure and who wouldn’t. It was a bit of a shock. That was the first big change I remember in the election process. I know there were other improvements around the process, but none of them stick in my mind quite like the unexpected hour-long grilling.

    The next big change I remember was after the 2009 elections. Andy Warren was unhappy with the election process and wanted to make some changes. He worked with Hannes at HQ and they came up with a better set of processes. I think Andy moved PASS in the right direction. Nonetheless, after the 2010 election even more people were very publicly clamoring for changes to our election process. In August of 2010 we had a choice to make. There were numerous bloggers criticizing the Board and our upcoming election. The easy change would be to announce that we were changing the process in a way that would satisfy our critics. I believe that a knee-jerk response to criticism is seldom correct.

    Instead the Board spent August and September and October and November listening to the community. I visited two SQLSaturdays and asked questions of everyone I could. I attended chapter meetings and asked questions of as many people as they’d let me. At Summit I made it a point to introduce myself to strangers and ask them about the election. At every breakfast I’d sit down at a table full of strangers and ask about the election. I’m happy to say that I left most tables arguing about the election. Most days I managed to get 2 or 3 breakfasts in. I spent less time talking to people that had already written about the election. They were already expressing their opinion. I wanted to talk to people that hadn’t spoken up. I wanted to know what the silent majority thought. The Board all attended the Q&A session where our members expressed their concerns about a variety of issues, including the election.

    The PASS Board also chose to create the Election Review Committee. We wanted people from the community that had been involved with PASS to look at our election process with fresh eyes while listening to what the community had to say and give us some advice on how we could improve the process. I’m a part of this, as is Andy Warren. None of the other members are on the Board. I’ve sat in numerous calls and interviews with this group and attended an open meeting at the Summit. We asked anyone that wanted to discuss the election to come speak with us. The ERC held an open meeting at the Summit and invited anyone to attend. There are forums on the ERC web site where we’ve invited people to participate. The ERC has reached out to key people involved in recent elections. The years that I haven’t mentioned also saw minor improvements in the election process. Off the top of my head I don’t recall what exact changes were made each year. Specifically since the 2010 election we’ve gone out of our way to seek input from the community about the process. I’m not sure what more we could have done to invite feedback from the community. I think to say that we haven’t “fixed” the election process isn’t a fair criticism at this time. We haven’t rushed any changes through the process.
    If you don’t see any changes in our election process in July or August, then I think it’s fair to criticize us for ignoring the community or to ask for an explanation of what we’ve done.

    In Summary

    Andy’s main point was that the PASS Board hasn’t changed in response to our members’ wishes. I think I’ve shown that time and time again the PASS Board has changed in response to what our members want. There are only two outstanding issues: Summit location and elections. The 2013 Summit location hasn’t been decided yet. Our work on the elections is also in progress. And at every step in the election review we’ve gone out of our way to listen to the community and incorporate their feedback on the process. I also hope I’m not encouraging everyone that wants some change in the organization to organize a “blog rush” against the Board. We take public suggestions very seriously, but we also take the time to evaluate those suggestions, learn what the rest of our members think, and make a measured decision.

    Read the article

  • CodePlex Daily Summary for Tuesday, October 09, 2012

    CodePlex Daily Summary for Tuesday, October 09, 2012Popular ReleasesScript SQL Server Configuration: Release 3.0.9: Release 3.0.9 Rewrote trigger scripting. If encrypted triggers are encountered they are listed in a commented line. Scripts event notifications Added an option to script a single database (in addition to the instance) using the /scriptdb parameter. Script user-defined end points Script Service Broker objects Skip database mail on Express EditionMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.69: Fix for issue #18766: build task should not build the output if it's newer than all the input files. Fix for Issue #18764: build taks -res switch not working. update build task to concatenate input source and then minify, rather than minify and then concatenate. include resource string-replacement root name in the assumed globals list. Stop replacing new Date().getTime() with +new Date -- the latter is smaller, but turns out it executes up to 45% slower. add CSS support for single-...D3 Loot Tracker: 1.5.2: now recording server ip for each drop.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes NOTE:...DevLib: 69721 binary dll: 69721 binary dllVidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit) 1. Added support for generating XBMC XML NFO files for files in the conversion queue (store it along with the source video with source video name.nfo). Right click on the file in queue and select generate XML 2. UI bugifx, start and end trim box locations interchanged 3. Added support for removing commercials from non DVRMS/WTV files (MP4, AVI etc) 4. Now checking for Firewall port status before enabling (might help with some firewall problems) 5. User In...Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...Sofire Suite v1.6: XSqlModelGenerator.AddIn: 1、?? VS2010/2012 2、?? .NET FRAMEWORK 2.0 3、?? SOFIRE V1.6ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. 
For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package New feature - Added GetValue and TryGetValue with StringComparison to JObject Change - Improved duplicate object reference id error message Fix - Fixed error when comparing empty JObjects Fix - Fixed SecAnnotate warnings Fix - Fixed error when comparing DateTime JValue with a DateTimeOffset JValue Fix - Fixed serializer sometimes not using DateParseHandling setting Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1 Tested under Ubuntu 12.10 (and KeePass 2.20) Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.patterns & practices: Prism: Prism for .NET 4.5: This is a release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies also were compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).Snoop, the WPF Spy Utility: Snoop 2.8.0: Snoop 2.8.0Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless. It basically lets you automate/script the application that you are Snooping. Bailey has a couple blog posts (one and two) on his tab already, and I am sure more is to come. Please note that if you do not have PowerShell installed, y...Z3: Z3 4.1.2: Minor fixes. Now, z3 compiles with gcc 4.7.x.NET Micro Framework: .NET MF 4.3 (Beta): This is the 4.3 Beta version of the .NET Micro Framework. Feature List for v4.3 Support for Visual Studio 2012 (including the Windows Desktop Express version) All v4.2 QFEs features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC support) Improved diagnostic information for deployment Decreased boot time Bug fixes Work Item 1736 - Create link for MFDeploy under start menu Work Item 1504 - Customizing lwIP o...NTCPMSG: V1.2.0.0: Allocate an identify cableid for each single connection cable. * Server can asend to specified cableid directly.Team Foundation Server Word Add-in: Version 1.0.12.0622: Welcome to the Visual Studio Team Foundation Server Word Add-in Supported Environments Microsoft Office Word 2007 and 2010 X86 (32-bit) Team Foundation Server 2010 Object Model TFS 2010, 2012 and TFS Service supported, using TFS OM / Explorer 2010. Quality-Bar Details Tool has been reviewed by Visual Studio ALM Rangers Tool has been through an independent technical and quality review All critical bugs have been resolved Known Issues / Bugs WI#43553 - The Acceptance criteria is not pu...UMD????? - PC?: UMDEditor?????V2.7: ??:http://jianyun.org/archives/948.html =============================================================================== UMD??? ???? =============================================================================== 2.7.0 (2012-10-3) ???????“UMD???.exe”??“UMDEditor.exe” ?????????;????????,??????。??????,????! ??64????,??????????????bug ?????????????,???? ???????????????? 
???????????????,??????????bug ------------------------------------------------------- ?? reg.bat ????????????。 ????,??????txt/u...Untangler: Untangler 1.0.0: Add a missing file from first releaseNew Projects3dBuzz: Denise's website for 3d projection mapping artists to develop a web community to show and discuss their work.Advanced DataGridView with Excel-like auto filter: Windows Forms DataGridView Control with Excel-Like auto-filter context menu Windows Forms DataGridView ??????? ? Excel-???????? ????-????????azure media services admin panel: Simple azure media services dashboard. Upload media assets, queue encoder tasks and stream your audio\video assets.BackupCleaner.Net: A C#.Net-based tool for automatically removing old backups selectively. It can be used when you already make daily backups on disk, and want to clean them up while still keeping some of the ones made weeks, months or years ago.Bible Lib: BibleLib is a .net compatible library containing information for books, chapters and verses in the Old Testament.CRM 4.0 to CRM 2011 Queues Upgrades: for more information and documentation, please refer to http://mayankp.wordpress.com/2012/05/25/crm-4-0-to-crm-2011-queues-upgrades/ CSV File Reader and Writer for .NET: C# classes for reading and writing CSV files. Support for multi-line fields, custom delimiter and quote characters, options for how empty lines are handled.DotNetNuke Boards: DNN Boards is a task management solution that allows each user to have their own task board or social group members can collaborate within a single board.FarmaciasCruzAzul: Sistema Cruz AzulFindValueInDatabase: This solution finds the value you input in the hole given database and tells you which columns of which tables have the value you are looking for.Fish Tank: Social networking site for Aquarium enthusiasts.GSBA Apps: GSBA App for Windows 8Heuristics for the Vehicle Routing Problem: This is the code for our class, IEMS 482.Icaro - UPN: Icaro UPN es la recopilación de todos los preyectos realizados en clases de los diferentes proyectos que Enseño, espero les Sirva.JSLogger - free logging library for JavaScript: JSLogger is a free JavaScript Library for log information during the duration of your client script. The target is very easy >>Find every javasvript error<<Just little strategy: Just little strategy writen in C# using XNA Game Studio Now under developmentLightweight Medata Reader: The Lightweight Metadata Reader (LMR) is a tooling friendly version of the CLR Reflection APIs that takes no dependency on the CLR Loader.My Solution: No description for it nowoden????: ???????????????^^ooaavee.net: TODOPath Splitter: Path Splitter uses Roslyn to convert a method into a set of methods each equivalent to a distinct execution path. Assume annotations are added for use with Pex.Planning Poker for Azure: Planning Poker application allows distributed teams to play planning poker just using web browser. 
It can be deployed to IIS or Azure cloud service.plasmatrim.net: A library for controlling the USB PlasmaTrim from a .net application.powersaver: A simple utility, which will turn off your monitor when you lock your work station.Project13251008: gterProject13261008: asdsaProject13271008: sdfPyfus Reload: Server-side framework for Dofus 1.29.1 gameRei do Biscuit: E-commerce do rei do biscuitSofire Suite v1.6: Sofire Suite v1.6Sofire XSql: Sofire XSqlSosa Analysis Module (SAM): Numerical Analysis and data visualization program for SOSA psychological experiment software.SyncProject: The summary is coming soon... Just be patient ! Tistory Syntax Highlighter: ???? SyntaxHighlighter ????TJBHJJ: this is a website for company!tricogol App: nonetuts: My first SVNWatchr Change Control Management: Change control management for development and administrative teams focused on lean or agile processes.WCF Simple multi chat: This is a simple Windows form application that it's like a 'chat room'. Multiple users can login and chat with others users. The chat's core is powered by WCFWindows Event Log Email Notification: This will read the windows event logs based on the supplied xpath filters and email the specified addresses if there are matches.
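    One item in the Json.NET 4.5 Release 10 notes above is the addition of GetValue and TryGetValue overloads on JObject that take a StringComparison. A short, hedged C# sketch of how those overloads are typically used for case-insensitive property lookup follows; the sample JSON and property names are invented for illustration and are not from the release notes.

```csharp
using System;
using Newtonsoft.Json.Linq;

class CaseInsensitiveLookup
{
    static void Main()
    {
        // Hypothetical JSON payload; note the mixed-case property names.
        var obj = JObject.Parse(@"{ ""FirstName"": ""Ada"", ""LastName"": ""Lovelace"" }");

        // GetValue with a StringComparison ignores case during the lookup.
        JToken first = obj.GetValue("firstname", StringComparison.OrdinalIgnoreCase);
        Console.WriteLine(first); // Ada

        // TryGetValue returns false (instead of null) when the property is missing.
        JToken last;
        if (obj.TryGetValue("LASTNAME", StringComparison.OrdinalIgnoreCase, out last))
        {
            Console.WriteLine(last); // Lovelace
        }
    }
}
```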

    Read the article

  • CodePlex Daily Summary for Thursday, May 15, 2014

    CodePlex Daily Summary for Thursday, May 15, 2014Popular ReleasesVisualizers: Visualizer: This dll is the core of the project just copy and paste it in the following folder %Microsoft Visual Studio%\Common7\Packages\Debugger\Visualizers where %Microsoft Visual Studio% is the Visual studio installation folderQuickMon: Version 3.10: Adding the ability to see 'history' of Collector states (including details of errors or warnings at that time). The history size is configurable (default is switched off) and the Windows Service completely ignores keeping history (no UI or user to access it anyway). The Collector stats window now displays this history plus multiple collector stats windows can be opened at the same time. Additionally fixed a bug in the event log collector that reported an 'Error' state when an 'out of bounds' ...xFunc: xFunc 2.15.4: Fixed bug in Processor.csTFS Planning and Disaster Recovery Avoidance Guide: v1.4.BETA - TFS, DR and Azure IaaS Planning Guides: Welcome to the TFS Planning and DR Avoidance Guidance What is new? A new crisper, more compact style, which is easier to consume on multiple devices without sacrificing any content. Also included are the new TFS on Azure IaaS guide and supplementary guides. Note Capacity planning workbook and posters are included in the Everything Zip package. Quality-Bar Detail Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review ...WinAudit: WinAudit Freeware v3.0: WinAudit.exe v3.0 MD5: 88750CCF49FF7418199B2645755830FA Known Issues: 1. Report creation can be very slow when right-to-left (Hebrew) characters are present. 2. Emsisoft Anti-Malware may stop and/or quarantine WinAudit. This happens when WinAudit attempts to obtain a list if running programmes. You will need to set an exception rule in Emsisoft to allow WinAudit to run.MVCwCMS - ASP.NET MVC CMS: MVCwCMS 2.2.2: Updated CKFinder config. For the installation instructions visit the documentation page: https://mvcwcms.codeplex.com/documentationTerraMap (Terraria World Map Viewer): TerraMap 1.0.4: Added support for the new Terraria v1.2.4 update. New items, walls, and tiles Fixed Issue 35206: Hightlight/Find doesn't work for Demon Altars Fixed finding Demon Hearts/Shadow Orbs Added ability to find Enchanted Swords (in the stone) and Water Bolt books Fixed installer not uninstalling older versions The setup file will make sure .NET 4 is installed, install TerraMap, create desktop and start menu shortcuts, add a .wld file association, and launch TerraMap. If you prefer the zip ...WPF Localization Extension: v2.2.1: Issue #9277 Issue #9292 Issue #9311 Issue #9312 Issue #9313 Issue #9314CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.1.41167 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Oculus Rift support. Stereoscopic 3D display support. Variable walking / flying speed. Xbox 360 Controller support. Kinect for Windows support. Based on Firestorm viewer 4.6.5 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-1-41167-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http:/...ExtJS based ASP.NET Controls: FineUI v4.0.6: FineUI(???) ?? ExtJS ??? ASP.NET ??? FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ??????? ?????? IE 8.0+、Chrome、Firefox、Opera、Safari ???? Apache License v2.0 ?:ExtJS ?? 
GPL v3 ?????(http://www.sencha.com/license) ???? ??:http://fineui.com/ ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI ???? ExtJS ????????,???? ExtJS ?,?????: 1. ????? FineUI ? ExtJS ? http://fineui.com/bbs/forum.ph...Office App Model Samples: Office App Model Samples v2.0: Office App Model Samples v2.0Readable Passphrase Generator: KeePass Plugin 0.13.0: Version 0.13.0 Added "mutators" which add uppercase and numbers to passphrases (to help complying with upper, lower, number complexity rules). Additional API methods which help consuming the generator from 3rd party c# projects. 13,160 words in the default dictionary (~600 more than previous release).CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.25.0: Release v1.0.25.0 MemberInfo/MethodInfo popup is now positioned properly to fit the screen In MethodInfo popup method signatures are word-wrapped Implemented Debug text value visualizer Pining sub-values from Watch PanelWrapper for the PAYMILL API: Paymill API Wrapper: Add Description in PreauthorizationHow to develop an autodialer / predictive dialer in C#: VoIP AutoDialer in C Sharp: This is the downloadable source code for this example project that is intended to help you in developing your own VoIP autodialer application in C#.R.NET: R.NET 1.5.12: R.NET 1.5.12 is a beta release towards R.NET 1.6. You are encouraged to use 1.5.12 now and give feedback. See the documentation for setup and usage instructions. Main changes for R.NET 1.5.12: The C stack limit was not disabled on Windows. For reasons possibly peculiar to R, this means that non-concurrent access to R from multiple threads was not stable. This is now fixed, with the fix validated with a unit test. Thanks to Odugen, skyguy94, and previously others (evolvedmicrobe, tomasp) fo...SEToolbox: SEToolbox 01.029.006 Release 1: Fix to allow keyboard search on load dialog. (type the first few letters of your save) Fixed check for new release. Changed the way ship details are loaded to alleviate load time for worlds with very large ships (100,000+ blocks). Fixed Image importer, was incorrectly listing 'Asteroid' as import option. Minor changes to menus (text and appearance) for clarity and OS consistency. Added in reading of world palette for color dialog editor. WIP on subsystem editor. Can now multiselec...Thaumcraft4 Research: Alpha 2 release, lots of awesome improvements: Performance inprovements asynchronous running of the search Search result-cap ( usefull for long routes, it wont try to find 3000 results ) statistics addedTiny Wifi Host: Tiny Wifi Host 3.0.0.0: Tiny Wifi Hotspot Creator (Portable) v3 size: 50KB-140KB New Features: Friendly name for connected devices instead of Mac-Address (Double click selected device to enter friendly name) Saves device names to devices.xml Better error reporting+solutions Warning sound when number of connected devices exceed a certain number. (useful when only certain number of devices must be connected at a time) Many Bug Fixes. NoAudio files does not include connect, disconnect and warning audio to dec...Media Companion: Media Companion MC3.597b: Thank you for being patient, againThere are a number of fixes in place with this release. and some new features added. Most are self explanatory, so check out the options in Preferences. Couple of new Features:* Movie - Allow save Title and Sort Title in Title Case format. * Movie - Allow save fanart.jpg if movie in folder. * TV - display episode source. 
Get episode source from episode filename. Fixed:* Movie - Added Fill Tags from plot keywords to Batch Rescraper. * Movie - Fixed TMDB s...New ProjectsBLADE View Engine - An independent RAZOR alternative: An alternative parser and view engine to Asp.net Razor that does not depend on Asp.net or .NET 4.5BPL++: A basic object orientated programming language built upon a virtual machine using C#Caedisi - A Cellular Automata Editor and Simulator for Network Decontamination: A research tool that explores the use of cellular automata in order to decontaminate a network attacked or infected by a virus. Cosmos Software Distrobution 1.0: No content until project released!ETCQuality: ETC QUALITY STAT SYSTEM. ????????????,??????????????。F.A.Q.: F.A.Q. project about Frequently asked questions. Help Viewer Redirector: Enables use of Help Viewer 2.0 or 2.1 with SQL Server instead of Help Viewer 1.0Jet.Payment.Cielo: Projeto contém integração com o serviço e-commerce Cielo, desenvolvido em C#. Criamos esse Helper para nossa própria necessidade. Nos ajude a melhorá-lo.Learning with Pati: Learning Javascript with PatiLeRenard: LeRenard is a collection of solutions (from core helper, extension methods) to libraries, all written in C# to help build applications.MCMP.NET: MCMP.NET allows ASP.NET applications to be added as contexts using mod_cluster.PowerShell Deployment Automation Framework: This project contains resources related to my blog at www.powershellcoach.compravda-f: ??????????? ? ?????? ??? ?????????? ??????sdir: Colorful, sorted and optionally rich directory enumeration for Windows.SienSchoofsProject: school project for .NET ExpertSorting collection of any type by several fields: Sorting collection of any type by several fields with using Reflection and Expression TreesTPS BarCode Scanner: This is going to be a collaboration platform for TPS team. We want to develop a simple low cost bar code generator and scanner system. This project is currentlVerifyDomainOutlookAddIn: ??????????????????????????????。Ynote Packages: Ynote Packages are a collection of tools which a play a crucial role in extending Ynote Classic and understanding it's true potential.??????-??????【??】??????????: ??????????????????,???、???!???????,????????????????,????????????,???! ?????-?????【??】?????????: ????????????、?????、?????、?????、?????、????,???????????,?????,??????! ?????-?????【??】?????????: ???????????????,?????????????? ??。??????????、????、????、?????????? ???????。 ??????-??????【??】??????????: ????????????????????,?????,???????,???????????,??????! ??????-??????【??】??????????: ??????????????、??????、????、?????、?????!????,????????????????!????。 ?????-?????【??】?????????: ???????????????????,?????????????????????,?????,????,???????. ?????-?????【??】?????????: ???????????????????、????????、????????、????????、???????,????????????。 ??????-??????【??】??????????: ????????????????、?????、?????、????、?????,??????????。????????????????! ??????-??????【??】??????????: ???????????????????????:????、????、??????????????,????????。????????! ????-????【??】????????: ?????????????????????,????????????,?????、??、????,?????,??????! ?????-?????【??】?????????: ???????????????????,??????????,????????、????,??????????,??????????。 ?????-?????【??】?????????: ???????,??????:?????,?????,??????,??????????,????????。????????! ????: ????????,????????????????????????,?????????,????????????????。????????????????????,??????????????,?????????????。 ????????????,?:   ??????   ??????   ????????   ??????-??????【??】??????????: ??????????????????????,???????????????,????????????????????! 

    Read the article

  • CodePlex Daily Summary for Friday, October 25, 2013

    CodePlex Daily Summary for Friday, October 25, 2013Popular Releases7zbackup - PowerShell Script to Backup Files with 7zip: 7zBackup v. 1.9.5 Stable: Do you like this piece of software ? It took some time and effort to develop. Please consider a helping me with a donation Or please visit my blog Code : Rewritten the launcher of 7zip whith "old" fashioned batch. Better handling of exit codes Feat : New argument --notifyextra to drive the way extra information is delivered with the notification log Bug : NoFollowJunctions switch was inverted Feat : Added new directives maxfilesize and minfilesize to enhance file selection upon their ...Gac Library -- C++ Utilities for GPU Accelerated GUI and Script: Gaclib 0.5.5.0: Gaclib.zip contains the following content GacUIDemo Demo solution and projects Public Source GacUI library Document HTML document. Please start at reference_gacui.html Content Necessary CSS/JPG files for document. Improvements to the previous release Add 1 demos Editor.Toolstrip.Document Added new features GuiDocumentViewer and GuiDocumentLabel is editable like an RichTextEdit control.PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.0.7: This is a bug fix release, containing some important fixes! Fixed issue where Session 0 was not detected correctly, resulting in issues when attempting to display a UI when none was allowed Fixed Installation Prompt and Installation Restart Prompt appearing when deploy mode was non-interactive or silent Fixed issue where defer prompt is displayed after force closing multiple applications Fixed issue executing blocked app execution dialog from UNC path (executed instead from local tempo...BlackJumboDog: Ver5.9.7: 2013.10.24 Ver5.9.7 (1)FTP???????、2?????????????shift-jis????????????? (2)????HTTP????、???????POST??????????????????CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.1.0.34322 Alpha 4: This experimental release of the CtrlAltStudio Viewer includes the following significant features: Oculus Rift support. Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-1-0-34322-alpha-4 Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or sup...ImapX 2: ImapX 2.0.0.13: The long awaited ImapX 2.0.0.13 release. The library has been rewritten from scratch, massive optimizations and refactoring done. Several new features introduced. Added support for Mono. Added support for Windows Phone 7.1 and 8.0 Added support for .Net 2.0, 3.0 Simplified connecting to server by reducing the number of parameters and adding automatic port selection. Changed authentication handling to be universal and allowing to build custom providers. Added advanced server featur...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 32 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) This release has been tested with Visual Studio 2008, 2010, 2012 and 2013, using TortoiseSVN 1.6, 1.7 and 1.8. It should also still work with Visual Studio 2005, but I couldn't find anyone to test it in VS2005. Build 32 (beta) changelogNew: Added Visual Studio 2013 support New: Added Visual Studio 2012 support New: Added SVN 1.8 support New: Added 'Ch...ABCat: ABCat v.2.0.1a: ?????????? ???????? ? ?????????? ?????? ???? ??? Win7. 
????????? ?????? ????????? ?? ???????. ????? ?????, ???? ????? ???????? ????????? ?????????? ????????? "?? ??????? ????? ???????????? ?????????? ??????...", ?? ?????????? ??????? ? ?????????? ?????? Microsoft SQL Ce ?? ????????? ??????: http://www.microsoft.com/en-us/download/details.aspx?id=17876. ???????? ?????? x64 ??? x86 ? ??????????? ?? ?????? ???????????? ???????. ??? ??????? ????????? ?? ?????????? ?????? Entity Framework, ? ???? ...patterns & practices: Data Access Guidance: Data Access Guidance 2013: This is the 2013 release of Data Access Guidance. The documentation for this RI is also available on MSDN: Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence: http://msdn.microsoft.com/en-us/library/dn271399.aspxMedia Companion: Media Companion MC3.584b: IMDB changes fixed. Fixed* mc_com.exe - Fixed to using new profile entries. * Movie - fixed rename movie and folder if use foldername selected. * Movie - Alt Edit Movie, trailer url check if changed and confirm valid. * Movie - Fixed IMDB poster scraping * Movie - Fixed outline and Plot scraping, including removal of Hyperlink's. * Movie Poster refactoring, attempts to catch gdi+ errors Revision HistoryJayData -The unified data access library for JavaScript: JayData 1.3.4: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video KendoUI examples: JayData example site Examples for map integration JayData example site What's new in JayData 1.3.4 For detailed release notes check ...TerrariViewer: TerrariViewer v7.2 [Terraria Inventory Editor]: Added "Check for Update" button Hopefully fixed Windows XP issue You can now backspace in Item stack fieldsVirtual Wifi Hotspot for Windows 7 & 8: Virtual Router Plus 2.6.0: Virtual Router Plus 2.6.0Fast YouTube Downloader: Fast YouTube Downloader 2.3.0: Fast YouTube DownloaderMagick.NET: Magick.NET 6.8.7.101: Magick.NET linked with ImageMagick 6.8.7.1. Breaking changes: - Renamed Matrix classes: MatrixColor = ColorMatrix and MatrixConvolve = ConvolveMatrix. - Renamed Depth method with Channels parameter to BitDepth and changed the other method into a property.VidCoder: 1.5.9 Beta: Added Rip DVD and Rip Blu-ray AutoPlay actions for Windows: now you can have VidCoder start up and scan a disc when you insert it. Go to Start -> AutoPlay to set it up. Added error message for Windows XP users rather than letting it crash. Removed "quality" preset from list for QSV as it currently doesn't offer much improvement. Changed installer to ignore version number when copying files over. Should reduce the chances of a bug from me forgetting to increment a version number. Fixed ...MSBuild Extension Pack: October 2013: Release Blog Post The MSBuild Extension Pack October 2013 release provides a collection of over 480 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...VG-Ripper & PG-Ripper: VG-Ripper 2.9.49: changes NEW: Added Support for "ImageTeam.org links NEW: Added Support for "ImgNext.com" links NEW: Added Support for "HostUrImage.com" links NEW: Added Support for "3XVintage.com" linksmyCollections: Version 2.8.7.0: New in this version : Added Public Rating Added Collection Number Added Order by Collection Number Improved XBMC integrations Play on music item will now launch default player. Settings are now saved in database. Tooltip now display sort information. Fix Issue with Stars on card view. Fix Bug with PDF Export. Fix Bug with technical information's. Fix HotMovies Provider. Improved Performance on Save. Bug FixingMoreTerra (Terraria World Viewer): MoreTerra 1.11.3.1: Release 1.11.3.1 ================ = New Features = ================ Added markers for Copper Cache, Silver Cache and the Enchanted Sword. ============= = Bug Fixes = ============= Use Official Colors now no longer tries to change the Draw Wires option instead. World reading was breaking for people with a stock 1.2 Terraria version. Changed world name reading so it does not crash the program if you load MoreTerra while Terraria is saving the world. =================== = Feature Removal = =...New ProjectsAdder: Adder is simply converter for WPF binding. They add a value from parameter to target if a target int or double, concatenate parameter and value if type a strinAir Control ATV: App for Windows Phone to remote control Apple TV with FireCore aTV Flash (or Black) and AirControl installed.Bangladeshi Open Source Windows 8/Phone Apps: The largest Bangladeshi Open Source project with a vision. Mostly community-contributed Windows 8 and Windows Phone apps.BTB: Tumor board presentation tool for oncologyBulletin: not completed yet. the basic functions are ready, though.CRM Dashboard for Lync: Dynamics CRM Dashboard for Lync is an add-on that makes both applications seamlessly communicate between one another.Crowd CMS: Crowd CMS - A Crowd Funded Web Content Management System Built Using ASP.Net MVC 4 Website: http://www.crowdcms.co.uk CrowdCube: http://goo.gl/Mnd1Xsdusanproject: It is a project for Web Scripting and Application Development (COM), School of Computer of Science, University of HertfordshireElectronic Integrated Disease Surveillance System (EIDSS™) - Web Version: EIDSS can be configured as an electronic disease surveillance network, implemented in a region, to support public health practitioners and epidemiologists.Facebook Login Tool: Facebook Login ToolGadgeteer interface: Project aimed at creating common wrappers and interfaces for common components. This would allow for building applications based on the interfaces, this should Madoko: Madoko is a fast javascript Markdown processor written in Koka. 
It can render beautiful HTML and PDF (via LaTeX) and supports many Markdown extensions.MinotaurTeam: Project Management System using ASP.NET MVCmobSocial: Open Source Social Network for free!Online Room Reservation for Exchange: When you have Microsoft Exchange Server and meeting rooms you share with people from outside your company this is the ideal application for you.Pescar2013-shop-Masapan: aproyecto_Andy_y_Meli: mely and andyQMachine: A platform for World Wide Computingquarkz: It's just a try-out of Visual C++ (on example of Windows Forms Application). In this project I tried out to re-write one simple Flash game called quarkz in VC++TFS Workspaces Cleaner: TFS Workspaces Cleaner deletes Team Foundation Server workspaces that have not been accessed in a number of days, along with their files locally on disk.Ticketing System With ASP.NET MVC: Ticketing System where visitors (without authentication) should be able to view most commented tickets, as well as to register and login in the system. RegisterTotalFreedom Angular JS MVC: This library provides more tight integration between C# classes and AngularJS, helps to those who prefer to use as much C# and as little JavaScript as possiblevWordToHtml: vWordToHtmlvwpHistory: vwpHistoryweibowinform: weibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformweibowinformXompare - XML files comparison: Xompare - XML files comparison

    Read the article

  • Scrum Master Stephen Forte Teaches Agile Development, Silverlight and BI at GIDS 2010

    - by rajesh ahuja
    Great Indian Developer Summit 2010 – Gold Standard for India's Software Developer Ecosystem

    Bangalore, March 25, 2010: Stephen Forte, certified Scrum Master and author of several books on application and database development, including Programming SQL Server 2008, is coming this summer to India's biggest summit for the developer ecosystem - the Great Indian Developer Summit. At the summit, Stephen will conduct a workshop guaranteed to give attendees a jump start toward taking a Certified Scrum Master exam. Scrum is one of the most popular Agile project management and development methods, and it is starting to be adopted at major corporations and on very large projects. After an introduction to the basics of Scrum - project planning and estimation, the Scrum Master, team, product owner and burndown, and of course the daily Scrum - Stephen will show many real-world applications of the methodology drawn from his own experience as a Scrum Master. Negotiating with the business, estimation and team dynamics are all discussed, as well as how to use Scrum in small organizations, large enterprise environments and consulting environments. Stephen will also discuss using Scrum with virtual teams and in an off-shoring environment. He will then take a look at the tools used for Agile development, including planning poker, unit testing, and much more.

    On 20th April at the GIDS.NET Conference, Stephen will also conduct a series of sessions on Microsoft computing technologies. He will teach how to build data-driven, n-tier Rich Internet Applications (RIA) with Silverlight 4.0. Line-of-business (LOB) applications in Silverlight 4.0 are easy to build by tapping the power of WCF RIA Services, the Silverlight Toolkit, and elevated out-of-browser support. Stephen's demo-centric session will walk you through an example of building a LOB application with Silverlight 4.0. See how Silverlight and WCF RIA Services support domain logic, services, data binding, validation, server-based paging, authentication, authorization and much more. Silverlight 4.0 means business.

    Silverlight runs C# and Visual Basic code, and so it seems natural that a business application might share some code between the Silverlight client and its ASP.NET Web server. You may want to run some code client-side for interactivity, but re-run that code on the server for security or reliability. This is possible, and there are several techniques you can use to accomplish this goal. In Stephen's second talk, learn about the various techniques and their pros and cons. Some techniques work better in C#, others in VB. Still others are simpler with a little extra tooling or code generation. Any serious Silverlight business application will almost certainly face this issue, and this session gets you going fast.

    In the third talk, Stephen will explain how to properly architect and deploy a BI application using a mix of some exciting new tools and some old familiar ones. He will start with a traditional relational, transaction-centric database (OLTP) and explore ways to build a data warehouse (OLAP), looking at the star and snowflake schemas. Next he will look at the process of extracting, transforming, and loading (ETL) your OLTP data into your data warehouse. Different techniques for ETL will be described and the various tradeoffs will be discussed. Then he will look at using the warehouse for reporting, drill-down, and data analysis in Microsoft Excel's PowerPivot 2010.
The session will round off by showing how to properly build a cube and build a data analysis application on top of that cube, and conclude by looking at some tools to help with the data visualization process. Every year, GIDS is a game changer for several thousands of IT professionals, providing them with a competitive edge over their peers, enlightening them with bleeding-edge information most useful in their daily jobs, helping them network with world-class experts and visionaries, and providing them with a much needed thrust in their careers. Attend Great Indian Developer Summit to gain the information, education and solutions you seek. From post-conference workshops, breakout sessions by expert instructors, keynotes by industry heavyweights, enhanced networking opportunities, and more. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advise from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms, practical tutorials that deep dive into technical skill and best practices, inspirational keynote presentations, an Expo Hall featuring dozens of the latest projects and products activities, engaging networking events, and the interact with the best and brightest of speakers from around the world. For further information on GIDS 2010, please visit the summit on the web http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • Ajax, Lizard Brain Web Design, JSF, Struts, JavaScript, Mobile Web, Flash, jQuery, GWT, Harmony at I

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals Bangalore, April 9, 2010: The GIDS.Web Conference and Workshops has announced the complete program of over 30 sessions on how browser and rich web technologies such as AJAX, DHTML, Mashups, Web 2.0, Enterprise 2.0 technologies, and Rich UI technologies are making money and gaining market-share for some of the leading businesses in the world. The GIDS.Web track at Great Indian Developer Summit takes place 21 and 23 April 2010, at the Indian Institute of Science in Bangalore. As one of the longest running independent developer conferences in India, GIDS.Web at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 21 and 23 April 2010, GIDS.Web offers a multi-track conference, workshops, expo show floor, and networking opportunities. The first keynote at GIDS.Web is led by the leading Java EE and Ajax developer, speaker, and author Marty Hall. The best of India's Java and RIA programmers have learnt the subject from Marty's seminal books Core Servlets and JavaServer Pages (first and second editions), More Servlets and JavaServer Pages, and Core Web Programming (first and second editions) from Prentice Hall and Sun Microsystems Press. Marty's keynote address is a comparison of approaches to building rich Internet applications with Ajax. Marty says Ajax development is difficult, and there are several fundamentally different strategies to building Ajaxified Web applications. The keynote address will survey the three most important of these approaches: using an Ajax-enabled JavaScript library such as jQuery, Prototype, Scriptaculous, Dojo, or Ext/JS; using a Web framework such as JSF 2.0 or Struts 2 that has integrated Ajax support; using the Google Web Toolkit (GWT) to build "pure Java" Ajax applications. The talk will compare and contrast these three approaches, discussing the types of applications that fit best for each option. Over the course of the summit Marty will conduct several more sessions on "Choosing an Ajax/JavaScript Toolkit: A Comparison of the Most Popular JavaScript Libraries", "Pure Java Ajax: An Overview of GWT 2.0", "Integrated Ajax Support in JSF 2.0" and "Ajax Support in the Prototype JavaScript Library". The second keynote by the head of Adobe's Flash initiative in India, Ramesh Srinivasaraghavan, explores the state of art in web application development and identify trends that could transform the way we create and use web applications. The talk explains how the Adobe Flash Platform has fuelled this revolution with an integrated set of technologies for delivering the most compelling applications, content and video to the widest possible audience. The Director of Forum Nokia will explain how cloud computing coupled with mobile applications enable consumers to have access to powerful services and improved user experiences never before thought possible. IEEE's 2010 President-Elect Sorel Reisman's afternoon address steps to improve the IT profession in India. 
Featured talks at GID.Web also include: Web 2.0 Checklist - Deconstructing Modern Websites, Scott Davis Choosing an Ajax/JavaScript Toolkit: Comparison of Popular JavaScript Libraries, Marty Hall Lizard Brain Web Design, Scott Davis Effective Design Processes and Resources for Mobile Web Development, Arabella David NoSQL: The Shift to a Non-relational World, Nosh Petigara Open Source Web Debugging Tools, Matthew McCullough Building Line of Business Applications with Silverlight 4.0, Stephen Forte Hadoop - Divide and Conquer, Matthew McCullough Adobe Flash Catalyst for Agile Interaction Design, Harish Sivaramakrishnan Using jQuery and AJAX to Build Front-ends for ASP.NET and ASP.NET MVC, Pandurang Nayak First Steps to IT Heaven Through the Cloud. Part II: .WEB, Simone Brunozzi Building Rich Internet Applications with SL RIA Web Services, Pandurang Nayak Enriching Cloud Applications with Adobe Flash Platform, Ramesh Srinivasaraghavan Payments for the Web.future, Khurram Khan and Praveen Alavilli Longevity of Scalable Systems, Nishad Kamat Transform yourself into a Mobile App Developer Using Web Run Time, Balagopal K S Developing Multi Screen Applications on Adobe Flash Platform, Hemanth Sharma Why Harmony and For Whom?, Himanshu Goyal IIS Hosting Solution for ASP.net and PHP Web Sites, Nahas Mohammed Building Pluggable Web applications using Django, Lakshman Prasad Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis Workshop: Essence of Functional Programming, Venkat Subramaniam Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh Workshop: Building Your First Amazon App, Simone Brunozzi Workshop: Windows Azure Deep Dive, Ramaprasanna Chellamuthu Workshop: Monetizing your Apps with PayPal X Payments Platform, Khurram Khan, Praveen Alavilli Workshop: User Expereince Evaluation Model Walkthrough, Sanna Häiväläinen Sponsors of Great Indian Developer Summit 2010 include: Platinum sponsors Microsoft, Oracle Forum Nokia and Adobe; Gold sponsors Intel and SAP; Silver sponsors Quest Software, PayPal, Telerik and AMT. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advise from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms, practical tutorials that deep dive into technical skill and best practices, inspirational keynote presentations, an Expo Hall featuring dozens of the latest projects and products activities, engaging networking events, and the interact with the best and brightest of speakers from around the world. 
For further information on GIDS 2010, please visit the summit on the web http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • What does a Software Developer actually do?

    - by chobo2
    Hi, I am graduating with my Computer Science degree a few weeks from now! I have started to look for my first job. For the last couple of years I have gotten really into web programming (ASP.NET). My first choice would be a junior ASP.NET MVC developer position, but I don't think any companies in my area use MVC yet, and if they do, they are not hiring. So my second choice would be a junior ASP.NET WebForms developer. My other choices after that would be Windows Forms applications or mobile applications using .NET and C#. As you can see, I am looking for something with .NET. I spent the last couple of years doing .NET projects for school and in my free time, I love the language, and it would pain me right now to switch to something like PHP. So now I have found a posting in my area for an Entry Software Developer. I like the fact that they are using .NET and that it is an entry-level job (I have never worked in this industry and never had more than a tutoring job, so I don't want to go for intermediate jobs yet). Posting: Are you looking for an exciting challenge within a dynamic, people-oriented culture where you can launch your technical career? Company Name Inc. is a technology consulting company, located in Canada, that designs, develops, and delivers real-time interactive applications accessed via the Internet as well as back-end tools to support these applications. Company Name provides a combination of out-of-the-box and customized solutions to an expanding list of partners and customers. POSITION SUMMARY As a member of our team, the successful candidate will be responsible for helping us increase the quality and stability of our software systems by working jointly and directly with both the Software Development teams and the QA Team. The primary mission of this role will be to substantially enhance our test automation suite. The incumbent will design and program automated tests (unit, integration, system, stress and load) in Visual Studio using C# and will develop sound processes that help us identify and resolve defects as early as possible. The successful incumbent will help us improve and enhance system functionality, reliability, performance and scalability. This role is specifically designed for an eager, bright, new graduate who is looking for a stepping stone into a software engineering role. We promote from within and invite new graduates to apply for this important position - which may lead to new opportunities. We also offer a generous professional development plan to help you on your way. You will be a key part of a team of experts that is responsible for improving the quality of our software by: • Designing, writing, and executing test plans and programmatic tests in Visual Studio using C# and NUnit for functional testing of our code, new features, regression, and performance test procedures. • Working with the engineers to design and build the stress and load testing framework which emulates tens and even hundreds of thousands of concurrent users via a distributed network interfacing with our Load Testing Lab. • Interfacing with both the Development Team and the QA Team to ensure risks are identified and managed. • Mentoring and leading the QA Team in programmatic test automation technologies and tools. MUST HAVE SKILLS / QUALIFICATIONS: • Diploma or higher Degree in Computer Science, or equivalent formal training. • Fundamental C# programming skills. • Knowledge of Internet technologies and Microsoft Windows platforms. • Knowledge of PC hardware. • Excellent communication skills (both oral and written).
• Self-starter who takes initiative, requires minimal supervision, and can handle multiple simultaneous tasks. • Detail-oriented, able to concentrate, and work quickly. • Proven diagnostic, analytical, and problem solving skills. NICE TO HAVE SKILLS: • Exposure to Visual Studio Team System or Visual Studio Test Edition. • Exposure to C# using NUnit. • Exposure to NUnit, HTTPUnit, and other automation tool suites. • Exposure to Performance/Stress/Load Testing. • Good understanding of relational databases (MS SQL Server). • Familiar with video and online multi-player games. As part of our team you will have the opportunity to work with a supportive team of experts, drive your own success, and ride the wave as we continually expand our team of experts. If you are interested in this opportunity, please send your resume to [email protected] with “Entry Level Software Developer” in the subject line. So that is the posting. To me it sounds like a QA job. I don't have anything against QA jobs, but a lot of them seem to be just clicking buttons and running scripts. Is this what a typical software developer does? I am very much on the fence about applying for this job. On one side, I am not sure how much programming I would be doing. I want to be programming at least half the time, otherwise my skills will never improve since I will never be programming in teams. At the same time, I have no experience in the industry, so on the other side I am thinking I should just go for it and then maybe a year later try to get a full programming job (provided that I get this job). Yet if I am not programming in that job, then that experience will not help me with the next job I look for, and I will be back at square one.

    Read the article

  • Thread scheduling Round Robin / scheduling dispatch

    - by MRP
    #include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* for sleep() */
#include <semaphore.h>

#define NUM_THREADS 4
#define COUNT_LIMIT 13

int done = 0;
int count = 0;
int quantum = 2;
int thread_ids[4] = {0,1,2,3};
int thread_runtime[4] = {0,5,4,7};
pthread_mutex_t count_mutex;
pthread_cond_t count_threshold_cv;
void *inc_count(void *arg);
static sem_t count_sem;
int quit = 0;

/* inc_count */
void *inc_count(void *t)
{
    long my_id = (long)t;
    int i;
    sem_wait(&count_sem);               /* critical section: one inc_count thread at a time */
    printf("run_thread = %ld\n", my_id);
    printf("%d \n", thread_runtime[my_id]);
    for (i = 0; i < thread_runtime[my_id]; i++) {
        printf("runtime= %d\n", thread_runtime[my_id]);
        pthread_mutex_lock(&count_mutex);
        count++;
        if (count == COUNT_LIMIT) {
            pthread_cond_signal(&count_threshold_cv);
            printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count);
        }
        printf("inc_count(): thread %ld, count = %d, unlocking mutex\n", my_id, count);
        pthread_mutex_unlock(&count_mutex);
        sleep(1);
    }
    sem_post(&count_sem);               /* next thread enters critical section */
    pthread_exit(NULL);
}

/* watch_count */
void *watch_count(void *t)
{
    long my_id = (long)t;
    printf("Starting watch_count(): thread %ld\n", my_id);
    pthread_mutex_lock(&count_mutex);
    if (count < COUNT_LIMIT) {
        pthread_cond_wait(&count_threshold_cv, &count_mutex);
        printf("watch_count(): thread %ld Condition signal received.\n", my_id);
        printf("watch_count(): thread %ld count now = %d.\n", my_id, count);
    }
    pthread_mutex_unlock(&count_mutex);
    pthread_exit(NULL);
}

/* main */
int main(int argc, char *argv[])
{
    int i;
    long t1 = 0, t2 = 1, t3 = 2, t4 = 3;
    pthread_t threads[4];
    pthread_attr_t attr;

    sem_init(&count_sem, 0, 1);

    /* Initialize mutex and condition variable objects */
    pthread_mutex_init(&count_mutex, NULL);
    pthread_cond_init(&count_threshold_cv, NULL);

    /* For portability, explicitly create threads in a joinable state */
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
    pthread_create(&threads[0], &attr, watch_count, (void *)t1);
    pthread_create(&threads[1], &attr, inc_count, (void *)t2);
    pthread_create(&threads[2], &attr, inc_count, (void *)t3);
    pthread_create(&threads[3], &attr, inc_count, (void *)t4);

    /* Wait for all threads to complete */
    for (i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
    }
    printf("Main(): Waited on %d threads. Done.\n", NUM_THREADS);

    /* Clean up and exit */
    pthread_attr_destroy(&attr);
    pthread_mutex_destroy(&count_mutex);
    pthread_cond_destroy(&count_threshold_cv);
    pthread_exit(NULL);
}

I am trying to learn thread scheduling; there is a lot of technical coding that I don't know. I do know in theory how it should work, but I'm having trouble getting started in code. I know (at least I think) this program is not real-time, and it's not meant to be. Somehow I need to create a scheduler/dispatcher to control the threads in the order they should run: RR, FCFS, SJF, etc. Right now I don't have a dispatcher. What I do have are semaphores and a mutex to control the threads. This code does run FCFS, and I have been trying to use semaphores to create RR, but I'm having a lot of trouble. I believe it would be easier to create a dispatcher, but I don't know how. I need help; I am not looking for answers, just direction. Some sample code would help me understand a bit more. Thank you.
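If a nudge in the right direction helps: one minimal way to approximate round-robin among worker threads is to protect a shared "turn" variable with a mutex and condition variable, let each thread do at most one quantum of work when it owns the turn, then pass the turn to the next thread that still has work left. The sketch below is a standalone illustration under that assumption (names such as worker, advance_turn and the work counts are made up); it is not a drop-in patch for the program above.

/* Hypothetical sketch: cooperative round-robin turn-taking with a
 * condition variable. Each worker does QUANTUM units of work when
 * `turn` reaches its id, then hands the turn to the next runnable
 * thread. Build with: gcc rr_sketch.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 3
#define QUANTUM  2

static int remaining[NWORKERS] = { 5, 4, 7 };   /* work left per thread */
static int turn = 0;                            /* whose turn it is     */
static pthread_mutex_t lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  turn_cv = PTHREAD_COND_INITIALIZER;

static void advance_turn(void)          /* caller must hold `lock` */
{
    int i;
    for (i = 1; i <= NWORKERS; i++) {   /* find the next thread with work left */
        int next = (turn + i) % NWORKERS;
        if (remaining[next] > 0) { turn = next; break; }
    }
    pthread_cond_broadcast(&turn_cv);   /* wake everyone; they re-check `turn` */
}

static void *worker(void *arg)
{
    int id = (int)(long)arg;
    int slice;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (turn != id && remaining[id] > 0)
            pthread_cond_wait(&turn_cv, &lock);
        if (remaining[id] <= 0) {       /* nothing left: give the turn away */
            if (turn == id) advance_turn();
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        slice = remaining[id] < QUANTUM ? remaining[id] : QUANTUM;
        remaining[id] -= slice;
        printf("thread %d ran %d unit(s), %d left\n", id, slice, remaining[id]);
        advance_turn();                 /* hand the CPU to the next thread */
        pthread_mutex_unlock(&lock);
    }
}

int main(void)
{
    pthread_t t[NWORKERS];
    long i;
    for (i = 0; i < NWORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (i = 0; i < NWORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}

The same turn-passing idea generalizes: picking the next thread by shortest remaining work instead of (turn + i) % NWORKERS turns this round-robin dispatcher into an SJF one.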

    Read the article

  • Windows 2008 RenderFarm Service: CreateProcessAsUser "Session 0 Isolation" and OpenGL

    - by holtavolt
    Hello, I have a legacy Windows server service and (spawned) application that works fine in XP-64 and W2K3, but fails on W2K8. I believe it is because of the new "Session 0 isolation " feature. (Note: As a StackOverflow newbie I'm being limited to one link in this post, so you'll need to scroll to bottom to lookup the links for '' items)* Consequently, I'm looking for code samples/security settings mojo that let you create a new process from a windows service for Windows 2008 Server such that I can restore (and possibly surpass) the previous behavior. I need a solution that: Creates the new process in a non-zero session to get around session-0 isolation restrictions (no access to graphics hardware from session 0) - the official MS line on this is: Because Session 0 is no longer a user session, services that are running in Session 0 do not have access to the video driver. This means that any attempt that a service makes to render graphics fails. Querying the display resolution and color depth in Session 0 reports the correct results for the system up to a maximum of 1920x1200 at 32 bits per pixel. The new process gets a windows station/desktop (e.g. winsta0/default) that can be used to create windows DCs. I've found a solution (that launches OK in an interactive session) for this here: *(Starting an Interactive Client Process in C++ - 2) The windows DC, when used as the basis for an *(OpenGL DescribePixelFormat enumeration - 3), is able to find and use the hardware-accelerated format (on a system appropriately equipped with OpenGL hardware.) Note that our current solution works OK on XP-64 and W2K3, except if a terminal services session is running (VNC works fine.) A solution that also allowed the process to work (i.e. run with OpenGL hardware acceleration even when a terminal services session is open) would be fanastic, although not required. I'm stuck at item #1 currently, and although there are some similar postings that discuss this (like *(this -4), and *(this - 5) - they are not suitable solutions, as there is no guarantee of a user session logged in already to "take" a session id from, nor am I running from a LocalSystem account (I'm running from a domain account for the service, for which I can adjust the privileges of, within reason, although I'd prefer to not have to escalate priorities to include SeTcbPrivileges.) For instance - here's a stub that I think should work, but always returns an error 1314 on the SetTokenInformation call (even though the AdjustTokenPrivileges returned no errors) I've used some alternate strategies involving "LogonUser" as well (instead of opening the existing process token), but I can't seem to swap out the session id. I'm also dubious about using the WTSActiveConsoleSessionId in all cases (for instance, if no interactive user is logged in) - although a quick test of the service running with no sessions logged in seemed to return a reasonable session value (1). I’ve removed error handling for ease of reading (still a bit messy - apologies) //Also tried using LogonUser(..) here OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID | TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE, &hToken) GetTokenInformation( hToken, TokenSessionId, &logonSessionId, sizeof(DWORD), &dwTokenLength ) DWORD consoleSessionId = WTSGetActiveConsoleSessionId(); /* Can't use this - requires very elevated privileges (LOCAL only, SeTcbPrivileges as well) if( !WTSQueryUserToken(consoleSessionId, &hToken)) ... 
*/ DuplicateTokenEx(hToken, (TOKEN_QUERY | TOKEN_ADJUST_PRIVILEGES | TOKEN_ADJUST_SESSIONID | TOKEN_ADJUST_DEFAULT | TOKEN_ASSIGN_PRIMARY | TOKEN_DUPLICATE), NULL, SecurityIdentification, TokenPrimary, &hDupToken)) // Look up the LUID for the TCB Name privilege. LookupPrivilegeValue(NULL, SE_TCB_NAME, &tp.Privileges[0].Luid)) // Enable the TCB Name privilege in the token. tp.PrivilegeCount = 1; tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; if (!AdjustTokenPrivileges(hDupToken, FALSE, &tp, sizeof(TOKEN_PRIVILEGES), NULL, 0)) { DisplayError("AdjustTokenPrivileges"); ... } if (GetLastError() == ERROR_NOT_ALL_ASSIGNED) { DEBUG( "Token does not have the necessary privilege.\n"); } else { DEBUG( "No error reported from AdjustTokenPrivileges!\n"); } // Never errors here DEBUG(LM_INFO, "Attempting setting of sessionId to: %d\n", consoleSessionId ); if (!SetTokenInformation(hDupToken, TokenSessionId, &consoleSessionId, sizeof(DWORD))) *** ALWAYS FAILS WITH 1314 HERE *** All the debug output looks fine up until the SetTokenInformation call - I see session 0 is my current process session, and in my case, it's trying to set session 1 (the result of the WTSGetActiveConsoleSessionId). (Note that I'm logged into the W2K8 box via VNC, not RDC) So - a the questions: Is this approach valid, or are all service-initiated processes restricted to session 0 intentionally? Is there a better approach (short of "Launch on logon" and auto-logon for the servers?) Is there something wrong with this code, or a different way to create a process token where I can swap out the session id to indicate I want to spawn the process in a new session? I did try using LogonUser instead of OpenProcessToken, but that didn't work either. (I don't care if all spawned processes share the same non-zero session or not at this point.) Any help much appreciated - thanks! (You need to replace the 'zttp' with 'http' - StackOverflow restriction on one link in my newbie post) 2: http://msdn.microsoft.com/en-us/library/aa379608(VS.85).aspx 3: http://www.opengl.org/resources/faq/technical/mswindows.htm 4: http://stackoverflow.com/questions/2237696/creating-a-process-in-a-non-zero-session-from-a-service-in-windows-2008-server 5: http://stackoverflow.com/questions/1602996/how-can-i-lauch-a-process-which-has-a-ui-from-windows-service
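For readers comparing approaches: the pattern the linked MSDN article describes - and which, as the poster notes, only works when the service runs as LocalSystem or otherwise holds SeTcbPrivilege - looks roughly like the sketch below. The function name and executable path are made up, error handling is trimmed, and it is shown only as a reference for what the "take the console user's token" route involves, not as a fix for the domain-account constraint.

/* Hypothetical sketch of the WTSQueryUserToken + CreateProcessAsUser path.
 * Requires SE_TCB_NAME (effectively LocalSystem); link wtsapi32, userenv,
 * advapi32. Error handling omitted for brevity. */
#include <windows.h>
#include <wtsapi32.h>
#include <userenv.h>

BOOL LaunchInConsoleSession(void)
{
    DWORD sessionId = WTSGetActiveConsoleSessionId();
    HANDLE hUserToken = NULL, hPrimary = NULL;
    LPVOID env = NULL;
    STARTUPINFOW si;
    PROCESS_INFORMATION pi;
    WCHAR desktop[] = L"winsta0\\default";          /* interactive window station */
    WCHAR cmdLine[] = L"C:\\RenderFarm\\worker.exe"; /* made-up example path */
    BOOL ok;

    /* Grab the token of the user logged on to the console session. */
    if (!WTSQueryUserToken(sessionId, &hUserToken))
        return FALSE;                                /* fails without SE_TCB_NAME */

    /* Make a primary token we can hand to CreateProcessAsUser. */
    DuplicateTokenEx(hUserToken, MAXIMUM_ALLOWED, NULL,
                     SecurityIdentification, TokenPrimary, &hPrimary);

    /* Build the user's environment block so the child sees a normal profile. */
    CreateEnvironmentBlock(&env, hPrimary, FALSE);

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    si.lpDesktop = desktop;

    ok = CreateProcessAsUserW(hPrimary, NULL, cmdLine, NULL, NULL, FALSE,
                              CREATE_UNICODE_ENVIRONMENT, env, NULL, &si, &pi);

    if (env) DestroyEnvironmentBlock(env);
    if (ok)  { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
    CloseHandle(hPrimary);
    CloseHandle(hUserToken);
    return ok;
}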

    Read the article

  • I never really understood: what is CGI?

    - by claws
    CGI is the Common Gateway Interface. As the name says, it is a "common" gateway interface for everything. From the name it sounds trivial and naive, and I felt that I understood it every time I encountered the word. But frankly, I didn't, and I'm still confused. I am a PHP programmer and I have done a lot of web development: user (client) requests a page --- web server (embedded PHP interpreter) --- server-side (PHP) script --- MySQL server. Now say my PHP script can fetch results from a MySQL server && a MATLAB server && some other server. Is the PHP script now the CGI, because it is the interface between the web server and all the other servers? I don't know. Sometimes people call CGI a technology, and other times they call CGI a program or some other kind of server. What exactly is CGI? What's the big deal with /cgi-bin/*.cgi? I don't know what the cgi-bin directory on the server is for, or why the files have *.cgi extensions. Why does Perl always come into the picture? CGI & Perl (the language) - I don't know what's up with these two either; almost all the time I hear them in the combination "CGI & Perl". Another great example is the book CGI Programming with Perl. Why not "CGI Programming with PHP/JSP/ASP"? I have never seen such titles. CGI Programming in C confuses me even more. In C?? Seriously?? I don't know what to say; I'm just confused. "In C"?? This changes everything: a program needs to be compiled and executed. This entirely changes my view of web programming. When do I compile? How does the program get executed (because it will be machine code, so it must run as an independent process)? How does it communicate with the web server - IPC? And does interfacing with all the other servers (in my example MATLAB & MySQL) use socket programming? I'm lost!! They say that CGI is deprecated and no longer in use. Is that so? What is its latest update? Once, I ran into a situation where I had to give HTTP PUT request access to a web server (Apache HTTPD). It was a long time back. As far as I remember, this is what I did: edited the configuration file of Apache HTTPD to tell the web server to pass all HTTP PUT requests to some put.php (I had to write this PHP script); implemented put.php to handle the request (save the file to the location mentioned). People said that I wrote a CGI script. Seriously, I didn't have a clue what they were talking about. Did I really write a CGI script? I hope you understood what my confusion is (because I myself don't know where I'm confused). I request you guys to keep your answers as simple as possible. I really can't understand any fancy technical terminology, at least not in this case. EDIT: I found this amazing tutorial, "CGI Programming Is Simple!" - CGI Tutorial, which explains the concepts in the simplest possible way. I have only one complaint about this tutorial: to make what he explained complete, he should have shown the C code he used for generating responses to those GET / POST requests. I've also added a link to this tutorial to Wikipedia's article: http://en.wikipedia.org/wiki/Common_Gateway_Interface
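To make the "CGI program in C" part concrete, here is a minimal, hypothetical example (not taken from the tutorial the poster links): the web server starts the compiled executable once per request, hands it request data through environment variables such as QUERY_STRING and REQUEST_METHOD (and the request body on stdin for POST), and sends whatever the program prints to stdout back to the browser - headers, a blank line, then the body.

/* hello.cgi.c -- minimal CGI program dropped into cgi-bin.
 * Illustrative only; build with: gcc hello.cgi.c -o hello.cgi */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* The web server passes request metadata via environment variables. */
    const char *query  = getenv("QUERY_STRING");    /* e.g. "name=claws"  */
    const char *method = getenv("REQUEST_METHOD");  /* "GET", "POST", ... */

    /* Everything written to stdout becomes the HTTP response:
     * headers first, then a blank line, then the body. */
    printf("Content-Type: text/html\r\n\r\n");
    printf("<html><body>\n");
    printf("<p>Method: %s</p>\n", method ? method : "(none)");
    printf("<p>Query string: %s</p>\n", query ? query : "(none)");
    printf("</body></html>\n");
    return 0;
}

The same contract holds whether the program is written in C, Perl, PHP or anything else, which is why "CGI" names the interface between the web server and an external program rather than a language or a product.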

    Read the article

  • Am I crazy? (How) should I create a jQuery content editor?

    - by Brendon Muir
    Ok, so I created a CMS mainly aimed at Primary Schools. It's getting fairly popular in New Zealand but the one thing I hate with a passion is the largely bad quality of in browser WYSIWYG editors. I've been using KTML (made by InterAKT which was purchased by Adobe a few years ago). In my opinion this editor does a lot of great things (image editing/management, thumbnailing and pretty good content editing). Unfortunately time has had its nasty way with this product and new browsers are beginning to break features and generally degrade the performance of this tool. It's also quite scary basing my livelihood on a defunct product! I've been hunting, in fact I regularly hunt around to see if anything has changed in the WYSIWYG arena. The closest thing I've seen that excites me is the WYSIHAT framework, but they've decided to ignore a pretty relevant editing paradigm which I'm going to outline below. This is the idea for my proposed editor, and I don't know of any existing products that can do this properly: Right, so the traditional model for editing let's say a Page in a CMS is to log into a 'back end' and click edit on the page. This will then load another screen with the editor in it and perhaps a few other fields. More advanced CMS's will maybe have several editing boxes that are for different portions of the page. Anyway, the big problem with this way of doing things is that the user is editing a document outside of the final context it will appear in. In the simplest terms, this means the page template. Many things can be wrong, e.g. the with of the editing area might be different to the width of the actual template area. The height is nearly always fixed because existing editors always seem to use IFRAMES for backward compatibility. And there are plenty of other beefs which I'm sure you're quite aware of if you're in this development area. Here's my editor utopia: You click 'Edit Page': The actual page (with its actual template) displays. Portions of the page have been marked as editable via a class name. You click on one of these areas (in my case it'd just be the big 'body' area in the middle of the template) and a editing bar drops down from the top of the screen with all your standard controls (bold, italic, insert image etc...). Iframes are never used, instead we rely on setting contentEditable to true on the DIV's in question. Firefox 2 and IE6 can go away, let's move on. You can edit the page knowing exactly how it will look when you save it. Because all the styles for this template are loaded, your headings will look correct, everything will be just dandy. Is this such a radical concept? Why are we still content with TinyMCE and that other editor that is too embarrassing to use because it sounds like a swear word!? Let's face the facts: I'm a JavaScript novice. I did once play around in this area using the Javascript Anthology from SitePoint as a guide. It was quite a cool learning experience, but they of course used the IFRAME to make their lives easier. I tried to go a different route and just use contentEditable and even tried to sidestep the native content editing routines (execCommand) and instead wrote my own. They kind of worked but there were always issues. Now we have jQuery, and a few libraries that abstract things like IE's lack of Range support. I'm wondering, am I crazy, or is it actually a good idea to try and build an editor around this editing paradigm using jQuery and relevant plugins to make the job easier? My actual questions: Where would you start? 
What plugins do you know of that would help the most? Is it worth it, or is there a magical project that already exists that I should join in on? What are the biggest hurdles to overcome in a project like this? Am I crazy? I hope this question has been posted on the right board. I figured it is a technical question, as I want to know specific hurdles and pitfalls to watch out for, and also whether it is technically feasible with today's technology. Looking forward to hearing people's thoughts and opinions.

    Read the article

  • Architecture choice about representation of collections in Business Objects

    - by Rajarshi
    I have made certain choices in my architecture which I request the community to review and comment. I am breaking up the post in smaller sections to make it easier to understand the context and then suggest/comment. I am sorry that the post is long, but is required to explain the context. What am I building A typical business application where there are application users, security roles, business operation/action rights based on roles and several business modules like Stock Receive, Stock Transfer, Sale Order, Sale Invoice, Sale Return, Stock Audit etc. and several reports. The application is a WinForm application since it has a lot of rich and responsive UI requirements and has to operate in disconnected mode (with a local SQL Server), most of the time. What have I done I have built a framework - nothing to boast about, but just a set of libraries that serves the repetative requirements of my application, e.g. authentication, role based authorization, data access, validation, exception handling, logging, change status tracking, presentation model compliance and reasonable loose coupling between components. No, I have not written everything from scratch, you can say I have consolidated many things together like some concepts from CSLA, Martin Fowler for Presentation Model, blocks from Enterprise Library, Unity etc. to build a set of libraries that will help my developers be productive quickly without having to look up Google for many of the technical requirements. I have tried to keep the framework generic so that it can be used in typical business applications and also tried to follow some best practices that will support the same Business Objects to be used in an ASP.NET MVC environment also. My present architecture serves my objectives well, and have built several modules (on WinForm) without much trouble. The architecture also lent itself well to build some usable prototype on ASP.NET MVC with the same set of business objects, without changing a single line of code. My Dilemma I have used Custom Business Objects since that gives me a clearer OOP representation of the problem scope in my solution scope, and helps me visualize my entire solution as collection of objects with data and behavior rather than having a set of relational data (DataSet) and implement behaviours (business logic, validation) etc. separately. With rich databinding support in .NET 2.0 binding Custom Business Objects to UI was a breeze. Now while building my business objects, I am still in a dilemma about representation of collections in business objects. Currently I am using DataSets to represent collections while I have seen many suggestions to implement custom collections. For example, in my vision, a typical Sale Invoice Object will contain 'Sales Invoice Items' as a collection. Now theoritically, I can accept that the each 'Sales Invoice Item' should have its own behavior along with their data (ItemCode, Name, Qty, Price etc.) but typically managing of Sale Invoice Items in a Sale Invoice is handled by the Sale Invoice Object itself, e.g. adding/removing Items from collection. Additionally, we can also put business logic/rules for the Sales Invoice Items like "Qty should not be greater than the ordered qty", "Price should be max 10% above the price in Sale Order" etc. in the Sale Invoice object itself. 
With that kind of a vision, I felt that most business object child collections can be managed by the parent itself, including add/remove from collection as well and implementing business logic for the collection items, hence the collection items hold nothing but data. Additionally, typical collections are represented in UI in Grids, where ability to support DataBinding becomes very important for any collection. Implementing a custom collection, in that case would also mean, I have to implement robust DataBinding support as well, for the collection, which is of course time consuming. Now, considering child collection behaviors are implemented in the parent and the need for DataBinding of child collections, I chose DataSet to represent any child collection in my business objects. In the above example of Sale Invoice I will have 'Invoice Number', 'Date', 'Customer' etc. as attributes of the 'Sale Invoice' but 'InvoiceItems' as a DataSet. Of course, when I say DataSet, it is not a vanilla dataset but an extended DataSet that supports business rule validation and the same role based security model of my framework to allow/deny any business operation to rows/columns of the DataSet, automatically. This approach has allowed easier collection management and databinding in my business objects and my developers are able to deliver modules rapidly. Questions Do you feel that the approach is reasonable? Do you see any shortcomings of this approach? I am recently thinking of using 'Typed DataSets' as child collections, for easier representation in code, that will allow me to write 'currentInvoice.InvoiceItems' (for the DataTable) and 'invoiceItem.ProductCode' or 'invoiceItem.Qty', instead of 'drow["ProductCode"].ToString()' or '(int)drow["Qty"]' etc. Does this choice have any demerits? Thank you if you have read so far and a salute if you still have the Energy to answer.

    Read the article

  • Does there exist an "idea checkout system" on the Internet?

    - by TimeSpace Traveller
    Greetings. I would like to ask the following question: is there anything on the Internet like an "idea checkout" system? Situation: I'm a software developer. Since my current job has started 2 years ago, my mentor at that time has pointed me to the open source world. I have only put little time to look at some of the open source projects, let alone any contribution. However, it is my wish to start developing something outside of the work. Well, except a little problem. I don't know what to develop! It is not about the technical knowledge; the problem is that, I am not a creative person. I am very good at analytical thinking, as well as debugging skills. When being told by my work partners to develop a solution, I could get it done without a problem. However, outside of work, I have no idea what to develop. When I look at the Internet, it seems that so many people have already been developing on so many interesting stuff, making me wonder what I could develop, so that I would not reinvent something already existed. That starts to make me wonder. On the Internet, is there anything like an "idea checkout" system or society? For example, some people would throw in as a software idea, and the system would keep it as an "inventory"; later, a potential software developer would "check out" the idea, just a how people would check out a book from the library. Then, the developer would check the "idea" back in, with a certain kind of work-in-progress or developed software, thus becoming an open-source project. I have just noticed that here at stackoverflow, there is a "Project-Ideas" tag, so perhaps that can provide me some ideas on what to develop; still, my wonder is about a system that people would provide ideas, and people would check out ideas to develop / implement into actual solution. Is there such a system or society existing anywhere on the Internet? Any input is welcome! Thank you very much. Update: Thank you for everyone who has answered my question. Certainly, "getting idea" is part of my problem; as a software developer, however, I'm concerned more than just "getting idea". What I am concerned more, as I have commented, is about the existence of such an idea exchanging ecosystem, capable to initiate open-source projects. I'll put an example here. Say, person A has an idea of music search program, but not search by the attributes of the music (composer, singer, publisher, lyrics, etc.); instead, he wants a program (and a database) to search a piece of music by melody. Very often, people only remember a piece of music by its melody, not even the name of the music (e.g. the music he wants was only once heard in a bookstore, but the melody just gets stuck in his head!). In order to find that piece, normally he would just need to blindly search for it, and spent a long time to do so. A search by melody would enable person A to find the piece much quicker. However, he would not want to personally work on it, not just because of the complexity (he is not a musician and/or programmer, knowing almost nothing about music systems in computer, search algorithms, etc.), but also legal issues (RIAA??), thus he would just like to keep the idea at some place, and let other people to work on that. Now, a developer (person B) may be at the same stage as I am right now, wishing to find something to develop, but not having an idea. With the idea exchanging ecosystem, person B will search, and somehow discover person A's music search idea, and feeling interested enough to work on it. 
So he "checks out" the idea, start working on it (at least a skeleton), and checks back in with the progress. An open-source project starts from here, fulfilling person A's wish, and person B's programming desire. The above is just an example, because there are already such systems exist on the Internet, but it illustrates what I think about the idea exchange system in my mind. My main concern is about idea exchanging ecosystem, not at personal and unorganized level, but at a semi-organized protocol that's specifically for software developers, having actual projects coming out as the fruits. Not about "projects", but about "ideas and product of ideas". Hopefully that would clear up some of the original idea of this question. Any input is welcome; in fact, I would like to hear as many people as possible how everyone thinks about this. Thank you very much!

    Read the article

  • How should I model the database for this problem? And which ORM can handle it?

    - by Kristof Claes
    I need to build some sort of a custom CMS for a client of ours. These are some of the functional requirements: Must be able to manage the list of Pages in the site Each Page can contain a number of ColumnGroups A ColumnGroup is nothing more than a list of Columns in a certain ColumnGroupLayout. For example: "one column taking up the entire width of the page", "two columns each taking up half of the width", ... Each Column can contain a number ContentBlocks Examples of a ContentBlock are: TextBlock, NewsBlock, PictureBlock, ... ContentBlocks can be given a certain sorting within a Column A ContentBlock can be put in different Columns so that content can be reused without having to be duplicated. My first quick draft of how this could look like in C# code (we're using ASP.NET 4.0 to develop the CMS) can be found at the bottom of my question. One of the technical requirements is that it must be as easy as possible to add new types of ContentBlocks to the CMS. So I would like model everything as flexible as possible. Unfortunately, I'm already stuck at trying to figure out how the database should look like. One of the problems I'm having has to do with sorting different types of ContentBlocks in a Column. I guess each type of ContentBlock (like TextBlock, NewsBlock, PictureBlock, ...) should have it's own table in the database because each has it's own different fields. A TextBlock might only have a field called Text whereas a NewsBlock might have fields for the Text, the Summary, the PublicationDate, ... Since one Column can have ContentBlocks located in different tables, I guess I'll have to create a many-to-many association for each type of ContentBlock. For example: ColumnTextBlocks, ColumnNewsBlocks and ColumnPictureBlocks. The problem I have with this setup is the sorting of the different ContentBlocks in a column. This could be something like this: TextBlock NewsBlock TextBlock TextBlock PictureBlock Where do I store the sorting number? If I store them in the associaton tables, I'll have to update a lot of tables when changing the sorting order of ContentBlocks in a Column. Is this a good approach to the problem? Basically, my question is: What is the best way to model this keeping in mind that it should be easy to add new types of ContentBlocks? My next question is: What ORM can deal with that kind of modeling? To be honest, we are ORM-virgins at work. I have been reading a bit about Linq-to-SQL and NHibernate, but we have no experience with them. Because of the IList in the Column class (see code below) I think we can rule out Linq-to-SQL, right? Can NHibernate handle the mapping of data from many different tables to one IList? Also keep in mind that this is just a very small portion of the domain. Other parts are Users belonging to a certain UserGroup having certain Permissions on Pages, ColumnGroups, Columns and ContentBlocks. 
The code (just a quick first draft):

public class Page
{
    public int PageID { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    public string Keywords { get; set; }
    public IList<ColumnGroup> ColumnGroups { get; set; }
}

public class ColumnGroup
{
    public enum ColumnGroupLayout { OneColumn, HalfHalf, NarrowWide, WideNarrow }

    public int ColumnGroupID { get; set; }
    public ColumnGroupLayout Layout { get; set; }
    public IList<Column> Columns { get; set; }
}

public class Column
{
    public int ColumnID { get; set; }
    public IList<IContentBlock> ContentBlocks { get; set; }
}

public interface IContentBlock
{
    string GetSummary();
}

public class TextBlock : IContentBlock
{
    public string GetSummary() { return "I am a piece of text."; }
}

public class NewsBlock : IContentBlock
{
    public string GetSummary() { return "I am a news item."; }
}

    Read the article

  • How to associate Wi-Fi beacon info with a virtual "location"?

    - by leander
    We have a piece of embedded hardware that will sense 802.11 beacons, and we're using this to make a map of currently visible bssid -> signalStrength. Given this map, we would like to make a determination: Is this likely to be a location I have been to before? If so, what is its ID? If not, I should remember this location: generate a new ID. Now what should I store (and how should I store it) to make future determinations easier? This is for an augmented-reality app/game. We will be using it to associate particular characters and events with "locations". The device does not have internet or cellular access, so using a geolocation service is out of consideration for the time being. (We don't really need to know where we are in reality, just be able to determine if we return there.) It isn't crucial that it be extremely accurate, but it would be nice if it was tolerant to signal strength changes or the occasional missing beacon. It should be usable in relatively low numbers of access points (e.g. rural house with one wireless router) or many (wandering around a dense metropolis). In the case of a city, it should change location every few minutes of walking (continuously-overlapping signals make this a bit more tricky in naive code). A reasonable number of false positives (match a location when we aren't actually there) is acceptable. The wrong character/event showing up just adds a bit of variety. False negatives (no location match) are a bit more troublesome: this will tend to add a better-matching new location to the saved locations, masking the old one. While we will have additional logic to ensure locations that the device hasn't seen in a while will "orphan" any associated characters or events (if e.g. you move to a different country), we'd prefer not to mask and eventually orphan locations you do visit regularly. Some technical complications: signalStrength is returned as 1-4; presumably it's related to dB, but we are not sure exactly how; in my experiments it tends to stick to either 1 or 4, but occasionally we see numbers in between. (Tech docs on the hardware are sparse.) The device completes a scan of one-quarter of the channel space every second; so it takes about 4-5 seconds to get a complete picture of what's around. The list isn't always complete. (We are making strides to fix this using some slight sampling period randomization, as recommended by the library docs. We're also investigating ways to increase the number of scans without killing our performance; the hardware/libs are poorly behaved when it comes to saturating the bus.) We have only kilobytes to store our history. We have a "working" impl now, but it is relatively naive, and flaky in the face of real-world Wi-Fi behavior. Rough pseudocode:

// recordLocation() -- only store strength 4 locations
m_savedLocations[g_nextId++] = filterForStrengthGE( m_currentAPs, 4 );

// determineLocation()
bestPoints = -inf;
foreach ( oldLoc in m_savedLocations ) {
    points = 0.0;
    foreach ( ap in m_currentAPs ) {
        if ( oldLoc.has( ap ) ) {
            switch ( ap.signalStrength ) {
                case 3: points += 1.0; break;
                case 4: points += 2.0; break;
            }
        }
    }
    points /= oldLoc.numAPs;
    if ( points > bestPoints ) {
        bestLoc = oldLoc;
        bestPoints = points;
    }
}
if ( bestLoc && bestPoints > 1.0 ) {
    if ( bestPoints >= (2.0 - epsilon) ) {
        // near-perfect match.
        // update location with any new high-strength APs that have appeared
        bestLoc.addAPs( filterForStrengthGE( m_currentAPs, 4 ) );
    }
    return bestLoc;
} else {
    return NO_MATCH;
}

We record a location currently only when we have NO_MATCH and the app determines it's time for a new event. (The "near-perfect match" code above would appear to make it harder to match in the future... It's mostly to keep new powerful APs from being associated with other locations, but you'd think we'd need something to counter this if e.g. an AP doesn't show up in the next 10 times I match a location.) I have a feeling that we're missing some things from set theory or graph theory that would assist in grouping/classification of this data, and perhaps providing a better "confidence level" on matches, and better robustness against missed beacons, signal strength changes, and the like. Also it would be useful to have a good method for mutating locations over time. Any useful resources out there for this sort of thing? Simple and/or robust approaches we're missing?
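As a concrete illustration of one direction an answer might take, here is a small, hypothetical C sketch (all type and function names are made up, not from the poster's code) of a symmetric similarity score: it rewards strong APs seen in both fingerprints but also penalizes APs that only one side sees, which tends to hold up better in dense, overlapping environments than a one-sided average.

/* Hypothetical similarity scoring between a stored location fingerprint and
 * the currently visible APs. Returns a value in [0,1]; 1 means identical. */
#include <string.h>

#define MAX_APS 32

typedef struct {
    unsigned char bssid[6];
    int strength;               /* 1..4 as reported by the hardware */
} ApReading;

typedef struct {
    ApReading aps[MAX_APS];
    int count;
} Fingerprint;

static int weight(int strength)             /* de-emphasize weak, flaky APs */
{
    return strength >= 4 ? 3 : (strength == 3 ? 2 : 1);
}

static const ApReading *find_ap(const Fingerprint *fp, const unsigned char *bssid)
{
    int i;
    for (i = 0; i < fp->count; i++)
        if (memcmp(fp->aps[i].bssid, bssid, 6) == 0)
            return &fp->aps[i];
    return 0;
}

/* Weighted Jaccard-style score: shared weight / total weight on either side. */
double similarity(const Fingerprint *a, const Fingerprint *b)
{
    int shared = 0, total = 0, i;
    for (i = 0; i < a->count; i++) {
        const ApReading *m = find_ap(b, a->aps[i].bssid);
        int wa = weight(a->aps[i].strength);
        if (m) {
            int wb = weight(m->strength);
            shared += wa < wb ? wa : wb;    /* overlap counts the weaker view */
            total  += wa > wb ? wa : wb;
        } else {
            total += wa;                    /* seen only in a: pure penalty */
        }
    }
    for (i = 0; i < b->count; i++)
        if (!find_ap(a, b->aps[i].bssid))
            total += weight(b->aps[i].strength);
    return total ? (double)shared / (double)total : 0.0;
}

A match would then be declared when the best score clears a tuned threshold (say 0.5), which degrades more gracefully when a few beacons are missed or a signal level wobbles than an exact-strength comparison does.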

    Read the article

  • WPF Some styles not applied on DataTemplate controls

    - by Martin
    Hi, I am trying to learn something about WPF and I am quite amazed by its flexibility. However, I have hit a problem with Styles and DataTemplates, which is little bit confusing. I have defined below test page to play around a bit with styles etc and found that the Styles defined in <Page.Resources> for Border and TextBlock are not applied in the DataTemplate, but Style for ProgressBar defined in exactly the same way is applied. Source code (I just use Kaxaml and XamlPadX to view the result) <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <Page.Resources> <Style TargetType="{x:Type Border}"> <Setter Property="Background" Value="SkyBlue"/> <Setter Property="BorderBrush" Value="Black"/> <Setter Property="BorderThickness" Value="2"/> <Setter Property="CornerRadius" Value="5"/> </Style> <Style TargetType="{x:Type TextBlock}"> <Setter Property="FontWeight" Value="Bold"/> </Style> <Style TargetType="{x:Type ProgressBar}"> <Setter Property="Height" Value="10"/> <Setter Property="Width" Value="100"/> <Setter Property="Foreground" Value="Red"/> </Style> <XmlDataProvider x:Key="TestData" XPath="/TestData"> <x:XData> <TestData xmlns=""> <TestElement> <Name>Item 1</Name> <Value>25</Value> </TestElement> <TestElement> <Name>Item 2</Name> <Value>50</Value> </TestElement> </TestData> </x:XData> </XmlDataProvider> <HierarchicalDataTemplate DataType="TestElement"> <Border Height="45" Width="120" Margin="5,5"> <StackPanel Orientation="Vertical" Margin="5,5" VerticalAlignment="Center" HorizontalAlignment="Center"> <TextBlock HorizontalAlignment="Center" Text="{Binding XPath=Name}"/> <ProgressBar Value="{Binding XPath=Value}"/> </StackPanel> </Border> </HierarchicalDataTemplate> </Page.Resources> <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" VerticalAlignment="Center"> <StackPanel Orientation="Vertical" VerticalAlignment="Center"> <Border Height="45" Width="120" Margin="5,5"> <StackPanel Orientation="Vertical" VerticalAlignment="Center" HorizontalAlignment="Center"> <TextBlock HorizontalAlignment="Center" Text="Item 1"/> <ProgressBar Value="25"/> </StackPanel> </Border> <Border Height="45" Width="120" Margin="5,5"> <StackPanel Orientation="Vertical" VerticalAlignment="Center" HorizontalAlignment="Center"> <TextBlock HorizontalAlignment="Center" Text="Item 2"/> <ProgressBar Value="50"/> </StackPanel> </Border> </StackPanel> <ListBox Margin="10,10" Width="140" ItemsSource="{Binding Source={StaticResource TestData}, XPath=TestElement}"/> </StackPanel> </Page> I suspect it has something to do with default styles etc, but more puzzling is why some Styles are applied and some not. I cannot find an easy explanation for above anywhere and thus would like to ask if someone would be kind enough to explain this behaviour in lamens' terms with possible links to technical description, i.e. to MSDN or so. Thanks in advance for you support!

    Read the article

  • Unable to center text in IE but works in firefox

    - by greenpool
    Can somebody point out where I'm going wrong with the following code. Text inside td elements need to be centered except for Summary and Experience. This only appears to work in Firefox/chrome. In IE8 all td text are displayed as left-justified. No matter what I try it doesn't center it. Any particular reason why this would happen? Thanks. css #viewAll { font-family:"Trebuchet MS", Arial, Helvetica, sans-serif; width:100%; border-collapse:collapse; margin-left:10px; table-layout: fixed; } #viewAll td, #viewAll th { font-size:1.1em; border:1px solid #98bf21; word-wrap:break-word; text-align:center; overflow:hidden; } #viewAll tbody td{ padding:2px; } #viewAll th { font-size:1.1em; padding-top:5px; padding-bottom:4px; background-color:#A7C942; color:#ffffff; } table <?php echo '<table id="viewAll" class="tablesorter">'; echo '<thead>'; echo '<tr align="center">'; echo '<th style="width:70px;">Product</th>'; echo '<th style="width:105px;">Prob</th>'; echo '<th style="width:105px;">I</th>'; echo '<th style="width:60px;">Status</th>'; echo '<th style="width:120px;">Experience</th>'; echo '<th style="width:200px;">Technical Summary</th>'; echo '<th style="width:80px;">Record Created</th>'; echo '<th style="width:80px;">Record Updated</th>'; echo '<th style="width:50px;">Open</th>'; echo '</tr>'; echo '</thead>'; echo '<tbody>'; while ($data=mysqli_fetch_array($result)){ #limiting the summary text displayed in the table $limited_summary = (strlen($data['summary']) > 300) ? substr(($data['summary']),0,300) . '...' : $data['summary']; $limited_exp = (strlen($data['exp']) > 300) ? substr(($data['exp']),0,300) . '...' : $data['exp']; echo '<tr align="center"> <td style="width:70px; text-align:center;">'.$data['product'].'</td>'; //if value is '-' do not display as link if ($data['prob'] != '-'){ echo '<td style="width:105px;">'.$data['prob'].'</a></td>'; } else{ echo '<td style="width:105px; ">'.$data['prob'].'</td>'; } if ($data['i'] != '-'){ echo '<td style="width:105px; ">'.$data['i'].'</a></td>'; } else{ echo '<td style="width:105px; ">'.$data['i'].'</td>'; } echo'<td style="width:40px; " >'.$data['status'].'</td> <td style="width:120px; text-align:left;">'.$limited_cust_exp.'</td> <td style="width:200px; text-align:left;">'.$limited_summary.'</td> <td style="width:80px; ">'.$data['created'].'</td> <td style="width:80px; ">'.$data['updated'].'</td>'; if (isset($_SESSION['username'])){ echo '<td style="width:50px; "> <form action="displayRecord.php" method="get">'.' <input type="hidden" name="id" value="'. $data['id'].'" style="text-decoration: none" /><input type="submit" value="Open" /></form></td>'; }else{ echo '<td style="width:50px; "> <form action="displayRecord.php" method="get">'.' <input type="hidden" name="id" value="'. $data['id'].'" style="text-decoration: none" /><input type="submit" value="View" /></form></td>'; } echo '</tr>'; }#end of while echo '</tbody>'; echo '</table>'; ?>

    Read the article

  • Problems configuring nameserver in plesk

    - by Saif Bechan
    Hello, I have had trouble setting up a nameserver in Plesk for months now. I have tried every possible scenario but I cannot get this to work. I am really in need of some help, and if you can help, I will really appreciate it. Basically, all I want is to set up a nameserver in Plesk. I have a primary IP, and my host gave me a secondary nameserver I can use. My host is Leaseweb in the Netherlands. I have made some screenshots of what I think are the important parts; maybe you can spot some errors in them. To use the secondary nameserver provided by Leaseweb I had to enable ACL on that account; I did so and made a screenshot of that too. The DNS recursion is set to localnets. These settings have not changed for months, so the DNS should be fully updated everywhere. The check I run is the following: https://www.sidn.nl/over-nl/aanvraag...-server-check/
    Domain name (including .nl): rdshosting.nl
    First nameserver: ns1.rdshosting.nl
    First IP: 62.212.66.33
    Second nameserver: ns7.leaseweb.net
    Second IP: 62.212.76.50
    If I run the Dutch DNS check, it gives me the following errors:

    primary name server "ns1.rdshosting.nl."
    Error: specified name server is not listed as NS record. All public name servers for a domain must also be listed as NS records in the zone of the domain. This domain was specified explicitly as a name server, but not found in the zone description of the primary name server. TE.6a
    rdshosting.nl. 86400 IN SOA ns1.rdspartners.nl. saif2k.hotmail.com. (2010031102 12H 1H 7D 3H)
    Error: the MNAME in SOA says "ns1.rdspartners.nl." is the primary name server. The MNAME field in the SOA record (first parameter) lists a different primary name server from the one specified for this check. RFC1035 section 3.3.13
    rdshosting.nl. 86400 IN NS ns1.rdspartners.nl.
    Warning: hidden name server "ns1.rdspartners.nl." never used for first contact. The zone contains an NS record for a host which is not in the list of specified name servers. Hence, this name server will not be used to initiate contact to the domain. It may be used in sequential lookups, so it may still be useful.

    secondary name server "ns1.rdspartners.nl." [BROKEN] [HIDDEN]
    Failure: name server at 77.232.85.129 cannot be reached: (unknown error) The name server could not be contacted, which may be due to temporary technical problems or global DNS configuration mistakes. The internal error is shown, but not always clear about the cause.

    secondary name server "ns7.leaseweb.net."
    Info: name server looks correctly configured.

    I also have the content of /etc/named.conf:

        // $Id: named.conf,v 1.1.1.1 2001/10/15 07:44:36 kap Exp $
        //
        // Refer to the named(8) man page for details. If you are ever going
        // to setup a primary server, make sure you've understood the hairy
        // details of how DNS is working. Even with simple mistakes, you can
        // break connectivity for affected parties, or cause huge amount of
        // useless Internet traffic.

        options {
            allow-recursion { localnets; };
            directory "/var";
            auth-nxdomain no;
            pid-file "/var/run/named/named.pid";

            // In addition to the "forwarders" clause, you can force your name
            // server to never initiate queries of its own, but always ask its
            // forwarders only, by enabling the following line:
            //
            // forward only;

            // If you've got a DNS server around at your upstream provider, enter
            // its IP address here, and enable the line below. This will make you
            // benefit from its cache, thus reduce overall DNS traffic in the Internet.
            /* forwarders { 127.0.0.1; }; */

            /*
             * If there is a firewall between you and nameservers you want
             * to talk to, you might need to uncomment the query-source
             * directive below. Previous versions of BIND always asked
             * questions using port 53, but BIND 8.1 uses an unprivileged
             * port by default.
             */
            // query-source address * port 53;

            /*
             * If running in a sandbox, you may have to specify a different
             * location for the dumpfile.
             */
            // dump-file "s/named_dump.db";
        };

        // Use with the following in named.conf, adjusting the allow list as needed:
        key "rndc-key" {
            algorithm hmac-md5;
            secret "CeMgS23y0oWE20nyv0x40Q==";
        };

        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

        // Note: the following will be supported in a future release.
        /*
        host { any; } {
            topology { 127.0.0.0/8; };
        };
        */

        // Setting up secondaries is way easier and the rough picture for this
        // is explained below.
        //
        // If you enable a local name server, don't forget to enter 127.0.0.1
        // into your /etc/resolv.conf so this server will be queried first.
        // Also, make sure to enable it in /etc/rc.conf.

        zone "." {
            type hint;
            file "named.root";
        };

        zone "0.0.127.IN-ADDR.ARPA" {
            type master;
            file "localhost.rev";
        };

        // NB: Do not use the IP addresses below, they are faked, and only
        // serve demonstration/documentation purposes!
        //
        // Example secondary config entries. It can be convenient to become
        // a secondary at least for the zone where your own domain is in. Ask
        // your network administrator for the IP address of the responsible
        // primary.
        //
        // Never forget to include the reverse lookup (IN-ADDR.ARPA) zone!
        // (This is the first bytes of the respective IP address, in reverse
        // order, with ".IN-ADDR.ARPA" appended.)
        //
        // Before starting to setup a primary zone, better make sure you fully
        // understand how DNS and BIND works, however. There are sometimes
        // unobvious pitfalls. Setting up a secondary is comparably simpler.
        //
        // NB: Don't blindly enable the examples below. :-) Use actual names
        // and addresses instead.
        //
        // NOTE!!! FreeBSD runs bind in a sandbox (see named_flags in rc.conf).
        // The directory containing the secondary zones must be write accessible
        // to bind. The following sequence is suggested:
        //
        // mkdir /etc/namedb/s
        // chown bind.bind /etc/namedb/s
        // chmod 750 /etc/namedb/s

        zone "rdshosting.nl" {
            type master;
            file "rdshosting.nl";
            allow-transfer { 77.232.85.129; 62.212.76.50; common-allow-transfer; };
        };

        zone "66.212.62.in-addr.arpa" {
            type master;
            file "66.212.62.in-addr.arpa";
            allow-transfer { common-allow-transfer; };
        };

        acl common-allow-transfer {
            62.212.76.50;
        };

    As I mentioned, I made some screenshots of some parts:
    First, the DNS settings in Plesk: http://www.freeimagehosting.net/uploads/2480faed5e.jpg
    Second, the ACL settings in Plesk: http://www.freeimagehosting.net/uploads/777f5e69b0.jpg
    Third, my settings at Leaseweb: http://www.freeimagehosting.net/uploads/de7122b19c.jpg
    And last, the secondary nameserver settings from Leaseweb: http://www.freeimagehosting.net/uploads/fd1da38a8f.jpg
    If anyone has any suggestion at all on this, it will be highly appreciated. Thank you for your time! PS: I am Dutch, so Dutch answers are welcome as well.
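    The first two errors both point at the zone contents: the SOA MNAME and the published NS record still name ns1.rdspartners.nl rather than the servers entered in the check. A quick way to see exactly which SOA and NS records the primary is serving (independent of any resolver cache) is to query it directly. Below is a minimal sketch using Java's built-in JNDI DNS provider; it assumes a JDK that ships com.sun.jndi.dns, and the IP is the primary name server from the check above:

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.NamingEnumeration;
        import javax.naming.directory.Attribute;
        import javax.naming.directory.Attributes;
        import javax.naming.directory.DirContext;
        import javax.naming.directory.InitialDirContext;

        public class ZoneRecordCheck {
            public static void main(String[] args) throws Exception {
                // Ask the primary name server itself, not a caching resolver,
                // so we see the SOA and NS records the zone actually publishes.
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
                env.put(Context.PROVIDER_URL, "dns://62.212.66.33");   // ns1.rdshosting.nl

                DirContext ctx = new InitialDirContext(env);
                Attributes attrs = ctx.getAttributes("rdshosting.nl", new String[] { "SOA", "NS" });

                // For the SIDN check to pass, the SOA MNAME (the first field) and the
                // NS set should list ns1.rdshosting.nl and ns7.leaseweb.net, not
                // ns1.rdspartners.nl as reported above.
                NamingEnumeration<? extends Attribute> e = attrs.getAll();
                while (e.hasMore()) {
                    System.out.println(e.next());
                }
                ctx.close();
            }
        }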

    Read the article

  • ftp connection problem, vsftp server, active mode

    - by Mark Szente
    I have a server that runs vsftpd to handle ftp connections. One of my users have a notebook with Total Commander and WinSCP installed. Both ftp clients fail right after the connection is established to the server and it tries to download the directory listing without any particular error message. The weird thing is: the notebook works perfectly ok with other ftp servers. My ftp server also works well with other clients. In fact, this user also has a pc running on the same LAN as the notebook and the pc works well with the ftp server. We use active ftp connection mode. Passive mode works well but is not an option at this point. I would post more technical details but I don't even know what this problem is related to. Anyway, below is the server side tcpdump for the failed connection attempt. There's no further communication between the client and the server after the last line of log. Thank you very much for any hint! 23:39:24.514852 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: S 1314489715:1314489715(0) win 65535 <mss 1460,nop,wscale 3,nop,nop,sackOK> 23:39:24.514896 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: S 2633658883:2633658883(0) ack 1314489716 win 5840 <mss 1460,nop,nop,sackOK,nop,wscale 2> 23:39:24.520842 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: . ack 1 win 62500 23:39:24.523803 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 1:21(20) ack 1 win 1460 23:39:24.546858 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 1:15(14) ack 21 win 62497 23:39:24.546902 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: . ack 15 win 1460 23:39:24.547247 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 21:55(34) ack 15 win 1460 23:39:24.762806 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: . ack 55 win 62493 23:39:30.415011 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 15:28(13) ack 55 win 62493 23:39:30.454116 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: . ack 28 win 1460 23:39:31.036283 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 55:78(23) ack 28 win 1460 23:39:31.053018 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 28:34(6) ack 78 win 62490 23:39:31.053042 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: . ack 34 win 1460 23:39:31.053268 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 78:97(19) ack 34 win 1460 23:39:31.068969 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 34:40(6) ack 97 win 62488 23:39:31.069148 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 97:112(15) ack 40 win 1460 23:39:31.069179 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 112:119(7) ack 40 win 1460 23:39:31.076981 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: . ack 119 win 62485 23:39:31.077010 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 119:177(58) ack 40 win 1460 23:39:31.114979 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 40:45(5) ack 177 win 62478 23:39:31.115164 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 177:186(9) ack 45 win 1460 23:39:31.180966 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 45:53(8) ack 186 win 62476 23:39:31.181066 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 186:216(30) ack 53 win 1460 23:39:31.213065 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 53:80(27) ack 216 win 62473 23:39:31.213180 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 216:267(51) ack 80 win 1460 23:39:31.251086 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 80:86(6) ack 267 win 62466 23:39:31.251498 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054371220 0,nop,wscale 2> 23:39:31.290979 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: . 
ack 86 win 1460 23:39:34.251489 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054374220 0,nop,wscale 2> 23:39:40.249625 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054380220 0,nop,wscale 2> 23:39:43.695108 IP 195.70.xx.xx.21 > 62.201.xx.xx.1057: P 2280716551:2280716588(37) ack 3838413728 win 5840 23:39:52.248791 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054392220 0,nop,wscale 2> 23:40:16.245159 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054416221 0,nop,wscale 2> 23:40:29.853685 IP 195.70.xx.xx.21 > 62.201.xx.xx.1057: FP 37:51(14) ack 1 win 5840 23:40:31.241951 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 267:304(37) ack 86 win 1460 23:40:31.381708 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: . ack 304 win 62462

    Read the article

  • ftp connection problem, vsftp server, active mode

    - by Mark Szente
    I have a server that runs vsftpd to handle ftp connections. One of my users have a notebook with Total Commander and WinSCP installed. Both ftp clients fail right after the connection is established to the server and it tries to download the directory listing with the following error message: Timeout detected. Could not retrieve directory listing PORT command successful. Consider using PASV. Error listing directory '/'. The weird thing is: the notebook works perfectly ok with other ftp servers. My ftp server also works well with other clients. In fact, this user also has a pc running on the same LAN as the notebook and the pc works well with the ftp server. We use PORT ftp connection mode. Passive mode works well but is not an option at this point. I would post more technical details but I don't even know what this problem is related to. Anyway, below is the server side tcpdump for the failed connection attempt. There's no further communication between the client and the server after the last line of log. Thank you very much for any hint! 23:39:24.514852 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: S 1314489715:1314489715(0) win 65535 <mss 1460,nop,wscale 3,nop,nop,sackOK> 23:39:24.514896 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: S 2633658883:2633658883(0) ack 1314489716 win 5840 <mss 1460,nop,nop,sackOK,nop,wscale 2> 23:39:24.520842 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: . ack 1 win 62500 23:39:24.523803 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 1:21(20) ack 1 win 1460 23:39:24.546858 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 1:15(14) ack 21 win 62497 23:39:24.546902 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: . ack 15 win 1460 23:39:24.547247 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 21:55(34) ack 15 win 1460 23:39:24.762806 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: . ack 55 win 62493 23:39:30.415011 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 15:28(13) ack 55 win 62493 23:39:30.454116 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: . ack 28 win 1460 23:39:31.036283 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 55:78(23) ack 28 win 1460 23:39:31.053018 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 28:34(6) ack 78 win 62490 23:39:31.053042 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: . ack 34 win 1460 23:39:31.053268 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 78:97(19) ack 34 win 1460 23:39:31.068969 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 34:40(6) ack 97 win 62488 23:39:31.069148 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 97:112(15) ack 40 win 1460 23:39:31.069179 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 112:119(7) ack 40 win 1460 23:39:31.076981 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: . ack 119 win 62485 23:39:31.077010 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 119:177(58) ack 40 win 1460 23:39:31.114979 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 40:45(5) ack 177 win 62478 23:39:31.115164 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 177:186(9) ack 45 win 1460 23:39:31.180966 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 45:53(8) ack 186 win 62476 23:39:31.181066 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 186:216(30) ack 53 win 1460 23:39:31.213065 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 53:80(27) ack 216 win 62473 23:39:31.213180 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 216:267(51) ack 80 win 1460 23:39:31.251086 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: P 80:86(6) ack 267 win 62466 23:39:31.251498 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054371220 0,nop,wscale 2> 23:39:31.290979 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: . 
ack 86 win 1460 23:39:34.251489 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054374220 0,nop,wscale 2> 23:39:40.249625 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054380220 0,nop,wscale 2> 23:39:43.695108 IP 195.70.xx.xx.21 > 62.201.xx.xx.1057: P 2280716551:2280716588(37) ack 3838413728 win 5840 23:39:52.248791 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054392220 0,nop,wscale 2> 23:40:16.245159 IP 195.70.xx.xx.20 > 62.201.xx.xx.5001: S 2640780713:2640780713(0) win 5840 <mss 1460,sackOK,timestamp 2054416221 0,nop,wscale 2> 23:40:29.853685 IP 195.70.xx.xx.21 > 62.201.xx.xx.1057: FP 37:51(14) ack 1 win 5840 23:40:31.241951 IP 195.70.xx.xx.21 > 62.201.xx.xx.2241: P 267:304(37) ack 86 win 1460 23:40:31.381708 IP 62.201.xx.xx.2241 > 195.70.xx.xx.21: . ack 304 win 62462
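    The active-versus-passive difference described above is also easy to reproduce from code, which can help when testing firewall or NAT changes on the client side. Below is a minimal sketch using Apache Commons Net (host and credentials are placeholders): it requests the same directory listing once in active (PORT) mode, where the server opens the data connection back to the client (the repeated SYNs from 195.70.xx.xx.20 to 62.201.xx.xx.5001 in the capture above), and once in passive (PASV) mode, where the client opens the data connection, which matches the observation that passive mode works:

        import java.io.IOException;
        import org.apache.commons.net.ftp.FTPClient;
        import org.apache.commons.net.ftp.FTPFile;
        import org.apache.commons.net.ftp.FTPReply;

        public class ListingTest {
            public static void main(String[] args) throws IOException {
                FTPClient ftp = new FTPClient();
                ftp.connect("195.70.xx.xx");                  // control connection, port 21
                if (!FTPReply.isPositiveCompletion(ftp.getReplyCode())) {
                    ftp.disconnect();
                    throw new IOException("FTP server refused connection");
                }
                ftp.login("user", "password");                // placeholder credentials

                // Active (PORT) mode: the server connects from port 20 back to a port
                // the client advertises. If anything on the client side drops that
                // inbound SYN, the listing below blocks instead of returning.
                ftp.enterLocalActiveMode();
                FTPFile[] activeListing = ftp.listFiles();
                System.out.println("active mode entries: " + activeListing.length);

                // Passive (PASV) mode: the client opens the data connection itself.
                ftp.enterLocalPassiveMode();
                FTPFile[] passiveListing = ftp.listFiles();
                System.out.println("passive mode entries: " + passiveListing.length);

                ftp.logout();
                ftp.disconnect();
            }
        }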

    Read the article

  • Why is Java EE 6 better than Spring ?

    - by arungupta
    Java EE 6 was released over 2 years ago and now there are 14 compliant application servers. In all my talks around the world, a question that is frequently asked is Why should I use Java EE 6 instead of Spring ? There are already several blogs covering that topic:
    Java EE wins over Spring by Bill Burke
    Why will I use Java EE instead of Spring in new Enterprise Java projects in 2012 ? by Kai Waehner (more discussion on TSS)
    Spring to Java EE migration (Part 1 and 2, 3 and 4 coming as well) by David Heffelfinger
    Spring to Java EE - A Migration Experience by Lincoln Baxter
    Migrating Spring to Java EE 6 by Bert Ertman and Paul Bakker at NLJUG
    Moving from Spring to Java EE 6 - The Age of Frameworks is Over at TSS
    Java EE vs Spring Shootout by Rohit Kelapure and Reza Rehman at JavaOne 2011
    Java EE 6 and the Ewoks by Murat Yener
    Definite excuse to avoid Spring forever - Bert Ertman and Arun Gupta
    I will try to share my perspective in this blog. First of all, I'd like to start with a note: Thank you Spring framework for filling the interim gap and providing functionality that is now included in the mainstream Java EE 6 application servers. The Java EE platform has evolved over the years learning from frameworks like Spring and provides all the functionality to build an enterprise application. Thank you very much Spring framework! While Spring was revolutionary in its time and is still very popular and quite mainstream in the same way Struts was circa 2003, it really is last generation's framework - some people are even calling it legacy. However my theory is "code is king". So my approach is to build/take a simple Hello World CRUD application in Java EE 6 and Spring and compare the deployable artifacts. I started looking at the official tutorial Developing a Spring Framework MVC Application Step-by-Step but it is using the older version 2.5. I wasn't able to find any updated version in the current 3.1 release. Next, I downloaded Spring Tool Suite and thought that would provide some template samples to get started. At least a quick search did not show any handy tutorials - either video or text-based. So I searched and found a link to their SVN repository at src.springframework.org/svn/spring-samples/. I tried the "mvc-basic" sample and the generated WAR file was 4.43 MB. While it was named a "basic" sample, it seemed to come with 19 different libraries bundled - but it was the best I could find: ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/joda-time-jsptags-1.0.2.jar./WEB-INF/lib/jstl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar And it is not even using any database! The app deployed fine on GlassFish 3.1.2 but the "@Controller Example" link did not work as it was missing the context root. With a bit of tweaking I could deploy the application and assume that the account got created because no error was displayed in the browser or server log.
Next I generated the WAR for "mvc-ajax" and the 5.1 MB WAR had 20 JARs (1 removed, 2 added): ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jackson-core-asl-1.6.4.jar./WEB-INF/lib/jackson-mapper-asl-1.6.4.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/jstl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar 2 more JARs for just doing Ajax. Anyway, deploying this application gave the following error: Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.map.SerializationConfig.<init>(Lorg/codehaus/jackson/map/ClassIntrospector;Lorg/codehaus/jackson/map/AnnotationIntrospector;Lorg/codehaus/jackson/map/introspect/VisibilityChecker;Lorg/codehaus/jackson/map/jsontype/SubtypeResolver;)V    at org.springframework.samples.mvc.ajax.json.ConversionServiceAwareObjectMapper.<init>(ConversionServiceAwareObjectMapper.java:20)    at org.springframework.samples.mvc.ajax.json.JacksonConversionServiceConfigurer.postProcessAfterInitialization(JacksonConversionServiceConfigurer.java:40)    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:407) Seems like some incorrect repos in the "pom.xml". Next one is "mvc-showcase" and the 6.49 MB WAR now has 28 JARs as shown below: ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/aspectjrt-1.6.10.jar./WEB-INF/lib/commons-fileupload-1.2.2.jar./WEB-INF/lib/commons-io-2.0.1.jar./WEB-INF/lib/el-api-2.2.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jackson-core-asl-1.8.1.jar./WEB-INF/lib/jackson-mapper-asl-1.8.1.jar./WEB-INF/lib/javax.inject-1.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/jdom-1.0.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/jstl-api-1.2.jar./WEB-INF/lib/jstl-impl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/rome-1.0.0.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.1.0.RELEASE.jar./WEB-INF/lib/spring-asm-3.1.0.RELEASE.jar./WEB-INF/lib/spring-beans-3.1.0.RELEASE.jar./WEB-INF/lib/spring-context-3.1.0.RELEASE.jar./WEB-INF/lib/spring-context-support-3.1.0.RELEASE.jar./WEB-INF/lib/spring-core-3.1.0.RELEASE.jar./WEB-INF/lib/spring-expression-3.1.0.RELEASE.jar./WEB-INF/lib/spring-web-3.1.0.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.1.0.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar The app at least deployed and showed results this time. But still no database! 
Next I tried building "jpetstore" and got the error: [ERROR] Failed to execute goal on project org.springframework.samples.jpetstore:Could not resolve dependencies for project org.springframework.samples:org.springframework.samples.jpetstore:war:1.0.0-SNAPSHOT: Failed to collect dependencies for [commons-fileupload:commons-fileupload:jar:1.2.1 (compile), org.apache.struts:com.springsource.org.apache.struts:jar:1.2.9 (compile), javax.xml.rpc:com.springsource.javax.xml.rpc:jar:1.1.0 (compile), org.apache.commons:com.springsource.org.apache.commons.dbcp:jar:1.2.2.osgi (compile), commons-io:commons-io:jar:1.3.2 (compile), hsqldb:hsqldb:jar:1.8.0.7 (compile), org.apache.tiles:tiles-core:jar:2.2.0 (compile), org.apache.tiles:tiles-jsp:jar:2.2.0 (compile), org.tuckey:urlrewritefilter:jar:3.1.0 (compile), org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-orm:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-context-support:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework.webflow:spring-js:jar:2.0.7.RELEASE (compile), org.apache.ibatis:com.springsource.com.ibatis:jar:2.3.4.726 (runtime), com.caucho:com.springsource.com.caucho:jar:3.2.1 (compile), org.apache.axis:com.springsource.org.apache.axis:jar:1.4.0 (compile), javax.wsdl:com.springsource.javax.wsdl:jar:1.6.1 (compile), javax.servlet:jstl:jar:1.2 (runtime), org.aspectj:aspectjweaver:jar:1.6.5 (compile), javax.servlet:servlet-api:jar:2.5 (provided), javax.servlet.jsp:jsp-api:jar:2.1 (provided), junit:junit:jar:4.6 (test)]: Failed to read artifact descriptor for org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT: Could not transfer artifact org.springframework:spring-webmvc:pom:3.0.0.BUILD-SNAPSHOT from/to JBoss repository (http://repository.jboss.com/maven2): Access denied to: http://repository.jboss.com/maven2/org/springframework/spring-webmvc/3.0.0.BUILD-SNAPSHOT/spring-webmvc-3.0.0.BUILD-SNAPSHOT.pom It appears the sample is broken - maybe I was pulling from the wrong repository - would be great if someone were to point me at a good target to use here. With a 50% hit on samples in this repository, I started searching through numerous blogs, most of which have either outdated information (using XML-heavy Spring 2.5), some piece of configuration (which is a typical "feature" of Spring) is missing, or too much complexity in the sample. I finally found this blog that worked like a charm. This blog creates a trivial Spring MVC 3 application using Hibernate and MySQL. This application performs CRUD operations on a single table in a database using typical Spring technologies.  I downloaded the sample code from the blog, deployed it on GlassFish 3.1.2 and could CRUD the "person" entity. The source code for this application can be downloaded here. More details on the application statistics below. And then I built a similar CRUD application in Java EE 6 using NetBeans wizards in a couple of minutes. The source code for the application can be downloaded here and the WAR here. The Spring Source Tool Suite may also offer similar wizard-driven capabilities but this blog focus primarily on comparing the runtimes. The lack of STS tutorials was slightly disappointing as well. NetBeans however has tons of text-based and video tutorials and tons of material even by the community. One more bit on the download size of tools bundle ... 
    NetBeans 7.1.1 "All" is 211 MB (which includes GlassFish and Tomcat)
    Spring Tool Suite 2.9.0 is 347 MB (~ 65% bigger)
    This blog is not about the tooling comparison, so back to the Java EE 6 version of the application ....
    In order to run the Java EE version on GlassFish, copy the MySQL Connector/J to the glassfish3/glassfish/domains/domain1/lib/ext directory and create a JDBC connection pool and JDBC resource as:
        ./bin/asadmin create-jdbc-connection-pool --datasourceclassname \
            com.mysql.jdbc.jdbc2.optional.MysqlDataSource --restype \
            javax.sql.DataSource --property \
            portNumber=3306:user=mysql:password=mysql:databaseName=mydatabase \
            myConnectionPool
        ./bin/asadmin create-jdbc-resource --connectionpoolid myConnectionPool jdbc/myDataSource
    I generated WARs for the two projects and the table below highlights some differences between them:
                                    Java EE 6                   Spring
        WAR file size               0.021030 MB                 10.87 MB (~516x)
        Number of files             20                          53 (> 2.5x)
        Bundled libraries           0                           36
        Total size of libraries     0                           12.1 MB
        XML files                   3                           5
        LoC in XML files            50 (11 + 15 + 24)           129 (27 + 46 + 16 + 11 + 19) (~ 2.5x)
        Total .properties files     1 (Bundle.properties)       2 (spring.properties, log4j.properties)
        Cold deploy                 5,339 ms                    11,724 ms
        Second deploy               481 ms                      6,261 ms
        Third deploy                528 ms                      5,484 ms
        Fourth deploy               484 ms                      5,576 ms
        Runtime memory              ~73 MB                      ~101 MB
    Some points worth highlighting from the table ...
    516x WAR file, 10x deployment time - With 12.1 MB of libraries (for a very basic application) bundled in your application, the WAR file size and the deployment time will naturally go higher. The WAR file for the Spring-based application is 516x bigger and the deployment time is double during the first deployment and ~ 10x during subsequent deployments. The Java EE 6 application is fully portable and will run on any Java EE 6 compliant application server.
    36 libraries in the WAR - There are 14 Java EE 6 compliant application servers today. Each of those servers provides all the functionality like transactions, dependency injection, security, persistence, etc. typically required of an enterprise or web application. There is no need to bundle 36 libraries worth 12.1 MB for a trivial CRUD application. These 14 compliant application servers provide all the functionality baked in. Now you can also deploy these libraries in the container but then you don't get the "portability" offered by Spring in that case. Does your typical Spring deployment actually do that ?
    3x LoC in XML - The number of XML files is about 1.6x and the LoC is ~ 2.5x. So much XML seems circa 2003 when the Java language had no annotations. The XML files can be further reduced, e.g. faces-config.xml can be replaced without providing i18n, but I just want to compare stock applications.
    Memory usage - Both the applications were deployed on a default GlassFish 3.1.2 installation and any additional memory consumed as part of deployment/access was attributed to the application. This is by no means scientific but at least provides an initial ballpark. This area definitely needs more investigation.
    Another table that compares typical Java EE 6 compliant application servers and the custom-stack created for a Spring application ...
                                    Java EE 6                                       Spring
        Web Container               Included                                        53 MB (tcServer 2.6.3 Developer Edition)
        Security                    Included                                        12 MB (Spring Security 3.1.0)
        Persistence                 Included                                        6.3 MB (Hibernate 4.1.0, required)
        Dependency Injection        Included                                        5.3 MB (Framework)
        Web Services                Included                                        796 KB (Spring WS 2.0.4)
        Messaging                   Included                                        3.4 MB (RabbitMQ Server 2.7.1), 936 KB (Java client)
        OSGi                        Included                                        1.3 MB (Spring OSGi 1.2.1)
        Total                       GlassFish and WebLogic (starting at 33 MB)      83.3 MB
    There are differentiating factors on both the stacks. But most of the functionality like security, persistence, and dependency injection is baked into a Java EE 6 compliant application server, but needs to be individually managed and patched for a Spring application. This very quickly leads to a "stack explosion". The Java EE 6 servers are tested extensively on a variety of platforms in different combinations whereas a Spring application developer is responsible for testing with different JDKs, Operating Systems, Versions, Patches, etc. Oracle has both the leading OSS lightweight server with GlassFish and the leading enterprise Java server with WebLogic Server, both Java EE 6 and both with lightweight deployment options. The Web Container offered as part of a Java EE 6 application server not only deploys your enterprise Java applications but also provides operational management, diagnostics, and mission-critical capabilities required by your applications. The Java EE 6 platform also introduced the Web Profile which is a subset of the specifications from the entire platform. It is targeted at developers of modern web applications offering a reasonably complete stack, composed of standard APIs, and is capable out-of-the-box of addressing the needs of a large class of Web applications. As your applications grow, the stack can grow to the full Java EE 6 platform. The GlassFish Server Web Profile starting at 33MB (smaller than just the non-standard tcServer) provides most of the functionality typically required by a web application. WebLogic provides battle-tested functionality for a high throughput, low latency, and enterprise grade web application. No individual managing or patching, all tested and commercially supported for you! Note that VMWare does have a server, tcServer, but it is non-standard and not even certified to the level of the standard Web Profile most customers expect these days. Customers who choose this risk proprietary lock-in, since VMWare does not seem to want to formally certify with either the Java EE 6 Enterprise Platform or the Java EE 6 Web Profile; but of course it would be great if they were to join the community and help their customers reduce the risk of deploying on VMWare software.
    Some more points to help you decide between Java EE 6 and Spring ...
    Freedom to choose container - There are 14 Java EE 6 compliant application servers today, with a variety of open source and commercial offerings. A Java EE 6 application can be deployed on any of those containers. So if you deployed your application on GlassFish today and would like to scale up with your demands then you can deploy the same application to WebLogic. And because of the portability of a Java EE 6 application, you can even take it to a different vendor altogether. Spring requires a runtime which could be any of these app servers as well. But why use Spring when all the required functionality is already baked into the application server itself ? Spring also has a different definition of portability where they claim to bundle all the libraries in the WAR file and move to any application server. But we saw earlier how bloated that archive could be. The equivalent features in Spring runtime offerings (mainly tcServer) are not all open source, not as mature, and often require manual assembly.
    Vendor choice - The Java EE 6 platform is created using the Java Community Process where all the big players like Oracle, IBM, RedHat, and Apache are contributing to make the platform successful. Each application server provides the basic Java EE 6 platform compliance and has its own competitive offerings. This allows you to choose an application server for deploying your Java EE 6 applications. If you are not happy with the support or features of one vendor then you can move your application to a different vendor because of the portability promise offered by the platform. Spring is a set of products from a single company, one price book, one support organization, one sustaining organization, one sales organization, etc. If any of those cause a customer headache, where do you go ? Java EE, backed by multiple vendors, is a safer bet for those that are risk averse.
    Production support - With Spring, typically you need to get support from two vendors - VMWare and the container provider. With Java EE 6, all of this is typically provided by one vendor. For example, Oracle offers commercial support for systems, operating systems, JDK, application server, and applications on top of them. VMWare certainly offers complete production support but do you really want to put all your eggs in one basket ? Do you really use tcServer ? ;-)
    Maintainability - With Spring, you are likely building your own distribution with multiple JAR files, integrating, patching, versioning, etc. of all those components. Spring's claim is that multiple JAR files allow you to go à la carte and pick the latest versions of different components. But who is responsible for testing whether all these versions work together ? Yep, you got it, it's YOU! If something does not work, who patches and maintains the JARs ? Of course, you! Commercial support for such a configuration ? On your own! The Java EE application servers manage all of this for you and provide a well-tested and commercially supported bundle.
    While it is always good to realize that there is something new and improved that updates and replaces older frameworks like Spring, the good news is not only does a Java EE 6 container offer what is described here, most also will let you deploy and run your Spring applications on them while you go through an upgrade to a more modern architecture. End result, you get the best of both worlds - keeping your legacy investment but moving to a more agile, lightweight world of Java EE 6.
    A message to the Spring lovers ... The complexity in J2EE 1.2, 1.3, and 1.4 led to the genesis of Spring but that was in 2004. This is 2012 and the name has changed to "Java EE 6" :-) There are tons of improvements in the Java EE platform to make it easy-to-use and powerful. Some examples:
    Adding @Stateless on a POJO makes it an EJB
    EJBs can be packaged in a WAR with no special packaging or deployment descriptors
    "web.xml" and "faces-config.xml" are optional in most of the common cases
    Typesafe dependency injection is now part of the Java EE platform
    Adding @Path to a POJO allows you to publish it as a RESTful resource
    EJBs can be used as backing beans for Facelets-driven JSF pages providing full MVC
    Java EE 6 WARs are known to be kilobytes in size and deployed in milliseconds
    Tons of other simplifications in the platform and application servers
    So if you moved away from J2EE to Spring many years ago and have not looked at Java EE 6 (which has been out since Dec 2009) then you should definitely try it out.
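    To make the first few of those bullets concrete, here is a minimal, hypothetical sketch (the class and field names are illustrative and not taken from the sample projects above): a plain JPA entity plus a POJO that, with two annotations, becomes both a transactional EJB and a RESTful resource, with the container injecting the EntityManager. It assumes nothing beyond a Java EE 6 compliant application server:

        // Person.java - a plain JPA entity (illustrative)
        import java.io.Serializable;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.xml.bind.annotation.XmlRootElement;

        @Entity
        @XmlRootElement
        public class Person implements Serializable {
            @Id @GeneratedValue
            private Long id;
            private String name;

            public Long getId() { return id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        // PersonResource.java - @Stateless makes this POJO an EJB with
        // container-managed transactions, @Path publishes it as a RESTful
        // resource, and the EntityManager is injected by the container.
        import java.util.List;
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;

        @Stateless
        @Path("persons")
        public class PersonResource {

            @PersistenceContext
            private EntityManager em;

            @GET
            @Produces("application/xml")
            public List<Person> list() {
                return em.createQuery("SELECT p FROM Person p", Person.class).getResultList();
            }
        }

    Packaged in a WAR, the only configuration these two classes need is a persistence.xml pointing at a data source (for example the jdbc/myDataSource resource created with asadmin earlier) and a trivial javax.ws.rs.core.Application subclass annotated with @ApplicationPath to activate JAX-RS; no additional libraries go into WEB-INF/lib.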
    Just be at least aware of what other alternatives are available instead of restricting yourself to one stack. Here are some workshops and screencasts worth trying:
    screencast #37 shows how to build an end-to-end application using NetBeans
    screencast #36 builds the same application using Eclipse
    javaee-lab-feb2012.pdf is a 3-4 hour self-paced hands-on workshop that guides you to build a comprehensive Java EE 6 application using NetBeans
    Each city generally has a "spring cleanup" program every year. It allows you to clean up the mess from your house. For your software projects, you don't need to wait for an annual event, just get started and reduce the technical debt now! Move away from your legacy Spring-based applications to a lighter and more modern approach of building enterprise Java applications using Java EE 6. Watch this beautiful presentation that explains how to migrate from Spring -> Java EE 6:
    List of files in the Java EE 6 project:
        ./index.xhtml
        ./META-INF
        ./person
        ./person/Create.xhtml
        ./person/Edit.xhtml
        ./person/List.xhtml
        ./person/View.xhtml
        ./resources
        ./resources/css
        ./resources/css/jsfcrud.css
        ./template.xhtml
        ./WEB-INF
        ./WEB-INF/classes
        ./WEB-INF/classes/Bundle.properties
        ./WEB-INF/classes/META-INF
        ./WEB-INF/classes/META-INF/persistence.xml
        ./WEB-INF/classes/org
        ./WEB-INF/classes/org/javaee
        ./WEB-INF/classes/org/javaee/javaeemysql
        ./WEB-INF/classes/org/javaee/javaeemysql/AbstractFacade.class
        ./WEB-INF/classes/org/javaee/javaeemysql/Person.class
        ./WEB-INF/classes/org/javaee/javaeemysql/Person_.class
        ./WEB-INF/classes/org/javaee/javaeemysql/PersonController$1.class
        ./WEB-INF/classes/org/javaee/javaeemysql/PersonController$PersonControllerConverter.class
        ./WEB-INF/classes/org/javaee/javaeemysql/PersonController.class
        ./WEB-INF/classes/org/javaee/javaeemysql/PersonFacade.class
        ./WEB-INF/classes/org/javaee/javaeemysql/util
        ./WEB-INF/classes/org/javaee/javaeemysql/util/JsfUtil.class
        ./WEB-INF/classes/org/javaee/javaeemysql/util/PaginationHelper.class
        ./WEB-INF/faces-config.xml
        ./WEB-INF/web.xml
    List of files in the Spring 3.x project:
        ./META-INF
        ./META-INF/MANIFEST.MF
        ./WEB-INF
        ./WEB-INF/applicationContext.xml
        ./WEB-INF/classes
        ./WEB-INF/classes/log4j.properties
        ./WEB-INF/classes/org
        ./WEB-INF/classes/org/krams
        ./WEB-INF/classes/org/krams/tutorial
        ./WEB-INF/classes/org/krams/tutorial/controller
        ./WEB-INF/classes/org/krams/tutorial/controller/MainController.class
        ./WEB-INF/classes/org/krams/tutorial/domain
        ./WEB-INF/classes/org/krams/tutorial/domain/Person.class
        ./WEB-INF/classes/org/krams/tutorial/service
        ./WEB-INF/classes/org/krams/tutorial/service/PersonService.class
        ./WEB-INF/hibernate-context.xml
        ./WEB-INF/hibernate.cfg.xml
        ./WEB-INF/jsp
        ./WEB-INF/jsp/addedpage.jsp
        ./WEB-INF/jsp/addpage.jsp
        ./WEB-INF/jsp/deletedpage.jsp
        ./WEB-INF/jsp/editedpage.jsp
        ./WEB-INF/jsp/editpage.jsp
        ./WEB-INF/jsp/personspage.jsp
        ./WEB-INF/lib
        ./WEB-INF/lib/antlr-2.7.6.jar
        ./WEB-INF/lib/aopalliance-1.0.jar
        ./WEB-INF/lib/c3p0-0.9.1.2.jar
        ./WEB-INF/lib/cglib-nodep-2.2.jar
        ./WEB-INF/lib/commons-beanutils-1.8.3.jar
        ./WEB-INF/lib/commons-collections-3.2.1.jar
        ./WEB-INF/lib/commons-digester-2.1.jar
        ./WEB-INF/lib/commons-logging-1.1.1.jar
        ./WEB-INF/lib/dom4j-1.6.1.jar
        ./WEB-INF/lib/ejb3-persistence-1.0.2.GA.jar
        ./WEB-INF/lib/hibernate-annotations-3.4.0.GA.jar
        ./WEB-INF/lib/hibernate-commons-annotations-3.1.0.GA.jar
        ./WEB-INF/lib/hibernate-core-3.3.2.GA.jar
        ./WEB-INF/lib/javassist-3.7.ga.jar
        ./WEB-INF/lib/jstl-1.1.2.jar
        ./WEB-INF/lib/jta-1.1.jar
        ./WEB-INF/lib/junit-4.8.1.jar
        ./WEB-INF/lib/log4j-1.2.14.jar
        ./WEB-INF/lib/mysql-connector-java-5.1.14.jar
        ./WEB-INF/lib/persistence-api-1.0.jar
        ./WEB-INF/lib/slf4j-api-1.6.1.jar
        ./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
        ./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-jdbc-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-orm-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-tx-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
        ./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
        ./WEB-INF/lib/standard-1.1.2.jar
        ./WEB-INF/lib/xml-apis-1.0.b2.jar
        ./WEB-INF/spring-servlet.xml
        ./WEB-INF/spring.properties
        ./WEB-INF/web.xml
    So, are you excited about Java EE 6 ? Want to get started now ? Here are some resources:
    Java EE 6 SDK (including runtime, samples, tutorials etc)
    GlassFish Server Open Source Edition 3.1.2 (Community)
    Oracle GlassFish Server 3.1.2 (Commercial)
    Java EE 6 using WebLogic 12c and NetBeans (Video)
    Java EE 6 with NetBeans and GlassFish (Video)
    Java EE with Eclipse and GlassFish (Video)

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.
    Acknowledgements
    Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
    Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
    Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
    Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.
    Scenario: A product is developed and RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will have double the price to pay in order to recover costs. The work is approved by the guys with budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.
    As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder:
    You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.
    As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity.
    Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft
    A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well? Many development projects consist of, a single “Main” line of source and artifacts. This is good; at least there is source control . There are however a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? I just use labels instead of branches? That's a lot of “what if’s”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   Its standard practice for large projects with lots of developers to use Feature branching and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has the permissions changes so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2 as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developer’s work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration) Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (we will talk about release branches later) Merge from “Sprint1” into “Main” to commit your changes. 
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet, we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product owner. Make sure you leave plenty of time between your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch instead that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create a RTM branch. Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and you Team is happy with the result you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check. 
    Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.
    Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main.
    The same rules apply when merging any fixes in the Release branch back into Main and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.
    Figure: After the Sprint Planning meeting the process begins again.
    Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2 you can create your new Branch to develop in.
    How do we handle a bug(s) in production that can't wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:
    In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI's from main to sprint 2, I guess. Your "Figure: Merging Sprint 2 back into Main." covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft
    I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):
    1. Backlog: Add the bug to the backlog and fix it in the next Sprint
    2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly
    3. Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
    4. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost.
    The choice will depend on the complexity and severity of the bug(s) and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs.
    Figure: Real world issue where a bug needs to be fixed in the current release.
If the bug(s) is urgent enough then then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main and Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of Sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it as I find these exploratory posts coupled with real world experience really help harden my understanding.  Branching is a tool; it is not a silver bullet. Don’t over use it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010


  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

Contents:
Introduction to Toorcon 15 (2013)
A Tale of One Software Bypass of MS Windows 8 Secure Boot
Breaching SSL, One Byte at a Time
Running at 99%: Surviving an Application DoS
Security Response in the Age of Mass Customized Attacks
x86 Rewriting: Defeating RoP and other Shinanighans
Clowntown Express: interesting bugs and running a bug bounty program
Active Fingerprinting of Encrypted VPNs
Making Attacks Go Backwards
Mask Your Checksums—The Gorry Details
Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013)
Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot
Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak
Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

MS Windows 8 Secure Boot overview: UEFI (Unified Extensible Firmware Interface) is the interface between the hardware and the OS. UEFI is processor and architecture independent. Malware can replace the boot loader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the boot loader. Today many bootkits are legacy bootkits; UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs: PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s), and the OS Loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx); these hold X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts, which uses Run-Time (RT) services). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. Relevant settings include:
- SecureBoot: enable/disable image signature checks
- SetupMode: update keys, self-signed keys, and secure boot variables
- CustomMode: allows updating keys
Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

Attacking MS Windows 8 Secure Boot: Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
Doing it correctly means vendors must:
- allow only signed UEFI firmware updates
- protect UEFI firmware from direct modification in flash memory
- protect FW update components
- program the SPI controller securely
- protect secure boot policy settings in NVRAM
- protect the runtime API
- disable the Compatibility Support Module, which allows unsigned legacy boot

The Platform Key (PK) EFI root certificate variable in SPI flash can be corrupted. If the PK is not found, the firmware enters setup mode with Secure Boot turned off. The TPM can also be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit: it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates, protect UEFI firmware in ROM, and protect the EFI variable store in ROM.

Breaching SSL, One Byte at a Time
Angelo Prado and Yoel Gluck, Salesforce.com
CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length, looking for the plaintext which compresses the most, and recovers the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy, and Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't; they convinced CERT that it wasn't fixed and a CVE was issued.

BREACH (breachattrack.com): BREACH exploits the SSL response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that the response body is still compressed at the HTTP level even when TLS-level compression is disabled. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form, or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Compression can be used to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so each character can be found by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence; the compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
- lookahead (guess 2 or 3 characters at a time instead of 1 character); expensive
- rollback to the last known conflict
- check the compression ratio
- brute-force the first 3 "bootstrap" characters, if needed (expensive)
Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size. Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change secrets over time, and separate secrets to input-less servlets. Future work: understanding DEFLATE/GZIP, and HTTPS extensions.

Running at 99%: Surviving an Application DoS
Ryan Huber, Risk I/O
Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or download large files.
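To make the compression-oracle idea from the BREACH summary above concrete, here is a minimal Python sketch using zlib. The page contents, the reflected parameter, and the secret are all made up, and a real attack measures ciphertext lengths on the wire rather than calling zlib locally, so this only illustrates why a correct guess tends to compress better.

    import string
    import zlib

    # Hypothetical response body that reflects attacker input and contains a secret.
    PAGE = b"<html><a href='/search?q=%s'>results</a> ... csrf_token=Supersecret ...</html>"

    def response_length(reflected: bytes) -> int:
        # Length of the DEFLATE-compressed response for a given reflected guess.
        return len(zlib.compress(PAGE % (b"csrf_token=" + reflected)))

    # A correct guess extends the LZ77 match with the real token, so it usually
    # compresses to fewer bytes than a wrong guess.
    print(response_length(b"SupersecreX"))  # wrong last character: usually longer
    print(response_length(b"Supersecret"))  # correct guess: usually shorter

    # Byte-at-a-time recovery: pick the candidate that yields the shortest response.
    # (In practice there are ties and noise, hence the lookahead/rollback tricks above.)
    known = b"Sup"  # BREACH needs a few known bootstrap characters
    next_char = min(string.ascii_letters, key=lambda c: response_length(known + c.encode()))
    print("best next guess:", next_char)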
Apache is not well suited to handling a large number of connections, but you can put something in front of it, or use Apache alternatives such as nginx.

How to identify malicious hosts:
- short, sudden web requests
- the user-agent is obvious (curl, python)
- the same URL is requested repeatedly
- no web page referer (not normal)
- hidden links: hide a link and see if a bot gets it
- restricted access if not from your geo IP (unless the website is global)
- missing common headers in the request
- regular timing
- the IP is first seen at the beginning of the attack
- count requests per host (usually a very large number)

Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer (goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer) is software written by Ryan that works with netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it: no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers for clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, a consumer library, multitasking, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure.

Security Response in the Age of Mass Customized Attacks
Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/
Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to:
- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)
Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans
Richard Wartell
The attack vector being addressed here: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-Oriented Programming attacks: RoP attacks use your own code and write return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example: don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program
Collin Greene, Facebook
Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow them to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs
Anna Shubina, Dartmouth Institute for Security, Technology, and Society
(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if you are far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards
FuzzyNop, Mandiant
This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
- "hiding" malware in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system
Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" and "[RightShift]"), or time-stamp printf strings; look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: you can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Malware also typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
John Ashaman, Security Innovation
Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details
Eric (XlogicX) Davisson
Sometimes, when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both refer to the same data, so you can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If you get hundreds of possible results (16 possibilities for each masked nibble that is unknown), you can do other things to narrow the results, such as looking at the packet contents for domain or geo information. With hundreds of results, you can import them in CSV format into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if you do it at all, so as not to create a false impression of security.

Adventures with weird machines thirty years after "Reflections on Trusting Trust"
Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)
"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not?
- Input is not well defined/recognized: the code's assumptions about "checked" input will be violated (a bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
- Input is well formed, but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
- Input is seen differently by different pieces of the program or toolchain.
Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler is either a "recognizer" for the inputs as a well-defined language (see langsec.org) or a "virtual machine" for inputs to drive into pwn-age.

ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): the compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), and exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—these must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or take DWARF exception handling data: .eh_frame + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS) works too: address translation was used to create a Turing machine. The page handler reads and writes memory (on page fault) and uses a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen.

Next Sergey talked about "parser differentials": having one input format, but two parsers, creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions:
- Trusting computers is not only about bugs! Bugs are part of the problem, but far from all of it.
- Complex data formats mean bugs.
- There is no "chain of trust" in Babylon (that is, with parser differentials).
- We need to squeeze complexity out of data until data stops being "code equivalent".

Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
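As a small illustration of the checksum leak described in the "Mask Your Checksums" talk above, here is a minimal Python sketch of the standard IPv4 header checksum (the ones' complement of the ones' complement sum of the header's 16-bit words). The header bytes below are made up. If you zero out the addresses in a posted packet but leave this field untouched, the checksum still constrains what the original address bytes could have been, which is exactly the leak Eric described.

    def ipv4_checksum(header: bytes) -> int:
        """Ones' complement of the 16-bit ones' complement sum of an IPv4 header."""
        if len(header) % 2:
            header += b"\x00"
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
        return ~total & 0xFFFF

    # A made-up 20-byte header with the checksum field (bytes 10-11) zeroed for computation.
    header = bytes.fromhex(
        "4500003c1c464000"  # version/IHL, TOS, total length, ID, flags/fragment offset
        "4006" "0000"       # TTL, protocol (TCP), checksum placeholder
        "c0a80001"          # source 192.168.0.1
        "c0a800c7"          # destination 192.168.0.199
    )
    print(hex(ipv4_checksum(header)))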


  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project. Product Backlog and the Product Owner Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog. Sprints and the Sprint Backlog So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths such as one month Sprints, two-week Sprints, and one week Sprints. 
For high-stress, time critical projects, teams typically choose shorter sprints such as one week sprints. For more mature projects, longer one month sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them just as long as the team is consistent. You should pick a Sprint length and stick with it.

Daily Scrum
During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time. In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:
1. What have you done since yesterday?
2. What do you plan to do today?
3. Any impediments in your way?
During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

Stories and Tasks
Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:
· Enable users to add and remove items from shopping cart
· Persist the shopping cart to database between visits
· Redirect user to checkout page when Checkout button is clicked
During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow then the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks.
Neglecting to break stories into tasks can lead to “Never Ending Stories” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed because you don’t have a clear idea about the implementation steps required to complete the story. Scrumboard During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard. Estimating Stories and Tasks Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders, have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series. 
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team. When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don't create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story.

Burndown Charts
In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint either with Story Points or with task hours (the following image, taken from Wikipedia, uses hours). When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts do not always slope down over time, however. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill then you know that you are in trouble.

Team Velocity
Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints.
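To make the Team Velocity idea concrete, here is a minimal Python sketch with made-up numbers. The Story Point values and Sprint history are hypothetical; the point is only that velocity is an average of past completed Story Points and can serve as a sanity check when planning the next Sprint.

    # Hypothetical history: Story Points actually completed in previous Sprints.
    completed_per_sprint = [21, 18, 24, 20]
    velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 20.75

    # Hypothetical Story Point estimates proposed for the next Sprint.
    proposed_next_sprint = [8, 5, 5, 3, 3]
    planned = sum(proposed_next_sprint)

    if planned > velocity:
        print(f"Overcommitting: {planned} points planned vs. velocity of {velocity:.1f}")
    else:
        print(f"Plan looks feasible: {planned} points vs. velocity of {velocity:.1f}")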
Knowing the Team Velocity is important during the Sprint Planning meeting when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity then you can avoid committing to do more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint. Scrum Master There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I’v e already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I’ve also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master. The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner. Using SonicAgile SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com. Summary In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of tasks. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the daily scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project. 
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1
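As a closing illustration of the Scrumboard and Sprint Burndown ideas described in this post, here is a minimal Python sketch. The stories, tasks, hour estimates, and task states are all made up; it simply shows that the remaining work plotted by a Sprint Burndown chart is just the sum of hours on tasks that are not yet Done.

    # Hypothetical Sprint Backlog: story -> list of (task, estimated hours, state).
    sprint_backlog = {
        "Create a Shopping Cart": [
            ("Enable users to add and remove items", 6, "Done"),
            ("Persist the shopping cart to database", 8, "In Progress"),
            ("Redirect user to checkout page", 4, "ToDo"),
        ],
        "Create a Product Catalog": [
            ("Design catalog database schema", 5, "ToDo"),
        ],
    }

    def remaining_hours(backlog) -> int:
        # Today's Sprint Burndown value: hours on every task not yet Done.
        return sum(hours
                   for tasks in backlog.values()
                   for _, hours, state in tasks
                   if state != "Done")

    print(remaining_hours(sprint_backlog))  # 8 + 4 + 5 = 17 hours left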

