Search Results

Search found 7281 results on 292 pages for 'quality attributes'.


  • LLBLGen Pro and JSON serialization

    - by FransBouma
    I accidentally removed a reply from my previous blog post, and as this blog engine here at weblogs.asp.net is apparently falling apart, I can't re-add it, as the engine apparently thought it would be wise to disable comment controls on all posts except new ones. So I'll post the reply here as a quote and reply to it. 'Steven' asks: What would the future be for LLBLGen Pro to support JSON for serialization? Would it be worth the effort for an LLBLGen Pro user to bother creating some code templates to produce additional JSON serializable classes? Or just create some basic POCO classes which could be used for exchange of client/server data and use DTOs to map these back to LLBLGen Pro ones? If I understand the workaround, it is at the expense of losing XML serialization. Well, as described in the previous post, you can enable JSON serialization with a couple of lines and some attribute assignments. However, indeed, the attributes might keep XML serialization from working, as described in the previous blog post. This is the case if the service you're using serializes objects using the DataContract serializer: this serializer will give up serializing the entity objects to XML because the entity objects implement IXmlSerializable, which is a no-go area for the DataContract serializer. However, if your service doesn't use a DataContract serializer, or you serialize the objects manually to XML using an XML serializer, you're fine. When you want to switch back to XML serialization instead of JSON in WebApi, and you have decorated the entity classes with the data-contract attributes, you can switch off the DataContract serializer by setting a global configuration setting: var xml = GlobalConfiguration.Configuration.Formatters.XmlFormatter; xml.UseXmlSerializer = true; This will make WebApi use the XmlSerializer and run the normal IXmlSerializable interface implementation.
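
    As a minimal illustration of that last point, here is a hedged sketch for a standard ASP.NET Web API project; the entity below is a hypothetical hand-written class, not actual LLBLGen Pro generated code:

        using System.Runtime.Serialization;
        using System.Web.Http;

        // Data-contract attributes like these are the "couple of lines" that
        // enable JSON serialization on a class.
        [DataContract]
        public class CustomerEntity
        {
            [DataMember]
            public int CustomerId { get; set; }

            [DataMember]
            public string Name { get; set; }
        }

        // In Application_Start (Global.asax):
        protected void Application_Start()
        {
            // Switch Web API back to the XmlSerializer so entities that implement
            // IXmlSerializable serialize to XML correctly again.
            var xml = GlobalConfiguration.Configuration.Formatters.XmlFormatter;
            xml.UseXmlSerializer = true;
        }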

    Read the article

  • OpenGL: Where should I place shaders?

    - by mivic
    I'm trying to learn OpenGL ES 2.0 and I'm wondering what is the most common practice to "manage" shaders. I'm asking this question because in the examples I've found (like the one included in the API Demo provided with the android sdk), I usually see everything inside the GLRenderer class and I'd rather separate things so I can have, for example, a GLImage object that I can reuse whenever I want to draw a textured quad (I'm focusing on 2D only at the moment), just like I had in my OpenGL ES 1.0 code. In almost every example I've found, shaders are just defined as class attributes. For example: public class Square { public final String vertexShader = "uniform mat4 uMVPMatrix;\n" + "attribute vec4 aPosition;\n" + "attribute vec4 aColor;\n" + "varying vec4 vColor;\n" + "void main() {\n" + " gl_Position = uMVPMatrix * aPosition;\n" + " vColor = aColor;\n" + "}\n"; public final String fragmentShader = "precision mediump float;\n" + "varying vec4 vColor;\n" + "void main() {\n" + " gl_FragColor = vColor;\n" + "}\n"; // ... } I apologize in advance if some of these questions are dumb, but I've never worked with shaders before. 1) Is the above code the common way to define shaders (public final class properties)? 2) Should I have a separate Shader class? 3) If shaders are defined outside the class that uses them, how would I know the names of their attributes (e.g. "aColor" in the following piece of code) so I can bind them? colorHandle = GLES20.glGetAttribLocation(program, "aColor");
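
    For what it's worth, one common way to answer questions 2 and 3 is a small reusable Shader class that owns the sources, compiles them, and looks attribute names up (and caches them) in one place. Below is a minimal, assumed sketch; it is written in C# against OpenTK's desktop GL bindings purely for illustration (the question targets Android's GLES20, where the equivalent calls exist with different bindings), and it omits compile/link error checking:

        using System.Collections.Generic;
        using OpenTK.Graphics.OpenGL;

        public class Shader
        {
            private readonly int program;
            private readonly Dictionary<string, int> attributeCache = new Dictionary<string, int>();

            public Shader(string vertexSource, string fragmentSource)
            {
                // Compile both stages and link them into a program.
                int vs = Compile(ShaderType.VertexShader, vertexSource);
                int fs = Compile(ShaderType.FragmentShader, fragmentSource);
                program = GL.CreateProgram();
                GL.AttachShader(program, vs);
                GL.AttachShader(program, fs);
                GL.LinkProgram(program);
            }

            private static int Compile(ShaderType type, string source)
            {
                int shader = GL.CreateShader(type);
                GL.ShaderSource(shader, source);
                GL.CompileShader(shader);
                return shader;
            }

            // Attribute names stay an implementation detail of the shader pair;
            // callers ask by name and the location is cached after the first lookup.
            public int GetAttribLocation(string name)
            {
                int location;
                if (!attributeCache.TryGetValue(name, out location))
                {
                    location = GL.GetAttribLocation(program, name);
                    attributeCache[name] = location;
                }
                return location;
            }

            public void Use()
            {
                GL.UseProgram(program);
            }
        }

    With something like this, a GLImage object can take a Shader in its constructor and never hard-code names like "aColor" itself.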

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July, in Microsoft’s offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director, and both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will. Find a course and register Download this syllabus Download the Scrum Guide What is the Professional Scrum Developer course all about? The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010. Who should attend this course? This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team. What should you know by the end of the course? Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas: Form effective teams Explore and understand legacy “Brownfield” architecture Define quality attributes, acceptance criteria, and “done” Create automated builds How to handle software hotfixes Verify that bugs are identified and eliminated Plan releases and sprints Estimate product backlog items Create and manage a sprint backlog Hold an effective sprint review Improve your process by using retrospectives Use emergent architecture to avoid technical debt Use Test Driven Development as a design tool Setup and leverage continuous integration Use Test Impact Analysis to decrease testing times Manage SQL Server development in an Agile way Use .NET and T-SQL refactoring effectively Build, deploy, and test SQL Server databases Create and manage test plans and cases Create, run, record, and play back manual tests Setup a branching strategy and branch code Write more maintainable code Identify and eliminate people and process dysfunctions Inspect and improve your team’s software development process What does the week look like? 
    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it. The Sprints Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the half-day sprints will roughly follow this schedule:

        Component                 Description                                                                      Minutes
        Instruction               Presentation and demonstration of new and relevant tools & practices             60
        Sprint planning meeting   Product owner presents backlog; each team commits to delivering functionality    10
        Sprint planning meeting   Each team determines how to build the functionality                              10
        The Sprint                The team self-organizes and self-manages to complete their tasks                 120
        Sprint Review meeting     Each team will present their increment of functionality to the other teams       30
        Sprint Retrospective      A group retrospective meeting will be held to inspect and adapt                  10

    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support. Module 1: INTRODUCTION This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin. Trainer and student introductions Professional Scrum Developer program Agenda Logistics Team formation Retrospective Module 2: SCRUMDAMENTALS This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings. Scrum overview Scrum roles Scrum timeboxes (ceremonies) Scrum artifacts Simulation Retrospective It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course. Module 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010 This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development. Mapping Scrum to Visual Studio 2010 User Story work items Task work items Bug work items Demonstration Simulation Retrospective Module 4: THE CASE STUDY In this module the team is introduced to their problem domain for the week. 
    A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and the what of the work in the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported. Introduction to the case study Download the source code, build, and explore the application Define the quality attributes for the project Define “done” How to file effective bugs in Visual Studio 2010 Retrospective Module 5: HOTFIX This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug. How to use Architecture Explorer to visualize and explore Create a unit test to validate the existence of a bug Find and fix the bug Validate and close the bug Retrospective Module 6: PLANNING This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information. Release vs. Sprint planning Release planning and the Product Backlog Product Backlog prioritization Acceptance criteria and tests Sprint planning and the Sprint Backlog Creating and linking Sprint tasks Retrospective At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done. Module 7: EMERGENT ARCHITECTURE This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint. Architecture and Scrum Emergent architecture Principles, patterns, and practices Visual Studio 2010 modeling tools UML and layer diagrams SPRINT 1 Retrospective Module 8: TEST DRIVEN DEVELOPMENT This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring. Continuous integration Team Foundation Build Test Driven Development (TDD) Refactoring Test Impact Analysis SPRINT 2 Retrospective Module 9: AGILE DATABASE DEVELOPMENT This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle. 
    Agile database development Visual Studio database projects Importing schema and scripts Building and deploying Generating data Unit testing SPRINT 3 Retrospective Module 10: SHIP IT Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline. Acceptance criteria Testing in Visual Studio 2010 Microsoft Test Manager Writing and running manual tests Branching SPRINT 4 Retrospective Module 11: OVERCOMING DYSFUNCTION This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class. Scrum-butts and flaccid Scrum Best practices working as a team Team challenges ScrumMaster challenges Product Owner challenges Stakeholder challenges Course Retrospective What will be expected of you and your team? This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to: Pay attention to all lectures and demonstrations Participate in team and group discussions Work collaboratively with other team members Obey the timebox for each activity Commit to work and do your best to deliver All teams should have these skills: Understanding of Scrum Familiarity with Visual Studio 2010 C#, .NET 4.0 & ASP.NET 4.0 experience* SQL Server 2008 development experience Software testing experience * Check with the instructor ahead of time for the exact technologies Self-organising teams Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum! Who should NOT take this course? 
Because of the nature of this course, as explained above, certain types of people should probably not attend this course: Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course Students who are unwilling to work within a timebox Students who are unwilling to work collaboratively on a team Students who don’t have any skill in any of the software development disciplines Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience Find a course and register Download this syllabus Download the Scrum Guide Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • Sync custom AD properties to SharePoint Profile

    - by KunaalKapoor
    Here are some step-by-step instructions for configuring SharePoint to sync with custom AD attributes:
    1. Add the custom attribute in Active Directory. This part will have to be your doing; here is some documentation regarding creating custom attributes in AD:
       http://msdn.microsoft.com/en-us/library/ms675085(VS.85).aspx
       http://technet.microsoft.com/en-us/magazine/2008.05.schema.aspx
       http://blogs.technet.com/b/isingh/archive/2007/02/18/adding-custom-attributes-in-active-directory.aspx
    2. Open up miisclient.exe (C:\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe)
       a. This will have to be opened up with the farm admin account
    3. Click on "Management Agents" in the ribbon
    4. Right-click the Active Directory Management Agent ("MOSS-<name of sync connection>") and click "Refresh Schema"
       a. When prompted, enter the credentials for the farm account
    5. Once complete, close out of miisclient.exe
    6. Go into Central Admin --> Application Management --> Manage Service Applications --> Go into the User Profile Service Application
    7. Click on "Manage User Properties"
    8. Click on "New Property"
    9. Put in the correct information regarding the attribute that was created
    10. At the bottom of this page, under the "Source Data Connection" drop down, select the AD synchronization connection you have already configured
    11. For the "Attribute" drop down, select the new attribute you have created
    12. For the "Direction" drop down, select "Import"
    13. Click "OK"
    14. Run a full synchronization for the user profile service application and the custom property will get synced (as long as the attribute is set in Active Directory for the desired users)

    Read the article

  • Surface RT: To Be Or Not To Be (Part 1)

    - by smehaffie
    So the Surface RT has been out for 9 months and Microsoft just declared a $900 million write-down. So how did this happen and what does it mean for Microsoft’s efforts to break into the tablet market? I have been thinking a lot about most of the information below since the Surface product line was released. If you are looking for a “Microsoft Is Dead” story, then don’t read any further. But if you want an honest look at what I think led Microsoft to this point and what I think can be done to make Surface RT devices better, then please continue reading. What Led Microsoft To The $900 Million Write-Down Surface Unveiling: Microsoft totally missed the boat when they unveiled the Surface product line on June 18th, 2012. Microsoft should’ve been ready to post the specifications of both devices that night. Microsoft should’ve had a site up and running right after the event so people could pre-order the devices. This would have given them a good idea of what the interest was in each device. They could also have used this data to make a better estimate for the number of units to have available for the launch and beyond. They also lost out on taking advantage of the excitement generated by the Surface RT and Surface Pro announcement. They could have thrown in a free touch keyboard for anyone who pre-ordered. The advertising should have started right after the announcement and gotten bigger as launch day approached. Push for as many pre-orders as possible and build excitement for the launch. Actual Launch (Surface RT): By this time all excitement was gone from the initial announcement, except for the Microsoft faithful. Microsoft should have been ready to sell the Surface in as many markets as possible at launch. The limited market release was a real letdown for a lot of people. A limited release right after the initial announcement is understandable, but not at the official launch of the product. Microsoft overpriced the device and now they are lowering it to what it should have been to start with. The $349 price is within the range I suggested it should be at before pricing was announced (Surface Tablets: The Price Must Be Right). Limited ordering options online were also a killer. Users should have been able to buy the base unit of each device and then add on whatever keyboard they wanted (this applies more to the Surface Pro). There should have also been a place where users could order any additional add-ons that they wanted to buy (covers, extra power supplies, etc.). Marketing was better and the dancing “Click In” commercial was cool, but the ads comparing the iPad with Siri should have been on the air from day one of the announcement (or at least the launch). Consumers want to know why your tablet is better, not just that it has a clickable keyboard and built-in kickstand. They could have also compared it to some of the other mid-range tablets if they had not overpriced it to begin with. Stock Applications (Mail, People, Calendar, Music, Video, Reader and IE): This is where Microsoft really blew it. They had all the time in the world to make these applications the best of breed, and instead we got applications that seemed thrown together. Some updates have made these applications better, but they are all still lacking features that should have been there from day one. This did not help to enhance a new user’s experience any. 
    ** I will admit that the applications that were data driven were first-class citizens, and that makes it even more perplexing how MS could knock it out of the park with some apps (Weather, Travel, Finance, Bing, etc.) and fail so miserably on the core applications users would use the most on a tablet. Desktop on Tablet: The desktop is just so out of place on a tablet. I understand it was needed for Office, but I think it would have been better to not have the desktop in Windows RT and instead open up the Office applications in full-screen mode, in a desktop shell (same goes for IE11). That way the user wouldn’t realize they are leaving Metro and going to the desktop. The other option would have been to just not include Office on Windows RT devices. Instead they could have made awesome Windows Store Apps for Word, Excel, OneNote and PowerPoint. In addition, they could have made the stock Mail, People, and Calendar applications contain all the functions that Outlook gives desktop users. Having some of the settings in desktop mode and others under “Change PC Settings” made Windows RT seem unfinished and rushed to market. What Can Be Done To Make Windows RT Based Tablets Better (At least in my opinion) Either eliminate the desktop altogether from Windows RT or at least make the user experience better by hiding the fact the user is running Office/IE in the desktop. Personally I’d like them to totally get rid of it and just make awesome Windows Store Application versions of Word, Excel, PowerPoint & OneNote. This might also make the OS smaller and give the user more available disk space. I doubt there will ever be a Windows Store App version of Office, but I still think it is a good idea. Make it so users can easily direct their documents, pictures, videos and music to their extra storage and can access these files from the standard libraries. A user should not have to create a VHD on their microSD card or create symbolic links to get this to work properly. Most consumers would not be able to do this. Then users get frustrated when they run out of room on their main storage because nothing is automatically saved to their microSD card when saved to libraries. This is a major bug that needs to be fixed, otherwise Microsoft’s selling point of having a microSD slot is worthless. Allow users to uninstall and re-install any of the Office products that come with the Surface. That way people can free up storage space by uninstalling the Office applications they do not need. Everyone’s needs are different, so make the options flexible. Don’t take up storage space for applications the user will not use. Make the core applications the “cream of the crop” of Windows App Store applications. They should set the bar for all other Store applications. Improve performance as much as possible; if it seems sluggish on a tablet, consumers will not buy it. They need to price the next line of Surface products very aggressively to undercut not only the iPad but also low-end Android tablets (Nook, Kindle Fire, Nexus, etc.). Give developers incentives to write quality applications for the devices. Don’t reward developers for cranking out cookie-cutter, low-quality applications. I’d even suggest Microsoft consider implementing some new store certification guidelines to stop these types of applications being published. Allow users to easily move the recovery disk partition between their microSD card and main storage. 
    My Predictions for the Surface RT and Windows RT I honestly think, even with all the missteps MS has made since the announcement of the Surface product line, that they are on the right path. I was excited about the Surface tablets when they were announced, and I still am. Truth be told, Windows 8 on a tablet (aka Windows RT) is better than both iOS and Android. My nephew, who is an Apple fanboy, told me after he saw and used Windows 8 (he got the beta running on his iPad) that Windows 8 kicked Apple’s butt as a tablet OS. So there is hope for all Windows RT based tablets. I agree with my nephew, and that is why whenever anyone asks me about my Surface, I love showing it off and recommending it. The 6 keys to gaining market share in the tablet market are: Aggressive pricing by both Microsoft and their OEMs. Good quality devices put out by Microsoft and their OEMs (there are some out there, but not enough). Marketing, marketing, marketing from both Microsoft and their OEMs (need more ads showing why Windows based tablets are better than iPads and Android tablets). Getting Windows tablets in retail stores all over, and giving salespeople incentives to sell them. Consumers like to try electronics out before they buy them, and most will listen to what the salesperson suggests. Microsoft needs salespeople in retail stores directing people to buy Windows based tablets over iPads and Android tablets. I think the Microsoft Stores within Best Buy are a good start, but they also need to get prominent displays in Walmart, Target, etc. Release a smaller form factor Surface; hopefully the 8”-10” next-generation Surface is not a rumor. Make “Surface” the brand name for all Microsoft tablets and hybrid devices that they come out with. They cannot change the name with each new release. Make Surface synonymous with quality, the same way that iPad is for Apple. Well, that is my 2 cents on the subject. Let me know your thoughts by leaving a comment below. Soon to follow will be my thoughts on the Surface Pro, so keep an eye out for it.

    Read the article

  • Third-party open-source projects in .NET and Ruby and NIH syndrome

    - by Anton Gogolev
    The title might seem to be inflammatory, but it's here to catch your eye after all. I'm a professional .NET developer, but I try to follow other platforms as well. With Ruby being all hyped up (mostly due to Rails, I guess) I cannot help but compare the situation in open-source projects in Ruby and .NET. What I personally find interesting is that .NET developers are for the most part severely suffering from the NIH syndrome and are very hesitant to use someone else's code in pretty much any shape or form. Comparing it with Ruby, I see a striking difference. Folks out there have gems literally for every little piece of functionality imaginable. New projects are popping up left and right and generally are heartily welcomed. On the .NET side we have CodePlex, which I personally find to be a place where abandoned projects grow old and eventually get abandoned. Now, there certainly are several well-known and maintained projects, but the number of those pales in comparison with that of Ruby. Granted, NIH on the .NET devs' part comes mostly from the fact that there are very few quality .NET projects out there, let alone projects that solve their specific needs, but even if there is such a project, it's often frowned upon and is reinvented in-house. So my question is multi-fold: Do you find my observations anywhere near being correct? If so, what are your thoughts on the quality and quantity of OSS projects in .NET? Again, if you do agree with my thoughts on "NIH in .NET", what do you think is causing it? And finally, is it Ruby's feature set & community standpoint (dynamic language, strong focus on testing) that allows for such easy integration of third-party code?

    Read the article

  • CodePlex Daily Summary for Sunday, March 06, 2011

    CodePlex Daily Summary for Sunday, March 06, 2011

    Popular Releases:
    - IIS Tuner: IIS Tuner 1.0: IIS and ASP.NET performance optimization tool.
    - Minemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. Updated mcmap to support the new biome format.
    - CRM 2011 OData Query Designer: CRM 2011 OData Query Designer: The CRM 2011 OData Query Designer is a Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. This tool allows you to build OData queries by selecting filter criteria, select attributes and order by attributes. The tool also allows you to execute the query and view the ATOM and JSON data returned. The look and feel of this component will improve and new functionality will be added in the near future, so please provide feedback on your experience. Import this solution int...
    - AutoLoL: AutoLoL v1.6.4: It is now possible to run the clicker anyway when it can't detect the Masteries Window. Fixed a critical bug in the open file dialog. Removed the resize button. Some UI changes. 3D camera movement is now more intuitive (trackball rotation). When an error occurs on the clicker it will attempt to focus AutoLoL.
    - YAF.NET (aka Yet Another Forum.NET): v1.9.5.5 RTW: YAF v1.9.5.5 RTM (Date: 3/4/2011 Rev: 4742). Official discussion thread here: http://forum.yetanotherforum.net/yaf_postsm47149_v1-9-5-5-RTW--Date-3-4-2011-Rev-4742.aspx Changes in v1.9.5.5: Rev. #4661 - Added "Copy" function to forum administration -- now instead of having to manually re-enter all the access masks, etc., you can just duplicate an existing forum and modify after the fact. Rev. #4642 - New setting to enable/disable Last Unread posts links. Rev. #4641 - Added Arabic Language t...
    - Snippet Designer: Snippet Designer 1.3.1: Snippet Designer 1.3.1 for Visual Studio 2010. This is a bug fix release. Change log: Fixed bug where Snippet Designer would fail if you had the most recent Productivity Power Tools installed. Fixed bug where "Export as Snippet" was failing in non-English locales. Fixed bug where opening a new .snippet file would fail in non-English locales.
    - Chiave File Encryption: Chiave 1.0: Final release for Chiave 1.0 Stable: an application for file encryption and decryption using a 512-bit Rijndael encryption algorithm with a simple to use UI. It's written in C# and compiled in .NET version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Now with added support for Windows XP! Change log from 0.9.2 to 1.0: Added: icon overlay for the Windows 7 taskbar icon. Added thumbnail toolbar buttons to make the navigation easier...
    - DotNetNuke® Community Edition: 05.06.02 Beta: Major highlights: Fixed issue where "My Folder" was not available in the URL control and the Telerik HTML Editor. Fixed issue where HTML Editor dialogs were not displaying correctly in alternate languages. Fixed issue with the regex for email validation. Fixed race condition in the core scheduler. Fixed issue where editing Host page settings would result in a broken host menu. Fixed issue where the "Apply to All Modules" setting was not propagating settings correctly. Fixed issue where browser lan...
    - DirectQ: Release 1.8.7 (RC1): Release candidate 1 of 1.8.7.
    - GoogleTrail: TrailMap Beta 1: TrailMap beta 1 release. Now we have an updated custom map builder, a complete gpx file editor, and an elevation data update service for any gpx file (currently supports Google only).
    - Chirpy - VS Add In For Handling Js, Css, DotLess, and T4 Files: Margogype Chirpy (ver 2.0): Chirpy loves Americans. Chirpy hates Americanos.
    - ASP.NET: Sprite and Image Optimization Preview 3: The ASP.NET Sprite and Image Optimization framework is designed to decrease the amount of time required to request and display a page from a web server by performing a variety of optimizations on the page’s images. This is the third preview of the feature and works with ASP.NET Web Forms 4, ASP.NET MVC 3, and ASP.NET Web Pages (Razor) projects. The binaries are also available via NuGet: AspNetSprites-Core, AspNetSprites-WebFormsControl, AspNetSprites-MvcAndRazorHelper. It includes the foll...
    - Sandcastle Help File Builder: SHFB v1.9.2.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. NOTE: The included help file and the online help have not been completely updated to reflect all changes in this release. A refresh will be issue...
    - Network Monitor Open Source Parsers: Microsoft Network Monitor Parsers 3.4.2554: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC-based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, we have added 4 new protocol parsers and updated 79 existing parsers in the NetworkMonitor_Pa...
    - Image Resizer for Windows: Image Resizer 3 Preview 1: Prepare to have your minds blown. This is the first preview of what will eventually become 39613. There are still a lot of rough edges and plenty of areas still under construction, but for your basic needs, it should be relatively stable. Note: you will need the .NET Framework 4 installed to use this version. Below is a status report of where this release is in terms of the overall goal for version 3. If you're feeling a bit technically ambitious and want to check out some of the features th...
    - JSON Toolkit: JSON Toolkit 1.1: Updated GetAllJsonObjects() and GetAllProperties() methods to JsonObject and Properties properties.
    - Facebook Graph Toolkit: Facebook Graph Toolkit 1.0: Refer to http://computerbeacon.net for documentation and tutorial. New features: added FQL support; added Expires property to Api object; added support for publishing to a user's friend / Facebook Page; added support for posting and removing comments on posts; added support for adding and removing likes on posts and comments; added static methods for the Page class; added support for the Iframe Application Tab of a Facebook Page; added support for obtaining the user's country, locale and age in If...
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. Small improvements for some helpers, and AjaxDropdown now has Data like the Lookup, except its value gets reset and the list refilled if any element from the data gets changed.
    - Managed Extensibility Framework: MEF 2 Preview 3: This release targets .NET 4.0 and Silverlight 4.0. Accordingly, there are two solution files. The assemblies are named System.ComponentModel.Composition.Codeplex.dll as a way to avoid clashing with the version shipped with the 4th version of the framework. Introduced CompositionOptions to container instantiation; CompositionOptions.DisableSilentRejection makes MEF throw an exception on composition errors (useful for diagnostics). Support for open generics. Support for attribute-less registr...
    - PHPExcel: PHPExcel 1.7.6 Production: Donations: donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: we now also have a full PEAR channel! Here's how to use it: new installation: pear channel-discover pear.pearplex.net, then pear install pearplex/PHPExcel. Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel. The official page can be found at http://pearplex.net. Want to contribute? Please refer to the Contribute page.

    New Projects:
    - asp.net mvc 3 simple cms: ASP.NET MVC 3 CMS for learning purposes.
    - C++ Mini Framework: A simple and easy to use class library in source format to quickly do things you commonly need to do in native projects, with the purpose of getting you started; it specifically targets new C++ developers, hoping you will be productive from the very start.
    - Community Megaphone Helpers: A project intended as a means of sharing and accepting contributions for reusable Razor helper modules for functionality used in the Community Megaphone events web application, including Bing maps and more. Supports Microsoft WebMatrix & MVC 3.
    - CRM 2011 OData Query Designer: A Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. The tool allows you to build OData REST queries by selecting filter criteria, select attributes and order by attributes. The tool also allows you to execute the query.
    - Faceted Search: Implementation of faceted (advanced) search with a composite client-side UI. Abstraction interfaces for integration with different server-side technologies, implementation for ASP.NET MVC.
    - FileRenamePro: Makes it easier for users to rename files using advanced rules and regular expressions. It's developed in C#.
    - GT5 Mobile: Gran Turismo Remote Racing mobile site wrapper.
    - KAT: KAT - Knowledge Assessment Tool is a solution from IndERP to automate performance and process management for technology/job-oriented companies.
    - monopoly game: Monopoly is an open source project for educational purposes. The project will incorporate XNA, Silverlight and WCF technologies. It will also show good design-pattern considerations, and integration into a Facebook app. The project will be written in C#.
    - MuDB: An embedded object-oriented database for the .NET Micro Framework which provides a simple yet useful interface for managing data.
    - NDataStructure: A library providing a handful of useful data structures omitted from the .NET Framework.
    - Set NuGet version number: A simple command line tool that makes it easy to set the version number within a NuGet .nuspec package configuration file. This is useful for when you want to automatically update and publish a NuGet package from your build system.
    - Sightreader: Small application to aid in the rote learning of basic musical notation.
    - SSAS Operations Templates: Includes SSIS packages, scripts and code samples for automating maintenance of SSAS in a production environment. Includes operations such as backing up the current state of cube designs in production, scripting partition creation, etc.
    - Team Run Log: Team Run Log.
    - TEDHelper: Downloads subtitles for TED movies.
    - testprojectit339: project339.
    - Ultimate Resume Repository: A class library and application for storing resumes of multiple people with the ability to export a targeted resume in various, configurable formats. Further additions may include cover letters, browser add-ons to populate applications, job search engine integration, etc...
    - Umbraco: Inspired DataTypes: New datatypes that are not in the default install, giving Umbraco some new controls such as Content/Media Treeview and Content/Media Drop Down List with Treeview. Controls have options to restrict DocumentTypes or MediaType and the start location to retrieve from.
    - Using different schemas in the same Orchestration Receive Port: Using different schemas in the same Orchestration Receive Port.
    - WF4Host: Examples in re-hosting the Workflow 4 designer.
    - WMP Hotkeys: A Windows Media Player plugin that enables users to use VLC-player-like keyboard shortcuts (e.g. SPACE to play/pause) in Windows Media Player.

    Read the article

  • How should data be passed between client-side Javascript and C# code behind an ASP.NET app?

    - by ctck
    I'm looking for the most efficient / standard way of passing data between client-side JavaScript code and C# code-behind in an ASP.NET application. I've been using the following methods to achieve this, but they all feel a bit of a fudge. The way to pass data from JavaScript to the C# code is by setting hidden ASP variables and triggering a postback: <asp:HiddenField ID="RandomList" runat="server" /> function SetDataField(data) { document.getElementById('<%=RandomList.ClientID%>').value = data; } Then in the C# code I collect the list: protected void GetData(object sender, EventArgs e) { var _list = RandomList.Value; } Going back the other way, I often use either ScriptManager to register a function and pass it data during Page_Load: ScriptManager.RegisterStartupScript(this.GetType(), "Set", "Test();", true); or I add attributes to controls before a postback or during the initialization or pre-rendering stages: Btn.Attributes.Add("onclick", "DisplayMessage('Hello');"); These methods have served me well and do the job, but they just don't feel complete. Is there a more standard way of passing data between client-side JavaScript and C# backend code? I've seen some posts like this one that describe the HtmlElement class; is this something I should look into?
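
    For completeness, one more standard option: ASP.NET AJAX page methods let client script call a static method in the code-behind without a full postback or hidden fields. A minimal, assumed sketch (the page and method names are hypothetical):

        using System;
        using System.Web.Services;

        public partial class DataExchangePage : System.Web.UI.Page
        {
            // A static page method, callable from client script once the page's
            // ScriptManager has EnablePageMethods="true".
            [WebMethod]
            public static string GetGreeting(string name)
            {
                return "Hello, " + name;
            }
        }

        // Markup and client side, for reference:
        //   <asp:ScriptManager runat="server" EnablePageMethods="true" />
        //   <script type="text/javascript">
        //       PageMethods.GetGreeting('World', function (result) { alert(result); });
        //   </script>

    This keeps the exchange explicit and avoids triggering a full page lifecycle just to move a value.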

    Read the article

  • US GAAP and IFRS Convergence May Be Delayed Even More

    - by Theresa Hickman
    Yesterday, on March 10, the Financial Accounting Standards Board (FASB) met to discuss the changes in financial statement presentation. Over the last six months, the FASB and IASB have been working feverishly to converge US GAAP and IFRS standards to meet the 2011 deadline. In March alone, the standards-setters met eight times. Many people fear that this accelerated pace is compromising the quality of the end product and that maybe they should slow down and do their due diligence. According to WG&L Accounting & Compliance Alert Checkpoint 3/10/2010 (which requires a subscription to view the full article), "Some statement preparers and investors who advise the FASB believe that the process would be better served if it was slowed down so that more attention could be paid to quality." "Should 2011 be looked at as a line in the sand?" asked Joan Amble, executive vice president and corporate comptroller for American Express Co. "We don't think that due process should be compromised for the due date," concurred Lewis Dulitz, vice president of accounting policies and research for medical products supplier Covidien plc. I personally have mixed emotions about this. On one hand, I have been growing impatient with how slowly the US has jumped on the IFRS bandwagon. On the other hand, being the conservative that I am, and knowing this convergence will be costly and disruptive to businesses, I would rather be safe than sorry and get it right the first time.

    Read the article

  • Schedule for my session at MIX10

    - by Laurent Bugnion
    Microsoft has published the schedule for the MIX10 sessions. I have a sweet spot, and I dearly hope that it stays this way. Last year I had a great spot, but it was changed last minute and then I had a much better one, “competing” against Vertigo and their Playboy app… yeah, try to explain to a bunch of geeks that MVVM is better than Playboy… good luck with that ;) Anyway, this year my sweet spot is on the very first day of the conference (there are workshops on Sunday, but this qualifies as pre-conference), Monday after the keynote, which should get everyone pumped and excited. Schedule and location I would be really happy to meet y’all at Understanding the Model-View-ViewModel Pattern in Lagoon F on Monday at 2:00 PM http://live.visitmix.com/MIX10/Sessions/EX14 See you in Vegas (or in video…) Everything I have seen so far hints that this should be a very, very exciting edition of MIX, maybe the most electrifying ever. The great news is that everything will be available even if you cannot make it: the keynotes are typically streamed live, and if you remember last year’s experience at PDC, it is a really good alternative. Built with Silverlight, the feed uses smooth streaming (adjusting the quality according to your bandwidth automatically), offers the possibility to pause and rewind if you miss something, and has great picture quality. As for the sessions, the message at MIX is that the videos will be available online approximately 24 hours after each session is held. This is a great feat! So, see you in Vegas (or in video)! Cheers, Laurent

    Read the article

  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a Proxy based Web Service using Oracle Service Bus. In a stage in the request pipeline, you are using a Publish action to publish the incoming message to a JMS queue using a Business Service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler? This situation could occur because of a faulty connection, a suspended queue, or some other reason. Here is the Request Pipeline in our simple test case, with an Error Handler added to the message flow containing a simple Log action. By default, the Publish action will invoke the service in a fire-and-forget fashion. Therefore any exception that occurs in the outbound transport will go unnoticed, as shown in the following Invocation Trace. So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties in the outbound request: URI, Quality of Service, Mode, Retry parameters, Message Priority. Now add the Routing Options action to the Request Action as shown below. Click the Routing Options to display its properties in the Properties View. Select the QoS option to set the Quality of Service element. Select Exactly Once to override the default setting, and republish the project. The invocation will now block until the message is completely processed. Trying the same test case as earlier generates the following Invocation Trace, showing that the Error Handler is now triggered.

    Read the article

  • Using ASP.NET C# and Javascript

    - by ctck
    I'm looking for the most efficient / standardized way of passing data between client JavaScript code and C# code-behind in an ASP.NET application. Currently I've been using the following methods to achieve this, but they all feel a bit like a fudge. The way I pass data from JavaScript to the C# code-behind is by setting hidden ASP variables and triggering a postback: <asp:HiddenField ID="RandomList" runat="server" /> function SetDataField(data) { document.getElementById('<%=RandomList.ClientID%>').value = data; } Then in C# code I collect the list: protected void GetData(object sender, EventArgs e) { var _list = RandomList.Value; } Going back the other way I often use either ScriptManager to register a function and pass it data during Page_Load: ScriptManager.RegisterStartupScript(this.GetType(), "Set", "Test();", true); or I add attributes to controls before a postback or during the initialization / pre-rendering stages: Btn.Attributes.Add("onclick", "DisplayMessage('Hello');"); These methods have served me well and do the job. However they just don't feel complete. Is there a more standardized way of passing data between client-side markup / JavaScript and backend code? I've seen some posts like this one: Injecting JavaScript : StackOverflow, that describe the HtmlElement class. Is this something I should look into? Thanks everyone for your time.

    Read the article

  • Open source vs commercial game engines

    - by Vanangamudi
    How do commercial games accomplish stunning graphics with smooth gameplay? I am a huge die-hard fan and follower of GNU, Stallman and his philosophies, and other Libre people. C'mon, how would I miss Linus? But I have to admit commercial games do an excellent job. One good example is Assassin's Creed from Ubisoft. It has good quality graphics and plays smoothly on my dual-core CPU with an Nvidia GeForce 8400ES. Rockstar's GTA4 has awesome graphics, but it's slower than AC considering the graphics quality tradeoff. Age of Empires from Ensemble Studios includes massive crowd AI simulation, yet it plays smoothly with eye-candy graphics, very large weapon sets and different tech-tree elements. Open source games like Glest and 0 A.D. (still in alpha :) are not so smooth even though they have very restricted abilities. Coming to the question: how do game companies achieve such optimizations? Is the open source community just not doing optimizations, or are there proprietary technological elements that benefit only the companies - e.g. something like OpenSubdiv from Pixar, which was just released to the community? And why is it hard to implement optimizations? Are there any legal restrictions?

    Read the article

  • How to get tilemap transparency color working with TiledLib's Demo implementation?

    - by Adam LaBranche
    So the problem I'm having is that when using Nick Gravelyn's tiledlib pipeline for reading and drawing tmx maps in XNA, the transparency color I set in Tiled's editor will work in the editor, but when I draw it the color that's supposed to become transparent still draws. The closest things to a solution that I've found are - 1) Change my sprite batch's BlendState to NonPremultiplied (found this in a buried Tweet). 2) Get the pixels that are supposed to be transparent at some point then Set them all to transparent. Solution 1 didn't work for me, and solution 2 seems hacky and not a very good way to approach this particular problem, especially since it looks like the custom pipeline processor reads in the transparent color and sets it to the color key for transparency according to the code, just something is going wrong somewhere. At least that's what it looks like the code is doing. TileSetContent.cs if (imageNode.Attributes["trans"] != null) { string color = imageNode.Attributes["trans"].Value; string r = color.Substring(0, 2); string g = color.Substring(2, 2); string b = color.Substring(4, 2); this.ColorKey = new Color((byte)Convert.ToInt32(r, 16), (byte)Convert.ToInt32(g, 16), (byte)Convert.ToInt32(b, 16)); } ... TiledHelpers.cs // build the asset as an external reference OpaqueDataDictionary data = new OpaqueDataDictionary(); data.Add("GenerateMipMaps", false); data.Add("ResizetoPowerOfTwo", false); data.Add("TextureFormat", TextureProcessorOutputFormat.Color); data.Add("ColorKeyEnabled", tileSet.ColorKey.HasValue); data.Add("ColorKeyColor", tileSet.ColorKey.HasValue ? tileSet.ColorKey.Value : Microsoft.Xna.Framework.Color.Magenta); tileSet.Texture = context.BuildAsset<Texture2DContent, Texture2DContent>( new ExternalReference<Texture2DContent>(path), null, data, null, asset); ... I can share more code as well if it helps to understand my problem. Thank you.
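
    If the pipeline route keeps failing, option 2 does not have to be scattered through the draw code; it can be a one-time pass right after loading the texture. A minimal sketch, assuming XNA 4.0 (the class and method names are placeholders):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public static class TextureColorKey
        {
            // Replaces every pixel equal to the color key with transparent.
            // Run once after Content.Load, not per frame.
            public static void Apply(Texture2D texture, Color colorKey)
            {
                Color[] pixels = new Color[texture.Width * texture.Height];
                texture.GetData(pixels);
                for (int i = 0; i < pixels.Length; i++)
                {
                    if (pixels[i] == colorKey)
                    {
                        pixels[i] = Color.Transparent;
                    }
                }
                texture.SetData(pixels);
            }
        }

    For example, TextureColorKey.Apply(tileSetTexture, new Color(255, 0, 255)) would clear a magenta key. It is still a workaround; the content-pipeline ColorKeyColor path shown above is the intended mechanism.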

    Read the article

  • Transient VO: Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use-case wherein communication has to happen between regions residing under different taskflows. Essentially, they had a common set of parameters to be used. Initially, we resorted to the use of pageFlowScope variables, but they are tightly coupled with the individual task flows. So, how should the communication happen? Some of the alternatives that we brainstormed are: 1. Usage of an ADF contextual event - This is a powerful feature indeed for such use-cases, but there is a considerable cost involved with it. So, before resorting to it, you have to make sure that you have a good enough reason to use it. It actually does a server roundtrip, and the issuing of an event and the listening part of it are also something which requires your attention! 2. Use a transient VO with shared data control scope - with shared data control scope, the transient VO rows would be persistent across the task flows in your application. All you have to do is to create the attributes in the transient VO (preferably with the same names - for the ease of conversion) and create some utility methods in the VOImpl for creating a row, updating a row and deleting a row. You also have to make sure that the VO row is initialized per HTTP request (this you can do in a bookmark method of your index.jspx, residing in adfc-config.xml), else the UI fields bound to the transient VO attributes won't render in the UI. Hope this helps; this should be a common use-case across apps.

    Read the article

  • Why My Adsense Account is not accepted

    - by Muhammad Adeel Zahid
    I created a blog on Blogger a few days ago and wrote two blog entries on it. When I try creating an AdSense account with the blog's address, it does not accept the application due to Page Type. I have searched around on the net and found that it's probably due to duplicate or insufficient content on the blog, but each of my blog entries is more than 2,000 words and I have literally typed 80 percent of their content, with only a few code blocks copied from other sites. Below is the content of the email I received in response to my application. Hello, Thank you for your interest in Google AdSense. Unfortunately, after reviewing your application, we're unable to accept you into AdSense at this time. We did not approve your application for the reasons listed below. Issues: - Page Type Further detail: Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines: Your website must be your own top-level domain (www.example.com and not www.example.com/mysite). You must provide accurate personal information with your application that matches the information on your domain registration. Your website must contain substantial, original content. Your site must comply with Google AdSense program policies: https://www.google.com/adsense/policies which include Google's webmaster quality guidelines: http://www.google.com/support/webmasters/bin/answer.py?answer=35769#quality . If your site satisfies the above criteria in the future, please resubmit your application and we'll review it as soon as possible. My personal credentials are also the same as they appear in my Google account. I can't figure out the problem. Any help/suggestion is highly appreciated. Regards

    Read the article

  • Today's Links (6/21/2011)

    - by Bob Rhubart
    Keeping your process clean: Hiding technology complexity behind a service | Izaak de Hullu
    Izaak de Hullu offers a solution to "technology pollution like exception handling, technology adapters and correlation."

    WebLogic Weekly for June 20th, 2011 | James Bayer
    James Bayer presents "a round-up of what has been going on in WebLogic over the past week."

    Publish to EDN from Java & OSB with JMS | Edwin Biemond
    Busy blogger and Oracle ACE Edwin Biemond shows "how you can publish events from Java and OSB."

    How is HTML 5 changing web development? | Audrey Watters - O'Reilly Radar
    In this interview, OSCON speaker Remy Sharp discusses HTML5's current usage and how it could influence the future of web apps and browsers.

    SOA Governance Book | SOA Partner Community Blog
    Information on how those in EMEA can win a free copy of SOA Governance: Governing Shared Services On-Premise and in the Cloud by Thomas Erl, et al.

    Keeping The Faith on 11i | Floyd Teter
    "The iceberg is melting, the curtain is coming down, the lights are dimming, the fat lady is singing," says Oracle ACE Director Floyd Teter.

    Configure and test JMS based EDN in SOA Suite 11g | Edwin Biemond
    Oracle ACE Edwin Biemond shows you "how to configure EDN-JMS and how to publish an Event to this JMS Queue."

    Choosing the best way for SOA Suite and Oracle Service Bus to interact with the Oracle Database | Lucas Jellema
    Oracle ACE Director Lucas Jellema illustrates "over 20 different interaction channels" covering "a fairly wild variation of attributes, required skills, productivity and performance characteristics."

    Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets | Alex Kotopoulis
    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology.

    Java Spotlight Podcast Episode 35: JVM Performance and Quality
    Featuring an interview with Vladimir Ivanov, Ivan Krylov, and Sergey Kuksenko on JDK 7 Java Virtual Machine performance and quality. Also includes the Java All Star Developer Panel featuring Dalibor Topic, Java Free and Open Source Software Ambassador, and Alexis Moussine-Pouchkine, Java EE Developer Advocate.

    Read the article

  • How to disable an ASP.NET linkbutton when clicked

    - by Jeff Widmer
    Scenario: A user clicks a LinkButton in your ASP.NET page and you want to disable it immediately using JavaScript so that the user cannot accidentally click it again. I wrote about disabling a regular submit button here: How to disable an ASP.NET button when clicked. But the method described in the other blog post does not work for disabling a LinkButton. This is because the Post Back Event Reference is called using a snippet of JavaScript from within the href of the anchor tag:

        <a id="MyContrl_MyButton" href="javascript:__doPostBack('MyContrl$MyButton','')">My Button</a>

    If you try to add an onclick event to disable the button, even though the button will become disabled, the href can still be clicked multiple times (causing duplicate form submissions). To get around this, in addition to disabling the button in the onclick JavaScript, you can set the href to "#" to prevent it from doing anything on the page. You can add this to the LinkButton from your code-behind like this:

        MyButton.Attributes.Add("onclick", "this.href='#';this.disabled=true;" + Page.ClientScript.GetPostBackEventReference(MyButton, "").ToString());

    This code adds JavaScript that sets the href to "#" and then disables the button in the onclick event of the LinkButton, by appending to the Attributes collection of the ASP.NET LinkButton control. Then the Post Back Event Reference for the button is called right after disabling the button. Make sure you add the Post Back Event Reference to the onclick, because now that you are changing the anchor href, the button still needs to perform the original postback.

    With the code above, the button onclick event will look something like this:

        onclick="this.href='#';this.disabled=true;__doPostBack('MyContrl$MyButton','');"

    The anchor href is set to "#", the LinkButton is disabled, AND then the button post back method is called.

    Technorati Tags: ASP.NET LinkButton
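    As a usage footnote, a minimal code-behind sketch of where that line might live is below; "MyButton" is the placeholder LinkButton ID from the snippets above, and wiring it up in Page_Load ensures the script is attached on every request, including postbacks:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Attach the client-side script that blanks the href, disables
            // the link, and then performs the original postback.
            MyButton.Attributes.Add("onclick",
                "this.href='#';this.disabled=true;" +
                Page.ClientScript.GetPostBackEventReference(MyButton, ""));
        }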

    Read the article

  • We Found 100 Manufacturing Heroes That Focus on Innovation!

    - by Stephen Slade
    There's a good piece written by Ann Grackin of ChainLink Research on the Manufacturing Leadership 100 Awards program held recently in Palm Beach, FL, Apr 30-May 3, 2012. The article (link below) summarizes the Summit with a specific focus on manufacturing innovation. There were several informative keynotes and sessions from industrial leaders who are leveraging the latest tools and technologies to make better decisions. Ann writes that she was a panelist with Cindy Reese, SVP, Worldwide Operations, Oracle, and Steven Tungate, VP/GM, Supply Chain & Innovation, Toshiba America Business Solutions, discussing Factories and Supply Networks of the Future.

    Ann writes: "So what are these manufacturers doing? Significant rationalization of the supply base (Toshiba, especially, has this issue since they have a long history of many acquisitions), streamlining production to increase productivity, and looking for lower-cost countries for manufacturing…. No doubt firms have global customer bases, so they need to be present in these markets. However, a low-cost-country manufacturing source does introduce more risk in the supply chain. And that was discussed. Quality, security, and intellectual property protection were the critical global manufacturing issues also discussed.

    "Cindy (Reese) told a fascinating story about Oracle's acquisition of Sun and the supply chain that was subsequently created. Here was one of the key points: Although Oracle sells on a global basis, they now do their own factory-installed software. This keeps potential 'factory-installed malware' from getting into the servers at contract manufacturers, and prevents pirated software. In this way, Oracle ensures that they deliver the quality and security people expect."

    Learn more about the Manufacturing Leadership 100 program from Manufacturing Executive at: http://www.mlsummit.com/

    Full Article Link: http://www.clresearch.com/research/detail.cfm?guid=52327213-3048-79ED-99D4-E433DA64D4F0

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept reading tweets about the Lean Coffee that had happened earlier that morning. It intrigued me, and I figured in for a penny, in for a pound, so I set my alarm for 6:45am. Following the awards night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast I set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place as, although the bar was empty, there was a table with post-its and pens! This MUST be the place!

    The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its, which are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills, etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy

    Following Lean Coffee was real coffee, and much needed it was! The first keynote of the day was "Let's help Melly (Changing Work into Life)" by Jurgen Appelo. This was a very interesting presentation, and it set up the day nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, 'Melly', who looked happy enough. He then stated that she looked happy but actually hated her job, and that in fact 50% of Americans hate their jobs. He went on to say that, the world over, 50% of people hate their jobs.

    Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but of the people management: ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day), and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I've massively simplified Jurgen's keynote and have certainly not done it justice, so I will post a link to the video once it's available.

    Following more coffee, the next talk was "How releasing faster changes testing" by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but chiefly that adding new people to a team at such a critical time can be more of a drain on resources than a help. The alternative is to have the whole team share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product.
    This links nicely with the Developer Exploratory Testing presented by Sigge on Day 1, and is certainly something that my team are really striving towards. Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good solid questions that can be answered. Alex suggested that perhaps the product doesn't need to be fully tested. Doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the 'right stuff'! Determine a set of tests that are 'face saving' or 'smoke' tests; these cover the core functionality of the product and aim to prevent major embarrassment if those areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added: do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.

    Red Pill, Blue Pill

    The second keynote of the day was "Adaptation and improvisation - but your weakness is not your technique" by Markus Gartner, and it proved to be another very good presentation. It started off quoting lines from The Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how through deliberate practice (and a lot of it!) you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me. It involved pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here!

    Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that "Testers Must Code". Whilst it is true that to be a tester you don't need to code, it is becoming more common that there is a crossover happening where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code, so this helped to make that clear.

    "Extending Continuous Integration and TDD with Continuous Testing" by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us, because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often or reducing the test time, but this then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code, and run them in the background so you can continue working. This is a really nice idea, but to do this your tests must be fast, independent and reliable.
    The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions for making tests fast: firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight or outside of the CT system at any rate. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I've already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn't work, but we will look at whether we can make it work, because this has the potential to be a mini-game-changer for us.

    Using the wrong data

    The final keynote of the day was "Reinventing software quality" by Gojko Adzic. He opened the talk with the statement "We've got quality wrong because we are using the wrong data!" Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it's important. Why spend time fixing issues that the customer just wouldn't care about, and release months later because of it? Surely it's better to release now and get customer feedback? This was another reference to the idea that it's better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you're making the right thing.

    Gojko then showed his hierarchy of quality, something very analogous to Maslow's hierarchy of needs:

    Successful: does it contribute to the business?
    Useful: does it do what the user wants?
    Usable: does it do what it's supposed to without breaking?
    Performant/Secure: is it secure / is the performance acceptable?
    Deployable/Functionally OK: can it be deployed without breaking?

    He then explained that User Stories should focus on change. In other words, they should focus on the user's needs, not the user's process. Describe what the change will be, how that change will happen, and then measure it!

    Networking and Beer

    Following the day's closing keynote, there were drinks and nibbles for the 'Networking' evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and by asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he 'bought' me a beer!

    My Takeaway Triple from Day 2:

    1) Release small and release often to minimize issues creeping in and get faster feedback from 'the real world'.
    2) Focus on issues that the customers care about, not what we think is important.
    3) It's okay to disagree with someone, even if they are well respected agile testing gurus; that's how discussion and learning happen!

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality; they lie rather in the history of the data: most of it was migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and some records were misentered by accident on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that data troubles may wake some managers at the customer, and that some processes on the customer's side may stop working because they have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development; data migration is to data what refactoring is to code. I think that data migration is essential for creating software that evolves. Without it, we would have to create painful software that works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties they cause? Any other interesting thoughts that belong to this topic?

    Read the article

  • Thank you South Florida for a successful SPSouthFLA

    - by Leonard Mwangi
    I wanted to officially thank the organizers, speakers, volunteers and attendees of SharePoint Saturday South Florida. This being the first event in South Florida, the reception was phenomenal, from the keynote by Joel Oleson to sessions by well renowned speakers like John Holliday, Randy Disgrill, Richard Harbridge, Ameet Phadnis, Fabian Williams, Chris McNulty and Jaime Velez, to organizers like Michael Hinckley, amongst others. With my Business Intelligence (BI) presentation being in the last track of the day, I spent very quality time networking with these great guys and getting the insider scoop on the international SharePoint community from Joel and his son, which was mesmerizing. I had a very active audience, to the point where we couldn't accommodate all the content within the allocated hour, because they were very engaged and wanted a deep dive session on new features like PowerPivot and on enhancements to PerformancePoint and Excel Services, amongst others, in order to understand the business value and how SharePoint 2010 is making self-service BI a reality. These community events allow attendees to experience technology first hand and network with MVPs, authors and experts, providing high quality educational sessions, usually for free, which is reason enough to attend. I have made the slides for my session available for download for those interested: http://goo.gl/VaH5x

    Read the article

  • Why do some bad websites rank well?

    - by BradB
    Consider the following scenario: you are pitching SEO/website optimisation to a prospective client. You explain to them the importance of great copy and content, how acquiring links (ethically) can increase page rank, why the quality of the HTML build matters (H1, H2 tags, W3C validation, etc.), and why keyword research is beneficial. You may drop in a few Google Webmaster Guideline or Matt Cutts references to back up your claims, and rubbish the "black hat" approach as no longer effective for good measure. Your advice is ethical and, in the eyes of best practices, spot on.

    Then the client points out some of their long established competitors on Google, and you see these competitor websites ranking in the top spots (1 to 3) for medium to highly competitive search phrases that your client wants to compete for. These websites totally contradict your ethical approach and pretty much violate every best practice previously noted. They even outperform other "white hat" competitors who are in accordance with the above guidelines. I experienced this today. One of these well ranking websites had:

    - About six microsites with more or less the same copy and a slightly varied layout
    - Little or no textual content; I would almost say duplicate content across the sites, but there was so little of it that it could barely qualify as duplicate
    - All the content in Flash (with a music track that kicked in on each page load; not so much an SEO issue, but it helps paint the picture)
    - Keyword stuffing behind the Flash file, with a bunch of black text on a black background in the style of "keyword 1 keyword 2,keyword1,keyword 2,keyword 2 keyword 3" and so on, with the exact keyword-stuffed combination present on every page of the website
    - A bunch of clearly self-made links from poor quality forums and directories with little or no PageRank
    - Links exchanged across the microsites

    How do you explain your way out of this when this hard evidence is sitting in front of you, undermining your great pitch?

    Read the article

  • Is DQS-in-the-cloud on its way?

    - by jamiet
    LinkedIn profiles are always a useful place to find out what's really going on in Microsoft. Today I stumbled upon this little nugget from former SSIS product team member Matt Carroll:

    March 2012 – December 2012 (10 months), Redmond, WA
    Took ownership of the SQL 2012 Data Quality Services box product and re-architected and extended it to become a cloud service. Led team and managed product to add dynamic scale, security, multi-tenancy, deployment, logging, monitoring, and telemetry as well as creating new Excel add-in and new ecosystem experience around easily sharing and finding cleansing agents. Personally designed, coded, and unit tested in-memory trigram matching algorithm core to better performance, scale and maintainability. Delivered and supported successful private preview of the new service prior to SQL wide reorganization.

    http://www.linkedin.com/profile/view?id=9657184

    Sounds as though a Data-Quality-Services-in-the-cloud (which I spoke of as being a useful addition to Microsoft's BI portfolio in my previous blog post Thoughts on Power BI for Office 365) might be on its way some time in the future. And what's this SQL wide reorganization? Interesting stuff.

    @Jamiet

    Read the article

  • Refactoring this code that produces a reverse-lookup hash from another hash

    - by Frank Joseph Mattia
    This code is based on the idea of a Form Object, http://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/ (see #3 if unfamiliar with the concept). My actual code in question may be found here: https://gist.github.com/frankjmattia/82a9945f30bde29eba88

    The code takes a hash of objects/attributes and creates a reverse lookup hash to keep track of their delegations. To do this:

        delegate :first_name, :email, to: :user, prefix: true

    But I am manually creating the delegations from a hash like this:

        DELEGATIONS = {
          user: [ :first_name, :email ]
        }

    At runtime, when I want to look up the translated attribute names for the objects, all I have to go on are the delegated/prefixed attribute names like :user_first_name (I have to use a prefix to avoid naming collisions), which aren't in sync with the Rails i18n way of doing it:

        en:
          activerecord:
            attributes:
              user:
                email: 'Email Address'

    The code I have takes the above delegations hash and turns it into a lookup table, so when I override human_attribute_name I can get back the original attribute name and its class. Then I send #human_attribute_name to the original class with the original attribute name as its argument.

    The code I've come up with works, but it is ugly to say the least. I've never really used #inject, so this was a crash course for me, and I am quite unsure if this code is an effective way of solving my problem. Could someone recommend a simpler solution that does not require a reverse lookup table, or does that seem like the right way to go?

    Thanks, - FJM

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >