Search Results

Search found 17899 results on 716 pages for 'oracle focus'.

  • MSMQ messages using HTTP just won't get delivered

    - by John Breakwell
    I'm starting off the blog with a discussion of an unusual problem that has hit a couple of my customers this month. It's not a problem you'd expect to bump into and the solution is potentially painful.

    Scenario

    You want to make use of the HTTP protocol to send MSMQ messages from one machine to another. You have installed HTTP support for MSMQ and have addressed your messages correctly but they will not leave the outgoing queue. There is no configuration for HTTP support - setup has already done all that for you (although you may want to check the most recent "Installation of the MSMQ HTTP Support Subcomponent" section of MSMQINST.LOG to see if anything DID go wrong) - so you can't tweak anything. Restarting services and servers makes no difference - the messages just will not get delivered.

    The problem is documented and resolved by Knowledgebase article 916699 "The message may not be delivered when you use the HTTP protocol to send a message to a server that is running Message Queuing 3.0". It is unlikely that you would be able to resolve the problem without the assistance of PSS because there are no messages that can be seen to assist you and only access to the source code exposes the root cause.

    As this communication is over HTTP, the IIS logs would be a good place to start. POST entries are logged which show that connectivity is working and message delivery is being attempted:

      #Software: Microsoft Internet Information Services 6.0
      #Version: 1.0
      #Date: 2006-09-12 12:11:29
      #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
      2006-09-12 12:12:12 W3SVC1 10.1.17.219 POST /msmq/private$/test - 80 - 10.2.200.3 - 200 0 0

    If you capture the traffic with Network Monitor you can see the POST being sent to the server but you also see a response being returned to the client:

      HTTP: Response to Client; HTTP/1.1; Status Code = 500 - Internal Server Error

    "Internal Server Error" means we can probably stop looking at IIS and instead focus on the Message Queuing ISAPI extension (Mqise.dll). MSMQ 3.0 (Windows XP and Windows Server 2003) comes with error logging enabled by default but the log files are in binary format - MSMQ 2.0 generated logging in plain text. The symbolic information needed for formatting the files is not currently publicly available so log files have to be sent in to Microsoft PSS. Although this does mean raising a support case, formatting the log files to text and returning them to the customer shouldn't take long. Obviously the engineer analyses them for you - I just want to point out that you can see the logging output in text format if you want it. The important entries in the log for this problem are:

      [7]b48.928 09/12/2006-13:20:44.552 [mqise GetNetBiosNameFromIPAddr] ERROR:Failed to get the NetBios name from the DNS name, error = 0xea
      [7]b48.928 09/12/2006-13:20:44.552 [mqise RPCToServer] ERROR:RPC call R_ProcessHTTPRequest failed, error code = 1702(RPC_S_INVALID_BINDING)

    which allow a Microsoft escalation engineer to check the MQISE source code to see what is going wrong. This problem, according to the article, occurs when the extension tries to bind to the local MSMQ service after the extension receives a POST request that contains an MSMQ message. MSMQ resolves the server name by using the DNS host name but the extension cannot bind to the service because the buffer that MSMQ uses to resolve the server name is too small - server names that are exactly 15 characters long will not fit. RPC exception 0x6a6 (RPC_S_INVALID_BINDING) occurs in the W3wp.exe process but the exception is handled and so you do not receive an error message.

    The workaround is to rename the MSMQ server to something less than 15 characters. If the problem has only just been noticed in a production environment - an application may have been modified to get through a newly-implemented firewall, for example - then renaming is going to be an issue. Other applications may need to be reinstalled or modified if server names are hard-coded or stored in the registry. The renaming may also break a company naming convention where the name is built up from something like location+department+number.

    If you want to learn more about MSMQ logging then check out Chapter 15 of the MSMQ FAQ. In fact, even if you DON'T want to learn anything about MSMQ logging you should read the FAQ anyway as there is a huge amount of useful information on known issues and the like.
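
    For context, the sending side only differs from native MSMQ in the format name used to address the queue. Here is a minimal C# sketch using System.Messaging; the server name "myserver" and queue name "test" are placeholders, and this assumes HTTP support is installed at both ends:

      using System.Messaging;

      class HttpMsmqSender
      {
          static void Main()
          {
              // A direct format name addressing the queue over HTTP rather than
              // the native MSMQ protocol; this is the path that goes through
              // IIS and exercises Mqise.dll on the destination server.
              string formatName = "FormatName:DIRECT=HTTP://myserver/msmq/private$/test";

              using (var queue = new MessageQueue(formatName))
              using (var message = new Message("Hello over HTTP"))
              {
                  message.Recoverable = true; // survive service restarts
                  queue.Send(message);
                  // Per KB916699, if the destination's name is exactly 15
                  // characters the message sits in the outgoing queue forever.
              }
          }
      }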

    Read the article

  • I’m a Phoenix… and I’m miffed

    - by Stan Spotts
    For personal reasons, almost 30 years ago I left school to enter the workforce. I decided in late 2008 to go back to school and finish my degree. After the expected loss of credits for a transfer, from Temple University to University of Phoenix, I'm now about 75% done. The experience has been interesting.

    Classes are time compressed, only 5 weeks each. Because I have a family and a full time job, I'm taking one at a time. Even so, I've written more papers in these classes than I ever wrote at Temple. My own papers are one thing, but the team papers give me heartburn since I can't completely control what goes into them. Not a big deal except that they make up 30% of our grade.

    In any case, most of the class facilitators have been great. I had great ones for Accounting, Finance, and frankly most others. I've had a few (4, maybe) cases where I was less than 2 points from an A, and asked the facilitator if I could get any of my work reviewed to see if I could get those extra points. I figured it was worth a shot, and there were no extenuating circumstances to help make my case. I think that only one facilitator decided after a review of one paper that my interpretation was good, just not what he expected, and gave me another point, which gave me an A. So while none are pushovers, they've all been open to discussion, which is as much as I should expect. Overall, good experience. That is, until my last class.

    On the second week, the day I was due to hand in my personal assignment for the week, I was in an accident. An SUV creamed my little Ford Focus, and totaled it (estimated repair over $11K). I was pretty banged up, especially my left shoulder. I was scheduled for rotator cuff surgery two weeks later, and getting hit against the door really made it worse. After dealing with the police, the EMT, the tow truck, and the Percocet and Flexeril for the pain, I crashed for the night and didn't get to upload my paper until the next day. The instructor took 30% off for it being late, even after I supplied photos of the car, my arm (huge bruises), and offered to supply the police report number. I figured I'd be okay since that's 2.7 points, and I could lose up to 5 before jeopardizing an A grade.

    Well, that wasn't the case, as we lost more points than I expected on our team paper in Week 5. I ended up with a 94.3. Yes, 7/10 of a point from an A. Of course I asked the instructor to review the issue with the accident and give me just the 0.7 points I needed for the A. That got me a short response of "I have received your emails and reviewed your work over the last five weeks. Your current grade will stand. If you would like to dispute your grade then please feel free to contact your academic advisor. I wish you much success in your professional and academic career." Brrrr….!

    So I asked my academic advisor to file a dispute. If it wasn't that a pretty bad car accident was the cause, I wouldn't have. Without the grade reduction, I would have had a 97 for the class, so I'll argue that I was performing at the A level throughout the class. Why her purported "review" of my work didn't then warrant such a minor adjustment, I don't know. An A- drops my GPA, and this ticked me off. Now I have to wait and see what the school says about the grade dispute.

    Read the article

  • Travelling MVP #1: Visit to SharePoint User Group Finland

    - by DigiMortal
    My first self-organized trip this autumn was a visit to the SharePoint User Group Finland community evening. Active community leaders make meetings like this possible and they are worth mentioning - on the spug.fi side it was Jussi Roine who invited me. Here is my short review of my trip to Helsinki.

    User group meeting

    As Helsinki is near Tallinn, I went there by ship. It was easy to get from the sea port to the venue and I also had some minutes of time to visit the academic book store. The community evening was held on the ground floor of one city center hotel, in a room conveniently located near the hotel bar and restaurant. Here is the meeting schedule:

    - Welcome (Jussi Roine)
    - OpenText application archiving and governance for SharePoint (Bernd Hennicke, OpenText)
    - Using advanced C# features in SharePoint development (Alexey Sadomov, NED Consulting)
    - Optimizing public-facing internet sites for SharePoint (Gunnar Peipman)

    After the meeting, of course, the local dudes don't walk away but continue with some beers and discussion.

    Sessions

    After welcome words by Jussi there was a session by Bernd Hennicke, who spoke about OpenText. His session covered OpenText's history and where the company is now. After this introduction he spoke about OpenText products for SharePoint and gave the audience a good overview of where their SharePoint extensions fit in the big picture. I usually don't like vendor sessions but this one was good. I mean, the vendor dudes were not aggressively selling something. They were way different - kind people who introduced their stuff and later answered questions. They acted like good guests.

    The second speaker was Alexey Sadomov, who works on SharePoint development projects. He introduced some ways to get over some limitations of SharePoint. I won't go deeply into his session here, but it's worth mentioning that it was a strong one. It is not a rare case that developers have to make nasty hacks to SharePoint. I mean really nasty hacks. Often these hacks are long blocks of code that use terrible techniques to achieve the result. Alexey introduced some far more civilized ways to apply hacks.

    [Photo: Alex Sadomov, SharePoint MVP, speaking about SharePoint coding tips and tricks on C#]

    I spoke about how I optimized caching of the Estonian Microsoft community portal, which runs on SharePoint Server and uses the publishing infrastructure. I made no actual demos on SharePoint because I wanted to focus on the optimizing process and share some experiences about how to get caches optimized and how to measure caches.

    Networking

    After the official part there was time to talk and discuss with people. Finns are cool - they have beers and they are glad. It was not a big community event but people were like one good family. Developers there often work for big companies and it was very interesting to me to hear about their experiences with SharePoint. One thing was a little bit surprising for me - SharePoint guys in Finland are also talking actively about Office 365 and online SharePoint. It doesn't happen often here in Estonia.

    I had to leave a little before 21:00 to get to my ship back to Tallinn. I am sure the spug.fi dudes continued the nice evening and had at least as good a time as I did. Do I want to go back to Finland and meet these guys again? Yes, sure, let's do it again! :)

    Read the article

  • Learn Cloud Computing – It’s Time

    - by Ben Griswold
    Last week, I gave an in-house presentation on cloud computing.  I walked through an overview of cloud computing - characteristics (on demand, elastic, fully managed by provider), why are we interested (virtualization, distributed computing, increased access to high-speed internet, weak economy), various types (public, private, virtual private cloud) and service models (IaaS, PaaS, SaaS.)  Though numerous providers have emerged in the cloud computing space, the presentation focused on Amazon, Google and Microsoft offerings and provided an overview of their platforms, costs, data tier technologies, management and security.  One of the biggest talking points was why developers should consider the cloud as part of their deployment strategy:

    - You only have to pay for what you consume
    - You will be well-positioned for one time event provisioning
    - You will reap the benefits of automated growth and scalable technologies

    For the record: having deployed dozens of applications on various platforms over the years, pricing tends to be the biggest customer concern.  Yes, scalability is a customer consideration, too, but it comes in distant second.  Boy do I hope you’re still reading…

    You may be thinking, “Cloud computing is well and good and it sounds catchy, but should I bother?  After all, it’s just another technology bundle which I’m supposed to ramp up on because it’s the latest thing, right?”  Well, my clients used to be 100% reliant upon me to find adequate hosting for them.  Now I find they are often aware of cloud services and some come to me with the “possibility” that deploying to the cloud is the best solution for them.  It’s like the patient who walks into the doctor’s office with their diagnosis and treatment already in mind thanks to the handful of Internet searches they performed earlier that day.  You know what?  The customer may be correct about the cloud. It may be a perfect fit for their app.  But maybe not…

    I don’t think there’s a need to learn about every technical thing under the sun, but if you are responsible for identifying hosting solutions for your customers, it is time to get up to speed on cloud computing and the various offerings (if you haven’t already.)  Here are a few references to get you going:

    - DZone Refcardz #82: Getting Started with Cloud Computing by Daniel Rubio
    - Wikipedia: Cloud Computing – What is it?
    - Amazon Machine Images (AMI)
    - Google App Engine SDK
    - Azure SDK
    - EC2 Spot Pricing
    - Google App Engine Team Blog
    - Amazon EC2 Team Blog
    - Microsoft Azure Team Blog
    - Amazon EC2 – Cost Calculator
    - Google App Engine – Cost and Billing Resources
    - Microsoft Azure – Cost Calculator
    - Larry Ellison has stated that cloud computing has been defined as "everything that we currently do" and that it will have no effect except to "change the wording on some of our ads"
    - Oracle launches worldwide cloud-computing tour
    - NoSQL Movement

    Read the article

  • Are you cashing in on the MVP complimentary subscriptions ?

    - by Tarun Arora
    The two most asked questions in the Microsoft technology communities around the Microsoft MVP program are:

    1. How do I become a Microsoft MVP?
    2. What benefits do I get as an MVP?

    The answer to the first question has been well answered here. In this blog post, I’ll try and answer the second question. Please find below a comprehensive list of Not for Resale personal subscriptions of various products that Microsoft MVPs are eligible for:

    - JetBrains: Resharper, dotTrace, dotCover & WebStorm - https://www.jetbrains.com/resharper/buy/mvp.html
    - RedGate: SQL Server development, database administration, .NET development, Azure development (merged with Cerebrata), MySQL development, Oracle development - http://www.red-gate.com/community/mvp-program
    - Pluralsight: Pluralsight on-demand training - http://blog.pluralsight.com/2011/02/28/pluralsight-for-mvp/
    - Cerebrata: Cloud Storage Studio and Azure Diagnostics Manager (part of RedGate now) - https://www.cerebrata.com/Offers/mvp.aspx
    - Telerik: Telerik Ultimate Collection & Telerik TeamPulse - http://blogs.telerik.com/blogs/posts/11-03-01/telerik-gift-for-microsoft-mvps.aspx
    - Developer Express: DevEx controls - http://www.devexpress.com/Home/Community/mvp.xml
    - InnerWorkings: 600 hours of .NET training catalogue - http://www.innerworkings.com/mvp
    - Typemock: Typemock Isolator, Typemock Isolator for SharePoint developers, Typemock Isolator for web developers, TestDriven.NET - http://www.typemock.com/mvp
    - SpeakFlow: A suite of tools for creating, managing, and delivering non-linear presentations - http://www.speakflow.com/
    - TechSmith: Camtasia Studio, SnagIt, Screencast - http://www.techsmith.com/camtasia.html
    - Altova: Altova XMLSpy - http://www.altova.com/xml-editor/
    - Visual SVN: VisualSVN Subversion integration plug-in for Visual Studio - http://www.visualsvn.com/visualsvn/purchase/mvp/
    - PreEmptive Solutions: Professional PreEmptive Analytics, Dotfuscator - http://www.preemptive.com/landing/mvp
    - Armadillo: Armadillo Adaptive Bug Prevention - http://www.armadilloverdrive.com/
    - IS Decisions: NFR licenses for UserLock, RemoteExec, FileAudit & WinReporter - http://www.isdecisions.com/download/mvp-mct-program.htm
    - Idera: SQL tools - http://www.idera.com/Content/Home.aspx
    - West Wind Help Builder: Help builder solution - http://www.west-wind.com/weblog/posts/2005/Mar/09/Are-you-a-Microsoft-MVP-Get-a-FREE-copy-of-West-Wind-Html-Help-Builder
    - Bamboo: SharePoint tools - http://community.bamboosolutions.com/blogs/partner-advantage-program/archive/2008/08/01/partner-advantage-program-mvp.aspx
    - Nitriq: Nitriq code analysis - http://blog.nitriq.com/FreeLicensesForMicrosoftMVPs.aspx
    - ByteScout: Components, libraries and developer tools - http://bytescout.com/buy/purchase_nfr_for_mvp.html
    - YourKit: Java and .NET profiler - http://yourkit.com/.net/profiler/index.jsp
    - Aspose: .NET components - http://www.aspose.com/corporate/community/2012_05_08_nfr-licenses-for-community-leaders.aspx

    Apart from Google/Bing fu, StackOverflow and breathtech were a great help in compiling the above list. If you know of any other benefits, offers or complimentary subscriptions on offer for MVPs not covered in the list above, please add to the comment thread and I’ll have the list updated. Enjoy!

    Read the article

  • Visual Studio & TFS – List of addins, extensions, patches and hotfixes – Latest and Greatest

    - by terje
    This post is a list of the addins and extensions we (I) recommend for use in Inmeta.  It’s coming up all the time – what to install, where are the download sites, etc etc, and thus I thought it better to post it here and keep it updated. The basics are Visual Studio 2010 connected to a Team Foundation Server 2010.  The edition of Visual Studio I use is the Ultimate Edition, but as many stay with the Premium Edition I’ve marked the extensions which only work with the Ultimate edition. I’ve also split the group into Recommended (which means Required) and Optional (which means Recommended) and Nice to Have (which means Optional).

    The focus is to get a setup which can be used for a complete coding experience for the whole ALM process.  The Code Gallery is found either through the Tools/Extension Manager menu in Visual Studio or through this link. The ones to really download are in the Recommended category.  Then consider the Optional based on your needs.  The list of course reflects what I use for my work, so it is by no means complete, and for some of the tools there are equally useful alternatives.  The components directly associated with Visual Studio from Microsoft should be common, see the Microsoft column.

    Product | Available on Code Gallery | Latest Version | License | Rec/Opt/N2H | Applicable to | Microsoft
    TFS Power Tools Sept 2010 | Complete setup msi on link, split into parts on CG | Sept 2010 | Free | Recommended | TFS integration | Yes
    Productivity Power Tools | Yes | 10.0.11019.3 | Free | Recommended | Coding | Yes
    Code Contracts | No | 1.4.30903 | Free | Recommended | Coding & Quality | Yes
    Code Contracts Editor Extensions | Yes | 1.4.30903 | Free | Recommended | Coding & Quality | Yes
    VSCommands | Yes | 3.6.4.1 Lite version | Free (Good enough) | Nice to have | Coding | No
    Power Commands | Yes | 1.0.2.3 | Free | Recommended | Coding | Yes
    FeaturePack 2 | No. MSDN Subscriber download under Visual Studio 2010 FP2 | - | Part of MSDN Subscription | Recommended | Modeling & Testing | Yes
    ReSharper | No (Trial only) | 5.1.1 | Licensed | Recommended | Coding & Quality | No
    dotTrace | No | 4.0.1 | Licensed | Optional | Quality | No
    NDepend | No (Trial only) | - | Licensed | Optional | Quality | No
    tangible T4 editor | Yes | 1.950 Lite version | Free (Good enough) | Optional | Coding (T4 templates) | No
    Reflector | No (Trial of Pro version only) | 6.5 Lite version | Free (Good enough) | Recommended | Coding/Investigation | No
    LinqPad | No | 4.26.2 | Licensed | Nice to have | Coding | No
    Beyond Compare | No | 3.1.11 | Licensed | Recommended | Coding/Investigation | No
    Pex and Moles | No (Moles available alone on CG); complete on MSDN Subscriber download under Visual Studio 2010 | 0.94.51023 | Part of MSDN Subscription | Optional | Coding & Unit Testing | Yes
    ApexSQL | No | - | Licensed | Nice to have | SQL | No

    Some important patches, upgrades and fixes:

    Product | Date | Information | Rec/Opt | Applicable to
    Scrolling context menu KB2345133 and KB2413613 | October 2010 | Here | Recommended | Visual Studio
    MTM Patch | October 2010 | Here and here, KB2387011 | Recommended (if you use MTM) | MTM
    MTM Data warehouse fix | June 2010 | Iteration dates fail with SQL 2008 R2 (KB2222312); affects Burndown chart in Agile workbook | Only for SQL 2008 R2 | Server
    Upgrade 2008 to 2010 issue and hotfix | August 2010 | Fixes problems with labels and branches which are lost during upgrade. Apply before upgrade. Note: this has been fixed in the latest re-release of the TFS Server dated Aug 5th 2010 (see here); downloading the latest bits is recommended | Only if upgrading from earlier 2008 bits | Server

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
    There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in the original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

    So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up.

    Part 1: Your First Web API [Video and code on the ASP.NET site]

    This screencast starts with an overview of why you'd want to use ASP.NET Web API:

    - Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.)
    - Scale (who doesn't love the cloud?!)
    - Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions)

    Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Net.Http;
      using System.Web.Http;

      namespace NewProject_Mvc4BetaWebApi.Controllers
      {
          public class ValuesController : ApiController
          {
              // GET /api/values
              public IEnumerable<string> Get()
              {
                  return new string[] { "value1", "value2" };
              }

              // GET /api/values/5
              public string Get(int id)
              {
                  return "value";
              }

              // POST /api/values
              public void Post(string value)
              {
              }

              // PUT /api/values/5
              public void Put(int id, string value)
              {
              }

              // DELETE /api/values/5
              public void Delete(int id)
              {
              }
          }
      }

    Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like.

    A few important things to note:

    - This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API and HTML-returning controller uses are similar, and different where API and HTML use differ.
    - A good example of where those things are different is in the routing conventions. In an HTTP controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions. We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController.
    - The comments above the API methods show sample URLs and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL. That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5.

    That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
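
    To make the verb-based dispatch concrete, this is roughly the default route registration a new Web API project wires up. A minimal sketch: in the RTM templates this lives in an App_Start/WebApiConfig.cs file, though in the beta shown in the screencast the same route is registered in Global.asax, so treat the class name as approximate:

      using System.Web.Http;

      public static class WebApiConfig
      {
          public static void Register(HttpConfiguration config)
          {
              // Note there is no {action} token: the controller comes from the
              // URL, and the action is chosen by matching the HTTP verb
              // (GET/POST/PUT/DELETE) to a method name on the ApiController.
              config.Routes.MapHttpRoute(
                  name: "DefaultApi",
                  routeTemplate: "api/{controller}/{id}",
                  defaults: new { id = RouteParameter.Optional });
          }
      }

    With that single route, GET /api/values binds to Get() and DELETE /api/values/5 binds to Delete(int id), exactly as the screencast demonstrates.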

    Read the article

  • Is Agile the new micromanagement?

    - by Smith James
    Hi,

    This question has been cooking in my head for a while so I wanted to ask those who are following agile/scrum practices in their development environments.

    My company has finally ventured into incorporating agile practices and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months with 3 iterations and they continue to do it without going fully agile for the rest of us. This is due to management's limited trust that we can meet business requirements, along with quite a bit of ad hoc type requests from high above.

    Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend for kicks who is in the agile team, I am not allowed without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, or of agile, is to provide a complete vacuum for agile developers from any interruptions and to have them put in a good 6+ productive hours.

    Well, guys, I am no agile guru, but from what I have read of the Yahoo agile rollout document and similar ones from other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and correct issues as they arrive to put them back on track. For starters, it requires training for developers and coaching for managers and etc, etc... The current Scrum master is a manager who took a couple of days of agile training classes, paid for by the management, and is now leading this agile team. I have also heard in a meeting that the agile manifesto doesn't dictate the process - that agile is not set in stone and is customized differently for each company. Well, it all sounds good and reasonable.

    In conclusion, I always thought agile was supposed to bring harmony to development teams, which results in happy developers. However, I am getting a very opposite feeling when talking to the developers in the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me please, is this one of those examples of good practices used for the purpose of selfish advantage for more dollars? Or maybe it's just us developers, like me and this agile team, who don't like to work in an environment where we only breathe work because we are at work. Thanks.

    Edit: It's a company in the healthcare domain that has offices across the US, but we're in Texas. It definitely feels like cowboy-style agile, which makes me really not want to go for agile at all, especially at my current company. All of it has to do with the management being completely cheap. Cutting out expensive coffee for a cheaper version, emphasis on savings and being productive while staying as lean as possible. My feeling is that someone in the management behind the door threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or, maybe, it will allow us to reduce headcount if that's the case.

    EDITED: They are having their 5 min daily meeting. But they are not allowed to chat or talk with someone outside of their team. All focus is on work.

    Read the article

  • Introducing .NET 4.0 with Visual Studio 2010 by Alex Mackey - Book review

    - by Malisa L. Ncube
    Alex (http://simpleisbest.co.uk/) does a very good job in covering the new features of .NET 4.0 and Visual Studio 2010. His focus is on developers that have experience in development using previous versions of Visual Studio, more specifically Visual Studio 2008. The following are my views on his book.

    1. Scope / Coverage

    Even though the book is labeled as an introduction, it covers a broad spectrum of technologies, features and references that are focused on helping a developer quickly decide what to use in the new .NET framework.

    a. Content

    The content covers, as much as possible, the new additions that are included in the new .NET version 4.0. He shows the Visual Studio 2010 new features and quickly shows how to extend the IDE using the Managed Extensibility Framework. Some of my favorites are the parallel debugging enhancements. The author delves into jQuery, which Microsoft has decided to support. Some of the very interesting content is on the out-of-band releases, including ASP.NET MVC, Windows Azure, Silverlight 3 and WCF Data Services.

    b. What is not included?

    - Windows Phone 7 Series. This was only talked about at MIX10. The material may not have been available at the time of writing.
    - Microsoft Pinpoint (Microsoft code name "Dallas")
    - Windows Embedded development

    c. Table of Contents

    Chapter 1: Introduction
    Chapter 2: Visual Studio IDE and MEF
    Chapter 3: Language and Dynamic Changes
    Chapter 4: CLR and BCL Changes
    Chapter 5: Parallelization and Threading Enhancements
    Chapter 6: Windows Workflow Foundation 4
    Chapter 7: Windows Communication Foundation
    Chapter 8: Entity Framework
    Chapter 9: WCF Data Services
    Chapter 10: ASP.NET
    Chapter 11: Microsoft AJAX Library
    Chapter 12: jQuery
    Chapter 13: ASP.NET MVC
    Chapter 14: Silverlight Introduction
    Chapter 15: WPF 4.0 and Silverlight 3.0
    Chapter 16: Windows Azure

    2. Depth

    The book avoids getting into depth on the topics presented, in order to present the new concepts on the assumption of the developer's existing knowledge. Code samples are in the book and exist mostly as snippets that are very easy to follow. There are no downloadable examples.

    3. Complexity

    The book is written in a very simple way and is easy to follow. There are no irrelevant intimidating details. So it's a book that you can grab and never put down until you've finished reading the entire book.

    4. References

    The author includes reference links to blogs, wikis and a lot of online resources, including the MSDN documentation, which is a very convenient strategy to avoid flooding the reader with details which may not be of interest to them. Most sites do not use URL routing and that is really not nice. There are notes from interviews between the author and the people behind the new technologies, in which they explain some specific areas that need clarification and what their future views are in relation to the features they are working on.

    5. Target

    The author targets experts that want to make a transition from .NET 3.5 to 4.0. Some obvious 3.5 features have been purposely excluded from the text.

    6. Overall

    It is evident that the author has made extensive research into the breadth of what MS is working on in relation to .NET and Visual Studio, and has also been watching the online community. What I would like to see in the next edition are some details on the OData protocol, Expression Blend 4, Embedded development and Windows Phone development. I should say I'm one of the beneficiaries of this book. Excellent work Alex.

    Technorati Tags: .NET, Book-Review, Visual Studio

    Read the article

  • SQLAuthority News – Windows Efficiency Tricks and Tips – Personal Technology Tip

    - by pinaldave
    This is the second post in my series about my favorite Technology Tips, and I wanted to focus on my favorite Microsoft product.  Choosing just one topic to cover was too hard, though.  There are so many interesting things I have to share that I am forced to turn this second installment into a five-part post.  My five favorite Windows tips and tricks.

    1) You can open multiple applications using the task bar.

    With the new Windows 7 taskbar, you can start navigating with just one click.  For example, you can launch Word by clicking on the icon on your taskbar, and if you are using multiple different programs at the same time, you can simply click on the icon to return to Word.  However, what if you need to open another Word document, or begin a new one?  Clicking on the Word icon is just going to bring you back to your original program.  Just click on the Word icon again while holding down the shift key, and you’ll open up a new document.

    2) Navigate the screen with the touch of a button – and not your mouse button.

    Yes, we live in a pampered age.  We have access to amazing technology, and it just gets better every year.  But have you ever found yourself wishing that right when you were in the middle of something, you didn’t have to interrupt your work flow by reaching for your mouse to navigate through the screen?  Yes, we have all been guilty of this pampered wish.  But Windows has delivered!  Now you can move your application window using your arrow keys.

    - Lock the window to the left or right half of the screen: Win+Left Arrow and Win+Right Arrow
    - Maximize & minimize: Win+Up Arrow and Win+Down Arrow
    - Minimize all items on screen: Win+M
    - Return to your original folder, or browse through all open windows: Alt+Up Arrow, Alt+Left Arrow, or Alt+Right Arrow
    - Close down or reopen all windows: Win+Home

    3) Are you one of the few people who still uses Command Prompt?

    You know who you are, and you aren’t ashamed to still use this option that so many people have forgotten about.  You can easily access it by holding down the shift key while RIGHT clicking on any folder.

    4) Quickly select multiple files without using your mouse.

    We all know how to select multiple files or folders by Ctrl-clicking or Shift-clicking multiple items.  But all of us have tried this, and then accidentally released Ctrl, only to lose all our precious work.  Now there is a way to select only the files you want through a check box system.  First, go to Windows Explorer, click Organize, and then “Folder and Search Options.”  Go to the View tab, and under advanced settings, you can find a box that says “Use check boxes to select items.”  Once this has been selected, you will be able to hover your mouse over any file and a check box will appear.  This makes selecting multiple, random files quick and easy.

    5) Make more out of remote access.

    If you work anywhere in the tech field, you are probably the go-to for computer help with friends and family, and you know the usefulness of remote access (ok, some of us use this extensively at work, as well, but we all have friends and family who rely on our skills!).  Often it is necessary to restart a computer, which is impossible in remote access as the computer will not show the shutdown menu.  To force the computer to do your wishes, we return to Command Prompt.  Open Command Prompt and type “shutdown /s” for shutdown, or “shutdown /r” for restart.

    I hope you find the five tricks above, which I use daily, very useful.
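
    If you ever want to script tip #5 rather than type it, the same command can be launched from a few lines of C#. A minimal sketch, with the flags taken straight from the tip above:

      using System.Diagnostics;

      class RemoteRestart
      {
          static void Main()
          {
              // "/r" restarts the machine; swap in "/s" to shut it down instead.
              // "/t 0" skips the default countdown before the action runs.
              Process.Start("shutdown", "/r /t 0");
          }
      }
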
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Personal Technology

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on.

    "The idea of having unit tests that cover virtually every line of code that I’ve written that I have to refactor every time I refactor my code makes me shudder. Doing this way makes me take nearly twice as long as it would otherwise take and I don’t feel like I get sufficient benefits from it."

    Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a sure-fire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it then stop and look back at your design.

    "Don’t get me wrong, I’m a big fan of unit tests.  I just prefer to write them after the code has stopped shaking a bit.  In fact most of my early testing is “manual”.  Either I write a small UI on top of my service that allows me to plug in values and try it or write some quick API tests that I throw away as soon as I have validated them."

    The problem with this is that a UI can make assumptions on your code that you then just unit test around, and very quickly the design becomes bad and the technical debt sweeps in. If you want to black-box test your code with a UI then do so after your TDD cycles, not before.

    "This is probably my biggest issue with a literal TDD interpretation.  TDD says you never write a line of code without a failing test to show you need it.  I find it leads developers down a dangerous path.  Without any help from a methodology, I have met way too many developers in my life that “back into a solution”.  By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another and when they are done, they’ve got a monstrosity of special cases each designed to handle one specific scenario.  There’s way more code than there should be and it’s way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases.  In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm.  For me, that’s a relatively monolithic exercise."

    TDD is a development practice, not a methodology. The danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, an assumption that TDD replaces a methodology is a mistake; combine it with whatever works for your team and business, but only good communication will help. A good naming scheme and structure for folders, files and tests can help you and your team isolate what tests are for what.
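
    To illustrate the hard-dependency point above, here is a hedged C# sketch (the interface and class names are mine, not Brian's): when the test depends on an abstraction rather than a concrete collaborator, refactoring the concrete implementation does not force the test to be rewritten.

      // Hypothetical example: the calculator depends on an abstraction,
      // so the concrete rate source can be refactored or replaced without
      // touching this test.
      public interface IRateProvider
      {
          decimal GetRate(string currencyCode);
      }

      public class PriceCalculator
      {
          private readonly IRateProvider _rates;

          public PriceCalculator(IRateProvider rates)
          {
              _rates = rates;
          }

          public decimal Total(decimal amount, string currencyCode)
          {
              return amount * _rates.GetRate(currencyCode);
          }
      }

      // A simple test double keeps the unit test isolated from any real feed.
      public class FixedRateProvider : IRateProvider
      {
          public decimal GetRate(string currencyCode) { return 1.2m; }
      }

      public class PriceCalculatorTests
      {
          public void Total_applies_the_rate()
          {
              var calculator = new PriceCalculator(new FixedRateProvider());
              System.Diagnostics.Debug.Assert(calculator.Total(10m, "GBP") == 12m);
          }
      }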

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s).

    The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4j, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures.

    What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI?

    Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS?

    What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?
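
    To make the Variety point concrete, here is a minimal sketch using the official MongoDB C# driver (the 2.x API; the connection string and the "demo"/"events" names are placeholders): two differently shaped documents land in the same collection with no schema change.

      using MongoDB.Bson;
      using MongoDB.Driver;

      class VarietyDemo
      {
          static void Main()
          {
              var client = new MongoClient("mongodb://localhost:27017");
              var events = client.GetDatabase("demo")
                                 .GetCollection<BsonDocument>("events");

              // A web click and a trade sit side by side in one collection;
              // neither insert required altering a schema first.
              events.InsertOne(new BsonDocument { { "type", "click" }, { "page", "/home" } });
              events.InsertOne(new BsonDocument
              {
                  { "type", "trade" }, { "symbol", "ORCL" }, { "qty", 100 }
              });
          }
      }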

    Read the article

  • Do We Indeed Have a Future? George Takei on Star Wars.

    - by Bil Simser
    George Takei (rhymes with Okay), probably best known for playing Hikaru Sulu on the original Star Trek, has always had deep concerns for the present and the future. Whether on Earth or among the stars, he has the welfare of humanity very much at heart. I was digging through my old copies of Famous Monsters of Filmland, a great publication on monsters and films that I grew up with, and came across this. This was his reaction to STAR WARS from issue 139 of Famous Monsters of Filmland and was written June 6, 1977. It is reprinted here without permission but I hope since the message is still valid to this day and has never been reprinted anywhere, nobody will mind me sharing it.

    STAR WARS is the most preposterously diverting galactic escape and at the same time the most hideously credible portent of the future yet. While I thrilled to the exploits that reminded me of the heroics of Errol Flynn as Robin Hood, Burt Lancaster as the Crimson Pirate and Buster Crabbe as Flash Gordon, I was at the same time aghast at the phantasmagoric violence technology can place at our disposal. STAR WARS raised in my mind the question - do we indeed have a future?

    It seems to me what George Lucas has done is to masterfully guide us on a journey through space and time and bring us back face to face with today's reality. STAR WARS is more than science fiction, I think it is science fictitious reality.

    Just yesterday, June 7, 1977, I read that the United States will embark on the production of a neutron bomb - a bomb that will kill people on a gigantic scale but will not destroy buildings. A few days before that, I read that the Pentagon is fearful that the Soviets may have developed a warhead that could neutralize ours that have a capacity for that irrational concept overkill to the nth power. Already, it seems we have the technology to realize the awesome special effects simulations that we saw in the film.

    The political scene of STAR WARS is that of government by force and power, of revolutions based on some unfathomable grievance, survival through a combination of cunning and luck and success by the harnessing of technology - a picture not very much at variance from the political headlines that we read today.

    And most of all, look at the people; both the heroes in the film and the reaction of the audience. First, the heroes; Luke Skywalker is a pretty but easily led youth. Without any real philosophy to guide him, he easily falls under the influence of a mystical old man believed previously to be an eccentric hermit. Recognize a 1960's hippie or a 1970's moonie? Han Solo has a philosophy coupled with courage and skill. His philosophy is money. His proficiency comes for a price - the highest. Solo is a thoroughly avaricious mercenary. And the Princess, a decisive, strong, self-confident and chilly woman. The audience cheered when she wielded a gun. In all three, I missed qualities that could be called humane - love, kindness, yes, I missed sensuality. I also missed a sense of ideals and faith. In this regard the machines seemed more human. They demonstrated real affection for each other and an occasional poutiness. They exhibited a sense of fidelity and constancy. The machines were humanized and the humans conversely seemed mechanical.

    As a member of the audience, I was swept up by the sheer romantic escapism of it all. The derring-do, the rope swing escape across the pit, the ray gun battles and especially the swashbuckling with the ray swords. Great fun!

    But I just hope that we weren't too intoxicated by the escapism to be able to focus on the recognizable. I hope the beauty of the effects didn't narcotize our sensitivity to violence. I hope the people see through the fantastically well done futuristic mirrors to the disquieting reflection of our own society. I hope they enjoy STAR WARS without being "purely entertained".

    Read the article

  • Emoti-phrases

    - by Tony Davis
    Surely the next radical step in the development of User-interface design is for applications to react appropriately to the rising tide of bewilderment, frustration or antipathy of the users. When an application understands that rapport is lost, it should respond accordingly. When we, for example, become confused by an unforgiving interface, shouldn't there be a way of signalling our bewilderment and having the application respond appropriately? There is surprisingly little in the current interface standards that would help.

    If we're getting frustrated with an unresponsive application, perhaps we could let it know of our increasing irritation by means of an "I'm getting angry and exasperated" slider. Although, by 'responding appropriately', I don't include playing a "we are experiencing unusually heavy traffic: your application usage is important to us" message, accompanied by calming muzak. When confronted with a tide of wizards, 'are you sure?' messages, or page-after-page of tiresome and barely-relevant options, how one yearns for a handy 'JFDI' (Just Flaming Do It) button. One click and the application miraculously desists with its annoying questions and just gets on with the job, using the defaults, or whatever we selected last time.

    Much more satisfying, and more direct to most developers and DBAs, however, would be the facility to communicate to the application via a twitter-style input field, or via parameters to command-line applications ("I don't want a wide-ranging debate with you; just open the bl**dy PDF!", or "Don't forget which of us has the close button"). Although to avoid too much cultural-dependence, perhaps we should take the lead from emoticons, and use a set of standardized emoti-phrases such as 'sez you', 'huh?', 'Pshaw!', or 'meh', which could be used to vent a range of feelings in any given application, whether it be SQL Server stubbornly refusing to give us the result we are expecting, or when an online survey is getting too personal.

    Or a 'Lingua Glaswegia' perhaps:

    - 'Atsabelter' ("very good")
    - 'Atspish' ("must try harder")
    - 'AnThenYerArsFellAff' ("I don't quite trust these results")
    - 'BileYerHeid' or 'ShutYerGub' ("please stop these inane questions")

    There would, of course, have to be an ANSI standards body to define the phrases that were acceptable. Presumably, there would be a tussle amongst the different international standards organizations. Meanwhile Oracle, Microsoft and Apple would each release non-standard extensions. Time then, surely, to plant emoti-phrases on the lot of them and develop a user-driven consensus. Send us your suggestions! The best one will win an iPod nano!

    Cheers!
    Tony.

    Read the article

  • Building a Mafia…TechFest Style

    - by David Hoerster
    It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy.  We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time – Pittsburgh TechFest 2012.  After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together.  I think what inspired me the most about TechFest was how people from various technical communities were able to come together and build and promote a common event.  As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities.  (Hopefully I've got all my facts straight.)

    TechFest started as an idea Eric Kepes and I had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011.  Our Spring 2011 Code Camp was a little different because we had a great infusion of some folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog).  The line-up was great, but we felt our audience wasn’t as broad as it should have been.  We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference.

    We started contacting leaders from Pittsburgh’s various user groups.  Eric and I split up the ones that we knew about, and we just started making contacts.  Most of the people we started contacting never heard of us, nor we them.  But we all had one thing in common – we ran user groups whose primary goal is educating our members to make them better at what they do.

    Amazingly, and I say this because I wasn’t sure what to expect, we started getting some interest from the various leaders.  One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer.  He’s helped us in the past with .NET Code Camps, is a Java developer (and leader in Pittsburgh’s Java User Group), works with Ruby and I’m sure a handful of other languages.  He helped make some e-introductions to other user group leaders, and the whole thing just started to snowball.

    Once we realized we had enough interest with the user group leaders, we decided to not have a Fall Code Camp and instead focus on this new entity.

    Flash-forward to October of 2011.  I set up a meeting, with the help of Jeremy Jarrell (Pittsburgh Agile leader), with the leaders of many of Pittsburgh’s technical user groups.  We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell) – 14 people.  We likened it to a scene from a Godfather movie where the heads of all the families come together to make some deal.  As a result, the name “TechFest Mafia” was born and kind of stuck.

    Over the next 7 months or so, we had our starts and stops.  There were moments where I thought this event would not happen, either because we wouldn’t have the right mix of topics (was I off there!), or enough people register (OK, I was wrong there, too!), or find an appropriate venue (hmm…wrong there, too), or find enough sponsors to help support the event (wow…not doing so well).  Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting.  We also had a bit of luck, too.  But in the end, the passion that we had to put together an event that was really about making ourselves better at what we do really paid off.

    I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012.  From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event.  I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013.  See you there!

    Read the article

  • SQL SERVER – MSQL_XP – Wait Type – Day 20 of 28

    - by pinaldave
    In this blog post, I am going to discuss something from my field experience. During consultations I have seen various wait types, but one of my customers, who has been using SQL Server for all of his operations, had an interesting issue with a particular one. Our customer had more than 100 SQL Server instances running, and across the whole environment MSQL_XP was the most common wait type. While running sp_who2 and other diagnostic queries, I could not immediately figure out what the issue was, because no query with that kind of wait type was anywhere to be found. After a day of research, I was relieved to find that the solution was very easy to figure out. Let us continue discussing this wait type. From Books Online: “MSQL_XP occurs when a task is waiting for an extended stored procedure to end. SQL Server uses this wait state to detect potential MARS application deadlocks. The wait stops when the extended stored procedure call ends.” MSQL_XP Explanation: This wait type is created by extended stored procedures. Extended stored procedures are executed within SQL Server; however, SQL Server has no control over them. Unless you know what the code of the extended stored procedure is and what it is doing, it is impossible to understand why this wait type comes up. Reducing MSQL_XP waits: As discussed, it is hard to understand an extended stored procedure if its code is not available. In the scenario described at the beginning of this post, our client was using a third-party backup tool, and that tool used an extended stored procedure. After we learned that this wait type was coming from the extended stored procedure of the backup tool they were using, we contacted the vendor’s tech team. The vendor admitted that the code was not optimal in some places, and within the day they provided a patch. Once the updated version was installed, the issue with this wait type disappeared: looking at the wait statistics of all 100+ SQL Server instances, the MSQL_XP wait type was no longer to be found at the top. In simpler terms, you must first identify which extended stored procedure is creating the MSQL_XP wait type, and then see if you can get in touch with its creator so you can help them optimize the code. If you have encountered this MSQL_XP wait type, I encourage you to write about how you managed it. Please do not mention the name of the vendor in your comment, as I will not approve it. The focus of this blog post is to understand the wait types, not to talk about vendors. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience, and I make no claim that it is accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
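
    If you want to check whether this wait type is significant on one of your own instances, the cumulative wait statistics DMV is the quickest place to look. This is a minimal sketch using the standard sys.dm_os_wait_stats DMV (SQL Server 2005 and later); how large a number counts as a problem is a judgment call for your workload:

        -- Cumulative waits on extended stored procedure calls since the
        -- last service restart (or the last manual clearing of wait stats).
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'MSQL_XP';

    A consistently high wait_time_ms here points at external code (backup tools, mail XPs and the like) rather than at the SQL Server engine itself.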

    Read the article

  • What’s the use of code reuse?

    - by Tony Davis
    All great developers write reusable code, don’t they? Well, maybe, but as with all statements regarding what “great” developers do or don’t do, it’s probably an over-simplification. A novice programmer, in particular, will encounter in the literature a general assumption of the importance of code reusability. They spend time worrying about DRY (don’t repeat yourself), moving logic into specific “helper” modules that they can then reuse, and agonizing over the minutiae of the class structure, inheritance and interface design that will promote easy reuse. Unfortunately, writing code specifically for reuse often leads to complicated object hierarchies and inheritance models that are anything but reusable. If, instead, one strives to write simple code units that are highly maintainable and perform a single function in a concise, isolated fashion, then the potential for reuse simply “drops out” as a natural by-product. Programmers, of course, care about these principles, about encapsulation and clean interfaces that don’t expose inner workings and allow easy pluggability. This is great when it helps with the maintenance and development of code, but how often, in practice, do we actually reuse our code? Most DBAs and database developers are familiar with the practical reasons for the limited opportunities to reuse database code, and with its potential downsides. However, surely elsewhere in our code base, reuse happens often. After all, we can all name examples, such as date/time handling modules, which, if we write them with enough care, we can plug in to many places. I spoke to a developer just yesterday who looked me in the eye and told me that in 30+ years as a developer (a successful one, I’d add), he’d never once reused his own code. As I sat blinking in disbelief, he explained that, of course, he had always thought he would reuse it. He’d often agonized over its design, certain that he was creating code of great significance that he and future generations would reuse, with grateful tears misting their eyes. In fact, it never happened. He had in his head most of the algorithms he needed, and would simply write the code from scratch each time, refining the algorithms and tailoring the code to meet the specific requirements. It was, he said, simply quicker to do that than to dig out the old code, check it, correct the mistakes, and adapt it. Is this a common experience, or just a strange anomaly? Viewed in a certain light, building code with a focus on reusability seems to hark back to a past age when people built cars and music systems with the idea that someone else could and would replace and reuse the parts. Technology advances so rapidly that the next time you need the “same” code, it’s likely that a new technique, or a whole new language, has emerged in the meantime, better equipped to tackle the task. Maybe we should be less fearful of the idea that we could write code well suited to the system requirements, but with little regard for reuse potential, and then rewrite a better version from scratch the next time.

    Read the article

  • How to Code Faster (Without Sacrificing Quality)

    - by ashes999
    I've been a professional coder for several years. The comments about my code have generally been the same: writes great code, well-tested, but could be faster. So how do I become a faster coder, without sacrificing quality? For the sake of this question, I'm going to limit the scope to C#, since that's primarily what I code (for fun) -- or Java, which is similar enough in many ways that matter. Things that I'm already doing: Write the minimal solution that will get the job done Write a slew of automated tests (prevents regressions) Write (and use) reusable libraries for all kinds of things Use well-known technologies where they work well (e.g. Hibernate) Use design patterns where they fit into place (e.g. Singleton) These are all great, but I don't feel like my speed is increasing over time. I do care, because if I can do something to increase my productivity (even by 10%), that's 10% faster than my competitors. (Not that I have any.) Besides which, I've consistently gotten this feedback from my managers -- whether it was small-scale Flash development or enterprise Java/C++ development. Edit: There seem to be a lot of questions about what I mean by fast, and how I know I'm slow. Let me clarify with some more details. I worked in small and medium-sized teams (5-50 people) at various companies on various projects with various technologies (Flash, ASP.NET, Java, C++). The observation of my managers (which they told me directly) is that I'm "slow." Part of this is because a significant number of my peers sacrificed quality for speed; they wrote code that was buggy, hard to read, hard to maintain, and difficult to write automated tests for. My code generally is well-documented, readable, and testable. At Oracle, I would consistently solve bugs more slowly than other team members. I know this because I would get comments to that effect; this means that other (yes, more senior and experienced) developers could do my work in less time than it took me, at nearly the same quality (readability, maintainability, and testability). Why? What am I missing? How can I get better at this? My end goal is simple: if I can make product X in 40 hours today, and I can improve myself somehow so that I can create the same product in 20, 30, or even 38 hours tomorrow, that's what I want to know -- how do I get there? What process can I use to continually improve? I had thought it was about reusing code, but that's not enough, it seems.

    Read the article

  • ASP.NET MVC, Web API, Razor and Open Source

    - by ScottGu
    Microsoft has made the source code of ASP.NET MVC available under an open source license since the first V1 release. We’ve also integrated a number of great open source technologies into the product, and now ship jQuery, jQuery UI, jQuery Mobile, jQuery Validation, Modernizr.js, NuGet, Knockout.js and JSON.NET as part of it. I’m very excited to announce today that we will also release the source code for ASP.NET Web API and ASP.NET Web Pages (aka Razor) under an open source license (Apache 2.0), and that we will increase the development transparency of all three projects by hosting their code repositories on CodePlex (using the new Git support announced last week). Doing so will enable a more open development model where everyone in the community will be able to engage and provide feedback on code checkins, bug fixes, and new feature development, and will be able to build and test the products on a daily basis using the most up-to-date version of the source code and tests. We will also for the first time allow developers outside of Microsoft to submit patches and code contributions that the Microsoft development team will review for potential inclusion in the products. We announced a similar open development approach with the Windows Azure SDK last December, and have found it to be a great way to build an even tighter feedback loop with developers – and ultimately deliver even better products as a result. Very importantly - ASP.NET MVC, Web API and Razor will continue to be fully supported Microsoft products that ship both standalone as well as part of Visual Studio (the same as they do today). They will also continue to be staffed by the same Microsoft developers that build them today (in fact, we have more Microsoft developers working on the ASP.NET team now than ever before). Our goal with today’s announcement is to increase the feedback loop on the products even more, and allow us to deliver even better products.  We are really excited about the improvements this will bring. Learn More You can now browse, sync and build the source tree of ASP.NET MVC, Web API, and Razor on the http://aspnetwebstack.codeplex.com website.  The Git repository on the site is the live RC milestone development tree that the team has been working on for the last several weeks, and the tree contains both the runtime sources + tests, and is buildable and testable by anyone.  Because the binaries produced are bin-deployable, this allows you to compile your own builds and try product updates out as soon as they are checked in. You can also now contribute directly to the development of the products by reviewing and sending feedback on code checkins, submitting bugs and helping us verify fixes as they are checked in, suggesting and giving feedback on new features as they are implemented, as well as by submitting code fixes or code contributions of your own. Note that all code submissions will be rigorously reviewed and tested by the ASP.NET MVC Team, and only those that meet an extremely high bar for both quality and design/roadmap appropriateness will be merged into the source. Summary All of us on the team are really excited about today’s announcement – it has been something we’ve been working toward for many years.  The tighter feedback loop is going to enable us to build even better products, and take ASP.NET to the next level in terms of innovation and customer focus. Thanks, Scott P.S. In addition to blogging, I use Twitter to do quick posts and share links. My Twitter handle is: @scottgu

    Read the article

  • Kanban Tools Review

    - by GeekAgilistMercenary
    The first two sessions on Sunday were on collaboration and why it is so hard, followed by what turned out to be a perfect companion session on Kanban.  In that second session, two online SaaS-style tools were mentioned: AgileZen and LeanKit.  I decided right then and there that I would throw together some first impressions, so I set up an account with each and created some sample projects. AgileZen account creation: Setting up the initial account required an e-mail verification, which is understandable.  Within a few seconds it was mailed out and I was logged in. Setting up the Kanban Board: The initial setup of the board was pretty easy.  I may have clicked around an extra few times, but overall everything I needed to use the tool was immediately available.  The representation of everything was very similar to what one expects of a real Kanban Board, too.  This is a HUGE plus, especially if a team is smart and places this tool in a centrally viewable area to provide visibility. Each of the board items is just like a Post-it, being blue, grey, green, pink, or one of a few other colors.  Dragging them onto each swim lane on the board was flawless, making changes throughout the work super easy and intuitive. The other thing I really liked about AgileZen is that the Kanban Board had the swim lanes set up immediately.  One can change them, but when you know you immediately need a Ready lane, a Working lane, and a Complete lane, it is nice to just have them right in front of you in the interface.  In addition, the backlog is simply a little tab on the left-hand side.  This is perfect for the backlog queue: out of the way, with the focus on the primary items. Once I got the items onto the board, I was easily able to get back to the actual work at hand instead of playing around with the tool.  The fact that it was so easy to use, with a fast and simple UX and a great overall layout, put me back to work on the things I needed to do rather than fiddling with the tool.  That, in the end, is the key to using these tools. LeanKit Kanban account creation: Setting up the account got me straight into the online tool, which I thought was pretty cool. Setting up the Kanban Board: Setting up the Kanban Board within LeanKit was a bit of trouble.  There were multiple UX issues with regard to process and intuitiveness.  LeanKit basically forces one to design the whole board first, making no assumptions about how the board should look.  The swim lanes, in my humble opinion, should be set up immediately, without any manipulation, using the most common lanes: Ready, Working, and Complete. The other UX hiccup I had a problem with is that as soon as I managed to get the swim lanes into place, I wanted to remove the redundant backlog lane.  The backlog lane, or backlog bucket, should live somewhere out of the way; instead, I had accidentally added it as a regular lane.  Then, on top of that, I made the mistake of adding an item inside the lane, which then prevented me from deleting the lane.  I had to back out of the lane manipulation, remove the item, and then remove the excess lane.  Summary: LeanKit wasn't a bad interface; it just wasn't as good as AgileZen's.  The AgileZen interface simply has better UX design overall.  AgileZen also presents a much better graphical design altogether.  It is much closer to what the board would look like if it were a physical Kanban Board.
Since one of the HUGE reasons for Kanban is to increase visibility, the fact that the design is similar to a real Kanban Board is actually a pretty big deal. An image in the original post (click it for a larger version) shows the two Kanban Boards side by side: AgileZen on the left and LeanKit on the right.

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Forms application into a SharePoint 2007 (WSS version 3.0) site. In this post, I’ll lay out our architecture thinking, and in part two, I’ll describe the technical details. Business Case: Our challenge was this: we needed an easy way for a small group of our users to upload documents in batches.  They also needed to quickly set the metadata values, as well as set security on individual files. The out-of-the-box uploads just didn’t fit.  The single-file upload allows setting the metadata, but our users would be uploading dozens of files.  The multiple-file upload allows users to upload batches of files, but it doesn’t allow them to set the metadata during upload.  Also, neither upload method allows the users to set the permissions on a file. Our Solution: We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly).  Another option would have been using a technology like Silverlight (or Flash?), but our team didn’t have the skills necessary to build with these. So, after looking at what was technically possible, and also at what skills our team had, we settled on a Windows Forms application.  We also decided to deliver it to the clients via ClickOnce, so we would have the ability to easily update the application in the future. Lessons Learned: After deploying our solution, we learned a few lessons.  First, you’ll need to have the .NET Framework installed on the client computers.  We knew this, but we still ran into issues making sure our users had the proper framework version installed.  Second, we had issues with authentication.  Our issues were due to our testing domain being a separate Active Directory domain from the one that our end users and their workstations were members of.  (See my earlier post about Clearing Saved Passwords for the fix to our problem.) Our third issue was how we dealt with uploading files that were named the same.  Our application would replace the existing file with the new file, which is the way we expected it to work.  However, our users wanted to upload weekly reports named the same as the previous week’s.  We solved this by using folders within the document library to keep each week’s set of reports separate from the previous weeks’. One last thing to consider before implementing a solution like this is which browsers and platforms your users will be working from.  We only needed to support IE and Windows, which works fine.  However, if you need to support Firefox, there are add-ons that allow ClickOnce to work with it.  This is still a Windows-only solution, though.  In order to support Macs, you’d have to focus on either browser techniques (AJAX?) or Silverlight/Flash. Summary: Our users are happy with the ClickOnce app.  It allowed them to move all of their content to our SharePoint site in under a couple of hours, which they were thrilled with.  We’re happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.

    Read the article

  • ASP.NET MVC: Simple view to display contents of DataTable

    - by DigiMortal
    In one of my current projects I have to show reports based on SQL Server views. My code should not be aware of the data it shows; it just asks the view for data and displays it to the user. As WebGrid didn’t seem to work with DataTable (at least not without some hocus-pocus), I wrote my own very simple view that shows the contents of a DataTable. I am not focusing right now on data-querying questions, as that part of my simple generic reporting stuff is still under construction. If the final result is good enough to share with a wider audience, I will blog about it for sure. My view uses DataTable as its model. It iterates through the columns collection to get the column names, and then iterates through the rows and writes out the values of all columns. Nothing special, just a simple generic view for a DataTable.

        @model System.Data.DataTable
        @using System.Data;

        <h2>Report</h2>

        <table>
            <thead>
                <tr>
                @foreach (DataColumn col in Model.Columns)
                {
                    <th>@col.ColumnName</th>
                }
                </tr>
            </thead>
            <tbody>
            @foreach (DataRow row in Model.Rows)
            {
                <tr>
                @foreach (DataColumn col in Model.Columns)
                {
                    <td>@row[col.ColumnName]</td>
                }
                </tr>
            }
            </tbody>
        </table>

    In my controller action I have code like this. GetParams() is a simple function that reads parameter values from the form. This part of my simple reporting system is still under construction, but as you can see, it will be easy for UI developers to use.

        public ActionResult TasksByProjectReport()
        {
            var data = _reportService.GetReportData("MEMOS", GetParams());
            return View(data);
        }

    Before seeing the next silver bullet in this example, please calm down. It is just plain and simple stuff for simple needs. If you need an advanced and powerful reporting system, you are better off using existing components from some vendor.
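
    GetParams() itself isn’t shown in the post. A minimal hypothetical sketch of it (an assumption on my part, not the author’s code) would simply copy the posted form fields into a dictionary inside the same controller:

        // Hypothetical sketch, not the author's implementation: it assumes the
        // report service accepts a plain name/value dictionary of parameters.
        // Lives inside the same controller; needs System.Collections.Generic.
        private IDictionary<string, string> GetParams()
        {
            var parameters = new Dictionary<string, string>();

            // Request.Form holds the posted form fields (a NameValueCollection).
            foreach (string key in Request.Form.AllKeys)
            {
                parameters[key] = Request.Form[key];
            }

            return parameters;
        }

    Whatever its exact shape, keeping the parameter reading in one place leaves the controller with no knowledge of the report’s columns, which matches the post’s goal of a generic reporting pipeline.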

    Read the article

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project, and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this book, as well as most of the questions on legacy code I have found so far, is concerned with the case of inheriting code as-is. In this case, though, I actually have access to the original developer, and we have some time for an orderly hand-over. Some background on the piece of code I will be inheriting: It's functioning: there are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not-too-distant future. Undocumented: there is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well understood, because I have been writing against its API (as a black box) for years. Only higher-level integration tests: there are only integration tests testing proper interaction with other components via the API (again, black box). Very low-level, optimized for speed: because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records). Concurrent and lock-free: while I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity. Large codebase: this particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me. Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic. I was wondering how the time until his departure would best be spent. Here are a couple of ideas: Get everything to build on my machine: even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while? So this should probably be the first order of business. More tests: while I would like more class-level unit tests, so that any bugs I introduce when making changes can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies). What to document: I think for starters it would be best to focus documentation on those areas of the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that are in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I). How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best. Who should write the documentation: I was wondering what would be better: to have him write the documentation, or to have him explain it to me so that I can write it. I am afraid that things that are obvious to him but not to me would otherwise not be covered properly. Refactoring using pair-programming: this might not be possible due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are. 
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

    Read the article

  • Create a Search Filter List in Google Chrome

    - by Asian Angel
    Are you tired of unwanted or irrelevant results cluttering up your searches at Bing, Yahoo, and Google? With the Search Filter extension for Chrome you can easily remove the unwanted “chaff” from your search results. Note: the extension only works on Bing, Yahoo, and Google at this time. Before: For our example we conducted a search for “anime wallpapers” at Yahoo Singapore, Bing Singapore, and Google. In each set of results we decided to focus on results that displayed either a yellow or red warning color from WOT. You can see our targeted result for Yahoo Singapore… the one for Bing Singapore… and the targeted result from Google. Search Filter in Action: As soon as you install the extension you should take a quick look at the “Options”. At first the “Filters List Area” will be empty, but it will not remain so for long as you create your own filter list. The second part may or may not be of interest to you: the ability to opt into the filter service. If you opt in, your filter list will be connected to your “Google Account” and will be available on any of your Chrome installs with the extension installed (and set to “Opt In”). Keep in mind that if you choose this option, the filter list that you create will be aggregated anonymously and have a GUID number attached to it. After installing the extension we refreshed each of our three search pages… notice the small red circle button beside each search result link. Clicking on the red circle button will cause the entire browser window area to “shade out” temporarily while you decide between adding that website to the filter list or cancelling the action. If you add a website to the filter list, that result will immediately disappear from the search results list without refreshing the webpage. It looks like we have another website at the bottom that we could add to the filter list… Click, click, click! After adding one website from each of the three search services, you can see that our filter list has gotten off to a nice start. If you accidentally add a website to the list, or change your mind about a website, simply click on the red circle button to remove that particular listing. Conclusion: If you are looking for an easy way to create a search filter list, then this is definitely an extension worth taking the time to look at. Links: Download the Search Filter extension (Google Chrome Extensions) | Visit the Search Filter Hub Website to View Lists of Filtered Sites

    Read the article

< Previous Page | 666 667 668 669 670 671 672 673 674 675 676 677  | Next Page >