Search Results

Search found 4879 results on 196 pages for 'geeks'.

Page 115/196 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • APress Deal of the Day 2/June/2014 - Pro SQL Server 2012 Integration Services

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/02/apress-deal-of-the-day-2june2014---pro-sql-server.aspx Today’s $10 Deal of the Day from APress at http://www.apress.com/9781430236924 is Pro SQL Server 2012 Integration Services. “Pro SQL Server 2012 Integration Services is your key to building powerful extract, transform, load (ETL) solutions using SQL Server 2012 Integration Services (SSIS).”

    Read the article

  • Multiple instances of Intellitrace.exe process

    - by Vincent Grondin
    Not so long ago I was confronted with a very bizarre problem… I was using Visual Studio 2010, and whenever I opened up the Test Impact view my PC’s performance would suddenly drop drastically… Investigating this problem, I found out that hundreds of “IntelliTrace.exe” processes had been started on my system, and I could not close them, as they would restart as soon as I closed one. That was very weird. So I knew it had something to do with Test Impact, but how could this feature and IntelliTrace.exe going crazy be related? After a bit of thinking I remembered that a teammate (Etienne Tremblay, ALM MVP) had told me once that he had seen this issue before, just after installing a MOCKING FRAMEWORK that uses the .NET Profiler API… Apparently there’s a conflict between the Test Impact features of Visual Studio and some mocking products using the .NET Profiler API… Maybe because VS 2010 also uses this API for Test Impact purposes, I don’t know… Anyways, here’s the fix. Go to VS 2010 and click the “Test” menu, then go to “Edit Test Settings” and apply the following actions to EACH test settings file (normally 2 files, “Local” and “TraceAndTestImpact”):

    - Select the Data and Diagnostics option on the left
    - Make sure that the “ASP.NET Client Proxy for IntelliTrace and Test Impact” option is NOT selected
    - Make sure that the “Test Impact” option is NOT selected
    - Save and close

    Problem solved… For me, having to choose between the Test Impact features and the mocking framework was a no-brainer: bye bye, Test Impact… I did not investigate much on this subject, but I feel there might be a way to have them both working by enabling one after the other in a precise sequence… Feel free to leave a comment if you know how to make them both work at the same time! Hope this helps someone out there!

    Read the article

  • Create Pivot collections much faster than DeepZoomTools CollectionCreator class

    - by John Conwell
    I've been playing with Microsoft Live Labs Pivot to create a hierarchy of collections, all linked together to allow someone to explore a hierarchy of data visually. The problem has been the generation time of the entire hierarchy. I end up creating 500 - 600 collections total, and it takes hours and hours using the CollectionCreator class that comes with the DeepZoomTools. So, digging around, I found a way to make the actual DeepZoom collection creation wicked fast: don't use the CollectionCreator! It turns out Pivot doesn't actually use the image pyramid generated by the CollectionCreator. Or if it does, it's only when you open a new collection and it shows all the images zooming in. But once the zoom-in is complete, Pivot uses the individual DeepZoom images. What Pivot does need is the XML generated by the CollectionCreator, which is in a very simple format. So what I did was manually generate the XML for the collection image pyramid, then create the folder structure required (one folder per level of the pyramid), and put a single-pixel PNG file in each folder. Now I can create the required files and folders for 500 collections in about 10 seconds. Sweet! You still have to use the ImageCreator to create a DeepZoom image for each image in the collection, and that still takes some time, but at least the total processing time is way better.
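    A rough sketch of the folder trick in C# (the level-folder layout and the tile file name here are assumptions based on the description above, not the documented DeepZoom collection format; the collection XML itself still has to be written by hand as described):

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        class FakePyramidBuilder
        {
            // Create one folder per pyramid level and drop a 1x1 png in each,
            // standing in for the tiles CollectionCreator would have rendered.
            public static void Build(string collectionFilesDir, int maxLevel)
            {
                using (var pixel = new Bitmap(1, 1))
                {
                    for (int level = 0; level <= maxLevel; level++)
                    {
                        string levelDir = Path.Combine(collectionFilesDir, level.ToString());
                        Directory.CreateDirectory(levelDir);

                        // "0_0.png" is an assumed tile name - match whatever your hand-written XML references
                        pixel.Save(Path.Combine(levelDir, "0_0.png"), ImageFormat.Png);
                    }
                }
            }
        }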

    Read the article

  • Security Issue in LinkedIn - View any 3rd-degree profile without a premium account.

    - by Shaurya Anand
    Originally posted on: http://geekswithblogs.net/shauryaanand/archive/2013/06/25/153230.aspx I discovered this accidentally when my wife forwarded a contact on LinkedIn from her tablet, using the mobile interface of the website. On opening the contact on my desktop, I was surprised to see I needed to upgrade my account to view the contact. Doing some research along with my wife, I found this simple security vulnerability in LinkedIn that can let anyone view a contact’s full profile even with a non-upgraded LinkedIn account, when the contact is a “3rd + Everyone Else”. Here’s an example of what I am talking about. I just made a random search on LinkedIn for a contact whose name starts with Sacha. Do note, this is just a walkthrough and I am not publicizing any Sacha. I check the “3rd + Everyone Else” filter and find a “LinkedIn Member”. On clicking this person’s profile to view it, I am presented with a page asking me to upgrade. Make a note of this page’s web address and you can get the profile id from it. For example, for this contact, the page address is: http://www.linkedin.com/profile/view?id=868XXX35 The profile id for this contact is 868XXX35. Now, open the following page, where the profile id is the same as the one we grabbed a moment earlier: https://touch.www.linkedin.com/?#profile/868XXX35 The mobile page exposes this contact’s information, and you even get the possibility to connect to this person without an introduction mail (InMail). I hope someone from LinkedIn sees this and issues a fix. I am pretty sure it’s something that they don’t want users to be able to do without purchasing an upgrade package.

    Read the article

  • Creating a Document Library with Content Type in code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/15/154360.aspx

    In the past, I have shown how to create a list content type and add the content type to a list in code. As developers, many of the artifacts we create are widgets which have a list or document library as the back end. We need to be able to create our applications (web part, etc.) without having the user involved except to enter the list item data. Today, I will show you how to do the same with a document library. A summary of what we will do is as follows:

    1. Create an empty SharePoint project in Visual Studio
    2. Add a code folder to the solution and drag and drop the Utilities and Extensions libraries into the solution
    3. Create a new feature and add an event receiver; all the code will be in the event receiver
    4. Add the fields which will extend the built-in Document content type
    5. If the content type does not exist, create it
    6. If the document library does not exist, create it with the new content type inherited from the Document content type
    7. Delete the Document content type from the library (as we have a new one which inherited from it)
    8. Add the fields which we want to be visible from the fields added to the new content type

    Here we go:

    Create an empty SharePoint project in Visual Studio.

    Add a code folder to the solution and drag and drop the Utilities and Extensions libraries into the solution. The Utilities and Extensions libraries will be part of this project, for which I will provide a download link at the end of this post. Drag and drop them into your project. If dragged and dropped from Windows Explorer, you will need to show all files and then include them in your project. Change the namespace to agree with your project.

    Create a new feature and add an event receiver; all the code will be in the event receiver. Here we added a new feature called “CreateDocLib” and then right-clicked it to add an event receiver. All of our code will be in this event receiver. For this demo I will only be using the Feature Activated event.

    From this point on we will be looking at code! We are adding two constants: columnGroup (how we want SharePoint to group the columns, usually the company name) and ctName (the content type name).

        using System;
        using System.Runtime.InteropServices;
        using System.Security.Permissions;
        using Microsoft.SharePoint;

        namespace CreateDocLib.Features.CreateDocLib
        {
            /// <summary>
            /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade.
            /// </summary>
            /// <remarks>
            /// The GUID attached to this class may be used during packaging and should not be modified.
            /// </remarks>
            [Guid("56e6897c-97c4-41ac-bc5b-5cd2c04f2dd1")]
            public class CreateDocLibEventReceiver : SPFeatureReceiver
            {
                const string columnGroup = "DJ";
                const string ctName = "DJDocLib";
            }
        }

    Here we create the Feature Activated event: adding the new fields (site columns), testing whether the content type exists and, if not, adding it, then testing whether the document library exists and, if not, adding it.
        #region DocLib
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            using (SPWeb spWeb = properties.GetWeb() as SPWeb)
            {
                //add the fields
                addFields(spWeb);

                //add content type
                SPContentType testCT = spWeb.ContentTypes[ctName];
                // we will not create the content type if it exists
                if (testCT == null)
                {
                    //the content type does not exist, add it
                    addContentType(spWeb, ctName);
                }

                if ((spWeb.Lists.TryGetList("MyDocuments") == null))
                {
                    //create the list if it doesn't exist
                    CreateDocLib(spWeb);
                }
            }
        }
        #endregion

    The addFields method uses the utilities library to add site columns to the site. We can add as many fields within this method as we like. Here we are adding one for demonstration purposes: Icon, as a URL type.

        public void addFields(SPWeb spWeb)
        {
            Utilities.addField(spWeb, "Icon", SPFieldType.URL, false, columnGroup);
        }

    The addContentType method adds the new content type to the site content types. We have already checked that it does not exist. In addition, here is where we add the linkages from the site columns previously created to our new content type.

        private static void addContentType(SPWeb spWeb, string name)
        {
            SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Document"], spWeb.ContentTypes, name)
            {
                Group = columnGroup
            };
            spWeb.ContentTypes.Add(myContentType);
            addContentTypeLinkages(spWeb, myContentType);
            myContentType.Update();
        }

    Here we are adding just one linkage, as we only have one additional field in our content type.

        public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct)
        {
            Utilities.addContentTypeLink(spWeb, "Icon", ct);
        }

    Next we add the logic to create our new document library, which we have already checked does not exist. We create the document library and turn on content types, add the new content type, and then delete the old "Document" content type.

        private void CreateDocLib(SPWeb web)
        {
            using (var site = new SPSite(web.Url))
            {
                var web1 = site.RootWeb;
                var listId = web1.Lists.Add("MyDocuments", string.Empty, SPListTemplateType.DocumentLibrary);
                var lib = web1.Lists[listId] as SPDocumentLibrary;
                lib.ContentTypesEnabled = true;
                var docType = web.ContentTypes[ctName];
                lib.ContentTypes.Add(docType);
                lib.ContentTypes.Delete(lib.ContentTypes["Document"].Id);
                lib.Update();
                AddLibrarySettings(web1, lib);
            }
        }

    Finally, we set some document library settings on our new document library with the AddLibrarySettings method, and then ensure that the new site column is visible when viewed in the browser.

        private void AddLibrarySettings(SPWeb web, SPDocumentLibrary lib)
        {
            lib.OnQuickLaunch = true;
            lib.ForceCheckout = true;
            lib.EnableVersioning = true;
            lib.MajorVersionLimit = 5;
            lib.EnableMinorVersions = true;
            lib.MajorWithMinorVersionsLimit = 5;
            lib.Update();

            var view = lib.DefaultView;
            view.ViewFields.Add("Icon");
            view.Update();
        }

    Okay, what's cool here: in a few lines of code, we have created site columns, a content type and a document library. As a developer, I use this functionality all the time. For instance, I could now just add a web part to this same solution which uses this document library. I love SharePoint! Here is the complete solution: Create Document Library Code

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
    One of the earliest lessons I was taught in enterprise development was "always program against an interface". This was back in the VB6 days, and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects were each defined as an interface with a matching implementation class. Why? "It's more reusable" was one answer. "It doesn't tie you to a specific implementation" was a slightly more knowing answer. And let's not forget the discussion-ending "it's a standard". The problem with these responses was that the senior people didn't really understand the reason we were doing the things we were doing, and because of that we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it.

    It wasn't until a few years later that I finally heard the term "Inversion of Control". Simply put, Inversion of Control takes the creation of objects that used to be within the control (and therefore a responsibility) of your component and moves it to some outside force. For example, consider the following code, which follows the old "always program against an interface" rule in the manner of many corporate development shops:

        ICatalog catalog = new Catalog();
        Category[] categories = catalog.GetCategories();

    In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't hit "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object. If I want to test the functionality of the code I just wrote, I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc.) in order to test my functionality. That's a lot of setup work, and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops.

    So how do I test my code without needing Catalog to work? A very primitive approach I've seen is to change the line that instantiates catalog to read:

        ICatalog catalog = new FakeCatalog();

    Once the test is run and passes, the code is switched back to the real thing. This obviously poses a huge risk of introducing test code into production and in my opinion is worse than just keeping the dependency and its associated setup work. Another popular approach is to make use of factory methods, which use an object whose "job" is to know how to obtain a valid instance of the object. Using this approach, the code may look something like this:

        ICatalog catalog = CatalogFactory.GetCatalog();

    The code inside the factory is responsible for deciding "what kind" of catalog is needed. This is a far better approach than the previous one, but it does make projects grow considerably, because now in addition to the interface, the real implementation, and the fake implementation(s) for testing, you have added a minimum of one factory (or at least a factory method) for each of your interfaces. Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server". This is where software intended specifically to facilitate Inversion of Control comes into play.
    There are many libraries that take on the Inversion of Control responsibilities in .NET, and most of them have their own pros and cons. From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team. I'm primarily focusing on this library because questions about it inspired this posting.

    At the core of Unity, and of most any IoC framework, is a catalog or registry of components. This registry can be configured either through code or using the application's configuration file, and in the simplest terms says "interface X maps to concrete implementation Y". It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it". The object that exposes most of the Unity functionality is the UnityContainer. This object exposes methods to configure the catalog as well as the Resolve<T> method, which is used to obtain an instance of the type represented by T. When using the Resolve<T> method, Unity does not necessarily just "new up" the requested object; it can also track dependencies of that object and ensure that the entire dependency chain is satisfied.

    There are three basic ways that I have seen Unity used within projects: classes directly using the Unity container, classes requiring injection of dependencies, and classes making use of the Service Locator pattern.

    The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface. The upside of this approach is that IoC is utilized, but the downside is that every class has to be aware that Unity is being used and is tied directly to that implementation.

    Many developers don't like the idea of as close a tie to a specific IoC implementation as is represented by using Unity within all of your classes, and for the most part I agree that this isn't a good idea. As an alternative, classes can be designed for Dependency Injection. Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world. This is typically done either through constructor injection, where the object has a constructor that accepts an instance of each interface it requires, or through property setters accepting the service providers. When using dependency injection, I lean toward the use of constructor injection because I view the constructor as being a much better way to "discover" what is required for the instance to be ready for use. During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception if unable to meet the advertised needs of the class. The upside of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used. The downside of this approach is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object, and objects which coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring).
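    To make the registry and constructor injection concrete, here is a minimal sketch using Unity's RegisterType/Resolve API. ICatalog and Catalog are the names from the example above; CatalogService and the stub bodies are my own illustrative additions:

        using Microsoft.Practices.Unity;

        public class Category { }
        public interface ICatalog { Category[] GetCategories(); }
        public class Catalog : ICatalog
        {
            public Category[] GetCategories() { return new Category[0]; } // stub
        }

        // Designed for constructor injection: the class advertises its needs
        // through its constructor and never sees the container.
        public class CatalogService
        {
            private readonly ICatalog _catalog;

            public CatalogService(ICatalog catalog)
            {
                _catalog = catalog;
            }

            public int CountCategories() { return _catalog.GetCategories().Length; }
        }

        public static class CompositionRoot
        {
            public static CatalogService Build()
            {
                var container = new UnityContainer();
                // "interface X maps to concrete implementation Y"
                container.RegisterType<ICatalog, Catalog>();

                // Unity finds CatalogService's constructor and resolves ICatalog for it.
                return container.Resolve<CatalogService>();
            }
        }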
    The final way that I've seen and used Unity is to make use of the Service Locator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation. When using the ServiceLocator, your class calls ServiceLocator.Retrieve in places where it would have called Resolve on the Unity container. Like using Unity directly, it does tie you directly to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the upside of giving you the freedom to swap out the underlying IoC container if necessary. I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult-to-track runtime errors.

    This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code. I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".

    Read the article

  • Rocky Mountain Tech Trifecta v3.0

    - by Jeff Certain
    The Rocky Mountain Tech Trifecta is an annual event held in Denver in late February or early March. The last couple of these have been amazing events, with great speakers like Beth Massi, Scott Hanselman, David Yack, Kathleen Dollard, Ben Hoelting, Paul Nielsen… need I go on? Registration is open at http://www.rmtechtrifecta.com. The speaker list hasn’t been finalized, but it’s sure to be another great event. Don’t miss it!

    Read the article

  • Microsoft Async CTP for DDD9 UK Developer Conference - slides and source code now available

    - by Liam Westley
    Thanks to all the nice comments from people who attended my presentation at DDD9, and extra thanks to Jon Skeet, Mark Rendle and Mike Hadlow for coming on stage for the last ten minutes to help debate whether the Async CTP is the correct way to go to enhance C# 5.0. The presentation is available at Prezi.com, http://prezi.com/gysz5nohltye, which I can recommend as a refreshing change to the more standard PowerPoint slide decks. I've also uploaded all the code samples into a single ZIP file. You will need to install the Async CTP to be able to run them, and I would remind everyone that the current Async CTP is not compatible with either ASP.NET MVC 3 RTM or Visual Studio 2010 Service Pack 1, so you may need to use a test system or virtual machine to play with it! Source code - http://www.tigernews.co.uk/blog-twickers/ddd9/AsyncSrc.zip Again, thanks for all the positive feedback, and thanks to the whole of the DDD team for putting on a fantastic conference for all the presenters and delegates.
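    For anyone who couldn't make the session, here is a minimal flavour of what the CTP enables (the exact helpers the CTP shipped, such as the WebClient task extension used below, are assumptions on my part; check the samples in the ZIP for the canonical versions):

        using System;
        using System.Net;
        using System.Threading.Tasks;

        class AsyncDemo
        {
            // 'async' marks the method as resumable; 'await' hands control back
            // to the caller until the download task completes.
            static async Task<int> GetPageLengthAsync(string url)
            {
                var client = new WebClient();
                string body = await client.DownloadStringTaskAsync(new Uri(url));
                return body.Length;
            }

            static void Main()
            {
                // Blocking on .Result is fine at the edge of a console demo.
                Console.WriteLine(GetPageLengthAsync("http://example.com/").Result);
            }
        }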

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea. It's a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all, so I needed a profiler that supported IIS Express and that I could run my integration tests against, and Google gave me:

    http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise

    Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying (I would like to decide whether I want a browser opened), but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath.

    Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous.

    I did find that if you switch to Methods Grid before the call tree has loaded, you do not get the red warnings.

    StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago - I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around.

    Before refactoring, remember to Save Profile Results from the File menu. Annoyingly you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again.

    Having implemented StructureMap's ForGenericType, I ran my tests again and: win, thank you ANTS. (What does ANTS stand for, BTW?)

    There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't.

    Next I'm going to profile a load test.
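    For context, the shape of the fix is to register validators with StructureMap once at startup as an open generic, instead of resolving them reflectively per request. A sketch of that idea (IValidator<> and NullValidator<> are invented names, and this shows the generic For/Use registration style rather than the ForGenericType call mentioned above):

        using StructureMap;

        public interface IValidator<T>
        {
            void Validate(T instance);
        }

        public class NullValidator<T> : IValidator<T>
        {
            public void Validate(T instance) { } // no rules: accept everything
        }

        public static class Bootstrapper
        {
            public static Container Build()
            {
                return new Container(x =>
                {
                    // One open-generic mapping covers IValidator<Foo>, IValidator<Bar>, ...
                    // configured once, rather than hunted down via reflection per request.
                    x.For(typeof(IValidator<>)).Use(typeof(NullValidator<>));
                });
            }
        }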

    Read the article

  • FAQ - Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about “Highlighting a GridView Row on MouseOver”. I’ve noticed many members in the forums (http://forums.asp.net) asking how to highlight a row in a GridView and retain the selected row across postbacks, so I’ve decided to write this post to demonstrate how to implement it as a reference for others who might need it. In this demo I’m going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way, or this one: GridView Custom Paging with LINQ. To get started, let’s implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this:

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
        <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
        <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
        <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
        <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
            <Columns>
                <asp:BoundField DataField="Company" HeaderText="Company" />
                <asp:BoundField DataField="Name" HeaderText="Name" />
                <asp:BoundField DataField="Title" HeaderText="Title" />
                <asp:BoundField DataField="Address" HeaderText="Address" />
            </Columns>
        </asp:GridView>
        </asp:Content>

    Note: Since the action is done at the client side, when we do a postback (like clicking on a button) the page will be re-created and you will lose the highlighted row. This is normal, because the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the settings we will use some HiddenField controls to store the data, so that on postback we can reference the values from there.
    Now here are the JavaScript functions:

        <asp:content id="Content1" runat="server" contentplaceholderid="HeadContent">
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            var prevRowIndex;

            function ChangeRowColor(row, rowIndex) {
                var parent = document.getElementById(row);
                var currentRowIndex = parseInt(rowIndex) + 1;

                if (prevRowIndex == currentRowIndex) {
                    return;
                }
                else if (prevRowIndex != null) {
                    parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
                }

                parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";

                prevRowIndex = currentRowIndex;

                $('#<%= Label1.ClientID %>').text(currentRowIndex);

                $('#<%= hfParentContainer.ClientID %>').val(row);
                $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
            }

            $(function () {
                RetainSelectedRow();
            });

            function RetainSelectedRow() {
                var parent = $('#<%= hfParentContainer.ClientID %>').val();
                var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
                if (parent != null) {
                    ChangeRowColor(parent, currentIndex);
                }
            }
        </script>
        </asp:content>

    The ChangeRowColor() function sets the background color of the selected row. It is also where we store the parent row container and rowIndex values in the HiddenFields. The $(function(){}); is shorthand for the jQuery document.ready function. This function is fired once the page is posted back to the server, which is why we call the RetainSelectedRow() function there. The RetainSelectedRow() function is where we read the currently selected values stored in the HiddenFields and pass them to the ChangeRowColor() function to retain the highlighted row. Finally, here's the code-behind part:

        protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
            }
        }

    The code above is responsible for attaching the JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. Here's the sample output:

    That's it! I hope someone finds this post useful!

    Technorati Tags: jQuery,GridView,JavaScript,TipTricks

    Read the article

  • Silverlight Goes Mobile!

    - by PeterTweed
    The most exciting announcements from Mix 2010 last week, for me, were the release of the Windows Phone 7 Series SDK and the news that the platform utilizes Silverlight as the application development technology. From the press and exposure that the platform is being given, and the experience that is promised, it looks like the Windows Phone 7 Series could eventually compete with the iPhone. For me this is exciting, as Silverlight can now be used to develop RIA apps, easily deployed desktop apps and mobile apps. As someone who delivers enterprise technology solutions, this equates to a whole bunch of opportunity knocking at the door and asking to join the party. Watch this space for future posts on developing apps on the Windows Phone 7 Series platform!

    Read the article

  • The death of Kodak digital

    - by Ken Hortsch
    Months ago Kodak announced that it was discontinuing its digital video cameras to focus on “significant opportunities for profitable growth”. Three years ago I picked up the little Kodak Zi6 (pronounced Zix) for the kids for Christmas. It is an HD pocket video camera with a nice 3” LCD, all built into something a bit longer than a deck of cards. It is low tech and great! The kids have had a ball with it, and for around $100 it was perfect. It comes with 2 AA rechargeable batteries and the recharger. You can add an SD card, but don’t need to, and the USB is not a cable but a pop-out dongle, so everything is right in the one package. Too many companies look for the next big thing and fail to see the stuff that is good enough, and right in front of their eyes.

    Read the article

  • Richmond Code Camp 2010.1 - A Lap Around MEF

    - by John Blumenauer
    Thanks to all the attendees who came to my Lap Around MEF session at Richmond Code Camp today. It seems many developers are seeking ways to make their applications more dynamic and extensible. Hopefully, I provided them with a number of ideas on how to get started with MEF and utilize it to tackle this challenge. The slides from my session can be found HERE. If you experience any problems downloading the slides or code, please let me know.

    Read the article

  • The Road to New Orleans: IT Grand Prix

    - by Enrique Lima
    Four teams race for charity. They need your help. Four teams of MCPs are racing to TechEd in New Orleans on a quest to win $10,000 for the charity of their choice. But they can't win without your help--pick a team, join their pit crew, and earn them points toward victory! While they're on the ground, they need your help in the cloud--pick a team, join their virtual pit crew, and earn them points by meeting online challenges. Join us, be part of this amazing drive to raise awareness and help out by becoming part of the virtual pit crew. I am a pit crew member for the Gold Team.

    Read the article

  • Upgrade from ASP.NET MVC 1 to MVC 2 - how to, and issues with JsonRequestBehavior

    - by Renso
    Goal

    Upgrade your MVC 1 app to MVC 2.

    Issues

    You may get errors about your Json data being returned via a GET request violating security principles; we also address this here. This post is not intended to delve into why the Json GET request is or may be an issue, just how to resolve it as part of upgrading from MVC 1 to MVC 2.

    Solution

    First remove all references from your projects to the MVC 1 dll and replace them with the MVC 2 dll. Now update the web.config file in your web app root folder by simply changing the assembly="System.Web.Mvc" references from Version=1.0.0.0 to Version=2.0.0.0. There are a couple of references in your config file; here are most of the ones you may have:

        <compilation debug="true" defaultLanguage="c#">
            <assemblies>
                <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </assemblies>
        </compilation>

        <pages masterPageFile="~/Views/Masters/CRMTemplate.master" pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validateRequest="False">
            <controls>
                <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />

    Secondly, if you return Json objects from an ajax call via the GET method, you have several options to fix this, depending on your situation:

    1. The simplest (in my case I did this for an internal web app): simply do

        return Json(myObject, JsonRequestBehavior.AllowGet);

    2. In MVC, if you have a controller base class, you could wrap the Json method with:

        public new JsonResult Json(object data)
        {
            return Json(data, "application/json", JsonRequestBehavior.AllowGet);
        }

    3. The most work would be to decorate your actions with:

        [AcceptVerbs(HttpVerbs.Get)]

    4. Another option, which is also a lot of work because it needs to be done for every ajax call returning Json, is to switch the call to POST:

        msg = $.ajax({
            url: $('#ajaxGetSampleUrl').val(),
            dataType: 'json',
            type: 'POST',
            async: false,
            data: { name: theClass },
            success: function(data, result) {
                if (!result) alert('Failure to retrieve the Sample Data.');
            }
        }).responseText;

    This should cover all the issues you may run into when upgrading. Let me know if you run into any other ones.

    Read the article

  • Microsoft Releases TFS Scrum process template

    - by Matt deClercq
    Microsoft has announced and released a Beta version of a new process template for Team Foundation Server 2010 that is based purely on Scrum methodology and terms. You can download the new template from the link below: TFS Scrum v1.0 Beta. For a more in-depth review, see the Visual Studio Magazine announcement HERE.

    Read the article

  • Managing .NET Deployment Configuration With Rake

    - by Liam McLennan
    Rake is a Ruby internal DSL for build scripting. With (or without) the help of albacore, rake makes an excellent build scripting tool for .NET projects. The albacore documentation does a good job of explaining how to build solutions with rake, but there is nothing to assist with another common build task - updating configuration files. The following Ruby script provides some helper methods for performing common configuration changes that are required as part of a build process.

        require 'rexml/document'

        class ConfigTasks
          def self.set_app_setting(config_file, key, value)
            app_setting_element = config_file.root.elements['appSettings'].get_elements("add[@key='#{key}']")[0]
            app_setting_element.attributes['value'] = value
          end

          def self.set_connection_string(config_file, name, connection_string)
            conn_string_element = config_file.root.elements['connectionStrings'].get_elements("add[@name='#{name}']")[0]
            conn_string_element.attributes['connectionString'] = connection_string
          end

          def self.set_debug_compilation(config_file, debug_compilation)
            compilation_element = config_file.root.elements['system.web'].get_elements('compilation')[0]
            compilation_element.attributes['debug'] = debug_compilation.to_s
          end

          def self.write_xml_to_file(xml_document, file)
            File.open(file, 'w') do |config_file|
              formatter = REXML::Formatters::Default.new
              formatter.write(xml_document, config_file)
            end
          end
        end

    To use, require the file, load the configuration file as an REXML document, call the class methods, and write the document back out:

        require 'rexml/document'
        require './config_tasks'

        config = REXML::Document.new(File.read('web.config'))
        ConfigTasks.set_app_setting config, 'enableCache', 'false'
        ConfigTasks.write_xml_to_file config, 'web.config'

    Read the article

  • Devs For Wendy

    - by Brian Schroer
    If you’re a developer in the New York City area, please check out Devs For Wendy, benefitting Wendy Friedlander and her family… Wendy is a 30-year-old software agilista from Long Island. She's a strong WPF developer and a firm believer in the agile method of development, including pair programming and TDD. Wendy is a wife and the mother of a beautiful girl named Kaylee, who will be 2 in August. In August of 2009 Wendy learned that she had a rare and aggressive pediatric cancer called alveolar rhabdomyosarcoma. Her treatment consists of high-dose chemotherapy and radiation. She has had to leave her job, and her husband has been forced into part-time work in order to care for their daughter. Please join us at 7pm on July 7th 2010 for a dinner benefiting Wendy, brought to you by the NYC development community. You can also donate via PayPal.

    Read the article

  • Applications: The Mathematics of Movement, Part 2

    - by TechTwaddle
    In part 1 of this series we saw how we can make the marble move towards the click point with a fixed speed. In this post we’ll see, first, how to get rid of Atan2(), sine() and cosine() in our calculations, and, second, how to reduce the speed of the marble as it approaches the destination, so it looks like the marble is easing into its final position. As I mentioned in one of the previous posts, this is achieved by making the speed of the marble a function of the distance between the marble and the destination point.

    Getting rid of Atan2(), sine() and cosine()

    Ok, to be fair we are not exactly getting rid of these trigonometric functions, rather, replacing one form with another. So instead of writing sin(θ), we write y/length. You see the point. So instead of using the trig functions as below,

        double x = destX - marble1.x;
        double y = destY - marble1.y;

        //distance between destination and current position, before updating marble position
        distanceSqrd = x * x + y * y;

        double angle = Math.Atan2(y, x);

        //Cos and Sin give us the unit vector, 6 is the value we use to magnify the unit vector along the same direction
        incrX = speed * Math.Cos(angle);
        incrY = speed * Math.Sin(angle);

        marble1.x += incrX;
        marble1.y += incrY;

    we use the following:

        double x = destX - marble1.x;
        double y = destY - marble1.y;

        //distance between destination and marble (before updating marble position)
        lengthSqrd = x * x + y * y;
        length = Math.Sqrt(lengthSqrd);

        //unit vector along the same direction as vector(x, y)
        unitX = x / length;
        unitY = y / length;

        //update marble position
        incrX = speed * unitX;
        incrY = speed * unitY;

        marble1.x += incrX;
        marble1.y += incrY;

    So we replaced cos(θ) with x/length and sin(θ) with y/length. The result is the same.

    Adding oomph to the way it moves

    In the last post we had the speed of the marble fixed at 6,

        double speed = 6;

    To make the marble decelerate as it moves, we have to keep updating the speed of the marble in every frame, such that the speed is calculated as a function of the length. So we may have,

        speed = length / 12;

    'length' keeps decreasing as the marble moves, and so does speed.
    The Form1_MouseUp() function remains the same as before; here is the UpdatePosition() method:

        private void UpdatePosition()
        {
            double incrX = 0, incrY = 0;
            double lengthSqrd = 0, length = 0, lengthSqrdNew = 0;
            double unitX = 0, unitY = 0;
            double speed = 0;

            double x = destX - marble1.x;
            double y = destY - marble1.y;

            //distance between destination and marble (before updating marble position)
            lengthSqrd = x * x + y * y;
            length = Math.Sqrt(lengthSqrd);

            //unit vector along the same direction as vector(x, y)
            unitX = x / length;
            unitY = y / length;

            //speed as a function of length
            speed = length / 12;

            //update marble position
            incrX = speed * unitX;
            incrY = speed * unitY;
            marble1.x += incrX;
            marble1.y += incrY;

            //check for bounds
            if ((int)marble1.x < MinX + marbleWidth / 2)
            {
                marble1.x = MinX + marbleWidth / 2;
            }
            else if ((int)marble1.x > (MaxX - marbleWidth / 2))
            {
                marble1.x = MaxX - marbleWidth / 2;
            }

            if ((int)marble1.y < MinY + marbleHeight / 2)
            {
                marble1.y = MinY + marbleHeight / 2;
            }
            else if ((int)marble1.y > (MaxY - marbleHeight / 2))
            {
                marble1.y = MaxY - marbleHeight / 2;
            }

            //distance between destination and marble (after updating marble position)
            x = destX - (marble1.x);
            y = destY - (marble1.y);
            lengthSqrdNew = x * x + y * y;

            /*
             * End Condition:
             * 1. If there is not much difference between lengthSqrd and lengthSqrdNew
             * 2. If the marble has moved more than or equal to a distance of totLenToTravel (see Form1_MouseUp)
             */
            x = startPosX - marble1.x;
            y = startPosY - marble1.y;
            double totLenTraveledSqrd = x * x + y * y;

            if ((int)totLenTraveledSqrd >= (int)totLenToTravelSqrd)
            {
                System.Console.WriteLine("Stopping because Total Len has been traveled");
                timer1.Enabled = false;
            }
            else if (Math.Abs((int)lengthSqrd - (int)lengthSqrdNew) < 4)
            {
                System.Console.WriteLine("Stopping because no change in Old and New");
                timer1.Enabled = false;
            }
        }

    A point to note here is that, in this implementation, the marble never stops because it travelled a distance of totLenToTravelSqrd (the first if condition). This happens because speed is a function of the length. During the final few frames length becomes very small, and so does speed; the amount by which the marble shifts is then quite small, and the second if condition always hits true first.

    I’ll end this series with a third post. In part 3 we will cover two things. One, when the user clicks, the marble keeps moving in that direction, rebounding off the screen edges, and keeps moving forever. Two, when the user clicks on the screen, the marble moves towards the click point, with its speed reducing every frame. It doesn’t come to a halt when the destination point is reached; instead, it continues to move, rebounds off the screen edges and slowly comes to a halt. The amount of time that the marble keeps moving depends on how far the user clicks from the marble. I had mentioned this second situation here. Finally, here’s a video of this program running.

    Read the article

  • Introducing Microsoft SQL Server 2008 R2

    - by CatherineRussell
    I was thrilled to find a free ebook: Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The purpose of Introducing Microsoft SQL Server 2008 R2 is to point out both the new and the improved in the latest version of SQL Server. Great info, and they are sharing it for free. Feel free to follow the authors and ask questions on Twitter: @RossMistry @StaciaMisner. Here is the link to the book: http://blogs.msdn.com/b/microsoft_press/archive/2010/04/14/free-ebook-introducing-microsoft-sql-server-2008-r2.aspx

    Read the article

  • Accounts in Work Items after migration to TFS 2010 and to new domain

    - by Clara Oscura
    Lately I’ve been doing some tests on migrating our TFS 2008 installation to TFS 2010, coupled with a machine and domain change. One particular topic that was tricky is user accounts. We first installed a new machine with TFS 2010 and then migrated the projects from the old server. The work items were migrated with the projects. Great, but if I try to edit one of the old work items, I cannot save it anymore, because some fields contain old user names (e.g. OLDDOMAIN\user) which are not known in the new domain (it should be NEWDOMAIN\user). The errors look like this:

    When I correct the ‘Assigned To’ field value, I get another error regarding another field:

    Before TFS 2010, we had the TFSUsers power tool. It allowed you to map an old user name to a new user name. This is not available anymore, because WI fields with user accounts are now synchronized with AD display name changes (explained here). The correct way to go about this in TFS 2010 is to use TFSConfig Identities before adding the new domain accounts into the TFS groups (documented here; a sketch of the command appears at the end of this post). So, too late for us. I’ve found a (tedious) workaround to change those old accounts in work items in order to allow people to keep working with them.

    1. Install the TFS 2010 power tools.

    2. Export the WIT from your project (VS | Tools | Process Editor | Work Item Types). Save the definition, for example: Original_MyProject_Task.xml.

    3. Copy the xml (NoReadOnly_MyProject_Task.xml) and edit it. From the field definitions of ‘Activated By’, ‘Closed By’ and ‘Resolved By’, remove the following:

        <WHENNOTCHANGED field="System.State">
            <READONLY />
        </WHENNOTCHANGED>

    4. Import the WIT in VS. Choose the new file (NoReadOnly_MyProject_Task.xml) and import it in MyProject.

    5. Open all tasks in Excel (flat list). Display the following columns:

    Assigned To
    Activated By
    Closed By
    Resolved By

    Change the user accounts to the new ones (I usually sort each column alphabetically to make it easier).

    6. Publish. If you get a conflict on a field, tough luck. You will have to manually choose “Local version” for each work item.

    7. Import the original WIT (Original_MyProject_Task.xml) in MyProject. We only changed the WI definition so that we could change some fields; the original definition should be put back.

    And what about these other fields?

    Created By
    Authorized As

    These fields are not editable by definition (VS | Tools | Process Editor | Work Item Fields Explorer), even if they are not marked as read-only in the WIT. You can leave the old values; it doesn’t seem to matter to TFS. The other four fields are editable by definition, so only the WIT read-only rule prevents us from changing them.

    Technorati Tags: TFS,Team Foundation Server 2010,Work Item,Domain change
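    For reference, the supported TFSConfig route mentioned above is run before the new domain accounts are added to the TFS groups, and looks roughly like this (a sketch from memory; verify the exact switches against the TFS 2010 documentation for TFSConfig Identities):

        TFSConfig Identities /change /fromdomain:OLDDOMAIN /todomain:NEWDOMAIN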

    Read the article

  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about because I don’t think companies realize the effects of their service policies on loyal customers. Bad Customer Service Example #1 Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo and this deal is attractive for several reasons: I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me. My entertainment is almost all on a single invoice. I’m no longer going to be billed by suddenLink and TiVo. TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade. I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages. Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies – if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because, I should never be cancelling because I am unhappy. And, if you ever want my business again, or more importantly a reference, then you’d better make the exit door open just as easy as the enter door. After quite some time on hold, I talked to “Victor” who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo.  Cool I said, but what about the cost?  He said there was no extra cost.  This was also attractive to me because I paid for my TiVo and it would be good to use it for something at least.  That was four months ago. This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo, I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I’m leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice, and to take advantage of the pay-over-time. So, after explaining that I requested a termination of the non-suddenLink account (TiVo can see both of course), I was put on hold again for quite some time while my refund was “approved”.  “Les” said that he could see my cancellation request back in July. 
    Note that it is now November, so they have billed me inappropriately four times. After quite some time, he came back on the line and told me that he was able to “get me most of my money back.” He got approval to refund 90 days. So even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file, and they admit overbilling me, I am only going to get “most” of my money back. To top this experience off, when we were ready to hang up, “Les” told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to “Les” that I have not left TiVo. I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need. Here is what TiVo Customer Service accomplished on those two calls – I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn’t be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!

    Read the article
