Search Results

Search found 17192 results on 688 pages for 'geeks with blogs'.


  • Using SharePoint PeoplePicker control in custom ASP.NET pages

    - by Jignesh Gangajaliya
    I was developing a custom ASP.NET page for a SharePoint project, and the page uses the SharePoint PeoplePicker control. I needed to manipulate the control on the client side based on user input. PeoplePicker is a complex control, and the difficult bit is that it renders many nested controls on the page (use the page source viewer to see the HTML tags it generates). Getting to the right element is tricky, and the usual JavaScript approaches such as control.disabled or control.focus() will not work directly on the PeoplePicker control. After some trial and error I came up with a solution to manipulate the control using JavaScript. Here is the JavaScript snippet to enable/disable the PeoplePicker control:

    function ToggleDisabledPeoplePicker(element, isDisabled) {
        try {
            element.disabled = isDisabled;
        }
        catch (exception) {}

        if ((element.childNodes) && (element.childNodes.length > 0)) {
            for (var index = 0; index < element.childNodes.length; index++) {
                ToggleDisabledPeoplePicker(element.childNodes[index], isDisabled);
            }
        }
    }

    // to disable the control
    ToggleDisabledPeoplePicker(document.getElementById("<%=txtMRA.ClientID%>"), true);

    The script shown below can be used to set focus back to the PeoplePicker control from a custom JavaScript validation function:

    var found = false;

    function SetFocusToPeoplePicker(element) {
        try {
            if (element.id.lastIndexOf("upLevelDiv") != -1) {
                element.focus();
                found = true;
                return;
            }
        }
        catch (exception) {}

        if ((element.childNodes) && (element.childNodes.length > 0)) {
            for (var index = 0; index < element.childNodes.length; index++) {
                if (found) {
                    found = false;
                    break;
                }
                SetFocusToPeoplePicker(element.childNodes[index]);
            }
        }
    }

    - Jignesh

    Read the article

  • ReplaceBetweenTags function with delegate to describe transformation

    - by Michael Freidgeim
    I've created a function that allows replacing content between XML tags with data that depends on the original content within the tag – in particular, to mask a credit card number. The function uses the MidBetween extension from my StringHelper class.

    /// <summary>
    /// Replaces the content found between openTag and closeTag with the result of the transform delegate.
    /// </summary>
    /// <param name="thisString">The string to operate on.</param>
    /// <param name="openTag">The opening tag.</param>
    /// <param name="closeTag">The closing tag.</param>
    /// <param name="transform">Delegate describing the transformation of the original content.</param>
    /// <returns>The string with the content between the tags replaced.</returns>
    /// <example>
    /// // mask <AccountNumber>XXXXX4488</AccountNumber>
    /// requestAsString = requestAsString.ReplaceBetweenTags("<AccountNumber>", "</AccountNumber>", CreditCard.MaskedCardNumber);
    /// // mask cvv
    /// requestAsString = requestAsString.ReplaceBetweenTags("<FieldName>CC::VerificationCode</FieldName><FieldValue>", "</FieldValue>", cvv => "XXX");
    /// </example>
    public static string ReplaceBetweenTags(this string thisString, string openTag, string closeTag, Func<string, string> transform)
    {
        // See also http://stackoverflow.com/questions/1359412/c-sharp-remove-text-in-between-delimiters-in-a-string-regex
        string sRet = thisString;
        string between = thisString.MidBetween(openTag, closeTag, true);
        if (!String.IsNullOrEmpty(between))
            sRet = thisString.Replace(openTag + between + closeTag, openTag + transform(between) + closeTag);
        return sRet;
    }

    public static string ReplaceBetweenTags(this string thisString, string openTag, string closeTag, string newValue)
    {
        // See also http://stackoverflow.com/questions/1359412/c-sharp-remove-text-in-between-delimiters-in-a-string-regex
        string sRet = thisString;
        string between = thisString.MidBetween(openTag, closeTag, true);
        if (!String.IsNullOrEmpty(between))
            sRet = thisString.Replace(openTag + between + closeTag, openTag + newValue + closeTag);
        return sRet;
    }
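    A minimal usage sketch (assuming the MidBetween extension from the author's StringHelper class is available; the inline lambda below stands in for the CreditCard.MaskedCardNumber helper referenced in the example comments):

    string request = "<Payment><AccountNumber>4111111111114488</AccountNumber></Payment>";
    // keep only the last four digits, as in the masking example above
    string masked = request.ReplaceBetweenTags(
        "<AccountNumber>", "</AccountNumber>",
        pan => new string('X', pan.Length - 4) + pan.Substring(pan.Length - 4));
    // masked == "<Payment><AccountNumber>XXXXXXXXXXXX4488</AccountNumber></Payment>"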

    Read the article

  • Persisting settings without using Options dialog in Visual Studio

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2013/11/02/persisting-settings-without-using-options-dialog-in-visual-studio.aspx

    In one of my previous blog posts we saw how to persist settings using Visual Studio's options dialog. Visual Studio options have the big advantage of automatically persisting user options for you. However, during our latest Team Rooms extension development, we decided to give our users the ability to set our preferences directly from Team Explorer. The main reason was that we had only one simple option, and we thought it was cumbersome for the user to go to the Tools –> Options dialog to change it. Another reason was that we wanted to highlight this setting to the user as soon as he starts using our extension. So if you are in a scenario where you do not want to use the VS options window but still would like to persist settings, this post will guide you through it.

    The Visual Studio SDK provides two ways to persist settings in your extensions. One is using DialogPage, as shown in my previous post. The other is to implement the IProfileManager interface, which I will explain in this post. Please note that the class implementing IProfileManager should be an independent class, because VS instantiates it during Tools –> Import and Export Settings. IProfileManager provides 2 different sets of methods (4 methods in total) to persist the settings:

    LoadSettingsFromXml and SaveSettingsToXml – implement these to persist settings to disk from the VS settings storage. VS will persist your settings along with other options to disk.
    LoadSettingsFromStorage and SaveSettingsToStorage – implement these to persist settings to local storage, usually the registry. VS also calls LoadSettingsFromStorage when it is initializing the package.

    We are going to use the 2nd set of methods for this example. First, we create a separate class file called UserOptions.cs. Note that we also need to implement IComponent, which can be done by inheriting from Component along with IProfileManager.

    [ComVisible(true)]
    [Guid("XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX")]
    public class UserOptions : Component, IProfileManager
    {
        private const string SUBKEY_NAME = "TForVS2013";
        private const string TRAY_NOTIFICATIONS_STRING = "TrayNotifications";
        ...
    }

    Define the property so that it can be set and read from other classes.

    public bool TrayNotifications { get; set; }

    Implement the members of IProfileManager.

    public void LoadSettingsFromStorage()
    {
        RegistryKey reg = null;
        try
        {
            using (reg = Package.UserRegistryRoot.OpenSubKey(SUBKEY_NAME))
            {
                if (reg != null)
                {
                    // Key already exists, so just read this setting.
                    TrayNotifications = Convert.ToBoolean(reg.GetValue(TRAY_NOTIFICATIONS_STRING, true));
                }
            }
        }
        catch (TeamRoomException exception)
        {
            TrayNotifications = true;
            ExceptionReporting.Report(exception);
        }
        finally
        {
            if (reg != null)
            {
                reg.Close();
            }
        }
    }

    public void LoadSettingsFromXml(IVsSettingsReader reader)
    {
        reader.ReadSettingBoolean(TRAY_NOTIFICATIONS_STRING, out _isTrayNotificationsEnabled);
        TrayNotifications = (_isTrayNotificationsEnabled == 1);
    }

    public void ResetSettings()
    {
    }

    public void SaveSettingsToStorage()
    {
        RegistryKey reg = null;
        try
        {
            using (reg = Package.UserRegistryRoot.OpenSubKey(SUBKEY_NAME, true))
            {
                if (reg != null)
                {
                    // Key already exists, so just update this setting.
                    reg.SetValue(TRAY_NOTIFICATIONS_STRING, TrayNotifications);
                }
                else
                {
                    reg = Package.UserRegistryRoot.CreateSubKey(SUBKEY_NAME);
                    reg.SetValue(TRAY_NOTIFICATIONS_STRING, TrayNotifications);
                }
            }
        }
        catch (TeamRoomException exception)
        {
            ExceptionReporting.Report(exception);
        }
        finally
        {
            if (reg != null)
            {
                reg.Close();
            }
        }
    }

    public void SaveSettingsToXml(IVsSettingsWriter writer)
    {
        writer.WriteSettingBoolean(TRAY_NOTIFICATIONS_STRING, TrayNotifications ? 1 : 0);
    }

    Let me elaborate on the method implementations. The Package class provides the UserRegistryRoot property (HKCU\Software\Microsoft\VisualStudio\12.0 for VS2013) which can be used to create and read registry keys. So basically, in the methods above, I check whether the registry key already exists and, if not, I simply create it. If there is an exception I return the default values. If the key already exists, I update the value. Also note that you need to make sure you close the key before exiting the method. Very simple, right?

    Accessing and setting the option is simple too. We just use the exposed property.

    UserOptions.TrayNotifications = true;
    UserOptions.SaveSettingsToStorage();

    Reading the setting is as simple as reading the property.

    UserOptions.LoadSettingsFromStorage();
    var trayNotifications = UserOptions.TrayNotifications;

    Lastly, the most important step: we need to tell the Visual Studio shell that our package exposes options using the UserOptions class. For this we decorate our package class with the ProvideProfile attribute as below.

    [ProvideProfile(typeof(UserOptions), "TForVS2013", "TeamRooms", 110, 110, false, DescriptionResourceID = 401)]
    public sealed class TeamRooms : Microsoft.VisualStudio.Shell.Package
    {
        ...
    }

    That's it. If everything is alright, once you run the package you will also see your options appearing in the "Import and Export Settings" window, which allows you to export your options.

    Read the article

  • Silverlight Cream for January 13, 2011 -- #1026

    - by Dave Campbell
    In this Issue: András Velvárt, Tony Champion, Joost van Schaik, Jesse Liberty, Shawn Wildermuth, John Papa, Michael Crump, Sacha Barber, Alex Knight, Peter Kuhn, Senthil Kumar, Mike Hole, and WindowsPhoneGeek. Above the Fold: Silverlight: "Create Custom Speech Bubbles in Silverlight." Michael Crump WP7: "Architecting WP7 - Part 9 of 10: Threading" Shawn Wildermuth Expression Blend: "PathListBox: Text on the path" Alex Knight From SilverlightCream.com: Behaviors for accessing the Windows Phone 7 MarketPlace and getting feedback András Velvárt shares almost insider information about how to get some user interaction with your WP7 app in the form of feedback ... he has 4 behaviors taken straight from his very cool SufCube app that he's sharing. Reloading a Collection in the PivotViewer Tony Champion keeps working with the PivotViewer ... this time discussing the fact that you can't Reload or Refresh the current collection from the server ... at least not initially, but he did find one :) Tombstoning MVVMLight ViewModels with SilverlightSerializer on Windows Phone 7 Joost van Schaik takes a shot at helping us all with Tombstoning a WP7 app... he's using Mike Talbot's SilverlightSerializer and created extension methods for it for tombstoning that he's willing to share with us. Windows Phone From Scratch #17: MVVM Light Toolkit Soup To Nuts Part 2 Jesse Liberty is up to Part 17 in his WP7 series, and this is the 2nd post on MVVMLight and WP7, and is digging into behaviors. Architecting WP7 - Part 9 of 10: Threading Shawn Wildermuth is up to part 9 of 10 in his series on Architecting WP7 apps. This episode finds Shawn discussing Threading ... know how to use and choose between BackgroundWorker and ThreadPool? ... Shawn will explain. Silverlight TV 57: Performance Tuning Your Apps In the latest Silverlight TV, John Papa chats with Mike Cook about tuning your Silverlight app to get the performance up there where your users will be happy. Create Custom Speech Bubbles in Silverlight. Michael Crump's already gotten a lot of airplay out of this, but it's so cool.. comic-style callout shapes without using the dlls that you normally would... in other words, paths, and very cool hand-drawn looks on some too... very cool, Michael! Showcasing Cinch MVVM framework / PRISM 4 interoperability Sacha Barber has a post up on CodeProject that demonstratest using Cinch and Prism4 together... handily using MEF since Cinch relies on MEFedMVVM... this is a heck of a post... lots of code, lots of explanations. PathListBox: Text on the path Alex Knight keeps making this PathListBox series better ... this time he is putting text on the path... moving text... too cool, Alex! Windows Phone 7: Pinch Gesture Sample Peter Kuhn digs into the WP7 toolkit and examines GestureListener, pinch events, and clipping... examples and code supplied. How to change the StartPage of the Windows Phone 7 Application in Visual Studio 2010 ? Senthil Kumar discusses how to change the StartPage of your WP7 app, or get the program running if you happen to move or rename MainPage.xaml WP7 Text Boxes – OnEnter (my 1st Behaviour) Mike Hole has a post up about the issue with the keyboard appearing in front of the textbox, and maybe using the enter key to drop it... and he's developed a behavior for that process. WP7 ContextMenu in depth | Part1: key concepts and API WindowsPhoneGeek has some good articles that I haven't posted, but I'll catch up. This one is a nice tutorial on the WP7 Context menu... good explanation, diagrams, and code. 
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Windows 8: Everything from design, build, and how to sell a Metro style app

    - by Thomas Mason
    For me, there are a lot of similarities between an application developed for Windows Phone and a Metro style app developed for Windows 8. A Windows Phone 7 application (rather than an XNA game) is built in .NET and XAML against a subset of the .NET Framework, and the application has a lifecycle which needs to be conscious of battery life and so is split into foreground/background pieces. The application is sandboxed in terms of its interactions with the local device and is packaged with a manifest which describes those interactions. The app needs to be aware of network connectivity status, and its work on the network is done asynchronously to preserve the user experience. The app is packaged and deployed to a Marketplace which the user browses to find the app, read reviews, perhaps purchase it, then install it and receive updates over time.

    Quite a lot of those statements are as true of a Windows 8 Metro style app as they are of a Windows Phone app, so a Windows Phone app developer already has a good head start when it comes to building Metro style apps for Windows 8. With that in mind, there is an event to help developers with a Windows Phone app in Marketplace begin the process of looking at Windows 8 and whether they can get a quick win by bringing their Phone application onto Windows. The idea of the event is to provide a space where developers can get together over 2 days and take the time out to look at what it means to take their app from Windows Phone to Windows 8.

    Kicking off on Saturday 16th June at 10am, we are told they have plenty of power sockets, WiFi, whiteboards, drinks, pizza, games, prizes and some quiet space that you can work in, with people on hand with Windows Phone and Windows 8 experience to help everything along the way. There will be an attendee-voted schedule of talks, but we'll keep these out of your way if you just want to get on and code. We'll also provide information around submitting your app to the Windows Store.

    If you have a Windows Phone app in Marketplace, now's a great time to look at getting it onto Windows 8. Sign up. Bring your laptop. Bring your app. Bring Windows 8 and Visual Studio 11. And everyone will do their best to help you get your app onto Windows 8.

    Location & Venue: TBA, but it will be in central London, accessible by major railway and underground transportation.
    Day 1: Saturday 16th June, 10am – 9pm
    Day 2: Sunday 17th June, 10am – 4pm

    Read the article

  • WIF

    - by kaleidoscope
    Windows Identity Foundation (WIF) enables .NET developers to externalize identity logic from their application, improving developer productivity, enhancing application security, and enabling interoperability. It is a framework for implementing claims-based identity in your applications. With WIF one can create more secure applications by reducing custom implementations and using a single simplified identity model based on claims. Windows Identity Foundation is part of Microsoft's identity and access management solution built on Active Directory that also includes: · Active Directory Federation Services 2.0 (formerly known as "Geneva" Server): a security token service for IT that issues and transforms claims and other tokens, manages user access and enables federation and access management for simplified single sign-on · Windows CardSpace 2.0 (formerly known as Windows CardSpace "Geneva"): for helping users navigate access decisions and developers to build customer authentication experiences for users. Reference : http://msdn.microsoft.com/en-us/security/aa570351.aspx Geeta
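    As a rough illustration of the claims-based model WIF exposes (a minimal sketch, not taken from the referenced article; it assumes the WIF 1.0 Microsoft.IdentityModel assembly is referenced and the application is protected by WIF):

    using System.Linq;
    using System.Threading;
    using Microsoft.IdentityModel.Claims;

    public static class ClaimsHelper
    {
        // Application code consumes claims issued by a security token service
        // (e.g. AD FS 2.0) instead of querying a directory itself.
        public static string GetEmailClaim()
        {
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            if (identity == null)
            {
                return null; // the request was not authenticated through WIF
            }

            Claim email = identity.Claims.FirstOrDefault(c => c.ClaimType == ClaimTypes.Email);
            return email != null ? email.Value : null;
        }
    }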

    Read the article

  • Mirroring git and mercurial repos the lazy way

    - by Greg Malcolm
    I maintain Python Koans mirrored on both GitHub (using git) and Bitbucket (using mercurial). I get pull requests from both repos, but it turns out keeping the two repos in sync is pretty easy. Here is how it's done...

    Assuming I'm starting again on a clean laptop, first I clone both repos:

    ~/git $ hg clone https://bitbucket.org/gregmalcolm/python_koans
    ~/git $ git clone [email protected]:gregmalcolm/python_koans.git python_koans2

    The only thing that makes a folder a git or mercurial repository is the .hg folder in the root of python_koans and the .git folder in the root of python_koans2. So I just need to move the .git folder over into the python_koans folder I'm using for mercurial:

    ~/git $ rm -rf python_koans/.git
    ~/git $ mv python_koans2/.git python_koans
    ~/git $ ls -la python_koans
    total 48
    drwxr-xr-x  11 greg  staff   374 Mar 17 15:10 .
    drwxr-xr-x  62 greg  staff  2108 Mar 17 14:58 ..
    drwxr-xr-x  12 greg  staff   408 Mar 17 14:58 .git
    -rw-r--r--   1 greg  staff    34 Mar 17 14:54 .gitignore
    drwxr-xr-x  13 greg  staff   442 Mar 17 14:54 .hg
    -rw-r--r--   1 greg  staff    48 Mar 17 14:54 .hgignore
    -rw-r--r--   1 greg  staff   365 Mar 17 14:54 Contributor Notes.txt
    -rw-r--r--   1 greg  staff  1082 Mar 17 14:54 MIT-LICENSE
    -rw-r--r--   1 greg  staff  5765 Mar 17 14:54 README.txt
    drwxr-xr-x  10 greg  staff   340 Mar 17 14:54 python 2
    drwxr-xr-x  10 greg  staff   340 Mar 17 14:54 python 3

    That's about it! Now git and mercurial are tracking files in the same folder. Of course you will still need to set up your .gitignore to ignore mercurial's dotfiles and .hgignore to ignore git's dotfiles or there will be squabbling in the backseat.

    ~/git $ cd python_koans/
    ~/git/python_koans $ cat .gitignore
    *.pyc
    *.swp
    .DS_Store
    answers
    .hg      <-- Ignore mercurial

    ~/git/python_koans $ cat .hgignore
    syntax: glob
    *.pyc
    *.swp
    .DS_Store
    answers
    .git     <-- Ignore git

    Because both my mirrors are identical as far as tracked files are concerned, I won't yet see anything if I check statuses at this point:

    ~/git/python_koans $ git status
    # On branch master
    nothing to commit (working directory clean)
    ~/git/python_koans $ hg status

    But how about if I accept a pull request from the Bitbucket (mercurial) site?

    ~/git/python_koans $ hg status
    ~/git/python_koans $ git status
    # On branch master
    # Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
    #
    # Changed but not updated:
    #   (use "git add <file>..." to update what will be committed)
    #   (use "git checkout -- <file>..." to discard changes in working directory)
    #
    #   modified:   python 2/koans/about_decorating_with_classes.py
    #   modified:   python 2/koans/about_iteration.py
    #   modified:   python 2/koans/about_with_statements.py
    #   modified:   python 3/koans/about_decorating_with_classes.py
    #   modified:   python 3/koans/about_iteration.py
    #   modified:   python 3/koans/about_with_statements.py

    Mercurial doesn't have any changes to track right now, but git has changes. Commit and push them up to GitHub and balance is restored to the force:

    ~/git/python_koans $ git commit -am "Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks'"
    [master 79ca184] Merge from bitbucket mirror: 'gpiancastelli - Fix for issue #21 and some other tweaks'
     6 files changed, 78 insertions(+), 63 deletions(-)
    ~/git/python_koans $ git push origin master

    Or just use hg-git?
    The GitHub developers have actually published a plugin for automatic mirroring: http://hg-git.github.com. I haven't used it because when I tried it a couple of years ago I had problems getting all the parts to play nicely with each other. It probably works fine now, though.

    Read the article

  • Extending QuickBooks Reporting with the QuickBooks ADO.NET Data Provider

    - by dataintegration
    The ADO.NET Provider for QuickBooks comes with several reports you may request from QuickBooks by default. However, there are many more that are not readily available. The ADO.NET Provider for QuickBooks makes it easy for you to create new reports and customize existing ones. In this article, we will illustrate how to create your own report and retrieve it from the Server Explorer in Visual Studio. For this example we will show how to create an Item Profitability Report. Creating the report script file Step 1: Download the sample reports available here. Extract them to a folder of your choice. Step 2: Make a copy of the ReportGeneralSummary.rsd file and rename it to ItemProfitability.rsd. Then open the file in any text editor. Step 3: Open the installation directory of the ADO.NET Provider for QuickBooks. Under the \db\ folder, locate the ReportJob.rsb file. Open this file in another text editor. Note: Although we are using ReportJob.rsb for this example, other reports may be contained in other Report*.rsb files. We recommend consulting the included help file and first locating the Report stored procedure and ReportType you are looking for. Otherwise, you may open each Report*.rsb file and look under the "reporttype" input for the report you are attempting to create. Step 4: First, let's rename the title of ItemProfitability.rsd. Near the top of the file you will see a title and description. Change the title to match the name of the file. Change the description to anything you like. For example: <rsb:info title="ItemProfitability" description="Executes my custom report."> Just below the Title, there are a number of columns. The Id represents the row number. The RowType represents the type of data returned by QuickBooks. The ColumnValue* columns represent all of the column data returned by QuickBooks. In some instances, we may need to add additional ColumnValue columns. Step 5: To add additional ColumnValue columns, simply copy the last column, paste it directly below, and continue increasing the numerical value at end of the attribute name. For example: <attr name="ColumnValue9" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue10" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue11" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue12" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> ... Caution: Do not rename the ColumnValue* definitions themselves. They are generalized so that we can understand each type of report returned by QuickBooks. Renaming them to something other than ColumnValue* will cause your columns to return with null values. Step 6: Now let's update the available inputs for the table. From the ReportJob.rsb file, copy all of the input elements into ItemProfitability under the "Psuedo-Column definitions" comment. You will be replacing the existing input elements in ItemProfitability with inputs from ReportJob. When you are done, it should look like this: <!-- Psuedo-Column definitions --> <input name="reporttype" description="The type of the report." 
value="ITEMESTIMATESVSACTUALS,ITEMPROFITABILITY,JOBESTIMATESVSACTUALSDETAIL,JOBESTIMATESVSACTUALSSUMMARY,JOBPROFITABILITYDETAIL,JOBPROFITABILITYSUMMARY," default="ITEMESTIMATESVSACTUALS" /> <input name="reportperiod" description="Report date range in the format (fromdate:todate), and either value may be omitted for an open ended range (e.g. 2009-12-25:). Supported date format: yyyy-MM-dd." /> <input name="reportdaterangemacro" description="Use a predefined date range." value="ALL,TODAY,THISWEEK,THISWEEKTODATE,THISMONTH,THISMONTHTODATE,THISQUARTER,THISQUARTERTODATE,THISYEAR,THISYEARTODATE,YESTERDAY,LASTWEEK,LASTWEEKTODATE,LASTMONTH,LASTMONTHTODATE,LASTQUARTER,LASTQUARTERTODATE,LASTYEAR,LASTYEARTODATE,NEXTWEEK,NEXTFOURWEEKS,NEXTMONTH,NEXTQUARTER,NEXTYEAR," default="ALL" /> ... Step 7: Now let's update the operationname attribute. This needs to match the same operationname used by ReportJob. After you have copied the correct value from ReportJob.rsb, the operationname in ItemProfitability should look like so: <rsb:set attr="operationname" value="qbReportJob"/> Step 8: There is one more thing we can do to make this a true Item Profitability report. We can remove the reporttype input and hardcode the value. To do this, copy and paste the rsb:set used for operationname. Then rename the attr and value to match the name and value you want to use. For example: <rsb:set attr="operationname" value="qbReportJob"/> <rsb:set attr="reporttype" value="ITEMPROFITABILITY"/> After this you can remove the input for reporttype. Now that you have your own report file, we can move on to displaying the report in the Visual Studio server explorer. Accessing the report through the Data Provider Step 1: Open Visual Studio. In the Server Explorer, configure a new connection with the QuickBooks Data Provider. Step 2: For the Location connection string property, enter the directory where the new report has been saved to. Step 3: The new report should appear as a new view in the Server Explorer. Let's retrieve data from it. Step 4: You can specify any inputs in the WHERE clause. New Report Example Script To help you get started using this new QuickBooks Data Provider report, you will need to download the QuickBooks ADO.NET Data Provider and the fully functional sample script.
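    To round the example out, here is a rough sketch of querying the new ItemProfitability view from code through plain ADO.NET (the provider invariant name, connection string and output column names used here are assumptions for illustration; check the provider's documentation for the exact values):

    using System;
    using System.Data.Common;

    class ReportExample
    {
        static void Main()
        {
            // The invariant name is a placeholder; the provider's actual name is listed in its documentation.
            DbProviderFactory factory = DbProviderFactories.GetFactory("QuickBooksProvider");

            using (DbConnection conn = factory.CreateConnection())
            {
                // Location points at the folder containing the custom ItemProfitability.rsd script.
                conn.ConnectionString = @"Location=C:\MyReports;";
                conn.Open();

                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText =
                        "SELECT * FROM ItemProfitability WHERE reportdaterangemacro = 'THISYEARTODATE'";

                    using (DbDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // RowType plus the generic ColumnValue* columns defined in the .rsd file
                            Console.WriteLine("{0}: {1}", reader["RowType"], reader["ColumnValue1"]);
                        }
                    }
                }
            }
        }
    }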

    Read the article

  • Bug with Set / Get Accessor in .Net 3.5

    - by MarkPearl
    I spent a few hours scratching my head on this one... so I thought I would blog about it in case someone else has the same headache.

    Assume you have a class and you want to use the INotifyPropertyChanged interface so that when you bind to an instance of the class, you get the magic sauce of binding to do your updates to the UI. Well, I had a special case where I wanted one of the properties of the class to add some additional formatting to itself whenever someone changed its value (see the code below).

    class Test : INotifyPropertyChanged
    {
        private string _inputValue;

        public string InputValue
        {
            get
            {
                return _inputValue;
            }
            set
            {
                if (value != _inputValue)
                {
                    _inputValue = value + "Extra Stuff";
                    NotifyPropertyChanged("InputValue");
                }
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        public void NotifyPropertyChanged(string info)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(info));
            }
        }
    }

    Everything looked fine, but when I ran it in my WPF project, the textbox I was binding to would not update. I couldn't understand it! I thought the code made sense, so why wasn't it working? Eventually StackOverflow came to the rescue, where I was told that it is a bug in the .NET 3.5 runtime and that a fix is scheduled in .NET 4.

    For those who have the same problem, here is the workaround… you need to raise the NotifyPropertyChanged call on the application's dispatcher thread!

    public string InputValue
    {
        get
        {
            return _inputValue;
        }
        set
        {
            if (value != _inputValue)
            {
                _inputValue = value + "Extra Stuff";
                Application.Current.Dispatcher.BeginInvoke((Action)delegate
                {
                    NotifyPropertyChanged("InputValue");
                });
            }
        }
    }

    Read the article

  • Silverlight for Everyone!!

    - by subodhnpushpak
    Someone asked me to compare Silverlight and HTML development. I realized that the question can be answered in many ways. Below is a high-level comparison between an HTML/JavaScript client and a Silverlight client, and why Silverlight was chosen over an HTML/JavaScript client (based on the type of users and the major functionality provided):

    1. For end users

    Browser compatibility: Silverlight is a plug-in and requires installation first. However, it provides a consistent look and feel across all browsers. For HTML/DHTML, there is a need to tweak JavaScript for each of the supported browsers. In fact, tags like <span> and <div> work differently across browsers and versions. So HTML works on most systems, but it also requires a lot of coding effort to adhere to all standards, browsers and versions.

    Out-of-browser support: No support in HTML. Third-party tools like Google Gears offer some of this functionality, but there are lots of issues around platform and accessibility. Silverlight has out-of-the-box support for running out of the browser and provides features like drag and drop onto the application surface.

    Cut, copy and paste: HTML is displayed in the browser, which in turn provides facilities for cut, copy and paste. Silverlight (especially 4) provides rich cut-copy-paste features along with full control over what end users can cut, copy and paste, plus advanced features like visual tree printing.

    Rich user experience: HTML can provide some rich experience through JavaScript libraries like jQuery. However, extensive use of JavaScript combined with the various browser versions and the JavaScript they support makes the solution cumbersome. Silverlight is meant for RIA experiences.

    User data storage on the client: In HTML only a small amount of data can be stored, and only in cookies. In Silverlight larger data may be stored, and in a secure way. This improves response times.

    Post back: In HTML/JavaScript the post back can be avoided by using AJAX, but extensive use of AJAX can be a bottleneck as the browser stack is used for the calls, and both look-and-feel and data travel over the network. In Silverlight everything runs on the client side; calls are made to the server ONLY for data, which also reduces network traffic in the long run.

    2. For Developers

    Coding effort: HTML/JavaScript can take a considerable amount of code if the features (requirements) are rich. For AJAX-like interfaces, knowledge of third-party kits like Dojo, Yahoo UI or jQuery is required, and these have a steep learning curve. The ASP.NET coding world revolves mostly around <table> tags for alignment whereas most popular tools produce <div> tags, which requires lots of tweaking. AJAX calls can be a performance bottleneck if the calls are many. In Silverlight, coding is in C#, which is managed code. XAML is also very intuitive, and Blend can be used to provide the look and feel. Event handling is much cleaner than in JavaScript. Silverlight provides for clean patterns like MVVM and composable applications. Each call to the server is asynchronous in Silverlight, AJAX is built into Silverlight, and threading can be done on the client side itself to provide better responsiveness.

    Debugging: Debugging in HTML/JavaScript is difficult. As JavaScript is interpreted, there is NO compile-time error handling. Debugging in Silverlight is very helpful: as it is compiled, it provides rich features for both compile-time and run-time error handling.

    Multi-targeting browsers: HTML/JavaScript has different rendering behaviours in different browsers and versions, and JavaScript has to be written to smooth over the differences in browser behaviour. Silverlight works exactly the same in all browsers, and works on almost all popular browsers.

    Multi-targeting the desktop: No support in HTML/JavaScript. Silverlight is very close to WPF; both platforms may be easily targeted while maintaining the same source code.

    Rich toolkit: HTML/JavaScript has a limited toolkit of controls. Silverlight provides a rich set of controls including graphs, audio, video, layout, etc.

    3. For Architects

    Design patterns: Silverlight provides for patterns like MVVM (MVC) and a rich (fat) client architecture. This keeps the "separation of concerns" very clear: the client (Silverlight) does what it is expected to do and the server does what is expected of it. In the HTML/JavaScript world most of the processing is done on the server side.

    Extensibility: Silverlight provides a great deal of extensibility as custom controls may be built. Extensibility is NOT restricted by the browser but by the plug-in Silverlight runs in. HTML/JavaScript works in a certain way and extensibility is generally done on the server side rather than the client; the client side is restricted by the limitations of the browser.

    Performance: Silverlight provides localized storage which may be used for cached data; this reduces the response time. As processing can be done on the client side itself, there is no need for server round trips, which decreases round-trip time. The look and feel of the application is downloaded ONLY initially; afterwards ONLY data is fetched from the server.

    Security: Silverlight is compiled code downloaded as a .XAP; compared to HTML/JavaScript, it provides a more secure, sandboxed approach. Cross-scripting is inherently prohibited in Silverlight by default. If proper guidelines are followed, Silverlight provides a much more robust security mechanism than the HTML/JavaScript world. For example, finding the server address in obfuscated JavaScript is easier than in a compressed, compiled, obfuscated Silverlight .XAP file.

    Some of these features (like offline and canvas support) will be available in HTML5. However, the timelines are not encouraging at all. According to Ian Hickson, editor of the HTML5 specification, the specification is expected to reach the W3C Candidate Recommendation stage during 2012, and W3C Recommendation in the year 2022 or later. See http://en.wikipedia.org/wiki/HTML5 for details.

    The above is MY opinion. I would love to hear yours; do let me know via comments. Technorati Tags: Silverlight

    Read the article

  • Adventures in Scrum: Lesson 2 - For the record

    - by Martin Hinshelwood
    At SSW we have always done Agile. Recently we have started doing Scrum, and we have nearly completed our first Sprint ever using Scrum. As you probably guessed from my previous post, it looks like it is going to be a “Failed Sprint”, but the Scrum Team (this includes the ScrumMaster and the Product Owner) has learned a huge amount about working in the Scrum Framework.

    We have been running with a “Proxy Product Owner” for the last two weeks, but a simple mistake occurred either during the “Product Planning Meeting” or the “Sprint Planning Meeting” that could have prevented this Sprint from failing. We had a heated discussion on the vision of someone not in the room, which ended with the assertion that the Product Owner would be quizzed again on their vision. This did not happen, and we ran with the Proxy Product Owner's vision for two weeks.

    Product Owner vision: Update Component A of Product A to Silverlight
    Proxy Product Owner vision: Update Product A to Silverlight

    Do you see the problem? Worse than that, with a lot of junior members on the Scrum Team and with us still feeling our way around how Scrum will work at SSW, I missed implementing a fundamental rule. That's right, it was me. It does not matter that I did not know about this rule; it's on the site and I should have read it. Would a police officer let you off if you did not know that a red light meant stop? I think not…

    But what is this amazing rule, I hear you shout… It's simple. As per our rule, I should have sent the following email:

    “Dear Proxy Product Owner,
    For the record, I disagree that the Product Owner wants us to ‘Update Product A to Silverlight’ as I still think that he wants us to ‘Update Component A of Product A to Silverlight’ and not the entire application.
    Regards,
    Martin”
    - ‘For the record’ - Rules to being Software Consultants - Dealing with Clients

    This email should have been copied to the entire Scrum Team, which would have included the Product Owner, who would have nipped this misunderstanding in the bud, and we would have had one less impediment.

    Technorati Tags: SSW,SSW Rules,SSW Standards,Scrum,Product Owner,ScrumMaster,Sprint,Sprint Planning Meeting,Product Planning Meeting

    Read the article

  • Toughest Developer Puzzle Ever

    - by Josh Holmes
    For the second year in a row, my friend and colleague Jeff Blankenburg has created what is quickly proving to live up to its name – the Toughest Developer Puzzle Ever. Some of the puzzles are technical, some are not, but all require that you understand the web, development and technology to solve them. Even if you don't get in on the fantastic prizes that Jeff has lined up, there are great bragging rights in being able to solve the Toughest Developer Puzzle Ever. This year, I was honored enough to get to create three of the puzzles myself – let me know what you think of them. I'm not going to tell you which ones I created now, and definitely don't ask me for hints – Jeff has threatened me if I give any of the puzzles away… ;) All I can say now is “Good luck!”

    Read the article

  • Converting Creole to HTML, PDF, DOCX, ..

    - by Marko Apfel
    Challenge

    We documented a project on GitHub with the Wiki there. For most articles we used Creole as the markup language. Now we have to deliver a lot of the content to our client in a common format like PDF or DOCX. So we need an automated way to extract all relevant content, merge it together and convert the result to a new format.

    Problem

    One of the most popular toolsets for converting between several formats is Pandoc. But unfortunately Pandoc does not support Creole (see the converting matrix).

    Approach

    So we need an intermediate step: converting from Creole to a format Pandoc supports. Creole/c is a Creole to HTML converter and does exactly what we need. After converting our Creole content to HTML we can use Pandoc for all the subsequent tasks.

    Solution

    Getting the Creole stuff: first of all we need the Creole content on our local machines. This is easy, because the GitHub Wiki itself is a Git repository and we can clone it to our machine. In the working copy we now see all the files, and the suffix gives us the hint about the markup language.

    Converting and merging Creole content to HTML: because we would like all content from several Creole files in one HTML file, we have to convert and merge all the input files into one output file. Creole/c has an option (-b) to generate only the HTML below an HTML <body> tag, and this is the hook for us to start. We create the preceding HTML tags (<html>, <head>, ..) manually, then we merge all needed Creole content into our output file, and finally we add the closing tags. This can be done in a straightforward way with a little bit of old DOS magic:

    REM === Generate the intro tags ===
    ECHO ^<html^> > %TMP%\output.html
    ECHO ^<head^> >> %TMP%\output.html
    ECHO ^<meta name="generator" content="creole/c"^> >> %TMP%\output.html
    ECHO ^</head^> >> %TMP%\output.html
    ECHO ^<body^> >> %TMP%\output.html

    REM === Mix in all interesting Creole stuff with creole/c ===
    .\Creole-C\bin\creole.exe -b .\..\datamodel+overview.creole >> %TMP%\output.html
    .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdCaptureMode.creole >> %TMP%\output.html
    .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdDamageReducingActivity.creole >> %TMP%\output.html
    .\Creole-C\bin\creole.exe -b .\..\datamodel+lookup+IncidentDamageCodes.creole >> %TMP%\output.html
    .\Creole-C\bin\creole.exe -b .\..\datamodel+table+Attachments.creole >> %TMP%\output.html
    .\Creole-C\bin\creole.exe -b .\..\datamodel+table+TrafficLights.creole >> %TMP%\output.html

    REM === Generate the outro tags ===
    ECHO ^</body^> >> %TMP%\output.html
    ECHO ^</html^> >> %TMP%\output.html

    REM === Convert the Html file to Docx with Pandoc ===
    .\Pandoc\bin\pandoc.exe -o .\Database-Schema.docx %TMP%\output.html

    Some explanation for this:
    The first ECHO call creates the file; the beginning <html> tag is sent via > to a temporary working file. All following calls append content to the existing file via >>.
    The tag characters < and > must be escaped. This is done with the caret sign (^).
    We use a file in the default temporary folder (%TMP%) to avoid writing in our current folders (better for continuous integration).
    Both toolsets (Creole/c and Pandoc) are copied to a versioned tools folder in the Wiki. This is committable and no problem after pushing – GitHub does not do anything with it. This folder also contains the batch file (Export-Docx.bat) with all the steps.
    Pandoc recognizes the conversion by the suffixes of the file names, so it is enough to specify only the input and output files.

    Read the article

  • Slides and Links from SQL Azure session at BizSpark Azure Day in London

    - by Eric Nelson
    A big thanks to all who attended my two sessions on SQL Azure yesterday (29th March 2010). As promised, my slides and links from the session. SQL Azure Overview for Bizspark day View more presentations from Eric Nelson. Related Links: UK Azure Online Community – join today. UK Windows Azure Site Start working with Windows Azure SQL Azure maximum database size rises from 10GB to 50GB in June TCO and ROI calculator for Windows Azure SQL Azure Migration Wizard

    Read the article

  • My Expert F# Book has now arrived!

    - by MarkPearl
    So it has finally arrived from Amazon: Expert F# by Don Syme, Adam Granicz & Antonio Cisternino. I got a note from the post office yesterday that I needed to collect a package from their offices. After paying a 10% customs fee (that I wasn't expecting) I had my new yellow & black F# book… it's so shiny. Trust my luck though – I have a few university assignments due this week as well as a crazy week of work, so it has been sitting on my desk for a day and I haven't managed to get into it. Eventually I managed to take a few minutes this evening to page through it, and it looks really good. I can't wait! So my goal this week is to cover Chapter 2 (by the end of the weekend) and put the appropriate posts up. F# is slowly growing on me, and I am keen to get a deeper understanding of the language, which I am hoping this book will help me achieve.

    Read the article

  • WinForm to WPF: A Quick Reference Guide

    - by mbcrump
    “Michael Sorens provides a handy wallchart to help migration between WinForm / WPF, VS 2008 / 2010, and .NET 3.5 / 4.0. This can be downloaded for free from the speech-bubble at the head of the article. He also describes the current weaknesses in WPF, and the most obvious differences between the two.” I have posted this in my cube and it has already started making a difference. Read the full article here.

    Read the article

  • Calculating Screen Resolutions Using WPF

    - by Jeff Ferguson
    WPF measures all elements in device independent pixels (DIPs). These DIPs equate to device pixels if the current display monitor is set to the default of 96 DPI. However, for monitors set to a DPI setting other than 96 DPI, WPF DIPs will not correspond directly to monitor pixels.

    Consider, for example, the WPF properties SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth. If your monitor resolution is set to 1024 pixels wide by 768 pixels high, and your monitor is set to 96 DPI, then WPF will report the value of SystemParameters.PrimaryScreenHeight as 768 and the value of SystemParameters.PrimaryScreenWidth as 1024. No problem. This aligns nicely because the WPF device independent pixel value (96) matches your monitor's DPI setting (96). However, if your monitor is not set to display pixels at 96 DPI, then SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth will not return what you expect. The values returned by these properties may be greater than or less than what you expect, depending on whether your monitor's DPI value is less than or greater than 96. Since the SystemParameters.PrimaryScreenHeight and SystemParameters.PrimaryScreenWidth properties are WPF properties, their values are measured in WPF DIPs rather than taking the monitor's DPI into account. Once again: WPF measures all elements in device independent pixels (DIPs).

    To work around this, you must take your monitor's DPI setting into account if you're looking for the monitor's width and height in physical pixels. The handy code block below will help you calculate these values regardless of the DPI setting on your monitor:

    Window MainWindow = Application.Current.MainWindow;
    PresentationSource MainWindowPresentationSource = PresentationSource.FromVisual(MainWindow);
    Matrix m = MainWindowPresentationSource.CompositionTarget.TransformToDevice;
    double DpiWidthFactor = m.M11;
    double DpiHeightFactor = m.M22;
    double ScreenHeight = SystemParameters.PrimaryScreenHeight * DpiHeightFactor;
    double ScreenWidth = SystemParameters.PrimaryScreenWidth * DpiWidthFactor;

    The values of ScreenHeight and ScreenWidth should, after this code is executed, match the resolution that you see in the display's Properties window.

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this: I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus, register on EventBrite if you want to come along. Synopsis Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes; Suggested issues and risks around caching including staleness of data, resource usage, performance and testing; Walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly; Listed common options for cache providers and their offerings. Discussion Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services). Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern. Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers only accessible to cache clients via IPsec. Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, that serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade. Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but Express version too limited for enterprise scenarios, and commercial versions look costly compared to AppFabric.
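    Not the WCF behaviour walked through in the session, but a minimal cache-aside sketch (using System.Runtime.Caching.MemoryCache from .NET 4; the backend call and the 10-minute TTL are placeholders) to illustrate the output-caching-with-a-TTL idea discussed above:

    using System;
    using System.Runtime.Caching;

    public class QuoteService
    {
        private static readonly ObjectCache Cache = MemoryCache.Default;

        public string GetQuote(string symbol)
        {
            string key = "quote:" + symbol;

            // Serve from the cache when possible - callers are isolated from the underlying
            // system, and a TTL large enough can even cover short outages of the backing service.
            var cached = Cache.Get(key) as string;
            if (cached != null)
            {
                return cached;
            }

            string fresh = CallBackendService(symbol); // placeholder for the real (possibly paid-for) call
            Cache.Set(key, fresh, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) // TTL chosen per staleness tolerance
            });
            return fresh;
        }

        private static string CallBackendService(string symbol)
        {
            // Stand-in for the expensive downstream call.
            return symbol + "=42.0";
        }
    }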

    Read the article

  • Silverlight Cream for November 20, 2011 -- #1169

    - by Dave Campbell
    In this Issue: Andrea Boschin, Michael Crump, Michael Sync, WindowsPhoneGeek, Jesse Liberty, Derik Whittaker, Sumit Dutta, Jeff Blankenburg(-2-), and Beth Massi. Above the Fold: WP7: "Silver VNC 1.0 for Windows Phone "Mango"" Andrea Boschin Metro/WinRT/W8: "Lighting up your C# Metro apps by being a Share Source" Derik Whittaker LightSwitch: "Using the Save and Query Pipeline to “Archive” Deleted Records" Beth Massi Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up From SilverlightCream.com: Silver VNC 1.0 for Windows Phone "Mango" Andrea Boschin published the first release of his "Silver VNC" version 1.0 on CodePlex. Check out the video on the blog post to see the capabilities, then go grab it from CodePlex. Fixing a broken toolbox (In Visual Studio 2010 SP1) Not Silverlight or Metro, but near to us all is Visual Studio... read how Michael Crump resolves the 'broken' toolbox that we all get now and then Windows Phone 7 – USB Device Not Recognized Error Michael Sync is looking for ideas about an error he gets any time he updates his phone. Windows Phone Toolkit MultiselectList in depth| Part2: Data Binding WindowsPhoneGeek has up the second part of his tutorial series on the MultiselectList from the Windows Phone Toolkit... this part is about data binding, complete with lots of code, discussion, pictures, and project to download New Mini-Tutorial Video Series Jesse Liberty started a new video series based on his Mango Mini tutorials. They will be on Channel 9, and he has a link on this post to the index. The firs of the series is on animation without code Lighting up your C# Metro apps by being a Share Source Derik Whittaker continues investigating Metro with this post about how to set your app up to share its content with other apps Part 21 - Windows Phone 7 - Toast Push Notification Sumit Dutta has part 21 of his WP7 series up and is talking about Toast Notification by creating a Windows form app for sending notifications to the WP7 app for viewing 31 Days of Mango | Day #6: Motion Jeff Blankenburg's Day 6 in his Mango series is about the Motion class which combines the data we get from the Accelerometer, Compass, and Gyroscope of the last couple days of posts 31 Days of Mango | Day #7: Raw Camera Data In Day 7, Jeff Blankenburg talks about the Camera on the WP7 and how to use the raw data in your own application Using the Save and Query Pipeline to “Archive” Deleted Records Beth Massi's latest LightSwith post is this one on tapping into the Save and Query pipelines to perform some data processing prior to saving or pulling data Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • VSDB to SSDT part 4 : Redistributable database deployment package with SqlPackage.exe

    - by Etienne Giust
    The goal here is to use SSDT SqlPackage to deploy the output of a Visual Studio 2012 Database project… a bit in the same fashion that was detailed here : http://geekswithblogs.net/80n/archive/2012/09/12/vsdb-to-ssdt-part-3--command-line-deployment-with-sqlpackage.exe.aspx   The difference is we want to do it on an environment where Visual Studio 2012 and SSDT are not installed. This might be the case of your Production server.   Package structure So, to get started you need to create a folder named “DeploymentSSDTRedistributable”. This folder will have the following structure :         The dacpac and dll files are the outputs of your Visual Studio 2012 Database project. If your database project references another database project, you need to put their dacpac and dll here too, otherwise deployment will not work. The publish.xml file is the publish configuration suitable for your target environment. It holds connexion strings, SQLVARS parameters and deployment options. Review it carefully. The SqlDacRuntime folder (an arbitrary chosen name) will hold the SqlPackage executable and supporting libraries   Contents of the SqlDacRuntime folder Here is what you need to put in the SqlDacRuntime folder  :      You will be able to find these files in the following locations, on a machine with Visual Studio 2012 Ultimate installed : C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin : SqlPackage.exe Microsoft.Data.Tools.Schema.Sql.dll  Microsoft.Data.Tools.Utilities.dll Microsoft.SqlServer.Dac.dll C:\Windows\Microsoft.NET\assembly\GAC_MSIL\Microsoft.SqlServer.TransactSql.ScriptDom\v4.0_11.0.0.0__89845dcd8080cc91 Microsoft.SqlServer.TransactSql.ScriptDom.dll   Deploying   Now take your DeploymentSSDTRedistributable deployment package to your remote machine. In a standard command window, place yourself inside the DeploymentSSDTRedistributable  folder.   You can first perform a check of what will be updated in the target database. The DeployReport task of SqlPackage.exe will help you do that. The following command will output an xml of the changes:   "SqlDacRuntime/SqlPackage.exe" /Action:DeployReport /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml /OutputPath:./ChangesToDeploy.xml      You might get some warnings on Log and Data file like I did. You can ignore them. Also, the tool is warning about data loss when removing a column from a table. By default, the publish.xml options will prevent you from deploying when data loss is occuring (see the BlockOnPossibleDataLoss inside the publish.xml file). Before actual deployment, take time to carefully review the changes to be applied in the ChangesToDeploy.xml file.    When you are satisfied, you can deploy your changes with the following command : "SqlDacRuntime/SqlPackage.exe" /Action:Publish /SourceFile:./Our.Database.dacpac /Profile:./Release.publish.xml   Et voilà !  Your dacpac file has been deployed to your database. I’ve been testing this on a SQL 2008 Server (not R2) but it should work on 2005, 2008 R2 and 2012 as well.   Many thanks to Anuj Chaudhary for his article on the subject : http://www.anujchaudhary.com/2012/08/sqlpackageexe-automating-ssdt-deployment.html

    Read the article

  • Menu widget - no jQuery nor Javascript required - pure CSS

    - by Renso
    Goal: Create a menu widget that does not require any javascript, extremely lightweight, very fast, soley based on CSS, compatible with FireFox and Chrome. Issues: May have some rendering issues in some versions of IE, sorry :-) Instruments: css file html with specific menu format jQuery-ui library - optional if you want to use your own images/colors Implementation Details: HTML: <div id="header">   <div id="header_Menubar">     <ul class="linkList0 ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">         <li class="first more ui-state-default ui-corner-top ui-tabs-selected"><a title="Home" href="/Home">Home</a>             <ul class="linkList01 ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">                 <li class="ifirst ui-state-default ui-corner-top"><abbr title="Go Home"></abbr><a title="Home" href="/Home">Home</a></li>             </ul>         </li>         <li class="more ui-state-default ui-corner-top ui-tabs-selected"><a title="Menu 2" href="/Menu2a">Menu 2</a>             <ul class="linkList01 ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">                 <li class="ifirst ui-state-default ui-corner-top"><abbr title="Menu 2 a"></abbr><a title="Menu 2 a" href="/Menu2a">Menu 2 a</a></li>                 <li class="ilast ui-state-default ui-corner-top"><abbr title="Menu 2 b"></abbr><a title="Menu 2 b" href="/Menu2b">Menu 2 b</a></li>             </ul>         </li>         <li class="more red ui-state-default ui-corner-top ui-tabs-selected"><a title="Menu 3" href="/Menu3 d">Menu 3</a>             <ul class="linkList01 ui-tabs-nav ui-helper-reset ui-helper-clearfix ui-widget-header ui-corner-all">                 <li class="ifirst ui-state-default ui-corner-top"><abbr title="Menu 3 a"><a title="Menu 3 a" href="/Menu3a">Menu 3 a</a></abbr></li>                 <li class="ui-state-default ui-corner-top"><abbr title="Menu 3 b"><a title="Menu 3 b" href="/Menu3b">Menu 3 b</a></abbr></li>                 <li class="ui-state-default ui-corner-top"><abbr title="Menu 3 c"><a title="Menu 3 c" href="/Menu3c">Menu 3 c</a></abbr></li>                 <li class="ilast ui-state-default ui-corner-top"><abbr title="Menu 3 d"><a title="Menu 3 d" href="/Menu3d">Menu 3 d</a></abbr></li>             </ul>         </li>     </ul>     </div> </div> CSS: /*    =Menu     -----------------------------------------------------------------------------------------    */ #header #header_Menubar {     margin: 0;     padding: 0;     border: 0;     width: 100%;     height: 22px; } #header {     background-color: #99cccc;     background-color: #aaccee;     background-color: #5BA3E0;     background-color: #006cb1; } /* Set menu bar background color     */ #header #header_Menubar {     background-attachment: scroll;     background-position: left center;     background-repeat: repeat-x; } /*    Set main (horizontal) menu typology    */ #header .linkList0 {     padding: 0 0 1em 0;     margin-bottom: 1em;     font-family: 'Trebuchet MS', 'Lucida Grande',           Verdana, Lucida, Geneva, Helvetica,           Arial, sans-serif;     font-weight: bold;     font-size: 1.085em;     font-size: 1em; } /*    Set all ul properties    */ #header .linkList0, #header .linkList0 ul {     list-style: none;     margin: 0;     padding: 0;     list-style-position: outside; } /*    Set all li properties    */ #header .linkList0 > li {     float: left;     position: relative;     font-size: 90%;     margin: 0 0 -1px;     width: 9.7em;     
padding-right: 2em;     z-index: 100;    /*IE7:    Fix for IE7 hiding drop down list behind some other page elements    */ } /*    Set all li properties    */ #header .linkList01 > li {     width: 190px; } #header .linkList0 .linkList01 li {     margin-left: 0px; } /*    Set all list background image properties    */ /*#header .linkList0 li a {     background-position: left center;     background-image: url(  '../Content/Images/VerticalButtonBarGradientFade.png' );     background-repeat: repeat-x;     background-attachment: scroll; }*/ /*    Set all A ancor properties    */ #header .linkList0 li a {     display: block;     text-decoration: none;     line-height: 22px; } /*    IE7: Fix for a bug in IE7 where the margins between list items is doubled - need to set height explicitly    */ *+html #header .linkList0 ul li {     height: auto;     margin-bottom: -.3em; } /*    Menu:    Set different borders for different nested level lists     --------------------------------------------------------------    */ #header .linkList0 > li a {     border-left: 10px solid Transparent;     border-right: none; } #header .linkList0 > li a {     border-left: 0px;     margin-left: 0px;     border-right: none; } #header .linkList0 .linkList01 > li a {     border-left: 8px solid #336699;     border-right: none;     border: 1px solid Transparent;     -moz-border-radius: 5px 5px 5px 5px;     -moz-box-shadow: 3px 3px 4px #696969; } #header .linkList0 .linkList01 .linkList001 > li a {     border-left: 6px solid #336699;     border-right: none;     border: 1px solid Transparent;     -moz-border-radius: 5px 5px 5px 5px;     -moz-box-shadow: 3px 3px 4px #696969; } #header .linkList0 .linkList01 .linkList001 .linkList0001 > li a {     border-left: 4px solid #336699;     border-right: none;     border: 1px solid Transparent;     -moz-border-radius: 5px 5px 5px 5px;     -moz-box-shadow: 3px 3px 4px #696969; }     /*    Link and Visited pseudo-class settings for all lists (ul)    */ #header .linkList0 a:link, #header .linkList0 a:visited {     display: block;     text-decoration: none;     padding-left: 1em; } /*    Hide all the nested/sub menu items    */ #header .linkList0 ul {     display: none;     padding: 0;     position: absolute;    /*Important: must not impede on other page elements when drop down opens up    */ } /*    Hide all detail popups    */ #header .detailPopup {     display: none; } /*    Set the typology of all sub-menu list items li    */ /*#header .linkList0 ul li {     background-color: #AACCEE;     background-position: left center;     background-image: url(  '../Content/Images/VerticalButtonBarGradientFade.png' );     background-repeat: repeat-x;     background-attachment: scroll; }*/ #header .linkList0 ul li.more {     background: Transparent url('../Content/Images/ArrowRight.gif') no-repeat right center; } /*    Header list's margin and padding for all list items    */ #header .linkList0 ul li {     margin: 0 0 0 1em;     padding: 0; } #header .linkList01 ul li {     margin: 0;     padding: 0;     width: 189px; } /*    Set margins for the third li sibling (Plan a Call) to display to the right of the parent menu     to avoid the sub-menu overlaying the menu items below    */ #header .linkList0 li.more .linkList01 li.more > ul.linkList001 {     margin: -1.7em 0 0 13.2em;    /*Important, must be careful, if tbe EM since gap increases too much bewteen nested lists the gap will make the nested-list collapse prematurely    */ } /*    Set right hand arrow for list items with sub-menus (class-more)    
*/ #header li.more {     background: Transparent url('../Content/Images/ArrowRight.gif') no-repeat right center;     padding-right: 48px; } /*    Menu:    Dynamic Behavior of menu items (hover, visted, etc)     -----------------------------------------------------------    */ #header .linkList0 li a:link, #header .linkList01 li a:link {     display: block; } #header .linkList0 li a:visited, #header .linkList01 li a:visited {     display: block; } #header .linkList0 > li:hover { } #header .linkList01 > li:hover a ,#header .linkList001 > li:hover a {     text-decoration: underline; } #header .linkList0 > li abbr:hover span.detailPopup {     display: block;     position: absolute;     top: 1em;     left: 17em;     border: double 1px #696969;     border-style: outset;     width: 120%;     height: auto;     padding: 5px;     font-weight: 100; } #header .linkList0 > li:hover ,#header .linkList0 .linkList01 > li:hover { } #header .linkList0 .linkList01 .linkList001 > li:hover { } #header .linkList0 .linkList01 .linkList001 .linkList0001 > li:hover { } /*    Display the hidden sub menu when hovering over the parent ul's li    */ #header .linkList0 li:hover > ul {     display: block; } /*    Display the hidden sub menu when hovering over the parent ul's li    */ #header .linkList0 .linkList01 li:hover > ul {     display: block;         background: -moz-linear-gradient(top, #1E83CC, #619FCD);     /* Chrome, Safari:*/     background: -webkit-gradient(linear,                 center top, center bottom, from(#1E83CC), to(#619FCD)); } /*    Display the hidden sub menu when hovering over the parent ul's li    */ #header .linkList0 .linkList01 .linkList001 li:hover > ul {     display: block; } /*    Set right hand arrow for list items with sub-menus (class-more) on hover    */ #header li.more:hover { } Also some CSS for global settings that will affect this menu, you of course will have some other styling, but included it here so you can see how/why some css properties were set here: /* Neutralize styling:    Elements we want to clean out entirely: */ html, body {     margin: 0;     padding: 0;     font: 62.5%/120% Verdana, Arial, Helvetica, sans-serif; } /* Neutralize styling:    Elements with a vertical margin: */ h1, h2, h3, h4, h5, h6, p, pre, blockquote, ul, ol, dl, address {     margin: 0;    /*    most browsers set some default value that is not shared by all browsers    */     padding: 0;        /*    some borowsers default padding, set to 0 for all    */ } /* Apply left margin:    Only to the few elements that need it: */ li, dd, blockquote {     margin-left: 1em; }

    Read the article

  • IIS Logfile Visualization with XNA

    - by BobPalmer
    In my office, I have a wall-mounted monitor whose whole purpose in life is to display perfmon stats from our various servers.  And on a fairly regular basis, I have folks walk by asking what the lines mean.    After providing the requisite explanation about CPU utilization, disk I/O bottlenecks, etc., this is usually followed by some blank stares from the user in question, and a distillation of all of our engineering wizardry down to the phrase 'So when the red line goes up that's bad then?'   This of course would not do.  So I talked to my friends and our network admin about an option to show something more eye-catching and visual, something that would give us at a glance a feel for what was up with our site.    He initially pointed me to a video showing GLTail and Chipmunk done in Ruby.  Realizing this was both awesome, and that I needed an excuse to do something in XNA, I decided to knock out a proof of concept for something very similar, but with a few tweaks.   Here's a link to a video of the current prototype:   http://www.youtube.com/watch?v=jM_PWZbtH2I   Essentially this app opens up a log file (even an active one) and begins pulling out the lines of text.  (Here's a good Code Project link that covers how to do tail reading from an active text file: http://www.codeproject.com/KB/files/tail.aspx).   As new data is added, a bubble is generated in the application - a GET statement comes from the left, and a POST from the right.  I then run it through a series of expression checkers, and based on the kind of statement and the pattern, a bubble of an appropriate color is generated.   For example, if I get a 500, a huge red bubble pops out.  Others are based on the part of the system the page is from - i.e. green bubbles are from our claims management subsystem, and blue bubbles are from the pages our scheduling staff use to schedule patients.  Others include the purple bubbles for security and login, and yellow bubbles for some miscellaneous pages.   The little grey bubbles represent things like images, JS, CSS, etc. - and their small size makes them work like grease to keep the larger page bubbles moving.   The app is also smart enough that if it is starting to bog down with handling the physics and interactions, it will suspend new bubbles until enough have dropped off that performance can resume (you can see this slight stuttering in the sample video).   The net result is that anyone will be able to look up on the wall monitor and instantly get a quick feel for how things are going on the floor.  Website slow?  You can get a feel for both volume and utilized modules with one glance.  Website crashing?  Look for a wall of giant red bubbles.  No activity at all?  Maybe the site is down.  Now couple this with utilization within a farm, and cross-referenced with a second app showing the same kind of data from your SQL database...   As for the app itself, it's a Windows XNA project with the code in C#.   The physics are handled by the Farseer physics engine for XNA (http://www.codeplex.com/FarseerPhysics), which is just pure goodness.  The samples are great, and I had the app up and working in two evenings (half of that was fine-tuning, and the other was me coding with a kid in my lap).   My next steps include wiring this to SQL (I have some ideas...), and adding a nice configuration module.  For example, you could use polygons, etc. to tie to your regex - or more entertaining things like having a little human ragdoll to represent a user login.     
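    To make the tail-reading and classification idea concrete, here is a rough sketch of how new lines can be pulled from an active log and mapped to bubble colors. This is not the actual app code: the file path, class name, and regex patterns below are illustrative only, and the real app feeds each classified line into the Farseer physics world instead of writing to the console.

    // Rough sketch only - log path and patterns are made up for illustration.
    using System;
    using System.IO;
    using System.Text.RegularExpressions;
    using System.Threading;

    class LogTailClassifier
    {
        static void Main()
        {
            const string logPath = @"C:\inetpub\logs\LogFiles\W3SVC1\current.log";   // hypothetical path

            // FileShare.ReadWrite lets the web server keep writing while we read.
            using (var stream = new FileStream(logPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            using (var reader = new StreamReader(stream))
            {
                stream.Seek(0, SeekOrigin.End);   // start tailing from the current end of the file

                while (true)
                {
                    var line = reader.ReadLine();
                    if (line == null) { Thread.Sleep(250); continue; }   // no new data yet - wait and retry

                    Console.WriteLine("{0} -> {1}", Classify(line), line);
                }
            }
        }

        // Map a raw log line to a bubble category, roughly as described above.
        static string Classify(string line)
        {
            if (Regex.IsMatch(line, @"\s500\s")) return "Red (HTTP 500)";
            if (Regex.IsMatch(line, @"/claims/", RegexOptions.IgnoreCase)) return "Green (claims)";
            if (Regex.IsMatch(line, @"/scheduling/", RegexOptions.IgnoreCase)) return "Blue (scheduling)";
            if (Regex.IsMatch(line, @"/login", RegexOptions.IgnoreCase)) return "Purple (security)";
            if (Regex.IsMatch(line, @"\.(gif|png|js|css)(\s|\?)", RegexOptions.IgnoreCase)) return "Grey (static content)";
            return "Yellow (misc)";
        }
    }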
    Once that's wrapped up and I have a chance to complete some hardening, I will be releasing the whole thing into the wild as open source.     Feel free to ping me if you have any questions! -Bob

    Read the article

  • Convert Dynamic to Type and convert Type to Dynamic

    - by Jon Canning
    using System;
    using System.Collections.Generic;
    using System.Dynamic;
    using System.Linq;
    using System.Linq.Expressions;

    public static class DynamicExtensions
    {
        // Builds a typed instance from a property-name/value dictionary (names matched case-insensitively).
        public static T FromDynamic<T>(this IDictionary<string, object> dictionary)
        {
            var bindings = new List<MemberBinding>();
            foreach (var sourceProperty in typeof(T).GetProperties().Where(x => x.CanWrite))
            {
                var key = dictionary.Keys.SingleOrDefault(x => x.Equals(sourceProperty.Name, StringComparison.OrdinalIgnoreCase));
                if (string.IsNullOrEmpty(key)) continue;
                var propertyValue = dictionary[key];
                bindings.Add(Expression.Bind(sourceProperty, Expression.Constant(propertyValue)));
            }
            Expression memberInit = Expression.MemberInit(Expression.New(typeof(T)), bindings);
            return Expression.Lambda<Func<T>>(memberInit).Compile().Invoke();
        }

        // Copies the public properties of a typed object into an ExpandoObject (keys are lower-cased).
        public static dynamic ToDynamic<T>(this T obj)
        {
            IDictionary<string, object> expando = new ExpandoObject();
            foreach (var propertyInfo in typeof(T).GetProperties())
            {
                var propertyExpression = Expression.Property(Expression.Constant(obj), propertyInfo);
                // Convert to object so the compiled lambda works for any property type, not just string.
                var currentValue = Expression.Lambda<Func<object>>(Expression.Convert(propertyExpression, typeof(object))).Compile().Invoke();
                expando.Add(propertyInfo.Name.ToLower(), currentValue);
            }
            return expando as ExpandoObject;
        }
    }
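    A quick usage sketch - the Customer type and its values here are made up purely to illustrate the round trip; only the two extension methods above come from the post:

    // Hypothetical type used only to demonstrate ToDynamic/FromDynamic.
    public class Customer
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    // ...
    var customer = new Customer { Name = "Ada", Age = 36 };

    dynamic expando = customer.ToDynamic();                 // expando.name == "Ada", expando.age == 36
    IDictionary<string, object> bag = expando;              // ExpandoObject implements IDictionary<string, object>

    Customer roundTripped = bag.FromDynamic<Customer>();    // back to a typed instance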

    Read the article

  • Monitoring Database disk space

    - by Michael Freidgeim
    The article Data files: To Autogrow Or Not To Autogrow? recommends NOT relying on auto-grow, because it causes delays at unplanned times. We should monitor database files (both data and log), and if they are close to maximum capacity, manually increase the size. However, the article doesn't give references on how to monitor the free space inside databases, so I looked into how to do it. It can be done manually by executing sp_spaceused for the database in question, or with sp_SOS (which can be downloaded from http://searchsqlserver.techtarget.com/tip/Find-size-of-SQL-Server-tables-and-other-objects-with-stored-procedure). Alternatively, you can run SQL commands as suggested in http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=82359 by Michael Valentine Jones: select [FREE_SPACE_MB] = convert(decimal(12,2),round((a.size-fileproperty(a.name,'SpaceUsed'))/128.000,2)) from dbo.sysfiles a. A more useful article, Monitor database file sizes with SQL Server Jobs, describes how to set up the monitoring. Finally, I found the excellent article Managing Database Data Usage With Custom Space Alerts, which can be followed even by support personnel without much DBA experience.
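    For a quick ad-hoc check from code, a minimal sketch along these lines can run the free-space query above against a database. The connection string is an assumption you would replace with your own, and the SQL is the query quoted from the forum post, extended only to also return the file name:

    // Minimal sketch - a real monitor would compare FREE_SPACE_MB against a threshold and raise an alert.
    using System;
    using System.Data.SqlClient;

    class DatabaseSpaceCheck
    {
        static void Main()
        {
            const string connectionString = "Server=.;Database=MyDatabase;Integrated Security=true";   // assumed
            const string sql =
                "select a.name, [FREE_SPACE_MB] = convert(decimal(12,2)," +
                "round((a.size-fileproperty(a.name,'SpaceUsed'))/128.000,2)) from dbo.sysfiles a";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1} MB free", reader.GetString(0).TrimEnd(), reader.GetDecimal(1));
                    }
                }
            }
        }
    }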

    Read the article

  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on Resharper 5.0 and its LINQ refactoring and thought that was very cool.  But that raised a point I had always been curious about -- which is the better choice: manual foreach loops or LINQ?    The answer is not really clear-cut.  There are two sides to any code cost argument: performance and maintainability.  The first of these is obvious and quantifiable.  Given any two pieces of code that perform the same function, you can run them side-by-side and see which piece of code performs better.   Unfortunately, this is not always a good measure.  Well-written assembly language outperforms well-written C++ code, but you lose a lot in maintainability, which creates a big technical debt load that is hard to offset as the application ages.  In contrast, higher-level constructs make the code more brief and easier to understand, hence reducing technical cost.   Now, obviously in this case we're not talking two separate languages; we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions that are in the System.Linq library.   Well, before we discuss any further, let's look at some sample code and the numbers.  First, let's take a look at the for loop and the LINQ expression.  This is just a simple find comparison:       // find implemented via LINQ     public static bool FindViaLinq(IEnumerable<int> list, int target)     {         return list.Any(item => item == target);     }         // find implemented via standard iteration     public static bool FindViaIteration(IEnumerable<int> list, int target)     {         foreach (var i in list)         {             if (i == target)             {                 return true;             }         }           return false;     }   Okay, looking at this from a maintainability point of view, the LINQ expression is definitely more concise (8 lines down to 1) and is very readable in intention.  You don't have to actually analyze the behavior of the loop to determine what it's doing.   So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data.  For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.                       List<T> On 100,000 Iterations     Method      Size     Total (ms)  Per Iteration (ms)  % Slower     Any         10       26          0.00046             30.00%     Iteration   10       20          0.00023             -     Any         100      116         0.00201             18.37%     Iteration   100      98          0.00118             -     Any         1000     1058        0.01853             16.78%     Iteration   1000     906         0.01155             -     Any         10,000   10,383      0.18189             17.41%     Iteration   10,000   8843        0.11362             -     Any         100,000  104,004     1.8297              18.27%     Iteration   100,000  87,941      1.13163             -   The LINQ expression is running about 17% slower for average-size collections, and worse for smaller collections.  Presumably, this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.   So what about other LINQ expressions?  After all, Any() is one of the more trivial ones.  
I decided to try the TakeWhile() algorithm, using a Count() to get the position where it stopped, like the sample Pete was using in his blog that Resharper refactored for him into LINQ:       // Linq form     public static int GetTargetPosition1(IEnumerable<int> list, int target)     {         return list.TakeWhile(item => item != target).Count();     }       // traditionally iterative form     public static int GetTargetPosition2(IEnumerable<int> list, int target)     {         int count = 0;           foreach (var i in list)         {             if(i == target)             {                 break;             }               ++count;         }           return count;     }   Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt.  So I ran these through the same test data:                       List<T> On 100,000 Iterations     Method      Size     Total (ms)  Per Iteration (ms)  % Slower     TakeWhile   10       41          0.00041             128%     Iteration   10       18          0.00018             -     TakeWhile   100      171         0.00171             88%     Iteration   100      91          0.00091             -     TakeWhile   1000     1604        0.01604             94%     Iteration   1000     825         0.00825             -     TakeWhile   10,000   15765       0.15765             92%     Iteration   10,000   8204        0.08204             -     TakeWhile   100,000  156950      1.5695              92%     Iteration   100,000  81635       0.81635             -     Wow!  I expected some overhead due to the state machines iterators produce, but 90% slower?  That seems a little heavy to me.  So then I thought, well, what if TakeWhile() is not the right tool for the job?  The problem is TakeWhile returns each item for processing using yield return, whereas our for-loop really doesn't care about the item beyond using it as a stop condition to evaluate.  So what if that back and forth with the iterator state machine is the problem?  Well, we can quickly create an (albeit ugly) lambda that uses Any() along with a count in a closure (if a LINQ guru knows a better way PLEASE let me know!) -- after all, this is more consistent with what we're trying to do: we're trying to find the first occurrence of an item and halt once we find it, we just happen to be counting on the way.  This mostly matches Any().       // a new method that uses linq but evaluates the count in a closure.     public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)     {         int count = 0;         list.Any(item =>             {                 if(item == target)                 {                     return true;                 }                   ++count;                 return false;             });         return count;     }     Now how does this one compare?                         
List<T> On 100,000 Iterations     Method         Size     Total (ms)  Per Iteration (ms)  % Slower     TakeWhile      10       41          0.00041             128%     Any w/Closure  10       23          0.00023             28%     Iteration      10       18          0.00018             -     TakeWhile      100      171         0.00171             88%     Any w/Closure  100      116         0.00116             27%     Iteration      100      91          0.00091             -     TakeWhile      1000     1604        0.01604             94%     Any w/Closure  1000     1101        0.01101             33%     Iteration      1000     825         0.00825             -     TakeWhile      10,000   15765       0.15765             92%     Any w/Closure  10,000   10802       0.10802             32%     Iteration      10,000   8204        0.08204             -     TakeWhile      100,000  156950      1.5695              92%     Any w/Closure  100,000  108378      1.08378             33%     Iteration      100,000  81635       0.81635             -     Much better!  It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() simply iterates forward until it finds the value we're looking for -- which matches the task we're attempting to do.   So the lesson there is: make sure when you use a LINQ expression you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm.  But this is true of any choice of algorithm or collection in general.     Even with the Any() with the count in the closure, it is still about 30% slower, but let's consider that angle carefully.  For a list of 100,000 items, it was the difference between roughly 1.08 ms and 0.82 ms per iteration in a List<T>.  That's really not that bad at all in the grand scheme of things.  Even running at 90% slower with TakeWhile(), for the vast majority of my projects, an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay.  And if your typical list is 1000 items or less, we're talking only microseconds worth of difference.   It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off.  So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt) and I can rely on Microsoft's developers to have coded and unit-tested those algorithms fully for me, instead of relying on a developer to code the loop logic correctly.   If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off.  And if I ever do need that last millisecond of performance?  Well then I'll optimize JUST THAT problem spot.  To me it's worth it for the readability, speed-to-market, and maintainability.
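    For anyone who wants to reproduce numbers like these, a minimal Stopwatch-based harness along these lines is enough to get started. This is a sketch, not the author's actual test rig: the list size, random seed, and iteration count are arbitrary, and absolute timings will vary by machine and runtime.

    // Rough benchmarking sketch - compares Any() against the manual foreach search.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    class LinqVsForeachBenchmark
    {
        static void Main()
        {
            const int iterations = 100000;
            var random = new Random(12345);
            var list = Enumerable.Range(0, 1000).Select(_ => random.Next()).ToList();
            var targets = Enumerable.Range(0, iterations).Select(_ => random.Next()).ToArray();

            Time("Any",       () => { foreach (var t in targets) list.Any(item => item == t); });
            Time("Iteration", () => { foreach (var t in targets) FindViaIteration(list, t); });
        }

        static bool FindViaIteration(IEnumerable<int> list, int target)
        {
            foreach (var i in list)
            {
                if (i == target) return true;
            }
            return false;
        }

        static void Time(string name, Action action)
        {
            var stopwatch = Stopwatch.StartNew();
            action();
            stopwatch.Stop();
            Console.WriteLine("{0}: {1} ms total", name, stopwatch.ElapsedMilliseconds);
        }
    }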

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >