Search Results

Search found 24043 results on 962 pages for 'private methods'.


  • Access Control Service: Passive/Active Transition Sample

    - by Your DisplayName here!
    Here you can find my updated ACS2 sample. In addition to the existing front ends (web [WS-Federation], console [SOAP & REST], Silverlight [REST]) and error handling, it now also includes a WPF client that shows the passive/active transition with a SOAP service, as illustrated here. All the ACS interaction is encapsulated in a WPF user control that:

    retrieves the JSON feed
    displays a list of supported identity providers
    triggers the sign-in via a browser control
    retrieves the token response
    packages the token as a GenericXmlSecurityToken (to be used directly with the WIF ChannelFactory extension methods)

    All you need to supply is the ACS namespace and the realm. Have fun!
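    As an aside, a minimal sketch of consuming such a token with the WIF ChannelFactory extension methods might look like the following; the service contract, endpoint name and operation are placeholders, not part of the sample:

    // Sketch only: IMyService, "wsFedEndpoint" and Ping() are assumed placeholders.
    // genericXmlToken is the GenericXmlSecurityToken produced by the user control above.
    var factory = new ChannelFactory<IMyService>("wsFedEndpoint");
    factory.ConfigureChannelFactory();                          // WIF extension: prepares the factory for issued tokens
    IMyService proxy = factory.CreateChannelWithIssuedToken(genericXmlToken);
    Console.WriteLine(proxy.Ping());                            // hypothetical service operation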

  • General approach to isometrics

    - by MrThys
    I am currently discovering the world of isometrics, and I have found there are two approaches to creating the tilemap:

    1. Just create 2:1 ratio tile images and draw those.
    2. Create square tiles and transform them to the 2:1 ratio.

    What is the general approach to developing an isometric game? I was also wondering a few things: How do better-known games like AOE1/2 do this? What are the pros/cons of both methods? Which method is preferred in this day and age? Edit: added a more general question.
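    Whichever drawing approach is used, the placement math is the same; here is a minimal C# sketch of the standard 2:1 diamond projection (the tile size and method name are illustrative assumptions, not from the question):

    using System.Drawing;

    static Point MapToScreen(int mapX, int mapY, int tileWidth = 64, int tileHeight = 32)
    {
        // Diamond projection: east/west movement follows (mapX - mapY),
        // north/south movement follows (mapX + mapY), halved by the 2:1 tile size.
        int screenX = (mapX - mapY) * (tileWidth / 2);
        int screenY = (mapX + mapY) * (tileHeight / 2);
        return new Point(screenX, screenY);
    }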

  • A WPF Image Button

    - by psheriff
    Instead of a normal button with words, sometimes you want a button that is just graphical. Yes, you can put an Image control in the Content of a normal Button control, but you still have the button outline, and trying to change the style can be rather difficult. Instead I like creating a user control that simulates a button, but just accepts an image. Figure 1 shows an example of three of these custom user controls representing minimize, maximize and close buttons for a borderless window. Notice the highlighted image button has a gray rectangle around it. You will learn how to highlight using the VisualStateManager in this blog post.

    Figure 1: Creating a custom user control for things like image buttons gives you complete control over the look and feel.

    I would suggest you read my previous blog post on creating a custom Button user control, as it is a good primer for what I am going to expand upon here. You can find that post at http://weblogs.asp.net/psheriff/archive/2012/08/10/create-your-own-wpf-button-user-controls.aspx.

    The User Control

    The XAML for this image button user control contains just a few controls, plus a Visual State Manager. The basic outline of the user control is shown below:

    <Border Grid.Row="0"
            Name="borMain"
            Style="{StaticResource pdsaButtonImageBorderStyle}"
            MouseEnter="borMain_MouseEnter"
            MouseLeave="borMain_MouseLeave"
            MouseLeftButtonDown="borMain_MouseLeftButtonDown">
      <VisualStateManager.VisualStateGroups>
        ... MORE XAML HERE ...
      </VisualStateManager.VisualStateGroups>
      <Image Style="{StaticResource pdsaButtonImageImageStyle}"
             Visibility="{Binding Path=Visibility}"
             Source="{Binding Path=ImageUri}"
             ToolTip="{Binding Path=ToolTip}" />
    </Border>

    There is a Border control named borMain and a single Image control in this user control. That is all that is needed to display the buttons shown in Figure 1. The definition for this user control is in a DLL named PDSA.WPF. The Style definitions for both the Border and the Image controls are contained in a resource dictionary named PDSAButtonStyles.xaml. Using a resource dictionary allows you to create a few different resource dictionaries, each with a different theme for the buttons.

    The Visual State Manager

    To display the highlight around the button as your mouse moves over the control, you will need to add a Visual State Manager group. Two different states are needed: MouseEnter and MouseLeave. In the MouseEnter state you create a ColorAnimation to modify the BorderBrush color of the Border control. You specify the color to animate as "DarkGray" and set the duration to less than a second. The TargetName of this storyboard is the name of the Border control, "borMain", and since we are animating a single color, you set the TargetProperty to "BorderBrush.Color". You do not need any storyboard for the MouseLeave state.
    Leaving this VisualState empty tells the Visual State Manager to put everything back the way it was before the MouseEnter event.

    <VisualStateManager.VisualStateGroups>
      <VisualStateGroup Name="MouseStates">
        <VisualState Name="MouseEnter">
          <Storyboard>
            <ColorAnimation
                To="DarkGray"
                Duration="0:0:00.1"
                Storyboard.TargetName="borMain"
                Storyboard.TargetProperty="BorderBrush.Color" />
          </Storyboard>
        </VisualState>
        <VisualState Name="MouseLeave" />
      </VisualStateGroup>
    </VisualStateManager.VisualStateGroups>

    Writing the Mouse Events

    To trigger the Visual State Manager to run its storyboard in response to the specified event, you need to respond to the MouseEnter event on the Border control. In the code behind for this event, call the GoToElementState() method of the VisualStateManager class exposed by the user control. To this method you pass in the target element ("borMain") and the state ("MouseEnter"). The VisualStateManager will then run the storyboard contained within the defined state in the XAML.

    private void borMain_MouseEnter(object sender, MouseEventArgs e)
    {
      VisualStateManager.GoToElementState(borMain, "MouseEnter", true);
    }

    You also need to respond to the MouseLeave event. In this event you call the VisualStateManager as well, but specify "MouseLeave" as the state to go to.

    private void borMain_MouseLeave(object sender, MouseEventArgs e)
    {
      VisualStateManager.GoToElementState(borMain, "MouseLeave", true);
    }

    The Resource Dictionary

    Below is the definition of the PDSAButtonStyles.xaml resource dictionary file contained in the PDSA.WPF DLL. This dictionary can be used as the default look and feel for any image button control you add to a window.

    <ResourceDictionary ... >
      <!-- ************************* -->
      <!-- ** Image Button Styles ** -->
      <!-- ************************* -->
      <!-- Image/Text Button Border -->
      <Style TargetType="Border" x:Key="pdsaButtonImageBorderStyle">
        <Setter Property="Margin" Value="4" />
        <Setter Property="Padding" Value="2" />
        <Setter Property="BorderBrush" Value="Transparent" />
        <Setter Property="BorderThickness" Value="1" />
        <Setter Property="VerticalAlignment" Value="Top" />
        <Setter Property="HorizontalAlignment" Value="Left" />
        <Setter Property="Background" Value="Transparent" />
      </Style>
      <!-- Image Button -->
      <Style TargetType="Image" x:Key="pdsaButtonImageImageStyle">
        <Setter Property="Width" Value="40" />
        <Setter Property="Margin" Value="6" />
        <Setter Property="VerticalAlignment" Value="Top" />
        <Setter Property="HorizontalAlignment" Value="Left" />
      </Style>
    </ResourceDictionary>

    Using the Button Control

    Once you make a reference to the PDSA.WPF DLL from your WPF application you will see the "PDSAucButtonImage" control appear in your Toolbox. Drag and drop the button onto a Window or User Control in your application.
    I have not referenced the PDSAButtonStyles.xaml file within the control itself, so you do need to add a reference to this resource dictionary somewhere in your application, such as in App.xaml:

    <Application.Resources>
      <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
          <ResourceDictionary
            Source="/PDSA.WPF;component/PDSAButtonStyles.xaml" />
        </ResourceDictionary.MergedDictionaries>
      </ResourceDictionary>
    </Application.Resources>

    This will give your buttons a default look and feel unless you override that dictionary on a specific Window or User Control or on an individual button. After you have given a global style to your application and you drag your image button onto a window, the following will appear in your XAML window:

    <my:PDSAucButtonImage ... />

    There will be some other attributes set on the above XAML, but you simply need to set the x:Name, ToolTip and ImageUri properties. You will also want to respond to the Click event in order to associate an action with clicking on this button. In the sample code you download for this blog post you will find the declaration of the Minimize button to be the following:

    <my:PDSAucButtonImage
        x:Name="btnMinimize"
        Click="btnMinimize_Click"
        ToolTip="Minimize Application"
        ImageUri="/PDSA.WPF;component/Images/Minus.png" />

    The ImageUri property is a dependency property in the PDSAucButtonImage user control (see the sketch at the end of this post). The x:Name and the ToolTip we get for free. You have to create the Click event yourself. This is also created in the PDSAucButtonImage user control as follows:

    private void borMain_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
      RaiseClick(e);
    }

    public delegate void ClickEventHandler(object sender, RoutedEventArgs e);

    public event ClickEventHandler Click;

    protected void RaiseClick(RoutedEventArgs e)
    {
      if (null != Click)
        Click(this, e);
    }

    Since a Border control does not have a Click event, you create one by using the MouseLeftButtonDown event on the border to fire an event you create called "Click".

    Summary

    Creating your own image button control can be done in a variety of ways. In this blog post I showed you how to create a custom user control and simulate a button using a Border and an Image control. With just a little bit of code to respond to the MouseLeftButtonDown event on the border you can raise your own Click event. Dependency properties, such as ImageUri, allow you to set attributes on your custom user control. Feel free to expand on this button by adding additional dependency properties, changing the resource dictionary, or even the animation to make this button look and act like you want.

    NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "A WPF Image Button" from the drop-down list.
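    One piece the post mentions but does not list is the ImageUri dependency property itself. A minimal sketch of how such a property is typically registered follows; the class shape and default value are assumptions based on the post, not code from the sample download:

    using System.Windows;

    public partial class PDSAucButtonImage
    {
        // Registration of the ImageUri dependency property referenced above.
        public static readonly DependencyProperty ImageUriProperty =
            DependencyProperty.Register("ImageUri", typeof(string),
                typeof(PDSAucButtonImage), new PropertyMetadata(string.Empty));

        // CLR wrapper so XAML like ImageUri="/PDSA.WPF;component/Images/Minus.png" works.
        public string ImageUri
        {
            get { return (string)GetValue(ImageUriProperty); }
            set { SetValue(ImageUriProperty, value); }
        }
    }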

  • Windows Azure End to End Examples

    - by BuckWoody
    I'm fascinated by the way people learn. I'm told there are several methods people use to understand new information, from reading to watching, from experiencing to exploring. Personally, I use multiple methods of learning when I encounter a new topic, usually starting with reading a bit about the concepts. I quickly want to put those into practice, however, especially in the technical realm. I immediately look for examples where I can start trying out the concepts. But I often want a "real" example - not just something that represents the concept, but something that is real-world, showing some feature I could actually use. And it's no different with the Windows Azure platform - I like finding things I can do now, and actually use. So when I started learning Windows Azure, I of course began with the Windows Azure Training Kit - which has lots of examples and labs, presentations and so on. But from there, I wanted more examples I could learn from, and eventually teach others with. I was asked if I would write a few of those up, so here are the ones I use.

    CodePlex

    CodePlex is Microsoft's version of an "Open Source" repository. Anyone can start a project, add code, documentation and more to it and make it available to the world, free of charge, using various licenses as they wish. Microsoft also uses this location for most of the examples we publish, and for sample databases for SQL Server. If you search in CodePlex for "Azure", you'll come back with a list of projects that folks have posted, including those of us at Microsoft. The source code and documentation are there, so you can learn using actual examples of code that will do what you need. There's everything from a simple table query to a full project that is sort of a "corporate Dropbox" that uses Windows Azure Storage. The advantage is that this code is immediately usable. It's searchable, and you can often find a complete solution to meet your needs. The disadvantage is that the code is pretty specific - it may not cover a huge project like you're looking for. Also, depending on the author(s), you might not find the documentation level you want.

    Link: http://azureexamples.codeplex.com/site/search?query=Azure&ac=8

    Tailspin

    Microsoft Patterns and Practices is a group here that does an amazing job at sharing standard ways of doing IT - from operations to coding. If you're not familiar with this resource, make sure you read up on it. Long before I joined Microsoft I used their work in my daily job - it saved a ton of time. It has resources not only for Windows Azure but for other Microsoft software as well. The Patterns and Practices group also publishes full books - you can buy these, but many are also online for free. There's an end-to-end example for Windows Azure using a company called "Tailspin", and the work covers not only the code but the design of the full solution. If you really want to understand the thought that goes into a Platform-as-a-Service solution, this is an excellent resource. The advantages are that this is a book, it's complete, and it includes a discussion of design decisions. The disadvantage is that it's a little over a year old - and in "Cloud" years that's a lot. So many things have changed, improved, and been added that you need to treat this as a resource, but not the only one. Still, highly recommended.

    Link: http://msdn.microsoft.com/en-us/library/ff728592.aspx

    Azure Stock Trader

    Sometimes you need a mix of a CodePlex-style application and a little more detail on how it was put together. And it would be great if you could actually play with the completed application, to see how it really functions on the actual platform. That's the Azure Stock Trader application. There's a place where you can read about the application, and then it's been published to Windows Azure - the production platform - so you can use it, explore, and see how it performs. I use this application all the time to demonstrate Windows Azure, or a particular part of Windows Azure. The advantage is that this is an end-to-end application, and online as well. The disadvantage is that it takes a bit of self-learning to work through.

    Links:
    Learn it: http://msdn.microsoft.com/en-us/netframework/bb499684
    Use it: https://azurestocktrader.cloudapp.net/

  • Mobile redirect strategy

    - by Kevin
    Looking for help on deciding how to redirect users to a mobile-optimized version of my site (m.mysite.com). I am looking at two methods:

    1. Server configuration (.htaccess or even Varnish)
    2. Web app (PHP)

    The problem I see with #1 is the "view full site" link on the mobile site. If a user clicks that link and goes to mysite.com, won't the server just redirect them back to m.mysite.com? For #2 I could create a cookie that is checked in addition to the user agent. Any suggestions/comments? Is there a better way to "remember" that the user clicked "visit full site"? Thanks, Kevin
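    For reference, the cookie check can also be done at the server level, which combines the two methods; below is a hedged .htaccess sketch of the idea. The cookie name, host names and user-agent pattern are illustrative assumptions, not from the question:

    # Hypothetical mod_rewrite sketch: send mobile user agents to m.mysite.com
    # unless the visitor has opted into the full site via a "fullsite" cookie.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?mysite\.com$ [NC]
    RewriteCond %{HTTP_COOKIE} !fullsite=1
    RewriteCond %{HTTP_USER_AGENT} (iphone|android|mobile) [NC]
    RewriteRule ^(.*)$ http://m.mysite.com/$1 [R=302,L]

    # The "view full site" link would then point at a page that sets the cookie,
    # e.g. Set-Cookie: fullsite=1; domain=.mysite.com; path=/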

  • Reset the controls within asp placeholder

    - by alaa9jo
    I use placeholders a lot in many projects, but in one project I was asked to reset the controls within a specific placeholder to their original state after the user has finished inserting his/her data. As we all know, to keep the controls within a placeholder from disappearing after postbacks you have to recreate them; we add such code in, e.g., Page_Load, the controls get recreated, and for some of them (TextBox, CheckBox, etc.) their values are loaded from ViewState automatically. That placeholder contains only TextBoxes, so what I need is to block/clear that ViewState after inserting the data.

    First thought: customize the placeholder. I thought about it and tried overriding many methods, but with no success at all... maybe I'm missing something, not sure.

    Second thought: recreate the controls twice. In Page_Load I recreate the controls within that placeholder; then in the button click (the button that saves the user's data) I recreate them once more, and it worked!

    I just thought of sharing my experience with this case with everyone in case anyone needs it; any better suggestion(s) are welcome.
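    A minimal sketch of the "recreate twice" approach described above (ASP.NET Web Forms; the control names and counts are made up for illustration):

    using System;
    using System.Web.UI.WebControls;

    public partial class EntryPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Recreate on every request so posted values reload from ViewState.
            CreateControls();
        }

        private void CreateControls()
        {
            // phFields is an assumed PlaceHolder declared in the markup.
            phFields.Controls.Clear();
            for (int i = 0; i < 3; i++)
                phFields.Controls.Add(new TextBox { ID = "txtField" + i });
        }

        protected void btnSave_Click(object sender, EventArgs e)
        {
            // ... persist the user's data here ...
            // Recreating a second time, after the postback data has already been
            // loaded, discards the posted values and resets the textboxes.
            CreateControls();
        }
    }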

  • Virtualhost entries gets over-written when apache httpd.conf is rebuilt

    - by Amitabh
    Background: We have been trying to get a wildcard SSL certificate working on multiple subdomains on a single dedicated IP address. We have two subdomains, next.my-personal-website.com and blog.my-personal-website.com. Part of our strategy has been to edit httpd.conf and add the NameVirtualHost xx.xx.144.72:443 directive and the virtualhost entries for port 443 for the subdomains there. This works fine if we just edit httpd.conf, add the entries, save it and restart Apache.

    The problem: But if we add a new subdomain from cPanel, or we run

    # /usr/local/cpanel/bin/apache_conf_distiller --update
    # /scripts/rebuildhttpdconf

    the virtualhost entries that we added manually are no longer there in the newly generated httpd.conf file. Only the virtualhost entry for the main domain for port 443, which was there before we made edits to httpd.conf, remains (assuming we are not discussing virtualhost entries for port 80). I understand we need to put the new virtualhost entries in some include files, as mentioned here in the cPanel documentation, but am not sure where. So the question would be: where do I put the NameVirtualHost xx.xx.144.72:443 directive and the two virtualhost directives for port 443, so that they are not overwritten when httpd.conf is rebuilt/regenerated later?

    Virtualhost entries: The two virtualhost entries for the subdomains are:

    <VirtualHost xx.xx.144.72:443>
      ServerName next.my-personal-website.com
      ServerAlias www.next.my-personal-website.com
      DocumentRoot /home/myguardi/public_html/next.my-personal-website.com
      ServerAdmin [email protected]
      UseCanonicalName On
      CustomLog /usr/local/apache/domlogs/next.my-personal-website.com combined
      CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
      ## User myguardi # Needed for Cpanel::ApacheConf
      <IfModule mod_suphp.c>
        suPHP_UserGroup myguardi myguardi
      </IfModule>
      <IfModule !mod_disable_suexec.c>
        SuexecUserGroup myguardi myguardi
      </IfModule>
      ScriptAlias /cgi-bin/ /home/myguardi/public_html/next.my-personal-website.com/cgi-bin/
      SSLEngine on
      SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt
      SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key
      SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle
      CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-ssl_log combined
      SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
      <Directory "/home/myguardi/public_html/cgi-bin">
        SSLOptions +StdEnvVars
      </Directory>
    </VirtualHost>

    and

    <VirtualHost xx.xx.144.72:443>
      ServerName blog.my-personal-website.com
      ServerAlias www.blog.my-personal-website.com
      DocumentRoot /home/myguardi/public_html/blog.my-personal-website.com
      ServerAdmin [email protected]
      UseCanonicalName On
      CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com combined
      CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
      ## User myguardi # Needed for Cpanel::ApacheConf
      <IfModule mod_suphp.c>
        suPHP_UserGroup myguardi myguardi
      </IfModule>
      <IfModule !mod_disable_suexec.c>
        SuexecUserGroup myguardi myguardi
      </IfModule>
      ScriptAlias /cgi-bin/ /home/myguardi/public_html/blog.my-personal-website.com/cgi-bin/
      SSLEngine on
      SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt
      SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key
      SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle
      CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-ssl_log combined
      SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
      <Directory "/home/myguardi/public_html/cgi-bin">
        SSLOptions +StdEnvVars
      </Directory>
    </VirtualHost>

    and the automatically generated virtualhost entry for the main domain for port 443 is

    <VirtualHost xx.xx.144.72:443>
      ServerName my-personal-website.com
      ServerAlias www.my-personal-website.com
      DocumentRoot /home/myguardi/public_html
      ServerAdmin [email protected]
      UseCanonicalName Off
      CustomLog /usr/local/apache/domlogs/my-personal-website.com combined
      CustomLog /usr/local/apache/domlogs/my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
      ## User myguardi # Needed for Cpanel::ApacheConf
      <IfModule mod_suphp.c>
        suPHP_UserGroup myguardi myguardi
      </IfModule>
      <IfModule !mod_disable_suexec.c>
        SuexecUserGroup myguardi myguardi
      </IfModule>
      ScriptAlias /cgi-bin/ /home/myguardi/public_html/cgi-bin/
      SSLEngine on
      SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt
      SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key
      SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle
      CustomLog /usr/local/apache/domlogs/my-personal-website.com-ssl_log combined
      SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
      <Directory "/home/myguardi/public_html/cgi-bin">
        SSLOptions +StdEnvVars
      </Directory>
      # To customize this VirtualHost use an include file at the following location
      # Include "/usr/local/apache/conf/userdata/ssl/2/myguardi/my-personal-website.com/*.conf"
    </VirtualHost>

    I really appreciate it if somebody can tell me how to proceed on this. Thank you.

    Update: The Include directives present are:

    Include "/usr/local/apache/conf/includes/pre_main_global.conf"
    Include "/usr/local/apache/conf/includes/pre_main_2.conf"
    Include "/usr/local/apache/conf/php.conf"
    Include "/usr/local/apache/conf/includes/errordocument.conf"
    Include "/usr/local/apache/conf/modsec2.conf"
    Include "/usr/local/apache/conf/includes/pre_virtualhost_global.conf"
    Include "/usr/local/apache/conf/includes/pre_virtualhost_2.conf"

    These are the entries that are generated before any virtualhost entry is defined. Towards the end of the httpd.conf file, the following two entries are added:

    Include "/usr/local/apache/conf/includes/post_virtualhost_global.conf"
    Include "/usr/local/apache/conf/includes/post_virtualhost_2.conf"

    The older httpd.conf file, before we added the virtualhost entries for the subdomains for port 443, can be viewed here.
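    For what it's worth, the paths quoted above hint at one arrangement; the following is only a sketch inferred from those quoted Include lines (the per-subdomain directory name is an assumption), not verified cPanel guidance:

    # NameVirtualHost must appear before any vhost, so the pre_virtualhost include
    # quoted in the Update above would be one candidate location:
    #   /usr/local/apache/conf/includes/pre_virtualhost_global.conf
    NameVirtualHost xx.xx.144.72:443

    # The subdomain vhosts could then live under the userdata path pattern shown in
    # the generated comment, e.g.:
    #   /usr/local/apache/conf/userdata/ssl/2/myguardi/next.my-personal-website.com/vhost.conf
    <VirtualHost xx.xx.144.72:443>
        ServerName next.my-personal-website.com
        # ... remaining directives as in the entry shown above ...
    </VirtualHost>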

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    The Joes 2 Pros SQL Server learning series is indeed fun. The Joes 2 Pros series is written for beginners who want to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining the simplest concepts, such as how to open SQL Server Management Studio. Honestly, the books start that basic, but as the series progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of focusing only on what will be on the exam, the series focuses on learning the important concepts thoroughly. The books take no shortcuts in explaining any concept and, at times, go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015

    The XML data type was first introduced with SQL Server 2005. This data type continues in SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008 (a small worked example appears at the end of this post):

    query() – Used to extract XML fragments from an XML data type.
    value() – Used to extract a single value from an XML document.
    exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
    modify() – Updates XML data in an XML data type.
    nodes() – Shreds XML data into multiple rows (not covered in this blog post).

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014

    Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16:

    Statement Termination – The statement within the procedure fails but the code keeps on running to the next statement. Transactions are not affected.
    Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value.
    Batch Termination – The entire client call is terminated.
    XACT_ABORT – (ON = The entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013

    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.
    Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and only by experienced developers and database administrators, to achieve the desired results.

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Hierarchical Query – SQL in Sixty Seconds #012

    A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a temp table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly:

    Anchor query (runs once and the results 'seed' the recursive query)
    Recursive query (runs multiple times and is the criteria for the remaining results)
    UNION ALL statement to bind the anchor and recursive queries together
    INNER JOIN statement to bind the recursive query to the results of the CTE

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Server Security – SQL in Sixty Seconds #011

    Let's get some basic definitions down first. Take the workplace example where "Tom" needs "Read" access to the "Financial Folder". What are the Securable, Principal, and Permission in that last sentence? A Securable is a resource that someone might want to access (like the Financial Folder). A Principal is anything that might want to gain access to the securable (like Tom). A Permission is the level of access a principal has to a securable (like Read).

    [Detailed Blog Post] | [Quiz with Answer]

    Please leave a comment explaining which video was your favorite, as that will help me understand what works and what needs improvement.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
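    As a footnote to the XML data type methods covered above, here is a small illustrative T-SQL snippet exercising all five of them; the XML shape and values are made up for the example, not taken from the videos:

    -- Illustrative only: a throwaway XML instance exercising the five methods.
    DECLARE @x XML = N'<order id="1"><item qty="2">Widget</item></order>';

    SELECT
        @x.query('/order/item')                   AS ItemFragment, -- query(): extract a fragment
        @x.value('(/order/item/@qty)[1]', 'int')  AS Qty,          -- value(): single typed value
        @x.exist('/order[@id="1"]')               AS HasOrder1;    -- exist(): 1 if the node exists

    SET @x.modify('replace value of (/order/item/@qty)[1] with 5'); -- modify(): update in place

    SELECT n.c.value('@qty', 'int') AS Qty
    FROM @x.nodes('/order/item') AS n(c);                           -- nodes(): shred into rows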

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.

    So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.

    But there are also a couple of downsides:

    No assembly reference means only dynamic access - no compiler type checking or Intellisense
    Requirement for the host application to have a reference to JSON.NET or else get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    Load types into dynamic variables
    Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed.

    Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

    /// <summary>
    /// Creates an instance of a type based on a string. Assumes that the type's
    /// </summary>
    /// <param name="typeName">Common name of the type</param>
    /// <param name="args">Any constructor parameters</param>
    /// <returns></returns>
    public static object CreateInstanceFromString(string typeName, params object[] args)
    {
        object instance = null;
        Type type = null;
        try
        {
            type = GetTypeFromName(typeName);
            if (type == null)
                return null;
            instance = Activator.CreateInstance(type, args);
        }
        catch
        {
            return null;
        }
        return instance;
    }

    /// <summary>
    /// Helper routine that looks up a type name and tries to retrieve the
    /// full type reference in the actively executing assemblies.
    /// </summary>
    /// <param name="typeName"></param>
    /// <returns></returns>
    public static Type GetTypeFromName(string typeName)
    {
        Type type = null;

        // Let default name binding find it
        type = Type.GetType(typeName, false);
        if (type != null)
            return type;

        // look through the assembly list
        var assemblies = AppDomain.CurrentDomain.GetAssemblies();

        // try to find it manually
        foreach (Assembly asm in assemblies)
        {
            type = asm.GetType(typeName, false);
            if (type != null)
                break;
        }
        return type;
    }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils:

    /// <summary>
    /// Dynamically creates an instance of JSON.NET
    /// </summary>
    /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
    /// <returns>Dynamic JsonSerializer instance</returns>
    public static dynamic CreateJsonNet(bool throwExceptions = true)
    {
        if (JsonNet != null)
            return JsonNet;

        lock (SyncLock)
        {
            if (JsonNet != null)
                return JsonNet;

            // Try to create instance
            dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

            if (json == null)
            {
                try
                {
                    var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                    json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                }
                catch (Exception ex)
                {
                    if (throwExceptions)
                        throw;
                    return null;
                }
            }

            if (json == null)
                return null;

            json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

            // Enums as strings in JSON
            dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
            json.Converters.Add(enumConverter);

            JsonNet = json;
        }

        return JsonNet;
    }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.

    Once the instance is loaded, some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.

    As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file.

    Here's what the file serialization looks like:

    /// <summary>
    /// Serializes an object instance to a JSON file.
    /// </summary>
    /// <param name="value">the value to serialize</param>
    /// <param name="fileName">Full path to the file to write out with JSON.</param>
    /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
    /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
    /// <returns>true or false</returns>
    public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
    {
        dynamic writer = null;
        FileStream fs = null;
        try
        {
            Type type = value.GetType();
            var json = CreateJsonNet(throwExceptions);
            if (json == null)
                return false;

            fs = new FileStream(fileName, FileMode.Create);
            var sw = new StreamWriter(fs, Encoding.UTF8);

            writer = Activator.CreateInstance(JsonTextWriterType, sw);

            if (formatJsonOutput)
                writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

            writer.QuoteChar = '"';
            json.Serialize(writer, value);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
            if (throwExceptions)
                throw;
            return false;
        }
        finally
        {
            if (writer != null)
                writer.Close();
            if (fs != null)
                fs.Close();
        }
        return true;
    }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic, it's still fairly short and readable.

    For full circle operation here's the DeserializeFromFile() version:

    /// <summary>
    /// Deserializes an object from file and returns a reference.
    /// </summary>
    /// <param name="fileName">name of the file to deserialize from</param>
    /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
    /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
    /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
    public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
    {
        dynamic json = CreateJsonNet(throwExceptions);
        if (json == null)
            return null;

        object result = null;
        dynamic reader = null;
        FileStream fs = null;
        try
        {
            fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
            var sr = new StreamReader(fs, Encoding.UTF8);
            reader = Activator.CreateInstance(JsonTextReaderType, sr);
            result = json.Deserialize(reader, objectType);
            reader.Close();
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
            if (throwExceptions)
                throw;
            return null;
        }
        finally
        {
            if (reader != null)
                reader.Close();
            if (fs != null)
                fs.Close();
        }
        return result;
    }

    This code is a little more compact since there are no prettifying options to set.
    Here the JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.

    These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this:

    /// <summary>
    /// Reads and writes configuration settings in .NET config files and
    /// sections. Allows reading and writing to default or external files
    /// and specification of the configuration section that settings are
    /// applied to.
    /// </summary>
    public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
        where TAppConfiguration : AppConfiguration, new()
    {
        /// <summary>
        /// Optional - the Configuration file where configuration settings are
        /// stored in. If not specified uses the default Configuration Manager
        /// and its default store.
        /// </summary>
        public string JsonConfigurationFile
        {
            get { return _JsonConfigurationFile; }
            set { _JsonConfigurationFile = value; }
        }
        private string _JsonConfigurationFile = string.Empty;

        public override bool Read(AppConfiguration config)
        {
            var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
            if (newConfig == null)
            {
                if (Write(config))
                    return true;
                return false;
            }

            DecryptFields(newConfig);
            DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
            return true;
        }

        /// <summary>
        /// Reads configuration settings into a new instance.
        /// </summary>
        /// <typeparam name="TAppConfig"></typeparam>
        /// <returns></returns>
        public override TAppConfig Read<TAppConfig>()
        {
            var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
            if (result != null)
                DecryptFields(result);
            return result;
        }

        /// <summary>
        /// Write configuration to the JsonConfigurationFile location
        /// </summary>
        /// <param name="config"></param>
        /// <returns></returns>
        public override bool Write(AppConfiguration config)
        {
            EncryptFields(config);
            bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);
            // Have to decrypt again to make sure the properties are readable afterwards
            DecryptFields(config);
            return result;
        }
    }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - simply implementing three methods will do in most cases.

    Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.

    Already, there are several other places in some other tools where I use JSON serialization and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob' like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right... In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. Still, for some operations that are not pivotal to a component or application and only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful...

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET  C#
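    As a usage sketch based on the method signatures shown above (the file path and config class are hypothetical), calling the wrapper then looks like:

    // Hypothetical caller of the JsonSerializationUtils methods shown above
    var settings = new MyAppConfiguration();   // assumed AppConfiguration subclass
    bool ok = JsonSerializationUtils.SerializeToFile(settings, @"c:\temp\settings.json",
                                                     throwExceptions: false, formatJsonOutput: true);

    var loaded = JsonSerializationUtils.DeserializeFromFile(@"c:\temp\settings.json",
                     typeof(MyAppConfiguration)) as MyAppConfiguration;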

  • Visual Studio Macro – Identifier to String Literal

    - by João Angelo
    When implementing public methods with parameters, it's important to write boiler-plate code that validates arguments and throws exceptions when needed, ArgumentException and ArgumentNullException being the most recurrent. It is also important to correctly specify the parameter causing the exception through the proper exception constructor. In order to take advantage of IntelliSense completion in these scenarios, I use a Visual Studio macro, bound to a keyboard shortcut, that converts the identifier at the cursor position to a string literal. And here's the macro:

    Sub ConvertIdentifierToStringLiteral()
        Dim targetWord As String
        Dim document As EnvDTE.TextDocument

        document = CType(DTE.ActiveDocument.Object, EnvDTE.TextDocument)

        If document.Selection.Text.Length > 0 Then
            targetWord = document.Selection.Text
            document.Selection.ReplacePattern(targetWord, """" + targetWord + """")
        Else
            Dim cursorPoint As EnvDTE.TextPoint
            cursorPoint = document.Selection.ActivePoint()

            Dim editPointLeft As EnvDTE.EditPoint
            Dim editPointRight As EnvDTE.EditPoint

            editPointLeft = cursorPoint.CreateEditPoint()
            editPointLeft.WordLeft(1)
            editPointRight = editPointLeft.CreateEditPoint()
            editPointRight.WordRight(1)

            targetWord = editPointLeft.GetText(editPointRight)
            editPointLeft.ReplaceText(editPointRight, """" + targetWord + """", 0)
        End If
    End Sub
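    For context, this is the kind of guard clause the macro speeds up; the method and parameter names below are just an illustration, not from the original post:

    public void Connect(string connectionString)
    {
        // Type the parameter name, then run the macro on it to get the
        // quoted "connectionString" literal for the exception constructor.
        if (connectionString == null)
            throw new ArgumentNullException("connectionString");
        if (connectionString.Length == 0)
            throw new ArgumentException("Value cannot be empty.", "connectionString");

        // ...
    }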

  • How to properly pass an operation result to the View

    - by atomAltera
    I'm developing a web site on a self-made MVC engine. I have an actionController that handles operations like register, login, post submission, etc. The actionController receives an operation name and parameters. Of course it must handle errors, such as "a user with the same nick already exists" or "the password is too short", about which the action handler has to notify the user. The question is: what is the best way to organize errors, such that the View can easily get a localized user notification message? I see two ways.

    First one: define error constants like

    ERR_NICK_BUSY = '1'
    ERR_NICK_INVALID = '2'
    ...

    and a localization map

    local[ERR_NICK_BUSY] = 'User with the same nick already registered'
    local[ERR_NICK_INVALID] = 'The nick you entered is invalid'
    ...

    And second one: define abstract constants like

    ERR_FIELD_BUSY = '1'
    ERR_FIELD_INVALID = '2'
    ...

    and pass them with a field name. In this case localization looks like

    local['nick_' + ERR_FIELD_BUSY] = 'User with the same nick already registered'
    ...

    I don't like either of these methods. Can you advise something else?

  • Ubuntu Desktop Password

    - by doug
    I inherited a machine with Ubuntu desktop installed. It has a password in place and I have no idea what the password may be. I cannot get to the command line to use the methods I have found online. No matter how many times I press "Shift" during the boot process, it still goes all the way to the desktop login; I never see GRUB. I am not sure which version I have, but I think it may be 9 or 10. Thanks, Doug

  • Nesting a Linq-to-Objects query within a Linq-to-Entities query – what is happening under the covers?

    - by carewithl
    var numbers = new int[] { 1, 2, 3, 4, 5 };

    var contacts = from c in context.Contacts
                   where c.ContactID == numbers.Max() | c.ContactID == numbers.FirstOrDefault()
                   select c;

    foreach (var item in contacts)
        Console.WriteLine(item.ContactID);

    A Linq-to-Entities query is first translated into a Linq expression tree, which is then converted by Object Services into a command tree. And if a Linq-to-Entities query nests a Linq-to-Objects query, this nested query also gets translated into an expression tree.

    a) I assume none of the operators of the nested Linq-to-Objects query actually get executed, but instead the data provider for the particular DB (or perhaps Object Services) knows how to transform the logic of the Linq-to-Objects operators into appropriate SQL statements?

    b) Does the data provider know how to create equivalent SQL statements only for some of the Linq-to-Objects operators?

    c) Similarly, does the data provider know how to create equivalent SQL statements only for some of the non-Linq methods in the .NET Framework class library?

    EDIT: I know only some SQL, so I can't be completely sure, but reading the SQL query generated for the above code, it seems the data provider didn't actually execute the numbers.Max method, but instead just somehow figured out that numbers.Max should return the maximum value and then proceeded to include a call to T-SQL's built-in MAX function in the generated SQL query. It also put all the values held by the numbers array into the SQL query.

    SELECT
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN '0X0X' ELSE '0X1X' END AS [C1],
        [Extent1].[ContactID] AS [ContactID],
        [Extent1].[FirstName] AS [FirstName],
        [Extent1].[LastName] AS [LastName],
        [Extent1].[Title] AS [Title],
        [Extent1].[AddDate] AS [AddDate],
        [Extent1].[ModifiedDate] AS [ModifiedDate],
        [Extent1].[RowVersion] AS [RowVersion],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[CustomerTypeID] END AS [C2],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[InitialDate] END AS [C3],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryDesintation] END AS [C4],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryDestination] END AS [C5],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryActivity] END AS [C6],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryActivity] END AS [C7],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[Notes] END AS [C8],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[RowVersion] END AS [C9],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[BirthDate] END AS [C10],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[HeightInches] END AS [C11],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[WeightPounds] END AS [C12],
        CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[DietaryRestrictions] END AS [C13]
    FROM [dbo].[Contact] AS [Extent1]
    LEFT OUTER JOIN (
        SELECT [Extent2].[ContactID] AS [ContactID], [Extent2].[BirthDate] AS [BirthDate],
               [Extent2].[HeightInches] AS [HeightInches], [Extent2].[WeightPounds] AS [WeightPounds],
               [Extent2].[DietaryRestrictions] AS [DietaryRestrictions], [Extent3].[CustomerTypeID] AS [CustomerTypeID],
               [Extent3].[InitialDate] AS [InitialDate], [Extent3].[PrimaryDesintation] AS [PrimaryDesintation],
               [Extent3].[SecondaryDestination] AS [SecondaryDestination], [Extent3].[PrimaryActivity] AS [PrimaryActivity],
               [Extent3].[SecondaryActivity] AS [SecondaryActivity], [Extent3].[Notes] AS [Notes],
               [Extent3].[RowVersion] AS [RowVersion], cast(1 as bit) AS [C1]
        FROM [dbo].[ContactPersonalInfo] AS [Extent2]
        INNER JOIN [dbo].[Customers] AS [Extent3] ON [Extent2].[ContactID] = [Extent3].[ContactID]
    ) AS [Project1] ON [Extent1].[ContactID] = [Project1].[ContactID]
    LEFT OUTER JOIN (
        SELECT TOP (1) [c].[C1] AS [C1]
        FROM (SELECT [UnionAll3].[C1] AS [C1]
              FROM (SELECT [UnionAll2].[C1] AS [C1]
                    FROM (SELECT [UnionAll1].[C1] AS [C1]
                          FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable1]
                                UNION ALL
                                SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable2]) AS [UnionAll1]
                          UNION ALL
                          SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable3]) AS [UnionAll2]
                    UNION ALL
                    SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable4]) AS [UnionAll3]
              UNION ALL
              SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable5]) AS [c]
    ) AS [Limit1] ON 1 = 1
    LEFT OUTER JOIN (
        SELECT TOP (1) [c].[C1] AS [C1]
        FROM (SELECT [UnionAll7].[C1] AS [C1]
              FROM (SELECT [UnionAll6].[C1] AS [C1]
                    FROM (SELECT [UnionAll5].[C1] AS [C1]
                          FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable6]
                                UNION ALL
                                SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable7]) AS [UnionAll5]
                          UNION ALL
                          SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable8]) AS [UnionAll6]
                    UNION ALL
                    SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable9]) AS [UnionAll7]
              UNION ALL
              SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable10]) AS [c]
    ) AS [Limit2] ON 1 = 1
    CROSS JOIN (
        SELECT MAX([UnionAll12].[C1]) AS [A1]
        FROM (SELECT [UnionAll11].[C1] AS [C1]
              FROM (SELECT [UnionAll10].[C1] AS [C1]
                    FROM (SELECT [UnionAll9].[C1] AS [C1]
                          FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable11]
                                UNION ALL
                                SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable12]) AS [UnionAll9]
                          UNION ALL
                          SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable13]) AS [UnionAll10]
                    UNION ALL
                    SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable14]) AS [UnionAll11]
              UNION ALL
              SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable15]) AS [UnionAll12]
    ) AS [GroupBy1]
    WHERE [Extent1].[ContactID] IN ([GroupBy1].[A1], (CASE WHEN ([Limit1].[C1] IS NULL) THEN 0 ELSE [Limit2].[C1] END))

    Based on this, is it possible that the Linq2Entities provider indeed doesn't execute non-Linq and Linq-to-Objects methods, but instead creates equivalent SQL statements for some of them (and for others throws an exception)? Thank you in advance.

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview

    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We configure a TCP port that we open (and close) through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same Cloud Service

    Two virtual machines configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, to be able to run PowerShell and other commands directly in them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to remote connect to them and check the connections to the SQL instances, for demo purposes only; it is not actually required.

    Step 1 – Provision VMs and Configure Ports

    Provision VM1, named DemoVM1 (see example screenshots below if using the portal). Provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 above (see example screenshots below if using the portal). After provisioning the two VMs above, here are the default port configurations, for example.

    Step 2 – Verify / Confirm the TCP port used by the Database Engine

    By default, the port will be configured to be 1433 - this can be changed to a different port number if desired.

    1. RDP to each of the VMs created above - this will also ensure the VMs complete SysPrep(ing) and complete configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server Engine; in this case 1433.
    4. Update from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2.

    Step 3 – Remote PowerShell to DemoVM1

    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows firewall

    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output:

    netsh advfirewall firewall show rule name=DemoVM1Port

    Rule Name:      DemoVM1Port
    ----------------------------------------------------------------------
    Enabled:        Yes
    Direction:      In
    Profiles:       Domain,Private,Public
    Grouping:
    LocalIP:        Any
    RemoteIP:       Any
    Protocol:       TCP
    LocalPort:      1433
    RemotePort:     Any
    Edge traversal: No
    Action:         Allow
    Ok.
Step 5 – Now connect from DemoVM2 to DB instance in DemoVM1 Step 6 – Close port 1433 in the Windows firewall netsh advfirewall firewall delete rule name=DemoVM1Port Output: Deleted 1 rule(s). Ok. netsh advfirewall firewall show  rule name=DemoVM1Port No rules match the specified criteria.   Step 7 – Try to connect from DemoVM2 to DB Instance in DemoVM1  Because port 1433 has been closed (in step 6) in the Windows Firewall in VM1 machine, we can longer connect from VM3 remotely to VM1. Scenario 2: VMs provisioned in different Cloud Services 2 Virtual machines configured in different cloud services. Both VMs running different SQL Server instances on them. Both VMs configured with remote PowerShell turned on to be able to run PS and other commands directly into them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to remote connect to them and check the connections to SQL instances for demo purposes only; but not actually needed. Step 1 – Provision new VM3 Provision VM3; named DemoVM3 as follows (see examples screenshots below if using the portal): After provisioning is complete, here is the default port configurations: Step 2 – Add public port to VM1 connect to from VM3’s DB instance Since VM3 and VM1 are not connected in the same cloud service, we will need to specify the full DNS address while connecting between the machines which includes the public port. We shall add a public port 57000 in this case that is linked to private port 1433 which will be used later to connect to the DB instance. Step 3 – Remote Powershell to DemoVM1 Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck) You will then be prompted to enter the password.   Step 4 – Open 1433 port in the Windows firewall netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow Output: Ok. netsh advfirewall firewall show rule name=DemoVM1Port Rule Name:                            DemoVM1Port ---------------------------------------------------------------------- Enabled:                              Yes Direction:                            In Profiles:                             Domain,Private,Public Grouping:                             LocalIP:                              Any RemoteIP:                             Any Protocol:                             TCP LocalPort:                            1433 RemotePort:                           Any Edge traversal:                       No Action:                               Allow Ok.   Step 5 – Now connect from DemoVM3 to DB instance in DemoVM1 RDP into VM3, launch SSM and Connect to VM1’s DB instance as follows. You must specify the full server name using the DNS address and public port number configured above. Step 6 – Close port 1433 in the Windows firewall netsh advfirewall firewall delete rule name=DemoVM1Port   Output: Deleted 1 rule(s). Ok. netsh advfirewall firewall show  rule name=DemoVM1Port No rules match the specified criteria.  Step 7 – Try to connect from DemoVM2 to DB Instance in DemoVM1  Because port 1433 has been closed (in step 6) in the Windows Firewall in VM1 machine, we can no longer connect from VM3 remotely to VM1. 
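    If you prefer to verify the endpoint from code rather than from SSMS, here is a minimal sketch, assuming the Microsoft JDBC driver is on the classpath and that a SQL login exists on DemoVM1; the user name and password are placeholders, while the host and public port are the ones configured above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SqlEndpointCheck {
        public static void main(String[] args) throws Exception {
            // Full DNS name plus the public port (57000) mapped to private port 1433 above.
            String url = "jdbc:sqlserver://condemo.cloudapp.net:57000;"
                       + "user=sqlUser;password=YourPasswordHere;encrypt=false;loginTimeout=15";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT @@SERVERNAME")) {
                if (rs.next()) {
                    System.out.println("Connected to: " + rs.getString(1));
                }
            }
        }
    }

    If the firewall rule has been deleted (step 6), the same code fails with a connection timeout, which makes it handy for scripted before/after checks.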
    Conclusion

    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL Server management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and how to reconfigure a Virtual Machine firewall to allow (or disallow) SQL Server connections.

    References

    SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • How to Share Files Online with Ubuntu One

    - by Chris Hoffman
    Ubuntu One, Ubuntu’s built-in cloud file storage service, allows you to make files publicly available online or share them privately with others. You can share files over the Internet right from Ubuntu’s file browser. Ubuntu One has two file-sharing methods: Publish, which makes a file publicly available on the web to anyone who knows its address, and Share, which shares a folder with other Ubuntu One users.

    Read the article

  • LWJGL SlickUtil Texture Binding

    - by Matthew Dockerty
    I am making a 3D game using LWJGL and I have a texture class with static variables so that I only need to load textures once, even if I need to use them more than once. I am using Slick-Util for this. When I bind a texture it works fine, but when I try to render something else after rendering the model with the texture, the texture is still bound. How do I unbind the texture and restore the render mode that was in use before any textures were bound? Some of my code is below. The problem I am having is that the player texture is being used on the box drawn around the player after the model has been rendered.

    Model.java

    public class Model {
        public List<Vector3f> vertices = new ArrayList<Vector3f>();
        public List<Vector3f> normals = new ArrayList<Vector3f>();
        public ArrayList<Vector2f> textureCoords = new ArrayList<Vector2f>();
        public List<Face> faces = new ArrayList<Face>();

        public static Model TREE;
        public static Model PLAYER;

        public static void loadModels() {
            try {
                TREE = OBJLoader.loadModel(new File("assets/model/tree_pine_0.obj"));
                PLAYER = OBJLoader.loadModel(new File("assets/model/player.obj"));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public void render(Vector3f position, Vector3f scale, Vector3f rotation, Texture texture, float shinyness) {
            glPushMatrix();
            {
                texture.bind();
                glColor3f(1, 1, 1);
                glTranslatef(position.x, position.y, position.z);
                glScalef(scale.x, scale.y, scale.z);
                glRotatef(rotation.x, 1, 0, 0);
                glRotatef(rotation.y, 0, 1, 0);
                glRotatef(rotation.z, 0, 0, 1);
                glMaterialf(GL_FRONT, GL_SHININESS, shinyness);
                glBegin(GL_TRIANGLES);
                {
                    for (Face face : faces) {
                        Vector2f t1 = textureCoords.get((int) face.textureCoords.x - 1);
                        glTexCoord2f(t1.x, t1.y);
                        Vector3f n1 = normals.get((int) face.normal.x - 1);
                        glNormal3f(n1.x, n1.y, n1.z);
                        Vector3f v1 = vertices.get((int) face.vertex.x - 1);
                        glVertex3f(v1.x, v1.y, v1.z);
                        Vector2f t2 = textureCoords.get((int) face.textureCoords.y - 1);
                        glTexCoord2f(t2.x, t2.y);
                        Vector3f n2 = normals.get((int) face.normal.y - 1);
                        glNormal3f(n2.x, n2.y, n2.z);
                        Vector3f v2 = vertices.get((int) face.vertex.y - 1);
                        glVertex3f(v2.x, v2.y, v2.z);
                        Vector2f t3 = textureCoords.get((int) face.textureCoords.z - 1);
                        glTexCoord2f(t3.x, t3.y);
                        Vector3f n3 = normals.get((int) face.normal.z - 1);
                        glNormal3f(n3.x, n3.y, n3.z);
                        Vector3f v3 = vertices.get((int) face.vertex.z - 1);
                        glVertex3f(v3.x, v3.y, v3.z);
                    }
                    texture.release();
                }
                glEnd();
            }
            glPopMatrix();
        }
    }

    Textures.java

    public class Textures {
        public static Texture FLOOR;
        public static Texture PLAYER;
        public static Texture SKYBOX_TOP;
        public static Texture SKYBOX_BOTTOM;
        public static Texture SKYBOX_FRONT;
        public static Texture SKYBOX_BACK;
        public static Texture SKYBOX_LEFT;
        public static Texture SKYBOX_RIGHT;

        public static void loadTextures() {
            try {
                FLOOR = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/model/floor.png")));
                FLOOR.setTextureFilter(GL11.GL_NEAREST);
                PLAYER = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/model/tree_pine_0.png")));
                PLAYER.setTextureFilter(GL11.GL_NEAREST);
                SKYBOX_TOP = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_top.png")));
                SKYBOX_TOP.setTextureFilter(GL11.GL_NEAREST);
                SKYBOX_BOTTOM = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_bottom.png")));
                SKYBOX_BOTTOM.setTextureFilter(GL11.GL_NEAREST);
                SKYBOX_FRONT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_front.png")));
                SKYBOX_FRONT.setTextureFilter(GL11.GL_NEAREST);
                SKYBOX_BACK = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_back.png")));
                SKYBOX_BACK.setTextureFilter(GL11.GL_NEAREST);
                SKYBOX_LEFT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_left.png")));
                SKYBOX_LEFT.setTextureFilter(GL11.GL_NEAREST);
                SKYBOX_RIGHT = TextureLoader.getTexture("PNG", new FileInputStream(new File("assets/textures/skybox_right.png")));
                SKYBOX_RIGHT.setTextureFilter(GL11.GL_NEAREST);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    Player.java

    public class Player {
        private Vector3f position;
        private float yaw;
        private float moveSpeed;

        public Player(float x, float y, float z, float yaw, float moveSpeed) {
            this.position = new Vector3f(x, y, z);
            this.yaw = yaw;
            this.moveSpeed = moveSpeed;
        }

        public void update() {
            if (Keyboard.isKeyDown(Keyboard.KEY_W)) walkForward(moveSpeed);
            if (Keyboard.isKeyDown(Keyboard.KEY_S)) walkBackwards(moveSpeed);
            if (Keyboard.isKeyDown(Keyboard.KEY_A)) strafeLeft(moveSpeed);
            if (Keyboard.isKeyDown(Keyboard.KEY_D)) strafeRight(moveSpeed);
            if (Mouse.isButtonDown(0)) yaw += Mouse.getDX();
            LowPolyRPG.getInstance().getCamera().setPosition(-position.x, -position.y, -position.z);
            LowPolyRPG.getInstance().getCamera().setYaw(yaw);
        }

        public void walkForward(float distance) {
            position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw)));
            position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw)));
        }

        public void walkBackwards(float distance) {
            position.setX(position.getX() - distance * (float) Math.sin(Math.toRadians(yaw)));
            position.setZ(position.getZ() + distance * (float) Math.cos(Math.toRadians(yaw)));
        }

        public void strafeLeft(float distance) {
            position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw - 90)));
            position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw - 90)));
        }

        public void strafeRight(float distance) {
            position.setX(position.getX() + distance * (float) Math.sin(Math.toRadians(yaw + 90)));
            position.setZ(position.getZ() - distance * (float) Math.cos(Math.toRadians(yaw + 90)));
        }

        public void render() {
            Model.PLAYER.render(new Vector3f(position.x, position.y + 12, position.z),
                    new Vector3f(3, 3, 3), new Vector3f(0, -yaw + 90, 0), Textures.PLAYER, 128);
            GL11.glPushMatrix();
            GL11.glTranslatef(position.getX(), position.getY(), position.getZ());
            GL11.glRotatef(-yaw, 0, 1, 0);
            GL11.glScalef(5.8f, 21, 2.2f);
            GL11.glDisable(GL11.GL_LIGHTING);
            GL11.glLineWidth(3);
            GL11.glBegin(GL11.GL_LINE_STRIP);
            GL11.glColor3f(1, 1, 1);
            glVertex3f(1f, 0f, -1f); glVertex3f(-1f, 0f, -1f); glVertex3f(-1f, 1f, -1f); glVertex3f(1f, 1f, -1f);
            glVertex3f(-1f, 0f, 1f); glVertex3f(1f, 0f, 1f); glVertex3f(1f, 1f, 1f); glVertex3f(-1f, 1f, 1f);
            glVertex3f(1f, 1f, -1f); glVertex3f(-1f, 1f, -1f); glVertex3f(-1f, 1f, 1f); glVertex3f(1f, 1f, 1f);
            glVertex3f(1f, 0f, 1f); glVertex3f(-1f, 0f, 1f); glVertex3f(-1f, 0f, -1f); glVertex3f(1f, 0f, -1f);
            glVertex3f(1f, 0f, 1f); glVertex3f(1f, 0f, -1f); glVertex3f(1f, 1f, -1f); glVertex3f(1f, 1f, 1f);
            glVertex3f(-1f, 0f, -1f); glVertex3f(-1f, 0f, 1f); glVertex3f(-1f, 1f, 1f); glVertex3f(-1f, 1f, -1f);
            GL11.glEnd();
            GL11.glEnable(GL11.GL_LIGHTING);
            GL11.glPopMatrix();
        }

        public Vector3f getPosition() { return new Vector3f(-position.x, -position.y, -position.z); }
        public float getX() { return position.getX(); }
        public float getY() { return position.getY(); }
        public float getZ() { return position.getZ(); }
        public void setPosition(Vector3f position) { this.position = position; }
        public void setPosition(float x, float y, float z) {
            this.position.setX(x);
            this.position.setY(y);
            this.position.setZ(z);
        }
    }

    Thanks for the help.
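    A minimal sketch of the usual fix in fixed-function OpenGL – reset the texture state once textured rendering is done (the helper name is illustrative):

    // Call after rendering textured geometry so subsequent draws are untextured.
    private static void unbindTexture() {
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0); // bind the "no texture" object
        GL11.glDisable(GL11.GL_TEXTURE_2D);        // fixed-function: stop sampling textures
    }

    // Re-enable texturing before the next texture.bind():
    // GL11.glEnable(GL11.GL_TEXTURE_2D);

    Note also that texture.release() sits between glBegin and glEnd in Model.render(); texture binds are not legal inside a glBegin/glEnd pair, so moving that call after glEnd() is part of the fix.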

    Read the article

  • Complexity of a web application

    - by Dominik G
    I am currently writing my Master's thesis on the maintainability of a web application. I found some methods like the "Maintainability Index" by Coleman et al. and the "Software Maintainability Index" by Muthanna et al. For both of them one needs to calculate the cyclomatic complexity. So my question is: is it possible to measure the cyclomatic complexity of a web application? In my opinion there are three parts to a web application:

    Server code (PHP, C#, Python, Perl, etc.)
    Client code (JavaScript)
    HTML (links and forms as operators, GET parameters and form fields as operands!?)

    What do you think? Is there another point of view on the complexity of web applications? Did I miss something?
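    For reference, the commonly cited form of the Coleman et al. Maintainability Index (treat the exact coefficients as a sketch – variants exist in the literature) is:

    MI = 171 - 5.2 * ln(aveV) - 0.23 * aveV(g) - 16.2 * ln(aveLOC)

    where aveV is the average Halstead Volume per module, aveV(g) the average cyclomatic complexity per module, and aveLOC the average lines of code per module – which is why the cyclomatic complexity question matters here.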

    Read the article

  • T4Toolbox and Visual Studio 2010

    - by Ben Griswold
    I’ve been using the T4Toolbox to help generate my ASP.NET MVC models and scaffolding for a while now.  Another developer tried using my generator project last week and ran into trouble due to a breaking change around the RenderCore() and TransformText() methods introduced to support VS 2010.  If you upgraded to the latest version of T4Toolbox and receive a build error similar to the following, you are probably in the same boat:

    GeneratedTextTransformation.[Template].RenderCore(): no suitable method found to override

    We took the easy way out: I had him uninstall the latest version of T4Toolbox and install version 9.7.25.1, which my templates were initially coded against.  For now, that worked great, but it sounds like I’ll be doing some rework of the 20+ templates in my project to support Visual Studio 2010 when we migrate later this month.

    Read the article

  • libssh2 and simultaneous connections

    - by Florian Margaine
    I'm writing a node.js C++ module using the C library libssh2. The module is supposed to be a bridge to connect to SSH over HTTPS. Right now, I'm still in the design/learning phase of the V8 API and C++, and I have a design question: libssh2 is a C library, and all its functions are global. From what I see in the examples, libssh2 can only handle one connection at a time. If I want to allow simultaneous connections to different SSH servers, do I have to fork a process to completely separate the libssh2 "instances", or is spawning a thread enough? I don't know enough about the degree of separation required there. Any idea on how to handle this is appreciated.

    Read the article

  • APress Deal of the Day 10/August/2014 - Pro ASP.NET Web API Security

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/08/10/apress-deal-of-the-day-10august2014---pro-asp.net-web.aspx

    Today’s $10 Deal of the Day from APress at http://www.apress.com/9781430257820 is Pro ASP.NET Web API Security. “ASP.NET Web API is a key part of ASP.NET MVC 4. It has become the platform of choice for building RESTful services. Securing ASP.NET Web API applications requires a move away from traditional WCF-based techniques in favor of new SOAP-less methods. The evaluation, selection and analysis of these new techniques is the focus of this book.”

    Read the article

  • Microsoft Unity 2.0 Released

    - by shiju
    Microsoft's dependency injection framework, Unity Application Block (Unity), has reached version 2.0. The release is available on CodePlex at http://unity.codeplex.com/

    Unity 2.0 Change Log
    Release Notes - Unity 2 for Desktop
    Release Notes - Unity 2 for Silverlight
    Official Download - Unity 2.0
    Official Download - Unity 2.0 for Silverlight
    Additional Materials Download
    Migration Guide
    Videos

    UnityContainer Fluent Interface

    Unity 2.0 provides a fluent interface for type configuration. Now you can call all the methods in a single statement, as shown in the following code.

    IUnityContainer container = new UnityContainer()
        .RegisterType<IFormsAuthentication, FormsAuthenticationService>()
        .RegisterType<IMembershipService, AccountMembershipService>()
        .RegisterInstance<MembershipProvider>(Membership.Provider)
        .RegisterType<IDinnerRepository, DinnerRepository>();

    Read the article

  • jMonkey Quest Database

    - by theJollySin
    I am building a game in jMonkey (Java) and I have so far only used default quest text. But now I need to start populating a lot of quests with text. My design requires A LOT of quest texts. What is the best way to build a database of quest texts in jMonkey? I don't have a lot of real experience with databases. Is there a database that integrates well with jMonkey? Here are the ideal properties I want in my database, in order of priority:

    Reasonably light learning curve
    Easy portability (in Java) to Windows, Linux, and Mac OS X
    Good interface with Java
    Good interface with jMonkey
    The ability to add properties to the quests: ID, level, gender, quest chain ID, etc.

    Or am I wrong in thinking I need to use some giant monster like SQL? I haven't been able to find much information on this, so are people using non-database methods for storing things like quest text in jMonkey?
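    For what it's worth, an embedded database such as SQLite covers most of these points (single portable file, no server, plain JDBC access from Java, so it sits next to jMonkey without any special integration). A minimal sketch, assuming the sqlite-jdbc driver is on the classpath; the table and column names are illustrative:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QuestStore {
        public static void main(String[] args) throws Exception {
            // One portable file next to the game's assets; works on Windows, Linux and Mac OS X.
            try (Connection con = DriverManager.getConnection("jdbc:sqlite:quests.db")) {
                try (Statement st = con.createStatement()) {
                    // Extra columns carry the quest properties mentioned above.
                    st.execute("CREATE TABLE IF NOT EXISTS quest ("
                             + "id INTEGER PRIMARY KEY, level INTEGER, gender TEXT, "
                             + "chain_id INTEGER, text TEXT NOT NULL)");
                }
                // Look up the text for a quest by its id (42 is a hypothetical id).
                try (PreparedStatement ps = con.prepareStatement("SELECT text FROM quest WHERE id = ?")) {
                    ps.setInt(1, 42);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) System.out.println(rs.getString("text"));
                    }
                }
            }
        }
    }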

    Read the article

  • Googlebot fetches my pages very frequent, rel-nofollow, meta-noindex or robots.txt-disallow

    - by trante
    Googlebot fetches pages on my site very frequently, and this slows my website down. I don't want Googlebot to crawl too frequently. I decreased the crawl rate from Google Webmaster Tools, but I'm also considering these three tools:

    Adding rel="nofollow" to my inner pages, so Googlebot won't crawl and index them.
    Adding the meta tag "noindex", so Google will remove the page from its index and won't fetch it again.
    Adding Disallow: /mySomeFolder/ to robots.txt, so Googlebot won't crawl those pages.

    I'm planning to use these methods for my 56,000 pages, except for the most important 6-7 pages. Which method would you prefer, and what would the advantages or disadvantages be? Or won't it change my website speed at all?
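    For reference, a sketch of the last two options; the folder name is the one from the question. The meta tag goes in the <head> of each page that should drop out of the index:

    <meta name="robots" content="noindex">

    The robots.txt rule blocks crawling of the folder (note that Disallow stops crawling, which is what reduces server load, but does not by itself remove already-indexed URLs):

    User-agent: Googlebot
    Disallow: /mySomeFolder/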

    Read the article

  • How do I set a touch listener in a child scene in AndEngine?

    - by Siddharth
    In my game, I want to implement a touch listener for my child scene objects. Basically, I tried all the approaches I usually use for my normal scenes, but none of them work here. Could somebody provide some guidance for setting a touch area listener in a child scene? Here is my code:

    menuScene.setTouchAreaBindingEnabled(true);
    menuScene.registerTouchArea(resumeButtonSprite);
    menuScene.registerTouchArea(retryButtonSprite);
    menuScene.registerTouchArea(exitButtonSprite);
    menuScene.setOnAreaTouchListener(new IOnAreaTouchListener() {
        @Override
        public boolean onAreaTouched(TouchEvent pSceneTouchEvent, ITouchArea pTouchArea,
                float pTouchAreaLocalX, float pTouchAreaLocalY) {
            System.out.println("Touch");
            return true;
        }
    });

    In this code, menuScene is the child scene. Also, after some research I found that my engine is stopped while the child scene is active, so the touch event is not detected. I want to implement a pause menu in my game, so any workable approach to a pause menu implementation would help.
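    One thing worth checking – a sketch, assuming the AndEngine GLES1 API, where gameScene stands in for the hypothetical parent scene: attach the menu with modal touch enabled, so the child scene keeps receiving touch events even while the parent scene is suspended.

    // setChildScene(child, modalDraw, modalUpdate, modalTouch):
    // with modalTouch = true, touch events are routed to the child scene
    // even though the parent scene is paused underneath it.
    gameScene.setChildScene(menuScene, false, true, true);

    For a simple pause menu, AndEngine's MenuScene with an IOnMenuItemClickListener can also replace the manually registered touch areas.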

    Read the article

  • Unable to cast transparent proxy to type &lt;type&gt;

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across AppDomains. In almost all cases where I've run into this error, the problem comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come from exactly the same assembly, the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds.

    An Example of Failure

    To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly, process it, and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible, as well as isolating my code from the assembly being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is:

    Create a new AppDomain
    Load a Factory Class into the AppDomain
    Use the Factory Class to load additional types from the remote domain

    Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:

    public class TypeParserFactory : System.MarshalByRefObject, IDisposable
    {
        …
        /// <summary>
        /// TypeParser Factory method that loads the TypeParser
        /// object into a new AppDomain so it can be unloaded.
        /// Creates AppDomain and creates type.
        /// </summary>
        /// <returns></returns>
        public TypeParser CreateTypeParser()
        {
            if (!CreateAppDomain(null))
                return null;

            /// Create the instance inside of the new AppDomain
            /// Note: remote domain uses local EXE's AppBasePath!!!
            TypeParser parser = null;
            try
            {
                Assembly assembly = Assembly.GetExecutingAssembly();
                string assemblyPath = Assembly.GetExecutingAssembly().Location;
                parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                          typeof(TypeParser).FullName).Unwrap();
            }
            catch (Exception ex)
            {
                this.ErrorMessage = ex.GetBaseException().Message;
                return null;
            }
            return parser;
        }

        private bool CreateAppDomain(string lcAppDomain)
        {
            if (lcAppDomain == null)
                lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x");

            AppDomainSetup setup = new AppDomainSetup();

            // *** Point at current directory
            setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
            //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");

            this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain, null, setup);

            // Need a custom resolver so we can load assembly from non current path
            AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);

            return true;
        }
        …
    }

    Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object, so all classes are MarshalByRefObject. The specific problem code loads up a new type, pointing at an assembly that is visible in both the current domain and the remote domain, and then instantiates a type from it.
    This is the code in question:

    Assembly assembly = Assembly.GetExecutingAssembly();
    string assemblyPath = Assembly.GetExecutingAssembly().Location;
    parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                              typeof(TypeParser).FullName).Unwrap();

    The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words, I AM in fact getting a TypeParser instance back, but it can't be cast to the TypeParser type that is loaded in the current AppDomain.

    Finding the Problem

    To see what's going on I tried using the .NET 4.0 dynamic type on the result and lo and behold it worked with dynamic - the value returned is actually a TypeParser instance:

    Assembly assembly = Assembly.GetExecutingAssembly();
    string assemblyPath = Assembly.GetExecutingAssembly().Location;
    object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                           typeof(TypeParser).FullName).Unwrap();

    // dynamic works
    dynamic dynParser = objparser;
    string info = dynParser.GetVersionInfo();  // method call works

    // casting fails
    parser = (TypeParser)objparser;

    So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious. Another couple of tries reveal the problem, however:

    // works
    dynamic dynParser = objparser;
    string info = dynParser.GetVersionInfo();  // method call works

    // c:\wwapps\wwhelp\wwReflection20.dll (current execution folder)
    string info3 = typeof(TypeParser).Assembly.CodeBase;

    // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder)
    string info4 = dynParser.GetType().Assembly.CodeBase;

    // fails
    parser = (TypeParser)objparser;

    As you can see, the second value is coming from a totally different assembly. Note that this happens even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead, .NET decided to load the assembly from the original ApplicationBase folder. Ouch!

    How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:

    public string GetVersionInfo()
    {
        return ".NET Version: " + Environment.Version.ToString() + "\r\n" +
               "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" +
               "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" +
               "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" +
               "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n";
    }

    For the factory I got:

    .NET Version: 4.0.30319.239
    wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dll
    Assembly Cur Dir: c:\wwapps\wwhelp
    ApplicationBase: C:\Programs\vfp9\
    App Domain: wwReflection534cfa1f

    For the instance type I got:

    .NET Version: 4.0.30319.239
    wwReflection Assembly: C:\Programs\vfp9\wwreflection20.dll
    Assembly Cur Dir: c:\wwapps\wwhelp
    ApplicationBase: C:\Programs\vfp9\
    App Domain: wwDotNetBridge_56006605

    which clearly shows the problem. You can see that the two calls run in different AppDomains, and each loads the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads. The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it.
    Here's what the viewer looks like:

    The last trace above that I found for the second wwReflection20 load (the one that is wonky) looks like this:

    *** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) ***

    The operation was successful.
    Bind result: hr = 0x0. The operation completed successfully.

    Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll
    Running under executable c:\programs\vfp9\vfp9.exe
    --- A detailed error log follows.

    === Pre-bind state information ===
    LOG: User = Ras\ricks
    LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified)
    LOG: Appbase = file:///C:/Programs/vfp9/
    LOG: Initial PrivatePath = NULL
    LOG: Dynamic Base = NULL
    LOG: Cache Base = NULL
    LOG: AppName = vfp9.exe
    Calling assembly : (Unknown).
    ===
    LOG: This bind starts in default load context.
    LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config
    LOG: Using host configuration file:
    LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config.
    LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
    LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL.
    LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll
    LOG: Entering run-from-source setup phase.
    LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
    LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll.
    LOG: Assembly is loaded in default load context.
    WRN: The same assembly was loaded into multiple contexts of an application domain:
    WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
    WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
    WRN: This might lead to runtime failures.
    WRN: It is recommended to inspect your application on whether this is intentional or not.
    WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue.

    Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified.

    Remember your Assembly Locations

    As mentioned earlier, all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine or the ASP.NET Runtime, for example), it's common to create a private BIN folder, and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder, the problem started because I'm using COM interop to access the .NET assembly and the above code. COM interop has very specific requirements on where assemblies can be found, and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed.
    The solution here was simple enough: delete the extraneous assembly that was left around by accident. Not a common problem, but one that, when it bites, is pretty nasty to figure out, because it seems so unlikely that the types wouldn't match. I know I've run into this a few times, and writing this down will hopefully make me remember in the future, rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in .NET  COM

    Read the article

< Previous Page | 333 334 335 336 337 338 339 340 341 342 343 344  | Next Page >