Search Results

Search found 15081 results on 604 pages for 'passing by reference'.


  • Cross-language Extension Method Calling

    - by Tom Hines
    Extension methods are a concise way of binding functions to particular types. In my last post, I showed how Extension methods can be created in the .NET 2.0 environment. In this post, I discuss calling the extensions from other languages. Most of the differences I find between the Dot Net languages are mainly syntax. The declaration of Extensions is no exception. There is, however, a distinct difference in how the framework accepts extensions made with C++ compared to C# and VB. When calling the C++ extension from C#, the compiler will SOMETIMES say there is no definition for DoCPP, with the error: 'string' does not contain a definition for 'DoCPP' and no extension method 'DoCPP' accepting a first argument of type 'string' could be found (are you missing a using directive or an assembly reference?) If I recompile, the error goes away. The strangest problem with calling the C++ extension from C# is that I first must make SOME type of reference to the class BEFORE using the extension or it will not be recognized at all. So, if I first call DoCPP() as a static method, the extension works fine later. If I make a dummy instantiation of the class, it works. If I have no forward reference of the class, I get the same error as before and recompiling does not fix it. It seems as if none of this is supposed to work across the languages. I have made a few work-arounds to get the examples to compile and run. Note the following examples:

    Extension in C#

        using System;
        namespace Extension_CS
        {
           public static class CExtension_CS
           {
              // In C#, the "this" keyword is the key.
              public static void DoCS(this string str)
              {
                 Console.WriteLine("CS\t{0:G}\tCS", str);
              }
           }
        }

    Extension in C++

        /****************************************************************************\
         * Here is the C++ implementation.  It is the least elegant and most quirky,
         * but it works.
        \****************************************************************************/
        #pragma once
        using namespace System;
        using namespace System::Runtime::CompilerServices;   // <- Essential
        // Reference: System.Core.dll                        // <- Essential
        namespace Extension_CPP
        {
           public ref class CExtension_CPP
           {
           public:
              [Extension] // or [ExtensionAttribute] /* either works */
              static void DoCPP(String^ str)
              {
                 Console::WriteLine("C++\t{0:G}\tC++", str);
              }
           };
        }

    Extension in VB

        ' Here is the VB implementation.  This is not as elegant as the C#, but it's
        ' functional.
        Imports System.Runtime.CompilerServices
        Public Module modExtension_VB 'Extension methods can be defined only in modules.
           <Extension()> _
           Public Sub DoVB(ByVal str As String)
              Console.WriteLine("VB" & Chr(9) & "{0:G}" & Chr(9) & "VB", str)
           End Sub
        End Module

    Calling program in C#

        /******************************************************************************\
         * Main calling program.
         * Intellisense and VS2008 complain about the CPP implementation, but with a
         * little duct-tape, it works just fine.
        \******************************************************************************/
        using System;
        using Extension_CPP;
        using Extension_CS;
        using Extension_VB; // virtual namespace
        namespace TestExtensions
        {
           public static class CTestExtensions
           {
              /**********************************************************************\
               * For some reason, this needs a direct reference into the C++ version
               * even though it does nothing more than add a null reference.
               * The constructor provides the fake usage to please the compiler.
              \**********************************************************************/
              private static CExtension_CPP x = null;   // <- DUCT_TAPE!
              static CTestExtensions()
              {
                 // Fake usage to stop the compiler from complaining
                 if (null != x) {} // <- DUCT_TAPE
              }
              static void Main(string[] args)
              {
                 string strData = "from C#";
                 strData.DoCPP();
                 strData.DoCS();
                 strData.DoVB();
              }
           }
        }

    Calling program in VB

        Imports Extension_CPP
        Imports Extension_CS
        Imports Extension_VB
        Imports System.Runtime.CompilerServices
        Module TestExtensions_VB
           <Extension()> _
           Public Sub DoCPP(ByVal str As String)
              'Framework does not treat this as an extension, so use the static
              CExtension_CPP.DoCPP(str)
           End Sub
           Sub Main()
              Dim strData As String = "from VB"
              strData.DoCS()
              strData.DoVB()
              strData.DoCPP() 'fake
           End Sub
        End Module

    Calling program in C++

        // TestExtensions_CPP.cpp : main project file.
        #include "stdafx.h"
        using namespace System;
        using namespace Extension_CPP;
        using namespace Extension_CS;
        using namespace Extension_VB;
        void main(void)
        {
           /*******************************************************\
            * Extension methods are called like static methods
            * when called from C++.  There may be a difference in
            * syntax when calling the VB extension, as VB Extensions
            * are embedded in Modules instead of classes.
           \*******************************************************/
           String^ strData = "from C++";
           CExtension_CPP::DoCPP(strData);
           CExtension_CS::DoCS(strData);
           modExtension_VB::DoVB(strData); //since Extensions go in Modules
        }

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    - by Elton Stoneman
    This is the second in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service. Part 2 is nice and easy. From Part 1 we exposed our service over the Azure Service Bus Relay using the netTcpRelayBinding and verified we could set up our network to listen for relayed messages. Assuming we want to consume that service in .NET from an environment which is fairly unrestricted for us, but quite restricted for attackers, we can use netTcpRelay and shared secret authentication. Pattern applicability: this is a good fit for scenarios where the consumer can run .NET in full trust, the environment does not restrict use of external DLLs, the runtime environment is secure enough to keep shared secrets, the service does not need to know who is consuming it, and the service does not need to know who the end-user is. So for example, the consumer is an ASP.NET website sitting in a cloud VM or Azure worker role, where we can keep the shared secret in web.config and we don't need to flow any identity through to the on-premise service. The service doesn't care who the consumer or end-user is - say it's a reference data service that provides a list of vehicle manufacturers. Provided you can authenticate with ACS and have access to the Service Bus endpoint, you can use the service and it doesn't care who you are. In this post, we'll consume the service from Part 1 in ASP.NET using netTcpRelay. The code for Part 2 (+ Part 1) is on GitHub here: IPASBR Part 2. Authenticating and authorizing with ACS: in this scenario the consumer is a server in a controlled environment, so we can use a shared secret to authenticate with ACS, assuming that there is governance around the environment and the codebase which will prevent the identity being compromised. From the provider's side, we will create a dedicated service identity for this consumer, so we can lock down their permissions. The provider controls the identity, so the consumer's rights can be revoked. We'll add a new service identity for the namespace in ACS, just as we did for the serviceProvider identity in Part 1. I've named the identity fullTrustConsumer. We then need to add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus (see Part 1 for a walkthrough of creating Service Identities):

        Issuer: Access Control Service
        Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
        Input claim value: fullTrustConsumer
        Output claim type: net.windows.servicebus.action
        Output claim value: Send

    This sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace. Adding a Service Reference: the Part 2 sample client code is ready to go, but if you want to replicate the steps, you're going to add a WSDL reference, add a reference to Microsoft.ServiceBus and sort out the ServiceModel config. In Part 1 we exposed metadata for our service, so we can browse to the WSDL locally at: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc?wsdl If you add a Service Reference to that in a new project you'll get a confused config section with a customBinding, and a set of unrecognized policy assertions in the namespace http://schemas.microsoft.com/netservices/2009/05/servicebus/connect. If you NuGet the ASB package ("windowsazure.servicebus") first and add the service reference - you'll get the same messy config.
    Either way, the WSDL should have downloaded and you should have the proxy code generated. You can delete the customBinding entries and copy your config from the service's web.config (this is already done in the sample project in Sixeyed.Ipasbr.NetTcpClient), specifying details for the client:

        <client>
          <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                    behaviorConfiguration="SharedSecret"
                    binding="netTcpRelayBinding"
                    contract="FormatService.IFormatService" />
        </client>
        <behaviors>
          <endpointBehaviors>
            <behavior name="SharedSecret">
              <transportClientEndpointBehavior credentialType="SharedSecret">
                <clientCredentials>
                  <sharedSecret issuerName="fullTrustConsumer"
                                issuerSecret="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>
                </clientCredentials>
              </transportClientEndpointBehavior>
            </behavior>
          </endpointBehaviors>
        </behaviors>

    The proxy is straight WCF territory, and the same client can run against Azure Service Bus through any relay binding, or directly to the local network service using any WCF binding - the contract is exactly the same. The code is simple, standard WCF stuff:

        using (var client = new FormatService.FormatServiceClient())
        {
            outputString = client.ReverseString(inputString);
        }

    Running the sample: first, update Solution Items\AzureConnectionDetails.xml with your service bus namespace, and your service identity credentials for the netTcpClient and the provider:

        <!-- ACS credentials for the full trust consumer (Part2): -->
        <netTcpClient identityName="fullTrustConsumer"
                      symmetricKey="E3feJSMuyGGXksJi2g2bRY5/Bpd2ll5Eb+1FgQrXIqo="/>

    Then rebuild the solution and verify the unit tests work. If they're green, your service is listening through Azure. Check out the client by navigating to http://localhost:53835/Sixeyed.Ipasbr.NetTcpClient. Enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure. Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • WPF properties memory management

    - by mrpyo
    I'm trying to build a binding system similar to the one that is used in WPF and I ran into some memory leak problems, so here comes my question - how is memory managed in the WPF property system? From what I know, in WPF the values of DependencyProperties are stored in external containers - what I want to know is how are they collected when a DependencyObject dies? The simplest solution would be to store them in some weak reference dictionary - but here comes the main problem I ran into - when there is a listener on a property that needs a reference to its (this property's) parent, it holds it (the parent) alive (when a value in a weak reference dictionary points somewhere, even indirectly, to its key, the key can't be collected). How is this avoided in WPF without the need of using weak references everywhere?
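    For what it's worth, the .NET base class library has a structure built for exactly this weak-key problem: ConditionalWeakTable, whose entries behave like ephemerons, so a stored value can reference its key without keeping the key alive. Below is a minimal sketch of an external value store built on it; the type and member names are purely illustrative, and this is not how WPF itself stores DependencyProperty values.

        using System;
        using System.Collections.Generic;
        using System.Runtime.CompilerServices;

        // Hypothetical external property store, not WPF's real implementation.
        // ConditionalWeakTable does not root its keys: an entry becomes collectable
        // as soon as the key object is unreachable, even if the stored value (for
        // example a listener) holds a reference back to the key.
        public static class PropertyStore
        {
            private static readonly ConditionalWeakTable<object, Dictionary<string, object>> values =
                new ConditionalWeakTable<object, Dictionary<string, object>>();

            public static void SetValue(object owner, string property, object value)
            {
                values.GetOrCreateValue(owner)[property] = value;
            }

            public static bool TryGetValue(object owner, string property, out object value)
            {
                value = null;
                Dictionary<string, object> bag;
                return values.TryGetValue(owner, out bag) && bag.TryGetValue(property, out value);
            }
        }

    As far as I know, WPF itself avoids the problem a different way: the effective values are stored in an array held by the DependencyObject instance, so they simply die with the object rather than living in a global table.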

    Read the article

  • Team Foundation Server – How to pass ReferencePath argument to MSBuild

    - by Gopinath
    When we manually build a .NET project using Visual Studio, the reference paths set in Project Properties are picked up by Visual Studio for referring to dependent DLLs. But when the project is built using TFS, the reference paths specified in project properties are not considered. This is because Reference Paths are user-specific settings and they are not stored in .proj files (they are stored in user settings files). The TFS build may break if it does not find the required DLLs in the GAC. We can solve the problem by passing the ReferencePath parameter to MSBuild in the TFS build configuration: go to Team Explorer, select Build Definition >> Edit Build Definition, switch to the Process tab, navigate to the Advanced section and locate MSBuild Arguments, then add the following: /p:ReferencePath="{File path}"

    Read the article

  • Good practice about Javascript referencing

    - by AngeloBad
    I am wrestling with script optimization for a web application. I have an ASP.NET web app that references jQuery in the master page, and every child page can reference other libraries or JavaScript extensions. I would like to optimize the application with YUI for .NET. The question is: should I put all the library references in the master page and compress all the JavaScript code into a single file, or should I create a file for every page that contains only the code useful to that page? Is there any guidance to follow? Thanks!

    Read the article

  • Part 8: How to name EBS Customizations

    - by volker.eckardt(at)oracle.com
    You might wonder why I am discussing this here. The reason is simple: nearly every project has slightly different naming conventions, which always makes life a bit complicated (for developers, but also for those responsible for setup, and for consultants). Although we always create a document to describe the technical object naming conventions, I have rarely seen a dedicated document with functional naming conventions. To be precise, from my standpoint, there should always be one global naming definition for an implementation! Let me discuss some related questions: What is the best convention for the customization reference? How to name database objects (tables, packages etc.)? How to name functional objects like Value Sets, Concurrent Programs, etc.? How to separate customizations from standard objects best? What is the best convention for the customization reference? The customization reference is the key you use to reference your customization from other lists, from the project plan etc. Usually it is something like XXHU_CONV_22 (HU=customer abbreviation, CONV=Conversion object #22) or XXFA_DEPRN_RPT_02 (FA=Fixed Assets, DEPRN=Short object group, here depreciation, RPT=Report, 02=2nd report in this area). As this is just a reference (not an object name yet), I would prefer the second option: XX=Customization, FA=Main EBS Module linked (you may sometimes have more, but FA is the main one), DEPRN_RPT=Short name to specify the customization, 02=a unique number. The important point here is that the HU isn't used, because XX is enough to mark a custom object, and the 3rd+4th characters can be used for the EBS module short name. How to name database objects (tables, packages etc.)? I have led different developer teams, and I know that one common way is to take the customization reference and append more characters to classify the object (like _V for a view and _T1 for triggers etc.). The only concern I have with this approach is reusability. If you name your view XXFA_DEPRN_RPT_02_V, no one will by choice reuse this nice view, as it seems to be specific to this CEMLI. My suggestion is rather to name the view XXFA_DEPRN_PERIODS_V and thereby allow reusability for other CEMLIs (although the view will be deployed primarily with CEMLI package XXFA_DEPRN_RPT_02). (Check also one of the following blogs, where I will talk about deployment.) How to name Value Sets, Concurrent Programs, etc.? For Value Sets I would go with the same convention as for database objects, starting with XX<Module>… For Concurrent Programs the situation is a bit different. This "object" is seen and used by a lot of users, and they will search for it. In many projects it is common to start again with the company short name, or with XX. My proposal would differ. If you have created your own report and you name it "XX: Invoice Report", the user has to remember that this report does not start with "I", it starts with X. Would you like to type an X if you are looking for an Invoice report? No, you wouldn't! So my advice would be to name it "Invoice Report (XXAP)". We still know it is custom (because of the XXAP), but the end user will type the key "i" to get it (and will see similar reports also starting with "i"). I hope that the general scheme behind this has now become obvious. How to separate customizations from standard objects best? I would not have this section here if naming did not play an important role. Unfortunately, we cannot always link a custom application to our own objects, therefore the naming is really important.
    In the file system structure we use our $XXyy_TOP; in JAVA_TOP it is perhaps also "xx" in front. But in the database itself? Although there are different concepts in place, many implementations are still using the standard "apps" approach, which means custom objects are stored in the apps schema (which should not cause any trouble). Final advice: review the naming conventions regularly, once a month. You may have to add more! And publish them! To summarize: technical and functional customized objects should always follow a naming convention. This naming convention should be project-wide, and it should be maintained in only one place (like a Wiki). If the name is for the end user, rather put a customization identifier at the end; if it is an internal name, start with XX…

    Read the article

  • OpenJDK In The News: Oracle Outlines Roadmap for Java SE and JavaFX at JavaOne 2012

    The OpenJDK Community continues to host the development of the reference implementation of Java SE 8. Weekly developer preview builds of JDK 8 continue to be available from jdk8.java.net. OpenJDK continues to thrive with contributions from Oracle, as well as other companies, researchers and individuals. The OpenJDK Web Site Terms of Use was recently updated to allow work on Java Specification Requests (JSRs) for Java SE to take place in the OpenJDK Community, alongside their corresponding reference implementations, so that specification leads can satisfy the new transparency requirements of the Java Community Process (JCP 2.8). “The recent decision by the Java SE 8 Expert Group to defer modularity to Java SE 9 will allow us to focus on the highly-anticipated Project Lambda, the Nashorn JavaScript engine, the new Date/Time API, and Type Annotations, along with numerous other performance, simplification, and usability enhancements,” said Georges Saab, vice president, Software Development, Java Platform Group at Oracle. “We are continuing to increase our communication and transparency by developing the reference implementation and the Oracle-led JSRs in the OpenJDK community.” Quotes taken from the 14th press release from Oracle mentioning OpenJDK, titled "Oracle Outlines Roadmap for Java SE and JavaFX at JavaOne 2012".

    Read the article

  • Working with Primary Keys and Generators - Quickstart with NHibernate (Part 4)

    - by BobPalmer
    In this NHibernate tutorial, I'll be digging into the ID tag and Generator classes.  I had originally planned on finishing up a series on relationships (parent/child, etc.) but felt this would be an interesting topic for folks, and I also wanted to start integrating some of the current NHibernate reference. Since this article also includes some reference sections (and since I have not had a chance to check for every possible parameter value), I used the current reference as a baseline, and would welcome any feedback or technical updates that I can incorporate. You can find the entire article up on Google Docs at this link: http://docs.google.com/Doc?id=dg3z7qxv_24f3ch2rf7 As always, feedback, suggestions, and technical corrections are greatly appreciated! Enjoy! - Bob

    Read the article

  • clone() only partially doable; how to react?

    - by hllnll
    Suppose we have a class hierarchy whose base class B requires a clone() method to be defined in its derivations. Whilst this works perfectly for most of them, there is this one derivation X that holds an optional reference to a resource/object that can only be referenced from one instance of X. Whilst the rest of X can be cloned without problems, this reference would have to be set to null/nil/nullptr. What is the correct way to handle this situation?
    - X::clone() could throw an UncloneableException. Though this would be correct, it is not satisfying.
    - Set the reference to null/nil/nullptr and leave no notice of it. The user of X has to know about X's "particularities".
    - Require derivations of B to define B::nearestClone(): the user of B has to be aware that the return value might not be a 1:1 clone.
    - Something else?
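    One further variant, close in spirit to the nearestClone() idea but without a second method name, is to make the "best effort" nature of the copy part of the clone contract itself, so the caller is told whenever the copy is not 1:1. A rough C# sketch of that idea (the type and member names are invented for illustration, not from the original question):

        using System;

        public abstract class B
        {
            // Every derivation must say whether the copy it returns is exact.
            public abstract B Clone(out bool isExactCopy);
        }

        public sealed class X : B
        {
            // A resource that may only be referenced from one instance of X at a time.
            private readonly IDisposable exclusiveResource;

            public int Value { get; set; }

            public X(IDisposable exclusiveResource)
            {
                this.exclusiveResource = exclusiveResource;
            }

            public override B Clone(out bool isExactCopy)
            {
                // Copy everything we can; the exclusive resource cannot be duplicated or shared.
                var copy = new X(null) { Value = this.Value };
                isExactCopy = exclusiveResource == null;
                return copy;
            }
        }

    The caller can then decide whether a partial copy is acceptable, instead of discovering X's particularity by accident.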

    Read the article

  • What should be stored in UserContext?

    - by HonorGod
    From my general understanding, I believe the UserContext for a web application is supposed to hold user authentication and authorization (user roles) information. As part of user roles, there are definitions of who can access what data, and accordingly the corresponding reference data is loaded into the UserContext as well. Is it good practice to load and use reference data from the UserContext? Does this have any impact on the number of sessions versus the size of data being held inside the JVM? I am thinking we should use the UserContext only for authentication and authorization, but load the reference data from a cache on demand and use it if required.
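    A sketch of the split being suggested (a slim, per-session context plus a shared, on-demand cache); the question concerns a JVM application, but the shape is language-neutral, so it is shown here in C# with made-up names:

        using System;
        using System.Collections.Generic;

        // Session-scoped: only identity and roles live here.
        public sealed class UserContext
        {
            public string UserName { get; }
            public IReadOnlyCollection<string> Roles { get; }

            public UserContext(string userName, IReadOnlyCollection<string> roles)
            {
                UserName = userName;
                Roles = roles;
            }
        }

        // Application-scoped: reference data is loaded lazily and shared by all sessions,
        // so per-session memory stays proportional to identity data, not reference data.
        public static class ReferenceDataCache
        {
            private static readonly Dictionary<string, object> cache = new Dictionary<string, object>();
            private static readonly object sync = new object();

            public static T GetOrLoad<T>(string key, Func<T> load)
            {
                lock (sync)
                {
                    object value;
                    if (!cache.TryGetValue(key, out value))
                    {
                        value = load();
                        cache[key] = value;
                    }
                    return (T)value;
                }
            }
        }

    Authorization then only decides which cache keys a given user may request; the reference data itself is held once, not once per session.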

    Read the article

  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data and it's currently working but I want to re-design the architecture to be more robust. My main goals are: sequentially present the time-series data to the expression trees. allow expression trees to access previous data rows when needed. to optimize performance of the data access while evaluating the expression trees. keep a common interface so various types of data can be used. Here are the possible approaches I've thought about: I can evaluate the expression tree by passing in a data row into the root node and let each child node use the same data row. I can evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data). Hybrid: an immutable data set is accessible by all of the expression trees and each expression tree is evaluated by passing in a data row. The benefit of the first approach is that the data row is being passed into the expression tree and there is no further query done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows). The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify what that row is, I'll have to iterate through the rows and figure out which one is the last one. The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data. It supports two basic "views" of data: the latest row and the previous rows. Do you guys know of any design patterns or do you have any tips that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: All of my code is written in C#.
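    To make the hybrid option concrete, here is a minimal C# sketch of the read-only view it implies: each expression tree is handed the row being evaluated plus indexed access to earlier rows of an immutable data set. The type and member names are illustrative only, not from the original design:

        using System;
        using System.Collections.Generic;

        // One bar/row of time-series data.
        public sealed class DataRow
        {
            public DateTime Timestamp { get; set; }
            public double Open { get; set; }
            public double High { get; set; }
            public double Low { get; set; }
            public double Close { get; set; }
        }

        // What an expression tree sees during evaluation: the current row,
        // plus indexed access to earlier rows, with no way to mutate the data set.
        public interface IEvaluationContext
        {
            DataRow Current { get; }
            int CurrentIndex { get; }
            // Lookback(1) is the previous row, Lookback(2) the one before it, and so on.
            DataRow Lookback(int barsAgo);
        }

        public sealed class EvaluationContext : IEvaluationContext
        {
            private readonly IReadOnlyList<DataRow> rows;

            public EvaluationContext(IReadOnlyList<DataRow> rows, int currentIndex)
            {
                this.rows = rows;
                CurrentIndex = currentIndex;
            }

            public int CurrentIndex { get; }
            public DataRow Current => rows[CurrentIndex];
            public DataRow Lookback(int barsAgo) =>
                barsAgo <= CurrentIndex ? rows[CurrentIndex - barsAgo] : null;
        }

    Because the backing list never changes during a run, many trees can evaluate contexts like this concurrently without synchronisation.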

    Read the article

  • vb.net sqlite how to loop through selected records and pass each record as a parameter to another fu

    - by mazrabul
    Hi, I have a sqlite table with the following fields:

        Language  level  hours
        German    2      50
        French    3      40
        English   1      60
        German    1      10
        English   2      50
        English   3      60
        German    1      20
        French    2      40

    I want to loop through the records based on language and other conditions and then pass the currently selected record to a different function. So I have the following mixture of actual code and pseudocode. I need help with converting the pseudocode to actual code, please. I am finding it difficult to do so. Here is what I have:

        Private Sub mainp()
           Dim oslcConnection As New SQLite.SQLiteConnection
           Dim oslcCommand As SQLite.SQLiteCommand
           Dim langs() As String = {"German", "French", "English"}
           Dim i As Integer = 0
           oslcConnection.ConnectionString = "Data Source=" & My.Settings.dbFullPath & ";"
           oslcConnection.Open()
           oslcCommand = oslcConnection.CreateCommand
           Do While i <= langs.count
              If langs(i) = "German" Then
                 oslcCommand.CommandText = "SELECT * FROM table WHERE language = '" & langs(i) & "';"
                 For each record selected              'pseudo code
                    If level = 1 Then                  'pseudo code
                       update level to 2               'pseudo code
                       minorp(currentRecord)           'pseudo code: calling minorp function and passing the whole record as a parameter
                    End If                             'pseudo code
                    If level = 2 Then                  'pseudo code
                       update level to 3               'pseudo code
                       minorp(currentRecord)           'pseudo code: calling minorp function and passing the whole record as a parameter
                    End If                             'pseudo code
                 Next                                  'pseudo code
              End If
              If langs(i) = "French" Then
                 oslcCommand.CommandText = "SELECT * FROM table WHERE language = '" & langs(i) & "';"
                 For each record selected              'pseudo code
                    If level = 1 Then                  'pseudo code
                       update level to 2               'pseudo code
                       minorp(currentRecord)           'pseudo code: calling minorp function and passing the whole record as a parameter
                    End If                             'pseudo code
                    If level = 2 Then                  'pseudo code
                       update level to 3               'pseudo code
                       minorp(currentRecord)           'pseudo code: calling minorp function and passing the whole record as a parameter
                    End If                             'pseudo code
                 Next                                  'pseudo code
              End If
           Loop
        End Sub

    Many thanks for your help.
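    The question is VB.NET, but as a rough illustration of the shape the non-pseudo version usually takes (select the matching rows, then update each one and hand it to the other routine), here is a minimal C# sketch against System.Data.SQLite. The table name langtable, the level < 3 filter and the Minorp signature are assumptions made purely for the example:

        using System;
        using System.Collections.Generic;
        using System.Data.SQLite;

        static class LanguageProcessor
        {
            // Stand-in for the question's minorp(...) routine.
            static void Minorp(string language, int level, int hours) { /* ... */ }

            sealed class LangRow
            {
                public long RowId;
                public int Level;
                public int Hours;
            }

            static void ProcessLanguage(string connectionString, string language)
            {
                using (var connection = new SQLiteConnection(connectionString))
                {
                    connection.Open();

                    // Read the matching rows first, so the reader is closed before updating.
                    var rows = new List<LangRow>();
                    using (var select = new SQLiteCommand(
                        "SELECT rowid, level, hours FROM langtable WHERE language = @lang AND level < 3;",
                        connection))
                    {
                        select.Parameters.AddWithValue("@lang", language);
                        using (var reader = select.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                rows.Add(new LangRow
                                {
                                    RowId = Convert.ToInt64(reader["rowid"]),
                                    Level = Convert.ToInt32(reader["level"]),
                                    Hours = Convert.ToInt32(reader["hours"])
                                });
                            }
                        }
                    }

                    // Bump each row's level and hand the updated record to the other routine.
                    foreach (var row in rows)
                    {
                        using (var update = new SQLiteCommand(
                            "UPDATE langtable SET level = level + 1 WHERE rowid = @id;", connection))
                        {
                            update.Parameters.AddWithValue("@id", row.RowId);
                            update.ExecuteNonQuery();
                        }
                        Minorp(language, row.Level + 1, row.Hours);
                    }
                }
            }
        }

    Reading the rows into a list before updating avoids modifying the table while the data reader is still open; the same pattern translates line for line to VB.NET.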

    Read the article

  • How to pass common arguments to Perl modules

    - by Leonard
    I'm not thrilled with the argument-passing architecture I'm evolving for the (many) Perl scripts that have been developed to call various Hadoop MapReduce jobs. There are currently 8 scripts (of the form run_something.pl) that are run from cron. (And more on the way ... we expect anywhere from 1 to 3 more for every function we add to Hadoop.) Each of these has about 6 identical command-line parameters, and a couple of command-line parameters that are similar, all specified with Euclid. The implementations are in a dozen .pm modules, some of which are common and others of which are unique. Currently I'm passing the args globally to each module ... Inside run_something.pl I have:

        set_common_args (%ARGV);
        set_something_args (%ARGV);

    And inside Something.pm I have

        sub set_something_args { (%MYARGS) = @_; }

    So then I can do

        if ( $MYARGS{'--needs_more_beer'} ) {
            $beer++;
        }

    I'm seeing that I'm probably going to have additional "common" files that I'll want to pass args to, so I'll have three or four set_xxx_args calls at the top of each run_something.pl, and it just doesn't seem too elegant. On the other hand, it beats passing the whole stupid argument array down the call chain, and choosing and passing individual elements down the call chain is (a) too much work, (b) error-prone, and (c) doesn't buy much. In lots of ways what I'm doing is just object-oriented design without the object-oriented language trappings, and it looks uglier without said trappings, but nonetheless ... Anyone have thoughts or ideas?

    Read the article

  • Tomcat does not pick up the class file - the JSP file is not displayed

    - by blueSky
    I have Java code which is a controller for a JSP page, called HomeController.java. The code is as follows:

        @Controller
        public class HomeController {
           protected final transient Log log = LogFactory.getLog(getClass());

           @RequestMapping(value = "/mypage")
           public String home() {
              System.out.println("HomeController: Passing through...");
              return "home";
           }
        }

    There is nothing special in the JSP page, home.jsp. If I go to this URL: http://localhost:8080/adcopyqueue/mypage I can view mypage and everything works fine. Also in the Tomcat DOS window I can see the comment: HomeController: Passing through... As expected. Now under the same directory where I have HomeController.java, I've created another file called LoginController.java. Following is the code:

        @Controller
        public class LoginController {
           protected final transient Log log = LogFactory.getLog(getClass());

           @RequestMapping(value = "/loginpage")
           public String login() {
              System.out.println("LoginController: Passing through...");
              return "login";
           }
        }

    And under the same place where I have home.jsp, I've created login.jsp. Also under the Tomcat folders, LoginController.class exists in the same folder as HomeController.class, and login.jsp exists in the same folder as home.jsp. But when I go to this URL: http://localhost:8080/adcopyqueue/loginpage nothing is displayed! I think Tomcat does not pick up LoginController.class because in the Tomcat DOS window I do NOT see this comment: LoginController: Passing through... Instead I see the following, which I do not know the meaning of:

        [ INFO] [http-8080-1 01:43:45] (AppInfo.java:populateAppInfo:34) got manifest
        [ INFO] [http-8080-1 01:43:45] (AppInfo.java:populateAppInfo:36) manifest entries 8

    The structure and the code for HomeController.java and LoginController.java plus the JSP files match. I have no idea why Tomcat sees one of the files and not the other. A clean build did not help. Does anybody have any idea? Any help is greatly appreciated.

    Read the article

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows: a tour of the Task, and the properties of the Task. After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task: returning a single value from a SQL query with two input parameters; returning a rowset from a SQL query; executing a stored procedure and retrieving a rowset, a return value and an output parameter value while passing in an input parameter; passing in the SQL Statement from a variable; and passing in the SQL Statement from a file. Tour Of The Task Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image we have a validation error appear telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatible with this task so we cannot just create any connection manager, and these are detailed in a few graphics' time. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the general tab as shown below. Take a bit of time to have a look around here as throughout this article we will be revisiting this page many times. Whilst on the general tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property, if you click in the empty box next to it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the general tab, move on to the next tab which is the parameter mapping tab. We shall, again, be visiting this tab throughout the article but to give you an initial heads up, this is where you define the input, output and return values from your task. Note this is not where you specify the resultset. If however you now move on to the ResultSet tab, this is where you define what variable will receive the output from your SQL Statement in whatever form that is.
Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here as they deserve a whole article to themselves. Watch out for this as their usefulness will astound you. For a more detailed discussion of what should be the parameter markers in the SQL Statements on the General tab and how to map them to variables on the Parameter Mapping tab see Working with Parameters and Return Codes in the Execute SQL Task. Task Properties There are two places where you can specify the properties for your task. One is in the task UI itself and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks as well as the package because of everything's base structure The Container. BypassPrepare Should the statement be prepared before sending to the connection manager destination (True/False) Connection This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package. DelayValidation Really interesting property and it tells the task to not validate until it actually executes. A usage for this may be that you are operating on table yet to be created but at runtime you know the table will be there. Description Very simply the description of your Task. Disable Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself. DisableEventHandlers As a result of events that happen in the task, should the event handlers for the container fire? ExecValueVariable The variable assigned here will get or set the execution value of the task. Expressions Expressions as we mentioned earlier are a really powerful tool in SSIS and this graphic below shows us a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package allowing you a great deal of flexibility FailPackageOnFailure If this task fails does the package? FailParentOnFailure If this task fails does the parent container? A task can he hosted inside another container i.e. the For Each Loop Container and this would then be the parent. ForcedExecutionValue This property allows you to hard code an execution value for the task. ForcedExecutionValueType What is the datatype of the ForcedExecutionValue? ForceExecutionResult Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion. ForceExecutionValue Should we force the execution result? IsolationLevel This is the transaction isolation level of the task. IsStoredProcedure Certain optimisations are made by the task if it knows that the query is a Stored Procedure invocation. The docs say this will always be false unless the connection is an ADO connection. LocaleID Gets or sets the LocaleID of the container. LoggingMode Should we log for this container and what settings should we use? 
    The value choices are UseParentSetting, Enabled and Disabled. MaximumErrorCount How many times can the task fail before we call it a day? Name Very simply the name of the task. ResultSetType How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML. SqlStatementSource Your Query/SQL Statement. SqlStatementSourceType The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables. TimeOut How long should the task wait to receive results? TransactionOption How should the task handle being asked to join a transaction? Usage Examples As we move through the examples we will only cover in them what we think you must know and what we think you should see. This means that some of the more elementary steps like setting up variables will be covered in the early examples but skipped and simply referred to in later ones. All these examples used the AdventureWorks database that comes with SQL Server 2005. Returning a Single Value, Passing in Two Input Parameters So the first thing we are going to do is add some variables to our package. The graphic below shows us those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query and EndDate and StartDate will be used as input parameters. As you can see all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope and remember a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this Task, ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The expression we typed in was:

        SELECT COUNT(*) AS CountOfEmployees
        FROM HumanResources.Employee
        WHERE (HireDate BETWEEN ? AND ?)

    Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We Add them to the window specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see SSIS has preceded the variable with the word user. This is a default namespace for variables but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options and one of them is namespace. After checking this you will see where you can define your own namespace. The next tab, result set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value then, if you remember from earlier, we are allowed to assign a name to the resultset but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free.
    In our package we set a Breakpoint so we can break the package and have a look in a watch window at the variable values as they appear to our task, and at what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see the count of employees that matched the date range was 2. Returning a Rowset In this example we are going to return a resultset back to a variable after the task has executed, not just a single row single value. There are no input parameters required so the variables window is nice and straightforward: one variable of type object. Here is the statement that will form the source for our Resultset:

        SELECT p.ProductNumber, p.Name, pc.Name AS ProductCategoryName
        FROM Production.ProductCategory pc
        JOIN Production.ProductSubCategory psc
          ON pc.ProductCategoryID = psc.ProductCategoryID
        JOIN Production.Product p
          ON psc.ProductSubCategoryID = p.ProductSubCategoryID

    We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the parameter mapping tab and move straight to the Result Set tab. Here we need to Add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object and in later articles that is exactly what we shall be doing. Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set so we will not cover it again here. For this example we are going to need 4 variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to Add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000. Passing in the SQL Statement from a Variable SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The ExecuteSQL task is no different. It will allow us to pass in a string variable as the SQL Statement. This variable value could have been set earlier on from inside the package or it could have been populated from outside using a configuration. The ResultSet property is set to single row and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General Tab again. Looking at the variables we have in this package you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result name to our variable, and this can be a named Result Name (the column name or alias returned by the query) and not 0.
The expected result into our variable should be the amount of rows in the Person.Contact table and if we look in the watch window we see that it is.   Passing in the SQL Statement from a File The final example we are going to show is a really interesting one. We are going to pass in the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection mananger to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection" As you can see in the graphic below we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up we can now see in the connection manager tray our file connection manager sitting alongside our OLE-DB connection we have been using for the rest of these examples. Now we can go back to the familiar General Tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for other examples depending on the options chosen so we will not cover them again here.   We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
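    An aside that is not part of the original walkthrough: if you want to sanity-check the first example's parameterised query outside of SSIS, a rough ADO.NET equivalent is sketched below. It assumes a local AdventureWorks database with integrated security, the sample dates are placeholders, and it simply does by hand what the task's Parameter Mapping and ResultSet tabs configure for you.

        using System;
        using System.Data.SqlClient;

        class CountEmployees
        {
            static void Main()
            {
                // Assumed connection details - adjust for your own server/database.
                const string connectionString =
                    @"Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=True";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT COUNT(*) AS CountOfEmployees FROM HumanResources.Employee " +
                    "WHERE HireDate BETWEEN @StartDate AND @EndDate", connection))
                {
                    // Named parameters here play the role of the task's ? markers.
                    command.Parameters.AddWithValue("@StartDate", new DateTime(2003, 1, 1));
                    command.Parameters.AddWithValue("@EndDate", new DateTime(2003, 12, 31));

                    connection.Open();
                    int countOfEmployees = (int)command.ExecuteScalar();
                    Console.WriteLine("CountOfEmployees = {0}", countOfEmployees);
                }
            }
        }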

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?), deploying the database changes with a managed, automated approach, and monitoring what you've just put live, to make sure you haven't broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice! Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage; and Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @Output VARCHAR(MAX)
            SET @Output = ''

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%'

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output
                EXEC tSQLt.Fail @Output
            END
        END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client). Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

    21-Jun-2013 11:35:19 Build FAILED.
Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!!

The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19
21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
21-Jun-2013 11:35:19 (sqlCI target) ->
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build – it worked!

Summary

This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes; how can we now deploy these to target sites?

    Read the article

  • Installing Ruby 1.8.6 via RVM on Snow Leopard

    - by Neil Middleton
    I'm trying to install ruby 1.8.6 onto Snow Leopard - but am getting some make errors:
    ossl_x509revoked.c: In function ‘ossl_x509revoked_new’:
    ossl_x509revoked.c:48: warning: passing argument 2 of ‘ASN1_dup’ from incompatible pointer type
    ossl_x509revoked.c: In function ‘DupX509RevokedPtr’:
    ossl_x509revoked.c:64: warning: passing argument 2 of ‘ASN1_dup’ from incompatible pointer type
    readline.c: In function ‘username_completion_proc_call’:
    readline.c:730: error: ‘username_completion_function’ undeclared (first use in this function)
    readline.c:730: error: (Each undeclared identifier is reported only once
    readline.c:730: error: for each function it appears in.)
    make[1]: *** [readline.o] Error 1
    make: *** [all] Error 1
    Anyone have any ideas?

    Read the article

  • HttpUtility.UrlEncode in console application

    - by iar
    I'd like to use HttpUtility.UrlEncode in a console application, VB.NET, VS 2010 Beta 2.
    System.Web.HttpUtility.UrlEncode(item)
    Error message: 'HttpUtility' is not a member of 'Web'.
    In this question Anjisan suggests to add a reference to System.Web, as follows:
    In your solution explorer, right click on references.
    Choose "add reference".
    In the "Add Reference" dialog box, use the .NET tab.
    Scroll down to System.Web, select that, and hit ok.
    However, I don't have a System.Web entry at that location.

    Read the article

  • using ILMerge with .NET 4 libraries

    - by Sarah Vessels
    I'm having trouble using ILMerge in my post-build after upgrading from .NET 3.5/Visual Studio 2008 to .NET 4/Visual Studio 2010. I have a Solution with several projects whose target framework is set to ".NET Framework 4". I use the following ILMerge command to merge the individual project DLLs into a single DLL:
    if not $(ConfigurationName) == Debug if exist "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" /lib:"C:\Windows\Microsoft.NET\Framework64\v4.0.30319" /lib:"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies" /keyfile:"$(SolutionDir)$(SolutionName).snk" /targetplatform:v4 /out:"$(SolutionDir)bin\development\$(SolutionName).dll" "$(SolutionDir)Connection\$(OutDir)Connection.dll" ...other project DLLs... /xmldocs
    If I leave off specifying the location of the .NET 4 framework directory, I get an "Unresolved assembly reference not allowed: System" error from ILMerge. If I leave off specifying the location of the MSTest directory, I get an "Unresolved assembly reference not allowed: Microsoft.VisualStudio.QualityTools.UnitTestFramework" error.
    The ILMerge command above works and produces a DLL. When I reference that DLL in another .NET 4 C# project, however, and try to use code within it, I get the following warning:
    The primary reference "MyILMergedDLL" could not be resolved because it has an indirect dependency on the .NET Framework assembly "mscorlib, Version=4.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" which has a higher version "4.0.65535.65535" than the version "4.0.0.0" in the current target framework.
    If I then remove the /targetplatform:v4 flag and try to use MyILMergedDLL.dll, I get the following error:
    The type 'System.Xml.Serialization.IXmlSerializable' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
    It doesn't seem like I should have to do that. Whoever uses my MyILMergedDLL.dll API should not have to add references to whatever libraries it references. How can I get around this?

    Read the article

  • Visual Studio 2008 doesn't create *.refresh files for external DLL references... what am I missing?

    - by Cory Larson
    Hi all-- I've got a question about something that's just been irritating me. A colleague and I are building a support framework for our current client that we want to reference in other projects. The DLL we want as a reference in our project would be an external reference. We're adding it by doing "Add Reference...", then browsing to the location of the .dll. What I want Visual Studio to do is only add the .xml, .pdb, and a .dll.refresh file, but instead it copies the actual .dll (and .xml and .pdb) into the bin. When we rebuild the framework project, the other project that uses its .dll gets all out of whack until we drop and re-add the reference. Everything I've read online says that VS2008 is supposed to create the .dll.refresh files for you, but it never does. Any ideas? Am I missing something or doing something wrong? At this point I'm ready to add a pre-build event to simply copy the framework .dll into my bin, but the .refresh file seems like less of a hassle if it would just work. Thanks, Cory

    Read the article

  • WCF - The maximum nametable character count quota (16384) has been exceeded while reading XML data.

    - by Jankhana
    I have a WCF Service that uses wsHttpBinding. The server configuration is as follows:
    <bindings>
      <wsHttpBinding>
        <binding name="wsHttpBinding" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647">
          <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" />
          <security mode="None">
            <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
            <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" />
          </security>
        </binding>
      </wsHttpBinding>
    </bindings>
    At the client side I'm including the Service reference of the WCF Service. It works great if I have a limited number of functions, say 90 Operation Contracts in my IService, but if I add one more OperationContract then I'm unable to update the service reference, nor am I able to add that service reference. In this article it's mentioned that changing those config files (i.e. devenv.exe.config, WcfTestClient.exe.config and SvcUtil.exe.config) will make it work, but even after including those bindings in those config files the error still pops up, saying:
    There was an error downloading 'http://10.0.3.112/MyService/Service1.svc/mex'. The request failed with HTTP status 400: Bad Request.
    Metadata contains a reference that cannot be resolved: 'http://10.0.3.112/MyService/Service1.svc/mex'.
    There is an error in XML document (1, 89549). The maximum nametable character count quota (16384) has been exceeded while reading XML data. The nametable is a data structure used to store strings encountered during XML processing - long XML documents with non-repeating element names, attribute names and attribute values may trigger this quota. This quota may be increased by changing the MaxNameTableCharCount property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 89549.
    If the service is defined in the current solution, try building the solution and adding the service reference again.
    Any idea how to solve this?

    Read the article

  • Compiling scipy on Windows 32-bit: linker error with libf77blas.a

    - by Sridhar Ratnakumar
    Has anyone tried compiling SciPy 0.7.1 on Windows using numpy-1.3.0 that was built with the pre-built ATLAS libraries (atlas3.6.0_WinNT_P4SSE2.zip) linked in the installation document. I get the following linker error, and have no ideas as to how to fix this issue.
    $ python setup.py config --compiler=mingw32 build --compiler=mingw32 install --root=i
    [...]
    creating build\temp.win32-2.6\Release
    creating build\temp.win32-2.6\Release\scipy
    creating build\temp.win32-2.6\Release\scipy\integrate
    compile options: '-DNO_ATLAS_INFO=2 -I"C:\Documents and Settings\apy\Application Data\Python\Python26\site-packages\numpy\core\include" -IC:\Python26\include -IC:\Python26\PC -c'
    gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -DNO_ATLAS_INFO=2 -I"C:\Documents and Settings\apy\Application Data\Python\Python26\site-packages\numpy\core\include" -IC:\Python26\include -IC:\Python26\PC -c scipy\integrate\_odepackmodule.c -o build\temp.win32-2.6\Release\scipy\integrate\_odepackmodule.o
    C:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -g -Wall -mno-cygwin -shared build\temp.win32-2.6\Release\scipy\integrate\_odepackmodule.o -LC:\atlas3.6.0_WinNT_P4SSE2 -LC:\MinGW\lib -LC:\MinGW\lib\gcc\mingw32\3.4.5 -LC:\Python26\libs -LC:\ActivePython32Python26\PCbuild -Lbuild\temp.win32-2.6 -lodepack -llinpack_lite -lmach -latlas -lcblas -lf77blas -llapack -lpython26 -lg2c -o build\lib.win32-2.6\scipy\integrate\_odepack.pyd
    C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_daxpy.o):ATL_F77wrap_axpy.c:(.text+0x3c): undefined reference to `ATL_daxpy'
    C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_dscal.o):ATL_F77wrap_scal.c:(.text+0x26): undefined reference to `ATL_dscal'
    C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_dcopy.o):ATL_F77wrap_copy.c:(.text+0x3d): undefined reference to `ATL_dcopy'
    C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_idamax.o):ATL_F77wrap_amax.c:(.text+0x1e): undefined reference to `ATL_idamax'
    C:\atlas3.6.0_WinNT_P4SSE2/libf77blas.a(ATL_F77wrap_ddot.o):ATL_F77wrap_dot.c:(.text+0x36): undefined reference to `ATL_ddot'
    collect2: ld returned 1 exit status
    error: Command "C:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -g -Wall -mno-cygwin -shared build\temp.win32-2.6\Release\scipy\integrate\_odepackmodule.o -LC:\atlas3.6.0_WinNT_P4SSE2 -LC:\MinGW\lib -LC:\MinGW\lib\gcc\mingw32\3.4.5 -LC:\Python26\libs -LC:\Python26\PCbuild -Lbuild\temp.win32-2.6 -lodepack -llinpack_lite -lmach -latlas -lcblas -lf77blas -llapack -lpython26 -lg2c -o build\lib.win32-2.6\scipy\integrate\_odepack.pyd" failed with exit status 1
    Does anyone know what could have gone wrong here? Looking for ATL_daxpy, for example, in libf77blas.a resulted in:
    $ strings libf77blas.a | grep -i daxpy
    _daxpy_
    _atl_f77wrap_daxpy_
    ATL_F77wrap_daxpy.o/ daxpy.o/ 1081731936 1003 513 100755 420 `
    daxpy.f
    _daxpy_
    _atl_f77wrap_daxpy_
    _atl_f77wrap_daxpy_
    _ATL_daxpy
    There is _ATL_daxpy, but no ATL_daxpy.

    Read the article

  • java client program to send digest authentication request using HttpClient API

    - by Rajesh
    I have restlet sample client program which sends the digest request. Similar to this I need java client program which sends a digest request using HttpClient api. Can anybody send me sample code. Thanks in advance.
    Reference reference = new Reference("http://localhost:8092/authenticate");
    Client client = new Client(Protocol.HTTP);
    Request request = new Request(Method.GET, reference);
    Response response = client.handle(request);
    System.out.println("response: " + response.getStatus());
    Form form = new Form();
    form.add("username", "rajesh");
    form.add("uri", reference.getPath());
    // Loop over the challengeRequest objects sent by the server.
    for (ChallengeRequest challengeRequest : response.getChallengeRequests()) {
        // Get the data from the server's response.
        if (ChallengeScheme.HTTP_DIGEST.equals(challengeRequest.getScheme())) {
            Series<Parameter> params = challengeRequest.getParameters();
            form.add(params.getFirst("nonce"));
            form.add(params.getFirst("realm"));
            form.add(params.getFirst("domain"));
            form.add(params.getFirst("algorithm"));
            form.add(params.getFirst("qop"));
        }
    }
    // Compute the required data
    String a1 = Engine.getInstance().toMd5("rajesh" + ":" + form.getFirstValue("realm") + ":" + "rajesh");
    String a2 = Engine.getInstance().toMd5(request.getMethod() + ":" + form.getFirstValue("uri"));
    form.add("response", Engine.getInstance().toMd5(a1 + ":" + form.getFirstValue("nonce") + ":" + a2));
    ChallengeResponse challengeResponse = new ChallengeResponse(ChallengeScheme.HTTP_DIGEST, "", "");
    challengeResponse.setCredentialComponents(form);
    // Send the completed request
    request.setChallengeResponse(challengeResponse);
    response = client.handle(request);
    // Should be 200.
    System.out.println(response.getStatus());

    Read the article

  • kAudioSessionProperty_CurrentHardwareSampleRate input/output

    - by iter
    kAudioSessionProperty_CurrentHardwareSampleRate seems to describe the input sampling rate. I wonder if there is a way to determine the available output sampling rate on an iPhone / iPad (iPhone supports 44.1K; iPad, 48K). http://developer.apple.com/iphone/library/documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html#//apple_ref/doc/c_ref/kAudioSessionProperty_CurrentHardwareSampleRate

    Read the article

< Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >