Search Results

Search found 14316 results on 573 pages for 'reference manual'.

  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
    Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line-of-business applications. Getting things set up correctly is critical to being able to take full advantage of the RIA Services plumbing, and when developers struggle with the setup they tend to shy away from the solution as a whole. I'm a big proponent of RIA Services and wanted to take the opportunity to share some of my experiences in setting up these types of projects. In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow), and the information shared in this post was promised during that presentation.

    One other thing I want to mention before diving in is the existence of a number of other great posts on this subject. I've learned a lot from many of them and wanted to call out a few. The purpose of my post is to point out some of the gotchas that people get caught up on in the process, but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject:

    http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx
    http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx

    Technologies

    I don't intend for this post to turn into a full WCF RIA Services tutorial, but I did want to point out which technologies we will be using:

    Visual Studio .NET 2010
    Silverlight 4.0
    WCF RIA Services for Visual Studio 2010
    Entity Framework 4.0

    I also wanted to point out that the screenshots came from my personal development box, which has a number of additional plug-ins and frameworks loaded, so a few of the screenshots might not match 100% with what you see on your own machine. If you do not have Visual Studio 2010 you can download the Express version from http://www.microsoft.com/express. The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#... sorry to you VB folks, but the concepts are 100% identical.

    Setting up a new RIA Services Project

    This section provides a step-by-step walkthrough of setting up a new RIA Services project using a shared DLL for server-side code and a simple Entity Framework model for data access. All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace; this would be modified to match your company's standards.

    First, open Visual Studio and open the new project window via File->New->Project. In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select "Silverlight Application" as your project type. Verify your solution name and location are set appropriately. Note that the project name we specified in the example below ends with .Client; this indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that. Click OK to continue.

    During the creation of a new Silverlight 4 project you will be prompted with the following dialog to create a new ASP.NET web project to host your Silverlight content. As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the "Enable WCF RIA Services" option is checked and click OK.
    Obviously, there are some other options here which have an effect on your solution, and you are welcome to look around. For our example we are going to leave the ASP.NET Web Application Project selected. If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project, these options are available as well. Also, whichever web project type you select, the name can be modified here; note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix.

    At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio Solution Explorer as part of a single Visual Studio solution as follows:

    Now we want to add our WCF RIA Services projects to this same solution. To do so, right-click on the Solution node in the Solution Explorer and select Add->New Project. In the New Project dialog, again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below. Make sure your project name is set appropriately as well. For the sample below, we will name the project "ArchitectNow.RIAServices.Server.Entities". The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below). Click OK to continue.

    Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution. The first will be a project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension. The full solution (with all 4 projects) is shown in the image below.

    The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects. It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server. The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework). This is our server-side code and business logic, and the RIA Services plumbing will maintain a link between this project and the front end. Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end.

    At this point, we want to do a little cleanup of the projects in our solution, and we will do so by deleting the "Class1.cs" class from both the .Entities project and the .Entities.Web project. (Has anyone ever intentionally named a class "Class1"?)

    Next, we need to configure a few references to make RIA Services work. THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE! Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library). Next, again using the Add References dialog, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).
    To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio Solution Explorer and select "Add Reference" from the resulting context menu. You will want to make sure these references are added as "Project" references to simplify your future debugging. To reiterate the reference direction using the project names we have utilized in this example thus far: .Client references .Entities, and .Client.Web references .Entities.Web. If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library, and the ASP.NET host project must reference the server-side class library.

    Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web). We will do this by right-clicking on this project (ArchitectNow.Server.Entities.Web in the above diagram) and selecting Add->New Item. In the Add New Item dialog we will select ADO.NET Entity Data Model as in the following diagram. For now we will call it simply SampleDataModel.edmx and click OK.

    It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data, and any data access technology is supported (as long as the server-side implementation maps to the RIA Services pattern, which is a topic beyond the scope of this post). We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure; as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine. The following diagram shows a simple EF data model with two tables that I reverse engineered from a local data store. If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access.

    At this point, once you have an EF data model generated as an EDMX in your .Entities.Web project, YOU MUST BUILD YOUR SOLUTION. I know it seems strange to call that out, but it is important that the solution be built at this point for the next step to be successful. Obviously, if you have any build errors, these must be addressed now.

    Next we will add a RIA Services Domain Service to our .Entities.Web project (our server-side code). Right-click on the .Entities.Web project and select Add->New Item. In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below). Next, click "Add".

    After clicking "Add" to include the Domain Service Class in the selected project, you will be presented with the following dialog. In it, you can choose which entities from the selected EDMX to include in your services and whether they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service. If the "Available DataContext/ObjectContext classes" dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX. I would also recommend verifying that the "Generate associated classes for metadata" option is selected. Once you have selected the appropriate options, click "OK".
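    Before moving on, it may help to see roughly what ends up in these generated files. Below is a hand-trimmed sketch (not the exact generated output); the SampleEntities context and Customer entity are hypothetical names standing in for whatever your own reverse-engineered model contains:

        using System.ComponentModel.DataAnnotations;
        using System.Linq;
        using System.ServiceModel.DomainServices.EntityFramework;
        using System.ServiceModel.DomainServices.Hosting;

        // Sketch of SampleService.cs; "SampleEntities" and "Customer" are
        // placeholder names for your actual EF context and entity.
        [EnableClientAccess()]
        public class SampleService : LinqToEntitiesDomainService<SampleEntities>
        {
            // Query method exposed to the Silverlight client.
            public IQueryable<Customer> GetCustomers()
            {
                return this.ObjectContext.Customers;
            }

            // Generated only when editing is enabled for the entity.
            public void UpdateCustomer(Customer currentCustomer)
            {
                this.ObjectContext.Customers.AttachAsModified(
                    currentCustomer, this.ChangeSet.GetOriginal(currentCustomer));
            }
        }

        // Sketch of the companion SampleService.metadata.cs "buddy" class,
        // produced by the "Generate associated classes for metadata" option.
        [MetadataTypeAttribute(typeof(Customer.CustomerMetadata))]
        public partial class Customer
        {
            internal sealed class CustomerMetadata
            {
                // Metadata classes are not meant to be instantiated.
                private CustomerMetadata() { }

                // Validation attributes from System.ComponentModel.DataAnnotations
                // are added here, for example:
                [Required]
                [StringLength(50)]
                public string Name { get; set; }
            }
        }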
    Once you have added the Domain Service class to the .Entities.Web project, the resulting solution should look similar to the following:

    Note that in the solution you now have a SampleDataModel.edmx, which represents your EF mapping to your database, and a SampleService.cs, which will contain a large amount of generated code that RIA Services utilizes to access this data from the Silverlight front end. You will put all your server-side data access code and logic into the SampleService.cs class. The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes.

    FINAL AND KEY CONFIGURATION STEP! One key step that causes significant headaches for developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project. While we didn't point it out at the time, you can see it in the image above. This connection string will be required for the EF data model to successfully locate its data. Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.

    Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA Services calls through the ASP.NET host project (i.e. .Client.Web). This host project has a reference to the .Entities.Web project which actually contains the code, so everything will pass through correctly EXCEPT that the host project will utilize its own Web.Config for any configuration settings. For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project. I know this is a bit tedious and I wish there were a simpler solution, but it is required for our RIA Services Domain Service to be made available to the front-end Silverlight project. Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config. Unfortunately, the <system.webServer> section will exist in both, and the contents of this section will need to be manually merged. Fortunately, this is a step that needs to be taken only once per solution. As you add additional data structures and Domain Service methods to the server, no additional changes to the Web.Config will be necessary.

    Next Steps

    At this point, we have walked through the basic setup of a simple RIA Services solution. Unfortunately, there is still a lot to know about RIA Services, and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven't even made a single RIA Services call). I plan on posting a few more introductory posts over the next few weeks to take us to this step. If you have any questions on the content in this post, feel free to reach out to me via this blog and I'll gladly point you in (hopefully) the right direction.

    Resources

    Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA Services. While I plan on posting more on the subject, I didn't invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place.
    The books and online resources below will go a long way toward making you extremely productive with RIA Services in the shortest time possible. The only thing required of you is the dedication to take advantage of the resources available.

    Books

    Pro Business Applications with Silverlight 4
    http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2
    Silverlight 4 in Action
    http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1
    Pro Silverlight for the Enterprise (Books for Professionals by Professionals)
    http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3

    Web Content

    RIA Services
    http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes
    http://silverlight.net/riaservices/
    http://www.silverlight.net/learn/videos/all/net-ria-services-intro/
    http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/
    http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices
    http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01
    http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services
    http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx

    Silverlight
    www.silverlight.net
    http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx
    http://channel9.msdn.com/shows/silverlighttv

    Read the article

  • How to sum cells depending on the content of a neighbor cell

    - by dannymcc
    I have an Excel document with the following columns:

        Date     | Reference | Amount
        23/01/11 | 111111111 | £20.00
        25/09/11 | 222222222 | £30.00
        11/11/11 | 111111111 | £40.00
        01/04/11 | 333333333 | £10.00
        31/03/11 | 333333333 | £33.00
        20/03/11 | 111111111 | £667.00
        21/11/11 | 222222222 | £564.00

    I am trying to find a way of summarising the content in the following way:

        Reference: 111111111  Total: £727.00

    So far the only way I have been able to achieve this is to filter the list by each reference number (manually) and then add a simple SUM formula to the bottom of the list of amounts. Are there any tricks that anyone knows that may speed this up? What I am trying to achieve is a spreadsheet that highlights each reference number that collectively exceeds £2,000.
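    One idea I have not fully explored (a sketch only - I am assuming Reference sits in column B and Amount in column C, which may not match your layout) is SUMIF:

        =SUMIF(B:B, "111111111", C:C)

    For the highlighting part, a conditional formatting rule on the reference column using a formula like =SUMIF($B:$B, $B2, $C:$C) > 2000 might do it, but I have not confirmed this is the best way.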

    Read the article

  • What do you do when you need whole-words search in Firefox?

    - by romkyns
    Example: I am looking to see if "arg" is a special keyword in Lua. I go to the Lua reference manual at http://www.lua.org/manual/5.1/manual.html. I search for "arg". I find hundreds of occurrences of the word "argument(s)" or "vararg". Any ideas? I know Firefox won't implement whole words as a core feature (something about cluttering up the ui... argh...), and I couldn't find a good addon that implemented this well.

    Read the article

  • what's POST code 18 / can I run an ASRock P55 Pro + Core i3 530?

    - by Michael Borgwardt
    When I switch my newly built PC on, the fans start up, but I get nothing on the monitor, and the POST display on the motherboard runs quickly through various codes and then stops at code 18, which does not appear in the manual (the list there seems identical to this one). This lasts about 10 seconds, after which the machine shuts down. After a pause (also about 10 seconds) it starts up again, and this repeats until I cut the power. Interestingly, when I push the reset button, it stops at POST code 16 (which is also not listed in the manual). Does anyone have information about the meaning of those codes?

    Motherboard: ASRock P55 Pro
    CPU: Intel Core i3 530
    Graphics Card: Sapphire HD 5750

    Does anyone have experience with this motherboard/CPU combination? It says on the packaging and in the manual that it's only compatible with Core i5 and i7 (no mention of i3), but on the maker's product page the i3 is listed as compatible as well.

    Read the article

  • How do I get Visual Studio build process to pass condition to library(external) project

    - by Billy Talented
    I have tried several solutions for this problem - enough to know I do not know enough about MSBuild to do this elegantly, but I feel like there should be a method.

    I have a set of libraries for working with .NET projects. A few of the projects utilize System.Web.Mvc - which recently released version 2 - and we are looking forward to the upgrade. Currently, sites which reference this library reference it directly by the project (csproj) on the developer's computer - not a built version of the library - so that changes and source code can easily be viewed when debugging code from this library. This works quite well, and we would prefer not to have to switch to binary references (but will if this is the only solution).

    The problem I have is that because of some of the functionality that was added onto the MVC1-based library (view engines, model binders, etc.), several of the sites reliant on these libraries need to stay on MVC1 until we have fully evaluated and tested them on MVC2. I would prefer not to have to fork or have two copies on each dev machine.

    So what I would like to be able to do is set a property group value in the referencing web application and have this read by the above-mentioned library, with the caveat that when working directly on the library via its containing solution I would like to be able to control this via Configuration Manager, by selecting a build type and having that property override the build behavior of the solution (i.e. 'Debug - MVC1' vs 'Debug - MVC2'). I have this working via:

        <Choose>
          <When Condition=" '$(Configuration)|$(Platform)' == 'Release - MVC2|AnyCPU' Or '$(Configuration)|$(Platform)' == 'Debug - MVC2|AnyCPU'">
            <ItemGroup>
              <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
              </Reference>
              <Reference Include="Microsoft.Web.Mvc, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL">
                <SpecificVersion>False</SpecificVersion>
                <HintPath>..\Dependancies\Web\MVC2\Microsoft.Web.Mvc.dll</HintPath>
              </Reference>
            </ItemGroup>
          </When>
          <Otherwise>
            <ItemGroup>
              <Reference Include="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
                <SpecificVersion>False</SpecificVersion>
                <HintPath>..\Dependancies\Web\MVC\System.Web.Mvc.dll</HintPath>
              </Reference>
            </ItemGroup>
          </Otherwise>
        </Choose>

    What I am struggling with is the cross-solution issue (solution TheWebsite references this project and needs to control which build property to use): I have not found a way to handle this that I think is a solid solution and that still lets the build within Visual Studio work as it has to date. Other bits: we are using VS2008, ReSharper, TeamCity for CI, and SVN for source control.
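    One direction I have been tempted to try (a rough, untested sketch - the MvcVersion property name is made up for illustration) is to key the Choose block off a custom property instead of $(Configuration), since properties passed on the msbuild command line are global and flow into referenced projects:

        <Choose>
          <!-- Hypothetical: select MVC2 when the caller passes /p:MvcVersion=2 -->
          <When Condition="'$(MvcVersion)' == '2'">
            <ItemGroup>
              <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL" />
            </ItemGroup>
          </When>
          <!-- Fall back to the MVC1 binaries otherwise -->
          <Otherwise>
            <ItemGroup>
              <Reference Include="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
                <SpecificVersion>False</SpecificVersion>
                <HintPath>..\Dependancies\Web\MVC\System.Web.Mvc.dll</HintPath>
              </Reference>
            </ItemGroup>
          </Otherwise>
        </Choose>

    With that in place, "msbuild TheWebsite.sln /p:MvcVersion=2" would build the whole solution, library included, against MVC2, though I have not verified how well this plays with Configuration Manager inside Visual Studio.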

    Read the article

  • ASP.NET MVC 2 RTM Unit Tests not compiling

    - by nmarun
    I found something weird this time when it came to the ASP.NET MVC 2 release. Only a handful of people 'made noise' about the release... at least on the asp.net blog site; usually there's a big 'WOOHAA... <something> is released' kind of a thing. Hmm... but here's the reason I'm writing this post. I'm not sure how many of you read the release notes before downloading the version.. I did, I did, I did. Now there's a 'Known Issues' section in the document and I'm quoting the text as is from this section:

    Unit test project does not contain reference to ASP.NET MVC 2 project: If the Solution Explorer window is hidden in Visual Studio, when you create a new ASP.NET MVC 2 Web application project and you select the option Yes, create a unit test project in the Create Unit Test Project dialog box, the unit test project is created but does not have a reference to the associated ASP.NET MVC 2 project. When you build the solution, Visual Studio will display compilation errors and the unit tests will not run. There are two workarounds. The first workaround is to make sure that the Solution Explorer is displayed when you create a new ASP.NET MVC 2 Web application project. If you prefer to keep Solution Explorer hidden, the second workaround is to manually add a project reference from the unit test project to the ASP.NET MVC 2 project.

    This definitely looks like a bug to me; see below for a visual. At the top right corner you'll see that the Solution Explorer is set to auto-hide, and the TestMvc2 unit test project is missing the reference to the MVC project - that is the reason we get compilation errors without even writing a single line of code. So thanks to <VeryBigFont>ME</VeryBigFont> and <VerySmallFont>Microsoft</VerySmallFont>, we've shown the world how to resolve a major issue and to live in Peace with the rest of humanity!

    Read the article

  • How to login as other in ubuntu 12.04

    - by murali
    I have upgraded my Ubuntu 11.10 to 12.04 and I can no longer see the "Other" login option on the login screen; it shows only a Guest login and a User login. The User login asks only for a password, and I have never logged in as that user, so I do not know its password. My problem is: how do I log in as root from the login screen, and how can I get the "Other" login option back so that I can log in as root or some other user?

    Before asking this question I tried the following:

    - I tried to add the line greeter-show-manual-login=true at the bottom of the /etc/lightdm/lightdm.conf file from the Guest login, but I got an access-denied error. I do not know the password of the User login (which asks only for a password) in order to add the line that way.
    - From a recovery-mode login I could log in as root, but I could not add the above line to lightdm.conf; I got a read-only error. I then tried to change the permissions with chmod 777 lightdm.conf (from within /etc/lightdm/), but I got an error saying the file is marked read-only in the file system.
    - In 11.10 I had created 4 users, and I can see that those users still exist in 12.04, so I am sure the users were not removed during the upgrade.

    In short, I need the "Other" login option on my login screen. How do I get it? Please help me.

    Edited question: I have added the line greeter-show-manual-login=true to the /etc/lightdm/lightdm.conf file in recovery mode and saved the file using :wq. Now my /etc/lightdm/lightdm.conf file looks like this:

        [SeatDefaults]
        greeter-session=unity-greeter
        user-session=ubuntu
        greeter-show-manual-login=true

    If I have made any mistake, please correct me. This problem has already cost me two working days and all my work is pending... please help me.

    Read the article

  • Regular-Expressions.info Thoroughly Updated

    - by Jan Goyvaerts
    RegexBuddy 4 was released earlier this month. This is a major upgrade that significantly improves RegexBuddy's ability to emulate the features and deficiencies of the latest versions of all the popular regex flavors, as well as many past versions of these flavors. Along with that, the Regular-Expressions.info website has been thoroughly updated with new content. Both the tutorial and reference sections have been significantly expanded to cover all the features of the latest regular expression flavors. There are also new tutorial and reference subsections that explain the syntax used by replacement strings when searching and replacing with regular expressions. I'm also reviving this blog. In the coming weeks you can expect blog posts that highlight the new topics on the Regular-Expressions.info website. Later on I'll blog about more intricate regex-related issues that RegexBuddy 4 emulates but that the website doesn't talk about or only mentions in passing. RegexBuddy 4.0.0 is aware of 574 different aspects (syntactic and behavioral differences) of 94 regular expression flavors. These numbers are sure to grow with future 4.x.x releases. While RegexBuddy juggles it all with ease, that's far too much detail to cover in a tutorial or reference that any person would want to read. So the tutorial and reference cover the important features and behaviors, while the blog will serve the corner cases as tidbits. Subscribe to the Regex Guru RSS Feed if you don't want to miss any articles.

    Read the article

  • Part 3: Customization Strategy or how long does it take

    - by volker.eckardt(at)oracle.com
    The previous part in this blog should have made us aware that many procedures are required to manage all these steps. To review your status, let me ask you a question:

    What is your customization strategy?

    Your answer might be something like, 'Customization strategy? Well, we have standards and we let requirement documents be approved.'

    Let me ask you another question:

    How long does it take to redeploy all your customizations into a fresh installation?

    In 90% of all installations the answer to this question would be: we can't! Although no one would (hopefully) ever have to do it, just thinking about it makes us recognize that today we have too many manual steps involved, different procedures, and sometimes (undocumented) manual steps to complete a customization installation. And... in general, too many customizations.

    Why is working with customizations often so complicated and time consuming? Here are the key reasons as I have identified them in my projects:

    - Customization standards defined, but not maintained
    - Different knowledge on the developer side (results get an individual developer touch)
    - No need to automate deployment (not forced by the client)
    - Different documentation styles, not easy to hand over to someone else
    - Different development concepts, difficult for maintenance
    - Just the minimum present for testing, often positive testing only
    - Deviations from naming conventions accepted, although defined
    - Complicated procedures, therefore sometimes partially ignored
    - And last but not least, hand-made version control (still)

    If you had to 'redeploy all your customizations', you would have to:

    - Follow all your own standards and best practices
    - Track deviations and define corrective tasks
    - Automate as much as possible, minimize manual tasks
    - Not allow any change to come in without version control
    - Utilize products to support you in deployment
    - Minimize hand-made scripts and extensive documentation
    - Review regularly used techniques to guarantee that all are in line with the current release and also easily maintainable
    - Create solution libraries and force the team to contribute and reuse
    - Define quality activities and execute them
    - Define a procedure to release customizations

    I know, it is easy to write down, but much harder to manage. I will provide some guidelines in my next blog.

    Volker

    Read the article

  • update-java-alternatives vs update-alternatives --config java

    - by Stan Smith
    Thanks in advance from this Ubuntu noob... On Ubuntu 12.04 LTS I have installed Sun's JDK 7, Eclipse, and the Arduino IDE. I want the Arduino IDE to use OpenJDK 6 and Eclipse to use Sun's JDK 7. From my understanding I need to manually choose which Java to use before running each application. This led me to the 'update-java-alternatives -l' command... when I run this I only see the following:

        java-1.6.0-openjdk-amd64 1061 /usr/lib/jvm/java-1.6.0-openjdk-amd64

    ...but when I run 'update-alternatives --config java' I see the following:

        *0  /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   auto mode
         1  /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   manual mode
         2  /usr/lib/jvm/jdk1.7.0/bin/java                   manual mode
         3  /usr/lib/jvm/jre1.7.0/bin/java                   manual mode

    I don't understand why update-java-alternatives doesn't display the same 3 options. I also don't understand how to switch between OpenJDK 6 and JDK 7. Can someone please explain how I can go about using OpenJDK 6 for Arduino development and Sun JDK 7 for Eclipse/Android development? Thank you in advance for any assistance or feedback you can offer. Stan

    Read the article

  • How to organize a programming course?

    - by Bogdan Gavril
    I've been given the task of training our manual testers to become developers in test (i.e., to write test automation!). Some have basic programming knowledge (either dabbling in PHP or reading stuff) and some have no experience. Note that I do have teaching experience, but with real students, not employees, and one concern is that they will not put in extra hours beyond the 20% management gave them for the transition. Language to be taught and used: C#. We have 8 hours per week to do this and should decide within 2 months if they will make it.

    I am thinking of a combined approach:

    - use a manual such as Head First C# (although I'm not happy with the labs; they're mostly games and I don't want to add UI complexities)
    - have them read from the manual
    - do labs with them, solving more and more difficult problems, and explain the theoretical stuff as well
    - have them do a bigger project towards the end

    Some questions:

    - do you have a better suggestion as far as manuals go?
    - do you have a better approach? Focus less on labs?
    - what kind of assessments should I use and how often?
    - should I let them do a bigger project (bank system or small game), and how much time should I invest in that?
    - ideas on labs?
    - other resources?

    Any other tips would be most welcomed! Thanks!

    Read the article

  • A new tool in beta: Conflict Alert

    - by Alex Davies
    You know that manual merges are a real pain? Well, I've just released a Visual Studio extension that makes manual merges a thing of the past. No source control system can automatically merge two edits to the same line of code. Conflict Alert solves this by warning you that you are heading down a path that will cause a manual merge later down the line. You choose whether you want to carry on, or talk to your teammate and find out what they are doing. Have you ever warned your teammates that you are doing a big refactor, and that they should 'keep out of class X'? Conflict Alert tells them for you automatically, by highlighting the sections of code that you have edited. It doesn't need to connect to your source control system, so it works no matter which one you use. It's a first release, and I hope it is useful. Any feedback would be gratefully received. Grab a teammate and try it now.

    Read the article

  • How do I install the latest 2012 TeX Live on 12.04?

    - by user74713
    The TeX Live included in Ubuntu 12.04 is very old (2009), and I would like to install the latest version to be able to edit the Ubuntu Manual. How do I do that from the terminal?

    Hello, my name is Chris. I am a student pursuing a career in technical writing, and I would like to assist the Ubuntu community while gaining experience and building my resume. I need to install the upstream version of TeX Live on 12.04 to edit the manual for Ubuntu. I am having a difficult time installing it per the directions at http://ubuntu-manual.org/getinvolved/editors#install-texlive. The TeX Live documents are on my computer, but I am not able to run the install; no TeX Live program is found on my computer. Any help is greatly appreciated! ~Thanx!~

    Below I have listed my prior attempts, with links to the posts covering the particulars of each:

    Backports: I have tried using the official backports of the latest (2012) TeX Live via their PPA.
    How do I install the latest 2012 TeX Live on 12.04?

    LaTeX: I have also tried running LaTeX as suggested.
    http://ubuntuforums.org/showthread.php?t=2019051

    PPA causing the issue?? Or something else: I came across a post concerning the ability to install programs via the terminal and am wondering if it may be my problem???
    PPA - TeX Live Cannot install anything through Terminal - apt-get -f install

    Read the article

  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog post on how to create an SSIS Catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx

    Here I will summarize the 3 ways I know of so far to execute a package while using variables set up in an SSIS Catalog environment.

    The first way: we have an SSIS project holding a reference to an environment, with one of the project parameters using a value set up in the environment called "Development". With this setup, you are limited to calling the packages by right-clicking on them in the SSIS Catalog list and selecting Execute, but you are free to choose the absolute or relative path of the environment. The following screenshot shows the 2 available paths to your SSIS environments. Personally, I use the absolute path because of option 3, just to keep everything simple for myself.

    The second option is to call the package through a SQL job. This does require you to configure your project to already reference an environment and use its variables. When a job step is set up, the configuration part will require you to select that reference again. This is more useful when you want to automate the same package that needs to be run in different environments.

    The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx), but the top part had to be modified to include the logic to use the environment reference:

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
            @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
            , @EnvironmentName NVARCHAR(128) = NULL
            , @Use32BitRunTime BIT = 0
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;

            -- Create a package execution
            IF @EnvironmentName IS NULL
            BEGIN
                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name=@PackageName,
                    @execution_id=@execution_id OUTPUT,
                    @folder_name=@ProjectFolder,
                    @project_name=@ProjectName,
                    @use32bitruntime=@Use32BitRunTime;
            END
            ELSE
            BEGIN
                DECLARE @EnvironmentID AS INT

                SELECT @EnvironmentID = [reference_id]
                FROM SSISDB.[internal].[environment_references] WITH(NOLOCK)
                WHERE [environment_name] = @EnvironmentName
                    AND [environment_folder_name] = @ProjectFolder

                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name=@PackageName,
                    @execution_id=@execution_id OUTPUT,
                    @folder_name=@ProjectFolder,
                    @project_name=@ProjectName,
                    @reference_id=@EnvironmentID,
                    @use32bitruntime=@Use32BitRunTime;
            END
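    For completeness, here is how the stored procedure above could be invoked (a sketch; the folder, project, and package names are placeholders, while "Development" is the environment from the example):

        EXEC [AUDIT].[LaunchPackageExecutionInSSISCatalog]
              @PackageName = N'LoadSomething.dtsx'  -- placeholder package name
            , @ProjectFolder = N'MyFolder'          -- placeholder catalog folder
            , @ProjectName = N'MyProject'           -- placeholder project name
            , @AuditKey = 1
            , @DisableNotification = 0
            , @PackageExecutionLogID = 0
            , @EnvironmentName = N'Development'     -- pass NULL to skip the environment reference
            , @Use32BitRunTime = 0;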

    Read the article

  • Centralizing a resource file among multiple projects in one solution (C#/WPF)

    - by MarkPearl
    One of the challenges one faces when doing multi-language support in WPF is when one has several projects in one solution (i.e. a business layer and a UI layer). Typically each project would have its own resource file - meaning if you have 3 projects in a solution you will have 3 resource files. For me this isn't an ideal solution, as you normally want to send the resource files to a translator, and the more resource files you have, the more fragmented the dictionary will be and the more complicated it will be for the translator. This can easily be overcome by creating a single project that just holds your translation resources and then exposing it to the other projects as a reference, as explained in the following steps.

    Step 1 - Add a class library to your solution that will contain just the resource files. Your solution will now have an additional project, as illustrated below.

    Step 2 - Reference this project from the other projects.

    Step 3 - Move all the resources from the other resource files to the translation project's resource file.

    Step 4 - Set the translation project's resource file access modifier to public.

    Step 5 - Update all other projects to use the translation resource file instead of their local resource files. To do this in XAML you need to expose the project as a namespace at the top of the XAML file... note that the example below is for a project called MaxCutLanguages - you need to put the correct project name in its place.

        xmlns:MaxCutLanguages="clr-namespace:MaxCutLanguages;assembly=MaxCutLanguages"

    And then in the actual XAML you need to replace any hard-coded text with a reference to the resource file:

        <TextBlock Text="{x:Static MaxCutLanguages:Properties.Resources.HelloWorld}"/>

    End Result

    You can now delete all the resource files in the other projects, as you now have one centralized resource file.
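    As a side note, the same centralized resources can also be used from C# code-behind in any project that references the translation project. A small sketch (the HelloWorld entry comes from the steps above; the German culture switch assumes a Resources.de.resx translation exists, which is not part of this example):

        // Read the shared resource directly from any referencing project.
        string greeting = MaxCutLanguages.Properties.Resources.HelloWorld;

        // Resource lookups honor the current UI culture, so switching it
        // before the lookup returns the translated value (if present).
        System.Threading.Thread.CurrentThread.CurrentUICulture =
            new System.Globalization.CultureInfo("de");
        string germanGreeting = MaxCutLanguages.Properties.Resources.HelloWorld;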

    Read the article

  • Join Us!! Live Webinar: Using UPK for Testing

    - by Di Seghposs
    Create Manual Test Scripts 50% Faster with Oracle User Productivity Kit
    Thursday, March 29, 2012, 11:00 am – 12:00 pm ET

    Click here to register now for this informative webinar.

    Oracle UPK enhances the testing phase of the implementation lifecycle by reducing test plan creation time, improving accuracy, and providing the foundation for reusable training documentation, application simulations, and end-user performance support—all critical assets to support an enterprise application implementation. With Oracle UPK:

    Reduce manual test plan development time - Accelerate the testing cycle by significantly reducing the time required to create the test plan.
    Improve test plan accuracy - Capture test steps automatically using Oracle UPK and import those steps directly to any of these testing suites, eliminating many of the errors that occur when writing manual tests.
    Create the foundation for reusable assets - Recorded simulations can be used for other lifecycle phases of the project, such as knowledge transfer for training and support.

    With its integration with Oracle Application Testing Suite, IBM Rational, and HP Quality Center, Oracle UPK allows you to deploy high-quality applications quickly and effectively by providing a consistent, repeatable process for gathering requirements, planning and scheduling tests, analyzing results, and managing issues. Join this live webinar and learn how to decrease your time to deployment and enhance your testing plans today!

    Read the article

  • Why values in my WCF data contract were suddenly wrong...

    - by mipsen
    A WCF service I provided took a very simple data contract as a parameter (containing one string and one int...) and had a very simple task to do. A .NET 3.5 client was created using the VS2008 feature "Add Service Reference". Everything worked as expected.

    Then a slight change came in: the client was expected to run on machines with .NET 2.0 only. So we set the target framework to .NET 2.0, removed the references to System.ServiceModel, System.Runtime.Serialization, and the service reference, and created a new reference to the service using the old "Add Web Reference" - a matter of 2 minutes.

    When testing, the int value in the data contract arriving at the WCF service suddenly was 0, instead of 38 as we expected. What happened? When generating an old web reference against a WCF data contract, an additional boolean field is created for each value-type field, named [Fieldname]Specified (e.g. AgeSpecified), which defaults to "false". WCF inspects these boolean fields to determine whether a value was provided for the value-type field. If the "Specified" field is "false", WCF translates that to using the default value of the value-type field; for int this is 0. So we had to set the "Specified" field for the int value to "true", and everything was fine again. That was what we forgot after setting the framework version to 2.0...
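    In code, the client-side fix looked roughly like this (a sketch - the PersonData, Age, and DoWork names are placeholders, since the contract really just contained one string and one int):

        // Proxy types generated by "Add Web Reference" (names hypothetical).
        PersonData data = new PersonData();
        data.Name = "Smith";
        data.Age = 38;

        // Without this flag the .NET 2.0 proxy omits the value-type field
        // on the wire, and WCF falls back to default(int), i.e. 0.
        data.AgeSpecified = true;

        MyServiceProxy proxy = new MyServiceProxy();
        proxy.DoWork(data);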

    Read the article

  • Trouble linking libboost libraries to compile sslsniff on RHEL

    - by rwong48
    Trying to build sslsniff on a RHEL 5.2 system here. When compiling sslsniff on RHEL I hit the same errors when using libboost packages (from repositories like rpmforge) and compiling libboost from source (which appeared to be successful.) I tried this on a fresh system as well (no previous/failed/garbage installs of libboost etc.) # make g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SSLConnectionManager.o -MD -MP -MF .deps/SSLConnectionManager.Tpo -c -o SSLConnectionManager.o SSLConnectionManager.cpp mv -f .deps/SSLConnectionManager.Tpo .deps/SSLConnectionManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT FirefoxUpdater.o -MD -MP -MF .deps/FirefoxUpdater.Tpo -c -o FirefoxUpdater.o FirefoxUpdater.cpp mv -f .deps/FirefoxUpdater.Tpo .deps/FirefoxUpdater.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT Logger.o -MD -MP -MF .deps/Logger.Tpo -c -o Logger.o Logger.cpp mv -f .deps/Logger.Tpo .deps/Logger.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SessionCache.o -MD -MP -MF .deps/SessionCache.Tpo -c -o SessionCache.o SessionCache.cpp mv -f .deps/SessionCache.Tpo .deps/SessionCache.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT SSLBridge.o -MD -MP -MF .deps/SSLBridge.Tpo -c -o SSLBridge.o SSLBridge.cpp mv -f .deps/SSLBridge.Tpo .deps/SSLBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT HTTPSBridge.o -MD -MP -MF .deps/HTTPSBridge.Tpo -c -o HTTPSBridge.o HTTPSBridge.cpp mv -f .deps/HTTPSBridge.Tpo .deps/HTTPSBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT sslsniff.o -MD -MP -MF .deps/sslsniff.Tpo -c -o sslsniff.o sslsniff.cpp mv -f .deps/sslsniff.Tpo .deps/sslsniff.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT FingerprintManager.o -MD -MP -MF .deps/FingerprintManager.Tpo -c -o FingerprintManager.o FingerprintManager.cpp mv -f .deps/FingerprintManager.Tpo .deps/FingerprintManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT AuthorityCertificateManager.o -MD -MP -MF .deps/AuthorityCertificateManager.Tpo -c -o AuthorityCertificateManager.o `test -f 'certificate/AuthorityCertificateManager.cpp' || echo './'`certificate/AuthorityCertificateManager.cpp mv -f .deps/AuthorityCertificateManager.Tpo .deps/AuthorityCertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT TargetedCertificateManager.o -MD -MP -MF .deps/TargetedCertificateManager.Tpo -c -o TargetedCertificateManager.o `test -f 'certificate/TargetedCertificateManager.cpp' || echo './'`certificate/TargetedCertificateManager.cpp mv -f .deps/TargetedCertificateManager.Tpo .deps/TargetedCertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT CertificateManager.o -MD -MP -MF .deps/CertificateManager.Tpo -c -o CertificateManager.o `test -f 'certificate/CertificateManager.cpp' || echo './'`certificate/CertificateManager.cpp mv -f .deps/CertificateManager.Tpo .deps/CertificateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT HttpBridge.o -MD -MP -MF .deps/HttpBridge.Tpo -c -o HttpBridge.o `test -f 'http/HttpBridge.cpp' || echo './'`http/HttpBridge.cpp mv -f .deps/HttpBridge.Tpo .deps/HttpBridge.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT HttpConnectionManager.o -MD -MP -MF .deps/HttpConnectionManager.Tpo -c -o HttpConnectionManager.o `test -f 'http/HttpConnectionManager.cpp' || echo './'`http/HttpConnectionManager.cpp mv -f .deps/HttpConnectionManager.Tpo .deps/HttpConnectionManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT HttpHeaders.o -MD -MP -MF .deps/HttpHeaders.Tpo -c -o HttpHeaders.o `test -f 'http/HttpHeaders.cpp' || echo './'`http/HttpHeaders.cpp mv -f .deps/HttpHeaders.Tpo .deps/HttpHeaders.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT UpdateManager.o -MD -MP -MF .deps/UpdateManager.Tpo -c -o UpdateManager.o UpdateManager.cpp mv -f .deps/UpdateManager.Tpo .deps/UpdateManager.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -ggdb -g -O2 -MT OCSPDenier.o -MD -MP -MF .deps/OCSPDenier.Tpo -c -o OCSPDenier.o `test -f 'http/OCSPDenier.cpp' || echo './'`http/OCSPDenier.cpp mv -f .deps/OCSPDenier.Tpo .deps/OCSPDenier.Po g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" -DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE=\"sslsniff\" -DVERSION=\"0.6\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. 
-ggdb -g -O2 -MT FirefoxAddonUpdater.o -MD -MP -MF .deps/FirefoxAddonUpdater.Tpo -c -o FirefoxAddonUpdater.o FirefoxAddonUpdater.cpp mv -f .deps/FirefoxAddonUpdater.Tpo .deps/FirefoxAddonUpdater.Po g++ -ggdb -g -O2 -lssl -lboost_filesystem -lpthread -lboost_thread -llog4cpp -o sslsniff SSLConnectionManager.o FirefoxUpdater.o Logger.o SessionCache.o SSLBridge.o HTTPSBridge.o sslsniff.o FingerprintManager.o AuthorityCertificateManager.o TargetedCertificateManager.o CertificateManager.o HttpBridge.o HttpConnectionManager.o HttpHeaders.o UpdateManager.o OCSPDenier.o FirefoxAddonUpdater.o SSLConnectionManager.o: In function `__static_initialization_and_destruction_0': /usr/local/include/boost/system/error_code.hpp:208: undefined reference to `boost::system::get_system_category()' /usr/local/include/boost/system/error_code.hpp:209: undefined reference to `boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:214: undefined reference to `boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:215: undefined reference to `boost::system::get_generic_category()' /usr/local/include/boost/system/error_code.hpp:216: undefined reference to `boost::system::get_system_category()' There's more, but I guess there's a post length limit.. Most of them appear related to boost::system so I added -lboost_system to the linker command and got farther: # g++ -ggdb -g -O2 -lssl -lboost_filesystem -lpthread -lboost_thread -llog4cpp -o sslsniff SSLConnectionManager.o FirefoxUpdater.o Logger.o SessionCache.o SSLBridge.o HTTPSBridge.o sslsniff.o FingerprintManager.o AuthorityCertificateManager.o TargetedCertificateManager.o CertificateManager.o HttpBridge.o HttpConnectionManager.o HttpHeaders.o UpdateManager.o OCSPDenier.o FirefoxAddonUpdater.o -lboost_system SSLConnectionManager.o: In function `thread<boost::_bi::bind_t<void, boost::_mfi::mf3<void, SSLConnectionManager, boost::shared_ptr<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_service<boost::asio::ip::tcp> > >, boost::asio::ip::basic_endpoint<boost::asio::ip::tcp>, bool>, boost::_bi::list4<boost::_bi::value<SSLConnectionManager*>, boost::_bi::value<boost::shared_ptr<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_service<boost::asio::ip::tcp> > > >, boost::_bi::value<boost::asio::ip::basic_endpoint<boost::asio::ip::tcp> >, boost::_bi::value<bool> > > >': /usr/local/include/boost/thread/detail/thread.hpp:191: undefined reference to `boost::thread::start_thread()' SSLConnectionManager.o: In function `~thread_data': /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to `boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to `boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to `boost::detail::thread_data_base::~thread_data_base()' /usr/local/include/boost/thread/detail/thread.hpp:40: undefined reference to `boost::detail::thread_data_base::~thread_data_base()' Now the errors are related to boost::detail and boost::filesystem::detail. I've tried using boost 1.35 and 1.42 (latest). On my own Ubuntu system, I installed the libraries from Ubuntu repositories and I was able to compile+link sslsniff just fine. Thanks in advance.

    Read the article

  • Weblogic 10.0: SAMLSignedObject.verify() failed to validate signature value

    - by joshea
    I've been having this problem for a while and it's driving me nuts. I'm trying to create a client (in C# .NET 2.0) that will use SAML 1.1 to sign on to a WebLogic 10.0 server (i.e., a Single Sign-On scenario, using the browser/POST profile). The client is on a WinXP machine and the WebLogic server is on a RHEL 5 box. I based my client largely on code in the example here: http://www.codeproject.com/KB/aspnet/DotNetSamlPost.aspx (the source has a section for SAML 1.1). I set up WebLogic based on the instructions for a SAML Destination Site from here: http://www.oracle.com/technology/pub/articles/dev2arch/2006/12/sso-with-saml4.html I created a certificate using makecert that came with VS 2005:

makecert -r -pe -n "CN=whatever" -b 01/01/2010 -e 01/01/2011 -sky exchange whatever.cer -sv whatever.pvk
pvk2pfx.exe -pvk whatever.pvk -spc whatever.cer -pfx whatever.pfx

Then I installed the .pfx into my personal certificate store, and installed the .cer into the WebLogic SAML Identity Asserter V2. I read on another site that reformatting the response to be readable (i.e., adding whitespace) after signing would cause this problem, so I tried various combinations of turning .Indent on/off in the XmlWriterSettings and turning .PreserveWhitespace on/off when loading the XML document, and none of it made any difference. I've printed the SignatureValue both before the message is encoded/sent and after it arrives/gets decoded, and they are the same. So, to be clear: the Response appears to be well-formed, encoded, sent, and decoded fine (I see the full Response in the WebLogic logs). WebLogic finds the certificate I want it to use, verifies that a key was supplied, gets the signed info, and then fails to validate the signature. Code:

public string createResponse(Dictionary<string, string> attributes)
{
    // Create Response
    ResponseType response = new ResponseType();
    response.ResponseID = "_" + Guid.NewGuid().ToString();
    response.MajorVersion = "1";
    response.MinorVersion = "1";
    response.IssueInstant = System.DateTime.UtcNow;
    response.Recipient = "http://theWLServer/samlacs/acs";

    StatusType status = new StatusType();
    status.StatusCode = new StatusCodeType();
    status.StatusCode.Value = new XmlQualifiedName("Success", "urn:oasis:names:tc:SAML:1.0:protocol");
    response.Status = status;

    // Create Assertion
    AssertionType assertionType = CreateSaml11Assertion(attributes);
    response.Assertion = new AssertionType[] { assertionType };

    // Serialize
    XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
    ns.Add("samlp", "urn:oasis:names:tc:SAML:1.0:protocol");
    ns.Add("saml", "urn:oasis:names:tc:SAML:1.0:assertion");
    XmlSerializer responseSerializer = new XmlSerializer(response.GetType());
    StringWriter stringWriter = new StringWriter();
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.OmitXmlDeclaration = true;
    settings.Indent = false; // I've tried both ways, for the fun of it
    settings.Encoding = Encoding.UTF8;
    XmlWriter responseWriter = XmlTextWriter.Create(stringWriter, settings);
    responseSerializer.Serialize(responseWriter, response, ns);
    responseWriter.Close();
    string samlString = stringWriter.ToString();
    stringWriter.Close();

    // Sign the document
    XmlDocument doc = new XmlDocument();
    doc.PreserveWhitespace = true; // also tried this both ways to no avail
    doc.LoadXml(samlString);

    X509Certificate2 cert = null;
    X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly);
    X509Certificate2Collection coll = store.Certificates.Find(X509FindType.FindBySubjectDistinguishedName, "distName", true);
    if (coll.Count < 1)
    {
        throw new ArgumentException("Unable to locate certificate");
    }
    cert = coll[0];
    store.Close();

    // This special SignDoc just overrides a function in SignedXml so
    // it knows to look for ResponseID rather than ID.
    XmlElement signature = SamlHelper.SignDoc(doc, cert, "ResponseID", response.ResponseID);
    doc.DocumentElement.InsertBefore(signature, doc.DocumentElement.ChildNodes[0]);

    // Base64 encode; URL encoding happens later when the POST data is built.
    byte[] base64EncodedBytes = Encoding.UTF8.GetBytes(doc.OuterXml);
    string returnValue = System.Convert.ToBase64String(base64EncodedBytes);
    return returnValue;
}

private AssertionType CreateSaml11Assertion(Dictionary<string, string> attributes)
{
    AssertionType assertion = new AssertionType();
    assertion.AssertionID = "_" + Guid.NewGuid().ToString();
    assertion.Issuer = "madeUpValue";
    assertion.MajorVersion = "1";
    assertion.MinorVersion = "1";
    assertion.IssueInstant = System.DateTime.UtcNow;

    // Not-before / not-after conditions
    ConditionsType conditions = new ConditionsType();
    conditions.NotBefore = DateTime.UtcNow;
    conditions.NotBeforeSpecified = true;
    conditions.NotOnOrAfter = DateTime.UtcNow.AddMinutes(10);
    conditions.NotOnOrAfterSpecified = true;

    // Name identifier to be used in the SAML subject
    NameIdentifierType nameIdentifier = new NameIdentifierType();
    nameIdentifier.NameQualifier = domain.Trim();
    nameIdentifier.Value = subject.Trim();

    SubjectConfirmationType subjectConfirmation = new SubjectConfirmationType();
    subjectConfirmation.ConfirmationMethod = new string[] { "urn:oasis:names:tc:SAML:1.0:cm:bearer" };

    // Create the SAML subject.
    SubjectType samlSubject = new SubjectType();
    AttributeStatementType attrStatement = new AttributeStatementType();
    AuthenticationStatementType authStatement = new AuthenticationStatementType();
    authStatement.AuthenticationMethod = "urn:oasis:names:tc:SAML:1.0:am:password";
    authStatement.AuthenticationInstant = System.DateTime.UtcNow;
    samlSubject.Items = new object[] { nameIdentifier, subjectConfirmation };
    attrStatement.Subject = samlSubject;
    authStatement.Subject = samlSubject;

    IPHostEntry ipEntry = Dns.GetHostEntry(System.Environment.MachineName);
    SubjectLocalityType subjectLocality = new SubjectLocalityType();
    subjectLocality.IPAddress = ipEntry.AddressList[0].ToString();
    authStatement.SubjectLocality = subjectLocality;

    // Create SAML attributes.
    attrStatement.Attribute = new AttributeType[attributes.Count];
    int i = 0;
    foreach (KeyValuePair<string, string> attribute in attributes)
    {
        AttributeType attr = new AttributeType();
        attr.AttributeName = attribute.Key;
        attr.AttributeNamespace = domain;
        attr.AttributeValue = new object[] { attribute.Value };
        attrStatement.Attribute[i] = attr;
        i++;
    }

    assertion.Conditions = conditions;
    assertion.Items = new StatementAbstractType[] { authStatement, attrStatement };
    return assertion;
}

private static XmlElement SignDoc(XmlDocument doc, X509Certificate2 cert2, string referenceId, string referenceValue)
{
    // Use our own implementation of SignedXml.
    SamlSignedXml sig = new SamlSignedXml(doc, referenceId);

    // Add the key to the SignedXml xmlDocument.
    sig.SigningKey = cert2.PrivateKey;

    // Create a reference to be signed.
    Reference reference = new Reference();
    reference.Uri = "#" + referenceValue;

    // Add an enveloped transformation to the reference.
    XmlDsigEnvelopedSignatureTransform env = new XmlDsigEnvelopedSignatureTransform();
    reference.AddTransform(env);

    // Add the reference to the SignedXml object.
    sig.AddReference(reference);

    // Add an X509Data KeyInfo (optional; helps the recipient find the key to validate).
    KeyInfo keyInfo = new KeyInfo();
    keyInfo.AddClause(new KeyInfoX509Data(cert2));
    sig.KeyInfo = keyInfo;

    // Compute the signature.
    sig.ComputeSignature();

    // Get the XML representation of the signature and save
    // it to an XmlElement object.
    XmlElement xmlDigitalSignature = sig.GetXml();
    return xmlDigitalSignature;
}

To open the page in my client app:

string postData = String.Format("SAMLResponse={0}&APID=ap_00001&TARGET={1}",
    System.Web.HttpUtility.UrlEncode(builder.buildResponse("http://theWLServer/samlacs/acs", attributes)),
    "http://desiredURL");
webBrowser.Navigate("http://theWLServer/samlacs/acs", "_self", Encoding.UTF8.GetBytes(postData),
    "Content-Type: application/x-www-form-urlencoded");
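One way to narrow the search (a debugging sketch of my own, not code from the post): decode the exact Base64 string being POSTed and verify the signature locally before sending it. It assumes the poster's SamlSignedXml subclass is available, since a plain SignedXml cannot resolve the non-standard ResponseID reference. If this check fails, the document is being altered between signing and encoding; if it passes, the mismatch is introduced in transit or by WebLogic's handling of the message.

using System;
using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography.Xml;
using System.Text;
using System.Xml;

static class SamlSelfCheck
{
    // Round-trips the outgoing payload and checks the signature on the
    // byte-for-byte XML that WebLogic will actually receive.
    public static bool SignatureSurvivesEncoding(string base64Response, X509Certificate2 cert)
    {
        string xml = Encoding.UTF8.GetString(Convert.FromBase64String(base64Response));
        XmlDocument doc = new XmlDocument();
        doc.PreserveWhitespace = true; // keep whitespace exactly as it was signed
        doc.LoadXml(xml);

        XmlNodeList nodes = doc.GetElementsByTagName("Signature", "http://www.w3.org/2000/09/xmldsig#");
        if (nodes.Count == 0)
        {
            throw new InvalidOperationException("No Signature element found");
        }

        // SamlSignedXml is the poster's subclass that resolves ResponseID references.
        SamlSignedXml verifier = new SamlSignedXml(doc, "ResponseID");
        verifier.LoadXml((XmlElement)nodes[0]);
        return verifier.CheckSignature(cert, true); // true = verify signature only, skip chain checks
    }
}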

    Read the article

  • C#/.NET Little Wonders: Using ‘default’ to Get Default Values

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Today’s little wonder is another of those small items that can help a lot in certain situations, especially when writing generics.  In particular, it is useful in determining what the default value of a given type would be.

The Problem: what’s the default value for a generic type?

There comes a time when you’re writing generic code where you may want to set an item of a given generic type.  Seems simple enough, right?  Well, let’s see!

Let’s say we want to query a Dictionary<TKey, TValue> for a given key and get back the value, but if the key doesn’t exist, we’d like a default value instead of throwing an exception.  So, for example, we might have the following dictionary defined:

var lookup = new Dictionary<int, string>
{
    { 1, "Apple" },
    { 2, "Orange" },
    { 3, "Banana" },
    { 4, "Pear" },
    { 9, "Peach" }
};

And using those definitions, perhaps we want to do something like this:

// assume a default
string value = "Unknown";

// if the item exists in dictionary, get its value
if (lookup.ContainsKey(5))
{
    value = lookup[5];
}

But that’s inefficient, because then we’re double-hashing (once for ContainsKey() and once for the indexer).  Well, to avoid the double-hashing, we could use TryGetValue() instead:

string value;

// if key exists, value will be put in value, if not default it
if (!lookup.TryGetValue(5, out value))
{
    value = "Unknown";
}

But the “flow” of using TryGetValue() can get clunky at times when you just want to assign either the value or a default to a variable.  Essentially it’s 3-ish lines (depending on formatting) for 1 assignment.  So perhaps instead we’d like to write an extension method to support a cleaner interface that will return a default if the item isn’t found:

public static class DictionaryExtensions
{
    public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
        TKey key, TValue defaultIfNotFound)
    {
        TValue value;

        // value will be the result or the default for TValue
        if (!dict.TryGetValue(key, out value))
        {
            value = defaultIfNotFound;
        }

        return value;
    }
}

So this creates an extension method on Dictionary<TKey, TValue> that will attempt to get a value using the given key, and will return defaultIfNotFound as a stand-in if the key does not exist.

This code compiles fine, but what if we would like to go one step further and allow callers either to specify a default if not found, or to accept the default for the type?  Obviously, we could overload the method to take the default or not, but that would be duplicated code and a bit heavy for just specifying a default.  It seems reasonable that we could set the not-found value to be either the default for the type, or the specified value.  So what if we defaulted the type to null?

public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
    TKey key, TValue defaultIfNotFound = null) // ...

No, this won’t work, because only reference types (and Nullable<T> wrapped types, due to syntactical sugar) can be assigned null.  So what about calling the parameter-less constructor?

public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
    TKey key, TValue defaultIfNotFound = new TValue()) // ...

No, this won’t work either, for several reasons.  First, we’d expect a reference type to return null, not an “empty” instance.  Secondly, not all reference types have a parameter-less constructor (string for example does not).  And finally, a constructor call cannot be evaluated at compile time, while default values can.

The Solution: default(T) – returns the default value for type T

Many of us know the default keyword for its use in switch statements as the default case.  But it has another use as well: it can return us the default value for a given type.  And since it generates the same defaults that default field initialization uses, it can be determined at compile-time as well.  For example:

var x = default(int);      // x is 0
var y = default(bool);     // y is false
var z = default(string);   // z is null
var t = default(TimeSpan); // t is a TimeSpan with Ticks == 0
var n = default(int?);     // n is a Nullable<int> with HasValue == false

Notice that for numeric types the default is 0, and for reference types the default is null.  In addition, for struct types, the value is a default-constructed struct – which simply means a struct where every field has its default value (hence 0 Ticks for TimeSpan, etc.).  So using this, we could modify our code to this:

public static class DictionaryExtensions
{
    public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
        TKey key, TValue defaultIfNotFound = default(TValue))
    {
        TValue value;

        // value will be the result or the default for TValue
        if (!dict.TryGetValue(key, out value))
        {
            value = defaultIfNotFound;
        }

        return value;
    }
}

Now, if defaultIfNotFound is unspecified, it will use default(TValue), which will be the default value for whatever value type the dictionary holds.  So let’s consider how we could use this:

lookup.GetValueOrDefault(1);            // returns "Apple"
lookup.GetValueOrDefault(5);            // returns null
lookup.GetValueOrDefault(5, "Unknown"); // returns "Unknown"

Again, do not confuse a parameter-less constructor with the default value for a type.  Remember that the default value for any type is the compile-time default for any instance of that type (0 for numeric types, false for bool, null for reference types, and for a struct, a struct with all fields defaulted).  Consider the difference:

// both zero
int i1 = default(int);
int i2 = new int();

// both "zeroed" structs
var dt1 = default(DateTime);
var dt2 = new DateTime();

// sb1 is null, sb2 is an "empty" string builder
var sb1 = default(StringBuilder);
var sb2 = new StringBuilder();

So in the above code, notice that the value types all resolve the same whether using default or parameter-less construction.  This is because a value type is never null (even Nullable<T> wrapped types are never “null” in a reference sense); they will just by default contain fields with all default values.  However, for reference types, the default is null and not a constructed instance.  Also it should be noted that not all classes have parameter-less constructors (string, for instance, doesn’t have one – and doesn’t need one).

Summary

Whenever you need to get the default value for a type, especially a generic type, consider using the default keyword.  
This handy word will give you the default value for the given type at compile-time, which can then be used for initialization, optional parameters, etc. Technorati Tags: C#,CSharp,.NET,Little Wonders,default
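As one extra illustration of my own (not from the article, and the names are hypothetical): default(T) also serves as the natural "not found" result when writing generic search helpers, for exactly the reasons described above.

using System;
using System.Collections.Generic;

public static class SearchHelpers
{
    // Returns the first matching item, or default(T) when nothing matches:
    // null for reference types, a zeroed value for value types.
    public static T FirstMatchOrDefault<T>(IEnumerable<T> items, Predicate<T> match)
    {
        foreach (T item in items)
        {
            if (match(item))
            {
                return item;
            }
        }
        return default(T);
    }
}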

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is correct, but implementing it is not as easy as it seems. Similarly, many people treat the lookup component as something that merely fetches additional information, pay little attention to it, and end up with very bad performance as a result. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion of how to configure the lookup component’s cache mode.

In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion.  The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values.  Properly configured, it is reliable and reasonably fast.

Among the many settings available on the lookup component, one of the most critical is the cache mode.  This selection will determine whether and how the distinct lookup values are cached during package execution.  It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results.

Full Cache

The full cache mode setting is the default cache mode selection in the SSIS lookup transformation.  Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location.  As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

The most commonly used cache mode is the full cache setting, and for good reason.  The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data.  With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well.  Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

There are a few potential gotchas to be aware of when using full cache mode.  First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data.  If the table you use for the lookup is very large (either deep or wide, or perhaps both), there’s going to be a performance cost associated with retrieving and caching all of that data.  Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation.  This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may or may not be case-sensitive, depending on collation).  
There’s a relatively easy workaround in which you can use the UPPER() or LOWER() function on both the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation.  Neither of these gotchas is a reason to avoid full cache mode outright, but both should be weighed when deciding whether it fits a given situation.

Full cache mode is ideally useful when one or all of the following conditions exist:
The size of the reference data set is small to moderately sized.
The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable.
Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data.

Partial Cache

When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow.  Initially, each distinct value will be retrieved individually from the specified source, and then cached.  To be clear, this is a row-by-row lookup for each distinct key value(s).

This is a less frequently used cache setting because it addresses a narrower set of scenarios.  Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set.  If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)).  Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

Using partial cache mode is ideally suited for the conditions below:
The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small.
The size of the lookup data is too large to effectively store in cache.
The lookup source is well indexed to allow for fast retrieval of row-by-row values.

No Cache

As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS.  As a result, every single row in the pipeline data set will require a query against the lookup source.  Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused.  In the real world, I don’t see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful.

As such, it’s critical to know your data before choosing this option.  Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline.

I would recommend considering the no cache mode only when all of the below conditions are true:
The reference data set is too large to reasonably be loaded into SSIS memory.
The pipeline data set is small and is not expected to grow.
There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values).

Conclusion

The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow.  Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.  
Know how this selection impacts your ETL loads, and you’ll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SSIS
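As a concrete illustration of the UPPER()/LOWER() workaround mentioned above (the table and column names here are hypothetical, not from the article): normalize the case of the join key on the reference side in the lookup component’s query, and apply the same function to the pipeline column, for example in a Derived Column transformation, so the cached .NET comparison sees identical casing on both sides.

-- Reference-side query for the lookup component (hypothetical names):
-- cache the key pre-uppercased so the in-memory comparison is case-stable.
SELECT UPPER(CustomerCode) AS CustomerCode,
       CustomerKey
FROM   dbo.DimCustomer;

-- The pipeline column must be normalized the same way, e.g. with a
-- Derived Column expression in SSIS: UPPER(CustomerCode)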

    Read the article

  • XNA Xbox 360 Content Manager Thread freezing Draw Thread

    - by Alikar
    I currently have a game that takes in large images, easily bigger than 1 MB, to serve as backgrounds. I know exactly when this transition is supposed to take place, so I made a loader class to handle loading these large images in the background, but when I load the images it still freezes the main thread where the drawing takes place. Since this code runs on the 360 I move the thread to the 4th hardware thread, but that doesn't seem to help. Below is the class I am using. Any thoughts as to why my new content manager, which should be in its own thread, is interrupting the draw in my main thread would be appreciated.

namespace FileSystem
{
    /// <summary>
    /// This is used to track how many objects reference this texture.
    /// Every time someone references a texture we increase uiNumberOfReferences.
    /// When a class calls remove on a specific texture we check to see if anything
    /// else is referencing it; if so, we don't remove it. If there isn't
    /// anything referencing the texture it's safe to dispose of.
    /// </summary>
    class TextureContainer
    {
        public uint uiNumberOfReferences = 0;
        public Texture2D texture;
    }

    /// <summary>
    /// This class loads all the files from the Content.
    /// </summary>
    static class FileManager
    {
        static Microsoft.Xna.Framework.Content.ContentManager Content;
        static EventWaitHandle wh = new AutoResetEvent(false);
        static Dictionary<string, TextureContainer> Texture2DResourceDictionary;
        static List<Texture2D> TexturesToDispose;
        static List<String> TexturesToLoad;
        static int iProcessor = 4;
        private static object threadMutex = new object();
        private static object Texture2DMutex = new object();
        private static object loadingMutex = new object();
        private static bool bLoadingTextures = false;

        /// <summary>
        /// Returns whether we are loading textures or not.
        /// </summary>
        public static bool LoadingTexture
        {
            get
            {
                lock (loadingMutex)
                {
                    return bLoadingTextures;
                }
            }
        }

        /// <summary>
        /// Since this is a static class, this is the constructor for the file loader. This is the version
        /// for the Xbox 360.
        /// </summary>
        /// <param name="serviceProvider"></param>
        public static void Initalize(IServiceProvider serviceProvider, string rootDirectory, int _iProcessor)
        {
            Content = new Microsoft.Xna.Framework.Content.ContentManager(serviceProvider, rootDirectory);
            Texture2DResourceDictionary = new Dictionary<string, TextureContainer>();
            TexturesToDispose = new List<Texture2D>();
            iProcessor = _iProcessor;
            CreateThread();
        }

        /// <summary>
        /// Since this is a static class, this is the constructor for the file loader.
        /// </summary>
        /// <param name="serviceProvider"></param>
        public static void Initalize(IServiceProvider serviceProvider, string rootDirectory)
        {
            Content = new Microsoft.Xna.Framework.Content.ContentManager(serviceProvider, rootDirectory);
            Texture2DResourceDictionary = new Dictionary<string, TextureContainer>();
            TexturesToDispose = new List<Texture2D>();
            CreateThread();
        }

        /// <summary>
        /// Creates the thread, in case we wanted to set up some parameters
        /// outside of the constructor.
        /// </summary>
        static public void CreateThread()
        {
            Thread t = new Thread(new ThreadStart(StartThread));
            t.Start();
        }

        // This is the function that we thread.
        static public void StartThread()
        {
            //BBSThreadClass BBSTC = (BBSThreadClass)_oData;
            FileManager.Execute();
        }

        /// <summary>
        /// This shouldn't be called by the outside world.
        /// It allows the FileManager to loop.
        /// </summary>
        static private void Execute()
        {
            // Make sure our thread is on the correct processor on the Xbox 360.
#if WINDOWS
#else
            Thread.CurrentThread.SetProcessorAffinity(new int[] { iProcessor });
            Thread.CurrentThread.IsBackground = true;
#endif
            // This loop will load textures into RAM for us, away from the main thread.
            while (true)
            {
                wh.WaitOne();
                // Locking down our data while we process it.
                lock (threadMutex)
                {
                    lock (loadingMutex)
                    {
                        bLoadingTextures = true;
                    }
                    bool bContainsKey = false;
                    for (int con = 0; con < TexturesToLoad.Count; con++)
                    {
                        // If we have already loaded the texture into memory, reference
                        // the one in the dictionary.
                        lock (Texture2DMutex)
                        {
                            bContainsKey = Texture2DResourceDictionary.ContainsKey(TexturesToLoad[con]);
                        }
                        if (bContainsKey)
                        {
                            // Do nothing
                        }
                        // Otherwise load it into the dictionary and then reference the
                        // copy in the dictionary.
                        else
                        {
                            TextureContainer TC = new TextureContainer();
                            TC.uiNumberOfReferences = 1; // We start out with 1 reference.
                            // Loading the texture into memory.
                            try
                            {
                                TC.texture = Content.Load<Texture2D>(TexturesToLoad[con]);
                                // This is passed into the dictionary, thus there is only one copy of
                                // the texture in memory.
                                // There is an issue with SpriteBatch and disposing textures.
                                // This will have to wait until it's figured out.
                                lock (Texture2DMutex)
                                {
                                    bContainsKey = Texture2DResourceDictionary.ContainsKey(TexturesToLoad[con]);
                                    Texture2DResourceDictionary.Add(TexturesToLoad[con], TC);
                                }
                                // We don't have to find the reference to the container since we
                                // already have it.
                            }
                            // Occasionally our texture will already be loaded by another thread while
                            // this thread is operating. This mainly happens on the first level.
                            catch (Exception e)
                            {
                                // If this happens we don't worry about it, since this thread only loads
                                // texture data and if it's already there we don't need to load it.
                            }
                        }
                        Thread.Sleep(100);
                    }
                }
                lock (loadingMutex)
                {
                    bLoadingTextures = false;
                }
            }
        }

        static public void LoadTextureList(List<string> _textureList)
        {
            // Ensuring that we can't create threading problems.
            lock (threadMutex)
            {
                TexturesToLoad = _textureList;
            }
            wh.Set();
        }

        /// <summary>
        /// This loads a 2D texture which represents a 2D grid of texels.
        /// </summary>
        /// <param name="_textureName">The name of the picture you wish to load.</param>
        /// <returns>Holds the image data.</returns>
        public static Texture2D LoadTexture2D(string _textureName)
        {
            TextureContainer temp;
            lock (Texture2DMutex)
            {
                bool bContainsKey = false;
                // If we have already loaded the texture into memory, reference
                // the one in the dictionary.
                lock (Texture2DMutex)
                {
                    bContainsKey = Texture2DResourceDictionary.ContainsKey(_textureName);
                    if (bContainsKey)
                    {
                        temp = Texture2DResourceDictionary[_textureName];
                        temp.uiNumberOfReferences++; // Incrementing the number of references
                    }
                    // Otherwise load it into the dictionary and then reference the
                    // copy in the dictionary.
                    else
                    {
                        TextureContainer TC = new TextureContainer();
                        TC.uiNumberOfReferences = 1; // We start out with 1 reference.
                        // Loading the texture into memory.
                        try
                        {
                            TC.texture = Content.Load<Texture2D>(_textureName);
                            // This is passed into the dictionary, thus there is only one copy of
                            // the texture in memory.
                        }
                        // Occasionally our texture will already be loaded by another thread while
                        // this thread is operating. This mainly happens on the first level.
                        catch (Exception e)
                        {
                            temp = Texture2DResourceDictionary[_textureName];
                            temp.uiNumberOfReferences++; // Incrementing the number of references
                        }
                        // There is an issue with SpriteBatch and disposing textures.
                        // This will have to wait until it's figured out.
                        Texture2DResourceDictionary.Add(_textureName, TC);
                        // We don't have to find the reference to the container since we
                        // already have it.
                        temp = TC;
                    }
                }
            }
            // Return a reference to the texture.
            return temp.texture;
        }

        /// <summary>
        /// Go through our dictionary and remove any references to the
        /// texture passed in.
        /// </summary>
        /// <param name="texture">Texture to remove from the texture dictionary.</param>
        public static void RemoveTexture2D(Texture2D texture)
        {
            foreach (KeyValuePair<string, TextureContainer> pair in Texture2DResourceDictionary)
            {
                // Do our references match?
                if (pair.Value.texture == texture)
                {
                    // Only one object or fewer holds a reference to the
                    // texture. Logically it should be safe to remove.
                    if (pair.Value.uiNumberOfReferences <= 1)
                    {
                        // Grabbing a reference to the texture.
                        TexturesToDispose.Add(pair.Value.texture);
                        // We are about to release the memory of the texture,
                        // thus we make sure no one else can call this member
                        // in the dictionary.
                        Texture2DResourceDictionary.Remove(pair.Key);
                        // Once we have removed the texture we don't want to cause an exception,
                        // so we stop looking in the list since it has changed.
                        break;
                    }
                    // More than one object has a reference to this texture,
                    // so we will not remove it from memory and instead
                    // simply decrement the number of references by 1.
                    else
                    {
                        pair.Value.uiNumberOfReferences--;
                    }
                }
            }
        }

        /*public static void DisposeTextures()
        {
            int Count = TexturesToDispose.Count;
            // If there are any textures to dispose of...
            if (Count > 0)
            {
                for (int con = 0; con < TexturesToDispose.Count; con++)
                {
                    // =!THIS REMOVES THE TEXTURE FROM MEMORY!=
                    // This is not like a normal dispose. This will actually
                    // remove the object from memory. Texture2D inherits
                    // from GraphicsResource, which removes itself from
                    // memory on dispose. Very nice for game efficiency,
                    // but "dangerous" in managed land.
                    Texture2D Temp = TexturesToDispose[con];
                    Temp.Dispose();
                }
                // Remove textures we've already disposed of.
                TexturesToDispose.Clear();
            }
        }*/

        /// <summary>
        /// This loads a 2D texture which represents a font.
        /// </summary>
        /// <param name="_fontName">The name of the font you wish to load.</param>
        /// <returns>Holds the font data.</returns>
        public static SpriteFont LoadFont(string _fontName)
        {
            SpriteFont temp = Content.Load<SpriteFont>(_fontName);
            return temp;
        }

        /// <summary>
        /// This loads an XML document.
        /// </summary>
        /// <param name="_fileName">The name of the XML document you wish to load.</param>
        /// <returns>Holds the XML data.</returns>
        public static XmlDocument LoadXML(string _fileName)
        {
            XmlDocument temp = Content.Load<XmlDocument>(_fileName);
            return temp;
        }

        /// <summary>
        /// This loads a sound file.
        /// </summary>
        /// <param name="_fileName"></param>
        /// <returns></returns>
        public static SoundEffect LoadSound(string _fileName)
        {
            SoundEffect temp = Content.Load<SoundEffect>(_fileName);
            return temp;
        }
    }
}
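For what it's worth, one pattern that tends to help in cases like this (a sketch of mine, not the poster's code): hold locks only around shared-state mutation, never across Content.Load or Thread.Sleep, so the draw thread is never blocked on a mutex while a texture deserializes. Note that XNA's ContentManager still synchronizes with the GraphicsDevice internally, so some contention with Draw can remain even with perfect locking; the bLoadingTextures bookkeeping is omitted here for brevity.

static private void Execute()
{
    while (true)
    {
        wh.WaitOne();

        // Copy the work list under the lock, then release it immediately.
        List<string> work;
        lock (threadMutex)
        {
            work = new List<string>(TexturesToLoad);
        }

        foreach (string name in work)
        {
            bool alreadyLoaded;
            lock (Texture2DMutex)
            {
                alreadyLoaded = Texture2DResourceDictionary.ContainsKey(name);
            }
            if (alreadyLoaded)
            {
                continue;
            }

            // The expensive load happens with no lock held.
            Texture2D texture = Content.Load<Texture2D>(name);

            // Re-check and insert under the lock; another thread may have
            // loaded the same texture in the meantime.
            lock (Texture2DMutex)
            {
                if (!Texture2DResourceDictionary.ContainsKey(name))
                {
                    TextureContainer TC = new TextureContainer();
                    TC.uiNumberOfReferences = 1;
                    TC.texture = texture;
                    Texture2DResourceDictionary.Add(name, TC);
                }
            }
        }
    }
}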

    Read the article

  • Can't get bonding and bridging to work for KVM

    - by user9546
    Hi everyone. I can't for the life of me get bonding and bridging to work for the KVM setup I'm building. I'm using a fresh install (not an upgrade) of Ubuntu Server 10.10. I have 4 NICs on the same subnet (two intended for each of my two VMs). I'm trying to achieve the setup that Uthark describes here, but following his guidelines didn't work for me: my eth0 and eth1 did not come up, and "brctl show" showed that br0 didn't have any interfaces (the bond). I assumed it didn't work because he's using 10.04, and this article says there's been a recent change in bonding: [I can't post more than one hyperlink per post because I'm a newbie.] I had to use this article to get my interfaces to work at all on the same subnet, which is why I have the post-up lines on some of my interfaces: [I can't post more than one hyperlink per post because I'm a newbie.]

I installed ifenslave and ethtool. I also created /etc/modprobe.d/aliases.conf with the following content:

alias bond0 bonding
options bonding mode=6 miimon=100 downdelay=200 updelay=200

And I included "bonding" in /etc/modules. So, after several approaches, here is my latest interfaces file:

auto lo
iface lo inet loopback

auto eth5
iface eth5 inet manual

auto br5
iface br5 inet static
    post-up /sbin/ip rule add from [network].79 lookup 10
    post-up /sbin/ip route add table 10 default via [network].1 src [network].79 dev br5
    address [network].79
    netmask 255.255.255.0
    network [network].0
    broadcast [network].255
    gateway [network].1
    bridge_ports eth5
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

auto eth2
iface eth2 inet manual

auto br2
iface br2 inet static
    post-up /sbin/ip rule add from [network].78 lookup 11
    post-up /sbin/ip route add table 11 default via [network].1 src [network].78 dev br2
    address [network].78
    netmask 255.255.255.0
    network [network].0
    broadcast [network].255
    gateway [network].1
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet static
    bond_miimon 100
    bond_mode balance-alb
    up /sbin/ifenslave bond0 eth0 eth1
    down /sbin/ifenslave -d bond0 eth0 eth1

auto br0
iface br0 inet static
    address [network].60
    netmask 255.255.255.0
    network [network].0
    broadcast [network].255
    gateway [network].1
    bridge_ports bond0

eth2, eth5, br2, and br5 all seem to be working fine. The only other thing I could find that looked suspicious is an error regarding bonding in /var/log/messages:

kernel: [ 3.828684] bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details.

even though there is a bond_miimon line in /etc/network/interfaces (if that's what it refers to). Also, the bond seems to go in and out of promiscuous mode several times on boot:

Jan 20 14:19:02 kvmhost kernel: [ 3.902378] device bond0 entered promiscuous mode
Jan 20 14:19:02 kvmhost kernel: [ 3.902390] device bond0 left promiscuous mode
Jan 20 14:19:02 kvmhost kernel: [ 3.902393] device bond0 entered promiscuous mode
Jan 20 14:19:02 kvmhost kernel: [ 3.902397] device bond0 left promiscuous mode
Jan 20 14:19:03 kvmhost kernel: [ 4.998990] device bond0 entered promiscuous mode
Jan 20 14:19:03 kvmhost kernel: [ 4.999005] device bond0 left promiscuous mode
Jan 20 14:19:03 kvmhost kernel: [ 4.999008] device bond0 entered promiscuous mode
Jan 20 14:19:03 kvmhost kernel: [ 4.999012] device bond0 left promiscuous mode

Any advice would be greatly appreciated. It seems that this must be possible, based on other posts, but I can't see what I'm doing wrong. Thanks.
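For comparison, here is a sketch, untested and not from the original post, of the declarative style Ubuntu's ifenslave supports: the slaves and bond options live in the bond0 stanza itself, so no manual ifenslave up/down commands are needed. One further hedged observation: balance-alb rewrites source MAC addresses in a way that is known to interact badly with bridges, so active-backup (or 802.3ad with a capable switch) is the more common choice under a KVM bridge.

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto br0
iface br0 inet static
    address [network].60
    netmask 255.255.255.0
    network [network].0
    broadcast [network].255
    gateway [network].1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0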

    Read the article

< Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >