Search Results

Search found 12605 results on 505 pages for 'settings'.


  • Ajax post not posting email address?

    - by jeitjet
    UPDATE: It will not work in Firefox, but will work on any other browser. I even tried loading Firefox in safe mode (disabling all plugins, etc.) and still no worky. :( I'm trying to do an AJAX post (on form submission) to a separate PHP file, which works fine without trying to send an email address through the post. I'm fairly new to AJAX and pretty familiar with PHP. Here's my form and ajax call:

    ```html
    <form class="form" method="POST" name="settingsNotificationsForm">
      <div class="clearfix">
        <label>Email <em>*</em><small>A valid email address</small></label>
        <input type="email" required="required" name="email" id="email" />
      </div>
      <div class="clearfix">
        <label>Email Notification<small>...when a new subscriber joins</small></label>
        <input type="checkbox" name="subscribe_notifications" id="subscribe_notifications">
        Receive an email notification with phone number when someone new subscribes to 'BIZDEMO'
      </div>
      <div class="clearfix">
        <label>Email Notification<small>...when a subscriber cancels</small></label>
        <input type="checkbox" name="unsubscribe_notifications" id="unsubscribe_notifications">
        Receive an email notification with phone number when someone new unsubscribes to 'BIZDEMO'
      </div>
      <div class="action clearfix top-margin">
        <button class="button button-gray" type="submit" id="notifications_submit"><span class="accept"></span>Save</button>
      </div>
    </form>
    ```

    and AJAX call:

    ```javascript
    <script type="text/javascript">
    jQuery(document).ready(function () {
        $("#notifications_submit").click(function() {
            var keyword_value = '<?php echo $keyword; ?>';
            var email_address = $("input#email").val();
            var subscribe_notifications_value = $("input#subscribe_notifications").attr('checked');
            var unsubscribe_notifications_value = $("input#unsubscribe_notifications").attr('checked');
            var data_values = {
                keyword : keyword_value,
                email : email_address,
                subscribe_notifications : subscribe_notifications_value,
                unsubscribe_notifications : unsubscribe_notifications_value
            };
            $.ajax({
                type: "POST",
                url: "../includes/ajax/update_settings.php",
                data: data_values,
                success: alert('Settings updated successfully!'),
            });
        });
    });
    </script>
    ```

    and receiving page:

    ```php
    <?php
    include_once ("../db/db_connect.php");
    $keyword = FILTER_INPUT(INPUT_POST, 'keyword', FILTER_SANITIZE_STRING);
    $email = FILTER_INPUT(INPUT_POST, 'email', FILTER_SANITIZE_EMAIL);
    $subscribe_notifications = FILTER_INPUT(INPUT_POST, 'subscribe_notifications', FILTER_SANITIZE_STRING);
    $unsubscribe_notifications = FILTER_INPUT(INPUT_POST, 'unsubscribe_notifications', FILTER_SANITIZE_STRING);
    $table = 'keyword_options';
    $data_values = array('email' => $email, 'sub_notify' => $subscribe_notifications, 'unsub_notify' => $unsubscribe_notifications);
    foreach ($data_values as $name => $value) {
        // See if keyword is already in database table
        $filter = array('keyword' => $keyword);
        $result = $db->find($table, $filter);
        if (count($result) > 0 && $new != true) {
            $where = array('keyword' => $keyword, 'keyword_meta' => $name);
            $data = array('keyword_value' => $value);
            $db->update($table, $where, $data);
        } else {
            $data = array('keyword' => $keyword, 'keyword_meta' => $name, 'keyword_value' => $value);
            $db->create($table, $data);
            $new = true; // If this is a new record, always go to else statement
        }
    }
    unset($value);
    ```

    Here are some weird things that happen:
    When I only enter text into the email field (i.e. "abc"), it works fine, posts correctly, etc.
    When I enter a bogus email address with the "." before the "@", it works fine.
    When I enter a valid email address (with the "." after the "@"), the post fails.
Ideas?
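
    Two things in the posted code are worth checking regardless of the Firefox behaviour: the click handler never prevents the button's normal form submission (so the page can navigate away while the XHR is still in flight), and `success: alert(...)` runs the alert immediately rather than passing a callback. A minimal sketch of the same call with those two points addressed; this is not the original author's code, and the checkbox handling is an assumption:

    ```javascript
    jQuery(document).ready(function () {
        $("form[name='settingsNotificationsForm']").submit(function (e) {
            // Stop the browser's own form POST so the AJAX request
            // is not cancelled by a page navigation.
            e.preventDefault();

            var keyword_value = '<?php echo $keyword; ?>'; // as in the original page
            var data_values = {
                keyword: keyword_value,
                email: $("input#email").val(),
                subscribe_notifications: $("input#subscribe_notifications").is(":checked") ? 1 : 0,
                unsubscribe_notifications: $("input#unsubscribe_notifications").is(":checked") ? 1 : 0
            };

            $.ajax({
                type: "POST",
                url: "../includes/ajax/update_settings.php",
                data: data_values,
                success: function () {
                    // Runs only after the request actually completes.
                    alert('Settings updated successfully!');
                }
            });
        });
    });
    ```

    If the behaviour still differs between browsers after that, it may be worth checking how the HTML5 validation on the `type="email"` field interacts with form submission in each browser.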


  • Sencha Touch 2 - Can't get list to display // or load a store? [UPDATED X2]

    - by Jordan
    I have been trying to get a list to display for quite a while now. I have tried all sorts of tips from various people without success. Now I am running into a new problem. I have taken the exact code from an example and I can't seem to get it to work either. First of all, here is the code. Station.js Ext.define('Syl.model.Station', { extend: 'Ext.data.Model', config: { fields: [ { name: 'id', type: 'string' }, { name: 'stop', type: 'string' } ] } }); Stations.js Ext.define('Syl.store.Stations', { extend : 'Ext.data.Store', requires: ['Syl.model.Station'], id: 'stations', xtype: 'stations', config : { model : 'Syl.model.Station', //storeId: 'stationsStore', autoLoad : true, //sorters: 'stop', /* proxy: { type: 'ajax', url: 'stations.json' }*/ data: [ { "id": "129", "stop": "NY Station" }, { "id": "13", "stop": "Newark Station" } ] } }); MyService.js Ext.define('Syl.view.MyService', { extend: 'Ext.Panel', xtype: 'stationsformPage', requires: [ 'Syl.store.Stations', 'Ext.form.FieldSet', 'Ext.field.Password', 'Ext.SegmentedButton', 'Ext.List' ], config: { fullscreen: true, layout: 'vbox', //iconCls: 'settings', //title: 'My Service', items: [ { docked: 'top', xtype: 'toolbar', title: 'My Service' }, { [OLDER CODE BEGIN] xtype: 'list', title: 'Stations', //store: 'Stations', store: stationStore, //UPDATED styleHtmlContent: true, itemTpl: '<div><strong>{stop}</strong> {id}</div>' [OLDER CODE END] [UPDATE X2 CODE BEGIN] xtype: 'container', layout: 'fit', flex: 10, items: [{ xtype: 'list', title: 'Stations', width: '100%', height: '100%', store: stationStore, styleHtmlContent: true, itemTpl: '<div><strong>{stop}</strong> {id}</div>' }] [UPDATE X2 CODE END] }, ] } }); app.js (edited down to the basics) var stationStore; //UPDATED Ext.application({ name: 'Syl', views: ['MyService'], store: ['Stations'], model: ['Station'], launch: function() { stationStore = Ext.create('Syl.store.Stations');//UPDATED var mainPanel = Ext.Viewport.add(Ext.create('Syl.view.MyService')); }, }); Okay, now when I run this in the browser, I get this error: "[WARN][Ext.dataview.List#applyStore] The specified Store cannot be found". The app runs but there is no list. I can't understand how this code could work for the people who gave the example and not me. Could it be a difference in the Sencha Touch version? I am using 2.0.1.1. To add to this, I have been having problems in general with lists not displaying. I had originally tried a stripped down list without even having a store. I tried to just set the data property in the list's config. I didn't get this error, but I also didn't get a list to display. That is why I thought I would try someone else's code. I figured if I could at least get a working list up and running, I could manipulate it into doing what I want. Any help would be greatly appreciated. Thanks. [UPDATED] Okay, so I did some more hunting and someone told me I needed to have an instance of my store to load into the list, not the store definition. So I updated the code as you can see and the error went away. The problem is that I still don't get a list. I have no errors at all, but I can't see a list. Am I not loading the data correctly? Or am I not putting the list in the view correctly? [UPDATED X2] Okay, so I learned that the list should be in a container and that I should give it a width and a height. I'm not totally sure on this being correct, but I do now have a list that I can drag up and down. The problem is there is still nothing in it. Anyone have a clue why?
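
    A hedged observation rather than a verified fix: in Sencha Touch 2 the Ext.application config keys are plural (`models`, `stores`, `views`), so `store: ['Stations']` and `model: ['Station']` would not register anything, which is consistent with the "specified Store cannot be found" warning. A sketch under that assumption:

    ```javascript
    Ext.application({
        name: 'Syl',
        models: ['Station'],   // plural config keys; 'model'/'store' are not recognized
        stores: ['Stations'],
        views: ['MyService'],

        launch: function () {
            // No global stationStore variable needed once the store is registered.
            Ext.Viewport.add(Ext.create('Syl.view.MyService'));
        }
    });
    ```

    With the store registered this way (and, to be safe, an explicit `storeId: 'stations'` in the store's config), the list can use `store: 'stations'` as a string, and giving the list `flex: 1` inside the vbox (or keeping the fit container from the X2 update) is what gives it a height to render into.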


  • NEED your opinion on .net Profile class VS session vars

    - by Ted
    To save trips to the SQL db in my older apps, I store dozens of data points about the current user in an array and then store the array in a session. For example, info that might be used repeatedly during the user's session might be stored like this:

    ```vb
    Dim a(7) as string
    a(0) = "FirstName"
    a(1) = "LastName"
    a(2) = "Address"
    a(3) = "Address2"
    a(4) = "City"
    a(5) = "State"
    a(6) = "Zip"
    session.add("s_a", a)
    ```

    Some apps have an array 100 in size. That is something I learned in my ASP classic days. Referencing the correct index can be laborious, and I find it difficult to go back and add another data point grouped with like data. For example, suppose I need to add Middle Initial to the array as a design alteration. Unless I redo the whole index mapping, I have to stick Middle Initial in the next open slot, which might be in the 50s.

    NOW, I am considering doing something easier to reference each time (eliminating the need to know the index of the value wanted). So I am looking to do this:

    ```vb
    session.add("Firstname", "FirstName")
    session.add("Lastname", "LastName")
    session.add("Address", "Address")
    ' etc.
    ```

    BUT, before I do this, I would like some guidance. I am afraid this might be less efficient, even though it is easier to use. I don't know if a new session object is created for each data point, or if there is only one session object and I am adding a name/value pair to that object. If I am adding a name/value pair to a single object, that seems like a good idea. Does anyone know? Or is there a more preferred way? The built-in Profile class?

    Re: Profile class. I have an internal debate about scope. It seems that the .NET Profile class is good for storing app-SPECIFIC user settings (i.e. style theme, object display properties, user role, etc.). The examples I give are information whose values are selected/edited by the user to customize the application experience. This information is not typically stored/edited elsewhere in the app db. But when you have data that 1) is already stored in the app db and 2) can be altered by other users (in this case, company reps may update a client's status, address, etc.), then the persistence of the Profile data may be an issue. In this case, the Profile would need to be reset at the beginning and dropped, like a session.abandon, at the end of each user's session to prevent reloading info that has since been edited by someone else. I believe this is possible, but I am not sure.

    Currently, I use the session array to store both scopes, app-specific and user-specific data. If my session plan is good, I think I will create a class to set/get values from the session as well (see the sketch below). I appreciate your thoughts. I would like to know how others have handled this type of situation. Thanks.
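
    On the efficiency question: there is only one session object per user; Session.Add (or the indexer) just adds a name/value pair to that single collection, so switching from one big array to individual keys does not create extra session objects. Here is a sketch, in C#, of the kind of typed wrapper class mentioned at the end of the post (property names are illustrative; the same shape works in VB.NET):

    ```csharp
    using System.Web;

    // Strongly-typed access to per-user session values, so callers never
    // deal with array indexes or magic strings.
    public static class CurrentUser
    {
        private static string Get(string key)
        {
            return HttpContext.Current.Session[key] as string;
        }

        private static void Set(string key, string value)
        {
            HttpContext.Current.Session[key] = value;
        }

        public static string FirstName
        {
            get { return Get("FirstName"); }
            set { Set("FirstName", value); }
        }

        public static string LastName
        {
            get { return Get("LastName"); }
            set { Set("LastName", value); }
        }

        // ...repeat for Address, Address2, City, State, Zip, MiddleInitial...
    }
    ```

    Callers then read and write `CurrentUser.FirstName` instead of remembering `a(0)`, and adding MiddleInitial later is just one more property rather than a re-mapping of indexes.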


  • Safely defining variables for public callback functions in javascript

    - by djreed
    I am working with the YouTube iFrame API to embed a number of videos on a page. Documentation here: https://developers.google.com/youtube/iframe_api_reference#Requirements In summary, you load the API asynchronously using the following snippet: var tag = document.createElement('script'); tag.src = "http://www.youtube.com/player_api"; var firstScriptTag = document.getElementsByTagName('script')[0]; firstScriptTag.parentNode.insertBefore(tag, firstScriptTag); Once loaded, the API fires the predefined callback function onYouTubePlayerAPIReady. For additional context: I am defining a library file for this in Google Closure. I am providing a namespace: goog.provide('yt.video'); I then use goog.exportSymbol so that the API can find the function. That all works fine. My challenge is that I would like to pass 2 variables to the callback function. Is there any way to do this without defining these 2 variables in the context of the window object? goog.provide('yt.video'); goog.require('goog.dom'); yt.video = function(videos, locales) { this.videos = videos; this.captionLocales = locales; this.init(); }; yt.video.prototype.init = function() { var tag = document.createElement('script'); tag.src = "http://www.youtube.com/player_api"; var firstScriptTag = document.getElementsByTagName('script')[0]; firstScriptTag.parentNode.insertBefore(tag, firstScriptTag); }; /* * Callback function fired when YT API is ready * This is exported using goog.exportSymbol in another file and * is being fired by the API properly. */ yt.video.prototype.onPlayerReady = function(videos, locales) { window.console.log('this :' + this); //logs window window.console.log('this.videos : ' + this.videos); //logs undefined /* * Video settings from Django variable */ for(i=0; i<this.videos.length; i++) { var playerEvents = {}; var embedVars = {}; var el = this.videos[i].el; var playerVid = this.videos[i].vid; var playerWidth = this.videos[i].width; var playerHeight = this.videos[i].height; var captionLocales = this.videos[i].locales; if(this.videos[i].playerVars) var embedVars = this.videos[i].playerVars; } if(this.videos[i].events) { var playerEvents = this.videos[i].events; } /* * Show captions by default */ if(goog.array.indexOf(captionLocales, 'es') >= 0) { embedVars.cc_load_policy = 1; }; new YT.Player(el, { height: playerHeight, width: playerWidth, videoId: playerVid, events: playerEvents, playerVars: embedVars }); }; }; To intialize this, I am currently using the following within a self-executing anonymous function: var videos = [ {"vid": "video_id", "el": "player-1", "width": 640, "height": 390, "locales": ["es", "fr"], "events": {"onStateChange": stateChanged}}, {"vid": "video_id", "el": "player-2", "locales": ["es", "fr"], "width": 640, "height": 390} ]; var locales = ['es']; var videoTemplate = new yt.video(videos, locales);
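
    A sketch of one way to avoid putting the videos/locales on the window object (this is not the original Closure code): the iframe API does need a global function with the expected name, but that function can simply close over the yt.video instance, so the data stays on the instance. Exporting a prototype method directly means it is invoked without an instance, which matches `this` logging as window in the post.

    ```javascript
    yt.video = function (videos, locales) {
        this.videos = videos;
        this.captionLocales = locales;

        var self = this;
        // The API still requires a global with this exact name, but the data
        // it needs is reached through the closure, not through window.
        window['onYouTubePlayerAPIReady'] = function () {
            self.onPlayerReady();
        };

        this.init();
    };

    yt.video.prototype.onPlayerReady = function () {
        // `this` is now the yt.video instance, so this.videos is defined here.
        for (var i = 0; i < this.videos.length; i++) {
            // ...build playerVars/events and create YT.Player instances as in
            // the original code...
        }
    };
    ```

    With this wiring the callback no longer needs goog.exportSymbol, since the constructor assigns it explicitly (and the bracket notation keeps the name safe from Closure renaming).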


  • mysql connect error issue

    - by Alex
    I've a PHP page which updates a MySQL db. I don't understand why the following PHP code says "Could not update marker. No database selected". Strange!! Can you please tell me why it's showing that error message? Thanks. PHP code:

    ```php
    <?php
    // database settings
    $db_username = 'root';
    $db_password = '';
    $db_name = 'parkool';
    $db_host = 'localhost';

    //mysqli
    $mysqli = new mysqli($db_host, $db_username, $db_password, $db_name);
    if (mysqli_connect_errno()) {
        header('HTTP/1.1 500 Error: Could not connect to db!');
        exit();
    }

    ################ Save & delete markers #################
    if($_POST) //run only if there's a post data
    {
        //make sure request is comming from Ajax
        $xhr = $_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest';
        if (!$xhr){
            header('HTTP/1.1 500 Error: Request must come from Ajax!');
            exit();
        }

        // get marker position and split it for database
        $mLatLang = explode(',', $_POST["latlang"]);
        $mLat = filter_var($mLatLang[0], FILTER_VALIDATE_FLOAT);
        $mLng = filter_var($mLatLang[1], FILTER_VALIDATE_FLOAT);
        $mName = filter_var($_POST["name"], FILTER_SANITIZE_STRING);
        $mAddress = filter_var($_POST["address"], FILTER_SANITIZE_STRING);
        $mId = filter_var($_POST["id"], FILTER_SANITIZE_STRING);

        /*$result = mysql_query("SELECT id FROM test.markers WHERE test.markers.lat=$mLat AND test.markers.lng=$mLng");
        if (!$result) {
            echo 'Could not run query: ' . mysql_error();
            exit;
        }
        $row = mysql_fetch_row($result);
        $id=$row[0];*/

        //$output = '<h1 class="marker-heading">'.$mId.'</h1><p>'.$mAddress.'</p>';
        //exit($output);

        //Update Marker
        if(isset($_POST["update"]) && $_POST["update"]==true) {
            $results = mysql_query("UPDATE parkings SET latitude = '$mLat', longitude = '$mLng' WHERE locId = '94' ");
            if (!$results) {
                //header('HTTP/1.1 500 Error: Could not Update Markers! $mId');
                echo "coudld not update marker." . mysql_error();
                exit();
            }
            exit("Done!");
        }

        $output = '<h1 class="marker-heading">'.$mName.'</h1><p>'.$mAddress.'</p>';
        exit($output);
    }

    ############### Continue generating Map XML #################
    //Create a new DOMDocument object
    $dom = new DOMDocument("1.0");
    $node = $dom->createElement("markers"); //Create new element node
    $parnode = $dom->appendChild($node); //make the node show up

    // Select all the rows in the markers table
    $results = $mysqli->query("SELECT * FROM parkings WHERE 1");
    if (!$results) {
        header('HTTP/1.1 500 Error: Could not get markers!');
        exit();
    }

    //set document header to text/xml
    header("Content-type: text/xml");

    // Iterate through the rows, adding XML nodes for each
    while($obj = $results->fetch_object()) {
        $node = $dom->createElement("marker");
        $newnode = $parnode->appendChild($node);
        $newnode->setAttribute("name", $obj->name);
        $newnode->setAttribute("locId", $obj->locId);
        $newnode->setAttribute("address", $obj->address);
        $newnode->setAttribute("latitude", $obj->latitude);
        $newnode->setAttribute("longitude", $obj->longitude);
        //$newnode->setAttribute("type", $obj->type);
    }
    echo $dom->saveXML();
    ```
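
    A likely cause worth checking, not stated in the original post: the page connects with mysqli, but the UPDATE goes through mysql_query() from the old mysql extension, which has no connection or selected database of its own, which would produce exactly "No database selected". A sketch of running the update through the existing $mysqli handle with a prepared statement:

    ```php
    <?php
    // Use the mysqli connection opened at the top of the script instead of
    // the legacy mysql_query() function, which knows nothing about it.
    if (isset($_POST["update"]) && $_POST["update"] == true) {
        $stmt = $mysqli->prepare(
            "UPDATE parkings SET latitude = ?, longitude = ? WHERE locId = ?"
        );
        if (!$stmt) {
            echo "Could not update marker: " . $mysqli->error;
            exit();
        }

        $locId = 94; // hard-coded in the original example
        $stmt->bind_param("ddi", $mLat, $mLng, $locId);

        if (!$stmt->execute()) {
            echo "Could not update marker: " . $stmt->error;
            exit();
        }

        $stmt->close();
        exit("Done!");
    }
    ```

    The commented-out SELECT that also uses mysql_query()/mysql_error() would need the same treatment.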


  • WiX 3 Tutorial: Understanding main WXS and WXI file

    - by Mladen Prajdic
    In the previous post we took a look at the WiX solution/project structure and project properties. We're still playing with our super SuperForm application, and today we'll take a look at the general parts of the main wxs file, SuperForm.wxs, and the wxi include file. For the wxs file we'll just go over a general description of what each part does in the code comments; the more detailed descriptions will come in future posts about the features themselves.

    WXI include file

    Include files are exactly what their name implies. To include a wxi file into a wxs file you have to put the wxi at the beginning of each .wxs file you wish to include it in. If you've ever worked with C++ you can think of include files as .h files. For example, if you include SuperFormVariables.wxi into SuperForm.wxs, the variables in the wxi won't be seen in FilesFragment.wxs or RegistryFragment.wxs; you'd have to include it manually in those two wxs files too. For a preprocessor variable $(var.VariableName) to be seen by every file in the project you have to include it in the WiX project properties -> Build -> "Define preprocessor variables" textbox. I've chosen not to go this route because in multi-developer teams not everyone has the same directory structure, and having a single variable would mean each developer would have to check out the wixproj file to edit the variable. This is pretty much unacceptable by my standards. This is why we added a System Environment variable named SuperFormFilesDir, as shown in the previous WiX tutorial post. Because FilesFragment.wxs is autogenerated on every project build we don't want to change it manually each time by adding the include wxi at the beginning of the file; if we had to, we couldn't recreate it in each pre-build event.

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <Include>
      <!-- Versioning. These have to be changed for upgrades.
           It's not enough to just include newer files. -->
      <?define MajorVersion="1" ?>
      <?define MinorVersion="0" ?>
      <?define BuildVersion="0" ?>
      <!-- Revision is NOT used by WiX in the upgrade procedure -->
      <?define Revision="0" ?>
      <!-- Full version number to display -->
      <?define VersionNumber="$(var.MajorVersion).$(var.MinorVersion).$(var.BuildVersion).$(var.Revision)" ?>
      <!-- Upgrade code HAS to be the same for all updates.
           Once you've chosen it don't change it. -->
      <?define UpgradeCode="YOUR-GUID-HERE" ?>
      <!-- Path to the resources directory. Resources don't really need to be included
           in the project structure but I like to include them for clarity. -->
      <?define ResourcesDir="$(var.ProjectDir)\Resources" ?>
      <!-- The name of your application exe file. This will be used to kill the process
           when updating and to create the desktop shortcut. -->
      <?define ExeProcessName="SuperForm.MainApp.exe" ?>
    </Include>
    ```

    For now there's no way to tell WiX in Visual Studio to make a wxi include file available to the whole project, so you have to include it in each file separately. Only variables set in "Define preprocessor variables" or System Environment variables are accessible to the whole project for now.

    The main WXS file: SuperForm.wxs

    We'll only take a look at the general structure of the main SuperForm.wxs, not its details; we'll cover those in future posts. The code comments should provide plenty of info about what each part does in general. Basically there are five major parts: the upgrade part, the conditions and actions part, the UI install sequence, the directory structure, and the features we want to include.
<?xml version="1.0" encoding="UTF-8"?><!-- Add xmlns:util namespace definition to be able to use stuff from WixUtilExtension dll--><Wix xmlns="http://schemas.microsoft.com/wix/2006/wi" xmlns:util="http://schemas.microsoft.com/wix/UtilExtension"> <!-- This is how we include wxi files --> <?include $(sys.CURRENTDIR)Includes\SuperFormVariables.wxi ?> <!-- Id="*" is to enable upgrading. * means that the product ID will be autogenerated on each build. Name is made of localized product name and version number. --> <Product Id="*" Name="!(loc.ProductName) $(var.VersionNumber)" Language="!(loc.LANG)" Version="$(var.VersionNumber)" Manufacturer="!(loc.ManufacturerName)" UpgradeCode="$(var.UpgradeCode)"> <!-- Define the minimum supported installer version (3.0) and that the install should be done for the whole machine not just the current user --> <Package InstallerVersion="300" Compressed="yes" InstallScope="perMachine"/> <Media Id="1" Cabinet="media1.cab" EmbedCab="yes" /> <!-- Upgrade settings. This will be explained in more detail in a future post --> <Upgrade Id="$(var.UpgradeCode)"> <UpgradeVersion OnlyDetect="yes" Minimum="$(var.VersionNumber)" IncludeMinimum="no" Property="NEWER_VERSION_FOUND" /> <UpgradeVersion Minimum="0.0.0.0" IncludeMinimum="yes" Maximum="$(var.VersionNumber)" IncludeMaximum="no" Property="OLDER_VERSION_FOUND" /> </Upgrade> <!-- Reference the global NETFRAMEWORK35 property to check if it exists --> <PropertyRef Id="NETFRAMEWORK35"/> <!-- Startup conditions that checks if .Net Framework 3.5 is installed or if we're running the OS higher than Windows XP SP2. If not the installation is aborted. By doing the (Installed OR ...) property means that this condition will only be evaluated if the app is being installed and not on uninstall or changing --> <Condition Message="!(loc.DotNetFrameworkNeeded)"> <![CDATA[Installed OR NETFRAMEWORK35]]> </Condition> <Condition Message="!(loc.AppNotSupported)"> <![CDATA[Installed OR ((VersionNT >= 501 AND ServicePackLevel >= 2) OR (VersionNT >= 502))]]> </Condition> <!-- This custom action in the InstallExecuteSequence is needed to stop silent install (passing /qb to msiexec) from going around it. 
--> <CustomAction Id="NewerVersionFound" Error="!(loc.SuperFormNewerVersionInstalled)" /> <InstallExecuteSequence> <!-- Check for newer versions with FindRelatedProducts and execute the custom action after it --> <Custom Action="NewerVersionFound" After="FindRelatedProducts"> <![CDATA[NEWER_VERSION_FOUND]]> </Custom> <!-- Remove the previous versions of the product --> <RemoveExistingProducts After="InstallInitialize"/> <!-- WixCloseApplications is a built in custom action that uses util:CloseApplication below --> <Custom Action="WixCloseApplications" Before="InstallInitialize" /> </InstallExecuteSequence> <!-- This will ask the user to close the SuperForm app if it's running while upgrading --> <util:CloseApplication Id="CloseSuperForm" CloseMessage="no" Description="!(loc.MustCloseSuperForm)" ElevatedCloseMessage="no" RebootPrompt="no" Target="$(var.ExeProcessName)" /> <!-- Use the built in WixUI_InstallDir GUI --> <UIRef Id="WixUI_InstallDir" /> <UI> <!-- These dialog references are needed for CloseApplication above to work correctly --> <DialogRef Id="FilesInUse" /> <DialogRef Id="MsiRMFilesInUse" /> <!-- Here we'll add the GUI logic for installation and updating in a future post--> </UI> <!-- Set the icon to show next to the program name in Add/Remove programs --> <Icon Id="SuperFormIcon.ico" SourceFile="$(var.ResourcesDir)\Exclam.ico" /> <Property Id="ARPPRODUCTICON" Value="SuperFormIcon.ico" /> <!-- Installer UI custom pictures. File names are made up. Add path to your pics. –> <!-- <WixVariable Id="WixUIDialogBmp" Value="MyAppLogo.jpg" /> <WixVariable Id="WixUIBannerBmp" Value="installBanner.jpg" /> --> <!-- the default directory structure --> <Directory Id="TARGETDIR" Name="SourceDir"> <Directory Id="ProgramFilesFolder"> <Directory Id="INSTALLLOCATION" Name="!(loc.ProductName)" /> </Directory> </Directory> <!-- Set the default install location to the value of INSTALLLOCATION (usually c:\Program Files\YourProductName) --> <Property Id="WIXUI_INSTALLDIR" Value="INSTALLLOCATION" /> <!-- Set the components defined in our fragment files that will be used for our feature --> <Feature Id="SuperFormFeature" Title="!(loc.ProductName)" Level="1"> <ComponentGroupRef Id="SuperFormFiles" /> <ComponentRef Id="cmpVersionInRegistry" /> <ComponentRef Id="cmpIsThisUpdateInRegistry" /> </Feature> </Product></Wix> For more info on what certain attributes mean you should look into the WiX Documentation.   WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe


  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang’s post which showed me the light. The problem here is that even though the test passes locally it will not during an Automated Build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Test”. This will detect wither you are running in IIS or ASP.NET Development Server or None, and create the relevant tags. Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will AutoDetect most of the Attributes that are required. /// <summary> ///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile ///</summary> // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example, // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server, // whether you are testing a page, web service, or a WCF service. [TestMethod()] [HostType("ASP.NET")] [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")] [UrlToTest("http://localhost:3100/")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] public void SaveDefaultProjectFileTest() { IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value string strComputerName = string.Empty; // TODO: Initialize to an appropriate value bool expected = false; // TODO: Initialize to an appropriate value bool actual; actual = target.SaveDefaultProjectFile(strComputerName); Assert.AreEqual(expected, actual); Assert.Inconclusive("Verify the correctness of this test method."); } Figure: Auto created code that shows the attributes required to run correctly in IIS or in this case ASP.NET Development Server If you are a purist and don’t like creating unit tests like this then you just need to add the three attributes manually. HostType – This attribute specified what host to use. Its an extensibility point, so you could write your own. Or you could just use “ASP.NET”. UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page otherwise your test may not run on the server, but may pass anyway. AspNetDevelopmentServerHost – This is a nasty one, it is only used if you are using ASP.NET Development Host and is unnecessary if you are using IIS. This sets the host settings and the first value MUST be the physical path to the root of your web application. OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post  and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services. 
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development server to host the service. Specify the path to the web application you want to use. [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: This AspNetDevelopmentServer will make sure that the specified web application is launched. Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development server port changes what happens to the details in your app.config that was generated when creating a reference to the web service? Well, it would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service call, and it has loaded the config, but before you make any calls to it you need to go in and dynamically set the Endpoint address to the same address as your dynamically hosted Web Application. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TestTools.UnitTesting; using System.Reflection; using System.ServiceModel.Description; using System.ServiceModel; namespace SSW.SQLDeploy.Test { class WcfWebServiceHelper { public static bool TryUrlRedirection(object client, TestContext context, string identifier) { bool result = true; try { PropertyInfo property = client.GetType().GetProperty("Endpoint"); string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString(); Uri webServerUri = new Uri(webServer); ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null); EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address); builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority)); endpoint.Address = builder.ToEndpointAddress(); } catch (Exception e) { context.WriteLine(e.Message); result = false; } return result; } } } Figure: This fixes a problem with the URL in your web.config not being the same as the dynamically hosted ASP.NET Development server port. We can now add a call to this method after we created the Proxy object and change the Endpoint for the Service to the correct one. This process is wrapped in an assert as if it fails there is no point in continuing. [AspNetDevelopmentServer("WebApp1", D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Editing the Endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy. As you can imagine AspNetDevelopmentServer poses some problems of you have multiple developers. What are the chances of everyone using the same location to store the source? 
    What about if you are using a build server? How do you tell MSTest where to look for the files? To the rescue comes a property called "%PathToWebRoot%", which is always right on the build server. It will always point to the build drop folder for your solution's web sites, which will be "\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\" or whatever your build drop location is. So let's change the code above to add this.

    ```csharp
    [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    [TestMethod]
    public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
    {
        ProfileServiceClient target = new ProfileServiceClient();
        Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
        bool isTrue = target.SaveDefaultProjectFile("Mav");
        Assert.AreEqual(true, isTrue);
    }
    ```

    Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere.

    Now we have another problem: this will ONLY run on the build server and will fail locally, as %PathToWebRoot%'s default value is "C:\Users\[profile]\Documents\Visual Studio 2010\Projects". Well, this sucks. How do we get the test to run on any build server and any developer laptop? Open "Tools | Options | Test Tools | Test Execution" in Visual Studio and you will see a field called "Web application root directory". This is where you override that default.

    Figure: You can override the default website location for tests.

    In my case I would put in "D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main", and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a "$/SSW/SqlDeploy/DEV/34567" branch from Main and want to run tests in there? Well, I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done.

    Conclusion

    Although this looks convoluted and complicated, there are real problems being solved here that mean you have a test-ANYWHERE solution: any build server, any developer workstation.

    Resources:
    http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html
    http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html
    http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx
    http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx
    http://www.5z5.com/News/?543f8bc8b36b174f

    Technorati Tags: VS2010, MSTest, Team Build 2010, Team Build, Visual Studio, Visual Studio 2010, Visual Studio ALM, Team Test, Team Test 2010


  • How to Upgrade Your Netbook to Windows 7 Home Premium

    - by Matthew Guay
    Would you like more features and flash in Windows on your netbook? Here's how you can upgrade your netbook to Windows 7 Home Premium the easy way.

    Most new netbooks today ship with Windows 7 Starter, which is the cheapest edition of Windows 7. It is fine for many computing tasks, and will run all your favorite programs great, but it lacks many customization, multimedia, and business features found in higher editions. Here we'll show you how you can quickly upgrade your netbook to a more full-featured edition of Windows 7 using Windows Anytime Upgrade. Also, if you want to upgrade your laptop or desktop to another edition of Windows 7, say Professional, you can follow these same steps to upgrade it, too.

    Please note: This is only for computers already running Windows 7. If your netbook is running XP or Vista, you will have to run a traditional upgrade to install Windows 7.

    Upgrade Advisor

    First, let's make sure your netbook can support the extra features, such as Aero Glass, in Windows 7 Home Premium. Most modern netbooks that ship with Windows 7 Starter can run the advanced features in Windows 7 Home Premium, but let's check just in case. Download the Windows 7 Upgrade Advisor (link below), and install as normal. Once it's installed, run it and click Start Check.

    Make sure you're connected to the internet before you run the check, otherwise you may see an error message. If you see it, click OK, then connect to the internet and start the check again. The advisor will now scan all of your programs and hardware to make sure they're compatible with Windows 7. Since you're already running Windows 7 Starter, it will also tell you if your computer will support the features in other editions of Windows 7.

    After a few moments, the Upgrade Advisor will show you what it found. Here we see that our netbook, a Samsung N150, can be upgraded to Windows 7 Home Premium, Professional, or Ultimate. We also see that we had one issue, but this was because a driver we had installed was not recognized. Click "See all system requirements" to see what your netbook can do with the new edition. This shows you which of the requirements, including support for Windows Aero, your netbook meets. Here our netbook supports Aero, so we're ready to upgrade. For more, check out our article on how to make sure your computer can run Windows 7 with Upgrade Advisor.

    Upgrade with Anytime Upgrade

    Now we're ready to upgrade our netbook to Windows 7 Home Premium. Enter "Anytime Upgrade" in the Start menu search, and select Windows Anytime Upgrade.

    Windows Anytime Upgrade lets you upgrade using a product key you already have or one you purchase during the upgrade process. And it installs without any downloads or Windows disks, so it works great even for netbooks without DVD drives. Anytime Upgrades are cheaper than a standard upgrade, and for a limited time, select retailers in the US are offering Anytime Upgrades to Windows 7 Home Premium for only $49.99 if purchased with a new netbook. If you already have a netbook running Windows 7 Starter, you can either purchase an Anytime Upgrade package at a retail store or purchase a key online during the upgrade process for $79.95. Or, if you have a standard Windows 7 product key (full or upgrade), you can use it in Anytime Upgrade. This is especially nice if you can purchase Windows 7 cheaper through your school, university, or office.
    Purchase an upgrade online

    To purchase an upgrade online, click "Go online to choose the edition of Windows 7 that's best for you". Here you can see a comparison of the features of each edition of Windows 7. Note that you can upgrade to either Home Premium, Professional, or Ultimate. We chose Home Premium because it has most of the features that home users want, including Media Center and Aero Glass effects. Also note that the price of each upgrade is cheaper than the respective upgrade from Windows XP or Vista. Click Buy under the edition you want.

    Enter your billing information, then your payment information. Once you confirm your purchase, you will be taken directly to the Upgrade screen. Make sure to save your receipt, as you will need the product key if you ever need to reinstall Windows on your computer.

    Upgrade with an existing product key

    If you purchased an Anytime Upgrade kit from a retailer, or already have a Full or Upgrade key for another edition of Windows 7, choose "Enter an upgrade key". Enter your product key, and click Next. If you purchased an Anytime Upgrade kit, the product key will be located on the inside of the case on a yellow sticker. The key will be verified as a valid key, and Anytime Upgrade will automatically choose the correct edition of Windows 7 based on your product key. Click Next when this is finished.

    Continuing the upgrade process

    Whether you entered a key or purchased a key online, the process is the same from here on. Click "I accept" to accept the license agreement. Now you're ready to install your upgrade. Make sure to save all open files and close any programs, and then click Upgrade. The upgrade only takes about 10 minutes in our experience, but your mileage may vary. Any available Microsoft updates, including ones for Office, Security Essentials, and other products, will be installed before the upgrade takes place.

    After a couple of minutes, your computer will automatically reboot and finish the installation. It will then reboot once more, and your computer will be ready to use! Welcome to your new edition of Windows 7!

    Here's a before and after shot of our desktop. When you do an Anytime Upgrade, all of your programs, files, and settings will be just as they were before you upgraded. The only change we noticed was that our pinned taskbar icons were slightly rearranged to the default order of Internet Explorer, Explorer, and Media Player. Here's a shot of our desktop before the upgrade. Notice that all of our pinned programs and desktop icons are still there, as well as our taskbar customization (we are using small icons on the taskbar instead of the default large icons). Before: the Windows 7 Starter background and the Aero Basic theme. After: Aero Glass and the more colorful default Windows 7 background.

    All of the features of Windows 7 Home Premium are now ready to use. The Aero theme was activated by default, but you can now customize your netbook theme, background, and more with the Personalization pane. To open it, right-click on your desktop and select Personalize. You can also now use Windows Media Center, and can play back DVD movies using an external drive. One of our favorite tools, the Snipping Tool, is also now available for easy screenshots and clips.

    Activating your new edition of Windows 7

    You will still need to activate your new edition of Windows 7. To do this right away, open the Start menu, right-click on Computer, and select Properties.
    Scroll to the bottom, and click "Activate Windows Now". Make sure you're connected to the internet, and then select "Activate Windows online now". Activation may take a few minutes, depending on your internet connection speed. When it is done, the Activation wizard will let you know that Windows is activated and genuine. Your upgrade is all finished!

    Conclusion

    Windows Anytime Upgrade makes it easy, and somewhat cheaper, to upgrade to another edition of Windows 7. It's useful for desktop and laptop owners who want to upgrade to Professional or Ultimate, but many more netbook owners will want to upgrade from Starter to Home Premium or another edition.

    Links

    Download the Windows 7 Upgrade Advisor
    Windows Team Blog: Anytime Upgrade Special with new PC purchase


  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
    On the Windows Azure platform there are three storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year, Microsoft announced that Azure SDK 1.1 had been released, and it supports a new type of storage, Drive, which allows us to work with NTFS files in the cloud. I will cover it in the coming posts, but for now I would like to talk a bit about Table Storage.

    Concept of the Table Storage Service

    The most common development scenario is to retrieve, create, update and remove data from a data store, and normally we do that against a database. When we move an application to the cloud, the most common requirement is therefore a storage service. Windows Azure provides a built-in service that allows us to store structured data, called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key, and the timestamp as the identifier used to resolve concurrency conflicts.

    Unlike a table in a database, the table service does not enforce a schema, which means you can have two entities in the same table with different property sets. The partition key is used for load balancing by the Azure OS and for group entity transactions. In the cloud you never know which machine is hosting your application and your data; it can be moved based on the transaction weight and the number of requests. If the Azure OS finds that many requests hit your Book entities with the partition key "Novel", it can move them to another idle machine to increase performance. So when choosing the partition key for your entities you need to make sure it indicates the category or group information, so that the Azure OS can balance the load as you wish.

    Consuming the Table

    Although the table service looks like a database, you cannot access it the way you do today, with either ADO.NET or ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which means you consume it RESTfully over HTTP requests. The Azure SDK provides a set of classes to connect to it. There are two classes we need: TableServiceContext and TableServiceEntity. TableServiceContext inherits from DataServiceContext, which represents the runtime context of an ADO.NET Data Service. It provides four methods we mainly use:

    CreateQuery: creates an IQueryable instance for a given entity type.
    AddObject: adds the specified entity to the table service.
    UpdateObject: updates an existing entity in the table service.
    DeleteObject: deletes an entity from the table service.

    Before you operate on the table service you need to provide valid account information. It is something like a database connection string, but built from the account name and account key you received when you created the storage service on the Windows Azure Development Portal. After getting the CloudStorageAccount you can create a CloudTableClient instance, which provides a set of methods for using the table service. A very useful method is CreateTableIfNotExist: it will create the table container for you if it does not exist.
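
    As a rough illustration of the plumbing just described, here is a sketch using the SDK 1.1 StorageClient types mentioned above; the setting name "DataConnectionString" is an assumption, and this is not the author's original code:

    ```csharp
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public static class TableSetup
    {
        public static CloudTableClient GetTableClient()
        {
            // Relies on the configuration setting publisher being wired up in
            // WebRole.OnStart (described next), for example:
            //   CloudStorageAccount.SetConfigurationSettingPublisher((name, setter) =>
            //       setter(RoleEnvironment.GetConfigurationSettingValue(name)));
            CloudStorageAccount account =
                CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

            CloudTableClient client = account.CreateCloudTableClient();

            // Make sure the table container exists before any entity operation.
            client.CreateTableIfNotExist("Account");

            return client;
        }
    }
    ```

    CreateTableIfNotExist does nothing if the table is already there, so calling it before the first entity operation is a cheap safety net.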
    You can then operate on the entities in that table through the methods mentioned above. Let me explain a bit more through an example; we always prefer code to sentences.

    Straightforward Access to the Table

    Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: it should allow the client to create an account entity in the table service. The WCF service has a method named Register that accepts the account the client wants to create; after performing some validation it adds the entity to the table service.

    So the first thing to do is create a Cloud Application in Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) Then I added a configuration item for the storage account through the Settings section of the cloud project. (Double-click the service file under the Roles folder and navigate to the Settings section.) This setting will be used to retrieve my storage account information. Since I am still in the development phase I selected "UseDevelopmentStorage=true".

    Then I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you will know that this file defines what happens when the application starts and terminates in the cloud. What I need to do here is, when the application starts, set the configuration setting publisher so the storage account can be loaded by the setting name I specified. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class, AccountService. I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment, if I press F5 the application runs on my local development fabric and I can see my service working in the browser.

    Let's implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have three properties: partition key, row key and timestamp. You could create a class with these three properties yourself, but the Azure SDK provides a base class for this named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So all we need to do is create a class named Account, derive it from TableServiceEntity, and add my own properties: Email, Password, DateCreated and DateDeleted. DateDeleted is a nullable date/time value to indicate whether this entity has been deleted and when.

    Did you notice that I missed something? Yes, the partition key and row key haven't been assigned. The TableServiceEntity base class defines two constructors: a parameter-less one, which is used to fill the properties from the table service when retrieving data, and one that takes the partition key and row key. As I said above, the partition key affects load balancing and the row key must be unique, so here I use the email as the partition key and the email plus a GUID as the row key (a sketch follows below). Now that the entity class is finished, the next step is to create a data access class to add it to the table service. The Azure SDK gives us a base class for that too, TableServiceContext, which I mentioned above.
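
    A sketch of the Account entity reconstructed from the description above; the property names follow the text, everything else is an assumption:

    ```csharp
    using System;
    using Microsoft.WindowsAzure.StorageClient;

    public class Account : TableServiceEntity
    {
        // Parameter-less constructor used by the storage client when
        // materializing entities returned from the table service.
        public Account()
        {
        }

        public Account(string email)
            : base(email, string.Format("{0}_{1}", email, Guid.NewGuid()))
        {
            // Partition key = email, row key = email plus a GUID, as described.
            Email = email;
            DateCreated = DateTime.UtcNow;
        }

        public string Email { get; set; }
        public string Password { get; set; }
        public DateTime DateCreated { get; set; }

        // Nullable so it can record whether, and when, the account was deleted.
        public DateTime? DateDeleted { get; set; }
    }
    ```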
    TableServiceContext needs the storage account information in its constructor: the combination of the storage service URI we create on the Windows Azure platform and the relevant account name and key. TableServiceContext uses this information to find the right address and to authenticate when operating on the storage entities. Hence, in my AccountDataContext class I need to call this constructor and pass the storage account into it.

    All entities are saved in table storage in one or more tables, which we call "table containers". Before we operate on an entity we need to make sure the table container has been created in storage. There is a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I call it first, to make sure every other method is invoked only after the table has been created. Notice that I pass the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: make your entity class name the same as the table name when you create the table. It improves performance when you operate on it over the cloud, especially when querying.

    Since the Register WCF method adds a new account to the table service, I create a corresponding method to add the account entity. Before implementing it, I have to add a reference to System.Data.Services.Client to the project. This reference provides the common ADO.NET Data Services methods that can be used with the Windows Azure table service. I use its AddObject method to create my account entity. Note that the table service does not fully implement ADO.NET Data Services, so there are some methods in System.Data.Services.Client that TableServiceContext doesn't support, such as AddLinks.

    Then I implemented the service method to add the account entity through the AccountDataContext (a sketch follows below). In the service implementation I load the storage account information from my configuration setting and create the account table entity from the parameters. Then I create the AccountDataContext; the first time this method is invoked, the AccountDataContext constructor creates the table container for me. Finally I use the Add method to add the account entity to the table.

    Next, let's create a fairly simple client application to test the service. I created a Windows console application and added a service reference to my WCF service. The metadata of the WCF service cannot be retrieved once it is deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need its metadata we can deploy it on the local development fabric first and then change the endpoint to the address in the cloud. In the client-side app.config file I pointed the endpoint at the local development fabric address, and then implemented the client so it lets me input an email and a password and invokes the WCF service to add my account. Running the application returns TRUE, of course, and in the local SQL Express I can see the data saved in the table.

    Summary

    In this post I explained more about the Windows Azure Table Storage Service. I also created a small application to demonstrate how to connect to it and consume it through the ADO.NET Data Services managed library provided with the Azure SDK. I have only shown how to create an entity in the storage service.
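
    A sketch of the data context and service method reconstructed from the description above; the table name "Account" follows the advice about matching the entity class name, and the setting name is an assumption:

    ```csharp
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public class AccountDataContext : TableServiceContext
    {
        private const string TableName = "Account";

        public AccountDataContext(CloudStorageAccount account)
            : base(account.TableEndpoint.AbsoluteUri, account.Credentials)
        {
            // Make sure the table container exists before it is used.
            account.CreateCloudTableClient().CreateTableIfNotExist(TableName);
        }

        public void Add(Account account)
        {
            AddObject(TableName, account);
            SaveChanges();
        }
    }

    public class AccountService // : IAccountService, as described in the post
    {
        public bool Register(string email, string password)
        {
            var storageAccount =
                CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

            var context = new AccountDataContext(storageAccount);
            context.Add(new Account(email) { Password = password });
            return true;
        }
    }
    ```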
    In the next post I would like to explain how to query the entities with conditions through LINQ. I also would like to refactor my AccountDataContext class to make it dynamic for any kind of entity.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member who starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The downside is that if you hit any blockers you can be pretty sure that everyone who can deal with your problem is asleep.

    I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one who did the install, so I thought it prudent to make sure that it was OK.

    Verifying Team Foundation Server 2010

    We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors.

    Figure: Even the SQL Setup looks good. I don't know how Adam did it!

    Backing up the Team Foundation Server 2008 Databases

    As we are moving from one server to another (the recommended method) we will be taking a backup of our TFS2008 databases and restoring them onto the SQL Server for the new TFS2010 server. Do not just detach and reattach; this will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases (a scripted alternative to the SSMS backups is sketched at the end of this section). This will stop any inadvertent check-ins or changes to TFS 2008.

    Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.

    It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu who needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

    John Liu [SSW] said: are you doing something to TFS :-O
    MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
    John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
    MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
    John Liu [SSW] said: I love you!
    -IM conversation at TFS Upgrade +25 minutes

    After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue.

    Figure: Back up all of the databases for TFS and include the Reporting Services ones, just in case.
    Figure: Check that all the backups have been taken.

    Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed: if we have any problems, or just plain run out of time, then you just turn the TFS 2008 server back on and all you have lost is one upgrade day, not 10 developer days.

    As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

    TFS2008 file count: Type 1 = 1845, Type 2 = 15770
    Areas & Iterations: 139

    You can use this to verify that the upgrade was successful. It should, however, be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
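
    For reference, the scripted alternative mentioned above: the backups can be run as T-SQL instead of clicked through in Management Studio. This is a sketch, not part of the original post; the path is an assumption, and you should back up every Tfs* database on the data tier plus the ReportServer databases in the same way.

    ```sql
    -- Back up the TFS 2008 databases before the upgrade; adjust names and path
    -- to match what is actually on your data tier.
    BACKUP DATABASE TfsIntegration    TO DISK = N'E:\Backups\TfsIntegration.bak'    WITH INIT;
    BACKUP DATABASE TfsVersionControl TO DISK = N'E:\Backups\TfsVersionControl.bak' WITH INIT;
    -- ...repeat for the remaining Tfs* databases and the ReportServer databases...
    ```

    TfsIntegration is the database the import step described below looks for, and the other TFS databases end up merged into TfsVersionControl during the upgrade.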
Restore Team Foundation Server 2008 Databases Restoring the databases is much more time-consuming than just attaching them as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am doing latest unless I encounter any problems. Figure: Restore each of the databases to either the latest or a specific point in time. Figure: Restore all of the required databases Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010. Upgrade Team Foundation Server 2008 Databases This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all of the 6 main TFS 2008 databases are merged into the TfsVersionControl database, upgraded and then the database is renamed to TFS_[CollectionName]. The rename is only the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy. If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. This GUID is used by the clients and they can get a little confused if there are two servers with the same one. To kick off the upgrade you need to open a command prompt and change the path to “C:\Program Files\Microsoft Team Foundation Server 2010\Tools” and run the “import” command in “tfsconfig”. TfsConfig import /sqlinstance:<Previous TFS Data Tier> /collectionName:<Collection Name> /confirmed Imports a TFS 2005 or 2008 data tier as a new project collection. Important: This command should only be executed after adequate backups have been performed. After you import, you will need to configure portal and reporting settings via the administration console. EXAMPLES -------- TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed OPTIONS: -------- sqlinstance The sql instance of the TFS 2005 or 2008 data tier. The TFS databases at that location will be modified directly and will no longer be usable as previous version databases. Ensure you have back-ups. collectionName The name of the new Team Project Collection. confirmed Confirm that you have backed-up databases before importing. This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade as the total database size was under 700MB. This was unlike the upgrade of SSW's production database with over 17GB of data which took a few hours. At the end of the process you should get no errors and no warnings. The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings. As this is a new server and not a pure upgrade there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration, or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection. 
This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files. Then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this. Figure: Stop the collection so TFS does not take a wobbly when we detach the database. When you try to start the new collection again you will get a conflict with project names and will need to remove the Test Upgrade collection. This is fine; it just needs to be detached. Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new Collection again. You will now be able to start the new upgraded collection and you are ready for testing. Do you remember the stats we took off the TFS 2008 server? TFS2008 File count: Type Count 1 1845 2 15770 Areas & Iterations: 139 Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control. TFS2010 File count: Type Count 1 19288 Areas & Iterations: 139 Lovely, the number of iterations is the same, and the number of files is bigger. Just what we were looking for. Testing the upgraded Team Foundation Server 2010 Project Collection Can we connect to the new collection and project? Figure: We can connect to the new collection and project. Figure: Make sure you can connect to the upgraded projects and that you can see all of the files. Figure: Team Web Access is there and working. Note that for Team Web Access you now use the same port and URL as for TFS 2010. So in this case as I am running on the local box you need to use http://localhost:8080/tfs which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname] and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection and in both VS2008 and VS2005 you will need to install the forward compatibility updates. Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 To make sure that you have everything up to date, make sure that you run SSW Diagnostics and get all green ticks. Upgrade Done! At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010 but it is still running under the same process template that it was running before. You can only “enable” 2010 features in a process template; you can't upgrade it. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy, you can move or branch it, but Work Items are more difficult as you can't move them between projects. This instance is complicated more as the old project uses the Conchango/EMC Scrum for Team System template and I will need to write a script/application to get the work items across with their attachments intact. That is my next task! Technorati Tags: TFS 2010,TFS 2008,VS ALM
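As a rough starting point for the work item migration script mentioned above, the TFS 2010 client object model can query the old project and copy each work item into the new one. Treat this as a sketch under stated assumptions rather than the finished tool: the collection URL and project names are placeholders, the type mapping is the naive same-name one, and the Scrum for Team System field mapping and attachment handling the post talks about still need to be added.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class CopyWorkItems
{
    static void Main()
    {
        var collection = new TfsTeamProjectCollection(new Uri("http://localhost:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // Placeholder project names - replace with the real source and target projects.
        Project target = store.Projects["NewProject"];

        var oldItems = store.Query(
            "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = 'OldProject'");

        foreach (WorkItem oldItem in oldItems)
        {
            // Naive mapping: assumes the target project has a work item type with the same name.
            WorkItemType targetType = target.WorkItemTypes[oldItem.Type.Name];

            // Copy() carries most field values across; check links and attachments after the copy.
            WorkItem newItem = oldItem.Copy(targetType);
            newItem.Save();
        }
    }
}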

    Read the article

  • Making Sense of ASP.NET Paths

    - by Rick Strahl
    ASP.Net includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I'm looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited even to this date on this blog to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments. Request Object Paths Available Here's a list of the Path related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below where webstore is the name of the virtual directory. Request Property Description and Value ApplicationPath Returns the web root-relative logical path to the virtual root of this app. /webstore/ PhysicalApplicationPath Returns the local file system path of the virtual root for this app. c:\inetpub\wwwroot\webstore PhysicalPath Returns the local file system path to the current script or path. c:\inetpub\wwwroot\webstore\admin\paths.aspx Path FilePath CurrentExecutionFilePath All of these return the full root relative logical path to the script page including path and scriptname. CurrentExecutionFilePath will return the 'current' request path after a Transfer/Execute call while FilePath will always return the original request's path. /webstore/admin/paths.aspx AppRelativeCurrentExecutionFilePath Returns an ASP.NET root relative virtual path to the script or path for the current request. If in a Transfer/Execute call the transferred Path is returned. ~/admin/paths.aspx PathInfo Returns any extra path following the script name (the /ExtraPathInfo portion of the example below); string.Empty if no PathInfo is available. /webstore/admin/paths.aspx/ExtraPathInfo RawUrl Returns the full root relative URL including querystring and extra path as a string. /webstore/admin/paths.aspx?sku=wwhelp40 Url Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than string. http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40 UrlReferrer The fully qualified URL of the page that sent the request. This is also a Uri instance and this value is null if the page was directly accessed by typing into the address bar or if the client did not send a Referer HTTP header. http://www.west-wind.com/webstore/default.aspx?Info Control.TemplateSourceDirectory Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path only to a Page or control from within the control. For non-file controls this returns the Page path. 
/webstore/admin/ As you can see there's a ton of information available there for each of the three common path formats: Physical Path is an OS type path that points to a path or file on disk. Logical Path is a Web path that is relative to the Web server's root. It includes the virtual plus the application relative path. ~/ (Root-relative) Path is an ASP.NET specific path that includes ~/ to indicate the virtual root Web path. ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root relative paths are useful for specifying portable URLs that don't rely on relative directory structures and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms. ~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl() ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root relative path in a control rather than a location relative path: <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" /> ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif which uses the Request.ApplicationPath as the basepath to replace the ~. By convention any custom Web controls also should use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing: string imgPath = this.ResolveUrl("~/images/help.gif"); imgHelp.ImageUrl = imgPath; Unfortunately ResolveUrl() is limited to WebForm pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content: <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script> which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms due to the fact that views are not referencing specific pages but rather are often path based which can lead to various variations on how a particular view is referenced. In Module or Handler code Control.ResolveUrl() unfortunately is not available which in retrospect seems like an odd design choice – URL resolution really should happen on a Request basis not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class: string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx"); VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided and doesn't work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too). Similar to ResolveUrl() is ResolveClientUrl() which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed but can be useful in some scenarios. Mapping Virtual Paths to Physical Paths with Server.MapPath() If you need to map root relative or current folder relative URLs to physical paths, you can use HttpContext.Current.Server.MapPath(). 
Inside of a Page you can do the following: string physicalPath = Server.MapPath("~/scripts/ww.jquery.js"); MapPath is pretty flexible and it understands both ASP.NET style virtual paths as well as plain relative paths, so the following also works. string physicalPath = Server.MapPath("scripts/silverlight.js"); as well as dot relative syntax: string physicalPath = Server.MapPath("../scripts/jquery.js"); Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE, ASPNET typically). Note that Server.MapPath will not map up beyond the virtual root of the application for security reasons. Server and Host Information Between these settings you can get all the information you may need to figure out where you are at and to build a new URL if necessary. If you need to build a URL completely from scratch you can get access to information about the server you are accessing: Server Variable Function and Example SERVER_NAME The name of the domain or IP address www.west-wind.com or 127.0.0.1 SERVER_PORT The port that the request runs under. 80 SERVER_PORT_SECURE Determines whether https: was used. 0 or 1 APPL_MD_PATH ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn't work for ADSI access so you should replace that with LOCALHOST or the machine's NetBios name. /LM/W3SVC/1/ROOT/webstore Request.Url and Uri Parsing If you still need more control over the current request URL or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation. Uri Property Function Scheme The URL scheme or protocol prefix. http or https Port The port if specifically specified. DnsSafeHost The domain name or local host NetBios machine name www.west-wind.com or rasnote LocalPath The full path of the URL including script name and extra PathInfo. /webstore/admin/paths.aspx Query The query string if any ?id=1 The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only. If you need to modify a URL in order to change it you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I've needed to do to get specific URLs: Convert the Request URL to an SSL/HTTPS link For example, taking the current request URL and converting it to a secure URL can be done like this: UriBuilder build = new UriBuilder(Request.Url); build.Scheme = "https"; build.Port = -1; // don't inject port Uri newUri = build.Uri; string newUrl = build.ToString(); Retrieve the fully qualified URL without a QueryString AFAIK, there's no native routine to retrieve the current request URL without the query string. It's easy to do with UriBuilder however: UriBuilder builder = new UriBuilder(Request.Url); builder.Query = ""; string logicalPathWithoutQuery = builder.ToString(); What else? I took a look through the old post's comments and addressed as many of the questions and comments that came up in there. With a few small and silly exceptions this update post handles most of these. 
But I'm sure there are more things that could go in here. What else would be useful to put onto this post so it serves as a nice all-in-one place to go for path references? If you think of something leave a comment and I'll try to update the post with it in the future. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
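To keep the main conversions in one place, here is a small helper that pulls the techniques above together (VirtualPathUtility for ~/ paths, Server.MapPath for physical paths, and UriBuilder for fully qualified URLs). It is only a sketch of what was already shown in the post, and the example paths in the comments are placeholders.

using System;
using System.Web;

public static class PathHelper
{
    // ~/ (root-relative) path to a web root-relative logical path, e.g. ~/images/help.gif -> /webstore/images/help.gif
    public static string ToLogical(string virtualPath)
    {
        return VirtualPathUtility.ToAbsolute(virtualPath);
    }

    // ~/ or relative path to a physical file system path, e.g. ~/images/help.gif -> c:\inetpub\wwwroot\webstore\images\help.gif
    public static string ToPhysical(string virtualPath)
    {
        return HttpContext.Current.Server.MapPath(virtualPath);
    }

    // Fully qualified URL for a ~/ path; scheme, host and port are taken from the current Request.Url.
    public static string ToAbsoluteUrl(string virtualPath)
    {
        var builder = new UriBuilder(HttpContext.Current.Request.Url)
        {
            Path = VirtualPathUtility.ToAbsolute(virtualPath),
            Query = ""
        };
        return builder.Uri.ToString();
    }
}

Usage is then a one-liner such as PathHelper.ToAbsoluteUrl("~/images/help.gif") from a page, handler or module.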

    Read the article

  • External HDD USB 3.0 failure

    - by Philip
    [ 2560.376113] usb 9-1: new high-speed USB device number 2 using xhci_hcd [ 2560.376186] usb 9-1: Device not responding to set address. [ 2560.580136] usb 9-1: Device not responding to set address. [ 2560.784104] usb 9-1: device not accepting address 2, error -71 [ 2560.840127] hub 9-0:1.0: unable to enumerate USB device on port 1 [ 2561.080182] usb 10-1: new SuperSpeed USB device number 5 using xhci_hcd [ 2566.096163] usb 10-1: device descriptor read/8, error -110 [ 2566.200096] usb 10-1: new SuperSpeed USB device number 5 using xhci_hcd [ 2571.216175] usb 10-1: device descriptor read/8, error -110 [ 2571.376138] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2571.744174] usb 10-1: new SuperSpeed USB device number 7 using xhci_hcd [ 2576.760116] usb 10-1: device descriptor read/8, error -110 [ 2576.864074] usb 10-1: new SuperSpeed USB device number 7 using xhci_hcd [ 2581.880153] usb 10-1: device descriptor read/8, error -110 [ 2582.040123] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2582.224139] hub 9-0:1.0: unable to enumerate USB device on port 1 [ 2582.464177] usb 10-1: new SuperSpeed USB device number 9 using xhci_hcd [ 2587.480122] usb 10-1: device descriptor read/8, error -110 [ 2587.584079] usb 10-1: new SuperSpeed USB device number 9 using xhci_hcd [ 2592.600150] usb 10-1: device descriptor read/8, error -110 [ 2592.760134] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2593.128175] usb 10-1: new SuperSpeed USB device number 11 using xhci_hcd [ 2598.144183] usb 10-1: device descriptor read/8, error -110 [ 2598.248109] usb 10-1: new SuperSpeed USB device number 11 using xhci_hcd [ 2603.264171] usb 10-1: device descriptor read/8, error -110 [ 2603.480157] usb 10-1: new SuperSpeed USB device number 12 using xhci_hcd [ 2608.496162] usb 10-1: device descriptor read/8, error -110 [ 2608.600091] usb 10-1: new SuperSpeed USB device number 12 using xhci_hcd [ 2613.616166] usb 10-1: device descriptor read/8, error -110 [ 2613.832170] usb 10-1: new SuperSpeed USB device number 13 using xhci_hcd [ 2618.848135] usb 10-1: device descriptor read/8, error -110 [ 2618.952079] usb 10-1: new SuperSpeed USB device number 13 using xhci_hcd [ 2623.968155] usb 10-1: device descriptor read/8, error -110 [ 2624.184176] usb 10-1: new SuperSpeed USB device number 14 using xhci_hcd [ 2629.200124] usb 10-1: device descriptor read/8, error -110 [ 2629.304075] usb 10-1: new SuperSpeed USB device number 14 using xhci_hcd [ 2634.320172] usb 10-1: device descriptor read/8, error -110 [ 2634.424135] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2634.776186] usb 10-1: new SuperSpeed USB device number 15 using xhci_hcd [ 2639.792105] usb 10-1: device descriptor read/8, error -110 [ 2639.896090] usb 10-1: new SuperSpeed USB device number 15 using xhci_hcd [ 2644.912172] usb 10-1: device descriptor read/8, error -110 [ 2645.128174] usb 10-1: new SuperSpeed USB device number 16 using xhci_hcd [ 2650.144160] usb 10-1: device descriptor read/8, error -110 [ 2650.248062] usb 10-1: new SuperSpeed USB device number 16 using xhci_hcd [ 2655.264120] usb 10-1: device descriptor read/8, error -110 [ 2655.480182] usb 10-1: new SuperSpeed USB device number 17 using xhci_hcd [ 2660.496121] usb 10-1: device descriptor read/8, error -110 [ 2660.600086] usb 10-1: new SuperSpeed USB device number 17 using xhci_hcd [ 2665.616167] usb 10-1: device descriptor read/8, error -110 [ 2665.832177] usb 10-1: new SuperSpeed USB device number 18 using xhci_hcd [ 2670.848110] usb 10-1: device 
descriptor read/8, error -110 [ 2670.952066] usb 10-1: new SuperSpeed USB device number 18 using xhci_hcd [ 2675.968081] usb 10-1: device descriptor read/8, error -110 [ 2676.072124] hub 10-0:1.0: unable to enumerate USB device on port 1 [ 2786.104531] xhci_hcd 0000:02:00.0: remove, state 4 [ 2786.104546] usb usb10: USB disconnect, device number 1 [ 2786.104686] xHCI xhci_drop_endpoint called for root hub [ 2786.104692] xHCI xhci_check_bandwidth called for root hub [ 2786.104942] xhci_hcd 0000:02:00.0: USB bus 10 deregistered [ 2786.105054] xhci_hcd 0000:02:00.0: remove, state 4 [ 2786.105065] usb usb9: USB disconnect, device number 1 [ 2786.105176] xHCI xhci_drop_endpoint called for root hub [ 2786.105181] xHCI xhci_check_bandwidth called for root hub [ 2786.109787] xhci_hcd 0000:02:00.0: USB bus 9 deregistered [ 2786.110134] xhci_hcd 0000:02:00.0: PCI INT A disabled [ 2794.268445] pci 0000:02:00.0: [1b73:1000] type 0 class 0x000c03 [ 2794.268483] pci 0000:02:00.0: reg 10: [mem 0x00000000-0x0000ffff] [ 2794.268689] pci 0000:02:00.0: PME# supported from D0 D3hot [ 2794.268700] pci 0000:02:00.0: PME# disabled [ 2794.276383] pci 0000:02:00.0: BAR 0: assigned [mem 0xd7800000-0xd780ffff] [ 2794.276398] pci 0000:02:00.0: BAR 0: set to [mem 0xd7800000-0xd780ffff] (PCI address [0xd7800000-0xd780ffff]) [ 2794.276419] pci 0000:02:00.0: no hotplug settings from platform [ 2794.276658] xhci_hcd 0000:02:00.0: enabling device (0000 -> 0002) [ 2794.276675] xhci_hcd 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 2794.276762] xhci_hcd 0000:02:00.0: setting latency timer to 64 [ 2794.276771] xhci_hcd 0000:02:00.0: xHCI Host Controller [ 2794.276913] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 9 [ 2794.395760] xhci_hcd 0000:02:00.0: irq 16, io mem 0xd7800000 [ 2794.396141] xHCI xhci_add_endpoint called for root hub [ 2794.396144] xHCI xhci_check_bandwidth called for root hub [ 2794.396195] hub 9-0:1.0: USB hub found [ 2794.396203] hub 9-0:1.0: 1 port detected [ 2794.396305] xhci_hcd 0000:02:00.0: xHCI Host Controller [ 2794.396371] xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 10 [ 2794.396496] xHCI xhci_add_endpoint called for root hub [ 2794.396499] xHCI xhci_check_bandwidth called for root hub [ 2794.396547] hub 10-0:1.0: USB hub found [ 2794.396553] hub 10-0:1.0: 1 port detected [ 2798.004084] usb 1-3: new high-speed USB device number 8 using ehci_hcd [ 2798.140824] scsi21 : usb-storage 1-3:1.0 [ 2820.176116] usb 1-3: reset high-speed USB device number 8 using ehci_hcd [ 2824.000526] scsi 21:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2824.002263] sd 21:0:0:0: Attached scsi generic sg2 type 0 [ 2824.003617] sd 21:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2824.005139] sd 21:0:0:0: [sdb] Write Protect is off [ 2824.005149] sd 21:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2824.009084] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.009094] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.011944] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.011952] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.049153] sdb: sdb1 [ 2824.051814] sd 21:0:0:0: [sdb] No Caching mode page present [ 2824.051821] sd 21:0:0:0: [sdb] Assuming drive cache: write through [ 2824.051825] sd 21:0:0:0: [sdb] Attached SCSI disk [ 2839.536624] usb 1-3: USB disconnect, device number 8 [ 2844.620178] usb 10-1: new SuperSpeed USB device number 2 using xhci_hcd [ 2844.640281] scsi22 : usb-storage 10-1:1.0 [ 
2850.326545] scsi 22:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2850.327560] sd 22:0:0:0: Attached scsi generic sg2 type 0 [ 2850.329561] sd 22:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2850.329889] sd 22:0:0:0: [sdb] Write Protect is off [ 2850.329897] sd 22:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2850.330223] sd 22:0:0:0: [sdb] No Caching mode page present [ 2850.330231] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.331414] sd 22:0:0:0: [sdb] No Caching mode page present [ 2850.331423] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.384116] usb 10-1: USB disconnect, device number 2 [ 2850.392050] sd 22:0:0:0: [sdb] Unhandled error code [ 2850.392056] sd 22:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2850.392061] sd 22:0:0:0: [sdb] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 [ 2850.392074] end_request: I/O error, dev sdb, sector 0 [ 2850.392079] quiet_error: 70 callbacks suppressed [ 2850.392082] Buffer I/O error on device sdb, logical block 0 [ 2850.392194] ldm_validate_partition_table(): Disk read failed. [ 2850.392271] Dev sdb: unable to read RDB block 0 [ 2850.392377] sdb: unable to read partition table [ 2850.392581] sd 22:0:0:0: [sdb] READ CAPACITY failed [ 2850.392584] sd 22:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2850.392588] sd 22:0:0:0: [sdb] Sense not available. [ 2850.392613] sd 22:0:0:0: [sdb] Asking for cache data failed [ 2850.392617] sd 22:0:0:0: [sdb] Assuming drive cache: write through [ 2850.392621] sd 22:0:0:0: [sdb] Attached SCSI disk [ 2850.732182] usb 10-1: new SuperSpeed USB device number 3 using xhci_hcd [ 2850.752228] scsi23 : usb-storage 10-1:1.0 [ 2851.752709] scsi 23:0:0:0: Direct-Access BUFFALO HD-PZU3 0001 PQ: 0 ANSI: 6 [ 2851.754481] sd 23:0:0:0: Attached scsi generic sg2 type 0 [ 2851.756576] sd 23:0:0:0: [sdb] 1953463728 512-byte logical blocks: (1.00 TB/931 GiB) [ 2851.758426] sd 23:0:0:0: [sdb] Write Protect is off [ 2851.758436] sd 23:0:0:0: [sdb] Mode Sense: 1f 00 00 08 [ 2851.758779] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.758787] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.759968] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.759977] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.817710] sdb: sdb1 [ 2851.820562] sd 23:0:0:0: [sdb] No Caching mode page present [ 2851.820568] sd 23:0:0:0: [sdb] Assuming drive cache: write through [ 2851.820572] sd 23:0:0:0: [sdb] Attached SCSI disk [ 2852.060352] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.076533] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.076538] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.196329] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.212593] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.212599] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.456290] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.472402] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.472408] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.624304] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.640531] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.640536] xhci_hcd 0000:02:00.0: 
xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.772296] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.788536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.788541] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2852.920349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2852.936536] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2852.936540] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2853.072287] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2853.088565] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2853.088570] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.176339] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.192561] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.192567] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.320349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.336526] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.336531] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.468344] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.484551] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.484556] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.612349] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.628540] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.628545] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.756350] usb 10-1: reset SuperSpeed USB device number 3 using xhci_hcd [ 2884.772528] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b19060 [ 2884.772533] xhci_hcd 0000:02:00.0: xHCI xhci_drop_endpoint called with disabled ep f6b1908c [ 2884.848116] usb 10-1: USB disconnect, device number 3 [ 2884.851493] scsi 23:0:0:0: [sdb] killing request [ 2884.851501] scsi 23:0:0:0: [sdb] killing request [ 2884.851699] scsi 23:0:0:0: [sdb] Unhandled error code [ 2884.851702] scsi 23:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2884.851708] scsi 23:0:0:0: [sdb] CDB: Read(10): 28 00 00 5f 2b ee 00 00 3e 00 [ 2884.851721] end_request: I/O error, dev sdb, sector 6237166 [ 2884.851726] Buffer I/O error on device sdb1, logical block 6237102 [ 2884.851730] Buffer I/O error on device sdb1, logical block 6237103 [ 2884.851738] Buffer I/O error on device sdb1, logical block 6237104 [ 2884.851741] Buffer I/O error on device sdb1, logical block 6237105 [ 2884.851744] Buffer I/O error on device sdb1, logical block 6237106 [ 2884.851747] Buffer I/O error on device sdb1, logical block 6237107 [ 2884.851750] Buffer I/O error on device sdb1, logical block 6237108 [ 2884.851753] Buffer I/O error on device sdb1, logical block 6237109 [ 2884.851757] Buffer I/O error on device sdb1, logical block 6237110 [ 2884.851807] scsi 23:0:0:0: [sdb] Unhandled error code [ 2884.851810] scsi 23:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 2884.851813] scsi 23:0:0:0: [sdb] CDB: Read(10): 28 00 00 5f 2c 2c 00 00 3e 00 [ 2884.851824] end_request: I/O error, dev sdb, 
sector 6237228 [ 2885.168190] usb 10-1: new SuperSpeed USB device number 4 using xhci_hcd [ 2885.188268] scsi24 : usb-storage 10-1:1.0 Please help me with my problem. I got this after running dmesg.

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in detail the following points: What is static code analysis? When to use? Supported platforms Supported Visual Studio versions How to use Run Code Analysis Manually Run Code Analysis Automatically Run Code Analysis while checking in source code to TFS version control (TFSVC) Run Code Analysis as part of Team Build Understand the Code Analysis results & learn how to fix them Create your custom rule set Q & A References What is static code analysis? The Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and a lot of other categories of potential problems according to Microsoft's rules that mainly target best practices in writing code, and there is a large set of those rules included with Visual Studio grouped into different categories targeting specific coding issues like security, design, interoperability, globalization and others. Static here means analyzing the source code without executing it, and this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis Tool) or manually through Code Review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as Code Coverage for example. When to use? Running the Code Analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase before you release it to the testing team. Supported platforms .NET Framework, native (C and C++) and database applications. Supported Visual Studio versions All versions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check Feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. How to use? Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or check-in policy for Team Foundation Server. Run Code Analysis Manually To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right click on the project name in the Solution Explorer and choose Run Code Analysis from the context menu. Run Code Analysis Automatically To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's Property Page. Run Code Analysis while checking in source code to TFS version control (TFSVC) Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. 
(This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy. Double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need, as illustrated below: Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed. Enforce C/C++ Code Analysis (/analyze): Requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in. Enforce Code Analysis for Managed Code: Requires that all managed projects run code analysis and build before they can be checked in. Check the Code analysis rule set reference on MSDN. What is a Rule Set? A rule set is a group of code analysis rules, like the example below where Microsoft.Design is the rule set name and “Do not declare static members on generic types” is the code analysis rule. Once you have configured the analysis rule, the policy will be enabled for all the team members in this project. Whenever a team member checks in any source code to TFSVC, the policy section will highlight the Code Analysis policy as below. TFS is a very extensible platform so you can simply implement your own custom Code Analysis check-in policy; check this link for more details http://msdn.microsoft.com/en-us/library/dd492668.aspx but you also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx Run Code Analysis as part of Team Build With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the Build Definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report. Understand the Code Analysis results & learn how to fix them Now that we have gone through the Code Analysis configurations and the different ways of running it, we will go through the Code Analysis results, how to understand them and how to resolve them. The Code Analysis window in Visual Studio will show all the analysis results based on the rule sets you configured in the project file properties. Let's dig deep into what each result item contains: 1 Check ID The unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
2 Title The title of the warning message 3 Description A description of the problem or suggested fix 4 File Name The file name and the line number of the code which violates the code analysis rule set 5 Category The code analysis category for this error 6 Warning/Error Depends on how you configure it in the rule set; the default is the Warning level 7 Action Copy: copy the warning information to the clipboard Create Work Item: If you're connected to Team Foundation Server you can create a work item, most probably a Task or Bug, and assign it to a developer to fix a certain code analysis warning Suppress Message: There are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable. In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning. It does not suppress the warning globally. Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, all you have to do is click on the Check Id hyperlink and you'll be directed to MSDN online or a local copy, based on the configuration you chose while installing Visual Studio, where you will find all the information about the warning including how to fix it. Create a Custom Code Analysis Rule Set The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details http://msdn.microsoft.com/en-us/library/dd264974.aspx Q & A Visual Studio static code analysis vs. FxCop vs. StyleCop http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/ Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well. Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013. 
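To make the two suppression options above concrete, here is a hedged example built around a rule that ships with the Microsoft rule sets (CA1031, "Do not catch general exception types" in the Microsoft.Design category); the class, method and Target string are invented purely for illustration.

using System;
using System.Diagnostics.CodeAnalysis;

public class OrderProcessor
{
    // "In Source" suppression: the attribute sits directly above the member that generated the warning.
    [SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes",
        Justification = "Top-level handler; failures are logged and must never bubble up.")]
    public void Process()
    {
        try
        {
            // ... work that may throw ...
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

// "In Suppression File" suppression: the same attribute goes into GlobalSuppressions.cs at assembly level,
// still targeting the specific member rather than suppressing the rule globally, for example:
// [assembly: SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes",
//     Scope = "member", Target = "OrderProcessor.#Process()", Justification = "Top-level handler.")]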
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not… Update 9th March 2010 Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald’s company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered. What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer or 10. This is the situation that most consultancies find themselves in and thus they need a more sustainable and maintainable option. What I am advocating is that we should have 1 “Team Project” per customer, and use areas to create “sub projects” within that single “Team Project”. "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server   "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP   "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP   “@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations.” Ed Blankenship, Visual Studio ALM MVP   This would mean that SSW would have a single Team Project called “SSW” that contains all of our internal projects and consequently all of the Areas and Iteration move down one hierarchy to accommodate this. Where we would have had “\SSW\Sprint 1” we now have “\SSW\SqlDeploy\Sprint1” with “SqlDeploy” being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But, there are implications as TFS does not provide this model “out-of-the-box”. These implications stretch across Areas, Iterations, Queries, Project Portal and Version Control. Michael made a good comment, he said: I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it’ll either be easy to convince them or there is a valid reason for having a different template - Michael Fourie, Visual Studio ALM MVP   At SSW we have a standard template that we use and this is applied across the board, to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW then this approach may benefit you as well. Implications around Areas Areas should be used for topological classification/isolation of work items. 
You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top level item that represents the Project / Product that we want to chop our Team Project into. Figure: Creating a sub area to represent a product/project is easy. <teamproject> <teamproject>\<Functional Area/module whatever> Becomes: <teamproject> <teamproject>\<ProjectName>\ <teamproject>\<ProjectName>\<Functional Area/module whatever> Implications around Iterations Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects. Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration. <teamproject>\Sprint 1 Or <teamproject>\Release 1\Sprint 1 Becomes: <teamproject>\<ProjectName>\Sprint 1 Or <teamproject>\<ProjectName>\Release 1\Sprint 1 Implications around Queries Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects and it would be a pain to have to edit all of the queries every time we changed project, and that would only allow one team to work on one project at a time. Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs. In order for project contributors to be able to query based on their project we need a couple of things. The first thing I did was to create an “_Area Template” folder that has a copy of the project layout with all the queries set up to filter based on the “_Area Template” Area and the “_Sprint template” you can see in the Area and Iteration views. Figure: The template is currently easy drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool. I then created an “Areas” folder to hold all of the area specific queries. So, when you go to create a new TFS Sub-Project you just drag “_Area Template” while holding “Ctrl” and drop it onto “Areas”. There is a little setup here. That said I managed it in around 10 minutes which is not so bad, and I can imagine it being quite easy to build a tool to create these queries. Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well. Version Control What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products. Figure: Creating sub folders in source control is as easy as “Right click | Create new folder”. <teamproject>\DEV\Main\ Becomes: <teamproject>\<ProjectName>\DEV\Main\ Conclusion I think it is up to each company to make a call on how you want to configure your Team Projects and it depends completely on how many projects/products you are going to have for each customer including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help. 
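As a hint of what such a tool (or the hand-edited queries) boils down to, the per-product queries are really just WIQL scoped to the product's Area and Iteration paths, which you can also run from code. The sketch below uses the TFS 2010 client object model; the collection URL and the SSW\SqlDeploy paths are illustrative examples carried over from above, so substitute your own.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ProductBacklogQuery
{
    static void Main()
    {
        var collection = new TfsTeamProjectCollection(new Uri("http://tfs.example.com:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // Scope the query to one "sub project" by filtering on the Area and Iteration paths.
        string wiql =
            "SELECT [System.Id], [System.Title], [System.State] " +
            "FROM WorkItems " +
            "WHERE [System.TeamProject] = 'SSW' " +
            "AND [System.AreaPath] UNDER 'SSW\\SqlDeploy' " +
            "AND [System.IterationPath] UNDER 'SSW\\SqlDeploy\\Sprint 1'";

        foreach (WorkItem item in store.Query(wiql))
        {
            Console.WriteLine("{0} {1} ({2})", item.Id, item.Title, item.State);
        }
    }
}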
Pros You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 projects prior to the changes in the RC I can tell you that that many projects is no fun. Standardises your Process Template – You will always have the same Process implementation across projects/products without exception You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice. You can “move” work items from one “product” to another – Have we not always wanted to do that? You can rename your projects – Wahoo: everyone wants to do this, now you can. One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both. Simplified Check-In Policies – There is only one set of check-in policies per client. This simplifies administration of policies. Simplified Alerts – As alerts are applied across multiple projects this simplifies your alert rules per client. Cons All of these cons could be mitigated by a custom tool that helps automate creation of “Sub-projects” within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint and queries. It just does not exist yet :) You need to configure the Areas and Iterations You need to configure the permissions You may need to configure sub sites for SharePoint (depends on your requirement) – If you have two projects/products in the same Team Project then you will not see the burn down for each one out-of-the-box, but rather a cumulative one for the Team Project. This is not really that much of a problem as you would have to configure your burndown graphs for your current iteration anyway. note: When you create a sub site to a TFS linked portal it will inherit the settings of its parent site :) This is fantastic as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one. Every team wants their own customization (via Ewald Hofman) - small teams of 2 persons against teams of 30 – or even outsourcing – need their own process, you cannot allow that because everybody gets the same work item types. note: Luckily at SSW this is not a problem as our template is standardised across all projects and customers. Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list it can get very cluttered. note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean up the list. Feedback Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you? Technorati Tags: Visual Studio ALM,TFS Administration,TFS,Team Foundation Server,Project Planning,TFS Customisation

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.   - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, max server memory setting higher than the current allocation, high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: ·         Frees unused memory. ·         Notifies Memory Manager Clients to release memory o   Caches – Free unreferenced cache objects. o   Buffer pool - Based on oldest access times.   The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increase but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali” Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
detecting and acting on Hyper-V Dynamic Memory allocations).   How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling?   When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?   If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).    This raises another question: can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability but it is something we're looking into.   What Memory Management changes are there in SQL Server “Denali”?   In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it now represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.   Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means that if you're running a 32-bit edition of Denali, you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).   In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
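    The article mentions the idea of a job that varies "max server memory" according to need. As a rough, hypothetical sketch only (not from the original post; the MB values below are placeholders to be replaced with values suited to the VM), such a job could wrap sp_configure calls like these:
    -- Hypothetical sketch: vary 'max server memory' so SQL Server releases memory
    -- during quiet periods and can grow again before busy periods.
    EXEC sp_configure 'show advanced options', 1;   -- 'max server memory' is an advanced option
    RECONFIGURE;
    -- Quiet period: lower the target; the resulting internal memory pressure
    -- causes SQL Server to release memory back to the OS.
    EXEC sp_configure 'max server memory (MB)', 2048;
    RECONFIGURE;
    -- Busy period: raise the target again so the buffer pool can grow under load.
    EXEC sp_configure 'max server memory (MB)', 8192;
    RECONFIGURE;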

    Read the article

  • Dual Boot issues with Windows 7 and Ubuntu

    - by Michael
    I'm finding myself in a rather unique situation. I've read through just about every resource I can find about this and while things have helped me understand some background, I haven't yet been able to find a solution. So I'm asking here. I originally had just a Windows 7 64-bit OS installation on my desktop. Learning that I couldn't do anything with Apache, PHP and MySql from within a 64-bit system, I did some research and found out that I could use Ubuntu. I've installed the latest version: 11.04. I created a CD to install Ubuntu from and the install went just fine. I installed it side-by-side with Windows 7. I can boot into Ubuntu just fine through the dual-boot option. When I reboot to load Windows though, the Grub2 list shows Windows 7 (loader) and when I select this option the Windows System Recovery loads instead of the actual OS. I haven't made it past there because I didn't know what to do. I just shut the computer down and rebooted into Ubuntu. I've been working for the last hour and a half to try to figure out how to boot into the Windows 7 OS and I haven't got a clue. While I'm somewhat proficient with Windows 7, I'm totally new to Ubuntu, so if you do know what needs to happen, please keep it simple enough that I'll be able to understand. Thanks for all your help in advance. Here's the results after using the Boot Info Script: Boot Info Script 0.55 dated February 15th, 2010 ============================= Boot Info Summary: ============================== => Grub 2 is installed in the MBR of /dev/sda and looks on the same drive in partition #5 for cbh. => Windows is installed in the MBR of /dev/sdb => Grub 2 is installed in the MBR of /dev/mapper/pdc_bdadcfbdif and looks on the same drive in partition #5 for cbh. sda1: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy sda2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy sda3: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy sdb1: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: /bootmgr /Boot/BCD sdb2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. 
Operating System: Boot files/dirs: sdb3: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: /bootmgr /boot/BCD sdb4: _________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sdb5: _________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 11.04 Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb6: _________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: pdc_bdadcfbdif1: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: /bootmgr /Boot/BCD pdc_bdadcfbdif2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe pdc_bdadcfbdif3: _________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy mount: unknown filesystem type '' =========================== Drive/Partition Info: ============================= Drive: sda ___________________ _____________________________________________________ Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/sda1 * 2,048 206,847 204,800 7 HPFS/NTFS /dev/sda2 206,911 1,440,372,735 1,440,165,825 7 HPFS/NTFS /dev/sda3 1,440,372,736 1,464,856,575 24,483,840 7 HPFS/NTFS Drive: sdb ___________________ _____________________________________________________ Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/sdb1 * 2,048 206,847 204,800 7 HPFS/NTFS /dev/sdb2 206,911 1,342,554,688 1,342,347,778 7 HPFS/NTFS /dev/sdb3 1,930,344,448 1,953,521,663 23,177,216 7 HPFS/NTFS /dev/sdb4 1,342,556,158 1,930,344,447 587,788,290 5 Extended /dev/sdb5 1,342,556,160 1,896,806,399 554,250,240 83 Linux /dev/sdb6 1,896,808,448 1,930,344,447 33,536,000 82 Linux swap / Solaris Drive: pdc_bdadcfbdif ___________________ _____________________________________________________ Disk /dev/mapper/pdc_bdadcfbdif: 750.0 GB, 749999947776 bytes 255 heads, 63 sectors/track, 91182 cylinders, total 1464843648 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/mapper/pdc_bdadcfbdif1 * 2,048 206,847 204,800 7 HPFS/NTFS /dev/mapper/pdc_bdadcfbdif2 
206,911 1,440,372,735 1,440,165,825 7 HPFS/NTFS /dev/mapper/pdc_bdadcfbdif3 1,440,372,736 1,464,856,575 24,483,840 7 HPFS/NTFS /dev/mapper/pdc_bdadcfbdif3 ends after the last sector of /dev/mapper/pdc_bdadcfbdif blkid -c /dev/null: ____________________________________________________________ Device UUID TYPE LABEL /dev/mapper/pdc_bdadcfbdif1 888E54CC8E54B482 ntfs SYSTEM /dev/mapper/pdc_bdadcfbdif2 C2766BF6766BEA1D ntfs OS /dev/mapper/pdc_bdadcfbdif: PTTYPE="dos" /dev/sda1 888E54CC8E54B482 ntfs SYSTEM /dev/sda2 C2766BF6766BEA1D ntfs OS /dev/sda3 BE6CA31D6CA2CF87 ntfs HP_RECOVERY /dev/sda promise_fasttrack_raid_member /dev/sdb1 20B65685B6565B7C ntfs SYSTEM /dev/sdb2 B4467A314679F508 ntfs HP /dev/sdb3 6E10B7A410B77227 ntfs FACTORY_IMAGE /dev/sdb4: PTTYPE="dos" /dev/sdb5 266f9801-cf4f-4acc-affa-2092be035f0c ext4 /dev/sdb6 1df35749-a887-45ff-a3de-edd52239847d swap /dev/sdb: PTTYPE="dos" error: /dev/mapper/pdc_bdadcfbdif3: No such file or directory error: /dev/sdc: No medium found error: /dev/sdd: No medium found error: /dev/sde: No medium found error: /dev/sdf: No medium found error: /dev/sdg: No medium found ============================ "mount | grep ^/dev output: =========================== Device Mount_Point Type Options /dev/sdb5 / ext4 (rw,errors=remount-ro,commit=0) =========================== sdb5/boot/grub/grub.cfg: =========================== # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 
266f9801-cf4f-4acc-affa-2092be035f0c linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc- affa-2092be035f0c ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.38-8-generic-pae } menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c echo 'Loading Linux 2.6.38-8-generic-pae ...' linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc-affa-2092be035f0c ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.38-8-generic-pae } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sdb1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(/dev/sdb,msdos1)' search --no-floppy --fs-uuid --set=root 20B65685B6565B7C chainloader +1 } menuentry "Windows Recovery Environment (loader) (on /dev/sdb3)" --class windows --class os { insmod part_msdos insmod ntfs set root='(/dev/sdb,msdos3)' search --no-floppy --fs-uuid --set=root 6E10B7A410B77227 drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### =============================== sdb5/etc/fstab: =============================== # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdb5 during installation UUID=266f9801-cf4f-4acc-affa-2092be035f0c / ext4 errors=remount-ro 0 1 # swap was on /dev/sdb6 during installation UUID=1df35749-a887-45ff-a3de-edd52239847d none swap sw 0 0 =================== sdb5: Location of files loaded by Grub: =================== 900.1GB: boot/grub/core.img 825.0GB: boot/grub/grub.cfg 688.7GB: boot/initrd.img-2.6.38-8-generic-pae 688.0GB: boot/vmlinuz-2.6.38-8-generic-pae 688.7GB: initrd.img 688.0GB: vmlinuz =========================== Unknown MBRs/Boot Sectors/etc ======================= Unknown BootLoader on pdc_bdadcfbdif3 =======Devices which don't seem to have a corresponding hard drive============== sdc sdd sde sdf sdg =============================== StdErr Messages: =============================== ERROR: dos: partition address past end of RAID device hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory ERROR: dos: partition address past end of RAID device

    Read the article

  • CodePlex Daily Summary for Friday, May 21, 2010

    CodePlex Daily Summary for Friday, May 21, 2010New Projects.Net wrapper around the Neo4j Rest Server: Neo4jRestSharp is a .Net API wrapper for the Neo4j Rest Server. Neo4j is an open sourced java based transactional graph database that stores data ...3D Editor Application Framework: A starting point for building 3D editing applications, such as video game editors, particle system editors, 3D modelling tools, visualization tools...Bulk Actions for SharePoint: This project aims to provide some essential and generic bulk actions for SharePoint lists. Idea is to include any custom actions that can be applie...CineRemote - The hometheater control board: CineRemote's purpose is to offer an alternative to expensive control system for dedicated hometheater rooms. CrmContrib: CrmContrib is a collection of useful items for developers and customizers working with the Dynamics CRM platform.db2xls: OleDb,Sql Server,Sqlite,....to excel, from sqlHappyNet - Silverlight reference application: HappyNet is a project using best practices to build an e-commerce web site. It is a full Silverlight application based on a solid architecture (PR...IP Multicast Library: IP Multicast Library makes it easier for developers to add Multicast, messaging to projects.Linkbutton Web Part: This Link Button Web Part can be installed in any SharePoint 2007 web site. You can onfigure a URL with query string that will be used by the Link...Majordomus pro Windows: Nástroj určený pro správce a vývojáře slouží k řízenému spuštění používaných a vypnutí nepotřebných služeb, procesů a aplikací ve Windows. Pomocí s...MRDS Samples: The MRDS Samples site hosts a variety of code samples for Microsoft Robotics Developer Studio (RDS).Mute4: Mute4 is a simple application that allows you to set a mute/vibration profile and it will switch back to your normal profile automatically after a ...Niko Neko Pureya: Niko Neko Pureya is a media player designed for people who watches a series of videos (like anime). It is very simple and easy to use & learn. And ...NVPX - VP8 Video Codec for .Net: NVPx allows you to use the now open-source VP8 codec on the .Net platform.openrs: openrs is an open-source RuneScape 2 emulator designed to be used with newer engine clients.Prism Evaluation: prism evaluationProj4Net: Proj4Net is a C#/.Net library to transform point coordinates from one geographic coordinate system to another, including datum transformation. The ...Read it to me!: Read it to me will allow you to load txt and rtf files and then speak them using SAPI 5 voices that are installed on your computer with an option t...sGSHOPedit: -SilverDice: SilverDice...SilverDude Toolkit for Silverlight: SilverDude Toolkit for Silverlight contains a collection of silverlight controls making life easier for developers. You'll no longer have to worry ...Silverlight Report: Open-Source Silverlight Reporting Engine. This project allows you to create and print reports using Silverlight 4.SimTrain5000: Train simulation project on University College of Northern Denmark.Springshield Sample Site for EPiServer CMS: City of Springshield - The accessible sample site for EPiServer CMS 6.Teach.Net: Teach.Net is a library/framework that can be used to create applications for testing and learning.The Amoeba Project: The Amoeba Project is a platform to be developed to embrace most of the latest Microsoft Technologies. 
Still in a conceptual stage however, it loo...The Fastcopy Helper: The Fastcopy Helper is a auxiliary tool for fastcopy.vow: vowWCF Client Generator: This code generator avoids the shortcomings of svcutil when generating proxies for services with a large number of methods.WebCycle: WebCycle is a screensaver application that cycles through web pages. This was originally created to cycle through Reporting Services reports so th...XGate2D - XNA 2D Game Engine: XGate2D is 2D game engine built using XNA Framework. XGate2D currently has 8 features: input handler, animation, Graphical User Interface (GUI), ...XNA Catapult Minigame for XNA 4: XNA 4 implementation of the Catapult Minigame Sample from XNA Creators Club.New ReleasesADefHelpDesk: ADefHelpDesk (Standard ASP.NET Version) 01.00.00: ADefHelpDesk a Help Desk / Ticket Tracker module * NOTE: This version is NOT a DotNetNuke module - It is a standard ASP.NET Application * SQL 2005...Bulk Actions for SharePoint: First Release: First Release - Includes following bulk list actions: *Delete *Checkin/Checkout *Publish/Unpublish *Move *Update MetadataCheck-in Wizard for ArenaChMS: v1.2.1: v 1.2.0 updated to work with Arena 2009.2 (see notes below). Added support for "At Kiosk" and "At Location" printing. Added support for print l...ConfigTray: 1.5: Version 1.5 will have a new UI for managing ConfigTray config. Instead of manually editing configtray.exe.config to add/delete/edit settings and fi...CrmContrib: CrmContribWorkflow 1.0 ALPHA1: This is an initial release of the CrmContribWorkflow 1.0 components. At the moment there are only two activities included in this release. Add Cont...DemotToolkit: DemotToolkit-0.1.0.50830: Initial release.DemotToolkit: DemotToolkit-0.1.1.51107: Fixed crashing in some circumstances.Dot Game: Dot Game Stable Release: Dot Game This is latest stable release without network play mode. (Network play mode is under development)Dynamic Survey Forms - SharePoint Web Part: Fix for missing dlls and documentation: Added missing assemblies to setup.zip. Installation instructions.EnhSim: V1.9.8.7: Added Sharpened Twilight ScaleEvent Scavenger: Viewer 3.2.2: Fixed a bug in the viewer where the previous view 'Top x' filter was not restored after the application was reopened.F# Project Extender: V0.9.2.0 (VS2008,VS2010): F# project extender for Visual Studio 2008 and Visual Studio 2010. Fixed bugs: -VS2010 crash on MoveUp(MoveDown) of renamed file -Adding files brea...FlickrNet API Library: 3.0 Beta 2: The final Beta for the 3.0 release. Fixes a major issue with Photosets.GetList as well as a number of smaller bugs, and adds the new Usage extras ...Folder Bookmarks: Folder Bookmarks 1.5.7: The latest version of Folder Bookmarks (1.5.7), with the new Help feature - all the instructions needed to use the software (If you have any sugges...Linkbutton Web Part: V1.1: Use WinZip to unzip. See docs folder for installation instructions.Live-Exchange Calendar Sync: Live-Exchange Calendar Sync Final: Live-Exchange Calendar Sync Beta May 14, 2010 release of Live-Exchange Calendar Sync 1.0 . (Version 46127) Getting StartedInfo about installation ...MEFedMVVM: MEFedMVVM: This version contains the MEFedMVVM ViewModelLocator and also some basic services such as Mediator and StateManager. You can download the code fr...Mentor Text Database: May 2010 Release with instrumentation: This should function the same as the previous version. 
Some enhancements have been made, and additional instrumentation has been added to help anal...Merthin: SSF 2010: Code and documentation presented at the Student Science Fair of the Faculty of Mathematics and Computer Science at the University of Habana. The ma...NB_Store - Free DotNetNuke Ecommerce Catalog Module: NB_Store_02.01.00: NB_Store v2.1.0 THIS IS AN ALPHA RELEASE FOR TESTING ONLY......DO NOT USE IT ON A LIVE SYSTEM.NerdDinner.com - Where Geeks Eat: NerdDinner - Four Database Access Samples: Chris Sells worked with Nick Muhonen from Useable Concepts and Nick created four samples exploring how an ASP.NET MVC application can access databa...openrs: Devstart: Trunk release, empty project.Over Store: OverStore 1.19.0.0: - Version number is increased. - Add methods for specifying custom callback methods to TableMappingRepositoryConfiguration. - Object attaching fu...Rnwood.SmtpServer: Rnwood.SmtpServer 2.0: SmtpServer 2.0 is a .NET SMTP server component written in pure c#. It was written to power http://smtp4dev.codeplex.com/ but can easily be used by ...Scrum Sprint Monitor: v1.0.0.48524 (.NET 4-TFS 2010): What is new in this release? #6132 - Bug with open work hours; Added untested support for MSF for Agile process template; Improved data reporti...SharePoint Rsync List: 1.0.0.0: This initial 1.0 release includes a new feature which manages timer jobs on your sync listShould: Beta 1.1: Updated the namespaces. The extension methods are now in the root Should namespace. The other classes are not in child namespaces.SilverDude Toolkit for Silverlight: SilverDude Toolkit for Silverlight: Kindly give your comments about this project and tell how you feel about it. I'm still new in creating controls, hopefully you guys can support me....Silverlight Report: SilverlightReport_v0.1_alpha_bin: SilverlightReport v0.1 alphaSLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit 1.0.2.0: Fixed a problem with long referenced DetectionResults that might have caused an IndexOutOfRangeException Added Marker.LoadFromResource to get rid...The Fastcopy Helper: My Fastcopy Helper 1.0: This Source Code Is use a method to run it . The method is thinked by my bain. So , The Performance maybe lower.Thinktecture.DataObjectModel: Thinktecture.DataObjectModel v0.12: Some bugs fixed. See ChangeLog.txt for more infos.Umbraco CMS: Umbraco 4.0.4.1: A stability release fixing 13 issues based on feedback from 4.0.3 users. Most importantly is a fix to a serious date bug where day and month could ...Usa*Usa Libraly: Smart.Web.Mobile ver 0.2: Smart.Web.Mobile pictgram convert library for japanese galapagos k-tai( ゚д゚) ver 0.2. - Custom encoding for HttpRequest.ContentEncoding / HttpResp...VCC: Latest build, v2.1.30520.0: Automatic drop of latest buildvow: dream: I have a dreamvow: test: testWCF Client Generator: Version 0.9.1.42927: Initial feature set complete. 
Detailed UI pending.WebCycle: WebCycle 1.0.20: Initial CodePlex releaseWebCycle: WebCycle 1.0.21: Added Uri validataion before saving settingsWhois Application: 1.5 release: - uses the whois.iana.org to dynamically lookup the whois server for each top level domain - enables enter key press for searchWing Beats: Wing Beats 0.9: This first release is focused on the core functionality and XHTML 1.0 strict generation in Asp.NET MVC.Most Popular ProjectsWeb Service Software FactoryPlasmaAquisição de Sinais Vitais em Tempo Real (Vital signs realtime data acquisition)Octtree XNA-GS DrawableGameComponentRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Most Active ProjectsRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationPHPExcelBlogEngine.NETSQL Server PowerShell ExtensionsCaliburn: An Application Framework for WPF and SilverlightNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices: Windows Azure Security GuidanceFluent Ribbon Control Suite

    Read the article

  • CodePlex Daily Summary for Saturday, February 20, 2010

    CodePlex Daily Summary for Saturday, February 20, 2010New ProjectsBerkeliumDotNet: BerkeliumDotNet is a .NET wrapper for the Berkelium library written in C++/CLI. Berkelium provides off-screen browser rendering via Google's Chromi...BoxBinary Descriptive WebCacheManager Framework: Allows you to take a simple, different, and more effective method of caching items in ASP.NET. The developer defines descriptive categories to deco...CHS Extranet: CHS Extranet Project, working with the RM Community to build the new RM Easylink type applicationDbModeller: Generates one class having properties with the basic C# types aimed to serve as a business object by using reflection from the passed objects insta...Dice.NET: Dice.NET is a simple dice roller for those cases where you just kinda forgot your real dices. It's very simple in setup/use.EBI App with the SQL Server CE and websync: EBI App with the SQL Server CE and SQL Server DEV with Merge Replication(Web Synchronization) ere we are trying to develop an application which yo...Family Tree Analyzer: This project with be a c# project which aims to allow users to import their GEDCOM files and produce various data analysis reports such as a list o...Go! Embedded Device Builder: Go! is a complete software engineering environment for the creation of embedded Linux devices. It enables you to engineer, design, develop, build, ...HiddenWordsReadingPlan: HiddenWordsReadingPlanHtml to OpenXml: A library to convert simple or advanced html to plain OpenXml document.Jeffrey Palermo's shared source: This project contains multiple samples with various snippets and projects from blog posts, user group talks, and conference sessions.Krypton Palette Selectors: A small C# control library that allows for simplified palette selection and management. It makes use of and relies on Component Factory's excellen...OCInject: A DI container on a diet. This is a basic DI container that lives in your project not an external assembly with support for auto generated delegat...Photo Organiser: A small utility to sort photos into a new file structure based on date held in their XMP or EXIF tags (YYYY/MM/DD/hhmmss.jpg). Developed in C# / WPF.QPAPrintLib: Print every document by its recommended programmReusable Library: A collection of reusable abstractions for enterprise application developer.Runtime Intelligence Data Visualizer: WPF application used to visualize Runtime Intelligence data using the Data Query Service from the Runtime Intelligence Endpoint Starter Kit.ScreenRec: ScreenRec is program for record your desktop and save to images or save one picture.Silverlight Internet Desktop Application Guidance: SLIDA (Silverlight Internet Desktop Applications) provides process guidance for developers creating Silverlight applications that run exclusively o...WSUS Web Administration: Web Application to remotely manage WSUSNew Releases7zbackup - PowerShell Script to Backup Files with 7zip: 7zBackup v. 1.7.0 Stable: Bug Solved : Test-Path-Writable fails on root of system drive on Windows 7. 
Therefore the function now accepts an optional parameter to specify if ...aqq: sec 1.02: Projeto SEC - Sistema economico Comercial - em Visual FoxPro 9.0 OpenSource ou seja gratis e com fontes, licença GNU/GPL para maiores informações e...ASP.NET MVC Attribute Based Route Mapper: Attribute Based Routes v0.2: ASP.NET MVC Attribute Based Route MapperBoxBinary Descriptive WebCacheManager Framework: Initial release: Initial assembly release for anyone wanting the files referenced in my talk at Umbraco's 5th Birthday London meetup 16/Feb/2010 The code is fairly...Build Version Increment Add-In Visual Studio: Build Version Increment v2.2 Beta: 2.2.10050.1548Added support for custom increment schemes via pluginsBuild Version Increment Add-In Visual Studio: BuildVersionIncrement v2.1: 2.1.10050.1458Fix: Localization issues Feature: Unmanaged C support Feature: Multi-Monitor support Feature: Global/Default settings Fix: De...CHS Extranet: Beta 2.3: Beta 2.3 Release Change Log: Fixed the update my details not updating the department/form Tried to fix the issue when the ampersand (&) is in t...Cover Creator: CoverCreator 1.2.2: Resolved BUG: If there is only one CD entry in freedb.org application do nothing. Instalation instructions Just unzip CoverCreator and run CoverCr...Employee Scheduler: Employee Scheduler 2.3: Extract the files to a directory and run Lab Hours.exe. Add an employee. Double click an employee to modify their times. Please contact me through ...EnOceanNet: EnOceanNet v1.11: Recompiled for .NET Framework 4 RCFree Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 beta 4 Released: Hi, This release contains fix for the following bugs: * DataBinding was not working as expected with RIA services. * DataSeries visual wa...Html to OpenXml: HtmlToOpenXml 0.1 Beta: This is a beta version for testing purpose.Jeffrey Palermo's shared source: Web Forms front controller: This code goes along with my blog post about adding code that executes before your web form http://jeffreypalermo.com/blog/add-post-backs-to-mvc-nd...Krypton Palette Selectors: Initial Release: The initial release. Contains only the KryptonPaletteDropButton control.LaunchMeNot: LaunchMeNot 1.10: Lots has been added in this release. Feel free, however, to suggest features you'd like on the forums. Changes in LaunchMeNot 1.10 (19th February ...Magellan: Magellan 1.1.36820.4796 Stable: This is a stable release. It contains a fix for a bug whereby the content of zones in a shared layout couldn't use element bindings (due to name sc...Magellan: Magellan 1.1.36883.4800 Stable: This release includes a couple of major changes: A new Forms object model. See this page for details. Magellan objects are now part of the defau...MAISGestão: LayerDiagram: LayerDiagramMatrix3DEx: Matrix3DEx 1.0.2.0: Fixes the SwapHandedness method. This release includes factory methods for all common transformation matrices like rotation, scaling, translation, ...MDownloader: MDownloader-0.15.1.55880: Fixed bugs.NewLineReplacer: QPANewLineReplacer 1.1: Replace letter fast and easy in great textfilesOAuthLib: OAuthLib (1.5.0.1): Difference between 1.5.0.0 is just version number.OCInject: First Release: First ReleasePhoto Organiser: Installer alpha release: First release - contains known bugs, but works for the most part.Pinger: Pinger-1.0.0.0 Source: The Latest and First Source CodePinger: Pinger-1.0.0.2 Binary: Hi, This version can! work on Older versions of windows than 7 but i haven't test it! 
tnxPinger: Pinger-1.0.0.2 Source: Hi, It's the raw source!Reusable Library: v1.0: A collection of reusable abstractions for enterprise application developer.ScreenRec: Version 1: One version of this programSense/Net Enterprise Portal & ECMS: SenseNet 6.0 Beta 5: Sense/Net 6.0 Beta 5 Release Notes We are proud to finally present you Sense/Net 6.0 Beta 5, a major feature overhaul over beta43, and hopefully th...Silverlight Internet Desktop Application Guidance: v1: Project templates for Silverlight IDA and Silverlight Navigation IDA.SLAM! SharePoint List Association Manager: SLAM v1.3: The SharePoint List Association Manager is a platform for managing lists in SharePoint in a relational manner. The SLAM Hierarchy Extension works ...SQL Server PowerShell Extensions: 2.0.2 Production: Release 2.0.1 re-implements SQLPSX as PowersShell version 2.0 modules. SQLPSX consists of 8 modules with 133 advanced functions, 2 cmdlets and 7 sc...StoryQ: StoryQ 2.0.2 Library and Converter UI: Fixes: 16086 This release includes the following files: StoryQ.dll - the actual StoryQ library for you to reference StoryQ.xml - the xmldoc for ...Text to HTML: 0.4.0 beta: Cambios de la versión:Inclusión de los idiomas castellano, inglés y francés. Adición de una ventana de configuración. Carga dinámica de variabl...thor: Version 1.1: What's New in Version 1.1Specify whether or not to display the subject of appointments on a calendar Specify whether or not to use a booking agen...TweeVo: Tweet What Your TiVo Is Recording: TweeVo v1.0: TweeVo v1.0VCC: Latest build, v2.1.30219.0: Automatic drop of latest buildVFPnfe: Arquivo xml gerado manualmente: Segue um aquivo que gera o xml para NF-e de forma manual, estou trabalhando na versão 1.1 deste projeto, aguarde, enquanto isso baixe outro projeto...Windows Double Explorer: WDE v0.3.7: -optimization -locked tabs can be reset to locked directory (single & multi) -folder drag drop to tabcontrol creates new tab -splash screen -direcl...WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.5: Visual Studio 2008 and RC 2010 are now supported. Different profiles can now be used to compile the shader with. ChangesVisual Studio RC 2010 is ...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Image Resizer Powertoy Clone for WindowsASP.NETDotNetNuke® Community EditionMicrosoft SQL Server Community & SamplesMost Active ProjectsDinnerNow.netRawrSharpyBlogEngine.NETSharePoint ContribjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices – Enterprise LibraryPHPExcelFluent Ribbon Control Suite

    Read the article

  • CodePlex Daily Summary for Monday, April 12, 2010

    CodePlex Daily Summary for Monday, April 12, 2010New Projects3 Hour Game Design Contest: The 3 Hour Game Design Contest is a programming contest for making simple games in 3 hours. 3 hours may not seem like enough time to make a game, b...BI Monkey SSIS ETL Framework: The BI Monkey SSIS ETL Framework is an ETL Execution, Control and Logging system for ETL projects using SSIS. It is supported by a SQL Server metad...Blend Sample Data Helpers: Helper behavior classes to generate sample images and data from Internet sources such as Flickr images. Bold TCP for Delphi 7: Open Sourcing the Bold TCP for Delphi 7.cfThreadingTools: This library project contains classes and extensions which will allow easy handling of multi-threaded UI-accesses.CuBiX_SDL: CuBiX_SDL : CuBiX est un projet personnel.Draglets: Draglets makes it easier for editors and CMS-developers to move and reorder content at their web sites. It's developed in ASP.NET, C# with WCF and ...DSQLT - Dynamic SQL Templates: DSQLT - Dynamic SQL Templates Use Stored Procedures as templates for dynamic SQL statements. Substitute parameters @0-@9 with values like objectna...Edtter: Edtter is a sample web application built on ASP.NET MVC 2 Framework. (Japanese Version Only)Forms Based Authentication Management - SharePoint2007FBA: This is my own update to Stacy Draper's FBABasic project for Forms Based Authentication in MOSS 2007. In additon to managing your fba user's roles,...Height Map to 3D World at XNA: Height Map to 3D World is a XNA project that developed firstly by Eric Grossinger and secondly improved by Karadeniz Technical University Computer ...HouseFly: A simple contact and note taking applicationITM 495 - iPhone Web App: School ProjectKaufleute: This will be finished laterLR: this project is about connecting toPowerShell Integration Services: A set of tools aimed at Extract Transform and Load tasks. Focused on getting the most common ETL tasks done without SSIS. Salient: A collection of, hopefully, useful libraries.Samurai.Validation: Extensible and flexible .Net object validation frameworkSamurai.Workflow: Samurai Workflow is a slim, easy-to-use workflow framework for WPF applications.SharePoint User Management WebPart: SharePoint User Management WebPartUrl shorte(ne)r: It's simple Url Shortener (like: http://tinyurl.com) Currently only Polish language is supported. In future will be provided multi language suppor...Yasbg: Yasbg (pronounced yas-bug) is Yet Another Static Blog Generator. It is made in C# using MarkdownSharp for markdown. Currently in alpha. New Releases.NET Extensions - Extension Methods Library: Release 2010.06: Added an universal approach for grouping extension methods like conversions. Conversion are now available on any data type (it's actually extension...3 Hour Game Design Contest: 3H-GDC mVII: This is the collection of game files for the 7th 3H-GDCB&W Port Scanner: Black`n`White Port Scanner 3.0: B&W Port Scanner 3 includes FTP Server detection tool, Better stability, Optimized memory management, Saving & Opening Result sets ... and more new...BI Monkey SSIS ETL Framework: Framework v1 Alpha: This Alpha release is not fully tested and some functionality is not operating as intended.Bluetooth Radar: Version 1.7: UI Changes Device UserControl Randomly placed devices.BugTracker.NET: BugTracker.NET 3.4.1: For the tasks/time tracking feature, added a way of viewing all the tasks at once, not just the tasks for one bug. 
Also added a way of exporting a...cfThreadingTools: cfThreadingTools 0.1.1.8: This is the first public available release. Following items are included: BaseTools-class which allows thread-safe setting of properties and callin...DeepZoom Pivot Constructor: DeepZoom Pivot Constructor v0.1: This is a test release of the library platform - Targets .NET 3.5 No samples yet, etc., but it works well :-)DSQLT - Dynamic SQL Templates: Initial release with License Included: nothing changed but license print procedure included the zip file contains database backup SQL script readmeForms Based Authentication Management - SharePoint2007FBA: SharePoint2007FBA 1.0.0.0: Downloads for the Project solution and the WSP package. Please read the Setup Guide. If you are unfamiliar with setting up Forms Based Authenticati...Foursquare BlogEngine Widget: foursquare widget for BlogEngine.NET version 0.3: To see the changes which have been made, visit http://philippkueng.ch/post/Foursquare-BlogEngineNET-widget-version-03.aspx For installation instruc...Framework Detector: FrameworkDetect Support .NET 4 v2: FrameworkDetect Support .NET 4Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.46860: This is the second beta release of the SVN based version incrementor. Please feel free to create a thread in the discussion tabs and provide feedb...Height Map to 3D World at XNA: 3DWorld: Just open .rar file and extract it any folder and run Proje2Dto3D.exe file.HTML Ruby: 6.20.2: Removed rubyLineSpace option Improved options panel Fixed ruby text font-size rendering issue with complex ruby annotation Removed more waste...HTML Ruby: 6.20.3: Removed unused code Temporary partial fix for Firefox 3.7a4pre nightly buildHTML Ruby: 6.21.0: Added support for current HTML5 ruby annotation format. All ruby annotations are converted to XHTML 1.1 complex ruby annotation.Kooboo HTML form: Kooboo HTML Form Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Menu: Kooboo CMS Menu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Meta: Kooboo Meta Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo PageMenu: Kooboo CMS PageMenu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Search: Kooboo CMS Search module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Numina Application/Security Framework: Numina.Framework Core 50212: Added bulk import user page Added General settings page for updating Company Name, Theme, and API Key Add/Edit application calls Full URL to h...Rawr: Rawr 2.3.14: - Rawr3: Tons of fixes for Rawr3 compatability and UI. - Significant performance improvements all around. - More fixes and improvements to Wowhea...Rich Ajax empowered Web/Cloud Applications: 6.4 beta 2: The first fully featured version of Visual webGui offering web/cloud development tool that puts all ASP.NET Ajax limits behind with enhanced perfor...SharePoint User Management WebPart: User Management Web part 1.0: Most of the organization have one SharePoint Site which is configured with windows authenticated which is for internal employees having AD authenti...SkeinLibManaged: SkeinLibManaged 1.1.0.0 (Beta): This is the compiled DLL with XML documentation, so there should be plenty of context sensitive help and Intellisense. 
This is the Release version,...VCC: Latest build, v2.1.30411.0: Automatic drop of latest buildVFPX: Code References 1.1 Beta: Visit the Code References Info Page for complete information about this release.VisioAutomation: VisioAutomation 2.5.0: VisioAutomation 2.5.0- General cleanup/bugfixes - Many low-level changes the the VisioAutomation extension methods - these are far fewer now - This...Visual Studio DSite: English To Spanish Translator (Visual C++ 2008): A simple english to spanish translator made in visual c 2008, using the Google Translate API.WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.10.00: Whats NewFile Browser: Inherits Folder Permissions from DotNetNuke Updated the Editor to Version 3.2.1 revision 5372 Added CkEditor jQuery Adap...Web/Cloud Applications Development Framework | Visual WebGui: 6.4 beta 2: The first fully featured version of Visual webGui offering web/cloud development tool that puts all ASP.NET Ajax limits behind with unique develope...WPF Data Virtualization: 1.0.0.0: First ReleaseYasbg: Yasbg Alpha: ReadmeYet Another Static Blog Generator is a command line utility that generates static html files for blogs. Currently, it is NOT feed enabled. I...異世界の新着動画: Ver. 10-04-12: ニコ生の仕様変更に対応 アンケート時間の設定追加Most Popular ProjectsWBFS ManagerRawrASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsRawrnopCommerce. Open Source online shop e-commerce solution.AutoPocopatterns & practices – Enterprise LibraryShweet: SharePoint 2010 Team Messaging built with PexFarseer Physics EngineNB_Store - Free DotNetNuke Ecommerce Catalog ModuleIonics Isapi Rewrite FilterBlogEngine.NETBeanProxy

    Read the article

  • CodePlex Daily Summary for Thursday, February 18, 2010

    CodePlex Daily Summary for Thursday, February 18, 2010New ProjectsASP .NET MVC CMS (Content Management System): Open source Content management system based on ASP.NET MVC platform.AutoFolders: AutoFolders package for Umbraco CMS This package auto creates folder structures for new and existing pages. The folders structures can be date bas...AutoPex: This project combines CCI with Pex by allowing the developer to run Pex on methods based on differences between two assemblies. Canvas VSDOC Intellisense: JavaScript VSDOC documentation for HTML5 Canvas element and 2d Context interface.CSUDH: California State University, Domguiez Hills Game projectsD-AMPS: System for Analysis of Microelectronic and Photonic StructuresDispX: Disease PredictorEmployee Info Starter Kit: This is a starter kit, which includes very simple user requirements, where we can create, read, update and delete (crud) the employee info of a com...Enhanced Discussion Board for SharePoint: Provide later... publishing project to share with Malaysians firstFlowPad: Flowpad is a light, fast and easy to use flow diagram editor. It helps you quickly pour your algorithms from your mind to 'paper'. It is written us...Henge3D Physics Library for XNA: Henge3D is a 3D physics library written in C# for XNA. It is implemented entirely in managed code and is compatible with the XBOX 360.Hybrid Windows Service: Abstracted design pattern for running a windows service interactively. Implemented as a base class to replace ServiceBase it will automatically pro...Image Cropper datatype for Umbraco: Stand alone version of the Image Cropper datatype in Umbraco. Listinator: A social wishlist application done in asp.net MVCMicrosoft Dynamics Ax User Group (AXUG) Code Repository: The goal of this project is to make it easier for customers of Microsoft Dynamics Ax to be able to share relevant source code. Code base should inc...Mobil Trials: Sebuah game sederhana yang dibuat di atas Silverlight 3.0 dengan bantuan Physics Helper 3.0 Demo : http://gameagam.co.cc/default.html Mirror link...NavigateTo Providers: This project is a collection of NavigateTo providers for Visual Studio 2010. NExtLib: NExtLib is a general-purpose extension library for .NET, which adds some useful features and addresses some alleged omissions.Nom - .NET object-mapper: Nom is a light-weight, storage-type agnostic persistence framework which is intended to provide an abstraction over both relational and non-relatio...Numerical Methods on Silverlight: Numerical Methods, Silverlight, Math Parser, Simple, EulerOpenGLViewController for Visual Basic .NET 2008: A single class in pure VB.NET code to create and control an OpenGL window by calling opengl32.dll directly without use of additional wrapper librar...RestaurantMIS: RestaurantMIS is a simple Restaurant management system developed in Visual C# 2008 with Chinese language.SmartKonnect: <project name>A WPF application for windows with shoutcast, twitter, facebook and etc.SSRS Excel file Sheet rename: SSRS wont support renaming excel reports sheet rename. This program support to generate the report and change the excel sheet nameSWENTRIZ.NET: SWENTRIZ.NET allows to build graphics of implicit functions via .NET functionality.TFT: Tropical forecast tracker is a web application. It will measure the error of the National Hurricane Center's forecast as compared to the actual tr...WCF Dynamic Client Proxy: A WCF Dynamic Client Proxy so you don't have to inherit from ClientBase all the time. 
The proxy also has fault tolerance so you don't have to dispo...Web.Config Role Provider: Stores ASP.NET Roles in web.config. Easy to set up and deploy. Works great for simple websites with authentication. The projects includes support ...WPF Line of Business App: Example WPF patterns for line of business applications. Includes navigation, animation, and visualization.YuBiS Framework: Silverlight and WF based a workflow RAD framework. New ReleasesASP .NET MVC CMS (Content Management System): AtomicCms 1.0: This is the first public release of AtomicCms. To get more information about this content management system, visit website http://atomiccms.com/Blogsprajeesh.Blogspot samples: Designing Modular Smart Clients using CAL: This whitepaper provides architectural guidance for designing and implementing enterprise WPF/ silverlight client applications based on the Composi...DB Ghost Build Tools: 1.0.2: Made a change to the datetime format per dewee.DotNetNuke® Community Edition: 05.02.03: Major HighlightsFixed the issue where LinkClick.aspx links were incorrect for child portals Fixed the issue with the PayPal URL settings. Fixed...Employee Directory webpart for sharepoint 2007 user profiles: Employee Directory Source V2.0: Features: 1. Displays a complete list of all Active Directory profiles imported by the SSP into SharePoint 2007. 2. Displays the following fields ...Enhanced Discussion Board for SharePoint: Alpha Release: Meant for those who attended my presentation. Not cleaned upESPEHA: Espeha 9 PFR: Some small issues fixedFlowPad: FlowPad 0.1: FlowPad 0.1 build. Run it to get fammiliar with major concepts of easy diagramming :)Fluent Ribbon Control Suite: Fluent Ribbon Control Suite BETA2: Fluent Ribbon Control Suite BETA2 Includes: - Fluent.dll (with .pdb and .xml) - Demo Application - Samples - Foundation (Tabs, Groups, Contextu...Henge3D Physics Library for XNA: Henge3D Source (2010-02): This is the initial 2010-02 release.Highlight: Highlight 2.5: This release is primarily a maintenance release of the library and is functionally equivalent to version 2.3 that was released in 2004.Magiq: Magiq 0.3.0: Magiq 0.3.0 contains: Magiq-to-objects: Full support to Linq-to-objects Magiq-to-sql: Full support to Linq-to-sql New features: Plugin model Bu...Microsoft Points Converter: Pre-Alpha ClickOnce Installer v0.03: This release builds on the 0.02 release by adding more thorough validation checks for the amount to convert from as well as adding several currency...Mobil Trials: Mobil Trials Source Code: Sebuah game sederhana yang dibuat di atas Silverlight 3.0 dengan bantuan Physics Helper 3.0 Game ini masih perlu dikembangkan lebih jauh lagi! Si...Numerical Methods on Silverlight: Numerical Methods on Silverlight 1.00: This a new version of Numerical Methods on Silverlight.OAuthLib: OAuthLib (1.5.0.0): Changed point is as next. 7037 Fix spell miss of RequestFactoryMedthodSharePoint Outlook Connector: Version 1.0.1.0: Now it supports simply attaching SharePoint documents feature.Sharpy: Sharpy 1.1 Alpha: This is the second Sharpy release. Only a single change has been made - the foreach function now uses IEnumerable as a source instead of IList. Th...SkinDroidCreator: SkinDroidCreator ALPHA 1: Primera releaseTan solo carga mapas, ya sea de un zip o de un directorio. Para probarlo se pueden cargar temas Metamorph o temas flasheables, ya se...SkyDrive .Net API Client: SkyDrive .Net API Client 0.8.9: SkyDrive .Net API Client assembly version 0.8.9. 
Changes/improvements: - Added Web Proxy support - Introduced WebDriveInfo - Introduced DownloadUrl...spikes: Salient.Web.Administration 1.0: WebAdmin is simply the built in ASP.NetWebAdministrationFiles application cleaned up with codebehinds to make customization and refactoring possibl...SSRS Excel file Sheet rename: Change SSRS excel file sheet name: Create stored procedure from the attached file in sql server 2005/2008SWENTRIZ.NET: Approach 1: First approachTortoiseSVN Addin for Visual Studio: TortoiseSVN Addin 1.0.4: Visual Studio 2005 support Custom working root bug fixingTotal Commander SkyDrive File System Plugin (.wfx): Total Commander SkyDrive File System Plugin 0.8.4: Total Commander SkyDrive File System Plugin version 0.8.4. Bug fixes: - Upgraded SkyDriveWebClient to version 0.8.9 Please do not forget to expres...UnOfficial AW Wrapper dot Net: UAWW.Net 0.1.5.85 Béta 2: Fixed and Added SomethingVr30 OS: Space Brick Break 1.1: A brick breaker. ADD Level 3, 4, 5Web.Config Role Provider: First release: Three downloads are available: A compiled dll ready to use. The schema to enable intellisense The complete source (zipped)WI Assistant: WI Assistant 2.1: This release improves the work item selection functionality. These selection methods are now supported (some require at least one item selected): ...WI Assistant: WI Assistant 2.2: Improved error handling and fix for linking several times in a row. DISCLAIMER: While I have tested this app on my TFS Server, by downloading and...ZipStorer - A Pure C# Class to Store Files in Zip: ZipStorer 2.30: Added stream-oriented methods Improved support for ePUB & Open Container Format specification (OCF) Automatic switch from Deflate to Store algo...Most Popular ProjectsRawrDotNetNuke® Community EditionASP.NET Ajax LibraryFacebook Developer ToolkitWindows 7 USB/DVD Download ToolWSPBuilder (SharePoint WSP tool)Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2Json.NETPerformance Analysis of Logs (PAL) ToolQuickGraph, Graph Data Structures And Algorithms for .NetMost Active ProjectsDinnerNow.netRawrSharpyBlogEngine.NETSimple SavantjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog Modulepatterns & practices – Enterprise LibraryPHPExcelFacebook Developer Toolkit

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
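    One way to see the estimate-versus-actual gap described here (not shown in the original post) is to capture the actual execution plan for the test query. SET STATISTICS XML returns the plan XML with both the estimated and the actual row counts for every operator; this sketch assumes the temporary tables created by the script above:
    SET STATISTICS XML ON;   -- actual plan includes EstimateRows vs. ActualRows per operator
    GO
    SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID
    FROM #Prods P
    JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID
    JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID
    JOIN #Custs C ON H.CustomerID = C.CustomerID
    ORDER BY P.ProductMainID ASC
    OPTION (RECOMPILE, MAXDOP 1);
    GO
    SET STATISTICS XML OFF;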
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
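Before changing anything, you can also watch the undersized memory grants for yourself. A minimal sketch, assuming you have VIEW SERVER STATE permission and run it from a second session while the hash-join query is executing:
SELECT  session_id,
        requested_memory_kb,
        granted_memory_kb,
        used_memory_kb
FROM    sys.dm_exec_query_memory_grants;
-- A tiny granted_memory_kb for the hash-join session, together with the Hash Warning
-- and Sort Warnings events in Profiler, confirms the spills described above.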
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
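For contrast, if the no-new-indexes rule were ever relaxed, indexes along these lines (a sketch of my own, not something from Paul's or Brad's posts) would give the optimizer real seek targets and statistics instead of temporary index spools:
CREATE INDEX IX_OrdDetail_Product
    ON #OrdDetail (ProductMainID, ProductSubID, ProductSubSubID)
    INCLUDE (SalesOrderID, OrderQty);
CREATE INDEX IX_OrdHeader_SalesOrderID
    ON #OrdHeader (SalesOrderID)
    INCLUDE (OrderDate, SalesOrderNumber, CustomerID);
CREATE INDEX IX_Custs_CustomerID
    ON #Custs (CustomerID)
    INCLUDE (TerritoryID);
With supporting indexes in place, the original equality-join form of the query would likely need neither the range-predicate trick nor any join hints.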

    Read the article

  • Connecting Linux to WatchGuard Firebox SSL (OpenVPN client)

Recently, I got a new project assignment that requires me to connect permanently to the customer's network through VPN. They are using a so-called SSL VPN. As I have been using OpenVPN within my company's network for more than 5 years, I was quite curious about their solution and how it would actually differ from OpenVPN. Well, short version: it is a disguised version of OpenVPN. Unfortunately, the company only offers a client for Windows and Mac OS, which shouldn't bother any Linux user after all. OpenVPN is part of every recent distribution and can be activated in a couple of minutes - both client as well as server (if necessary). WatchGuard Firebox SSL - About dialog Borrowing some files from a Windows client installation Initially, I didn't know about the product, so I went through the installation on Windows 8. No obstacles here (and no restart despite installation of TAP device drivers!), and the secured VPN channel was up and running in less than 2 minutes or so. Much appreciated by both parties - customer and me. Of course, this whole client package and my stable, well-proven installation of many years ignited my interest to have a closer look at the WatchGuard client. Compared to the original OpenVPN client (okay, I have to admit this was years ago) this commercial product is smarter in terms of file locations during installation. You'll be able to access the configuration and key files below your roaming application data folder. To get there, simply enter '%AppData%\WatchGuard\Mobile VPN' in your Windows/File Explorer and confirm with Enter/Return. This will display the following files: Application folder below user profile with configuration and certificate files From there we are going to borrow four files, namely: ca.crt client.crt client.ovpn client.pem and transfer them to the Linux system. You might also be able to isolate those four files from a Mac OS client. Frankly, I'm just too lazy to run the WatchGuard client installation on a Mac mini only to find the folder location, and I'll describe why a little further down in this article. I know that you can do that! Feedback in the comment section is appreciated. Configuration of OpenVPN (console) Depending on your distribution the following steps might be a little different, but in general you should be able to get the important information from them. I'm going to describe the steps in Ubuntu 13.04 (Raring Ringtail). As usual, there are two possibilities to achieve your goal: console and UI. Let's see what needs to be done. First of all, you should ensure that you have OpenVPN installed on your system. Open your favourite terminal application and run the following statement: $ sudo apt-get install openvpn network-manager-openvpn network-manager-openvpn-gnome Just to be on the safe side. The four above-mentioned files from your Windows machine can be copied anywhere; either place them below your own user directory or put them (as root) below the default directory: /etc/openvpn At this stage you would already be able to do a test run. Just in case, run the following command and check the output (it's similar to the information you would get from the 'View Logs...' context menu entry in Windows): $ sudo openvpn --config client.ovpn Pay attention to the correct path to your configuration and certificate files. OpenVPN will ask you to enter your Auth Username and Auth Password in order to establish the VPN connection, same as the Windows client.
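Incidentally, you can verify that the borrowed files actually belong together with a few openssl calls. This is just a quick sketch and assumes client.pem holds the RSA private key, which is what the configuration suggests:
$ openssl x509 -noout -subject -dates -in ca.crt
$ openssl x509 -noout -subject -dates -in client.crt
$ # the two digests below should match if certificate and key belong together
$ openssl x509 -noout -modulus -in client.crt | openssl md5
$ openssl rsa -noout -modulus -in client.pem | openssl md5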
Remote server and user authentication to establish the VPN Please complete the test run and see whether all went well. You can disconnect by pressing Ctrl+C. Simplifying your life - authentication file In my case, I actually set up the OpenVPN client on my gateway/router. This establishes a VPN channel between my network and my client's network and allows me to switch machines easily without needing to install the WatchGuard client on each and every machine. That's also very handy for my various virtualised Windows machines. Anyway, as the client configuration, key and certificate files are located on a headless system somewhere under the roof, it is mandatory to have an automatic connection to the remote site. For that you should first change the file extension '.ovpn' to '.conf', which is the default extension for OpenVPN on Linux systems, and then open the client configuration file in order to extend an existing line. $ sudo mv client.ovpn client.conf $ sudo nano client.conf The content should look similar to this:
dev tun
client
proto tcp-client
ca ca.crt
cert client.crt
key client.pem
tls-remote "/O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server"
remote-cert-eku "TLS Web Server Authentication"
remote 1.2.3.4 443
persist-key
persist-tun
verb 3
mute 20
keepalive 10 60
cipher AES-256-CBC
auth SHA1
float 1
reneg-sec 3660
nobind
mute-replay-warnings
auth-user-pass auth.txt
Note: I changed the IP address of the remote directive above (which should be obvious, right?). Anyway, the required change is the auth-user-pass directive on the last line, and we have to create a new authentication file 'auth.txt'. You can give the 'auth-user-pass' directive any file name you'd like. Due to my existing OpenVPN infrastructure my setup differs completely from the content above, but for the sake of simplicity I just keep it 'as-is'. Okay, let's create this file 'auth.txt' $ sudo nano auth.txt and just put two lines of information in it - username on the first, and password on the second line, like so:
myvpnusername
verysecretpassword
Store the file, change permissions, and call openvpn with your configuration file again: $ sudo chmod 0600 auth.txt $ sudo openvpn --config client.conf This should now work without being prompted to enter username and password. In case you placed your files below the system-wide location /etc/openvpn, you can also operate your VPNs via the service command, like so: $ sudo service openvpn start client $ sudo service openvpn stop client Using Network Manager For newer Linux users or the ones with 'console-phobia', I'm now going to describe how to use Network Manager to set up the OpenVPN client. For this, move your mouse to the systray area and click on Network Connections => VPN Connections => Configure VPNs... which opens your Network Connections dialog. Alternatively, use the HUD and enter 'Network Connections'. Network connections overview in Ubuntu Click on the 'Add' button. On the next dialog select 'Import a saved VPN configuration...' from the dropdown list and click on 'Create...' Choose connection type to import VPN configuration Now navigate to the folder where you put the client files from the Windows system and open the 'client.ovpn' file.
Next, on the 'VPN' tab proceed with the following steps (the corresponding directives from the configuration file are given in parentheses): General: check the IP address of the Gateway ('remote' - we used 1.2.3.4 in this setup). Authentication: change Type to 'Password with Certificates (TLS)' ('auth-user-pass'); enter the User name to access your client keys (Auth Name: myvpnusername); enter the Password (Auth Password: verysecretpassword) and choose your password handling; browse for your User Certificate ('cert' - should be pre-selected with client.crt); browse for your CA Certificate ('ca' - should be filled as ca.crt); specify your Private Key ('key' - here: client.pem). Then click on the 'Advanced...' button and check the following values: Use custom gateway port: 443 (the second value of the 'remote' directive); check the selected value of Cipher ('cipher'); check HMAC Authentication ('auth'); enter the Subject Match: /O=WatchGuard_Technologies/OU=Fireware/CN=Fireware_SSLVPN_Server ('tls-remote'). Finally, you have to confirm and close all dialogs. You should be able to establish your OpenVPN-WatchGuard connection via Network Manager. For that, click on the 'VPN Connections => client' entry of your Network Manager in the systray. It is advised that you keep an eye on the syslog to see whether there are any problematic issues that would require additional attention. Advanced topic: routing As stated above, I'm running the 'WatchGuard client for Linux' on my headless server, and since then I'm actually establishing a secure communication channel between two networks. In order to enable your network clients to get access to machines on the remote side, there are two possibilities: proper routing on both sides of the connection, which enables access in both directions, or network masquerading on the 'client side' of the connection. In the following, I'm going to describe the second option in a little more detail. The Linux system that I'm using is already configured as a gateway to the internet. I won't explain the necessary steps to do that, and will only focus on the additional tweaks I had to do. You can find tons of very good instructions and tutorials on 'How to set up a Linux gateway/router' - just use Google. OK, back to the actual modifications. First, we need some information about the network topology and IP address range used on the 'other' side. We can get this very easily from /var/log/syslog after we have established the OpenVPN channel, like so: $ sudo tail -n20 /var/log/syslog Or, if your system is quite busy with logging, like so: $ sudo less /var/log/syslog | grep ovpn The output should contain a PUSH_REPLY message similar to the following one: Jul 23 23:13:28 ios1 ovpn-client[789]: PUSH: Received control message: 'PUSH_REPLY,topology subnet,route 192.168.1.0 255.255.255.0,dhcp-option DOMAIN ,route-gateway 192.168.6.1,topology subnet,ping 10,ping-restart 60,ifconfig 192.168.6.2 255.255.255.0' The interesting part for us is the route option in the sample PUSH_REPLY. Depending on your remote server there might be multiple networks defined (172.16.x.x and/or 10.x.x.x). Important: The IP address range on both sides of the connection has to be different, otherwise you will have to shuffle IPs or increase the netmask. After the VPN connection is established, we have to extend the rules for iptables in order to route and masquerade IP packets properly.
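One prerequisite, and an assumption on my part rather than something covered in this article: the kernel must be allowed to forward packets between interfaces at all, which is normally already the case on a box that acts as an internet gateway. If it is not, a minimal sketch (the file name under /etc/sysctl.d is my own choice):
$ sudo sysctl -w net.ipv4.ip_forward=1
$ echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-vpn-forward.conf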
I created a shell script to take care of those steps:
#!/bin/sh -e
IPTABLES=/sbin/iptables
DEV_LAN=eth0
DEV_VPNS=tun+
VPN=192.168.1.0/24

$IPTABLES -A FORWARD -i $DEV_LAN -o $DEV_VPNS -d $VPN -j ACCEPT
$IPTABLES -A FORWARD -i $DEV_VPNS -o $DEV_LAN -s $VPN -j ACCEPT
$IPTABLES -t nat -A POSTROUTING -o $DEV_VPNS -d $VPN -j MASQUERADE
I'm using the wildcard interface 'tun+' because I have multiple client configurations for OpenVPN on my server. In your case, it might be sufficient to specify device 'tun0' only. Simplifying your life - automatic connect on boot Now that the client connection works flawlessly and the routing and iptables configuration is okay, we might consider adding another 'laziness' factor to our setup. Due to kernel updates or other circumstances it might be necessary to reboot your system. Wouldn't it be nice if the VPN connections were established during the boot procedure? Yes, of course it would be. To achieve this, we have to configure OpenVPN to automatically start our VPNs via the init script. Let's have a look at the responsible 'default' file and adjust the settings accordingly. $ sudo nano /etc/default/openvpn It should have content similar to this:
# This is the configuration file for /etc/init.d/openvpn
#
# Start only these VPNs automatically via init script.
# Allowed values are "all", "none" or space separated list of
# names of the VPNs. If empty, "all" is assumed.
# The VPN name refers to the VPN configutation file name.
# i.e. "home" would be /etc/openvpn/home.conf
#AUTOSTART="all"
#AUTOSTART="none"
#AUTOSTART="home office"
#
# ... more information which remains unmodified ...
With the OpenVPN client configuration as described above, you would either set AUTOSTART to "all" or to "client" to enable automatic start of your VPN(s) during boot. You should also take care that your iptables commands are executed after the link has been established. You can easily test this configuration without a reboot, like so: $ sudo service openvpn restart Enjoy stable VPN connections between your Linux system(s) and a WatchGuard Firebox SSL remote server. Cheers, JoKi

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
Last year, I did a small review on the ANTS Performance Profiler 6.3. Now that it’s a year later and a major version number higher, I thought I’d revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what’s new in version 7.4 of the profiler. Background A performance profiler’s main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect, and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate’s typically seem to be very easy to “jump in” and get started with, requiring very little training. So let’s dig into the Performance Profiler. I’ve constructed a very crude program with some obvious inefficiencies. It’s a simple program that generates random order numbers (really, they could be any unique identifier), adds each to a list, sorts the list, then finds the max and min numbers in the list. Ignore the fact that it’s very contrived and obviously inefficient; we just want to use it as an example to show off the tool:
// our test program
public static class Program
{
    // the number of iterations to perform
    private static int _iterations = 1000000;

    // The main method that controls it all
    public static void Main()
    {
        var list = new List<string>();

        for (int i = 0; i < _iterations; i++)
        {
            var x = GetNextId();

            AddToList(list, x);

            var highLow = GetHighLow(list);

            if ((i % 1000) == 0)
            {
                Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2);
                Console.Out.Flush();
            }
        }
    }

    // gets the next order id to process (random for us)
    public static string GetNextId()
    {
        var random = new Random();
        var num = random.Next(1000000, 9999999);
        return num.ToString();
    }

    // add it to our list - very inefficiently!
    public static void AddToList(List<string> list, string item)
    {
        list.Add(item);
        list.Sort();
    }

    // get high and low of order id range - very inefficiently!
    public static Tuple<int,int> GetHighLow(List<string> list)
    {
        return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s)));
    }
}
So let’s run it through the profiler and see what happens! Visual Studio Integration First, let’s look at how the ANTS profilers integrate with Visual Studio’s menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options: Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions: Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio. Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run. So really, the main difference is that Profile Performance immediately begins profiling with the default selections, whereas Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application. Let’s Fire it Up!
So when you fire up ANTS either via Start Menu or Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started: Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added: ASP.NET Web Application (IIS Express) SharePoint web application (IIS) So this gives us an additional way to profile ASP.NET applications and the ability to profile SharePoint applications as well. You can also choose your level of detail in the Profiling Mode drop down. If you choose Line-Level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to Sample method-level timings. This is performing very light profiling, where basically the profiler collects timings of a method by examining the call-stack at given intervals. Which method you choose depends a lot on how much detail you need to find the issue and how sensitive your program issues are to timing. So for our example, let’s just go with the line and method timing detail. So, we check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button. Profiling the Application Once you start profiling the application, you will see a real-time graph of CPU usage that will indicate how much your application is using the CPU(s) on your system. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another part of the run. Notice that once you select a block, it will give you the call tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis. Analyzing Method Timings So now that we’ve halted the run, we can look around the GUI and see what we can see. By default, the times are shown in terms of percentage of time of the total run of the application, though you can change it in the View menu item to milliseconds, ticks, or seconds as well. This won’t affect the percentages of methods, it only affects what units the times are shown. Notice also that the major hotspot seems to be in a method without source, ANTS Profiler will filter these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. So now that we’ve removed the filter, we see a bit more detail: In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn’t necessary for our purposes. When looking at timings, there are generally two types of timings for each method call: Time: This is the time spent ONLY in this method, not including calls this method makes to other methods. Time With Children: This is the total of time spent in both this method AND including calls this method makes to other methods. 
In other words, the Time tells you how much work is being done exclusively in this method, and the Time With Children tells you how much work is being done inclusively in this method and everything it calls. You can also choose to display the methods in a tree or in a grid. The tree view is the default and it shows the method calls arranged in terms of the tree representing all method calls and the parent method that called them, etc. This is useful for when you find a hot-spot method, you can see who is calling it to determine if the problem is the method itself, or if it is being called too many times. The grid method represents each method only once with its totals and is useful for quickly seeing what method is the trouble spot. In addition, you can choose to display Methods with source which are generally the methods you wrote (as opposed to native or BCL code), or Any Method which shows not only your methods, but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you’re free to choose the organization that best suits what information you are after. Analyzing Method Source If we look at the timings above, we see that our AddToList() method (and in particular, it’s call to the List<T>.Sort() method in the BCL) is the hot-spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot-spot to help call out potential areas of concern. This doesn’t mean the other statistics aren’t meaningful, but that the hot-spot is most likely going to be your biggest bang-for-the-buck to concentrate on. So let’s select the AddToList() method, and see what it shows in the source window below: Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code. This gives you a major indicator of where the trouble-spot in this method is. So in this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived, duh moment, but you’d be surprised how many performance issues become duh moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see drastic improvement in the performance. So to fix this, if we still wanted to maintain the List<T> we’d have many options, including: delay Sort() until after all Add() methods, using a SortedSet, SortedList, or SortedDictionary depending on which is most appropriate, or forgoing the sorting all together and using a Dictionary. Rinse, Repeat! So let’s just change all instances of List<string> to SortedSet<string> and run this again through the profiler: Now we see the AddToList() method is no longer our hot-spot, but now the Max() and Min() calls are! This is good because we’ve eliminated one hot-spot and now we can try to correct this one as well. As before, we can then optimize this part of the code (possibly by taking advantage of the fact the list is now sorted and returning the first and last elements). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible. Calls by Web Request Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications. 
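Going back to the Rinse, Repeat example for a moment: as a sketch of where those changes lead (my code, not listings from the article), the two helpers end up looking roughly like this once List<string> is swapped for SortedSet<string> and the boundary values are read directly:
// Sketch only: assumes the SortedSet<string> swap described above. The order ids are
// all seven digits here, so string ordering happens to match numeric ordering.
public static void AddToList(SortedSet<string> set, string item)
{
    set.Add(item);   // the set keeps itself sorted; note that duplicates are silently ignored
}

public static Tuple<int, int> GetHighLow(SortedSet<string> set)
{
    // Min and Max are maintained by the set, so only the two boundary strings are converted
    return Tuple.Create(Convert.ToInt32(set.Max), Convert.ToInt32(set.Min));
}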
Summary If you like the other ANTS tools, you’ll like the ANTS Performance Profiler as well. It is extremely easy to use, with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for jumping in and finding hot spots rapidly, Red Gate’s Performance Profiler 7.4 is an excellent choice. Technorati Tags: Influencers,ANTS,Performance Profiler,Profiler
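For quick before-and-after comparisons outside the profiler, a plain Stopwatch harness is often enough; this is a hypothetical helper, not something shipped with ANTS:
using System;
using System.Diagnostics;

public static class TimingCheck
{
    // Runs a candidate piece of work once and reports the wall-clock time.
    public static void Run(Action candidate, string label)
    {
        var sw = Stopwatch.StartNew();
        candidate();
        sw.Stop();
        Console.WriteLine("{0}: {1} ms", label, sw.ElapsedMilliseconds);
    }
}
// e.g. TimingCheck.Run(Program.Main, "SortedSet version");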

    Read the article

  • 2 way SSL between SOA and OSB

    - by Johnny Shum
If you need to use 2 way SSL between a SOA composite and external partner links, you can follow these steps: create the identity keystores, trust keystores, and server certificates; set up keystores and SSL on WebLogic; set up the server to use 2 way SSL; configure your SOA composite's partner link to use 2 way SSL; and configure the SOA engine for 2 way SSL. In this case, I use SOA and OSB for the test. I started with separate OSB and SOA domains. I deployed two SOAP-based proxies on OSB and two composites on SOA. In SOA, one composite invokes an OSB proxy service and the other is invoked by OSB. Similarly, in OSB, one proxy invokes a SOA composite and the other is invoked by SOA. 1. Create the identity keystores, trust keystores and the server certificates Since this is a development environment, I use the JDK's keytool to create the stores and use a self-signed certificate. For a production environment, you should use certificates from a trusted certificate authority like Verisign. I created the script below to show what is needed in this step. The only requirement is that, when creating the SOA identity certificate, you MUST use the alias mykey.
STOREPASS=welcome1
KEYPASS=welcome1
# generate identity keystore for soa and osb.  Note: For SOA, you MUST use alias mykey
echo "creating stores"
keytool -genkey -alias mykey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=soa, C=US" -keystore soa-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS
keytool -genkey -alias osbkey -keyalg "RSA" -sigalg "SHA1withRSA" -dname "CN=osb, C=US" -keystore osb-default-keystore.jks -storepass $STOREPASS -keypass $KEYPASS
# listing keystore contents
echo "listing stores contents"
keytool -list -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS
keytool -list -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS
# exporting certs from stores
echo "export certs from stores"
keytool -exportcert -alias mykey -keystore soa-default-keystore.jks -storepass $STOREPASS -file soacert.der
keytool -exportcert -alias osbkey -keystore osb-default-keystore.jks -storepass $STOREPASS -file osbcert.der
# import certs to trust stores
echo "import certs"
keytool -importcert -alias osbkey -keystore soa-trust-keystore.jks -storepass $STOREPASS -file osbcert.der -keypass $KEYPASS
keytool -importcert -alias mykey -keystore osb-trust-keystore.jks -storepass $STOREPASS -file soacert.der -keypass $KEYPASS
SOA Suite uses the JDK's SSL implementation for outbound traffic instead of WebLogic's implementation. You will need to import the partner's public cert into the trusted keystore used by SOA. The default trusted keystore for SOA is DemoTrust.jks and it is located in $MW_HOME/wlserver_10.3/server/lib. (This is set via -Djavax.net.ssl.trustStore in the startup script.) If you use your own trusted keystore, then you will need to import the partner's cert into that keystore instead.
keytool -importcert -alias osbkey -keystore $MW_HOME/wlserver_10.3/server/lib/DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase -file osbcert.der -keypass $KEYPASS
If you do not perform this step, you will encounter this exception at runtime when SOA invokes an OSB service using 2 way SSL: Message send failed: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target  2.  Setup keystores and SSL on WebLogic First, you will need to log in to the WebLogic console and navigate to the server's Configuration->Keystores tab.
Change the Keystores type to Custom Identity and Custom Trust and enter the rest of the fields. Then navigate to the SSL tab, enter the fields in the identity section and expand the Advanced section. Since I am using a self-signed cert in my VM environment, I disabled Hostname Verification. In a real production system, this should not be the case. I also enabled the option "Use Server Certs", so that the application uses the server cert to initiate https traffic (it is important to enable this in OSB). Last, enable the SSL listening port in the Server's Configuration->General tab. 3.  Setup server to use 2 way SSL If you look at the screenshot in the previous step, you can see that the Server->Configuration->SSL->Advanced section has an option for Two Way Client Cert Behavior; you should set this to Client Certs Requested and Enforced. Repeat steps 2 and 3 on OSB. After all these configurations, you have to restart all the servers. 4.  Configure your SOA composite's partner link to use 2 way SSL You do this by modifying the composite.xml in your project: locate the partner link reference and add the property oracle.soa.two.way.ssl.enabled.
<reference name="callosb" ui:wsdlLocation="helloword.wsdl">
   <interface.wsdl interface="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.interface(Hello_PortType)"/>
   <binding.ws port="http://www.examples.com/wsdl/HelloService.wsdl#wsdl.endpoint(Hello_Service/Hello_Port)"
               location="helloword.wsdl" soapVersion="1.1">
      <property name="weblogic.wsee.wsat.transaction.flowOption"
                type="xs:string" many="false">WSDLDriven</property>
      <property name="oracle.soa.two.way.ssl.enabled">true</property>
   </binding.ws>
</reference>
In OSB, you should have checked the HTTPS required flag in the proxy's transport configuration. After this, rebuild the composite jar file so it is ready to deploy in the EM console later. 5.  Configure SOA engine two ways SSL Oracle SOA Suite uses both the Oracle WebLogic Server and Sun Secure Socket Layer (SSL) stacks for two-way SSL configurations. For the inbound web service bindings, Oracle SOA Suite uses the Oracle WebLogic Server infrastructure and, therefore, the Oracle WebLogic Server libraries for SSL. This is already covered by steps 2 and 3 above. For the outbound web service bindings, Oracle SOA Suite uses JRF HttpClient and, therefore, the Sun JDK libraries for SSL. You configure this for the SOA engine in the Enterprise Manager Console: select soa-infra->SOA Administration->Common Properties, then click the link at the bottom of the page, "More SOA Infra Advanced Infrastructure Configuration Properties", and enter the full path of the SOA identity keystore in the value field of the KeyStoreLocation attribute. Click Apply and Return, then navigate to the domain->security->credential page. Here, you provide the password to the keystore. Note: the alias of the certificate must be mykey as described in step 1, so you only need to provide the password to the identity keystore. You accomplish this by clicking Create Map, entering SOA in the Map Name field and clicking OK, then clicking Create Key and entering the following details, where the password is the password for the SOA identity keystore. 6.  Test and Trouble Shooting Once the setup is complete and the servers are restarted, you can deploy the composite in the EM console and test it. In case of error, you can read the server log file to determine the cause of the error.
For example, If you have not setup step 5 and test 2 way SSL, you will see this in the log when invoking OSB from BPEL: java.lang.Exception: oracle.sysman.emSDK.webservices.wsdlapi.SoapTestException: oracle.fabric.common.FabricInvocationException: Unable to access the following endpoint(s): https://localhost.localdomain:7002/default/helloword ####<Sep 22, 2012 2:07:37 PM CDT> <Error> <oracle.soa.bpel.engine.ws> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-0AFDAEF20610F8FD89C5> ............ <11d1def534ea1be0:-4034173:139ef56d9f0:-8000-00000000000002ec> <1348340857956> <BEA-000000> <got FabricInvocationException sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target If you have not enable WebLogic SSL to use server certificate in the console and invoke SOA composite from OSB using two ways SSL, you will see this error: ####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e2> <1348340857776> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.> ####<Sep 22, 2012 2:07:37 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-00000000000000e4> <1348340857786> <BEA-090485> <CERTIFICATE_UNKNOWN alert was received from localhost.localdomain - 127.0.0.1. The peer has an unspecified issue with the certificate. SSL debug tracing should be enabled on the peer to determine what the issue is.> ####<Sep 22, 2012 2:27:21 PM CDT> <Warning> <Security> <rhel55> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <11d1def534ea1be0:-51f5c76a:139ef5e1e1a:-8000-0000000000000124> <1348342041926> <BEA-090497> <HANDSHAKE_FAILURE alert received from localhost - 127.0.0.1. Check both sides of the SSL configuration for mismatches in supported ciphers, supported protocol versions, trusted CAs, and hostname verification settings.> References http://docs.oracle.com/cd/E23943_01/admin.1111/e10226/soacompapp_secure.htm#CHDCFABB   Section 5.6.4 http://docs.oracle.com/cd/E23943_01/web.1111/e13707/ssl.htm#i1200848
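Independent of SOA and OSB, the two-way handshake itself can also be checked from the command line. The sketch below reuses the sample keystore names and passwords from step 1 and assumes OSB's SSL port is 7002, as in the log excerpt above; it converts the JKS keystore to PEM and opens a client-authenticated connection:
keytool -importkeystore -srckeystore soa-default-keystore.jks -srcstorepass welcome1 -destkeystore soa-default-keystore.p12 -deststoretype PKCS12 -deststorepass welcome1
openssl pkcs12 -in soa-default-keystore.p12 -passin pass:welcome1 -nodes -out soa-client.pem
openssl x509 -inform der -in osbcert.der -out osbcert.pem
openssl s_client -connect localhost:7002 -cert soa-client.pem -key soa-client.pem -CAfile osbcert.pem
A successful run shows the server asking for a client certificate and ends with 'Verify return code: 0 (ok)', which is exactly the two-way behaviour configured in step 3.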

    Read the article

  • CodePlex Daily Summary for Saturday, June 05, 2010

    CodePlex Daily Summary for Saturday, June 05, 2010New Projects555 Calculator: A simple calculator to help choosing resistor and capacitor values to get the frequency you're looking for on a 555 timer.BleQua .NET: PL: Program sieciowy BleQua .NET jest multi-komunikatorem. EN: Network program BleQua .NET is multicomunicator.ChatDiplomaWork: ChatSample is the sample project.CSUFVGDC Summer Jam: Repository for VGDC summer game jam.Database Export Wizard: ExportWizard is a Step Wizard for Database Export using ASP.net and SQL Server. It allows easy export in any of the standard formats: CSV, TXT, HTM...Dozer Enterprise Library for .NET: a light .net framework for enterprise applications developmentEmployee Management System: This is an Employee Management System. the goal here is to offer a software that caters to small to mid sized businesses for free. This program a...Fanray: My project on Codeplex.Infragistics Analytics Framework: This project includes wrappers for the Infragistics controls that integrate with the recently launched Microsoft Silverlight Analytics Framework. ...KIME Simio Extensions: This is the official project of KIME Solutions. Here you find any current developments of KIME that are publicly available. KIME develops extension...MindTouch Community Extensions: MindTouch Community Extentions is a Native C# extention library for MindTouch Core that will have Dekiscript functions for full coverage of the API...NginxTray: NginxTray allows you manage easily Nginx Web Server by a Tray icon.NStore: NStore is a virtual store example done with ASP.NET MVC 2.0 tecnologyProjet Campus Numerique + Appli Mobile: Projet de création d'un campus numérique pour l'ISEN et d'un application mobile d'accès.SharePoint 2010 Feature Upgrade Kit: A set of tools for managing upgradable Features in SharePoint 2010. (Upgrading Features is a means of deploying code/artifact updates to existing S...SiteMap Utility for DNN Blog Module: This is a mini-project which allows you to easily add or generate an XML site map to your DotNetNuke® website for the search engines to use to inde...Space Explorer: A small app to help users examine folders to see which files and subfolders are taking up space. Still in development, no releases available curren...SQL Compact Toolbox: SQL Compact Toolbox is a Visual Studio 2010 add-in, that adds scripting, import, export, migrate, rename, run script and other upcoming SQL Server ...Twilverlight: Twliverlight is TweetDeck mixed with Silverlight. Much as I like using TweetDeck, it hogs my memory out, so this is an attempt to write a memory-ef...Visual Studio 2010 FxCop Extension: Visual Studio 2010 FxCop Extension allows to integrate stand-alone FxCop into Visual Studio 2010. You'll be able to analysis your source code with ...VisualStudio 2010 JavaScript Outlining: Visual Studio 2010 editor extension for JavaScript code blocks and custom regions outlining Wiki Shelf: Wiki Shelf is a Wikipedia browser app. The goal is to bring the library experience of browsing books, studying, and researching to the Wikipedia u...X-Arena - Magic: Projeto de PDS2 no Curso de Tecnologia em Desenvolvimento de Software no CEFETRN. 
O desenvolvimento do jogo Quiz Arena possui como pr...New Releases555 Calculator: 555Calc release v1.0: The initial 1 point uh-oh release of 555Calc.BleQua .NET: BleQua .NET 1.0.0.0: First releaseChatterbot JBot: JBot 1.0.1.155: Change presentation technology from Window Forms to Widndown Presentation Fundation.Community Forums NNTP bridge: Community Forums NNTP Bridge V26: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V27: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V28: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...CSS 360 Planetary Calendar: Final Release: =============================================================================== Final Release Version: 2.0 Tools Used: - Collaboration, Releas...Database Export Wizard: Version 3: As described in CodeProject article: Database Export Wizard for ASP.net and SQL Server. http://www.codeproject.com/KB/aspnet/DatabaseExportWizard.aspxEdu Math: Edu Math 2.0.10.122: Change version .NET Framework.Employee Management System: V1 (beta): This version is still in beta testing. Any issues, comments or suggestions are greatly appreciated. The export to excel function in this release o...ESB.NET: ESBDeploy_7.0.27.0 (x64 and x86) [ESB.NET 7.0 RC1]: Release Details Changes Since Last Release (since 6.3.47.1) - Targets .NET Framework 4, Visual Studio.NET 2010, Workflow 4 - Flowchart workflow ada...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.1.1 beta 2 Released: Hi, This release contains the following enhancements: *ShowIndicator() and HideIndicator() function has been implemented in Chart. So now user wi...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.4 beta 2 Released: Hi, This release contains the following enhancements: *ShowIndicator() and HideIndicator() function has been implemented in Chart. So now user wi...FsCheck: A random testing framework: FsCheck 0.7: What to download? If you use F# April 2010 CTP with Visual Studio 2008 or Visual Studio 2010, either Source or Binaries will do. To open source in...Git Source Control Provider: V 0.5: For VS 2010 users, it is recommanded to install it within Visual Studio by selecting Tools | Extension Manager. Run Visual Studio. Go to Tools ...GPdotNET - Genetic Programming Tool: GPdotNETv0.95: 1. Localization support 2. Export functionality for GP Model with training and testing data. 3. Export GPModel with Testing and Training data. 4....JoshDOS: JoshDOS 1.1: 1.1 adds a toutorial of how to create new commands and of course, you need the COMSOS user kit or dev kit. Ver 1.1 also includes a demo called Gue...KIME Simio Extensions: KIME.SimioDebugStep: This simple Simio step allows you to debug any number of expressions from within the simulation run. The debug information is displayed using a mes...NginxTray: NginxTray 0.3 Beta 2: NginxTray 0.3 Beta 2NginxTray: NginxTray 0.5 Beta 3: NginxTray 0.5 Beta 3NginxTray: NginxTray 0.6 RC1: NginxTray 0.6 RC1Open Source PLM Activities: Prodeos_OC beta 1.0: The “Innovator – MS Office Connector” is a product developed by Prodeos (www.prodeos.com). 
It is a light connector made to facilitate the use of Mi...Paint.NET PSD Plugin: 1.5.1: Changes in this release: Bitmap-mode images can now be loaded. Thanks to dhnc for filing the bug. Plugin no longer crashes on files with user m...SharePoint 2010 Feature Upgrade Kit: 1.0.0.0: This release contains:- - Custom application page to manage the upgrade of Site and Web-scoped Features. To come in the next release: - Companio...SiteMap Utility for DNN Blog Module: Blog SiteMap Utility v01.00.01: This is the first public release of the SiteMap Utility for the core DotNetNuke® Blog Module. Please see the documentation on this site on how to...SqlDiffFramework-A Visual Differencing Engine for Dissimilar Data Sources: SqlDiffFramework 1.0.2.0: Maintenance Release Defect Fixes: Issue # 3: 3 Issue # 4: 4 Enhancements: About Box now displays regional and language settings in effect. SDF...SuperSocket: First release of SuperSocket: !First release of SuperSocketThe Fastcopy Helper: FastcopyHelper: Fastcopy Helper 2.0 This is a final one. You can use it on the way. In order to use it , you should have the .NET3.5 ! 此软件必须下载 .NET3.5平台,方可使用!TV Show Renamer: TV Show Renamer Beta 3: I found the bug the prevented it from closing correctly so I fixed it and had to release it right away. If anyone else finds any problems. contact me.UrzaGatherer: UrzaGatherer v2.0: UrzaGatherer is the first stable version. This release include UrzaBackgroundPictures.VisualStudio 2010 JavaScript Outlining: VisualStudion 2010 Javascript Outlining 1.0: Features Outlines JavaScript codeblock regions for the code placed between { }. Both places on a new line. Outlines custom regions defined by: //...Wouter's SharePoint Demo Land: Navigation Service with Proxy: A SharePoint 2010 Service Application that uses service proxies to relay commands to the actual service. The demo proxy makes use of in-memory comm...盘古分词-开源中文分词组件: V2.0.0.0: 进一步优化性能,分词速度达到将近 500K ,1.2.0.1 版本只有 320K 修改 PanGu.Lucene.Analyzer, 支持 Lucene.net 2.9 版本。 增加对字典中以数字开头的专业非中文词汇的识别 增加英文分词开关,权重由英文小写权重和英文词根权重两个参数来决定...Most Popular ProjectsCommunity Forums NNTP bridgeOutSyncASP.NET MVC Time PlannerNeatUploadMoonyDesk (windows desktop widgets)AgUnit - Silverlight unit testing with ReSharperViperWorks IgnitionASP.NET MVC ExtensionsAviva Solutions C# Coding GuidelinesMute4Most Active ProjectsCommunity Forums NNTP bridgeRawrpatterns & practices – Enterprise LibraryIonics Isapi Rewrite FilterGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSStyleCopFarseer Physics Enginesmark C# LibraryMirror Testing System

    Read the article

< Previous Page | 483 484 485 486 487 488 489 490 491 492 493 494  | Next Page >