Search Results

Search found 13516 results on 541 pages for 'common dialog'.

Page 336/541

  • C++ file input/output search

    - by Brian J
    Hi, I took the following code from a program I'm writing to check a user-generated string against a dictionary, as well as other validation. My problem is that although my dictionary file is referenced correctly, the program gives the default "no dictionary found". I can't see clearly what I'm doing wrong here; if anyone has any tips or pointers it would be appreciated. Thanks.

        //variables for checkWordInFile
        #define gC_FOUND 99
        #define gC_NOT_FOUND -99
        //
        static bool certifyThat(bool condition, const char* error)
        {
            if(!condition) printf("%s", error);
            return !condition;
        }

        //method to validate a user generated password following password guidelines.
        void validatePass()
        {
            FILE *fptr;
            char password[MAX+1];
            int iChar,iUpper,iLower,iSymbol,iNumber,iTotal,iResult,iCount;

            //shows user password guidelines
            printf("\n\n\t\tPassword rules: ");
            printf("\n\n\t\t 1. Passwords must be at least 9 characters long and less than 15 characters. ");
            printf("\n\n\t\t 2. Passwords must have at least 2 numbers in them.");
            printf("\n\n\t\t 3. Passwords must have at least 2 uppercase letters and 2 lowercase letters in them.");
            printf("\n\n\t\t 4. Passwords must have at least 1 symbol in them (eg ?, $, £, %).");
            printf("\n\n\t\t 5. Passwords may not have small, common words in them eg hat, pow or ate.");

            //gets user password input
        get_user_password:
            printf("\n\n\t\tEnter your password following password rules: ");
            scanf("%s", &password);
            iChar = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iUpper = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iLower = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iSymbol = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iNumber = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);
            iTotal = countLetters(password,&iUpper,&iLower,&iSymbol,&iNumber,&iTotal);

            if(certifyThat(iUpper >= 2, "Not enough uppercase letters!!!\n")
                || certifyThat(iLower >= 2, "Not enough lowercase letters!!!\n")
                || certifyThat(iSymbol >= 1, "Not enough symbols!!!\n")
                || certifyThat(iNumber >= 2, "Not enough numbers!!!\n")
                || certifyThat(iTotal >= 9, "Not enough characters!!!\n")
                || certifyThat(iTotal <= 15, "Too many characters!!!\n"))
                goto get_user_password;

            iResult = checkWordInFile("dictionary.txt", password);
            if(certifyThat(iResult != gC_FOUND, "Password contains small common 3 letter word/s."))
                goto get_user_password;

            iResult = checkWordInFile("passHistory.txt", password);
            if(certifyThat(iResult != gC_FOUND, "Password contains previously used password."))
                goto get_user_password;

            printf("\n\n\n Your new password is verified ");
            printf(password);

            //writing password to passHistory file.
            fptr = fopen("passHistory.txt", "w"); // create or open the file
            for( iCount = 0; iCount < 8; iCount++)
            {
                fprintf(fptr, "%s\n", password[iCount]);
            }
            fclose(fptr);
            printf("\n\n\n");
            system("pause");
        }//end validatePass method

        int checkWordInFile(char * fileName, char * theWord){
            FILE * fptr;
            char fileString[MAX + 1];
            int iFound = -99;

            //open the file
            fptr = fopen(fileName, "r");
            if (fptr == NULL)
            {
                printf("\nNo dictionary file\n");
                printf("\n\n\n");
                system("pause");
                return (0); // just exit the program
            }

            /* read the contents of the file */
            while( fgets(fileString, MAX, fptr) )
            {
                if( 0 == strcmp(theWord, fileString) )
                {
                    iFound = -99;
                }
            }
            fclose(fptr);
            return(0);
        }//end of checkWordInFile
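    From the excerpt alone, two things stand out (guesses, not a verified diagnosis): the "No dictionary file" message means fopen() is failing, which is usually a working-directory problem (try an absolute path to rule it out), and checkWordInFile() can never report a match, because iFound is assigned -99 in both branches and the function returns 0 on every path while the caller compares against gC_FOUND. A minimal corrected sketch of that one function, assuming the question's MAX constant and gC_* macros, and swapping strcmp for strstr since rule 5 forbids passwords that contain small words:

        #include <stdio.h>
        #include <string.h>

        #define MAX 15           /* assumed; matches the 15-character rule */
        #define gC_FOUND 99
        #define gC_NOT_FOUND -99

        int checkWordInFile(char * fileName, char * theWord)
        {
            FILE * fptr;
            char fileString[MAX + 1];
            int iFound = gC_NOT_FOUND;

            fptr = fopen(fileName, "r");
            if (fptr == NULL)
            {
                printf("\nNo dictionary file\n");
                return gC_NOT_FOUND; /* let the caller decide what a missing file means */
            }
            while (fgets(fileString, MAX, fptr))
            {
                /* fgets keeps the trailing newline; strip it or comparisons will fail */
                fileString[strcspn(fileString, "\r\n")] = '\0';
                /* "contains", not "equals"; skip blank lines to avoid false matches */
                if (fileString[0] != '\0' && strstr(theWord, fileString) != NULL)
                {
                    iFound = gC_FOUND;
                    break;
                }
            }
            fclose(fptr);
            return iFound; /* the original returned 0 on every path */
        }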

    Read the article

  • Create a named cell dynamically

    - by CaptMorgan
    I have a workbook with 3 worksheets. 1 worksheet will have input values (not created at the moment and not needed for this question), 1 worksheet with several "template" or "source" tables, and the last worksheet has 4 formatted "target" tables (empty or not doesn't matter). Each template table has 3 columns: 1 column identifying what the values in the second 2 columns are for. The value columns have formulas in them and each cell is Named. The formulas use the cell Names rather than cell addresses (e.g. MyData1 instead of C2).

    I am trying to copy the templates into the target tables while also either copying the cell Names from the source into the targets or creating the Names in the target tables based on the source cell Names. In my code below I am creating the target names by using a "base" in the Name that will be changed depending on which target table it gets copied to. My sample tables have "Num0_" for a base in all the cell names (e.g. Num0_MyData1, Num0_SomeOtherData2, etc). Once the copy has completed, the code will then name the cells by looking at the existing Names (and addresses), replacing the base of the name with a new base (just adding a number for which target table it goes to), and replacing the sheet name in the address.

    Here's where I need help. The way I am changing that address will only work if my template and target are using the same cell addresses on their respective sheets, which they are not (e.g. Template1's value cells, each named, are B2 through C10, and my target table for the copy may be F52 through G60). Bottom line: I need to figure out how to copy those names over with the templates, or name the cells dynamically by doing something like a replace where I increment the address value based on my target table number. Remember, I have 4 target tables which are static; I will only copy to those areas. I am a newbie to VBA so any suggestions or help are appreciated.

    NOTE: The copying of the table works as I want. It even names the cells (if the Template and Target Table have the same local worksheet cell address, e.g. C2).

        'Declare Module level variables
        'Variables for target tables are defined in sub's for each target table.
        Dim cellName As Name
        Dim newName As String
        Dim newAddress As String
        Dim newSheetVar
        Dim oldSheetVar
        Dim oldNameVar
        Dim srcTable1

        Sub copyTables()
            newSheetVar = "TestSheet"
            oldSheetVar = "Templates"
            oldNameVar = "Num0_"
            srcTable1 = "TestTableTemplate"
            'Call sub functions to copy tables, name cells and update functions.
            copySrc1Table
            copySrc2Table
        End Sub

        '**** there is another sub identical to this one below for copySrc2Table.
        Sub copySrc1Table()
            newNameVar = "Num1_"
            trgTable1 = "SourceEnvTable1"
            Sheets(oldSheetVar).Select
            Range(srcTable1).Select
            Selection.Copy
            For Each cellName In ActiveWorkbook.Names
                'Find all names with common value
                If cellName.Name Like oldNameVar & "*" Then
                    'Replace the common value with the update value you need
                    newName = Replace(cellName.Name, oldNameVar, newNameVar)
                    newAddress = Replace(cellName.RefersTo, oldSheetVar, newSheetVar)
                    'Edit the name of the name. This will change any formulas using this name as well
                    ActiveWorkbook.Names.Add Name:=newName, RefersTo:=newAddress
                End If
            Next cellName
            Sheets(newSheetVar).Select
            Range(trgTable1).Select
            Selection.PasteSpecial Paste:=xlPasteAll, Operation:=xlNone, SkipBlanks:= _
                False, Transpose:=False
        End Sub
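    One way to make the naming independent of the worksheet addresses entirely (a sketch only: it assumes the module-level variables from the question are in scope and populated, and that srcTable1 and trgTable1 are defined Names covering each whole table, so Cells(1, 1) is the table's top-left anchor): record each named cell's row/column offset from the source table's anchor and re-create the Name at the same offset from the target table's anchor.

        'Sketch: re-create the Names relative to each table's top-left cell,
        'so source and target tables never need matching addresses.
        Sub nameTargetCellsByOffset()
            Dim srcAnchor As Range, trgAnchor As Range
            Dim r As Long, c As Long
            Set srcAnchor = Sheets(oldSheetVar).Range(srcTable1).Cells(1, 1)
            Set trgAnchor = Sheets(newSheetVar).Range(trgTable1).Cells(1, 1)
            For Each cellName In ActiveWorkbook.Names
                If cellName.Name Like oldNameVar & "*" Then
                    'Offset of the named cell inside the source table...
                    r = cellName.RefersToRange.Row - srcAnchor.Row
                    c = cellName.RefersToRange.Column - srcAnchor.Column
                    '...becomes the same offset inside the target table.
                    ActiveWorkbook.Names.Add _
                        Name:=Replace(cellName.Name, oldNameVar, newNameVar), _
                        RefersTo:="=" & trgAnchor.Offset(r, c).Address(External:=True)
                End If
            Next cellName
        End Sub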

    Read the article

  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code.

    When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enables you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements – without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything.

    For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH).

    The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically.

    Unfortunately, Visual Studio does not provide "out-of-the-box" support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own.

    The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry.

    Rejected Approach: Browser Launchers

    One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers().
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
            <title>Using QUnit</title>
            <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" />
        </head>
        <body>
            <h1 id="qunit-header">QUnit example</h1>
            <h2 id="qunit-banner"></h2>
            <div id="qunit-testrunner-toolbar"></div>
            <h2 id="qunit-userAgent"></h2>
            <ol id="qunit-tests"></ol>
            <div id="qunit-fixture">test markup, will be hidden</div>
            <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
            <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script>
            <script type="text/javascript">
                // The function to test
                function addNumbers(a, b) {
                    return a+b;
                }

                // The unit test
                test("Test of addNumbers", function () {
                    equals(4, addNumbers(1,3), "1+3 should be 4");
                });
            </script>
        </body>
        </html>

    This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code.

    While easy to set up, there are several big disadvantages to this approach to executing JavaScript unit tests:

    - You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don't appear in a single location, you are more likely to introduce bugs into your code without noticing it.
    - Because your unit tests are not integrated with Visual Studio – in particular, MS Test – you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build.

    A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach:

    - Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/.
    - LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501
    - jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/
    - TestSwarm – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki
    - Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/

    All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a "living and breathing" browser.

    If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test – versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure.

    Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code.

    Using Server-Side JavaScript for JavaScript Unit Tests

    A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser:

    - Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/
    - V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/
    - JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages.

    Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests.

    Using the Microsoft Script Control

    There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET.
    There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher level abstraction over the Window Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

    After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0.

    Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

    Creating the JavaScript Code to Test

    To keep things simple, let's imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together:

    MvcApplication1\Scripts\Math.js

        function addNumbers(a, b) {
            return 5;
        }

    Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project's Scripts folder (save the file in your actual MVC application and not your MVC test application).

    Creating the JavaScript Test Helper Class

    To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:

    - LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.
    - ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

    Here's the code for the JavaScriptTestHelper class:

    JavaScriptTestHelper.cs

        using System;
        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using MSScriptControl;

        namespace MvcApplication1.Tests
        {
            public class JavaScriptTestHelper : IDisposable
            {
                private ScriptControl _sc;
                private TestContext _context;

                /// <summary>
                /// You need to use this helper with Unit Tests and not
                /// Basic Unit Tests because you need a Test Context
                /// </summary>
                /// <param name="testContext">Unit Test Test Context</param>
                public JavaScriptTestHelper(TestContext testContext)
                {
                    if (testContext == null)
                    {
                        throw new ArgumentNullException("TestContext");
                    }
                    _context = testContext;

                    _sc = new ScriptControl();
                    _sc.Language = "JScript";
                    _sc.AllowUI = false;
                }

                /// <summary>
                /// Load the contents of a JavaScript file into the
                /// Script Engine.
                /// </summary>
                /// <param name="path">Path to JavaScript file</param>
                public void LoadFile(string path)
                {
                    var fileContents = File.ReadAllText(path);
                    _sc.AddCode(fileContents);
                }

                /// <summary>
                /// Pass the name of the test that you want to execute.
                /// </summary>
                /// <param name="testMethodName">JavaScript function name</param>
                public void ExecuteTest(string testMethodName)
                {
                    dynamic result = null;
                    try
                    {
                        result = _sc.Run(testMethodName, new object[] { });
                    }
                    catch
                    {
                        var error = ((IScriptControl)_sc).Error;
                        if (error != null)
                        {
                            var description = error.Description;
                            var line = error.Line;
                            var column = error.Column;
                            var text = error.Text;
                            var source = error.Source;
                            if (_context != null)
                            {
                                var details = String.Format("{0} \r\nLine: {1} Column: {2}", source, line, column);
                                _context.WriteLine(details);
                            }
                        }
                        throw new AssertFailedException(error.Description);
                    }
                }

                public void Dispose()
                {
                    _sc = null;
                }
            }
        }

    Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

    Creating the JavaScript Unit Test

    Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

    MvcApplication1.Tests\JavaScriptTests\MathTest.js

        /// <reference path="JavaScriptUnitTestFramework.js"/>
        function testAddNumbers() {
            // Act
            var result = addNumbers(1, 3);

            // Assert
            assert.areEqual(4, result, "addNumbers did not return right value!");
        }

    The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js to the same folder as the MathTest.js file:

    MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

        var assert = {
            areEqual: function (expected, actual, message) {
                if (expected !== actual) {
                    throw new Error("Expected value " + expected +
                        " is not equal to " + actual + ". " + message);
                }
            }
        };

    There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests (a sketch of what such additions might look like appears at the end of this entry).

    Deploying the JavaScript Test Files

    This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder or Visual Studio won't be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests:

    You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run.
    Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project's JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog.

    Creating the Visual Studio Unit Test

    The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:

        [TestMethod]
        public void TestAddNumbers()
        {
            var jsHelper = new JavaScriptTestHelper(this.TestContext);

            // Load JavaScript files
            jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
            jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
            jsHelper.LoadFile("MathTest.js");

            // Execute JavaScript Test
            jsHelper.ExecuteTest("testAddNumbers");
        }

    This code uses the JavaScriptTestHelper to load three files:

    - JavaScriptUnitTestFramework.js – contains the assert functions.
    - Math.js – contains the addNumbers() function from your MVC application which is being tested.
    - MathTest.js – contains the JavaScript unit test function.

    Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.

    Running the Visual Studio JavaScript Unit Test

    After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won't work correctly because Visual Studio won't detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.)

    The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.

    Summary

    The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window.
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
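    For what it's worth, here is the sort of thing I mean by "additional assertions": a small, hypothetical extension to JavaScriptUnitTestFramework.js (areNotEqual and isTrue are names I made up; they are not part of the download above):

        // Hypothetical additions to JavaScriptUnitTestFramework.js
        assert.areNotEqual = function (notExpected, actual, message) {
            // Fails when the two values are strictly equal.
            if (notExpected === actual) {
                throw new Error("Value was expected not to be " +
                    notExpected + ". " + message);
            }
        };

        assert.isTrue = function (condition, message) {
            // Fails when the condition is falsy.
            if (!condition) {
                throw new Error("Condition was not true. " + message);
            }
        };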

    Read the article

  • Xubuntu 13.10 64bit - Slow and buggy "log out" process?

    - by MrKatSwordfish
    I'm a Windows convert who has done only a little bit of dabbling in Ubuntu in the past (back in Dapper Drake a few years back). A lot has changed since then, and I've been yearning to jump back into Linux again! So, having just bought a new SSD, I felt that this would be as good a time as any to set up a dual-boot system again. I've messed around with Ubuntu 13.10 a bit, and while Unity has its issues, I think that it still needs some time to develop. I looked into XFCE and liked it a lot, so I went with Xubuntu.

    I've installed Xubuntu, and for the most part it's running smoothly and is a pleasure to work with. The customization is great and the minimalistic look and feel is really nice! But here's my problem: whenever I select the "Log Out" option from either the application menu or the user profiles menu, my PC comes to a crawl, and the dialog box with all the options (shut down, restart, log out, etc.) takes maybe a minute or more to appear. I click the log out button, my PC is brought to a snail's pace, and I have to wait for what seems like an eternity for the logout options to appear! If I try to open something else (even a terminal window) while it's loading the logout options, that other program won't finish loading until the logout screen finally appears.

    Keep in mind, this is a pretty much vanilla install of Xubuntu 13.10 64bit, on a PC with an Intel i7, an SSD, 6GB DDR3 RAM, and a new AMD 7770 GPU (drivers haven't been installed yet, though). Everything else runs fast; most applications open near-instantly! It must be an issue with how the logout options screen initializes or something, but I'm not sure exactly how I can fix it.

    Edit - Extra Info: This problem is very consistent when using the "Log Out" buttons in Xubuntu. However, I've found that I'm able to reboot and shutdown much more quickly by going through the "Switch User" screen, and using the reboot or shutdown buttons on that screen. I'm nearly certain that it has something to do with the little Log Out options screen that appears when I select Log Out from the menu, and not the actual process of shutting down.

    So what should I do? I really like XFCE so far, and I've never tried a non-Ubuntu based distro before, but should I just switch to something else? Is there any known fix for this issue? Are there any work-arounds for logging out/shutting down/rebooting via the terminal so that I can avoid this irritating bug? Is there any way that I can monitor the progress of the log out via terminal, allowing me to see which parts are causing the slow-down? What is the best way to report this bug to someone?
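    On the terminal work-around part of the question: the commands below should bypass the logout dialog entirely. This is a sketch from a generic XFCE setup; I haven't verified it against 13.10 specifically, so treat it as a starting point:

        # Log out of the current XFCE session immediately, skipping the dialog
        xfce4-session-logout --logout

        # Reboot or shut down via the session manager, also skipping the dialog
        xfce4-session-logout --reboot
        xfce4-session-logout --halt

        # Desktop-independent fallbacks
        sudo shutdown -h now   # shut down
        sudo reboot            # restart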

    Read the article

  • Visual Studio 2010 SP1 Beta supports IIS Express

    - by DigiMortal
    Visual Studio 2010 SP1 Beta and ASP.NET MVC 3 RC2 were both announced today. I made a little test on one of my web applications to see how Visual Studio 2010 works with IIS Express. In this posting I will show you how to make your ASP.NET MVC 3 application work with IIS Express.

    Installing new stuff

    You can install IIS Express using Web Platform Installer. It is not part of WebMatrix anymore and you can just install IIS Express without WebMatrix. NB! You have to install IIS Express using Web Platform Installer because IIS Express is not installed by SP1. After installing Visual Studio 2010 SP1 Beta on my machine (it took a long-long-long time to install) I also installed ASP.NET MVC 3 RC2. If you have Async CTP installed on your machine you have to uninstall it to get ASP.NET MVC 3 RC2 installed and running without problems. The screenshot on the right shows what kind of horrors my old laptop had to survive to get all the new stuff installed.

    Setting IIS Express as server for web application

    Now, when you right-click on some web project you should see a new menu item in the context menu – Use IIS Express…. If you click on it you are asked for confirmation and if you say Yes then your web application is reconfigured to use IIS Express. After configuration you will see a dialog box like this. And you are done. You can run your application now.

    Running web application

    When you run your application it is run on IIS Express. You can see the IIS Express icon on the taskbar and when you click it you can open the IIS Express settings. If you closed your application in the browser you can open it again from the IIS Express icon.

    Modifying IIS Express settings for web application

    You can modify the IIS Express settings for your application. Just open your project properties and move to the Web tab. IIS and IIS Express use the same settings. The difference is whether you put a check in the Use IIS Express checkbox or not.

    Switching back to Visual Studio Development Server

    If you don't want to or can't use IIS Express for some reason, you can easily switch back to Visual Studio Development Server. Just right-click on your web application project and select Use Visual Studio Development Server from the context menu.

    Conclusion

    IIS Express is more independent than the full version of IIS and it can also be installed and run on machines where very strict rules apply (some corporate and academic environments, for example). IIS Express was previously part of the WebMatrix package but now it is a separate product, and Visual Studio 2010 has very nice support for it thanks to SP1. You can easily make your web applications use IIS Express, and if you want to switch back to the development server that is also very easy.
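    As far as I can tell, the Use IIS Express… switch simply flips a flag in the web project file, so you can also see (or hand-edit) the change in the .csproj itself. Roughly this fragment; treat it as an assumption rather than documented behavior:

        <PropertyGroup>
          <UseIISExpress>true</UseIISExpress>
        </PropertyGroup>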

    Read the article

  • SQL Developer Database Diff – Compare Objects From Multiple Schemas

    - by thatjeffsmith
    Ever wonder why Database Diff isn't called Schema Diff? One reason is because SQL Developer allows you to select objects from more than one schema in the 'Source' connection for the compare. Simply use the 'More' dialog view and select as many tables from as many different schemas as you require.

    Now, before you get around to testing this – as you should never believe what I say, trust but verify – two things you need to know:

    - I'm using SQL Developer version 3.2
    - On the initial screen you need to use the 'Maintain' option

    Maintain tells SQL Developer to use the schema designation in the source connection to find the same corresponding object in the destination schema. Choose 'Maintain' if you want to compare objects in the same schema in the destination but don't have the user login for that schema. So after you've selected your databases, your diff preferences, and your objects – you're ready to perform the compare and review your results.

    The DIFF Report

    Notice the highlighted text: SQL Developer is 'maintaining' the schema context from the two databases. Short and sweet. That's pretty much all there is to doing a compare with SQL Developer with multiple schemas involved.

    You may have noticed in some posts lately that my editor screenshots had a 'green screen' look and feel to them. What's with the black background in your editors? In the SQL Developer preferences, you can set your editor color schemes. I started with the 'Twilight' scheme (team Jacob in case you're wondering) and then customized it further by going with a default green font color. You could go pretty crazy in here, and I'm assuming 90% of you could care less and will just stick with the original. But for those of you who are particular about your IDE styling – go crazy!

    SQL Developer Editor Display Preferences

    Read the article

  • Make Your 64 bit Computer Look like a Commodore 64

    - by Matthew Guay
    The Commodore 64 was one of the bestselling home computers ever, and many geeks got their first computing experience on one of these early personal computers. Here's an easy way to revisit the early years of personal computing with a theme for Windows 7. With only 64 KB of RAM and an 8-bit processor, the Commodore 64 is light-years behind today's computers. But with a Windows 7 themepack, you can turn back the years and give your computer a quick overhaul to look more like its ancient predecessor.

    Age Windows 7 with a click

    Download the Commodore 64 theme from PC World (link below), and unzip the files. Now, double-click on the Themepack file to apply the theme. This will open your Personalization panel and will automatically change your system fonts, window style, background, and more. Your desktop will go from your Windows 7 look… to a modified Windows 7 look that is reminiscent of the Commodore 64. Open an application to see all the changes… notice the old-style font in the window border and menus. This theme also changes your Computer, Recycle Bin, and User folder icons to Commodore 64-inspired icons. And, if you want to go back to the standard Windows 7 look and feel, it's only a click away in the Personalization dialog. Right-click on your desktop, select Personalize, and then choose the theme you want.

    Conclusion

    Although this doesn't give you the real look and feel of the Commodore 64, it is still a fun way to experience a bit of computer nostalgia. There are tons of excellent themes available for Windows 7, so check back for more exciting ways to customize your desktop!

    Link: Download the Commodore 64 theme for Windows 7

    Read the article

  • How to Use and Manage Extensions to Safari 5

    - by Mysticgeek
    While there have been hacks to include extensions in Safari for some time now, Safari 5 offers proper support for them. Today we take a look at managing extensions in the latest version of Safari.

    Installation and Setup

    Download and install Safari 5 (link below). Make sure to download the installer that doesn't include QuickTime if you don't want it. Also, uncheck getting Apple updates and news in your email. Then decide if you want to install Bonjour for Windows and have Safari automatically update or not. Once it's installed, launch Safari and select Show Menu Bar from the Settings Menu. Then go into Preferences \ Advanced and check the box Show Develop menu in the menu bar. Develop will now appear on the Menu Bar… click on it and select Enable Extensions.

    Using Extensions

    Now you can find and start using extensions (link below) that will work with Safari 5. In this example we're installing PageSaver, which takes an image of what is showing in your browser. Click on the link for the extension you want to install… Then you'll get a confirmation asking if you want to open or save it. Opening it will install it right away. Click Install in the dialog that asks if you're sure you want to. Here we see the extension was successfully installed, and you can see the camera icon on the Toolbar. When you're on a portion of a webpage you want to take an image of, click on the camera icon and you'll have the image saved in your Downloads folder. Then you can open it up in a browser or image editor.

    Go into Preferences \ Extensions and from here you can turn the extensions on or off, uninstall them, or check for updates. If you're a Safari user, or thinking about trying it, you'll enjoy proper support for extensions in version 5. At the time of this writing we couldn't find any extensions on the Apple site, but you might want to keep your eye on it to see if they do start listing them.

    Download Safari 5 for Mac & PC
    Safari Extensions

    Read the article

  • How to prevent screen locking when lid is closed?

    - by Joe Casadonte
    I have Ubuntu 11.10 with Gnome 3 (no Unity); gnome-screen-saver has been removed and replaced with xscreensaver. The screensaver stuff all works fine -- no complaints there. When I close my laptop lid, even for a second, the screen locks (and the dialog box asking for my password is xscreensaver's). I'd like for this not to happen... Things I've tried/looked at already:

    - xscreensaver settings - the "Lock Screen After" checkbox is not checked (though I've also tried it checked and set to 720 minutes)
    - gconf-editor - apps -> gnome-screensaver -> lock_enabled is not checked
    - System Settings - Power - "When the lid is closed" is set to "Do nothing" for both battery and A/C
    - System Settings - Screen - Lock is "off"
    - gconf-editor - apps -> gnome-power-manager -> buttons -> lid_ac && lid_battery are both set to "nothing"
    - dconf-editor - apps -> org -> gnome -> desktop -> screensaver -> lock_enabled is not checked

    Output from: gsettings list-recursively org.gnome.settings-daemon.plugins.power:

        org.gnome.settings-daemon.plugins.power active true
        org.gnome.settings-daemon.plugins.power button-hibernate 'hibernate'
        org.gnome.settings-daemon.plugins.power button-power 'suspend'
        org.gnome.settings-daemon.plugins.power button-sleep 'suspend'
        org.gnome.settings-daemon.plugins.power button-suspend 'suspend'
        org.gnome.settings-daemon.plugins.power critical-battery-action 'hibernate'
        org.gnome.settings-daemon.plugins.power idle-brightness 30
        org.gnome.settings-daemon.plugins.power idle-dim-ac false
        org.gnome.settings-daemon.plugins.power idle-dim-battery true
        org.gnome.settings-daemon.plugins.power idle-dim-time 10
        org.gnome.settings-daemon.plugins.power lid-close-ac-action 'nothing'
        org.gnome.settings-daemon.plugins.power lid-close-battery-action 'nothing'
        org.gnome.settings-daemon.plugins.power notify-perhaps-recall true
        org.gnome.settings-daemon.plugins.power percentage-action 2
        org.gnome.settings-daemon.plugins.power percentage-critical 3
        org.gnome.settings-daemon.plugins.power percentage-low 10
        org.gnome.settings-daemon.plugins.power priority 1
        org.gnome.settings-daemon.plugins.power sleep-display-ac 600
        org.gnome.settings-daemon.plugins.power sleep-display-battery 600
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac false
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout 0
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'suspend'
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery true
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery-timeout 0
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery-type 'suspend'
        org.gnome.settings-daemon.plugins.power time-action 120
        org.gnome.settings-daemon.plugins.power time-critical 300
        org.gnome.settings-daemon.plugins.power time-low 1200
        org.gnome.settings-daemon.plugins.power use-time-for-policy true

    gnome-settings-daemon is running:

        <~> $ ps -ef | grep gnome-settings-daemon
        1000 1719 1645 0 19:37 ? 00:00:01 /usr/lib/gnome-settings-daemon/gnome-settings-daemon
        1000 1726 1 0 19:37 ? 00:00:00 /usr/lib/gnome-settings-daemon/gsd-printer
        1000 1774 1645 0 19:37 ? 00:00:00 /usr/lib/gnome-settings-daemon/gnome-fallback-mount-helper

    Anything else I can check? Thanks!
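    Two diagnostics that might help pin down what is calling the lock (sketched from a generic xscreensaver setup, not verified on 11.10):

        # Watch xscreensaver's events live; close the lid and see whether a LOCK arrives
        xscreensaver-command -watch

        # Look for suspend/resume hooks that invoke the locker explicitly
        grep -r "xscreensaver-command" /etc/pm/sleep.d /usr/lib/pm-utils/sleep.d 2>/dev/null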

    Read the article

  • On SQL Developer and TNSNAMES.ORA

    - by thatjeffsmith
    Tnsnames.ora [DOCS] is a configuration file for SQL*Net that describes the network service names for the databases in your organization. Basically, it tells Oracle applications how to find your databases. This post is just a quick overview on how to get SQL Developer to 'see' this file and define a connection.

    There's only a single prerequisite for having SQL Developer set up such that it can use TNSNAMES to connect: you have a tnsnames.ora file somewhere. You don't need a client, instant or otherwise, on your machine. You just need the file. Now, if you DO have a client or HOME on your machine, SQL Developer will look for those and find the tnsnames file for you. If we can't find it in the usual places, you can simply tell us where it is via this preference: on the Database – Advanced page. Once you've done this, assuming you have a file (or 10) in that directory, we'll read it, parse it, and list the entries in the connection dialog.

    The File(s)

    That's right, files. Just like SQL*Plus, we'll read any file that starts with 'tnsnames' – that includes files you've renamed to .bak or .old. Kris talks about that more here. I have just the one, which is all I need anyway. There we go!

    Defining the Connection

    Just set the connection type to TNS. This is a lot easier than manually defining the connections – especially as they're likely to change frequently in 'the real world.'

    No Client or Home Required

    That's right. You don't need an Oracle Client or $ORACLE_HOME to have SQL Developer see and read a TNS file. Just so you know I'm not cheating… SQL Dev doesn't know which client to use and won't use it even if it DID know… I'm able to define a new connection AND connect with these preferences ON|OFF.
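    In case you've never looked inside one of these files, a minimal tnsnames.ora entry looks roughly like this (the host and service names here are made up):

        ORCL =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl.example.com)
            )
          )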

    Read the article

  • Windows and SQL Azure Best Practices: Affinity Groups

    - by BuckWoody
    When you create a Windows Azure application, you'll pick a subscription to put it under. This is a billing container – underneath that, you'll deploy a Hosted Service. That holds the Web and Worker Roles that you'll deploy for your applications. Alongside that, you use the Storage Account to create storage for the application. (In some cases, you might choose to use only storage or Roles – the info here applies anyway.)

    As you are setting up your environment, you're asked to pick a "region" where your application will run. If you choose a Region, you'll be asked where to put the Roles. You're given choices like Asia, North America and so on. This is where the hardware that physically runs your code lives. We have lots of fault domains, power considerations and so on to keep that set of datacenters running, but keep in mind that this is where the application lives. You also get this selection for Storage Accounts. When you make new storage, it's a best practice to put it where your computing is. This makes the shortest path from the code to the data, and then back out to the user.

    One of the selections for the location is "Anywhere U.S.". This selection might be interpreted to mean that we will bias towards keeping the data and the code together, but that may not be the case. There is a specific abstraction we created for just that purpose: Affinity Groups.

    An Affinity Group is simply a name you can use to tie together resources. You can do this in two places – when you're creating the Hosted Service (shown above) and on its own tree item on the left, called "Affinity Groups". When you select either of those actions, you're presented with a dialog box that allows you to specify a name, and then the Region that that name ties the resources to. Now you can select that Affinity Group just as if it were a Region, and your code and data will stay together. That helps with keeping the performance high.

    Official Documentation: http://msdn.microsoft.com/en-us/library/windowsazure/hh531560.aspx

    Read the article

  • Accessing JMX for Oracle WebLogic 11g

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4, we use the latest Oracle WebLogic release (11g). The instructions below illustrate a way of allowing a console like jconsole to remotely monitor and manage Oracle WebLogic using the JMX MBeans. Typically management of Oracle WebLogic is done from Oracle Enterprise Manager or the Oracle WebLogic console application, but you can also use JMX. To access the JMX capability for Oracle WebLogic 11g, for an Oracle Utilities Application Framework based product, using a JMX console (such as jconsole), the following process needs to be performed:

    - Enable the JMX Management Server in the Oracle WebLogic console at the splapp - Configuration - General - Advanced Settings option.
    - Enable both Compatibility Mbean Server Enabled and Management EJB Enabled (this enables the legacy and new JMX interfaces).
    - Save the changes. This change will require a restart.

    In the startup of the Oracle WebLogic server, in $SPLSYSTEMLOGS/myserver.log (or %SPLESYSTEMLOGS%\myserver.log on Windows) you will see the BEA-149512 message indicating the MBean servers have been started. The message will indicate the JMX URL that can be used to access the JMX MBeans. The URL is in the format:

        service:jmx:iiop://host:port/jndi/mbeanserver

    where:

    - host - Oracle WebLogic host name
    - port - Oracle WebLogic port number
    - mbeanserver - MBean server to access. Valid values:
      weblogic.management.mbeanservers.runtime
      weblogic.management.mbeanservers.edit
      weblogic.management.mbeanservers.domainruntime

    For illustrative purposes we will use the domainruntime MBean server. Ensure that you execute the splenviron[.sh] utility to set the appropriate environment variables for the desired environment. Execute the following jconsole command to initiate the connection to the JMX MBean server.

    Windows:

        jconsole -J-Djava.class.path=%JAVA_HOME%\lib\jconsole.jar;%WL_HOME%\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

    Linux/Unix (note the ':' classpath separator):

        jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

    You will see a New Connection dialog. Specify the URL from the previous steps into the Remote Process field (i.e. service:jmx:iiop...). The credentials are the credentials specified for the Oracle WebLogic console. You are now able to view the JMX classes available. Here is an example from my demonstration machine: Refer to the Oracle WebLogic MBean documentation to understand the output.
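    If you want to do the same thing from code instead of jconsole, the standard JMX remote API works against the same URL. A sketch (untested; the host, port, and credentials are placeholders, and wljmxclient.jar must be on the classpath):

        import java.util.Hashtable;
        import javax.management.MBeanServerConnection;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;
        import javax.naming.Context;

        public class DomainRuntimePing {
            public static void main(String[] args) throws Exception {
                // Same URL format as reported in myserver.log (BEA-149512)
                JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:iiop://host:7001/jndi/weblogic.management.mbeanservers.domainruntime");
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.SECURITY_PRINCIPAL, "weblogic");   // console credentials
                env.put(Context.SECURITY_CREDENTIALS, "password");
                env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                        "weblogic.management.remote");
                JMXConnector connector = JMXConnectorFactory.connect(url, env);
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                System.out.println("MBeans visible: " + conn.getMBeanCount());
                connector.close();
            }
        }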

    Read the article

  • Vodacom Call Center Management on the NetBeans Platform

    - by Geertjan
    If you live in South Africa, you know about Vodacom. Vodacom is one of the dominant mobile communication companies in South Africa, and beyond, providing voice, messaging, data, and similar mobile services. Inside Vodacom there's an application named Helios, which is a call centre application that had its inception in 2009 and consists of two parts. Firstly, a web-based front-end that allows a call centre agent to service subscribers using a Google-like search on a knowledge base structured as a collection of FAQs. The web-based front-end uses plain-old HTML + CSS + a good helping of JQuery and JQueryUI. This is delivered via JSR-168 portlets running on a cluster of IBM Portal 6 servers. In turn, the portlets communicate via RMI with several back-end EJB's containing the business logic. These EJB's are deployed on a cluster of Weblogic Application Servers, version 10.3.6. The second part is a NetBeans Platform application used for maintaining and constructing the knowledge base, i.e., the back-end of the web-based front-end. Helios is also used for a number of other maintenance functions, such as access permissions, user maintenance, and news bulletins. Below, in the web-based front-end, call centre agents can enter search terms and are presented with a number of FAQs from the knowledge base. Upon selecting a FAQ article, the agent is presented with the article text, the process to guide the subscriber, system checks that display information specific to the subscriber, and links to related applications and articles: Below, you can see that applications are searchable and can be accessed using the same web-based front-end as shown above. And, as can be seen below, knowledge base FAQs are maintained using the Helios Maintenance Application, which is the Vodacom application built on the NetBeans Platform: Several thousand call centre agent user accounts are administered using the Helios Maintenance Application. Below the main FAQ page is shown, together with the About dialog: Vodacom is happy with the back-end NetBeans Platform application. However, the front-end stack runs on quite old technology. Ideally Vodacom would like to migrate the portlets to Oracle Weblogic Portal or Oracle WebCenter, but this hasn't been accomplished yet. Migrating makes sense as the rest of the application server environment consists entirely of Oracle products.

    Read the article

  • Editing Project files, Resource Editors in VS 2010

    - by rajbk
    Editing Project Files

    Visual Studio 2010 gives you the ability to easily edit the project file associated with your project (.csproj or .vbproj). You might do this to change settings related to how the project is compiled, since proj files are MSBuild files. One would normally close Visual Studio and edit the proj file using a text editor. The better way is to first unload the project in Visual Studio by right-clicking on the project in the Solution Explorer and selecting "Unload Project". The project gets unloaded and is marked "unavailable". The project file can now be edited by right-clicking on the unloaded project. After editing the file, the project can be reloaded.

    Resource Editors in VS 2010

    Visual Studio also comes with a number of resource editors (see list here). For example, you could open a file using the Binary editor like so: go to File > Open > File…, select a file and choose the "Open With…" option in the bottom right. We are given the option to choose an editor. Note that clicking on the "Add…" button in the dialog above allows you to include your favorite editor. Choosing the "Binary Editor" above allows us to edit the file in hex format. In addition, we can also search for hex bytes or ASCII strings using the Find command. The "Open With…" option is also available from within the Solution Explorer, as shown below.

    Enjoy!

    Mr. Incredible: No matter how many times you save the world, it always manages to get back in jeopardy again. Sometimes I just want it to stay saved! You know, for a little bit? I feel like the maid; I just cleaned up this mess! Can we keep it clean for... for ten minutes!
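    For instance, while the project is unloaded you can drop plain MSBuild into the .csproj by hand. A made-up but valid fragment (the property and message are purely illustrative) that adds a conditional compilation symbol and a post-build message:

        <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
          <DefineConstants>TRACE;MY_CUSTOM_FLAG</DefineConstants>
        </PropertyGroup>
        <Target Name="AfterBuild">
          <Message Text="Finished building $(TargetPath)" Importance="high" />
        </Target>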

    Read the article

  • View Your Google Calendar in Outlook 2010

    - by Mysticgeek
    Google Calendar is a great way to share appointments and synchronize your schedule with others. Here we show you how to view your Google Calendar in Outlook 2010 too.

    Google Calendar

    Log into Google Calendar and under My Calendars click on Settings. Now click on the calendar you want to view in Outlook. Scroll down the page and click on the ICAL button in the Private Address section, or Calendar Address if it's a public calendar… then copy the address to your clipboard.

    Outlook 2010

    Open up your Outlook calendar, click the Home tab on the Ribbon, and under Manage Calendars click on Open Calendar \ From Internet… Now enter the link location into the New Internet Calendar field, then click OK. Click Yes in the dialog box that comes up verifying you want to subscribe to it. If you want more subscription options, click on the Advanced button. Here you can name the folder, type in a description, and choose if you want to download attachments.

    That is all there is to it! Now you will be able to view your Google Calendar in Outlook 2010. You'll also be able to view your local computer's calendar and the Google Calendar side by side… Keep in mind that this only gives you the ability to view the Google Calendar… it's read-only. Any changes you make on the Google Calendar site will show up when you do a send/receive. If you live out of Outlook during the day, you might want the ability to view what is going on with your Google Calendar(s) as well. If you're an Outlook 2007 user, check out our article on how to view your Google Calendar in Outlook 2007.

    Read the article

  • SnagIt Live Writer Plug-in updated

    - by Rick Strahl
    I've updated my free SnagIt Live Writer plug-in again as there have been a few issues with the new release of SnagIt 11. It appears that TechSmith has trimmed the COM object and removed a bunch of redundant functionality which has broken the older plug-in. I also updated the layout and added SnagIt's latest icons to the form. Finally, I've moved the source code to GitHub for easier browsing and downloading for anybody interested, and easier updating for me.

    This plug-in is not new - I created it a number of years back, but I use the hell out of it both for my blogging and for a few internal apps that work with MetaWebLogApi to update online content. The plug-in makes it super easy to add captured image content directly into a post and upload it to the server.

    What does it do?

    Once installed, the plug-in shows up in the list of plug-ins. When you click it, it launches a SnagIt Capture dialog. Typically you set the capture settings once, and then save your settings. After that, a single click or ENTER press and you're off capturing. If you choose the Show in SnagIt preview window option, the image you capture is displayed in the preview editor to mark up images, which is one of SnagIt's great strengths IMHO. The image editor has a bunch of really nice effects for framing, marking up, and highlighting images that is really sweet. Here's a capture from a previous image in the SnagIt editor where I applied the saw tooth cutout effect.

    Images are saved to disk and can optionally be deleted immediately, since Live Writer creates copies of original images in its own folders before uploading the files. No need to keep the originals around, typically. The plug-in works with SnagIt versions 7 and later. It's a simple thing of course - nothing magic here, but incredibly useful, at least to me. If you're using Live Writer and you own a copy of SnagIt, do yourself a favor and grab this and install it as a plug-in.

    Resources:
    - SnagIt Windows Live Writer Plug-in Installer
    - Source Code on GitHub
    - Buy SnagIt

    © Rick Strahl, West Wind Technologies, 2005-2012

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – Last Three Episodes – Need Your Opinion

    - by Pinal Dave
    I have been blogging for almost 7 years and building video content for around 2 years. After spending so much time on blogging and creating videos, I have got quite a good idea of what people like to read and what people like to watch. However, there is one thing which I have been constantly struggling with for almost a year, and I would like to get your opinion about it. Though this may look very simple to you, it is very crucial to me, and I would like to know what you think.

    I have been building a video almost every week for my SQL in Sixty Seconds series and it has been quite popular. So far my YouTube Channel has over 2600 subscribers and over 250K views. Here is my problem - there are about 50+ videos in the SQL in Sixty Seconds series, but not every video is popular. There are a few videos which are extremely popular, and there are videos which are absolutely struggling to get even a single view. I have not yet figured out what people would love to watch on this channel. I notice lots of people watching various videos, but hardly anyone leaving comments or suggestions. At the end of the blog posts associated with SQL in Sixty Seconds, I always ask which video people would love to watch next, but I get a very low response there too. What I wonder is: why is there such low engagement from viewers/readers on the video blog posts, when the channel is a success and lots of people are watching the videos? What do you think I should change in my videos to increase engagement?

    Here are my last three videos from the SQL in Sixty Seconds channel, and I would like to know your feedback.

    Remove Cached Login from SSMS Connect Dialog – SQL in Sixty Seconds #049
    RESEED Identity Column in Database Table – SQL in Sixty Seconds #051
    Puzzle SET ANSI_NULLS and Resultset – SQL in Sixty Seconds #052

    The feedback I like the most will for sure get a special surprise gift from me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible.  Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of the project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to achieve the efficiency I am looking for.  One program that I have recently come to learn about – the Red Gate ANTS Performance (CLR) and Memory Profiler – gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against.

    Notice

    As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program.  I have not let this affect my opinion of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction

    The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no-brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading and installing the program, you fill out a form with your usual contact information.  Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me get the most out of my trial time so it wouldn't go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate's friendly and helpful representatives were a breath of fresh air, and something I was thankful for.

    ANTS CLR Profiler

    The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the CLR and Memory profilers, and also Visual Studio extensions to facilitate the usage of the profilers (click any images for full size images): the Visual Studio menu options (under the ANTS menu). Starting the CLR Performance Profiler from the start menu yields its main window. If you follow the instructions after launching the program from the start menu (click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: the New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don't scale well to different DPI scales.
The profiler gives you the ability to profile:

    .NET Executable
    ASP.NET web application (hosted in IIS)
    ASP.NET web application (hosted in IIS Express)
    ASP.NET web application (hosted in Cassini Web Development Server)
    SharePoint web application (hosted in IIS)
    Silverlight 4+ application
    Windows Service
    COM+ server
    XBAP (local XAML browser application)
    Attach to an already running .NET 4 process

Choosing each option provides a varying set of other variables/options that one can set, including application arguments, operating path, whether to record I/O performance, which performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .NET project types, and make it simple to do so.  In most cases I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI to perform profiling, they would provide a Visual Studio panel with the information and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options the profiler gives us, it's time to test its abilities and features.

Horrendous Example Code – Prime Number Generator

One of my interests besides development is physics and math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, to test the abilities of the profiler, I decided I would write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool.

First off, the IPrimes interface (all code is downloadable at the end of the post; the snippets assume using System.Collections.Generic and System.Linq):

interface IPrimes
{
    IEnumerable<int> GetPrimes(int retrieve);
}

Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way:

public class DumbPrimes : IPrimes
{
    public IEnumerable<int> GetPrimes(int retrieve)
    {
        //store a list of primes already found
        var _foundPrimes = new List<int>() { 2, 3 };

        //if I ask for one or two primes, return what was asked for
        if (retrieve <= _foundPrimes.Count())
            return _foundPrimes.Take(retrieve);

        //the next number to look at
        int _analyzing = 4;

        //since I already determined I don't have enough,
        //execute at least once, and until the quantity is sufficed
        do
        {
            //assume prime until otherwise determined
            bool isPrime = true;

            //start dividing at 2
            //divide until the number is reached, or determined not prime
            for (int i = 2; i < _analyzing && isPrime; i++)
            {
                //if (i) goes into _analyzing without a remainder,
                //_analyzing is NOT prime
                if (_analyzing % i == 0)
                    isPrime = false;
            }

            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(_analyzing);

            //increment number to analyze next
            _analyzing++;
        } while (_foundPrimes.Count() < retrieve);

        return _foundPrimes;
    }
}

This is the simplest way to get primes, in my opinion.
It checks each number by the straight definition of a prime: is it divisible by anything besides 1 and itself? I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button.

Starting a new Profiling Session

So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, so now I want to check my code's efficiency by profiling it.  All I have to do is build the solution (surprised that initiating a profiling session doesn't do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live. You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes. After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds with it – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade-off: execution may be WAY slower under the profiler, but I am able to drill down, make improvements to specific problem areas, and then decrease execution time all around.

Analyzing the Profiling Session

After clicking 'Stop Profiling', the process running my application stopped, the entire execution time was automatically selected by ANTS, and the results were shown. There are a number of interesting things going on here; I am going to cover each in a section of its own.

Real Time Performance Counter Bar (top of screen)

At the top of the screen is the real time performance bar.  As your application is running, this constantly updates with the currently selected performance counter's status.  A couple of cool things to note: you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after the session is reloaded at a later time.  You can also zoom in, zoom out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph, below the time ticks but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the 'Call tree' panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.
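As a quick aside (our illustration, not part of the review): the counters ANTS charts here are regular Windows performance counters, which you can also read yourself from .NET via System.Diagnostics. For example, a minimal sketch that samples the same "% Processor Time" counter that feeds the %Processor time graph:

using System;
using System.Diagnostics;
using System.Threading;

class CounterPeek
{
    static void Main()
    {
        // The "_Total" instance aggregates across all cores.
        using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        {
            cpu.NextValue();        // the first sample always reads 0; prime the counter
            Thread.Sleep(1000);     // counters are rates, so wait between samples
            Console.WriteLine("CPU: {0:F1}%", cpu.NextValue());
        }
    }
}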
Clicking the arrow next to the Performance tab at the top lets you switch between different counters, if you have them selected.

Method Call Tree, ADO.NET Database Calls, File IO – Detail Panel

Red Gate really hit the mark here, I think. When you select a section of the run in the graph, the call tree populates with a hierarchical tree of method calls, with information regarding each of the methods.  By default, methods are hidden where the source is not provided (framework-type code); however, Red Gate has integrated Reflector into ANTS, so even if you don't have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that execute for only 1% of the overall calling method's time; in other words, working on making them better is not where your efforts should be focused. – Smart!

Source Panel – Detail Panel

The source panel is where you can see line-level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyway! As you can notice, there does seem to be a problem with how ANTS determines which line a call is actually completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit; however, I hope that Red Gate improves their counter algorithm soon to fix the positioning logic for these statistics. I did a small test just to demonstrate that the lines are correct without comments. For me, it isn't a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and deciding what makes sense, but it is something that would probably build up some irritation with time.

Feature – Suggest Method for Optimization

A neat feature, to really help those in need of a pointer, is the menu option under Tools to automatically suggest methods to optimize/improve. Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they had made the stars more visible for those of us who aren't great on sight.

Process Integration

I do think that this could have a place in my process.  After experimenting with the profiler, I think it would be a great benefit to do some development and testing, and then, after all the bugs are worked out, use the profiler to make sure nothing is hogging more than its fair share.  For example, with this program, I would have developed it, run it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in one method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I'm just going to make another implementation of IPrimes).
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient: primes greater than 3 fall into two buckets, 6k-1 and 6k+1 (every other residue modulo 6 is divisible by 2 or 3), and a number is prime if it is not divisible by any of the primes before it:

public class SmartPrimes : IPrimes
{
    public IEnumerable<int> GetPrimes(int retrieve)
    {
        //store a list of primes already found
        var _foundPrimes = new List<int>() { 2, 3 };

        //if I ask for one or two primes, return what was asked for
        if (retrieve <= _foundPrimes.Count())
            return _foundPrimes.Take(retrieve);

        //the next k to look at
        int _k = 1;

        //since I already determined I don't have enough,
        //execute at least once, and until the quantity is sufficed
        do
        {
            //assume prime until otherwise determined
            bool isPrime = true;
            int potentialPrime;

            //analyze 6k-1
            //assign the value to potential
            potentialPrime = 6 * _k - 1;

            //if any earlier prime divides this, it is NOT a prime number
            //using PLINQ for a quick boost
            isPrime = !_foundPrimes.AsParallel()
                .Any(prime => potentialPrime % prime == 0);

            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(potentialPrime);

            if (_foundPrimes.Count() == retrieve)
                break;

            //analyze 6k+1
            //assign the value to potential
            potentialPrime = 6 * _k + 1;

            //if any earlier prime divides this, it is NOT a prime number
            //using PLINQ for a quick boost
            isPrime = !_foundPrimes.AsParallel()
                .Any(prime => potentialPrime % prime == 0);

            //if it is prime, add to found list
            if (isPrime)
                _foundPrimes.Add(potentialPrime);

            //increment k to analyze next
            _k++;
        } while (_foundPrimes.Count() < retrieve);

        return _foundPrimes;
    }
}

Now there are definitely more things I can do to make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out, though, is the performance graph: notice anything odd?  The %Processor time is above 100%.  This is because there is now more than one core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time with the line details in the source of my DumbPrimes class, even with comments.  Odd.

Profiling Web Applications

The last thing that I wanted to cover, and it means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler for profiling web applications.  In my solution, I have a simple MVC4 application set up with one page and a single input form that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window; however, I did receive a UAC prompt, without prior notice, for a Red Gate helper app that integrates with the web server. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree. That's right – ANTS will actually group method calls by GET/POST operations, so it is easier to find out which action/page is giving the largest problems…  Pretty cool in my mind!

Overview

Overall, I think that the Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long way to go.
3 Biggest Pros:

    Ability to easily drill down from the time graph, to method calls, to source code
    Wide variety of counters to choose from when profiling your application
    Excellent integration/grouping of methods being called from web applications by request – BRILLIANT!

3 Biggest Cons:

    Issue regarding line details in the source view
    Nit pick – Processor time vs. Core time
    Nit pick – Lack of full integration with Visual Studio

Ratings

    Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that entails, and the lack of better integration with Visual Studio.
    Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do, especially with its large variety of performance counters – a definite plus!
    Features (9/10) – Besides the real time performance monitoring and the drill-downs that I've shown here, ANTS also has great integration with ADO.NET, with the ability to show database queries run by your application in the profiler.  This, with the line-level details, the web request grouping, Reflector integration, and various options to customize your profiling session, makes for a great set of features!
    Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy!
    UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you'll be optimizing Hello World in no time flat!
    Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend you try the application and see if it fits into your process, BUT remember there are still some kinks in it that will hopefully be worked out.

My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product.

Code

Feel free to download the code I used above – download via DropBox
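If you want to reproduce the with/without-profiler timings yourself, a minimal Stopwatch harness along these lines works (a sketch of ours, assuming the IPrimes implementations above live in the same project):

using System;
using System.Diagnostics;
using System.Linq;

class PrimeBenchmark
{
    static void Main()
    {
        const int count = 15000;

        // Same workload as in the review, timed once per implementation.
        foreach (IPrimes impl in new IPrimes[] { new DumbPrimes(), new SmartPrimes() })
        {
            var sw = Stopwatch.StartNew();
            int produced = impl.GetPrimes(count).Count();   // force full evaluation
            sw.Stop();

            Console.WriteLine("{0}: {1} primes in {2:F2}s",
                impl.GetType().Name, produced, sw.Elapsed.TotalSeconds);
        }
    }
}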

    Read the article

  • SSIS Catalog, Windows updates and deployment failures due to System.Core mismatch

    - by jamiet
    This is a heads-up for anyone doing development on SSIS. On my current project, where we are implementing a SQL Server Integration Services (SSIS) 2012 solution, we recently encountered a situation where we were unable to deploy any of our projects, even though we had deployed successfully in the past. Any attempt to use the deployment wizard resulted in this error dialog. The text of the error (for all you search engine crawlers out there) was:

    A .NET Framework error occurred during execution of user-defined routine or aggregate "create_key_information": System.IO.FileLoadException: Could not load file or assembly 'System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) ---> System.IO.FileLoadException: The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) System.IO.FileLoadException: System.IO.FileLoadException:     at Microsoft.SqlServer.IntegrationServices.Server.Security.CryptoGraphy.CreateSymmetricKey(String algorithm)    at Microsoft.SqlServer.IntegrationServices.Server.Security.CryptoGraphy.CreateKeyInformation(SqlString algorithmName, SqlBytes& key, SqlBytes& IV) . (Microsoft SQL Server, Error: 6522)

    After some investigation and a bit of back and forth with some very helpful members of the SSIS product team (hey Matt, Wee Hyong), it transpired that this was due to a .NET Framework fix that had been delivered via Windows Update. I took a look at the server update history, and indeed there had been some recently applied .NET Framework updates. This fix had (in the words of Matt Masson) "somehow caused a mismatch on System.Core for SQLCLR" and, as you may know, SQLCLR is used heavily within the SSIS Catalog. The fix was pretty simple – restart SQL Server. This causes the assemblies to be upgraded automatically. If you are using Data Quality Services (DQS) you may have experienced similar problems, which are documented at Upgrade SQLCLR Assemblies After .NET Framework Update. I am hoping the SSIS team will follow up with a more thorough explanation on their blog soon. You DBAs out there may be questioning why Windows Update is set to automatically apply updates on our production servers. We're checking that out with our hosting provider right now. You have been warned! @Jamiet

    Read the article

  • Visual Studio 2010 Feature Pack and Power Tools Released

    - by carlone
    Dear friends, this past Monday the first Visual Studio Feature Pack was released to the web (Release to Web, RTW). It includes the Microsoft Visual Studio 2010 Visualization and Modeling Feature Pack and the Visual Studio 2010 Productivity Power Tools. These are two very powerful tools you can benefit from, above all in terms of efficiency and productivity while programming.

    Visual Studio 2010 Productivity Power Tools

    In simple words, this tool is a set of extensions for Visual Studio (Professional edition and above) that improves programmer productivity. Among the many features it offers, we can highlight:

    Improvements to tab visualization in Visual Studio, including Scrollable Tabs, Vertical Tabs, and Pinned Tabs
    A more efficient Add Reference dialog for a project, with search capability
    Highlighting or marking of the current line (very useful on high-resolution monitors)
    HTML Copy, which lets you copy code without losing its formatting
    Ctrl + Click Go To Definition, a feature that shows symbols as hyperlinks while the Ctrl key is pressed, so you can navigate to a symbol's definition
    Code alignment when assigning values

    As you can see, there are many features you can take advantage of to improve your programming experience in the Visual Studio 2010 environment. To download the installer and get more information, go to: http://visualstudiogallery.msdn.microsoft.com/en-us/d0d33361-18e2-46c0-8ff2-4adea1e34fef

    Microsoft Visualization and Modeling Feature Pack

    With this tool you can get a lot of value out of developing UML-based models. Note: this pack is limited to MSDN subscribers and can only be downloaded from the MSDN subscriber downloads site. Visual Studio 2010 Ultimate is a prerequisite for using this tool. Among the many features offered, we can highlight:

    Code generation from UML diagrams
    Creating UML diagrams from code (e.g., generating a sequence diagram based on your application's code)
    Generating dependency graphs for web projects
    Importing class, sequence, and use case model elements from XMI 2.1 files

    Although it is limited to the Ultimate edition, with this tool software architects and development leads will be able to better control and design their applications. To see the documentation and videos, and to download the pack, go to: http://msdn.microsoft.com/en-us/vstudio/ff655021.aspx

    I hope these tools are of great use to you!

    Regards, Carlos A. Lone. Follow me on Twitter @carloslonegt

    Read the article

  • Vodacom Call Center Management on the NetBeans Platform

    - by Geertjan
    If you live in South Africa, you know about Vodacom. Vodacom is one of the dominant mobile communication companies in South Africa, and beyond, providing voice, messaging, data, and similar mobile services. Inside Vodacom there's an application named Helios, a call centre application that had its inception in 2009 and consists of two parts. Firstly, a web-based front-end that allows a call centre agent to service subscribers using a Google-like search on a knowledge base structured as a collection of FAQs. The web-based front-end uses plain old HTML + CSS + a good helping of jQuery and jQuery UI. This is delivered via JSR-168 portlets running on a cluster of IBM Portal 6 servers. In turn, the portlets communicate via RMI with several back-end EJBs containing the business logic. These EJBs are deployed on a cluster of WebLogic Application Servers, version 10.3.6. The second part is a NetBeans Platform application used for maintaining and constructing the knowledge base, i.e., the back-end of the web-based front-end. Helios is also used for a number of other maintenance functions, such as access permissions, user maintenance, and news bulletins.

    In the web-based front-end, call centre agents can enter search terms and are presented with a number of FAQs from the knowledge base. Upon selecting a FAQ article, the agent is presented with the article text, the process to guide the subscriber, system checks that display information specific to the subscriber, and links to related applications and articles. Applications are searchable and can be accessed using the same web-based front-end. Knowledge base FAQs are maintained using the Helios Maintenance Application, which is the Vodacom application built on the NetBeans Platform. Several thousand call centre agent user accounts are administered using the Helios Maintenance Application, whose main FAQ page is shown together with the About dialog.

    Vodacom is happy with the back-end NetBeans Platform application. However, the front-end stack runs on quite old technology. Ideally Vodacom would like to migrate the portlets to Oracle WebLogic Portal or Oracle WebCenter, but this hasn't been accomplished yet. Migrating makes sense, as the rest of the application server environment consists entirely of Oracle products.

    Read the article

  • Move Window Buttons Back to the Right in Ubuntu 10.04

    - by Trevor Bekolay
    One of the more controversial changes in the Ubuntu 10.04 beta is the Mac OS-inspired change to put window buttons on the left side. We'll show you how to move the buttons back to the right.

    Before

    While the change may or may not persist through to the April 29 release of Ubuntu 10.04, in the beta version the maximize, minimize, and close buttons appear in the top left of a window.

    How to move the window buttons

    The window button locations are dictated by a configuration file. We'll use the graphical program gconf-editor to change this configuration file. Press Alt+F2 to bring up the Run Application dialog box, enter "gconf-editor" in the text field, and click on Run. The Configuration Editor should pop up. The key that we want to edit is in apps/metacity/general. Click on the + button next to the "apps" folder, then beside "metacity" in the list of folders expanded for apps, and then click on the "general" folder. The button layout can be changed by changing the "button_layout" key. Double-click button_layout to edit it. Change the text in the Value text field to: menu:maximize,minimize,close. Click OK and the change will occur immediately, changing the location of the window buttons in the Configuration Editor.

    Note that this ordering of the window buttons is slightly different than the typical order; in previous versions of Ubuntu and in Windows, the minimize button is to the left of the maximize button. You can change the button_layout string to reflect that ordering, but using the default Ubuntu 10.04 theme, it looks a bit strange. If you plan to change the theme, or even just the graphics used for the window buttons, then this ordering may be more natural to you.

    After

    After this change, all of your windows will have the maximize, minimize, and close buttons on the right. What do you think of Ubuntu 10.04's visual change? Let us know in the comments!

    Read the article

  • Code refactoring with Visual Studio 2010 Part-1

    - by Jalpesh P. Vadgama
    Visual Studio 2010 is a great IDE (Integrated Development Environment) and we all use it day in, day out for our coding. There are many great features provided by Visual Studio 2010, and today I am going to show one of them, called code refactoring. This feature is one of the most unappreciated features of Visual Studio 2010, as lots of people are still not using it and are doing this stuff manually. To explain the feature, let's create a simple console application which will print a first name and last name, like the following. And following is the code for that.

using System;

namespace CodeRefractoring
{
    class Program
    {
        static void Main(string[] args)
        {
            string firstName = "Jalpesh";
            string lastName = "Vadgama";

            Console.WriteLine(string.Format("FirstName:{0}", firstName));
            Console.WriteLine(string.Format("LastName:{0}", lastName));
            Console.ReadLine();
        }
    }
}

    So as you can see, this is a very basic console application; let's run it to see the output. Now let's explore our first feature, called Extract Method. In Visual Studio you can do that via the Refactor menu: just select the code from which you want to extract a method, then click the Refactor menu, and then click Extract Method. Now I am selecting three lines of code and clicking on Refactor -> Extract Method, just like the following. Once you click it, a dialog box appears like the following. As you can see, I have highlighted two things: the first is Method Name, where I put Print as the method name, and the other is Preview method signature, where it is smart enough to extract the parameters too, as we have just selected three lines with Console.WriteLine. Once you click OK, it will extract the method and your code will look like this.

using System;

namespace CodeRefractoring
{
    class Program
    {
        static void Main(string[] args)
        {
            string firstName = "Jalpesh";
            string lastName = "Vadgama";

            Print(firstName, lastName);
        }

        private static void Print(string firstName, string lastName)
        {
            Console.WriteLine(string.Format("FirstName:{0}", firstName));
            Console.WriteLine(string.Format("LastName:{0}", lastName));
            Console.ReadLine();
        }
    }
}

    So as you can see in the above code, it has created a static method called Print and also passed the firstName and lastName parameters to it. Isn't that great!!! It has created the Print method as static because I am calling it from the static Main method. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

  • Ghost team foundation build controllers

    - by Martin Hinshelwood
    Quite often after an upgrade there are things left over. Most of the time they are easy to delete, but sometimes it takes a little effort. Even rarer are those times when something just will not go away no matter how much you try. We have had a ghost team build controller hanging around for a while now, and it had defeated my best efforts to get rid of it. The build controller was from our old TFS server from before our TFS 2010 beta 2 upgrade, and it was really starting to annoy me. Every time I try to delete it I get the message:

    Controller cannot be deleted because there are builds in progress - Manage Build Controller dialog

    Figure: Deleting a ghost controller does not always work.

    I ended up checking all of our 172 Team Projects for the build that was queued, but did not find anything. Jim Lamb pointed me to the "tbl_BuildQueue" table in the Team Project Collection database, and sure enough there was the nasty little beggar.

    Figure: The ghost build was easily spotted

    Adam Cogan asked me: "Why did you suspect this one?" Well, there are a number of things that led me to suspect it:

    QueueId is very low: look at the other items, they are in the thousands, not single digits
    ControllerId: I know there is only one legitimate controller, and I am assuming that 6 relates to "zzUnicorn"
    DefinitionId: this is a very low number, and I looked it up in "tbl_BuildDefinition" and it did not exist
    QueueTime: as we did not upgrade to TFS 2010 until late 2009, a date of 2008 for a queued build is very suspect
    Status: a status of 2 means that it is still queued

    This build must have been queued long ago when we were using TFS 2008, probably a beta, and it never got cleaned up. As controllers are new in TFS 2010, it would have created the "zzUnicorn" controller to handle any build servers that already existed. I had previously deleted the agent, but leaving the controller just looks untidy. Now that the ghost build has been identified, there are two options:

    1. Delete the row: I would not recommend ever deleting anything from the database to achieve something in TFS. It is really not supported.
    2. Set the Status to cancelled (Recommended): this is the best option, as TFS will then clean it up itself.

    So I set the Status of this build to 2 (cancelled) and sure enough it disappeared after a couple of minutes, and I was then able to delete the "zzUnicorn" controller.

    Figure: Almost completely clean

    Now all I have to do is get rid of that untidy "zzBunyip" agent, but that will require rewriting one of our build scripts, which will have to wait for now.

    Technorati Tags: ALM,TFBS,TFS 2010

    Read the article

  • Code refactoring with Visual Studio 2010-Part 3

    - by Jalpesh P. Vadgama
    I have been writing a few posts about the code refactoring features of Visual Studio 2010, and this blog post is one of them. In this post I am going to show you the Reorder Parameters feature in Visual Studio 2010. As a developer you might need to reorder the parameters of a method or procedure for better readability of the code, and doing this task manually is tedious. But the Visual Studio Reorder Parameters code refactoring feature can do this stuff within a minute. So let's see how it works. For this I have created a simple console application, which I have used in earlier posts. Following is the code for that.

using System;

namespace CodeRefractoring
{
    class Program
    {
        static void Main(string[] args)
        {
            string firstName = "Jalpesh";
            string lastName = "Vadgama";

            PrintMyName(firstName, lastName);
        }

        private static void PrintMyName(string firstName, string lastName)
        {
            Console.WriteLine(string.Format("FirstName:{0}", firstName));
            Console.WriteLine(string.Format("LastName:{0}", lastName));
            Console.ReadLine();
        }
    }
}

    The above code is very simple. It just prints a first name and last name via the PrintMyName method. Now I want to reorder the firstName and lastName parameters of PrintMyName. For that, first select the method and then click Refactor -> Reorder Parameters, like the following. Once you click it, a dialog box appears like the following, with options to move a parameter using arrow navigation. Now I am moving the lastName parameter to be the first parameter, like the following. Once you click OK, it will show a preview option where I can see the effect of the changes, like the following. Once I click Apply, my code is changed like the following.

using System;

namespace CodeRefractoring
{
    class Program
    {
        static void Main(string[] args)
        {
            string firstName = "Jalpesh";
            string lastName = "Vadgama";

            PrintMyName(lastName, firstName);
        }

        private static void PrintMyName(string lastName, string firstName)
        {
            Console.WriteLine(string.Format("FirstName:{0}", firstName));
            Console.WriteLine(string.Format("LastName:{0}", lastName));
            Console.ReadLine();
        }
    }
}

    As you can see, it's very easy to use this feature. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

< Previous Page | 332 333 334 335 336 337 338 339 340 341 342 343  | Next Page >