Search Results

Search found 45324 results on 1813 pages for 'open source'.


  • How to fix "ruby installation is missing psych (for YAML output)." on CentOS?

    - by ohho
After rvm installation on CentOS 5.8:

```
[rails@localhost ~]$ rvm -v
rvm 1.16.17
[rails@localhost ~]$ which ruby
~/.rvm/rubies/ruby-1.9.3-p286/bin/ruby
[rails@localhost ~]$ ruby -v
ruby 1.9.3p286 (2012-10-12 revision 37165) [i686-linux]
[rails@localhost ~]$ which gem
~/.rvm/rubies/ruby-1.9.3-p286/bin/gem
```

there is a warning:

```
$ gem -v
/home/rails/.rvm/rubies/ruby-1.9.3-p286/lib/ruby/1.9.1/yaml.rb:56:in `<top (required)>':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
1.8.24
```

I followed some advice:

```
$ rvm pkg install libyaml
Fetching yaml-0.1.4.tar.gz to /home/rails/.rvm/archives
Extracting yaml-0.1.4.tar.gz to /home/rails/.rvm/src
Prepare yaml in /home/rails/.rvm/src/yaml-0.1.4.
Configuring yaml in /home/rails/.rvm/src/yaml-0.1.4.
Compiling yaml in /home/rails/.rvm/src/yaml-0.1.4.
Installing yaml to /home/rails/.rvm/usr
Please note that it's required to reinstall all rubies:
rvm reinstall all --force
```

and then:

```
$ rvm reinstall all --force
Removing /home/rails/.rvm/src/ruby-1.8.7-p371...
Removing /home/rails/.rvm/rubies/ruby-1.8.7-p371...
No binary rubies available for: centos/5.8/i386/ruby-1.8.7-p371.
Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
Installing Ruby from source to: /home/rails/.rvm/rubies/ruby-1.8.7-p371, this may take a while depending on your cpu(s)...
ruby-1.8.7-p371 - #downloading ruby-1.8.7-p371, this may take a while depending on your connection...
ruby-1.8.7-p371 - #extracting ruby-1.8.7-p371 to /home/rails/.rvm/src/ruby-1.8.7-p371
ruby-1.8.7-p371 - #extracted to /home/rails/.rvm/src/ruby-1.8.7-p371
Applying patch /home/rails/.rvm/patches/ruby/1.8.7/stdout-rouge-fix.patch
Applying patch /home/rails/.rvm/patches/ruby/1.8.7/no_sslv2.diff
ruby-1.8.7-p371 - #configuring
ruby-1.8.7-p371 - #compiling
ruby-1.8.7-p371 - #installing
Removing old Rubygems files...
Installing rubygems-1.8.24 for ruby-1.8.7-p371 ...
Installation of rubygems completed successfully.
Saving wrappers to '/home/rails/.rvm/bin'.
ruby-1.8.7-p371 - #adjusting #shebangs for (gem irb erb ri rdoc testrb rake).
ruby-1.8.7-p371 - #importing default gemsets (/home/rails/.rvm/gemsets/)
Install of ruby-1.8.7-p371 - #complete
Please be aware that you just installed a ruby that requires 2 patches just to be compiled on up to date linux system.
This may have known and unaccounted for security vulnerabilities.
Please consider upgrading to Ruby 1.9.3-286 which will have all of the latest security patches.
Making gemset ruby-1.8.7-p371 pristine.
Making gemset ruby-1.8.7-p371@global pristine.
Removing /home/rails/.rvm/src/ruby-1.9.3-p286...
Removing /home/rails/.rvm/rubies/ruby-1.9.3-p286...
No binary rubies available for: centos/5.8/i386/ruby-1.9.3-p286.
Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
Installing Ruby from source to: /home/rails/.rvm/rubies/ruby-1.9.3-p286, this may take a while depending on your cpu(s)...
ruby-1.9.3-p286 - #downloading ruby-1.9.3-p286, this may take a while depending on your connection...
ruby-1.9.3-p286 - #extracting ruby-1.9.3-p286 to /home/rails/.rvm/src/ruby-1.9.3-p286
ruby-1.9.3-p286 - #extracted to /home/rails/.rvm/src/ruby-1.9.3-p286
ruby-1.9.3-p286 - #configuring
ruby-1.9.3-p286 - #compiling
ruby-1.9.3-p286 - #installing
Removing old Rubygems files...
Installing rubygems-1.8.24 for ruby-1.9.3-p286 ...
Installation of rubygems completed successfully.
Saving wrappers to '/home/rails/.rvm/bin'.
ruby-1.9.3-p286 - #adjusting #shebangs for (gem irb erb ri rdoc testrb rake).
ruby-1.9.3-p286 - #importing default gemsets (/home/rails/.rvm/gemsets/)
Install of ruby-1.9.3-p286 - #complete
Making gemset ruby-1.9.3-p286 pristine.
Making gemset ruby-1.9.3-p286@global pristine.
```

Too bad, the warning is still there:

```
$ gem -v
/home/rails/.rvm/rubies/ruby-1.9.3-p286/lib/ruby/1.9.1/yaml.rb:56:in `<top (required)>':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
1.8.24
```

How can I get rid of the warning?

UPDATE (applying rvm reinstall 1.9.3 --movable):

```
$ rvm reinstall 1.9.3 --movable
Removing /home/rails/.rvm/src/ruby-1.9.3-p286...
Removing /home/rails/.rvm/rubies/ruby-1.9.3-p286...
Fetching yaml-0.1.4.tar.gz to /home/rails/.rvm/archives
Extracting yaml-0.1.4.tar.gz to /home/rails/.rvm/src
Prepare yaml in /home/rails/.rvm/src/yaml-0.1.4.
Configuring yaml in /home/rails/.rvm/src/yaml-0.1.4.
Compiling yaml in /home/rails/.rvm/src/yaml-0.1.4.
Installing yaml to /home/rails/.rvm/rubies/ruby-1.9.3-p286
Installing Ruby from source to: /home/rails/.rvm/rubies/ruby-1.9.3-p286, this may take a while depending on your cpu(s)...
ruby-1.9.3-p286 - #downloading ruby-1.9.3-p286, this may take a while depending on your connection...
ruby-1.9.3-p286 - #extracting ruby-1.9.3-p286 to /home/rails/.rvm/src/ruby-1.9.3-p286
ruby-1.9.3-p286 - #extracted to /home/rails/.rvm/src/ruby-1.9.3-p286
Applying patch /home/rails/.rvm/patches/ruby/1.9.3/ruby-multilib.patch
Error running 'patch -F 25 -p1 -N -f -i /home/rails/.rvm/patches/ruby/1.9.3/ruby-multilib.patch',
please read /home/rails/.rvm/log/ruby-1.9.3-p286/patch.apply.ruby-multilib.log
There has been an error applying the specified patches. Halting the installation.
Making gemset ruby-1.9.3-p286 pristine.
Making gemset ruby-1.9.3-p286@global pristine.
```
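Editor's note: a commonly suggested fix for this warning (a sketch, not verified on this exact box; it assumes sudo rights and that the platform's yum repositories carry libyaml) is to install the system-wide libyaml development headers first, so the 1.9.3 build can compile psych against them, and then reinstall just that ruby rather than using `--movable` (whose multilib patch is what failed above):

```bash
# Hypothetical fix: give ruby's build a system libyaml to link against.
sudo yum install -y libyaml libyaml-devel   # system libyaml plus headers
rvm reinstall 1.9.3-p286                    # rebuild ruby; psych should now compile
gem -v                                      # the warning should be gone
```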

    Read the article

  • How to Play FLAC Files in Windows 7 Media Center & Player

    - by Mysticgeek
An annoyance for music lovers who enjoy the FLAC format is that there's no native support in WMP or WMC. If you're a music enthusiast who prefers FLAC, we'll look at adding support for it to Windows 7 Media Center and Player. For the following article we are using Windows 7 Home Premium 32-bit edition.

Download and Install madFLAC v1.8
The first thing we need to do is download and install the madFLAC v1.8 decoder (link below). Just unzip the file and run install.bat. You'll get a message that it has been successfully registered; click Ok. To verify everything is working, open up one of your FLAC files with WMP, and you'll get the following message. Check the box Don't ask me again for this extension and click Yes. Now Media Player should play the track you've chosen.

Delete Current Music Library
But what if you want to add your entire collection of FLAC files to the Library? If you already have WMP set up as your default music player, unfortunately we need to remove the current library and delete the database. The best way to manage the music library in Windows 7 is via WMP 12. Since we don't want to delete songs from the computer, open WMP, press Alt+T and navigate to Tools \ Options \ Library. Now uncheck the box Delete files from computer when deleted from library and click Ok. Then in your Library press Ctrl+A to highlight all of the songs, and hit the Delete key. If you have a lot of songs in your library (like on our system) you'll see a dialog box while it collects all of the information. After all of the data is collected, make sure the radio button next to Delete from library only is marked and click Ok. Again you'll see the Working progress window while the songs are deleted.

Deleting Current Database
Now we need to make sure we're starting out fresh. Close out of Media Player, then we'll basically follow the same directions The Geek pointed out for fixing the WMP Library. Click on Start, type services.msc into the search box and hit Enter. Scroll down and stop the service named Windows Media Player Network Sharing Service. Next, navigate to the following directory:

%USERPROFILE%\Local Settings\Application Data\Microsoft\Media Player\

The main file to delete is CurrentDatabase_372.wmdb, though if you want, you can delete them all. If you're uneasy about deleting these files, make sure to back them up first. (A scripted version of this step appears at the end of this article.) After you restart WMP you can begin adding your FLAC files. For those of us with large collections, it's extremely annoying to see WMP try to pick up all of your media by default. To remove the other directories go to Organize \ Manage Libraries, then open the directories you want to remove. For example, here we're removing the default libraries it tries to check for music. Remove the directories you don't want it to gather content from in each of the categories. We removed all of the other collections and only added the FLAC music directory from our home server.

SoftPointer Tag Support Plugin
Even though we were able to get FLAC files to play in WMP and WMC at this point, there's another utility from SoftPointer to add. It enables FLAC (and other file formats) to be picked up in the library much more easily. It has a long name but is effective - the M4a/FLAC/Ogg/Ape/Mpc Tag Support Plugin for Media Player and Media Center (link below). Just install it by accepting the defaults, and you'll be glad you did.

After installing it and re-launching Media Player, give it some time to collect all of the data from your FLAC directory - it can take a while. In fact, if your collection is huge, just walk away and let it do its thing. If you try to use it right away, WMP slows down considerably while updating the library. Once the library is set up, you'll be able to play your FLAC tunes in Windows 7 Media Center as well as in Windows Media Player 12.

Album Art
One caveat is that some of our albums didn't show any cover art, but we were usually able to get it by right-clicking the album, selecting Find album info, and confirming the album information is correct.

Conclusion
Although this seems like several steps to go through to play FLAC files in Windows 7 Media Center and Player, it works really well after it's set up. We haven't tried this on a 64-bit machine; the process should be similar, but you might want to make sure the codecs you use are 64-bit. We're sure there are other methods out there that some of you use, and if so leave us a comment and tell us about it.

Download madFLAC v1.8
M4a/FLAC/Ogg/Ape/Mpc Tag Support Plugin for Media Player and Media Center from SoftPointer
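If you'd rather script the stop-service / delete-database step above, something like the following batch sketch should work. Two assumptions on our part: WMPNetworkSvc is the short name of the sharing service, and on Windows 7 the folder above resolves to %LOCALAPPDATA%\Microsoft\Media Player. Back the folder up first.

```bat
rem Hypothetical batch version of the manual database reset - run elevated.
net stop WMPNetworkSvc
del "%LOCALAPPDATA%\Microsoft\Media Player\CurrentDatabase_372.wmdb"
net start WMPNetworkSvc
```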

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspx

If we are using SignalR, the connection lifecycle is handled very well by the library itself. For example, when we connect to a SignalR service from the browser through the SignalR JavaScript client, the connection is established; if we refresh the page, close the tab or browser, or navigate to another URL, the connection is closed automatically. This behavior is well documented here:

"In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method."

But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks the browser's address change event and, based on the route table the user defined, launches the proper view and controller. Hence in AngularJS the address changes but the web page stays put; all changes to the page content are triggered by Ajax, so there are no page unload and load events. This is the reason why SignalR cannot handle disconnects correctly when working with AngularJS. If we dig into the SignalR JavaScript client source code we will find the code below. It monitors the browser page "unload" and "beforeunload" events and sends the "stop" message to the server to terminate the connection. But in AngularJS the page change events are hijacked, so SignalR will not receive them and will not stop the connection.

```javascript
// wire the stop handler for when the user leaves the page
_pageWindow.bind("unload", function () {
    connection.log("Window unloading, stopping the connection.");
    connection.stop(asyncAbort);
});

if (isFirefox11OrGreater) {
    // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close.
    // #2400
    _pageWindow.bind("beforeunload", function () {
        // If connection.stop() runs in beforeunload and fails, it will also fail
        // in unload unless connection.stop() runs after a timeout.
        window.setTimeout(function () {
            connection.stop(asyncAbort);
        }, 0);
    });
}
```

Problem Reproduce
In the code below I created a very simple example to demonstrate this issue. Here is the SignalR server-side code:

```csharp
public class GreetingHub : Hub
{
    public override Task OnConnected()
    {
        Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId));
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId));
        return base.OnDisconnected();
    }

    public void Hello(string user)
    {
        Clients.All.hello(string.Format("Hello, {0}!", user));
    }
}
```

Below is the configuration code which hosts the SignalR hub in an ASP.NET Web API project with IIS Express:

```csharp
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Map("/signalr", map =>
        {
            map.UseCors(CorsOptions.AllowAll);
            map.RunSignalR(new HubConfiguration()
            {
                EnableJavaScriptProxies = false
            });
        });
    }
}
```

Since we will host the AngularJS application in Node.js, in another process and port, the SignalR connection will be cross-domain, so I need to enable CORS above. On the client side I have a Node.js file to host the AngularJS application as a web server; you can use any web server you like, such as IIS, Apache, etc. Below is the "index.html" page, which contains a navigation bar so that I can change the page/state. As you can see, I added jQuery, AngularJS, and the SignalR JavaScript client library, as well as my AngularJS entry source file "app.js":

```html
<html data-ng-app="demo">
<head>
    <script type="text/javascript" src="jquery-2.1.0.js"></script>
    <script type="text/javascript" src="angular.js"></script>
    <script type="text/javascript" src="angular-ui-router.js"></script>
    <script type="text/javascript" src="jquery.signalR-2.0.3.js"></script>
    <script type="text/javascript" src="app.js"></script>
</head>
<body>
    <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1>
    <div>
        <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> |
        <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
    </div>
    <div data-ui-view></div>
</body>
</html>
```

Below is "app.js". My SignalR logic is in the "View1" page, and it connects to the server once the controller executes. The user can specify a user name and send it to the server; all clients located on this page then receive the server-side greeting message through SignalR.

```javascript
'use strict';

var app = angular.module('demo', ['ui.router']);

app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
    $stateProvider.state('view1', {
        url: '/view1',
        templateUrl: 'view1.html',
        controller: 'View1Ctrl' });

    $stateProvider.state('view2', {
        url: '/view2',
        templateUrl: 'view2.html',
        controller: 'View2Ctrl' });

    $locationProvider.html5Mode(true);
}]);

app.value('$', $);
app.value('endpoint', 'http://localhost:60448');
app.value('hub', 'GreetingHub');

app.controller('View1Ctrl', function ($scope, $, endpoint, hub) {
    $scope.user = '';
    $scope.response = '';

    $scope.greeting = function () {
        proxy.invoke('Hello', $scope.user)
            .done(function () {})
            .fail(function (error) {
                console.log(error);
            });
    };

    var connection = $.hubConnection(endpoint);
    var proxy = connection.createHubProxy(hub);
    proxy.on('hello', function (response) {
        $scope.$apply(function () {
            $scope.response = response;
        });
    });
    connection.start()
        .done(function () {
            console.log('signalr connection established');
        })
        .fail(function (error) {
            console.log(error);
        });
});

app.controller('View2Ctrl', function ($scope, $) {
});
```

When we go to View 1, the server-side "OnConnected" method is invoked, and when any page sends a message to the server, all clients get the response. If we close one of the clients, the server-side "OnDisconnected" method is invoked, which is correct. But if we click the "View 2" link in the page, "OnDisconnected" is not invoked, even though the content and the browser address have changed.

This can leave many SignalR connections open between the client and server. Below is what happened after I clicked the "View 1" and "View 2" links four times: as you can see, there are 4 live connections.

Solution
Since the root cause is that AngularJS hijacks the page events SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler for "$stateChangeStart", and invoked the "stop" method of "connection" if its state was not "disconnected".

```javascript
var connection;
app.run(['$rootScope', function ($rootScope) {
    $rootScope.$on('$stateChangeStart', function () {
        if (connection && connection.state && connection.state !== 4 /* disconnected */) {
            console.log('signalr connection abort');
            connection.stop();
        }
    });
}]);
```

Now if we refresh the page and navigate to View 1, the connection is opened. In this state, if we click the "View 2" link, the content changes and the SignalR connection is closed automatically.

Summary
In this post I demonstrated an issue when using SignalR with AngularJS: the connection is not closed automatically when we navigate to another page/state in AngularJS. The solution I described above is to make the SignalR connection a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here. Making the SignalR connection a global variable might not be the best solution; it is just the easiest to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since an AngularJS factory is a singleton object, we can safely keep the connection variable in the factory's function scope.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
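As a rough illustration of the factory approach suggested in the summary above (our sketch, not part of the original sample; the service and method names are made up), the same cleanup can live inside a singleton service:

```javascript
// Hypothetical factory wrapping the SignalR connection; '$' and 'endpoint'
// are the values registered in app.js above.
app.factory('signalRService', ['$rootScope', '$', 'endpoint', function ($rootScope, $, endpoint) {
    var connection = $.hubConnection(endpoint);

    // Close the connection whenever ui-router starts a state change.
    $rootScope.$on('$stateChangeStart', function () {
        if (connection.state !== $.signalR.connectionState.disconnected) {
            connection.stop();
        }
    });

    return {
        createHubProxy: function (hubName) { return connection.createHubProxy(hubName); },
        start: function () { return connection.start(); }
    };
}]);
```

Controllers would then inject signalRService instead of creating their own connection, so every view shares the one managed connection.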

    Read the article

  • Diagnose PC Hardware Problems with an Ubuntu Live CD

    - by Trevor Bekolay
So your PC randomly shuts down or gives you the blue screen of death, but you can't figure out what's wrong. The problem could be bad memory or hardware related, and thankfully the Ubuntu Live CD has some tools to help you figure it out.

Test your RAM with Memtest86+
RAM problems are difficult to diagnose - they can range from annoying program crashes to crippling reboot loops. Even if you're not having problems, when you install new RAM it's a good idea to thoroughly test it. The Ubuntu Live CD includes a tool called Memtest86+ that will do just that - test your computer's RAM! Unlike many of the Live CD tools that we've looked at so far, Memtest86+ has to be run outside of a graphical Ubuntu session. Fortunately, it only takes a few keystrokes.

Note: If you used UNetbootin to create an Ubuntu flash drive, then Memtest86+ will not be available. We recommend using the Universal USB Installer from Pendrivelinux instead (persistence is possible with Universal USB Installer, but not mandatory).

Boot up your computer with an Ubuntu Live CD or USB drive, and you will be greeted with the boot menu. Use the down arrow key to select the Test memory option and hit Enter. Memtest86+ will immediately start testing your RAM. If you suspect that a certain part of memory is the problem, you can select specific portions of memory by pressing "c" and changing that option. You can also select specific tests to run. However, the default settings of Memtest86+ will exhaustively test your memory, so we recommend leaving the settings alone. Memtest86+ runs a variety of tests that can take some time to complete, so start it before you go to bed to give it adequate time.

Test your CPU with cpuburn
Random shutdowns - especially during computationally intensive tasks - can be a sign of a faulty CPU, power supply, or cooling system. A utility called cpuburn can help you determine if one of these pieces of hardware is the problem.

Note: cpuburn is designed to stress test your computer - it will drive the CPU hard and cause it to heat up, which may exacerbate small problems that otherwise would be minor. It is a powerful diagnostic tool, but should be used with caution.

Boot up your computer with an Ubuntu Live CD or USB drive, and choose to run Ubuntu from the CD or USB drive. When the desktop environment loads, open the Synaptic Package Manager by clicking on the System menu in the top-left of the screen, then selecting Administration, and then Synaptic Package Manager. Cpuburn is in the universe repository. To enable the universe repository, click on Settings in the menu at the top, and then Repositories. Add a checkmark in the box labeled "Community-maintained Open Source software (universe)" and click Close. In the main Synaptic window, click the Reload button. After the package list has reloaded and the search index has been rebuilt, enter "cpuburn" in the Quick search text box. Click the checkbox in the left column, and select Mark for Installation. Click the Apply button near the top of the window. As cpuburn installs, it will caution you about the possible dangers of its use. (A condensed terminal version of these steps appears at the end of this article.) Assuming you wish to take the risk - and if your computer is randomly restarting constantly, it's probably worth it - open a terminal window by clicking on the Applications menu in the top-left of the screen and then selecting Accessories > Terminal.

Cpuburn includes a number of tools to test different types of CPUs. If your CPU is more than six years old, see the full list; for modern AMD CPUs, use the terminal command burnK7, and for modern Intel processors, use the terminal command burnP6. Our processor is an Intel, so we ran burnP6. Once it started up, it immediately pushed the CPU up to 99.7% total usage, according to the Linux utility "top". If your computer is having a CPU, power supply, or cooling problem, it is likely to shut down within ten or fifteen minutes. Because of the strain this program puts on your computer, we don't recommend leaving it running overnight - if there's a problem, it should crop up relatively quickly. Cpuburn's tools, including burnP6, have no interface; once they start running, they keep driving your CPU until you stop them. To stop a program like burnP6, press Ctrl+C in the terminal window that is running it.

Conclusion
The Ubuntu Live CD provides two great testing tools to diagnose a tricky computer problem, or to stress test a new computer. While they are advanced tools that should be used with caution, they're extremely useful and easy enough that anyone can use them.
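For keyboard-centric readers, the Synaptic steps above condense into a few terminal commands. This is a sketch assuming universe is already enabled for your release; cpuburn was packaged for Ubuntu releases of this era but has since been dropped from newer ones.

```bash
# Install cpuburn from the universe repository, then run the matching burner:
sudo apt-get update
sudo apt-get install cpuburn
burnP6            # modern Intel CPUs; use burnK7 for modern AMD CPUs
# In a second terminal, watch the load:
top               # burnP6 should sit near the top at ~99% CPU
# Stop the test with Ctrl+C in the burnP6 terminal when you're done.
```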

    Read the article

  • Create an iTunes Account without a credit card

    - by Matthew Guay
The iTunes Store offers a large variety of free content, but to download it you have to have an account. Usually you have to enter your credit card information to sign up, but here's an easy way to get an iTunes account for free downloads without entering any payment info.

Although the iTunes Store is known for paid downloads of movies, music, and more, it also has a treasure trove of free media. Some of it, including Podcasts and iTunes U educational content, does not require an account to download. However, any other free content, including free iPhone/iPod Touch apps and free or promotional music, videos, and TV shows, all requires an account to download. If you try to download a free movie or music download, you will be required to enter payment information. Even though your card will not be charged, it will be kept on file so you can be charged if you download a for-pay item. However, if you only plan to download free items, it may be preferable to not have your account linked to a credit card. The following steps will get you an account without entering your credit card info.

Getting Started
First, make sure you have iTunes installed. If you don't already have it, download and install it (link below) with the default settings. Now open iTunes, and click the iTunes Store link on the left. Click the App Store link on the top of this page. Select a free app to download; a simple way to do this is to scroll down to the Top Free Apps box on the right side, hover your mouse over the first item, and click on the Free button that appears. A popup will open asking you to sign in with your Apple ID. Click "Create New Account", then click Continue to create your account. Check the box to accept the Store Terms and Conditions, and click Continue. Enter your email address, password, security question, and date of birth, uncheck the boxes to get email if you don't want it, then click Continue. Now you will be asked to provide a payment method. Notice that the last option says None! Click that bullet option, then enter your billing address. Simply enter your normal billing address, even though you are not entering a payment method. Click Continue and your account will be created! If you get the Address Verification screen, just verify your country and click Done. An email will be sent to you to verify your account. Click on the link in your email to verify it; iTunes will launch and you're prompted to enter the Apple ID and password you just created. Your account is successfully created! Now you can easily download any free media from iTunes. Keep an eye on the Free on iTunes box on the bottom of the iTunes Store page for interesting downloads, or if you have an iPhone or iPod Touch, watch the popular free downloads on the Apps page. And of course there is always great content on iTunes U to grab for free as well.

Purchasing for-pay media
If you want to purchase an item on the iTunes Store later, simply click on the item to download as normal, then click Buy to proceed with the purchase. iTunes will prompt you that you need to enter payment information to complete the purchase. Enter your Apple ID email and password, and then add the payment information as prompted.

Remove Payment Information from an iTunes Account
If you've already entered payment information into your iTunes account and would like to remove it, click Store in the top iTunes menu, and select View My Account. Enter your Apple ID email and password, and click View Account. This will open your account information. Click the Edit Payment Information button. Now click the None button to remove your payment information, and click Done to save the changes. Your account will now prompt you to enter payment information if you try to make a purchase. You could repeat these steps after making a purchase if you do not want iTunes to keep your payment info on file.

Conclusion
This is a great way to make an iTunes account without entering your credit card, or to remove your credit card info from your account. Parents may especially enjoy this tip, as they can have an iTunes account on their kids' computer or iPod Touch without worrying about them spending money with it.

Links
Download iTunes

    Read the article

  • CodePlex Daily Summary for Saturday, June 12, 2010

CodePlex Daily Summary for Saturday, June 12, 2010

New Projects
AdverTool (Advertisement tool): AdverTool is an online tool which integrates the most popular advertisement networks (such as Microsoft adCenter, Google AdWords, Yahoo! Search Mar...
Authentication Configuration Tool for SharePoint: Helpful tools to automatically configure SharePoint 2007 and 2010 for forms based authentication and other authentication mechanisms.
Bacicworx: A C# .NET 3.5 helper library containing functionality for compression, encryption, hashes, downloading, the PayPal API, text analysis and generation, a...
BlogEngine.Net iPhone Theme: A port of BETouch originally created by soundbbg
BT UPnP Nat Library: This library makes it extremely simple to add NAT UPnP port forwarding to your .NET applications. Developed in C# using .NET 4.0
CheckBox & CheckBoxList Validators: These validators fill a much-needed gap in the ASP.NET server controls
DataFactories: The DataFactories project was created to provide a standardized interface to SSAS and MSSQL data. However, as it is implemented using the Abstract ...
DVD Swarm: Converts unprotected DVD video & audio streams to H.264 with AAC/Vorbis.
Frio IM: Frio IM is a cross-protocol instant messenger.
jiuyuan: jiuyuan management system
MGM: MyGroupManager is a simple graphical interface written in PowerShell that can be deployed to Active Directory users to simplify the management of grou...
MGR2010: This is the MA thesis by Witold Stanik & Michał Sereja, PJWSTK.
Nauplius.ActiveDirectory: Web-based Active Directory management.
Partial rendering control using JQuery: This article shows a web custom control that allows partial rendering using jQuery
REG - The Random Entertainment Generator: A simple tool to make your mind up when you can't figure out what you want to do!
Runes of Magic - Heilerrechner: A healing calculator for the healers of Runes of Magic (www.runes.ofmagic.com)
Semagsoft Calculator: Basic calculator for Windows XP, Vista and Windows 7.
SO League Tables: SOLT: Stack Overflow League Tables. A fun little app that lets you compare your Stack Overflow performance for each month, relative to other member...
Stacky StackApps .Net Client Library: StackApps is a REST API which provides access to the stackoverflow.com family of websites. Stacky is a .NET client for that API. Stacky current...
TwitterDotNet: TwitterDotNet is a Twitter library for the .NET Framework.
ValiVIN: VIN (Vehicle Identification Number) validator - validate VIN numbers
WorkLogger: Simple work hour logger in WPF

New Releases
AdverTool (Advertisement tool): Official releases: Please visit http://advertool.org to access the complete source code and downloads.
Authentication Configuration Tool for SharePoint: Auth Config Tool (WSS 3.0, MOSS 2007 version): This tool automates the setup of dual authentication web applications in SharePoint that use Windows Authentication and Forms Based Authentication....
BlogEngine.Net iPhone Theme: Version 0.1: Original version 0.1 from soundbbg
Braintree Client Library: Braintree-2.3.0: Return AvsErrorResponseCode, AvsPostalCodeResponseCode, AvsStreetAddressResponseCode, CurrencyIsoCode, CvvResponseCode with Transaction Return Cr...
BT UPnP Nat Library: Bt_Upnp Nat Library Alpha: Alpha release of the library
CNZK Library: Silverlight Behaviors - Deep Zoom Tag Filter: Behavior library for Silverlight 4 containing a Deep Zoom Tag Filter behavior. Sample at the Expression Gallery http://gallery.expression.microsof...
Demina: Demina Binaries version 0.2: Updated binaries. This release contains all of the new features, including simple animation transitions.
DTLoggedExec: 1.0.0.2: Fixed a bug that prevented loading packages from SSIS Package Store; added support for the {filename} placeholder in both Data Flow Profiling and CSV ...
DVD Swarm: v0.8.10.611: Initial release, mostly stable.
Exchange 2010 RBAC Editor (RBAC GUI) - updated on 6/11/2010: RBAC Editor 0.9.5.1: Now supports creating and editing Role Assignment Policies; the rest of the stuff is the same - still a long way to go :) Please use email address i...
Extend SmallBasic: Teaching Extensions v.021: Compatible with SmallBasic v0.9. Lame version of TicTacToe added - more coming later.
Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.1.1 GA Released: Today we are releasing Visifire 3.1.1 GA with the following features: Logarithmic Axis, ShowIndicator() in Chart, HideIndica...
Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.5.4 GA Released: Today we are releasing Visifire 3.1.1 GA with the following features: Logarithmic Axis, ShowIndicator() in Chart, HideIndicator() in Chart...
Keep Focused - an enhanced tool for Time Management using Pomodoro Technique: Release 0.3.1 Alpha: Technical patch. The previous release 0.3 Alpha had some errors and missing features. It was probably not built from the source...
Mesopotamia Experiment: Mesopotamia 1.2.96: Bug fixes - fixed duplicate cells being added on creating new cells via mutations; fixed a bug where organisms without IO synapses were getting ios...
NLog - Advanced .NET Logging: Nightly Build 2010.06.11.001: Changes since the last build: no changes. Unit test results: Passed 243/243 (100%), Passed 243/243 (100%), Passed 267/267 (100%), Passed 269/269 (100%)...
Partial rendering control using JQuery: JQuery Web Control V 1.0: This is the first release of the code. It includes the source code and a web application to see how it works
phpxw: Phpxw2.0: Framework directory layout: ./_mod - module directory; ./phpxw/ - framework core; ./phpxw/common/ - framework core functions; ./phpxw/system/ - framework core base classes; ./phpxw/userlib/ - user extension classes; ./temp...
Questionable Content Screensaver: Questionable Content Screensaver: Should be pretty self-explanatory; install the appropriate version for your computer (x64 or x86). Features include caching comics for offline viewi...
Quick Performance Monitor: Version 1.4.1: Added an option to change the 'minimum' maximum value visible on the graph at run-time. Also fixed a number of other bugs.
Refix - .NET dependency management: Refix v0.1.0.82 ALPHA: This has now been run against a real-life project to tease out some of the issues. While this remains alpha software, which you use at your own ris...
Rhyduino - Arduino and Managed Code: Beta Release (v0.8.2): Contents: Sample Project - demonstrates basic functionality and is flooded with code comments, so it's capable of being used as a learning tool. It d...
Runes of Magic - Heilerrechner: Rom_Heiler_0.1: First version of "RoM Heilerrechner". Requires the .NET 4.0 Framework, available here: http://www.microsoft.com/downloads/details.aspx?...
Semagsoft Calculator: 2.0: New theme and bug fixes
Silverlight Reporting: Release 2: Updated to correct an issue in the report footer XAML, and to add support for a calculated report footer.
Stacky StackApps .Net Client Library: Beta Preview: This is a beta preview to go along with the StackApps beta.
TwitterDotNet: TwitterDotNet Library: first version
UnOfficial AW Wrapper dot Net: Aw Wrapper 1.0.0.0 (5.0): New functions :D
ValiVIN: ValiVIN first release: First iteration. Methods: IsValid(string vin) - checks if a string is a valid VIN (returns true or false); GetCheckSumValue(string vin) - returns...
VCC: Latest build, v2.1.30611.0: Automatic drop of latest build
ViewModelSupport: ViewModelSupport 1.0: Version 1.0. More information: http://houseofbilz.net/archives/2010/05/08/adventures-in-mvvm-my-viewmodel-base/ http://houseofbilz.net/archives/201...
VolgaTransTelecomClient: v.1.0.3.0: v.1.0.3.0
WCF Client Generator: Version 0.9.3.19259: Changed: always generate full type names for parameters and return types
WCF Client Generator: Version 0.9.3.21153: Fixed: service contracts namespace generation. Added: templates assembly code base read from configuration
Xen: Graphics API for XNA: Xen 2.0 ALPHA: This is a very early alpha for Xen 2.0. Please note: the documentation for this alpha has not been updated yet. Xen 2.0 is not backwards compatib...
ZGuideTV.NET: ZGuideTV.NET 0.93: Friday, April 11, 2010 (ZGuideTV.NET beta 9 build 0.93) - English below. Added: content classification in the description (legend display if...

Most Popular Projects
CAML Generator; SharePoint Geographic Data Visualizer; DbIdiom for ADO.NET Core; study; DTSRun Job Runner; XBStudio.asp.net.automation; Silverlight load on demand with MEF; Cloud Business Services; SharePoint 2010 Taxonomy Import Utility; STS Federation Metadata Editor

Most Active Projects
Rhyduino - Arduino and Managed Code; patterns & practices – Enterprise Library; jQuery Library for SharePoint Web Services; NB_Store - Free DotNetNuke Ecommerce Catalog Module; Community Forums NNTP bridge; Cassandraemon; BlogEngine.NET; MediaCoder.NET; Microsoft Silverlight Media Framework; Andrew's XNA Helpers

    Read the article

  • CodePlex Daily Summary for Monday, March 08, 2010

CodePlex Daily Summary for Monday, March 08, 2010

New Projects
38fj4ncg2: 38fj4ncg2
Ac#or: An actor framework written in Mono (C#). Makes it easy to write multithreaded programs with the actor model.
Aerial Phone Book: An ASP app that allows more than one user to see the contacts in a phone book and add new contacts. This way a group of users can maintain a common phon...
AmiBroker Plug-Ins with C#: Plug-ins for AmiBroker built with the Microsoft .NET Framework and C#.
AxUnit: AxUnit is a unit testing framework for Microsoft Dynamics Ax (X++). It's an extension to the SysTest framework provided with DAX 4.0 and newer versi...
Botola PHP Class: A PHP class that gives you information about the teams in the Moroccan football championship.
Code examples, utilities and misc from Lars Wilhelmsen [MVP]: Misc. stuff from Lars Wilhelmsen.
Codename T: Codename T is in the very basic stages of development. It should be ready for beta testing by the start of April.
ComBrowser: combrowser
Compact Unity: Compact Unity is a lightweight dependency injection container with support for constructor and property call injection, written in .NET Compact ...
FAST for Sharepoint MOSS 2010 Query Tool: Tool to query FAST for SharePoint and SharePoint 2010 Enterprise Search. It utilizes the search web services to run your queries so you can test y...
Icarus Scene Engine: Icarus Scene Engine is a cross-platform 3D eLearning, games and simulation engine, integrating open source APIs into a cohesive cross-platform solu...
jQuery.cssLess: jQuery plugin that interprets and loads LESS css files (http://lesscss.org).
Katara Dental Phase II: Second phase of Kdpl.
Lunar Phase Silverlight Gadget: See the moon phase, percent of illumination and corresponding zodiac sign from your desktop.
Reflection Studio: Reflection Studio is a development tool that encapsulates all my work around reflection, performance and WPF. It allows injecting performance traces...
RSNetty: RSNetty is a RuneScape private server programmed in the Java programming language.
Simple WMV/ASF files muxer/demuxer: Simple WMV files muxer/demuxer implemented in C#/C++. It has a simple WPF-based UI and allows copy/replace operations on video, audio and script stre...
sm: manager
TFS Proxy Monitor: A WinForms application that allows administrators to monitor TFS Server Proxy statistics remotely.
umbracoSamplePackageCreator (beta): This is an early version of a simple package creator for Umbraco as a Visual Studio project. Currently with an XSLT extension and a user control. O...
WatchersNET.TagCloud: 3D Flash TagCloud module for DotNetNuke
Writerous: A Plug-in For Windows Live Writer: This plug-in for Live Writer allows the user to create their post in Live Writer and then publish to Posterous.com

New Releases
.NET Extensions - Extension Methods Library: Release 2010.05: Added a common set of extension methods for IDataReader, DataRow and DataRowView to access field values in a type-safe manner using type-dedicated ...
AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.1: This is just a demo plug-in which shows how you can write plug-ins for AmiBroker with fully managed code.
AxUnit: Version 1: AxUnit lets you write unit test assertions in Dynamics Ax like this: assert.that(2, is.equalTo(2)); Installation instructions (Microsoft Dynamics ...
BattLineSvc: V2: Fixed a bug where sometimes the line would not show up, even with the 90-second boot-up delay. This was due to the window being created too early ...
Botola PHP Class: Botola API: the PHP class
BugTracker.NET: BugTracker.NET 3.4.0: In the screen capture app, "Go to website" now goes to the bug you just created. In the screen capture app, fixed where the crosshairs weren't always to...
Bulk Project Delete: Version 1.1.1: A minor fix to 1.1: fixes a problem that indicated some projects were not found on the server when they were in fact found. This problem only exist...
C# Linear Hash Table: Linear Hash Table b3: Remove functionality added. Now IDictionary compliant, but most functions not yet tested.
Code examples, utilities and misc from Lars Wilhelmsen [MVP]: LarsW.MexEdmxFixer 1.0: A quick hack to fix the Edmx files output by mex.exe (a tool in the SQL Modeling suite - November 2009 CTP) so that they can be opened in the desig...
Code Snippet With Syntaxhighlighter Support for Windows Live Writer: Version 5.0.2: Minor update. Added brushes for F#, PowerShell and Erlang. Now a Windows Presentation Framework (WPF) application. ComponentFactory.Krypton.Toolki...
Compact Unity: Compact Unity 1.0: Release.
Compact Unity: CompactUnity 1.0: Release.
FAST for Sharepoint MOSS 2010 Query Tool: Version 0.9: The tool is fully functioning. All of the cases for exceptions may not have been caught yet. I wanted to release a version to allow people to use...
Fluent Ribbon Control Suite: Fluent Ribbon Control Suite RC (for .NET 4.0 RC): Build for .NET 4.0 RC. Includes Fluent.dll (with .pdb and .xml) and a test application compiled with .NET 4.0 RC. BEAWARE! Fluent for .NET 4.0 RC is...
FluentNHibernate.Search: 0.2 Beta: 0.2 Beta. Fixed: #7275 - field mapping without specifying "Name"; fixed: #7271 - StackOverflow exception while configuring embedded mappings; fixed:...
InfoService: InfoService v1.5 Beta 9: InfoService beta release. Please note this is a BETA. It should be stable, but I can't guarantee that! So use it at your own risk. Please read Plug...
jQuery.cssLess: jQuery.cssLess 0.2: Version supports variables, mixins and nested rules. TODO: lower-scope variables and mixins should not delete higher-scope variables and mixins ...
Lunar Phase Silverlight Gadget: Lunar Phase: First public beta for Lunar Phase Silverlight Gadget. It's a stable release but it doesn't have auto-update yet. That will come with the final release ...
MapWindow GIS: MapWindow 6.0 msi (March 7): This is an update that fixes a number of problems with the multi-point features and the M and Z features, as well as enabling multi-part creation using...
Mews: Mews.Application V0.7: Installation instructions. New features: 15390, 15085. Fixed issues: 16173, 16552. This happens when the database maintenance process kicks in during sta...
sELedit: sELedit v1.0a: Added: basic exception handlers (load/save/export); added: List 57 support (no search and replace); added: MYEN 1.3.1 Client -> CN 1.3.6 Server export...
Sem.Sync: 2010-03-07 - End user client for Xing to Outlook: This client does include the binaries for syncing Xing contacts to Microsoft Outlook. It contains only the binaries to sync from Xing to Outloo...
Sem.Sync: 2010-03-07 - Synchronization Manager: This client provides a more advanced (and more complex) GUI that allows you to select from two included templates (you can add your own, too) a...
SharePoint Outlook Connector: Source Code for Version 1.2.3.2: Source code for version 1.2.3.2
SharePoint Video Player Web Part & SharePoint Video Library: Version 2.0.0: Release notes: New: the new SharePoint Video Player release includes a SharePoint video template to create your own video library. Changes: the Shar...
SilverSprite: SilverSprite 3.0 Alpha 2: These are the latest binaries for SilverSprite. The major changes for this release are that we are now using the XNA namespaces (no more #Iif SILVE...
Simple WMV/ASF files muxer/demuxer: Initial release: Initial release
Starter Master Pages for SharePoint 2010: Starter Master Pages for SP2010 - RC: Release Candidate of Starter Master Pages for SharePoint 2010 by Randy Drisgill http://blog.drisgill.com _starter.master - Starter Master ...
Text Designer Outline Text Library: 11th minor release: New feature: Reflection!!
ToolSuite.ValidationExpression: 01.00.01.002: Second release of the validation class; the assembly file is ready to use; the documentation is complete.
Truecrafting: Truecrafting 0.51: Overhauled Truecrafting code: combined all engines into one mage engine, made the engine and artificial intelligence support any spec, and achieved a...
WatchersNET.TagCloud: WatchersNET.TagCloud 01.00.00: First release
WCF Contrib: WCF Contrib v2.1 Mar07: This release is the final version of the v2.1 Beta that was published on February 10th. Below you will find the changes that were made. Changes from v...
WillStrohl.LightboxGallery Module for DotNetNuke: WillStrohl.LightboxGallery v1.02.00: This version of the Lightbox Gallery Module adds the following features: new Lightbox provider: Fancybox; thumbnails generated keeping their aspec...
Writerous: A Plug-in For Windows Live Writer: Writerous v1.0: This is the first release of Writerous.
WSDLGenerator: WSDLGenerator 0.0.0.5: Uses the updated CommandLineParser.dll; code uses 'ServiceDescriptionReflector' instead of custom code; added an option to support SharePoint 2007 com...
Xpress - ASP.NET MVC personal blog application: xpress2.1.0.beta.bin: An upgrade of the original DsJian 1.0, renamed to xpress; this is the official release.
YSCommander: Version 1.0.1.0: Fixed bug: first start with a non-existing data file.

Most Popular Projects
MetaSharp; WBFS Manager; Rawr; AJAX Control Toolkit; Microsoft SQL Server Product Samples: Database; Silverlight Toolkit; Windows Presentation Foundation (WPF); ASP.NET; Microsoft SQL Server Community & Samples; Image Resizer Powertoy Clone for Windows

Most Active Projects
Umbraco CMS; Rawr; SDS: Scientific DataSet library and tools; BlogEngine.NET; jQuery Library for SharePoint Web Services; patterns & practices – Enterprise Library; Ionics Isapi Rewrite Filter; Farseer Physics Engine; Fluent Assertions; Fasterflect - A Fast and Simple Reflection API

    Read the article

  • How to fix the Nautilus desktop on Linux Mint

    - by user59530
So I'm using Linux Mint 13 with Cinnamon, and suddenly there are no icons on the desktop and right-click doesn't work; it's like the desktop doesn't start up at all, but the Cinnamon interface and everything else are working just fine. This happens only when I open the session with Cinnamon; if I start the session with classic GNOME or MATE, the desktop works. I tried to re-install Cinnamon but nothing changed. Then I noticed that there are some little problems in Nautilus (sometimes menus aren't the color they're supposed to be), so I'm convinced that Nautilus might be the problem, but I don't know how to fix this. I've tried a few things but I'm starting to fear that I'm only making it worse. Also, when I open the terminal and type in nautilus, here's what shows up - any help?

```
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:85:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:192:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:228:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:275:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:310:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:389:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:737:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1095:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1137:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1755:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1856:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1873:18: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1889:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1947:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1954:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:1967:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:2025:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:2075:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:2090:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gtk-widgets.css:2195:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: gnome-panel.css:92:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:15:15: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:15:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:79:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:84:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:113:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nautilus.css:118:18: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:15:15: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:15:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:79:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:84:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:113:17: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: nemo.css:118:18: Not using units is deprecated. Assuming 'px'.
(nautilus:2906): Gtk-WARNING **: Theme parsing error: unity.css:21:18: Not using units is deprecated. Assuming 'px'.
Initializing nautilus-dropbox 1.4.0
Initializing nautilus-open-terminal extension
** Message: Initializing gksu extension...
```
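Editor's note: those Gtk "Not using units is deprecated" lines are generally harmless theme noise, so they are probably not what broke the desktop. One commonly suggested thing to check (a sketch only, assuming the GNOME 3.4-era gsettings schema that Mint 13 ships) is whether desktop-icon drawing got switched off:

```bash
# Check whether the file manager is allowed to draw the desktop:
gsettings get org.gnome.desktop.background show-desktop-icons
# If it prints 'false', turn it back on:
gsettings set org.gnome.desktop.background show-desktop-icons true
# Quit the running Nautilus; the session should respawn it and redraw the desktop:
nautilus -q
```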

    Read the article

  • Enabling HTTP caching and compression in IIS 7 for ASP.NET websites

    - by anil.kasalanati
Caching – There are two ways to set HTTP caching:
1. The max-age property of the Cache-Control header.
2. The Expires header.

Doing the changes via the IIS console:
1. Select the website for which you want to enable caching, then select HTTP Response Headers in the features tab.
2. Select Expire Web content; changing the After setting generates the max-age value for the Cache-Control header.

Then you can use a tool like Fiddler and see the 304 (Not Modified) response coming from the server.

Doing it the web.config way – add a static content section inside system.webServer:

```xml
<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
  </staticContent>
</system.webServer>
```

Compression – By default static compression is enabled in IIS 7.0, but the only thing that falls under that category is CSS, which is not enough for most websites using lots of JavaScript. If you thought that simply enabling dynamic compression would fix this, you are wrong, so please follow the steps below.

On some machines dynamic compression is not installed; here is how to install it:
1. Open Server Manager.
2. Roles > Web Server (IIS).
3. Role Services (scroll down) > Add Role Services.
4. Add the desired role (Web Server > Performance > Dynamic Content Compression).
5. Next, Install, wait… done!

To enable it:
1. Open Server Manager.
2. Roles > Web Server (IIS) > Internet Information Services (IIS) Manager.
3. Next pane: Sites > Default Web Site > Your Web Site.
4. Main pane: IIS > Compression.

Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch. Go to C:\Windows\System32\inetsrv\config and open applicationHost.config. The MIME map is as follows:

```xml
<mimeMap fileExtension=".js" mimeType="application/javascript" />
```

So the staticTypes section should be changed to include:

```xml
<add mimeType="application/javascript" enabled="true" />
```

Doing it the web.config way – we can add the following in the system.webServer section:

```xml
<system.webServer>
  <urlCompression doDynamicCompression="false" doStaticCompression="true" />
</system.webServer>
```

More Information/References –
- http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
- http://www.west-wind.com/weblog/posts/98538.aspx
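If you prefer scripting these settings over clicking through the console, appcmd can set them too. A sketch, run from an elevated prompt: the urlCompression command follows the documented pattern, while the clientCache attribute syntax in the second command is our assumption about how nested attributes are addressed.

```
%windir%\system32\inetsrv\appcmd.exe set config -section:urlCompression /doStaticCompression:"True" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/staticContent /clientCache.cacheControlMode:"UseMaxAge" /clientCache.cacheControlMaxAge:"365.00:00:00"
```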

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint via AJAX:

<form>
Name: <input type="text" name="name" value="Rick" />
Value: <input type="text" name="value" value="12" />
Entered: <input type="text" name="entered" value="12/01/2011" />
<input type="button" id="btnSend" value="Send" />
</form>
<script type="text/javascript">
$("#btnSend").click( function() {
    $.post("samples/PostMultipleSimpleValues?action=kazam",
           $("form").serialize(),
           function (result) { alert(result); });
});
</script>

or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

$.post("samples/PostMultipleSimpleValues?action=kazam",
       { name: "Rick", value: 1, entered: "12/01/2012" },
       function (result) { alert(result); });

On the wire this generates a simple POST request with URL-encoded values in the content:

POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: application/json
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://localhost/AspNetWebApi/FormPostTest.html
Content-Length: 41
Pragma: no-cache
Cache-Control: no-cache

name=Rick&value=12&entered=12%2F10%2F2011

Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:

[HttpPost]
public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
{
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action);
}

you'll find that you get an HTTP 404 error and:

{ "Message": "No HTTP resource was found that matches the request URI…" }

Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, the dynamic JObject/JValue objects might also work. Model binding is fine in many use cases, but it can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the one-parameter-mapping-type-per-method-signature approach can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case and - maybe more so - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one, so this is something that I think is fairly important, but unfortunately missing in the base Web API installation.
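For contrast, here is roughly what that model-binding alternative looks like - a minimal sketch in which the PostedValues class and method name are mine, not from the original post:

public class PostedValues
{
    public string Name { get; set; }
    public int Value { get; set; }
    public DateTime Entered { get; set; }
}

[HttpPost]
public string PostTypedObject(PostedValues values, string action = null)
{
    // Web API binds the URL-encoded POST body to the PostedValues object;
    // the simple 'action' parameter still comes from the query string.
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         values.Name, values.Value, values.Entered, action);
}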
Creating a Custom Parameter Binder

Luckily Web API is greatly extensible and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

public class SimplePostVariableParameterBinding : HttpParameterBinding
{
    private const string MultipleBodyParameters = "MultipleBodyParameters";

    public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
        : base(descriptor)
    {
    }

    /// <summary>
    /// Check for simple binding parameters in POST data. Bind POST
    /// data as well as query string data.
    /// </summary>
    public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                             HttpActionContext actionContext,
                                             CancellationToken cancellationToken)
    {
        // Body can only be read once, so read and cache it
        NameValueCollection col = TryReadBody(actionContext.Request);

        string stringValue = null;
        if (col != null)
            stringValue = col[Descriptor.ParameterName];

        // try reading query string if we have no POST/PUT match
        if (stringValue == null)
        {
            var query = actionContext.Request.GetQueryNameValuePairs();
            if (query != null)
            {
                var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                if (matches.Count() > 0)
                    stringValue = matches.First().Value;
            }
        }

        object value = StringToType(stringValue);

        // Set the binding result here
        SetValue(actionContext, value);

        // now, we can return a completed task with no result
        TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
        tcs.SetResult(default(AsyncVoid));
        return tcs.Task;
    }

    private object StringToType(string stringValue)
    {
        object value = null;

        if (stringValue == null)
            value = null;
        else if (Descriptor.ParameterType == typeof(string))
            value = stringValue;
        else if (Descriptor.ParameterType == typeof(int))
            value = int.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int32))
            value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int64))
            value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(decimal))
            value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(double))
            value = double.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(DateTime))
            value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(bool))
        {
            value = false;
            if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                value = true;
        }
        else
            value = stringValue;

        return value;
    }

    /// <summary>
    /// Read and cache the request body
    /// </summary>
    private NameValueCollection TryReadBody(HttpRequestMessage request)
    {
        object result = null;

        // try to read out of cache first
        if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
        {
            // parses a string like firstname=Hongmei&lastname=Ge
            result = request.Content.ReadAsFormDataAsync().Result;
            request.Properties.Add(MultipleBodyParameters, result);
        }

        return result as NameValueCollection;
    }

    private struct AsyncVoid
    {
    }
}

The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request, the binding first reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it. Subsequent parameters then use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped. Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

GlobalConfiguration.Configuration.ParameterBindingRules
    .Insert(0, (HttpParameterDescriptor descriptor) =>
    {
        var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

        // Only apply this binder on POST and PUT operations
        if (supportedMethods.Contains(HttpMethod.Post) ||
            supportedMethods.Contains(HttpMethod.Put))
        {
            var supportedTypes = new Type[] { typeof(string),
                                              typeof(int),
                                              typeof(decimal),
                                              typeof(double),
                                              typeof(bool),
                                              typeof(DateTime) };

            if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                return new SimplePostVariableParameterBinding(descriptor);
        }

        // let the default bindings do their work
        return null;
    });

The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hookup code are in place, you can pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

Summary

Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft providing a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible and makes it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their minds and let them switch back to take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with untyped JObject or the form collection mostly), having proper POST-to-parameter mapping makes things much easier.
I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added. This is especially true since this binding doesn't affect existing bindings.

Resources
SimplePostVariableParameterBinding Source on GitHub
Global.asax hookup source
Mapping URL Encoded POST Values in ASP.NET Web API

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web API  AJAX

    Read the article

  • CodePlex Daily Summary for Wednesday, December 08, 2010

    CodePlex Daily Summary for Wednesday, December 08, 2010

Popular Releases

Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.
SubtitleTools: SubtitleTools 1.1: Added a better to-UTF-8 converter (not just from windows-1256 to utf-8). Refactored OpenFileDialogs to a more MVVM-friendly behavior.
SuperSocket, an extensible socket application framework: SuperSocket 1.0 SP1: Fixed bugs: fixed a potential bug where the running state hadn't been updated after the socket server stopped; fixed a synchronization issue when clearing timed-out sessions; fixed a bug in ArraySegmentList; fixed a bug in getting a configuration value.
HydroDesktop - CUAHSI Hydrologic Information System Desktop Application: 1.1.340: HydroDesktop 1.1 Stable Release (Build 340).
CslaGenFork: CslaGenFork 4.0 CTP 2: The version is 4.0.1 CTP2, released 2010 December 7, and includes the following files: CslaGenFork 4.0.1-2010-12-07 Setup.msi, Templates-2010-10-07.zip. For getting-started instructions, refer to the How To section. Overview of the changes: since CTP1, 53 work items were closed (28 features, 24 issues and 1 task). During these 60 days a lot of work has been done in several areas. First the stereotypes: EditableRoot is OK, EditableChild is OK, EditableRootCollection is OK, Editable...
My Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.
EnhSim: EnhSim 2.2.0 ALPHA: 2.2.0 ALPHA. This release adds in the changes for 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Updated En...
ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. New stuff: popup, WhiteSpaceFilterAttribute; tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.
nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
TweetSharp: TweetSharp v2.0.0.0 - Preview 4: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 4 changes: reintroduced fluent interface support via satellite assembly; added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development; numerous fixes reported by preview users. Preview 3 changes: numerous fixes and improvements to core engine; Twitter API coverage: a...
myCollections: Version 1.2: New in version 1.2: big performance improvement; new design (added Outlook-style view, new detail view, new Group By...); added sort by media; added manage movie studio; zoom preference is now saved; media names are now editable; added Portuguese version; you can now hide the details panel; added support for FLAC tags; you can now import books from a BibTeX XML file; bug fixing.
mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web - for installation on hosting. System requirements: .NET 4.0, MSSQL 2008 or MySql (tables are auto-created in the database; with .\SQLEXPRESS the database itself is auto-created in the App_Data folder). mytrip.mvc 1.0.49.0 beta src - system requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (tables are auto-created in the database; with .\SQLEXPRESS the database itself is auto-created in the App_Data folder), Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src, download and ...
MiniTwitter: 1.62: MiniTwitter 1.62 (Japanese release notes garbled in the original).
Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, this release makes a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: performance improvements based on real-world application experience. We determined the biggest bottlenecks and we found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...
Chronos WPF: Chronos v2.0 Beta 3: Release notes: updated introduction document; updated Visual Studio 2010 extension (vsix) package; added horizontal scrolling to the main window TaskBar; added new styles for ListView, ListViewItem, GridViewColumnHeader, ...; added a new WindowViewModel class (allowing data fetching); added a new Navigate method (with several overloads) to the NavigationViewModel class (protected); reimplemented Task usage for the WorkspaceViewModel.OnDelete method; removed the reflection effect...
MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.
DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Updated to support jQuery 1.4.2; updated to support jQuery UI 1.8.6; updated to Visual Studio 2010; new WebControls with samples added: Autocomplete WebControl, Button WebControl, ToggleButton WebControl. The example web site is included in the source code project.
LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: additional Interior properties; Group/Ungroup methods for Range; bugfix in COM reference handling for the Application object in some classes. Release+Samples V0.7g: contains the runtime DLL and sample projects. Sample projects: COMAddinExample - demonstrates a version-independent COM add-in; Example01 - background colors and borders for cells; Example02 - font attributes and alignment for cells; Example03 - number formats; Example04 - shapes, WordArt, P...
ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1. Added Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See full details of what's new here: http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.
ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes. Atomic CMS installation guide.

New Projects

Core Motives Tracking Web Part: This C# web part was created to allow users of SharePoint 2007 to place CoreMotives (http://www.coremotives.com) tracking code on any web part page. Can also be used in master pages and page layouts.
CPEBook: Osef
EatFrsh: Keep track of the contents of your fridge. Eat items while they are still fresh.
ENUH10Publisher: A project for students at eCademy H10.
EVE Community Portal: EVE Community Portal is a complete community portal for EVE alliances (and corporations), which will host everything an EVE alliance needs, from a forum and blogs to every tool you could wish for and more...
FER CSLA.NET Compact: .NET Compact Framework edition of the CSLA.NET application framework.
Finger Mouse: A simple program that lets you use the mouse via your webcam (no physical mouse needed). Note: an i3 or faster processor is recommended.
Gambaru: Gambaru is a tech demo of a 3D game developed in Delphi. It uses the Newton physics engine and OpenGL (the GL-Scene package). It was presented as a final course project at SENAC-SP by Marcelo, Daniel and Thais in 2007. We explore the Samurai universe. Contribute, code, dream!
GameBoyEmu: A GameBoy emulator (description garbled in the original).
GenericList Inherits IDataReader ( ListEx<T>() : IDataReader ): This generic list implements the IDataReader interface and displays the usage of LINQ, lambda expressions and some creative thought around working with collection types. I hope it can serve as a reference for your projects.
Guard: Provides the argument validation class Guard, ubiquitously used throughout all .NET projects but with no central place for updates.
HolidayChecker Library: HolidayChecker Library is a useful library that allows programmers to know whether a certain date is a holiday or not. This library also allows the calculation of Easter day based on the algorithm of Tondering. It's developed in C#.
JuniorTour - Junior Golf Tour Silverlight Application: JuniorTour makes it easier for golf tour operators to publish tournament results for multiple divisions and multiple seasons. You'll no longer have to manually edit player pages, tournament results, or compute rankings. It's developed in C# and Silverlight.
LibGT: LibGT aims to reduce the amount of overall code a programmer has to write in C#. This library provides many shortcuts and extension methods to facilitate robust development rapidly, even without the use of an IDE.
Mayhem: Mayhem makes it simple for end users to control complex events with their PCs. Whether you want to update a Twitter status when your cat is detected by your webcam, or monitor your serial ports and trigger events, it's no problem with Mayhem -- wreak your own personal havoc.
mmht: (description garbled in the original)
MobilePolice: my mobilepolice project
OpenCallback: This is an implementation of the callback handler pattern that allows you to invoke the callback handler methods without type switching or if...else if chains.
Project Baron: Simple, yet powerful project management system.
PS3Lib for SDK 1.92: Creation of a development library for SDK 1.92.
Remote Controller for Trackmania: Evzrecon is a remote controller for Trackmania Forever dedicated servers, much like XASECO but written in Java.
SharePointSocial: SharePointSocial is focused on taking social interaction within SharePoint to the next level, extending beyond the corporate boundaries. Corporate listening, one-click interaction through key social media outlets, and data-driven management and reporting are planned features.
SIGS: ss
Simple Routing: Simple Routing allows you to associate arbitrary routes with static methods in an ASP.NET application through attribution.
SlimCRM: A reference application for programming best practices.
TalkBoard: (description garbled in the original)
uHelpsy - Umbraco Helper Library: uHelpsy is a tiny (but growing) library which makes programmatically interacting with Umbraco 4.5.2 much more pleasant. It provides helper methods for creating and updating nodes, working with the Umbraco cache, and dealing with unpublished nodes.
VCSS: VCSS is a decorated version of CSS (Cascading Style Sheets) that allows you to specify variables and also nest rules. The VCSS file can be compiled into a standard CSS file to be used on any website.
Vote: (description garbled in the original)
WP7 GPS Simulator: Use this project to simulate GPS while doing development on your Windows Phone 7.
WPF_CAD: This is a college project for a computer graphics course. It consists of developing CAD software implementing all the graphics algorithms.
Xml-Racing: Build a 3-tier web application that can query a database and present the information as charts and maps.

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters

Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

Research

Research the tool(s) that you want to use. Some tools provide all of the features you would need while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate them can really reduce the overhead of adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can then quickly identify what was changed in a commit (add/delete/modify), just like with code changes.

Don't settle on just one tool; identify several. Then work with the team to evaluate the tools. Have the team do some tests of the following scenarios with each tool:

Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
Add/drop tables.
Add/drop procedures/functions/views.
Alter tables (rename columns, add columns, remove columns).
Massage data – migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn't want to lose the data.
Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
Create a copy of a database on demand.
Backup/restore databases locally.

Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL-based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add the overhead of learning a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren't cross-platform anyway. IMO stick to SQL-based migrations.

Reconciling Production

If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod, as they are now in sync. Commit this to your VCS of choice. Add whatever schema-change tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database. Your tool should support doing this for you. You can add this table to production when you do your next release. Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.
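To make the idea of a change script concrete, here is a minimal sketch of what one SQL-based migration might look like. The SchemaVersion table, the file name, and the GO separator are illustrative assumptions - tools such as TSqlMigrations and RoundHouse each have their own conventions for tracking the applied version and splitting batches:

-- 0002_add_customer_email.sql: migrates the schema from version 1 to version 2
BEGIN TRANSACTION;

-- schema change
ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
GO

-- massage existing data explicitly rather than relying on implicit casts
UPDATE dbo.Customer SET Email = LoginName + '@example.com' WHERE Email IS NULL;

-- record the new schema version so the tool knows this script has been applied
UPDATE dbo.SchemaVersion SET Version = 2;

COMMIT TRANSACTION;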
Say No to Shared Dev DBs

Simply put, you wouldn't dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev DB server once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won't be able to easily script out their own changes separately from those that others are working on.

First prod release

Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

Evaluation

After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone. A good example was adding transactional support to TSqlMigrations. We ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors! Another good example is generating change scripts. We had been making these manually for months. I found an open source project called Open DB Diff and integrated it with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else in development, never be afraid to look for tools that make your job easier!

Enjoy
-Wes

    Read the article

  • Use Advanced Font Ligatures in Office 2010

    - by Matthew Guay
    Fonts can help your documents stand out and be easier to read, and Office 2010 helps you take your fonts even further with support for OpenType ligatures, stylistic sets, and more. Here's a quick look at these new font features in Office 2010.

Introduction

Starting with Windows 7, Microsoft has made an effort to support more advanced font features across their products. Windows 7 includes support for advanced OpenType font features and laid the groundwork for advanced font support in programs with the new DirectWrite subsystem. It also includes the new font Gabriola, which includes an incredible number of beautiful stylistic sets and ligatures. Now, with the upcoming release of Office 2010, Microsoft is bringing advanced typographical features to the Office programs we love. This includes support for OpenType ligatures, stylistic sets, number forms, contextual alternative characters, and more. These new features are available in Word, Outlook, and Publisher 2010, and work the same on Windows XP, Vista and Windows 7.

Please note that Windows does include several OpenType fonts that include these advanced features. Calibri, Cambria, Constantia, and Corbel all include multiple number forms, while Consolas, Palatino Linotype, and Gabriola (Windows 7 only) include all the OpenType features. And, of course, these new features will work great with any other OpenType fonts you have that contain advanced ligatures, stylistic sets, and number forms.

Using advanced typography in Word

To use the new font features, open a new document, select an OpenType font, and enter some text. Here we have Word 2010 in Windows 7 with some random text in the Gabriola font. Click the arrow at the bottom of the Font section of the ribbon to open the font properties. Alternately, select the text and click Font. Now, click on the Advanced tab to see the OpenType features. You can change the ligatures setting… choose Proportional or Tabular number spacing… and even select Lining or Old-style number forms. Here's a comparison of Lining and Old-style number forms in Word 2010 with the Calibri font.

Finally, you can choose various stylistic sets for your font. The dialog always shows 20 styles, whether or not your font includes that many. Most fonts include only 1 or 2; Gabriola includes 6. Here's lorem ipsum text using the Gabriola font with stylistic set 6. Impressive, huh? The font ligatures change based on context, so they will automatically change as you are typing. Watch the transition as we typed the word Microsoft in Word with Gabriola stylistic set 6. Here's another example, showing the fi and tt ligatures in Calibri. These effects work great in Word 2010 on XP, too. And, since Outlook uses Word as its editing engine, you can use the same options in Outlook 2010. Note that these font effects may not show up the same if the recipient's email client doesn't support advanced OpenType typography. It will, of course, display perfectly if the recipient is using Outlook 2010.

Using advanced typography in Publisher 2010

Publisher 2010 includes the same advanced font features. This is especially nice for those using Publisher for professional layout and design. Simply insert a text box, enter some text, select it, and click the arrow at the bottom of the font box, as in Word, to open the font properties. This font options dialog is actually more advanced than Word's font options. You can preview your font changes on sample text right in the properties box.
You can also choose to add or remove a swash from your characters.

Conclusion

Advanced typographical effects are a welcome addition to Word and Publisher 2010, and they are very impressive when coupled with modern fonts such as Gabriola. From designing elegant headers to using old-style numbers, these features are very useful and fun. Do you have a favorite OpenType font that includes advanced typographical features? Let us know in the comments!

More Reading

Advances in typography in Windows 7 – Engineering 7 Blog
New features in Microsoft Word 2010
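These options can also be set programmatically. Below is a minimal sketch using the Word 2010 interop object model, which exposes matching properties on the Font object (Ligatures, StylisticSet, NumberForm); the exact enum member names should be verified against your interop assembly version:

using Word = Microsoft.Office.Interop.Word;

class LigatureDemo
{
    static void Main()
    {
        // Assumes Word 2010 is installed; starts a visible instance
        var app = new Word.Application { Visible = true };
        var doc = app.Documents.Add();
        var range = doc.Range();

        range.Text = "Microsoft 1234567890";
        range.Font.Name = "Gabriola";
        range.Font.Size = 28;

        // The same settings as the Advanced tab of the Font dialog
        range.Font.Ligatures = Word.WdLigatures.wdLigaturesStandard;
        range.Font.StylisticSet = Word.WdStylisticSet.wdStylisticSet06;
        range.Font.NumberForm = Word.WdNumberForm.wdNumberFormOldstyle;
    }
}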

    Read the article

  • Translating with Google Translate without API and C# Code

    - by Rick Strahl
    Some time back I created a database-driven ASP.NET Resource Provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish. Here's what this looks like in the resource administration form: when a resource is displayed, the user can click a Translate button and it will show the current resource text and then let you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them - pressing Use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice and quick way to get a translation going.

Ch… Ch… Changes

Originally, both implementations basically did some screen scraping of the interactive Web sites and retrieved translated text out of the result HTML. Screen scraping is always kind of an iffy proposition as content can be changed easily, but surprisingly that code worked for many years without fail. Recently however, Google at least changed their input pages to use AJAX callbacks and the page updates no longer worked the same way. End result: the Google translate code was broken. Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download, the API key is an issue if I want people to have the samples work out of the box - the only way I could even do this is by sharing my API key (not allowed). However, after a bit of spelunking and playing around with the public site, I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code I was able to get translation back up and working with minimal fuss, by parsing out the JSON these AJAX calls return.

Warning: this is hacky code, but after a fair bit of testing I found it to work very well with all sorts of languages and accented and escaped text etc., as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen scraping mechanism like I did and needed a non-API based replacement. Here's the code:

/// <summary>
/// Translates a string into another language using Google's translate API JSON calls.
/// <seealso>Class TranslationServices</seealso>
/// </summary>
/// <param name="text">Text to translate. Should be a single word or sentence.</param>
/// <param name="fromCulture">
/// Two letter culture (en of en-us, fr of fr-ca, de of de-ch)
/// </param>
/// <param name="toCulture">
/// Two letter culture (as for fromCulture)
/// </param>
public string TranslateGoogle(string text, string fromCulture, string toCulture)
{
    fromCulture = fromCulture.ToLower();
    toCulture = toCulture.ToLower();

    // normalize the culture in case something like en-us was passed
    // retrieve only en since Google doesn't support sub-locales
    string[] tokens = fromCulture.Split('-');
    if (tokens.Length > 1)
        fromCulture = tokens[0];

    // normalize toCulture
    tokens = toCulture.Split('-');
    if (tokens.Length > 1)
        toCulture = tokens[0];

    string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}",
                               HttpUtility.UrlEncode(text), fromCulture, toCulture);

    // Retrieve Translation with HTTP GET call
    string html = null;
    try
    {
        WebClient web = new WebClient();

        // MUST add a known browser user agent or else response encoding doesn't return UTF-8 (WTF Google?)
        web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0");
        web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8");

        // Make sure we have response encoding to UTF-8
        web.Encoding = Encoding.UTF8;
        html = web.DownloadString(url);
    }
    catch (Exception ex)
    {
        this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed +
                            ": " + ex.GetBaseException().Message;
        return null;
    }

    // Extract out trans":"...[Extracted]...","  from the JSON string
    string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value;

    if (string.IsNullOrEmpty(result))
    {
        this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult;
        return null;
    }

    // Result is a JavaScript string so we need to deserialize it properly
    JavaScriptSerializer ser = new JavaScriptSerializer();
    return ser.Deserialize(result, typeof(string)) as string;
}

Using the code is straightforward enough - simply provide a string to translate and a pair of two letter source and target languages:

string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de");
TestContext.WriteLine(result);

How it works

The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate web page, slightly changed to return a JSON result (&client=j) instead of the funky nested PHP-style JSON array that the default returns. The JSON result returned looks like this:

{"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24}

I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out the part of the full JSON response that contains the actual translated text. Since this is a JSON response, I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.). A couple of odd things to note in this code: first, a valid user agent string must be passed (or at least one starting with a common browser identification - I use Mozilla/5.0). Without this, Google doesn't encode the result with UTF-8, but instead uses an ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead, which is odd to say the least.
The other is that the code returns a full JSON response. Rather than use the full response and decode it into a custom type that matches Google's result object, I just strip out the translated text. Yeah, I know that's hacky, but it avoids an extra type and firing up the JavaScript deserializer. My internal version uses a small DecodeJsString() method to decode the JavaScript string without the overhead of a full JSON parser. It's obviously not rocket science, but as mentioned above, what's nice about it is that it works without a Google API key. I can't vouch for how many translations you can do before being cut off, but in my limited testing, running a few stress tests on a Web server under load, I didn't run into any problems.

Limitations

There are some restrictions with this: it only works on single words or single sentences - multiple sentences (delimited by .) are cut off at the ".". There is also a length limitation, which appears to kick in at around 220 characters or so. While that may not sound like much, for typical word or phrase translations this is plenty of length. Use with a grain of salt - Google seems to be trying to limit their exposure to usage of the Translate APIs, so this code might break in the future, but for now at least it works.

FWIW, I also found that Google's translation is not as good as Babelfish's, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both the Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file, which contains both the Google and Babelfish translation code pieces. Ironically, the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above. I hope this has been helpful to some of you - I've actually had many small uses for this code in a number of applications and it's sweet to have a simple routine that performs these operations for me easily.

Resources
Live Localization Sample
Localization Resource Provider Administration form that includes options to translate text using Google and Babelfish interactively.
TranslationService.cs - the full source code in the West Wind Web Toolkit's Globalization library that contains the translation code.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in CSharp  HTTP
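Given the recommendation to keep multiple providers around, a thin fallback wrapper is one way to use them together. A sketch only - TranslateBabelfish stands in here for whatever the Babelfish method in TranslationService.cs is actually called:

public string TranslateWithFallback(string text, string fromCulture, string toCulture)
{
    var service = new TranslationServices();

    // Google first (faster), then Babelfish (often better for full sentences)
    string result = service.TranslateGoogle(text, fromCulture, toCulture);
    if (string.IsNullOrEmpty(result))
        result = service.TranslateBabelfish(text, fromCulture, toCulture); // assumed name

    return result;
}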

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment, Cti, in StreamInsight is very definitely one of the most important concepts to learn if you want your streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article, but a very good explanation, in addition to Books Online, can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea.

Time in StreamInsight series:
http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx
http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx
http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx

A lot of the problems I see with unresponsive or stuck streams on the MSDN forums have to do with how Ctis are enqueued or, in a lot of cases, not enqueued. If you enqueue events and never enqueue a Cti, then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output, as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem.

The stream of data I was dealing with on this project was very bursty; that is to say, when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engine it is best practice to do so with a StartTime that is given to you by the system producing the event. StreamInsight processes events and it doesn't matter whether those events are being pushed into the engine by a source system or the events are being read from something like a flat file in a directory somewhere. You can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important. We could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream, but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had.

CREATE TABLE [dbo].[t]
(
    [c1] [int] PRIMARY KEY,
    [c2] [datetime] NULL
)

INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810')

Column c2 defines the StartTime of the event on the source, and as you can see the value in all 3 rows of data is the same. If we read Ciprian's articles we know that we can define how Ctis get injected into the stream in 3 different places:
1. The stream definition
2. The input factory
3. The input adapter
I personally have always been a fan of enqueuing Ctis through the factory.
Below is code typical of what I would use to do this. On the class itself I do some inheriting:

public class SimpleInputFactory : ITypedInputAdapterFactory<SimpleInputConfig>,
                                  ITypedDeclareAdvanceTimeProperties<SimpleInputConfig>

And then I implement the following function:

public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
{
    return new AdapterAdvanceTimeSettings(
        new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
        AdvanceTimePolicy.Adjust);
}

The configInfo.CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected, which in turn will flush the stream of data. I usually pass a value of 1 for this setting. The second parameter determines the Cti timestamp in terms of a delay relative to the events: -1 ticks in the past results in 1 tick in the future, i.e., ahead of the event. The problem with this method, though, is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events:

currEvent.StartTime = (DateTimeOffset)dt.c2;

If I go ahead and run my StreamInsight process with this configuration, I can see on the output adapter that two events have been removed. To see this in a little more depth I can use the StreamInsight debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (also the EndTime of the event). The second event arrives and it has a StartTime before the Cti, and even though we specified AdvanceTimePolicy.Adjust on the factory, we know that a point event can never be adjusted like this, so the event is dropped. The same happens for the third event as well (the second and third events get trumped by the Cti). For a more detailed discussion of why this happens, look here: http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx

We end up with a single event being pushed into the output adapter and our result now makes sense. The next way I tried to solve this problem was by changing the value of the second parameter to TimeSpan.Zero. Here is how my factory code now looks:

public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
{
    return new AdapterAdvanceTimeSettings(
        new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero),
        AdvanceTimePolicy.Adjust);
}

What I am doing here is declaring a policy that says inject a Cti together with every event and stamp it with a StartTime that is equal to the start time of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The downside is that because the Cti is declared with the StartTime of the event itself, it does not actually flush that particular event, because in the StreamInsight algebra a Cti commits only those events that occurred strictly before it. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration: as you can see, all we got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves.
Because the Cti issued has the same timestamp (StartTime) as the events, none of the events get flushed. I was nearly there, but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds, and this was far too long for an event to be enqueued and not be flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind, although only one of them made sense once I explored it some more:

1. Where multiple events have the same StartTime I could add 1 tick to the first event, 2 to the second, 3 to the third etc., thereby giving them unique StartTime values.
2. Add a timer to manually inject Ctis.

The problem with the first implementation is that I would be giving the events a new StartTime. This would cause me the following problems:

If I want to define windows over the stream, then some events may not be captured in the right windows, and therefore any calculations I did on those windows would be wrong.
What would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick. It is now too far in the past as far as my stream is concerned and it would be dropped. Not what I would want to do at all.

I decided then to look at the timer-based solution. I created a timer on my input adapter that elapsed every 200ms:

private Timer tmr;

public SimpleInputAdapter(SimpleInputConfig configInfo)
{
    ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString);
    this.configInfo = configInfo;

    tmr = new Timer(200);
    tmr.Elapsed += new ElapsedEventHandler(t_Elapsed);
    tmr.Enabled = true;
}

void t_Elapsed(object sender, ElapsedEventArgs e)
{
    ts = DateTime.Now - dtCtiIssued;
    if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false)
    {
        EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100));
        TimerIssuedCti = true;
    }
}

In the t_Elapsed event handler I find out the difference in time between now and when the last event was processed (dtCtiIssued). I then check whether that is greater than or equal to 200ms and whether the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn't do this check then I would enqueue a Cti every time the timer elapsed, which is not something I wanted. If the difference between the two times is greater than or equal to 200ms and the last event enqueued was a real event, then I issue a Cti through the timer to flush the event queue; otherwise I do nothing. When I enqueue the Ctis into my stream in my ProduceEvents method, I also set the values of dtCtiIssued and TimerIssuedCti:

currEvent = CreateInsertEvent();
currEvent.StartTime = (DateTimeOffset)dt.c2;
TimerIssuedCti = false;
dtCtiIssued = currEvent.StartTime;

If I go ahead and run this configuration I see the following in my output: the first Cti gets enqueued as before, but then another is enqueued by the timer, and because this has a later timestamp it flushes the enqueued events through the engine.

Conclusion

Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is, for me, one of the most important things you can learn. I have attached my solution for the demos. It is all in one project, and testing each variation is a simple matter of commenting and un-commenting the parts of the code we have been dealing with here.
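For completeness, here is roughly how the ProduceEvents fragment above fits into the adapter's enqueue loop. A sketch only: the payload assignment and the LINQ-to-SQL context shape (ctx.t) are assumptions based on the fragments in the post, not the attached solution verbatim:

private void ProduceEvents()
{
    foreach (var dt in ctx.t)   // rows from the table created earlier
    {
        var currEvent = CreateInsertEvent();
        currEvent.StartTime = (DateTimeOffset)dt.c2;
        currEvent.Payload = new SimpleTimeExtract { c1 = dt.c1 }; // assumed payload type

        // record that a genuine event, not the timer, last advanced time
        TimerIssuedCti = false;
        dtCtiIssued = currEvent.StartTime;

        Enqueue(ref currEvent);
    }
}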

    Read the article

  • Emails forwarded via postfix get flagged as spam and forged in Gmail

    - by Kendall Hopkins
    I'm trying to set up a forwarding-only email server. I'm running into the problem where all messages forwarded via Postfix are getting put into Gmail's spam folder and getting flagged as forged. I'm testing a very similar setup on a cPanel box and their forwarded emails make it through without any problem.

Things I've done:
- Set up reverse DNS on the forwarding box
- Set up an SPF record for the forwarding box domain

CPanel route (not flagged as spam): [email protected] - [email protected] - [email protected]
AWS postfix route (flagged as spam): [email protected] - [email protected] - [email protected]

Gmail error message:

/etc/postfix/main.cf:

myhostname = sputnik.*domain*.com
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
append_dot_mydomain = no
readme_directory = no
myorigin = /etc/mailname
mydestination = sputnik.*domain*.com, localhost.*domain*.com, , localhost
relayhost =
mynetworks = 127.0.0.0/8 10.0.0.0/24 [::1]/128 [fe80::%eth0]/64
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all
virtual_alias_maps = hash:/etc/postfix/virtual

Email forwarded by CPanel (doesn't get marked as spam):

Delivered-To: *personaluser*@gmail.com
Received: by 10.182.144.98 with SMTP id sl2csp14396obb; Wed, 9 May 2012 09:18:36 -0700 (PDT)
Received: by 10.182.52.38 with SMTP id q6mr1137571obo.8.1336580316700; Wed, 09 May 2012 09:18:36 -0700 (PDT)
Return-Path: <mail@*personaldomain*.com>
Received: from web6.*domain*.com (173.193.55.66-static.reverse.softlayer.com. [173.193.55.66]) by mx.google.com with ESMTPS id ec7si1845451obc.67.2012.05.09.09.18.36 (version=TLSv1/SSLv3 cipher=OTHER); Wed, 09 May 2012 09:18:36 -0700 (PDT)
Received-SPF: neutral (google.com: 173.193.55.66 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) client-ip=173.193.55.66;
Authentication-Results: mx.google.com; spf=neutral (google.com: 173.193.55.66 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) smtp.mail=mail@*personaldomain*.com
Received: from mail-vb0-f43.google.com ([209.85.212.43]:56152) by web6.*domain*.com with esmtps (TLSv1:RC4-SHA:128) (Exim 4.77) (envelope-from <mail@*personaldomain*.com>) id 1SS9b2-0007J9-LK for mail@kendall.*domain*.com; Wed, 09 May 2012 12:18:36 -0400
Received: by vbbfq11 with SMTP id fq11so599132vbb.2 for <mail@kendall.*domain*.com>; Wed, 09 May 2012 09:18:35 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-originating-ip:date:message-id:subject:from:to:content-type:x-gm-message-state; bh=Hr0AH40uUtx/w/u9hltbrhHJhRaD5ubKmz2gGg44VLs=; b=IBKi6Xalr9XVFYwdkWxn9PLRB69qqJ9AjUPdvGh8VxMNW4S+hF6r4GJcGOvkDn2drO kw5r4iOpGuWUQPEMHRPyO4+Ozc9SE9s4Px2oVpadR6v3hO+utvFGoj7UuchsXzHqPVZ8 A9FS4cKiE0E0zurTjR7pfQtZT64goeEJoI/CtvcoTXj/Mdrj36gZ2FYtO8Qj4dFXpfu9 uGAKa4jYfx9zwdvhLzQ3mouWwQtzssKUD+IvyuRppLwI2WFb9mWxHg9n8y9u5IaduLn7 7TvLIyiBtS3DgqSKQy18POVYgnUFilcDorJs30hxFxJhzfTFW1Gdhrwjvz0MTYDSRiGQ P4aw==
MIME-Version: 1.0
Received: by 10.52.173.209 with SMTP id bm17mr326586vdc.54.1336580315681; Wed, 09 May 2012 09:18:35 -0700 (PDT)
Received: by 10.220.191.134 with HTTP; Wed, 9 May 2012 09:18:35 -0700 (PDT)
X-Originating-IP: [99.50.225.7]
Date: Wed, 9 May 2012 12:18:35 -0400
Message-ID: <CA+tP6Viyn0ms5RJoqtd20ms3pmQCgyU0yy7GBiaALEACcDBC2g@mail.gmail.com>
Subject: test5
From: Kendall Hopkins <mail@*personaldomain*.com>
To: mail@kendall.*domain*.com
Content-Type: multipart/alternative; boundary=bcaec51b9bf5ee11c004bf9cda9c
X-Gm-Message-State: ALoCoQm3t1Hohu7fEr5zxQZsC8FQocg662Jv5MXlPXBnPnx2AiQrbLsNQNknLy39Su45xBMCM47K
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - web6.*domain*.com
X-AntiAbuse: Original Domain - kendall.*domain*.com
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - *personaldomain*.com
X-Source:
X-Source-Args:
X-Source-Dir:

--bcaec51b9bf5ee11c004bf9cda9c
Content-Type: text/plain; charset=ISO-8859-1

test5

--bcaec51b9bf5ee11c004bf9cda9c
Content-Type: text/html; charset=ISO-8859-1

test5

--bcaec51b9bf5ee11c004bf9cda9c--

Email forwarded via AWS postfix box (marked as spam):

Delivered-To: *personaluser*@gmail.com
Received: by 10.182.144.98 with SMTP id sl2csp14350obb; Wed, 9 May 2012 09:17:46 -0700 (PDT)
Received: by 10.229.137.143 with SMTP id w15mr389471qct.37.1336580266237; Wed, 09 May 2012 09:17:46 -0700 (PDT)
Return-Path: <mail@*personaldomain*.com>
Received: from sputnik.*domain*.com (sputnik.*domain*.com. [107.21.39.201]) by mx.google.com with ESMTP id o8si1330855qct.115.2012.05.09.09.17.46; Wed, 09 May 2012 09:17:46 -0700 (PDT)
Received-SPF: neutral (google.com: 107.21.39.201 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) client-ip=107.21.39.201;
Authentication-Results: mx.google.com; spf=neutral (google.com: 107.21.39.201 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) smtp.mail=mail@*personaldomain*.com
Received: from mail-vb0-f52.google.com (mail-vb0-f52.google.com [209.85.212.52]) by sputnik.*domain*.com (Postfix) with ESMTP id A308122AD6 for <mail@*personaldomain2*.com>; Wed, 9 May 2012 16:17:45 +0000 (UTC)
Received: by vbzb23 with SMTP id b23so448664vbz.25 for <mail@*personaldomain2*.com>; Wed, 09 May 2012 09:17:45 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-originating-ip:date:message-id:subject:from:to:content-type:x-gm-message-state; bh=XAzjH9tUXn6SbadVSLwJs2JVbyY4arosdTuV8Nv+ARI=; b=U8gIgHd6mhWYqPU4MH/eyvo3kyZsDn/GiYwZj5CLbs6Zz/ZOXQkenRi7zW3ewVFi/9 uAFylT8SQ+Wjw2l6OgAioCTojfZ58s4H/JW+1bu460KAP9aeOTcZDNSsHlsj0wvH5XRV 4DQJa11kz+WFVtVVcFuB33WVUPAgJfXzY+pSTe+FWsrZyrrwL7/Vm9TSKI5PBwRN9i4g zAZabgkmw1o2THT3kbJi6vAbPzlqK2LVbgt82PP0emHdto7jl4iD5F6lVix4U0dsrtRv xuGUE0gDyIwJuR4Q5YTkNubwGH/Y2bFBtpx2q1IORANrolWxIGaZSceUWawABkBGPABX 1/eg==
MIME-Version: 1.0
Received: by 10.52.96.169 with SMTP id dt9mr282954vdb.107.1336580265812; Wed, 09 May 2012 09:17:45 -0700 (PDT)
Received: by 10.220.191.134 with HTTP; Wed, 9 May 2012 09:17:45 -0700 (PDT)
X-Originating-IP: [99.50.225.7]
Date: Wed, 9 May 2012 12:17:45 -0400
Message-ID: <CA+tP6VgqZrdxP543Y28d1eMwJAs4DxkS4EE6bvRL8nFoMkgnQQ@mail.gmail.com>
Subject: test4
From: Kendall Hopkins <mail@*personaldomain*.com>
To: mail@*personaldomain2*.com
Content-Type: multipart/alternative; boundary=20cf307f37f6f521b304bf9cd79d
X-Gm-Message-State: ALoCoQkrNcfSTWz9t6Ir87KEYyM+zJM4y1AbwP86NMXlk8B3ALhnis+olFCKdgPnwH/sIdzF3+Nh

--20cf307f37f6f521b304bf9cd79d
Content-Type: text/plain; charset=ISO-8859-1

test4

--20cf307f37f6f521b304bf9cd79d
Content-Type: text/html; charset=ISO-8859-1

test4

--20cf307f37f6f521b304bf9cd79d--
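For reference, on a forwarding-only box the virtual_alias_maps setting above is normally backed by a small lookup table. A minimal sketch (the addresses are illustrative, not the poster's real ones):

# /etc/postfix/virtual -- one forwarding entry per line
mail@kendall.example.com    mail@destination.example.com

# rebuild the hash map and reload Postfix after editing
postmap /etc/postfix/virtual
postfix reload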

    Read the article

  • Cryptographic Validation Explained

    - by MarkPearl
    We have been using LogicNP’s CryptoLicensing for some of our software and I was battling to understand how exactly the whole process worked. I was sent the following document which really helped explain it – so if you ever use the same tool it is well worth a read.

Licensing Basics
LogicNP CryptoLicensing For .Net is the most advanced and state-of-the-art licensing and copy protection system you can use for your software. LogicNP CryptoLicensing System uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA algorithm, which consists of a pair of keys called the generation key and the validation key. Data encrypted using the generation key can only be decrypted using the corresponding validation key.

How does cryptographic validation work?
When a new license project is created, a unique validation-generation key pair is created for the project. When LogicNP CryptoLicensing For .Net generates licenses, it encrypts the license settings using the generation key. The validation key can be safely distributed with your software and is used during validation. During license validation, LogicNP CryptoLicensing For .Net attempts to decrypt the encrypted license code using the validation key. If the decryption is successful, this means that the data was encrypted using the generation key, since only the corresponding validation key can decrypt data encrypted with the generation key. This further means that not only is the license valid but that it was generated by you and only you, since nobody else has access to the generation key.

Generation Key
This key is used by CryptoLicensing Generator to generate encrypted license codes. This key is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset such as source code.

Validation Key
This key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential.

Note that the generation key pair is stored in the project file created by LogicNP CryptoLicensing For .Net, so it is very important to back up this file and to keep it secure. Once the file is lost, it is not possible to retrieve the key pair.

FAQ
Q. Do I use the same validation key to validate all license codes?
A. Yes, the validation key (and generation key) for the project remains the same; you use the same key to validate all license codes generated using the project. You can retrieve the validation key using the "Project" menu --> "Get Validation Key & Code" menu item.

Q. Can license codes generated using the generation key from one project be validated using the validation key of another project?
A. No!

Q. Is every generated license code unique?
A. Yes, every license code generated by CryptoLicensing is guaranteed to be unique, even if you generate thousands of codes at a time.

Q. What makes CryptoLicensing so secure?
A. CryptoLicensing uses the latest cryptographic technology to generate and validate licenses. The cryptographic algorithm used is the RSA asymmetric key algorithm, which can use up to 3072-bit keys. Given current computing power, it takes years to break a 3072-bit key.

Q. Is it possible for a hacker to develop a keygen for my software?
A. Impossible.
The cryptographic algorithm used by CryptoLicensing consists of a pair of keys called the generation key and the validation key. Data encrypted with one key can only be decrypted by the other key and vice versa. Licenses are generated using the generation key and validated using the validation key. Without the generation key, it is impossible to generate valid licenses.

Q. What is the difference between the validation key and the generation key?
A. The generation key is used by CryptoLicensing Generator to generate encrypted license codes. This key is stored in the license project file, so the license project file must be kept secure and confidential and must be accorded the same care as any other critical asset such as source code. The validation key is used for validating generated license codes. It is the same key displayed in the 'Get Validation Key And Code' dialog (Ctrl+K) and is used by your software when validating license codes (using LogicNP.CryptoLicensing.dll). Unlike the generation key, it is not necessary to keep this key secure and confidential.

Q. Do I have to include the license project file (.licproj) with my software?
A. No!!! This goes against the very essence of the security of the asymmetric cryptographic scheme, because the project file contains both the validation and generation key. With your software, you only need to include the validation key, which will be used to validate licenses generated by CryptoLicensing using the generation key. The license project file should be treated as any other valuable and confidential asset such as your source code.

Q. Does the license service need the license project file?
A. Yes. The license project file is needed whenever new licenses are generated (via the UI, via the API or via the license service). As just one example, the license service generates new machine-locked licenses when activated licenses are presented to it for activation, therefore the license service needs the license project file.

Q. Is it possible to embed my own data in the generated licenses?
A. Yes. You can embed any amount of additional data in the licenses. This data will have the same amount of security as the license code itself and will be tamper-proof. The embedded user data can be retrieved from your software.

Q. What additional steps can I take to ensure that my software does not get cracked?
A. There are many methods and techniques which can make it extremely difficult for a hacker to crack your software. See Writing Effective License Checking Code And Designing Effective Licenses for more information.

Q. Why is the license service not working?
A. The most common cause is not setting the CryptoLicense.LicenseServiceURL property before trying to validate a license. Make sure that this property is set to the correct URL where your license service is hosted. The next most common cause is that the license project file on the web server where your license service is hosted is not the latest. This happens if you make changes to the license project (for example, set the 'Enable With Serials' setting for a profile), but don't upload the updated project file to your web server.

Q. Why are my serials not working?
A. Serial codes require the use of a license service. See Using Serial Codes for more details. Also see the earlier question 'Why is the license service not working?'

Q. Is the same validation key used to validate license codes generated from different profiles?
A. Yes.
Profiles are just pre-specified license settings for quickly generating licenses having those settings. The actual license code is still generated using the license project's cryptographic generation key and thus can be validated using the project's validation key.

Q. Why are changes made to a profile not getting saved?
A. Simply changing license settings via the UI and saving the license project does not save those license settings to the active profile. You must first save the license settings to a profile using the Save/Save As command from the Profiles menu (see above).

Q. Why is validation of activated licenses failing from CryptoLicensing Generator, but works from my software?
A. Make sure that you have specified the URL of the license service using the Project Properties Dialog. Also see the earlier question 'Why is the license service not working?'

Q. How can I extend the trial period of my customer?
A. To extend the evaluation period of the customer, simply send him a new license code specifying the desired evaluation limits. Evaluation information such as the current used days, executions, etc. is stored in garbled form in a registry location which is derived from the license code. Therefore, when a new license code is used, the old evaluation information will not be used and a new evaluation period will be started.
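The sign-with-the-secret-key, verify-with-the-distributable-key scheme described above is the standard asymmetric pattern. A minimal .NET sketch of the idea follows; this is illustrative only, not CryptoLicensing's actual API or license format, and it assumes the HashAlgorithmName overloads available in .NET 4.6 and later:

using System;
using System.Security.Cryptography;
using System.Text;

class AsymmetricLicenseSketch
{
    static void Main()
    {
        // Vendor side: the full RSA key pair plays the role of the "generation key".
        using (RSA generationKey = RSA.Create())
        {
            byte[] license = Encoding.UTF8.GetBytes("user=jane;expires=2014-01-01");

            // Sign the license settings with the private key (kept secret).
            byte[] signature = generationKey.SignData(
                license, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            // Customer side: only the public half (the "validation key") ships with the app.
            RSAParameters validationKey = generationKey.ExportParameters(false);
            using (RSA validator = RSA.Create())
            {
                validator.ImportParameters(validationKey);
                bool valid = validator.VerifyData(
                    license, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
                Console.WriteLine(valid ? "License valid" : "License tampered or forged");
            }
        }
    }
}

Because only the public parameters ever leave the vendor, anyone can check a license but nobody else can mint one, which is exactly the property the FAQ's "keygen" answer relies on.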

    Read the article

  • Create Chemistry Equations and Diagrams in Word

    - by Matthew Guay
    Microsoft Word is a great tool for formatting text, but what if you want to insert a chemistry formula or diagram?  Thanks to a new free add-in for Word, you can now insert high-quality chemistry formulas and diagrams directly from the Ribbon in Word. Microsoft’s new Education Labs has recently released the new Chemistry Add-in for Word 2007 and 2010.  This free download adds support for entering and editing chemistry symbols, diagrams, and formulas using the standard XML-based Chemical Markup Language.  You can convert any chemical name, such as benzene, or formula, such as H2O, into a chemical diagram, standard name, or formula.  Whether you’re a professional chemist, just taking chemistry in school, or simply curious about the makeup of citric acid, this add-in is an exciting way to bring chemistry to your computer. This add-in works great on Word 2007 and 2010, including the 64-bit version of Word 2010.  Please note that the current version is still in beta, so only run it if you are comfortable running beta products.

Getting Started
Download the Chemistry add-in from Microsoft Education Labs (link below), and unzip the file.  Then, run the ChemistryAddinforWordBeta2.Setup.msi. It may inform you that you need to install the Visual Studio Tools for Office 3.0.  Simply click Yes to download these tools. This will open the download in your default browser.  Simply click Run, or save and then run it when it is downloaded. Now, click Next to install the Visual Studio Tools for Office as usual. When this is finished, run the ChemistryAddinforWordBeta2.Setup.msi again.  This time, you can easily install it with the default options. Once it’s finished installing, open Word to try out the Chemistry Add-in.  You will be asked if you want to install this customization, so click Install to enable it. Now you will have a new Chemistry tab in your Word ribbon.  Here’s the ribbon in Word 2010… And here it is in Word 2007.

Using the Chemistry Add-in
It’s very easy to insert nice chemistry diagrams and formulas in Word with the Chemistry add-in.  You can quickly insert a premade diagram from the Chemistry Gallery. Or you can insert a formula from file.  Simply click “From File” and choose any Chemical Markup Language (.cml) formatted file to insert the chemical formula. You can also convert any chemical name to its chemical form.  Simply select the word, right-click, select “Convert to Chemistry Zone” and then click on its name. Now you can see the chemical form in the sidebar if you click the Chemistry Navigator button, and can choose to insert the diagram into the document.  Some chemicals will automatically convert to the diagram in the document, while others simply link to it in the sidebar.  Either way, you can display exactly what you want. You can also convert a chemical formula directly to its chemical diagram.  Here we entered H2O and converted it to a Chemistry Zone, which placed the diagram directly in the document. You can click the Edit button on the top, and from there choose to either edit the 2D model of the chemical, or edit the labels. When you click Edit Labels, you may be asked which form you wish to display.  Here are the options for potassium permanganate. You can then edit the names and formulas, and add or remove any you wish. If you choose to edit the chemical in 2D, you can even edit the individual atoms and change the chemical you’re diagramming.  This 2D editor has a lot of options, so you can get your chemical diagram to look just like you want.
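If you are wondering what such a .cml file actually contains: Chemical Markup Language is ordinary XML. A hand-written minimal example for water might look like the following (simplified; files produced by the add-in carry more metadata, and the schema namespace shown is the standard CML one):

<?xml version="1.0" encoding="UTF-8"?>
<cml xmlns="http://www.xml-cml.org/schema">
  <molecule id="water">
    <atomArray>
      <atom id="a1" elementType="O"/>
      <atom id="a2" elementType="H"/>
      <atom id="a3" elementType="H"/>
    </atomArray>
    <bondArray>
      <bond atomRefs2="a1 a2" order="1"/>
      <bond atomRefs2="a1 a3" order="1"/>
    </bondArray>
  </molecule>
</cml>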
And, if you need any help or want to learn more about the Chemistry add-in and its features, simply click the help button in the Chemistry Ribbon.  This will open a Word document containing examples and explanations which can be helpful in mastering all the features of this add-in. All of this works perfectly, whether you’re running it in Word 2007 or 2010, 32 or 64 bit editions.

Conclusion
Whether you’re using chemistry formulas every day or simply want to investigate a chemical makeup occasionally, this is a great way to do it with tools you already have on your computer.  It will also help make homework a bit easier if you’re struggling with it in high school or college.

Links
Download the Chemistry Add-in for Word
Introducing Chemistry Add-in for Word – MSDN blogs
Chemistry Markup Language – Wikipedia

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you can find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over.

Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories.

This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in!

So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: the ability to specify the logging level at runtime.

For example, let's say I declare my ILog instance like so:

    using log4net;

    public class LoggingTest
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));
        ...
    }

If you don't know log4net, the details aren't important, just know that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:

    _log.Debug("This is the lowest level of logging and just for debugging output.");
    _log.Info("This is an informational message.  Usual normal operation events.");
    _log.Warn("This is a warning, something suspect but not necessarily wrong.");
    _log.Error("This is an error, some sort of processing problem has happened.");
    _log.Fatal("Fatals usually indicate the program is dying hideously.");

And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:

    _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");

Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most part the ILog interface satisfies 99% of my needs.
In fact, for most application logging you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants.

I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose.

One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing its signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern.

So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level.

So here was my personal wish-list of requirements for my task:
- Be able to determine if a runtime-specified logging level is enabled.
- Be able to log generically at a runtime-specified logging level.
- Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.

Having the ability to determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameters at DEBUG level and DEBUG is not on, you don't want to spend the time serializing those parameters.

Now, mine may not be the most elegant solution, but it performs really well since the enum I provide uses contiguous values -- while it's never guaranteed, a switch over contiguous values usually gets compiled into a jump table in IL, which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it.

So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and it provides way too many options compared to the ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.

    namespace Shared.Logging.Extensions
    {
        // enum to specify available logging levels.
        public enum LoggingLevel
        {
            Debug,
            Informational,
            Warning,
            Error,
            Fatal
        }
    }

Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating them for blogging brevity:

    using System;
    using log4net;

    namespace Shared.Logging.Extensions
    {
        // the extension methods to add functionality to the ILog interface
        public static class LogExtensions
        {
            // Determines if logging is enabled at a given level.
            public static bool IsLogEnabled(this ILog logger, LoggingLevel level)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        return logger.IsDebugEnabled;
                    case LoggingLevel.Informational:
                        return logger.IsInfoEnabled;
                    case LoggingLevel.Warning:
                        return logger.IsWarnEnabled;
                    case LoggingLevel.Error:
                        return logger.IsErrorEnabled;
                    case LoggingLevel.Fatal:
                        return logger.IsFatalEnabled;
                }

                return false;
            }

            // Logs a simple message - uses same signature except adds LoggingLevel
            public static void Log(this ILog logger, LoggingLevel level, object message)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.Debug(message);
                        break;
                    case LoggingLevel.Informational:
                        logger.Info(message);
                        break;
                    case LoggingLevel.Warning:
                        logger.Warn(message);
                        break;
                    case LoggingLevel.Error:
                        logger.Error(message);
                        break;
                    case LoggingLevel.Fatal:
                        logger.Fatal(message);
                        break;
                }
            }

            // Logs a message and exception to the log at specified level.
            public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.Debug(message, exception);
                        break;
                    case LoggingLevel.Informational:
                        logger.Info(message, exception);
                        break;
                    case LoggingLevel.Warning:
                        logger.Warn(message, exception);
                        break;
                    case LoggingLevel.Error:
                        logger.Error(message, exception);
                        break;
                    case LoggingLevel.Fatal:
                        logger.Fatal(message, exception);
                        break;
                }
            }

            // Logs a formatted message to the log at the specified level.
            public static void LogFormat(this ILog logger, LoggingLevel level, string format,
                                         params object[] args)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.DebugFormat(format, args);
                        break;
                    case LoggingLevel.Informational:
                        logger.InfoFormat(format, args);
                        break;
                    case LoggingLevel.Warning:
                        logger.WarnFormat(format, args);
                        break;
                    case LoggingLevel.Error:
                        logger.ErrorFormat(format, args);
                        break;
                    case LoggingLevel.Fatal:
                        logger.FatalFormat(format, args);
                        break;
                }
            }
        }
    }

So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, I can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long lost extension methods were always a part of the ILog interface!

Consider a very contrived example using the original interface:

    // using the original ILog interface
    public class DatabaseUtility
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

        // some theoretical method to time
        IDataReader Execute(string statement)
        {
            var timer = new System.Diagnostics.Stopwatch();

            // do DB magic

            // this is hard-coded to warn; if you want to change it at runtime, tough luck!
            if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)
            {
                _log.WarnFormat("Statement {0} took too long to execute.", statement);
            }
            ...
        }
    }

Now consider this alternate call where the logging level could perhaps be a property of the class:

    // using the new extension methods
    public class DatabaseUtility
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

        // allow logging level to be specified by user of class instead
        public LoggingLevel ThresholdLogLevel { get; set; }

        // some theoretical method to time
        IDataReader Execute(string statement)
        {
            var timer = new System.Diagnostics.Stopwatch();

            // do DB magic

            // the level is now chosen at runtime via ThresholdLogLevel
            if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))
            {
                _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",
                    statement);
            }
            ...
        }
    }

Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.

    Read the article

  • Backup options in SharePoint 2007

    - by sreejukg
    It is very important to make sure the server farm backup is taken properly, so that in case of any disaster the administrator has the latest backup that can be used to restore. This article addresses some of the options available for backup/restore in SharePoint 2007.

Backup
There are two options that can be utilized to take a backup of SharePoint sites.

Using the SharePoint Central Administration website
Using the SharePoint Central Administration website, you can do backup/restore from the user interface. Using the Central Administration website you can back up the following:
- Server farm
- Web application
- Content databases

Follow these steps to take a backup of the server farm using Central Administration:
1. Open the Central Administration website.
2. Navigate to Operations -> Backup and Restore -> Perform a backup.
3. Here you will have options to choose the item to back up. Select Farm (the top most item in the list).
4. Once you select the items to back up, click on "Continue to backup options".
5. Select "Full" as the type of backup.
6. In the backup file location, enter the path where you need to store the backup. The path should be a UNC path, e.g. for the C drive you may use \\server\c$\mybackupFolder.
7. Click OK.
8. Now you will be redirected to the Backup and Restore Status page. This page shows the progress of the backup operation. You can use the refresh button to update the status of the backup (this page will automatically refresh every 30 seconds). Once completed, you can find the files in the specified folder.

Using the STSADM command line tool
SharePoint comes with the STSADM command line tool. STSADM provides a lot of administrative operations that can be performed on SharePoint 2007 sites. You can find the STSADM command at the following location:
C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin
(You may change the drive letter according to your installation.)

STSADM provides a method for performing the Office SharePoint Server 2007 administration tasks at the command line or by using batch files or scripts. STSADM provides access to operations not available by using the Central Administration site. The general syntax for STSADM is as follows:

STSADM -operation <Operation Name> -parameter1 value1 -parameter2 value2 ...

Using STSADM you can back up the following:
- Server farm
- Web application
- Content databases

To perform any STSADM operation you need to be a member of the Administrators group. Follow these steps to take a backup of the SharePoint server farm using the STSADM tool. Note: make sure you are logged in to the computer where the Central Administration website is installed.
1. Open the command prompt (you should run the command prompt with administrator privileges).
2. Change the working directory to C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin.
3. Enter the command, then press Enter:

stsadm -o backup -directory <UNC path> -backupmethod full

4. You will get a success/failure message once the command finishes.

How to schedule the backup
There is no option to schedule a backup using the Central Administration site. Also there is no operation provided by STSADM to automate the backup. The farm administrators need to take backups at regular intervals. To achieve this, you can write a batch file that includes the STSADM command to take a full backup of the server. This batch file can be scheduled using the Windows Task Scheduler to execute at certain intervals.

Sample of the batch file:
1. Open Notepad (or any other text editor).
2. Enter the following commands:

@echo off
echo ===============================================================
echo Back up the farm to <C:\backup>
echo ===============================================================
cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN
@echo off
stsadm.exe -o backup -directory "<\backup>" -backupmethod full
echo completed

3. Save the file with the .bat extension.

You can schedule this batch file as you require.

Other Options
Using the STSADM tool, you will be able to take a backup of an individual site collection. The syntax for this is:

stsadm -o backup -url <URL name for site collection> -filename <file name> [-overwrite]

The explanations for the parameters are as follows:
-url: the URL of the site collection you need to back up
-filename: the name of the backup file, e.g. c:\backup.bak
-overwrite: optional; indicates, if the filename specified exists, whether to overwrite it or not

If you are creating the batch file for scheduling the backup of a site collection, you may need the backup filename to be created automatically. One option is to generate the filename with the date, so that you can keep a backup for each day. E.g. the following commands can be utilized to create a site collection backup:

@echo off
echo ===============================================================
echo Back up the farm to <C:\backup>
echo ===============================================================
echo ===============================================================
echo getting todays date to a variable
echo ===============================================================
@For /F "tokens=1,2,3 delims=/ " %%A in ('Date /t') do @(
Set Day=%%A
Set Month=%%B
Set Year=%%C
Set todayDate=%%C%%B%%A
)
cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN
@echo off
stsadm -o backup -url <sitecollection url> -filename \\ServerName\ShareName\Backup_%todayDate%.bak -overwrite
echo completed

To read more about the backup STSADM operation, read this: http://technet.microsoft.com/en-us/library/cc263441.aspx
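The article leaves the actual scheduling step to you. One way to register such a batch file with the Windows Task Scheduler from the command line is shown below (the task name, script path, time and account are illustrative; adjust them to your environment):

schtasks /create /tn "SharePointFarmBackup" /tr "C:\scripts\farmbackup.bat" /sc daily /st 02:00 /ru CONTOSO\spfarmadmin /rp *

The /ru account must have permission to run STSADM backups (a farm administrator), and /rp * makes schtasks prompt for the password instead of placing it on the command line.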

    Read the article

  • How to Get All the Windows 8 Editions on One Install Disk

    - by Taylor Gibb
    There are a lot of different versions of Windows, but you probably didn’t know that, short of the Enterprise edition, the disc or image that you own contains all versions for that architecture. Read on to see how we can use them to make a universal Windows 8 install disc.

Things You Will Need
- An x86 version of Windows 8
- An x64 version of Windows 8
- An x86 version of Windows 8 Enterprise
- An x64 version of Windows 8 Enterprise
- A Windows 8 PC

Note: While we will use all the images above, you don’t really need the Enterprise Edition. You could always leave out parts of the tutorial if you know what you are doing; if you are not comfortable with that and still want to follow through, you could always grab the Enterprise evaluation images that are available for free to the public on MSDN.

Getting Started
To get started you will need to download the Windows 8 ADK from Microsoft. Once downloaded, go ahead and install it; you will only need the Deployment Tools, so be sure to uncheck the rest of the options. Lastly, you will also need to create the following folder structure on the root of your C:\ drive to make things a bit easier:

C:\Windows8Root
C:\Windows8Root\x86
C:\Windows8Root\x64
C:\Windows8Root\Enterprisex86
C:\Windows8Root\Enterprisex64
C:\Windows8Root\Temp
C:\Windows8Root\Final

OK, let’s get started.

Making The Image
The first thing we need to do is create a base image, so mount the x86 version of Windows 8 and copy its files to C:\Windows8Root\Final. Now move the install.wim file from C:\Windows8Root\Final\sources to C:\Windows8Root\x86. Next, go ahead and copy the install.wim file from the other three images (Windows 8 x64, Windows 8 Enterprise x86 and Windows 8 Enterprise x64) to the respective folders in Windows8Root. The install.wim file can be located at D:\sources\install.wim.

Note: The above assumes that the images are always mounted at drive D. Remember that each install.wim is different, so don’t copy them to the wrong directories or the rest of the tutorial won’t work.

Next, switch to the Metro Start Screen and open the Deployment and Imaging Tools Environment.

Note: If you are not a local administrator on your PC, you will need to right-click on it and choose to run it as an administrator.
Now run the following commands:

Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:2 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8" /compress:maximum

Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro" /compress:maximum

Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro with Media Center" /compress:maximum

Dism /Export-Image /SourceImageFile:c:\Windows8Root\Enterprisex86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Enterprise" /compress:maximum

Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:2 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8" /compress:maximum

Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro" /compress:maximum

Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro with Media Center" /compress:maximum

Dism /Export-Image /SourceImageFile:c:\Windows8Root\Enterprisex64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Enterprise" /compress:maximum

Next, navigate to C:\Windows8Root\Final\sources\ and create a new text file. You will need to call it EI.cfg, then edit it to look like the following:

The last thing we need to do is work some magic to get Windows Media Center added to the WMC editions of Windows 8. For that I have written a little script to make it easier for everybody; you can grab it here. Once you have downloaded it, extract it. In order to use it, right-click in the bottom left hand corner of the screen and open an elevated command prompt. Then go ahead and paste the following into the command prompt window:

powershell.exe -ExecutionPolicy Unrestricted -File C:\Users\Taylor\Documents\HTGWindows8Converter.ps1

Note: You will need to replace the path to the script. Another thing to note is that if the path you replace it with has spaces, you will need to enclose the path in quotes.

The script should kick off straight away and has some progress bars you can watch while it does its thing. Half way through, another window will pop open, which will start creating your final ISO image. When it's complete, close the command prompt and you should have an ISO image on the root of your C drive called HTGWindows8.iso.

That's all there is to it.
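If you want to sanity-check the combined image before building the ISO, DISM can list the editions that ended up inside the final install.wim; run this from the same Deployment and Imaging Tools Environment:

Dism /Get-WimInfo /WimFile:C:\Windows8Root\Final\sources\install.wim

Each edition exported above should appear as its own index, carrying the name you passed to /DestinationName.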

    Read the article

  • CodePlex Daily Summary for Friday, July 19, 2013

    CodePlex Daily Summary for Friday, July 19, 2013

Popular Releases

uComponents: uComponents v5.5.0: Following on from our 101355 release, we bring you... v5.5.0! Resolved issues: the following issues have been resolved in 5.5.0: 14634 14848 DataType Grid 14840 14842 14845. Upgrading: the best way to upgrade uComponents is to re-install via Umbraco's back-office package installer (either from the package repository or a local package upload). Do not uninstall an existing uComponents package, as this may lead to data loss! Umbraco compatibility: this release is compatible with Umbraco 4...

51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet. This release introduces the 51Degrees.mobi IIS Vary Header Fix. When Compression and Caching is used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to Version 2.1.19.4: Handlers now have a 'Count' property. This is an integer value that shows how many devices in the dataset use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...

SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719 2013/7/19. Pattern Changes: * DapperContext pattern is added. * All patterns are updated to work with one-to-one relations. Changes: * One-to-one relation is supported. * Minor bug fixes.

Scryber PDF Generation: Scryber.Installer.v0.8.3: This one has tables! A significant update to the scryber library, with the inclusion of tables. There is no column span, or row span, and nested tables are not fully supported. But - they overflow nicely and are well styled. And they can be data bound. Other updates: The Link element has inner components directly as a child, rather than in a Content element. The existing files will still continue to work, but are invalid wrt the schema. Databinding is now supported at the document level, r...

Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1. Additional support for Windows 8.1 Preview. New API (JS): addTextTrack. New API (JS): msKeys. New API (JS): msPlayToPreferredSourceUri. New API (JS): msSetMediaKeys. New API (JS): onmsneedkey. New API (Xaml): SetMediaStreamSource method. New API (Xaml): Stretch property. New API (Xaml): StretchChanged event. New API (Xaml): AreTransportControlsEnabled property. New API (Xaml): IsFullWindow property. New API (Xaml): PlayToPreferredSourceUri proper...

Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes: - Support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", so not shared ones yet. Also they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. - Added fri...

GoAgent GUI: GoAgent GUI 1.2.0 (Beta): https://goagent.codeplex.com/workitem/list/basic uygw@outlook.com https://goagent.codeplex.com/documentation

Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: Show grid in editor. Cut/copy/paste support. Bug fixes.

Ascend 3D: Ascend (2013-07-17): Ascend 2.1: Animation bugfixes and speed/robustness improvements. Added compatibility for multiple top-level bones on armatures. Ascend Blender Exporter 2.1: Removed need to create parent-child relationship between armature and skinning mesh. Added support for exporting multiple top-level bones on armatures. AscendViewer 1.0.4: Updated to use latest version of Ascend library.

Community TFS Build Extensions: July 2013: The July 2013 release contains VS2010 Activities (target .NET 4.0), VS2012 Activities (target .NET 4.5), VS2013 Activities (target .NET 4.5.1), and the Community TFS Build Manager VS2012. The Community TFS Build Manager can also be found in the Visual Studio Gallery here, where updates will first become available. A version supporting VS2010 is also available in the Gallery here.

smoketestgit1: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting

smoketesthg: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting

smoketesttfs: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting

datajs - JavaScript Library for data-centric web applications: datajs version 1.1.1: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads. Single store abstraction that provides a common API on top of HTML5 local storage technologies. Data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes ...

Orchard Project: Orchard 1.7 RC: Planning released

Terminals: Version 3.1 - Release: Changes since version 3.0: 15992 Unified usage of icons in user interface. Added context menu in Organize favorites grid. Fixed: 34219 34210 34223 33981 34209. Install notes: No changes in database (use database from release 3.0). No upgrade of configuration, passwords, credentials or favorites. See also upgrade notes for release 3.0.

Media Companion: Media Companion MC3.573b: XBMC Link - Let MC update your XBMC Library. Fixes in place, enjoy the XBMC Link function. Well, Phil's been busy in the background, and come up with a great new feature for Media Companion, currently only implemented for movies. Once we're happy that's working with no issues, we'll extend the functionality to include TV shows. All the help for this is built into the application. Go to General Preferences - XBMC Link for details. Help us make it better* Currently only tested on local and ...

Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crashes if 'ShowPendingUpdates' is started with wrong credentials. Fix a bug where WPP crashes if ArrivalDateAfter and ArrivalDateBefore are equal in the ComputerView. Add a filter in the ComputerView. (Thanks to NorbertFe for this feature request.) Add an option: when right-clicking on a computer, you can ask to display the current logon user of the remote computer. Add an option in settings to choose if WPP pings remote computers using IPv4, IPv6 or IPv6 and, if fail, IP...

Lab Of Things: vBeta1: Initial release of LoT

VidCoder: 1.4.23: New in 1.4.23: Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4: Updated HandBrake core to 0.9.9. Blu-ray subtitle (PGS) support. Additional framerates: 30, 50, 59.94, 60. Additional sample rates: 8, 11.025, 12 and 16 kHz. Additional higher bitrates for audio. Same as Source Constant Framerate. 24-bit FLAC encoding. Added Windows Phone 8 and Apple TV 3 presets. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...

New Projects

[C#]A Complete MySQL Manager Project: An open-source MySQL database manager oriented to multi-threading environments, supporting creating an entire database from scratch and editing it.

bCal Javascript Calendar Generator: Using this simple javascript library, you can create simple navigable calendars in your website with just one line of user code.

Better Network: Tools to delete network profile and network lists in Windows 8.

C# Intellisense for Notepad++: 'CS-Script Intellisense' is a real C# intellisense solution based on CS-Script and ICSharpCode.NRefactory/Mono.Cecil.

DARF: Dynamic Application Runtime Framework: A set of tools and libraries which help software developers create fully distributed and component-based applications.

FeiXinV5: Just Test

Follower Catcher: Game made for the 2012 Windows 8 Megathon. Second place in Madrid's local competition.

MeeOS .NET: The first GNU GPL operating system in Spanish.

Music Vault: Music Vault is a program to store your favorite music in one place; you can at any time move your whole music library by moving only one file.

MVC uTorrent: This is an ASP.NET MVC uTorrent front end that utilises the uTorrent API. The main features: - Single page application - Can be hosted in IIS

MyTaskManager: This project relates to distributing tasks to logged-in users

OO2RDF: OO2RDF is a framework for data triplification.

P(y)rotein Subcellular Location Image Database: A library that allows communication with an OMERO database, open-source software for storage and manipulation of biological images.

PE Toolbox: - PE IFE (Importing from Excel)

PeGscan: *PeGscan*

ScriptZilla: ScriptZilla is a free editor for Windows with more than 23 languages.

Sharepoint : Reconciliate SSRS Reports and Datasets with datasources Powershell: Powershell to link datasources to SSRS reports and datasets

Sync-Moped: The idea behind the "Sync-Moped" is to synchronize two directories with backup functionality.

System.Time: System.Time provides a type for .NET that allows the managing of a time in System.String, System.DateTime and System.TimeSpan form at the same time.

Text Analytics: Project summary will be added later.

Tibia Universal MC .NET: This tool was written as an attempt to port the [url:http://code.google.com/p/xenomc/] to the .NET Framework.

VRFramework: VR Framework is a system for creating virtual reality games using your PC, an Android smart phone, and Unity 3D.

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this: I will pass in the userid and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test and illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I name the individual BPEL process simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against.

Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as default.

The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas, the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name; I have used the name RetrieveUser. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the Find existing WSDL's button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you intend to work on the BPEL process offline. If you do not check this, your target application must be accessible when you work on the BPEL process (and that is not always convenient).

Note: The perceptive among you will notice that the URL specified in this example is different to the URL in the last post. The reason is that for the demonstrations I shifted to a new server and did not redo all of the past screen captures.

If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that.

Now to use the Web Service. To call the Web Service (as it is just imported, not connected to the BPEL process yet), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service.

Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create variables to hold the input and output for the call. To do this click on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name as a prefix (that is why it is important to name the node before hitting this button). You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the caller.

You need to now add an Assign node to assign the input to the Web Service. To do this, select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target).

Almost there. You now need to process the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user, by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process.
    In this case we want to transform the output from the Web Service call, so the Transform goes after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle extensions to the BPEL specification. Double clicking the Transform node allows you to name it; in this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target. In the example the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I then attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space; to make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping, and I can save the whole project as my BPEL process is now complete.

    As you can see, the following happens: we accept input from the client (the userid for the call) in the receiveInput step; we assign that value to the input parameters for the Web Service call in the AssignUser step; we invoke the Web Service call to retrieve the data from the product in the InvokeUser step; we take the output from the InvokeUser step and concatenate the names in the TransformName step; and we pass the data back in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect, as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (otherwise you will get a 401 error). Navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen; in my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls.

    As you can see, this is a basic BPEL process, but it shows that importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I have just shown you the basic principles.
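    A footnote on testing: the deployed process can also be exercised with any SOAP client rather than the Fusion Middleware Control test page. Below is a sketch of the request envelope for this example; the client namespace and the process/input element names follow the default JDeveloper synchronous BPEL template, but the exact values are generated per project, so treat them as illustrative:

    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:client="http://xmlns.oracle.com/simpleBPEL">
      <soap:Body>
        <!-- the single default input field carries the userid -->
        <client:process>
          <client:input>myUserId</client:input>
        </client:process>
      </soap:Body>
    </soap:Envelope>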


  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
    Performance tuning is a subject that everyone wants to master. In the beginning everybody is at a novice level and spends lots of time learning how to master the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there is no need for ego in the technology field. There are always better solutions and better ideas out there, and we should not resist them. Instead of resisting change and the new wave, I personally adopt it. Here is a similar example: as I progress towards the master level of performance tuning, I find it is getting harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions when I face a similar situation next time.

    A few days ago I received a query where the user wanted to tune it further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database:

    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID;

    The user had a query similar to the one above, used in a very critical report, and wanted to get the best out of it. When I looked at the query, here were my initial thoughts:

    - Select only the columns the application actually needs, rather than every column.
    - Look at the query pattern and data workload and find the optimal index for it.

    Before I could offer solutions, the user told me that they need all the columns from all the tables, and that creating an index was not allowed on their system. They could only re-write the query or use hints to tune it further. Now I was in a constraint box: I believe * is not a great idea, but if they want all the columns, we cannot do much besides using *. Additionally, if I cannot create an index, I must come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases when hints work out magically and give optimal solutions. Finally, I decided to use Embarcadero's DB Optimizer. It is a fantastic tool and very helpful when it comes to performance tuning. I have previously explained how it works over here.

    First open DB Optimizer and create a Tuning Job from File >> New >> Tuning Job. Once you open the Tuning Job, follow the steps indicated in the following diagram. Essentially we take our original script and paste it into Step 1: New SQL Text; right after that we enable Step 2 to generate the various cases, Step 3 for detailed analysis and Step 4 to execute each generated case. Finally we click on Analysis in Step 5, which generates the detailed analysis report in the result pane.

    The detail pane generates various cases of T-SQL based on the original query. It applies the various available hints to the query, generates an execution plan for each case, and displays them in the result. You can clearly see that the original query had a cost of 0.0841 and about 607 logical reads. The options that follow it have different execution costs and logical reads: in a few cases the logical reads are higher, and in a few cases they are very low.
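    As an aside, if you want to reproduce the logical-read figures outside DB Optimizer, SQL Server can report them directly. A minimal sketch against the AdventureWorks query above:

    -- Measure logical reads and timings for the original query
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh
        ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s
        ON edh.ShiftID = s.ShiftID;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    -- The Messages tab now shows logical reads per table plus CPU and elapsed time,
    -- which you can compare across each candidate rewrite.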
    If we pay attention, the row immediately after the original query has Merge_Join_Query in its description, the lowest execution cost value of 0.044, and the lowest logical reads at 29. This row contains the most optimal re-write of the original query. Let us double click on it. Here is the query:

    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID
    OPTION (MERGE JOIN);

    You will notice that the query above carries an additional hint: MERGE JOIN. With the help of this query hint, the query now performs much better than before. The entire process takes less than 60 seconds.

    Please note that the Merge Join hint was optimal for this particular query, but that does not mean the same hint will be helpful for all queries. Additionally, if the workload or data pattern changes, the Merge Join hint may no longer be the optimal join. In that case we would have to redo the entire exercise. This is the reason I do not like to use hints in my queries, and I discourage all of my users from using them. However, this example is a great case where a hint genuinely optimizes the performance of the query. It is humanly not possible to test every query hint and index option against a query to figure out the most optimal solution. Sometimes we need to depend on efficiency tools like DB Optimizer to guide us and then select the best option from the suggestions provided.

    Let me know what you think of this article, as well as your experience with DB Optimizer. Please leave a comment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC?

    Remote IntraDoc Client (RIDC) is a Java API that leverages simple transport protocols like Socket, HTTP and JAX-WS to execute content service operations in WebCenter Content Server. By design, each operation in the Content Server executes statelessly and returns a complete result for the request. Each request object simply specifies, in a Map format (key and value pairs), what service to call and what parameter settings to apply. The response is built on the same Map format (key and value pairs). The possibilities with RIDC are endless, since you can consume any available service (even custom made ones), and RIDC can be used from any Java SE application that needs WebCenter Content Services.

    WebCenter Portal and the example Accelerator RIDC adapter framework

    WebCenter Portal already integrates with and leverages WebCenter Content Services to enable the use cases available in the portal today, like Content Presenter and Doc Lib. However, the current use cases cover only a few of the scenarios that the Content Server has to offer, and it is not rare for customer requirements to call for additional steps and functionality that WebCenter Content provides but the WebCenter Portal use cases do not. The good news is RIDC; the second piece of good news is that WebCenter Portal already leverages RIDC and has a connection management framework in place. The million dollar question is: how can I leverage this infrastructure for my custom use cases? Oracle A-Team has, during its customer interactions, produced an accelerator adapter framework that reuses the existing connections provisioned in the WebCenter Portal application (it works for WebCenter Spaces as well), together with a comprehensive design pattern that minimizes the work involved in exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal, including Spaces.

    How do I get started?

    Through a few easy steps you will be on your way:

    1. Extract the zip file RIDCCommon.zip into the WebCenter Portal Application file structure (PortalApp).
    2. Open your Portal Application in JDeveloper (PS4/PS5) and select to open the project in your application - this adds the project as a member of the application.
    3. Update the Portal project dependencies to include the new RIDCCommon project.
    4. Make sure that your WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form).

    By this stage you should have a similar structure in your JDeveloper Application:

    - Portal project
    - PortalWebAssets project
    - RIDCCommon project

    Since the API comes with some example operations that have already been exposed as Data Control actions, if you open the Data Controls accordion you should see them listed.
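    Before diving into the framework, it may help to see what raw RIDC usage looks like, as it makes the "Map in, Map out" model above concrete. This is a minimal standalone sketch, independent of RIDCCommon; the idc:// URL, user and dDocName are placeholders, and DOC_INFO_BY_NAME is the same Content Server service used in the operation example below:

    import oracle.stellent.ridc.IdcClient;
    import oracle.stellent.ridc.IdcClientManager;
    import oracle.stellent.ridc.IdcContext;
    import oracle.stellent.ridc.model.DataBinder;
    import oracle.stellent.ridc.model.DataObject;
    import oracle.stellent.ridc.model.DataResultSet;
    import oracle.stellent.ridc.protocol.ServiceResponse;

    public class RawRidcExample {
        public static void main(String[] args) throws Exception {
            // Socket connection to the Content Server (host and port are placeholders)
            IdcClientManager manager = new IdcClientManager();
            IdcClient client = manager.createClient("idc://contentserver.example.com:4444");
            IdcContext userContext = new IdcContext("weblogic"); // identity the service runs as

            // Request binder: key/value pairs naming the service and its parameters
            DataBinder request = client.createBinder();
            request.putLocal("IdcService", "DOC_INFO_BY_NAME");
            request.putLocal("dDocName", "MY_DOCUMENT_ID");

            // Execute the stateless call and read the response binder (same Map format)
            ServiceResponse response = client.sendRequest(userContext, request);
            DataBinder responseBinder = response.getResponseAsBinder();
            DataResultSet docInfo = responseBinder.getResultSet("DOC_INFO");
            for (DataObject row : docInfo.getRows()) {
                System.out.println(row.get("dDocTitle"));
            }
        }
    }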
    How do I implement my own operation?

    1. Create a new Java class in, for example, com.oracle.ateam.portal.ridc.operation and call it GetDocInfoOperation.
    2. Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation. The only method you are actually required to implement is execute(RIDCManager, IdcClient, IdcContext).
    3. The best practice for handing object references to the operation is through the constructor, for example: public GetDocInfoOperation(String dDocName). By leveraging the constructor you can force the implementing class to pass the right information, and you can overload the constructor with more or fewer parameters as required.
    4. Implement the execute method. The work here is to create a new request binder and retrieve a response binder with the information in the request binder - in this case, the dDocName for which we want the DocInfo.
    5. Then process the response binder by extracting the information you need from the response and storing it in a simple POJO Java bean. In the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a member of GetDocInfoOperation, so we can return it from an access method.
    6. Since the RIDCCommon API leverages the template pattern for its operations, you finally add a method that gives access to the result after the operation has executed. In the example below we added public SearchDataObject getDataObject(), which returns the pre-processed SearchDataObject from the execute method.

    This is it. As you can see from the code below, you do not need more than 32 lines of very simple code:

    public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation {
        private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME";
        private String dDocName = null;
        private SearchDataObject sdo = null;

        public GetDocInfoOperation(String dDocName) {
            super();
            this.dDocName = dDocName;
        }

        public boolean execute(RIDCManager manager, IdcClient client,
                               IdcContext userContext) throws Exception {
            DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME);
            dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName);

            DataBinder responseData = getResponseBinder(dataBinder);
            processResult(responseData);
            return true;
        }

        private void processResult(DataBinder responseData) {
            DataResultSet rs = responseData.getResultSet("DOC_INFO");
            for (DataObject dobj : rs.getRows()) {
                this.sdo = new SearchDataObject(dobj);
            }
            super.setMessage(responseData.getLocal(ATTR_MESSAGE));
        }

        public SearchDataObject getDataObject() {
            return this.sdo;
        }
    }
    How do I execute my operation?

    In the previous section we described how to create an operation, so by now you should be ready to execute it:

    1. Either add a method to the class com.oracle.ateam.portal.datacontrol.ContentServicesDC or to a class of your own choice. Remember that RIDCManager is a very light object and can be created where needed.
    2. Create a method with a signature like: public SearchDataObject getDocInfo(String dDocName) throws Exception.
    3. In the method body, create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocName: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName).
    4. Execute the operation via the RIDCManager instance: rMgr.executeOperation(docInfo).
    5. Return the result by accessing it from the executed operation: return docInfo.getDataObject().

    private RIDCManager rMgr = null;
    private String lastOperationMessage = null;

    public ContentServicesDC() {
        super();
        this.rMgr = new RIDCManager();
    }
    ....
    public SearchDataObject getDocInfo(String dDocName) throws Exception {
        GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName);
        boolean boolVal = rMgr.executeOperation(docInfo);
        lastOperationMessage = docInfo.getMessage();
        return docInfo.getDataObject();
    }

    Get the binaries!

    The enclosed code is an example that can be used as a reference for how to consume and leverage similar use cases; the user has to guarantee appropriate quality and support.

    Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip

    RIDC API Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm
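    As a closing footnote (not part of the RIDCCommon download): once getDocInfo is in place on the data control class, calling it from your own portal code is a one-liner. Below is a hypothetical sketch of a managed bean wrapping the call; the bean name and the package of SearchDataObject are assumptions for illustration:

    import com.oracle.ateam.portal.datacontrol.ContentServicesDC;
    import com.oracle.ateam.portal.ridc.model.SearchDataObject; // assumed package within RIDCCommon

    public class DocInfoBean {
        private final ContentServicesDC contentServices = new ContentServicesDC();

        // Fetches DocInfo for a content item; returns null on failure so a page can degrade gracefully
        public SearchDataObject lookupDocInfo(String dDocName) {
            try {
                return contentServices.getDocInfo(dDocName);
            } catch (Exception e) {
                // A production portal would log this (e.g. via ADFLogger) instead of swallowing it
                return null;
            }
        }
    }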

