Search Results


  • What is the best way to recover when your RAID hardware incorrectly thinks a disk is missing?

    - by Software Monkey
    I have a Windows 7 system with an MSI motherboard (running the latest AMD BIOS) and two of my four disks (not the system boot disk) configured via the mobo as RAID-1. After a normal system restart today, the RAID BIOS reports that one of the two drives has been disconnected or has failed. It hasn't really failed; if I take the BIOS out of RAID mode, I can verify that via recovery tools. But I can find no way to re-add the second hard disk to the array and rebuild via the BIOS - the only option seems to be to delete the array and recreate it, but I've done that once before and it blows away the disk. It's done this once before, too; on that occasion, after double-checking the drive cabling (but not changing anything), a subsequent reboot brought it up fine. So I think the mobo RAID is a little flaky. At this point I would like to remove the RAID drivers, change to AHCI mode, and switch over to using a Windows 7 dynamic mirror disk. But the RAID drivers seem somehow deeply bound into the Windows startup - I can't find anything like the good ol' safe mode in Windows 7. If I boot from the Win 7 install disk in AHCI mode I can use recovery tools to log in to the Windows 7 installation, so the boot drive seems fine with AHCI mode. Additionally, I can see all my other disks, run chkdsk on them, and they seem to be fine. If I try to boot from the HDD in AHCI mode, it just reboots part way through, presumably because the RAID drivers load and conflict with the BIOS being set to AHCI. So:

    1. How do I strip the RAID drivers from my Win 7 installation?
    2. If I delete the RAID logical disk, will it really delete partitioning information, or is that just a poorly worded message when it says the data on the disk will be deleted?
    3. If I disconnect the two disks in the RAID array, delete the logical disk array, and then reconnect and reboot still in RAID mode, will the disks simply revert to single RAID disks like my other two - so that I could keep Windows on the RAID drivers but operate the disks as singles, with two of them in a Windows dynamic-disk mirrored setup?
    4. Does Windows 7 have anything like the Windows XP Repair Install, where it reinstalls the O/S binaries from CD but leaves apps and settings alone?

    I am really hoping I don't have to do a complete reinstall of Windows 7 - the last one, when I upgraded from XP, took me two days to get everything set up and installed.


  • How do I record sound from my CD/DVD player without other system sounds in the mix?

    - by Software Monkey
    Using GoldWave I can record via the "Stereo Mix" channel, but I get no sound on the "CD" channel. Of course, using the stereo mix also mixes in all system sounds, including beeps, etc. I have the analog out on the DVD player connected to the CD-IN connector on the mobo. I can hear CDs and DVDs playing just fine through my speakers - is that because the sound is actually being delivered to the sound card over the IDE data connection rather than the analog cable? I specifically want to record a DVD; I can easily rip a CD using GoldWave's built-in ripper. Is there anything I have forgotten or have to enable? Or is it likely I have a damaged cable? My system has an MSI mobo and runs Windows XP SP3.


  • Can I change from BIOS IDE mode to AHCI mode at any time?

    - by Software Monkey
    Currently my Windows 7 computer crashes during startup, after loading the AMD ahcix64s.sys driver, if I enable BIOS AHCI mode for the disks. It boots fine in IDE mode. Since I need my computer working, I am wondering if I can just use IDE mode for now and change to AHCI mode later, once I figure out what is wrong. Background: I was running RAID mode, which needed additional drivers to install/boot Windows. But the mobo RAID is flaky, so I'm trying to switch to using a Windows mirrored volume instead - for that I expected to use AHCI mode.
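    For what it's worth, the commonly cited preparation for an IDE-to-AHCI switch on Windows 7 is to re-enable the Microsoft AHCI miniport driver in the registry before changing the BIOS setting, so the right driver is available at boot. A sketch (run from an elevated prompt; the second entry is a hypothetical guess at the AMD service name - it may differ per chipset):

        :: load the standard Microsoft AHCI driver at boot (0 = SERVICE_BOOT_START)
        reg add "HKLM\SYSTEM\CurrentControlSet\services\msahci" /v Start /t REG_DWORD /d 0 /f
        :: if the AMD AHCI driver is installed, its service can be enabled the same way
        reg add "HKLM\SYSTEM\CurrentControlSet\services\ahcix64s" /v Start /t REG_DWORD /d 0 /f

    After a reboot with these in place, switching the BIOS to AHCI should let Windows pick up the driver instead of crashing on it.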


  • How do I set up two existing disks with identical contents as a single mirrored volume in Windows 7 without losing data?

    - by Software Monkey
    I have two data disks that were, heretofore, in a mobo RAID configuration in Windows 7. They are now separate AHCI disks, visible in Computer Management. How do I go about making them a single mirrored volume in Windows? Note: The data is backed up on two other separate disks, but it's a fair amount of work to do a restore (over 120'000 files, and I have to reset permissions). Note 2: Currently the two disks are identical, and I can use the content of either one for this.
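    For what it's worth, a hedged sketch of the usual route (disk numbers and the drive letter here are hypothetical - check them against diskpart's "list disk" and "list volume" output first). Note that Windows rebuilds the mirror from the source volume in any case, so the identical copy on the second disk simply gets overwritten during the resync:

        rem mirror.txt - run with: diskpart /s mirror.txt
        select disk 2
        convert dynamic
        rem converting basic to dynamic preserves the existing volumes
        select disk 3
        convert dynamic
        select volume D
        rem attach disk 3 as the mirror of the selected volume; a full resync follows
        add disk=3

    The same thing can be done interactively in Disk Management by right-clicking the volume and choosing "Add Mirror…".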


  • Is it possible to print on a networked Windows Print server from an AIX server, without using remote printer queues?

    - by Stringent Software
    I have an application on an AIX server (v5.3) that needs to print via a Windows print server over the LAN. The simplest way to do this is to use SMIT to set up a remote print queue - which I've done on the test environment - but the IT department have refused to set up a remote print queue on the Production server, and I don't have root access to the Production server. Is there any other method for connecting the app to the print server that doesn't involve print queues on the AIX box?


  • Setting the Nagios location in the network map

    - by Mech Software
    I have Nagios installed and I'm working on getting the network map correct. The problem I have is that the "Nagios Process" node appears to be out in the "Internet" when it should be located on the MechNAS server. What I want is for the Nagios process to show up inside the local network, at the same layer as MechNAS and development. Where exactly is that configured? I didn't see any place to set that up, and right now it looks like it's out there on its own. Documentation and googling didn't seem to turn up anything either.
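    For what it's worth, the status map is drawn from host parent/child relationships: hosts with no parents defined hang directly off the central "Nagios Process" icon, which is what makes a node look like it sits out on the Internet on its own. A hedged sketch (host names and addresses are hypothetical) of the parents directive that drives the layout:

        define host {
            use        generic-host
            host_name  mechnas          ; the MechNAS box, directly reachable from the Nagios host
            address    192.168.1.10
        }

        define host {
            use        generic-host
            host_name  development
            address    192.168.1.20
            parents    mechnas          ; drawn behind mechnas on the status map
        }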


  • Amazon Product Advertising API SOAP Namespace Changes

    - by Rick Strahl
    About two months ago (towards the end of February 2012, I think) Amazon decided to change the namespace of the Product Advertising API. The error that would come up was: <ItemSearchResponse > was not expected. If you've used the Amazon Product Advertising API you probably know that Amazon has made it a habit to break the services every few years or so, and I guess last month was about the time for another one. Basically, the service namespace of the document has been changed, and responses from the service just failed outright even though the rest of the schema looks fine. Now I looked around for a while trying to find a recent update to the Product Advertising API - something semi-official looking - but everything is dated around 2009. Really??? And it's not just .NET - the newest thing among the samples/APIs is dated early 2011, plus a handful of 2010 samples. There are newer full APIs for the 'cloud' offerings, but the Product Advertising API apparently isn't part of that. After searching for quite a bit trying to trace this down myself and trying some of the newer samples (which also failed), I found an obscure forum post that describes the solution to the namespace issue. FWIW, I've been using an old version of the Product Advertising API built on the old Microsoft WSE3 services (pre-WCF), which provide some of the WS* security features required by the Amazon service. The fix for this code is to explicitly override the namespace declaration on each of the imported service method signatures. The old service namespace (at least on my build) was:

        http://webservices.amazon.com/AWSECommerceService/2009-03-31

    and it should be changed to:

        http://webservices.amazon.com/AWSECommerceService/2011-08-01

    Change it on the class header:

        [Microsoft.Web.Services3.Messaging.SoapService("http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(Property[]))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(BrowseNode[]))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(TransactionItem[]))]
        public partial class AWSECommerceService : Microsoft.Web.Services3.Messaging.SoapClient
        {

    and on all method signatures:

        [Microsoft.Web.Services3.Messaging.SoapMethodAttribute("http://soap.amazon.com/ItemSearch")]
        [return: System.Xml.Serialization.XmlElementAttribute("ItemSearchResponse", Namespace="http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
        public ItemSearchResponse ItemSearch(ItemSearch ItemSearch1)
        {
            Microsoft.Web.Services3.SoapEnvelope results = base.SendRequestResponse("ItemSearch", ItemSearch1);
            return ((ItemSearchResponse)(results.GetBodyObject(typeof(ItemSearchResponse), this.SoapServiceAttribute.TargetNamespace)));
        }

    It's easy to do with a Search and Replace on the above strings.

    Amazon Services

    <rant> FWIW, I've not been impressed by Amazon's service offerings. While the services work well, their documentation and tool support are absolutely horrendous. I was recently working with a customer on an old AWS application, and their old API had been completely replaced by a new API that wasn't even a close match. What had been one API call now required three different APIs to get the same functionality. We had to re-write the entire piece from scratch, essentially. The documentation was downright wrong, incomplete, and so scattered it was next to impossible to follow.
    The examples weren't examples at all - they were mockups of real service calls with fake data that didn't even provide everything required to make the same service calls work. Additionally, there appears to be just about no public support from Amazon, only peer support, which is sparse at best - and getting hold of somebody at Amazon, even for pay, seems to be a mythical task. It's a terrible business model they have going. I can't see why anybody would put themselves through this sort of customer and development experience. Sad really, but an experience we see more and more these days. Nobody puts in the time to document anything anymore, leaving it to devs to figure this stuff out over and over again… </rant>


  • Accounts in Work Items after migration to TFS 2010 and to new domain

    - by Clara Oscura
    Lately I’ve been doing some tests on migrating our TFS 2008 installation to TFS 2010, coupled with a machine and domain change. One topic that was particularly tricky is user accounts. We first installed a new machine with TFS 2010 and then migrated the projects from the old server. The work items were migrated with the projects. Great - but if I try to edit one of the old work items, I cannot save it anymore, because some fields contain old user names (e.g. OLDDOMAIN\user) which are not known in the new domain (it should be NEWDOMAIN\user). The errors look like this: When I correct the ‘Assigned To’ field value, I get another error regarding another field. Before TFS 2010, we had the TFSUsers power tool. It allowed you to map an old user name to a new user name. This is not available anymore, because WI fields with user accounts are now synchronized with AD display name changes (explained here). The correct way to go about this in TFS 2010 is to use TFSConfig Identities before adding the new domain accounts into the TFS groups (documented here). So, too late for us. I’ve found a (tedious) workaround to change those old accounts in work items, so that people can keep working with them:

    1. Install the TFS 2010 power tools.
    2. Export the WIT from your project (VS | Tools | Process Editor | Work Item Types). Save the definition, for example: Original_MyProject_Task.xml
    3. Copy the xml (NoReadOnly_MyProject_Task.xml) and edit it. From the field definitions of ‘Activated By’, ‘Closed By’ and ‘Resolved By’, remove the following:

           <WHENNOTCHANGED field="System.State">
             <READONLY />
           </WHENNOTCHANGED>

    4. Import the WIT in VS. Choose the new file (NoReadOnly_MyProject_Task.xml) and import it into MyProject.
    5. Open all tasks in Excel (flat list). Display the following columns: Assigned To, Activated By, Closed By, Resolved By. Change the user accounts to the new ones (I usually sort each column alphabetically to make it easier).
    6. Publish. If you get a conflict on a field, tough luck: you will have to manually choose “Local version” for each work item. I told you it was a tedious process.
    7. Import the original WIT (Original_MyProject_Task.xml) into MyProject. We only changed the WI definition so that we could change some fields; the original definition should be put back.

    And what about these other fields: Created By and Authorized As? These fields are not editable by definition (VS | Tools | Process Editor | Work Item Fields Explorer), even if they are not marked as read-only in the WIT. You can leave the old values; it doesn’t seem to matter to TFS. The other four fields are editable by definition, so only the WIT read-only rule prevents us from changing them. Technorati Tags: TFS, Team Foundation Server 2010, Work Item, Domain change


  • Versioning SharePoint binary Workflow ASPX task forms

    - by Janis Veinbergs
    Hello. As noted by some developers, workflow versioning is some kind of headache in SharePoint. I'm wondering: is there a way I can version my aspx forms? For sure, I can version the code-behind assemblies, but what if the markup changes for any of my files in the LAYOUTS folder? Is there versioning available for files, or do I have to choose a new filename for my form? Sorry, I should have been more specific. Yes, I have the files under version control (I can restore previous versions etc.), but I'm not talking about that kind of version control. When deploying a new workflow version, I must not delete the old one, because it is still running on many items in SharePoint; rather, as noted in the previous links, I should deploy a new one alongside it so I don't break the execution of running workflows. But workflows will still break if I don't preserve the old aspx forms that users use to interact with them. So: I must ensure that assemblies with the old version numbers used by the old workflow still exist (this one is OK - I just changed the assembly version number and deployed to the GAC), and I must ensure that the old workflow still uses the old aspx form, while the new workflow version uses a new aspx form with more options. How do I do this?


  • Designing Bayesian networks

    - by devoured elysium
    I have a basic question about Bayesian networks. Let's assume we have an engine that stops working with probability 1/3. I'll call this variable ENGINE. If it stops working, then your car doesn't work. If the engine is working, then your car will work 99% of the time. I'll call this one CAR. Now, if your car is old (OLD), your engine will stop working 1/2 of the time instead of 1/3. I'm being asked to first design the network and then assign all the conditional probabilities associated with the tables. I'd say the diagram of this network would be something like: OLD -> ENGINE -> CAR. Now, for the conditional probability tables I did the following:

        OLD   | P(ENGINE stops working)
        ------+------------------------
        True  | 0.50
        False | 0.33

    and

        ENGINE works | P(CAR works)
        -------------+-------------
        True         | 0.99
        False        | 0.00

    Now, I am having trouble with how to define the probabilities of OLD. In my view, OLD is not something that has a CAUSE relationship with ENGINE; I'd say it is more a characteristic of it. Maybe there is a different way to express this in the diagram? If the diagram is indeed correct, how would I go about making the tables? Thanks
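    For what it's worth, a quick sanity check of the numbers above - marginalizing ENGINE out gives the probability that an old car works:

        P(CAR works | OLD) = P(CAR works | ENGINE works) * P(ENGINE works | OLD)
                           + P(CAR works | ENGINE fails) * P(ENGINE fails | OLD)
                           = 0.99 * 0.50 + 0.00 * 0.50
                           = 0.495

    The same computation for a car that is not old, with P(ENGINE fails) = 0.33, gives 0.99 * 0.67 ≈ 0.66, so a correctly wired network should reproduce these two values.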


  • Git Diff with Beyond Compare

    - by Avanst
    I have succeeded in getting git to start Beyond Compare 3 as a diff tool; however, when I do a diff, the file I am comparing against is not being loaded. Only the latest version of the file is loaded and nothing else, so there is nothing in the right pane of Beyond Compare. I am running git 1.6.3.1 under Cygwin with Beyond Compare 3. I have set up Beyond Compare as they suggest in the support part of their website, with a script like this:

        #!/bin/sh
        # diff is called by git with 7 parameters:
        # path old-file old-hex old-mode new-file new-hex new-mode
        "path_to_bc3_executable" "$2" "$5" | cat

    Has anyone else encountered this problem and found a solution? Edit: I have followed the suggestions by VonC, but I am still having exactly the same problem as before. I am kind of new to Git, so perhaps I am not using diff correctly. For example, I am trying to see the diff on a file with a command like: git diff main.css. Beyond Compare then opens and only displays my current main.css in the left pane; there is nothing in the right pane. I would like to see my current main.css compared to HEAD - basically what I last committed. My git-diff-wrapper.sh looks like this:

        #!/bin/sh
        # diff is called by git with 7 parameters:
        # path old-file old-hex old-mode new-file new-hex new-mode
        "c:/Program Files/Beyond Compare 3/BCompare.exe" "$2" "$5" | cat

    My git config looks like this for diff:

        [diff]
            external = c:/cygwin/bin/git-diff-wrapper.sh
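    One Cygwin-specific thing worth ruling out (a hedged guess, not a confirmed fix): BCompare.exe is a native Windows program, while git hands the wrapper the old-file side as a POSIX-style temp path (something like /tmp/.diff_XXXXXX) that a Windows app may silently fail to open - which would leave exactly one pane empty. A debugging variant of the wrapper that logs the arguments and converts both paths with cygpath:

        #!/bin/sh
        # diff is called by git with 7 parameters:
        # path old-file old-hex old-mode new-file new-hex new-mode
        echo "git diff args: $@" >> /tmp/git-diff-wrapper.log
        "c:/Program Files/Beyond Compare 3/BCompare.exe" \
            "$(cygpath -w "$2")" "$(cygpath -w "$5")" | cat

    The log file shows what git actually passes for $2 and $5, which also tells you whether git is invoking the external diff at all.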


  • Copy rows before updating them to preserve an archive in Postgres

    - by punkish
    I am experimenting with creating a table that keeps a version of every row. The idea is to be able to query for how the rows looked at any point in time, even if the query has JOINs. Consider a system where the primary resource is books; that is, books are queried for, and author info comes along for the ride:

        CREATE TABLE authors (
            author_id   INTEGER NOT NULL,
            version     INTEGER NOT NULL CHECK (version > 0),
            author_name TEXT,
            is_active   BOOLEAN DEFAULT '1',
            modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (author_id, version)
        );

        INSERT INTO authors (author_id, version, author_name)
        VALUES (1, 1, 'John'), (2, 1, 'Jack'), (3, 1, 'Ernest');

    I would like to be able to update the above like so:

        UPDATE authors SET author_name = 'Jack K' WHERE author_id = 2;

    and end up with

        2, 1, Jack,   t, 2012-03-29 21:35:00
        2, 2, Jack K, t, 2012-03-29 21:37:40

    which I can then query with

        SELECT author_name, modified_on
        FROM authors
        WHERE author_id = 2 AND modified_on < '2012-03-29 21:37:00'
        ORDER BY version DESC
        LIMIT 1;

    to get

        2, 1, Jack, t, 2012-03-29 21:35:00

    Something like the following doesn't really work:

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            IF (TG_OP = 'UPDATE') THEN
                -- The following fails because the author_id,version PK already exists
                INSERT INTO authors (author_id, version, author_name)
                VALUES (OLD.author_id, OLD.version, OLD.author_name);
                UPDATE authors SET version = OLD.version + 1
                WHERE author_id = OLD.author_id AND version = OLD.version;
                RETURN NEW;
            END IF;
            RETURN NULL; -- result is ignored since this is an AFTER trigger
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author
        AFTER UPDATE OR DELETE ON authors
        FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    How can I achieve the above? Or is there a better way to accomplish this? Ideally, I would prefer not to create a shadow table to store the archived rows.
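    For what it's worth, a hedged sketch of one way around the PK collision: instead of archiving OLD after the fact, a BEFORE UPDATE trigger can cancel the UPDATE itself and write the changed data as a fresh version row, leaving the old row untouched as the archive:

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            -- Write the changed data as a brand-new version row; the INSERT fires
            -- no UPDATE trigger, so there is no recursion and no PK collision.
            INSERT INTO authors (author_id, version, author_name)
            VALUES (OLD.author_id, OLD.version + 1, NEW.author_name);
            RETURN NULL;  -- returning NULL from a BEFORE row trigger suppresses the UPDATE
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author
            BEFORE UPDATE ON authors
            FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    One caveat with this sketch: once several versions exist, an UPDATE with only WHERE author_id = 2 would fire once per version row, so the statement should also pin the current version (e.g. AND version = (SELECT max(version) FROM authors a WHERE a.author_id = 2)).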


  • Multiple jQuery includes in a document

    - by bah
    Hi, I have a document which uses an old jQuery, and I need a new jQuery for a particular plug-in. My document structure looks like this:

        <html>
        <head>
            <script type="text/javascript" src="jQuery.old.js"></script>
        </head>
        <body>
            <script>
                $("#elem").doSomething(); // use old jQuery
            </script>
            <!-------- My plugin begins -------->
            <script type="text/javascript" src="jQuery.new.js"></script>
            <script type="text/javascript" src="jQuery.doSomething.js"></script>
            <script>
                $().ready(function(){
                    $("#elem").doSomething(); // use new jQuery
                });
            </script>
            <div id="elem"></div>
            <!-------- My plugin ends ---------->
            <script>
                $("#elem").doSomething(); // use old jQuery
            </script>
        </body>
        </html>

    I have googled this question but found nothing that matches my case (I need to load the old jQuery first, in the head, and THEN the new one, in the body). By the way, in Firefox it looks like the old jQuery lib loads and the scripts that depend on it work, but the script that uses the new version doesn't; in IE and Chrome everything is exactly the opposite.
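    For what it's worth, the standard tool for this situation is jQuery.noConflict(true): load the new jQuery and its plugin, then hand the $ and jQuery globals back to the old copy while keeping the new one in a private variable. A sketch of how the plugin section could look (jq_new is a hypothetical variable name):

        <!-------- My plugin begins -------->
        <script type="text/javascript" src="jQuery.new.js"></script>
        <script type="text/javascript" src="jQuery.doSomething.js"></script> <!-- binds to the new copy -->
        <script>
            // restore the old $/jQuery globals for the rest of the page,
            // keeping a reference to the new copy (with the plugin attached)
            var jq_new = jQuery.noConflict(true);
            jq_new(function() {
                jq_new("#elem").doSomething(); // use new jQuery
            });
        </script>
        <div id="elem"></div>
        <!-------- My plugin ends ---------->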


  • Office 2013 OCT unhandled exception when saving .MSP

    - by user52874
    I'm trying to prepare a deployment of Office 2013 Pro Plus. If I deploy the existing .msp customization file that was left behind by the old analyst (typing from the client):

        PS C:\> \\deploybox\software\Office2013\setup.exe /adminfile \\deploybox\software\Office2013\SWKS.MSP

    things seem to deploy just fine. If I make any changes to the .msp file (all from the client) by running

        PS C:\> \\deploybox\software\Office2013\setup.exe /admin

    then opening SWKS.MSP, making changes, and saving under a different name, SWKS1.MSP, I get the following error box: Unhandled Exception: MsiGetSummaryInformation call failed. And if I try to deploy the new SWKS1.MSP,

        PS C:\> \\deploybox\software\Office2013\setup.exe /adminfile \\deploybox\software\Office2013\SWKS1.MSP

    it fails with the message: Path or file specified with /adminfile did not contain any customization patches that apply to this product or platform. The same thing happens even if I open the old known-good SWKS.MSP and immediately save it under a new name, SWKS1.MSP, making no changes. So what stupid newbie mistake am I making here? Thanks!


  • How do I remove MySQL completely, including config and library files, on Ubuntu 12.04 (GNOME 3.0)?

    - by codeartist
    I have tried everything so far:

        sudo apt-get remove mysql-server mysql-client mysql-common
        sudo apt-get purge mysql-server mysql-client mysql-common
        sudo apt-get autoremove

    and even more commands... But whenever I try to locate mysql, I get a number of MySQL-related files: shell>> locate mysql Output: /etc/mysql /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/abstractions/mysql /etc/apparmor.d/cache/usr.sbin.mysqld /etc/apparmor.d/cache/usr.sbin.mysqld-akonadi /etc/apparmor.d/local/usr.sbin.mysqld /etc/bash_completion.d/mysqladmin /etc/init/mysql.conf /etc/logcheck/ignore.d.paranoid/mysql-server-5_5 /etc/logcheck/ignore.d.server/mysql-server-5_5 /etc/logcheck/ignore.d.workstation/mysql-server-5_5 /etc/logrotate.d/mysql-server /etc/mysql/conf.d /etc/mysql/debian-start /etc/mysql/debian.cnf /etc/mysql/conf.d/mysqld_safe_syslog.cnf /home/pkr/.mysql_history /home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,libqt4-sql-mysql,,349051c3a57da571aa832adb39177aff /home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,mysql-client,,cbf77a486cdc80547317981a33144427 /home/pkr/.cache/software-center/piston-helper/rec.ubuntu.com,api,1.0,recommend_app,mysql-client,,de8220dee4d957a9502caa79e8d2fdda /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,any,any,any,libqt4-sql-mysql,page,1,helpful,,17fb2e657321dc51526ee8fe9928da30 /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,any,any,any,mysql-client,page,1,helpful,,a4c1b6e8200f36ab5745c6f81f14da0a /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,oneiric,any,libqt4-sql-mysql,page,1,helpful,,c54295fb82b8183350cd34f22c3547ef /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,oneiric,any,mysql-client,page,1,helpful,,fcf201c1abff3f774af89173a84de2cc /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,precise,any,libqt4-sql-mysql,page,1,helpful,,0cd86648584efeccfb16119012f89540 /home/pkr/.cache/software-center/rnrclient/reviews.ubuntu.com,reviews,api,1.0,reviews,filter,en,ubuntu,precise,any,mysql-client,page,1,helpful,,eb84724e9da7851ff8862a227d8bac59 /home/pkr/.local/share/akonadi/mysql.conf /home/pkr/.local/share/akonadi/db_data/mysql /home/pkr/.local/share/akonadi/db_data/mysql.err /home/pkr/.local/share/akonadi/db_data/mysql.err.old /home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.MYD /home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.MYI /home/pkr/.local/share/akonadi/db_data/mysql/columns_priv.frm /home/pkr/.local/share/akonadi/db_data/mysql/db.MYD /home/pkr/.local/share/akonadi/db_data/mysql/db.MYI /home/pkr/.local/share/akonadi/db_data/mysql/db.frm /home/pkr/.local/share/akonadi/db_data/mysql/event.MYD /home/pkr/.local/share/akonadi/db_data/mysql/event.MYI /home/pkr/.local/share/akonadi/db_data/mysql/event.frm /home/pkr/.local/share/akonadi/db_data/mysql/func.MYD /home/pkr/.local/share/akonadi/db_data/mysql/func.MYI /home/pkr/.local/share/akonadi/db_data/mysql/func.frm /home/pkr/.local/share/akonadi/db_data/mysql/general_log.CSM /home/pkr/.local/share/akonadi/db_data/mysql/general_log.CSV /home/pkr/.local/share/akonadi/db_data/mysql/general_log.frm /home/pkr/.local/share/akonadi/db_data/mysql/help_category.MYD /home/pkr/.local/share/akonadi/db_data/mysql/help_category.MYI
/home/pkr/.local/share/akonadi/db_data/mysql/help_category.frm /home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.MYD /home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.MYI /home/pkr/.local/share/akonadi/db_data/mysql/help_keyword.frm /home/pkr/.local/share/akonadi/db_data/mysql/help_relation.MYD /home/pkr/.local/share/akonadi/db_data/mysql/help_relation.MYI /home/pkr/.local/share/akonadi/db_data/mysql/help_relation.frm /home/pkr/.local/share/akonadi/db_data/mysql/help_topic.MYD /home/pkr/.local/share/akonadi/db_data/mysql/help_topic.MYI /home/pkr/.local/share/akonadi/db_data/mysql/help_topic.frm /home/pkr/.local/share/akonadi/db_data/mysql/host.MYD /home/pkr/.local/share/akonadi/db_data/mysql/host.MYI /home/pkr/.local/share/akonadi/db_data/mysql/host.frm /home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.MYD /home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.MYI /home/pkr/.local/share/akonadi/db_data/mysql/ndb_binlog_index.frm /home/pkr/.local/share/akonadi/db_data/mysql/plugin.MYD /home/pkr/.local/share/akonadi/db_data/mysql/plugin.MYI /home/pkr/.local/share/akonadi/db_data/mysql/plugin.frm /home/pkr/.local/share/akonadi/db_data/mysql/proc.MYD /home/pkr/.local/share/akonadi/db_data/mysql/proc.MYI /home/pkr/.local/share/akonadi/db_data/mysql/proc.frm /home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.MYD /home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.MYI /home/pkr/.local/share/akonadi/db_data/mysql/procs_priv.frm /home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.MYD /home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.MYI /home/pkr/.local/share/akonadi/db_data/mysql/proxies_priv.frm /home/pkr/.local/share/akonadi/db_data/mysql/servers.MYD /home/pkr/.local/share/akonadi/db_data/mysql/servers.MYI /home/pkr/.local/share/akonadi/db_data/mysql/servers.frm /home/pkr/.local/share/akonadi/db_data/mysql/slow_log.CSM /home/pkr/.local/share/akonadi/db_data/mysql/slow_log.CSV /home/pkr/.local/share/akonadi/db_data/mysql/slow_log.frm /home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.MYD /home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.MYI /home/pkr/.local/share/akonadi/db_data/mysql/tables_priv.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_leap_second.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_name.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition.frm /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.MYD /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.MYI /home/pkr/.local/share/akonadi/db_data/mysql/time_zone_transition_type.frm /home/pkr/.local/share/akonadi/db_data/mysql/user.MYD /home/pkr/.local/share/akonadi/db_data/mysql/user.MYI /home/pkr/.local/share/akonadi/db_data/mysql/user.frm /usr/bin/mysql /usr/bin/mysql_install_db /usr/bin/mysql_upgrade /usr/bin/mysqlcheck /usr/sbin/mysqld /usr/share/mysql 
/usr/share/app-install/desktop/gmysqlcc:gmysqlcc.desktop /usr/share/app-install/desktop/mysql-client.desktop /usr/share/app-install/desktop/mysql-navigator:mysql-navigator.desktop /usr/share/app-install/desktop/mysql-server.desktop /usr/share/app-install/icons/gmysqlcc-32.png /usr/share/app-install/icons/mysql-navigator.png /usr/share/doc/mysql-client-core-5.5 /usr/share/doc/mysql-server-core-5.5 /usr/share/kde4/apps/katepart/syntax/sql-mysql.xml /usr/share/man/man1/mysql.1.gz /usr/share/man/man1/mysql_install_db.1.gz /usr/share/man/man1/mysql_upgrade.1.gz /usr/share/man/man1/mysqlcheck.1.gz /usr/share/man/man8/mysqld.8.gz /var/cache/apt/archives/akonadi-backend-mysql_1.7.2-0ubuntu1_all.deb /var/cache/apt/archives/libmysqlclient-dev_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/libmysqlclient18_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/libqt4-sql-mysql_4%3a4.8.1-0ubuntu4.1_i386.deb /var/cache/apt/archives/mysql-client-5.5_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/mysql-client-core-5.5_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/mysql-client_5.5.22-0ubuntu1_all.deb /var/cache/apt/archives/mysql-common_5.5.22-0ubuntu1_all.deb /var/cache/apt/archives/mysql-server-5.5_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/mysql-server-core-5.5_5.5.22-0ubuntu1_i386.deb /var/cache/apt/archives/mysql-server_5.5.22-0ubuntu1_all.deb /var/lib/dpkg/info/mysql-client-core-5.5.list /var/lib/dpkg/info/mysql-client-core-5.5.md5sums /var/lib/dpkg/info/mysql-server-5.5.list /var/lib/dpkg/info/mysql-server-5.5.postrm /var/lib/dpkg/info/mysql-server-core-5.5.list /var/lib/dpkg/info/mysql-server-core-5.5.md5sums /var/log/mysql /var/log/mysql.err /var/log/mysql.log /var/log/mysql.log.1.gz /var/log/mysql.log.2.gz /var/log/mysql.log.3.gz /var/log/mysql.log.4.gz /var/log/mysql.log.5.gz /var/log/mysql.log.6.gz /var/log/mysql.log.7.gz /var/log/upstart/mysql.log.1.gz /var/log/upstart/mysql.log.2.gz /var/log/upstart/mysql.log.3.gz /var/log/upstart/mysql.log.4.gz /var/log/upstart/mysql.log.5.gz /var/log/upstart/mysql.log.6.gz /var/log/upstart/mysql.log.7.gz What should I do now? Please help me out here :( I was trying to find out whether there is any way to remove every MySQL-related file and then reinstall MySQL from scratch. I need it for Qt connectivity. I don't understand what to do! Please help :(
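    For what it's worth, two hedged observations about the listing above: locate reads a cached index, so many of those hits may already be gone until the index is rebuilt, and the /var/cache/apt/archives entries are just cached .deb downloads, not installed files (the ~/.local/share/akonadi files belong to KDE's Akonadi, which ships its own embedded MySQL and is unrelated to the server packages). A sketch of the usual full-purge sequence, using the package names visible in the listing:

        sudo apt-get purge mysql-server mysql-server-5.5 mysql-server-core-5.5 \
                           mysql-client mysql-client-5.5 mysql-client-core-5.5 mysql-common
        sudo apt-get autoremove
        sudo apt-get autoclean            # drops the cached .deb files
        sudo rm -rf /etc/mysql /var/lib/mysql
        rm -f ~/.mysql_history
        sudo updatedb                     # rebuild locate's index before checking again
        locate mysql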


  • Rapidly Deploy Oracle Applications with Oracle VM Templates

    - by monica.kumar
    Oracle today announced Oracle VM Templates for a number of Oracle applications, including:

      * Oracle E-Business Suite 12.1
      * Oracle's JD Edwards Enterprise One 9.0
      * Oracle's PeopleSoft 9.1

    These Oracle VM Templates, based on Oracle Enterprise Linux, provide pre-installed and pre-configured enterprise software images that help eliminate the need to install new software from scratch, offering customers a time-saving approach to deploying a fully configured software stack. Learn more about Oracle VM Templates.



  • How do I build a DIY NAS?

    - by Kaushik Gopal
    I'm looking for good, detailed instructions on how to build a DIY NAS (Network Attached Storage) box. I'm planning on doing it cheap (old PC config + open source software). I would like to know:

      * What hardware I need to build one
      * What kind of hard-drive setup I should use (like RAID), or any other relevant hardware advice (power supply, motherboard, etc.)
      * What software I should run on it, both the OS and the software to manage the contents effectively, so that the NAS is recognizable and accessible to my network, my Windows computers will recognize it (when using Linux distros), and I can access my files from outside my network

    I already did a fair bit of searching and found these links, but while they are great, they focus more on the hardware side; I'm looking for more instruction on the software side:

      * Ubuntu: Setting up a Home NAS
      * DIY NAS Smackdown
      * How to Configure an $80 File Server in 45 Minutes
      * FreeNAS
      * Build a NAS Device With an Old PC and Free Software
      * Build Your Own NAS Device


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green - it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense - see the rest of this article). This made me think deeply for some days now. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in.

    To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about the latter, and I myself have also contributed to it (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious…

    So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. - I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

    Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

    The three parts of a software component

    Before I go into some details, I should first describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:

    First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
    At the C# micro-level, the best way I have found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language; for C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or, in Scrum terms, the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article…

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

    The requirement

    As already said above, I start by writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. Thinking about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

        namespace Calculator
        {
            /// <summary>
            /// Defines a very simple calculator component for demo purposes.
            /// </summary>
            public interface ICalculator
            {
                /// <summary>
                /// Gets the result of the last successful operation.
                /// </summary>
                /// <value>The last result.</value>
                /// <remarks>
                /// Will be <see langword="null" /> before the first successful operation.
                /// </remarks>
                double? LastResult { get; }

            } // interface ICalculator

        } // namespace Calculator

    So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

    1. Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.
    2. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.
    3. I normally use an IoC container or some sort of self-written provider-like model in my architecture.
    In either case, I need my components defined via service interfaces anyway. I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

    The ‘Red’ (pt. 1)

    First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template) and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

        namespace Calculator.Test
        {
            [TestFixture]
            public class CalculatorTest
            {
                private readonly ServiceContainer container = new ServiceContainer();

                [Test]
                public void CalculatorLastResultIsInitiallyNull()
                {
                    ICalculator calculator = container.GetService<ICalculator>();

                    Assert.IsNull(calculator.LastResult);
                }

            } // class CalculatorTest

        } // namespace Calculator.Test

    This is basically the executable formulation of (part of) what the interface definition states.

    Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: your code is incorrect, but nothing more. Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

    Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my ‘work’ was six mouse clicks long; the only thing left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                public double? LastResult
                {
                    get
                    {
                        throw new NotImplementedException();
                    }
                }
            }
        }

    Back to the test fixture, we have to put our IoC container to work:

        [TestFixture]
        public class CalculatorTest
        {
            #region Fields

            private readonly ServiceContainer container = new ServiceContainer();

            #endregion // Fields

            #region Setup/TearDown

            [FixtureSetUp]
            public void FixtureSetUp()
            {
                container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
            }

            ...

    Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…
    The ‘Red’ (pt. 2)

    Now the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. And this is the point where Derick and I seem to have somewhat different views on the subject. Of course, the test still is worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the ‘Green’ part…

    The ‘Green’

    Making the test green is quite trivial. Just make LastResult an automatic property:

        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            public double? LastResult { get; private set; }
        }

    One more round…

    Now on to something slightly more demanding (cough…). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

        } // interface ICalculator

    A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window, and using that, it looks like this:

    Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because they make it possible to automate help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder) and then publish the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

    Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

        } // class Calculator

    As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

        [Test]
        [Row(-0.5, 2)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

    As you can see, I'm using a data-driven unit test method here, mainly for these two reasons:

    1. Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one.
    2. From the test report below, you can see that the argument values are explicitly printed out.
    This can be a valuable documentation feature even when everything is green: one can quickly review what values were tested exactly - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

    Back to our Calculator development again, this is what the test result tells us at the moment: So we're red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

        // in CalculatorTest:

        [Test]
        [Row(-0.5, 2)]
        [Row(295, -123)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

        // in Calculator:

        public double Add(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            throw new NotImplementedException();
        }

    So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

        [Test]
        [Row(1, 1, 2)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    Again, I'm regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing:

    1. In the course of refining a method, it's very likely that additional test cases come up. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test.
    2. Remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
So your test method might look something like this in the end:

    [Test, Description("Arguments: operand1, operand2, expectedResult")]
    [Row(1, 1, 2)]
    [Row(0, 999999999, 999999999)]
    [Row(0, 0, 0)]
    [Row(0, double.MaxValue, double.MaxValue)]
    [Row(4, double.MaxValue - 2.5, double.MaxValue)]
    public void TestAdd(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        double result = calculator.Add(operand1, operand2);

        Assert.AreEqual(expectedResult, result);
    }

And this will produce a nice HTML report with Gallio. Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new…

And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

    // CalculatorTest.cs:

    [Test, Description("Arguments: operand1, operand2, expectedResult")]
    [Row(1, 1, 0)]
    [Row(0, 999999999, -999999999)]
    [Row(0, 0, 0)]
    [Row(0, double.MaxValue, -double.MaxValue)]
    [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
    public void TestSubtract(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        double result = calculator.Subtract(operand1, operand2);

        Assert.AreEqual(expectedResult, result);
    }

    [Test, Description("Arguments: operand1, operand2, expectedResult")]
    [Row(1, 1, 0)]
    [Row(0, 999999999, -999999999)]
    [Row(0, 0, 0)]
    [Row(0, double.MaxValue, -double.MaxValue)]
    [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
    public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        calculator.Subtract(operand1, operand2);

        Assert.AreEqual(expectedResult, calculator.LastResult);
    }

    ...

    // ICalculator.cs:

    /// <summary>
    /// Subtracts the specified operands.
    /// </summary>
    /// <param name="operand1">The operand1.</param>
    /// <param name="operand2">The operand2.</param>
    /// <returns>The result of the subtraction.</returns>
    /// <exception cref="ArgumentException">
    /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
    /// -- or --<br/>
    /// Argument <paramref name="operand2"/> is &lt; 0.
    /// </exception>
    double Subtract(double operand1, double operand2);

    ...

    // Calculator.cs:

    public double Subtract(double operand1, double operand2)
    {
        if (operand1 < 0.0)
        {
            throw new ArgumentException("Value must not be negative.", "operand1");
        }

        if (operand2 < 0.0)
        {
            throw new ArgumentException("Value must not be negative.", "operand2");
        }

        return (this.LastResult = operand1 - operand2).Value;
    }

Obviously, the argument validation produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of lines of production code, we do an Extract Method refactoring.
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

    using System;
    using LinFu.IoC.Configuration;

    namespace Calculator
    {
        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            #region ICalculator

            public double? LastResult { get; private set; }

            public double Add(double operand1, double operand2)
            {
                ThrowIfOneOperandIsInvalid(operand1, operand2);

                return (this.LastResult = operand1 + operand2).Value;
            }

            public double Subtract(double operand1, double operand2)
            {
                ThrowIfOneOperandIsInvalid(operand1, operand2);

                return (this.LastResult = operand1 - operand2).Value;
            }

            #endregion // ICalculator

            #region Implementation (Helper)

            private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
            {
                if (operand1 < 0.0)
                {
                    throw new ArgumentException("Value must not be negative.", "operand1");
                }

                if (operand2 < 0.0)
                {
                    throw new ArgumentException("Value must not be negative.", "operand2");
                }
            }

            #endregion // Implementation (Helper)

        } // class Calculator

    } // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

"STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any more tests or any more code for your class at this point, what value does refactoring add?"

Derick immediately answers his own question:

"So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it."

I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to achieve good overall quality in the system (e.g. in terms of the Code Duplication Percentage metric), you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

1. Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first…).
2. It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves 'only' to make the production code work. But it's solely the number of delivered features that counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve. Historically speaking, the entire software development profession is very young - it is only at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines, or the lack of a tool or something like that - to 'save' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard of that quite a few times…). Producing high-quality products requires high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of developing software… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g. if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be very much "good enough" in my practical experience. But of course, I'm open to suggestions here…

My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
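To make that last rule concrete, here is a minimal sketch in the style of the tests above (a Multiply() method is hypothetical here - it is not part of the Calculator shown in this post - and only serves as an illustration):

    // Hypothetical third operation, written in the same Gallio/MbUnit style
    // as the tests above. During the red phase of red-green-refactor, this
    // test may be red because Multiply() still throws NotImplementedException -
    // acceptable there, since it proves the test calls the right method.
    [Test]
    [Row(2, 3, 6)]
    public void TestMultiply(double operand1, double operand2, double expectedResult)
    {
        ICalculator calculator = container.GetService<ICalculator>();

        double result = calculator.Multiply(operand1, operand2);

        // Once this test runs as part of CI, only this assertion may turn it
        // red - not a missing container registration, not a
        // NotImplementedException, not any other environmental problem.
        Assert.AreEqual(expectedResult, result);
    }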

    Read the article

  • How to Use An Antivirus Boot Disc or USB Drive to Ensure Your Computer is Clean

    - by Chris Hoffman
If your computer is infected with malware, running an antivirus within Windows may not be enough to remove it. If your computer has a rootkit, the malware may be able to hide itself from your antivirus software. This is where bootable antivirus solutions come in. They can clean malware from outside the infected Windows system, so the malware won't be running and interfering with the clean-up process.

The Problem With Cleaning Up Malware From Within Windows

Standard antivirus software runs within Windows. If your computer is infected with malware, the antivirus software will have to do battle with the malware. Antivirus software will try to stop the malware and remove it, while the malware will attempt to defend itself and shut down the antivirus. For really nasty malware, your antivirus software may not be able to fully remove it from within Windows.

Rootkits, a type of malware that hides itself, can be even trickier. A rootkit could load at boot time before other Windows components and prevent Windows from seeing it, hide its processes from the task manager, and even trick antivirus applications into believing that the rootkit isn't running. The problem here is that the malware and antivirus are both running on the computer at the same time. The antivirus is attempting to fight the malware on its home turf - the malware can put up a fight.

Why You Should Use an Antivirus Boot Disc

Antivirus boot discs deal with this by approaching the malware from outside Windows. You boot your computer from a CD or USB drive containing the antivirus, and it loads a specialized operating system from the disc. Even if your Windows installation is completely infected with malware, the special operating system won't have any malware running within it. This means the antivirus program can work on the Windows installation from outside it. The malware won't be running while the antivirus tries to remove it, so the antivirus can methodically locate and remove the harmful software without it interfering. Any rootkits won't be able to set up the tricks they use at Windows boot time to hide themselves from the rest of the operating system. The antivirus will be able to see the rootkits and remove them.

These tools are often referred to as "rescue disks." They're meant to be used when you need to rescue a hopelessly infected system.

Bootable Antivirus Options

As with any type of antivirus software, you have quite a few options. Many antivirus companies offer bootable antivirus systems based on their antivirus software. These tools are generally free, even when they're offered by companies that specialize in paid antivirus solutions. Here are a few good options:

- avast! Rescue Disk - We like avast! for offering a capable free antivirus with good detection rates in independent tests. avast! now offers the ability to create an antivirus boot disc or USB drive. Just navigate to the Tools -> Rescue Disk option in the avast! desktop application to create bootable media.
- BitDefender Rescue CD - BitDefender always seems to receive good scores in independent tests, and the BitDefender Rescue CD offers the same antivirus engine in the form of a bootable disc.
- Kaspersky Rescue Disk - Kaspersky also receives good scores in independent tests and offers its own antivirus boot disc.

These are just a handful of options. If you prefer another antivirus for some reason - Comodo, Norton, Avira, ESET, or almost any other antivirus product - you'll probably find that it offers its own system rescue disk.
How to Use an Antivirus Boot Disc

Using an antivirus boot disc or USB drive is actually pretty simple. You'll just need to find the antivirus boot disc you want to use and burn it to disc or install it on a USB drive. You can do this part on any computer, so you can create antivirus boot media on a clean computer and then take it to an infected computer.

Insert the boot media into the infected computer and then reboot. The computer should boot from the removable media and load the secure antivirus environment. (If it doesn't, you may need to change the boot order in your BIOS or UEFI firmware.) You can then follow the instructions on your screen to scan your Windows system for malware and remove it. No malware will be running in the background while you do this.

Antivirus boot discs are useful because they allow you to detect and clean malware infections from outside an infected operating system. If the operating system is severely infected, it may not be possible to remove - or even detect - all the malware from within it.

Image Credit: aussiegall on Flickr

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which Red Hat released a few days ago. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that customers most often ask us about when they are seriously evaluating a middleware infrastructure, especially if JBoss is under consideration for some reason. I would suggest that you keep the following question in mind while you read these points: "Why should I choose JBoss instead of WebLogic?"

1) Multi Datacenter Deployment and Clustering

- D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6, on the other hand, has no direct D/R support included; Red Hat relies on third-party tools with higher prices. When you consider a middleware solution to host your business-critical application, you should weigh every architectural aspect of the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not plan for D/R, your solution will not be reliable. With Red Hat and JBoss EAP 6, this extra cost will considerably increase the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture.

- WebLogic Server 12c supports advanced LAN clustering, detection of dead servers, and a common alert framework. JBoss EAP 6, on the other hand, has limited LAN clustering support with no dead-server detection. It does not generate any alerts when servers go down (only if you buy JBoss ON, which is a separate technology that, as of this writing, does not support JBoss EAP 6), and manual intervention is required when servers go down. In most cases, admin people must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" commands to manage failures and clustering anomalies.

- WebLogic Server 12c supports the concept of the Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6, on the other hand, has no equivalent technology; whole server instances must be managed individually.

- The WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager also enables another very important capability that JBoss EAP lacks: secured administration. When using the WebLogic Node Manager, all administration tasks are sent to the managed servers through a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

- WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
Using OTD, WebLogic instances are load-balanced by highly capable software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered-systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6, on the other hand, only offers support for the Apache Web Server with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

2) Application and Runtime Diagnostics

- WebLogic Server 12c has diagnostic capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6, on the other hand, has no diagnostic capabilities; its only diagnostic tool is the log generated by the application server. Admin people are left to analyze thousands of log lines to find out what is going on.

- WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior, and possible bottlenecks. WebLogic Server 12c also has an embedded classloader analysis tool, and even a log analyzer tool that enables administrators and developers to view the logs of multiple servers at the same time. JBoss EAP 6, on the other hand, relies on third-party tools for anything similar; again, only log searching is offered to find out what's going on.

- WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers, and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6, even with JBoss ON ("Operations Network"), which is a separate technology, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs. One such tool is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere, and IIS transparently.

- WebLogic Server 12c, with its JRockit support, offers a tool called JRockit Flight Recorder, which can give developers complete visibility into a given period of production monitoring with zero extra overhead. This automatic recording allows you to deeply analyze thread latency, memory leaks, thread contention, resource utilization, stack overflow damage, and GC ("Garbage Collection") cycles; to observe stop-the-world phenomena in real time; and to analyze generational, reference-counting, and parallel collections as well as mutator threads. JBoss EAP 6 cannot even dream of supporting something similar, not least because Red Hat does not have its own JVM.

3) Application Server Administration

- WebLogic Server 12c offers a complete administration console complemented by scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6, on the other hand, has a limited console and provides XML-centric administration. After ten years, JBoss has only started developing a rudimentary centralized administration that still leaves many administration tasks aside, so admin people and developers must touch scripts and XML configuration files for most advanced - and even simple - administration tasks. This leads to error-prone and risky deployments.
Even using JBoss ON, JBoss EAP is not able to offer decent administration features; admin people must be highly skilled in JBoss's internal architecture and its management capabilities.

- Oracle EM is available to manage multiple domains, databases, application servers, operating systems, and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of this writing, JBoss ON does not support JBoss EAP 6, so even this minimal support is unavailable for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss admin people.

- WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6, on the other hand, differentiates between two operational models - standalone mode and domain mode - that are not consistent with each other. Depending on the mode used, different administration skills are required.

- WebLogic Server 12c has no single-point-of-failure processes and does not need to define any specialized server. The domain model in WebLogic has been available for years (at least ten years or more) and is production-proven. JBoss EAP 6, on the other hand, needs special processes to guarantee JBoss integrity: the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it can do what the WebLogic domain model does.

- WebLogic Server 12c supports a parallel deployment model, which enables several artifacts to be deployed at the same time. JBoss EAP 6, on the other hand, has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (an EAR of 120 MB, for instance) onto JBoss EAP 6, it will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut the deployment time by at least 2X compared to JBoss.

4) Support and Upgrades

- WebLogic Server 12c has patch management available. JBoss EAP 6, on the other hand, has no patch management; each JBoss EAP instance must be patched manually. To get such a feature, you need to buy a separate technology called JBoss ON ("Operations Network"), which manages this type of thing. But as of this writing, JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

- WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and mature since its creation in 1995. JBoss EAP 6, on the other hand, has a proven lack of supportability between JBoss AS 4, 5, 6, and 7: different kernels and messaging engines were implemented in the JBoss stack over the last five years, revealing an inability to create a well-architected and proven middleware technology.

- WebLogic Server 12c has patch prescription based on the customer's configuration. JBoss EAP 6 has no such capability; people need to create support tickets and have their installations reviewed by Red Hat support staff to get any patch prescriptions from them.

- Oracle WebLogic Server, independent of the version, has 8 years of support for new patches, and existing patches remain available for life beyond that.
JBoss EAP 6, on the other hand, provides patches for a specific application server version for up to 5 years after the release date; JBoss EAP 4 and previous versions had only 4 years. A good question that Red Hat will struggle to answer is: "What happens when you find issues after year 5?"

5) RAC ("Real Application Clusters") Support

- WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.). The Oracle JDBC thin driver is also available. JBoss EAP 6, on the other hand, ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, and manual intervention is necessary in case of planned or unplanned RAC downtime; with JBoss EAP 6, the situation does not re-establish itself automatically after downtime.

- WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds more value to critical business applications, leveraging investments in both Oracle database technology and Oracle middleware. JBoss EAP 6, on the other hand, shows no such performance gains at all, even when admin people implement some kind of connection-pool tuning.

- WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution that is distributed not only across a LAN cluster but also across different data centers. JBoss EAP 6, on the other hand, has no such support.

6) Standards and Technology Support

- WebLogic Server 12c has been fully Java EE 6 compatible and production-ready since December 2011. JBoss EAP 6, on the other hand, became fully Java EE 6 compatible only in the community version three months later, and production-ready only a few days ago, considering that this article was written in June 2012. Red Hat says they are the masters of innovation and technology proliferation, but compared with Oracle and even other proprietary vendors like IBM, they have historically been slow to deliver the newest technologies and standards adherence.

- Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat, on the other hand, does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as their JVM, which means that without Oracle's involvement, Red Hat's support is limited exclusively to the application server layer - and we all know that many problems happen in the JVM layer.

- WebLogic Server 12c natively supports JDK 7, which empowers developers to exploit the full productivity of the Java platform when writing code. This differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces such remarkable productivity features as the try-with-resources enhancement, catching multiple exceptions in one try block, Strings in switch statements, JVM improvements in terms of JDBC, I/O, networking, security, and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features of JDK 7 can be found here.
JBoss EAP 6, on the other hand, does not officially support JDK 7; the community documentation comments that "Java SE 7 can be used with JBoss 7", which gives you no guarantee of enterprise support for JDK 7.

- Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use WebLogic's special transaction manager and exposing bean interfaces as WebLogic MBeans to take advantage of all of WebLogic's monitoring and administration capabilities. JBoss EAP 6, on the other hand, has no special integration with Spring. In fact, Red Hat offers a dubious package called "JBoss Web Platform" that in theory supports Spring, but in practice offers no special integration; it is just a way for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.

7) Lightweight Development

- Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, offering you a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6, on the other hand, has no support for natively reusing an existing (or still-in-development) application from the JBoss AS community server. Users of JBoss suffer from critical issues at deployment time, including changing the libraries and dependencies of the application, patching the DTD or XSD deployment descriptors, refactoring the application layers due to classloading issues and anomalies, and rebuilding the persistence, business, and web layers due to issues with "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend", etc. If your culture or enterprise IT directive is to develop Java EE applications on community middleware and, at some point in the future, transition to enterprise (vendor-supported) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution.

- WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you add another 100 MB, resulting in a larger download footprint. This is particularly relevant if you plan to automate the setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.

- WebLogic Server 12c has complete integration with Maven, allowing developers to set up WebLogic domains with a few commands. Tasks like downloading WebLogic, installation, domain creation, and data source deployment are completely integrated. JBoss EAP 6, on the other hand, offers only limited integration with those tools.

- WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS, and JCA containers, leaving only the web container with the Java EE 6 web profile enabled. JBoss EAP 6, on the other hand, has no such feature; you need to manually disable the containers you do not want to use.

- WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and you do not want to redeploy the entire application.
This is the same behavior that most application servers offer for JSP pages, but with WebLogic Server 12c you have the same feature for Java classes in general. JBoss EAP 6, on the other hand, has no such support; even JBoss EAP 5 does not support this to date.

8) JMS and Messaging

- WebLogic Server 12c has had a proven and highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6, on the other hand, has a still-immature technology called HornetQ, which was introduced in JBoss EAP 5, replacing everything implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments. And when they are asked why they changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This Red Hat practice leads to uncomfortable investments that in the near future (sometimes in less than a year) will be affected in some way.

- WebLogic Server 12c has troubleshooting and monitoring features included in the WebLogic console and WLDF. JBoss EAP 6, on the other hand, has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

- WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6, on the other hand, has a JMS storage mechanism that relies on an Oracle database or MySQL. This means that if an issue happens in production and Red Hat claims that a performance issue is due to database problems, they will not support you on the performance issue; they will direct you to call Oracle instead.

- WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics, and foreign JMS provider support, all of which leverage JMS implementations transparently, without compromising developer code. JBoss EAP 6 cannot even dream of supporting such features.

9) Caching and Grid

- Coherence, the leading and most mature data grid technology from Oracle, has been available since early 2000 and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administrative console; even the Node Manager supports Coherence. JBoss, on the other hand, discontinued JBoss Cache, its caching implementation, just as it did with the messaging implementation (JBossMQ) - an issue for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability.

- WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions - without any code changes - from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish, and even Microsoft IIS. JBoss EAP 6, on the other hand, has no such support, and even if it gains something similar in the future, it will probably support only its own application server.

- Coherence can be used to manage both L1 and L2 cache levels, providing support for Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan, on the other hand, support only Hibernate. And most important of all: Infinispan has no known successful case of L1 or L2 cache-level support using Hibernate, which leads us to question its viability.
10) Performance

- WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run applications unchanged on this engineered system. Customers benefit from Exalogic optimizations at both the kernel and JVM layers, which boost performance by as much as 10X for web, OLTP, JMS, and grid applications. JBoss EAP 6, on the other hand, has no investment in engineered systems: customers do not have the option of deploying on an ultra-fast Java system if their project becomes relevant and performance issues are detected.

- WebLogic Server 12c maintains a performance gain across each new release: starting with WebLogic 5.1, the overall performance gain has been close to 4X, which is roughly a 20% gain release over release. JBoss, on the other hand, does not provide SPECjAppServer or SPECjEnterprise performance benchmarks. Their so-called "performance gains" remain hidden in their customers' environments, which leaves us wondering whether they are real, since we will never get access to those environments.

- WebLogic Server 12c has industry performance benchmarks, with submissions across platforms and configurations leading SPECj. Oracle WebLogic leads SPECjAppServer performance in multiple categories, fitting all customer topologies: dual-node, single-node, multi-node, and multi-node with RAC. JBoss… again, does not provide any SPECjAppServer performance benchmarks.

- WebLogic Server 12c has a feature called work managers, which allows your application to reach new performance levels based on critical resource utilization of the CPUs. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters as well as actual run-time performance and throughput. JBoss EAP 6, on the other hand, has no comparable feature, and probably never will. Without something like work managers, JBoss EAP 6 forces admin people - and especially developers - to uncover performance gains in an intrusive way, rewriting code and doing performance refactorings.

11) Professional Services Support

- WebLogic Server 12c, like any other technology sold by Oracle, gives customers the possibility of hiring OCS ("Oracle Consulting Services") to manage critical scenarios, assist in the deployment of new applications, and provide highly skilled consulting on architecture and best practices, with people allocated alongside customer teams. All OCS services are available without any restrictions, whether the customer has already bought software from Oracle or is just starting an implementation before any acquisition. JBoss EAP 6 - or Red Hat, to be more specific - only offers professional services if you buy subscriptions from them. If you are developing a new critical application for your business and need Red Hat's help with a serious issue or an architecture decision, they will probably say: "OK… I can help you, but only after you buy subscriptions from me". Red Hat also does not allow its professional services consultants to manage environments that use community-based software. They will probably force you to first buy a subscription, download the "enterprise" version, and then, optionally, hire their consultants.

- Oracle provides its university to educate your team in our technologies, including, of course, specialized training for the WebLogic application server. At any time and location, you can hire Oracle to train your team, so you get trustworthy knowledge tailored to your specific needs.
Certifications for the products are also available, if your technical people wish to differentiate themselves as professionals. Red Hat, on the other hand, has a limited pool of resources to train your team in its technologies. Basically, they sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you ask for more specialized training in JBoss middleware, they will probably refer you to localized training from some "certified" partner, since they are apparently discontinuing their education center, at least here in Brazil. They have not been able to reproduce the success of their RHEL education in their middleware division, since they first need to sell the subscriptions before giving you specialized training. And again, they only offer specialized training based on their enterprise version (EAP in the case of JBoss), which means that the courses will be quite outdated. There are reports of developers who took official training from Red Hat this year (2012): in a certain advanced JBoss course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ, since the training had been created for JBoss EAP 4.3.

12) Encouraging Transparency without Ulterior Motives

- WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you only need an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: these are the official binaries that will run forever in your data center. Oracle does not push the usage of "specific versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6, on the other hand, is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn, or just start creating your application using Red Hat's middleware software, you have to download it from the community website. You are not allowed to download the enterprise version, which, according to Red Hat, is more secure, reliable, and robust. But none of us wants to start developing software on insecure, unreliable, and unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions from them to get access to the "good" version of the software.

- WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here to access our price list; in the case of WebLogic, check out the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that could make the investment in our technology feasible, but you are not required to do so - only if you are interested in buying our technology or want to discuss some discount scenarios. JBoss EAP 6, on the other hand, does not have its cost publicly available on Red Hat's website or anywhere else; at least, such information is not easy to find. The only link you are likely to find on their website is a "Contact a Sales Representative" link. This is not a very good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
In such situations, customers expect to see software prices publicly available, so they have the chance to decide, based on the existing features of the software, whether the cost is fair or not.

Conclusion

Oracle WebLogic is the most mature, secure, reliable, and scalable Java EE application server on the market, and it has a proven record of success around the globe to demonstrate its maturity. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is completely based on the Cloud.

    Read the article

  • Change or Reset Windows Password from a Ubuntu Live CD

    - by Trevor Bekolay
If you can't log in even after trying your twelve passwords, or you've inherited a computer complete with password-protected profiles, worry not - you don't have to do a fresh install of Windows. We'll show you how to change or reset your Windows password from a Ubuntu Live CD.

This method works for all of the NT-based versions of Windows - anything from Windows 2000 and later, basically. And yes, that includes Windows 7. You'll need a Ubuntu 9.10 Live CD, or a bootable Ubuntu 9.10 Flash Drive. If you don't have one, or have forgotten how to boot from the flash drive, check out our article on creating a bootable Ubuntu 9.10 flash drive. The program that lets us manipulate Windows passwords is called chntpw. The steps to install it are different in 32-bit and 64-bit versions of Ubuntu.

Installation: 32-bit

Open up Synaptic Package Manager by clicking on System at the top of the screen, expanding the Administration section, and clicking on Synaptic Package Manager.

chntpw is found in the universe repository. Repositories are a way for Ubuntu to group software together so that users are able to choose if they want to use only completely open source software maintained by Ubuntu developers, or branch out and use software with different licenses and maintainers. To enable software from the universe repository, click on Settings > Repositories in the Synaptic window. Add a checkmark beside the box labeled "Community-maintained Open Source software (universe)" and then click Close.

When you change the repositories you are selecting software from, you have to reload the list of available software. In the main Synaptic window, click on the Reload button. The software lists will be downloaded. Once downloaded, Synaptic must rebuild its search index. The label over the text field by the Search button will read "Rebuilding search index." When it reads "Quick search," type chntpw in the text field; the package will show up in the list.

Click on the checkbox near the chntpw name, and click on Mark for Installation. chntpw won't actually be installed until you apply the changes you've made, so click on the Apply button in the Synaptic window now. You will be prompted to accept the changes; click Apply. The changes should be applied quickly. When they're done, click Close. chntpw is now installed! You can close Synaptic Package Manager and skip to the section titled "Using chntpw to reset your password."

Installation: 64-bit

The version of chntpw available in Ubuntu's universe repository will not work properly on a 64-bit machine. Fortunately, a patched version exists in Debian's Unstable branch, so let's download it from there and install it manually.

Open Firefox. Whether it's your preferred browser or not, it's very readily accessible in the Ubuntu Live CD environment, so it will be the easiest to use. There's a shortcut to Firefox in the top panel. Navigate to http://packages.debian.org/sid/amd64/chntpw/download and download the latest version of chntpw for 64-bit machines.

Note: In most cases it would be best to add the Debian Unstable branch to a package manager, but since the Live CD environment will revert to its original state once you reboot, it'll be faster to just download the .deb file.

Save the .deb file to the default location. You can close Firefox if desired. Open a terminal window by clicking on Applications at the top-left of the screen, expanding the Accessories folder, and clicking on Terminal.
In the terminal window, enter the following text, hitting enter after each line:

    cd Downloads
    sudo dpkg -i chntpw*

chntpw will now be installed.

Using chntpw to reset your password

Before running chntpw, you will have to mount the hard drive that contains your Windows installation. In most cases, Ubuntu 9.10 makes this simple. Click on Places at the top-left of the screen. If your Windows drive is easily identifiable - usually by its size - then left-click on it. If it is not obvious, then click on Computer and check out each hard drive until you find the correct one. The correct hard drive will have the WINDOWS folder in it. When you find it, make a note of the drive's label that appears in the menu bar of the file browser.

If you don't already have one open, start a terminal window by going to Applications > Accessories > Terminal. In the terminal window, enter the commands

    cd /media
    ls

pressing enter after each line. You should see one or more strings of text appear; one of those strings should correspond with the drive label you noted in the file browser earlier. Change to that directory by entering the command

    cd <hard drive label>

Since the hard drive label will be very annoying to type in, you can use a shortcut by typing in the first few letters or numbers of the drive label (capitalization matters) and pressing the Tab key. It will automatically complete the rest of the string (if those first few letters or numbers are unique).

We want to switch to a certain Windows directory. Enter the command:

    cd WINDOWS/system32/config/

Again, you can use tab-completion to speed up entering this command.

To change or reset the administrator password, enter:

    sudo chntpw SAM

SAM is the registry hive file that holds the Windows user account data. You will see some text appear, including a list of all of the users on your system. At the bottom of the terminal window, you should see a prompt that begins with "User Edit Menu:" and offers four choices. We recommend that you clear the password to blank (you can always set a new password in Windows once you log in). To do this, enter "1" and then "y" to confirm. If you would like to change the password instead, enter "2", then your desired password, and finally "y" to confirm.

If you would like to reset or change the password of a user other than the administrator, enter:

    sudo chntpw -u <username> SAM

From here, you can follow the same steps as before: enter "1" to reset the password to blank, or "2" to change it to a value you provide.

And that's it!

Conclusion

chntpw is a very useful utility provided for free by the open source community. It may make you think twice about how secure the Windows login system is, but knowing how to use chntpw can save your tail if your memory fails you two or eight times!

    Read the article

< Previous Page | 229 230 231 232 233 234 235 236 237 238 239 240  | Next Page >