Search Results

Search found 7007 results on 281 pages for 'third party'.


  • How can I restore my laptop from Windows 7 to its original Windows Vista recovery partition?

    - by Cam Jackson
    I bought my Acer laptop 4 years ago with Vista Home Premium x86. It has a recovery partition that I have used successfully in the past to format everything and reinstall Windows to factory settings. I have since upgraded to Windows 7, but I now need to get back to my original installation. Not sure what it's called, but I can successfully get into this recovery thingy. However, when I click the third option (for me I think it says 'Windows Image Recovery' or something like that) it tells me that it can't find any images to recover from :( I have checked and I don't have a Windows.old that I can recover from either.

    One final note: if I launch diskmgmt.msc, these are the partitions. Why is the first partition shaded? Does that mean anything? Both of the 'unlettered' partitions are 100% empty. Did the Windows 7 upgrade process format my Vista system recovery partition?! And finally: how can I get back to my factory settings?

    EDIT: I did see this question, but neither of the answers applies to my situation.

    Edit to address jdh's answer: From what I can tell, I never get the option to boot the Vista recovery partition. After hitting F10, I get this screen, except it's partition 2, and I don't have the IN/MINT bit. I hit Escape, and then I get this screen, except without Ubuntu listed, and without the auto-countdown thing. I hit F8, and then I get this screen. I hit Enter on the first option, and I end up at the screen in the first screenshot. As I said, from there I click the third option, and it fails to find the image, which I guess makes sense if it's only looking for a Windows 7 recovery. So I either need to make the Windows 7 tool see the Vista recovery partition, or I need the boot loader (?) to let me select Vista earlier in the process. Any ideas?

    Read the article

  • How do I count the times each number appears in columns of numbers?

    - by Andy C.
    I am sure this must be easy, but I am inexperienced. The best way to think of my problem is as trying to sort and then count lottery numbers. To keep it simple, let's do a Pick 3 game and look at 10 drawings. I would split each drawn number into a separate column:

        DATE   BALL#1   BALL#2   BALL#3
        3/1    1        3        5
        3/2    3        7        8
        3/3    2        2        1
        3/4    5        7        6
        3/5    2        3        1
        3/6    0        5        9
        3/7    3        7        0
        3/8    6        8        4
        3/9    2        4        3
        3/10   7        1        2

    I would like to be able to build formulas into cells that would tell me how many times each number appeared overall, and how many times it appeared in each position. Like this (using the above example):

        Number   Overall Count   Ball#1 Count   Ball#2 Count   Ball#3 Count
        0        2               1              0              1
        1        4               1              1              2

    (That is, the number zero appears twice overall, and came up once as the first number drawn, zero times as the middle ball, and once as the third ball. Likewise, the number 1 was drawn four times in our 10-day period: it was the first ball once, the second ball once and the third ball twice.) And so on. All help appreciated. I have access to Excel and Microsoft Works, or of course a Google Docs way to handle this would also work. Thanks for any help.
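
    One way to get these counts (a sketch, not the only approach) is Excel's COUNTIF function; the same formulas work in Google Sheets. The cell layout here is an assumption: the drawn balls sit in B2:D11 and the digits 0-9 are listed in F2:F11 of a summary table.

        G2: =COUNTIF($B$2:$D$11, $F2)    overall count for the digit in F2
        H2: =COUNTIF(B$2:B$11, $F2)      Ball#1 count; copy right for Ball#2 and Ball#3

    Filling G2:H2 down the summary rows and copying the second formula across the three ball columns reproduces the result table above, because the relative column reference (B$2:B$11) shifts to C and D as it is copied right while $F2 keeps pointing at the digit column.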

    Read the article

  • Linux: Send mail to external mail box from a server with that user's hostname

    - by dtbarne
    I've got sendmail running on a linux box. Let's say the hostname of the box is bar.com. If I run the following command, I don't receive the email (which is hosted with a third party), presumably due to the hostname pointing to the local machine.

        echo "Test Body" | mail -s "Test Subject" [email protected]

    Is there any way to get this to work so that I can receive emails at my third party email address even though it has the same hostname? Do I have to change the hostname of this server (not preferred)? It may be worth noting that I created a user "foo" on my machine and noticed that the mailbox for that account is empty. I noticed these log entries, which may or may not be relevant:

        Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: from=apache, size=80, class=0, nrcpts=1, msgid=<[email protected]>, relay=apache@localhost
        Jun 28 01:09:48 bar sendmail[14339]: p5S59mIA014339: from=<[email protected]>, size=293, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.$
        Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: [email protected], ctladdr=apache (48/48), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30080, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5S59mIA$
        Jun 28 01:09:48 bar sendmail[14340]: p5S59mIA014339: to=<[email protected]>, ctladdr=<[email protected]> (48/48), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30495, dsn=2.0.0, stat=Sent
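
    The usual cause of this symptom is that sendmail treats any domain listed in /etc/mail/local-host-names (its class w) as local, so mail for bar.com is dropped into the local mailbox and never leaves the box. A minimal sketch of the two common fixes, assuming the real mailboxes live behind the domain's external MX (mx.external-provider.example is a placeholder for whatever host the third party actually uses):

        # check whether sendmail considers bar.com a local domain
        grep -i 'bar\.com' /etc/mail/local-host-names

        # option 1: remove bar.com from local-host-names and restart sendmail,
        # so mail for bar.com is routed via the domain's public MX records

        # option 2: keep the name but force delivery to the external mail host
        # (requires FEATURE(`mailertable') in sendmail.mc)
        echo "bar.com	smtp:[mx.external-provider.example]" >> /etc/mail/mailertable
        makemap hash /etc/mail/mailertable < /etc/mail/mailertable
        service sendmail restart

    Option 1 is simpler if nothing on the box should ever receive mail for bar.com locally; option 2 keeps the hostname untouched, which matches the "not preferred to change the hostname" constraint in the question.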

    Read the article

  • openVPN as a way to connect to a LAN by another client, different from server

    - by Einar
    Setup: one LAN handled by a router without a publicly available IP address but without any outbound connection restrictions ("target LAN"); a separate server publicly reachable from the Internet ("gateway"). I am trying to set up openVPN so that a third client can connect to the "gateway" and access the "target LAN". As the router of the "target LAN" is not reachable from the Internet directly, it connects to the gateway itself via openVPN as well. The problem is how to handle routing. The LAN router has two network interfaces (for the outside network and the LAN itself). In openVPN (the server on the gateway) I set client-to-client and push "route 192.168.10.0 255.255.255.0" but I assume this would be horribly wrong (it actually messed up the routing on the LAN router until I killed openVPN). openVPN is not using bridging, is configured via tun. Other config details from the server:

        server 10.8.0.0 255.255.255.0
        client-config-dir ccd
        route 192.168.10.0 255.255.255.0

    And the client file in ccd is:

        iroute 192.168.10.0 255.255.255.0

    What can be adjusted to ensure that a third client can connect through openVPN and access the LAN mentioned earlier?
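
    For reference, a minimal sketch of the pieces that usually have to line up for this topology. The file names here are assumptions (a ccd file called lan-router for the LAN router's certificate CN, and roaming-client for the third client): the server needs both route and the matching iroute so it knows which client owns 192.168.10.0/24, the route should only be pushed to clients that are not already inside that LAN, and the LAN router must forward packets arriving on tun0.

        # --- server.conf on the gateway ---
        server 10.8.0.0 255.255.255.0
        client-to-client
        client-config-dir ccd
        route 192.168.10.0 255.255.255.0          # kernel/VPN route towards the LAN router client

        # --- ccd/lan-router (CN of the LAN router's certificate) ---
        iroute 192.168.10.0 255.255.255.0         # tells OpenVPN this client owns the LAN subnet

        # --- ccd/roaming-client (the third client only) ---
        push "route 192.168.10.0 255.255.255.0"   # per-client push, so the LAN router itself
                                                  # never receives a route to its own LAN

        # --- on the LAN router ---
        sysctl -w net.ipv4.ip_forward=1           # forward between tun0 and the LAN interface

    Pushing the route per client in the ccd directory (instead of a global push in server.conf) is what avoids the symptom described above, where the LAN router's own routing table got broken by receiving a route back to its own subnet.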

    Read the article

  • How to disable Utility Manager (Windows Key + U)

    - by Skizz
    How do I disable the Windows + U hotkey in Windows XP? Alternatively, how do I stop the Utility Manager from being active? The two are related. The Utility Manager is currently providing a potential security hole and I need to remove it[1]. The system I'm developing uses a custom GINA to log in and start a custom shell. This removes most Windows Key hotkeys, but Win + U still pops up the manager app.

    Update: Things I've tried that don't work:
    - NoWinKeys registry setting - this only affects Explorer hotkeys;
    - Renaming utilman.exe - the program reappears at the next login;
    - Third-party software - not really an option; these machines are audited by the clients and additional third-party software would be unlikely to be accepted.

    Also, the procedure needs to be reasonably straightforward - this has to be done by field service engineers on existing machines (machines currently in Russia, Holland, France, Spain, Ireland and USA).

    [1] The hole is via the Internet options in the help viewer the utility app links to.
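
    Two workarounds sometimes used on XP, offered only as a sketch to test against your audit requirements, not as a vetted fix. The first uses the Image File Execution Options key to start utilman.exe under a "debugger" that quits immediately, so the Win+U window never appears; the second simply denies everyone access to the executable, which, unlike renaming or deleting the file, is not undone by Windows File Protection.

        REM Variant 1: IFEO "debugger" stub - utilman.exe exits before it can show its window
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\utilman.exe" /v Debugger /t REG_SZ /d "ntsd -c q" /f

        REM Variant 2: deny all access to the executable (reversible with cacls /E /R Everyone)
        cacls %windir%\system32\utilman.exe /E /D Everyone

    Both are single commands, so they fit the "field engineer runs one script" constraint, but test them on a reference machine first; blocking utilman.exe also blocks the legitimate accessibility tools it fronts.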

    Read the article

  • UIDs for service users in Mac OS X

    - by LaC
    Some third-party servers should be run under a special user for security reasons (eg, PostgreSQL is typically run by "postgres"). Of course, these service users should not show up in the Mac OS X login windows. I know how to create hidden users using dscl or dsimport, but I'm wondering what the best policy is for assigning UIDs (and matching GIDs). Apple's documentation states that UIDs from 0 to 100 are reserved (pg. 69), but OS X comes with several special users and groups outside that range. I used to use ids from 401 onwards for services, but I noticed that OS X 10.6 has started using that range for groups created by the Sharing pane in System Preferences. What is the recommended ID range to use for third-party services, then? Perhaps I should just use IDs in the 500 range, since all that is needed to hide a user in Snow Leopard is setting his password to "*"? Also, most of Apple's services have names starting with an underscore, with an alias sans underscore; eg, _sandbox and sandbox. Is there any special significance to this? Should I do the same for my services?
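
    Not an official Apple recommendation, just the pattern a hand-rolled service account usually follows: stay below 500 so the account is treated as a system account, and use the underscore name with a non-underscore alias, as Apple itself does. In this sketch the UID/GID 450 and the _myservice name are arbitrary placeholders; check for collisions first with dscl . -list /Users UniqueID and dscl . -list /Groups PrimaryGroupID.

        # create a matching hidden group and user with id 450
        sudo dscl . -create /Groups/_myservice PrimaryGroupID 450
        sudo dscl . -create /Users/_myservice
        sudo dscl . -create /Users/_myservice UniqueID 450
        sudo dscl . -create /Users/_myservice PrimaryGroupID 450
        sudo dscl . -create /Users/_myservice UserShell /usr/bin/false
        sudo dscl . -create /Users/_myservice NFSHomeDirectory /var/empty
        sudo dscl . -create /Users/_myservice Password '*'
        sudo dscl . -append /Users/_myservice RecordName myservice   # alias without the underscore

    The '*' password is what keeps the account out of the Snow Leopard login window, and the appended RecordName gives you the same underscore/alias pairing (_sandbox and sandbox) mentioned above.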

    Read the article

  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes" - but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically have the infrastructure already in place to support these services at several of them. To provide access to these "services", I would create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP for one of my DCs), and clients would connect to it via the DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry to some sort of third party who offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our servers, but that would kinda defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.
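
    What this is usually sold as is "managed DNS", "DNS failover" or GSLB (global server load balancing); DynDNS/Dyn, Neustar UltraDNS and the large CDN vendors, for example, all offer variants with SLAs. Mechanically it is just a very low TTL on the service record so that a change made in the provider's console propagates quickly. A sketch in BIND zone-file notation, with placeholder addresses:

        ; vpn.mycompany.com with a 60-second TTL - repointed from the management console on failover
        vpn     60    IN    A    198.51.100.10   ; primary datacenter gateway
        ; vpn   60    IN    A    203.0.113.20    ; standby datacenter gateway (activated on failover)

    The trade-off is the usual one: the lower the TTL, the faster clients follow a failover, but the more DNS queries the provider has to absorb, which is exactly what the hosted services charge for.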

    Read the article

  • Mail server checklist

    - by Jeff
    Currently we ran into some issues with our mail server setup. I'm preparing a list of actions that we should enforce and use in order to maintain a proper email solution within our company. We have around 80 Exchange users, and send mass emails out almost on a monthly basis to 20,000+ customers each time. The checklist I currently have:

    1) McAfee MX Logic 'cloud' anti-spam functionality for incoming messages
    2) Antivirus on each computer in the company
    3) Antivirus on the Exchange and DNS servers
    4) Set up an SPF record
    5) Set up DKIM
    6) Set up DomainKeys
    7) Set up Sender ID
    8) Submit the SPF record to Microsoft, Yahoo, etc. for 'whitelist' purposes
    9) Configure size limits for messages in Exchange to safe numbers
    10) I have 2 outside IPs for my email server; in case one gets blacklisted, switch to the backup
    11) My internet site rests on a different IP than the mail server
    12) All mass emails for the company are sent through a 3rd-party company (listtrak.com)
    13) Set up domain aliases, media, enews, and bounce for the 3rd-party mass-mail software
    14) Verify the setup using [email protected]
    15) Configure group policy and our opendns.org account to prevent unwanted actions and website viewing

    Mass emails:
    1) Schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day)
    2) Set up user preferences, decide what they want to receive etc. (their interests)
    3) Send a steadier flow of email, maybe 100 a week with top new products instead of 20,000 every other month

    If anyone has suggestions or additions/subtractions to this checklist, they are greatly appreciated. Thank you.
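
    For item 4 on the list, a sketch of what the published record tends to look like. The addresses and the include host are placeholders; the bulk-mail provider will tell you the exact include (and DomainKeys/DKIM selector values) to publish for their servers.

        ; SPF: authorise the Exchange MX, your two outbound IPs and the bulk-mail provider
        example.com.    IN TXT    "v=spf1 mx ip4:198.51.100.25 ip4:198.51.100.26 include:listtrak.example ~all"

    After the DNS change has propagated, dig +short TXT example.com (or nslookup -type=TXT example.com on Windows) shows what receiving servers actually see, which is worth checking before submitting the record to the whitelisting programs in item 8.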

    Read the article

  • How to set up multiple video cards in Windows XP

    - by Swish
    All I am trying to do is run a three-monitor setup: two widescreens on the good card at high resolution and a third, smaller monitor at 1024x768 that I watch continuously updating info in. I have a Dell Vostro 200 with an HIS Radeon HD 4670 video card in the PCI-e slot. I need an additional video card that will work in an available PCI slot. I had an Nvidia GeForce 5200 installed, but due to what I'm assuming was a driver compatibility issue (because it was Nvidia, not ATI) it wouldn't run at more than 640x480 4-bit color when installed simultaneously with the HIS card. Previously I used the Nvidia with the onboard Intel video to run a third monitor, but now that the HIS uses PCI-e, the onboard has to be disabled. I tried a Radeon 9200 PCI card that I found (Microsoft-made, I think) with it, and only that or the HIS card could be enabled at the same time. When both were in, the HIS card said "The device cannot start". So, based on the fact that I got the Nvidia to work (although very poorly) with the new HIS Radeon HD 4670 PCI-e, I know it's possible; I just don't know what type of card I need to use. Any ideas?

    Read the article

  • SSO to multiple websites from Sharepoint website

    - by Aico
    We have an intranet based on Sharepoint 2010. In this intranet we have several links to other webservers within the same Active Directory, for example a link to our Outlook Web Access site on our Exchange 2010 environment. We have three different setups which visit this Sharepoint environment and the other webservers: Windows 7 clients that are a member of the Active Directory Home pc's that connect through a SSL VPN appliance Standalone thin clients (Windows 7 embedded) within the corporate network The goal is to let people only sign in once. In the first group this isn't a problem because the AD Integrated Authentication works fine and the Windows logon is passed on to Sharepoint and the other webservers. The second group is also working fine because of the LDAP integration that the SSL VPN appliance uses. The third group is however experiencing issues. They need to enter their credentials everytime they click a link to another webserver. They first need to enter credentials for accessing the Sharepoint environment. When clicking the link for their webmail they have to re-enter their credentials, and so on. Can someone tell me what the best solution would be to also get SSO working fine for the third group? Some extra information: We also have a Forefront TMG server in our environment. I read somewhere that Forefront might be part of a solution for this problem, but not sure how. Maybe someone here can help me? Look forward to some help. Best regards, Aico

    Read the article

  • Using iptables to make a VPN router

    - by lost_in_the_sauce
    I am attempting to make a VPN connection to a third-party VPN site, then forward traffic from my internal computers (SSH and ping for now) out to the VPN site using iptables.

        3rd Party <- (tun0/eth0) Linux VPN Box (eth1) <- Windows 7 Test Box

    I am running CentOS 6.3 Linux and have two network connections, eth0 (public) and eth1 (private). I am running vpnc-0.5.3-4, which currently connects to my destination. When I connect I am able to ping the destination IP addresses, but that is as far as I can get.

        ping -I tun0 10.1.33.26    success
        ping -I eth0 10.1.33.26    fail
        ping -I eth1 10.1.33.26    fail

    I have my private-network Windows 7 test box set up to have the eth1 (private) address of my VPN box as its gateway, and it can ping him fine. I need iptables to send the Windows 7 traffic out the VPN tunnel. I have tried many different iptables configurations from this site and others for a few days; the other examples are either too simple or overly complicated. The only thing this server is doing is connecting to the VPN and forwarding all traffic. So we can "flush" everything and start from scratch here. It is a blank slate.

        #!/bin/bash
        echo "Define variables"
        ipt="/sbin/iptables"
        echo "Zero out all counters"
        $ipt -Z
        $ipt -t nat -Z
        $ipt -t mangle -Z
        echo "Flush all active rules, delete all chains"
        $ipt -F
        $ipt -X
        $ipt -t nat -F
        $ipt -t nat -X
        $ipt -t mangle -F
        $ipt -t mangle -X
        $ipt -P INPUT ACCEPT
        $ipt -P FORWARD ACCEPT
        $ipt -P OUTPUT ACCEPT
        $ipt -t nat -A POSTROUTING -o tun0 -j MASQUERADE
        $ipt -A FORWARD -i eth1 -o eth0 -j ACCEPT
        $ipt -A FORWARD -i eth0 -o eth1 -j ACCEPT
        $ipt -A FORWARD -i eth0 -o tun0 -j ACCEPT
        $ipt -A FORWARD -i tun0 -o eth0 -j ACCEPT

    Again, I have done many variations of the above and many other rules from other posts, but haven't been able to move forward. It seems like such a simple task, and yet....
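
    One thing the script above never touches is kernel forwarding, and the FORWARD pairs cover eth0<->tun0 and eth0<->eth1 but not the path the Windows box actually takes (in on eth1, out on tun0); with ACCEPT policies the missing pair does not matter yet, but it will once the policies are tightened. A hedged addition to test alongside the existing rules:

        #!/bin/bash
        ipt="/sbin/iptables"

        # let the kernel route packets between interfaces at all
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # explicit path for the private LAN: in on eth1, out through the VPN tunnel
        $ipt -A FORWARD -i eth1 -o tun0 -j ACCEPT
        $ipt -A FORWARD -i tun0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # NAT the 192.168.x addresses onto the tunnel (mirrors the existing MASQUERADE rule;
        # vpnc only negotiates an address for tun0 itself, not for the LAN behind it)
        $ipt -t nat -A POSTROUTING -o tun0 -j MASQUERADE

    If ping from the Windows box still fails after this, tcpdump -i tun0 on the VPN box shows whether the packets are leaving NATed and simply not being answered by the third-party side (a common restriction with vpnc-style connections).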

    Read the article

  • MySQL Cluster problem in Ubuntu

    - by Firman
    I have a problem while installing and configuring MySQL Cluster running on Ubuntu 10.10. This is the configuration for cluster management:

        [NDBD DEFAULT]
        NoOfReplicas=2
        DataMemory=10MB
        IndexMemory=25MB
        MaxNoOfTables=256
        MaxNoOfOrderedIndexes=256
        MaxNoOfUniqueHashIndexes=128
        [MYSQLD DEFAULT]
        [NDB_MGMD DEFAULT]
        [TCP DEFAULT]
        [NDB_MGMD]
        Id=1                            # the NDB Management Node (this one)
        HostName=192.168.10.101
        [NDBD]
        Id=2                            # the first NDB Data Node
        HostName=192.168.10.11
        DataDir=/var/lib/mysql-cluster
        [NDBD]
        Id=3                            # the second NDB Data Node
        HostName=192.168.10.12
        DataDir=/var/lib/mysql-cluster
        [MYSQLD]
        [MYSQLD]

    And this is the configuration for both nodes:

        [mysqld]
        ndbcluster
        ndb-connectstring=192.168.10.101   # the IP of the MANAGEMENT (THIRD) SERVER
        [mysql_cluster]
        ndb-connectstring=192.168.10.101   # the IP of the MANAGEMENT (THIRD) SERVER

    After running all the nodes and the management node, I use ndb_mgm, type the 'show' command, and something like this appears:

        ndb_mgm> show
        Connected to Management Server at: localhost:1186
        Cluster Configuration
        ---------------------
        [ndbd(NDB)]     2 node(s)
        id=2    @192.168.10.11  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
        id=3    @192.168.10.12  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)

        [ndb_mgmd(MGM)] 1 node(s)
        id=1    @192.168.10.101  (mysql-5.1.39 ndb-7.0.9)

        [mysqld(API)]   1 node(s)
        id=4 (not connected, accepting connect from 192.168.10.101)

    Look at the last two lines - this is not what http://dev.mysql.com/tech-resources/articles/mysql-cluster-for-two-servers.html shows (see point 4 there). Has anyone ever had this problem?
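
    A "not connected" [mysqld(API)] slot usually just means that no mysqld (or other API client) has attached to it yet; in the referenced article the slots show up as connected because a mysqld with ndbcluster enabled is running on each SQL node. A sketch of what to check, assuming the two data nodes also act as SQL nodes as in that article (the HostName lines are optional, but they make it explicit which host may claim which slot):

        # config.ini on the management node
        [MYSQLD]
        HostName=192.168.10.11
        [MYSQLD]
        HostName=192.168.10.12

        # on each SQL node: my.cnf already carries the ndbcluster options shown above,
        # so (re)start mysqld there and re-check with ndb_mgm -e show
        sudo service mysql restart

    After changing config.ini, restart ndb_mgmd with the --reload option so the new API slot definitions are actually picked up.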

    Read the article

  • Is Google tracking our web history even when we do not visit Google or its affiliate websites?

    - by Anoyon-12
    I have recently updated to Google Chrome 29.0.1547.76, and when I click the new tab button there is a new homepage now. OK. My current settings forbid third-party cookies from being set, and I clear all of my browsing data every time I close the browser, or whenever I have been browsing for too long. There is a help dialog that appears the first time you open the new tab page. What I did was clear all my browsing data (from the beginning of time) and then go to the new tab again; the help dialog appeared. I did the same a third time and the same thing happened. So for the fourth time I cleared all my browsing data, clicked open a new tab, and then navigated to chrome://settings/cookies - and there the cookies were. So does this mean Google is tracking our web history just for using Chrome? I know it's not illegal, because these cookies only appear when you click the new tab in Google Chrome 29.0.1547.76 - maybe that was the reason Google redesigned the entire new-tab page. From this, Google is forcing us to allow them to track us. I don't want to set another page as my new-tab page; I just want the old one. Google has a long history of invading privacy without users' consent - there was that Safari incident, I am sure you people remember. So can anyone tell me about this issue? I may be wrong, so please explain.

    Read the article

  • Here I am sending my code for presentModalViewController

    - by aman-gupta
    Hi, please help me out. I want to switch to the first view from the third view directly. Here is my code, where I am using presentModalViewController:

        // ExperimentAppDelegate.h - Experiment - Created by Aman on 4/20/10.
        // Copyright __MyCompanyName__ 2010. All rights reserved.

        #import <UIKit/UIKit.h>

        @class ExperimentViewController;
        @class SecondView;
        @class ThirdView;

        BOOL parentScreenNo;

        @interface ExperimentAppDelegate : NSObject <UIApplicationDelegate> {
            UIWindow *window;
            ExperimentViewController *viewController;
            SecondView *ptrSecond;
            ThirdView *ptrThird;
        }

        @property (nonatomic, retain) IBOutlet UIWindow *window;
        @property (nonatomic, retain) IBOutlet ExperimentViewController *viewController;
        @property (nonatomic, retain) IBOutlet SecondView *ptrSecond;
        @property (nonatomic, retain) IBOutlet ThirdView *ptrThird;

        @end

        /////////////////////////////////////
        // ExperimentAppDelegate.m - Experiment - Created by Aman on 4/20/10.
        // Copyright __MyCompanyName__ 2010. All rights reserved.

        #import "ExperimentAppDelegate.h"
        #import "ExperimentViewController.h"

        @implementation ExperimentAppDelegate

        @synthesize window;
        @synthesize viewController, ptrThird, ptrSecond;

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            parentScreenNo = NO;
            // Override point for customization after app launch
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
        }

        - (void)dealloc {
            [viewController release];
            [window release];
            [super dealloc];
        }

        @end

        /////////////////////////////////////////
        // ExperimentViewController.h - Experiment - Created by Aman on 4/20/10.
        // Copyright __MyCompanyName__ 2010. All rights reserved.

        #import <UIKit/UIKit.h>

        @interface ExperimentViewController : UIViewController {
        }

        -(IBAction)second:(id)sender;

        @end

        ///////////////////////////////
        // ExperimentViewController.m - Experiment - Created by Aman on 4/20/10.
        // Copyright __MyCompanyName__ 2010. All rights reserved.

        #import "ExperimentViewController.h"
        #import "ExperimentAppDelegate.h"

        @implementation ExperimentViewController

        /*
        // The designated initializer. Override to perform setup that is required before the view is loaded.
        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
            if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
                // Custom initialization
            }
            return self;
        }
        */

        -(IBAction)second:(id)sender {
            ExperimentAppDelegate *app = (ExperimentAppDelegate *)[[UIApplication sharedApplication] delegate];
            [self presentModalViewController:app.ptrSecond animated:YES];
        }

        /*
        // Implement loadView to create a view hierarchy programmatically, without using a nib.
        - (void)loadView {
        }
        */

        /*
        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            [super viewDidLoad];
        }
        */

        /*
        // Override to allow orientations other than the default portrait orientation.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            // Return YES for supported orientations
            return (interfaceOrientation == UIInterfaceOrientationPortrait);
        }
        */

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

        /////////////////////////////////
        // SecondView.h - Experiment - Created by Aman on 4/20/10.
        // Copyright 2010 __MyCompanyName__. All rights reserved.

        #import <UIKit/UIKit.h>

        @interface SecondView : UIViewController {
        }

        -(IBAction)third:(id)sender;
        -(void)down;

        @end

        ////////////////////////////////////////////
        // SecondView.m - Experiment - Created by Aman on 4/20/10.
        // Copyright 2010 __MyCompanyName__. All rights reserved.

        #import "SecondView.h"
        #import "ExperimentAppDelegate.h"

        @implementation SecondView

        /*
        // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad.
        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
            if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
                // Custom initialization
            }
            return self;
        }
        */

        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            [super viewDidLoad];
        }

        -(void)viewWillAppear:(BOOL)animated {
            if (parentScreenNo == YES) {
                [self performSelector:@selector(down) withObject:nil];
            }
            [super viewWillAppear:YES];
        }

        -(IBAction)third:(id)sender {
            ExperimentAppDelegate *app = (ExperimentAppDelegate *)[[UIApplication sharedApplication] delegate];
            [self presentModalViewController:app.ptrThird animated:YES];
        }

        -(void)down {
            [self dismissModalViewControllerAnimated:YES];
        }

        /*
        // Override to allow orientations other than the default portrait orientation.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            // Return YES for supported orientations
            return (interfaceOrientation == UIInterfaceOrientationPortrait);
        }
        */

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

        //////////////////////////////////
        // ThirdView.h - Experiment - Created by Aman on 4/20/10.
        // Copyright 2010 __MyCompanyName__. All rights reserved.

        #import <UIKit/UIKit.h>

        @interface ThirdView : UIViewController {
        }

        -(IBAction)back:(id)sender;

        @end

        //////////////////////////////////
        // ThirdView.m - Experiment - Created by Aman on 4/20/10.
        // Copyright 2010 __MyCompanyName__. All rights reserved.

        #import "ThirdView.h"
        #import "ExperimentAppDelegate.h"

        @implementation ThirdView

        /*
        // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad.
        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
            if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
                // Custom initialization
            }
            return self;
        }
        */

        /*
        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            [super viewDidLoad];
        }
        */

        -(IBAction)back:(id)sender {
            [self dismissModalViewControllerAnimated:YES];
            parentScreenNo = YES;
        }

        /*
        // Override to allow orientations other than the default portrait orientation.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            // Return YES for supported orientations
            return (interfaceOrientation == UIInterfaceOrientationPortrait);
        }
        */

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

    Above is my code for calling the other views. Please tell me how I can get back to my first view using dismissModalViewController from the third screen. Please help me out.

    Read the article

  • SQL Server 2012 - AlwaysOn

    - by Claus Jandausch
    I was not just irritated, I was downright shocked - and for a brief moment speechless (which is rarely the case). Someone had just asked me "When will Oracle offer something comparable to AlwaysOn - and will it at all?" Had I landed in the wrong movie? I could not help voicing my displeasure and explaining that the question normally runs the other way around. Granted, there may be debatable points in a comparison between Oracle and SQL Server - points where Oracle does not necessarily always come out ahead - but the topic of clustering for high availability (HA), disaster recovery (DR) and scalability is certainly not one of them. Afterwards I filed this experience away as an isolated case that would never happen again. Until, shortly afterwards, I was taught otherwise and heard exactly the same question again. This time even in an Exadata context and with an Oracle stretch cluster. Once is never, but twice is once too often... True to this old motto, it was clear to me that this could not be left standing. I have no idea how the Microsoft marketing department managed to make people suspect an innovative technology behind the AlwaysOn branding - but it has apparently done its job well. Good marketing aside, the question naturally arises as to what is really behind it and how the whole thing compares with Oracle - and whether it compares at all? Which brings us back to the original question. So much for the background of this blog post - the rest of the post is my answer.

    "Windows was the God ..."

    To understand the real difference between Oracle and Microsoft, you first have to know the most significant Microsoft dogma. It can be put plainly and simply: "Everything must be based on Windows." The heading of this paragraph is not a saying I invented, but a quote. Specifically, it comes from a longer article by Kurt Eichenwald in Vanity Fair from August 2012. It is called Microsoft's Lost Decade, and I recommend it to anyone who wants to better understand the "Microsoft machinery" under Steve Ballmer and some of its curiosities.

    "YOU TALKING TO ME?" Microsoft C.E.O. Steve Ballmer during his keynote at the 2012 International Consumer Electronics Show in Las Vegas on January 9

    Some things in this article may appear exaggerated - but they are not. Much of it I already knew from my own experience and can only confirm. Other parts only really fell into place for me now. In particular, the following passages produced the aha moment:

    "Windows was the god - everything had to work with Windows," said Stone...
    "Every little thing you want to write has to build off of Windows (or other existing products)," one software engineer said. "It can be very confusing, ..."

    I have always pointed out that in a SQL Server failover cluster the Microsoft database actually contributes nothing worth mentioning to the proceedings; it relies entirely on the Windows operating system. That is also why you have to install the Windows Server Enterprise Edition if a failover cluster is to be set up for SQL Server: that is where the cluster services are delivered - not with SQL Server.
    SQL Server is merely another server product for which Windows can be used in failure scenarios - just like Microsoft Exchange, for example, or Microsoft SharePoint, or any other server product hosted on Windows. Oracle can be used this way too; the keyword here is Oracle Failsafe. But why would you do that when a superior technology such as Oracle Real Application Clusters (RAC) is available at the same time, which then does not require a Windows Enterprise Edition either, since Oracle supplies its own clusterware - and which, on top of that, delivers shorter failover times, because this cluster technology is integrated into the database and does not rely on "third parties"? So if a SQL Server failover cluster buys you no technical advantages, and you additionally take on hidden license costs by having to license the Windows Server Enterprise Edition, why has Microsoft not likewise worked, in all the years since SQL Server 2000, on a new and innovative solution that can keep up with Oracle RAC? Microsoft has enough developers. It cannot be the money either. Just read the two quotes above once more and you will understand the reason. There is really no other way to explain why AlwaysOn consists of two different technologies, both of which are in turn based on Windows Server Failover Clustering (WSFC). Clear disadvantages follow from this - but more on that later.

    To understand AlwaysOn, one should first briefly recall what Microsoft has offered so far in the way of HA/DR (High Availability/Disaster Recovery) solutions for SQL Server.

    Replication
    Based on logical replication and a publisher/subscriber architecture:
    - Transactional Replication
    - Merge Replication
    - Snapshot Replication
    Microsoft's replication is comparable to Oracle GoldenGate. Oracle GoldenGate, however, is the more comprehensive technology and offers high performance.

    Log Shipping
    Microsoft's log shipping is a simple technology, comparable to Oracle managed recovery in Oracle version 7. Log shipping has the following characteristics:
    - Transaction log backups are shipped from the primary to the secondary/secondaries
    - They are applied (i.e., restored) on each secondary individually
    - An optional third server instance (monitor server) provides monitoring and alerting
    - The log restore can be paused for read-only mode on a secondary
    - No support for automatic failover

    Database Mirroring
    Microsoft's database mirroring became available with SQL Server 2005; it looked like Oracle Data Guard in Oracle 9i, but was not as comprehensive functionally. An HA/DR pair is a 1:1 relationship protecting the production database (principal DB). All insert, update and delete operations are replayed on the standby database (mirrored DB).
    Modes:
    - Synchronous (high-safety mode)
    - Asynchronous (high-performance mode)
    Automatic failover:
    - Supported in high-safety (synchronous) mode
    - Requires a witness server

    On the question of continuity
    The question is where these technologies now stand in relation to SQL Server 2012. Introduced with fanfare at the time, database mirroring was the declared tool of choice.
    I am not a product manager at Microsoft and can only give my opinion here, but if you consult the SQL AlwaysOn team blog, things do not look good for database mirroring - at least not in the long term.

    "Does AlwaysOn Availability Group replace Database Mirroring going forward?"
    "The short answer is we recommend that you migrate from the mirroring configuration or even mirroring and log shipping configuration to using Availability Group. Database Mirroring will still be available in the Denali release but will be phased out over subsequent releases. Log Shipping will continue to be available in future releases."

    Which finally brings us to the actual topic. What is a so-called availability group, and what exactly is behind the promising-sounding AlwaysOn label?

    SQL Server 2012 - AlwaysOn
    Two HA features hide behind the "AlwaysOn" branding: AlwaysOn Failover Clustering, aka SQL Server Failover Cluster Instances (FCI), on the one hand, and the AlwaysOn Availability Groups on the other.

    Failover Cluster Instances (FCI)
    - Roughly corresponds to Oracle's stretch cluster concept
    - Builds on Windows Server Failover Clustering (WSFC)
    - Provides HA at the instance level

    AlwaysOn Availability Groups
    - Similar to the idea of consistency groups as found in storage-level replication software such as EMC SRDF
    - Depends on Windows Server Failover Clustering (WSFC)
    - Provides HA at the database level

    Note: Do not confuse a SQL Server database with an Oracle database, nor an Oracle instance with a SQL Server instance. The same terms have a different meaning here - not infrequently the reason why Oracle and Microsoft DBAs quickly talk past each other. When you hear "SQL Server database", think of something closer to an Oracle schema; something like the SQL Server Northwind database is comparable to the Oracle Scott schema. If you want to know the exact differences, you will find a detailed description in my book "Oracle10g Release 2 für Windows und .NET", available from Lehmanns, Amazon, etc.

    Windows Server Failover Clustering (WSFC)
    As you can see, both AlwaysOn technologies are in turn based on Windows Server Failover Clustering (WSFC) in order to guarantee high availability at the instance level on the one hand and at the database level on the other. Hence a brief description of WSFC. WSFC is an infrastructure feature shipped with the Windows operating system to provide HA for server applications such as Microsoft Exchange, SharePoint, SQL Server, etc. Like any other cluster, a WSFC cluster consists of a group of independent servers that work together to increase the availability of an application or service. If a cluster node or service fails, the service previously hosted on that node can be transferred automatically or manually to another node available in the cluster - commonly known as failover. Under SQL Server 2012, both the AlwaysOn Availability Groups and the AlwaysOn Failover Cluster Instances use WSFC as the platform technology to register components as WSFC cluster resources. Related resources are combined into a resource group, which can be made dependent on other WSFC cluster resources.
    The WSFC cluster service can then detect the need to restart the SQL Server instance, or trigger an automatic failover to another server node in the WSFC cluster.

    Failover Cluster Instances (FCI)
    A SQL Server Failover Cluster Instance (FCI) is a single SQL Server instance that runs in a failover cluster consisting of several Windows Server Failover Clustering (WSFC) nodes, thereby providing HA at the instance level. Using a multi-subnet FCI, remote DR (disaster recovery) can also be supported. Another option for remote DR is to run a database hosted under FCI in an availability group; more on this later.

    FCI and the WSFC basis
    FCI, which is used for local high availability of instances, resembles the outdated architecture of a cold cluster (active-passive). Under SQL Server 2008 this technology was called SQL Server 2008 Failover Clustering, and it used the Windows Server Failover Cluster. In SQL Server 2012 Microsoft has gathered this base technology under the AlwaysOn label, but it is still the classic active-passive configuration. The sequence in the failover case is as follows:
    - Provided no hardware or system error has occurred, all dirty pages in the buffer cache are written to disk
    - All corresponding SQL Server services in the resource group are stopped on the active node
    - Ownership of the resource group is transferred to another node of the FCI
    - The new owner of the resource group starts its SQL Server services
    - Connection requests from client applications are automatically redirected to the new active node using the same virtual network name (VNN)

    Depending on when the last checkpoint occurred, the number of dirty pages in the buffer cache that still have to be written to disk can lead to unpredictably long failover times. To throttle this number, SQL Server 2012 has a new capability called indirect checkpoints. Indirect checkpoints resemble the Fast-Start MTTR Target feature of the Oracle database, which was already available with Oracle9i.

    SQL Server Multi-Subnet Clustering
    A SQL Server multi-subnet failover cluster corresponds conceptually to an Oracle RAC stretch cluster - but only at first glance. In contrast to RAC, in a local SQL Server failover cluster only one node is ever active for a database. For data replication between geographically distant sites, Microsoft relies on third-party storage mirroring solutions.

    The improvement this scenario gains from a SQL Server 2012 implementation is simply that a VLAN configuration (Virtual Local Area Network) is no longer required, as it was previously. The following diagram shows how the process is handled with SQL Server 2012; in Site A and Site B, HA is ensured by a local active-passive cluster in each case.

    Particular attention must be paid here to configuration and tuning, otherwise completely unacceptable failover times will result. The reason is that the downtime experienced on the client side no longer depends solely on the pure failover time, but additionally on the duration of DNS replication between the DNS servers. (Remember that we are talking about multi-subnet clustering here.)
    It must also be taken into account how quickly the clients query the updated DNS information. Special configurations for the node heartbeat, HostRecordTTL (host record time-to-live) and the intersite replication frequency for Active Directory Sites and Services become necessary:
    - Default TTL for Windows Server 2008 R2: 20 minutes
    - Recommended setting: 1 minute
    - DNS update replication frequency in a Windows environment: 180 minutes
    - Recommended setting: 15 minutes (the minimum value)

    Looking at these values, one has to conclude that even an optimal configuration cannot meet the rigid SLAs (service level agreements) that today's business-critical applications impose for HA and DR, because it implies a failover time experienced on the client side of a total of 16 minutes. Here is an excerpt from the SQL Server 2012 online documentation:

    Cons: If a cross-subnet failover occurs, the client recovery time could be 15 minutes or longer, depending on your HostRecordTTL setting and the setting of your cross-site DNS/AD replication schedule.

    We have now reached the point in our considerations that explains why I used the "Windows was the God ..." quote earlier. The unconditional dependency on Windows increasingly becomes a problem, because it increases the complexity of a Microsoft-based solution instead of reducing it. And complexity is the last thing CIOs want these days. In fairness to SQL Server 2012 and AlwaysOn, it must be said that such long failover times are not a "must" but a "can". Yet even a "can" may have unforeseeable and costly consequences at the wrong moment. The unpredictability is in turn caused by the many components involved in the implementation and their dependencies, for example three cluster solutions (two from Microsoft, one from a third party). However you twist and turn the matter, there is no getting around this fact - entirely independent of the duration of any downtime or failover. In contrast to AlwaysOn and the stretch cluster variant presented here, a comparable Oracle implementation avoids such complexity born of multiple dependencies. The difference is made by database-integrated mechanisms such as Fast Application Notification (FAN) and Fast Connection Failover (FCF). For Oracle MAA configurations (Maximum Availability Architecture), inter-site failover times in the range of seconds are not unusual. If you follow the link to the Oracle MAA, you will also find a number of customer case studies. This, too, is an important differentiator from AlwaysOn, because the Oracle technology has already proven itself many times over in highly critical environments.

    Availability Groups
    The so-called availability groups are, alongside FCI, the other building block of AlwaysOn.

    Note: Before we look at them more closely, remind yourself once again that a SQL Server database does not have the same meaning as an Oracle database, but rather corresponds to an Oracle schema. Something like the SQL Server Northwind database is comparable to the Oracle Scott schema.

    An availability group is made up of a set of several user databases that are treated together as a group in the event of a failover.
    An availability group supports one set of primary databases (the primary replica) and one to four sets of corresponding secondary databases (secondary replicas).

    Not every SQL Server database can be placed in an AlwaysOn availability group, however. The SQL Server specialist Michael Otey lists the following requirements in his SQL Server Pro article:
    - Availability groups must be created with user databases; system databases cannot be used
    - The databases must be in read-write mode; read-only databases are not supported
    - The databases in an availability group must be multi-user databases
    - They must not use the AUTO_CLOSE feature
    - They must use the full recovery model, and a full backup must exist
    - A given database can belong to only a single availability group, and such databases must not be configured for database mirroring
    - Microsoft also recommends that the directory path of a database be identical on the primary and secondary servers

    As you can see, availability groups are not suited to covering HA and DR completely. The distinction between the instance level (FCI) and the database level (availability groups) is of great importance. I was recently told that with availability groups you can do without shared storage and thereby save costs. Fair enough... You can of course build an installation purely on availability groups, without FCI - but you should then be aware of everything you have not protected by doing so, and what that in turn means for disaster recovery (DR) and SLAs (service level agreements). In short, in practice there is probably no way around combining both AlwaysOn products, and around the complexity that comes with that.

    Availability Groups and WSFC
    AlwaysOn depends on Windows Server Failover Clustering (WSFC) to monitor and manage the current roles of the availability replicas in an availability group and to decide how a failover event affects them. The following diagram shows the relationship between availability groups and WSFC.

    The availability mode is a property of each availability replica, so synchronous and asynchronous can be mixed:

    Availability mode
    - Asynchronous-commit mode: the primary replica commits transactions without waiting for the secondary
    - Synchronous-commit mode: the primary replica waits for the commit of the secondary replica

    Failover types
    - Automatic
    - Manual
    - Forced (with possible data loss)

    Synchronous-commit mode
    - Planned, manual failover without data loss
    - Automatic failover without data loss

    Asynchronous-commit mode
    - Only forced, manual failover with possible data loss

    SQL Server has no separate switchover concept as in Oracle Data Guard; for SQL Server, all role transitions are called failover. In fact, SQL Server does not support a switchover for asynchronous connections at all - there is only the forced failover with possible data loss. A capability comparable to the switchover under Oracle Data Guard is simply not available.
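
    To make the terminology above concrete, this is roughly what defining such a group looks like in Transact-SQL on SQL Server 2012 - a minimal sketch with made-up server and database names: two synchronous replicas with automatic failover plus one asynchronous, readable secondary, and with the database mirroring endpoints assumed to exist already.

        CREATE AVAILABILITY GROUP [AG_Sales]
        FOR DATABASE [SalesDB]
        REPLICA ON
            N'NODE1' WITH (ENDPOINT_URL = N'TCP://node1.corp.example:5022',
                           AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                           FAILOVER_MODE = AUTOMATIC),
            N'NODE2' WITH (ENDPOINT_URL = N'TCP://node2.corp.example:5022',
                           AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                           FAILOVER_MODE = AUTOMATIC),
            N'NODE3' WITH (ENDPOINT_URL = N'TCP://node3.corp.example:5022',
                           AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                           FAILOVER_MODE = MANUAL,
                           SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

        -- on each secondary instance, after restoring SalesDB there WITH NORECOVERY:
        ALTER AVAILABILITY GROUP [AG_Sales] JOIN;

    Note how the asynchronous replica (NODE3) can only be given FAILOVER_MODE = MANUAL - which is exactly the switchover limitation described above.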
    SQL Server FCI with Availability Groups
    In addition to availability groups, a second failover level can be set up by implementing SQL Server FCI (on shared storage) with WSFC. An availability replica can then be hosted either on a standalone instance or on an FCI instance. To be clear: the availability groups themselves do not require shared storage. This combination can be used for local HA at the instance level and DR at the database level through availability groups. The following diagram shows this scenario.

    Caution! This is not a counterpart to Oracle RAC plus Data Guard, even if the picture may give that impression - all secondary nodes in the FCI are purely passive. There is also a further, serious restriction: SQL Server Failover Cluster Instances (FCI) do not support automatic AlwaysOn failover for availability groups. Any availability replica hosted under FCI can only be configured for manual failover.

    Readable secondary replicas
    One or more availability replicas in an availability group can be configured for read access while running as a secondary replica. This resembles Oracle Active Data Guard, but there are restrictions. All queries against the secondary database are automatically mapped to the snapshot isolation level, which relies on row versioning. Microsoft thereby tried to emulate Oracle's MVRC (Multi Version Read Consistency); in fact, SQL Server snapshot isolation is better compared with Oracle Flashback. The implementation of the snapshot isolation level is a feature bolted on after the fact, not an inherent part of the database kernel as it is with Oracle. (I will write another blog post on this shortly, when I look at the new SQL Server 2012 core licensing.) In practice, the mapping to the snapshot isolation level results in serious restrictions that you should be aware of before going into production:

    If an active transaction exists on the primary database at the moment a readable secondary replica is added to the availability group, the row versions on the corresponding secondary database will not be fully available immediately. An active transaction on the primary replica must first complete (commit or rollback) and that transaction record must be processed on the secondary replica. Until then, the isolation level mapping on the secondary database is incomplete and queries are temporarily blocked. Microsoft says about this: "This is needed to guarantee that row versions are available on the secondary replica before executing the query under snapshot isolation as all isolation levels are implicitly mapped to snapshot isolation." (SQL Storage Engine Blog: AlwaysOn: I just enabled Readable Secondary but my query is blocked?) In essence this means that an active readable replica cannot be added to the availability group without temporarily quiescing the primary replica.

    Because read operations are mapped to the snapshot isolation transaction level, the cleanup of ghost records on the primary replica can be blocked by transactions on one or more secondary replicas - for example, by a long-running query on the secondary replica. This cleanup is also blocked if the connection to the secondary replica drops or the data exchange is interrupted.
    Log truncation is likewise prevented in this state. If the state persists for a longer period, Microsoft recommends removing the secondary replica from the availability group - which amounts to a serious downtime problem.

    The read-only workload on the secondary replicas can block incoming DDL changes. Although the read operations hold no shared locks thanks to row versioning, they do take Sch-S locks (schema stability locks), and DDL changes arriving through redo operations can be blocked by them. If DDL is blocked by concurrent read workload and the threshold of the 'recovery interval' setting (a SQL Server configuration option) is exceeded, SQL Server raises the event sqlserver.lock_redo_blocked, whereupon Microsoft recommends killing the blocking readers. No consideration whatsoever is given to the availability of the application here.

    None of these restrictions exist with Oracle Active Data Guard.

    Backups on secondary replicas
    On the secondary replicas, backups (BACKUP DATABASE via Transact-SQL) can only be taken as copy-only backups of a full database, of files or of filegroups. Incremental backups are not supported, which is a serious shortfall compared with the backup support for physical standbys under Oracle Data Guard. (A possible workaround via snapshots remains just that - a workaround.) A further restriction of this feature compared with Oracle Data Guard is that a backup on a secondary replica cannot run if the replica cannot communicate with the primary replica. In addition, the secondary replica must be synchronized, or at least be synchronizing, for the backup to be taken on it at all.

    Comparing Microsoft AlwaysOn with the Oracle MAA
    This brings me back to the question mentioned at the beginning, put to me several times - "When, and whether at all, will Oracle offer something comparable to AlwaysOn?" - and to my brief irritation over it. If you have read this blog post up to this point, you now know the answer I gave. Not every point applies to everyone, and that was never the deeper purpose of my answer. If, for example, no multi-subnet setup is involved, all the criticisms in that regard are moot for now - which does not mean they could not become an issue again as soon as tomorrow (never say "never"). Other pitfalls you will not necessarily hit in a test environment, but only in live operation - especially if you are not aware of the potential problems and run no dedicated tests for them. And anyone who wants to position AlwaysOn successfully will have no interest in drawing attention to possible weaknesses and the proverbial devil in the details. That is not an accusation - it is only human. Besides, it is understandable to concentrate primarily on "what works" and "what runs well" rather than on "what can lead to problems" or "what does not work". Who wants to be the killjoy? Speaking for myself, I can only say that I would rather know about every possible restriction in advance than have to experience it painfully first-hand after a short honeymoon period.
    I am convinced you feel the same way. Below, therefore, is a summary of all the points I consider absolutely worth mentioning in comparison with the Oracle MAA (Maximum Availability Architecture) if you are considering an evaluation of Microsoft AlwaysOn.

    1. AlwaysOn is a complex technology
    The SQL Server AlwaysOn stack is composed of three different technologies:
    - Windows Server Failover Clustering (WSFC)
    - SQL Server Failover Cluster Instances (FCI)
    - SQL Server Availability Groups

    Such a solution cannot be called seamless, as the many restrictions documented by Microsoft also attest. While earlier SQL Server versions were moving towards HA/DR technologies of their own (such as database mirroring), Microsoft now recommends migrating away from them. But why this U-turn? It does not lead to a consistent and robust HA/DR offering for business-critical environments. Does the answer lie in my thesis that "Windows was the God ..." still applies, and that nobody wants to see the disadvantages of the all-too-tight coupling to Windows? Decide for yourself...

    2. Failover Cluster Instances - no RAC counterpart
    The SQL Server and Windows Server clustering technology is still based on the outdated active-passive model and wastes system resources. Looking at only two nodes, the full added value of an active-active cluster (such as Real Application Clusters), which Oracle developed ten years ago, is not immediately apparent. But once you know the advantages of scaling by simply adding further cluster nodes, which all work together as a single logical system, you understand what is behind the motto "pay as you grow". An active-active cluster is also about high availability - and a failover completes faster than in an active-passive model - but it is not only about that. It should be noted here that the Oracle 11g Standard Edition already includes the use of Oracle RAC on up to four sockets at no extra charge. If you want to run it on Windows, you do not need the Windows Server Enterprise Edition, because Oracle 11g supplies its own clusterware; you get high availability and scalability and can use the cheaper Windows Server Standard Edition.

    3. SQL Server multi-subnet clustering - dependence on third-party storage mirroring
    The SQL Server multi-subnet clustering architecture supports building a stretch cluster, but remains an active-passive model. The really problematic part, however, is that protection of the database relies on third-party storage mirroring technology, with no integration between Windows Server Failover Clustering (WSFC) and the underlying mirroring layer. If a failover then occurs in the cluster at the instance level, there is no coordination with a possible failover at the storage array level.

    4. Availability Groups - four, or really only two?
    A primary replica allows up to four secondary replicas within an availability group, but only two of them in synchronous-commit mode.
Während dies zwar einen Vorteil gegenüber dem stringenten 1:1 Modell unter Database Mirroring darstellt, fällt der SQL Server 2012 damit immer noch weiter zurück hinter Oracle Data Guard mit bis zu 30 direkten Stanbdy Zielen - und vielen weiteren durch kaskadierende Ziele möglichen. Damit eignet sich Oracle Active Data Guard auch für die Bereitstellung einer Reader-Farm Skalierbarkeit für Internet-basierende Unternehmen. Mit AwaysOn Verfügbarkeitsgruppen ist dies nicht möglich. 5. Availability Groups (Verfügbarkeitsgruppen) - kein asynchrones Switchover  Die Technologie der Verfügbarkeitsgruppen wird auch als geeignetes Mittel für administrative Aufgaben positioniert - wie Upgrades oder Wartungsarbeiten. Man muss sich jedoch einem gravierendem Defizit bewusst sein: Im asynchronen Verfügbarkeitsmodus besteht die einzige Möglichkeit für Role Transition im Forced Failover mit Datenverlust! Um den Verlust von Daten durch geplante Wartungsarbeiten zu vermeiden, muss man den synchronen Verfügbarkeitsmodus konfigurieren, was jedoch ernstzunehmende Auswirkungen auf WAN Deployments nach sich zieht. Spinnt man diesen Gedanken zu Ende, kommt man zu dem Schluss, dass die Technologie der Verfügbarkeitsgruppen für geplante Wartungsarbeiten in einem derartigen Umfeld nicht effektiv genutzt werden kann. 6. Automatisches Failover - Nicht immer möglich Sowohl die SQL Server FCI, als auch Verfügbarkeitsgruppen unterstützen automatisches Failover. Möchte man diese jedoch kombinieren, wird das Ergebnis kein automatisches Failover sein. Denn ihr Zusammentreffen im Failover-Fall führt zu Race Conditions (Wettlaufsituationen), weshalb diese Konfiguration nicht länger das automatische Failover zu einem Replikat in einer Verfügbarkeitsgruppe erlaubt. Auch hier bestätigt sich wieder die tiefere Problematik von AlwaysOn, mit einer Zusammensetzung aus unterschiedlichen Technologien und der Abhängigkeit zu Windows. 7. Problematische RTO (Recovery Time Objective) Microsoft postioniert die SQL Server Multi-Subnet Clustering Architektur als brauchbare HA/DR Architektur. Bedenkt man jedoch die Problematik im Zusammenhang mit DNS Replikation und den möglichen langen Wartezeiten auf Client-Seite von bis zu 16 Minuten, sind strenge RTO Anforderungen (Recovery Time Objectives) nicht erfüllbar. Im Gegensatz zu Oracle besitzt der SQL Server keine Datenbank-integrierten Technologien, wie Oracle Fast Application Notification (FAN) oder Oracle Fast Connection Failover (FCF). 8. Problematische RPO (Recovery Point Objective) SQL Server ermöglicht Forced Failover (erzwungenes Failover), bietet jedoch keine Möglichkeit zur automatischen Übertragung der letzten Datenbits von einem alten zu einem neuen primären Replikat, wenn der Verfügbarkeitsmodus asynchron war. Oracle Data Guard hingegen bietet diese Unterstützung durch das Flush Redo Feature. Dies sichert "Zero Data Loss" und beste RPO auch in erzwungenen Failover-Situationen. 9. Lesbare Sekundäre Replikate mit Einschränkungen Aufgrund des Snapshot Isolation Transaction Level für lesbare sekundäre Replikate, besitzen diese Einschränkungen mit Auswirkung auf die primäre Datenbank. Die Bereinigung von Ghost Records auf der primären Datenbank, wird beeinflusst von lang laufenden Abfragen auf der lesabaren sekundären Datenbank. Die lesbare sekundäre Datenbank kann nicht in die Verfügbarkeitsgruppe aufgenommen werden, wenn es aktive Transaktionen auf der primären Datenbank gibt. Zusätzlich können DLL Änderungen auf der primären Datenbank durch Abfragen auf der sekundären blockiert werden. 
Und imkrementelle Backups werden hier nicht unterstützt.   Keine dieser Restriktionen existiert unter Oracle Data Guard.

    Read the article

  • Best Method to SFTP or FTPS Files via SSIS

    - by Registered User
    What is the best method using SSIS (SQL Server Integration Services) to upload a file to either a remote SFTP (secure FTP with the SSH2 protocol) or FTPS (FTP over SSL) site? I've used the following methods, but each has shortcomings I would like to avoid: COZYROC LIBRARY Method: Install the CozyRoc library on each development and production server and use the SFTP task to upload the files. Pros: Easy to use. It looks, smells, and feels like a normal SSIS task. SSIS also recognizes the password as sensitive information and allows you all the normal options for protecting the sensitive information instead of just storing it in clear text in a non-secure manner. Works well with other SSIS tasks such as ForEach Loop Containers. Errors out when uploads and downloads fail. Works well when you don't know the names of the files on the remote FTP site to download or when you won't know the name of the file to upload until run-time. Cons: Costs money to license in a production environment. Makes you dependent upon the vendor to update their libraries between each version. Although they already have a 2008 version, this caused me a problem during the CTPs of 2008. Requires installing the libraries on each development and production machine. COMMAND LINE SFTP PROGRAM Method: Install a free command-line SFTP application such as Putty and execute it either by running a batch file or an operating system process task. Pros: Free, free, and free. You can be sure it is secure if you are using Putty since numerous GUI FTP clients appear to use Putty under the covers. You DEFINITELY know you are using SSH2 and not SSH. Cons: The two command-line utilities I tried (Putty and Cygwin) required storing the SFTP password in a non-secure location. I haven't found a good way to capture failures or errors when uploading files. The process doesn't look and smell like SSIS. Most of the code is encapsulated in text files instead of SSIS itself. Difficult to use if you don't know the exact name of the file you are uploading or downloading. A 3RD PARTY C# OR VB.NET LIBRARY Method: Install an SFTP or FTPS library and use a Script Task that references the library to upload the files. (I've never tried this, so I'm going to guess at the pros and cons.) Pros: Probably easy to capture errors. Should work well with variables, so it would probably be easy to use even when you don't know the exact name of the file you are uploading or downloading. Cons: It's a script task combined with .NET libraries. If you are using SSIS, then you are probably more comfortable with SSIS tasks than .NET code. Script tasks are also difficult to troubleshoot since they don't have the same debugging tools and features as regular .NET projects. Creates a dependency on 3rd party code that may not work between different versions of SQL Server. To be fair, it is probably MORE likely to work between different versions of SQL Server than a 3rd party SSIS task library. Another huge con -- I haven't found a free C# or VB.NET library that does this as of yet. So if anyone knows of one, then please let me know!
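    A minimal sketch of the third option, purely as an illustration: it assumes the free SSH.NET library (Renci.SshNet) is referenced from the Script Task, and every host name, credential, and path below is a placeholder that would normally come from SSIS variables or a package configuration rather than being hard-coded.

    using System;
    using System.IO;
    using Renci.SshNet;                 // free SFTP/SSH library; must be referenced by the Script Task

    public static class SftpUploadExample
    {
        public static void Main()
        {
            // Placeholder values - in a real package these would come from SSIS variables
            // (e.g. Dts.Variables["User::SftpHost"].Value) so the password stays protected.
            const string host       = "sftp.example.com";
            const int    port       = 22;
            const string user       = "ssisUser";
            const string password   = "secret";
            const string localFile  = @"C:\Out\data.csv";
            const string remotePath = "/inbound/data.csv";

            using (var client = new SftpClient(host, port, user, password))
            {
                client.Connect();                          // throws on connection or authentication failure

                using (var stream = File.OpenRead(localFile))
                {
                    // canOverride: true replaces an existing remote file of the same name
                    client.UploadFile(stream, remotePath, canOverride: true);
                }

                client.Disconnect();
            }
        }
    }

    Inside a real Script Task the upload would sit in a try/catch that calls Dts.Events.FireError and sets Dts.TaskResult on failure, which is what makes the "easy to capture errors" point above plausible.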

    Read the article

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
    THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice! Our data warehouse will be used more for workflow reports than traditional analytical reports. Our users care about the "current picture" far more than history (though history matters, too). We are a government entity that does not have costs or related calculations. Mostly just counts of people within given locations and with related history. We are using Oracle, and I have found a distinct advantage in using the star join whenever possible and would like to rearchitect everything to resemble the star schema as closely as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me. Our "person" table is key - it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it is more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table. One has over 20 million records and the other has almost 50 million and grows daily. Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off of a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension is actually only 60,000 records (much smaller)? I think my problem is that our data and use of it does not fit with the commonly used examples of star schemas. CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to really explain well. Here's some more info: We handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers and all that related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration (which IS a voter) and election turnout. In the first, the voter is a fact. In the second, the voter is a dimension, along with party, election, and type of ballot. (And in case anyone is worried - no, we don't know HOW people vote. Just that they do. LOL.) I hope that clarifies things a bit.

    Read the article

  • How to determine if two generic type values are equal?

    - by comecme
    I'm trying to figure out how I can successfully determine if two generic type values are equal to each other. Based on Mark Byers' answer on this question I would think I can just use value.Equals() where value is a generic type. My actual problem is in a LinkedList implementation, but the problem can be shown with this simpler example. class GenericOjbect<T> { public T Value { get; private set; } public GenericOjbect(T value) { Value = value; } public bool Equals(T value) { return (Value.Equals(value)); } } Now I define an instance of GenericOjbect<StringBuilder> containing new StringBuilder("StackOverflow"). I would expect to get true if I call Equals(new StringBuilder("StackOverflow")) on this GenericOjbect instance, but I get false. A sample program showing this: using System; using System.Text; class Program { static void Main() { var sb1 = new StringBuilder("StackOverflow"); var sb2 = new StringBuilder("StackOverflow"); Console.WriteLine("StringBuilder compare"); Console.WriteLine("1. == " + (sb1 == sb2)); Console.WriteLine("2. Object.Equals " + (Object.Equals(sb1, sb2))); Console.WriteLine("3. this.Equals " + (sb1.Equals(sb2))); var go1 = new GenericOjbect<StringBuilder>(sb1); var go2 = new GenericOjbect<StringBuilder>(sb2); Console.WriteLine("\nGenericObject compare"); Console.WriteLine("1. == " + (go1 == go2)); Console.WriteLine("2. Object.Equals " + (Object.Equals(go1, go2))); Console.WriteLine("3. this.Equals " + (go1.Equals(go2))); Console.WriteLine("4. Value.Equals " + (go1.Value.Equals(go2.Value))); } } For the three methods of comparing two StringBuilder objects, only the StringBuilder.Equals instance method (the third line) returns true. This is what I expected. But when comparing the GenericOjbect objects, its Equals() method (the third line) returns false. Interestingly enough, the fourth compare method does return true. I'd think the third and fourth comparisons are actually doing the same thing. I would have expected true, because in the Equals() method of the GenericOjbect class, both value and Value are of type T, which in this case is a StringBuilder. Based on Mark Byers' answer in this question, I would've expected the Value.Equals() method to be using the StringBuilder's Equals() method. And as I've shown, the StringBuilder's Equals() method does return true. I've even tried public bool Equals(T value) { return EqualityComparer<T>.Default.Equals(Value, value); } but that also returns false. So, two questions here: Why doesn't the code return true? How could I implement the Equals method so it does return true?
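    The behaviour above follows from overload resolution: go1.Equals(go2) passes a GenericOjbect<StringBuilder>, not a StringBuilder, so the compiler picks Object.Equals(object) rather than Equals(T); and inside the generic method, Value.Equals(value) also binds to Object.Equals(object), which StringBuilder neither overrides nor supplements with IEquatable<StringBuilder>, so both calls fall back to reference equality. Line 4 works only because both arguments are statically typed as StringBuilder, which selects the Equals(StringBuilder) overload. One hedged way to get content-based equality out of the wrapper is to inject the comparison, as in this sketch (the class and comparer names are made up for the example):

    using System;
    using System.Collections.Generic;
    using System.Text;

    // Hypothetical variant of the class above: the comparison strategy is injected,
    // so types like StringBuilder (which neither override Object.Equals nor implement
    // IEquatable<T>) can still be compared by content.
    class GenericObject2<T>
    {
        private readonly IEqualityComparer<T> comparer;

        public T Value { get; private set; }

        public GenericObject2(T value, IEqualityComparer<T> comparer = null)
        {
            Value = value;
            this.comparer = comparer ?? EqualityComparer<T>.Default;
        }

        public bool Equals(T value)
        {
            return comparer.Equals(Value, value);
        }
    }

    // A content comparer for StringBuilder, written only for this example.
    class StringBuilderContentComparer : IEqualityComparer<StringBuilder>
    {
        public bool Equals(StringBuilder x, StringBuilder y)
        {
            if (ReferenceEquals(x, y)) return true;
            if (x == null || y == null) return false;
            return x.ToString() == y.ToString();   // compare by character content
        }

        public int GetHashCode(StringBuilder obj)
        {
            return obj == null ? 0 : obj.ToString().GetHashCode();
        }
    }

    class Demo
    {
        static void Main()
        {
            var a = new GenericObject2<StringBuilder>(
                new StringBuilder("StackOverflow"), new StringBuilderContentComparer());

            // True: the injected comparer decides equality by content.
            Console.WriteLine(a.Equals(new StringBuilder("StackOverflow")));
        }
    }

    For types that do implement IEquatable<T> or override Object.Equals, the EqualityComparer<T>.Default fallback in the constructor already does the right thing, so no custom comparer is needed.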

    Read the article

  • How to query on table returned by Stored procedure within a procedure.

    - by Shantanu Gupta
    I have a stored procedure that is performing some ddl dml operations. It retrieves a data after processing data from CTE and cross apply and other such complex things. Now this returns me a 4 tables which gets binded to various sources at frontend. Now I want to use one of the table to further processing so as to get more usefull information from it. eg. This table would be containing approx 2000 records at most of which i want to get records that belongs to lodging only. PK_CATEGORY_ID DESCRIPTION FK_CATEGORY_ID IMMEDIATE_PARENT Department_ID Department_Name DESCRIPTION_HIERARCHY DEPTH IS_ACTIVE ID_PATH DESC_PATH -------------------- -------------------------------------------------- -------------------- -------------------------------------------------- -------------------- -------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------- ----------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1 Food NULL NULL 1 Food (Food) Food 0 1 0 Food 5 Chinese 1 Food 1 Food (Food) ----Chinese 1 1 1 Food->Chinese 14 X 5 Chinese 1 Food (Food) --------X 2 1 1->5 Food->Chinese->X 15 Y 5 Chinese 1 Food (Food) --------Y 2 1 1->5 Food->Chinese->Y 65 asdasd 5 Chinese 1 Food (Food) --------asdasd 2 1 1->5 Food->Chinese->asdasd 66 asdas 5 Chinese 1 Food (Food) --------asdas 2 1 1->5 Food->Chinese->asdas 8 Italian 1 Food 1 Food (Food) ----Italian 1 1 1 Food->Italian 48 hfghfgh 1 Food 1 Food (Food) ----hfghfgh 1 1 1 Food->hfghfgh 55 Asd 1 Food 1 Food (Food) ----Asd 1 1 1 Food->Asd 2 Lodging NULL NULL 2 Lodging (Lodging) Lodging 0 1 0 Lodging 3 Room 2 Lodging 2 Lodging (Lodging) ----Room 1 1 2 Lodging->Room 4 Floor 3 Room 2 Lodging (Lodging) --------Floor 2 1 2->3 Lodging->Room->Floor 9 First 4 Floor 2 Lodging (Lodging) ------------First 3 1 2->3->4 Lodging->Room->Floor->First 10 Second 4 Floor 2 Lodging (Lodging) ------------Second 3 1 2->3->4 Lodging->Room->Floor->Second 11 Third 4 Floor 2 Lodging (Lodging) ------------Third 3 1 2->3->4 Lodging->Room->Floor->Third 29 Fourth 4 Floor 2 Lodging (Lodging) ------------Fourth 3 1 2->3->4 Lodging->Room->Floor->Fourth 12 Air Conditioned 3 Room 2 Lodging (Lodging) --------Air Conditioned 2 1 2->3 Lodging->Room->Air Conditioned 20 With Balcony 12 Air Conditioned 2 Lodging (Lodging) ------------With Balcony 3 1 2->3->12 Lodging->Room->Air Conditioned->With Balcony 24 Mountain View 20 With Balcony 2 Lodging (Lodging) ----------------Mountain View 4 1 2->3->12->20 Lodging->Room->Air Conditioned->With Balcony->Mountain View 25 Ocean View 20 With Balcony 2 Lodging (Lodging) ----------------Ocean View 4 1 2->3->12->20 Lodging->Room->Air Conditioned->With Balcony->Ocean View 26 Garden View 20 With Balcony 2 Lodging (Lodging) ----------------Garden View 4 1 2->3->12->20 Lodging->Room->Air Conditioned->With Balcony->Garden View 52 Smoking 20 With Balcony 2 Lodging 
(Lodging) ----------------Smoking 4 1 2->3->12->20 Lodging->Room->Air Conditioned->With Balcony->Smoking 21 Without Balcony 12 Air Conditioned 2 Lodging (Lodging) ------------Without Balcony 3 1 2->3->12 Lodging->Room->Air Conditioned->Without Balcony 13 Non Air Conditioned 3 Room 2 Lodging (Lodging) --------Non Air Conditioned 2 1 2->3 Lodging->Room->Non Air Conditioned 22 With Balcony 13 Non Air Conditioned 2 Lodging (Lodging) ------------With Balcony 3 1 2->3->13 Lodging->Room->Non Air Conditioned->With Balcony 71 EA 3 Room 2 Lodging (Lodging) --------EA 2 1 2->3 Lodging->Room->EA 50 Casabellas 2 Lodging 2 Lodging (Lodging) ----Casabellas 1 1 2 Lodging->Casabellas 51 North Beach 50 Casabellas 2 Lodging (Lodging) --------North Beach 2 1 2->50 Lodging->Casabellas->North Beach 40 Fooding NULL NULL 40 Fooding (Fooding) Fooding 0 1 0 Fooding 41 Pizza 40 Fooding 40 Fooding (Fooding) ----Pizza 1 1 40 Fooding->Pizza 45 Onion 41 Pizza 40 Fooding (Fooding) --------Onion 2 1 40->41 Fooding->Pizza->Onion 47 Extra Cheeze 41 Pizza 40 Fooding (Fooding) --------Extra Cheeze 2 1 40->41 Fooding->Pizza->Extra Cheeze 77 Burger 40 Fooding 40 Fooding (Fooding) ----Burger 1 1 40 Fooding->Burger This result is being obtained to me using some stored procedure which contains some DML operations as well. i want something like this select description from exec spName where fk_category_id=5 Remember that this spName is returning me 4 tables of which i want to perform some query on one of the table whose index will be known to me. I dont have to send it to UI before querying further. I am using Sql Server 2008 but would like a compatible solution for 2005 also.

    Read the article

  • Recommended integration mechanism for bi-directional, authenticated, encrypted connection in C client

    - by rcampbell
    Let me first give an example. Imagine you have a single server running a JVM application. This server keeps a collection of N equations, one for each client: Client #1: 2x Client #2: 1 + y Client #3: z/4 This server includes an HTTP interface so that random visitors can type https://www.acme.com/client/3 into their browsers and see the latest evaluated result of z/4. The tricky part is that either the client or the server may change the variable value at any time, informing the other party immediately. More specifically, Client #3 - a C app - can initially tell the server that z = 20. An hour later that same client informs the server that z = 23. Likewise the server can later inform the client that z = 28. As caf pointed out in the comments, there can be a race condition when values are changed by the client and server simultaneously. The solution would be for both client and server to send the operation performed in their message, which would need to be executed by the other party. To keep things simple, let's limit the operations to (commutative) addition, allowing us to disregard message ordering. For example, the client seeds the server with z = 20: server:z=20, client:z=20 server sends {+3} message (so z=23 locally) & client sends {-2} message (so z=18 locally) at the exact same time server receives {-2} message at some point, adds it to his local copy so z=21 client receives {+3} message at some point, adds it to his local copy so z=21 As long as all messages are eventually evaluated by both parties, the correct answer will eventually be given to the users of the client and server since we limited ourselves to commutative operations (addition of 3 and -2). This does mean that both client and server can be returning incorrect answers in the time it takes for messages to be exchanged and processed. While undesirable, I believe this is unavoidable. Some possible implementations of this idea include: Open an encrypted, always-on TCP socket connection for communication Pros: no additional infrastructure needed, client and server know immediately if there is a problem (disconnect) with the other party, fairly straightforward (except the encryption), native support from both JVM and C platforms Cons: pretty low-level so you end up writing a lot yourself (protocol, delivery verification, retry-on-failure logic), probably have a lot of firewall headaches during client app installation Asynchronous messaging (ex: ActiveMQ) Pros: transactional, both C & Java integration, frees up the client and server apps from needing retry logic or delivery verification, pretty straightforward encryption, easy extensibility via message filters/routers/etc Cons: need additional infrastructure (message server) which must never fail Database or file system as asynchronous integration point Same pros/cons as above but messier RESTful Web Service Pros: simple, possible reuse of the server's existing REST API, SSL figures out the encryption problem for you (maybe use an RSA key a la GitHub for authentication?) Cons: Client now needs to run a C HTTP REST server w/SSL, client and server need retry logic. Axis2 has both a Java and C version, but you may be limited to SOAP. What other techniques should I be evaluating? What real world experiences have you had with these mechanisms? Which do you recommend for this problem and why?
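    The commutative-delta idea in the question can be sketched independently of the eventual transport. The snippet below is a minimal illustration in C# (the question targets Java and C, so treat this as an illustration of the idea under that assumption, not as a recommended implementation): each side applies its own delta locally, sends the delta to the peer, and applies every delta it receives; because addition is commutative, both sides converge once all messages have been processed, regardless of ordering.

    using System;
    using System.Collections.Concurrent;

    // Minimal illustration of the commutative-delta synchronization described above.
    // Each peer keeps a local value, applies its own deltas immediately, and queues
    // them for the other peer. Because the only operation is addition (commutative),
    // both peers converge once every queued delta has been drained.
    class Peer
    {
        public string Name { get; }
        public int Value { get; private set; }

        // Messages waiting to be delivered to this peer (stands in for TCP/JMS/REST).
        public ConcurrentQueue<int> Inbox { get; } = new ConcurrentQueue<int>();

        public Peer(string name, int initialValue)
        {
            Name = name;
            Value = initialValue;
        }

        // Apply a local change and hand the delta to the other peer's inbox.
        public void ApplyLocalDelta(int delta, Peer other)
        {
            Value += delta;
            other.Inbox.Enqueue(delta);
        }

        // Drain whatever deltas have arrived from the other peer.
        public void ProcessInbox()
        {
            while (Inbox.TryDequeue(out int delta))
                Value += delta;
        }
    }

    class Demo
    {
        static void Main()
        {
            var server = new Peer("server", 20);
            var client = new Peer("client", 20);

            // "At the exact same time": server adds 3, client subtracts 2.
            server.ApplyLocalDelta(+3, client);   // server sees 23
            client.ApplyLocalDelta(-2, server);   // client sees 18

            // Later, each side processes the other's message.
            server.ProcessInbox();                // 23 - 2 = 21
            client.ProcessInbox();                // 18 + 3 = 21

            Console.WriteLine($"{server.Name}={server.Value}, {client.Name}={client.Value}");
        }
    }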

    Read the article

  • jQuery gallery scrolling effect with ease

    - by Sebastian Otarola
    So, I got this page from a friend and I think the gallery is amazingly done. Too bad it's in Flash ; http://selected.com/en/#/collection/homme/ Now, I'm trying to replicate the effect with jQuery. I've made all the loco searches on google one could think of. Zooming the picture is not a problem, the problem lies within the scrolling, how they come together at the ease part. I'm looking for solution in how to make the thumbnail animate when you scroll the page, they drag behind and infront of each other in a very subtle way - I've got (With a lot of help from Whirl3d in the jQuery-irc channel) this for the scrollup/down part of the mouse but the scrolling goes haywire; I Thought I post it here where I've come many times to get answers to a lot of questions and code-errors. This is my first post in stackoverflow and I know you guys are geniuses! Give it a shot! Thanks in advance! jQuery Part $(document).ready(function() { var fronts=$(".front"); var backs=$(".back"); var tempScrollTop, currentScrollTop = 0; $(document).scroll(function () { currentScrollTop = $(document).scrollTop(); if (tempScrollTop < currentScrollTop) { //Scroll down fronts.animate({marginTop:"-=100"},{duration:500, queue:false, easing:"easeOutBack"}); backs.animate({marginTop:"-=100"}, {duration:300, queue:false, easing:"easeOutBack"}); console.log('scroll down'); } else if (tempScrollTop > currentScrollTop) { //scroll up fronts.animate({marginTop:"+=100"},{duration:500, queue:false, easing:"easeOutBack"}); backs.animate({marginTop:"+=100"}, {duration:300, queue:false, easing:"easeOutBack"}); console.log('scroll up'); } tempScrollTop = currentScrollTop ;}) ;}); The HTML <html> <head> <link rel="stylesheet" type="text/css" href="style.css" /> <script type="text/javascript" src="http://code.jquery.com/jquery-1.7.2.min.js"></script> <script type="text/javascript" src="http://www.paigeharvey.net/assets/js/jquery.easing.js"></script> <script type="text/javascript" src="gallery.js"></script> <title>Parallax testing image gallery</title> </head> <body> <div class="container"> <div class='box front'>First Group</div> <div class='box back'>First Group</div> <div class='box front'>First Group</div> <div class='box back'>First Group</div> <br style="clear:both"/> <div class='box front'>Second Group</div> <div class='box back'>Second Group</div> <div class='box front'>Second Group</div> <div class='box back'>Second Group</div> <br style="clear:both"/> <div class='box front'>Third Group</div> <div class='box back'>Third Group</div> <div class='box front'>Third Group</div> <br style="clear:both"/> </div> </body> And finally the CSS Part .container {margin: auto; width: 410px; border: 1px solid red;} .box.front{border: 1px solid red;background-color:Black;color:white;z-Index:500;} .box.back {border: 1px solid green;z-Index:300;background-color:white;} .box {float:left; text-align:center; width:100px; height:100px;}

    Read the article

  • MVC design in Cocoa - are all 3 always necessary? Also: naming conventions, where to put Controller

    - by Nektarios
    I'm new to MVC although I've read a lot of papers and information on the web. I know it's somewhat ambiguous and there are many different interpretations of MVC patterns.. but the differences seem somewhat minimal My main question is - are M, V, and C always going to be necessary to be doing this right? I haven't seen anyone address this in anything I've read. Examples (I'm working in Cocoa/Obj-c although that shouldn't much matter).. 1) If I have a simple image on my GUI, or a text entry field that is just for a user's convenience and isn't saved or modified, these both would be V (view) but there's no M (no data and no domain processing going on), and no C to bridge them. So I just have some aspects that are "V" - seems fine 2) I have 2 different and visible windows that each have a button on them labeled as "ACTIVATE FOO" - when a user clicks the button on either, both buttons press in and change to say "DEACTIVATE FOO" and a third window appears with label "FOO". Clicking the button again will change the button on both windows to "ACTIVATE FOO" and will remove the third "FOO" window. In this case, my V consists of the buttons on both windows, and I guess also the third window (maybe all 3 windows). I definitely have a C, my Controller object will know about these buttons and windows and will get their clicks and hold generic states regarding windows and buttons. However, whether I have 1 button or 10 button, my window is called "FOO" or my window is called "BAR", this doesn't matter. There's no domain knowledge or data here - just control of views. So in this example, I really have "V" and "C" but no "M" - is that ok? 3) Final example, which I am running in to the most. I have a text entry field as my View. When I enter text in this, say a number representing gravity, I keep it in a Model that may do things like compute physics of a ball while taking in to account my gravity parameter. Here I have a V and an M, but I don't understand why I would need to add a C - a controller would just accept the signals from the View and pass it along to the Model, and vice versa. Being as the C is just a pure passthrough, it's really "junk" code and isn't making things any more reusable in my opinion. In most situations, when something changes I will need to change the C and M both in nearly identical ways. I realize it's probably an MVC beginner's mistake to think most situations call for only V and M.. leads me in to next subject 4) In Cocoa / Xcode / IB, I guess my Controllers should always be an instantiated object in IB? That is, I lay all of my "V" components in IB, and for each collection of View objects (things that are related) I should have an instantiated Controller? And then perhaps my Models should NOT be found in IB, and instead only found as classes in Xcode that tie in with Controller code found there. Is this accurate? This could explain why you'd have a Controller that is not really adding value - because you are keeping consistent.. 5) What about naming these things - for my above example about FOO / BAR maybe something that ends in Controller would be the C, like FancyWindowOpeningController, etc? And for models - should I suffix them with like GravityBallPhysicsModel etc, or should I just name those whatever I like? I haven't seen enough code to know what's out there in the wild and I want to get on the right track early on Thank you in advance for setting me straight or letting me know I'm on the right track. 
I feel like I'm starting to get it and most of what I say here makes sense, but validation of my guessing would help me feel confident..

    Read the article

  • What are the principles of developing web-applications with action-based java frameworks?

    - by Roman
    Background I'm going to develop a new web-application with java. It's not very big or very complex and I have enough time until it'll "officially" start. I have some JSF/Facelets development background (about half a year). And I also have some expirience with JSP+JSTL. In self-educational purpose (and also in order to find the best solution) I want to prototype the new project with one of action-based frameworks. Actually, I will choose between Spring MVC and Stripes. Problem In order to get correct impression about action-based frameworks (in comparison with JSF) I want to be sure that I use them correctly (in bigger or lesser extent). So, here I list some most-frequent tasks (at least for me) and describe how I solve them with JSF. I want to know how they should be solved with action-based framework (or separately with Spring MVC and Stripes if there is any difference for concrete task). Rendering content: I can apply ready-to-use component from standard jsf libraries (core and html) or from 3rd-party libs (like RichFaces). I can combine simple components and I can easily create my own components which are based on standard components. Rendering data (primitive or reference types) in the correct format: Each component allow to specify a converter for transforming data in both ways (to render and to send to the server). Converter is, as usual, a simple class with 2 small methods. Site navigation: I specify a set of navigation-cases in faces-config.xml. Then I specify action-attribute of a link (or a button) which should match one or more of navigation cases. The best match is choosen by JSF. Implementing flow (multiform wizards for example): I'm using JSF 1.2 so I use Apache Orchestra for the flow (conversation) scope. Form processing: I have a pretty standard java-bean (backing bean in JSF terms) with some scope. I 'map' form fields on this bean properties. If everything goes well (no exceptions and validation is passed) then all these properties are set with values from the form fields. Then I can call one method (specified in button's action attribute) to execute some logic and return string which should much one of my navigation cases to go to the next screen. Forms validation: I can create custom validator (or choose from existing) and add it to almost each component. 3rd-party libraries have sets of custom ajax-validators. Standard validators work only after page is submitted. Actually, I don't like how validation in JSF works. Too much magic there. Many standard components (or maybe all of them) have predefined validation and it's impossible to disable it (Maybe not always, but I met many problems with it). Ajax support: many 3rd-party libraries (MyFaces, IceFaces, OpenFaces, AnotherPrefixFaces...) have strong ajax support and it works pretty well. Until you meet a problem. Too much magic there as well. It's very difficult to make it work if it doesn't work but you've done right as it's described in the manual. User-friendly URLs: people say that there are some libraries for that exist. And it can be done with filters as well. But I've never tried. It seems too complex for the first look. Thanks in advance for explaning how these items (or some of them) can be done with action-based framework.

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment, Cti, in StreamInsight is very definitely one of the most important concepts to learn if you want your Streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article but a very good explanation in addition to Books Online can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea. Time in StreamInsight Series http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx A lot of the problems I see with unresponsive or stuck streams on the MSDN Forums are to do with how Ctis are enqueued or in a lot of cases not enqueued. If you enqueue events and never enqueue a Cti then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem. The stream of data I was dealing with on this project was very bursty that is to say when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engne it is best practice to do so with a StartTime that is given to you by the system producing the event . StreamInsight processes events and it doesn't matter whether those events are being pushed into the engine by a source system or the events are being read from something like a flat file in a directory somewhere. You can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important. We could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had. CREATE TABLE [dbo].[t] ( [c1] [int] PRIMARY KEY, [c2] [datetime] NULL ) INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810') Column c2 defines the StartTime of the event on the source and as you can see the values in all 3 rows of data is the same. If we read Ciprian’s articles we know that we can define how Ctis get injected into the stream in 3 different places The Stream Definition The Input Factory The Input Adapter I personally have always been a fan of enqueing Ctis through the factory. 
Below is code typical of what I would use to do this On the class itself I do some inheriting public class SimpleInputFactory : ITypedInputAdapterFactory<SimpleInputConfig>, ITypedDeclareAdvanceTimeProperties<SimpleInputConfig> And then I implement the following function public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape) { return new AdapterAdvanceTimeSettings( new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)), AdvanceTimePolicy.Adjust); } The configInfo .CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected and this in turn will flush through the stream of data. I usually pass a value of 1 for this setting. The second parameter determines the CTI timestamp in terms of a delay relative to the events. -1 ticks in the past results in 1 tick in the future, i.e., ahead of the event. The problem with this method though is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events currEvent.StartTime = (DateTimeOffset)dt.c2; If I go ahead and run my StreamInsight process with this configuration i can see on the output adapter that two events have been removed To see this in a little more depth I can use the StreamInsight Debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (Also the EndTime of the event). The second event arrives and it has a StartTime of before the Cti and even though we specified AdvanceTimePolicy.Adjust on the factory we know that a point event can never be adjusted like this and the event is dropped. The same happens for the third event as well (The second and third events get trumped by the Cti). For a more detailed discussion of why this happens look here http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx We end up with a single event being pushed into the output adapter and our result now makes sense. The next way I tried to solve this problem by changing the value of the second parameter to TimeSpan.Zero Here is how my factory code now looks public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape) { return new AdapterAdvanceTimeSettings( new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero), AdvanceTimePolicy.Adjust); } What I am doing here is declaring a policy that says inject a Cti together with every event and stamp it with a StartTime that is equal to the start time of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The Downside is that because the Cti is declared with the StartTime of the event itself then it does not actually flush that particular event because in the StreamInsight algebra, a Cti commits only those events that occurred strictly before them. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration As you can see all we got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves. 
Because the Cti issued has the same timestamp (StartTime) as the events then none of the events get flushed. I was nearly there but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds and this was far too long for an event to be enqueued and not be flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind although only one of them made sense when I explored it some more. Where multiple events have the same StartTime I could add 1 tick to the first event, two to the second, three to third etc thereby giving them unique StartTime values. Add a timer to manually inject Ctis The problem with the first implementation is that I would be giving the events a new StartTime. This would cause me the following problems If I want to define windows over the stream then some events may not be captured in the right windows and therefore any calculations on those windows I did would be wrong What would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick. It is now too far in the past as far as my stream is concerned and it would be dropped. Not what I would want to do at all. I decided then to look at the Timer based solution I created a timer on my input adapter that elapsed every 200ms. private Timer tmr; public SimpleInputAdapter(SimpleInputConfig configInfo) { ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString); this.configInfo = configInfo; tmr = new Timer(200); tmr.Elapsed += new ElapsedEventHandler(t_Elapsed); tmr.Enabled = true; } void t_Elapsed(object sender, ElapsedEventArgs e) { ts = DateTime.Now - dtCtiIssued; if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false) { EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100)); TimerIssuedCti = true; } }   In the t_Elapsed event handler I find out the difference in time between now and when the last event was processed (dtCtiIssued). I then check to see if that is greater than or equal to 200ms and if the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn’t do this check then I would enqueue a Cti every time the timer elapsed which is not something I wanted. If the difference between the two times is greater than or equal to 500ms and the last event enqueued was by a real event then I issue a Cti through the timer to flush the event Queue, otherwise I do nothing. When I enqueue the Ctis into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti   currEvent = CreateInsertEvent(); currEvent.StartTime = (DateTimeOffset)dt.c2; TimerIssuedCti = false; dtCtiIssued = currEvent.StartTime; If I go ahead and run this configuration I see the following in my output. As we can see the first Cti gets enqueued as before but then another is enqueued by the timer and because this has a later timestamp it flushes the enqueued events through the engine. Conclusion Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is for me one of the most important things you can learn. I have attached my solution for the demos. It is all in one project and testing each variation is a simple matter of commenting and un-commenting the parts in the code we have been dealing with here.
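To make the timer-based variant easier to follow end to end, here is a condensed sketch of the ProduceEvents side that the fragments above come from. Only dtCtiIssued, TimerIssuedCti, CreateInsertEvent and the StartTime assignment are taken from the article; the data-context iteration and the payload field are assumptions about the surrounding adapter code, and adapter state checks, FULL-queue handling and shutdown are deliberately left out.

// Condensed sketch of how the timer fields above tie into the ProduceEvents loop.
// dtCtiIssued and TimerIssuedCti are the members used in the article; the rest is
// an assumption about the surrounding typed point input adapter.
private void ProduceEvents()
{
    foreach (var dt in ctx.t)                      // rows from the SQL table [dbo].[t]
    {
        var currEvent = CreateInsertEvent();       // PointEvent<TPayload>
        currEvent.StartTime = (DateTimeOffset)dt.c2;
        currEvent.Payload.c1 = dt.c1;              // assumed payload field

        // Remember that the last activity came from a real event, so the timer
        // knows it may issue a flushing CTI if the stream goes quiet.
        TimerIssuedCti = false;
        dtCtiIssued = currEvent.StartTime;

        Enqueue(ref currEvent);                    // hand the event to the engine;
                                                   // the factory settings inject the per-event CTIs
    }
}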

    Read the article

  • 8 Reasons Why Even Microsoft Agrees the Windows Desktop is a Nightmare

    - by Chris Hoffman
    Let’s be honest: The Windows desktop is a mess. Sure, it’s extremely powerful and has a huge software library, but it’s not a good experience for average people. It’s not even a good experience for geeks, although we tolerate it. Even Microsoft agrees about this. Microsoft’s Surface tablets with Windows RT don’t support any third-party desktop apps. They consider this a feature — users can’t install malware and other desktop junk, so the system will always be speedy and secure. Malware is Still Common Malware may not affect geeks, but it certainly continues to affect average people. Securing Windows, keeping it secure, and avoiding unsafe programs is a complex process. There are over 50 different file extensions that can contain harmful code to keep track of. It’s easy to have theoretical discussions about how malware could infect Mac computers, Android devices, and other systems. But Mac malware is extremely rare, and has  generally been caused by problem with the terrible Java plug-in. Macs are configured to only run executables from identified developers by default, whereas Windows will run everything. Android malware is talked about a lot, but Android malware is rare in the real world and is generally confined to users who disable security protections and install pirated apps. Google has also taken action, rolling out built-in antivirus-like app checking to all Android devices, even old ones running Android 2.3, via Play Services. Whatever the reason, Windows malware is still common while malware for other systems isn’t. We all know it — anyone who does tech support for average users has dealt with infected Windows computers. Even users who can avoid malware are stuck dealing with complex and nagging antivirus programs, especially since it’s now so difficult to trust Microsoft’s antivirus products. Manufacturer-Installed Bloatware is Terrible Sit down with a new Mac, Chromebook, iPad, Android tablet, Linux laptop, or even a Surface running Windows RT and you can enjoy using your new device. The system is a clean slate for you to start exploring and installing your new software. Sit down with a new Windows PC and the system is a mess. Rather than be delighted, you’re stuck reinstalling Windows and then installing the necessary drivers or you’re forced to start uninstalling useless bloatware programs one-by-one, trying to figure out which ones are actually useful. After uninstalling the useless programs, you may end up with a system tray full of icons for ten different hardware utilities anyway. The first experience of using a new Windows PC is frustration, not delight. Yes, bloatware is still a problem on Windows 8 PCs. Manufacturers can customize the Refresh image, preventing bloatware rom easily being removed. Finding a Desktop Program is Dangerous Want to install a Windows desktop program? Well, you’ll have to head to your web browser and start searching. It’s up to you, the user, to know which programs are safe and which are dangerous. Even if you find a website for a reputable program, the advertisements on that page will often try to trick you into downloading fake installers full of adware. While it’s great to have the ability to leave the app store and get software that the platform’s owner hasn’t approved — as on Android — this is no excuse for not providing a good, secure software installation experience for typical users installing typical programs. 
Even Reputable Desktop Programs Try to Install Junk Even if you do find an entirely reputable program, you’ll have to keep your eyes open while installing it. It will likely try to install adware, add browse toolbars, change your default search engine, or change your web browser’s home page. Even Microsoft’s own programs do this — when you install Skype for Windows desktop, it will attempt to modify your browser settings t ouse Bing, even if you’re specially chosen another search engine and home page. With Microsoft setting such an example, it’s no surprise so many other software developers have followed suit. Geeks know how to avoid this stuff, but there’s a reason program installers continue to do this. It works and tricks many users, who end up with junk installed and settings changed. The Update Process is Confusing On iOS, Android, and Windows RT, software updates come from a single place — the app store. On Linux, software updates come from the package manager. On Mac OS X, typical users’ software updates likely come from the Mac App Store. On the Windows desktop, software updates come from… well, every program has to create its own update mechanism. Users have to keep track of all these updaters and make sure their software is up-to-date. Most programs now have their act together and automatically update by default, but users who have old versions of Flash and Adobe Reader installed are vulnerable until they realize their software isn’t automatically updating. Even if every program updates properly, the sheer mess of updaters is clunky, slow, and confusing in comparison to a centralized update process. Browser Plugins Open Security Holes It’s no surprise that other modern platforms like iOS, Android, Chrome OS, Windows RT, and Windows Phone don’t allow traditional browser plugins, or only allow Flash and build it into the system. Browser plugins provide a wealth of different ways for malicious web pages to exploit the browser and open the system to attack. Browser plugins are one of the most popular attack vectors because of how many users have out-of-date plugins and how many plugins, especially Java, seem to be designed without taking security seriously. Oracle’s Java plugin even tries to install the terrible Ask toolbar when installing security updates. That’s right — the security update process is also used to cram additional adware into users’ machines so unscrupulous companies like Oracle can make a quick buck. It’s no wonder that most Windows PCs have an out-of-date, vulnerable version of Java installed. Battery Life is Terrible Windows PCs have bad battery life compared to Macs, IOS devices, and Android tablets, all of which Windows now competes with. Even Microsoft’s own Surface Pro 2 has bad battery life. Apple’s 11-inch MacBook Air, which has very similar hardware to the Surface Pro 2, offers double its battery life when web browsing. Microsoft has been fond of blaming third-party hardware manufacturers for their poorly optimized drivers in the past, but there’s no longer any room to hide. The problem is clearly Windows. Why is this? No one really knows for sure. Perhaps Microsoft has kept on piling Windows component on top of Windows component and many older Windows components were never properly optimized. Windows Users Become Stuck on Old Windows Versions Apple’s new OS X 10.9 Mavericks upgrade is completely free to all Mac users and supports Macs going back to 2007. Apple has also announced their intention that all new releases of Mac OS X will be free. 
In 2007, Microsoft had just shipped Windows Vista. Macs from the Windows Vista era are being upgraded to the latest version of the Mac operating system for free, while Windows PCs from the same era are probably still using Windows Vista. There’s no easy upgrade path for these people. They’re stuck using Windows Vista and maybe even the outdated Internet Explorer 9 if they haven’t installed a third-party web browser. Microsoft’s upgrade path is for these people to pay $120 for a full copy of Windows 8.1 and go through a complicated process that’s actaully a clean install. Even users of Windows 8 devices will probably have to pay money to upgrade to Windows 9, while updates for other operating systems are completely free. If you’re a PC geek, a PC gamer, or someone who just requires specialized software that only runs on Windows, you probably use the Windows desktop and don’t want to switch. That’s fine, but it doesn’t mean the Windows desktop is actually a good experience. Much of the burden falls on average users, who have to struggle with malware, bloatware, adware bundled in installers, complex software installation processes, and out-of-date software. In return, all they get is the ability to use a web browser and some basic Office apps that they could use on almost any other platform without all the hassle. Microsoft would agree with this, touting Windows RT and their new “Windows 8-style” app platform as the solution. Why else would Microsoft, a “devices and services” company, position the Surface — a device without traditional Windows desktop programs — as their mass-market device recommended for average people? This isn’t necessarily an endorsement of Windows RT. If you’re tech support for your family members and it comes time for them to upgrade, you may want to get them off the Windows desktop and tell them to get a Mac or something else that’s simple. Better yet, if they get a Mac, you can tell them to visit the Apple Store for help instead of calling you. That’s another thing Windows PCs don’t offer — good manufacturer support. Image Credit: Blanca Stella Mejia on Flickr, Collin Andserson on Flickr, Luca Conti on Flickr     

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >