Search Results

Search found 7073 results on 283 pages for 'shared printers'.


  • How to export and import a user profile from one Quassel core to another?

    - by Zertrin
    I have been using Quassel as my IRC bouncer for quite a long time now. We (a group of administrators of a small network) have set up a shared Quassel core with many users on it. Now I would like to export everything related to my user account from the Quassel database on this core, in order to re-import it later into another Quassel core on my own server. Unfortunately, while a feature for adding users has been implemented in Quassel, nothing is provided so far for either exporting or deleting a user. (If a delete-user feature were available, I could have made a copy of the current database, deleted all the other users leaving only mine, and used the resulting database on my own server, while leaving the original untouched on the shared server.) Despite extensive research on the Internet, I have found no solution so far. I should point out that the backend database for the core has been migrated from the default SQLite backend to a PostgreSQL backend as the database grew considerably (over 1.5 GB by now). However, I'd be glad to hear of any working solution (SQLite or PostgreSQL backend) describing a way to export the data related to a specific user profile and then re-import it into a new Quassel core database.
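    Since the core now runs on PostgreSQL, one possible starting point is to dump only the rows tied to your user id and re-load them on the new core. Below is a minimal Python sketch using psycopg2. The table and column names (quasseluser, buffer, backlog, userid, bufferid) are assumptions about Quassel's schema and must be checked against the real core database first; related tables (identities, networks, etc.) would need the same treatment.

        import psycopg2

        USER_ID = 42  # hypothetical: look your userid up in the user table first

        # Tables assumed to reference the user directly or via bufferid --
        # verify against the actual Quassel schema before relying on this.
        QUERIES = {
            "quasseluser.csv": "SELECT * FROM quasseluser WHERE userid = %d" % USER_ID,
            "buffer.csv": "SELECT * FROM buffer WHERE userid = %d" % USER_ID,
            "backlog.csv": ("SELECT b.* FROM backlog b "
                            "JOIN buffer bf ON b.bufferid = bf.bufferid "
                            "WHERE bf.userid = %d" % USER_ID),
        }

        conn = psycopg2.connect("dbname=quassel user=quassel")
        for filename, query in QUERIES.items():
            with open(filename, "w") as out:
                conn.cursor().copy_expert(
                    "COPY (%s) TO STDOUT WITH CSV HEADER" % query, out)
        conn.close()

    Importing on the target core would be the mirror operation (COPY ... FROM STDIN), taking care to remap any ids that collide with rows already present there.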

    Read the article

  • How do I set up Grub properly to quad-boot Windows, Mac OS X, Linux, and FreeBSD?

    - by Joe
    Grub has gone completely insane on me. My quad-boot system was working great up until I upgraded Ubuntu to 12.04. Since Ubuntu overwrote the Grub stuff, I had to repair it with my Mac OS X and FreeBSD entries. After this, trying to boot Mac OS X gave me the error "couldn't open file" and FreeBSD gave the error "no such partition". Windows and Ubuntu worked fine. So I tried repairing again because I figured something must've gone wrong in the install process. Then only Ubuntu would boot. Trying to boot Windows would give me the error "no argument specified". I tried repairing Grub once again, since I seemed to be getting different results each time. This time, Ubuntu no longer appeared in the Grub menu, and the errors for the other OSes were the same. So I booted into the Ubuntu 12.04 live CD and ran Boot-Repair with recommended settings. Now Grub is completely skipped and Windows boots up. I have absolutely no idea what is going on or why I get different results every time I reinstall Grub. Here is how my partitions are set up:
    - sda1 - storage drive
    - sdb1 - Windows
    - sdb2 - Mac OS X
    - sdb3 - FreeBSD
    - sdb4 - extended
    - sdb5 - Ubuntu
    - sdb6 - shared storage
    - sdb7 - shared storage
    Here's my grub.cfg file: grub.cfg

    Read the article

  • User unable to delete folder / files "File in use by another user" Server 2003

    - by Az
    I am administering a standalone Windows 2003 Terminal Server with no domain membership. Occasionally (about once a week or so) a user will attempt to delete a sub-folder in a shared folder and gets denied with "File in use by another user". I tried checking the shared folder snap-in, and that folder is not open. She has full control and is the owner of the folder as well. I even checked "Effective Permissions" for some of the folders/files she can't delete, and she truly has full control. I am able to delete the folder as Administrator with no problem. Another odd thing: she can delete the files IN the folder most of the time (this issue happens on both folders and files in the share). Sometimes merely waiting a day or two will allow her to delete the folder or files. I am curious as to why she gets the message that it is in use as creator/owner with full control, yet I don't get it simply as a member of the Admin group. If anyone out there has any ideas, I'd love to hear them! Thank you.

    Read the article

  • PHP unable to allocate memory.

    - by AlReece45
    On my way to the office this morning, every website on our shared VPS started giving the same error, several times, and not the typical fatal memory_limit error: Warning: Unknown: Unable to allocate memory for pool. in Unknown on line 0. The server is a 64-bit OpenVZ container running cPanel. There are only ~6 VPSes on the host; this is the largest one at only 4 GB, and the host itself has 24 GB RAM. As the memory graphs showed (images omitted), memory usage on both the host and the VPS was rather low, and CPU usage, disk and the host all seemed normal. RlimitMem was set to 583653034, yet the memory usage was about the same as it usually is. Apache 2.2, PHP 5.2 (mod_php). Restarting Apache has corrected the problem for now, but I'd like to prevent it from happening again, and I'm not sure what was limiting the memory. There seems to be plenty of memory, so what caused this error? APC information: apc.ttl=0, apc.shm_size=0, apc.mmap_file_mask=(blank), 1 segment with 32.0 MBytes (mmap memory, pthread mutex locking).

    Read the article

  • How to choose the right PHP Framework for web development?

    - by liliwang
    Hi! I'm a PHP pro, but I haven't used any PHP framework until now, so I have no clue how to choose one. Do you have some tips to help me choose the right PHP framework? I want a stable and secure framework for projects with about 400 hours of development time, and it should be usable on shared-hosting web servers. I don't need AJAX support (I'm using extJS). It would be nice if the framework supported rapid application development and object-relational mapping, and some standard functions (authentication, form validation) would be nice too. Caching would be useful, but isn't required. Requirements for the framework:
    - shared-hosting web server support
    - suitable for projects between 200 and 400 hours of work
    - supports the rapid application development model
    - supports object-relational mapping
    - if possible: caching
    - ready-made modules (e.g. authentication, form validation)
    - easy to learn
    Which PHP framework is the right one for me?

    Read the article

  • No access to Windows 2003 admin shares

    - by ARomo
    This is the environment: several Windows 2003 SP2 servers and several Windows XP SP2 and SP3 clients, all in the same LAN. The firewall is disabled everywhere. No recent Windows updates or configuration changes. This is the problem: since last Thursday, when I log on to any other server or workstation as any regular (non-admin) user, I fail to be able to open ADMIN SHARES ONLY (namely \\server1\c$, \\server1\e$ and \\server1\admin$). The error message is: "\\server1\c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again." I can, however, open the same shares if I use the FQDN or IP address: \\server1.domain.local\c$, \\172.0.0.1\c$. Other shares do not have this issue, and I can open them without any problem. Any ideas or suggestions would be truly appreciated. Thank you in advance.

    Read the article

  • Website always having DNS problems

    - by Root
    I moved my website from shared hosting to a VPS. When it was on shared hosting, all I did was update my name servers, whereas now I have my own VPS and I used one of my domains, sjdpublishing.com, as the primary domain for the VPS. I created the nameservers ns1.sjdpublishing.com and ns2.sjdpublishing.com, and my actual website, creativeproperty.com.au, points to ns1.sjdpublishing.com and ns2.sjdpublishing.com. I am having repeated problems with creativeproperty.com.au. A few weeks back I had a problem which was resolved by flushing DNS, and later I got a similar problem which was not resolved by flushing DNS. I posted a question here, and someone told me to go to Network Settings in Mac OS X and remove the IP, as nslookup creativeproperty.com.au in my Mac terminal pointed to my router IP, and that fixed the problem. Now many of my clients are complaining that they have the same trouble accessing my website. I don't know whether to flush DNS, change network settings, or whether it's some other issue. Can anyone please check whether creativeproperty.com.au and sjdpublishing.com have correct records, and tell me the best solution for this issue?
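    A quick first diagnostic from any machine is to compare what public resolution returns with what the authoritative nameservers claim. Here is a rough Python sketch (standard library, plus the dig CLI for the authoritative query) using the hostnames from the question:

        import socket
        import subprocess

        for name in ("sjdpublishing.com",
                     "ns1.sjdpublishing.com",
                     "ns2.sjdpublishing.com",
                     "creativeproperty.com.au"):
            try:
                print(name, "->", socket.gethostbyname(name))
            except socket.gaierror as err:
                print(name, "-> FAILED:", err)

        # Ask the authoritative server directly (requires dig; bypasses caches).
        answer = subprocess.run(
            ["dig", "+short", "@ns1.sjdpublishing.com", "creativeproperty.com.au"],
            capture_output=True, text=True)
        print("Authoritative answer:", answer.stdout.strip())

    If the nameserver hostnames themselves fail to resolve, the problem is the glue/host records at the registrar; if they resolve but the direct query times out, the DNS service on the VPS itself is the suspect.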

    Read the article

  • Connect USB hard drive to wireless router on RJ45 port? Possible?

    - by lawphotog
    Just a quick story behind this: I was trying to set up a wireless networked hard drive at home. My wireless router doesn't take USB, so I am considering a few options. First I was considering getting something like a WD My Cloud. My router is an old one provided by my service provider and only has 10/100 Ethernet, while the WD My Cloud has a Gigabit interface, so unless I change to a new router, data transfer will be slow. Upgrading the router is therefore a must if I want fast transfer speeds. Plus, I already own an external hard drive with a USB 3.0 interface, so if I get a router like the Netgear D6300, I can get a decent-speed wireless shared drive at home and use my existing HDD instead of a WD My Cloud. But that router isn't cheap, so I am saving up for it. In the meantime, I found out about USB-to-RJ45 adaptors. I read the reviews; some say it works for them and for some it doesn't, but they didn't really say what they were trying to do, so I'm confused. If I bought an adaptor like this, could I connect my existing HDD (USB) to my existing router (RJ45) and use it as a shared drive for data transfer? I know it will be slow, as the adaptor will only have USB 2.0 and 10/100 Ethernet, but that's fine as a temporary solution until I get my new router.

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtually hosted server running CentOS. In the past month a (Java-based) process that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed: when I start the process, top reports it using 470 MB of RAM, while the 'used' figure immediately drops by over 1 GB. If I run top, the total RES used across all processes falls short of the 'used' listed at the top by almost 700 MB. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up in top. I'm a developer, not a server guy, so I'm appealing to the experts: to me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a leaking Java process in this case? Output of free before starting the process:

                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:     149264    1947888
        Swap:            0          0          0

    and free after:

                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:    1094116    1003036
        Swap:            0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1 GB of RAM. Since the process (based on top) is only using 452 MB, does that mean the kernel is all of a sudden using an additional 500 MB?
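    One way to check the support person's claim yourself is to total the resident set sizes straight from /proc and set that against what free calls 'used'. A small, stdlib-only Python sketch for Linux:

        import glob

        def meminfo():
            """Parse /proc/meminfo into a dict of kB values."""
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])  # values are in kB
            return info

        total_rss = 0
        for status in glob.glob("/proc/[0-9]*/status"):
            try:
                with open(status) as f:
                    for line in f:
                        if line.startswith("VmRSS:"):
                            total_rss += int(line.split()[1])  # kB
            except IOError:
                pass  # process exited while we were reading

        m = meminfo()
        used = m["MemTotal"] - m["MemFree"] - m.get("Buffers", 0) - m.get("Cached", 0)
        print("Sum of per-process RSS: %d kB" % total_rss)
        print("Used per /proc/meminfo: %d kB" % used)

    Gaps in both directions are normal to a degree: summing RSS double-counts pages shared between processes, while kernel allocations (slab, page tables) count as 'used' but belong to no process. On container-style virtual servers the host's accounting (/proc/user_beancounters, where present) is also worth comparing, since a container can be charged for memory no guest process shows.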

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution to keep my group's shared passwords, projects, and work safe. Our network has Active Directory with public/groups/users and NTFS permissions, under a Windows Server 2003 which will soon migrate to Windows Server 2008 R2. Our IT crowd is small, consisting of 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up; they just took over recently when the previous ones quit. We usually find them laughing at private content from users stored in the groups AD, sabotaging documents that don't match their personal tastes, and, finally, this week we found out they stole a project we (developers and DBAs) were finishing, which, long before, they had presented to the CEO as theirs without us knowing. I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository under SSL, but there are many dummies (~38 at the moment) involved in the projects who have trouble using TortoiseSVN when contributing, and very often we had to fix messed-up updates. Well, I think these two give an idea of what we've been trying to achieve. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized? P.S.: Seriously, where I live and work, political corruption has gone wild, so reporting them is likely impracticable. Both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behavior and our failed attempts to report them.

    Read the article

  • Best way to restrict access to a folder in Dropbox

    - by Joe S
    I currently run a business with around 10 staff members, and we currently use Dropbox Pro 100GB to share all of our files. It works very well and is inexpensive; however, I am taking on a number of new staff and would like to move the more sensitive documents into their own, protected folder. Currently, we all share one Dropbox account. I am aware that Dropbox for Teams supports this, but it is far too expensive for us as a small company. I have researched a number of solutions:
    1) Set up a new standard Dropbox account just for use by management, which will contain all of the sensitive documents, and join the shared folder of the rest of my team to access the rest of the documents. As I understand it, this is not possible with a free account, as any Dropbox shared folder added to your account will use up your quota.
    2) Set up some sort of TrueCrypt container, install TrueCrypt on each trusted staff member's machine, and store the documents inside that. Would this be difficult to use? I'd imagine the syncing would not work so well, as the disk would technically be mounted at the time of use and any changes would be a change to the actual container rather than to individual files.
    I was just wondering if anyone knows a way to do this without the drawbacks outlined above? Thanks!

    Read the article

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two WordPress sites and some Java-based collaboration software. The server has 2 GB of memory and is currently using about 1.8 GB. Right now what's on here is pretty much a pilot project getting negligible traffic, so I think it's pretty clear that I'll be needing more memory. I was wondering how, if I were to release it, I might anticipate my memory needs based on the traffic it gets. I've poked around on Google, and what I've found has been a bit tenuous. Is there a good heuristic to use when calculating memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like actual memory used and isn't an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be tested experimentally, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B for figuring this out is to add a bunch of swap space to the server, give it some traffic, and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb to estimate how much memory I should plan on in advance... or if what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
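    There is no universal formula, but a common back-of-envelope model is: peak memory ≈ base load + (per-worker footprint × peak concurrent workers). A toy Python sketch; every number below is a placeholder to replace with your own measurements (e.g. the RES of one Apache/PHP worker from top):

        # Hypothetical capacity estimate -- all numbers are placeholders.
        base_mb = 1109        # resting usage without the collaboration software
        per_worker_mb = 45    # measured footprint of one web server worker
        peak_workers = 25     # expected concurrent requests (e.g. MaxClients)

        needed = base_mb + per_worker_mb * peak_workers
        print("Estimated peak usage: %d MB" % needed)          # 2234 MB here
        print("With ~30%% headroom:  %d MB" % (needed * 1.3))  # ~2904 MB

    The swap experiment described above is a sensible way to measure the per-worker and peak numbers rather than guessing them.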

    Read the article

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation. Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binaries, each of which also uses another shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. It uses a singleton object and exports a handler class for getting and using the singleton. Compression only happens in the two cases above. The user/loader executable can only specify the starting name of the file; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so have almost the same behavior: each has a class that declares an object of the handler, which gets the instance of the singleton in libE.so and uses it for further purposes. All these libraries are linked into the two executables. If only one of the two executables runs, everything is fine. But if both executables run one after the other, the file of the first started executable gets compressed. Debug info: The constructor and destructor of the singleton object are called twice (for each executable). The singleton object is a static object and is never deleted. The executable is not able to exit/return and gives: glibc detected *** (exe1 or exe2): double free or corruption (!prev): some_addr ***. Running the binaries under valgrind shows that the above error is due to the destructor of the singleton object. Thanks

    Read the article

  • Processing of Group Policy failed only on 2008 servers, with Name Resolution failure on the current domain controller

    - by Ken Wolfrom
    We spent the last 3 months doing an upgrade from a 2003 domain to a 2008 R2 domain. Our last DC was rebuilt (5 total) and brought back online. After it was put online, some 2008 and 2008 R2 servers (10 now) started getting these errors in the event logs: "The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following: a) Name Resolution failure on the current domain controller. b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller)." We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access a shared drive on an affected server via \\directory\shared, they get this error: "There are currently no logon servers available to service the logon request." This only affects the 2008 OS, on a random set of about 10 servers out of some 30 with this OS. The services on the machines are running OK, and we are able to log in with domain\user at the consoles and via RDP. We can log onto an affected machine, get to \\domainname\sysvol, and see the GPOs. We have checked the replication topology of the domain, and it states all servers can replicate with no errors. We went back to the last DC, demoted it, removed DNS, removed it from the domain, and waited 24 hours; the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still shows this behavior. Bottom line: we have some servers on which the domain will not let any UDP client/server apps or GPOs process, but the TCP-related items seem to work fine: HTTP, TCP calls, SQL and Oracle databases connect and process. Any input on possible reasons for this issue and fixes? It is only affecting the 2008 servers in a 2008 R2 domain.

    Read the article

  • How to recover my XML files' default icon?

    - by moonway
    My XML files are showing the unknown-program icon (screenshot omitted). I can't change the icon back. Why? I looked it up in the registry and found no error; see the following:

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\.xml]
        @="xmlfile"
        "Content Type"="text/xml"
        "PerceivedType"="text"

        [HKEY_CLASSES_ROOT\.xml\PersistentHandler]
        @="{7E9D8D44-6926-426F-AA2B-217A819A5CCE}"

        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\xmlfile]
        @="@C:\\Windows\\System32\\msxml3r.dll,-1"
        "EditFlags"=hex:00,00,00,00
        "FriendlyTypeName"=hex(2):40,00,25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,\
          00,6f,00,6f,00,74,00,25,00,5c,00,53,00,79,00,73,00,74,00,65,00,6d,00,33,00,\
          32,00,5c,00,6d,00,73,00,78,00,6d,00,6c,00,33,00,72,00,2e,00,64,00,6c,00,6c,\
          00,2c,00,2d,00,31,00,00,00

        [HKEY_CLASSES_ROOT\xmlfile\BrowseInPlace]

        [HKEY_CLASSES_ROOT\xmlfile\CLSID]
        @="{48123BC4-99D9-11D1-A6B3-00C04FD91555}"

        [HKEY_CLASSES_ROOT\xmlfile\DefaultIcon]
        @="C:\\Windows\\System32\\msxml3.dll,0"

        [HKEY_CLASSES_ROOT\xmlfile\shell]
        @="open"

        [HKEY_CLASSES_ROOT\xmlfile\shell\edit]
        [HKEY_CLASSES_ROOT\xmlfile\shell\edit\command]
        @="\"C:\\Program Files\\Common Files\\Microsoft Shared\\OFFICE11\\MSOXMLED.EXE\" /verb edit \"%1\""

        [HKEY_CLASSES_ROOT\xmlfile\shell\Open]
        [HKEY_CLASSES_ROOT\xmlfile\shell\Open\Command]
        @="\"C:\\Program Files\\Common Files\\Microsoft Shared\\OFFICE11\\MSOXMLED.EXE\" /verb open \"%1\""

        [HKEY_CLASSES_ROOT\xmlfile\shell\Open\ddeexec]
        @=""

        [HKEY_CLASSES_ROOT\xmlfile\ShellEx]
        [HKEY_CLASSES_ROOT\xmlfile\ShellEx\IconHandler]
        @="{AB968F1E-E20B-403A-9EB8-72EB0EB6797E}"

    Can you find something wrong? Or could you paste your own registry entries? I need the default .xml registry settings, with the default associated program.
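    For anyone wanting to double-check what the shell actually resolves for .xml, here is a small read-only Python sketch using the stdlib winreg module (Windows only). If these values come back as expected ("xmlfile" and the msxml3.dll icon), the registry side looks fine and a stale icon cache becomes a plausible suspect instead:

        import winreg

        def read_default(path):
            """Return the default (unnamed) value of a key under HKEY_CLASSES_ROOT."""
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, path) as key:
                return winreg.QueryValue(key, None)

        prog_id = read_default(".xml")                   # expected: "xmlfile"
        icon = read_default(prog_id + r"\DefaultIcon")   # expected: ...msxml3.dll,0
        print("ProgID for .xml:", prog_id)
        print("DefaultIcon    :", icon)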

    Read the article

  • Network external hard drive reports not enough free space

    - by mzhang
    I'm running an Ubuntu (10.04) Samba server on a local network. The server has a 50 GB internal drive with only 24 MB free, and I've shared a folder /samba from that drive. I also have a 1 TB NTFS external hard drive mounted on the system, and a symbolic link from the Samba shared folder on the nearly-full internal drive to the plenty-of-free-space external drive (i.e. /samba/external_hd). I wish to copy a 3.25 GB folder into the (remote) external hard drive from a Mac (10.6.8). The Mac reports (correctly) that there are 24 MB free on the server, and so will not let me copy the folder over to the external drive (dragging the folder into /samba/external_hd); it fails with a "server does not have enough free space" error. However, it seems that I can still scp the folder into the external drive via the symbolic link. Is there a reason why this is happening (and are there any ways to prevent it)? Is this even good practice (mounting a drive and linking into the directory)?
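    The asymmetry makes sense if the SMB client asks for free space on the share's root filesystem while scp simply writes through the symlink. You can see the two numbers the server could report with a short stdlib Python sketch run on the server (paths taken from the question):

        import os

        for path in ("/samba", "/samba/external_hd"):
            st = os.statvfs(path)  # statvfs follows symlinks, as scp does
            free_gb = st.f_bavail * st.f_frsize / float(1024 ** 3)
            print("%-22s %10.2f GB free" % (path, free_gb))

    If the first line shows roughly 0.02 GB and the second several hundred, the Mac is simply being quoted the wrong filesystem; Samba's per-share "dfree command" option is the usual knob for overriding what gets reported.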

    Read the article

  • Win7 loses connection to network shares after resume unless server specified using FQDN

    - by Szonja Zemkó
    My Win7 client has a connection to a Linux server and its shared folders. The problem occurs when the computer wakes up after sleep: one of the shared folders is no longer accessible, and I receive the following message: "Error code: 80070035. The network path was not found." I have this problem with one specific folder only. When I restart the computer, the problematic folder is accessible again. When I log off before sleep, the folder is accessible after wakeup. If I access the folder using the FQDN of the server or the server IP, it is also accessible. As a temporary solution I mapped the folder to a network drive using the FQDN, and it's working fine, but it's inconvenient, since every other folder on the server is accessible. To summarize:
    - \\server\problematicshare no longer works after resume (the Samba server sees my client connect, then disconnect a few seconds later, while I receive the above error message)
    - \\server\othershare works after resume
    - \\fqdn.of.server\problematicshare always works
    - \\ip.of.server\problematicshare always works
    - once the problem manifests, I'm no longer able to restart the "Workstation" service (it is not responding)
    - restarting the "Computer Browser" service has no apparent effect
    - the event log doesn't contain anything that seems relevant
    - "ping server" works

    Read the article

  • Folder Sharing NTFS permissions with Share Permission

    - by Muhammad Adly
    I have a problem on my domain. The history: I had a server with Windows 2008 R2 installed and the following roles on it (AD, DNS, DHCP, File). A month ago I decided to install a new 2008 R2 server to take over AD, DNS and DHCP, and to leave the file server role on the old one. I did exactly the following:
    1) robocopy all my data to an external HDD
    2) install the new server with 2008 R2
    3) transfer all 5 roles to move the domain to the new server (MainDC)
    4) NETLOGON and SYSVOL did not transfer, but I decided to reinitialize them, and now they are operating (MainDC)
    5) re-create and re-configure new GPOs and link them to my OUs
    6) reinstall the old server with a fresh installation of 2008 R2 (FileServer)
    7) join it to my domain with my domain credentials.
    The issue: when I share a folder on \\fileserver, the permissions I set in the sharing permissions are applied to the main shared folder and its subfolders, but the security (NTFS) settings are not applied. Say I'm sharing \\fileserver\MainFolder with a share permission allowing Authenticated Users to read, so everyone can read this main shared folder. If I then set a security permission on \\fileserver\MainFolder\User1 so that User1 can Read/Write/Modify, User1 cannot perform those operations when accessing it from the network share. I tried a lot of steps from topics online: take ownership of the folder, remove inheritance from the parent folder, apply changes to child objects. I also tried to build a new folder structure and another host PC, but I get the same issue.

    Read the article

  • How to use Ninject with XNA?

    - by Rosarch
    I'm having difficulty integrating Ninject with XNA.

        static class Program
        {
            /** The main entry point for the application. */
            static void Main(string[] args)
            {
                IKernel kernel = new StandardKernel(NinjectModuleManager.GetModules());
                CachedContentLoader content = kernel.Get<CachedContentLoader>(); // stack overflow here
                MasterEngine game = kernel.Get<MasterEngine>();
                game.Run();
            }
        }

        // constructor for the game
        public MasterEngine(IKernel kernel)
            : base(kernel)
        {
            this.inputReader = kernel.Get<IInputReader>();
            graphicsDeviceManager = kernel.Get<GraphicsDeviceManager>();
            Components.Add(kernel.Get<GamerServicesComponent>());

            // Tell the loader to look for all files relative to the "Content" directory.
            Assets = kernel.Get<CachedContentLoader>();

            // Sets dimensions of the game window
            graphicsDeviceManager.PreferredBackBufferWidth = 800;
            graphicsDeviceManager.PreferredBackBufferHeight = 600;
            graphicsDeviceManager.ApplyChanges();
            IsMouseVisible = false;
        }

    Ninject.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Ninject.Modules;
        using HWAlphaRelease.Controller;
        using Microsoft.Xna.Framework;
        using Nuclex.DependencyInjection.Demo.Scaffolding;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        namespace HWAlphaRelease
        {
            public static class NinjectModuleManager
            {
                public static NinjectModule[] GetModules()
                {
                    return new NinjectModule[1] { new GameModule() };
                }

                /// <summary>Dependency injection rules for the main game instance</summary>
                public class GameModule : NinjectModule
                {
                    #region class ServiceProviderAdapter

                    /// <summary>Delegates to the game's built-in service provider</summary>
                    /// <remarks>
                    ///   <para>
                    ///     When a class' constructor requires an IServiceProvider, the dependency
                    ///     injector cannot just construct a new one and wouldn't know that it has
                    ///     to create an instance of the Game class (or take it from the existing
                    ///     Game instance).
                    ///   </para>
                    ///   <para>
                    ///     The solution, then, is this small adapter that takes a Game instance
                    ///     and acts as if it was a freely constructable IServiceProvider
                    ///     implementation while in reality, it delegates all lookups to the
                    ///     Game's service container.
                    ///   </para>
                    /// </remarks>
                    private class ServiceProviderAdapter : IServiceProvider
                    {
                        /// <summary>Initializes a new service provider adapter for the game</summary>
                        /// <param name="game">Game the service provider will be taken from</param>
                        public ServiceProviderAdapter(Game game)
                        {
                            this.gameServices = game.Services;
                        }

                        /// <summary>Retrieves a service from the game service container</summary>
                        /// <param name="serviceType">Type of the service that will be retrieved</param>
                        /// <returns>The service that has been requested</returns>
                        public object GetService(Type serviceType)
                        {
                            return this.gameServices;
                        }

                        /// <summary>Game services container of the Game instance</summary>
                        private GameServiceContainer gameServices;
                    }

                    #endregion // class ServiceProviderAdapter

                    #region class ContentManagerAdapter

                    /// <summary>Delegates to the game's built-in ContentManager</summary>
                    /// <remarks>
                    ///   This provides shared access to the game's ContentManager. A dependency
                    ///   injected class only needs to require the ISharedContentService in its
                    ///   constructor and the dependency injector will automatically resolve it
                    ///   to this adapter, which delegates to the Game's built-in content manager.
                    /// </remarks>
                    private class ContentManagerAdapter : ISharedContentService
                    {
                        /// <summary>Initializes a new shared content manager adapter</summary>
                        /// <param name="game">Game the content manager will be taken from</param>
                        public ContentManagerAdapter(Game game)
                        {
                            this.contentManager = game.Content;
                        }

                        /// <summary>Loads or accesses shared game content</summary>
                        /// <typeparam name="AssetType">Type of the asset to be loaded or accessed</typeparam>
                        /// <param name="assetName">Path and name of the requested asset</param>
                        /// <returns>The requested asset from the shared game content store</returns>
                        public AssetType Load<AssetType>(string assetName)
                        {
                            return this.contentManager.Load<AssetType>(assetName);
                        }

                        /// <summary>The content manager this instance delegates to</summary>
                        private ContentManager contentManager;
                    }

                    #endregion // class ContentManagerAdapter

                    /// <summary>Initializes the dependency configuration</summary>
                    public override void Load()
                    {
                        // Allows access to the game class for any components with a dependency
                        // on the 'Game' or 'DependencyInjectionGame' classes.
                        Bind<MasterEngine>().ToSelf().InSingletonScope();
                        Bind<NinjectGame>().To<MasterEngine>().InSingletonScope();
                        Bind<Game>().To<MasterEngine>().InSingletonScope();

                        // Let the dependency injector construct a graphics device manager for
                        // all components depending on the IGraphicsDeviceService and
                        // IGraphicsDeviceManager interfaces
                        Bind<GraphicsDeviceManager>().ToSelf().InSingletonScope();
                        Bind<IGraphicsDeviceService>().To<GraphicsDeviceManager>().InSingletonScope();
                        Bind<IGraphicsDeviceManager>().To<GraphicsDeviceManager>().InSingletonScope();

                        // Some clever adapters that hand out the Game's IServiceProvider and allow
                        // access to its built-in ContentManager
                        Bind<IServiceProvider>().To<ServiceProviderAdapter>().InSingletonScope();
                        Bind<ISharedContentService>().To<ContentManagerAdapter>().InSingletonScope();

                        Bind<IInputReader>().To<UserInputReader>().InSingletonScope()
                            .WithConstructorArgument("keyMapping", Constants.DEFAULT_KEY_MAPPING);
                        Bind<CachedContentLoader>().ToSelf().InSingletonScope()
                            .WithConstructorArgument("rootDir", "Content");
                    }
                }
            }
        }

    NinjectGame.cs:

        /// <summary>Base class for Games making use of Ninject</summary>
        public class NinjectGame : Game
        {
            /// <summary>Initializes a new Ninject game instance</summary>
            /// <param name="kernel">Kernel the game has been created by</param>
            public NinjectGame(IKernel kernel)
            {
                Type ownType = this.GetType();
                if (ownType != typeof(Game))
                {
                    kernel.Bind<NinjectGame>().To<MasterEngine>().InSingletonScope();
                }
                kernel.Bind<Game>().To<NinjectGame>().InSingletonScope();
            }
        } // namespace Nuclex.DependencyInjection.Demo.Scaffolding

    When I try to get the CachedContentLoader, I get a stack overflow exception. I'm basing this off of this tutorial, but I really have no idea what I'm doing. Help?

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
    There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and said what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it.
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
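    As a language-neutral illustration of the pattern described here (one thread owns the data; everybody else sends messages and gets the data back by message), a minimal Python sketch of the same shape, not NAct itself:

        import queue
        import threading

        class CounterActor:
            """Owns a counter; other threads interact with it only via messages."""

            def __init__(self):
                self.inbox = queue.Queue()
                self._count = 0  # touched only by the actor's own thread
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    msg, reply = self.inbox.get()
                    if msg == "increment":
                        self._count += 1
                    elif msg == "get":
                        reply.put(self._count)  # send the data back by message

            def increment(self):
                self.inbox.put(("increment", None))

            def get(self):
                reply = queue.Queue()
                self.inbox.put(("get", reply))
                return reply.get()

        actor = CounterActor()
        for _ in range(1000):
            actor.increment()
        print(actor.get())  # 1000 -- no locks, yet no races on _count

    Because the inbox is drained in order by a single thread, there are no locks and so no deadlocks, which is exactly the trade described above.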
    C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to write tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger?
    Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grips with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work.
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
    You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews

    Read the article

  • SQL Server 2012 - AlwaysOn

    - by Claus Jandausch
    Ich war nicht nur irritiert, ich war sogar regelrecht schockiert - und für einen kurzen Moment sprachlos (was nur selten der Fall ist). Gerade eben hatte mich jemand gefragt "Wann Oracle denn etwas Vergleichbares wie AlwaysOn bieten würde - und ob überhaupt?" War ich hier im falschen Film gelandet? Ich konnte nicht anders, als meinen Unmut kundzutun und zu erklären, dass die Fragestellung normalerweise anders herum läuft. Zugegeben - es mag vielleicht strittige Punkte geben im Vergleich zwischen Oracle und SQL Server - bei denen nicht unbedingt immer Oracle die Nase vorn haben muss - aber das Thema Clustering für Hochverfügbarkeit (HA), Disaster Recovery (DR) und Skalierbarkeit gehört mit Sicherheit nicht dazu. Dieses Erlebnis hakte ich am Nachgang als Einzelfall ab, der so nie wieder vorkommen würde. Bis ich kurz darauf eines Besseren belehrt wurde und genau die selbe Frage erneut zu hören bekam. Diesmal sogar im Exadata-Umfeld und einem Oracle Stretch Cluster. Einmal ist keinmal, doch zweimal ist einmal zu viel... Getreu diesem alten Motto war mir klar, dass man das so nicht länger stehen lassen konnte. Ich habe keine Ahnung, wie die Microsoft Marketing Abteilung es geschafft hat, unter dem AlwaysOn Brading eine innovative Technologie vermuten zu lassen - aber sie hat ihren Job scheinbar gut gemacht. Doch abgesehen von einem guten Marketing, stellt sich natürlich die Frage, was wirklich dahinter steckt und wie sich das Ganze mit Oracle vergleichen lässt - und ob überhaupt? Damit wären wir wieder bei der ursprünglichen Frage angelangt.  So viel zum Hintergrund dieses Blogbeitrags - von meiner Antwort handelt der restliche Blog. "Windows was the God ..." Um den wahren Unterschied zwischen Oracle und Microsoft verstehen zu können, muss man zunächst das bedeutendste Microsoft Dogma kennen. Es lässt sich schlicht und einfach auf den Punkt bringen: "Alles muss auf Windows basieren." Die Überschrift dieses Absatzes ist kein von mir erfundener Ausspruch, sondern ein Zitat. Konkret stammt es aus einem längeren Artikel von Kurt Eichenwald in der Vanity Fair aus dem August 2012. Er lautet Microsoft's Lost Decade und sei jedem ans Herz gelegt, der die "Microsoft-Maschinerie" unter Steve Ballmer und einige ihrer Kuriositäten besser verstehen möchte. "YOU TALKING TO ME?" Microsoft C.E.O. Steve Ballmer bei seiner Keynote auf der 2012 International Consumer Electronics Show in Las Vegas am 9. Januar   Manche Dinge in diesem Artikel mögen überspitzt dargestellt erscheinen - sind sie aber nicht. Vieles davon kannte ich bereits aus eigener Erfahrung und kann es nur bestätigen. Anderes hat sich mir erst so richtig erschlossen. Insbesondere die folgenden Passagen führten zum Aha-Erlebnis: “Windows was the god—everything had to work with Windows,” said Stone... “Every little thing you want to write has to build off of Windows (or other existing roducts),” one software engineer said. “It can be very confusing, …” Ich habe immer schon darauf hingewiesen, dass in einem SQL Server Failover Cluster die Microsoft Datenbank eigentlich nichts Nenneswertes zum Geschehen beiträgt, sondern sich voll und ganz auf das Windows Betriebssystem verlässt. Deshalb muss man auch die Windows Server Enterprise Edition installieren, soll ein Failover Cluster für den SQL Server eingerichtet werden. Denn hier werden die Cluster Services geliefert - nicht mit dem SQL Server. 
SQL Server is merely one more server product for which Windows can be used in failover scenarios - just like Microsoft Exchange, Microsoft SharePoint, or any other server product hosted on Windows. Oracle can be run this way too; the keyword here is Oracle Fail Safe. But why would you, when a superior technology such as Oracle Real Application Clusters (RAC) is available at the same time - one that does not require the Windows Enterprise Edition, because Oracle ships its own clusterware, and that delivers shorter failover times, because the cluster technology is integrated into the database and does not rely on a third party.

So if a SQL Server failover cluster buys you no technical advantage, and on top of that saddles you with hidden license costs for the Windows Server Enterprise Edition, why has Microsoft not worked on a new and innovative solution since SQL Server 2000 that can keep up with Oracle RAC? Microsoft has enough developers. Money cannot be the problem either. Just read the two quotes above again and you will understand the reason. There is really no other way to explain why AlwaysOn consists of two different technologies, both of which are in turn based on Windows Server Failover Clustering (WSFC). Clear disadvantages follow from this - but more on that later. To understand AlwaysOn, it helps to briefly recall which HA/DR (High Availability/Disaster Recovery) solutions Microsoft has offered for SQL Server so far.

Replication
- Based on logical replication and a publisher/subscriber architecture
- Transactional Replication
- Merge Replication
- Snapshot Replication
Microsoft's replication is comparable to Oracle GoldenGate, although Oracle GoldenGate is the more comprehensive technology and offers high performance.

Log Shipping
Microsoft's log shipping is a simple technology, comparable to Oracle Managed Recovery in Oracle version 7. It has the following characteristics:
- Transaction log backups are shipped from the primary to one or more secondaries
- They are applied (restored) on each secondary individually
- An optional third server instance (monitor server) provides monitoring and alerting
- The log restore can be paused to allow read-only access on a secondary
- No support for automatic failover

Database Mirroring
Microsoft's database mirroring became available with SQL Server 2005; it looked like Oracle Data Guard in Oracle 9i, but was functionally not as comprehensive. An HA/DR pair is a 1:1 relationship protecting the production database (principal DB). All insert, update and delete operations are replayed on the standby database (mirrored DB).
Modes:
- Synchronous (high-safety mode)
- Asynchronous (high-performance mode)
Automatic failover:
- Supported in high-safety (synchronous) mode
- Requires a witness server
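A quick Transact-SQL sketch of such a mirroring pair - the server names, the port and the database name are invented for illustration, and the endpoint and security setup is omitted:

    -- On the mirror server: seed the mirror from a full backup plus a log
    -- backup, both restored WITH NORECOVERY
    RESTORE DATABASE SalesDB FROM DISK = N'C:\Backup\SalesDB.bak' WITH NORECOVERY;
    RESTORE LOG SalesDB FROM DISK = N'C:\Backup\SalesDB.trn' WITH NORECOVERY;

    -- On the mirror server: point the mirror at the principal
    ALTER DATABASE SalesDB SET PARTNER = N'TCP://principal.contoso.com:5022';

    -- On the principal server: complete the pairing
    ALTER DATABASE SalesDB SET PARTNER = N'TCP://mirror.contoso.com:5022';

    -- High-safety (synchronous) mode; SAFETY OFF would give high-performance mode
    ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;

    -- Optional witness, the precondition for automatic failover
    ALTER DATABASE SalesDB SET WITNESS = N'TCP://witness.contoso.com:5022';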
On the question of continuity

This raises the question of where these technologies stand in relation to SQL Server 2012. Introduced with great fanfare at the time, database mirroring was the declared tool of choice. I am not a product manager at Microsoft and can only offer my opinion here, but judging by the SQL AlwaysOn team blog, things do not look good for database mirroring - at least not in the long run:

"Does AlwaysOn Availability Group replace Database Mirroring going forward?"
"The short answer is we recommend that you migrate from the mirroring configuration or even mirroring and log shipping configuration to using Availability Group. Database Mirroring will still be available in the Denali release but will be phased out over subsequent releases. Log Shipping will continue to be available in future releases."

With that we have finally arrived at the actual topic. What is a so-called availability group, and what exactly is behind the promising-sounding AlwaysOn label?

SQL Server 2012 - AlwaysOn

Two HA features hide behind the "AlwaysOn" branding: AlwaysOn Failover Clustering, aka SQL Server Failover Cluster Instances (FCI), and AlwaysOn Availability Groups.

Failover Cluster Instances (FCI)
- Roughly correspond to Oracle's stretch cluster concept
- Built on top of Windows Server Failover Clustering (WSFC)
- Provide HA at the instance level

AlwaysOn Availability Groups
- Similar in spirit to consistency groups, as known from storage-level replication software such as EMC SRDF
- Depend on Windows Server Failover Clustering (WSFC)
- Provide HA at the database level

Note: do not confuse a SQL Server database with an Oracle database, nor an Oracle instance with a SQL Server instance. The same terms mean different things here - not infrequently the reason why Oracle and Microsoft DBAs talk past each other. Think of a SQL Server database as something closer to an Oracle schema: the SQL Server Northwind database, for example, is comparable to the Oracle Scott schema. If you want to know the exact differences, you will find a detailed description in my book "Oracle10g Release 2 für Windows und .NET", available from Lehmanns, Amazon, etc.

Windows Server Failover Clustering (WSFC)

As you can see, both AlwaysOn technologies are based on Windows Server Failover Clustering (WSFC), whether they provide high availability at the instance level or at the database level - hence a short description of WSFC. WSFC is an infrastructure feature shipped with the Windows operating system to provide HA for server applications such as Microsoft Exchange, SharePoint or SQL Server. Like any other cluster, a WSFC cluster is a group of independent servers working together to increase the availability of an application or service. If a cluster node or service fails, the service hosted on that node can be transferred automatically or manually to another available node in the cluster - commonly known as failover. Under SQL Server 2012, both AlwaysOn Availability Groups and AlwaysOn Failover Cluster Instances use WSFC as the platform technology and register components as WSFC cluster resources. Related resources are combined into a resource group, which can be made dependent on other WSFC cluster resources.
The WSFC cluster service can then detect that the SQL Server instance needs to be restarted, or trigger an automatic failover to another server node in the WSFC cluster.

Failover Cluster Instances (FCI)

A SQL Server failover cluster instance (FCI) is a single SQL Server instance running in a failover cluster made up of several Windows Server Failover Clustering (WSFC) nodes, providing HA (high availability) at the instance level. Using a multi-subnet FCI, remote DR (disaster recovery) can be supported as well. A further option for remote DR is to run a database hosted under FCI inside an availability group - more on this later.

FCI and the WSFC basis

FCI, used for local high availability of instances, resembles the outdated architecture of a cold cluster (active-passive). Under SQL Server 2008 this technology was called SQL Server 2008 Failover Clustering, and it used the Windows Server Failover Cluster. In SQL Server 2012 Microsoft has folded this base technology into the AlwaysOn brand - but it is still the classic active-passive configuration. In the failover case the sequence is as follows:

- As long as no hardware or system error occurs, all dirty pages in the buffer cache are written to disk
- All corresponding SQL Server services in the resource group are stopped on the active node
- Ownership of the resource group is transferred to another node of the FCI
- The new owner of the resource group starts its SQL Server services
- Client connection requests are automatically redirected to the new active node under the same virtual network name (VNN)

Depending on when the last checkpoint ran, the number of dirty pages in the buffer cache still to be written to disk can lead to unpredictably long failover times. To throttle this number, SQL Server 2012 introduces a capability called indirect checkpoints, which resembles the Fast-Start MTTR Target feature of the Oracle database, available since Oracle9i.
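Indirect checkpoints are a per-database option; a small Transact-SQL sketch with an invented database name - the Oracle counterpart is added as a comment for comparison:

    -- Keep the estimated recovery time for this database near 60 seconds by
    -- writing dirty pages more continuously (indirect checkpoints, SQL Server 2012)
    ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;

    -- Setting the value to 0 reverts to classic automatic checkpoints
    ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 0 SECONDS;

    -- The rough Oracle analogue (available since Oracle9i):
    -- ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;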
SQL Server Multi-Subnet Clustering

A SQL Server multi-subnet failover cluster corresponds conceptually to an Oracle RAC stretch cluster - but only at first glance. In contrast to RAC, in a local SQL Server failover cluster only one node at a time is active for a database. For data replication between geographically distant sites, Microsoft relies on third-party solutions for storage mirroring. The improvement a SQL Server 2012 implementation brings to this scenario is simply that a VLAN configuration (virtual local area network) is no longer required, as it was before. The following diagram shows how SQL Server 2012 handles this; in site A and site B, HA is ensured by a local active-passive cluster in each case.

Particular attention must be paid to configuration and tuning here, otherwise completely unacceptable failover times will result. The reason is that the downtime experienced on the client side no longer depends only on the pure failover time, but additionally on the duration of DNS replication between the DNS servers. (Remember that we are talking about multi-subnet clustering.) One must also consider how quickly the clients pick up the updated DNS information. Special settings become necessary for the node heartbeat, for HostRecordTTL (host record time-to-live) and for the intersite replication frequency of Active Directory Sites and Services:

- Default TTL on Windows Server 2008 R2: 20 minutes; recommended setting: 1 minute
- DNS update replication frequency in a Windows environment: 180 minutes; recommended setting: 15 minutes (the minimum value)
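For illustration only - a sketch with the FailoverClusters PowerShell module on Windows Server 2008 R2; the resource name is invented and must match your own clustered network name resource:

    # Lower the DNS TTL of the clustered network name to 60 seconds
    Import-Module FailoverClusters
    Get-ClusterResource "SQL Network Name (SQLFCI)" |
        Set-ClusterParameter -Name HostRecordTTL -Value 60

    # The resource has to be cycled for the new TTL to take effect
    Stop-ClusterResource "SQL Network Name (SQLFCI)"
    Start-ClusterResource "SQL Network Name (SQLFCI)"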
Looking at these values, one has to conclude that even an optimal configuration cannot meet the rigid SLAs (service level agreements) that today's business-critical applications demand for HA and DR - it implies a failover time of up to 16 minutes as experienced on the client side. An excerpt from the SQL Server 2012 online documentation:

"Cons: If a cross-subnet failover occurs, the client recovery time could be 15 minutes or longer, depending on your HostRecordTTL setting and the setting of your cross-site DNS/AD replication schedule."

We have now reached the point in our considerations that explains why I used the "Windows was the God ..." quote earlier. The unconditional dependence on Windows increasingly becomes a problem, because it raises the complexity of a Microsoft-based solution instead of reducing it - and complexity is the last thing CIOs want these days. In fairness to SQL Server 2012 and AlwaysOn, such long failover times are not a "must" but a "can". Yet even a "can" can have unforeseeable and costly consequences at the wrong moment. The unpredictability is in turn caused by the many components involved in the implementation and their interdependencies - for example three cluster solutions (two from Microsoft, one from a third party). However you turn it, there is no way around this fact, quite independently of the length of any downtime or failover. In contrast to AlwaysOn and the stretch cluster variant presented here, a comparable Oracle implementation avoids this kind of complexity born of multiple dependencies. The difference is made by database-integrated mechanisms such as Fast Application Notification (FAN) and Fast Connection Failover (FCF). For Oracle MAA configurations (Maximum Availability Architecture), inter-site failover times in the range of seconds are not unusual; if you follow the link to the Oracle MAA you will also find a number of customer case studies. This too is an important differentiator from AlwaysOn, because the Oracle technology has already proven itself many times over in highly critical environments.

Availability Groups

The so-called availability groups are - besides FCI - the second building block of AlwaysOn.

Note: before we look at them more closely, recall once more that a SQL Server database does not mean the same thing as an Oracle database, but rather corresponds to an Oracle schema; the SQL Server Northwind database is comparable to the Oracle Scott schema.

An availability group consists of a set of user databases that are treated as one group in the event of a failover. It supports one set of primary databases (the primary replica) and one to four sets of corresponding secondary databases (secondary replicas). However, not every SQL Server database can be placed in an AlwaysOn availability group. The SQL Server specialist Michael Otey lists the following requirements in his SQL Server Pro article:

- Availability groups must be created with user databases; system databases cannot be used
- The databases must be in read-write mode; read-only databases are not supported
- The databases in an availability group must be multiuser databases
- They must not use the AUTO_CLOSE feature
- They must use the full recovery model, and a full backup must exist
- A given database can belong to only one availability group, and these databases must not be configured for database mirroring
- Microsoft additionally recommends that the directory path of a database be identical on the primary and secondary servers

As you can see, availability groups are not suited to covering HA and DR on their own; the distinction between the instance level (FCI) and the database level (availability groups) matters a great deal. I was recently told that with availability groups you can do without shared storage and thereby save costs. Fair enough - you can of course install purely with availability groups and without FCI, but you should then be clear about everything that is left unprotected, and what that in turn means for disaster recovery (DR) and SLAs (service level agreements). In short, in practice you will hardly get around the combination of both AlwaysOn components and the complexity that comes with it.

Availability Groups and WSFC

AlwaysOn depends on Windows Server Failover Clustering (WSFC) to monitor and manage the current roles of the availability replicas in an availability group and to decide how a failover event affects them. The following diagram shows the relationship between availability groups and WSFC.

The availability mode is a property of each individual availability replica, so synchronous and asynchronous replicas can be mixed:

Availability mode
- Asynchronous commit mode: the primary replica commits transactions without waiting for the secondary
- Synchronous commit mode: the primary replica waits for the commit on the secondary replica

Failover types
- Automatic
- Manual
- Forced (with possible data loss)

Synchronous commit mode allows a planned, manual failover without data loss as well as an automatic failover without data loss; asynchronous commit mode allows only a forced, manual failover with possible data loss. SQL Server has no separate switchover concept as in Oracle Data Guard - all role transitions are called failover. In fact, SQL Server does not support a switchover for asynchronous connections at all; there is only the forced failover with possible data loss. A capability equivalent to a switchover under Oracle Data Guard simply does not exist.
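To illustrate how these modes are combined per replica, a Transact-SQL sketch - the instance names, endpoint URLs and the database name are invented, and the prerequisites (enabling the HADR feature on each instance, creating the mirroring endpoints, taking the initial backups) are left out:

    -- On the primary instance NODE1: two synchronous replicas with automatic
    -- failover, plus one asynchronous replica that only allows forced failover
    CREATE AVAILABILITY GROUP SalesAG
    FOR DATABASE SalesDB
    REPLICA ON
        N'NODE1' WITH (ENDPOINT_URL = N'TCP://node1.contoso.com:5022',
                       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                       FAILOVER_MODE = AUTOMATIC),
        N'NODE2' WITH (ENDPOINT_URL = N'TCP://node2.contoso.com:5022',
                       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                       FAILOVER_MODE = AUTOMATIC),
        N'NODE3' WITH (ENDPOINT_URL = N'TCP://node3.contoso.com:5022',
                       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                       FAILOVER_MODE = MANUAL);

    -- On each secondary instance, after restoring SalesDB WITH NORECOVERY:
    ALTER AVAILABILITY GROUP SalesAG JOIN;
    ALTER DATABASE SalesDB SET HADR AVAILABILITY GROUP = SalesAG;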
SQL Server FCI with Availability Groups

Besides the availability groups themselves, a second failover level can be set up by implementing SQL Server FCI (on shared storage) with WSFC. An availability replica can then be hosted either on a standalone instance or on an FCI instance. To be clear: the availability groups themselves require no shared storage. This combination can be used for local HA at the instance level and for DR at the database level through availability groups; the following diagram shows this scenario.

Caution! This is not a counterpart to Oracle RAC plus Data Guard, even if the picture may suggest as much - all secondary nodes in the FCI are purely passive. And there is a further, serious restriction: SQL Server failover cluster instances (FCI) do not support automatic AlwaysOn failover for availability groups. An availability replica hosted on an FCI can only be configured for manual failover.

Readable secondary replicas

One or more availability replicas in an availability group can be configured for read access while they are running in the secondary role. This resembles Oracle Active Data Guard, but with restrictions.
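Configured roughly like this in Transact-SQL - names invented again, as a sketch only:

    -- Allow read-intent connections on NODE3 whenever it is in the secondary role
    ALTER AVAILABILITY GROUP SalesAG
    MODIFY REPLICA ON N'NODE3'
    WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

    -- Clients then have to declare their intent, e.g. via the connection
    -- string keyword: ApplicationIntent=ReadOnly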
All queries against a secondary database are automatically mapped to the snapshot isolation level, which is based on row versioning. Microsoft attempted to emulate Oracle's MVRC (Multi Version Read Consistency) with this; in truth, SQL Server snapshot isolation is better compared with Oracle Flashback. The snapshot isolation level is a feature bolted on after the fact, not an inherent part of the database kernel as in Oracle's case. (I will write another blog post on this shortly, when I look at the new SQL Server 2012 core licensing.) In practice, the mapping to the snapshot isolation level brings serious restrictions that you should be aware of before going into production:

- If an active transaction exists on the primary database at the moment a readable secondary replica is added to the availability group, the row versions on the corresponding secondary database are not immediately fully available. An active transaction on the primary replica must first complete (commit or rollback) and its transaction record must be processed on the secondary replica. Until then, the isolation level mapping on the secondary database is incomplete and queries are temporarily blocked. Microsoft puts it like this: "This is needed to guarantee that row versions are available on the secondary replica before executing the query under snapshot isolation as all isolation levels are implicitly mapped to snapshot isolation." (SQL Storage Engine Blog: AlwaysOn: I just enabled Readable Secondary but my query is blocked?) In essence this means that an active readable replica cannot be added to the availability group without temporarily quiescing the primary replica.

- Because read operations are mapped to the snapshot isolation transaction level, the cleanup of ghost records on the primary replica can be blocked by transactions on one or more secondary replicas - for example by a long-running query on a secondary replica. The cleanup is also blocked when the connection to the secondary replica drops or the data exchange is interrupted. Log truncation is prevented in this state as well. If the state persists for a longer time, Microsoft recommends removing the secondary replica from the availability group - which is a serious downtime problem.

- The read-only workload on secondary replicas can block incoming DDL changes. Although read operations hold no shared locks thanks to row versioning, they do take Sch-S locks (schema stability locks), which can block DDL changes applied by redo operations. If DDL is blocked by a competing read workload and the threshold of the 'recovery interval' setting (a SQL Server configuration option) is exceeded, SQL Server raises the event sqlserver.lock_redo_blocked, whereupon Microsoft recommends killing the blocking readers - with no consideration whatsoever for the availability of the application.

None of these restrictions exists with Oracle Active Data Guard.

Backups on secondary replicas

On secondary replicas, backups (BACKUP DATABASE via Transact-SQL) can only be taken as copy-only backups of a full database, of files or of filegroups. Incremental backups are not supported - a serious gap compared with the backup support for physical standbys under Oracle Data Guard. (Note: a possible workaround via snapshots remains just that, a workaround.) A further limitation compared with Oracle Data Guard is that a backup cannot be taken on a secondary replica that cannot communicate with the primary replica; moreover, the secondary replica must be synchronized, or at least synchronizing, for the backup to be possible at all.
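What such a backup looks like in Transact-SQL, with invented paths and names:

    -- On a readable secondary: full database backups must be copy-only
    BACKUP DATABASE SalesDB
        TO DISK = N'E:\Backup\SalesDB_copyonly.bak'
        WITH COPY_ONLY;

    -- Regular log backups are allowed on a secondary;
    -- differential backups are not supported at all
    BACKUP LOG SalesDB TO DISK = N'E:\Backup\SalesDB_log.trn';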
Comparing Microsoft AlwaysOn with the Oracle MAA

This brings me back to the question mentioned at the beginning, put to me more than once - "When, if ever, will Oracle offer something comparable to AlwaysOn?" - and to my brief irritation at it. If you have read this blog post up to this point, you now know the answer I gave. Not every point applies to everyone, and that was never the deeper purpose of my answer: if, for example, no multi-subnet setup is involved, all criticism in that regard is moot for the moment - which does not mean it could not become relevant again tomorrow (never say never). Other pitfalls you will not necessarily hit in a test environment, but only in live operation - all the more so if you are not aware of the potential problems and run no dedicated tests for them. And whoever wants to position AlwaysOn successfully has no interest in drawing attention to possible weaknesses and the proverbial devil in the detail. That is not an accusation - it is only human. It is also understandable to concentrate first on "what works" and "what runs well" rather than on "what can lead to problems" or "what does not work". Who wants to be the killjoy? Speaking for myself, I would rather know about all possible restrictions in advance than experience them painfully firsthand after a short honeymoon. I am convinced you feel the same way. So here is a summary of all the points I consider absolutely worth mentioning in comparison with the Oracle MAA (Maximum Availability Architecture), should you be evaluating Microsoft AlwaysOn.

1. AlwaysOn is a complex technology

The SQL Server AlwaysOn stack is composed of three different technologies: Windows Server Failover Clustering (WSFC), SQL Server Failover Cluster Instances (FCI) and SQL Server Availability Groups. Such a solution cannot be called seamless, as the many restrictions documented by Microsoft confirm. While earlier SQL Server versions were moving towards HA/DR technologies of their own (such as database mirroring), Microsoft now recommends migrating away from them. Why this U-turn? It does not lead to a consistent and robust HA/DR offering for business-critical environments. Does the answer lie in my thesis that "Windows was the God ..." still applies, and that nobody wants to see the downsides of the all-too-tight coupling to Windows? Decide for yourself ...

2. Failover Cluster Instances - no RAC counterpart

The SQL Server and Windows Server clustering technology is still based on the outdated active-passive model and wastes system resources. With only two nodes in view, the full added value of an active-active cluster (such as Real Application Clusters, which Oracle developed ten years ago) is not immediately obvious. But once you know the benefit of scaling by simply adding further cluster nodes, which then all work together as one logical system, you understand what lies behind the motto "pay as you grow". An active-active cluster is about high availability too - and a failover completes faster than in an active-passive model - but it is not only about that. Note that Oracle 11g Standard Edition already includes the use of Oracle RAC on up to four sockets at no extra charge. If you want to run it on Windows, you do not need the Windows Server Enterprise Edition, since Oracle 11g ships its own clusterware: you get high availability and scalability and can use the cheaper Windows Server Standard Edition.

3. SQL Server Multi-Subnet Clustering - dependence on third-party storage mirroring

The SQL Server multi-subnet clustering architecture supports building a stretch cluster, but is based on the active-passive model. The really problematic part is that the database is protected by third-party storage mirroring technology, with no integration between Windows Server Failover Clustering (WSFC) and the underlying mirroring layer. When a failover occurs at the instance level in the cluster, there is no coordination with a possible failover at the level of the storage array.

4. Availability Groups - four, or really only two?

A primary replica allows up to four secondary replicas within an availability group, but only two of them in synchronous commit mode.
While this is an advantage over the rigid 1:1 model of database mirroring, SQL Server 2012 still falls far behind Oracle Data Guard, with its up to 30 direct standby targets - and many more possible via cascading targets. That is what makes Oracle Active Data Guard suitable for providing reader-farm scalability to Internet-facing businesses; with AlwaysOn availability groups this is not possible.

5. Availability Groups - no asynchronous switchover

Availability group technology is also positioned as a suitable vehicle for administrative tasks such as upgrades or maintenance work. One must, however, be aware of a grave deficit: in asynchronous availability mode, the only possible role transition is a forced failover with data loss! To avoid losing data during planned maintenance, you must configure the synchronous availability mode, which in turn has serious implications for WAN deployments. Follow this thought to its end and you come to the conclusion that availability groups cannot be used effectively for planned maintenance in such an environment.

6. Automatic failover - not always possible

Both SQL Server FCI and availability groups support automatic failover - but combine the two and the result is no automatic failover at all. Their interplay in a failover situation leads to race conditions, which is why this configuration no longer permits automatic failover to a replica in an availability group. Here again the underlying problem of AlwaysOn shows itself: a composition of disparate technologies plus the dependence on Windows.

7. Problematic RTO (Recovery Time Objective)

Microsoft positions the SQL Server multi-subnet clustering architecture as a viable HA/DR architecture. But given the DNS replication problem and possible client-side waits of up to 16 minutes, strict RTO requirements (recovery time objectives) cannot be met. Unlike Oracle, SQL Server has no database-integrated technologies such as Oracle Fast Application Notification (FAN) or Oracle Fast Connection Failover (FCF).

8. Problematic RPO (Recovery Point Objective)

SQL Server allows forced failover, but offers no way to automatically transfer the last bits of data from the old to the new primary replica when the availability mode was asynchronous. Oracle Data Guard, on the other hand, provides exactly this through its flush redo feature, ensuring zero data loss and the best possible RPO even in forced failover situations.

9. Readable secondary replicas with restrictions

Because readable secondary replicas run under the snapshot isolation transaction level, they carry restrictions that affect the primary database. The cleanup of ghost records on the primary database is held up by long-running queries on the readable secondary. A readable secondary database cannot be added to the availability group while there are active transactions on the primary database. In addition, DDL changes on the primary database can be blocked by queries on the secondary.
And incremental backups are not supported here either. None of these restrictions exists under Oracle Data Guard.

    Read the article

  • Sharepoint (active directory account creation mode) - Using STSADM

    - by vivek m
This question is about using the STSADM command to create a new site collection in Active Directory account creation mode. My setup is like this: I have 2 virtual PCs on a Windows XP Pro SP3 host. Both VPCs run Windows Server 2003 R2. One VPC acts as the DC, DNS server, DHCP server, has Active Directory installed, and is also the database server. The other VPC is the domain member; it is the IIS web server, POP/SMTP server, and has WSS 3.0 installed. I created a new site using the GUI in the Central Admin page. For creating a site collection under the newly created site, I needed to use the STSADM command-line tool, since it cannot be done from the Central Admin page in Active Directory account creation mode. That's where I ran into a problem:

stsadm.exe -o createsite -url http://vivek-c5ba48dca:1111/sites/Sales -owneremail [email protected] -sitetemplate STS#1

The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC)

The following is the output from the SharePoint log:

stsadm: Running createsite
9e7d Medium Initializing the configuration database connection.
95kp High Creating site http://vivek-c5ba48dca:1111/sites/Sales in content database WSS_Content_Sharepoint_1111
95kq High Creating top level site at http://vivek-c5ba48dca:1111/sites/Sales
72jz Medium Creating site: URL "/sites/Sales"
72e1 High Unable to get domain DNS or forest DNS for domain sharepointsvc.com. ErrorCode=1212
8jvc Warning #1e0046: Adding user "spsalespadmin" to OU "sharepoint_ou" in domain "sharepointsvc.com" FAILED with HRESULT -2147023684.
72k1 High Cannot create site: "http://vivek-c5ba48dca:1111/sites/Sales" for owner "@\@", Error: , 0x800704bc
8e2s Medium Unknown SPRequest error occurred. More information: 0x800704bc
95ks Critical The site /sites/Sales could not be created. The following exception occured: The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC).
72ju High stsadm: The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC) Callstack:
at Microsoft.SharePoint.Library.SPRequest.CreateSite(Guid gApplicationId, String bstrUrl, Int32 lZone, Guid gSiteId, Guid gDatabaseId, String bstrDatabaseServer, String bstrDatabaseName, String bstrDatabaseUsername, String bstrDatabasePassword, String bstrTitle, String bstrDescription, UInt32 nLCID, String bstrWebTemplate, String bstrOwnerLogin, String bstrOwnerUserKey, String bstrOwnerName, String bstrOwnerEmail, String bstrSecondaryContactLogin, String bstrSecondaryContactUserKey, String bstrSecondaryContactName, String bstrSecondaryContactEmail, Boolean bADAccountMode, Boolean bHostHeaderIsSiteName)
at Microsoft.SharePoint.Administration.SPSiteCollection.Add(SPContentDataba...
72ju High ...se database, String siteUrl, String title, String description, UInt32 nLCID, String webTemplate, String ownerLogin, String ownerName, String ownerEmail, String secondaryContactLogin, String secondaryContactName, String secondaryContactEmail, String quotaTemplate, String sscRootWebUrl, Boolean useHostHeaderAsSiteName)
at Microsoft.SharePoint.Administration.SPSiteCollection.Add(String siteUrl, String title, String description, UInt32 nLCID, String webTemplate, String ownerLogin, String ownerName, String ownerEmail, String secondaryContactLogin, String secondaryContactName, String secondaryContactEmail, Boolean useHostHeaderAsSiteName)
at Microsoft.SharePoint.StsAdmin.SPCreateSite.Run(StringDictionary keyValues)
at Microsoft.SharePoint.StsAdmin.SPStsAdmin.RunOperation(SPGlobalAdmi...
72ju High ...n globalAdmin, String strOperation, StringDictionary keyValues, SPParamCollection pars)
8wsw High Now terminating ULS (STSADM.EXE, onetnative.dll)

It seems to me that the trouble started with this: "Unable to get domain DNS or forest DNS for domain sharepointsvc.com. ErrorCode=1212". The network connection to the sharepointsvc.com domain seems to be fine:

C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>stsadm -o getproperty -pn ADAccountDomain
<Property Exist="Yes" Value="sharepointsvc.com" />

C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>stsadm -o getproperty -pn ADAccountOU
<Property Exist="Yes" Value="sharepoint_ou" />

C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>nslookup sharepointsvc.com
Server: vm-winsrvr2003.sharepointsvc.com
Address: 192.168.0.5
Name: sharepointsvc.com
Addresses: 192.168.0.21, 192.168.0.5

Is there any way of checking the domain connection from within SharePoint (like using some getproperty of the STSADM tool)? Does anyone have any clue about this? (Any pointers would be very helpful.) Thanks.

    Read the article

  • C++ virtual functions.Problem with vtable

    - by adivasile
I'm doing a little project in C++ and I've run into some problems regarding virtual functions. I have a base class with some virtual functions:

#ifndef COLLISIONSHAPE_H_
#define COLLISIONSHAPE_H_

namespace domino {

class CollisionShape : public DominoItem {
public:
    // CONSTRUCTOR
    //-------------------------------------------------
    // SETTERS
    //-------------------------------------------------
    // GETTERS
    //-------------------------------------------------
    virtual void GetRadius() = 0;
    virtual void GetPosition() = 0;
    virtual void GetGrowth(CollisionShape* other) = 0;
    virtual void GetSceneNode();
    // OTHER
    //-------------------------------------------------
    virtual bool overlaps(CollisionShape* shape) = 0;
};

}
#endif /* COLLISIONSHAPE_H_ */

and a SphereShape class which extends CollisionShape and implements the methods above:

/* SphereShape.h */
#ifndef SPHERESHAPE_H_
#define SPHERESHAPE_H_

#include "CollisionShape.h"

namespace domino {

class SphereShape : public CollisionShape {
public:
    // CONSTRUCTOR
    //-------------------------------------------------
    SphereShape();
    SphereShape(CollisionShape* shape1, CollisionShape* shape2);
    // DESTRUCTOR
    //-------------------------------------------------
    ~SphereShape();
    // SETTERS
    //-------------------------------------------------
    void SetPosition();
    void SetRadius();
    // GETTERS
    //-------------------------------------------------
    cl_float GetRadius();
    cl_float3 GetPosition();
    SceneNode* GetSceneNode();
    cl_float GetGrowth(CollisionShape* other);
    // OTHER
    //-------------------------------------------------
    bool overlaps(CollisionShape* shape);
};

}
#endif /* SPHERESHAPE_H_ */

and the .cpp file:

/* SphereShape.cpp */
#include "SphereShape.h"

#define max(a,b) (a>b?a:b)

namespace domino {

// CONSTRUCTOR
//-------------------------------------------------
SphereShape::SphereShape(CollisionShape* shape1, CollisionShape* shape2) {
}

// DESTRUCTOR
//-------------------------------------------------
SphereShape::~SphereShape() {
}

// SETTERS
//-------------------------------------------------
void SphereShape::SetPosition() {
}

void SphereShape::SetRadius() {
}

// GETTERS
//-------------------------------------------------
void SphereShape::GetRadius() {
}

void SphereShape::GetPosition() {
}

void SphereShape::GetSceneNode() {
}

void SphereShape::GetGrowth(CollisionShape* other) {
}

// OTHER
//-------------------------------------------------
bool SphereShape::overlaps(CollisionShape* shape) {
    return true;
}

}

These classes, along with some others, get compiled into a shared library.
Building libdomino.so:

g++ -m32 -lpthread -ldl -L/usr/X11R6/lib -lglut -lGLU -lGL -shared -lSDKUtil -lglut -lGLEW -lOpenCL -L/home/adrian/AMD-APP-SDK-v2.4-lnx32/lib/x86 -L/home/adrian/AMD-APP-SDK-v2.4-lnx32/TempSDKUtil/lib/x86 -L"/home/adrian/AMD-APP-SDK-v2.4-lnx32/lib/x86" -lSDKUtil -lglut -lGLEW -lOpenCL -o build/debug/x86/libdomino.so build/debug/x86//Material.o build/debug/x86//Body.o build/debug/x86//SphereShape.o build/debug/x86//World.o build/debug/x86//Engine.o build/debug/x86//BVHNode.o

When I compile the code that uses this library, I get the following error:

../../../lib/x86//libdomino.so: undefined reference to `vtable for domino::CollisionShape'
../../../lib/x86//libdomino.so: undefined reference to `typeinfo for domino::CollisionShape'

Command used to compile the demo that uses the library (note the -ldomino flag):

g++ -o build/debug/x86/startdemo build/debug/x86//CMesh.o build/debug/x86//CSceneNode.o build/debug/x86//OFF.o build/debug/x86//Light.o build/debug/x86//main.o build/debug/x86//Camera.o -m32 -lpthread -ldl -L/usr/X11R6/lib -lglut -lGLU -lGL -lSDKUtil -lglut -lGLEW -ldomino -lSDKUtil -lOpenCL -L/home/adrian/AMD-APP-SDK-v2.4-lnx32/lib/x86 -L/home/adrian/AMD-APP-SDK-v2.4-lnx32/TempSDKUtil/lib/x86 -L../../../lib/x86/ -L"/home/adrian/AMD-APP-SDK-v2.4-lnx32/lib/x86"

And when I run the demo, I manually tell it about the library:

LD_LIBRARY_PATH=../../lib/x86/:$AMDAPPSDKROOT/lib/x86:$LD_LIBRARY_PATH bin/x86/startdemo

After reading a bit about virtual functions and virtual tables, I understood that virtual tables are handled by the compiler and I shouldn't have to worry about them, so I'm a little confused about how to handle this issue. I'm using gcc version 4.6.0 20110530 (Red Hat 4.6.0-9) (GCC).

Later edit: I'm really sorry, but I wrote the code by hand directly here; I have defined the return types in the code. I apologize to the 2 people that answered below. I have to mention that I am a beginner at using more complex project layouts in C++. By this I mean more complex makefiles, shared libraries, stuff like that.
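For readers hitting the same linker message, here is a minimal illustration (not from the project above) of how this class of error arises under GCC's key-function rule: the vtable and typeinfo of a class are emitted in the translation unit that defines its first non-inline, non-pure virtual member function. A non-pure virtual that is declared but never defined - the hypothetical keyFn below plays the role GetSceneNode() plays in the post - means no translation unit ever emits the vtable:

// base.h
struct Base {
    virtual void pureFn() = 0;  // pure virtual: no definition required
    virtual void keyFn();       // non-pure virtual: GCC emits Base's vtable in
                                // the TU that defines this "key function"
};

// main.cpp
#include "base.h"
struct Derived : Base {
    void pureFn() {}            // Derived is complete; Base::keyFn() is still
                                // never defined anywhere
};

int main() {
    Derived d;                  // constructing d references Base's vtable/typeinfo
    d.pureFn();
    return 0;
}

// $ g++ main.cpp
// -> undefined reference to `vtable for Base'
//    (fix: define Base::keyFn() somewhere, or declare it pure virtual)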

    Read the article

  • httpd keeps crashing without any reference to why in the logs

    - by Fred
    I have the logs set to debug in the hopes of tracking down what's causing the crash, but I can't find anything. Here is the error_log. [Thu Jan 06 10:27:35 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 19999 for (*) [Thu Jan 06 14:47:04 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Thu Jan 06 14:47:04 2011] [info] Init: Seeding PRNG with 256 bytes of entropy [Thu Jan 06 14:47:04 2011] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu Jan 06 14:47:04 2011] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu Jan 06 14:47:04 2011] [info] Init: Initializing (virtual) servers for SSL [Thu Jan 06 14:47:04 2011] [info] Server: Apache/2.2.3, Interface: mod_ssl/2.2.3, Library: OpenSSL/0.9.8e-fips-rhel5 [Thu Jan 06 14:47:04 2011] [notice] Digest: generating secret for digest authentication ... [Thu Jan 06 14:47:04 2011] [notice] Digest: done [Thu Jan 06 14:47:04 2011] [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0xb9dc2480 rmm=0xb9dc24b0 for VHOST: server.fredfinn.com [Thu Jan 06 14:47:04 2011] [info] APR LDAP: Built with OpenLDAP LDAP SDK [Thu Jan 06 14:47:04 2011] [info] LDAP: SSL support available [Thu Jan 06 14:47:05 2011] [info] Init: Seeding PRNG with 256 bytes of entropy [Thu Jan 06 14:47:05 2011] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu Jan 06 14:47:05 2011] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory() [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4266 indexes [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(623): division_offset = 64 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(625): division_size = 15998 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(627): queue_size = 1604 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(629): index_num = 133 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(631): index_offset = 8 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(633): index_size = 12 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(637): cache_data_size = 14386 [Thu Jan 06 14:47:05 2011] [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory() [Thu Jan 06 14:47:05 2011] [info] Shared memory session cache initialised [Thu Jan 06 14:47:05 2011] [info] Init: Initializing (virtual) servers for SSL [Thu Jan 06 14:47:05 2011] [info] Server: Apache/2.2.3, Interface: mod_ssl/2.2.3, Library: OpenSSL/0.9.8e-fips-rhel5 [Thu Jan 06 14:47:05 2011] [warn] pid file /etc/httpd/run/httpd.pid overwritten -- Unclean shutdown of previous Apache run? 
[Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26527 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26527 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26528 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26528 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26529 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26529 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26530 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26530 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26532 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26532 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26533 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26533 for (*) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26534 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26534 for (*) [Thu Jan 06 14:47:05 2011] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations [Thu Jan 06 14:47:05 2011] [info] Server built: Aug 30 2010 12:32:08 [Thu Jan 06 14:47:05 2011] [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem) [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 26531 for worker proxy:reverse [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [Thu Jan 06 14:47:05 2011] [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 26531 for (*) The logs are setup as: ErrorLog logs/error_log LogLevel debug LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log common CustomLog logs/access_log combined ServerSignature On

    Read the article

  • How to use pthread_atfork() and pthread_once() to reinitialize mutexes in child processes

    - by Blair Zajac
We have a C++ shared library that uses ZeroC's Ice library for RPC, and unless we shut down Ice's runtime, we've observed child processes hanging on random mutexes. The Ice runtime starts threads, has many internal mutexes, and keeps open file descriptors to servers. Additionally, we have a few mutexes of our own to protect our internal state. Our shared library is used by hundreds of internal applications, so we don't have control over when the process calls fork(); we need a way to safely shut down Ice and lock our mutexes while the process forks. Reading the POSIX standard on pthread_atfork() on handling mutexes and internal state:

"Alternatively, some libraries might have been able to supply just a child routine that reinitializes the mutexes in the library and all associated states to some known value (for example, what it was when the image was originally executed). This approach is not possible, though, because implementations are allowed to fail *_init() and *_destroy() calls for mutexes and locks if the mutex or lock is still locked. In this case, the child routine is not able to reinitialize the mutexes and locks."

On Linux, this test C program returns EPERM from pthread_mutex_unlock() in the child pthread_atfork() handler. (Linux requires adding _NP to the PTHREAD_MUTEX_ERRORCHECK macro for it to compile.) This program is linked from this good thread.

Given that it's technically not safe or legal to unlock or destroy a mutex in the child, I'm thinking it's better to keep pointers to mutexes and have the child make new pthread_mutex_t objects on the heap, leaving the parent's mutexes alone and accepting a small memory leak. The only remaining issue is how to reinitialize the state of the library, and for that I'm thinking of resetting a pthread_once_t. Since POSIX provides an initializer for pthread_once_t, perhaps it can be reset to its initial state:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static pthread_once_t once_control = PTHREAD_ONCE_INIT;
static pthread_mutex_t *mutex_ptr = 0;

static void setup_new_mutex() {
    mutex_ptr = malloc(sizeof(*mutex_ptr));
    pthread_mutex_init(mutex_ptr, 0);
}

static void prepare() {
    pthread_mutex_lock(mutex_ptr);
}

static void parent() {
    pthread_mutex_unlock(mutex_ptr);
}

static void child() {
    // Reset the once control.
    pthread_once_t once = PTHREAD_ONCE_INIT;
    memcpy(&once_control, &once, sizeof(once_control));
    setup_new_mutex();
}

static void init() {
    setup_new_mutex();
    pthread_atfork(&prepare, &parent, &child);
}

int my_library_call(int arg) {
    pthread_once(&once_control, &init);
    pthread_mutex_lock(mutex_ptr);
    // Do something here that requires the lock.
    int result = 2*arg;
    pthread_mutex_unlock(mutex_ptr);
    return result;
}

In the above sample, in child() I reset the pthread_once_t by copying over a fresh pthread_once_t initialized with PTHREAD_ONCE_INIT, and set up a new heap-allocated pthread_mutex_t, abandoning the parent's. This is hacky, but maybe the best way of dealing with this while skirting the standards. If the pthread_once_t contains a mutex, then the system must have a way of initializing it from its PTHREAD_ONCE_INIT state. If it contains a pointer to a mutex allocated on the heap, then it'll be forced to allocate a new one and set the address in the pthread_once_t. I'm hoping it doesn't use the address of the pthread_once_t for anything special, which would defeat this.
Searching the comp.programming.threads group for pthread_atfork() turns up a lot of good discussion and shows how little the POSIX standard really provides to solve this problem. There's also the issue that one should only call async-signal-safe functions from pthread_atfork() handlers, and it appears the most important one is the child handler, where only a memcpy() is done. Does this work? Is there a better way of dealing with the requirements of our shared library?

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >