Search Results

Search found 10859 results on 435 pages for 'raid controller'.

Page 35/435 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • RAID 1 after install and two controllers

    - by jfreak53
    I have a question regarding RAID 1. Can I set up software RAID 1 after having installed the first drive and set up Ubuntu 12? I know that during the server install and partitioning I can select RAID and set it up then, but what I am not clear on is how in the world to set up RAID 1 after the fact. Can someone provide directions for this? Also, can I RAID 1 two drives, one being 500GB and the mirror drive being 1TB? Of course the mirror drive would only get a 500GB partition, but that's my point. Lastly, can one drive be on IDE and the other on a SATA controller? I know speed will be an issue; that doesn't matter. I just need to know if it will work without corrupting data and whether it's the same process. Thanks.
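
    A minimal sketch of one common way to do this with mdadm, assuming the existing install lives on /dev/sda1 and the new disk is /dev/sdb (all device names here are hypothetical): build a degraded one-disk mirror on the new drive, copy the data across, then add the old partition as the second member.

        # create a mirror with the second member deliberately missing
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
        sudo mkfs.ext4 /dev/md0               # format the new array
        # ... copy the data from /dev/sda1 to /dev/md0, update fstab and grub ...
        sudo mdadm /dev/md0 --add /dev/sda1   # old partition joins the mirror and resyncs
        cat /proc/mdstat                      # watch the resync progress

    As for mixing controllers: mdadm only sees block devices, so an IDE member and a SATA member can share a mirror; the array size is simply capped by the smaller partition.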

    Read the article

  • RAID 10: how does the layout work?

    - by Bastien974
    I'm trying to figure out how exactly RAID 10 works in Linux with mdadm. I want to create a RAID 10 out of 4 partitions, let's say a, b, c and d; a and b are on array 1, c and d on array 2. So what I want is to have the pairs a and b, c and d in RAID 0, and then, on top of that, a RAID 1. The option in the mdadm command to configure the layout is -p, --layout, with the choices near, far and offset (see here). I want to keep my data safe if, for example, array 1 fails, which would mean that every chunk of data is always copied on both arrays. How do I have to set up my RAID 10: near or far?
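
    A hedged sketch of how this could look (device names are hypothetical; sda1/sdb1 stand for a/b on array 1, sdc1/sdd1 for c/d on array 2). With the near layout, the two copies of each chunk go to adjacent devices in the order they are listed, so alternating the arrays pairs a with c and b with d, which is what keeps the data alive if one whole array disappears:

        sudo mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
            /dev/sda1 /dev/sdc1 /dev/sdb1 /dev/sdd1
        sudo mdadm --detail /dev/md0    # should report "Layout : near=2"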

    Read the article

  • Law of Demeter in MVC regarding Controller-View communication

    - by Antonio MG
    The scenario: Having a Controller that controls a view composed of complex subviews. Each one of those subviews is a separate class in a separate file. For example, one of those subviews is called ButtonsView, and has a bunch of buttons. The Controller has to access those buttons. Would accessing those buttons like this: controllerMainView.buttonsView.firstButton.state(); be a violation of the LOD? On one hand, it could be, because the controller is accessing the inner hierarchy of the view. On the other, a Controller should be aware of what happens inside the view and how it is composed. Any thoughts?

    Read the article

  • How to recover data from Dell dimension 9150 RAID 0 on another system?

    - by Adam
    I have a Dell Dimension 9150 which has failed. I'm trying to recover the data. It had two 250GB SATA drives in a RAID 0 configuration. I'm trying to use a shuttle PC running Windows 7 to recover the data from the drives, which contained an XP boot volume. I just want the data; it doesn't have to boot. What program would I need to rebuild / interrogate the drives (one of them is failing the onboard hardware test)? What drivers would I need to install? Windows 7 sees the drive as one partition but doesn't see volume information. Ubuntu can see the drive as one partition and can also tell what the drive is called, but can't access the data. Any help appreciated!! :)
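
    One hedged thing to try from an Ubuntu live session, assuming the Dimension used the onboard Intel fake-RAID (the package, set name and mount point below are assumptions): let dmraid assemble the striped set and mount it read-only.

        sudo apt-get install dmraid
        sudo dmraid -ay                  # activate any fake-RAID sets found on the disks
        ls /dev/mapper/                  # the assembled stripe and its partitions show up here
        sudo mount -o ro /dev/mapper/isw_xxxx_Volume01 /mnt   # actual set name will differ

    If one of the drives really is dying, imaging it first (ddrescue) before any of this is the safer order of operations.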

    Read the article

  • How does one enable --write-mostly with Linux RAID?

    - by user76871
    Unfortunately the mdadm and mdadm.conf man pages are not quite up to par. I would like to enable the --write-mostly flag for my RAID, but neither the man pages nor the internet will tell me how. I am not aware of any place to put default arguments for mdadm, nor aware of when it would be launched and by what. It seems the logical place to add this information is mdadm.conf, but the flag is unmentioned in man mdadm.conf. Where and how can I enable --write-mostly? Thank you.
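
    Two hedged ways this is usually done, neither of which involves mdadm.conf: pass the flag on the mdadm command line when a device is created or added, or toggle it through sysfs on a running array (device names below are hypothetical; the sysfs route needs a reasonably recent kernel).

        # flag a member at --add (or --create) time
        sudo mdadm /dev/md0 --add --write-mostly /dev/sdc1

        # or set / clear the flag on an existing member via sysfs
        echo writemostly  | sudo tee /sys/block/md0/md/dev-sdc1/state
        echo -writemostly | sudo tee /sys/block/md0/md/dev-sdc1/state   # clears it

        cat /proc/mdstat   # write-mostly members should show a (W) marker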

    Read the article

  • mdadm raid1 fails to resync

    - by JuanD
    Hello, I'm trying to solve this problem I'm having with an mdadm raid1. I have an ubuntu 9.04 server running on a software 2-drive raid1 with mdadm. Yesterday, one of the drives failed, and so I replaced it with a brand new drive of the same size. I removed the faulty drive, copied the partition from the remaining good drive to the new drive and then added it to the raid. It re-synced and the system worked fine, until the drive that hadn't failed, was also labeled failed. Now I had the raid running solely on the new drive. So I purchased another drive and repeated the procedure above. So now I had 2 brand new drives and the raid was syncing. However, after a few minutes I checked /proc/mdstat and the raid was no longer syncing. mdadm --detail /dev/md1 shows: (sdb is the first new drive, and sdc is the second new drive) root@dola:/home/jjaramillo# mdadm --detail /dev/md1 /dev/md1: Version : 00.90 Creation Time : Sat Dec 20 00:42:05 2008 Raid Level : raid1 Array Size : 974711680 (929.56 GiB 998.10 GB) Used Dev Size : 974711680 (929.56 GiB 998.10 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Wed Jun 2 10:09:35 2010 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 UUID : bba497c6:5029ba0b:bfa4f887:c0dc8f3d Events : 0.5395594 Number Major Minor RaidDevice State 2 8 35 0 spare rebuilding /dev/sdc3 1 8 19 1 active sync /dev/sdb3 I've tried removing and re-adding the drive a few times, but the same thing happens. The raid fails to resync. I've looked at /var/log/messages, and found the following: Jun 2 07:57:36 dola kernel: [35708.917337] sd 5:0:0:0: [sdb] Unhandled sense code Jun 2 07:57:36 dola kernel: [35708.917339] sd 5:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jun 2 07:57:36 dola kernel: [35708.917342] sd 5:0:0:0: [sdb] Sense Key : Medium Error [current] [descriptor] Jun 2 07:57:36 dola kernel: [35708.917346] Descriptor sense data with sense descriptors (in hex): Jun 2 07:57:36 dola kernel: [35708.917348] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 Jun 2 07:57:36 dola kernel: [35708.917357] 00 43 9e 47 Jun 2 07:57:36 dola kernel: [35708.917360] sd 5:0:0:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed So it looks like there's some kind of error on sdb (the first new drive). My question is, what would be the best approach to get the raid up and running again? I've thought about dd'ing the /dev/md1 to a blank hard drive, then re-doing the raid from scratch and loading the data back, but there could be an easier solution.. Any help would be appreciated.
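
    Given the "Unrecovered read error" on the new sdb, one hedged first step is to image the still-running degraded array onto a spare disk with GNU ddrescue rather than plain dd, so a second failure doesn't take the data with it (device names are assumptions):

        sudo apt-get install gddrescue
        # copy the degraded array to a blank disk, keeping a map file so the copy can resume
        sudo ddrescue -f /dev/md1 /dev/sdd /root/md1-rescue.log

    With a copy safe somewhere, the failing sdb can be replaced and re-added with mdadm --add as before.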

    Read the article

  • Can a RAID disk setup crash if only 1 hard disk fails?

    - by Steve Rodrigue
    I am a web developer. I don't have much experience with hardware. For this reason, I use managed servers. This morning, one of the drives in our setup failed, and yet the full site went down. I asked my web host what happened and he replied that the hard disk failed in such a way that the RAID controller couldn't work properly. Have you guys ever seen that before? Is it possible? Thanks for any help on this, guys. I need to know if my web host is being honest with me. Steve

    Read the article

  • What is the procedure to replace a failing hard drive in a RAID array?

    - by slayton
    3 years ago a co-worker set up a software RAID-6 array on Ubuntu 9.04, and I'm getting messages from the OS that the drive has bad sectors and should be replaced. I'd like to remove this drive and replace it with a new one; however, I have never done this before and I'm terrified that in the process of fixing the array I'm going to end up ruining it. I know the device ID of the array and I know the device IDs of the individual drives in the array. Additionally, I physically have the bad drive. What are the steps to replace the bad drive with a new drive and get the array running again?
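
    A hedged outline of the usual mdadm sequence, assuming the array is /dev/md0, the failing member is /dev/sdc1 and /dev/sda is a healthy member to copy the partition layout from (all names hypothetical); confirm each device with mdadm --detail before acting on it.

        sudo mdadm /dev/md0 --fail /dev/sdc1      # mark the bad member as failed
        sudo mdadm /dev/md0 --remove /dev/sdc1    # remove it from the array
        # power down, physically swap the drive, boot back up
        sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdc   # copy the partition layout onto the new drive
        sudo mdadm /dev/md0 --add /dev/sdc1       # add the new partition; the rebuild starts
        watch cat /proc/mdstat                    # monitor the rebuild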

    Read the article

  • One controller per page or many pages in one controller?

    - by Rushino
    I just wanted some advice regarding the MVC way of doing things. I am using CodeIgniter and I was wondering whether it's better to have one controller per page for a website or one controller for all the pages. Let's say I have a simple website where you can visit the homepage, log in, create an account and contact the admin. Would it be better to have these controllers: frontend (index), login, account and contact, or to have one controller called frontend (or whatever) with actions such as login, createAccount and contact? When do you know if it's better to use one controller in a situation?

    Read the article

  • Can different drive speeds and sizes be used in a hardware RAID configuration w/o affecting performance?

    - by R. Dill
    Specifically, I have a RAID 1 array configuration with two 500gb 7200rpm SATA drives mirrored as logical drive 1 (a) and two of the same mirrored as logical drive 2 (b). I'd like to add two 1tb 5400rpm drives in the same mirrored fashion as logical drive 3 (c). These drives will only serve as file storage with occasional but necessary access, and therefore, space is more important than speed. In researching whether this configuration is doable, I've been told and have read that the array will only see the smallest drive size and slowest speed. However, my understanding is that as long as the pairs themselves aren't mixed (and in this case, they aren't) that the array should view and use all drives at their actual speed and size. I'd like to be sure before purchasing the additional drives. Insight anyone?

    Read the article

  • Xbox 360 controller problems

    - by DNice
    Hello everyone, another new Linux user here. Most things are going well except for the 360 controller. There are so many posts about this that it gets a little confusing which one to follow. Anyway, someone told me that Ubuntu 12.04 comes with support for the 360 wireless receiver and that it's just plug and play. When I plug my receiver in and run jstest-gtk, 4 generic Xbox pads come up in the joystick preferences window. Now, the controller itself isn't even on, and when it is on it doesn't sync; all four lights just flash. What am I doing incorrectly? Before it is asked: yes, this controller & receiver both work in Windows 7.

    Read the article

  • 24TB RAID 6 configuration

    - by Phil
    I am in charge of a new website in a niche industry that stores lots of data (10+ TB per client, growing to 2 or 3 clients soon). We are considering ordering about $5000 worth of 3TB drives (10 in a RAID 6 configuration and 10 for backup), which will give us approximately 24 TB of production storage. The data will be written once and remain unmodified for the lifetime of the website, so we only need to do a backup one time. I understand basic RAID theory, however I am not experienced with it. My question is, does this sound like a good configuration? What potential problems could this setup cause? Also, what is the best way to do a one-time backup? Have two RAID 6 arrays, one for offsite backup and one for production? Or should I backup the RAID 6 production array to a JBOD? EDIT: The data server is running Windows 2008 Server x64. EDIT 2: To reduce rebuild time, what would you think about using two RAID 5's instead of one RAID 6?

    Read the article

  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8-drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400MB/sec, and a basic dd write test shows about the same. When I run the two simultaneously, I see the read speed drop to ~5MB/sec while the write speed stays at more or less the same 400MB/sec. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes. If I turn the controller's writeback cache off, I don't see a 50:50 split, but I do see a marked improvement, somewhere around 100MB/s reads and 300MB/s writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal) I can end up with 150MB/sec reads and 150MB/sec writes; i.e. a reduction in total throughput, but certainly more suitable for my workload. Is this a real phenomenon? Or is my synthetic test too simplistic? The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests because they all just land in the controller's cache, but they must be carried out at some point. I would guess the actual disk writes are occurring when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using writeback cache on a system with non-trivial write loads. I've been searching for discussions around this all afternoon and found nothing. What am I missing?
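
    For reference, a sketch of the queue tuning mentioned above, assuming the array appears as /dev/sda (the device name and the value of 8 are just the ones from this experiment, not recommendations):

        cat /sys/block/sda/queue/scheduler                    # confirm [deadline] is selected
        cat /sys/block/sda/queue/nr_requests                  # default is typically 128
        echo 8 | sudo tee /sys/block/sda/queue/nr_requests    # the setting found helpful above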

    Read the article

  • Why does my simple RAID 1 backup storage sometimes perform really slowly?

    - by randomguy
    I bought 2x Samsung F3 EcoGreen 2TB hard disks to make backup storage. I put them in RAID 1 (mirror) mode, made a single partition and formatted it to NTFS; I'm running Windows 7. For some reason, accessing the drive's contents (simply by navigating folders) is sometimes really slow. Opening D:/photos/ can sometimes take several seconds before it starts showing any of the folder's contents. The same applies to other folders. What could be causing this and what could I do to improve the performance? I remember that there was an option somewhere inside Windows to choose faster access at the cost of less reliable persistence (read/write) operations. It was a tick box inside some dialog. At the time, it felt like a good idea to untick the option and get more reliable persistence but slower access, but now I'm regretting it. I'm unable to find this dialog; I've looked hard. I don't know if it would make any difference anyway. Oh, and I've run a disk scan and defrag on the drive. No errors, and the speed isn't improved.

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense? A single RAID 1+0, containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files. Or... Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files. The reason for this question is a disagreement between me (SQL Developer) and a co-worker (DBA). For every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and were placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO that is done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position. http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level. But... every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs. Everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

    Read the article

  • How to change the default domain controller when querying AD in a different site?

    - by Linefeed
    We have 2 different locations, and at both sites we have multiple domain controllers (Win2008). In our application we use serverless binding to execute our LDAP queries: http://msdn.microsoft.com/en-us/library/ms677945(v=vs.85).aspx. If we look at the DnsHostName of LDAP://RootDse on site B, we always get the default domain controller of site A. Therefore all LDAP queries go much slower. Is there a way to change the default domain controller per site?
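
    One hedged way to narrow this down before touching code: with serverless binding the DC locator is supposed to prefer a domain controller in the client's own site, so it is worth checking the site/subnet mapping from a machine at site B (the domain and site names below are placeholders):

        rem which DC does the locator hand back, and from which site?
        nltest /dsgetdc:example.com
        rem which AD site does this machine believe it is in?
        nltest /dsgetsite
        rem force discovery of a DC in a specific site, for comparison
        nltest /dsgetdc:example.com /site:SiteB

    If the machine at site B reports the wrong site, fixing the subnet-to-site assignment in Active Directory Sites and Services is usually cleaner than hard-coding a server name into the bind string.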

    Read the article

  • How to communicate with "Microsoft ACPI-Compliant Embedded Controller" driver?

    - by YT
    I'd like to communicate with an embedded controller device in a notebook through I/O ports 62/66. When running on XP, the communication might collide with the "Microsoft ACPI-Compliant Embedded Controller" driver, which does the same thing. Therefore, I'd like to know whether (and how) I can communicate with I/O ports 62/66 using this driver. In addition, any informative link about what this driver is doing and how will be highly appreciated.

    Read the article

  • Juju Openstack bundle: Can't launch an instance

    - by user281985
    Deployed bundle:~makyo/openstack/2/openstack, on top of 7 physical boxes and 3 virtual ones. After changing vip_iface strings to point to right devices, e.g., br0 instead of eth0, and defining "/mnt/loopback|30G", in Cinder's block-device string, am able to navigate through openstack dashboard, error free. Following http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/running-an-instance.html instructions, attempted to launch cirros 0.3.1 image; however, novalist shows the instance in error state. ubuntu@node7:~$ nova --debug boot --flavor 1 --image 28bed1bc-bc1c-4533-beee-8e0428ad40dd --key_name key2 --security_group default cirros REQ: curl -i http://keyStone.IP:5000/v2.0/tokens -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "openstack"}}}' INFO (connectionpool:191) Starting new HTTP connection (1): keyStone.IP DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 None RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:02 GMT', 'transfer-encoding': 'chunked', 'vary': 'X-Auth-Token', 'content-type': 'application/json'} RESP BODY: {"access": {"token": {"expires": "2014-06-11T00:01:02Z", "id": "3eefa1837d984426a633fe09259a1534", "tenant": {"description": "Created by Juju", "enabled": true, "id": "08cff06d13b74492b780d9ceed699239", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239", "publicURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:9696", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:9696", "publicURL": "http://nova.cloud.controller:9696"}], "endpoints_links": [], "type": "network", "name": "quantum"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:3333", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:3333", "publicURL": "http://nova.cloud.controller:3333"}], "endpoints_links": [], "type": "s3", "name": "s3"}, {"endpoints": [{"adminURL": "http://i.p.s.36:9292", "region": "RegionOne", "internalURL": "http://i.p.s.36:9292", "publicURL": "http://i.p.s.36:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239", "region": "RegionOne", "internalURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239", "publicURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:8773/services/Cloud", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:8773/services/Cloud", "publicURL": "http://nova.cloud.controller:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2", "name": "ec2"}, {"endpoints": [{"adminURL": "http://keyStone.IP:35357/v2.0", "region": "RegionOne", "internalURL": "http://keyStone.IP:5000/v2.0", "publicURL": "http://i.p.s.44:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id": "b3730a52a32e40f0a9500440d1ef1c7d", "roles": [{"id": "e020001eb9a049f4a16540238ab158aa", "name": 
"Admin"}, {"id": "b84fbff4d5554d53bbbffdaad66b56cb", "name": "KeystoneServiceAdmin"}, {"id": "129c8b49d42b4f0796109aaef2069aa9", "name": "KeystoneAdmin"}], "name": "admin"}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd HTTP/1.1" 200 719 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:03 GMT', 'x-compute-request-id': 'req-7f3459f8-d3d5-47f1-97a3-8407a4419a69', 'content-type': 'application/json', 'content-length': '719'} RESP BODY: {"image": {"status": "ACTIVE", "updated": "2014-06-09T22:17:54Z", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}, {"href": "http://External.Public.Port:9292/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "type": "application/vnd.openstack.image", "rel": "alternate"}], "id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "OS-EXT-IMG-SIZE:size": 13147648, "name": "Cirros 0.3.1", "created": "2014-06-09T22:17:54Z", "minDisk": 0, "progress": 100, "minRam": 0, "metadata": {}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 HTTP/1.1" 200 418 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:04 GMT', 'x-compute-request-id': 'req-2c153110-6969-4f3a-b51c-8f1a6ce75bee', 'content-type': 'application/json', 'content-length': '418'} RESP BODY: {"flavor": {"name": "m1.tiny", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 0, "id": "1"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers -X POST -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" -d '{"server": {"name": "cirros", "imageRef": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "key_name": "key2", "flavorRef": "1", "max_count": 1, "min_count": 1, "security_groups": [{"name": "default"}]}}' INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "POST /v1.1/08cff06d13b74492b780d9ceed699239/servers HTTP/1.1" 202 436 RESP: [202] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-41e53086-6454-4efb-bb35-a30dc2c780be', 'content-type': 'application/json', 'location': 
'http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43', 'content-length': '436'} RESP BODY: {"server": {"security_groups": [{"name": "default"}], "OS-DCF:diskConfig": "MANUAL", "id": "2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "bookmark"}], "adminPass": "oFRbvRqif2C8"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 HTTP/1.1" 200 1349 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-d91d0858-7030-469d-8e55-40e05e4d00fd', 'content-type': 'application/json', 'content-length': '1349'} RESP BODY: {"server": {"status": "BUILD", "updated": "2014-06-10T00:01:05Z", "hostId": "", "OS-EXT-SRV-ATTR:host": null, "addresses": {}, "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "bookmark"}], "key_name": "key2", "image": {"id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "links": [{"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}]}, "OS-EXT-STS:task_state": "scheduling", "OS-EXT-STS:vm_state": "building", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "flavor": {"id": "1", "links": [{"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}]}, "id": "2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "security_groups": [{"name": "default"}], "OS-EXT-AZ:availability_zone": "nova", "user_id": "b3730a52a32e40f0a9500440d1ef1c7d", "name": "cirros", "created": "2014-06-10T00:01:04Z", "tenant_id": "08cff06d13b74492b780d9ceed699239", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "config_drive": "", "metadata": {}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 HTTP/1.1" 200 418 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-896c0120-1102-4408-9e09-cd628f2dd699', 'content-type': 'application/json', 'content-length': '418'} RESP BODY: {"flavor": {"name": "m1.tiny", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "self"}, {"href": 
"http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 0, "id": "1"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd HTTP/1.1" 200 719 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-454e9651-c247-4d31-8049-6b254de050ae', 'content-type': 'application/json', 'content-length': '719'} RESP BODY: {"image": {"status": "ACTIVE", "updated": "2014-06-09T22:17:54Z", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}, {"href": "http://External.Public.Port:9292/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "type": "application/vnd.openstack.image", "rel": "alternate"}], "id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "OS-EXT-IMG-SIZE:size": 13147648, "name": "Cirros 0.3.1", "created": "2014-06-09T22:17:54Z", "minDisk": 0, "progress": 100, "minRam": 0, "metadata": {}}} +-------------------------------------+--------------------------------------+ | Property | Value | +-------------------------------------+--------------------------------------+ | OS-EXT-STS:task_state | scheduling | | image | Cirros 0.3.1 | | OS-EXT-STS:vm_state | building | | OS-EXT-SRV-ATTR:instance_name | instance-00000004 | | flavor | m1.tiny | | id | 2eb5e3ad-3044-41c1-bbb7-10f398f83e43 | | security_groups | [{u'name': u'default'}] | | user_id | b3730a52a32e40f0a9500440d1ef1c7d | | OS-DCF:diskConfig | MANUAL | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 0 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | | status | BUILD | | updated | 2014-06-10T00:01:05Z | | hostId | | | OS-EXT-SRV-ATTR:host | None | | key_name | key2 | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | name | cirros | | adminPass | oFRbvRqif2C8 | | tenant_id | 08cff06d13b74492b780d9ceed699239 | | created | 2014-06-10T00:01:04Z | | metadata | {} | +-------------------------------------+--------------------------------------+ ubuntu@node7:~$ ubuntu@node7:~$ nova list +--------------------------------------+--------+--------+----------+ | ID | Name | Status | Networks | +--------------------------------------+--------+--------+----------+ | 2eb5e3ad-3044-41c1-bbb7-10f398f83e43 | cirros | ERROR | | +--------------------------------------+--------+--------+----------+ ubuntu@node7:~$ var/log/nova/nova-compute.log shows the following error: ... 
2014-06-10 00:01:06.048 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Attempting claim: memory 512 MB, disk 0 GB, VCPUs 1 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total Memory: 3885 MB, used: 512 MB 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Memory limit: 5827 MB, free: 5315 MB 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total Disk: 146 GB, used: 0 GB 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Disk limit not specified, defaulting to unlimited 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total CPU: 2 VCPUs, used: 0 VCPUs 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] CPU limit not specified, defaulting to unlimited 2014-06-10 00:01:06.051 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Claim successful 2014-06-10 00:01:06.963 WARNING nova.network.quantumv2.api [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] No network configured! 
2014-06-10 00:01:08.347 ERROR nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Instance failed to spawn 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Traceback (most recent call last): 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1118, in _spawn 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] self._legacy_nw_info(network_info), 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 703, in _legacy_nw_info 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] network_info = network_info.legacy() 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] AttributeError: 'list' object has no attribute 'legacy' 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] 2014-06-10 00:01:08.919 AUDIT nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Terminating instance 2014-06-10 00:01:09.712 32223 ERROR nova.virt.libvirt.driver [-] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] During wait destroy, instance disappeared. 2014-06-10 00:01:09.718 INFO nova.virt.libvirt.firewall [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Attempted to unfilter instance which is not filtered 2014-06-10 00:01:09.719 INFO nova.virt.libvirt.driver [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Deleting instance files /var/lib/nova/instances/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 2014-06-10 00:01:10.044 ERROR nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 864, in _run_instance\n set_access_ip=set_access_ip)\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1123, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', ' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1118, in _spawn\n self._legacy_nw_info(network_info),\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 703, in _legacy_nw_info\n network_info = network_info.legacy()\n', "AttributeError: 'list' object has no attribute 'legacy'\n"] 2014-06-10 00:01:40.951 32223 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2014-06-10 00:01:41.072 32223 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 2861 2014-06-10 00:01:41.072 32223 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 146 2014-06-10 00:01:41.073 
32223 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1 2014-06-10 00:01:41.262 32223 INFO nova.compute.resource_tracker [-] Compute_service record updated for node5:node5.maas ... Can't seem to find any entries in quantum.conf related to "legacy". Any help would be appreciated. Cheers,
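
    One hedged observation from the log above: nova-compute warns "No network configured!" immediately before the spawn fails, so before chasing the legacy() traceback it may be worth giving the tenant a quantum network and subnet and retrying the boot (the names and CIDR below are placeholders):

        quantum net-create private
        quantum subnet-create --name private-subnet private 10.0.0.0/24
        quantum net-list
        nova boot --flavor 1 --image 28bed1bc-bc1c-4533-beee-8e0428ad40dd --key_name key2 --security_group default cirros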

    Read the article

  • How to write a test for an accounts controller with forms authentication

    - by Anil Ali
    Trying to figure out how to adequately test my accounts controller. I am having problem testing the successful logon scenario. Issue 1) Am I missing any other tests.(I am testing the model validation attributes separately) Issue 2) Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() and Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() test are not passing. I have narrowed it down to the line where i am calling _membership.validateuser(). Even though during my mock setup of the service i am stating that i want to return true whenever validateuser is called, the method call returns false. Here is what I have gotten so far AccountController.cs [HandleError] public class AccountController : Controller { private IMembershipService _membershipService; public AccountController() : this(null) { } public AccountController(IMembershipService membershipService) { _membershipService = membershipService ?? new AccountMembershipService(); } [HttpGet] public ActionResult LogOn() { return View(); } [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (_membershipService.ValidateUser(model.UserName,model.Password)) { if (!String.IsNullOrEmpty(returnUrl)) { return Redirect(returnUrl); } return RedirectToAction("Index", "Overview"); } ModelState.AddModelError("*", "The user name or password provided is incorrect."); } return View(model); } } AccountServices.cs public interface IMembershipService { bool ValidateUser(string userName, string password); } public class AccountMembershipService : IMembershipService { public bool ValidateUser(string userName, string password) { throw new System.NotImplementedException(); } } AccountControllerFacts.cs public class AccountControllerFacts { public static AccountController GetAccountControllerForLogonSuccess() { var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>(); var controller = new AccountController(membershipServiceStub); membershipServiceStub .Stub(x => x.ValidateUser("someuser", "somepass")) .Return(true); return controller; } public static AccountController GetAccountControllerForLogonFailure() { var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>(); var controller = new AccountController(membershipServiceStub); membershipServiceStub .Stub(x => x.ValidateUser("someuser", "somepass")) .Return(false); return controller; } public class LogOn { [Fact] public void Get_ReturnsViewResultWithDefaultViewName() { // Arrange var controller = GetAccountControllerForLogonSuccess(); // Act var result = controller.LogOn(); // Assert Assert.IsType<ViewResult>(result); Assert.Empty(((ViewResult)result).ViewName); } [Fact] public void Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() { // Arrange var controller = GetAccountControllerForLogonSuccess(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, null); var redirectresult = (RedirectToRouteResult) result; // Assert Assert.IsType<RedirectToRouteResult>(result); Assert.Equal("Overview", redirectresult.RouteValues["controller"]); Assert.Equal("Index", redirectresult.RouteValues["action"]); } [Fact] public void Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() { // Arrange var controller = GetAccountControllerForLogonSuccess(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, "someurl"); var redirectResult = (RedirectResult) result; // Assert Assert.IsType<RedirectResult>(result); Assert.Equal("someurl", 
redirectResult.Url); } [Fact] public void Put_ReturnsViewIfInvalidModelState() { // Arrange var controller = GetAccountControllerForLogonFailure(); var user = new LogOnModel(); controller.ModelState.AddModelError("*","Invalid model state."); // Act var result = controller.LogOn(user, "someurl"); var viewResult = (ViewResult) result; // Assert Assert.IsType<ViewResult>(result); Assert.Empty(viewResult.ViewName); Assert.Same(user,viewResult.ViewData.Model); } [Fact] public void Put_ReturnsViewIfLogonFailed() { // Arrange var controller = GetAccountControllerForLogonFailure(); var user = new LogOnModel(); // Act var result = controller.LogOn(user, "someurl"); var viewResult = (ViewResult) result; // Assert Assert.IsType<ViewResult>(result); Assert.Empty(viewResult.ViewName); Assert.Same(user,viewResult.ViewData.Model); Assert.Equal(false,viewResult.ViewData.ModelState.IsValid); } } }

    Read the article

  • Failed Software RAID0 on Linux - Attempting to recover data

    - by Gizmo_the_Great
    I have a two disk RAID0 software raid (not hardware raid) that is reported to have failed during boot and my OS won't start. Using a Live CD, I get the following output : sudo mdadm -E /dev/sdc1 /dev/sdd1 /dev/sdc1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 3710713d:fb301031:84b61247:d1d53e0f Name : HP-xw9300:0 Creation Time : Sun Sep 1 15:22:26 2013 Raid Level : -unknown- Raid Devices : 0 Avail Dev Size : 1465145328 (698.64 GiB 750.15 GB) Data Offset : 16 sectors Super Offset : 8 sectors State : active Device UUID : ad427cd2:9f885f57:7f41015f:90f8f6af Update Time : Sun Jun 8 12:35:11 2014 Checksum : a37407ff - correct Events : 1 Device Role : spare Array State : ('A' == active, '.' == missing) /dev/sdd1: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 3710713d:fb301031:84b61247:d1d53e0f Name : HP-xw9300:0 Creation Time : Sun Sep 1 15:22:26 2013 Raid Level : -unknown- Raid Devices : 0 Avail Dev Size : 976771056 (465.76 GiB 500.11 GB) Data Offset : 16 sectors Super Offset : 8 sectors State : active Device UUID : 2ea0199d:cb08d9e7:0830448a:a1e1e348 Update Time : Sun Jun 8 13:06:19 2014 Checksum : 8883c492 - correct Events : 1 Device Role : spare Array State : ('A' == active, '.' == missing) GParted lists both disks, detects the flags as 'Raid' and lists the data usage. Can anyone please help me re-assemble just so that I can copy some of the data off that I have not backed up recently? Thanks
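
    Since both superblocks now report the members as spares with an unknown RAID level, a hedged recovery path (and only after imaging both disks, because a wrong guess here is destructive) is to try a forced assemble first and, failing that, re-create the RAID0 metadata in place using the original device order and chunk size:

        sudo mdadm --stop /dev/md0
        sudo mdadm --assemble --force /dev/md0 /dev/sdc1 /dev/sdd1
        # if that refuses, recreate the metadata over the existing data; the device order
        # and chunk size must match the original (512K is only the usual default, verify it)
        sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512 --metadata=1.2 /dev/sdc1 /dev/sdd1
        sudo mount -o ro /dev/md0 /mnt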

    Read the article

  • md/raid:md2: cannot start dirty degraded array, kernel panic

    - by nl-x
    After having made use of a remote power switch, my server did not come back online. When I went to the datacenter and reboot the computer on the spot I see the server booting (I see the centos progress bar with running almost all the way to the end) and eventually giving the following messages: md/raid:md2: cannot start dirty degraded array. md/raid:md2: failed to run raid set. md: pers->run() failed ... md/raid:md2: cannot start dirty degraded array. md/raid:md2: failed to run raid set. md: pers->run() failed ... Kernel panic - not syncing: Attempted to kill init! Pid: 1, comm: init not tainted 2.6.32-279.1.1.el6.i686 #1 Call Trace: [<c083bfbc>] ? panic+0x68/0x11c [<c045a501>] ? do_exit+0x741/0x750 [<c045a54c>] ? do_group_exit+0x3c/0xa0 [<c045a5c1>] ? sys_exit_group+0x11/0x20 [<c083eba4>] ? syscall_call+0x7/0xb [<c083007b>] ? cmos_wake_setup+0x62/0x112 The server runs CentOS and has software raid, and I don't have backups of the raid settings. The only backup I have is of /home and the database dumps. (Glad to at least have those though.) Since the server is an old Dell PowerEdge 1750 with no CD-ROM drive, I have no way of booting the machine from a boot disk. I also remember in the past that the server also wouldn't boot from a bootable USB disk. So the only way I know how to boot the server is to go to the datacenter, pick up the server and take it to the office. Screw open the server. Attach a cdrom drive to an IDE slot on the motherboard. And then boot it. I am hoping you guys could help me avoid this. I have looked a bit through the boot options and I found the following boot options. When CentOS is about to boot and interrupt the boot-countdown: CentOS (2.6.32-279.1.1.el63.i686) CentOS Linux (2.6.32-71.29.1.el6.i686) centos (2.6.32-71.el6.i686) I think the first configuration is the default one, because choosing that gets me to the above mentioned kernel panic. The other ones end with something like "Sleeping forever". I can press 'e' to edit boot commands, press 'a' to modify kernel arguments and press 'c' for grub command line. The command line gives a grub prompt. But I have no idea how to get the system to boot without (trying to) access the dirty partitions. What I want to do is off course: - boot the machine - check hard drive for errors - mark the drive as clean
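
    A hedged alternative to hauling the box back to the office: md has a boot parameter for exactly this state, so from the grub menu described above it may be enough to append one option to the kernel line (press 'a' or 'e'), let the array start dirty and degraded, and then check it from the running system (this assumes the RAID code is built as a module, which is the usual CentOS case):

        md-mod.start_dirty_degraded=1     # appended to the kernel line in grub

        # once the system is up:
        cat /proc/mdstat
        mdadm --detail /dev/md2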

    Read the article

  • File server storage configuration: RAID vs LVM vs ZFS vs something else?

    - by privatehuff
    We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this: 500 GB is not really big enough (some projects are larger), and it is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now and that will only get worse once there are multiple servers. ("the project is on server2 in share4" etc.) So, I need a way to combine hard drives in such a way as to avoid complete data loss with the failure of a single drive, and so users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks OK but it seems like no one uses it. ZFS seems interesting but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users? Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective. (i.e. they see one "folder" per server) Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again but was left with a feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10, but we are on commodity hardware and the maximum number of data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives and 1.5 TB is pretty small. (Still an option for at least one server, however.) I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and stored most files on a single drive only) we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?) so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
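
    If ZFS turns out to be the direction, a hedged sketch of the one-share-per-server goal (pool, dataset and disk names are placeholders; on Ubuntu this assumes the ZFS-on-Linux packages, while on a Solaris-family OS it is built in):

        # one pool across four disks that survives any single drive failure
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
        zfs create tank/projects
        zfs set sharesmb=on tank/projects   # expose the dataset as a single SMB share
        zpool status tank                   # health view; a failed drive shows up clearly here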

    Read the article

  • "The processing of Group Policy failed" (name resolution failure on the current domain controller), but only on 2008 servers

    - by Ken Wolfrom
    I spent the last 3 months doing an upgrade from a 2003 domain to a 2008R2 domain. Our last DC was rebuilt (5 total) and brought back online. After it was put online, we have some 2008 and 2008R2 servers (10 now) getting these errors in the event logs. ERRORS Description: The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following: a) Name Resolution failure on the current domain controller. b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller). We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access a shared drive via \directory\shared on an affected server, they get this error: "THERE ARE CURRENTLY NO LOGON SERVERS AVAILABLE TO SERVICE THE LOGON REQUEST." This is only affecting the 2008 OS, and it is a random set of about 10 servers out of some 30 with this OS. The services on the machines are running OK and log in; we are able to log in with domain\user on the consoles and via RDP. We can log onto an affected machine and can get to \domainname\sysvol and can see the GPOs. We have checked the replication topology of the domain and it states all servers can replicate with no errors. We went back to the last DC, demoted it, removed DNS and then removed it from the domain, waited 24 hours, and the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still has this behavior. Bottom line: we have some servers on which the domain will not let any UDP/client-server apps or GPOs process, but the TCP-related items seem to work fine; HTTP, TCP calls, SQL and Oracle DBs connect and process. Any input on possible reasons for this issue, and fixes? It is only affecting the 2008 servers on a 2008R2 domain.

    Read the article

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight so I thought I would push this post out VERY early. When you don't sleep your mind takes interesting turns, which can be a good thing. I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server Cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by", waiting for the first to fail over. The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar - it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general. So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking" - a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

    Read the article
