Search Results

Search found 12144 results on 486 pages for 'old ixfoxleigh'.


  • 1 Million IOPS

    - by GrumpyOldDBA
    As a keen follower of storage performance I couldn't help but be drawn to this article in The Register http://www.theregister.co.uk/2010/04/14/lsi_million_iops/ this morning. I gave my 5-year-old laptop a new lease of life with an SSD, and in combination with the old drive converted to an external, managed to reduce the time of a demo query from 50-odd minutes down to 6 minutes. I also have 4 Silicon Power 32GB SSDs set up as a RAID 0 on my home server, an overblown PC. http://www.futurestorage.co.uk/index.asp?selmanuf...(read more)

    Read the article

  • Error while running bash script that moves files

    - by K.K Patel
    I am new to bash scripting and want to create a bash script that moves files older than a given number of days from a source directory to a destination directory, as defined in the script. When I run this script I get the error line 16: syntax error near unexpected token `do'

        #!/bin/bash
        echo "Enter Your Source Directory"
        read soure
        echo "Enter Your Destination Directory"
        read destination
        echo "Enter Days"
        read days
        do find $soure -mtime +$days mv $soure $destination {} \;
        echo "Files $days old moved from $soure to $destination"
        done

    Please help me fix this script.
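
    The error comes from a do with no matching for or while loop, and the find arguments are jumbled; find can do the iteration itself via -exec, which also fixes the mv call. A minimal corrected sketch of what the question seems to intend (the misspelled variable is renamed to source, and -type f is an assumption that only regular files should move):

        #!/bin/bash
        # Move files older than a given number of days from source to destination.
        echo "Enter Your Source Directory"
        read source
        echo "Enter Your Destination Directory"
        read destination
        echo "Enter Days"
        read days
        # -mtime +$days matches files modified more than $days days ago;
        # -exec runs mv once per matching file, {} standing in for the path.
        find "$source" -type f -mtime +"$days" -exec mv {} "$destination" \;
        echo "Files more than $days days old moved from $source to $destination"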

    Read the article

  • New Location for .NET 4 GAC

    - by Ricardo Peres
    .NET 4 newcomers may have realised that the old GAC location (%WINDIR%\Assembly) does not contain .NET 4 global assembly cache assemblies. Indeed, they have moved to %WINDIR%\Microsoft.NET\Assembly. It is worth noting that this new folder does not use the shell extension that the older one does (the extension that prevents us from looking directly at the folder's contents), which, IMO, is nice behaviour. The old folder continues to host pre-.NET 4 assemblies.
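
    Because the shell extension only affects Explorer, a command prompt shows the raw contents of both locations; a quick hedged sketch for comparing them:

        :: List the pre-.NET 4 GAC (Explorer normally hides this raw view):
        dir %WINDIR%\assembly

        :: List the new .NET 4 GAC (plain folders such as GAC_MSIL, GAC_32, GAC_64):
        dir %WINDIR%\Microsoft.NET\assembly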

    Read the article

  • Remap an XML feed to the address of a WordPress RSS feed

    - by cboettig
    I used to have a blog based on WordPress and moved to one based on Jekyll. I can create a new feed in Jekyll by building an atom page in XML with a bit of Liquid code. The trouble is, the location of the new feed is http://carlboettiger.info/atom.xml, while the old feed from the WordPress site is http://carlboettiger.info/feed, with no extension. How can I configure the Jekyll-created feed so that followers who have pointed their readers to the old feed address from WordPress will start to get the new content? (Site's Jekyll source here)
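
    Jekyll itself only serves static files and cannot answer at the old path, so the redirect has to come from the web server. A hedged sketch, assuming the site is served by Apache with .htaccess enabled at the web root:

        # Permanently (301) redirect the old WordPress feed URL to the new Atom
        # feed, so existing feed readers get pointed at the new content:
        Redirect 301 /feed http://carlboettiger.info/atom.xml

    Feed readers follow the 301, and most will update their stored address over time.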

    Read the article

  • I cannot save php.ini in Ubuntu 12.04

    - by Ashok KS
    I want to change upload_max_filesize = 2M to 50M, so I started editing php.ini, but when I try to save it, gedit displays the error message below:

        Could not create a backup file while saving /etc/php5/apache2/php.ini

        gedit could not back up the old copy of the file before saving the new one. You can ignore this warning and save the file anyway, but if an error occurs while saving, you could lose the old copy of the file. Save anyway?
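
    The warning usually just means the editor was started without permission to write in /etc/php5/apache2. A minimal sketch of the usual workaround on 12.04 (paths as given in the question):

        # Open the file with root privileges; sudoedit copies it to a temp file,
        # runs your editor on the copy, and writes it back as root on save:
        sudoedit /etc/php5/apache2/php.ini

        # Restart Apache so the new upload_max_filesize value takes effect:
        sudo service apache2 restart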

    Read the article

  • Friday Fun: Play Your Favorite 8-Bit NES Games Online

    - by Mysticgeek
    We finally made it to another Friday, and once again we bring you some NES fun to waste the rest of the day before the weekend. Today we take a look at a site that contains a lot of classic NES games you can play online. vNES VirtualNES.com contains hundreds of vintage NES games you can play online. If you're old enough to remember, when the NES came out it breathed life back into home console gaming. Here we will take a look at a few of the games they offer that will certainly bring back memories. Super Mario Bros 3 is a personal favorite from the 8-bit era. Play Super Mario Bros 3. Excite Bike was one of the coolest dirt bike racing games at the time, as it even allowed you to create your own tracks. Play ExciteBike. Of course, The Legend of Zelda was one of the first fantasy games many an hour has been spent on. Play The Legend of Zelda. We'd be remiss if we didn't bring up Pac-Man, since the game recently celebrated its 30th anniversary. Play Pac-Man. If you don't like the default keyboard controls, you can change them on the Options page. Join their forum and more – this site will definitely bring you back to the good old 8-bit NES days. The site contains hundreds of different games for you to get your old-school NES fix. If you're sick of waiting for the whistle to blow, this site will bring you back to the good old days when you had nothing to do but mash buttons all day. Play NES Games at virtualnes.com

    Read the article

  • Run the system configuration once the system has been installed

    - by dierre
    Hi guys, the problem is the following. I have an old computer with a SATA DVD burner. The old motherboard (an ASRock P4VT8+) is not able to recognize the burner when booting, so I had to convert my IDE hard drive to a USB drive, mount it on my laptop, and install Ubuntu from there. The problem now is that I'm getting kernel panics every now and then, so I was wondering whether it is possible to re-run only the system and hardware configuration.

    Read the article

  • Raw Materials - Og, Sumerian DBA, Part 2

    A disruptive innovation raises an old, old question.

    Read the article

  • Disaster, or Migration?

    - by Rob Farley
    This post is in two parts – technical and personal. And I should point out that it's prompted in part by this month's T-SQL Tuesday, hosted by Allen Kinsel.

    First, the technical: I've had a few conversations with people recently about migration – moving a SQL Server database from one box to another (sometimes, but not primarily, involving an upgrade). One question that tends to come up is that of downtime. Obviously there will be some period of time between the old server being available and the new one. The way that most people seem to think of migration is this:

    1. Build a new server.
    2. Stop people from using the old server.
    3. Take a backup of the old server.
    4. Restore it on the new server.
    5. Reconfigure the client applications (or alternatively, configure the new server to use the same address as the old).
    6. Bring the new server online.

    There are other things involved, such as testing, of course. But this is essentially the process that people tell me they're planning to follow. The bit that I want to look at today (as you've probably guessed from my title) is the "backup and restore" section.

    If a SQL database is using the Simple Recovery Model, then the only restore option is the last database backup. This backup could be full or differential. The transaction log never gets backed up in the Simple Recovery Model. Instead, it truncates regularly to stay small. One that's using the Full Recovery Model (or Bulk-Logged) won't truncate its log – the log must be backed up regularly. This provides the benefit of having a lot more options available for restores. It's a requirement for most High Availability systems, because if you're making sure that a spare box is up and running, ready to take over, then you have to be interested in the logs that are happening on the current box, rather than truncating them all the time.

    A High Availability system such as Mirroring, Replication or Log Shipping will initialise the spare machine by restoring a full database backup (and maybe a differential backup if available), and then any subsequent log backups. Once the secondary copy is close, transactions can be applied to keep the two in sync. The main aspect of any High Availability system is to have a redundant system that is ready to take over.

    So the similarity for migration should be obvious. If you need to move a database from one box to another, then introducing a High Availability mechanism can help. By turning on the Full Recovery Model and then taking a backup (so that the now-interesting logs have some context), logs start being kept, and are therefore available for getting the new box ready (even if it's an upgraded version). When the migration is ready to occur, a failover can be done, letting the new server take over the responsibility of the old, just as if a disaster had happened. Except that this is a planned failover, not a disaster at all.

    There's a fine line between a disaster and a migration. Failovers can be useful in patching, upgrading, maintenance, and more. Hopefully, even an unexpected disaster can be seen as just another failover, and there can be an opportunity there – perhaps to get some work done on the principal server to increase robustness. And if I've just set up a High Availability system for even the simplest of databases, it's not necessarily a bad thing. :)
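
    Expressed as a hedged T-SQL sketch (the database name and backup paths are placeholders, not from the post), the log-shipping-style migration looks something like this:

        -- On the old server: switch to full recovery and anchor the log chain.
        ALTER DATABASE SalesDB SET RECOVERY FULL;
        BACKUP DATABASE SalesDB TO DISK = N'\\share\SalesDB_full.bak';

        -- On the new server: restore, leaving the database ready for more log.
        RESTORE DATABASE SalesDB FROM DISK = N'\\share\SalesDB_full.bak' WITH NORECOVERY;

        -- Repeat while users keep working on the old server, to stay close to current:
        BACKUP LOG SalesDB TO DISK = N'\\share\SalesDB_log1.trn';
        RESTORE LOG SalesDB FROM DISK = N'\\share\SalesDB_log1.trn' WITH NORECOVERY;

        -- At cutover: back up the tail of the log (the old server stops accepting
        -- work), apply it, and bring the new copy online.
        BACKUP LOG SalesDB TO DISK = N'\\share\SalesDB_tail.trn' WITH NORECOVERY;
        RESTORE LOG SalesDB FROM DISK = N'\\share\SalesDB_tail.trn' WITH RECOVERY;

    The downtime window shrinks to the time taken by the tail-log backup and restore, rather than a full backup and restore.
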
    So now the personal: It's been an interesting time recently... June has been somewhat odd. A court case with which I was involved got resolved (through mediation). I can't go into details, but my lawyers tell me that I'm allowed to say how I feel about it. The answer is 'lousy'. I don't regret pursuing it as long as I did – but in the end I had to make a decision regarding the commerciality of letting it continue, and I'm going to look forward to the days when the kind of money I spent on my lawyers is small change. Mind you, if I had a similar situation with an employer, I'd do the same again, but that doesn't really stop me feeling frustrated about it.

    The following day I had to fly to country Victoria to see my grandmother, who wasn't expected to last the weekend. She's still around a week later as I write this, but her 92-year-old body has basically given up on her. She's been a Christian all her life, and is looking forward to eternity. We'll all miss her though, and it's hard to see my family grieving.

    Then on Tuesday, I was driving back to the airport with my family to come home, when something really bizarre happened. We were travelling down the freeway, having just pulled out to go past a truck (farm-truck sized, not a semi-trailer), when a car-sized mass of metal fell off it. It was something like an industrial air-conditioner, but from where I was sitting, it was just a mass of spinning metal, like something out of a movie (one friend described it as "holidays by Michael Bay"). Somehow, and I really don't know how, the part of it nearest us bounced high enough to clear the car, and there wasn't even a scratch. We pulled over to check, and I was just thanking God that we'd changed lanes when we had, and that we remained unharmed. I had all kinds of thoughts about what could've happened if we'd had something that size land on the windscreen...

    All this has drilled home that while I feel that I haven't provided as well for the family as I could've done (like by pursuing an expensive legal case), I shouldn't even consider that I have proper control over things. I get to live life, and make decisions based on what I feel is right at the time. But I'm not going to get everything right, and there will be things that feel like disasters, some which could've been in my control and some which are very much beyond my control. The case feels like something I could've pursued differently, a disaster that could've been avoided in some way. Gran dying is lousy of course. An accident on the freeway would have been awful. I need to recognise that the worst disasters are ones that I can't affect, and that I need to look at things in context – perhaps seeing everything that happens as a migration instead.

    Life is never the same from one day to the next. Every event has a before and an after – sometimes it's clearly positive, sometimes it's not. I remember good events in my life (such as my wedding), and bad (such as the loss of my father when I was ten, or the back injury I had eight years ago). I'm not suggesting that I know how to view everything from the "God works all things for good" perspective, but I am trying to look at last week as a migration of sorts. Those things are behind me now, and the future is in God's hands. Hopefully I've learned things, and will be able to live accordingly. I've come through this time now, and even though I'll miss Gran, I'll see her again one day, and the future is bright.

    Read the article

  • Windows Server 2003 network boogey men every DBA should know

    - by merrillaldrich
    Recently I was again visited by my old friends TCP Chimney and SynAttackProtect. (Yeah, sometimes I feel like I mostly blog about five-year-old problems, but many of us as DBAs have to work on older versions or older systems, and so repeat older problems :-). This has been written about before, but as I BinGoogled around I noticed you are more likely to find the documents if you search for the cause, and not the symptoms. Most people who face a problem, of course, know the symptoms but not the cause....(read more)
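
    For reference, a hedged sketch of the switches most commonly cited for these two culprits on Windows Server 2003 SP2 – verify against the relevant Microsoft KB articles before applying, and plan a reboot:

        :: Turn off TCP Chimney Offload (a Scalable Networking Pack feature):
        netsh int ip set chimney DISABLED

        :: Relax SYN attack protection via the registry (0 disables it):
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v SynAttackProtect /t REG_DWORD /d 0 /f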

    Read the article

  • Flexible Keyboard starts too late

    - by user289237
    So I just managed to revive an old Windows XP machine that I am getting ready to format and install Ubuntu 14.04 on. However, the USB keyboard I have plugged in powers down with the machine (obviously) but doesn't power up until the Windows loading screen – after my only chance to select a boot device, for which I have a USB stick. It is really frustrating, as this renders the F12 key useless and leaves me stuck with a decade-plus-old machine. Thanks for any help :D

    Read the article

  • Why is my system freezing when I switch users

    - by ZeroDivide
    Hello, I've recently upgraded from 13.04 to 13.10 64-bit. I'm running AMD graphics with the proprietary drivers. I have two user accounts: mine (administrator) and my girlfriend's (standard). My girlfriend clicks "switch user" from my lock screen and logs in fine. I then try to click "switch user" from her lock screen and everything goes black. Then the monitor blinks on and off with just a single cursor. I have no way to access the terminal, the system is unresponsive, and I have to hit the power button. Even ctrl + alt + f4 or ctrl + alt + t doesn't get me a terminal. When I press the power button on my system, it does start printing out the shutdown sequence on the monitor. Here is my .xsession-errors:

        Script for ibus started at run_im.
        Script for auto started at run_im.
        Script for default started at run_im.

    Here is hers:

        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd main process ended, respawning
        init: at-spi2-registryd respawning too fast, stopped
        init: logrotate main process (4726) killed by TERM signal
        init: upstart-dbus-session-bridge main process (4865) terminated with status 1
        init: gnome-settings-daemon main process (4843) terminated with status 1
        init: gnome-session main process (4852) terminated with status 1
        init: unity-panel-service main process (4863) killed by KILL signal

    I found some advice in a forum to look for at-spi2-registryd in my system logs. Perhaps it will be useful. Executing this:

        sudo grep -r at-spi2-registryd /var/log/*

    produces this:

        /var/log/lightdm/x-1-greeter.log:** (at-spi2-registryd:4384): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        /var/log/lightdm/x-1-greeter.log:** (at-spi2-registryd:4384): WARNING **: Unable to register client with session manager
        /var/log/lightdm/x-2-greeter.log.old:** (at-spi2-registryd:7447): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        /var/log/lightdm/x-2-greeter.log.old:** (at-spi2-registryd:7447): WARNING **: Unable to register client with session manager
        /var/log/lightdm/x-0-greeter.log:** (at-spi2-registryd:1378): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        /var/log/lightdm/x-0-greeter.log:** (at-spi2-registryd:1378): WARNING **: Unable to register client with session manager
        /var/log/lightdm/x-0-greeter.log.old:** (at-spi2-registryd:1357): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        /var/log/lightdm/x-0-greeter.log.old:** (at-spi2-registryd:1357): WARNING **: Unable to register client with session manager

    Any ideas what is going on?

    Read the article

  • Why Hire a Web Page Developer?

    Death, they say, is a great leveler. But today it is the web. Service providers of the past decade started with the internet and are reaping the benefits of conducting business over it. Old monolithic brick-and-mortar service providers now have to transition into this web world or fade away. Hence, large or small, old or new, all service providers require one service – that of a web page developer.

    Read the article

  • Code Contracts: How they look after compiling?

    - by DigiMortal
    When you are using new tools that also do something at code level, it is a good idea to check what additions are made to your code during compilation. Code contracts have a simple syntax when we are writing code in Visual Studio, but what happens after compilation? Are our methods the same as they look in code, or are they different after compilation? In this posting I will show you how code contracts look after compiling. In my previous examples about code contracts I used a randomizer class with a method called GetRandomFromRangeContracted.

        public int GetRandomFromRangeContracted(int min, int max)
        {
            Contract.Requires<ArgumentOutOfRangeException>(
                min < max,
                "Min must be less than max"
            );

            Contract.Ensures(
                Contract.Result<int>() >= min &&
                Contract.Result<int>() <= max,
                "Return value is out of range"
            );

            return _generator.Next(min, max);
        }

    Okay, it is nice to dream about similar code when we open our assembly with Reflector and disassemble it. But… this time we have something interesting. While reading this code, don't feel uncomfortable about the names of the variables. This is disassembled code; the .NET Framework internally allows these names. It is our compilers that don't accept them when we are building our code.

        public int GetRandomFromRangeContracted(int min, int max)
        {
            int Contract.Old(min);
            int Contract.Old(max);
            if (__ContractsRuntime.insideContractEvaluation <= 4)
            {
                try
                {
                    __ContractsRuntime.insideContractEvaluation++;
                    __ContractsRuntime.Requires<ArgumentOutOfRangeException>(
                        min < max,
                        "Min must be less than max", "min < max");
                }
                finally
                {
                    __ContractsRuntime.insideContractEvaluation--;
                }
            }
            try
            {
                Contract.Old(min) = min;
            }
            catch (Exception exception1)
            {
                if (exception1 == null)
                {
                    throw;
                }
            }
            try
            {
                Contract.Old(max) = max;
            }
            catch (Exception exception2)
            {
                if (exception2 == null)
                {
                    throw;
                }
            }
            int CS$1$0000 = this._generator.Next(min, max);
            int Contract.Result<int>() = CS$1$0000;
            if (__ContractsRuntime.insideContractEvaluation <= 4)
            {
                try
                {
                    __ContractsRuntime.insideContractEvaluation++;
                    __ContractsRuntime.Ensures(
                        (Contract.Result<int>() >= Contract.Old(min)) &&
                        (Contract.Result<int>() <= Contract.Old(max)),
                        "Return value is out of range",
                        "Contract.Result<int>() >= min && Contract.Result<int>() <= max");
                }
                finally
                {
                    __ContractsRuntime.insideContractEvaluation--;
                }
            }
            return Contract.Result<int>();
        }

    As we can see, contracts are not simply if-then-else checks and exception throwing. There is a counter that is incremented before the checks and decremented after them, whatever the result of the check was. One thing that annoys me is the null checks for exception1 and exception2. Is there really some situation possible where null is thrown instead of an instance of Exception, or of a class that inherits from it?

    Conclusion

    Code contracts are a more complex mechanism than they seem when we look at them at code level. Internally, more is done than we know. I don't say it is wrong; it is just good to know how our code looks after compiling.
    Looking at this example, it is clear that we also need performance tests for contracted code, to see how heavy its impact on system performance is when we run code that makes heavy use of code contracts.
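
    A minimal sketch of such a measurement, assuming the randomizer class from the post is available (the class name, range and iteration count here are illustrative):

        using System;
        using System.Diagnostics;

        class ContractBenchmark
        {
            static void Main()
            {
                var randomizer = new Randomizer(); // the class from the post (assumed)
                const int iterations = 1000000;

                // Warm-up call so JIT compilation does not skew the timing.
                randomizer.GetRandomFromRangeContracted(1, 100);

                var watch = Stopwatch.StartNew();
                for (var i = 0; i < iterations; i++)
                {
                    randomizer.GetRandomFromRangeContracted(1, 100);
                }
                watch.Stop();

                Console.WriteLine("{0} contracted calls took {1} ms",
                    iterations, watch.ElapsedMilliseconds);
            }
        }

    Running the same loop against a non-contracted version of the method would show the overhead of the contract machinery directly.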

    Read the article

  • How to move a website and domain name without experiencing downtime for emails or site?

    - by user4842
    Okay, I have a pretty complex problem, so I'll get right to it. I'm a designer who built a new website for my client. Their old site is hosted at GoDaddy, as well as their email. The problem is, the guy who built the original site decided to put the original domain name and hosting under HIS personal GoDaddy account. Well, that turned out to be a bad move for several reasons. Here's how it's all tied together. The original domain name, www.domainoriginal.com, was actually purchased at Network Solutions. The original web designer pointed the nameservers from Network Solutions to his GoDaddy account, where the email and hosting are set up. The new domain name, www.domainnew.com, was purchased under a new and separate GoDaddy account belonging to the company, and the new website was built on a third-party platform (Big Commerce). So www.domainnew.com is already pointed to the new website using A records at the new GoDaddy account. All is fine there. However, they still need www.domainoriginal.com to point to the NEW website as well. (The old one can simply be deleted; it is NOT important.) AND they want to keep their old email addresses intact and working as well, but under the NEW GoDaddy account. Obviously, I have no DNS control at Network Solutions, and I have no idea what kind of control I have at GoDaddy under the old account, because the web designer will not let me see inside his account. But he and GoDaddy both tell me nothing can be done other than to repoint the nameservers back to Network Solutions, then repoint the A record to my new website, www.domainnew.com, and point the MX records to GoDaddy. I'm told the downtime would be 24-48 hours if I do this. Ideally, we'd like to do a domain name transfer and get www.domainoriginal.com into the new GoDaddy account created by the company. But I'm told this could take up to 7 days. Does this mean the site and email will be down for 7 days? And any emails sent during this time, would they be lost forever? If I do this, how long could I expect the site and email to go down? And will the emails be permanently lost? I've gotten different answers from everybody at GoDaddy, so I kind of don't trust them anymore... Any help would be greatly appreciated. Thanks, Tyson
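
    Whichever route is taken, propagation can be watched from any machine with dig (the domain names here are the placeholders from the question):

        # Which nameservers currently answer for the old domain:
        dig +short NS domainoriginal.com
        # Where its web and mail records point right now:
        dig +short A www.domainoriginal.com
        dig +short MX domainoriginal.com

    Mail is also generally more forgiving than it looks: sending servers queue and retry for hours or days, so messages sent while MX records move are usually delayed rather than lost.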

    Read the article

  • Should a URL match the page's title?

    - by Yottatron
    Should the URL of a page match its title? For example:

        http://example.com/about-cats.html
        <title>About Cats</title>

    Furthermore, if that title were to be changed by the page's author, should the URL change to match, and the old URL be redirected (301) to the new URL?

    Edit: Also, if the page's author were to decide to revert his changes after several days, would it be right to remove the redirect and set up a new redirect from the amended URL back to the old URL?
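
    For the redirect half of the question, a hedged Apache sketch (mod_alias; the new slug about-felines.html is a made-up example):

        # After renaming the page, 301 the old address to the new one:
        Redirect 301 /about-cats.html /about-felines.html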

    Read the article

  • Redirecting requests for .html pages in subdirectories to the same page in root with .htaccess

    - by Asherion
    I am porting a site from an old version of a CMS to a newer version which has a different page addressing scheme. Unfortunately I'm not very good with .htaccess at all. URL/blog/subblog/article.html is now simply URL/article.html. Unfortunately, this will destroy any linking programs they have going, and break all the old links. I need a way to use .htaccess to say: if request = /(any subdirectory)/(string).html then redirect to /(string).html. If that makes any sense.
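
    A hedged mod_rewrite sketch of exactly that rule (assuming mod_rewrite is enabled and this lives in the site root's .htaccess):

        RewriteEngine On
        # Any request with at least one directory level ending in .html is
        # permanently redirected to the bare file name at the root:
        RewriteRule ^.+/([^/]+\.html)$ /$1 [R=301,L]

    The redirected URL (/article.html) no longer contains a slash before the file name, so the rule does not match it a second time and there is no redirect loop.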

    Read the article

  • MediaWiki Google map not showing

    - by user67656
    Dear all, I have upgraded MediaWiki 1.9.3 to MediaWiki 1.16.1 on a new server, but the Google map is not showing at the link below. The page is blank where the map should be, yet on the old server with the old version it works fine. I am not a developer, so I have no clue about this. Please let me know if anybody has any idea. You can have a look at the link below, in which the Google map is missing: http://new.realchicago.org/wiki/index.php/Archer_Heights

    Read the article

  • Installing Ubuntu by writing on the Hard Drive

    - by Alexandros Marinos
    I have a laptop whose CD drive is not operational and which is too old to boot from a flash stick. I have bought a new hard drive for it, for which I have an enclosure. Is there a way to connect the disk as an external drive to my current Ubuntu setup, copy some form of Ubuntu onto the hard drive, place the drive in the old laptop, and have it install Ubuntu from there? Effectively what I am asking about is some sort of live CD that installs onto itself (since the HD is writeable).
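
    One hedged possibility: Ubuntu 12.04-era ISOs are hybrid images, so written raw to a disk they boot the same way the CD does (/dev/sdX is a placeholder for the enclosure's device, the ISO file name is an example, and everything on that disk is destroyed):

        # Write the hybrid ISO straight onto the external disk (not a partition):
        sudo dd if=ubuntu-12.04-desktop-i386.iso of=/dev/sdX bs=4M
        sync    # flush write buffers before removing the enclosure

    The caveat in the question still applies, though: an installer booted this way may refuse to repartition the very disk it is running from, so this gets the old laptop booting the installer rather than guaranteeing an install onto the same drive.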

    Read the article

  • RAID1: can't replace faulty spare (marked again as 'faulty spare' within seconds)

    - by user212475
    I have got a problem that I cannot solve: our file server runs Xubuntu and three RAID1 arrays. One has had a problem since Monday: it consists of sdb and sdc. sdb was marked as faulty by mdadm for unknown reasons. I used --remove to remove it from the RAID and then --add to add it back. All was fine, re-syncing started, but it never got above 0% and after a few seconds sdb was again marked as 'faulty spare' (and the RAID therefore degraded, but clean). So I saved the first 512 bytes of the old sdb to a file, bought a new HDD of the same size (4TB), shut down the computer, replaced sdb physically, switched the computer back on and wrote the 512 bytes back to the new drive to have the same partition info as the old drive (both are the same type, from the same company). But the new drive shows the same behaviour as the old: I can add it, re-syncing starts, and after a few seconds it is marked as 'faulty spare'. Here is exactly what I did:

        mdadm --remove /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent
            Update Time : Thu Nov 7 06:56:13 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0
                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2424

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       32        1      active sync   /dev/sdc

    Then:

        mdadm --add /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent
            Update Time : Thu Nov 7 06:57:49 2013
                  State : clean, degraded, recovering
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
         Rebuild Status : 0% complete
                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2431

            Number   Major   Minor   RaidDevice State
               2       8       16        0      faulty spare rebuilding   /dev/sdb
               1       8       32        1      active sync   /dev/sdc

    and after a few seconds:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent
            Update Time : Thu Nov 7 06:57:50 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2436

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       32        1      active sync   /dev/sdc
               2       8       16        -      faulty spare   /dev/sdb

    The behaviour is the same if I zero the superblock (mdadm --zero-superblock /dev/sdb) before adding sdb. I run all commands as root, and the system holds three more 4TB drives, i.e. the mainboard can handle them. The old hard drive was checked for errors using badblocks, but all is fine. Does anybody have any idea what the problem is?
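
    Before trusting either drive again, it may be worth checking what the kernel and the drives themselves report at the moment the rebuild dies; a hedged diagnostic sketch (device names as in the question):

        # Kernel messages around the failure - look for ATA link resets or I/O errors:
        dmesg | grep -iE 'sdb|ata|md1'

        # SMART health of both halves of the mirror; a failing sdc would also
        # abort a rebuild, since it is the only source of the data:
        sudo smartctl -a /dev/sdb
        sudo smartctl -a /dev/sdc

        # Current array state and the member's metadata:
        cat /proc/mdstat
        sudo mdadm --examine /dev/sdb

    A rebuild that dies at 0% even with a brand-new disk often points at the cable, the port, or a read error on the surviving member rather than at the replaced drive.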

    Read the article

  • Google Analytics on Demo Site

    - by Josh Smith
    Will adding the UA code of the live site to a revision site affect anything adversely? They are, technically, two different sites with different metrics. I don't want to lose the old data when I launch the new site, of course, and I would also like to set up the new analytics while the revision site is in development. Does anyone have a good workflow for setting up a revision site without losing the old site's data?

    Read the article

  • URL changed to www – how much time does Google take to reindex?

    - by user20321
    It has been about a week since my URL was changed from the non-www to the www version; this was done by my host provider. Now I want to ask how much time Google takes to remove sitename.com from its index and replace it with www.sitename.com (the site redirection has already been done by the host provider), as it is still showing the old ones. My new URLs are indexed, but my main URL www.sitename.com is not. Or do I have to remove those old URLs personally? It has already been about 5-6 days.

    Read the article

  • Ubuntu image size 732 MB – too big for CD

    - by memius
    I have an old PC that can't handle a boot-stick install, so I have to create an actual, old-fashioned boot CD. However, the image size for Ubuntu 12.04 is 732MB, which is too large for CDs, which can hold only 700MB. The maintainers of Ubuntu 12.04 say the image size will never go over 700MB, and indeed, the download size seemed to be 689MB. Brasero says it won't burn the CD because the file is too big. What's going on?

    Read the article

  • Ubuntu 12.04 image size 732MB, will it fit a standard CD?

    - by memius
    I have an old computer that can't handle a boot stick install, so I have to create an actual, old-fashioned boot CD. However, the image size for Ubuntu 12.04 is 732MB, which is too large for a CD, which can hold only 700MB. The maintainers of Ubuntu 12.04 say the image size will never go over 700MB, and indeed, the download size seemed to be 689MB. Brasero says it won't burn the CD because the file is too big. What's going on?

    Read the article
