Search Results

Search found 5747 results on 230 pages for 'backup'.

Page 203 of 230

  • Advice for UPS/Surge Protector in home office

    - by user37755
    I'm just starting out as an independent developer, mostly Unix work with some Windows thrown in occasionally. I've been running two machines, a Linux and a Windows dev machine. Long story short: a bad storm came through last week, I unplugged one machine and forgot to unplug the other, and its power supply and motherboard ended up dead. Luckily I back up to an external service religiously (rsync.net, for anyone interested), so no data was lost, but it did expose a glaring hole in my current setup: no UPS or surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use VMware instead of rebuilding the dead one), but it's a quad-core Phenom II with a 1 kW power supply. This is outside my experience, so I thought I'd get some input from others. I'm looking for something reasonably priced that does the job reasonably well. I don't need absolute 100% uptime, just something that protects my PC better than it is now.
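
    A rough sizing sketch (hedged: the 400 W draw and the 0.6 power factor below are assumptions for illustration, not measurements of this machine): a 1 kW power supply rarely draws anywhere near its rating, so estimate or measure the real load at the wall, then divide by the UPS power factor to get a minimum VA figure.

        # assumed ~400 W real load and a typical consumer-UPS power factor of 0.6
        echo $((400 * 10 / 6))   # => 666 VA, so a 750-1000 VA unit has headroom

    A unit in that range with line-interactive voltage regulation covers both the surge and the brief-outage cases described above.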

    Read the article

  • Nvidia RAID 1 Problem. Degraded drives...

    - by Vedat Kursun
    I had a RAID 1 on my system, which has a Gigabyte GA-8N SLI motherboard with an Nvidia chipset (Nvidia RAID IDE ROM BIOS 4.84). When the system was working properly there used to be an icon in the system tray that showed my two RAID disks. But after my friend accidentally clicked the "Safely Remove Hardware" icon while trying to disconnect her USB drive, I noticed that the RAID wasn't working. After a reboot there was suddenly a failure message on the boot screen. When I enter the Nvidia RAID setup utility (F10) I can see that both drives are degraded, and that won't change even if I select them and press R for Rebuild; the only other options are Delete and Exit. When I boot into Windows (XP Pro 32-bit) I can see both disks with the same data on each of them, but my RAID 1 is broken. It's a relief that at least my RAID 1 was active, but it's annoying not being able to rebuild it. Is there a way to rebuild my RAID 1 without having to delete the array and build it again? Because I don't want to back up 400 GB of data and then recopy it to my drives... (Disks: 2 x Seagate ST3500418AS SATA drives)

    Read the article

  • Azure Virtual Machines - what fault tolerance do they provide?

    - by Borek
    We are thinking about moving our virtual machines (Hyper-V VHDs) to Windows Azure, but I haven't found much about what kind of fault tolerance that infrastructure provides. When I run a VHD in Azure, I have two questions: 1) Is my VHD and all the data in it safe? I think uploaded VHDs use the Storage infrastructure, so they should be automatically replicated to multiple disks and geographically distributed, but should I still make a full-image backup just to be safe? (Of course I will be backing up the actual data inside the VMs that I care about; I just want to know whether there is a chance greater than 0.0000001% that one day I will receive an email from Microsoft telling me that my VM is gone and I should create or restore it from scratch.) 2) Do I need to worry about anything else regarding the availability of my VMs? With an on-premise server I need to worry about the hardware itself, the host operating system, what would happen if my router failed, if my Hyper-V host's C: drive failed, etc. Am I right in thinking that with Azure, their infrastructure takes care of all of this? Thanks.

    Read the article

  • Can't mount hard drive. Ubuntu 12.04

    - by Sam
    I am trying to recover some pictures from my 320 GB hard disk, so I booted a live Ubuntu CD, which I am in right now. The devices list shows my USB drive, but not the 320 GB disk. I can see the disk in Disk Utility (it says it is at /dev/sda), but it is not mounted; it reports a few bad sectors but says the disk is OK. Disk Usage Analyzer says my maximum capacity is 13.4 GB, so it is definitely not using the 320 GB disk. I tried the following:
      sudo mkdir /media/newhd (worked)
      sudo mount /dev/sda /media/newhd (didn't work; it says I must specify the filesystem type)
    I then tried:
      fsck.ext4 -f /dev/sda (didn't work; it said "Superblock invalid, trying backup blocks", then "Bad magic number in super-block while trying to open /dev/sda. The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock.")
    Does anyone have any ideas? The whole problem started when Windows Vista said "Can't find operating system". Any ideas on how I can get at my hard drive at /dev/sda?
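
    One thing worth trying before deeper surgery (a sketch, assuming the disk still has its original partition table; device names are illustrative): both commands above target the whole disk, but a Vista disk almost certainly holds an NTFS filesystem inside a partition such as /dev/sda1, not an ext filesystem spanning all of /dev/sda.

        # list the partitions the kernel can see on the disk
        sudo fdisk -l /dev/sda
        # mount the first partition read-only as NTFS (safer for recovery)
        sudo mount -t ntfs-3g -o ro /dev/sda1 /media/newhd

    If fdisk shows no partitions at all, the partition table itself is damaged and a tool like testdisk is the next step.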

    Read the article

  • How do I set up Apache virtual hosts on Ubuntu 12.04?

    - by YumYumYum
    According to the docs (https://help.ubuntu.com/10.04/serverguide/httpd.html) I have done the following, which is roughly what I always do on Fedora, but on Ubuntu it does not seem to work.
    a) DNS to IP:
      $ echo "127.0.0.1 a" > /etc/hosts
      $ echo "127.0.0.1 b" > /etc/hosts
    b) Apache virtual hosts:
      $ ls
      1  2  default  default.backup  default-ssl
      $ cat 1
      <VirtualHost *:80>
          ServerName a
          ServerAlias a
          DocumentRoot /var/www/html/a/public
          <Directory /var/www/html/a/public>
              #AddDefaultCharset utf-8
              DirectoryIndex index.php
              AllowOverride All
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>
      $ cat 2
      <VirtualHost *:80>
          ServerName b
          ServerAlias b
          DocumentRoot /var/www/html/b/public
          <Directory /var/www/html/b/public>
              #AddDefaultCharset utf-8
              DirectoryIndex index.php
              AllowOverride All
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>
    c) Enable the sites and restart the service:
      $ a2ensite 1
      $ a2ensite 2
      $ a2dissite default
      $ /etc/init.d/apache2 restart
    d) Browse the two new hosts:
      $ firefox http://a
    It does not work: both http://a and http://b always end up in /var/www/html. How do I fix it so that each name goes to its own directory, e.g. http://a to /var/www/html/a/public instead of /var/www/html?
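
    Two likely culprits, as a sketch (hedged: paths are the Ubuntu 12.04 defaults, and this assumes nothing else overrides them). First, the second echo uses > and overwrites /etc/hosts instead of appending, so only "b" survives. Second, name-based virtual hosting needs NameVirtualHost *:80 enabled, which on 12.04 normally lives in /etc/apache2/ports.conf; without it, Apache serves every name from the first vhost it finds.

        # append host entries rather than clobbering the file
        echo "127.0.0.1 a" >> /etc/hosts
        echo "127.0.0.1 b" >> /etc/hosts
        # confirm name-based vhosts are on, then see how Apache maps each name
        grep -r NameVirtualHost /etc/apache2/
        apache2ctl -S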

    Read the article

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs, with cron driving a backup routine. SSH keys between the boxes allow passwordless login as the local machine's root user. Cron (in root's crontab) runs the following script:
      #!/bin/sh
      echo "Mounting Share"
      /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server
    As root, I can run this script on the command line without issue, and the share mounts successfully without being asked for a password. Yet when run by cron the script fails. The path to sshfs is identical to the output of "which sshfs". Here is the email root receives from the cron daemon:
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=root>
      X-Cron-Env: <USER=root>
      Mounting Share
      fuse: failed to exec mount program: No such file or directory
      fuse: failed to mount file system: No such file or directory
    I'm stumped as to why I'm getting "No such file or directory" here, especially since the paths appear to be correct. I've also compared the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as: fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8. Help me ServerFault wizards, you're my only hope!
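
    The cron header above is the tell: cron runs with PATH=/usr/bin:/bin, so while the script reaches sshfs by absolute path, sshfs itself then has to find ssh and the FUSE mount helper (mount_fusefs), which live under /usr/local on a stock FreeBSD ports layout. A minimal sketch of the usual fix (the script path is a placeholder, and the PATH value assumes a default install):

        # in root's crontab: give cron a login-shell-like search path
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
        @daily /root/mount_share.sh

    Setting PATH explicitly at the top of the script itself works just as well and survives crontab rewrites.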

    Read the article

  • System randomly freezes yet mouse still moves, SSD out of reallocatable sectors, should I replace it?

    - by user784446
    This problem has lasted for the past 48 hours. The first time it happened, a program I was running stopped responding, so I tried to end it from Task Manager. The processes were listed fine at first, until hovered over. Although the mouse could still move, after a few more clicks it finally stopped, and the screen went blank shortly thereafter. The second time it occurred, items on the screen stopped responding - hovering over the taskbar wouldn't elicit a response - though sound kept playing. Eventually the mouse became unresponsive and the system restarted itself. I suspect a problem with my SSD. After looking through some search results, I downloaded HDTunePro to check the drive: it reports a problem with the reallocated sector count, and an error scan revealed 48 bad sectors. An attempt to back up the most important areas of the drive also returned a few Explorer "Error: cannot read source from disk" errors. Should I ditch the drive and use another, or is there anything that can be done to repair it? SSD: OCZ Petrol 64 GB. CPU: AMD Athlon II X4 640. RAM: generic 3 GB DDR2. Motherboard: Gigabyte MA74GM-S2H. OS: Windows 7 Ultimate x64. Thanks!
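
    Reading the raw SMART counters confirms the picture before spending money; a sketch with smartmontools (available for Windows as well - the device name below assumes a Linux live CD):

        # print SMART attributes and pick out the sector-health counters
        sudo smartctl -a /dev/sda | grep -Ei 'reallocat|pending|uncorrect'

    A nonzero and climbing Reallocated_Sector_Ct alongside 48 bad sectors on a surface scan is normally a replace-the-drive verdict; checking the vendor's site for a Petrol firmware update is the one cheap thing to try first.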

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, so I've started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512 MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:
    General recommendations:
      Run OPTIMIZE TABLE to defragment tables for better performance
      MySQL started within last 24 hours - recommendations may be inaccurate
      Enable the slow query log to troubleshoot bad queries
      When making adjustments, make tmp_table_size/max_heap_table_size equal
      Reduce your SELECT DISTINCT queries without LIMIT clauses
      Increase table_cache gradually to avoid file descriptor limits
      Your applications are not closing MySQL connections properly
    Variables to adjust:
      query_cache_limit (> 1M, or use smaller result sets)
      tmp_table_size (> 16M)
      max_heap_table_size (> 16M)
      table_cache (> 64)
      innodb_buffer_pool_size (>= 326M)
    For the variables it recommends adjusting, I don't even see most of them in the my.cnf file:
      [client]
      port = 3306
      socket = /var/run/mysqld/mysqld.sock
      [mysqld_safe]
      socket = /var/run/mysqld/mysqld.sock
      nice = 0
      [mysqld]
      innodb_buffer_pool_size = 220M
      innodb_flush_log_at_trx_commit = 2
      innodb_file_per_table = 1
      innodb_thread_concurrency = 32
      skip-locking
      big-tables
      max_connections = 50
      innodb_lock_wait_timeout = 600
      slave_transaction_retries = 10
      innodb_table_locks = 0
      innodb_additional_mem_pool_size = 20M
      user = mysql
      socket = /var/run/mysqld/mysqld.sock
      port = 3306
      basedir = /usr
      datadir = /var/lib/mysql
      tmpdir = /tmp
      skip-external-locking
      bind-address = localhost
      key_buffer = 16M
      max_allowed_packet = 16M
      thread_stack = 192K
      thread_cache_size = 4
      myisam-recover = BACKUP
      query_cache_limit = 1M
      query_cache_size = 16M
      log_error = /var/log/mysql/error.log
      expire_logs_days = 10
      max_binlog_size = 100M
      skip-locking
      innodb_file_per_table = 1
      big-tables
      [mysqldump]
      quick
      quote-names
      max_allowed_packet = 16M
      [mysql]
      [isamchk]
      key_buffer = 16M
      !includedir /etc/mysql/conf.d/
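
    Variables absent from my.cnf simply run at their compiled-in defaults, which is why the tuner flags them; adding them to the [mysqld] section is enough. A minimal sketch for a 512 MB box (values are conservative starting points, not tuned figures, and the option names assume the MySQL 5.0/5.1 era this file suggests; note the tuner's innodb_buffer_pool_size >= 326M suggestion would starve a 512 MB machine, so it is deliberately ignored here):

        # append to the [mysqld] section of /etc/mysql/my.cnf, then restart mysql
        tmp_table_size      = 32M
        max_heap_table_size = 32M    # keep equal to tmp_table_size, per the tuner
        table_cache         = 128    # raise gradually to stay under fd limits
        query_cache_limit   = 2M
        log_slow_queries    = /var/log/mysql/mysql-slow.log   # 5.0/5.1 syntax

    While in there, it is worth de-duplicating the repeated skip-locking, big-tables, and innodb_file_per_table lines; only one copy of each is needed.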

    Read the article

  • 750 GB hard drive shows full with only 315 GB used

    - by Chris Kelly
    I have a Windows 7 laptop with a 750 GB C: drive; it came from the manufacturer partitioned with 714 GB usable. I installed programs, music files, etc. up to 285 GB, and as of a few weeks ago it showed 285 GB used. Two weeks of house guests later, it shows the drive as full. I deleted some files, but Windows still reports 652 GB used on this drive while there are only 285 GB of files on it. Relevant details:
    - I am an administrator on the laptop and have fair knowledge of what I am doing.
    - I did not restore from backup, restore from a mirror, upgrade the drive, or do anything else that would have touched the partition structure - just daily use as an imaging machine and for the web.
    - I have checked the partitions under Disk Management: no change, still partitioned with 714 GB usable.
    - I have looked through the C: drive by hand, showing hidden files and folders: no change.
    - I used JDiskReport to double-check; it shows only 285 GB on the C: drive.
    - I triple-checked with TreeSize run as administrator, and it also shows 285 GB on the C: drive - yet Windows 7 still shows the drive as almost full.
    - I used the Windows 7 utilities to check for disk errors and defragmented the drive: no errors shown and no change after the defrag.
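
    The usual invisible consumer in this situation is System Restore's shadow-copy store: two weeks of heavy daily use can generate many restore points, and they live in System Volume Information, which tree-walking tools like JDiskReport and TreeSize cannot look inside even as administrator. A sketch for checking and capping it from an elevated command prompt (the 15 GB cap is an arbitrary example):

        rem show how much space shadow copies may use on C:
        vssadmin list shadowstorage
        rem cap the store; Windows deletes the oldest restore points to fit
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=15GB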

    Read the article

  • Full Apache config migration

    - by Victor Rashkov
    I have searched a lot and didn't find an applicable answer. I have a working LAMP setup on an Ubuntu machine, and I have to migrate to a new server in a different country. The old server is Ubuntu 11.10, the new server is 12.04 LTS. My problem is that I simply cannot remember the steps I followed when I configured the current server, which is not a basic LAMP install: it is Apache with FastCGI, suEXEC, the GD library, and the worker MPM, all sitting on top of an mhddfs filesystem. There are also other configs I've changed that I can no longer recall. Because of the complexity of the setup, my attempts to migrate to the new server fail: I get permission errors, CGI problems, etc. Therefore my question is: is there a sane way to simply tar up a full backup of the current web server installation - including MySQL, PHP, and Apache with all configs - and move it to the new machine? I shall be forever thankful for any advice; nothing I have found here so far has given me an answer. Thanks!
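
    A bit-for-bit tarball of the whole root tends to break across releases (11.10 and 12.04 differ in package versions and library paths), but the configuration and data migrate well piecewise. A minimal sketch, assuming default Debian/Ubuntu locations:

        # on the old server: record packages, then capture configs, sites, and DBs
        dpkg --get-selections > packages.txt
        tar czf etc-backup.tar.gz /etc/apache2 /etc/php5 /etc/mysql
        tar czf www-backup.tar.gz /var/www
        mysqldump --all-databases -u root -p > all-databases.sql
        # on the new server: reinstall the same package set, then restore
        dpkg --set-selections < packages.txt && apt-get dselect-upgrade

    Restoring the /etc trees file by file and diffing against the new packaged defaults keeps 11.10-specific settings from silently landing on 12.04.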

    Read the article

  • Physical Debian to VMWare: vmware-converter, dd-image or otherwise?

    - by Dabu
    We have two Debian Lenny production machines, both running larger commercial websites. These machines now need to be moved, and in the process they need to be virtualized onto VMware ESX. If you believe what you read on the internet, there are several ways to accomplish this. The easiest for us would be to use our weekly dd backup, which images the whole disk; however, I have no experience with this kind of approach or whether it is really possible. The second-best way would be an application on the source machine that virtualizes it and generates an ESX-compatible VM. However, that software is beta and unsupported, and after installation nothing really works (the /etc/init.d/vmware-converter script doesn't actually do anything; start and stop reply with success messages, yet ps shows no new processes). The worst way, with the most work, would be to install a new machine and set it up manually, copying files and databases as needed. That part is clear in its execution, and my questions do not touch it. Is my first way possible? Has anyone done this, or better, does anyone have a page with instructions? Or is there a help page that explains how to correctly install, run, and use the vmware-converter tool on a Debian installation (it's possible that I did something wrong during installation already)? Thank you.
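
    The dd route is workable in principle: a raw whole-disk image can be converted to a disk format ESX accepts. A sketch of the idea (paths and device names are illustrative, and this assumes the weekly backup really is a whole-disk image, not a partition or file-level copy):

        # raw whole-disk image, as the weekly backup would produce
        dd if=/dev/sda of=/backup/server.img bs=1M
        # convert raw -> VMDK (qemu-utils package), then attach it to a new VM
        qemu-img convert -f raw -O vmdk /backup/server.img /backup/server.vmdk

    On the ESX side, vmkfstools -i can re-import the VMDK into a format the datastore prefers; expect to adjust /etc/fstab device names, the bootloader, and the persistent network udev rules on first boot.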

    Read the article

  • VOIP and internet connection speeds [cable vs. fiber]

    - by microchasm
    Our office is migrating to IP telephony. Fewer than 10 employees will be using the phones. We currently have cable internet, and the provider just bumped the speeds. There is a data center that was recently built in our building, and we were considering colocating there in the near future. As a result, they offered us access to their triple-redundant internet, but it's quite expensive: 3 Mbps committed with bursts up to 10 Mbps for $250/month (discounted). We pay about $120 for our cable (which we planned to keep, at least for TV). I want the phone system and the LAN to be as separate as possible. I was thinking about keeping the cable for the LAN and using the other connection for the phones - until I saw the price. Now I'm thinking it might make sense to add on to our existing cable setup, and have only a DSL line as a backup for the cable. Is there any real benefit to the fiber, especially at that price? Any other suggestions or ideas? Thanks.

    Read the article

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization lately, mostly out of interest in speeding up my own internet connections, but also to speed up the office connection. At home I have two cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office we have a single connection into a NetGear router. Most of the WAN optimization products I have seen are prohibitively expensive and seem built around having multiple branch offices around the world. What I am looking for, ideally, is as follows:
    - Software install: I am guessing I need to install it in two places - one in the office or house, and one "in the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups currently live in the US, which would be important to speed up) would be tunnelled through the optimizer.
    - When downloading or uploading large files, open multiple connections between "the cloud" and the optimizer; this is where a lot of speed could be gained.
    - Finally, items not already compressed would be compressed on the cloud side, and items already seen by the optimizer would not be sent again - kind of like rsync or a proxy server.
    So, can this be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux, and duct tape), or is it something that needs to be purchased? Or is there an open source project that does 90% of what I am asking?
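
    A fair slice of the duct-tape version already exists in stock OpenSSH and rsync; a sketch (host names are placeholders for a small VPS near the US backups):

        # compressed SOCKS tunnel through the "cloud" box for US-bound traffic
        ssh -C -N -D 1080 user@optimizer.example.com
        # rsync only transfers changed data; -z compresses it on the wire
        rsync -az --partial /local/backups/ user@us-backup.example.com:/backups/

    The "never send the same thing twice" requirement is what a caching proxy such as Squid adds for HTTP; protocol-agnostic deduplication is the part that usually pushes people to commercial WAN optimizers.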

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Any of the Macs could edit files, and the other Macs should then be synced automatically - basically my own local version of Dropbox without using cloud storage. I have looked into solutions using rsync, but as I understand it, rsync is not really capable of a bi-directional sync. I also do not want to have to invoke the sync process manually: I would prefer a daemon running in the background, waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that laptops sometimes cannot reach the NAS; it should then just wait for the connection to come back, without bugging me every few minutes. I have looked into Synk, folderwatch, rsync, and a few others, but I haven't really found a solution. Isn't there something like Microsoft's "offline folders" for the Mac? Thanks. PS: Just for clarification - I don't want to sync for backup purposes; I want to sync so that all Macs have a local copy of the most recent changes to any file.
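
    Unison covers the bi-directional half of this; a sketch (paths and host are placeholders, and the 60-second repeat interval is an assumption - it polls rather than watching for changes):

        # two-way sync between a local folder and the NAS, re-run every 60 s;
        # -batch suppresses prompts, -auto accepts all non-conflicting changes
        unison /Users/me/Shared ssh://nas.local//volume1/Shared \
            -auto -batch -repeat 60

    It does not handle the wait-politely-while-offline requirement by itself; wrapping the call in a launchd job that first checks that the NAS is reachable is one way to approximate that.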

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi. What I've tried: I have two email storage architectures, old and new.
    Old:
    - courier-imapd on several (18+) 1 TB-storage servers
    - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
    - the servers have no replicas, and no backups either
    New:
    - dovecot2 on a single huge server with 16 TB (SATA) storage and a few SSDs
    - we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks
    - there is an identical server which holds an at-most-15-minutes-old rsync backup from the primary
    - higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server
    - the rsync'ing is done because GlusterFS wasn't replicating well under that high small/random-IO load
    - scaling out is expected to be done by provisioning another pair of such huge servers
    - on facing disk-crunch issues like in the old architecture, email accounts would again be moved manually
    Concerns/doubts:
    - I'm not convinced the synchronously-replicated-filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case.
    - The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned or unplanned), we'd move its IP to the other server in the pair.
    - In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.
    Questions:
    - What are the typical methods used to scale up/out with the traditional software (courier-imapd/dovecot)?
    - Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal problems?
    - Does one have to rewrite (parts of) these to work with an object store of some sort, such as OpenStack object storage?

    Read the article

  • Windows Server 2008 is stuck at "configuring updates - stage 3 of 3 - 0% complete"

    - by Chris
    This has happened the last two times I've run updates on this system, and I really have no idea what is going on. It is installing only a month's worth of updates. It responds only to ping and no services are up, so I can't view the system remotely (I have to hook up a monitor to see this message). In the past I've just restarted the system at this point and it eventually finishes updating. I want to know what I can do to avoid this situation, how to diagnose what is going on, and how to get any kind of remote access during the updates. Edit: I can start the machine in safe mode (where I did nothing but back up some files). I restarted, and it no longer tries to do a Windows update; it just goes to the desktop, where everything seems extremely broken. I can click on some things but cannot launch most programs. I guess all I can do at this point is a system restore or something. Edit: I re-installed Windows on this system yesterday. That's my usual solution to issues I don't feel like diagnosing, like this one.

    Read the article

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from, but I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem. First I just tried opening it from my own PC; Windows prompted to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away noticed something strange: in the listed drives it comes up as
      Disk /dev/sdc - 6144 B - USB Flash Drive
    That's right - the first USB flash drive smaller than a floppy disk!? Moving on anyway, the first analysis came up with "Partition sector doesn't have the endmark 0xAA55". TestDisk's Quick Search gave no results, so I moved on to Deeper Search: "No partition found or selected for recovery". This left me stumped, and I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive, but got "Windows was unable to complete the format." Googling that suggested deleting the partition - but there is no partition to delete in this case. Most recently I tried formatting from cmd, and got this result:
      Format D: /FS:FAT32
      The type of the file system is RAW
      The new file system is FAT32
      Verifying 0M
      11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
      The volume is too small for FAT32
    Anyone got any suggestions? UPDATE: As per a suggestion from @Karen, I tried running a CLEAN from DISKPART; the result was "DiskPart has encountered an error: The request could not be performed because of an I/O device error."

    Read the article

  • Recovering data from an external hard drive

    - by CCallaghan
    I have a WD Elements 2GB hard drive (formatted NTFS). I accidentally kicked out the USB cable while writing data to the disk, and now I can't access most of the data. Although this was ostensibly my backup drive, there is a great deal of important material on there which existed only on there. I realise how idiotic this makes me. (So, formatting is not an option.) Things I've tried and information I've gathered:
    - Windows Explorer recognises the drive itself, but will not open most directories therein (and sometimes crashes while exploring).
    - I can reach all of the directories through the command line, but dir often reports that it can't read any files in most of them.
    - The situation was similar on an Ubuntu machine: the file explorer crashed, but I could access directories - not the files in them - via terminal commands. Several files I tried to copy out either produced an I/O error or crashed the command line.
    - The Disk Management utility on Windows reports a healthy disk, formatted NTFS and not RAW, and shows the correct capacity and amount of space used (so the files do not appear to be deleted).
    - I've tried to run chkdsk, but it hangs at 74% on Step 2 (checking indexes). Step 1 reported no bad sectors.
    - I tried Recuva, but that didn't seem to work (stalled at 0% for half an hour).
    I should also note that the disk doesn't seem to be spinning smoothly; it seems to keep seeking back, like it's reading the same sector over and over again. I noticed this after I kicked out the cable. Any help would be greatly appreciated. Update: the problem has taken a turn for the worse. The external drive now shows up on my computer as a local disk and is no longer mountable by Linux.
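
    With a drive that is audibly struggling, every extra read attempt costs; the standard move is to image it once with GNU ddrescue and recover files from the image, never from the original. A sketch (the device name and paths are placeholders, and the destination must be a healthy disk with enough free space):

        sudo apt-get install gddrescue
        # first pass: grab the easy sectors fast, skipping damaged areas (-n)
        sudo ddrescue -n /dev/sdb /mnt/gooddisk/elements.img rescue.log
        # second pass: go back and retry the damaged areas a few times (-r3)
        sudo ddrescue -r3 /dev/sdb /mnt/gooddisk/elements.img rescue.log

    The image can then be loop-mounted read-only or fed to NTFS recovery tools, and the rescue.log lets the copy resume if the drive drops out mid-run.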

    Read the article

  • Disaster Recovery Standby Server

    - by user64300
    Hi. I work for a small business with 25 users and 2 servers. One server is the DC, running Windows Server 2003/Exchange 2003. We want a reliable disaster recovery strategy for this server without having to spend a lot of money. We take regular backups, but I have been advised that only an identical server will allow them to be restored easily. I'm trying to come up with a solution that doesn't mean buying two servers at twice the cost every time we upgrade. I'm toying with the idea of upgrading our DC more frequently (say, every 3 years) and then keeping the old server as the recovery server, temporarily, until we can source a replacement. However, I won't know whether the backups will restore on the old server until I try it! We're planning to upgrade to Server 2008 R2 in the near future, so I'm hoping the newer backup tools will give me some success in restoring to different hardware (or perhaps I can use Hyper-V if not). So what I am wondering is whether it is a good idea to use old hardware as a disaster recovery strategy (provided we test it regularly, obviously!).

    Read the article

  • Server format & reinstall while keeping server & domain SID

    - by Chris
    Hi everyone. I want to reinstall my 2008 R2 server from scratch, due to multiple Active Directory issues. I have only one server running AD, and a spare machine to use if necessary. Is there a way to save just the user accounts and the domain SID, so that I can start with a clean server that uses the same name as before? I can reassign file security, but I do not want to have to rejoin all the users to a new domain. Also, all users are mapped to folders on the server. What I hope to do is a clean install of the server without having to touch the users' machines. Can someone please tell me the procedure to accomplish this? Any help appreciated! Thanks guys, but I could be here all day telling you every error I am getting; can we please keep this to the question of how to do a reinstall and keep the same SID? I just want to start over without having to rejoin all the clients to a new domain. Is there a tool that can back up the server SID and the AD domain name so that I could restore them, without restoring any other data? I might not be using the correct terminology here, but hopefully you understand what I am asking. Thanks.
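
    As far as the built-in tooling goes, nothing saves just the SID and domain name: the machine and domain identity travel with the directory itself, so the usual route is a system state backup of the DC restored onto the rebuilt server (same name). A sketch (the target drive letter and version timestamp are placeholders, and the Windows Server Backup feature must be installed):

        rem on the current DC: capture AD, SYSVOL, registry, and machine identity
        wbadmin start systemstatebackup -backupTarget:E:
        rem on the rebuilt server: boot into Directory Services Restore Mode, then
        wbadmin start systemstaterecovery -version:01/01/2012-12:00

    Note this carries the directory's existing problems along with its identity, so it fits the "keep the domain, fix the OS" case rather than a truly clean AD.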

    Read the article

  • Laptop freezing every few seconds, including screen + sound

    - by zenstealth
    Just a few days ago, my Windows 7 HP dv4170us laptop (1.76 GHz CPU, 1 GB RAM) started to freeze every other second: everything on screen, and sound (such as a song playing in iTunes), just freezes until I bash it violently (without actually breaking the laptop) or wait a few more seconds. I think it started one night when a USB mouse of mine stopped working and the system displayed random "Device was not recognized" errors; I just unplugged the mouse and ignored it. The next day it started freezing, and as of today I can't get the computer to stop. I tried to back up my files onto an external HDD, but it almost corrupted the drive. I ran 4 complete virus scans using MSSE and MalwareBytes (both quick and full scans), and they all came up clean. In Task Manager, the CPU usage is constantly at maximum, and so is the RAM (with just a few apps running, I have only about 30 MB of free RAM left). Also, the outside of the laptop, right above where the CPU sits, is very, very hot. I suspect something is wrong internally, but I'm not sure. It does the same thing when booted into Ubuntu. Does anyone know what could be wrong with it?

    Read the article

  • Hibernation fails; The system cannot find the file specified

    - by GMMan
    Recently I installed Ubuntu 12.04.1 LTS on my Lenovo Y480. Hibernation was working properly after the Ubuntu install, but I was making sure all of the operating systems on the machine worked, including OneKey Recovery (the recovery partition). Note that I installed Windows 7 from scratch with a disk image I downloaded from my university's DreamSpark program; to do so I had to image the partition with Paragon Backup & Recovery, repartition to convert the Windows partition to extended, install Ubuntu, and then restore the image. During that process I also used the Windows disc to edit the BCD so as to reuse the existing entry for the restored partition, and I used the automated "repair your computer" option. On verification, I noticed that "repair your computer" had actually written to the wrong BCD (the one on the recovery partition), so I mounted that partition, restored the original BCD from a copy I had made earlier, and rebooted. At that point my GRUB broke, and I was able to restore it - and at that point hibernation broke. I tried powercfg /h off and powercfg /h on, rebooted, and nothing. I also tried increasing the hibernation file size as directed on this post, but it still doesn't work. Executing shutdown /h yields "The system cannot find the file specified. (2)". What file? It seems that mounting the system partition sometimes helps, but I don't want to keep it mounted in case it gets written to accidentally. How do I permanently fix this?
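
    One more sequence worth a try from an elevated prompt - recreate hiberfil.sys at full size and confirm which BCD store the running system actually booted from (a sketch; the /size percentage form is Windows 7 syntax, and 100 means 100% of RAM):

        powercfg /hibernate off
        powercfg /hibernate on
        powercfg /hibernate /size 100
        rem check that the booted loader entry points at the partition you restored
        bcdedit /enum {current}

    If {current} resolves to the recovery partition's store rather than the restored one, the resume path can point at a hiberfil.sys that does not exist, which would match the "file not found" error.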

    Read the article

  • Does anyone know where I could find a 2 input USB voltage meter?

    - by John O
    What we really need is a tiny UPS, of sorts. We'll be hooking up a solar cell and a battery to a single-board computer. Currently that SBC is a custom PIC32 device, and it does its own UPS and voltage-monitoring duties. I've been tasked with trying to replicate all of its features with off-the-shelf products, and for the most part I've succeeded. But I don't currently have any way to switch between two sources of juice, or to monitor when they're getting low. These guys have something: http://www.mini-box.com/picoUPS-100-12V-DC-micro-UPS-system-battery-backup-system - I really like it, and the price is well within the budget. We might even work it in, though it does 12 V and I'll probably be using 5 V; there are enough engineers on hand to figure something out. But I'd still have no idea what the voltage was for the PV panel or the battery. I was hoping there was some simple little USB multimeter I could use to monitor this with, but I can't seem to come up with anything. I've found all sorts of cool hardware, but nothing that will help us. Does anyone know of anything?

    Read the article

  • Mail server checklist

    - by Jeff
    We recently ran into some issues with our mail server setup, so I'm preparing a list of actions we should enforce in order to maintain a proper email solution within our company. We have around 80 Exchange users, and we send mass emails to 20,000+ customers almost monthly. The checklist I currently have:
    1) McAfee MX Logic 'cloud' anti-spam functionality for incoming messages
    2) antivirus on each computer in the company
    3) antivirus on the Exchange and DNS servers
    4) set up an SPF record
    5) set up DKIM
    6) set up DomainKeys
    7) set up Sender ID
    8) submit the SPF record to Microsoft, Yahoo, etc. for whitelisting purposes
    9) configure size limits for messages in Exchange to safe numbers
    10) I have 2 outside IPs for my email server; in case one gets blacklisted, switch to the backup
    11) my internet site rests on a different IP than the mail server
    12) all mass emails for the company are sent through a 3rd-party company (listtrak.com)
    13) set up domain aliases (media, enews, and bounce) for the 3rd-party mass-mail software
    14) verify the setup using [email protected]
    15) configure group policy and our opendns.org account to prevent unwanted actions and website viewing
    Mass emails:
    1) schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day)
    2) set up user preferences; let customers decide what they want to receive (their interests)
    3) send a steadier flow of email, maybe 100 a week with top new products, instead of 20,000 every other month
    If anyone has suggestions, or additions/subtractions to this checklist, they are greatly appreciated. Thank you.
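
    For items 4 and 8, the SPF record is a single DNS TXT entry on the sending domain; a sketch with placeholder addresses (the include: for the mass-mail provider is an assumption - the provider publishes the exact mechanism to use):

        example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.10 ip4:203.0.113.11 include:listtrak.com ~all"

    Listing both outside IPs keeps mail flowing after a failover to the backup address, and ~all (softfail) is a safer starting point than -all until every legitimate sending path is confirmed.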

    Read the article
