Search Results

Search found 5762 results on 231 pages for 'backup sqldatabase'.


  • How to debug "MySQL server has gone away"?

    - by fefe
    I have a virtual machine (Ubuntu 12.04, MySQL 5.5) running under VMware, dedicated to hosting a MySQL server. I connect to this server on an internal IP. I'm trying to find out why I get the "MySQL server has gone away" error; Apache on one of my Windows machines stops because of this issue. I have been trying to fine-tune my.cnf with the following parameters, but it did not bring the desired result:

      # Instead of skip-networking the default is now to listen only on
      # localhost which is more compatible and is not less secure.
      bind-address         = 0.0.0.0
      #
      # * Fine Tuning
      #
      wait_timeout         = 180
      key_buffer           = 384M
      max_allowed_packet   = 64M
      thread_stack         = 192K
      thread_cache_size    = 8
      # This replaces the startup script and checks MyISAM tables if needed
      # the first time they are touched
      myisam-recover       = BACKUP
      max_connections      = 500
      table_cache          = 64
      #thread_concurrency  = 10
      #
      # * Query Cache Configuration
      #
      query_cache_limit    = 1M
      query_cache_size     = 32M

    How do I debug this issue? What is missing from the configuration to avoid this error?
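
    A minimal troubleshooting sketch, assuming the error comes from timeouts or oversized packets (the hostname and credentials below are placeholders, not from the question): compare the running server values with what my.cnf is supposed to set, count aborted connections, and watch the error log while Apache reproduces the error.

      # Check the variables this error usually points to
      mysql -h 192.168.0.10 -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'wait_timeout'; SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';"

      # Count aborted / timed-out connections since startup
      mysql -h 192.168.0.10 -u root -p -e "SHOW GLOBAL STATUS LIKE 'Aborted_%';"

      # Watch the MySQL error log on the Ubuntu guest while the Windows/Apache client reproduces the error
      sudo tail -f /var/log/mysql/error.log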

    Read the article

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I have an Ubuntu 10.04 production MySQL database server where the total size of the database is 260 GB, while the root partition it lives on is only 300 GB; essentially around 96% of / is full and there's no space left for storing a dump/backup. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently with minimum downtime. I'm thinking along these lines:

    1. Request an extra drive to be attached to the server and take a dump onto that drive.
    2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep data in sync.
    3. When the migration is due, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the DB.

    What are your suggestions to improve this, or is there a better alternate approach for this task?
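
    One way to sidestep the lack of local disk space entirely is to stream the dump straight to the new server over SSH instead of writing it to / first. A rough sketch, assuming credentials are supplied via ~/.my.cnf on both machines (hostnames are placeholders):

      # Stream a consistent InnoDB dump to the new server without writing anything locally.
      # --master-data=2 records the binlog coordinates needed to attach the new server as a slave
      # afterwards (requires binary logging to be enabled on the source).
      mysqldump --single-transaction --quick --master-data=2 --all-databases \
        | gzip -c \
        | ssh user@new-db-server 'gunzip -c | mysql'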

    Read the article

  • Upgrading a non-RAID server to RAID

    - by AZee
    I have just learned that our PDC has a single drive with 2 partitions. I also know that this drive has bad blocks, as recorded in the event log. What I would like to do is convert this to a RAID solution with a nice balance between economy and performance. I will admit that I have only configured servers with RAID from scratch and have no experience upgrading an existing system into a RAID system. In fact, I'm not sure it is even possible. Since this is the PDC for 350+ workstations, downtime is important. I'd like to hear from other system administrators how they would tackle this and their recommendations for all devices. At this point it seems I can either replace the existing drive and restore from backup, or install a controller and drives, configure the RAID, and basically start from scratch. Thank you for taking your time. ~AZee

    Read the article

  • Samba share doesn't have write permissions

    - by blsub6
    Alright, I've got one that should be really simple. I want a wide-open SMB share for my Windows 7 machine. Everyone should be able to access it, regardless of domain or username or anything. My smb.conf has:

      security = share
      guest account = nobody

    along with:

      [DC_Backup]
      path = /Windows_Backups/DC
      comment = Backup of Domain Controller
      force user = nobody
      guest ok = yes
      public = yes
      read only = no

    I can access it, but I cannot write to it; Windows keeps telling me I "need permission to perform this action". Where do I start?
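
    "guest ok" only covers the Samba side; the guest account (nobody) also needs write access on the underlying filesystem. A minimal sketch, assuming the path from the question and Ubuntu-style group names:

      # Let the guest account write to the backing directory
      sudo chown -R nobody:nogroup /Windows_Backups/DC
      sudo chmod -R 0775 /Windows_Backups/DC

      # Sanity-check the share definition and reload Samba
      testparm -s
      sudo service smbd restart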

    Read the article

  • Shrinking physical volumes in LVM on a Linux Guest in ESXi 5.0

    - by Stew
    The problem: a Linux guest (openSUSE 12.1) with multiple virtual disks attached. Three disks are in a logical volume, two of which are exactly 2TB. None of the disks are independent and, due to the backup software we use, cannot be independent. When the two 2TB virtual disks are dependent, the snapshot fails stating that the file is too large for the datastore. When I put those two disks in independent mode, snapshots work fine (the other disk is 1.8TB). I have therefore concluded that shrinking the two disks by even 100GB should solve the problem, however I am having trouble conceptualizing how to make those disks smaller without breaking the LVM entirely. The actual LV has 1.3TB free, so there is plenty of space to shrink into. What I need to accomplish:

    1. Deallocate 100GB from the two 2TB virtual disks within the Linux guest.
    2. Shrink the two virtual disks by 100GB within vSphere (not as complicated).

    Are there any vSphere/LVM gurus who can give me a clue?
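
    A rough sketch of step 1, assuming the two PVs live on /dev/sdb1 and /dev/sdc1 (placeholder device names) and that a full backup exists: free the tail of each PV, then shrink the PV metadata so the virtual disk can be reduced safely in vSphere.

      # See how the extents are laid out on the PV first
      sudo pvdisplay --maps /dev/sdb1

      # Move any allocated extents off the tail of the PV (the extent range here is only an example)
      sudo pvmove --alloc anywhere /dev/sdb1:485000-524287

      # Shrink the PV to leave roughly 100GB unused at the end, keeping a safety margin
      sudo pvresize --setphysicalvolumesize 1.89T /dev/sdb1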

    Read the article

  • SharePoint 3.0 site restore/import trouble

    - by Trondh
    Hi, we have some old SharePoint data (from a WSS 3.0 SP1 or SP2 install) that I need to restore. The problem is: this is a time-management site, and one of the fields automatically picks up the user name of the user who enters data; this is used to keep track of who worked when. Now, when I import this into my temporary SharePoint 3.0 server, these fields are blanked and the creator of each element is replaced by my admin user (the account that ran the import job). So, to the question: is there any way at all to grab hold of this data before the SharePoint import job "destroys" it? I'm using stsadm -o import. I don't care if I have to pick the database itself apart manually, I just need to know if it's possible to get hold of these fields with the data intact from my export files. (Backup, you say? It was deleted loong ago. This SharePoint export is all we have...)

    Read the article

  • Bash script getting automatically deleted from Ubuntu 12.04 Server?

    - by Kris Anderson
    I'm running a bash script on an Ubuntu 12.04 server through cron. The script works fine for a few weeks (it runs daily backups of websites and MySQL databases and copies them to Amazon S3). However, twice now I've noticed that backups stopped happening. Both times the backup script (backupscript.sh), located in my home folder, was no longer there. No one else has access to this server, so nothing was manually changed on the server and no one deleted the file by mistake. The cron job (nano /etc/crontab) still references this script, but the script itself disappears. What could cause this to happen? Does Ubuntu delete the script if it runs into some sort of error?
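
    One way to catch the culprit is an audit watch on the directory, so the next deletion is logged together with the process and user that did it. A hedged sketch (the home directory path is an assumption):

      # Install auditd and watch the home directory for writes/attribute changes, which includes deletions
      sudo apt-get install auditd
      sudo auditctl -w /home/kris/ -p wa -k backupscript-watch

      # After the script disappears again, see what removed it
      sudo ausearch -k backupscript-watch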

    Read the article

  • MySQL: migrating a huge DB from InnoDB to NDBCLUSTER fails with "The table is full"

    - by Nguyen Trong Nhan
    I'm trying to migrate an old database to a MySQL Cluster (4 data nodes) by using the command:

      ALTER TABLE sample ENGINE=NDBCLUSTER

    but I'm getting the following error:

      The table '#sql-7ff3_3' is full

    There are approximately 300 million rows in this table. Here are my config files. /mysql-cluster/config.ini:

      [NDBD DEFAULT]
      NoOfReplicas=2
      DataDir=/data/mysql-cluster/ndb/
      BackupDataDir=/data/mysql-cluster/backup/
      DataMemory=10G
      IndexMemory=5G
      TimeBetweenLocalCheckpoints=6
      FragmentLogFileSize=256MB
      NoOfFragmentLogFiles=50
      MaxNoOfOrderedIndexes=8000
      MaxNoOfConcurrentOperations=100000
      MaxNoOfTables=10000
      RedoBuffer=128M
      MaxNoOfAttributes=5000
      MaxNoOfUniqueHashIndexes=1024

    /etc/my.cnf:

      [mysqld]
      basedir=/usr/local/mysql
      datadir=/data/mysql-cluster/mysqld/
      event_scheduler=on
      default-storage-engine=ndbcluster
      ndbcluster
      ndb-connectstring=192.168.x.x,192.168.x.x
      innodb_file_per_table
      innodb_buffer_pool_size = 512MB
      key_buffer = 512M
      key_buffer_size = 512M
      sort_buffer_size = 512M
      table_cache = 1024
      read_buffer_size = 512M
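
    "The table is full" during an ALTER to NDBCLUSTER often means the data nodes ran out of DataMemory/IndexMemory rather than disk. A quick check, assuming the management node answers on one of the addresses from the connect string:

      # Report current DataMemory / IndexMemory usage on all data nodes while the ALTER runs
      ndb_mgm -c 192.168.x.x -e "ALL REPORT MEMORYUSAGE"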

    Read the article

  • aws s3 works in a script but not from cron

    - by user3800017
    Guys, my first post! Hope not the last. I have a bunch of servers on the AWS EC2 platform. I made a simple script to back up my custom logs to their S3 storage bucket. The problem is the script works fine when run by hand, but when I add it to the crontab the script executes except for the s3 sync/mv part! Here is my code:

      NOW=$(date "+%b_%d_%Y")
      MY_HOSTNAME=`uname -n`
      mv /opt/req/req* /opt/req/bkup/
      mv /opt/response/res* /opt/req/bkup/
      cd /opt/req/bkup/
      tar -cvf ${MY_HOSTNAME}_req_bkup_${NOW}.tar re*
      rm *.txt
      aws s3 mv /opt/req/bkup/* s3://req
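
    The usual culprit is cron's minimal PATH: the aws binary that an interactive shell finds is not found when the script runs under cron. A hedged sketch of the common fix (the install path is an example):

      # Find where the aws CLI lives when run interactively
      which aws                      # e.g. /usr/local/bin/aws

      # In the script, call it by absolute path (or export PATH at the top of the script),
      # and move the whole directory with --recursive instead of relying on a shell glob
      /usr/local/bin/aws s3 mv /opt/req/bkup/ s3://req --recursive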

    Read the article

  • How to set up a VM in KVM? Qcow2 or LVM, etc.

    - by JohnAdams
    Finally, after quite a bit of this-versus-that, I have chosen to virtualize a couple of my servers with KVM. I did do a test setup as well, but I have a few questions about setting up VMs in KVM. I would appreciate pointers.

    1. What is the best storage to use - qcow2 or LVM? I like the fact that I can copy the VM file easily with qcow2, but what about LVM: how do I take a backup or make a copy on a development server to play with? I know I can clone an LV, but how do I bring it to my development server?
    2. How do I set up the guest partitioning? For example, when setting up Ubuntu inside Ubuntu, do I choose LVM for that VM or regular fdisk partitioning? Can I increase the partition size later, if I need a bigger disk?
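
    On the LVM copy question specifically, one possible route is to snapshot the guest's LV and convert the snapshot to a portable qcow2 file for the development box. A sketch under assumed names (vg0/vm1 and the dev host are placeholders):

      # Snapshot the VM's logical volume so the copy is consistent
      sudo lvcreate --snapshot --name vm1-snap --size 5G /dev/vg0/vm1

      # Convert the snapshot into a qcow2 image and ship it to the development server
      sudo qemu-img convert -O qcow2 /dev/vg0/vm1-snap /tmp/vm1.qcow2
      scp /tmp/vm1.qcow2 dev-server:/var/lib/libvirt/images/

      # Remove the snapshot once the copy is done
      sudo lvremove /dev/vg0/vm1-snap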

    Read the article

  • Upgrading to Exchange 2013 - any way to do it now?

    - by TomTom
    Exchange 2013 is out and available to some people already. Got it from the Volume Licensing Center; now I'm trying to get an upgrade path that works for some customers. Problem: there is no in-place upgrade. It is "install on a new server, move mailboxes", which means coexistence with Exchange 2010 for the time it takes to move the mailboxes. Sadly the only compatible Exchange is Exchange 2010 SP3 - which is not going to be out for quite some time. Any way to still do an upgrade? Backup and restore to the new server? Any beta of the SP that is good enough to ONLY move the mailboxes? I do not care about the rest - this really is "install Exchange 2013, move mailboxes, UNINSTALL 2010". I am quite - ah - unhappy that in the end the only ones who can install 2013 right now are new companies.

    Read the article

  • Multiple .bkf files created in Backupexec 12.5 or 2010 related to heavy I/O?

    - by syuusuke
    Hey everyone, I was wondering if anyone who has used Backup Exec 12.5 or 2010 has ever experienced multiple .bkf files being created for a single job. To describe what I mean by multiple files: the .bkf files are being created with random file sizes under 2GB, even though I've assigned the setting to cut over to a new file after 10GB. Some jobs will create 20 .bkf files in one job, with chunks ranging from 50MB to 800MB. Is this a sign of heavy I/O issues? Bandwidth limitations? I'm not sure; I'm here to seek some advice and suggestions. I've set up another backup server with the exact same settings and it seems to create a new .bkf file only when the 10GB limit has been reached. I am backing up different machines, but I know my settings are an exact match to the problematic server - or at least I think that's where the problem lies.

    Read the article

  • How to copy Netscape email

    - by Olav
    I think I have the Netscape mail directory from the old computer; how do I copy it to the new computer (Netscape 7.1 Mail, Thunderbird or SeaMonkey)? I think I have the files in Olduserbackup\xjuwtwtb.slt\Mail. I create a new mail account with server pop.superuser.com, and find a directory with that name in C:\Users\myusername\AppData\Roaming\Mozilla\Profiles\default\ou6umlif.slt\Mail. I replace the files with those from the backup, but Netscape still shows pop.superuser.com in its interface. Is there some kind of registry setting somewhere I will have to change?

    Read the article

  • Deleting Time Machine in Mac OS X 10.6.4

    - by cappuccino
    Does anyone know how to delete Time Machine backups in Mac OS X 10.6.4? Before answering:

    1. sudo rm -rf /whateverthetimemachineis does not work.
    2. Disabling the ACL permissions first with sudo fsaclctl -p /whatever -d does not work either: sudo: fsaclctl: command not found.
    3. The "delete all backups" feature inside Time Machine is slow as hell and would take days; I need a command-line solution.
    4. No, I don't want to reformat the drive - I have other content on it. And don't say I should have separated it onto two partitions or two drives; I did it this way since partitions cannot be dynamically resized, and two drives is annoying (what's the point of having a big drive?) - plus that has no relation to the issue at hand.

    I've already googled for hours and read everything on Super User; nothing is working, and all the suggested solutions are the four above. Any clues?

    Read the article

  • Mac failing (failed?) hard drive - is all hope lost?

    - by Daniel
    It's a 500 GB Seagate laptop hard drive that came with my MacBook Pro, Apple partition format. Already replaced, and I now have it external, connected via a SATA/USB adapter. I'm trying to get just a few files that I worked on while out of town when it crashed (and thus did not have my Time Machine backup drive). The drive will not mount, but OS X Disk Utility detects it and can read the capacity, model number, and even the name of the partition, which leads me to believe all hope may not be lost. Failed attempts so far:

    1. Disk Utility verify+repair says the drive cannot be repaired and that I should back up immediately (lovely).
    2. DiskWarrior says it cannot rebuild the directory due to hardware failure.
    3. Data Rescue quick and deep scans immediately failed.
    4. PhotoRec says "error reading sector" for every sector (at least for the few minutes I let it run before closing it to explore other options).

    What else can I try here? Again, I'm just looking for a few small files (Python scripts, to be specific), not a full recovery.
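
    Before running any more recovery tools against the failing disk, it is usually safer to image whatever it will still give up with GNU ddrescue (available via Homebrew/MacPorts) and then point Data Rescue or PhotoRec at the image. A sketch with placeholder device and paths:

      # First pass: grab the readable sectors quickly, skipping the bad areas
      sudo ddrescue -n /dev/disk2 /Volumes/Spare/seagate.img /Volumes/Spare/seagate.map

      # Second pass: retry the bad areas a few times, resuming from the same map file
      sudo ddrescue -r3 /dev/disk2 /Volumes/Spare/seagate.img /Volumes/Spare/seagate.map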

    Read the article

  • "A duplicate name exists on the network" after cloning - any solution?

    - by user978733
    I have about 70 PCs with exactly the same hardware. I decided to automate turning them on and off. I took one PC, and here is what I've done:

    1. Changed the BIOS configuration so that the PC wakes when I turn on the AC switch.
    2. Installed Windows XP and configured it so that I can turn it off remotely, changed the workgroup name to "WG1" and the PC name to "ExamPC".
    3. Created an Acronis backup image of this PC.

    I installed this image on several PCs and tried to test. All worked well until Windows booted. The problem is that all the tested PCs started Windows at nearly the same time, and all of them popped up the error "A duplicate name exists on the network". I can't figure out any solution. Any suggestions?

    Read the article

  • Mysterious xyz.event files appearing

    - by Pekka
    I am getting mysterious .event files - always empty, created by me a few weeks ago - in several local project directories. They are all Subversion checkouts. They are always named after the directory they reside in, so a directory named pagination will contain a pagination.event file. Does anybody know what this is? Possibly important information:

    - I am working on a Windows 7 workstation.
    - I use NuSphere's PHP IDE (no updates recently).
    - I use TortoiseSVN for version control.
    - I set up a Windows 7 backup job recently that ran once; I can't remember when exactly.
    - The .event files seem to turn up only in repositories.
    - There is no external access to those repositories.

    Read the article

  • DPM - Monitoring is green, Protection has error and Latest rec point is old. How do I interpret that?

    - by LosManos
    How do I read the DPM info in this case? Monitoring says Failed but Protection shows OK, while the latest recovery point is from last year. Under the Monitoring tab I have a failure for:

      Source                     | Computer     | Protection group | Start time
      Computer\System Protection | MyServerName | Recovery point   | 2014-06-09 19:00:00

    which shows me that something happened last night. But under the Protection tab everything is green. There I have:

      Protection group member: Computer\System protection - Bare metal recovery
      Computer: MyServerName
      Protection status: OK
      Latest recovery point: 2013-12-12 06:32:54

    My guess is that the backup failed once last night but succeeded later; DPM then found out that there hasn't been any change since sometime last year, so it leaves it be and flags OK.

    Read the article

  • Our server hosting provider asked for our root password

    - by Andreas Larsson
    I work at a company that develops and hosts a small business critical system. We have an "Elastic cloud server" from a professional hosting provider. I recently got an email from them saying that they've had some problems with their backup solution and that they needed to install a new kernel. And they wanted us to send them the root password so they could do this work. I know that the email came from them. It's not [email protected] or anything like that. I called them and asked them about this, and they were like "yep, we need the password to do this". It just seems odd to send the root password over email like this. Do I have any reason to be concerned?

    Read the article

  • Importing orphaned Outlook 2010 OST file

    - by BigBadJock
    I have a problem with Outlook 2010 and OST files. First, my Exchange hosting company deleted my Exchange account by accident. They've created it on another server, but can't get the data back. Now, I did make a copy of the \users\name\appdata\local\outlook directory, so I have the original OST files. I decided to switch hosts to Office 365. During this, I stupidly deleted my account from within Outlook and recreated it to point to Office 365 - and only then did I learn that you can't import from OST files. Edited to clarify: I have a complete backup of the PC. Which folders would I need to restore to ensure that I can get Exchange back to its previous state? I'm prepared to do a complete restore if necessary, but would prefer to localise the changes.

    Read the article

  • Which server requirements for Redmine, Git and website hosting?

    - by Ephismen
    Nine other students and I are going to start a project that will last a minimum of 2 years, and for this purpose we are looking to host all our work on a server. Here are a few tools we would like to work with:

    - Redmine
    - Git
    - Hosting a website/blog to show our work
    - Hosting an internal and private development website/blog

    We haven't decided yet which OS we will install, but we were looking toward Ubuntu or Fedora. Having a limited budget of $300/year, we would like some advice on the following dedicated server specifications:

      Kimsufi 2G:
        Hardware: Intel Celeron/Atom, 1.20 GHz, 64 bits, 2 GB DDR2, HDD 1 TB, Backup FTP 100 GB
        Network: 100 Mbps connection, unlimited traffic

      Dedibox SC:
        Hardware: Dell Nano U2250, 1x 1.6 GHz, 64 bits, 2 GB DDR2, HDD 160 GB
        Network: 1 Gbit/sec connection, unlimited traffic

    Will these servers be sufficient? Should we host the websites on another platform? Would a virtualized server be more appropriate? Thank you for your answers, Ephismen.

    Read the article

  • Access or import an Outlook 2003 .pst file without Outlook

    - by Nobler
    I have a 450 MB .pst file (MS Outlook 2003 backup file) saved from a PC before it crashed. I would like to break it up into its components, i.e. save attachments to folders on my PC, paste text emails into a word processor, etc. But I don't want to buy MS Office Professional 2003 or later solely for importing the .pst into MS Outlook 2003+. Outlook Express cannot import .pst files; only Outlook "proper" can. Is there some free email client out there, e.g. Thunderbird, that can import .pst files? Or is there some other way to access the 450 MB file?
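
    One free route is libpst's readpst tool (packaged as pst-utils on Debian/Ubuntu, also usable via Cygwin), which unpacks a .pst into ordinary mbox folders that Thunderbird can read. A sketch with placeholder file names:

      # Recreate the folder structure under extracted/, producing one mbox file per Outlook folder
      readpst -r -o extracted/ backup.pst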

    Read the article

  • Activating Windows 7 generates error code 0xc004F061

    - by Jon
    I got a new SSD and wanted to start over with Windows 7 on that disk. I did a clean install (my mistake) on the SSD and just skipped the activation part (left the key blank). Now that I have my system all set up, configured, with files pulled back from backup, and ready to go, I'd like to activate Windows 7. However, I now get this error:

      The following failure occurred while trying to use the product key:
      Code: 0xC004F061
      Description: The Software Licensing Service determined that this specified product key can only
      be used for upgrading, not for clean installations.

    Do I really need to wipe my system again, install Windows Vista, and then do the Windows 7 upgrade in order to use my upgrade key? Is there some kind of workaround?

    Read the article

  • ASUS P5B Plus motherboard - no any drives found - how to restore RAID array?

    - by Moha
    We have a small server machine with an ASUS P5B Plus motherboard and 4 SATA HDDs. The HDDs were configured in a RAID10 array. Up until now everything worked fine, but now the system doesn't recognize the drives. The BIOS is set to RAID, the jMicron controller is set to RAID, yet I can't see any of the drives in the BIOS setup, and the jMicron BIOS tells me "no any drives found". The HDDs all spin up; I hear no clicking sounds or anything that would suggest a HDD error. I did a search on this problem and replaced the SATA cables as suggested, but nothing's changed. What I have in mind is checking the CMOS battery and resetting the BIOS to use IDE mode, but I don't know if that will ruin the RAID system on the HDDs. It is not a critical server and there's only one database running on it (which I have a backup of), but I don't want to set up the server from scratch if not necessary. What should I try to restore the RAID array and put the server back in working order?

    Read the article

  • MariaDB, Galera, xtrabackup - do I need the binary log?

    - by bernhardrusch
    We are using a MariaDB Galera cluster with 3 nodes. For the state transfer we are using xtrabackup. We have some problems with the binary logs: they got too big and crashed the server. We can remove them manually with the PURGE BINARY LOGS command; another way would be to set expire_logs_days so they expire on their own. I know that we could use xtrabackup to back up the DB and use the binlog to recover to some point in time. But do we really need the binary log for Galera to work?
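
    For reference, Galera replicates through its own writesets, so the binary log is only needed here for point-in-time recovery on top of the xtrabackup backups (or for feeding asynchronous slaves). A hedged sketch of capping it, with example retention values, applied on each node:

      # One-off cleanup of old binlogs
      mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"

      # Persistent cap in my.cnf, [mysqld] section:
      #   expire_logs_days = 7
      #   max_binlog_size  = 256M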

    Read the article
