Search Results

Search found 59196 results on 2368 pages for 'time wastrel'.

  • ERROR 0x8007007A when trying to schedule a task

    - by Paul Hollingsworth
    I am getting the error "The data area passed to a system call is too small. (Exception from HRESULT: 0x8007007A)" when trying to create a scheduled task on a particular Windows machine. The problem description is identical to that described in this Microsoft KB article, so I followed its steps to resolve the issue: I stopped the Task Scheduler service (right-clicked "Task Scheduler" in the Services window from Control Panel and selected "Stop"), restarted the service, waited 15 minutes, and tried to schedule the task again. But the error persists.

    To give more context on how we create these scheduled tasks: they are generated automatically from a configuration script (we run the script each time we wish to make a change). Each run deletes all of the existing tasks and creates new ones. I don't know what else to try, but surely there is some way to "reset" the Task Scheduler. How can I stop this error from happening?
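
    Aside: a minimal sketch of the delete-and-recreate cycle such a configuration script performs, using schtasks (the task name and command line are hypothetical placeholders). If the same error is reproducible from the command line, that at least rules the script itself out:

        rem remove the old definition, then register it again
        rem ("NightlyJob" and the command are made-up examples)
        schtasks /delete /tn "NightlyJob" /f
        schtasks /create /tn "NightlyJob" /tr "C:\scripts\job.cmd" /sc daily /st 02:00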

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output:

        mdadm --detail /dev/md2
        /dev/md2:
                Version : 0.90
          Creation Time : Wed Mar 14 18:20:52 2012
             Raid Level : raid10
             Array Size : 3662760640 (3493.08 GiB 3750.67 GB)
          Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB)
           Raid Devices : 5
          Total Devices : 5
        Preferred Minor : 2
            Persistence : Superblock is persistent

    seems to contradict this (it is the same for every disk in the array):

        mdadm --examine /dev/sdc2
        /dev/sdc2:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host)
          Creation Time : Wed Mar 14 18:20:52 2012
             Raid Level : raid10
          Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB)
             Array Size : 2930208640 (2794.46 GiB 3000.53 GB)
           Raid Devices : 5
          Total Devices : 5
        Preferred Minor : 2

    The array was created like this:

        mdadm -v --create /dev/md2 \
            --level=raid10 --layout=o2 --raid-devices=5 \
            --chunk=64 --metadata=0.90 \
            /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2

    Each of the five individual drives is partitioned like this:

        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00057754

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1            2048       34815       16384   83  Linux
        /dev/sdc2           34816  2930243583  1465104384   fd  Linux raid autodetect

    Backstory: the SATA controller failed in a box I provide some support for. The failure was ugly, so individual drives fell out of the array a little at a time. While there are backups, they are not done as frequently as we really need. There is some data I am trying to recover if I can.

    I got additional hardware and was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (in read-only mode). I am able to access some data on the filesystem and have been copying it off, but I see lots of errors when I try to copy the most recent data. When I try to access that most recent data I get errors like those below, which makes me think the array-size discrepancy may be the problem:

        Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944
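
    A quick way to confirm the superblock view is consistent across all members, as a sketch (device names are the ones from the post):

        # print each member's idea of the array geometry
        for d in /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 /dev/sdg2; do
            echo "== $d =="
            mdadm --examine "$d" | grep -E 'Array Size|Used Dev Size|Raid Devices'
        done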

  • backup and restoration of a freeipa infrastructure

    - by Sirex
    I'm finding the documentation on IPA server backup and restoration sadly lacking, and since it is so centrally critical, it's not something I'm happy about shooting in the dark with. Could some kind soul more knowledgeable in the matter please attempt to provide an idiot-proof guide to backing up and restoring IPA server(s)? Particularly the main server (the cert-signing one).

    We're looking at rolling out IPA in a two-server setup (1 master, 1 replica). I'm using DNS SRV records to handle failover, so losing the replica isn't a big deal: I could make a new one and force a resync. It's losing the master that bothers me. What I'm really struggling with is locating a step-by-step procedure for backing up and restoring the master server. I'm aware that a whole-VM snapshot is the recommended way of doing IPA server backup, but that isn't an option for us at this time. I'm also aware that FreeIPA 3.2.0 includes some sort of built-in backup command, but that isn't in the IPA version in CentOS, and I don't expect it will be for some time yet.

    I've tried many different methods, but none of them restore cleanly. Among others, I've tried a command similar to:

        db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif

    and the script from here to produce three LDIF files -- one for the domain ({domain}-userroot) and two for the IPA server (ipa-ipaca and ipa-userroot). Most of the restores I've tried have been of the form:

        ldif2db.pl -D "cn=directory manager" -w - -n userroot -i userroot.ldif

    which seems to work and reports no errors, but totally borks the IPA install on the machine: I can no longer log in with either the admin password from the backed-up server or the one I set at installation before attempting the ldif2db command (I'm installing ipa-server and running ipa-server-install, then attempting the restore).

    I'm not overly bothered about losing the CA, having to rejoin the domain, losing replication, etc. (although it'd be awesome if that could be avoided), but if the main server drops I'd really like to avoid having to re-enter all the user/group information. I guess if I lost the main server I could promote the other one and replicate in the other direction, but I've not tried that either. Has anyone done that?

    tl;dr: Can someone provide an idiot's guide to backing up and restoring an IPA server (preferably on CentOS 6), clear enough that I'd feel confident it'll actually work when the dreaded time comes? Crayons are optional, but appreciated ;-) I can't be the only person struggling with this, seeing how widely used IPA is, surely?
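
    For reference, the export/import pair described above, collected in one place (a sketch only; the LDIF path is an example, and both scripts prompt for the Directory Manager password because of -w -):

        # export the userroot backend to LDIF on the running master
        db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif
        # re-import it on the freshly installed server
        ldif2db.pl -D "cn=directory manager" -w - -n userroot -i /root/userroot.ldif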

  • Coldfusion 8 crash

    - by Denis Topa
    I have a big problem with ColdFusion 8 on Windows Server 2008 with IIS 7. It is a production server, and sometimes the site becomes unavailable; I have to manually end the jrun.exe process from Task Manager before the site comes back. I have noticed that the jrun.exe process is using about 1.3 GB of memory at the time it crashes. It happens 2-3 times a day. I have looked in the ColdFusion logs and did not find anything strange besides some warnings that a job exceeded the 300-second execution-time limit. I should also mention that ColdFusion is a 32-bit application while Windows is 64-bit; might that be the problem? I'm not very experienced with ColdFusion, so if someone knows how to troubleshoot this, please let me know. Thanks!
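
    Possibly relevant: a 32-bit JRun process runs out of address space well before the machine runs out of RAM, so a working set around 1.3 GB is already near the ceiling. A quick way to see the configured heap limit, as a sketch (the jvm.config path varies by install and is only an example):

        rem show the JVM arguments ColdFusion 8/JRun starts with;
        rem look for -Xmx in the java.args line
        findstr /b "java.args" "C:\JRun4\bin\jvm.config"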

  • Internet Explorer 10 (Metro App) on Windows 8 Pro (RTM) crash

    - by ferpaz
    Internet Explorer 10 (the Metro app) on Windows 8 Pro (RTM) does not start; it crashes with this error:

        Log Name:      Application
        Source:        Application Error
        Date:          27/08/2012 19:21:29
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      DELL-OPE3.red.aseinfo.com.sv
        Description:
        Faulting application name: iexplore.exe, version: 10.0.9200.16384, time stamp: 0x50107ebe
        Faulting module name: iertutil.dll, version: 10.0.9200.16384, time stamp: 0x50109c90
        Exception code: 0xc0000005
        Fault offset: 0x0000000000172f0b
        Faulting process id: 0xadc
        Faulting application start time: 0x01cd84bb737cfa16
        Faulting application path: C:\Program Files\Internet Explorer\iexplore.exe
        Faulting module path: C:\WINDOWS\system32\iertutil.dll
        Report Id: b1597df3-f0ae-11e1-be78-88532e15da73
        Faulting package full name:
        Faulting package-relative application ID:
        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Application Error" />
            <EventID Qualifiers="0">1000</EventID>
            <Level>2</Level>
            <Task>100</Task>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2012-08-28T01:21:29.000000000Z" />
            <EventRecordID>7612</EventRecordID>
            <Channel>Application</Channel>
            <Computer>DELL-OPE3.red.aseinfo.com.sv</Computer>
            <Security />
          </System>
          <EventData>
            <Data>iexplore.exe</Data>
            <Data>10.0.9200.16384</Data>
            <Data>50107ebe</Data>
            <Data>iertutil.dll</Data>
            <Data>10.0.9200.16384</Data>
            <Data>50109c90</Data>
            <Data>c0000005</Data>
            <Data>0000000000172f0b</Data>
            <Data>adc</Data>
            <Data>01cd84bb737cfa16</Data>
            <Data>C:\Program Files\Internet Explorer\iexplore.exe</Data>
            <Data>C:\WINDOWS\system32\iertutil.dll</Data>
            <Data>b1597df3-f0ae-11e1-be78-88532e15da73</Data>
            <Data></Data>
            <Data></Data>
          </EventData>
        </Event>

    Any suggestions?

  • How to use a custom .bashrc file on SSH login

    - by gsingh2011
    I've found that with the new company I'm working with, I often have to access Linux servers with relatively short lifetimes. I have an account on each of these servers, but whenever a new one is created I have to go through the hassle of transferring over my .bashrc, even though in about a month's time the server may not be around anymore. I also have to access many other servers for short periods of time (minutes), where it's just not worth transferring my .bashrc; but since I'm working on a lot of servers, this adds up to a lot of wasted time. I don't want to change anything on the servers themselves, but I was wondering if there is a way to have a "per-connection" .bashrc, so that whenever I SSH to a server, my settings are used for that session. If this is possible, it would be nice to do the same thing with other configuration files, like gitconfig files.
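
    One direction that keeps the servers untouched, as a sketch (the function name sshb and the temp path are made up; assumes bash on the remote side):

        # push the rc file for this session only, then start an interactive
        # bash that reads it instead of the remote ~/.bashrc
        sshb() {
            scp -q ~/.bashrc "$1:/tmp/.bashrc_$USER"
            ssh -t "$1" "bash --rcfile /tmp/.bashrc_$USER -i"
        }

    Usage would then be sshb user@host in place of ssh user@host.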

  • Gmail page works but won't stop loading

    - by MLW
    When I open the Gmail page in Chrome (9.0.597.102) on OS X (10.6.4), the page works and is fully functional, but it often remains in a "Loading" state, with the loading icon active, for extremely long periods of time. Functionally there is no problem, since everything is accessible, and the issue is intermittent: sometimes the page immediately finishes loading, other times it claims to be "loading" for hours. My system date is correct (set from time.*.apple.com), and I don't have any Google Labs features enabled. Does anyone else have this problem, or know the cause and solution?

  • All the Gear and No Idea: Suggestions for re-designing my home/office/entertainment network

    - by 5arx
    Help/advice/suggestions please: I have a load of kit that I love but which currently operates in a disconnected, sometimes counter-productive way. Because I never really had a masterplan, I just added these things one after another and connected them up in ad hoc ways.

    Since I bought my MacBook I've found I spend much less time on the Mac Pro that was until then my main machine. Perversely, as my job involves writing .NET software, I spend a lot of my Mac time actually inside a Windows 7 VM. I stream media from the HP box to the PS3 and thus to the TV, but it's not without its limitations/annoyances. We listen to each other's iTunes libraries, but the music files are all over the place and it would be good to know they were all safely in one location (and fully backed up).

    I need to come up with a strategy that will allow me to use all the kit for work, play (recording live music, making tunes, iMovie work), pushing/streaming media to the TV, and sharing files with my other half (she uses a Windows laptop and her iPod touch). Ideally I'd like to be able to work on any of the machines and have a shared home drive visible to all of them, so all my current files are synced up wherever I am. It would be great if I could access everything securely and quickly over the web. I'd also like to be able to set up a background backup process.

    The kit list thus far:

        - Apple Mac Pro, 8 GB / 3 x 250 GB RAID 0 + 1 TB
        - Apple MacBook Pro 13", 8 GB / 250 GB (I spend a lot of my work time in a Windows 7 VM on this)
        - Crappy Acer laptop (for the children's use: iPlayer, watching movie/TV files)
        - HP ProLiant server, 4 GB / 80 GB + 160 GB + 300 GB
        - Sun Ultra 10, 2 x 80 GB (old, but in top-notch condition)
        - PS3, 160 GB
        - iPod Classic
        - 2 x 8 GB iPod Touch

    Observations:

        - Part of the problem is our dual use of Windows and OS X: we can't go for a pure NT-style roaming profile.
        - Because the server is also used for hosting test/beta applications and a SQL Server DB, it can't be dedicated to file serving.
        - The two Macs really could do with sharing a roaming profile or similar.
        - I'd love to do something useful with the Ultra 10. My other half has been trying to throw it away for over five years now and regularly asks what function it serves in my study :-(
        - I've got no shortage of 500 GB external USB hard drives.
        - iMovie files are very large and would ideally be processed on a RAID system.
        - Apple's Time Machine isn't so great.

    If anyone could suggest all or part of a setup that would fulfil some of my requirements, I'd be very grateful. I am willing to purchase one or two more bits of kit (an Apple TV and a Squeezebox have been mooted by friends) if they will help make efficiencies rather than add to the chaos and confusion. Thanks for looking.

  • How to allow wget to overwrite files

    - by Gnanam
    Using the wget command, how do I allow/instruct it to overwrite my local file every time, irrespective of how many times I invoke it? Let's say I want to download a file from http://server/folder/file1.html. Whenever I run wget http://server/folder/file1.html, I want file1.html to be overwritten on my local system, irrespective of whether it has changed, has already been downloaded, and so on. My intention/use case is that when I call wget, I am very sure that I want to replace/overwrite the existing file. I've tried the following options, but each one is intended/meant for some other purpose:

        -nc = --no-clobber
        -N  = turn on time-stamping
        -r  = turn on recursive retrieving
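
    For comparison, the behaviour being asked about can be sketched with -O, which fixes the output filename and truncates any existing copy on every run:

        # always (re)write file1.html, no matter what is on disk already
        wget -O file1.html http://server/folder/file1.html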

  • Bluehost: 1 Minute Delays?

    - by feklee
    On Bluehost shared hosting (Apache 2.2 + FastCGI + APC), I have the problem that some requests take almost exactly one minute to respond, yet the time spent in PHP is only two seconds. To demonstrate the issue, I created a temporary test page.

    When I asked Bluehost support about the issue, I got the following reply: "the fastcgi process don't stay running they will only stay running for a certan period which would explain the timeouts you are seeing it traffic would spawn new ones. [...]"

    I understand that spawning new FastCGI processes takes some time. But almost exactly one minute? That must be some timeout, but which timeout might that be? What I want in the end: no request should take longer than five seconds to respond, even if it fails. When I asked Bluehost support to set the Apache TimeOut directive accordingly, they told me: "we do not modify the Apache Config File even on a virtual host level."
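
    A way to measure the delay from the outside, as a sketch (the URL is a placeholder; curl's -w prints the total wall-clock time per request):

        # run repeatedly to catch the slow FastCGI spawns among fast responses
        curl -o /dev/null -s -w '%{time_total}\n' http://example.com/test.php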

  • SharePoint 2007 Event ID 6482

    - by Dave M
    Our two-server SharePoint 2007 SP2 farm has an issue: Event ID 6482 appears in the Application log of the web front end many times a day, often many times a minute. The full error, from Office SharePoint Server:

        Event Type:     Error
        Event Source:   Office SharePoint Server
        Event Category: Office Server Shared Services
        Event ID:       6482
        Date:           11/12/2009
        Time:           3:05:22 PM
        User:           N/A
        Computer:       XXXXXX
        Description:
        Application Server Administration job failed for service instance
        Microsoft.Office.Server.Search.Administration.SearchServiceInstance
        (36a9b7ef-59aa-4f94-8887-8bf7b56f2f91).
        Reason: Error during encryption or decryption. System error code 0.
        Techinal Support Details:
        System.ArgumentException: Error during encryption or decryption. System error code 0.
           at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.SynchronizeDefaultContentSource(IDictionary applications)
           at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.Synchronize()
           at Microsoft.Office.Server.Administration.ApplicationServerJob.ProvisionLocalSharedServiceInstances(Boolean isAdministrationServiceJob)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    The SharePoint site appears to be functioning normally and Search returns the expected results. Any suggestions would be appreciated.

  • My website loses packets in 70% of countries; how can I determine why?

    - by user2511667
    I checked my website with Google's PageSpeed tester and it shows a score of 90/100. I checked it on Pingdom and it shows good results there too. But when I check it on cloudmonitor.ca.com, it shows good results in 30% of countries and 100% packet loss in all the others. How can I determine why my website has packet loss, and what is the solution? Is the problem with my server or with my website? I created a new blank HTML page and set it as my index page; after testing, it still shows packet loss, so I guess the problem is not in my website. When I visit my website in a browser, it works fine. But when I test my domain or IP 198.178.123.219 from the command prompt, it shows "Request timed out". Why the timeout at the command prompt?
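
    One way to localize where along the path the loss occurs, as a sketch (requires mtr; the IP is the one from the post):

        # per-hop loss and latency statistics over 100 probes
        mtr --report --report-cycles 100 198.178.123.219

    Note also that "Request timed out" for ping alone can simply mean the server or an intermediate firewall drops ICMP echo requests.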

  • Delayed startup of network icon (wi-fi)

    - by Michael Dy
    When my Windows 7 Starter netbook boots up, after everything is loaded it takes about five minutes before the network icon near the clock becomes functional. When a known Wi-Fi signal is available after boot (the netbook is preconfigured to connect to it automatically), it connects seamlessly even while the network icon cannot be clicked or accessed. (The icon shows a loading animation for about five minutes.) During this time, when I go to Network and Sharing Center, it takes just as long to load. So even while the icon and the Center cannot be accessed, I can surf the Internet. When the netbook was new, it didn't have this issue. How do I fix this?

  • Unattended Windows XP Install Stops at Deleting Previous Installation

    - by maik
    I'm not sure if I'm just not asking Google properly or what, but I can't come up with a good answer to this problem. We have MDT 2010 set up, with a task sequence for refreshing Windows XP machines. It doesn't seem to happen every time, but frequently when we start a refresh it goes through the normal motions, and when it gets into the first part of Windows XP setup (the blue screen) it stops, telling me a Windows installation already exists at that location and that I can press L to continue, erasing everything and using that folder. I've pored over the unattend file and can't find an option that will just delete the old files and keep going, so I'm at a loss. Any ideas?

  • New Windows 7 Libraries created keep disappearing

    - by Sean
    I've just got a new laptop that came pre-installed with Windows 7 Professional. One of the new features of Windows 7 is Libraries. I'm familiar with how this works and am trying to create my own library called 'Work' to include all the work folders on my computer. However, every time I create a new custom library, after I rename it, it disappears from my Libraries menu. Each time I click Libraries in Explorer, I see only the same four default libraries, i.e. Documents, Pictures, Music, and Videos. And when I try to create a new library called 'Work' again, I get the pop-up message "Do you want to rename New library to Work (2).Library-Microsoft?", which means my original Work library still exists but for some reason I can't see it. Can someone please help me figure out why this is happening?

  • WD My Passport Essential SmartWare External Hard Drive

    - by Acer
    Hi, I've been gifted a My Passport SmartWare external hard drive (500 GB). I used it and it worked fine, and I installed WD SmartWare. The second time I used it, it was still fine, but I unplugged it without using "Safely Remove Hardware". The third time I used it, a balloon popped up saying:

        USB Device Not Recognized
        One of the USB devices attached to this computer has malfunctioned, and Windows
        does not recognize it. For assistance in solving the problem, click this message.

    I tried connecting it to other USB ports, but that didn't work. I tried uninstalling WD SmartWare and reconnecting the drive, but that didn't work either. Please help me; I like this hard drive a lot and I spoilt it so easily.

    P.S. I think about 13% of the hard drive has been used.
    P.P.S. Other USB devices work fine on this computer.
    P.P.P.S. I tried connecting it to another computer and it works fine there.

  • Understanding how Tracert works

    - by iridescent
    From what I have gathered so far, tracert works by sending three ICMP echo messages, starting with a TTL value of 1. Each router the packet encounters decrements the TTL; at the first router, 1 - 1 = 0, so that router sends an ICMP "time exceeded" message back to the sending machine. Next, the sending machine increments the TTL to 2 and the cycle repeats at the second router (2 -> 1 -> 0), and so on. Please correct me if my understanding is flawed. I am curious why the ICMP "time exceeded" message isn't displayed by tracert in the Command Prompt, since it is in fact an error message; the cycle simply proceeds. Thanks.
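
    The mechanism can be watched by hand with Linux ping, where -t sets the outgoing TTL, as a sketch (the hostname is a placeholder):

        # each low-TTL probe draws a "Time to live exceeded" reply from the
        # router at that hop -- the same replies tracert consumes silently
        # in order to print that hop's address
        for ttl in 1 2 3; do
            ping -c 1 -t "$ttl" example.com
        done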

  • LVM2 vs MDADM performance

    - by archer
    I've used mdadm + LVM2 on many boxes for quite a while. mdadm serves both the RAID0 and RAID1 arrays, while LVM2 provides the logical volumes on top of mdadm. Recently I found that LVM2 can be used without mdadm (thus one layer fewer and, as a result, less overhead) for both mirroring and striping. However, some people claim that read performance of an LVM2 mirrored array is not as good as linear LVM2 on top of mdadm RAID1, because LVM2 does not read from two or more devices at a time, but uses the second and later devices only if the first device fails, whereas mdadm reads from two devices at a time even in mirrored mode. Who can confirm that?
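
    The claim is testable directly, as a sketch (device, VG, and LV names are placeholders; use scratch disks only):

        # build one mirror each way
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        lvcreate -m 1 -L 10G -n lvmirror vg0
        # compare sequential read throughput, bypassing the page cache
        dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct
        dd if=/dev/vg0/lvmirror of=/dev/null bs=1M count=4096 iflag=direct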

  • Problem restarting my Ubuntu system

    - by VoY
    Whenever I try to restart my Ubuntu machine, either from the command line by typing reboot or from GNOME, the computer drops from X to the console and starts the shutdown process, then a message saying "[some number] Starting new kernel" appears on the screen and the computer goes back to the X login screen. I suspect this has something to do with the NVIDIA drivers, because it seemed to appear around the time I bought a new graphics card. Also, when I reboot the second time, I see weird graphical artifacts on the screen. When I boot from the Ubuntu live CD, I can reboot just fine. I used Jaunty, and recently switched to Karmic with no change. This bug is very annoying, because I have to hard-reset the computer in order to reboot it, which is not good for the filesystems either, I suspect. Can you suggest a way to debug the cause, or failing that, the easiest way to reinstall Ubuntu without losing customizations/settings/data?
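
    For what it's worth, "Starting new kernel" is the message kexec prints when it boots a new kernel directly, without going through the BIOS, which could also explain why the graphics hardware comes back in a confused state. A quick check of whether that path is active, as a sketch (assumes Debian-style packaging):

        # is kexec-tools installed, and is it configured to handle reboots?
        dpkg -l kexec-tools
        cat /etc/default/kexec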

  • Good ways to restart all the computers in a remote cluster?

    - by vgm64
    I have a cluster that I manage, and from time to time I get emails from each node (and the head node) begging to be restarted after an automatic upgrade. Currently, my best solution is a shell script like:

        $ cat cluster_reboot.sh
        ssh [email protected] reboot
        ssh [email protected] reboot
        ssh [email protected] reboot
        ssh [email protected] reboot
        ssh [email protected] reboot
        ssh [email protected] reboot

    I end up just typing the root password six times, but it works, I guess. Is there a better way? Can I force the head node to reboot the computers for me?
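
    A common refinement, as a sketch (hostnames are placeholders; assumes root's public key has been pushed to every node, e.g. with ssh-copy-id, so no passwords are typed, and reboots the head node last so the loop itself survives):

        #!/bin/sh
        # compute nodes first, head node last
        for host in node1 node2 node3 node4 node5; do
            ssh "root@$host" reboot
        done
        ssh root@headnode reboot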

  • Error Installing ruby with RVM Single User mode on Arch Linux

    - by ChrisBurnor
    I've just installed RVM on Arch Linux x64 in single-user mode via the recommended install script:

        curl -L https://get.rvm.io | bash -s stable

    I've also installed all the requirements listed in rvm requirements. However, I'm having trouble actually installing any version of Ruby, and I get the following error:

        arch:~ % rvm install 1.9.3
        No binary rubies available for: ///ruby-1.9.3-p194.
        Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
        Fetching yaml-0.1.4.tar.gz to /home/christopher/.rvm/archives
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  460k  100  460k    0     0   702k      0 --:--:-- --:--:-- --:--:--  767k
        Extracting yaml-0.1.4.tar.gz to /home/christopher/.rvm/src
        Prepare yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Configuring yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Error running ' ./configure --prefix=/home/christopher/.rvm/usr ', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
        Compiling yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/make.log
        Please note that it's required to reinstall all rubies: rvm reinstall all --force
        Installing Ruby from source to: /home/christopher/.rvm/rubies/ruby-1.9.3-p194, this may take a while depending on your cpu(s)...
        ruby-1.9.3-p194 - #downloading ruby-1.9.3-p194, this may take a while depending on your connection...
        ruby-1.9.3-p194 - #extracting ruby-1.9.3-p194 to /home/christopher/.rvm/src/ruby-1.9.3-p194
        ruby-1.9.3-p194 - #extracted to /home/christopher/.rvm/src/ruby-1.9.3-p194
        Skipping configure step, 'configure' does not exist, did autoreconf not run successfully?
        ruby-1.9.3-p194 - #compiling
        Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/make.log
        There has been an error while running make. Halting the installation.

    The log files are as follows:

        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
        __rvm_log_command:32: permission denied:
        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/make.log
        make: *** No targets specified and no makefile found.  Stop.
        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/make.log
        make: *** No targets specified and no makefile found.  Stop.

  • MySQL - Why would SHOW SLAVE HOSTS cause a binlog dump?

    - by Rory McCann
    We're getting loads of binlog files on our MySQL 5.0.x server. We have a normal master/slave replication setup with one master and one slave. Looking at /var/log/mysql.log, nearly 90% of the time when the replicator connects and runs a SHOW SLAVE HOSTS, it causes a binlog dump. For example:

        7020 Query        SHOW SLAVE HOSTS
        7020 Binlog Dump  Log: 'mysql-bin.029634'  Pos: 13273

    However, when I run SHOW SLAVE HOSTS on the MySQL server myself, I get no results. Occasionally, when the replicator does a SHOW SLAVE HOSTS, MySQL will hang for hours, and I see nothing in /var/log/syslog at the same time. What's going on here? How can I debug this further? For the record, the MySQL master and slave servers are Ubuntu Dapper.
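
    Possibly relevant to the empty result: SHOW SLAVE HOSTS only lists slaves that announce themselves via report-host. As a sketch (the hostname is a placeholder):

        # on the slave: add under [mysqld] in my.cnf, then restart mysqld
        #   report-host = slave1.example.com
        # on the master: the slave should now show up
        mysql -u root -p -e 'SHOW SLAVE HOSTS;'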

  • How to change start menu location for Windows 7 Programs

    - by user30994
    I keep my Start Menu in Windows 7 very organized, but every time there's a program update available (Safari, for example), the program recreates its shortcut icons in the default Start Menu location for that program (Safari, for example, recreates its shortcuts in "Start Menu\All Programs\Safari"). So every time I update a program, I have to move its Start Menu icons again to keep them organized the way I like. Some programs ask where I would like the Start Menu icon placed, and that works fine, but for the programs that don't ask: is there a way to set a default Start Menu location per program, so that when I update, the shortcuts are placed in the folders I want them in? (Safari, for example, I keep in "Start Menu\All Programs\Web Browsers\Safari.lnk".)

  • Company Password Management

    - by Brian Wigginton
    The topic of personal password management has been covered in great detail time after time. This question is aimed at the business or organization that needs to keep track of many unique passwords for many clients. What strategies, tools, or ideas do you have for accomplishing this? I was at an interactive agency where we needed to keep track of client DB, FTP, and mail credentials, across different environments for each app, so any one client would usually have 3-10 passwords. This gets crazy when there are more than 250 clients.

  • Postgres 9.0 locking up, 100% CPU

    - by Jake
    We are having a problem where our Postgres 9.0 server occasionally locks up and kills our webapp. Restarting Postgres fixes the problem. Here's what I've been able to observe:

        - First, usage of one CPU jumps to 100% for a few minutes.
        - Disk operations drop to ~0 during this time.
        - Database operations drop to 0 (blocks and tuples per second).
        - During this time the logs show:
              WARNING: worker took too long to start; cancelled
              WARNING: worker took too long to start; cancelled
        - No queries in the logs (only those over 200 ms are logged).
        - No unusually long-running queries logged before or during.
        - Then the second CPU jumps to 100%.
        - The number of postgres processes jumps from the usual 8-10 to ~20, matched by a spike in Postgres blocks per second (about twice normal).
        - The logs show: LOG: could not accept SSL connection: EOF detected
        - Queries run, but slowly.
        - Restarting Postgres returns everything to normal.

    Setup:

        Server:   Amazon EC2 Large
        OS:       Ubuntu 10.04.2 LTS
        Postgres: 9.0.3
        Role:     dedicated DB server

    Does anyone have any idea what's causing this? Or any suggestions about what else I should check?
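
    A snapshot worth grabbing while the server is wedged, as a sketch (the column names are the 9.0-era ones):

        # what is every backend doing, and is anything waiting on a lock?
        psql -c "SELECT procpid, waiting, current_query FROM pg_stat_activity;"
        psql -c "SELECT locktype, relation, mode, granted FROM pg_locks WHERE NOT granted;"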
