Search Results

Search found 16032 results on 642 pages for 'sync framework'.

  • Hourly cron task running more frequently than once an hour

    - by Justin
    I have a cron task that calls a special PHP script via wget. Here is the crontab entry: 0 * * * * wget http://www.... It works perfectly for several days, running on the hour, but after a few days the job starts getting called several times an hour. I have never seen cron drift like this, so I imagine it can't really be a cron issue; however, the logs of the script that is called clearly show it running several times an hour. Server details: Ubuntu Lucid, Apache, MySQL, PHP5; the time shows correct at the command line, and the server is set up to sync with an NTP server. In order for the script to run it must be passed a unique 50-character hash key in the URL, so this script isn't being called from any other source accidentally. What might cause cron to drift like this?
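
    One way to pin this down (a sketch; the URL, lock path, and log path are placeholders): wrap the job in flock and a timestamped log, so duplicate invocations are recorded precisely and can never run concurrently.

      0 * * * * flock -n /tmp/hourly-task.lock sh -c 'echo "$(date) fired" >> /var/log/hourly-task.log; wget -q -O /dev/null "http://www.example.com/task?key=HASH"'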

    Read the article

  • mdadm - Recovering a 'split' RAID1 array

    - by Hamza
    I have two drives that used to be part of a single RAID1 volume, but it appears that one of them went offline for some time, something I've noticed just now when I rebooted my system. I now seem to have two RAID volumes, as reported by:

      # cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md126 : active raid1 sdc[1]
            2096116 blocks super 1.2 [2/1] [_U]
      md127 : active (auto-read-only) raid1 sdb[0]
            2096116 blocks super 1.2 [2/1] [U_]
      unused devices: <none>

    Not exactly sure where to go from here. How can I merge and re-sync these volumes without data loss? Thanks.
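
    One possible path (a sketch only; run mdadm --examine on both disks first to confirm which array holds the current data): stop the stale single-disk array and re-add its disk to the live one, letting mdadm rebuild it from the good mirror. This discards whatever diverged on the stale disk.

      mdadm --stop /dev/md127          # stop the stale half (assumed here to be sdb)
      mdadm /dev/md126 --add /dev/sdb  # resync sdb from the current data on sdc
      watch cat /proc/mdstat           # follow the rebuild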

    Read the article

  • Need an alerting system if my cloning script fails

    - by rahum
    I've configured a nightly rsync to mirror one server to a standby offsite backup server. The total datastore on the primary is 1.5TB. In the course of getting this working, I ran into numerous instabilities with the environment, which I seem to have sorted out, but even though it's now working, I am still nervous. This is intended to be a disaster-scenario standby server, and if disaster strikes and the standby does not have all the proper data synchronized, I'm out of a job. Thus, I want to script a system that will confirm, after each nightly sync, that the destination data matches the source. I realize that rsync does this, but if rsync doesn't complete fully (which was happening during the setup troubleshooting), I need to know. Any suggestions? I'm best with Ruby, if that is relevant for the solution.
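
    A minimal sketch in shell (host, paths, and address are placeholders; assumes a local MTA for mail): rsync exits nonzero on any incomplete transfer, and a follow-up --dry-run pass that reports zero changes is a cheap end-to-end check that the trees really match.

      rsync -a --delete /data/ standby:/data/ \
        || { echo "nightly rsync exited with status $?" | mail -s 'SYNC FAILED' admin@example.com; exit 1; }
      # verification pass: any itemized output means source and destination differ
      CHANGES=$(rsync -a --delete --dry-run --itemize-changes /data/ standby:/data/ | wc -l)
      [ "$CHANGES" -eq 0 ] || echo "standby out of sync: $CHANGES differences" | mail -s 'SYNC MISMATCH' admin@example.com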

    Read the article

  • Linux: disable USB without disabling power

    - by Ergot
    TL;DR: I want to toggle a USB port between two modes from the terminal: use it as a normal USB port, or only supply power for charging. Story: I recently got something like a Magna Doodle that can save your drawings to PDF, which can then be moved to your computer via USB. The thing is, you can't save anything while it's plugged in. Since USB is the only way to charge it (and out of laziness), I want to keep it plugged in and enable the connection to the computer only when needed. I noticed that it charges and stays usable when it is plugged in while the computer is shut down or suspended, so I guess there's a way to do it. Tech info: computer: ThinkPad X201; Linux kernel: 3.14.5-1-ARCH; "Magna Doodle": Boogie Board Sync
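
    A sketch of one approach (the device ID 1-1.2 is a placeholder; find yours with lsusb -t or dmesg, and run as root): unbinding the device from the USB driver tears down the data connection while the port keeps supplying bus power, so charging should continue.

      echo '1-1.2' > /sys/bus/usb/drivers/usb/unbind   # detach the data connection
      echo '1-1.2' > /sys/bus/usb/drivers/usb/bind     # reattach when needed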

    Read the article

  • Samba does not reload user group members

    - by xato
    I am running a simple Samba server setup where users connect to a share which contains folders for specific user groups. The folders are chmod 2770, so only users who are in the correct group can read/write in them. The problem is that if I change group memberships (i.e. remove a user from a group / add a user to a group; changes are in sync between clients and server!), Samba does not automatically reload the group memberships for the user, so they can still write to folders of groups they are no longer a member of, etc. I either have to reconnect to the share or restart Samba to apply the changes. Is there any way to prevent group caching and/or enable group membership reload in Samba? My smb.conf: https://gist.github.com/anonymous/ca7c10a3b3e2168d7a03
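
    A sketch of a workaround (the PID is a placeholder taken from smbstatus output): smbd builds a user's group token when the session is set up, so forcing the stale session closed makes the next connection pick up the new membership without a full Samba restart.

      smbstatus -b      # list active sessions and their smbd PIDs
      kill 12345        # end the stale session; the client reconnects with fresh groups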

    Read the article

  • SSH RSA key works with external IP but not internal IP

    - by Ian
    I am using Rackspace cloud hosting. I have 2 servers behind a load balancer. Each server has an external IP and an internal IP. I want to set up a sync job that uses SSH to transfer files. I made an RSA key, and I can successfully SSH from server A into server B, using the external IP of server B, without being prompted for a password. If I try to do the same but use the internal IP, it prompts me for a password. I want to be able to use the key instead of the password. Why is this? Is there something special I have to do during key generation so it works for both IPs? Any help is appreciated.
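
    A diagnostic sketch (key path and IP are placeholders): verbose SSH shows which key is offered and which host key answers; if the internal IP presents a different fingerprint than the external one, the connection is landing on a different machine (for instance via the load balancer) that lacks the key in authorized_keys. Key generation itself is not tied to an IP.

      ssh -v -i ~/.ssh/id_rsa user@10.0.0.5 true   # watch for "Offering public key" and the server's host key
      ssh-keygen -F 10.0.0.5                       # compare the known_hosts entries for both IPs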

    Read the article

  • RAID1: Migrate HDD to SSD?

    - by OMG Ponies
    My current workstation uses an Adaptec 5805, with Win2008 mirrored between two 72 GB (10K?) Savvio drives. My question is whether there's a way to migrate the mirror to SSDs without installing the OS fresh - I've been looking at 90 GB Corsair Force (Sandy Bridge) drives to replace the existing setup. If I replaced one mirrored drive with an SSD, would the array sync onto it? Then I could promote the SSD to be the primary and use the second SSD as the mirror. That'd be too easy... Or should I use Ghost to get an image of the existing setup and apply it to a new mirror set up on the SSDs?

    Read the article

  • How to play two or more videos side by side in a synchronized fashion in Linux?

    - by Grumbel
    I have two (or more) video files that I want to play side by side. I could do that simply by opening them in two separate windows, but that would also separate all the controls (play/pause/forward/...). I want to play them in a synchronized fashion so that pausing/forwarding/... works on both videos simultaneously, so they always stay at the same timecode and don't go out of sync. How would I accomplish that in Linux? This is needed for viewing only, so compositing them into a new video file first should be avoided if possible, but if there isn't an easy way to do that, I welcome answers doing it with composition as well.
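
    One possible sketch with mpv (filenames are placeholders; the clips should share a frame height for hstack): loading the second file as an external track and stacking the two video streams yields a single player window, so one set of controls drives both, with no re-encoding.

      mpv a.mp4 --external-file=b.mp4 --lavfi-complex='[vid1] [vid2] hstack [vo]'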

    Read the article

  • Poor quality when trying to stream a 720p video to Xbox 360 using Media Center Extender

    - by MBraedley
    I have my Xbox 360 set up as a Media Center Extender for my Windows 7 desktop. SD-quality AVI videos stream fine to the Xbox, either through the video library or through Media Center Extender, but when I try a 720p MKV file, the frame rate plummets and the A/V sync is completely lost. I don't want to transcode or switch container formats (MKV isn't supported by the 360), but I still want to stream. Both my desktop and the 360 are plugged into the same gigabit switch, which is plugged into my ISP-supplied modem/router. The video plays fine on my machine in a number of programs. Considering that I should have more than enough bandwidth to accommodate this video, why won't it play back properly?

    Read the article

  • Using an AWS EC2 server to host a busy website: how do I set up load balancing?

    - by Philip Isaacs
    My company has one EC2 server running on AWS, with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances to serve the website from and some sort of load-balancing mechanism to distribute traffic. But maybe not, I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.
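
    A sketch of the usual split (all names illustrative): move state out of the web tier so the PHP instances stay interchangeable behind a load balancer, and push the same code to every web node rather than syncing between them.

      # ELB / load balancer -> web-1, web-2 (stateless PHP/Zend) -> RDS MySQL (single writer)
      rsync -az --delete ./app/ web-1:/var/www/app/
      rsync -az --delete ./app/ web-2:/var/www/app/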

    Read the article

  • Gnome 3 - Unable to change date and time

    - by Chris Harris
    I am running Arch Linux with Gnome 3. Although my time and date settings in /etc/rc.conf show HARDWARECLOCK='UTC' and TIMEZONE='America/LosAngeles', I continue to get the timezone Europe/London. Changing the date and time via the GUI requires root access; after authorizing it, the date and time can be changed, but after closing the GUI window it reverts back to the previous incorrect timezone. I am able to sync my time against pool.ntp.org, but this only lasts for the current session, and it is inconvenient since there is not always network access. What other solutions are available for this problem?
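
    A sketch of one likely fix: the zone name above looks invalid (the tz database spells it America/Los_Angeles, with an underscore), and /etc/localtime is what the C library actually consults, so pointing it at the right zoneinfo file takes effect regardless of the GUI.

      ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
      hwclock --systohc --utc   # keep the hardware clock on UTC, matching HARDWARECLOCK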

    Read the article

  • NFS Issues in Gnome

    - by Alex
    I mount an NFSv4 export via /etc/fstab and use the shared folder in Nautilus. There are two issues: (1) when I copy a large file (around 4 GB) to the NFS server, the progress bar rapidly goes to 2 GB and then basically stops moving, but the copy is still in progress - it is just not displayed well; (2) when I disconnect from the network without unmounting the NFS share, Nautilus freezes. How can I work around that? /etc/exports on the server:

      /export/share 192.168.0.0/24(rw,sync,insecure,no_subtree_check,anonuid=1000,anongid=1000)

    /etc/fstab on the client:

      server:/share /mnt nfs4 soft,tcp
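
    A sketch of client mount options that may help (the values are starting points, not gospel): the early jump to 2 GB is the client filling its page cache before writeback catches up, and short soft-mount timeouts let calls fail with an error instead of hanging Nautilus when the server disappears.

      server:/share  /mnt  nfs4  soft,timeo=50,retrans=2,_netdev  0  0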

    Read the article

  • Dell PowerEdge 6850 Degraded HDD

    - by Matt
    Good morning. We have a Dell PowerEdge 6850 with a degraded drive in the RAID array. I have never had to recover from such an issue, so any help or advice would be welcome. It hasn't affected the server at an operating-system level, but it has slowed down performance. I have a replacement drive in hand, but as this is our main database server I want to proceed with extreme caution. My options as I see them: (1) hot-swap the degraded drive with the new one and let the data automatically re-sync, bringing us back to normal - presumably this depends on the current RAID configuration; (2) per various comments online, re-configure the RAID array and re-build onto the new drive, which screams disaster to me, the main worry being that I wipe other data. Option 1 would of course make my day. Thanks in advance.
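
    A read-only sketch for checking first (Dell OpenManage commands, assuming it is installed and the array is on controller 0): confirm which physical disk is degraded before pulling anything; on a hot-swap-capable PERC with a healthy virtual disk, replacing the failed disk in its own slot normally starts an automatic rebuild.

      omreport storage vdisk controller=0   # virtual disk state (e.g. Degraded)
      omreport storage pdisk controller=0   # which physical disk failed, by slot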

    Read the article

  • Faulting Application installutil.exe

    - by Shahmir Javaid
    This is the random error I'm getting: "Faulting application installutil.exe, version 1.0.3705.6018, stamp 40f6266d, faulting module kernel32.dll, version 5.2.3790.2756, stamp 44c60f39, debug? 0, fault address 0x00015e02". Any ideas, anyone? Why would installutil.exe fault in kernel32.dll? Server version: Microsoft Windows Server 2003 Enterprise Edition. Could it be a framework issue?

    Read the article

  • OWA no longer accessing one backend Exchange server

    - by Morchuboo
    We have IIS hosting OWA as the web frontend to 3 backend Exchange servers. Yesterday we got a lot of event 9791 warnings: "Cleanup of the DeliveredTo table for database 'Second Storage Group\Mailbox Store EUROPE 2' was pre-empted because the database engine's version store was growing too large. 0 entries were purged." At this point the server was crawling. Our mail admin is currently away and not contactable, so we rebooted the server. Everything seems OK when reading mail from Outlook and evolution-mapi clients, but OWA and ActiveSync connections cannot get through. Users whose mailboxes are not on this backend server are fine, but users on this server can log into the OWA frontend and, once they submit their credentials, the page returns a 503 Service Unavailable error. We have since restarted the affected Exchange server and the IIS server, as well as running iisreset /noforce, but the problem persists. Can anyone suggest what we should look at...

    Read the article

  • How to force rsync to use destination directory as root

    - by thepurplepixel
    I have a simple script to one-way-sync files/folders within a directory:

      #!/bin/bash
      HOST='<hostname>'
      USER='<username>'
      DIR='/downloads/'
      SOURCE='/srv/torrents'
      rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i $SOURCE $HOST:$DIR
      find $SOURCE -type d -empty -prune -exec rmdir -p \{\} \;

    However, when this rsync operation runs, it creates a folder, torrents, in /downloads on the destination machine. How can I force rsync to put all folders & files from /srv/torrents into /downloads/ instead of creating /downloads/torrents as a separate directory?
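
    A sketch of the usual fix: a trailing slash on an rsync source means "the contents of this directory" rather than the directory itself, so only the source argument needs to change.

      rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i "$SOURCE/" "$HOST:$DIR"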

    Read the article

  • What happened to walterzorn.com?

    - by Vinz
    I just wanted to use the function plot from walterzorn.com, but it seems like the site is down; I only get 404 responses. Does anybody know what's up with the site and if/when it will be back online? So far I could only find the following thread, which doesn't have any answers. If you don't know: Walter Zorn had a JavaScript framework that enabled you to draw vectors with DHTML, back before there was canvas.

    Read the article

  • Linux Software RAID runs checkarray on the First Sunday of the Month? Why?

    - by mgjk
    It looks like Debian defaults to running checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2 TB mirror. Doing this "just in case" is bizarre to me. Discovering data out of sync between the two disks, with no quorum to arbitrate, would be a failure anyway: this massive check could only tell me that I have an unrecoverable drive failure and corrupt data, which is nice to know but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check necessary? Should I take it out of my cron?

      /etc/cron.d# tail -1 /etc/cron.d/mdadm
      57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet

    Thanks for any insight,
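
    A sketch of the clean way to turn it off on stock Debian (rather than hand-editing the cron entry): the monthly scrub is gated by a flag in /etc/default/mdadm. That said, the scrub exists to surface latent sector errors while the mirror can still repair them, so rescheduling it to a quieter window may be wiser than disabling it.

      sed -i 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm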

    Read the article

  • Transfer Win8 user settings between profiles [closed]

    - by GlennFerrieLive
    Possible duplicate: How do I sync grouped Windows Store apps between devices? Is there a way for me to copy/save/transfer my "start menu" configuration, meaning the grouping and ordering of the elements on the Start screen, between user profiles? Is it in the registry? I am open to manual or "coded" suggestions. UPDATE: I'd like to VETO this closing. I am aware of the "roaming" profile behavior. I want to COPY my configuration BETWEEN profiles on the same machine: DIFFERENT profile, DIFFERENT person. I like the way my Start screen is set up; I want to set my wife up with the same layout.

    Read the article

  • Directly printing to remote CUPS/IPP server on Snow Leopard

    - by Martin v. Löwis
    I need to use Kerberos authentication when printing from my OS X machine; however, the machine itself does not have a service account in Active Directory, so the KDC will not issue a delegation ticket for the local CUPS installation. I think printing could work if the printing framework printed directly to the network CUPS server (or even to the Windows print server), bypassing the local CUPS. Is it possible to set up printing so that it directly accesses the remote print server? (Asking for a service ticket for that server would succeed.)
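
    A sketch of one way to bypass the local cupsd (hostname is a placeholder): the CUPS client libraries honor /etc/cups/client.conf, so pointing ServerName at the remote server makes applications submit jobs there directly, where the user's Kerberos ticket can be presented.

      echo 'ServerName printserver.example.com' >> /etc/cups/client.conf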

    Read the article

  • Highly Available Web Application (LAMP)

    - by Anthony Rizzo
    I work for a small company that provides a web application for thousands of users. Earlier this year we had one server hosted at one company. We recently acquired another server in a different location, with the hope of one day making it a redundant failover machine. I understand what to do with the MySQL replication - I plan on using a master-master replication setup, and rsync to sync the scripts and files - however I am at a standstill about how to configure the failover. Ideally I would like the two machines to accept requests, like round-robin DNS; however, if one machine goes down, I do not want requests to go to that machine. All of the solutions I have come across assume high availability of servers in the same location; these servers are in two completely different locations with different public IP addresses. Any help would be great. Thanks
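
    A sketch of the usual compromise for machines on different networks (the DNS-update helper is hypothetical; substitute your DNS provider's API): shared-IP failover such as VRRP/keepalived cannot span separate locations, so a monitor plus a low-TTL DNS record is the common fallback.

      if ! curl -fsS --max-time 5 http://primary.example.com/health >/dev/null; then
          update-dns www.example.com standby.example.com   # hypothetical helper that calls the DNS provider's API
      fi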

    Read the article

  • How do I update the memberOf attributes of existing objects after adding the OpenLDAP Reverse Group Membership Maintenance overlay?

    - by mss
    This is a follow-up to this question: I added the memberof overlay to an existing OpenLDAP 2.4 server. Now I want to update the existing user objects. For new group memberships, the memberOf attribute is updated correctly, but I have a bunch of existing groups which aren't updated automatically. I could remove all users from their groups and re-add them to make sure these entries are in sync. But since this is a Univention Corporate Server, which does a lot of magic when you modify the LDAP, I don't want to risk breaking my directory. Is there a way to trick the overlay into updating these operational attributes?
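
    A sketch of the remove/re-add approach for a single membership (all DNs are placeholders): rewriting an existing member value makes the overlay regenerate memberOf for that entry without changing anything else. Test on one group before scripting it across the directory.

      ldapmodify -x -D 'cn=admin,dc=example,dc=com' -W <<'EOF'
      dn: cn=mygroup,ou=groups,dc=example,dc=com
      changetype: modify
      delete: member
      member: uid=alice,ou=users,dc=example,dc=com
      -
      add: member
      member: uid=alice,ou=users,dc=example,dc=com
      EOF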

    Read the article

  • Connect to a web server running on Windows 7 using its IP from a browser

    - by Optimus Prime
    I'm not sure if this is the proper place to ask this question. Here is my issue: I'm using Windows 7 and I have installed Zope Server (Zope is a Python web framework which has a built-in server). I can connect to this server from my browser by typing localhost:8080, but if I try to connect to the server from another machine using my IP - or even from my own system, i.e. xxx.xxx.xx.xxx:8080 - it doesn't work.
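
    A sketch of the likely cause (the directive follows Zope 2's zope.conf conventions; check your copy): if the built-in server binds only to 127.0.0.1 it is unreachable by IP, so bind it to all interfaces and open TCP 8080 in Windows Firewall.

      <http-server>
        address 0.0.0.0:8080
      </http-server>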

    Read the article

  • Missing menu items for Azure SQL tables within SQL Server Management Studio?

    - by Sid
    I have a table (say Table1) that is replicated via SQL Data Sync Agent across a local SQL Server 2012 and an Azure SQL server (part of Microsoft Azure). Everything about Table1 (schema, table values, etc.) is identical to the best of my understanding. However, when I list and right-click Table1 in Microsoft SQL Server Management Studio 2012 (SSMS), I get some very different menu options, even for seemingly basic stuff. Let's focus only on the 'Design' menu item: it is visible for Table1 on the local SQL server in SSMS; it is missing for Table1 on Azure SQL via SSMS; and it is visible for Table1 (as Open Table Definition) on Azure SQL when reached via Visual Studio 2012 (Server Explorer - Data connections). (Screenshots are not reproduced here.) I use scripts for most real work (especially when I need to check in the SQL scripts), but this difference concerns me to some extent. Am I witnessing just a tools artifact in SQL Server Management Studio when connecting to Azure SQL, or is it something more serious about limitations of Azure SQL itself (although just seeing the Design surface is so basic!)?

    Read the article
