Search Results

Search found 5793 results on 232 pages for 'ftp sync'.

  • NTPD issue - syncs then slowly loses ground

    - by ethrbunny
    RHEL 5 workstation. Has been running smoothly for years. I did a 'pup' recently and followed with a nice, cleansing reboot. Afterwards the system had some startup issues: namely, MySQL refused to start. It just went "...." for 5-10 minutes before I did another boot and skipped that step (using 'interactive'). This was the only service that didn't want to start normally. Now that the system is booted, I've found that it doesn't want to stay in sync with the NTP master, and after 48 hours it refuses any SSH login other than root. NTPD: this service starts normally and gets a lock on 4 servers. Almost immediately it starts to lose ground, and now (after 3 days) it is almost 40 hours behind. If I stop/start the service it gets the lock, resets the system clock, and starts losing ground again. The 'hwclock' is set properly and maintains its time. Login: when I (re)start the ntp server I am able to log in normally. I assume this problem is due to losing sync with LDAP, which appears to be confirmed by LDAP errors in /var/log/messages. Suggestions on where to look? ADDENDA: I tried deleting the 'drift' file; after a bit it gets recreated with 0.000. From /var/log/messages:

      Jan 17 06:54:01 aeolus ntpdate[5084]: step time server 129.95.96.10 offset 30.139216 sec
      Jan 17 06:54:01 aeolus ntpd[5086]: ntpd [email protected] Tue Oct 25 12:54:17 UTC 2011 (1)
      Jan 17 06:54:01 aeolus ntpd[5087]: precision = 1.000 usec
      Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface wildcard, 0.0.0.0#123 Disabled
      Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface wildcard, ::#123 Disabled
      Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface lo, ::1#123 Enabled
      Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface eth0, fe80::213:72ff:fe20:4080#123 Enabled
      Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface lo, 127.0.0.1#123 Enabled
      Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface eth0, 10.127.24.81#123 Enabled
      Jan 17 06:54:01 aeolus ntpd[5087]: kernel time sync status 0040
      Jan 17 06:54:02 aeolus ntpd[5087]: frequency initialized 0.000 PPM from /var/lib/ntp/drift
      Jan 17 06:54:02 aeolus ntpd[5087]: system event 'event_restart' (0x01) status 'sync_alarm, sync_unspec, 1 event, event_unspec' (0xc010)

    You can see the 30-second offset. This was after about one minute of operation.
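
    A first diagnostic pass (a sketch, assuming the stock RHEL 5 ntp tools) is to check whether ntpd itself believes it is synced, and how fast the clock is really drifting:

      # show peer status; a '*' marks the selected sync source
      ntpq -p
      # query the offset against a reference without stepping the clock
      ntpdate -q 129.95.96.10
      # show the kernel clock discipline status
      ntptime

    A clock that loses roughly 30 seconds per minute points at a broken system tick (lost timer interrupts or a bad clocksource) rather than at ntpd itself, since ntpd can only slew out a few hundred PPM of drift.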

    Read the article

  • Permission settings for apache2 web content directories with several users?

    - by John
    Hi there. I've got a Debian VPS set up with a LAMP stack. My apache2 instance runs as the user 'www-data'. In addition to the root account and the service accounts, I have several user accounts belonging to friends, family and myself that include FTP access. This is to allow the users to drop files into the root of their domain, which is located in their home folder. I am having issues setting the correct permissions so that Apache is able to serve the content ("403 Forbidden"). I could just do a 'chmod -R 755 *' on the entire www directory for each domain, but from what I gather that's not a good idea. Here's an example of the structure: apache2 is run by 'www-data', and user 'john' has this home folder structure:

      /home/john/domains/somedomain.com/www
      /home/john/domains/sub.somedomain.com/www

    How can I keep things safe while still allowing users to upload content via FTP, and allow for file uploads in, let's say, Wordpress?
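
    One common arrangement (a sketch using the names from the question; it assumes Apache only needs to read the files) is to leave the user as owner, hand the group to www-data, and grant the group read/traverse but not write:

      # let Apache traverse into the home directory, but nothing more
      chgrp www-data /home/john && chmod 710 /home/john
      # web content: the owner keeps full control via FTP, Apache reads via the group
      chgrp -R www-data /home/john/domains
      find /home/john/domains -type d -exec chmod 750 {} \;
      find /home/john/domains -type f -exec chmod 640 {} \;

    For Wordpress uploads, only the specific upload directory then needs group write (e.g. chmod 770 on wp-content/uploads) instead of opening the whole tree.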

    Read the article

  • How to speed up rsync?

    - by Jakobud
    I'm running rsync to sync a directory onto my external USB HDD. It's about 150 GB of data, 50,000+ files I would guess. It's running its first sync at the moment, but it's copying files at a rate of only 1-5 MB/s. That seems incredibly slow for a USB 2.0 enclosure. There are no other transfers happening on the drive either. Here are the options I used: rsync -avz --progress /mysourcefolder /mytargetfolder. I'm running Ubuntu Server 9.10.
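
    Two of those options work against a purely local copy: -z spends CPU compressing data that never crosses a network, and the delta-transfer algorithm adds overhead on a first-time copy. A sketch of the same command without them (paths as in the question):

      rsync -aW --progress /mysourcefolder /mytargetfolder

    -W (--whole-file) disables the delta algorithm, and dropping -z removes the compression step; on a modest CPU, compression alone can cap a local transfer at a few MB/s.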

    Read the article

  • crontab not executing all lines

    - by kiasecto
    I have a sudo crontab like this to sync time:

      # m h dom mon dow command
      0 6 * * * ntpdate 10.3.3.3 >> /var/mylog/ntp.log
      0 7 * * * /var/mylog/backup.sh >> /var/mylog/backup.log

    The problem I am having is that the first line (ntpdate) never seems to execute. If I run it manually with sudo, that line works. cron does run backup.sh at 7, but it never executes the ntp sync at 6. The syslog doesn't seem to show anything. The system is Ubuntu 10.04 LTS.
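
    The usual culprit for "works with sudo, silent from cron" (a guess, but a common one) is cron's minimal PATH, which omits /usr/sbin where ntpdate lives; and because the line only redirects stdout, the resulting "command not found" error is discarded. A sketch of the hardened entry:

      0 6 * * * /usr/sbin/ntpdate 10.3.3.3 >> /var/mylog/ntp.log 2>&1

    With 2>&1 appended, any future failure at least leaves a message in the log.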

    Read the article

  • Windows 7 - cancel mirror synchronisation

    - by Chris W
    I've got basic OS-managed disk mirroring set up in Windows 7 for a couple of volumes. After a power failure, the mirrors are currently resyncing. These are only small volumes of data, but the sync has not completed after more than 24 hours. Is there any way to stop this? It's driving me nuts. I need to get the machine back to a usable state to get some work done, but it's a bit of a dog whilst this sync is going on. I've tried removing the mirrors, but it won't let me do that whilst the resync is in progress.
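
    When the Disk Management GUI refuses, the diskpart command line is sometimes more permissive (a sketch; it may equally refuse mid-resync, and breaking the mirror leaves a single unprotected volume, so have backups first):

      diskpart
      DISKPART> list volume
      DISKPART> select volume 2
      DISKPART> break disk=1 nokeep

    The volume and disk numbers above are hypothetical; pick the mirrored volume and the disk whose copy you want discarded. Without nokeep, both halves are kept as separate simple volumes.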

    Read the article

  • new monitor won't work on win3.1

    - by Rick Workover
    The old monitor on my Windows 3.1 machine failed, and the new flat screen shows a sync error: the new monitor does not support the refresh rate/screen size the machine is set to. It boots up into Windows, and the display stays blank apart from the sync error. Without a working display in Windows 3.1 I don't know how to change the refresh/size settings, and I haven't used Win 3.1 in so long that I can't remember how to fix this. So, how do I reset the refresh rate and screen resolution it is set to, without being able to actually see anything?
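
    Windows 3.1 takes its video mode from the display driver named in SYSTEM.INI rather than from the monitor, so the usual blind fix (a sketch from memory; exact menus vary with the vendor's setup) is to exit to DOS and rerun the character-mode Windows setup program, which works on any screen:

      C:\> cd \WINDOWS
      C:\WINDOWS> setup

    In the setup screen, change the Display entry to plain "VGA" (640x480), which virtually any monitor can sync to, then restart Windows and pick a mode the new flat screen supports.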

    Read the article

  • Windows 7 won't read from NAS on LAN

    - by Alfy
    I've got a Linkstation NAS drive on a local network. Having just got a new laptop with Windows 7 Home Professional, I can no longer read anything off the drive. I've tried accessing the drive using \\192.168.1.55\share, using FTP programs such as WinSCP and FileZilla, and even using Firefox to hit ftp://192.168.1.55. The really annoying thing is that through all of these methods I can see the files on the drive, which rules out any kind of connection issue. I can navigate through the NAS file system, but as soon as I try to copy a file off the NAS, things just stop working. Accessing the drive through a Windows XP machine works fine. So far I've tried:

      - Disabling firewalls
      - Adding the LmCompatibilityLevel key to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
      - Using the 40-56 bit encryption instead of the 128 bit

    Has anyone got any suggestions of what I can check or try? This is driving me crazy and I'm totally out of ideas. Thanks
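
    For reference, the LmCompatibilityLevel change described above in command form (a sketch; the value of 1, which permits LM and NTLM responses, is the one usually suggested for older NAS firmware):

      reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

    A reboot, or at least logging off and back on, is generally needed before the new level takes effect.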

    Read the article

  • Changing Network Path of Offline Files

    - by Adam
    Many of our users have their Home folder set as Available Offline. Their Windows 7 laptops will not be back on our network for a few weeks. In the meantime, we're setting up new servers and reorganizing our files, so the network path to the Home folder is going to be completely different. Based on some testing I did, when the users return, any files they've created or modified while offline will be gone, and the new Home folder will be there, not set to sync. The offline cache of the old Home folder is still accessible through the Sync Center, but they're not going to want to dig through that and try to find what's missing. Avoiding this would involve keeping the old server around and moving everyone to the new location in person, so we know for sure they're synced first. Is there any way to avoid this that isn't as tedious, like a quick registry edit or something that will point the old offline cache to the new location?

    Read the article

  • rsync chown warning

    - by Ted Kim
    I am trying to sync two directories using rsync. The source is on Linux and the destination is on Windows, so I mount the Windows directory on the Linux system using mount -t cifs ..... and then execute rsync. Everything is OK, but rsync prints out:

      rsync: chown "/mnt/windows/A/." failed: Permission denied (13)
      rsync: chown "/mnt/windows/A/readme.txt" failed: Permission denied (13)

    I want to sync the directories without changing ownership. How can I do that? Please let me know. Thanks in advance.
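
    The chown attempts come from -a, which implies -o (owner) and -g (group), and a CIFS mount generally cannot honour either. A sketch that keeps the rest of -a's behaviour but switches ownership handling off (the source path is hypothetical; the destination is taken from the error messages):

      rsync -av --no-owner --no-group --no-perms /source/dir/ /mnt/windows/A/

    --no-perms is included because chmod often fails on CIFS for the same reason; drop it if your mount options map permissions properly.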

    Read the article

  • ProFTPD mod_tls is not loaded properly?

    - by develroot
    The server is running CentOS 5 with DirectAdmin. I am trying to get ProFTPD working over TLS; however, it seems that proftpd is lacking mod_tls support, even though it was compiled with mod_tls.

      # proftpd -l
      Compiled-in modules:
        mod_core.c
        mod_xfer.c
        mod_auth_unix.c
        mod_auth_file.c
        mod_auth.c
        mod_ls.c
        mod_log.c
        mod_site.c
        mod_delay.c
        mod_facts.c
        mod_ident.c
        mod_ratio.c
        mod_readme.c
        mod_cap.c

    As you can see, there is no mod_tls.c. However, the DirectAdmin configuration file for proftpd suggests that it was built with TLS support:

      # cat /usr/local/directadmin/custombuild/configure/proftpd/configure.proftpd
      #!/bin/sh
      install_user=ftp \
      install_group=ftp \
      ./configure \
          --prefix=/usr \
          --sysconfdir=/etc \
          --localstatedir=/var/run \
          --mandir=/usr/share/man \
          --without-pam \
          --disable-auth-pam \
          --enable-nls \
          --with-modules=mod_ratio:mod_readme:mod_tls

    And all I get when I try to connect over FTPS using FileZilla is:

      Response: 220 ProFTPD 1.3.3c Server ready.
      Command:  AUTH TLS
      Response: 500 AUTH not understood
      Command:  AUTH SSL
      Response: 500 AUTH not understood

    Am I missing something? Thanks.
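
    Since proftpd -l reports what is inside the binary actually being run, not what the configure script asks for, a first check (a sketch; the custombuild target is assumed from the DirectAdmin layout shown above) is whether the running binary is the one custombuild produced, and rebuilding if in doubt:

      which proftpd
      cd /usr/local/directadmin/custombuild
      ./build proftpd

    Once mod_tls.c appears in proftpd -l, AUTH TLS still needs TLSEngine on (and a certificate) in the proftpd configuration before the 500 goes away.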

    Read the article

  • How to change aging AD password while connected over VPN from Mac

    - by Franek Kuciapa
    I connect to the office from a Mac via VPN (Cisco AnyConnect Secure Mobility Client). I do not know what to do when my AD password on the firm's side ages and approaches expiration, to make sure that my Mac and the VPN continue to work afterwards. Is the proper thing to do in this case to connect via VPN and then change the password on the Mac via System Preferences > Users & Groups? Will this update AD on the server side? Will it also sync the PointSec install that is running on the Mac? Or is the better procedure to RDP to a Windows box while connected over the VPN and change the password there, hoping the Mac will somehow sync up? Running Mountain Lion on the Mac.

    Read the article

  • firehol (firewall) with bridge: how to filter

    - by Leon
    I have two interfaces: eth0 (public address) and lxcbr0 (10.0.3.1), and an LXC guest running with IP 10.0.3.10. This is my firehol config:

      version 5

      trusted_ips=`/usr/local/bin/strip_comments /etc/firehol/trusted_ips`
      trusted_servers=`/usr/local/bin/strip_comments /etc/firehol/trusted_servers`

      blacklist full `/usr/local/bin/strip_comments /etc/firehol/blacklist`

      interface lxcbr0 virtual
          policy return
          server "dhcp dns" accept

      router virtual2internet inface lxcbr0 outface eth0 masquerade
          route all accept

      interface any world
          protection strong
          # Outgoing, these protocols are allowed to everywhere
          client "smtp pop3 dns ntp mysql icmp" accept
          # These (incoming) services are available to everyone
          server "http https smtp ftp imap imaps pop3 pop3s passiveftp" accept
          # Outgoing, these protocols are only allowed to known servers
          client "http https webcache ftp ssh pyzor razor" accept dst "${trusted_servers}"

    On my host I can connect only to "trusted servers" on port 80, but in my guest I can connect to port 80 on every host. I assumed that firehol would block that. Is there something I can add/change so that my guest(s) inherit the rules of the eth0 interface?
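
    Traffic from the guest is forwarded, so it only ever hits the router block, and "route all accept" passes everything; the interface blocks never apply to forwarded packets. A sketch of a tighter router, mirroring the client rules above (firehol syntax assumed from the config shown; untested):

      router virtual2internet inface lxcbr0 outface eth0 masquerade
          route "smtp pop3 dns ntp mysql icmp" accept
          route "http https webcache ftp ssh" accept dst "${trusted_servers}"

    Anything not matched then falls through and is rejected, giving the guests the same outbound restrictions as the host.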

    Read the article

  • rsync creating thousands of .DS_Store files from mounted volume

    - by daniel Crabbe
    I've been using rsync on OS X to sync all our website admins. It was working fine until the OS X 10.6.3 update! Now it creates thousands of empty (0 KB) folders. It only does this when syncing to a mounted network drive (which we need to do); when I sync to my local drive it works as usual. I've tried excludes, which don't seem to be working... and I have also tried a different version of rsync, so it's an OS X issue.

      echo ""
      echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
      echo " SYNCING up KINEMASTIK"
      echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
      /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Users/dan/Dropbox/documents/WORK/kinemastik/WEBSITE/youradmin/

      echo ""
      echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
      echo " SYNCING up CHRIS BROOKS YOURADMIN"
      echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
      /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Volumes/Groups/Projects/516_ChrisBrooks/website/youradmin/

    Has anyone experienced the same problem?
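
    One direction to try (a sketch, and partly a guess: -N, -A and -X copy OS X create times, ACLs and extended attributes, which a non-HFS network volume cannot store natively, so they are plausible triggers) is to strip the metadata flags when the destination is the mounted share, and make the exclude file explicitly cover the Finder droppings:

      # exclude.txt
      .DS_Store
      ._*

      /usr/local/bin/rsync -aHv --progress --exclude-from 'exclude.txt' /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ /Volumes/Groups/Projects/516_ChrisBrooks/website/youradmin/

    If the empty folders stop appearing, reintroduce -N, -A and -X one at a time to find the culprit.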

    Read the article

  • how to pipe data to an sftp connection?

    - by JMW
    ftp supports the put "|..." "remote-file.name" command to pipe data to an FTP connection. Is there something similar available for sftp? In sftp I get the following error:

      sftp 'jmw@backupsrv:/uploads'
      sftp> put "| tar -cx /storage" "backup-2012-06-19--17-51.tgz"
      stat | tar -cv /storage: No such file or directory

    So the sftp client obviously does not execute the command. I want to use a pipe to redirect the file stream directly to the sftp connection, because there is not enough space left to create the backup file on the same disk before uploading it to the sftp server.
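
    sftp has no equivalent of ftp's put "|...", but an account with sftp access normally has ssh access as well, and ssh carries a pipe happily (a sketch; the host and target path are taken from the session above, and it assumes the account may run a remote shell command):

      tar -czf - /storage | ssh jmw@backupsrv "cat > /uploads/backup-$(date +%F--%H-%M).tgz"

    Nothing touches the local disk: tar writes the archive to stdout and it streams straight into the remote file. The $(date ...) is expanded locally before ssh runs.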

    Read the article

  • How to determine the used size of a device's associated buffer

    - by dubbaluga
    Hi, when mounting a device without the "sync" option, e.g. by invoking the following:

      mount -o async /dev/sdc1 /mnt

    a buffer is associated with the device to optimize (speed up) read/write operations. Is there a way to determine the size of this buffer? Another question that comes to mind: is it possible to find out how much of it is currently in use? This would be interesting for estimating the time it would take to "sync" or "umount" slow devices, such as flash-based media. Thanks in advance for your answers, Rainer
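
    On Linux this write-back buffer is not a fixed per-device allocation but dirty pages in the page cache, so the closest measurements (a sketch) are the system-wide dirty counters and their configured ceilings:

      # data waiting to be written back, in kB (system-wide, not per device)
      grep -E 'Dirty|Writeback' /proc/meminfo
      # the ceilings, as percentages of RAM
      cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio

    Watching the Dirty figure drain while a sync runs gives a rough progress indicator for a slow flash device.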

    Read the article

  • Netboot Debian (wheezy) from NFS v4

    - by bara
    Is it possible to boot Debian Wheezy from NFS v4? Booting with NFS v3 works just fine; NFS v4 does not. This is in my /etc/exports:

      /nfs       192.168.100.0/24(ro,sync,insecure,no_root_squash,no_subtree_check,fsid=0)
      /nfs/root  192.168.100.0/24(ro,nohide,sync,insecure,no_root_squash,no_subtree_check)

    /nfs/root/www contains the root of the webserver. The command line is:

      rootfstype=nfs4 root=/dev/nfs4 nfsroot=192.168.100.1:/root/www

    which fails with "mount call failed - server replied: Permission denied". Mounting from the busybox in the initrd fails too:

      mount -t nfs4 192.168.100.1:/nfs/root/www /root
      mounting .. failed: Invalid argument

    Do I need to modify the initrd?
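
    One thing stands out (an observation based on the exports above, not a tested fix): with fsid=0, NFSv4 clients address shares relative to that pseudo-root, so the export is /root/www to a v4 client, and the busybox attempt with the full /nfs/root/www path would fail for that reason alone. A quick cross-check from any other Linux machine:

      mount -t nfs4 192.168.100.1:/root/www /mnt

    If that mounts cleanly, suspicion shifts to the initrd: the klibc/busybox NFS mount helpers in some versions only speak NFSv2/v3, in which case the initrd does need replacing or extending before a v4 root can work.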

    Read the article

  • TCP/IP & throughput between FreeNAS (BSD) server & other LAN machines

    - by Tim Dickerson
    I have a question for someone that knows BSD a bit better than I do, regarding my LAN setup at home/work here outside Chicago. I can't seem to fully optimize my network's (LAN) throughput via my FreeNAS (BSD-based) file server. It runs the latest FreeBSD release, modified to support several protocols for file transfers and more. Every machine behind my Smoothwall (Linux-based) router is on the usual 192.168.0.x subnet and for the most part works just fine. Behind the Smoothwall box, all machines are connected to a GB HP unmanaged switch. I host a large WISP here and have an OC-3 connection at home/work, and have no issues with downloading/uploading from/to the net. My problem is with throughput. When I try to transfer large files (really any, for that matter) between any of the machines and the FreeNAS server via FTP, the max throughput I can achieve, say between a Win 7 or a Linux box, is ~65 Mbit/sec. All machines are running Intel Pro 1000 GB NICs and all cable is CAT6. Each is set to auto-negotiation and each shows 1500 MTU, full duplex at 1 GB, so I know the hardware is okay. I have not adjusted the MTU on any machine, as I understand it to be pointless unless certain configurations are used (I assume I am not one of those). My settings for the FreeNAS machine are the following:

      # FreeNAS /etc/sysctl.conf - pertinent settings shown
      kern.ipc.maxsockbuf=262144
      kern.ipc.nmbclusters=32768
      kern.ipc.somaxconn=8192
      kern.maxfiles=65536
      kern.maxfilesperproc=32768
      net.inet.tcp.delayed_ack=0
      net.inet.tcp.inflight.enable=0
      net.inet.tcp.path_mtu_discovery=0
      net.inet.tcp.recvbuf_auto=1
      net.inet.tcp.recvbuf_inc=524288
      net.inet.tcp.recvbuf_max=16777216
      net.inet.tcp.recvspace=65536
      net.inet.tcp.rfc1323=1
      net.inet.tcp.sendbuf_inc=16384
      net.inet.tcp.sendbuf_max=16777216
      net.inet.tcp.sendspace=65536
      net.inet.udp.recvspace=65536
      net.local.stream.recvspace=65536
      net.local.stream.sendspace=65536
      net.inet.tcp.hostcache.expire=1

    From what I can tell, that looks to be a somewhat optimized profile for a typical BSD machine acting as a LAN server. I might be wrong, and I just wanted to find out from someone that knows BSD better than I do whether that is OK, or whether something is out of tune. Are there other approaches better suited to P2P file transfers? I honestly do not know what I SHOULD be seeing for throughput between the NAS box and a client when transferring files via FTP, but I am told that what I get on average (40-70MB/sec) is too low for what it could be. I have thought about adding another NIC in both the FreeNAS box and the Win7 machine and using a crossover cable via a static route, but wanted to check with someone first to see if that might be worth it. I don't know whether doing that would bypass the HP GB switch and allow machine-to-machine transfers anyway. The FTP client I use is FileZilla, and I have tried both active and passive modes with no real gain over each other. The NAS box runs ProFTPD.
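
    Before tuning sysctls further, it helps to separate raw TCP throughput from disk and FTP overhead (a sketch; iperf is assumed to be available on both ends, and the server address is hypothetical):

      # on the FreeNAS box
      iperf -s
      # on a Windows or Linux client
      iperf -c 192.168.0.10 -t 30

    If iperf reports somewhere near 900 Mbit/s, the network stack and the sysctl profile are fine and the bottleneck is disks or ProFTPD; if iperf is also slow, it is NIC/driver/switch territory, and no amount of FTP tuning will change it.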

    Read the article

  • how can I move events in my iPod calendar to Google? It won't let me move items from the iPod to Google

    - by Johnny S.
    I've followed Google's recommendations and steps for syncing my iPod Touch (newest OS) with my Google calendar. Sync works great when Google calendar events are added or deleted on the calendar marked for syncing; they show up on my iPod. But when I make changes in the native iPod Touch calendar, they are not reflected in the Google calendar marked for syncing. What gives? I have also been unable to do an initial sync that would move my iPod calendar events to my Google calendar. Any suggestions?

    Read the article

  • I need a few minutes of dedicated server a week, but not for hosting, just to convert ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 MP3s to Ogg files, in various directories, a couple of times a week, done automatically in response to the detection of an uploaded MP3. Probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do would be: wget/ftp the MP3 files, convert them to Ogg, and FTP the files back to my hosting. Of course, all this wouldn't be needed if there were such a thing as a compiled binary of SoX (or any MP3-to-Ogg converter) for CentOS that I could upload without needing root access, but I've given up asking for that one. Always open to suggestions!
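
    For reference, the conversion step itself is tiny once a SoX build with MP3 and Vorbis support is available (a sketch; file names are hypothetical, and a statically linked sox binary uploaded to the home directory would indeed avoid needing root):

      # convert every mp3 in the current directory to ogg
      for f in *.mp3; do sox "$f" "${f%.mp3}.ogg"; done

    So the whole weekly job amounts to fetching the MP3s, running this loop, and FTPing the results back.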

    Read the article

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing that the cache directory is filling up very quickly, with files I know I will never need. For example, there is a 'sandbox' directory in the depot where users keep personal branches and other work; a p4 sync causes the p4 proxy cache to grab those users' sandboxes even though I'll never need them. I would create a symbolic link pointing the sandbox directory to /dev/null, but then I wouldn't be caching my own sandbox, which I am interested in. Is there any way to tell the Perforce proxy something to the effect of "if I haven't had to sync it, please don't cache it"?

    Read the article

  • Best and Proper Permissions Settings for Directory

    - by Dr. DOT
    I am interested in knowing the proper, yet security-conscious, settings for a directory. Here's my scenario: I have a username for FTP access to my server called "user". For the purpose of the scenario, PHP runs as "nobody" on my server. I have a directory off the document root called "sample". The "sample" directory is chmod'd at 0755 (drwxr-xr-x); "sample" is owned by "user" and the group is set to "user". The above is all very straightforward and standard. Now, I want a script to be able to create (mkdir) and delete (rmdir) directories under "sample", yet I don't want to overly expose my server by opening up the permissions (I could easily chmod sample to 0777 and make it world-writable). What is the best combination of permissions, owner settings and/or group settings to allow my script to create and delete directories under "sample" while retaining the ability for "user" to continue to FTP into the directory? Thanks.
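
    One middle ground (a sketch using the names from the question; it assumes you can change the group and that "nobody" has a group of the same name) is to keep "user" as the owner but give the group to the account PHP runs as, with group write and the setgid bit so new subdirectories inherit the group:

      chown user:nobody sample
      chmod 2775 sample

    "user" retains full FTP control as the owner, the PHP script can mkdir/rmdir through group write, and everyone else still gets only read and traverse.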

    Read the article

  • Transfer files using ssh

    - by zozo
    Good day to all. I am using SSH (WinSCP) to transfer some files from a server to my workstation. The problem is that with some files I get disconnected, and it is always the same files. I am the owner of the directory, so I guess file permissions are not the problem (I also set the permissions to 777). Is there a size limit or something like that? Thank you for your time. The protocol is SFTP, the server is a 32-bit machine, and the files are 100 MB tops. Added: it worked with FileZilla using FTP. That temporarily fixes the problem, but it is not exactly a solution, since next time I may not have root access to create an FTP account.
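
    When the same files fail every time, a command-line transfer with resume support both works around the drop and usually prints a more useful error than a GUI (a sketch; the host and path are hypothetical, and it assumes rsync is present on the server):

      rsync -av --partial --progress -e ssh user@server:/path/to/files/ .

    --partial keeps whatever has arrived when the connection drops, so rerunning the same command continues from the partial copy instead of starting over.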

    Read the article

  • Modify .htm files on wireless router

    - by mdeitrick
    What am I trying to do? I am attempting to access the .htm files on my wireless router to modify the look and feel of the Netgear GENIE web page(s). What have I done? I've read several articles on eHow and Instructables that detail how to set up your router as an FTP server, since I figured the best way to access the files would be through FileZilla. Setting up an FTP server through my router doesn't sound like what I should do to accomplish my task... or perhaps it is? I've also read the documents provided by Netgear for getting started and setting up functionality on the router. Maybe I overlooked something? My specifications: Netgear router WNR2000v3, FileZilla v3.8.1. UPDATE: Since someone voted to close this question, I'll clarify what I'm asking for: at present I am dissatisfied with the UI/web page look when logging into routerlogin.net, and I would like to make changes to the admin dashboard.

    Read the article
