Search Results

Search found 5793 results on 232 pages for 'ftp sync'.


  • Using AutoMySQLBackup on Rackspace Cloud

    - by xref
    Since Rackspace Cloud only allows FTP access, using AutoMySQLBackup is a little trickier, and while it is at least creating DB dumps, I get errors in the backup log:

        ###### WARNING ######
        Errors reported during AutoMySQLBackup execution.. Backup failed
        Error log below..
        .../backups/automysqlbackup: line 1791: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 1855: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 803: /usr/bin/find: Permission denied
        .../backups/automysqlbackup: line 1972: /usr/bin/du: Permission denied

    Since files are being created, I'm assuming the failing find command has to do with rotating out and deleting the old backups? Line 803:

        find "${CONFIG_backup_dir}/${subfolder}${subsubfolder}" -mtime +"${rotation}" -type f -exec rm {} \;

    Any ideas for alternatives?
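
    Not an answer from the thread, but a minimal sketch of one workaround for a blocked /usr/bin/find: do the age-based rotation in plain bash with globbing and stat. The path and retention below are placeholders, not values from the question.

        #!/bin/bash
        # Hypothetical rotation fallback that avoids /usr/bin/find entirely.
        # BACKUP_DIR and ROTATION_DAYS are illustrative placeholders.
        BACKUP_DIR="/backups/daily"
        ROTATION_DAYS=7
        cutoff=$(( $(date +%s) - ROTATION_DAYS * 86400 ))
        for f in "$BACKUP_DIR"/*.sql.gz; do
            [ -e "$f" ] || continue                      # glob matched nothing
            if [ "$(stat -c %Y "$f")" -lt "$cutoff" ]; then
                rm -- "$f"                               # older than cutoff: delete
            fi
        done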

    Read the article

  • Lighttpd not starting - no error

    - by Furism
    I recently installed Lighttpd on Ubuntu Server 10.04 x86_64 and created several websites. What I do is include /etc/lighttpd/vhost.d/*.conf and put a configuration file for each website in that directory. The problem I have is when I "service lighttpd start" I get the message that the service started, and there is no error message:

        root@178-33-104-210:~# service lighttpd start
        Syntax OK
         * Starting web server lighttpd    [ OK ]

    But then if I take a look at the services listening, Lighttpd is nowhere to be seen:

        root@178-33-104-210:~# netstat -tap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
        tcp        0      0 localhost:mysql  *:*              LISTEN  829/mysqld
        tcp        0      0 *:ftp            *:*              LISTEN  737/vsftpd
        tcp        0      0 *:ssh            *:*              LISTEN  739/sshd
        tcp6       0      0 [::]:ssh         [::]:*           LISTEN  739/sshd

    So I'm looking at ways I could troubleshoot this. I checked /var/log/lighttpd/error.log and there's nothing in it. Edit: Sorry, I indicated I use CentOS but it's actually Ubuntu Server (I usually use CentOS but had to go with Ubuntu for that one).
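
    A hedged troubleshooting sketch (not from the question): run lighttpd in the foreground so a startup failure prints to the terminal instead of being lost before logging starts.

        # Validate the merged config, then run without daemonizing.
        lighttpd -t -f /etc/lighttpd/lighttpd.conf   # syntax check only
        lighttpd -D -f /etc/lighttpd/lighttpd.conf   # foreground; errors go to stderr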

    Read the article

  • Not able to delete file from server with permissions of 644 via PHP script

    - by letseatfood
    I am trying to delete JPEG files that were uploaded to the server via FTP. The files are uploaded and written with permissions of 644. The owner and group of the upload directory are mike and mike. I have tried changing the owner and group to www-data, but that does not seem to work. I am trying to delete the files with a PHP script using unlink(). This works on the production server (which is a hosting service), but not my development server, which is a LAMP setup. This leads me to believe it has something to do with permissions on my development server.
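
    For context, a sketch of the Unix rule likely in play (an assumption, not a confirmed diagnosis): unlink() needs write and execute permission on the containing directory; the file's own 644 mode is irrelevant to deletion. The path below is a placeholder for the upload directory.

        # Give the web server's user write access to the directory itself.
        sudo chown mike:www-data /var/www/uploads
        sudo chmod 775 /var/www/uploads   # group www-data can now create/delete entries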

    Read the article

  • Get songs off of Windows iPod and onto a Mac

    - by Justing
    I have been using iTunes on a Dell running Windows XP to sync about 60 gigs of songs with my old school iPod. Well, the Dell's hard drive died the other day, and the only place that I have my music is on the iPod (I do have the CDs, but really don't want to have to re-rip 60 gigs of music). So, now I have a shiny new MacBook Pro. Is there a way to get my songs off of my iPod and onto the MacBook? I googled, and found Senuti. But, I'm leery of accidentally formatting the iPod and losing my songs, and I can't tell if it is Snow Leopard compatible yet. Has anyone recently gone through this process? Please provide suggestions for copying songs from an iPod formatted to work with Windows onto a MacBook Pro running Snow Leopard. Thanks!

    Read the article

  • Website deployment - managing user-uploaded content?

    - by Legion
    I'm a programmer by trade, "server administrator" by company necessity. We're looking at dumping the old painful "update site by FTP upload" style of deployment. Having the webserver check out the latest code base from version control into a folder and having a "current" symlink point to the latest checkout (allowing for easily stepping back to an older version by changing the symlink) seems to be the way we want to go. But I have a question: what's a good practice for dealing with user-uploaded content? This stuff isn't in version control. I have a couple of ideas for dealing with this, but what is the smart, accepted practice?
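
    One widely used pattern - offered as a sketch, not necessarily what this team adopted - is to keep uploads in a shared directory outside the version-controlled checkouts and symlink them into each release, so deploys and rollbacks never touch user content. Paths are illustrative.

        # Hypothetical layout:
        #   /var/www/app/releases/2012-06-01/   - a checkout from version control
        #   /var/www/app/shared/uploads/        - user content, never redeployed

        # Wire each new release to the shared uploads, then flip 'current'.
        ln -s   /var/www/app/shared/uploads  /var/www/app/releases/2012-06-01/uploads
        ln -sfn /var/www/app/releases/2012-06-01 /var/www/app/current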

    Read the article

  • Apache mpm-itk Performance

    - by Matt Beckman
    I manage a bunch of VPSs with memory ranging from 1GB to 8GB. Most of these websites are Joomla websites, and the servers must support multiple sites/users/S-FTP. I use mpm-itk almost exclusively (mostly due to its convenience in these shared environments). However, I'm aware it isn't known for performance, so I need some advice on making it faster. Due to the lack of documentation when I first went the way of mpm-itk, I included only one setting in the config, limiting each user to 50 clients (the rest I left at the defaults):

        <IfModule mpm_itk_module>
            MaxClientsVHost 50
        </IfModule>

    Are there any better alternatives available? Are there any settings supported in mpm-prefork or mpm-worker that are also supported in mpm-itk? Thanks!
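
    As I understand it (an assumption worth checking against the mpm-itk docs), mpm-itk is built on top of prefork, so the standard prefork pool-tuning directives should apply alongside the itk-specific ones. A sketch with illustrative values, not recommendations:

        <IfModule mpm_itk_module>
            # Standard prefork process-pool tuning (placeholder values):
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150     # cap total processes to fit RAM
            MaxRequestsPerChild 1000    # recycle children to contain leaks
            # itk-specific per-vhost cap from the question:
            MaxClientsVHost      50
        </IfModule>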

    Read the article

  • Need recommendation for running PHP/Zend Optimizer

    - by senorsmile
    Firstly, I must admit that I don't know much about setting up PHP beyond the basics. I have an Ubuntu 10.04 server system (hosted) running primarily as an FTP store for a commercial store software. The server that the commercial store is installed on is unfortunately not very reliable, and we would like to move the store to this Ubuntu 10.04 server. (We've already received permission from the store vendor to do this.) My problem is that they use Zend Optimizer, which is only compatible with PHP 5.2. I have tried a couple of "hacks" to downgrade PHP to 5.2, but it breaks so many other things that it doesn't seem worth it. My idea is to install some sort of container of Ubuntu 8.04 (like OpenVZ) on the server to house a native install of PHP 5.2 to meet the dependency of the store software. However, it appears that OpenVZ is no longer supported on Ubuntu. Is there another similar solution that I could run on a hosted server to install a separate "container-like" 8.04 system?
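
    One container-like route that needs no special kernel support - a hedged sketch, not a tested recipe - is a chroot of an 8.04 (hardy) userland built with debootstrap; the target path is a placeholder:

        # Build a minimal Ubuntu 8.04 userland in a directory.
        sudo apt-get install debootstrap
        sudo debootstrap hardy /srv/hardy-chroot http://old-releases.ubuntu.com/ubuntu/

        # Enter it and install the 5.2-era PHP stack from hardy's own archive.
        sudo chroot /srv/hardy-chroot /bin/bash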

    Read the article

  • my linux problems and solutions [closed]

    - by Delirium tremens
    I read somewhere on StackOverflow or StackOverflow Meta that if I solve a problem myself, I can share both the problem and the solution with you. How do I?

    in Linux:
    - remove unneeded packages using apt-get
    - play spc and psf
    - update the system using apt-get

    in Mint:
    - install lamp
    - install and configure xdebug
    - enable xdebug for cakephp
    - install bazaar colo
    - rename a repository directory when bazaar explorer fails
    - init a repository when bazaar explorer fails
    - use ssh key with launchpad
    - uninstall firefox 3 when synaptic fails
    - install minefield
    - make pearltrees load when flash fails
    - edit clojure documents
    - install compojure
    - create a new compojure project

    in Kubuntu:
    - enable phpmyadmin after installing lamp
    - stop MySQLdb module error in webpy

    in Ubuntu:
    - stop the mouse pointer from disappearing
    - fix the color
    - stop sync read-only filesystem error
    - stop download prompt instead of site
    - enable phpmyadmin after installing lamp
    - enable mod_rewrite after installing lamp

    Read the article

  • Vantec NexStar NAS Enclosure - Writing large files

    - by peter
    Hi, I have one of these 'Vantec NexStar LX - NST-475LX-BK' drive enclosures. It is a NAS drive. When I write a file to the device using eSATA or an SMB share, I cannot write files over 2GB. I think this is because the drive is formatted with FAT32. But when I access the device using FTP, it doesn't matter: I can write files of any size. E.g. I wrote one on there last night which was 30GB. Does this make any sense? Why? I guess the most important thing for me is data integrity.

    Read the article

  • Battery backed write cache behavior upon disk change

    - by Halfgaar
    We use 3ware Inc 9650SE SATA-II RAID PCIe RAID controllers with battery-backed write cache. Our spare hardware has the same controller. I was wondering: are these controllers smart enough not to sync the cache when the disks have been changed? For example, if I deploy one of those spare machines by putting in the disks of another machine and that spare machine still has pending writes, will it be smart enough not to perform those writes on the replaced array? Edit: my scenario was not really made clear, so let me give an example:

    1. server1 goes down because of power supply failure.
    2. I put the disks in server2 and start it.
    3. I repair server1.
    4. I put the disks back from server2 in server1 (it's not relevant right now that in reality I would probably keep server2 running).

    If server1 doesn't have safeguards, it will write to the array, thinking it's simply powering up again, corrupting it.

    Read the article

  • How to close the logon process named NtLmSsp?

    - by Aristos
    I have a Windows 2003 server and from time to time I am getting many login failures like this one:

        Logon Failure:
          Reason:             Unknown user name or bad password
          User Name:          administrator
          Domain:             xx.xx.xx.xx
          Logon Type:         3
          Logon Process:      NtLmSsp
          Authentication Package: NTLM
          Workstation Name:   XLHOST
          Caller User Name:   -
          Caller Domain:      -
          Caller Logon ID:    -
          Caller Process ID:  -
          Transited Services: -
          Source Network Address: 173.45.70.100 <- hacker
          Source Port:        4722

    AND

        Logon attempt by:   MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        Logon account:      user
        Source Workstation: XLHOST
        Error Code:         0xC0000064

    The question is: how can I close off this login process? What have I left open that someone can use to try to log in? Some notes: I log in to the server using tunneling; nothing is open except DNS, email, and web ports, not even FTP, and all default ports are changed and hidden. I also monitor port scans and capture anyone who tries to find the hidden ports. Probably something is open... Thank you in advance.

    Read the article

  • Starting with administering a linux box

    - by Josh K
    I'm rather new to this, and I'd like to play with administering a linux box. Things I need to know how to do:

    - Setup subdomains
    - Setup FTP accounts
    - Setup full domains / add domains
    - MySQL setup / install / management
    - LAMP setup / install / management

    This is probably going on a CentOS distro. I'd like links or a breakdown on how I can learn to do this. I am comfortable with a command line, but I'm trying to move from shared hosting to a VPS and would like to have some idea of how deep the water is before I do.

    Read the article

  • Questions about Domains and DNS

    - by ShoX
    Hi, I am totally new to the DNS and server hosting world and not quite sure what I need. I want to get a domain and point it to my own server, so that the user sees example.com in the URL bar and example.com/foo/bar will work. Depending on the subdomain, it should do different things (another base directory on the webserver, FTP, etc.). Also, my email should be able to be sent to and received by that server. What confuses me is that in an A record I can only list IP addresses, not ports. So do I have to set up a nameserver on my own server? Or do I accomplish this via vhosts on my webserver? I would appreciate any help or a link to a tutorial. I know how DNS works, know some basic apache-stuff, etc., so no need to explain that. Thanks
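
    Not part of the question, but a sketch of the second option it raises: DNS only maps names to IP addresses - the port is implied by the protocol (80 for HTTP) - and per-subdomain behavior is handled by name-based virtual hosts on the web server. Hypothetical Apache config:

        # Both subdomains resolve to the same IP via A (or CNAME) records;
        # Apache picks the base directory from the Host header. Paths are placeholders.
        <VirtualHost *:80>
            ServerName www.example.com
            DocumentRoot /var/www/example.com/www
        </VirtualHost>

        <VirtualHost *:80>
            ServerName files.example.com
            DocumentRoot /var/www/example.com/files
        </VirtualHost>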

    Read the article

  • Clear / Flush cached memory

    - by TheDave
    I have a small VPS with 6GB RAM hosting a couple of websites. Recently I have noticed that my cached memory size is quite high - see below:

        Cpu(s):  0.1%us, 0.1%sy, 0.0%ni, 99.1%id, 0.0%wa, 0.2%hi, 0.4%si, 0.0%st
        Mem:   6113256k total, 5949620k used,  163636k free,  398584k buffers
        Swap:  1048564k total,     104k used, 1048460k free, 3586468k cached

    After investigating whether there is some method to have this flushed or cleared, I stumbled upon this command:

        sync; echo 3 > /proc/sys/vm/drop_caches

    I read it could be useful to add this as a cron task/job. Is this method recommended, or could it lead to potential problems? The only concern I have is that I use one Magento installation on Memcached - could this have any negative effects on it? I am certainly not a pro, therefore I would very much appreciate some expert advice. PS: My VPS runs on CentOS 5 x64 and I have WHM + NGINX installed.
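
    For completeness, a sketch of that cron entry in root's crontab. Hedged: scheduled cache-dropping is generally unnecessary (the kernel reclaims page cache on demand), and it does not free memcached's memory, which is ordinary process memory rather than kernel cache.

        # Root crontab entry (illustrative schedule): flush dirty pages,
        # then drop page cache, dentries and inodes at 03:00 nightly.
        0 3 * * * /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches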

    Read the article

  • IIS7 response size thresholds

    - by DanielM
    I have a customer who is attempting video playback via HTTP progressive download of very large files (> 1 GB). There is no problem once a file is cached at the edge via my CDN, but hits to my origin (first hits prior to edge-cache population) experience stalling and loss of sync between audio and video about an hour and a half into playback. This occurs pretty reliably at that point, suggesting that some threshold somewhere is getting hit. Are there IIS configuration knobs governing HTTP response size? Other data points: I am unable to replicate this problem. I am looking at client bandwidth and last-mile issues. I am looking at possible encoding recipe dependencies. But this problem never came up when we were using a "push" cache configuration (CDN-hosted origin), so something funky server-side at my origin seems like a likely culprit. Thanks ...
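
    A hedged place to start (an assumption, not a confirmed diagnosis): IIS7 has no response-size cap as such, but applicationHost.config enforces per-connection limits that can kill a long-running download. The values shown are the shipped defaults, for illustration:

        <!-- applicationHost.config: connection-level limits that apply to
             long progressive downloads (values are the IIS7 defaults). -->
        <system.applicationHost>
            <webLimits connectionTimeout="00:02:00" minBytesPerSecond="240" />
        </system.applicationHost>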

    Read the article

  • Why do all my sites randomly stop in iis?

    - by Jen
    I have a GoDaddy account with a virtual hosting environment. For some reason, every few weeks, all of our sites (both websites and FTP sites) are stopped. I restart them and they start normally, but I would like to prevent this from happening in the future. It is an IIS thing: when you open up IIS you can see that each of the sites has been stopped. Also, I have tried looking at the Event Viewer and I don't see anything, but I am not sure what I am looking for.

    Read the article

  • Exchange ActiveSync does not work for one user

    - by jshin47
    One particular user in our system is unable to connect to Exchange ActiveSync via her iPhone. When I try to connect using my own credentials on her iPhone it works (everything begins syncing), but when I input her credentials, the Settings app verifies the credentials are correct but nothing syncs. For example, if I open Mail, no items are shown. When I attempt to force a sync, it says "Cannot connect to server." In the Exchange 2010 Management Console the user is no different from the others. Exchange ActiveSync is set to "Enable" in Mailbox Features. EDIT: Alternatively, if there is some easy way to create a new user account/mailbox and copy all of the contents of the old one over, I bet it would work, and that would be fine as well. She is a Mac user so we do not have to worry about her Active Directory account.
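
    A few Exchange Management Shell checks commonly suggested for a single-user ActiveSync failure (the mailbox name is a placeholder):

        # Confirm ActiveSync is enabled and inspect existing device partnerships.
        Get-CASMailbox "jdoe" | Format-List ActiveSyncEnabled, ActiveSyncMailboxPolicy
        Get-ActiveSyncDeviceStatistics -Mailbox "jdoe"

        # If a stale partnership is the culprit, removing it forces a fresh sync.
        Get-ActiveSyncDevice -Mailbox "jdoe" | Remove-ActiveSyncDevice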

    Read the article

  • I want to play one podcast only on iTouch, but they play in series one after the other.

    - by Eddy
    I download multiple podcasts from one source, e.g. five episodes of Naked Scientists. I want to listen to only one, but when the first finishes it automatically goes to the second, and third, etc. I want to listen to just one episode at a time and have the iTouch turn off when it is finished (this is a sleep aid for my insomnia... I don't want it playing all night!) A "Genius" suggested making playlists, but although you can make a podcast playlist, it does NOT sync. Please help. Thanks, Ed3339

    Read the article

  • Wordpress Automatic Updating/Installing Plugins Permissions

    - by karmic
    I am using the latest WordPress and I have always had issues with the automatic updater. For the files in the WordPress directory, I set them to permission 770 and add the webserver user 'www-data' as the group owner. I use lighttpd. However, automatically updating or installing plugins does not work. It works if I chmod the files to 777 or if I set the actual owner to the web server user as well. What are the best permission settings for security while still allowing the update feature to work properly in WordPress? Also, by 'not work' I mean it goes to the screen that asks me for FTP credentials when I try to update.
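
    For background, a sketch of the mechanism as I understand it (verify against the WordPress docs): WordPress falls back to the FTP-credentials screen when files created by the web server process would not match the owner of the WordPress files. Once ownership is consistent, the direct filesystem method can be requested in wp-config.php:

        // wp-config.php: write directly to disk instead of falling back to
        // FTP. Only safe once the web server user can own/write the files.
        define('FS_METHOD', 'direct');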

    Read the article

  • Built and migrated to software raid (mdadm) on GPT disk, now can't assemble array

    - by John H
    mdadm, GPT issues, unrecognized partitions. Simplified question: how do I get mdadm to recognize GPT partitions?

    I have been attempting to convert/copy my Ubuntu 11.10 OS from a single drive to software RAID 1. I have done similar in the past, but in this case I was adding in a drive that had been configured for GPT, and I tried to work with that without fully looking into the implications. Currently, I have a non-booting mdadm RAID 1 array of /dev/md127 (the OS assigned that and it keeps picking it up). I am booting off of live USB keys, currently System Rescue CD from sysresccd. While gdisk and parted can see all the partitions, most of the OS utilities do not, including mdadm. My main goal is just to make the raid array accessible so I can pull the data off and start fresh (without using GPT).

        /dev/md127
        /dev/sda
          /dev/sda1 <- GPT type partition
            /dev/sda1 <- exists within the GPT part, member of md127
            /dev/sda2 <- exists within the GPT part, empty
        /dev/sdb
          /dev/sdb1 <- GPT type partition
            /dev/sdb1 <- exists within the GPT part, member of md127

    History:

    POINT A: The original OS was installed on sda (actually /dev/sda6). I used the Ubuntu live USB to add sdb. I got a warning from fdisk about GPT, so I used gdisk to create a raid partition (sdb1) and mdadm to create a raid1 mirror with a missing drive. I had many issues getting this working (including being unable to get grub to install), but I eventually got it to boot using grub on sda and /dev/md127 off of sdb. So at point A, I had copied my OS from sda6 to md127 on sdb. I then booted into a rescue mode and attempted to get a bootloader onto sdb, which failed. I then discovered my mistake: I had installed the raid onto sdb instead of sdb1, essentially overwriting the sdb1 partition.

    POINT B: I now had two copies of my data - one on md127/sdb, and one on sda. I destroyed the data on sda and created a new GPT table on sda. I then created sda1 for the raid array, and sda2 for a scratch partition. I added sda1 into the raid array and let it rebuild. md127 now covered /dev/sdb and /dev/sda1 as fully active and synced.

    POINT C: I rebooted into linux rescue again and was still able to access the raid array. I then removed /dev/sdb from the array and created /dev/sdb1 for the raid. I added sdb1 to the array and let it sync. I was able to mount and access /dev/md127 without issues. Once it completed, both /dev/sda1 and /dev/sdb1 were GPT partitions and actively syncing.

    POINT D (current): I rebooted again to test whether the array would boot, and grub failed to load. I booted off of my live thumb drive and found that I can no longer assemble the raid array. mdadm doesn't see the required partitions.
        root@freshdesk /root % uname -a
        Linux freshdesk 3.0.24-std251-amd64 #2 SMP Sat Mar 17 12:08:55 UTC 2012 x86_64 AMD Athlon(tm) II X4 645 Processor AuthenticAMD GNU/Linux

    === /proc/partitions and parted look good:

        root@freshdesk /root % cat /proc/partitions
        major minor  #blocks  name
           7        0     301788 loop0
           8        0  976762584 sda
           8        1  732579840 sda1
           8        2  244181703 sda2
           8       16  732574584 sdb
           8       17  732573543 sdb1
           8       32    7876607 sdc
           8       33    7873349 sdc1

        (parted) print all
        Model: ATA ST31000528AS (scsi)
        Disk /dev/sda: 1000GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start   End     Size    File system  Name                Flags
         1      1049kB  750GB   750GB   ext4
         2      750GB   1000GB  250GB                Linux/Windows data

        Model: ATA SAMSUNG HD753LJ (scsi)
        Disk /dev/sdb: 750GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start   End    Size   File system  Name        Flags
         1      1049kB  750GB  750GB  ext4         Linux RAID  raid

        Model: SanDisk SanDisk Cruzer (scsi)
        Disk /dev/sdc: 8066MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start   End     Size    Type     File system  Flags
         1      31.7kB  8062MB  8062MB  primary  fat32        boot, lba

    === blkid shows no sda2, and I double-checked that the sdb1 entry is the one shown in parted:

        root@freshdesk /root % blkid
        /dev/loop0: TYPE="squashfs"
        /dev/sda1: UUID="75dd6c2d-f0a8-4302-9da4-792cc7d72355" TYPE="ext4"
        /dev/sdc1: LABEL="PENDRIVE" UUID="1102-3720" TYPE="vfat"
        /dev/sdb1: UUID="2dd89f15-65bb-ff88-e368-bf24bd0fce41" TYPE="linux_raid_member"

        root@freshdesk /root % mdadm -E /dev/sda1
        mdadm: No md superblock detected on /dev/sda1.

    (This is probably a result of me attempting to force the array up, putting superblocks on the GPT partition.)

        root@freshdesk /root % mdadm -E /dev/sdb1
        /dev/sdb1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 2dd89f15:65bbff88:e368bf24:bd0fce41
          Creation Time : Fri Mar 30 19:25:30 2012
             Raid Level : raid1
          Used Dev Size : 732568320 (698.63 GiB 750.15 GB)
             Array Size : 732568320 (698.63 GiB 750.15 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 127
            Update Time : Sat Mar 31 12:39:38 2012
                  State : clean
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 1
          Spare Devices : 1
               Checksum : a7d038b3 - correct
                 Events : 20195

              Number   Major   Minor   RaidDevice State
        this     2       8       17        2      spare   /dev/sdb1
           0     0       8        1        0      active sync   /dev/sda1
           1     1       0        0        1      faulty removed
           2     2       8       17        2      spare   /dev/sdb1

    === Assembly attempts fail:

        root@freshdesk /root % mdadm -A /dev/md127 /dev/sda1 /dev/sdb1
        mdadm: no recogniseable superblock on /dev/sda1
        mdadm: /dev/sda1 has no superblock - assembly aborted

        root@freshdesk /root % mdadm -A /dev/md127 /dev/sdb1
        mdadm: cannot open device /dev/sdb1: Device or resource busy
        mdadm: /dev/sdb1 has no superblock - assembly aborted
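
    Not from the question, but a hedged sketch of the forced-assembly route often tried from a rescue environment when only one member still carries a superblock. This can destroy data if the metadata is wrong; ideally run it against copies or overlays first.

        # Free the half-started device, then force assembly from the member
        # that still has a superblock.
        mdadm --stop /dev/md127
        mdadm -A --force --run /dev/md127 /dev/sdb1

        # Last-resort variant sometimes used when the survivor is marked
        # "spare": re-create the array in place without touching data blocks.
        # Requires the same metadata format (0.90) and offsets; copies first.
        # mdadm --create /dev/md127 --metadata=0.90 --level=1 \
        #       --raid-devices=2 --assume-clean /dev/sdb1 missing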

    Read the article

  • IIS Reverse Proxy support for multiple protocols

    - by Abraxas
    I have a Server 2012 machine running IIS. It's in my DMZ and I would like to use it to do reverse proxying for several services. I can get it to route traffic on port 80 to 2 separate internal servers running web apps, but there are some issues when I try to forward SSH (not port 80/443), and when I try to forward OWA (Microsoft Exchange's 'webmail' service) to the internal mail server I run into issues with guides (like this one: http://blogs.technet.com/b/exchange/archive/2013/07/19/reverse-proxy-for-exchange-server-2013-using-iis-arr-part-1.aspx) when they say to have all traffic forwarded to the server farm created for OWA. My question for you all is this: given that there is no more Threat Management Gateway (it only runs on Server 2008) and ISA 2006 is also dead, is it possible to support multiple types of reverse proxies with different protocols (ftp, ssh, web, ssl-web) in IIS, or would it be better to install a different DMZ OS like an nginx server and use linux firewalls + nginx reverse proxy? Thanks for any help!
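
    If the nginx route wins out, a minimal sketch of the OWA leg (hostnames, upstreams and certificate paths are placeholders). Note that both IIS ARR and plain nginx proxy_pass only reverse-proxy HTTP(S); raw SSH is usually forwarded at the firewall/TCP layer instead.

        # nginx.conf fragment: reverse-proxy OWA over HTTPS to the internal
        # Exchange server. Names and paths are illustrative.
        server {
            listen 443 ssl;
            server_name mail.example.com;

            ssl_certificate     /etc/nginx/ssl/mail.example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/mail.example.com.key;

            location / {
                proxy_pass https://exchange.internal.example.com;
                proxy_set_header Host $host;
            }
        }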

    Read the article

  • Download folders from dev server to local drive

    - by Niall Collins
    I am developing a .NET web application in a local environment. I have a dev server that the application is installed on. Within the web application on the dev server I have four folders that I don't have locally and that are controlled by another application. In my day-to-day development I require the four folders on my local PC. I would like to automate the process of pulling the folders from the dev server to my local drive, so I can keep things in sync. Ideally something like this:

    - Run a file from the main folder (be it a bat file, powershell, some sort of job; open to recommendations)
    - Download the 4 folders supplied to it
    - The first download brings everything down; from then on, only pull the changes

    Not sure where to start with achieving this, but I would appreciate any help. I know there are apps out there that do something like this, but I would like to have a go at writing something to do this before I resort to using one of them.
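
    Since the question mentions a bat file, a hedged sketch using robocopy, which already handles the "only pull the changes" part by skipping files whose size and timestamp are unchanged. Server, share and target paths are hypothetical.

        :: sync-folders.bat - pull the four folders from the dev server.
        robocopy \\devserver\webapp\FolderA C:\dev\webapp\FolderA /MIR /R:2 /W:5
        robocopy \\devserver\webapp\FolderB C:\dev\webapp\FolderB /MIR /R:2 /W:5
        robocopy \\devserver\webapp\FolderC C:\dev\webapp\FolderC /MIR /R:2 /W:5
        robocopy \\devserver\webapp\FolderD C:\dev\webapp\FolderD /MIR /R:2 /W:5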

    Read the article

  • Secure Apache Virtual Hosts?

    - by Dr Hydralisk
    I am going to host a few small sites on a VPS, and each of them is going to run my own custom PHP scripts. I am fairly certain that they are secure (I did everything in the book, plus some things that are not in the book) to make sure they can't be exploited. But just to be safe, I want to know how I could secure each of the virtual hosts so that they can't escape from their virtual host folder on Debian and Apache (so that if a hacker uploaded a shell, they could not go above the www folder, just as a legitimate user can't in FTP no matter how many times they click ..).
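
    One common per-vhost containment measure for PHP - a sketch assuming mod_php, with placeholder paths - is open_basedir, which stops PHP scripts from touching files outside the listed directories (a containment aid, not a full jail):

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1/www
            # Confine PHP file access to this site's tree plus its tmp dir.
            php_admin_value open_basedir   "/var/www/site1/www:/var/www/site1/tmp"
            php_admin_value upload_tmp_dir "/var/www/site1/tmp"
        </VirtualHost>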

    Read the article

  • What is the simplest and fastest way to transfer large file through a Windows network?

    - by Sake
    I have a Windows 2000 Server machine running MS SQL Server that stores over 20GB of data. The database is backed up every day to the second hard drive. I want to transfer those backup files to another computer to build another test server and for recovery practicing. (The backup never actually got restored for almost 5 years. Don't tell my boss about that!) I have trouble transferring that huge file through the network. I've tried plain network copy, apache download, and ftp. Every method I tried ends up failing when the amount of data transferred reaches 2GB. The last time that I successfully transferred the file, it was through a USB-attached external hard drive. But I want to perform this task routinely and preferably automatically. What is the most pragmatic approach for this situation?
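
    A hedged aside rather than part of the question: a consistent 2GB failure point is the classic large-file limit of older copy paths and FAT32 targets, so one candidate fix is robocopy (available from the Resource Kit on that era of Windows) in restartable mode to an NTFS destination. Paths are placeholders.

        :: Restartable (/Z) copy survives network hiccups mid-file and can be
        :: scheduled; /LOG keeps a record of each nightly run.
        robocopy D:\SQLBackups \\testserver\backups *.bak /Z /LOG:C:\logs\backup-copy.log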

    Read the article

  • One-Way Backup Service? [closed]

    - by Jon Rodriguez
    Up until a month ago, my girlfriend had used MobileMe to back up all the files on her MacBook. This turned out terribly when a quirk of MobileMe caused it to erase all of her files on MobileMe, and then sync the newly-erased MobileMe down to her computer, erasing everything. A week's worth of college essays and CS homework were gone. Now, I am terrified of any commercial cloud-backup solution because of the possibility of this happening. Going off the list provided in these answers, could you please help me find a good backup service that is completely one-way? I want a service where there is literally not a single line of code that has the possibility of writing to my computer's drive. I want a pure one-way backup service.

    Read the article
