Search Results

Search found 16616 results on 665 pages for 'home sharing'.

  • Problem with script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that excludes files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command that the script generates, everything works. Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
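
    A likely cause is that the quotes printed by echo inside `exclude` are never re-parsed by the shell, so any path containing whitespace splits into extra arguments before duplicity ever sees it. A minimal sketch of one workaround, assuming the same variables as the script above: collect the exclude options in a bash array so each filename stays a single argument.

        # Hedged sketch (bash-only, uses process substitution): quoting is
        # preserved by the array, so spaces in filenames no longer split args.
        EXCLUDES=()
        while IFS= read -r FILE; do
            EXCLUDES+=( --exclude "**${FILE##/*/}" )
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            "${EXCLUDES[@]}" "$SOURCE" "$DEST"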

    Read the article

  • Trying to create a git repo that does an automatic checkout everytime someone updates origin

    - by Dane Larsen
    Basically, I have a server with a git repo 'origin'. I'm trying to have another repo auto-pull from origin every time someone pushes code to it. I've been using the hooks in origin, specifically post-receive. So far, my post-receive hook looks something like this:

        #!/bin/sh
        GIT_DIR=/home/<user>/<test_repo>
        git pull origin master

    But when I push to origin from another computer, I get the error:

        remote: fatal: Not a git repository: '/home/<user>/<test_repo>'

    However, test_repo most definitely is a git repo. I can cd into it and run 'git pull origin master' and it works fine. Is there an easier way to do what I'm trying to do? If not, what am I doing wrong with this approach? Thanks in advance. Edit, to clarify: the repo is a website in progress, and I'd like to have a version of it available at all times that is fully up to date.
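
    One hedged explanation: when a hook runs, git exports GIT_DIR pointing at the repository that received the push, and the script then re-points it at the work tree itself (whose .git directory lives one level down), so the pull looks for repository files in the wrong place. A minimal post-receive sketch under that assumption, keeping the same placeholder paths:

        #!/bin/sh
        # Hedged sketch: drop the inherited GIT_DIR, then run the pull from
        # inside the checkout so git finds /home/<user>/<test_repo>/.git itself.
        unset GIT_DIR
        cd /home/<user>/<test_repo> || exit 1
        git pull origin master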

    Read the article

  • Vacation sends autoreply message to the recipient as well

    - by elitalon
    Hi, I have configured my Postfix server with vacation for a domain. Sending a message to [email protected] causes two events:

        1. The message is delivered to the recipient ([email protected]).
        2. An auto-reply is sent to the sender, alerting them that [email protected] should be used instead.

    Everything works well except for one particular drawback: the auto-reply is also sent to the recipient, so they end up receiving two messages. What can I do to avoid that? I'm only using the $TO variable in the custom vacation.msg message. Here is Postfix's master.cf vacation line:

        autoreply unix - n n - - pipe
          flags=Rhu user=vacation argv=/usr/bin/vacation -j -m /home/vacation/.vacation.msg -f /home/vacation/.vacation.db vacation

    I know using -j is a little bit risky according to the man page, but I'm kind of testing here.

    Read the article

  • Hard Copies VS Soft Copies

    - by Garet Claborn
    Where do you draw the line and say, "OK, I'm actually going to print out this piece of code, spec, formula, or other info and carry it around, but these pieces can stay on disk"? More importantly, why do you draw the line there? I've encountered this a number of times and have only vague notions beyond "oh, now I'm REALLY stuck, better print this out." I've also found some quick sheets of basic specs to be handy. Really, though, I have no particular logic behind what is useful to physically have available in the design and development process. I have a great pile of 'stuff' papers that seemed at least partially relevant at the time, but I only ever use about a third of them and often end up wishing I had different info on hand.
    Edit: So this is what I'm hearing, in a nutshell:
        - Major parts of the design pattern
        - Common, fairly static and prominently useful code (reference or specs)
        - Some representation of data useful in collaborating or sharing with the team
        - Extreme cases of tough problem solving
        - Overwhelmingly: almost never print anything.

    Read the article

  • Fluxbox startup file not working

    - by Jack
    I am placing apps into my Fluxbox startup file as per the instructions, but nothing starts up except Fluxbox itself. It doesn't matter which app I try, so it isn't an app problem. Here is my startup file:

        #!/bin/sh
        #
        # fluxbox startup-script:
        #
        # Lines starting with a '#' are ignored.

        # Change your keymap:
        xmodmap "/home/josh/.Xmodmap"

        # Applications you want to run with fluxbox.
        # MAKE SURE THAT APPS THAT KEEP RUNNING HAVE AN ''&'' AT THE END.
        tint2 &
        tilda &

        # And last but not least we start fluxbox.
        # Because it is the last app you have to run it with ''exec'' before it.
        exec fluxbox
        # or if you want to keep a log:
        # exec fluxbox -log "/home/josh/.fluxbox/log"

    I have also tried tests such as "touch ~/testwoked", and nothing works. It makes no difference whether the file is executable or not.
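
    A hedged guess at the cause: ~/.fluxbox/startup is only executed when the session is launched through the startfluxbox wrapper; if the session starts fluxbox directly (as many display-manager entries do), the startup file is skipped entirely. A minimal sketch under that assumption:

        #!/bin/sh
        # Hedged ~/.xinitrc sketch: start the session through startfluxbox so
        # that ~/.fluxbox/startup (and the apps listed in it) actually runs.
        exec startfluxbox
        # instead of:
        # exec fluxbox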

    Read the article

  • How to make an ISO copy of Linux-filesystem and user files of VPS Debian based?

    - by moogeek
    Hello! I have a Debian-based VPS with some hosting provider. I want to migrate away from it, so I need to make a full copy of the Linux filesystem (including installed packages) plus the entire home directory with the website files, and then pack/convert it into an ISO image so I can use it on cloud hosts like Amazon. The problem is that I only have root access over ssh, and hosting support can't do this for me. Another part of the question: is it possible to enlarge the Linux filesystem without re-installing it, by using the free space of the home directory? I guess this is possible with rsync or something like that. Will my MySQL databases be copied together with all the other data? Thanks in advance!
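
    A hedged sketch of the copy step, assuming root ssh access to the VPS and enough local disk for a full mirror (the hostname below is hypothetical). Note that copying a running MySQL server's data files this way can yield an inconsistent snapshot, so dumping the databases separately is the safer route:

        # Hedged sketch: mirror the remote root filesystem locally, skipping
        # pseudo-filesystems; run from the destination machine.
        rsync -aAXH --numeric-ids \
            --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* \
            --exclude=/run/* --exclude=/tmp/* \
            root@vps.example.com:/ /srv/vps-copy/

        # Hedged: dump the databases for a consistent copy (credentials omitted).
        ssh root@vps.example.com "mysqldump --all-databases" > /srv/vps-copy-mysql.sql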

    Read the article

  • Ubuntu 12.04 installation on GPT + RAID going into grub rescue

    - by Proy
    I have two 2TB disks. I am installing Ubuntu 12.04 using the alternate version of the server CD. On the partitioning page I have partitioned as follows:

        /dev/sda1 - 32 MB          - bios_grub
        /dev/sda2 - 50 GB          - RAID device
        /dev/sda3 - 8 GB           - RAID device
        /dev/sda4 - remaining space - RAID device
        /dev/sdb1 - 32 MB          - bios_grub
        /dev/sdb2 - 50 GB          - RAID device
        /dev/sdb3 - 8 GB           - RAID device
        /dev/sdb4 - remaining space - RAID device

    After this I have set up the RAID devices:

        /dev/md0 (/dev/sda2 + /dev/sdb2) for /     (ext4)
        /dev/md1 (/dev/sda3 + /dev/sdb3) for swap
        /dev/md2 (/dev/sda4 + /dev/sdb4) for /home (ext4)

    The installation finishes and shows that it is installing GRUB to /dev/sda and /dev/sdb, but once the system reboots it falls into grub rescue mode. On doing ls I cannot see the md devices, only the hd ones. I also tried booting into rescue mode with the install CD and doing grub-install /dev/sda and /dev/sdb. What am I doing wrong? Why is grub2 not detecting the RAID devices? UPDATE: I just did the same steps with Ubuntu 10.04 and it worked perfectly fine. I wiped out the RAID and the partitions and did everything from scratch. I think the issue is with Ubuntu 12.04 and the way it partitions 2 TB disks.
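
    A hedged recovery sketch, assuming the layout above and that /dev/md0 really holds the root filesystem: boot the install CD in rescue mode, assemble the arrays, and re-install GRUB from a chroot so it picks up the mdraid modules.

        # Hedged sketch: re-install GRUB for a RAID1 root from a rescue shell.
        mdadm --assemble --scan                          # bring up md0, md1, md2
        mount /dev/md0 /mnt
        for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
        chroot /mnt grub-install /dev/sda
        chroot /mnt grub-install /dev/sdb
        chroot /mnt update-grub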

    Read the article

  • Oracle Secure Global Desktop - Business Continuity During Snowstorm!

    - by Mohan Prabhala
    Capgemini, one of the world's largest management consulting, outsourcing and professional services companies, is an Oracle Secure Global Desktop customer and uses it to provide secure, remote access to 1) corporate applications centralized in the datacenter and 2) desktops hosted on Oracle VDI. Earlier this month, one of Capgemini's government customers in Holland was advised to avoid traveling to work due to a heavy snowstorm, which resulted in a lot of employees working from home. Thankfully, because of their deployment of the Oracle Secure Global Desktop Gateway, employees were able to easily access their corporate applications and desktops from home and anywhere outside of their office. Capgemini reports that during the days of the snowstorm, a record number of users leveraged Oracle Secure Global Desktop (servers and gateway). Despite this record usage, Oracle Secure Global Desktop remained perfectly stable and allowed users to seamlessly access their applications and desktops. This is a great example of how Oracle Secure Global Desktop enables employee productivity and business continuity even during severe weather conditions such as snowstorms. We are delighted to have enabled business continuity for Capgemini's customers, and look forward to our continued relationship with Capgemini. This blog has been approved for posting by Capgemini.

    Read the article

  • Can I buy a wireless access point that also acts as a DNS nameserver?

    - by Brabster
    Hi, I was wondering if I can buy a wireless access point/router that also acts as a DNS nameserver for its DHCP clients. I can see the hostnames of my home devices in the DHCP clients table of the router I have, so it doesn't seem like a great leap of imagination to have a local nameserver on there - something like hostname.home that automatically publishes those entries to a local zone. But I can't find one that does this. Is there a reason why it shouldn't or can't be done? Or is my Google-Fu just weak? Cheers,
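
    For what it's worth, this is exactly what dnsmasq does, and many consumer routers (and the OpenWrt/DD-WRT firmwares) ship it. A hedged configuration sketch, assuming a dnsmasq-based router and using "home" as the local domain from the question (the address range is hypothetical):

        # Hedged /etc/dnsmasq.conf sketch: serve DHCP and publish each client's
        # hostname in the local "home" zone (e.g. laptop.home).
        domain=home
        local=/home/
        expand-hosts
        dhcp-range=192.168.1.100,192.168.1.200,12h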

    Read the article

  • VNC grey screen and start on boot 12.04

    - by Siriss
    I have 12.04 LTS installed and I am trying to get VNC to work. I want to be able to connect to existing sessions and have it start on boot. I followed this guide and have left a comment to try to fix my problems, but no dice. I have also tried every solution I found on Google, including the one here, but I could not get it to work (I am missing something easy, I am sure). When I connect to the VNC session I get a grey screen with three checkboxes:

        Accept clipboard from viewers
        Send clipboard to viewers
        Send primary selection to viewers

    Here is my xstartup:

        #!/bin/sh
        # Uncomment the following two lines for normal desktop:
        unset SESSION_MANAGER
        # exec /etc/X11/xinit/xinitrc
        gnome-session -session=gnome-classic &
        [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        xsetroot -solid grey
        vncconfig -iconic &
        #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        #x-window-manager &

    I have also edited my to include:

        /usr/bin/vncserver -geometry 1024x768

    It does not start on boot, and when I run the command by hand it starts, but I get the grey screen. Any help would be greatly appreciated. Thank you!
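
    One hedged reading of that grey screen: only vncconfig ends up running, because gnome-session is started with a malformed flag (it expects --session=, not -session=) and the script then execs /etc/vnc/xstartup, which replaces everything that follows. A minimal xstartup sketch under those assumptions, for a GNOME Classic session on 12.04:

        #!/bin/sh
        # Hedged ~/.vnc/xstartup sketch: start a single desktop session and
        # nothing else, so the VNC display is not left on the bare grey root.
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        [ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
        vncconfig -iconic &
        exec gnome-session --session=gnome-classic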

    Read the article

  • what is difference in section and subreport, where to use multiple sections?

    - by haansi
    I am new to Crystal Reports and would highly appreciate it if you could share your knowledge about these CR concepts. I want to know the difference between a section and a subreport. I know about the default sections and that we can add new sections to a report. What is the purpose of a subreport? Why use a subreport instead of a section? Where would you use multiple detail sections in a report? Are sections used to carry a "Can Grow" field that brings in data dynamically? Thanks for guiding me and sharing your experience.

    Read the article

  • How to setup an iTunes library to use between two Macs?

    - by stead1984
    As you can tell, this is nowhere near work related. I have an iMac G5 where my iTunes library is currently hosted, and I have also just got a new MacBook Pro. What I want to be able to do is sync my iTunes library from the iMac to the MacBook Pro so that it is accessible away from my home network; then, if I make any changes to the library (like changing a track name), those changes should sync back once I reconnect to the home network. My current library contains music, videos, podcasts, playlists and iPhone apps, and I would also like iTunes to track play counts collectively between the iMac and the MacBook Pro.
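
    A hedged partial approach, assuming Remote Login (ssh) is enabled on the iMac, the same account name exists on both Macs, and both keep the library in the default ~/Music/iTunes location: rsync can mirror the library to the MacBook and push edits back, though it only handles files, so merged play counts across both machines are beyond what a file-level sync can deliver. The Bonjour name below is hypothetical.

        # Hedged sketch, run on the MacBook Pro while iTunes is closed on both
        # Macs: pull the library from the iMac, work offline, then push back.
        rsync -av --delete "imac.local:Music/iTunes/" "$HOME/Music/iTunes/"   # pull
        # ...make changes on the MacBook, then later, back on the home network:
        rsync -av --delete "$HOME/Music/iTunes/" "imac.local:Music/iTunes/"   # push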

    Read the article

  • Outlook 2010: How do I mark one recurring event public?

    - by goober
    My office utilizes Outlook 2010 and Exchange for e-mail, and our calendars show free/busy information by default.
    Background: I work from home once a week, so I have created a recurring event that lists me as tentative for the entire workday, titled "Working from Home - Available Remotely". However, those attempting to schedule a meeting with me won't see this title, and therefore won't think they can schedule an event. As much as I'd like to get out of meetings (!), it's important that folks be able to schedule with me.
    Question: Is there a way to make the title/details public for this one recurring event, so that when others attempt to schedule a meeting with me they can see that I am working from home but still available?
    Attempted Solutions: I've tried creating a public calendar and sharing all the details of that calendar. However, not all of my calendars are included when someone wants to schedule with me, so I'm shown as free unless they specifically look at my public calendar. I've Googled around, to no avail.

    Read the article

  • How to install Ubuntu 13.10 on Hybrid Disk alongside Windows 8.1

    - by user205691
    I am having trouble installing Ubuntu 13.10 on an HP Envy 4-1046tx ultrabook. When I bought it, it came with Windows 7 pre-installed, but I upgraded it to 8 and recently to 8.1. Somehow, I feel 8.1 is slower, or something went wrong with the upgrade and made my system slow, so I want to try dual-booting Ubuntu 13.10 with Windows 8.1. The system recovery drive has the Windows 7 recovery files. The SSD has 4GB allocated to Windows 8 (I think for hibernation/rapid start); 25GB of the SSD is free, and I want to install Ubuntu on this SSD, pointing it to "/". I will also shrink the Windows partition (the only other partition available apart from recovery and the SSD) to free up 100GB and allocate that space to "/home" during the Ubuntu installation. I tried these steps while on Windows 8, but was not successful: the Ubuntu installation went fine, but GRUB was not loaded. I tried to deploy Linux via EasyBCD, but after that, selecting Linux at boot would just drop to a GRUB command prompt and do nothing. During the Ubuntu installation I also deleted the RAID drivers with sudo dmraid -rE, but Ubuntu still didn't recognize my Windows install. I think I am missing some steps, so this time I want to do it right, with proper information, before starting the process. My requirements:
        - dual boot Ubuntu with Windows 8.1
        - C:\ shrunk, with 300GB of Windows on sda1 and 100GB for /home on sda1, and Ubuntu installed on the 25GB SSD volume sda2 (this is mSATA, I think)
        - GRUB or EFI set up so that I can load both OSes properly without breaking anything
        - a swap partition, if needed, on sda1 (4GB?)
    I have backed up my drive and have a 16GB USB 3.0 stick with Ubuntu loaded. I hope I have mentioned everything I need and know. All I need now is some guidance on what to do right so that this installation goes as planned :)
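
    A hedged first check before re-trying, since Windows 8.1 on this class of machine normally boots in UEFI mode and a GRUB installed from a BIOS/legacy-mode live USB will not be picked up by the firmware: confirm which mode the Ubuntu installer itself booted in.

        # Hedged sketch, run from the live USB session: this directory only
        # exists when the system was booted in UEFI mode. If it is missing,
        # reboot the USB stick in UEFI mode before installing.
        if [ -d /sys/firmware/efi ]; then
            echo "Booted in UEFI mode"
        else
            echo "Booted in legacy/BIOS mode"
        fi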

    Read the article

  • Google's process for publishing/modifying pages [closed]

    - by Glenn Dayton
    I'm assuming that a group of people at Google have control of certain sections of google.com, but how does Google make sure that employees don't accidentally or intentionally sabotage the website? Does Google use Adobe Contribute or some similar product for sharing/publishing the website? Do employees use WebDAV, FTP, SFTP, or SSH to publish the site? Since Google has hundreds of thousands of servers, it probably takes some time for all of them to update. Do they transmit the new copy of the website to all servers before publishing at once? This question does not apply to Google editing a database and having a page reflect the database's changes; it applies to employees editing the source code and/or back end of the site.

    Read the article

  • My VS 2010 and ASP.NET 4 Talks Online

    For the past 7 years I've done an annual all-day event in Arizona organized by the most excellent Scott Cate (who always does a phenomenal job organizing the event and making it a great one). Earlier this month I visited and presented 4+ hours of content covering VS 2010, ASP.NET 4 and ASP.NET MVC 2. The talks are demo-heavy and show off a ton of new features. NextSlide.com, a .NET shop local to Arizona with a great product for sharing presentations, volunteered to record the talks and publish...

    Read the article

  • Ubuntu 12.04 Server ping gateway responds with destination host unreachable

    - by blckblttkd
    I consider myself fairly comfortable with Ubuntu and Linux, but this one has me stumped. I built a Xen server using Ubuntu 12.04 as the base operating system, and it has multiple domUs running on it. On my home network, with a statically defined configuration, all the network connectivity was peachy. The server was moved to its permanent home this morning, so the network configuration on the host system had to change. It is again a static network, but now I can't ping the upstream gateway from the host. As the VMs use this NIC over a bridge, they too are broken. Ping responds with "destination host unreachable." I simplified the networking down to a plain static configuration (no bridge or anything), as seen below, just to try to get it to work. Here are the contents of my /etc/network/interfaces file:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 216.7.188.228
            gateway 216.7.188.225
            netmask 255.255.255.240
            broadcast 216.7.188.255
            network 216.7.188.0
            dns-nameservers 8.8.8.8 8.8.4.4

    Here is the output of route -n:

        0.0.0.0        216.7.188.225  0.0.0.0          UG  100 0 0 eth0
        216.7.188.224  0.0.0.0        255.255.255.240  U   0   0 0 eth0

    And the results of pinging the gateway:

        PING 216.7.188.225 (216.7.188.225) 56(84) bytes of data.
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable

    Again, this worked flawlessly on the other network (obviously with different parameters in the interfaces file). I did try using eth1, as there are two NICs on the server, in case the MAC addresses got flipped on bootup. No success there. Yes, the cable is in the right port now :) Any thoughts? I appreciate the help!
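
    A couple of hedged checks that fit the symptoms: "Destination Host Unreachable" reported from your own address usually means ARP for the gateway is failing, and separately the broadcast/network lines above don't match a /28 (for 216.7.188.228/28 they would be 216.7.188.239 and 216.7.188.224), which is worth correcting even if it turns out not to be the root cause.

        # Hedged sketch (package names on Ubuntu 12.04: "ipcalc" and
        # "iputils-arping"): verify the subnet math, then see whether the
        # gateway answers ARP on this interface at all.
        ipcalc 216.7.188.228/28
        sudo arping -I eth0 -c 3 216.7.188.225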

    Read the article

  • Jail Linux user to directory for FTP login

    - by Greg
    I'm planning on using vsftpd to act as a secure FTP server, but I am having difficulty controlling the Linux users that will be used as FTP logins. The users need to be "jailed" into a specific directory (and its subdirectories) and have full read/write access there. Requirements:

        - User account "admin_ftp" should be jailed to the /var/www directory.
        - Other accounts will be added as needed, one for each site... e.g. user account "picturegallery_ftp" should be jailed to the /var/www/picturegallery.com directory.

    I have tried the following, but to no avail:

        # Group to store all ftp accounts in.
        groupadd ftp_accounts
        # Group for single user, with the same name as the username.
        groupadd admin_ftp
        useradd -g admin_ftp -G ftp_accounts admin_ftp
        chgrp -R ftp_accounts /var/www
        chmod -R g+w /var/www

    When I log into FTP using the account admin_ftp, I am given the error message:

        500 OOPS: cannot change directory:/home/admin_ftp

    But didn't I specify the home directory? Extra internets for a guide on how to do this specifically for vsftpd :)
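
    A hedged sketch of one way to satisfy those requirements: the useradd call above never sets a home directory (hence the default, non-existent /home/admin_ftp), and vsftpd chroots local users to their home directory when chroot_local_user is enabled, so pointing the home at /var/www does both jobs at once.

        # Hedged sketch: create the login with /var/www as its home and no
        # shell, then jail local users to their home directories in vsftpd.
        # (If logins are then refused, the chosen shell may need to be listed
        # in /etc/shells, depending on the PAM setup.)
        sudo groupadd ftp_accounts
        sudo useradd -g ftp_accounts -d /var/www -s /usr/sbin/nologin admin_ftp
        sudo passwd admin_ftp

        # Hedged /etc/vsftpd.conf additions (option names as in stock vsftpd):
        #   local_enable=YES
        #   write_enable=YES
        #   chroot_local_user=YES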

    Read the article

  • Virtual Developer Day: Oracle WebLogic Server & Java EE (#OTNVDD)

    - by Justin Kestelyn
    Virtual Developer Day is back with a vengeance! On Feb. 1, log in to learn how Oracle WebLogic Server enables a whole new level of productivity for enterprise developers. Also hear the latest on Java EE 6 and the programming tenets that have made it a true platform breakthrough, and get hands-on with our VirtualBox virtual machine image! Even better, you never have to leave your desk - you'll get access to live sessions with chat support, and even 1-1 desktop sharing upon request. It's a no-brainer - get registered!

    Read the article

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script prints the proper command, but when run from within the script it produces an error; however, if the same command is run manually, everything works. Here is the script, based on one easily found with Google:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When run, I receive the error:

        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.

    Any help you could offer would be greatly appreciated.
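
    One hedged alternative that sidesteps the quoting problem entirely: duplicity's --exclude-filelist option reads one pattern per line from a file, so the oversized paths never have to survive shell word-splitting. A minimal sketch, reusing the variables defined above:

        # Hedged sketch: write the oversized files to a list and hand the list
        # to duplicity instead of building --exclude arguments in a subshell.
        find /home/jason -size +100M > /tmp/duplicity-excludes.txt

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            --exclude-filelist /tmp/duplicity-excludes.txt \
            "$SOURCE" "$DEST"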

    Read the article

  • Google I/O 2012 - Deep Dive into the Next Version of the Google Drive API

    Google I/O 2012 - Deep Dive into the Next Version of the Google Drive API. Ali Afshar, Ivan Lee. This session discusses a number of best practices with the new Google Drive API. We'll cover how to properly sync files, how to manage sharing, and how to make your applications faster and more efficient than ever before. We'll go through an entire working application that exposes best practices. For all I/O 2012 sessions, go to developers.google.com.

    Read the article

  • After closing the ssh terminal, the thin server is down

    - by Keating Wang
    I have a Rails project running on the thin server (1.3.1) on an Ubuntu server. I ssh to the server and start thin with the command 'thin start -C config/thin.yml', with the following thin.yml:

        port: 3000
        log: log/thin.log
        timeout: 30
        chdir: /home/byht/56platform/dev/tracker
        environment: production
        servers: 1
        daemonize: true

    After thin starts successfully, I can visit the project and it works well. Then I close the terminal: I can still visit the pages that were already visited, but when I request pages that had not been visited before closing the ssh terminal, a "500" error appears. I didn't find any error messages in the log file. I have tried starting thin with nohup and sudo, but they didn't help. If I sign in to the Ubuntu server locally, the problem disappears, but I need to be able to start thin over ssh when I'm at home.
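
    A hedged way to take the ssh session out of the picture entirely, assuming the thin gem's bundled init-script generator (present in the 1.x series) and a Debian/Ubuntu host: let init manage the server instead of the login shell.

        # Hedged sketch: generate thin's init script, register the config,
        # and have init (not the ssh session) own the process.
        sudo thin install                          # creates /etc/init.d/thin and /etc/thin/
        sudo cp config/thin.yml /etc/thin/tracker.yml
        sudo update-rc.d thin defaults             # start at boot
        sudo /etc/init.d/thin start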

    Read the article

  • Possible to get OpenDNS to dereference Host on VPN?

    - by Scott P
    I recently changed ISPs for my home internet, and I am now having some trouble getting back into the corporate network from home over the VPN. I have figured out that OpenDNS is resolving the hosts on the VPN incorrectly when I am using TCP/IP. When I browse to one of the hosts on the corporate network, e.g. \\host1, from the file manager, this succeeds. However, when I ping the host (ping host1), the name resolves to an OpenDNS server address instead of the actual host's IP address. Does anyone know how to make this work? On a hunch, I turned off typo correction, but this did not help.
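
    A hedged way to confirm what is happening: compare what the system resolver returns with what the VPN's own DNS server returns for the same name. The internal DNS address and the corporate suffix below are hypothetical placeholders.

        # Hedged sketch: if the first lookup returns an OpenDNS address while
        # the second returns the real host, the VPN's DNS server / search
        # suffix simply isn't being consulted for unqualified names.
        nslookup host1
        nslookup host1.corp.example.com 10.0.0.53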

    Read the article
