Search Results

Search found 3168 results on 127 pages for 'directories'.

Page 115/127 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • Can git avoid storing the history of specific folders when working with git-svn?

    - by Timofey Basanov
    In short: is there a way to disable storing the full history of specific folders in a git-svn repo? We have a pretty large SVN repo with a big checkout. I would like to migrate it to Git for my local development, because Git speeds up the update and status commands by orders of magnitude. When I simply do git svn clone it creates a very big repo, big enough to be bigger than my whole HDD. The problem lies in binary directories whose history is huge. The latest binaries are required for a proper local build, but their history is not needed at all for my development process; I will never change them myself. I would like to store only the latest versions of specific folders, or maybe a week of history at most. All I could find was a filter for git svn fetch that excludes specific folders entirely, which is not exactly what I need. I would be fine with a cron task that deletes history from specific folders, but I do not know how to write one, and cron would not solve the problem of the first git svn clone anyway. P.S. The SVN repository structure cannot be changed by any means.
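
    One rough workaround, sketched below: git svn clone accepts a revision range, so cloning from a recent revision onwards keeps the repository small. It limits history globally rather than per folder, and the revision number and URL here are placeholders, but it does address the size of the initial clone.

        # hypothetical example: start the clone at r50000 instead of importing the full history
        git svn clone -r 50000:HEAD --stdlayout http://svn.example.com/repo my-repo

        # later fetches only pull revisions newer than what is already present
        cd my-repo && git svn fetch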

    Read the article

  • Configuring VisualSVN to exclude files

    - by douglasrahn
    I have been searching for documentation on how to exclude files in VisualSVN but have not found any. All the documentation I find either does not match my file structure or references files/directories I am missing. For example, the only file I can find with configuration items in it seems to be completely commented out and is missing the [miscellany] section as well as anything like enable-auto-props. Ultimately I need to exclude some files so that my development can continue without SVN errors. I am constantly receiving errors for .pbxuser and other project files and would like to make sure this is not causing some of my other headaches. Here is the information I would love to use but can't, because it doesn't match my setup: "How to fix Subversion in Xcode 3", posted on December 10, 2008 by Rodney Aiglstorfer. If you don't take the necessary steps to prepare for Subversion, you will run into problems using it in Xcode, because Xcode produces files that confuse Subversion: it thinks they are text files when they are really binary files, or the reverse. To overcome this, you need to make some simple changes to the Subversion configuration file in your user home directory. Step 1: open the Subversion configuration file ~/.subversion/config (note: if the .subversion directory doesn't exist yet, run svn status; the command fails but creates the necessary files to get you started). Step 2: enable "global-ignores" and add new things to ignore. Find the line that contains the text "global-ignores" and append the following: build *~.nib *.so *.pbxuser *.mode* *.perspective*. What I am really looking for is how to exclude the files I need to for VisualSVN; it shouldn't really be different from regular svn, since VisualSVN claims to use the same product and just puts a GUI on it.
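
    For reference, the edit the quoted article describes is roughly the following sketch of ~/.subversion/config (the default pattern list varies slightly between Subversion releases, and on Windows the equivalent per-user file normally lives at %APPDATA%\Subversion\config):

        [miscellany]
        # uncomment global-ignores and append the Xcode-related patterns
        global-ignores = *.o *.lo *.la #*# .*.rej *.rej .*~ *~ .#* .DS_Store build *.pbxuser *.mode* *.perspective*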

    Read the article

  • Virtualmin - Domain / Subdomain confusion

    - by Dan
    Hi, I've just purchased a new VPS and set up Virtualmin on it. I pointed my domain at the IP of the server and set up a new virtual server through Virtualmin. I uploaded a simple PHP script to check it was working and it seemed to be fine. I then created a sub-server and also pointed test.mydomain.com at my IP, with the intention of having mydomain.com and test.mydomain.com point to different directories on my VPS. However, once I added the sub-server, I received a 403 Forbidden message when I tried to access either the new subdomain OR the existing site. I uploaded a similar PHP script to the subfolder and it then displayed correctly. However, I now get sent to the same script regardless of what URL I type in. I did try posting this message on the Virtualmin forum, but their filters flagged it as spam and wouldn't allow it. Can anyone help me? I'm sure it's something simple, but this is driving me crazy! Any advice appreciated. Thanks.
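
    One way to narrow this down, as a sketch rather than Virtualmin-specific advice (paths assume a Debian/Ubuntu-style Apache layout; adjust for your distribution): ask Apache which virtual host each hostname actually maps to and which DocumentRoot it will serve, since getting the same script for every URL usually means both names are landing on the same, or a default, vhost.

        # list every configured vhost, its ServerName and the file it was defined in
        apache2ctl -S

        # confirm which directory each hostname is pointed at
        grep -R "ServerName\|DocumentRoot" /etc/apache2/sites-enabled/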

    Read the article

  • Background loading javascript into iframe without using jQuery/Ajax?

    - by user210099
    I'm working on an offline-only help system which requires loading a large amount of search-related data into an iframe before the search functionality can be used. Due to the folder structure of the project, I am unable to use Ajax-related background load methods, since the files I need are loaded a few directories "up and over." I have written some code which delays the loading of the help data until the rest of the webpage is loaded. The help data consists of a bunch of JavaScript files which hold information about the terms etc. that exist in the help books installed on the system. The webpage works fine until I start to load this help data into a hidden iframe. While the JavaScript files are loading, I cannot use any of the webpage: links that require small files to be downloaded for hover-over effects don't show up, and JavaScript (switching tabs on the page) has no effect. I'm wondering if this is just a limitation of the way JavaScript works, or if there's something else going on here. Once all the files are loaded for the help system, the webpage works as expected.

        function test() {
            var MGCFrame = eval("parent.parent");
            if ((ALLFRAMESLOADED == true)) {
                t2 = MGCFrame.setTimeout("this.IHHeader.frames[0].loadData()", 1);
            } else {
                t1 = MGCFrame.setTimeout("this.IHHeader.frames[0].test()", 1000);
            }
        }

    loadData() simply starts the data loading process. Thanks for any help you can provide.

    Read the article

  • How to change the default branch to push in mercurial?

    - by timmfin
    I like creating named branches in Mercurial to deal with features that might take a while to code, so when I push I do hg push -r default to ensure I'm only pushing changes on the default branch. However, it is a pain to have to remember -r default every single time I do a push or outgoing command. So I tried to fix this by adding this config to my ~/.hgrc:

        [defaults]
        push = push -r default
        outgoing = outgoing -r default

    The problem is, those config lines are not really defaults, they are aliases. They work as intended until I try to do hg push -r <some revision>, and then the "default" I've set up just obliterates the revision I passed in. (I see that defaults are deprecated, but aliases have the same problem.) I tried looking around, but I can't find anything that will let me set a default branch to push AND still override it when necessary. Does anyone know of something else I could do? PS: I do realize that I could have separate clones for each branch, but I would rather not do that. It's annoying to have to switch directories, particularly when you have shared configuration or editor workspaces.
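
    One compromise, sketched here (the alias names are invented, and this is not a built-in way to default a branch): leave push and outgoing untouched and add separate aliases for the everyday case, so an explicit -r on the real commands is never clobbered.

        [alias]
        pushdef = push -r default
        outdef = outgoing -r default

    Then hg pushdef covers the common case, and a plain hg push -r <rev> still behaves normally.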

    Read the article

  • Why should I install Python packages into `~/.local`?

    - by Matthew Rankin
    Background I don't develop using OS X's system provided Python versions (on OS X 10.6 that's Python 2.5.4 and 2.6.1). I don't install anything in the site-packages directory for the OS provided versions of Python. (The only exception is Mercurial installed from a binary package, which installs two packages in the Python 2.6.1 site-packages directory.) I installed three versions of Python, all using the Mac OS X installer disk image: Python 2.6.6 Python 2.7 Python 3.1.2 I don't like polluting the site-packages directory for my Python installations. So I only install the following five base packages in the site-packages directory. For the actual method/commands used to install these, see SO Question 4324558. setuptools/ez_setup distribute pip virtualenv virtualenvwrapper All other packages are installed in virtualenvs. I am the only user of this MacBook. Questions Given the above background, why should I install the five base packages in ~/.local? Since I'm installing these base packages into the site-packages directories of Python distributions that I've installed, I'm isolated from the OS X's Python distributions. Using this method, should I be concerned about Glyph's comment that other things could potentially break (see his comment below)? Again, I'm only interested in where to install those five base packages. Related Questions/Info I'm asking because of Glyph's comment to my answer to SO question 4314376, which stated: NO. NEVER EVER do sudo python setup.py install whatever. Write a ~/.pydistutils.cfg that puts your pip installation into ~/.local or something. Especially files named ez_setup.py tend to suck down newer versions of things like setuptools and easy_install, which can potentially break other things on your operating system. Previously, I asked What's the proper way to install pip, virtualenv, and distribute for Python?. However, no one answered the "why" of using ~/.local.
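
    For what it's worth, the ~/.pydistutils.cfg that Glyph's comment refers to is, roughly, the sketch below; with it in place, plain setup.py install (and other distutils-based installs) default to ~/.local instead of the interpreter's site-packages. A global prefix like this applies everywhere, so it is worth checking that it does not interfere with your virtualenv workflow.

        # ~/.pydistutils.cfg (a sketch)
        [install]
        prefix = ~/.local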

    Read the article

  • Reading a directory

    - by paleman
    Hi, I'm trying to solve an exercise from K&R about reading directories. This task is system dependent because it uses system calls. In the book the authors say that their example is written for Version 7 and System V UNIX systems and that they used the directory information in the header <sys/dir.h>, which looks like this:

        #ifndef DIRSIZ
        #define DIRSIZ 14
        #endif
        struct direct {           /* directory entry */
            ino_t d_ino;          /* inode number */
            char  d_name[DIRSIZ]; /* long name does not have '\0' */
        };

    On that system they use struct direct combined with the read function to retrieve a directory entry, which consists of a file name and an inode number:

        struct direct dirbuf;  /* local directory structure */
        while (read(dp->fd, (char *) &dirbuf, sizeof(dirbuf)) == sizeof(dirbuf)) {
            ...
        }

    I suppose this works fine on UNIX and Linux systems, but what I want to do is modify it so it works on Windows XP. Is there some structure in Windows like struct direct that I can use with the read function, and if so, which header is it defined in? Or does Windows require a completely different approach?

    Read the article

  • "The breakpoint will not currently be hit" error while debugging a mixed mode application (c# and unmanaged c++)

    - by user1678403
    While debugging a mixed mode application in VS2010, the breakpoint set on a line of code contained in an unmanaged c++ dll source file (called from a managed c# wrapper class) shows the infamous "The breakpoint will not currently be hit. No symbols have been loaded for this document" info message when hovering the mouse over the breakpoint on the line in question. The breakpoint itself is a red circle with a yellow info triangle instead of the usual solid red orb. Of course, the breakpoint isn't hit when the debugger is executed. Most answers I've found for this warning indicate the breakpoint hasn't been set properly, or that the expected dll is not being loaded, or that the associated pdb file is not located in the correct location, etc. etc. This is not the problem. The application does load and execute the referenced dll correctly. I've verified that the correct pdb file, with the same file date as its dll, is located in the executable's working directory along with the target dll itself. The debugger simply doesn't load the symbols for the dll, and the dll doesn't show in the Modules list. None of the solutions I've found online work for this problem. The dll doesn't show in the modules list available from 'Debug-Windows-Modules' menu selection... even though it is, in fact, loaded. Breakpoints set in the wrapper class work correctly. Deleting the bin and obj directories, cleaning and rebuilding the solution also doesn't help.

    Read the article

  • Nginx 1.6.1: Installed from source but not running correctly

    - by Maca
    I just installed Nginx 1.6.1 from source but it didn't seem to installed correctly. Nginx is running if I service nginx status but when I do nginx -v it outputs command not found. Regular HTML page shows fine and there is no error in the error logs. I am on AWS Ec2 linux AMI. Here is my /etc/init.d/nginx script #!/bin/sh # # nginx - this script starts and stops the nginx daemon # # chkconfig: - 85 15 # description: Nginx is an HTTP(S) server, HTTP(S) reverse \ # proxy and IMAP/POP3 proxy server # processname: nginx # config: /etc/nginx/nginx.conf # config: /etc/sysconfig/nginx # pidfile: /usr/local/nginx/logs/nginx.pid # Source function library. . /etc/rc.d/init.d/functions # Source networking configuration. . /etc/sysconfig/network # Check that networking is up. [ "$NETWORKING" = "no" ] && exit 0 nginx="/usr/local/nginx/sbin/nginx" prog=$(basename $nginx) NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf" [ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx lockfile=/usr/local/nginx/logs/nginx.lock make_dirs() { # make required directories user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -` options=`$nginx -V 2>&1 | grep 'configure arguments:'` for opt in $options; do if [ `echo $opt | grep '.*-temp-path'` ]; then value=`echo $opt | cut -d "=" -f 2` if [ ! -d "$value" ]; then # echo "creating" $value mkdir -p $value && chown -R $user $value fi fi done } start() { [ -x $nginx ] || exit 5 [ -f $NGINX_CONF_FILE ] || exit 6 make_dirs echo -n $"Starting $prog: " daemon $nginx -c $NGINX_CONF_FILE retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc $prog -QUIT retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { configtest || return $? stop sleep 1 start } reload() { configtest || return $? echo -n $"Reloading $prog: " killproc $nginx -HUP RETVAL=$? echo } force_reload() { restart } configtest() { $nginx -t -c $NGINX_CONF_FILE } rh_status() { status $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart|configtest) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}" exit 2 esac
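
    A guess, sketched below: the init script always starts /usr/local/nginx/sbin/nginx by its absolute path, so the service can be running even though that directory is not on your PATH, which would explain nginx -v returning "command not found". (Note that make_dirs in the script also calls a bare nginx rather than $nginx, so it makes the same assumption.) Two ways to make the binary visible, assuming the paths in the script above:

        # either call it by its full path
        /usr/local/nginx/sbin/nginx -v

        # or put it on the PATH, e.g. via a symlink
        sudo ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/nginx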

    Read the article

  • passenger-status - ERROR: Phusion Passenger doesn't seem to be running

    - by Casual Coder
    My server is: Server version: Apache/2.2.11 (Ubuntu) Server built: Aug 16 2010 17:44:11 My ruby version ruby 1.9.2p136 (2010-12-25 revision 30365) [x86_64-linux]. I've installed passenger 3.0.7 via RubyGems. I've run passenger-install-apache2-module and everything went fine. I've modified configuration (load module, edit virtualhost etc.) and restarted Apache. Module is loading fine (apache does not complain). But Passenger is obviously not working: sudo passenger-status ERROR: Phusion Passenger doesn't seem to be running. How can I get it working ? Edit 1: /etc/apache2/mods-enabled/passenger.load LoadModule passenger_module /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7/ext/apache2/mod_passenger.so Root of passenger: passenger-config --root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7 Apache VirtualHost sub URI configuration in /etc/apache2/sites-enabled/railsapps: <VirtualHost <IP ADDRESS>:80> ServerAdmin webmaster@localhost ServerName my.server.name PassengerRoot /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7 PassengerRuby /usr/bin/ruby RailsEnv development DocumentRoot /www/vhosts/railsapps <Directory /www/vhosts/railsapps> Options FollowSymlinks -MultiViews AllowOverride None Order allow,deny Allow from all </Directory> RailsBaseURI /siteA <Directory /www/vhosts/railsapps/siteA> Options -MultiViews AllowOverride All Order allow,deny Allow from all </Directory> RailsBaseURI /siteB <Directory /www/vhosts/railsapps/siteB> AllowOverride All Options -MultiViews Order allow,deny Allow from all </Directory> LogLevel info ErrorLog /var/log/apache2/railsapps_error.log CustomLog /var/log/apache2/railsapps_access.log combined </VirtualHost> Of course as in 'users guide apache.html' siteA and siteB are symlinks to siteA/public and siteB/public absolute paths respectively. Edit 2: In logs there is nothing related to passenger. Permissions are also fine (read and executable) on directories in paths. Even if it was some misconfiguration or permission problem isn't passenger suppose to be running? I mean sudo passenger-status should at least output --- general information ---. When I place some test html file in railsapps directory it is served fine. /var/log/apache2/railsapps_error.log [Sun Jun 19 12:19:08 2011] [error] [client <IP>] Directory index forbidden by Options directive: /www/vhosts/railsapps/siteA/ [Sun Jun 19 12:19:08 2011] [error] [client <IP>] File does not exist: /www/vhosts/railsapps/favicon.ico
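
    A couple of quick checks worth running first, as a sketch (paths assume the stock Ubuntu Apache layout shown above): confirm the module is loaded into the running Apache, and watch the global error log during a restart, since Passenger reports most startup problems there rather than in the per-vhost log.

        # is mod_passenger actually loaded?
        apache2ctl -t -D DUMP_MODULES | grep -i passenger

        # restart and watch for Passenger start-up messages or errors
        sudo /etc/init.d/apache2 restart
        tail -f /var/log/apache2/error.log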

    Read the article

  • Install MegaCli to Monitor Perc 5/i in Nexentastor 3

    - by Peter Valadez
    I have a Dell 2950 with a Perc 5/i Raid controller that we've already installed Nexentastor 3 Community Edition on. We setup a raid-10 array that and put a ZFS pool on top of the hardware. As I understand, in this configuration ZFS/Nexentastor will not be able to tell when a disk fails in the array. Obviously, this is not optimal. Since the Dell Perc 5/i controller is a rebranded LSI controller, you should be able to use the MegaCli utility to manage the array and monitor its condition. I had seen in a separate forum that the Perc 5/i is very similar to the LSI MegaRAID 8480E, so I tried installing the MegaCli utility at the link below. However, I have not been able to successfully install the utility. http://www.lsi.com/support/products/Pages/MegaRAIDSAS8480E.aspx Here is what happened when I tried to install MegaCli: root@Nexenta2:/files# pkgadd -d MegaCli.pkg Warning: unable to relocate '$BASEDIR' mv: cannot move `solmegacli-8.02.16/' to a subdirectory of itself, `solmegacli-8.02.16//var/lib/dpkg/alien/solmegacli/reloc/solmegacli-8.02.16' mv: cannot move `solmegacli-8.02.16/' to a subdirectory of itself, `solmegacli-8.02.16//opt/solmegacli-8.02.16' 822-date: warning: This program is deprecated. Please use 'date -R' instead. 822-date: warning: This program is deprecated. Please use 'date -R' instead. solmegacli_8.02.16-1_all.deb generated (Reading database ... 41397 files and directories currently installed.) Preparing to replace solmegacli 8.02.16-1 (using solmegacli_8.02.16-1_all.deb) ... Unpacking replacement solmegacli ... Setting up solmegacli (8.02.16-1) ... In /var/logs/dpkg.log: 2012-03-23 20:40:19 status unpacked solmegacli 8.02.16-1 2012-03-23 20:40:19 configure solmegacli 8.02.16-1 8.02.16-1 2012-03-23 20:40:19 status unpacked solmegacli 8.02.16-1 2012-03-23 20:40:19 status half-configured solmegacli 8.02.16-1 2012-03-23 20:40:19 status installed solmegacli 8.02.16-1 So... I've got three questions: Is it possible to install and use MegaCli in Nexentastor 3? If so, how can I install MegaCli on Nexentastor 3? Suggestions welcome!!! If not, is there a better way to monitor the condition of the Perc 5/i hardware raid? Our 2950 does have a DRAC card, so can I use that to monitor the raid condition?
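
    If the package can be made to install cleanly, a sketch of the usual MegaCli health checks (the binary name and location vary by package; on some systems it is MegaCli64 and lives under /opt):

        # overall logical drive state (Optimal vs Degraded)
        MegaCli -LDInfo -Lall -aALL

        # per-disk state, media errors and predictive-failure counts
        MegaCli -PDList -aALL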

    Read the article

  • No network upsets gnome

    - by Darren Cook
    An issue that has been bothering me for over a year now. My notebook, running ubuntu 10.04, is almost all the time using a wired connection, with static IP address. And a remote DNS server. Network is configured with entries in /etc/network/interfaces and /etc/resolv.conf, rather than whatever the gnome UI tool was (*) But if I'm out, or simply unplug the network cable, a few things get weird. Specifically the gnome-panel stops working - it is still there, but isn't updating. And opening a nautilus window (e.g. to look at files on the local disk) has huge time-outs. By that I mean it will not open the window for something like 30 or 60 seconds; but when it does finally open it I can see the files and it is perfectly usable. Everything else works fine, alt-tab between windows, etc. I use the commandline to find the pid of gnome-panel, kill it, wait a couple of seconds, and it opens up a fresh panel which is normally usable. (Something like 10 minutes later it will have locked/crashed again; the same for the nautilus windows.) I'm guessing this is a DNS issue? Would setting up a local DNS server help? Guess number 2 was related to having a file server mount (samba, though running on another linux box), and symbolic links to files and directories on that file server on my desktop. My question is a bit vague... Does anyone recognize these symptoms, and have a suggestion? Or do you have some troubleshooting suggestions for narrowing down the problem? My /etc/hosts: 127.0.0.1 localhost 127.0.1.1 myhost # The following lines are desirable for IPv6 capable hosts ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts 127.0.0.1 testsite.local #Other test website URLs here UPDATE: Some timings to open some desktop folder icons. This is after pulling out the network cable. A sub-directory of the desktop took 23 secs to open up. Content appears immediately (just 8 files, it has no further subdirectories). The home directory icon took 12 seconds to open up, but then took about 30 seconds for the files to appear. I closed it and tried again. This time it took 18 seconds to open up, but then 70 seconds before anything appeared. *: I couldn't work out how to use the gnome network tool for my needs, which include 3-4 static IPs for testing virtual hosts locally.
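
    One way to test the DNS theory before setting up a local DNS server, as a sketch (fileserver here is a placeholder for the samba host's name): with the cable unplugged, time a lookup and see whether it stalls for roughly the same 30-60 seconds that the panel and nautilus do, and lazily detach the network mount so nothing keeps trying to stat it.

        # does name resolution itself hang when offline?
        time getent hosts fileserver

        # detach the samba mount without waiting for the server
        sudo umount -l /path/to/samba/mount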

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency but occasionally (maybe every minute or two) we get one or two huge latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things to try and solve this that have been gleaned from a few websites, e.g.: Making the first part of file names unique to aid 8.3 name generation Writing files to multiple directories Changing Intel Disk Write Caching Windows File/Printer Sharing Minimize memory used Balance Maximize data throughput for file sharing Maximize data throughput for network applications System-Advanced-Performance-Advanced NtfsDisableLastAccessUpdate - use fsutil behavior set disablelastaccess 1 disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart Enable a large size file system cache Disable paging of the kernel code IO Page Lock Limit Turn Off (or On) the Indexing Service But nothing seems to make much difference. There's a whole host of things we haven't tried yet but we wondered if anyone had come across the same problem, a reason and a solution (programmatic or not)? We can reproduce the problem using IOMeter and a simple setup: Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button. Select the Worker thread and put a cross in the box next to the disk you want to use in the Disk Targets tab and put '2000000' in Maximum Disk Size (NOTE: must have at least 1GB free space; sector size is 512 bytes) Next create a new access specification and add it to the worker thread: Transfer Request Size = 22kB 100% Sequential Percent of Access Spec = 100% Percent Read/Write = 100% Write Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds and both 'Number of Workers to Spawn Automatically' settings to zero. Select the Worker Thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (Xeon 8 core rack server with 4GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard Right-click and select 'Open Link in New Tab': ProcMon Result

    Read the article

  • Ubuntu - Ruby Daemon script creates two processes - sh and ruby - PID file points at sh, not ruby

    - by Jonathan Scoles
    The PID file for a ruby process I have running as a daemon is getting the wrong PID. It appears that running /etc/init.d/sinatra start creates two processes - sh and ruby, and the PID that ends up in the PID file is that of the sh process. This means that when I then run /etc/init.d/sinatra stop or /etc/init.d/sinatra restart, it is killing sh and leaving the ruby process still running. I'd like to know a) why is my script launching two processes - sh and ruby, and not just ruby, and b) how do I fix it to just launch ruby? Details of the setup: I have a small Sinatra server set up on an ubuntu server, running as a daemon. It is set to automatically at server startup run a script named sinatra in /etc/init.d that launches the a control script control.rb, which then runs a ruby daemon command to start the server. The script is run under the 'sinatrauser' account, which has permissions for the directories the script needs. contents of /etc/init.d/sinatra #!/bin/bash # sinatra Startup script for Sinatra server. sudo -u sinatrauser ruby /var/www/sinatra/control.rb $1 RETVAL=$? exit $RETVAL To install this script, I simply copied it to /etc/init.d/ and ran sudo update-rc.d sinatra defaults contents of /var/www/sinatra/control.rb require 'rubygems' require 'daemons' pwd = Dir.pwd Daemons.run_proc('sinatraserver.rb', {:dir_mode => :normal, :dir => "/opt/pids/sinatra"}) do Dir.chdir(pwd) exec 'ruby /var/www/sinatra/sintraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1' end portion of output from ps -A 6967 ? 00:00:00 apache2 10181 ? 00:00:00 sh <--- PID file gets this PID 10182 ? 00:00:02 ruby <--- Actual ruby process running Sinatra 12172 ? 00:00:00 sshd The PID file gets created in /opt/pids/sinatra/sinatraserver.rb.pid, and always contains the PID of the sh instance, which is always one less than the PID of the ruby process EDIT: I tried micke's solution, but it had no effect on the behavior I am seeing. This is the output from ps -A f. This output looks the same whether I use sudo -u sinatrauser ... or su sinatrauser -c ... in the service start script in /etc/init.d. 1146 ? S 0:00 sh -c ruby /var/www/sinatra/sinatraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1 1147 ? S 0:00 \_ ruby /var/www/sinatra/sinatraserver.rb

    Read the article

  • MariaDB doesn't upgrade, 2 versions are installed

    - by zahorak
    I have a server running on Debian wheezy with MaraiDB and OwnCloud. Few days ago, I wanted to update the packages because of the OwnCloud updates but something went wrong. Usually in this case I'd probably try to remove and again install the problematic packages, but on a server which is used by different people it doesn't seem like a valid solution anymore. Here you can see my console output: user@server:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libmariadbclient18 : Depends: libmysqlclient18 (= 10.0.4+maria-1~wheezy) but 10.0.5+maria-1~wheezy is installed libmysqlclient18 : Depends: libmariadbclient18 (= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed mariadb-client-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed mariadb-client-core-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed mariadb-server : Depends: mariadb-server-10.0 (= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed mariadb-server-core-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed E: Unmet dependencies. Try using -f. user@server:~$ sudo apt-get upgrade -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages will be upgraded: libmariadbclient18 mariadb-server-10.0 owncloud 3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 7 not fully installed or removed. Need to get 0 B/37.2 MB of archives. After this operation, 3,565 kB of additional disk space will be used. Do you want to continue [Y/n]? Y Preconfiguring packages ... (Reading database ... 35901 files and directories currently installed.) Preparing to replace libmariadbclient18 10.0.4+maria-1~wheezy (using .../libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb) ... Unpacking replacement libmariadbclient18 ... dpkg: error processing /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb (--unpack): trying to overwrite '/usr/lib/mysql/plugin/dialog.so', which is also in package mariadb-server-10.0 10.0.4+maria-1~wheezy Errors were encountered while processing: /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I tried removing the 10.0.4 package version of libmariadbclient18 but I wasn't really successful in doing that. So my last hope is here, do you have any ideas how exactly I could fix this issue? Thx very much
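
    The dpkg error above is a plain file conflict: dialog.so is shipped both by the old mariadb-server-10.0 package and by the new libmariadbclient18. A common way out, sketched here (worth taking a database backup first), is to let dpkg overwrite that one file and then allow apt to finish the upgrade:

        # permit the conflicting file to be overwritten, then let apt sort out the rest
        sudo dpkg -i --force-overwrite /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb
        sudo apt-get -f install
        sudo apt-get upgrade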

    Read the article

  • How to create Windows 7 installation USB media from Linux? Is there a better method?

    - by Abel Coto
    I have been reading some web pages and posts here and in other forums about how to create a Windows 7 installation Usb media (to install windows 7 using a usb) from linux. I asked in technet about this , and they give me general ideas about how to do it I personally am not very familiar with linux, but basicaly all that you need to do... in whatever way you do it is the following: Format a usb flash drive, either fat32 or ntfs create a partition that is large enough to host the windows installation (give or take 3GB for 64bit, aroudn 2.5gb for 32bit) and mark that partition as active/bootable. Since this can be done with windows, but just as well with a tool like gparted, you should be able to do the same in debian. Once you have created that partition, mount the iso that you download, and copy all files starting from the root, into the root of the usb flash drive. That's all there's to it. There is a method that i found in various places,that is almost the same that the man of technet has said. But,there is a step,that in that method is done,that i don't know if it is really necessary,or not. Not allways dd works.Basically, the missing step was to write a proper boot sector to the usb stick, which can be done from linux with ms-sys. This works with the Win7 retail version. Here is the complete rundown again: Install ms-sys Check what device your usb media is asigned - here we will assume it is /dev/sdb. Delete all partitions, create a new one taking up all the space, set type to NTFS, and set it bootable: *# cfdisk /dev/sdb* Create NTFS filesystem: *# mkfs.ntfs -f /dev/sdb1* Mount iso and usb media: *# mount -o loop win7.iso /mnt/iso # mount /dev/sdb1 /mnt/usb* Copy over all files: *# cp -r /mnt/iso/* /mnt/usb/* Write Windows 7 MBR on usb stick: *# ms-sys -7 /dev/sdb* ...and you're done. Shouldn't the usb work without doing the last step "# ms-sys -7 /dev/sdb" or to make the usb bootable , is a must , not only to mark the partition as bootable ? Would be better use rsync instead of cp -r ? All this steps should be done as root, i suppose , or if not , chmod to 664 and chown the directories where are mounted the usb and the iso, no ? But i suppose that the easier thing is to copy the data as root , and that this will not affect to the data. Has anyone tried this method or some similar like copying the iso with dd ?

    Read the article

  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers and auto complete works on all of them bar one. The issue: apt-<tab> would show a list of options but sudo apt-<tab> would not. After fiddling with it for a few hours i've found that /etc/bash_autocomplete did not exist. on the broken server. Copying the one from a working one it now works. but still not properly. sudo apt-get ins<tab> does not show do anything. listing the files in /etc/bash_autocomplete.d/ on the working server has about 50 files, and the broken one only two or three. i dont think that i can just copy these files though as it might show commands for things that are not even installed. TL;DR autocomplete broken, how can i fix it. Seems like its disabled somewhere, why is this EDIT: Ok, it was not ever installed... $ sudo apt-get install bash-completion Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed bash-completion 0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded. Need to get 140kB of archives. After this operation, 1,061kB of additional disk space will be used. Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB] Fetched 140kB in 0s (174kB/s) Selecting previously deselected package bash-completion. (Reading database ... 23808 files and directories currently installed.) Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ... Processing triggers for man-db ... Setting up bash-completion (1:1.2-2ubuntu1.1) ... its now kinda working, but still wonky... apt-get ins<tab> gives sudo apt-get insserv as the option. also apt-get install php5<tab> gives apt-get install php5/ not php5-* options.
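
    If completion is still patchy after installing the package, it is worth checking that bash-completion is actually sourced in interactive shells; a sketch (Ubuntu normally does this from /etc/bash.bashrc, but that block is sometimes commented out):

        # load the completion machinery into the current shell and retest
        . /etc/bash_completion

        # make it permanent for this user
        echo '. /etc/bash_completion' >> ~/.bashrc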

    Read the article

  • How to improve this bash shell script for turning hardlinks into symlinks?

    - by MountainX
    This shell script is mostly the work of other people. It has gone through several iterations, and I have tweaked it slightly while also trying to fully understand how it works. I think I understand it now, but I don't have confidence to significantly alter it on my own and risk losing data when I run the altered version. So I would appreciate some expert guidance on how to improve this script. The changes I am seeking are: make it even more robust to any strange file names, if possible. It currently handles spaces in file names, but not newlines. I can live with that (because I try to find any file names with newlines and get rid of them). make it more intelligent about which file gets retained as the actual inode content and which file(s) become sym links. I would like to be able to choose to retain the file that is either a) the shortest path, b) the longest path or c) has the filename with the most alpha characters (which will probably be the most descriptive name). allow it to read the directories to process either from parameters passed in or from a file. optionally, write a long of all changes and/or all files not processed. Of all of these, #2 is the most important for me right now. I need to process some files with it and I need to improve the way it chooses which files to turn into symlinks. (I tried using things like the find option -depth without success.) Here's the current script: #!/bin/bash # clean up known problematic files first. ## find /home -type f -wholename '*Icon* ## *' -exec rm '{}' \; # Configure script environment # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ set -o nounset dir='/SOME/PATH/HERE/' # For each path which has multiple links # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # (except ones containing newline) last_inode= while IFS= read -r path_info do #echo "DEBUG: path_info: '$path_info'" inode=${path_info%%:*} path=${path_info#*:} if [[ $last_inode != $inode ]]; then last_inode=$inode path_to_keep=$path else printf "ln -s\t'$path_to_keep'\t'$path'\n" rm "$path" ln -s "$path_to_keep" "$path" fi done < <( find "$dir" -type f -links +1 ! -wholename '* *' -printf '%i:%p\n' | sort --field-separator=: ) # Warn about any excluded files # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ buf=$( find "$dir" -type f -links +1 -path '* *' ) if [[ $buf != '' ]]; then echo 'Some files not processed because their paths contained newline(s):'$'\n'"$buf" fi exit 0
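
    On point 2, a sketch of one way to make the retained file the one with the shortest path while reusing the loop above unchanged (it still assumes no newlines in names): feed the loop the same inode:path lines, but sorted first by inode and then by path length, so the first path seen for each inode, which becomes path_to_keep, is the shortest one.

        find "$dir" -type f -links +1 -printf '%i:%p\n' \
          | awk -F: '{ print $1, length($0) - length($1) - 1, $0 }' \
          | sort -k1,1n -k2,2n \
          | cut -d' ' -f3-

    Sorting the second key with -k2,2nr instead keeps the longest path; ranking by "most alpha characters" would need a similar precomputed key in place of the length.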

    Read the article

  • Postfix not delivering email using Maildir

    - by Greg K
    I've followed this guide to get postfix set up. I've not completed it yet, as from sending test emails, email is no longer being delivered since switching to Maildir from mbox. I have created a Maildir directory with cur, new and tmp sub directories. ~$ ll drwxrwxr-x 5 greg greg 4096 2012-07-07 16:40 Maildir/ ~$ ll Maildir/ drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 cur drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 new drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 tmp Send a test email. ~$ netcat mail.example.com 25 220 ubuntu ESMTP Postfix (Ubuntu) ehlo example.com 250-ubuntu 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-STARTTLS 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN mail from: [email protected] 250 2.1.0 Ok rcpt to: [email protected] 250 2.1.5 Ok data 354 End data with <CR><LF>.<CR><LF> Subject: test email Hi, Just testing. . 250 2.0.0 Ok: queued as 56B541EA53 quit 221 2.0.0 Bye Check the mail queue. ~$ mailq Mail queue is empty Check if mail has been delivered. ~$ ls -l Maildir/new total 0 Some postfix settings: ~$ sudo postconf home_mailbox home_mailbox = Maildir/ ~$ sudo postconf mailbox_command mailbox_command = /var/log/mail.log Jul 7 16:57:33 li305-246 postfix/smtpd[21039]: connect from example.com[178.79.168.xxx] Jul 7 16:58:14 li305-246 postfix/smtpd[21039]: 56B541EA53: client=example.com[178.79.168.xxx] Jul 7 16:58:33 li305-246 postfix/cleanup[21042]: 56B541EA53: message-id=<20120707155814.56B541EA53@ubuntu> Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 56B541EA53: from=<[email protected]>, size=321, nrcpt=1 (queue active) Jul 7 16:58:33 li305-246 postfix/smtp[21043]: 56B541EA53: to=<[email protected]>, relay=none, delay=30, delays=30/0.01/0/0, dsn=5.4.6, status=bounced (mail for example.com loops back to myself) Jul 7 16:58:33 li305-246 postfix/cleanup[21042]: 1F68B1EA55: message-id=<20120707155833.1F68B1EA55@ubuntu> Jul 7 16:58:33 li305-246 postfix/bounce[21044]: 56B541EA53: sender non-delivery notification: 1F68B1EA55 Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 1F68B1EA55: from=<>, size=1999, nrcpt=1 (queue active) Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 56B541EA53: removed Jul 7 16:58:33 li305-246 postfix/smtp[21043]: 1F68B1EA55: to=<[email protected]>, relay=none, delay=0, delays=0/0/0/0, dsn=5.4.6, status=bounced (mail for example.com loops back to myself) Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 1F68B1EA55: removed Jul 7 16:58:36 li305-246 postfix/smtpd[21039]: disconnect from domain.me[178.79.168.xxx] Jul 7 17:10:38 li305-246 postfix/master[20878]: terminating on signal 15 Jul 7 17:10:39 li305-246 postfix/master[21254]: daemon started -- version 2.8.5, configuration /etc/postfix Any ideas?
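
    The giveaway in the log is status=bounced (mail for example.com loops back to myself): Postfix does not consider itself the final destination for example.com, so delivery never reaches the Maildir stage. A sketch of the usual fix, assuming this box really should accept mail for the domain:

        # see which domains postfix currently treats as local
        postconf mydestination

        # add the domain (merge with your existing values), then reload
        sudo postconf -e 'mydestination = example.com, ubuntu, localhost.localdomain, localhost'
        sudo /etc/init.d/postfix reload

    Separately, the mailbox_command = /var/log/mail.log setting shown above looks suspect: mailbox_command is expected to be a delivery command, and if I remember the precedence right it overrides home_mailbox when set.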

    Read the article

  • Elasticsearch won't start anymore

    - by Oleander
    I restarted my elasticsearch instance 5 days ago and I haven't manage to start it since then. I get no output in the log file /var/log/elasticsearch/ nor does the elasticsearch binary print any information when running at using elasticsearch -f. I once manage to get this output. [2012-11-15 22:51:18,427][INFO ][node ] [Piper] {0.19.11}[29584]: initializing ... [2012-11-15 22:51:18,433][INFO ][plugins ] [Piper] loaded [], sites [] Running curl http://localhost:9200 resulted in curl: (7) couldn't connect to host. I've tried increasing the memory from 3gb to 10gb, but that didn't make any diffrence. Running /etc/init.d/elasticsearch start takes 30 seconds. ps aux | grep elasticsearch results in this output. /usr/local/share/elasticsearch/bin/service/exec/elasticsearch-linux-x86-64 /usr/local/share/elasticsearch/bin/service/elasticsearch.conf wrapper.syslog.ident=elasticsearch wrapper.pidfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.pid wrapper.name=elasticsearch wrapper.displayname=ElasticSearch wrapper.daemonize=TRUE wrapper.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.status wrapper.java.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.java.status wrapper.script.version=3.5.14 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Delasticsearch-service -Des.path.home=/usr/local/share/elasticsearch -Xss256k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Xms1024m -Xmx1024m -Djava.library.path=/usr/local/share/elasticsearch/bin/service/lib -classpath /usr/local/share/elasticsearch/bin/service/lib/wrapper.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/jna-3.3.0.jar:/usr/local/share/elasticsearch/lib/log4j-1.2.17.jar:/usr/local/share/elasticsearch/lib/lucene-analyzers-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-core-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-highlighter-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-memory-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-queries-3.6.1.jar:/usr/local/share/elasticsearch/lib/snappy-java-1.0.4.1.jar:/usr/local/share/elasticsearch/lib/sigar/sigar-1.6.4.jar -Dwrapper.key=k7r81VpK3_Bb3N_5 -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=23888 -Dwrapper.version=3.5.14 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp org.elasticsearch.bootstrap.ElasticSearchF My current system: ElasticSearch Version: 0.19.11, JVM: 23.2-b09 Ubuntu 12.04 LTS I've tried re-install elasticsearch, removing old directories. Why can't I get it to start?
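
    With the service wrapper swallowing the output, a sketch of a couple of low-level checks that often surface the real error (paths taken from the ps output above): run the daemon in the foreground without the wrapper, and see whether another process is already holding the ports. The Tanuki wrapper also keeps its own log; the wrapper.logfile setting in bin/service/elasticsearch.conf says where it is written.

        # run in the foreground, bypassing the wrapper, and watch stdout
        /usr/local/share/elasticsearch/bin/elasticsearch -f

        # is something already bound to the HTTP or transport ports?
        sudo netstat -tlnp | grep -E ':9200|:9300'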

    Read the article

  • fink hangs while compiling Octave on OS X

    - by Mark Bennett
    Disclaimer: I'm totally new to Fink. I'm trying to install Octave (Matlab open source clone) on Mountain Lion using Fink, following instructions at http://wiki.octave.org/Octave_for_MacOS_X It's a new installation of Fink, and I've also installed X11 per instructions. I'm using this command (which I believe is correct since everything's 64 bit now): sudo fink install octave-atlas It's hanging after a while, showing this as it's last output: ... Setting up xft2-dev (2.2.0-2) ... Clearing dependency_libs of .la files being installed Reading buildlock packages... All buildlocks accounted for. /sw/bin/dpkg-lockwait -i /sw/fink/dists/stable/main/binary-darwin-x86_64/x11/xinitrc_1.5-1_darwin-x86_64.deb (Reading database ... 14871 files and directories currently installed.) Preparing to replace xinitrc 1.5-1 (using .../xinitrc_1.5-1_darwin-x86_64.deb) ... Unpacking replacement xinitrc ... Setting up xinitrc (1.5-1) ... I did notice the process name on the terminal's tab was "sort", so the second time I hit this I tried Control-D (End-of-File), and this did seem to unstick it. I'm wondering if there's some misformed command and sort was trying to read from stdin? Questions: 1: Has anybody else seen this? Google wasn't helpful 2: Fink outputs a LOT of warnings and errors.... is that normal? 3: wondering if anybody's got Fink to compile Octave on Mountain Lion specifically? And whether they used just "octave" or "octave-atlas". Or if you got it working with MacPorts or Homebrew? 4: later Fink failed with "Failed: phase compiling: gnuplot-minimal-4.6.1-1 failed". I haven't started googling that yet... but wondering if anybody's see that? Also tried MacPorts but got other errors with Octave. And reading online it looks like HomeBrew also has issues with Octave on Mountain Lion. 5: Generally looking for anybody who's got Octave running on Mountain Lion with any of the package managers. I've been at this for a couple days ;-)

    Read the article

  • So I want to separate my Program Files from the hard disk with the other system files. What is the best way to do this?

    - by grg-n-sox
    So I am running Windows 7 as my only OS. I have two hard drives on my computer. The first one is a 74GB Western Digital 10K RPM Raptor. The second one is a 1TB Seagate Barracuda (couldn't remember if it was a 7200.12 or some other decimal after the 7200). The OS in installed to the Raptor and I am just using the Barracuda for storage. With this setup, in case you couldn't guess already, the Raptor fills up quick and I am constantly having to maintain file locations. And although it is nice to have that quicker boot time and program loading, the time spent maintaining the drive makes me waste more time overall. So I am looking for a way to try to keep it clear while still keeping up system loading speeds. A performance hit on games and such is easily acceptable and as long as I can guarantee a 5GB space on the Raptor, I can always just temporarily move the disc image there. So I am figuring that having games installed like Boarderlands and Mass Effect, as well as having large files such as linux distro DVD disc images in My Documents, I probably should be moving my personal files and Program Files directories to the Barracuda. I currently have folders on the Barracuda for this, but this means routinely copying files over and I can't really do anything with the Program Files folder that already exists. The best I can do is remember to designate the install directory of any program installation to the alternative install directory, which I can't seem to get to ever work right with Steam. With that in mind, is there a way that is not too drastic to let me just change some folders and system settings once and everything works fine afterwards for my setup? I have considered just reinstalling Windows 7 to the Barracuda but that would defeat the purpose of the Raptor except for running disc images off of. I am also heard a bit about being able to use symlinks to fix this, but I have also heard that symlinks in Windows are not necessarily the same and not as well supported on Windows. An example a friend mentioned was something about how if you have a symlink in Windows on a small hard drive to a large hard drive and the contents the symlink points to is larger than the small hard drive's capacity, then Windows will think the smaller hard drive is full. So is there a fix/workaround that will let me use symlinks across hard drives without the issues or is there a better solution I am not being told about, not mentioning, or not thinking of?

    Read the article

  • Apache Simple Configuration Issue: Setting up per-user directory permission denied problem

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit. I am running into issues setting my per-user directory. The goal is to make localhost/~user map to /home/~user/public_html. I think that I have the permissions right because I have 755 to /home/~user, and I have 755 to /home/~user/public_html/ and I have 777 for all contents inside of /home/~user/public_html/ recursively set. My mod_userdir configuration looks like this: <IfModule mod_userdir.c> # # UserDir is disabled by default since it can confirm the presence # of a username on the system (depending on home directory # permissions). # UserDir disabled root UserDir enabled huckphin # # To enable requests to /~user/ to serve the user's public_html # directory, remove the "UserDir disabled" line above, and uncomment # the following line instead: # UserDir public_html The error that I am seeing in the error log is this: [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied When I login as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change on my configuration for this to work? [Added after comments] Hi Andol, thank you for your suggestions. So, first off, you said that you assume that I have the userdir module enabled, but I am not sure what that means exactly. That could be part of the problem. I do have the Module loaded, using the LoadModule directive. I have this: LoadModule userdir_module modules/mod_userdir.so I also did a find on where mod_userdir is, and I found it located here: [huckphin@crhyner-workbox]/% find / . -name '*mod_userdir.so*' 2> /dev/null /usr/lib64/lighttpd/mod_userdir.so /usr/lib64/httpd/modules/mod_userdir.so Is there something else I need to enable? Also, my directory configuration was mentioned. I have uncommented the default configuration given. I have not looked into what all of the configurations mean, and this could probably explain the issue. Here is the Directory that I have for the user directories: <Directory "/home/*/public_html"> AllowOverride FileInfo AuthConfig Limit Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory>
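
    Since the filesystem permissions look right, the usual remaining suspect on Fedora is SELinux, which blocks httpd from reading home directories by default. A sketch of how to check and, if that turns out to be the cause, allow it (run as root; the username comes from the error above):

        # does the audit log show httpd being denied on the home directory?
        grep httpd /var/log/audit/audit.log | tail

        # allow Apache to serve ~/public_html and label the content accordingly
        setsebool -P httpd_enable_homedirs on
        chcon -R -t httpd_user_content_t /home/huckphin/public_html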

    Read the article

  • Certain SFTP user cannot connect

    - by trobrock
    I have my Ubuntu Server set up so users with the group of sftponly can connect with sftp, but have a shell of /bin/false, and they connect to their home directories. This is working fine with three of the user accounts I have. But I added a new user account today the same way that I added the others and it will not successfully connect. sftp -vvv user@hostname debug1: Next authentication method: password user@hostname's password: debug3: packet_send2: adding 48 (len 73 padlen 7 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug2: fd 5 setting O_NONBLOCK debug3: fd 6 is O_NONBLOCK debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug1: channel 0: free: client-session, nchannels 1 debug3: channel 0: status: The following connections are open: #0 client-session (t3 r-1 i0/0 o0/0 fd 5/6 cfd -1) debug3: channel 0: close_fds r 5 w 6 e 7 c -1 debug1: fd 0 clearing O_NONBLOCK debug3: fd 1 is not O_NONBLOCK Connection to hostname closed by remote host. Transferred: sent 2176, received 1848 bytes, in 0.0 seconds Bytes per second: sent 127453.3, received 108241.6 debug1: Exit status -1 Connection closed For a successful user: sftp -vvv good_user@hostname debug1: Next authentication method: password good_user@hostname's password: debug3: packet_send2: adding 48 (len 63 padlen 17 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug2: fd 5 setting O_NONBLOCK debug3: fd 6 is O_NONBLOCK debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug1: Sending subsystem: sftp debug2: channel 0: request subsystem confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: subsystem request accepted on channel 0 debug2: Remote version: 3 debug2: Server supports extension "[email protected]" revision 1 debug2: Server supports extension "[email protected]" revision 2 debug2: Server supports extension "[email protected]" revision 2 debug3: Sent message fd 3 T:16 I:1 debug3: SSH_FXP_REALPATH . -> / sftp> I cannot figure out why one user will work and the other wont, I have restart the ssh service after adding the user. I have even removed the user and added them again to be sure I am adding it correctly.
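
    Since the password is accepted and the session dies immediately afterwards, a sketch of the things most often different between a working sftponly user and a broken one (user is a placeholder): group membership, the login shell, home-directory ownership and modes (if ChrootDirectory is in use, sshd requires the chroot path to be root-owned and not group- or world-writable), and whatever the server logs at the moment of the disconnect.

        # compare these between good_user and the failing user
        id user
        getent passwd user          # shell and home directory
        ls -ld /home /home/user     # chroot path must be root-owned, not group-writable

        # the server-side reason is usually spelled out here
        sudo tail -f /var/log/auth.log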

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time and like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure due to us not knowing what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known mac address directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to potentially push up the files. Would I need to manage hundreds of access keys and have each server customized to include this (do-able, but key management becomes nightmarish?) Could I restrict inbound connections somehow (eg by mac address), or just allow write-only to any client that was running the script? ( i could deal with a flood of data if someone got into a system? ) having a bucket per remote machine does not seem feasible due to bucket limits? I don't think I want to use a single common key as if one machine is breached then potentially, a malicious hack could get access to the filestore key and start deleting for ll clients, correct? I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my google terminology is wrong... I've written more than I should here, perhaps it can be summarised thus: In a perfect world I just want to have one of our techs install a new remote server into a location and it automagically starts sending files home with little or no intervention, and minimises risk? Pipedream or feasible? TIA, Aitch

    Read the article
