Search Results

Search found 28288 results on 1132 pages for 'home directory'.

  • Virtualmin deactivating PHP on new virtual servers

    - by Josh
    This is related to my other question... but the situation is much worse now. After updating to the most recent version of Virtualmin, when I create new accounts, Virtualmin sets up their VirtualHost entries as follows:

        <Directory /home/username/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            FCGIWrapper /home/username/fcgi-bin/php.fcgi .php
        </Directory>
        <Directory /home/username/cgi-bin>
            allow from all
        </Directory>
        [...]
        RemoveHandler .php

    Now, not only is it specifically inserting AddHandler fcgid-script and FCGIWrapper... which I do not want because I am using mod_fastcgi, but it's also setting up PHP in such a way that it will never work! It's adding a RemoveHandler .php after setting up the handler for PHP! Where is this behavior configured and how can I stop it? Better yet, how can I make Virtualmin not include any PHP commands at all in the VirtualHost section?
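
    As a sketch (not Virtualmin's actual template syntax), the block presumably wanted here is the same one minus the fcgid lines, leaving PHP to the site-wide mod_fastcgi setup; in Virtualmin, per-server directives like these usually come from System Settings -> Server Templates -> Apache website, so that is the place to look for the AddHandler/FCGIWrapper lines:

        # a sketch of the desired generated block, with the fcgid handler removed
        <Directory /home/username/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
        </Directory>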

  • Why won't my service start, and why doesn't upstart output any errors?

    - by Alex Waters
    I am trying to 'start gunicorn' as a service via upstart as user ale. I'm using gunicorn/flask on Ubuntu 12.04 with upstart 1.5. Here is my /etc/init/gunicorn.conf:

        setuid btw
        setgid flask
        script
            export HOME=/home/btw
            export WORKON_HOME=$HOME/.virtualenvs
            . $HOME/.virtualenvs/default/bin/activate
            cd $HOME/flask
            workon default
            gunicorn -c gunicorn.py bw:app
        end script

    It doesn't output anything other than "gunicorn start/running, process 12992". If I then do 'status gunicorn' I get stop/waiting. Any ideas on how to debug this? I tried following http://upstart.ubuntu.com/wiki/Debugging but it didn't help. If I do the following as user ale in the app's directory:

        workon default
        gunicorn -c gunicorn.py bw:app

    then Gunicorn runs fine. Here is ~/flask/gunicorn.py:

        bind = "0.0.0.0:8080"
        workers = 3
        backlog = 2048
        worker_class = "gevent"
        debug = True
        daemon = False
        pidfile = "/tmp/gunicorn.pid"
        log_level = "debug"
        accesslog = "/var/log/gunicorn/access.log"
        errorlog = "/var/log/gunicorn/error.log"
        user = "btw"
        group = "flask"

    Also, /var/log/error.log doesn't show anything new when I try to start the Gunicorn service. If I start it manually, it shows that the workers have been loaded, etc. Thanks for any help / suggestions!
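
    Two hedged notes: upstart runs script stanzas with sh -e, and workon is defined by virtualenvwrapper.sh rather than by the activate script, so an unknown-command failure there would kill the job silently right after the "start/running" message. A sketch of how to make the job tell you what happened:

        # additions to /etc/init/gunicorn.conf -- a sketch
        console log    # send the job's stdout/stderr to /var/log/upstart/gunicorn.log
        script
            set -x     # trace every command into the same log
            # ... existing script body unchanged ...
        end script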

  • Starting nginx with systemctl fails, but running the command manually doesn't

    - by Ivan
    On Arch Linux, for some reason, when I try to start nginx with the command "systemctl start nginx", it fails, with this being the output of "systemctl status nginx":

        Loaded: loaded (/etc/systemd/system/nginx.service; enabled)
        Active: failed (Result: exit-code) since Wed 2013-10-30 16:22:17 EDT; 5s ago
        Process: 9835 ExecStop=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g pid /run/nginx.pid; -s quit (code=exited, status=126)
        Process: 3982 ExecStart=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=0/SUCCESS)
        Process: 10967 ExecStartPre=/usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=126)
        Main PID: 3984 (code=exited, status=0/SUCCESS)
        CGroup: /system.slice/nginx.service

    ...but when I run

        /usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -t -q -g "pid /run/nginx.pid; daemon on; master_process on;"

    and then

        /usr/bin/chroot --userspec=http:http /home/nginx /usr/bin/nginx -g "pid /run/nginx.pid; daemon on; master_process on;"

    as root, all it does is return a warning, but works just fine:

        nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1

    Why is it doing that?
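
    For what it's worth, exit status 126 conventionally means "found but not executable", and systemctl status doesn't show the failing command's stderr; the journal usually does. A quick sketch:

        # a sketch: pull the unit's captured output, where the real error lands
        journalctl -u nginx.service -e --no-pager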

  • Editing .bash_profile file not taking effect

    - by Sandeepan Nath
    I need to add export PATH=$PATH:/opt/lampp/bin to my ~/.bash_profile file so that mysql works from the command line on my system. Please check "mysql command line not working" for further details on that. I am working on a Fedora system and logged in as the root user. If I run locate .bash_profile then I get these:

        /etc/skel/.bash_profile
        /home/sam/.bash_profile
        /home/sohil/.bash_profile
        /home/windows/.bash_profile
        /root/.bash_profile

    So, I modified the /root/.bash_profile file from

        PATH=$PATH:$HOME/bin
        export PATH

    to

        PATH=$PATH:/opt/lampp/bin
        export PATH

    But the change is still not taking effect: opening a new console and running mysql again says "bash: mysql: command not found". However, running export PATH=$PATH:/opt/lampp/bin in the console makes it work for that session. So I am doing something wrong with the .bash_profile file: maybe I am editing the wrong one, or doing the edit incorrectly.
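
    One hedged explanation: ~/.bash_profile is read only by login shells, while a new terminal window typically starts an interactive non-login shell, which reads ~/.bashrc instead. Also note the edit above replaced $HOME/bin rather than appending to it. A sketch of the usual fix:

        # a sketch: put the PATH change where every interactive shell sees it
        echo 'export PATH=$PATH:$HOME/bin:/opt/lampp/bin' >> ~/.bashrc
        . ~/.bashrc    # take effect in the current session too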

  • Apache AliasMatch and DirectoryMatch not working?

    - by Alex
    I have the following config. Please notice the Alias and Directory equivalents: uncommented, they work as expected, but the dynamic/regex-based versions don't. Any ideas?

        <VirtualHost *:80>
            ServerName temp.dev.local
            ServerAlias temp.dev.local
            DocumentRoot "C:\wamp\www\temp\public"
            <Directory "C:\wamp\www\temp\public">
                AllowOverride all
                Order Allow,Deny
                Allow from all
            </Directory>

            # Alias /private/application/core/page/assets/images/ "C:/wamp/www/temp/private/application/core/page/assets/images/"
            # <Directory "C:/wamp/www/temp/private/application/core/page/assets/images/">

            AliasMatch ^/private/application/(.*)/(.*)/assets/images/ /private/application/$1/$2/assets/images/
            <DirectoryMatch "^/private/application/(.*)/(.*)/assets/images/">
                Options Indexes FollowSymlinks MultiViews Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </DirectoryMatch>
        </VirtualHost>
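
    One likely culprit, offered as a guess: Alias and AliasMatch map a URL onto a filesystem path, but the AliasMatch above maps the URL back onto a URL-style path rather than onto the C:/wamp/... location the commented-out Alias used; DirectoryMatch likewise matches filesystem paths, not URLs. A sketch of both directives retargeted at the filesystem (AliasMatch must capture the tail of the path explicitly):

        # a sketch: make the regexes target the filesystem, mirroring the working Alias
        AliasMatch "^/private/application/([^/]+)/([^/]+)/assets/images/(.*)$" "C:/wamp/www/temp/private/application/$1/$2/assets/images/$3"
        <DirectoryMatch "^C:/wamp/www/temp/private/application/[^/]+/[^/]+/assets/images/">
            Options Indexes FollowSymlinks MultiViews Includes
            AllowOverride None
            Order allow,deny
            Allow from all
        </DirectoryMatch>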

  • Virtualbox shared folder mount from fstab fails; works once bootup is complete

    - by Ben
    I've got Ubuntu 13.10 installed in VirtualBox 4.3. The host machine is Windows. I have a couple of VirtualBox shared folders being mounted by /etc/fstab. Until recently this setup worked just fine, but after upgrading from Ubuntu 13.04 and VirtualBox 4.2 (at essentially the same time) the fstab mounting stopped working. I get the following error during boot:

        An error occurred while mounting /home/benme/Documents.
        keys:Press S to skip mounting or M for manual recovery

    Pressing M for manual recovery and then trying to mount manually also fails:

        root@benme-vb:~# cd /home/benme
        root@benme-vb:/home/benme# mount Documents
        /sbin/mount.vboxsf: mounting failed with the error: No such device

    But if I instead skip mounting during boot, wait for Unity to start and then mount manually in a shell, everything works fine:

        benme-vb ~ % ls Documents
        benme-vb ~ % sudo mount Documents
        [sudo] password for benme:
        benme-vb ~ % ls Documents
        # actual file list omitted

    Note that when I mount manually I'm letting mount take all the options from /etc/fstab, and it works. This suggests to me that it's some sort of timing issue, where VirtualBox isn't "ready" to provide the shared-folder mounts at the point the /etc/fstab mounts are run during bootup. Here's the fstab line, just for completeness:

        Documents /home/benme/Documents vboxsf uid=benme,gid=benme,dmode=774,fmode=664 0 0

    Is there something I can do about this from the Ubuntu side? Or does anyone happen to know more about this from the VirtualBox angle? I've found an old report on the VirtualBox bug-tracker with identical symptoms, but in that case the user had updated VirtualBox without updating their guest additions, and resolving that fixed the problem; that isn't happening here, I've definitely got the 4.3 guest additions installed.
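
    A hedged workaround, assuming the "No such device" error means the vboxsf kernel module from the guest additions isn't loaded yet when the boot-time mounts run: list the module in /etc/modules so it loads before local filesystems are mounted, or mark the mount noauto and mount it from a late boot script instead.

        # /etc/modules -- a sketch: load the shared-folder driver early in boot
        vboxsf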

  • wsgi - narrow user permissions.

    - by Tomasz Wysocki
    I have the following Apache configuration and my application is working fine:

        <VirtualHost *:80>
            ServerName ig-test.example.com
            WSGIScriptAlias / /home/ig-test/src/repository/django.wsgi
            WSGIDaemonProcess ig-test user=ig-test
        </VirtualHost>

    But I want to protect my files from other users, so I do:

        chown ig-test /home/ig-test/ -R
        chmod og-rwx /home/ig-test/ -R

    And the application stops working:

        (13)Permission denied: /home/ig-test/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Is it possible to achieve what I'm doing with wsgi? If I have to give read permissions to some files, that's fine. But there are files I have to protect (like the file with the DB configuration, or the business logic of the application).
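
    A hedged reading of the error: the WSGI daemon runs as ig-test, but Apache's own workers (commonly www-data or apache, depending on distro; an assumption here) still traverse the path and stat .htaccess files, so stripping all group/other bits locks them out. A sketch that restores traversal without exposing file contents:

        # a sketch, assuming Apache runs as www-data: give the server
        # traverse-only rights (execute without read) along the script's path
        chgrp www-data /home/ig-test /home/ig-test/src /home/ig-test/src/repository
        chmod 710     /home/ig-test /home/ig-test/src /home/ig-test/src/repository

    Since the app itself runs in the WSGIDaemonProcess as ig-test, the code and DB configuration stay readable only to ig-test; an AllowOverride None block for the path should also stop Apache looking for .htaccess files there at all.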

  • How to fix boot and mount failed drops to initramfs prompt in Ubuntu 12.04?

    - by msPeachy
    My Ubuntu partition does not boot. This started after a power interruption during system boot. The next time I booted, I encountered the following error message:

        mount: mounting /dev/disk/by-uuid/3f7f5cd9d-6ea3-4da7-b5ec-**** on /root failed: Invalid argument
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target file system doesn't have /sbin/init.
        No init found. Try passing init= bootarg.

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) _

    I've searched for similar posts here, and the most common recommendation is to boot to the Ubuntu LiveCD. That's another problem, because I cannot boot to a live USB either; this is the error message I get when booting to a live USB:

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) mount: mounting /dev/sda2 on /isodevice failed: Invalid argument
        Could not find the ISO /ubuntu-12.04-desktop-i386.iso.
        This could also happen if the file system is not clean because of an operating system crash, an interrupted boot process, an improper shutdown, or unplugging of a removable device without first unmounting or ejecting it. To fix this, simply reboot into Windows, let it fully start, log in, run 'chkdsk /r', then gracefully shut down and reboot back into Windows. After this you should be able to reboot again and resume the installation.

    I cannot boot into Windows because I don't have a Windows partition. Do I have to install Windows to fix this problem? Is there a way to fix this from the (initramfs) prompt? Please help. Thank you!
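
    Since everything points at an unclean filesystem after the power cut, one hedged avenue is running fsck from the initramfs shell itself, assuming the initramfs includes the fsck tools (Ubuntu's usually does for the root filesystem type). A sketch, with the device name being hypothetical:

        # a sketch from the (initramfs) prompt -- /dev/sda1 stands in for the
        # real root partition; read the actual device off the blkid output
        blkid                    # list partitions and their UUIDs
        fsck.ext4 -y /dev/sda1   # repair the root filesystem, assuming ext4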

  • Linux And NTFS Permissions

    - by VGE IT
    I'm trying to restrict a folder within a directory created on a Linux filesystem. I have changed the permissions to: root rwx, a special Active Directory group rwx, and all others r. Even so, people that are not in the special AD group can access the directory and modify files, and when a user modifies documents within the directory, the files' group changes to "Domain Users". I then have to manually change the documents' group back to my AD group. I have also tried creating another AD group and modifying permissions to deny write access. When doing so through Windows Explorer, the settings seem to take effect until I go back in and look at the permissions for the restricted group: no permissions show when I view them the second time. Please assist. Samba share properties:

        [MyShare]
            comment = "blah blah blah"
            browseable = yes
            guest ok = no
            read only = no
            path = /xxx/xxxxx/
            create mask = 0640
            directory mask = 0750
            admin users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
            valid users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
            nt acl support = Yes
            inherit acls = yes
            inherit owner = yes
            inherit permissions = yes
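
    One hedged suggestion: new files taking the creating user's primary group ("Domain Users") is standard Unix behavior; Samba can pin the group per share with force group, sketched here with the share and group names from the question:

        [MyShare]
            # a sketch: files created through the share keep the intended group
            force group = "domain\group A"

    On the filesystem side, setting the setgid bit on the directory (chmod g+s) has a similar effect for files created locally.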

  • Is there any multiple-desktop software for Vista? (similar to what is included in GNOME and KDE)

    - by Macha
    In GNOME and KDE (and from what I hear, OS X as well), you have a feature called multiple desktops. It allows you to have multiple screens of apps running without having the same number of monitors. Does such a program exist for Vista? I'd like around four desktops (one for IM, one for programming, one for browsing/email and one for other apps). I usually have one monitor when using my laptop outside of home, or two when I'm using it at home.

  • File access with hostname or ip only - no domain?

    - by Jonathon
    It seems likely that this is an obvious question, but I'm having trouble tracking down any useful information. Normally, when accessing files in a particular directory on a server, I'm able to create a virtual host, assign a domain, a root directory location, etc. However, I am in a situation where I have server space and need to access files with only a hostname. Is this possible? For example, let's say the hostname is 123hostname.com, and the file I want access to is in /home/sub-directory/filename.php. How do I get at it via a browser? I've tried:

        http://123hostname.com/home/sub-directory/filename.php

    ...and some other variations on that theme (that I can't post because new users are restricted to one link in messages). But I'm generally stuck. Any help -- even if it's just to let me know that this isn't possible without some additional configuration -- would be great. Thank you!
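
    For what it's worth, URLs map into the server's document root rather than the filesystem root, so /home/... usually isn't reachable unless it's explicitly mapped. A sketch of one such mapping, assuming Apache and access to the server config (the /files prefix is made up for the example):

        # a sketch: expose /home/sub-directory under the URL prefix /files
        Alias /files /home/sub-directory
        <Directory /home/sub-directory>
            Require all granted     # Apache 2.4; use Order/Allow on 2.2
        </Directory>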

  • SSH Port Forward 22

    - by j1199dm
    I'm trying to set up the following: at work, I want to create a local port that will forward to port 22 on my home server:

        ssh -L 56879:home:22 username@home -p 443

    Right now I'm testing this on my two machines at home: my Ubuntu server and my iMac.

        iMac:   192.168.1.104
        ubuntu: 192.168.1.103

    On the iMac:

        ssh -p 443 -L 56879:192.168.1.103:22 [email protected]

    In ~/.ssh/config on my iMac I have the port set to 56879. So when I do git pull remoteserver:/path/to/repo.git on my iMac, git will use the ssh client on my iMac with port 56879, since that's set up in the config, which should forward to 22 on my ubuntu machine. I keep getting connection refused. Any ideas?
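
    A hedged observation: the -L tunnel only listens on the iMac's loopback interface, so the ssh config entry has to point the host at localhost as well as the port; if it still resolves the remote name to its real address, the connection bypasses the tunnel entirely. A sketch of the config entry (the Host alias is hypothetical):

        # ~/.ssh/config -- a sketch; "remoteserver" is whatever alias git uses
        Host remoteserver
            HostName localhost    # enter through the local end of the tunnel
            Port 56879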

  • Apache ScriptAlias and access error?

    - by Parhs
    First of all, after much pain I figured out how to make this work with Apache 2.4 on Windows. Here is my configuration, which seems to work successfully for git clone and push and everything.

    Problem: my configuration works, and there is a "Require all denied" at the / directory, because I want only git functionality and nothing else. An example request from a git client:

        192.168.100.252 - - [07/Oct/2012:04:44:51 +0300] "GET /git/simple/info/refs?service=git-upload-pack HTTP/1.1" 200 264

    The error caused by that request:

        [Sun Oct 07 04:44:51.903334 2012] [authz_core:error] [pid 6988:tid 956] [client 192.168.100.252:13493] AH01630: client denied by server configuration: C:/git-server/web/simple

    There isn't any error at the git client; everything works fine, but I get this in the error log. Is there any solution so this error does not appear? I worry about log size.

        <VirtualHost *:80>
            DocumentRoot "C:\git-server\web"
            ServerName git.****censored****
            DirectoryIndex index.php
            SetEnv GIT_PROJECT_ROOT c:/git-server/repositories
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            ScriptAlias /git/ "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe/"
            <LocationMatch "^/.*/git-receive-pack$">
                Options +ExecCGI
                AuthType Basic
                AuthName intranet
                AuthUserFile "C:/git-server/config/users"
                Require valid-user
            </LocationMatch>
            <Directory />
                Options All
                Require all denied
            </Directory>
            <Directory "C:\Program Files (x86)\Git\libexec\git-core">
                Options +ExecCGI
                Options All
                Require all granted
            </Directory>
        </VirtualHost>
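
    If the denied-by-default log line is merely cosmetic (the git exchange itself returns 200), one hedged option on Apache 2.4 is lowering the log level for just the authorization module, so routine denials stop filling the log:

        # a sketch: per-module log level, an Apache 2.4 feature
        LogLevel warn authz_core:crit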

  • suPHP: how to disable ls /

    - by Pol Hallen
    Using suPHP, I set a separate php.ini for every virtual host. In each php.ini I also set:

        open_basedir = /home/site1

    PHP scripts run, but if I have a script that does ls /, I can see the whole root directory. How can I close this security hole?

        <VirtualHost *:80>
            ServerName site1
            ServerAlias www.site1.com
            DirectoryIndex index.html index.htm
            DocumentRoot /home/site1/
            suPHP_Engine on
            AddHandler x-httpd-php .php .php3 .php4 .php5
            suPHP_AddHandler x-httpd-php
            # THIS READS php.ini
            suPHP_ConfigPath /home/site1/
            <Directory /home/site1/>
                Options -Includes -Indexes -FollowSymLinks -ExecCGI -MultiViews
                AllowOverride none
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
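
    A note and a sketch: open_basedir constrains only PHP's own file functions; a script that shells out (exec, system, backticks) runs /bin/ls with the web user's ordinary filesystem access. The usual companion is disabling the exec family in the same per-vhost php.ini:

        ; a sketch for /home/site1/php.ini: block shell escapes alongside open_basedir
        disable_functions = exec,passthru,shell_exec,system,proc_open,popen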

  • Using find and tar with files with special characters in the name

    - by Costi
    I want to archive all .ctl files in a folder, recursively.

        tar -cf ctlfiles.tar `find /home/db -name "*.ctl" -print`

    The error message:

        tar: Removing leading `/' from member names
        tar: /home/db/dunn/j: Cannot stat: No such file or directory
        tar: 74.ctl: Cannot stat: No such file or directory

    I have these files: /home/db/dunn/j 74.ctl and j 75. Notice the extra space. What if the files have other special characters? How do I archive these files recursively?
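
    Word splitting on the backtick output is what breaks the space-containing names. A sketch of the standard NUL-delimited pipeline, assuming GNU find and tar:

        # a sketch: NUL-separate the names so spaces and newlines survive intact
        find /home/db -name '*.ctl' -print0 | tar --null -T - -cf ctlfiles.tar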

  • Apache 2.2.14: SSLCARevocation location

    - by Doc
    I am installing a .crl in my Apache config. It looks like this:

        <VirtualHost default>
            DocumentRoot "web"
            ServerName example.com
            SSLEngine on
            SSLCertificateFile "cert.crt"
            SSLCertificateKeyFile "key.key"
            SSLCertificateChainFile "cert.ca-bundle"
            SSLProtocol -all +SSLv3
            SSLCipherSuite SSLv3:+HIGH:+MEDIUM
            <Directory>
                Order deny,allow
                Allow from all
                SSLCACertificateFile "ClientRootCert.crt"
                SSLVerifyClient require
                SSLVerifyDepth 3
                SSLCARevocationFile "CRLList.crl"
            </Directory>
        </VirtualHost>

    When Apache is started, I get the error:

        SSLCARevocationFile not allowed here

    When I place SSLCARevocationFile above the Directory tag, Apache starts, but all client certs are rejected with the message ssl_error_expired_cert_alert (both revoked and active certs). How to solve this?
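
    Two hedged notes: in Apache 2.2, SSLCARevocationFile is only valid in server config or virtual-host context, which matches the "not allowed here" error when it sits inside a Directory block; and once a CRL is active, an out-of-date CRL (past its Next Update) can make mod_ssl reject every client certificate, which would look exactly like the blanket ssl_error_expired_cert_alert. A quick check:

        # a sketch: confirm the CRL itself is still within its validity window
        openssl crl -in CRLList.crl -noout -lastupdate -nextupdate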

  • debian/ubuntu locales and language settings

    - by AndreasT
    This self-answered question solves the following issues:

        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory

    and some other locale related problems.
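
    For reference, a sketch of the usual Debian/Ubuntu remedy, assuming en_US.UTF-8 is the locale wanted:

        # a sketch: generate the missing locale and make it the system default
        sudo locale-gen en_US.UTF-8
        sudo update-locale LANG=en_US.UTF-8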

  • Warning messages while build Apache server

    - by GoinOff
    I am building Apache server 2.4.6 from source and am not sure about a few warning messages I received during the RPM build process. The build completes OK and everything seems fine. BTW, this is on CentOS 5.5. During the make process:

        /home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr/libtool --silent --mode=install install mod_authn_file.la /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/modules/
        libtool: install: warning: remember to run `libtool --finish /usr/local/apache2/modules'

    What is this warning message about, "remember to run libtool --finish"? Also, I see this:

        libtool: install: warning: `/home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr-util/libaprutil-1.la' has not been installed in `/usr/local/apache2/lib'

    I am building Apache in a temp directory, but libtool seems to be looking in the wrong place (/usr/local/apache2/lib instead of /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/lib). This seems like something I can blow off? In my spec file I set DESTDIR to /home/johnm/dev/project1/install/linux/tmp, where the install files are placed:

        %install
        export DESTDIR=%{buildroot}
        make install

    Both messages appear numerous times during the make process. When I install the RPM on the system, everything appears to work without problems. Thinking I can ignore these messages, or am I missing something important?
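
    A hedged note: both warnings are normal for a staged (DESTDIR) install, since libtool compares against the final prefix rather than the staging tree. What --finish effectively does is refresh the dynamic linker's view of the library directory, so the common pattern is doing that on the target system from the spec's scriptlets:

        # a sketch for the spec file: refresh the linker cache once the RPM
        # lands the libraries at their real path
        %post -p /sbin/ldconfig
        %postun -p /sbin/ldconfig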

  • Symbolic Links Between User Accounts

    - by Pez Cuckow
    I have been using a cron job to duplicate a folder into another user's account every day, and someone suggested using symbolic links instead, although I cannot get them to work. In summary, user GAMER generates log files that they want to access via HTTP. However, I only have a web server in the user account SERVER, so in the past I would copy the logs folder from GAMER's account into SERVER/public_html/ and then chmod the files so the server could access them. Trying to use symbolic links instead, I set up a link as root (as only root can access both accounts):

        ln -s /home/GAMER/game/logs/ /home/SERVER/public_html/logs

    However, it seems that only root can use this link. I tried chmodding the link, and all the files in the gamer's /game/logs/ as well as /game/logs itself to 777, and changing chown and chgrp to SERVER; the files still cannot be read. When viewed from SERVER's account, my shell shows the link and its target highlighted in black with red text. Am I doing something wrong? Please enlighten me!

        /home/GAMER/game/ (chmod & chgrp)
        drwxrwxrwx 3 SERVER SERVER 4096 2011-01-07 15:46 logs

        /home/SERVER/public_html (chmod -h & chgrp -h)
        lrwxrwxrwx 1 server server 41 2011-01-07 19:53 logs -> /home/GAMER/game/logs/
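
    One hedged pointer: black-on-red in ls usually marks a broken or unreachable symlink, and permissions on the link itself are ignored; what matters is that SERVER can execute (traverse) every directory from /home/GAMER down to the target. A sketch:

        # a sketch: grant traverse rights along the whole path, not just the leaf
        chmod o+x /home/GAMER                  # execute-only lets users pass through
        chmod o+x /home/GAMER/game
        chmod -R o+rX /home/GAMER/game/logs    # the logs themselves need read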

  • SSH into Ubuntu Linux on a box without a static IP address

    - by Steven Xu
    Basically, how do I do it? I'd like to connect to my home computer from work, but my internet is routed through my apartment building's network, so I don't have the static IP address I'm accustomed to having. How do I go about accessing my home computer through SSH (I'll be using PuTTY at work, if it matters) if my home computer doesn't have a static IP address?
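
    A hedged sketch of the two usual approaches: if the building's router can be made to forward a port, a dynamic-DNS name solves the changing-address problem; if not, a reverse SSH tunnel through any always-reachable machine you control does the job ("relay.example.com" below is hypothetical):

        # run on the home machine: keep a reverse tunnel back to its own sshd
        ssh -N -R 2222:localhost:22 user@relay.example.com

        # then, from work (PuTTY or otherwise): connect to the relay's port 2222
        ssh -p 2222 user@relay.example.com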

  • Local file system not properly unmounted during shutdown

    - by bernhard
    I have a file system spread over two HDDs, with several partitions mounted separately: /root, /home, /usr, /var, /local/share, /home/bernhard/fotos/bilder, and /backup are on separate partitions, all ext3. During shutdown the message "unmounting local file system" no longer appears, and when booting, all partitions but the root partition have to reload the journal, which indicates improper unmounting. The root partition and /usr are on sda, the others on sdb or further USB-mounted devices. The only partition unmounted without problems seems to be the root partition on sda4. I wonder whether the script to unmount all devices has a "wait for success" loop, or whether the script itself got corrupted. Yesterday I upgraded to 11.04, and the error persists. pmount does not look appropriate, since the devices are not hotplugged but simply mounted during system start. Obviously, mounting /usr and afterwards /usr/local/share, as well as /home and later /home/bernhard/fotos/bilder, presents problems for umount; the devices may be busy and thus not properly unmounted. Does anybody have an idea for a script to organize unmounting in an ordered way? How would one wait for the unmounting of the secondary mount? And where would such a script be placed so it is used instead of the original umount command? A general solution would be welcome.
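
    As a sketch of the ordered-unmount idea: unmounting the deepest paths first avoids the parent-is-busy problem. The mount points below are taken from the question, and the ordering is the whole trick:

        # a sketch: deepest mount points first, so parents are no longer busy
        for m in /home/bernhard/fotos/bilder /usr/local/share /home /backup /var /usr; do
            umount "$m"
        done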

  • How to diagnose remote assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago, however, this stopped working, and I don't know why. We're both running the latest versions of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work; MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this?

    More info: I just tried direct Remote Desktop between the work PC and home PC and it works fine, so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely: with Remote Desktop the person at the other end gets booted off and can't see, while with Remote Assistance they can follow along step by step.

    Experimenting with this some more: the notebook that I was using at work today, which refused to connect, works fine for Remote Assistance when I bring it home. So I guess this must be a problem with our network configuration at work. I've checked that 3389 is open on the firewall on the office router, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if the client and server are both behind non-UPnP/NAT routers, but if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled, but my home one does. I've also scoured the event logs on both ends; nothing noteworthy (unless I'm looking in the wrong spot).

    Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA; it's just bugging me. The question is only about Remote Assistance, no need to propose solutions based on other programs. [/edit by Gnoupi]

  • OpenLDAP RHEL 6

    - by AndyM
    Hi all. I've been configuring OpenLDAP on RHEL 6, and it seems you have to run the following to rebuild the config dirs. I'm OK with that, but my issue is: say I want to change the server password. Do I have to go through the whole process every time I change the config? Is there a way of changing the slapd config after it's been built using the RHEL 6 method? Below is the advice I've found on the net, from http://www.linuxtopia.org/online_books/rhel6/rhel_6_migration_guide/rhel_6_migration_ch07s03.html

    This example assumes that the file to convert from the old slapd configuration is located at /etc/openldap/slapd.conf and the new directory for OpenLDAP configuration is located at /etc/openldap/slapd.d/.

    Remove the contents of the new /etc/openldap/slapd.d/ directory:

        rm -rf /etc/openldap/slapd.d/*

    Run slaptest to check the validity of the configuration file and specify the new configuration directory:

        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d

    Configure permissions on the new directory:

        chown -R ldap:ldap /etc/openldap/slapd.d
        chmod -R 000 /etc/openldap/slapd.d
        chmod -R u+rwX /etc/openldap/slapd.d
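
    One hedged answer to the underlying question: once slapd.d exists, settings can be changed online through the cn=config backend instead of rebuilding. The database DN below ({2}bdb is a common RHEL 6 default) is an assumption, so query it first:

        # a sketch: find which config database holds the rootdn, then edit it
        # online via the cn=config backend (no rebuild needed)
        ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config "(olcRootDN=*)" dn
        ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
        dn: olcDatabase={2}bdb,cn=config
        changetype: modify
        replace: olcRootPW
        olcRootPW: $(slappasswd -s NewPassword)
        EOF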

  • Am I misunderstanding chown and chmod?

    - by isomorphismes
    I want to either extend the size of my guest partition or figure out how to copy stuff from the guest partition to my normal /home directory. (Because of some other problems I can only run Xorg as guest, but I can log into a virtual console as myself or root.) Here's the motivation: I want to torrent a large file, and it's larger than my guest filesystem. But I have plenty of space on my real drive; I just can't log into it graphically. So I tried to set up a "pipe" to get the file out of the tmpfs. I did:

        su -u myself              # catch
        mkdir ~/receiver_dir
        sudo su
        cd /tmp/guest-lkj567UIO/  # throw
        ln -s mario_pipe /home/myself/receiver_dir
        chown -R guest-lkj567UIO /home/myself/receiver_dir
        chown -R guest-lkj567UIO /tmp/guest-lkj567UIO/mario_pipe
        chmod -R a+rw /home/myself/receiver_dir
        chmod -R a+rw /tmp/guest-lkj567UIO/mario_pipe
        su -u guest-lkj567UIO
        cd /tmp/guest-lkj567UIO
        cd mario_pipe
        touch something           # success!

    However, when I try to torrent to /tmp/guest-lkj567UIO/mario_pipe, Transmission says I don't have write permissions. But it looks like I just wrote there? And that everybody (a+rw) can write there, in fact? Maybe this indicates I don't actually understand chown and chmod, but nothing from their man pages pops out.
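
    Two hedged observations: first, ln -s takes TARGET then LINK_NAME, so the command above actually created a link named mario_pipe inside receiver_dir pointing at a nonexistent relative "mario_pipe", rather than the reverse; second, Ubuntu guest sessions are confined by AppArmor, which can deny writes outside the session's own area regardless of chmod, and would explain Transmission's refusal. A sketch of the intended link:

        # a sketch: target first, link name second
        ln -s /home/myself/receiver_dir /tmp/guest-lkj567UIO/mario_pipe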

  • how to connect with ssh when it's hanging

    - by Eduardo Bezerra
    I'm trying to ssh into a remote machine, but it hangs when looking for an identity file:

        [username@local .ssh]$ ssh -v remote uname
        OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
        debug1: Reading configuration data /home/username/.ssh/config
        debug1: Applying options for *
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to remote [192.168.3.36] port 22.
        debug1: Connection established.
        debug1: identity file /home/username/.ssh/identity type -1
        debug1: identity file /home/username/.ssh/id_rsa type -1
        debug1: identity file /home/username/.ssh/id_dsa type -1

    I can ping the machine normally, and it's obviously up, with the sshd service running... I just don't know how to log into it. In fact, I'd just like to reboot it; that'd be fine. Thing is, it's across the ocean (I'm in the US, and the machine is in Europe). I'd run some hundreds of Java threads at the same time, and apparently that was too much for the host. How can I get back in?
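
    A hedged debugging step: after the identity-file lines the next debug output would normally be the server's version banner, so a stall here usually means sshd accepted the TCP connection but is too starved to respond. Extra verbosity at least confirms where it stops:

        # a sketch: maximum verbosity, no interactive prompts, bounded wait
        ssh -vvv -o ConnectTimeout=10 -o BatchMode=yes remote uname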
