Search Results

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box. The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin accessible only from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users directive, with myself and admin as members of the ssh-users group. What I want is something that works the way you might expect this setup to work (but it doesn't):

        $ groups jonathan
        ... ssh-users
        $ groups admin
        ... ssh-restricted-users
        $ cat /etc/ssh/sshd_config
        ...
        AllowGroups ssh-users ssh-restricted-users@192.168.1.*
        ...

    Is there a way to do this? I have also tried the following, but it did not work (admin could still log in remotely):

        AllowUsers admin@192.168.1.* *
        AllowGroups ssh-users

    with admin a member of ssh-users. I would also be fine with only allowing admin to log in with a key and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but not one for ordinary users.
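
    A sketch of a fix built from the question's own setup (jonathan and the 192.168.1 range are taken from the question): AllowUsers accepts user@host patterns, and the bare "*" in the attempt above is what lets admin in from anywhere. Listing each permitted user explicitly, with a source pattern only on admin, behaves as intended:

        # /etc/ssh/sshd_config -- replaces the AllowGroups line
        AllowUsers jonathan admin@192.168.1.*

        # Key-only login for admin (Match blocks are supported by the
        # OpenSSH version shipped with Kubuntu 10.04):
        Match User admin
            PasswordAuthentication no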

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative

    - by Gabriel L. Oliveira
    I have a global menu (including the Applications, Administration and System tabs) that takes too long (for me) to load: about 2.5 seconds. This delay occurs only on the first opening; after it has loaded, subsequent openings take less than 0.2 milliseconds. The menu used to take even longer (about 5 seconds), and I found that was because of the 'Other' section of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all). I have a "normal" knowledge of programming, and I think the first opening of the menu runs some kind of "cache function" that works out which installed applications should be shown in the menu. But I couldn't find this function, so I couldn't analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting Alt-F2 fires this "cache function" too, because after waiting for that to load, opening the menu took less than 0.2 milliseconds. So, could anyone help me reduce this time? I read on the internet that some users could reduce the time by resizing the application icons, but I found that most of my icons are already 25x25. Any other ideas? Maybe loading it in a separate process, or running it at startup... I don't know. PS: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!
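
    Not from the question, but a quick way to gauge how much work that first load is doing: menu entries come from .desktop files in the standard XDG application directories, so counting them gives a rough size of the scan (the removed Wine entries would have lived here too):

        # Count the .desktop entries the menu has to parse
        ls ~/.local/share/applications/*.desktop \
           /usr/share/applications/*.desktop 2>/dev/null | wc -l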

    Read the article

  • Symbolic Links Between User Accounts

    - by Pez Cuckow
    I have been using a cron job to copy a folder into another user's account every day, and someone suggested using symbolic links instead, although I cannot get them to work. In summary, user GAMER generates log files that they want to access via HTTP; however, I only have a web server in the user account SERVER. In the past I would copy the logs folder from GAMER's account into SERVER/public_html/ and then chmod the files so the server could access them. Trying to use symbolic links instead, I set up a link as root (as only root can access both accounts):

        ln -s /home/GAMER/game/logs/ /home/SERVER/public_html/logs

    However, it seems that only root can use this link. I tried chmodding the link and all the files in GAMER's /game/logs/* (and /game/logs itself) to 777, as well as changing the owner and group to server, but the files still cannot be read. When viewed from SERVER's account, my shell shows the link and its target highlighted in black with red text. Am I doing something wrong? Please enlighten me!

        /home/GAMER/game/ (chmod & chgrp)
        drwxrwxrwx 3 SERVER SERVER 4096 2011-01-07 15:46 logs

        /home/SERVER/public_html (chmod -h & chgrp -h)
        lrwxrwxrwx 1 server server 41 2011-01-07 19:53 logs -> /home/GAMER/game/logs/
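
    A detail worth checking (an assumption, not stated in the question): to follow the link, the server account needs execute (search) permission on every directory along the target path, including /home/GAMER itself, which is often 700 or 750. A quick way to spot the blocking component:

        # Show the permissions of every path component (namei is in util-linux)
        namei -l /home/GAMER/game/logs
        # If /home/GAMER is the blocker, granting search permission may be enough:
        chmod o+x /home/GAMER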

    Read the article

  • Linux command-line ftp to an FTP server running on Windows

    - by Vass
    Hi, I am running FileZilla Server on Windows Vista. I have set it up to be accessed from outside IPs, and when I connect to that IP with the FileZilla client it works normally. On the same machine I have Ubuntu running in VirtualBox, and the FileZilla client works fine from there too. Now I want to try the command line. So I run ftp xxx.xxx.xx.xx, enter the name and password, and I get the ftp command prompt, but the commands do not work properly: "ls" and "cd" do nothing useful, and the client tells me the current directory is "/" (root), which does not make sense on Windows. The FileZilla client takes the user directly to the root folder of the file space granted to that user. How can the same be done from the command prompt, if there is a way? It is as if the command prompt takes me to a root that does not exist, or that I do not have permission to move around in. Is there any way to be taken to the correct directory directly, or to move there, especially when the slashes are the wrong way around, etc.? Best,
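
    One thing worth trying (an assumption, not from the question): the stock command-line ftp client defaults to active mode, where the server opens a connection back to the client for every directory listing; behind NAT or a firewall that back-connection fails, so login succeeds but "ls" hangs or errors. Passive mode usually fixes it:

        ftp xxx.xxx.xx.xx
        # after logging in:
        ftp> passive        # toggle passive mode (client-initiated data connections)
        ftp> ls
        ftp> cd some/dir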

    Read the article

  • Linux shutdown hang with wifi CIFS mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a cifs mount via puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection, it shuts down just fine, but if it's on the wifi at the time (and has no wired connection) it'll hang at the Fedora logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying, so is there a way to make the machine shut down even if network connectivity has been lost at unmount time -- or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom shutdown script, which is a bit of a kludge)? It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue that I'd assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet. Additional info: I can confirm that manually unmounting the shares and then shutting down (sudo shutdown or the Xfce shutdown button) works fine; it only hangs if the shares are still mounted. The puppet config that sets up the mount looks like this (now with the _netdev option, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
            ensure => directory,
        }
        mount { "/mnt/share":
            atboot   => true,
            ensure   => mounted,
            remounts => false,
            fstype   => cifs,
            device   => "//srv/share",
            options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
            require  => [ File["/mnt/share"], Group["shareusers"] ],
        }
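
    A workaround sketch (an assumption, not from the question): Fedora 15 switched to systemd, so one option is a small unit whose stop action unmounts the CIFS shares. Because it is ordered After=network.target, systemd stops it before taking the network down at shutdown:

        # /etc/systemd/system/umount-cifs.service (hypothetical unit name)
        [Unit]
        Description=Unmount CIFS shares while the network is still up
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/true
        # Lazy-unmount all CIFS filesystems on stop
        ExecStop=/bin/umount -a -t cifs -l

        [Install]
        WantedBy=multi-user.target

    The unit would then be enabled once with systemctl enable umount-cifs.service.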

    Read the article

  • How to add a user to the wheel group?

    - by Natasha Thapa
    I am trying to add a user to the wheel group on an Ubuntu server:

        sudo usermod -aG wheel john

    I get:

        usermod: group 'wheel' does not exist

    My /etc/sudoers has this:

        # sudoers file.
        #
        # This file MUST be edited with the 'visudo' command as root.
        #
        # See the sudoers man page for the details on how to write a sudoers file.
        #

        # Host alias specification

        # User alias specification

        # Cmnd alias specification

        # Defaults specification

        # User privilege specification
        root    ALL=(ALL) ALL
        %root   ALL=(ALL) NOPASSWD: ALL

        %wheel  ALL=(ALL) NOPASSWD: ALL

    Do I have to do a groupadd for wheel?
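
    A note and sketch (not from the question): Ubuntu has no wheel group by default; its sudoers convention uses the sudo group (or, on older releases, admin) instead. Either create wheel to match the existing sudoers line, or use Ubuntu's own group:

        # Option 1: create the wheel group referenced by sudoers, then add the user
        sudo groupadd wheel
        sudo usermod -aG wheel john

        # Option 2: use Ubuntu's built-in sudo group instead
        sudo usermod -aG sudo john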

    Read the article

  • How to know who accessed a file, or whether a file has an 'access' monitor, in Linux

    - by J L
    I'm a noob and have some questions about seeing who accessed a file. I found there are ways to tell whether a file was accessed (not modified/changed) through the audit subsystem and inotify. From what I have read online, according to http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html, to watch/monitor a file I have to set a watch with a command like:

        # auditctl -w /etc/passwd -p war -k password-file

    So if I create a new file or directory, do I have to use an audit/inotify command to set a watch first in order to see who accessed the new file? Also, is there a way to know whether a directory is being watched through the audit subsystem or inotify? And how/where can I check the access log for a file? Edit: from further googling, I found this page, http://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html, which says: "The inotify API provides no information about the user or process that triggered the inotify event." So I guess this means I can't figure out which user accessed a file with inotify? Is the audit subsystem the only way to figure out who accessed a file?
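
    For the last two questions, a sketch of the audit-side workflow (standard auditd tools, run as root; the path and key are examples):

        # Watch a file for reads (r); tag events with a searchable key
        auditctl -w /path/to/file -p r -k file-access
        # List the watches currently in effect
        auditctl -l
        # Search the audit log (/var/log/audit/audit.log) by that key;
        # the records include the uid of the accessing process
        ausearch -k file-access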

    Read the article

  • Heartbeat (Linux HA) and NetApp?

    - by Drew
    Does anyone have any experience setting up a high-availability two-node Linux cluster using Heartbeat (linux-ha.org) and NetApp storage (preferably using SnapDrive for Linux)? Basically, I would like to mount the same NetApp LUN over Fibre Channel on two servers in active/passive mode (only one server can access the LUN at a time). Thanks!
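
    A minimal sketch of the classic Heartbeat v1 configuration for this shape of cluster (hostname, floating IP, device path, and filesystem are hypothetical): the haresources file names the preferred node, a floating IP, and a Filesystem resource so the shared LUN is only mounted on the active node:

        # /etc/ha.d/haresources (identical on both nodes)
        node1 IPaddr::192.168.1.100 Filesystem::/dev/mapper/netapp-lun::/data::ext3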

    Read the article

  • FFmpeg install on EC2 - Amazon Linux

    - by Oliver Holmberg
    Hello Serverfault friends, I am about two days into attempting to install FFmpeg with its dependencies on an AWS EC2 instance running the Amazon Linux AMI. I've installed FFmpeg on Ubuntu and Fedora systems with no problems in the past, and have read reportedly successful instructions for installing on Red Hat/Fedora. I have followed a number of tutorials and forum articles, but have had no luck yet. As far as I can tell, the main problems are as follows. First, the Amazon Linux yum repositories (Amazon Linux is most similar to Red Hat/CentOS) don't have ffmpeg available. I have found instructions for adding repositories that carry the required packages, but adding these repositories causes yum to fail when updating packages. (Also, I've read some cautionary tales about adding Red Hat/CentOS repositories to Amazon Linux that lead me to believe it may be a bad idea: https://forums.aws.amazon.com/thread.jspa?messageID=229166.) Second, I have tried the more complicated method of downloading the source tarball, compiling, and installing, but this always fails due to missing dependencies and other errors. On to my question: has anyone successfully installed FFmpeg on Amazon Linux? Is there a fundamental incompatibility? If anyone could share specific instructions for installing ffmpeg on Amazon Linux I would be greatly appreciative. Any other insights/experiences would also be appreciated. Thanks in advance, Oliver
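
    A source-build sketch for reference (the release version and flags are illustrative assumptions; remaining missing dependencies would be resolved one yum package at a time):

        # Build prerequisites
        sudo yum groupinstall "Development Tools"
        sudo yum install yasm        # assembler that ffmpeg's configure looks for

        # Fetch and build an ffmpeg release (version is an example)
        wget http://ffmpeg.org/releases/ffmpeg-0.8.tar.gz
        tar xzf ffmpeg-0.8.tar.gz && cd ffmpeg-0.8
        ./configure --prefix=/usr/local
        make && sudo make install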

    Read the article

  • Backup Exec 12.5 not following symlinks on Linux agent

    - by Peter Carrero
    OK, we are at a loss here trying to back up a Linux box to a Backup Exec server. We have a Backup Exec 12.5 server and a "Backup Exec for Windows Servers linux agent" (sigh) running on one of our Linux boxes. When a backup runs, we get exceptions reported for our symbolic links; it says something like:

        BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found.

    Looking at the selection list, the symlink shows as a 1k file in BUE. Tools > Options > Backup has "Backup files and directories by following symbolic links/junction points" selected. The same checkboxes are selected under Job Setup > Job Properties > Edit Template > Advanced. Additionally, all the checkboxes are checked under Tools > Options > Linux, Unix, and Macintosh and under Job Setup > Job Properties > Edit Template > Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but changing those options produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.
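
    One server-side check worth running first (an assumption, not from the post): "was not found" errors of this shape are sometimes caused by dangling symlinks, i.e. links whose targets no longer exist, which an agent that follows links cannot back up. find can list them:

        # List symlinks under /path whose targets do not exist
        find /path -xtype l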

    Read the article

  • Best book for learning Linux shell scripting?

    - by chakrit
    I normally work on Windows machines, but on some occasions I switch to development on Linux, and my most recent project will be written entirely for a certain Linux platform (not the standard Apache/MySQL/PHP setup). So I thought it would pay to learn to write some Linux automation scripts now. I can get around the system, start/stop services, and compile/install stuff fine; those are probably basic drills for a programmer. But if, for example, I wanted to deploy a certain application automatically to a newly minted Linux machine every month, I'd love to know how to do it. So if I wanted to learn serious Linux shell scripting, what book should I be reading? Thanks
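
    For flavor, a minimal sketch of the kind of deployment script such books teach (the artifact name and paths are hypothetical):

        #!/bin/bash
        # Deploy myapp to a fresh machine; fail fast on any error
        set -euo pipefail

        APP_TARBALL="myapp-1.0.tar.gz"    # hypothetical artifact
        INSTALL_DIR="/opt/myapp"

        sudo mkdir -p "$INSTALL_DIR"
        tar xzf "$APP_TARBALL" -C /tmp
        sudo cp -r /tmp/myapp-1.0/. "$INSTALL_DIR"
        echo "Deployed to $INSTALL_DIR"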

    Read the article

  • How to multi-boot/upgrade Linux from an LVM-based partition

    - by kenny-bobby
    I currently have FC3 Linux, which installed itself on the hard disk using LVM partitioning, so it is basically all one big partition. I would like to try some other distributions and upgrade to something newer, but I don't want to lose my current setup and data files, and I know nothing (or less) about LVM. Is it possible (and if so, an example would be nice) to install a non-LVM-based distribution alongside it on the LVM disk and have multi-boot options? Or do I have to start over and drop the LVM? My guess is that I should back up my /home (data files and rc files) to another device first, then somewhere/somehow create a new partition for installing the other distribution. Any LVM experts out there who have tried anything like this? I sure could use some pointers and advice...
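
    If the volume group has free space (or an existing volume can be shrunk), a new logical volume for the second distribution can be carved out without touching the current one, provided the new distro's installer can install into an LV. A sketch with hypothetical names and sizes:

        # Inspect the existing layout: physical volumes, volume groups, logical volumes
        pvdisplay
        vgdisplay        # check "Free PE / Size"
        lvdisplay

        # Create a 20 GB logical volume for the new distro (names are examples)
        lvcreate -L 20G -n newdistro VolGroup00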

    Read the article

  • Screensaver does not work for one user account, but does for the other

    - by Travesty3
    Both my screensaver and my "turn off monitor" power saving setting never kick in for one user account on my computer, but they do for the other. They worked before, and I can't think of anything I did just before this started happening. I'm using Windows 7 Pro SP1 with the default Aero theme; my screensaver is set to Mystify and should come on after 10 minutes. My power plan is Balanced, the Turn Off Monitor setting is 15 minutes, and the Sleep setting is Never. This all works absolutely fine in my user account, but if my wife's user account is logged in, it doesn't. I don't have any antivirus, I'm not using a wireless keyboard or mouse, and I don't have MagicJack. I have restarted the computer many times since I first saw this problem. Anyone have any suggestions?
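
    A diagnostic worth running in the affected account (not mentioned in the question): Windows 7's powercfg can list the applications and drivers currently holding "power requests" that keep the display on, which would suppress both the screensaver and the monitor timeout:

        :: Run from an elevated Command Prompt while the affected user is logged in
        powercfg -requests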

    Read the article

  • Linux installation analysis

    - by blunders
    "Ending company IT Admin relationship" has a good checklist for taking over an existing IT system, but I'm wondering as it relates to Linux: What is the most effective way to assess the scope of existing custom configurations, installs, scripts, etc done? Is there any software that will check if the kernel, system files, etc mirror the default files for the version installed? At this point I don't know what distro of Linux the server (though using Netcraft I do know the server appears to be Linux) -- so it's possible without knowing that information that this would be a hard question to answer.

    Read the article

  • How to set the default file permissions on ALL newly created files in Linux

    - by eviljack
    My question is similar to http://stackoverflow.com/questions/228534/linux-default-file-permission, but there is no scp/ftp client involved and that question looks abandoned. Simply put: I want to be able to decree, at some global level, that newly created files will never be world-writable (i.e. at most 0775). I tried putting umask 02 in /etc/profile and then in my .bash_profile, but it only works for scripts or new files that I create in a shell; it doesn't apply to files that other binaries create. Is there any way to have it apply to all new files that are created?
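
    A sketch of the usual system-wide knobs (with the caveat that the umask is per-process and inherited, so long-running daemons started outside a login session keep whatever umask their init script sets):

        # /etc/login.defs -- default umask for logins
        UMASK 002

        # /etc/pam.d/common-session (Debian/Ubuntu) or system-auth (RHEL):
        # have PAM set the umask for every PAM-managed session
        session optional pam_umask.so umask=002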

    Read the article

  • Change environment variables as standard user (Windows 7)

    - by SealedSun
    When clicking on "Advanced system settings", I need to login as the administrator and hence only edit the administrators environment variables (in addition to the machine wide ones). How do I edit the environment variables of a standard user? Details With the migration to Windows 7, I decided to work as a standard user instead of an unprivileged administrator. Works well so far but I encountered a tiny problem: When I try to change per user environment variables via the control panel I have to login as an administrator. But since I run that part of the control panel as the administrator I can only edit the administrators variables. How am I supposed to edit my own environment variables? Without resorting to extreme measures, such as editing the registry (as suggested in "Is there any command line tool that can be used to edit environment variables in Windows?" )

    Read the article

  • Windows CA to issue certificate to authenticate SSH to a Linux server

    - by BArnold
    I have a Windows Server root Certificate Authority, a Linux SSH server, and users with Windows SSH clients. The Linux box is not part of the AD domain (and probably never will be [sigh]). OpenSSH 5.4 and above supports X.509-certificate-based authentication. I am trying to find a way to use my Windows Certificate Authority to issue certificates that authenticate users when they SSH to the Linux box. I do not want to have to generate a keypair on each user's desktop, and we want the certificates controlled and revocable at the Windows CA. My question is not exactly the same as "SSH from Windows to Linux with AD certificates" (and the moelinux.net site referenced there seems to be down). I have searched Google a lot and haven't found many results about how to accomplish this. An answer doesn't necessarily have to include a full tutorial; even some hints about what to search for or pointers to some references may be helpful.
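
    For reference, a sketch of OpenSSH's native certificate flow (this is OpenSSH's own certificate format, not X.509, so it would run alongside the Windows CA rather than under it; file names and the user name are examples):

        # On whatever host acts as the SSH CA: create a CA keypair,
        # then sign a user's existing public key
        ssh-keygen -f ssh_ca
        ssh-keygen -s ssh_ca -I alice -n alice id_rsa.pub

        # On the Linux sshd, trust certificates signed by that CA
        # (in /etc/ssh/sshd_config):
        TrustedUserCAKeys /etc/ssh/ssh_ca.pub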

    Read the article

  • GRE tunnel Cisco-Linux traffic forwarding

    - by mezgani
    I set up a GRE tunnel between a Cisco router and a Linux machine; the tunnel interface on the Linux box is named pic. I have to forward traffic coming from the Cisco through the Linux box. The rules I've set on the Linux box are as follows:

        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables -A INPUT -p 47 -j ACCEPT
        iptables -A FORWARD -i ppp0 -j ACCEPT
        iptables -A FORWARD -i pic -o ppp0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i ppp0 -o pic -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

    I see the traffic coming in from the tunnel and forwarded to the internet, but no replies to the sent packets come back. Am I missing something, like a routing rule?
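
    Two things worth checking in this situation (assumptions, not from the question): reverse-path filtering often drops replies that arrive asymmetrically on a tunnel interface, and the FORWARD chain above only accepts ESTABLISHED/RELATED traffic from pic to ppp0, so NEW connections from the tunnel side are not matched (which matters if the chain policy is DROP):

        # Relax reverse-path filtering on the tunnel interface
        echo 0 > /proc/sys/net/ipv4/conf/pic/rp_filter

        # Allow NEW connections coming in from the tunnel out to ppp0
        iptables -A FORWARD -i pic -o ppp0 -j ACCEPT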

    Read the article

  • Setting up Samba shares on a Linux VPS

    - by 101265052760541259879
    Hi, I'm trying to set up a folder on my Linux VPS (on which our company's website resides) that can be accessed over the net by Windows clients. I know a little about Linux, and have used Samba before to browse Windows shares from a Linux laptop. I'm guessing it's possible to do the reverse: to share a folder from Linux to a Windows client. I have root SSH access to the VPS. Would anyone know what steps I need to take to set up the share, and how I can secure it, ideally with a simple username/password so the Windows clients can connect easily? Many thanks, Jack
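
    A minimal sketch of such a share (path, share name, and user are examples; note that exposing SMB directly to the internet is usually avoided, e.g. by restricting it to a VPN or firewall rule):

        # /etc/samba/smb.conf -- minimal password-protected share
        [shared]
            path = /srv/shared
            valid users = jack
            read only = no

        # Give the existing system user a Samba password, then restart the service
        smbpasswd -a jack
        service smbd restart    # or: /etc/init.d/samba restart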

    Read the article

  • Terse, documented, correct way to create Kerberos-backed user shares in Greyhole

    - by MrGomez
    As a migration strategy away from Windows Home Server (which is currently out of support and intractable for our needs, for a variety of reasons), our little cloister of nerds has targeted Greyhole for our shared use at home. Despite the documentation's terseness, getting the system set up for simple, single-user operation isn't especially difficult, but this scenario fails to serve our needs. Among other highlights of the system, we're attempting to emulate Integrated Windows Authentication (with Kerberos) and per-user shares to keep the Windows users in the house happy and well supported. I'm aware of the underlying systems that go into Greyhole and understand how to set up per-user shares in Samba, but the documentation doesn't seem to cover the case of Greyhole sopping up these directories as separate landing zones for replication. Enter my question: are both of these cases (IWA user authentication and user-partitioned personal shares) supported by Greyhole? If so, please cite or link the supporting documentation if it exists.

    Read the article

  • How do I recover from a Linux CentOS 4.6 operating system crash?

    - by Greg Omebije
    Our x86 Linux server running CentOS 4.6 has crashed. The machine boots only to the GRUB prompt. We have tried using "rescue mode" to recover the system, but it hasn't worked. How can we fix this problem so that the machine boots normally? Or, failing that, how can we at least recover our files from the server? Our Linux server configuration: Dell PowerEdge 1950, Intel Xeon, 2 HDDs (146 GB each), 4 GB RAM, hardware and software RAID setup, CentOS 4.6. We used SystemRescueCd to boot the computer; the following is the output of fdisk -l:

        Disk /dev/sda: 292.3 GB, 292326211584 bytes
        255 heads, 63 sectors/track, 35539 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00000080

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      104391   83  Linux
        /dev/sda2              14       17769   142625070   8e  Linux LVM
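
    Since /dev/sda2 is an LVM physical volume, the filesystems can usually be reached from the rescue environment by activating the volume group and mounting the logical volumes (the volume group and LV names below are the CentOS 4 defaults, an assumption):

        # From the rescue shell: find and activate LVM volume groups
        lvm vgscan
        lvm vgchange -ay

        # Mount the root LV read-only and copy files off
        mkdir -p /mnt/recover
        mount -o ro /dev/VolGroup00/LogVol00 /mnt/recover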

    Read the article

  • Running MontaVista Linux kernel 2.6.21 on Xen

    - by Josh
    I've been assigned the task of getting our MontaVista Linux (2.6.21 kernel) running on Xen. We'll be running Xen in HVM mode. My Xen version is 3.4.0 (Linux kernel 2.6.18), and I am unable to run MontaVista Linux (kernel 2.6.21) in HVM mode. Does anyone have suggestions?

    Read the article
