Search Results

Search found 33182 results on 1328 pages for 'linux port'.


  • How do I allow users to execute commands via ssh without allocating a pseudo-terminal

    - by Dani El
    I need to allow users to run a limited set of commands, but not to let them create interactive sessions, just like GitHub does: if you ssh in without a command, it greets you and closes the session. I can achieve this with ForceCommand some-script, but inside some-script I then have to evaluate the user's input. Is there perhaps some other NoTTY-like option in sshd_config?

    UPDATE: I'm looking for a pure SSH/Bash solution, not Perl/Python/etc. hacks.
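    A minimal sketch of that pattern, with a hypothetical wrapper path and group name; sshd puts the requested command in SSH_ORIGINAL_COMMAND, so a case statement can whitelist exact strings without eval (PermitTTY needs OpenSSH 6.2+; on older versions, no-pty in authorized_keys gives a similar per-key effect):

      # /etc/ssh/sshd_config
      Match Group limited
          ForceCommand /usr/local/bin/ssh-gate
          PermitTTY no

      #!/bin/bash
      # /usr/local/bin/ssh-gate (hypothetical path)
      case "$SSH_ORIGINAL_COMMAND" in
          uptime|"df -h")                   # whitelist: exact strings only, no eval
              exec $SSH_ORIGINAL_COMMAND ;;
          "")                               # plain "ssh host" with no command
              echo "Hi $USER! Interactive sessions are disabled."; exit 0 ;;
          *)
              echo "Command not allowed." >&2; exit 1 ;;
      esac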

    Read the article

  • In Ubuntu I make changes to php.ini but nothing happens

    - by MrAn3
    Apache with PHP works well, but none of the changes I make in php.ini have any effect. I've even deleted the entire contents of the file, restarted Apache, and run phpinfo(), and surprisingly everything keeps working. The file I'm editing is the one phpinfo() reports as "Loaded Configuration File" (/etc/php5/apache2/php.ini). I'm running Ubuntu 9.04 and PHP 5.2. Thanks in advance.

    More details: I'm restarting with sudo /etc/init.d/apache2 restart; I've also tried sudo /etc/init.d/apache2 stop and then start. On restart I get:

      Restarting web server apache2
      apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
      ... waiting
      apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
      [ OK ]

    "which php" did not produce any results. I installed PHP using Synaptic Package Manager, choosing "Mark Packages by task" and then LAMP server. I don't have any clue what to do...
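    A hedged checklist for this kind of symptom: Apache's PHP also parses every .ini file in a conf.d directory after php.ini, so a duplicate setting there can silently override edits:

      php --ini                       # which ini the CLI loads (often differs from Apache's)
      ls /etc/php5/apache2/conf.d/    # extra .ini files parsed after php.ini
      sudo /etc/init.d/apache2 restart
      # in phpinfo(), also check "Scan this dir for additional .ini files"
      # and "Additional .ini files parsed" for anything shadowing php.ini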

    Read the article

  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup: private network vboxnet1, 10.0.7.0/24; one host (Ubuntu desktop) and one VM (Ubuntu server, VirtualBox).

    Addressing layout:

      HOST:              10.0.7.1
      VM:                10.0.7.101
      VM MAC NAMESPACE:  10.0.7.102

    On the VM, I ran the following commands:

      ip netns add mac                          # create a new namespace
      ip link add link eth0 mac0 type macvlan   # create a new macvlan interface
      ip link set mac0 netns mac

    In the mac namespace, inside the VM:

      ip link set lo up
      ip link set mac0 up
      ip addr add 10.0.7.102/24 dev mac0

    So that we basically end up with (like Inception?):

      +------------------------+
      | Host: 10.0.7.1         |
      |                        |
      | +--------------------+ |
      | | VM: 10.0.7.101     | |
      | |                    | |
      | | +----------------+ | |
      | | | NS: 10.0.7.102 | | |
      | | |                | | |
      | | +----------------+ | |
      | +--------------------+ |
      +------------------------+

    What works:

      - ping between Host and VM
      - ping between NS and NS
      - dhclient from NS

    What does not work:

      - ping between NS and VM
      - ping between NS and Host

    Where I started to go nuts:

      - tcpdump on the host (the real machine) actually shows ARP requests AND replies
      - tcpdump in the NS shows ARP requests sent to the host
      - tcpdump on the VM makes the whole mess work (!): pings start getting answers as soon as tcpdump is started on the VM?!?

    So, I bet you were eager for it, my question is: how do I make it work? I suspect something is wrong with ARP on the macvlan inside the NS, but I can't figure out what exactly... By the way, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
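    The tcpdump symptom (traffic flowing only while the parent NIC is in promiscuous mode) is typical of macvlan's default VEPA mode, which expects a hairpin-capable switch. A hedged variant of the setup using bridge mode instead:

      ip netns del mac 2>/dev/null                           # start over
      ip netns add mac
      ip link add link eth0 mac0 type macvlan mode bridge    # default mode is vepa
      ip link set mac0 netns mac
      ip netns exec mac ip link set lo up
      ip netns exec mac ip link set mac0 up
      ip netns exec mac ip addr add 10.0.7.102/24 dev mac0

    Note, as an assumption worth verifying: traffic between a parent interface's own address (the VM's 10.0.7.101 on eth0) and a macvlan hanging off that parent is dropped by design in every mode, so NS-to-VM may still fail; a common workaround is to move the VM's own address onto a second macvlan interface on eth0.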

    Read the article

  • Ubuntu doesn't mount one of my NTFS disks

    - by Jader Dias
    - There is a mountable /dev/sda, NTFS-formatted (the Windows disk).
    - There is no /dev/sdb when I ls /dev (the NTFS data disk).
    - There is a /dev/sdc, another disk of the same model (the Ubuntu disk).
    - I can see that Ubuntu detected the unmountable disk in the Disk Utility.
    - It incorrectly states that the disk is unpartitioned and a RAID volume (it was previously a RAID0 set with /dev/sdc, but is now a simple volume, no RAID whatsoever).
    - When I boot Windows 7, it uses this unmountable disk without a glitch.
    - The problem happens in both IDE and AHCI modes.
    - Ubuntu 10.04 Lucid Lynx.
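    The "unpartitioned RAID volume" reading suggests stale fakeRAID metadata left over from the old RAID0 set, which Ubuntu's dmraid may be claiming. A hedged check, assuming the data disk is visible to dmraid at all (the device name here is a guess, since it may enumerate differently):

      sudo dmraid -r               # list disks still carrying fakeRAID metadata
      sudo dmraid -r -E /dev/sdb   # erase that metadata: destructive, back up first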

    Read the article

  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server set up and working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except AD-related ones. So my goal is to leave the caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me thinking that my caching server is not forwarding properly.

    For example, this AD uses the naming convention hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local itself, but none for the objects in that domain). Any suggestions?

    Here is my named.conf file:

      options {
          directory "/var/named";
          listen-on { 192.168.0.14; 127.0.0.1; };
          forwarders { ; ; };
          forward first;
      };

      zone "." in {
          type hint;
          file "db.cache";
      };

      zone "0.0.127.in-addr.arpa" in {
          type master;
          file "db.127.0.0";
      };

      // forward zone for mydomain.local
      zone "mydomain.local" {
          type forward;
          forwarders { 192.168.1.21; };
      };
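    One hedged thing to try: with forward first in options, a failed or refused forward falls back to normal recursion, and the root servers can never resolve .local, so the query dies. Pinning the zone to forward-only avoids that fallback:

      zone "mydomain.local" {
          type forward;
          forward only;
          forwarders { 192.168.1.21; };
      };

      # then reload and test from a client's point of view:
      #   rndc reload
      #   dig @192.168.0.14 dc1.mydomain.local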

    Read the article

  • In a PHP script, making /coderoot refer to /var/webroot/coderoot?

    - by Josh
    We are migrating a server and have changed the layout slightly, so that instead of /var/coderoot we now have /var/webroot/coderoot. I realize I could do a scripted find-and-replace, but I would rather have full, unmodified backward compatibility (or if that's unreasonable, let's just say for theory's sake). I tried a symlink, ln -s /var/coderoot /var/webroot/coderoot, but including a file from the code root as /var/coderoot/file does not work. I also tried mod_alias with ScriptAlias and Alias; neither worked. Is there any way to do this?
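    Worth checking: ln -s takes the target first and the link name second, so the command above points the wrong way. A hedged sketch of the intended direction (the PHP file name is made up):

      # ln -s TARGET LINK_NAME: /var/webroot/coderoot is the real tree
      sudo ln -s /var/webroot/coderoot /var/coderoot
      ls -l /var/coderoot        # should show -> /var/webroot/coderoot
      php -r 'var_dump(file_exists("/var/coderoot/somefile.php"));'

    If the earlier command ran while /var/webroot/coderoot already existed as a directory, it likely left a stray link at /var/webroot/coderoot/coderoot that is worth removing.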

    Read the article

  • RHEL 5 list missing critical patches/packages

    - by Vinnie Biros
    I'm trying to figure out whether there is an easy way to identify the missing critical patches/packages on my RHEL 5 boxes. This is for audit purposes, and I was hoping there was an RPM command or something of the sort that would accomplish it easily. I know that on my Solaris 10 boxes I can run "smpatch analyze", which displays this information for me. Does anyone know of anything similar for RHEL 5? Thanks.
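    On RHEL 5 the yum-security plugin provides roughly what smpatch analyze does on Solaris; a hedged sketch, assuming the box has an active RHN subscription:

      sudo yum install yum-security
      yum list-security               # security advisories that apply to this box
      yum --security check-update     # only packages with pending security updates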

    Read the article

  • Permission denied when trying to execute a binary burned to a CD-R

    - by user16654
    On an Ubuntu 9.10 (Karmic Koala) machine, I burned a CD from the command prompt using:

      cdrecord -v speed=16 dev=0,1,0 /FPS.iso

    The CD now contains an executable and some files. I tested the CD by loading it onto another machine (Red Hat 5.3), and when I try to run the program I get the following message:

      bash: ./FPS1_1: Permission denied

    I can open other files, like text documents (the executable also comes with shared libraries). I realized I had burned the CD as root, so I burned another one as another user, but I still have the same problem. How can I get past this permission error, and what is the problem? P.S. The image was in /, if that helps.
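    Two usual suspects, sketched with a guessed mount point: a noexec mount option, or an image mastered without Rock Ridge extensions (plain ISO9660 stores no Unix permission bits at all, so the execute bit is simply gone):

      mount | grep -i -e cdrom -e sr0             # look for noexec in the options
      sudo mount -o remount,exec /media/cdrom     # adjust to the real mount point
      # if the image lacked Rock Ridge (mkisofs -R/-r), copy and re-mark instead:
      cp /media/cdrom/FPS1_1 /tmp/ && chmod +x /tmp/FPS1_1 && /tmp/FPS1_1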

    Read the article

  • Timeout ssh sessions after inactivity?

    - by Insyte
    PCI requirement 8.5.15 states: "If a session has been idle for more than 15 minutes, require the user to re-enter the password to re-activate the terminal." The first, and most obvious, way to deal with ssh sessions that are idling at the bash prompt is by enforcing a read-only, global $TMOUT of 900. Unfortunately, that only covers sessions sitting at the bash prompt. The spirit of the PCI spec would also require killing sessions running top/vim/etc. I've considered writing a */1 cron job that parses the output of "/usr/bin/w" and kills the associated shell, but that seems like a blunt instrument. Any ideas for something that would actually do what the spec requires and just lock the terminal? I've looked at away and vlock; they both seem great for voluntarily locking your terminal, but I need a cron/daemon task that will enforce locking.
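    For reference, a minimal sketch of the $TMOUT half described above; as noted, it only covers sessions idling at a shell prompt, not ones parked inside top or vim:

      # /etc/profile.d/tmout.sh
      TMOUT=900
      readonly TMOUT
      export TMOUT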

    Read the article

  • Apache virtual host subdomains point to the same directory

    - by Jakobud
    I have set up subdomains with Apache before and have never really run into any big problems, but on this server (CentOS, I believe) belonging to one of my clients, I don't understand what I'm doing wrong. Here is the .conf that Apache loads:

      Listen 80
      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName www.thedomain.com
          DocumentRoot /u1/thedomain.com/public
          RailsEnv production
      </VirtualHost>

      <VirtualHost *:80>
          ServerName subdomain.thedomain.com
          DocumentRoot /u1/subdomain.thedomain.com/public_html
      </VirtualHost>

    When I access either the primary or the subdomain address, both serve the primary www.thedomain.com content. Any thoughts?

    UPDATE: Yes, I did a configtest and a graceful after making the changes.
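    A quick hedged check is to ask Apache how it actually parsed the vhosts; a missing NameVirtualHost match or an earlier-loaded default vhost usually shows up here:

      apachectl -S      # or httpd -S on some distros
      # expect both www.thedomain.com and subdomain.thedomain.com under *:80;
      # the first vhost listed is the default that catches any unmatched Host header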

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site options and aliases, or similar shortcuts to .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone, and a minimal rsync tool on it as well. I'm getting tired of having to log in to the phone as root and symlink both tools to the standard places the Android OS looks for executables, since the ssh server is bare-bones and ships a typical busybox-style multi-call binary for the basic Unix commands (which does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to rsync any files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell-script wrapper, but this sometimes limits how much I can customize the rsync call. I'm just wondering whether there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.
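    rsync has no per-host client config of its own, but it does honor ssh's. One hedged pattern is an ssh host alias plus a tiny wrapper that pins the remote rsync path (host, address, and paths here are all invented):

      # ~/.ssh/config
      Host android
          HostName 192.168.0.42
          User root

      #!/bin/bash
      # ~/bin/arsync: rsync with the phone's options baked in
      exec rsync --rsync-path=/data/local/bin/rsync --no-g --no-p --no-t "$@"

      # usage:
      #   arsync android:/mnt/sdcard/ ./sdcard-backup/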

    Read the article

  • What could cause a file system to spontaneously unmount or become invalid for a short time?

    - by Ichorus
    We've got DB2 LUW running on a RHEL box. We had a crash of DB2, and IBM came back and said that a file DB2 was trying to access (through open64()) had unmounted or become invalid. We have done nothing but restart the database, and things seem to be running fine. Also, the file in question looks perfectly normal now:

      $ cd /db/log/TEAMS/tmsinst/NODE0000/TEAMS/T0000000/
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT
      $ file C0000000.CAT
      C0000000.CAT: data
      $ lsattr C0000000.CAT
      ------------- C0000000.CAT
      $ ls -l
      total 557604
      -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT

    With those facts in hand (please correct me if I am misinterpreting the data), what could cause a file system to 'spontaneously unmount or become invalid for a short time'? What should my next step be? This is Dell hardware, and we ran their diagnostic tools against it; everything came back clean.
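    As a hedged next step: the kernel usually logs something when a filesystem errors out, gets remounted read-only, or a device resets, so searching the logs around the crash timestamp may confirm or rule out the "became invalid" story:

      grep -iE 'ext[34]|i/o error|remount|scsi|sd[a-z]' /var/log/messages*
      dmesg | grep -iE 'error|remount'     # only useful if the box has not rebooted since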

    Read the article

  • Cloning to a smaller hard drive with DDRescue

    - by krebshack
    I am currently working with a 700 GB Seagate hard drive that's beginning to fail; I'll call it "SDB" from now on. I'd like to clone it while I'm still able to. However, the only hard drive I have available is a 500 GB WD drive; I'll call it "SDC". The partition scheme on SDB is as follows: 9.77 GB is allocated to a recovery partition, and the remaining 688.87 GB to a Windows partition; both are formatted NTFS. There is no partition scheme on SDC.

    I know how to clone one hard drive to another using ddrescue, but I've only done it with drives of the same size. For reference, I normally use:

      ddrescue -v -r 3 /dev/sdb /dev/sdc example.log

    I'd like to know whether this is possible at all with ddrescue. I've read the GNU manual (http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html) and saw nothing indicating that it is, so I'm mostly looking for confirmation of that impression. If it's not possible, workaround suggestions would be helpful, but please don't feel obligated; I don't want to bog down my one thread with too many questions.
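    ddrescue copies blocks blindly, so a whole-disk 700 GB to 500 GB copy cannot work. A hedged partition-level route is to shrink the NTFS filesystem first and then rescue partition by partition; device names and the target size are illustrative, and running ntfsresize on a failing disk is itself a risk:

      ntfsresize --info /dev/sdb2          # how small can the Windows filesystem go?
      ntfsresize --size 450G /dev/sdb2     # shrink it below the target partition size
      # recreate a matching (smaller) partition table on /dev/sdc, then:
      ddrescue -v -r 3 /dev/sdb1 /dev/sdc1 part1.log
      ddrescue -v -r 3 /dev/sdb2 /dev/sdc2 part2.log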

    Read the article

  • Upgrading phpmyadmin (and other packages) on Debian Squeeze

    - by westexasman
    I just set up a new VM with Debian Squeeze (the latest stable release, 6.0.4). It's going to be a web server, so I installed the usual: Apache, PHP 5, MySQL, phpMyAdmin, etc. Everything went well and everything works. My question is about upgrading packages. I noticed the phpMyAdmin version is 3.3.7, while the latest is 3.4.10.1, and apt-get update/upgrade does not upgrade the package. How does one upgrade packages on a Debian Squeeze server when apt-get update/upgrade does not do it? Thanks!
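    Stable deliberately freezes versions; newer builds of selected packages ship through backports. A hedged sketch using squeeze-backports:

      echo 'deb http://backports.debian.org/debian-backports squeeze-backports main' | \
          sudo tee /etc/apt/sources.list.d/backports.list
      sudo apt-get update
      sudo apt-get -t squeeze-backports install phpmyadmin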

    Read the article

  • running a web server with an encrypted file system (all or part of it)

    - by Carlos
    I need a web server (LAMP) running inside a virtual machine (#1), running as a service (#2), in headless mode (#3), with part or all of the filesystem encrypted (#4). The virtual machine will be started with no user intervention and will provide access to a web application for users on the host machine. Points #1, #2 and #3 are checked and proven to work fine with Sun VirtualBox, so my question is about #4: can I encrypt the whole filesystem and still reach the web server with a browser, or will GRUB ask me for a password? If encrypting the whole filesystem is not an option, can I encrypt only /home and /var/www? Will Apache/PHP be able to use files in /home or /var/www without asking for a password or without my mounting those partitions manually? Thanks
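    A hedged LUKS sketch for encrypting just /var/www (the partition is hypothetical). Note the trade-off the question implies: an unattended boot needs the key on disk or typed at the console, so a keyfile mainly protects the data when the disk is read elsewhere:

      sudo cryptsetup luksFormat /dev/sdb1
      sudo dd if=/dev/urandom of=/root/www.key bs=64 count=1 && sudo chmod 400 /root/www.key
      sudo cryptsetup luksAddKey /dev/sdb1 /root/www.key
      sudo cryptsetup luksOpen --key-file /root/www.key /dev/sdb1 www
      sudo mkfs.ext4 /dev/mapper/www
      # /etc/crypttab:  www  /dev/sdb1  /root/www.key  luks
      # /etc/fstab:     /dev/mapper/www  /var/www  ext4  defaults  0  2

    Once mounted, Apache and PHP see an ordinary filesystem, with no password prompts of their own.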

    Read the article

  • Run an application as another user

    - by user62367
    OS: Fedora 14. GUI: GNOME.

    I need to run an application as a user other than the one normally logged in. The purpose is to create a ".desktop" file on my desktop that runs e.g. Google Chrome as another user (NOT root, so beesu doesn't count). There are no gksu or kdesu packages in Fedora 14; why? So I want to create a user with "adduser SOMEONE" and run e.g. Google Chrome as "SOMEONE"; it will then have minimum permissions, i.e. "more security". Thank you!
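    A hedged sketch using sudo in place of the missing gksu; the user names and file names are invented, and a different user usually also needs explicit access to your X display:

      # /etc/sudoers.d/chrome-someone (edit with visudo -f)
      me ALL=(someone) NOPASSWD: /usr/bin/google-chrome

      # ~/Desktop/chrome-someone.desktop
      [Desktop Entry]
      Type=Application
      Name=Chrome (as someone)
      Exec=sudo -u someone -H /usr/bin/google-chrome
      # if X refuses the connection, allow it once per session:
      #   xhost +SI:localuser:someone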

    Read the article

  • How do I use command line and wmctrl to make a window larger than the screen to get a huge screenshot?

    - by Mnebuerquo
    I use a program that renders a large image which I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way to get the image out of the program is a screenshot. My goal is to save the full-size image without piecing together individual screenshots. I'm using this script to try taking one:

      #!/bin/bash
      window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
      wmctrl -v -i -r $window -e '0,0,0,6030,5828'
      wmctrl -i -a $window
      import -window $window ~/Desktop/screenshot.png

    This uses wmctrl to get the window id ($window) of a window named "Program", then tries to resize the window to the desired dimensions, and finally uses ImageMagick (import) to save screenshot.png on the user's desktop. All of this works except the resize step: I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work. I'm on Ubuntu 10.04 with the GNOME desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window beyond my screen to get one huge screenshot?
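    If the window manager refuses to size windows beyond the screen, one hedged workaround is to run the program on a virtual X server that is large enough and screenshot its root window there ('theprogram' is a placeholder):

      Xvfb :1 -screen 0 6200x6000x24 &     # virtual display bigger than the target window
      DISPLAY=:1 theprogram &
      sleep 5                              # crude: give it time to draw
      xwd -display :1 -root -out /tmp/shot.xwd
      convert /tmp/shot.xwd ~/Desktop/screenshot.png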

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file-permissions policy on our web server. Our situation: the server is accessed over sftp by different users from within our company, and the general public reaches Apache, sometimes uploading files through PHP. I distinguish folders and files by their use. Based on that reading, here is my plan.

    All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.

    Directories
      Permission: 771. Owner: user:uploaders.
      Explanation: to access files in the folder, everybody needs execute permission. Only uploaders will be adding and removing files, so they also get r+w.

    Files within the web root
      Permission: 664. Owner: user:uploaders.
      Explanation: they will be uploaded and changed by different users, so both owner and group need w+r. The web server only needs to read files, so r only.

    Upload directories
      Permission: 771. Owner: user:webserver.
      Explanation: when files are uploaded, Apache needs to be able to write to the directory. I figure it is safer to change the group owner to webserver, giving Apache sufficient privileges (all uploaders also belong to this group and get the same permissions) while safeguarding against "others" writing to the folder.

    Uploaded files
      Permission: 664. Owner: user:webserver.
      Explanation: after uploading, Apache might need to delete files, which is no problem because it has w+r on the folder. So the file itself needs no more than r for the group.

    Not being an expert on file permissions, my question is whether this is the best possible policy for our situation. Any suggestions welcome.
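    For concreteness, a hedged sketch of the commands that would implement the policy above; the site path, user name, and Apache user are all invented and will differ per distro:

      sudo groupadd uploaders && sudo groupadd webserver
      sudo usermod -aG webserver www-data            # Apache's user on Debian-style systems
      sudo usermod -aG uploaders,webserver alice
      sudo chown -R alice:uploaders /var/www/site
      sudo find /var/www/site -type d -exec chmod 771 {} +
      sudo find /var/www/site -type f -exec chmod 664 {} +
      sudo chown alice:webserver /var/www/site/uploads
      sudo chmod 771 /var/www/site/uploads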

    Read the article

  • Getting a weird error whenever I try restarting Apache

    - by Binny Zupnick
    I'm trying to install Apache, PHP 5, MySQL, and phpMyAdmin, following a tutorial, but this error keeps coming up:

      apache2: Syntax error on line 227 of /etc/apache2/apache2.conf:
      Could not open configuration file /etc/phpmyadmin/apache.conf:
      No such file or directory
      Action 'configtest' failed.
      The Apache error log may have more information.
      ...fail!

    I've tried removing all of the packages and reinstalling, to no avail. I'm pulling my hair out over this, so thanks in advance! =)

    Edit: during the tutorial I messed up and deleted something, so I know that's the cause; I just don't know what to do about it now.
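    Since /etc/phpmyadmin/apache.conf was deleted by hand, a hedged way out is to purge and reinstall so the package recreates its config files (plain remove leaves them out), or to disable the Include until then:

      sudo apt-get purge phpmyadmin
      sudo apt-get install phpmyadmin
      # or, just to get Apache starting again, comment out the Include
      # the error points at (inspect line 227 before running this):
      sudo sed -i '227 s/^/#/' /etc/apache2/apache2.conf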

    Read the article

  • "service"-command and environment variables

    - by varesa
    I am trying to start a service that requires an environment variable to be set to a certain path. I set this variable in /etc/profile.d/. However, when I start the service using the service command, it doesn't work. From man service:

      service runs a System V init script in as predictable environment as
      possible, removing most environment variables and with current working
      directory set to /.

    So it seems that service is removing my variables. How should I set the variables up to keep them from being removed? Or is that something I should not do? I could start the service manually using the init scripts, or even hard-code the path into the script, but I'd like to know how to use it with the service command.
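    The conventional fix is to let the init script itself source the variable from a per-service file, which survives service's scrubbed environment. A hedged RHEL-style sketch with invented names (Debian uses /etc/default/ for the same purpose):

      # /etc/sysconfig/myservice
      APP_HOME=/opt/myapp

      # near the top of /etc/init.d/myservice:
      [ -f /etc/sysconfig/myservice ] && . /etc/sysconfig/myservice
      export APP_HOME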

    Read the article

  • What music streaming app fits my needs on Ubuntu?

    - by Jim
    I'm looking for an application to stream Internet radio on Ubuntu. I like listening to Radio Paradise while I work. Right now, I'm using Amarok. "Movie Player" sometimes refuses to open the stream, and VLC doesn't keep its window title updated with the currently playing track. Amarok has nice translucent notifications when tracks change, but track changes in streams don't trigger the notifications. Mostly, I want something that reliably opens streams and makes it easy to see the name of the track that's playing. If it has a built-in directory of streaming radio stations, that would be a big benefit.

    Read the article

  • Creating rescue / install USB flash disk for CentOS

    - by wwwpanda
    With the CentOS installation CDs you can install the OS, and you can also boot into "rescue" mode and do a chroot mount of the system partition for troubleshooting, even when the system is installed on hardware-RAID drives. How can I create a similar thing on a USB flash drive? I tried with unetbootin, but when booting from the USB stick, the CentOS setup eventually still asks for the CDs. Ultimately I want to use this flash drive for remote disaster recovery through, say, the HP iLO remote console, Dell iDRAC, etc.
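    One hedged route for older CentOS installers is to keep the unetbootin-prepared stick but also copy the full DVD ISO onto it, then point the installer at the ISO so it never asks for CDs; rescue mode lives on the same image (device and paths are illustrative):

      # stick mounted at /media/usb (FAT partition)
      cp CentOS-*-bin-DVD*.iso /media/usb/
      # at the installer boot prompt:
      #   linux askmethod          -> choose "Hard drive", pick the USB partition and the ISO
      #   linux rescue askmethod   -> same, but boots straight into rescue mode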

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (there is a good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when we do that, even though we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first Puppet run). My process would look something like this:

      1. basic OS installation
      2. basic network configuration, enough to reach the internet and resolve DNS
      3. install Puppet and set up certname (see the sketch below)
      4. start Puppet and let it manage the whole configuration
      5. test, fix problems in the config (via Puppet), re-test, and so on...
      6. manually stop Puppet
      7. set up the new network configuration for the datacenter network
      8. move the machine to the DC
      9. turn it on
      10. Puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying the configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change over the whole system life cycle. Is there any problem with this approach? Could it be improved?
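    For step 3, pinning the certname is a one-line setting; a hedged sketch with invented names:

      # /etc/puppet/puppet.conf, written at provision time, before the first agent run
      [main]
      certname = web03.example.com
      server   = puppet.example.com

    Since the agent's certificate is keyed to certname rather than to whatever reverse DNS currently says, the move to the datacenter should not disturb it.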

    Read the article
