Search Results

Search found 1603 results on 65 pages for 'ls'.

Page 37/65

  • Lenovo B460e laptop Unknown filesystem

    - by Dinesh
    I got a Lenovo B460e laptop yesterday (from the TN Govt). It has Windows 7 Professional. I set it up the way I liked it, all software, drivers and all. I wanted extra drives, so I went into the Windows disk administration tool and changed my partition setup by splitting the disk, ending up with four partitions. So far so good. When I rebooted, I landed in GRUB RESCUE MODE. In this mode the only command I know is "ls", which gives something like (hd0) (hd0,msdos5) (hd0,msdos4) (hd0,msdos2) (hd0,msdos1). This laptop has no CD drive either, otherwise I could have formatted and reinstalled from a Windows 7 CD. Now I don't know what I'm supposed to do. Anyone have an idea how to fix this?
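
    A minimal sketch of how GRUB rescue can sometimes be talked into booting again, assuming the Linux /boot files still survive on one of the partitions that ls reports ((hd0,msdos1) below is only a guess; probe each partition with ls first):

        grub rescue> ls (hd0,msdos1)/          # repeat for each partition until one shows a /boot or /grub directory
        grub rescue> set root=(hd0,msdos1)
        grub rescue> set prefix=(hd0,msdos1)/boot/grub
        grub rescue> insmod normal
        grub rescue> normal

    If none of the partitions still holds GRUB's files, reinstalling from a bootable USB stick is the usual fallback, since this laptop has no optical drive.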

  • /dev/fuse "permission denied" even when member of fuse group

    - by steeef
    I have a backup script scheduled on a Debian 5.0 x86 server, via sshfs. However, when I attempt to mount the remote directory, I receive: failed to open /dev/fuse: Permission denied ls -l /dev/fuse returns: crwxrwxr-x 1 root fuse 10, 229 2010-11-12 09:08 /dev/fuse id backup returns: uid=501(backup) gid=501(backup) groups=501(backup),46(plugdev),108(fuse) The only way I can get the directory to mount is if I run chmod a+w /dev/fuse, but this is reset at some point during the day. It's a kludge though, and I'd rather figure out why the group permissions aren't working.

  • What is the fastest way to recall history commands beginning with certain characters in Linux?

    - by gerry
    In DOS we can type the first few characters to filter the command history and find the right one quickly. But how do you do the same thing in Linux? For example, when I am testing a local server: cd sudo /etc/init.d/vsftpd start wget ... ls emacs ... sudo /etc/init.d/vsftpd restart sudo /etc/init.d/vsftpd stop ... In DOS you can easily type sudo and switch among the three commands beginning with it using the arrow keys. But in Linux, is the command below the best we can do? history | grep sudo I don't like it, because the history can easily become a mess, and it also needs a mouse action.
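
    For reference, bash already has both incremental and prefix history search built in, so no grepping or mouse work is needed; a minimal sketch assuming the default readline (emacs-style) bindings:

        # Ctrl-R, then type "sudo" : reverse incremental search through history; Ctrl-R again cycles older matches
        # !sudo                    : re-runs the most recent command that started with "sudo"
        # make Up/Down cycle only the commands matching what is already typed (the behaviour the question describes):
        printf '%s\n' '"\e[A": history-search-backward' '"\e[B": history-search-forward' >> ~/.inputrc

    After editing ~/.inputrc, open a new shell (or press Ctrl-X Ctrl-R) so readline reloads its bindings.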

  • Apache + LDAP Auth: access to / failed, reason: require directives present and no Authoritative hand

    - by Karolis T.
    Can't solve this one, here's my .htaccess: AuthPAM_Enabled Off AuthType Basic AuthBasicProvider ldap AuthzLDAPAuthoritative on AuthName "MESSAGE" Require ldap-group cn=CHANGED, cn=CHANGED AuthLDAPURL "ldap://localhost/dc=CHANGED,dc=CHANGED?uid?sub?(objectClass=posixAccount)" AuthLDAPBindDN CHANGED AuthLDAPBindPassword CHANGED AuthLDAPGroupAttribute memberUid AuthLDAPURL is correct, and BindDN and BindPassword are correct too (verified with ldapvi -D ..). Apache version: Apache/2.2.9 (Debian). The error message seems cryptic to me; I have AuthzLDAPAuthoritative on, so where's the problem? EDIT: LDAP modules are loaded, so the problem is not that they are missing. # ls /etc/apache2/mods-enabled/*ldap* /etc/apache2/mods-enabled/authnz_ldap.load /etc/apache2/mods-enabled/ldap.load EDIT2: Solved it by replacing the funky Require ldap-group cn=CHANGED, cn=CHANGED line with Require valid-user. Since AuthzLDAPAuthoritative is on, no other auth methods will be used and the valid-user requirement will authenticate via LDAP. (right? :/)
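
    For reference, a cleaned-up sketch of the .htaccess as it ends up after EDIT2 (the CHANGED values are the asker's placeholders). Require ldap-group can still be made to work, but only if the group entry's membership attribute matches what AuthLDAPGroupAttribute names and AuthLDAPGroupAttributeIsDN is set accordingly, which is the part that usually bites:

        AuthPAM_Enabled Off
        AuthType Basic
        AuthBasicProvider ldap
        AuthzLDAPAuthoritative on
        AuthName "MESSAGE"
        AuthLDAPURL "ldap://localhost/dc=CHANGED,dc=CHANGED?uid?sub?(objectClass=posixAccount)"
        AuthLDAPBindDN CHANGED
        AuthLDAPBindPassword CHANGED
        AuthLDAPGroupAttribute memberUid
        Require valid-user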

  • why can't macports find make

    - by GeoffreyF67
    I am trying to run MacPorts like this: port install php5 When I do so, however, I get this error: Error: Unable to open port: can't read "build.cmd": Failed to locate 'make' in path: '/opt/local/bin:/opt/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin' or at its MacPorts configuration time location, did you move it? So I looked at my path: declare -x PATH="/Developer/usr/bin:/opt/subversion/bin:/opt/local/bin:/opt/local/sbin:/usr/local/php5/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin" and then looked to make sure make was in one of those dirs: ls -l /Developer/usr/bin/make $ lrwxr-xr-x 1 root admin 7 Aug 7 16:47 /Developer/usr/bin/make -> gnumake And typing: make produces: make: *** No targets specified and no makefile found. Stop. So I know that it's there. But MacPorts can't find it. Any ideas? G-Man

  • Problem launching Java on Debian: "error while loading shared libraries: libjli.so"

    - by aetaur
    I'm trying to launch Java: $ java -version java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory $ ldd /usr/lib/jvm/java-6-openjdk/jre/bin/java linux-gate.so.1 => (0xb779f000) libz.so.1 => /usr/lib/libz.so.1 (0xb7780000) libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb7767000) libjli.so => /usr/lib/jvm/java-6-openjdk/jre/bin/../lib/i386/jli/libjli.so (0xb7762000) libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb775e000) libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7603000) /lib/ld-linux.so.2 (0xb77a0000 $ ls /usr/lib/jvm/java-6-openjdk/jre/bin/../lib/i386/jli/ libjli.so However Java does work under root: $ sudo java -version java version "1.6.0_18" OpenJDK Runtime Environment (IcedTea6 1.8.7) (6b18-1.8.7-2~lenny1) OpenJDK Client VM (build 14.0-b16, mixed mode, sharing) How can I launch Java as a regular user without errors?
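
    When a shared library loads for root but not for a normal user, the usual suspects are a different java binary on the user's PATH or a directory somewhere along the library's path that the user cannot traverse; a small diagnostic sketch (the library path below is the resolved form of the ldd output above):

        which java                  # is the user's java the same binary root ends up running?
        readlink -f "$(which java)"
        # can the user descend through every directory down to the library? (namei ships with util-linux)
        namei -m /usr/lib/jvm/java-6-openjdk/jre/lib/i386/jli/libjli.so
        echo "$LD_LIBRARY_PATH"     # a stray entry pointing at an unreadable directory also causes this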

  • How can I get vim to set an ACL on its swap files?

    - by thsutton
    I use vim on an OS X Snow Leopard Server machine. A number of the directories I work in have ACLs (so that various groups of users can access them over AFP) that are inherited. For some reason, when I'm working in one of these directories, vim cannot read its own swap files. It can create them fine but can't read them, which, for some reason, makes it display the "swap file already exists" message (and no, the swap file does not already exist). vim -r lists the newly created swap file as "[cannot be read]". The owner and group are correct and the permissions are 0600, and the ACLs on the swap file and the file I'm editing are identical (as disclosed by ls -le and compared with diff). groups returns the same thing whether invoked from my login shell or via :! in vim. Has anyone encountered (and hopefully resolved) a problem like this before?
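
    One workaround, sketched below, is not to fight the inherited ACLs at all but to tell vim to keep its swap files in a private directory that has no ACLs on it (the ~/.vim/swap path is just a suggestion):

        mkdir -p ~/.vim/swap
        # the trailing // makes vim encode the full path of the edited file into the swap file name,
        # so files with the same name in different directories don't collide
        printf '%s\n' 'set directory=~/.vim/swap//' >> ~/.vimrc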

  • kinit gives me a Kerberos ticket, but no AFS token

    - by Tomas Lycken
    I'm trying to set up access to my university's IT environment from my laptop running Ubuntu 12.04, by (mostly) following the IT department's guides on AFS and Kerberos. I can get AFS working well enough that I can navigate to my home folder (located in the nada.kth.se cell of AFS), and I can get Kerberos working well enough to forward tickets and authenticate me when I connect with ssh. However, I don't seem to get any AFS tokens locally, on my machine, so I can't just go to /afs/nada.kth.se/.../folder/file.txt on my machine and edit it. I can't even stand in /afs/nada.kth.se/.../folder and run ls without getting Permission denied errors. Why doesn't kinit -f [email protected] give me an AFS token? What do I need to do to get one?
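
    Getting the Kerberos ticket and getting an AFS token are two separate steps: kinit only does the former, and on OpenAFS clients the token is usually fetched afterwards with aklog (shipped in the openafs-krb5 package on Ubuntu). A sketch, with user@REALM as a placeholder for the redacted principal:

        kinit -f user@REALM       # obtain the forwardable Kerberos TGT, as before
        aklog -d                  # exchange it for an AFS token in the workstation's default cell; -d prints what it is doing
        tokens                    # verify: this should now list a token for nada.kth.se

    If aklog cannot work out the cell or realm on its own, it accepts them explicitly; check aklog's man page for the exact flags on your version.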

  • View hidden contents of usb device

    - by Srikanth Suresh
    I have a USB stick with the following contents according to ls -lah: total 8.0K drwx------ 1 srikanth srikanth 4.0K May 27 22:54 . drwxr-xr-x 4 root root 4.0K May 28 19:37 .. -rw------- 2 srikanth srikanth 0 May 27 22:52 Files.az3w Viewing the properties of the folder gives me the following information: 90.4 MB used and 16.1 GB free. There is data on this pen drive which I am currently unable to view, and it is sensitive. After searching for how contents can be hidden on a USB drive, I think there is a hidden partition here that I can't access. How should I proceed to view the contents without damaging the files already present?
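
    A safe way to check for a hidden partition is to inspect the partition table and filesystem signatures read-only before touching anything; a sketch, assuming the stick shows up as /dev/sdb (confirm the device name with df or dmesg first so you don't probe the wrong disk):

        df -h .                        # run inside the mount point: shows which /dev/sdbX is mounted here
        sudo fdisk -l /dev/sdb         # list every partition on the stick (read-only)
        sudo blkid /dev/sdb*           # report the filesystem type and label of each partition

    If fdisk shows a partition other than the one currently mounted, it can be mounted read-only (mount -o ro) to look at its contents without risking the data.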

  • How do I unmount a tmpfs that is missing from /etc/mtab?

    - by vrinek
    I have the following line in /etc/fstab: none /home/hydra/tmp tmpfs user,noauto,size=1000M,uid=1001,gid=1001 0 0 I can do mount ~/tmp as user hydra and it gets mounted ok. The only problem is that even though it gets added to /proc/mounts, it does not get added to /etc/mtab. When I try umount ~/tmp (again as hydra) it complains: umount: /home/hydra/tmp is not mounted (according to mtab) And when I try -f or -n, it complains that I am not root. Some more info on the system that manifests this problem: On sudo umount /home/hydra/tmp, the fs gets unmounted (I think I needed to use -f too). The Debian version is testing. mount --version - mount from util-linux 2.19.1 (with libblkid and selinux support) ls -l /etc/mtab - -rw-r--r-- 1 root root 921 Nov 14 09:08 /etc/mtab cat /proc/mounts | grep rootfs - rootfs / rootfs rw 0 0 Neither /home, /home/hydra nor /home/hydra/tmp is a symbolic link.

  • Setting up NIS/NFS on Mac OS 10.6

    - by evan
    We have an Ubuntu NIS/NFS server at work and we recently got a few new iMacs. Is there a way to set them up so they can use the Linux user accounts and mount the shared NFS files? Are there any guides on how to do this? I've been googling with no success. I tried getting NFS to work by connecting to the server via Disk Utility, but after I run 'sudo automount' from the command line and ls the directory I tried to mount it to (/Volumes/nfs), it gives a permissions error. If there isn't a way to do this, does anyone know of any not-too-complicated ways to share user accounts and files between Mac and Linux computers (and even, hypothetically, a Windows computer one day)? I know it's kind of a huge question, but I'll greatly appreciate any advice on the topic. Thanks!
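
    On the NFS side, one very common cause of that permissions error is that Linux NFS servers reject mounts coming from non-privileged source ports unless the export is marked insecure, and OS X does not use a privileged port by default. A sketch of the usual workaround (server name and export path are placeholders):

        sudo mkdir -p /Volumes/nfs
        sudo mount -t nfs -o resvport,rw server.example.com:/export/home /Volumes/nfs

    Alternatively, adding the insecure option to that export in /etc/exports on the Ubuntu server (and running exportfs -ra) avoids the need for resvport on every client.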

  • AFP AD ACL permissions issues with external drive

    - by AlanGBaker
    Mac OS X Server 10.4.11, connected to an AD domain, serving AFP shares to Mac OS X 10.5.8 clients. If I create a share on the internal RAID of the server with an ACL that allows RW to all ("Domain Users"), then it works, but a share created identically on the external RAID appliance (Drobo v2) doesn't. When the share from the Drobo is mounted, it shows no sign that it has any ACLs associated with it: neither in the Finder (Get Info), nor when checked via the terminal with "ls -lae". The Drobo does show that the ACLs exist when I ssh into the server and check it there, but when the clients mount that share, they just... disappear. Any thoughts?

  • Removing DS_Store files and variants?

    - by Ron Gejman
    I am running an Ubuntu 10.04.1 LTS server. Frequently I open files over AFP from my Mac. Inevitably this creates .DS_Store files on the server (although for some reason they are named :2eDS_Store). However, it also creates variants of the DS_Store files. These variants are often named similarly to other files in that directory. E.g.: ~$ ls total 60K -rw-r--r-- 1 tarakhovsky 16K 2010-11-30 18:28 :2eDS_Store drwx--S--- 4 tarakhovsky 4.0K 2010-11-08 13:58 :2eTemporaryItems/ lrwxrwxrwx 1 tarakhovsky 15 2010-10-19 17:44 bigdisk -> /media/bigdisk// ... drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-03 18:24 Temporary Items/ drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-30 01:34 tmp/ ... I've disabled creation of .DS_Store files using: defaults write com.apple.desktopservices DSDontWriteNetworkStores true so hopefully this won't continue to occur, but I really want to get rid of all of the existing variants already on the server. Any ideas as to why these variants are being created and how I can get rid of them all?
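
    To clean out the files that already exist, a find over the affected tree works; the patterns below assume every variant starts with the literal :2e prefix shown above, and /path/to/share is a placeholder. Run the -print version first and only switch to -delete once the list looks right:

        # dry run: list the candidates
        find /path/to/share \( -name ':2e*' -o -name '.DS_Store' \) -print
        # after reviewing the list, remove them
        find /path/to/share \( -name ':2e*' -o -name '.DS_Store' \) -delete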

  • Cannot use scp on Mac OS X

    - by Robert
    Hi all, when I try to copy any file with scp on Mac OS X Snow Leopard from another machine I get this error: scp [email protected]:/home/me/file.zip . Password: ... ---> Couldn't open /dev/null: Permission denied This is the output of "ls -l /dev/null": crw-rw-rw- 1 root wheel 3, 2 May 14 14:10 /dev/null I am in the group wheel, and even if I do "sudo scp..." it doesn't work. It's driving me crazy, do you have any suggestions? Thanks!

  • nfs server on cygwin slow

    - by Weltenwanderer
    The setup: We run an instance of Cygwin nfsd on a Windows 2008 Server (Xeon 3.2 GHz). There are several Sun Solaris and SunOS machines accessing the shares. This is the exports file: /disk3 (rw,all_squash) /disk2 (rw,all_squash) Those paths are soft-linked to the relevant cygdrive/d/path/to/dir paths. Some of the folders contain up to 10k files. The problem: ls -la on the mounted folder on the Sun boxes takes 2-3 minutes, and the general read performance is really bad. cat filename displays the file in slow bursts, and this hurts performance on tasks that access those shared files heavily. Processor load is not the issue: the NFS server idles most of the time, and the Cygwin tasks never get over 1% load.

  • Graphing services using pnp4nagios

    - by Matias
    I've managed to install pnp4nagios 0.6.3 and I'm a bit confused about how pnp4nagios generates graphs. Almost out of the box, it started graphing ping and some http servers (not all of them). But how can I make it graph things like disk utilization (when that value comes from SNMP)? For example: ls /usr/local/pnp4nagios/var/perfdata/isis/ Cola_de_Mail.rrd Cola_de_Mail.xml HTTP.rrd HTTP.xml PING.rrd PING.xml Those are checks running on the host isis, but there are many other checks for that server that are not taken into account by pnp4nagios. How can I make pnp4nagios "see" the other checks? Thanks!

  • SSH command from PHP script - nothing, yet works at the command line

    - by waxical
    I'm working on an EC2 box and trying to run an SSH command against another box. The command works on the command line, even in php -a interactive mode. However, it does not work when run as apache. Example cmd: system('ssh -i /home/me/keys/key.pem [email protected] "ls"'); I've tried adding apache to the wheel group, and to gshadow on both boxes. I've also just tried chowning the pem file to apache. Nothing. Yet the command responds fine in the two other use cases outlined. What's going on here? Anyone know?
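
    A frequent culprit here is that the apache user cannot complete the host-key and key-file steps non-interactively: it has no usable ~/.ssh/known_hosts, and it may not even be able to traverse /home/me to reach the key. A hedged diagnostic sketch (remote-host is a placeholder for the redacted target; adjust the key location so the apache user can read it):

        # run the exact command as the apache user and keep stderr, so the real error is visible
        sudo -u apache ssh -i /home/me/keys/key.pem \
            -o BatchMode=yes \
            -o StrictHostKeyChecking=no \
            -o UserKnownHostsFile=/dev/null \
            user@remote-host "ls" 2>&1

    Whatever that prints (permission denied on the key, a host-key prompt, etc.) is what the PHP system() call is silently hitting; appending 2>&1 inside the PHP command shows the same thing in the web context.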

  • changing user in ubuntu

    - by Rahul Mehta
    Hi, this is my ls -al output. The zfapi folder is owned by root; how can I change it to www-data? Also, please advise what the first root and the second root are. Thanks drwxr-xr-x 4 www-data www-data 4096 2011-01-06 18:21 cdnapi -rw-r--r-- 1 www-data www-data 678 2010-08-30 12:02 config.js drwxr-xr-x 4 www-data www-data 4096 2010-11-23 15:55 css drwxr-xr-x 7 www-data www-data 4096 2010-11-17 13:12 images -rw-r--r-- 1 www-data www-data 25064 2010-12-17 18:26 index.html -rw-r--r-- 1 www-data www-data 19830 2010-12-18 11:24 init.js drwxr-xr-x 2 www-data www-data 4096 2010-12-02 12:34 lib -rw-r--r-- 1 www-data www-data 18758 2010-12-06 18:00 styles.css -rw-r--r-- 1 www-data www-data 1081 2010-10-21 17:56 testbganim.html drwxr-xr-x 2 www-data www-data 4096 2010-12-17 11:15 yapi drwxr-xr-x 7 root root 4096 2011-01-07 18:20 zfapi
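
    In ls -l output the third column is the owning user and the fourth is the group, so zfapi is currently owned by user root and group root while everything else is www-data:www-data. A one-line sketch to bring it in line:

        sudo chown -R www-data:www-data zfapi
        ls -ld zfapi        # verify: the two columns should now read www-data www-data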

  • Error while loading shared libraries - libwebsock

    - by kittyPL
    I'm trying to set up libwebsock, a simple C websocket library. I followed the installation procedure from the INSTALL file, and everything went fine. I'm able to compile the test program given in the examples. But when I want to run my executable, a wild error appears: ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory I checked /usr/local/lib twice; libwebsock.so.1 exists and is doing very well. I also tried copying the lib into the echo folder (so it's placed next to the binary), still the same error. It's quite funny to me: shadowz@Ubu:~/WebSocket$ ls echo echo.c echo.cpp libwebsock.so.1 shadowz@Ubu:~/WebSocket$ ./echo ./echo: error while loading shared libraries: libwebsock.so.1: cannot open shared object file: No such file or directory Any suggestions? I'm running out of ideas...
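
    The dynamic linker only looks in its cache (built from /etc/ld.so.conf and /etc/ld.so.conf.d/) plus LD_LIBRARY_PATH, so a library freshly installed into /usr/local/lib stays invisible until the cache is refreshed; a sketch of the two usual fixes:

        # preferred: rebuild the linker cache (on Ubuntu /usr/local/lib is already in the search path)
        sudo ldconfig
        ldconfig -p | grep libwebsock       # verify the library is now known
        ./echo

        # or, as a quick one-off for the current shell only
        export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
        ./echo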

  • Ubuntu 12.04 Server: permissions on /var/www for newly copied files

    - by Abe
    I ran the following commands to set up permissions on the /var/www folder on my Ubuntu 12.04 Server: sudo usermod -g www-data abe sudo chown -R www-data:www-data /var/www sudo chmod -R 775 /var/www I downloaded WordPress using wget into my /var/www folder and unzipped the downloaded file: cd /var/www wget http://wordpress.org/latest.zip mv latest.zip wordpress.zip unzip wordpress.zip I created a new database and user in MySQL and attempted to run the setup process through the web interface. When I enter the configuration info in WordPress, I run into the following error message: Sorry, but I can't write the wp-config.php file. When I run ls -la, I see that the files are owned by my user abe, but they are part of the group www-data. Would I have to run the chmod command every time I copy new files to /var/www?
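
    The chmod -R 775 only changed files that existed at that moment; anything copied or unzipped afterwards gets whatever your umask hands out (typically 644/755, so www-data cannot write wp-config.php into the new tree). Two habits that avoid re-running chmod every time, sketched under the assumption that the unzipped tree lives at /var/www/wordpress:

        # fix the tree you just unpacked
        sudo chmod -R g+w /var/www/wordpress
        # make new directories keep assigning the www-data group automatically
        sudo find /var/www -type d -exec chmod g+s {} \;
        # and give your own interactive shells a group-friendly umask for future copies
        echo 'umask 002' >> ~/.bashrc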

  • Mount CIFS share with autofs

    - by Phanto
    I have a system running RHEL 5.5, and I am trying to mount a Windows share on a server using autofs. (Due to the network not being ready upon startup, I do not want to utilize fstab.) I am able to mount the shares manually, but autofs is just not mounting them. Here are the files I am working with: At the end of /etc/auto.master, I have: ## Mount this test share: /test /etc/auto.test --timeout=60 In /etc/auto.test, I have: test -fstype=cifs,username=testuser,domain=domain.com,password=password ://server/test I then restart the autofs service. However, this does not work. ls-ing the directory does not return any results. I have followed all these guides on the web, and I either don't understand them, or they.just.don't.work. Thank You
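
    One thing worth checking before digging deeper: autofs hands the option string straight to mount.cifs, and inline passwords or domains with awkward characters often break that parse, so the usual sketch is to move them into a root-only credentials file (the values below simply mirror the placeholders in the map):

        # /etc/auto.test
        test -fstype=cifs,rw,credentials=/etc/cifs.credentials ://server/test

        # /etc/cifs.credentials  (chown root:root, chmod 600)
        username=testuser
        password=password
        domain=domain.com

    After service autofs restart, running the daemon in the foreground with the service stopped (automount -f -v) prints the actual mount command and its error, which usually points at the real problem.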

  • NFSv4 - ACLs not working

    - by alex
    I've setup an NFSv4 server (Debian) and client (Ubuntu/Mythbuntu). It seems that uid username mapping is working nicely out-of-the-box (I get correct usernames on ls -l if the usernames match between the two boxes, even if the uids don't), but ACLs are not working. I've installed nfs4-acl-tools and I can read the ACLs correctly on the client, but they don't get applied. What needs to be done for ACLs to work? To clarify; username mapping works for regular permissions. ACLs are applied using uid/gid (I can even set ACLs by uid and they work).

  • How to process a pipe with multiple commands independently?

    - by yarun can
    How can I pass the output of a single command through multiple commands without truncating it at each step? For example, ls -al | grep -i something will pass every line that has "something" in it to the next pipe, which is fine, but that also means every other line in the pipe is discarded since it doesn't match the condition. What I want is to be able to operate on a single pipe with many commands independently. In this case it's a pipe from Mutt which passes the whole message body. I want to grep, sed, delete, and maybe assign each of these to bash variables. Initially what I want is to be able to assign the "message id" to one variable, the "subject" to another variable, etc., and then pass those as arguments to the proper commands. Here is how it would look: MessageBodyFromMutt | grep something -Ax -Bx | grep another thing from the original message | sed some stuff from the original message | cut from here to there Obviously the above line does not do what I want. I want all these commands to operate on the original message body. I hope it makes sense.
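
    In bash, tee combined with process substitution gives every command its own complete copy of the stream, which is the "operate independently" behaviour asked for; a sketch reusing the placeholder names from the question (the grep/sed patterns are illustrative only):

        # each >(...) receives the full, untruncated message
        MessageBodyFromMutt | tee \
            >(grep -A2 -B2 'something'      > /tmp/part1)   \
            >(sed -n 's/^Message-ID: //p'   > /tmp/msgid)   \
            >(sed -n 's/^Subject: //p'      > /tmp/subject) \
            > /dev/null

        # or read the message once into a variable and run each extraction against that
        body=$(cat)        # e.g. at the top of a script that Mutt pipes the message into
        msgid=$(printf '%s\n' "$body" | sed -n 's/^Message-ID: //p')
        subject=$(printf '%s\n' "$body" | sed -n 's/^Subject: //p')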

  • Linux - How to run Firefox with the at command

    - by conualfy
    I am trying to run a specific command at a desired time. I found at for this, and it seems to work fine if I run: echo "ls -al / > /home/florin/test.txt" | at 4:21am But I want to run a different thing: /usr/bin/firefox -new-tab http://google.ro I tried adapting the first line to my action (running it in a terminal opens a new Firefox tab with http://google.ro, so the command is correct), but with at, it does not work: echo "firefox -new-tab http://google.ro" | at 4:23am The task seems to be scheduled, but it does not run. When running the previous line I get the default reply from at: warning: commands will be executed using /bin/sh Should my Firefox command be run differently in sh? Is there a way to do my action with at, or some other way? Thanks a lot!
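
    Jobs started by at run with a minimal environment and, in particular, no DISPLAY, so a GUI program has nowhere to open its window; a sketch that passes the display along (assuming the X session is on :0, which is the usual case for a single local session):

        echo 'DISPLAY=:0 firefox -new-tab http://google.ro' | at 4:23am
        atq                     # confirm the job is queued

    If it still does not start, the job may also need XAUTHORITY pointing at your session's authority file (often $HOME/.Xauthority); at mails any error output to the local user, which is a good place to look for the reason.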

  • vsFTPd and iptables - how to configure them in CentOS 5.5?

    - by Vincenzo
    I've installed vsFTPd on CentOS 5.5, on TWO servers, and added this rule to their iptables: -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT It looks like this is not enough, since when I try to upload a file from one server to another, I get this result (the IP address is masked): # ftp 99.99.99.99 Connected to …com (99.99.99.99). 220 (vsFTPd 2.0.5) Name (99.99.99.99:root): vinny 331 Please specify the password. Password: 230 Login successful. Remote system type is UNIX. Using binary mode to transfer files. ftp> ls 227 Entering Passive Mode (99,99,99,99,107,74) ftp: connect: No route to host I've found a few articles on the net about the second rule I have to add to iptables, but I didn't find the right syntax for it. Could you please help?
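
    The "No route to host" right after "227 Entering Passive Mode" is the firewall dropping the passive-mode data connection, which comes back on a random high port that the port-21 rule never covers. Two common fixes on CentOS 5, sketched with an arbitrary port range:

        # 1) pin the passive data ports in /etc/vsftpd/vsftpd.conf ...
        pasv_min_port=64000
        pasv_max_port=64100
        # ... and open that range next to the existing port-21 rule
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 64000:64100 -j ACCEPT

        # 2) or let connection tracking follow FTP so RELATED data connections are accepted
        modprobe ip_conntrack_ftp    # persist it via IPTABLES_MODULES in /etc/sysconfig/iptables-config
        -A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

    Restart vsftpd and iptables after either change, and apply it on both servers if transfers go in both directions.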
