Search Results

Search found 4860 results on 195 pages for 'sudo petruza'.


  • OSSEC is not running

    - by batman
    I have two EC2 instances. On one I have installed the OSSEC server, and on the other the OSSEC agent. Here is my server config, INBOUND (security group/firewall):
      port:514 source:0.0.0.0/0
      port:1514 source:0.0.0.0/0
    But it does not seem to be working. In my agent log file I keep getting:
      2012/08/28 06:52:52 ossec-agentd: INFO: Using IPv4 for: x.x.x.x.x.x .
      2012/08/28 06:53:13 ossec-agentd(4101): WARN: Waiting for server reply (not started). Tried: 'x.x.x.x.x'.
    Edit: Running sudo netstat --inet -nlp | grep ossec, I get:
      udp 0 0 0.0.0.0:1514 0.0.0.0:* 26027/ossec-remoted
    Where am I making the mistake?
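    A hedged first check, since the question doesn't show whether the agent key was ever registered: OSSEC refuses agent traffic until the key generated on the server is imported on the agent, and "Waiting for server reply (not started)" is a symptom of that or of UDP 1514 being blocked. A minimal sketch, assuming standard OSSEC paths; <server-ip> is a placeholder:
      # from the agent box: no immediate ICMP error suggests UDP 1514 is reachable
      echo ping | nc -u -w 3 <server-ip> 1514
      # re-import the agent key extracted on the server, then restart
      sudo /var/ossec/bin/manage_agents
      sudo /var/ossec/bin/ossec-control restart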

    Read the article

  • VMware Tools on Ubuntu Server 10.10 kernel source problem

    - by Hamid Elaosta
    After installing VMware Tools and running the VMware config, the config needs my kernel headers to compile some modules. OK, so I'll give it them, but it just won't work. It asks for the path of the directory of C header files that match my running kernel. If I run uname -r I get:
      2.6.35-22-generic-pae
    So I tell it the source path is /lib/modules/2.6.25-22-generic-pae/build/include, and it returns:
      The directory of kernel headers (version @@VMWARE@@ UTS_RELEASE) does not match your running kernel (version 2.6.35-22-generic-pae).
    I'm confused; can anyone offer suggestions please? I installed the kernel source and headers myself using sudo apt-get install linux-headers-$(uname -r)
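    Two things stand out, hedged as observations rather than a verified fix: the path typed above says 2.6.25 while the running kernel is 2.6.35, and the @@VMWARE@@ UTS_RELEASE placeholder in the error suggests the installer never parsed a valid version header in the supplied directory. A sketch of the usual remedy:
      # install headers that match the running kernel exactly
      sudo apt-get install linux-headers-$(uname -r)
      # this is the include path the VMware config script typically accepts
      ls -d /lib/modules/$(uname -r)/build/include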

    Read the article

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in case of a system failure the Write-Ahead Log might not be saved correctly and in fact might get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync; however, I was unable to make any configuration changes with hdparm, since I get the following:
      $ sudo hdparm -I /dev/xvdf
      /dev/xvdf:
      HDIO_DRIVE_CMD(identify) failed: Invalid argument
    It looks like that option is not available in the kind of setup you get on EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?
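    A hedged note on why hdparm fails here: EBS volumes are network-attached virtual block devices, so ATA cache commands like hdparm -W have no real drive to talk to; what matters for WAL safety is whether fsync/fdatasync calls reach durable storage. PostgreSQL ships a probe for exactly that (the package name below is an assumption for Debian-style systems; older releases call the tool test_fsync):
      sudo apt-get install postgresql-contrib
      # writes and syncs a scratch file, reporting how each sync method behaves
      pg_test_fsync -f /var/lib/postgresql/fsync-test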

    Read the article

  • Windows 7 connect to Lion file sharing

    - by McKvack
    Trying to access my Mac from a Windows 7 computer, I fail with the infamous error 86, incorrect password. Now this appears to be a well-known problem, with countless threads on the internet giving as many "solutions" as there are discussion threads about it (mostly ranging from installing third-party commercial Samba servers, to switching to some other protocol, to compiling a plain-vanilla Samba installation - the latter of which I will probably do when I give up on this :) ) I am stubborn, and I believe there must be some problem here that can be solved or worked around, but there is surprisingly little detail about this problem. It appears to have something to do with a mismatch of authentication methods. Trying to run Samba in debug mode:
      sudo /usr/sbin/smbd -debug -stdout
    gets me this output when trying to access it from Windows 7:
      ...
      smb1_dispatch_one [smb_dispatch.cpp:377] dispatching SMB_COM_SESSION_SETUP_ANDX
      smb1_dispatch_session_setup [session_setup.cpp:261] FIXME erase existing sessions
      log_gss_error [gssapi_mechanism.cpp:97] gssapi: gss-code: Miscellaneous failure (see text)
      log_gss_error [gssapi_mechanism.cpp:113] gssapi: mech-code: unknown mech-code 22 for mech unknown
    What is the problem here, and how do I fix it?
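    Two commonly suggested workarounds, both hedged since neither is verified against this exact setup: on the Lion side, each account must be explicitly enabled for Windows sharing (System Preferences > Sharing > File Sharing > Options), which stores the NTLM hash that SMB authentication needs; on the Windows 7 side, relaxing the LAN Manager authentication level sometimes clears the method mismatch:
      reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel /t REG_DWORD /d 1 /f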

    Read the article

  • iptables NAT configuration

    - by Sarp Kaya
    Hello, I am experiencing some issues with my iptables. Here's what I want to do:
      A(eth0)--------(eth0)B(eth2)---------------(eth2)C
    Brackets are interface names; A, B and C are hosts. Now I would like to forward port 80 of host C so that it can be accessed via host A. Host A is 192.168.1.254; host C is 192.168.3.2. I intentionally ACCEPTed all filter-chain options as the default policy because I wanted to make sure that NAT was working properly first. I enabled ip_forward. So here's what I have done:
      sudo iptables -A PREROUTING -t nat -p tcp -d 192.168.1.254 -j DNAT --to 192.168.3.2
    However it is not working. What am I missing here?
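    A sketch of the pieces usually needed, hedged because the question lists 192.168.1.254 as host A's own address, so the DNAT above may match the wrong destination; <B-eth0-ip> is a placeholder for whatever address on B the clients actually connect to:
      sudo sysctl -w net.ipv4.ip_forward=1
      # rewrite the destination for port 80 only, not all TCP
      sudo iptables -t nat -A PREROUTING -i eth0 -p tcp -d <B-eth0-ip> --dport 80 -j DNAT --to-destination 192.168.3.2
      # make replies route back through B
      sudo iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE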

    Read the article

  • Ubuntu - How to automount an external drive at a preconfigured mount point?

    - by Lars Haugseth
    Normally, when I attach an external USB drive to my Ubuntu system, the filesystem on it is automounted to /media/label. However, I'd like the filesystem to be mounted at a mount point of my choosing. I've added a line like this to my /etc/fstab:
      UUID=2BE905C238C1F724 /p ntfs-3g defaults 0 0 # Passport 320GB
    This allows me to manually mount the volume at /p by running sudo mount /p; however, the filesystem is no longer automounted when the drive is attached to the PC. What do I need to do to get automounting to this mount point to work, if at all possible?
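    One workable approach, hedged as a sketch: have udev run the mount when the drive appears. The rule below (a hypothetical /etc/udev/rules.d/99-passport.rules) matches the filesystem UUID from the fstab line above and reuses the existing /p entry:
      ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="2BE905C238C1F724", RUN+="/bin/mount /p"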

    Read the article

  • NIS server setup problem in Ubuntu

    - by Asma
    Hi, I set up NFS server-client, and that works properly. Now I am trying to set up NIS server-client on the same PCs. I am following the instructions in "SettingUpNISHowTo" at https://help.ubuntu.com/community/SettingUpNISHowTo. But at step 10, sudo /etc/init.d/nis restart fails with an error. If I try to use ypcat passwd to check, it shows the error:
      YPBINDPROC_DOMAIN: Domain not bound
      No such map passwd.byname. Reason: Can't bind to server which serves this domain
    Can anyone help me get rid of this problem? Are all the steps in the document correct for configuring the NIS server? Thanks in advance.
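    A few standard checks before retracing the guide, hedged since the failing step isn't shown in detail (paths are the Debian/Ubuntu defaults):
      domainname                            # must print the NIS domain, not (none)
      grep ^NISSERVER /etc/default/nis      # should read NISSERVER=master on the server
      sudo rpcinfo -p localhost | grep yp   # ypserv and ypbind must be registered with the portmapper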

    Read the article

  • Users in Ubuntu; can't figure it out

    - by Camran
    I am the only one who will have access to my website. I just installed my VPS and managed to get most stuff working. However, I'm stuck on the "members" part. Currently, everything has been done as "root". I have read posts saying that I should create a user, because root isn't ideal. I have found a thousand guides on how to create a user, but not what to do next.
    1. Should I create a user with adduser username and then add the user to a group? But which group?
    2. And will the user then be able to do everything I have done logged on as "root"?
    3. And can somebody please explain what "sudo" has to do with this? (if anything at all)
    Thanks
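    A minimal sketch of the usual setup (the username is a placeholder); on Ubuntu of this era membership in the admin group grants sudo rights, while newer releases use the sudo group instead:
      sudo adduser deploy
      sudo usermod -aG admin deploy   # or: -aG sudo on newer Ubuntu
    This also answers question 3: sudo is what lets the new account run individual commands as root (prefix them with sudo) without ever logging in as root itself.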

    Read the article

  • Cannot use scp on Mac OS X

    - by Robert
    Hi all, when I try to copy any file with scp on Mac OS X Snow Leopard from another machine, I get this error:
      scp [email protected]:/home/me/file.zip .
      Password: ...
      ---> Couldn't open /dev/null: Permission denied
    This is the output of ls -l /dev/null:
      crw-rw-rw- 1 root wheel 3, 2 May 14 14:10 /dev/null
    I am in the group wheel, and even if I do sudo scp ... it doesn't work. It's driving me crazy; do you have any suggestions? Thanks!
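    The listing above actually shows correct permissions, so a hedged guess is that /dev/null is intermittently being replaced or that something else in /dev is at fault. If it does turn out to be a mangled device node, this recreates it with the same major/minor numbers shown in the question:
      sudo rm /dev/null
      sudo mknod /dev/null c 3 2
      sudo chmod 666 /dev/null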

    Read the article

  • Adding Thunderbird-stable repository gives "can't find signing_key_fingerprint" error

    - by EBV2010
    I'm trying to install Thunderbird 11 on Kubuntu 10.04. I was able to do it on the machine I'm working on. To get a clean process that I can roll out to other clients, I re-installed the machine and repeated the process. This is what I did (I've left out the sudo for clarity):
      add-apt-repository ppa:ubuntu-mozilla-security/ppa
      apt-get update
      add-apt-repository ppa:mozilla-team/thunderbird-stable
    The last one resulted in this error:
      Error: can't find signing_key_fingerprint at https://launchpad.net/api/1.0/~mozilla-team/+archive/thunderbird-stable
    The machine as it was before re-installation gave no such message; it was built from the same sources. Bottom line: I got Thunderbird 11.0 to run on Kubuntu 10.04, but after re-installation, adding the repository gives an error and won't add. Is there a way to solve the signing_key_fingerprint error?
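    A manual fallback if add-apt-repository keeps failing to fetch the fingerprint, hedged because the key ID must be looked up on the PPA's Launchpad page (<KEY_ID> is a placeholder):
      echo "deb http://ppa.launchpad.net/mozilla-team/thunderbird-stable/ubuntu lucid main" | \
        sudo tee /etc/apt/sources.list.d/thunderbird-stable.list
      sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEY_ID>
      sudo apt-get update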

    Read the article

  • After closing the ssh terminal, the thin server is down

    - by Keating Wang
    I have a Rails project running on the Thin server (1.3.1) on an Ubuntu server. I SSH to the server and start Thin with the command thin start -C config/thin.yml. This is thin.yml:
      port: 3000
      log: log/thin.log
      timeout: 30
      chdir: /home/byht/56platform/dev/tracker
      environment: production
      servers: 1
      daemonize: true
    After Thin starts successfully, I visit the project and it works well. Then I close the terminal. I can still visit the pages that have already been visited, but when I visit pages not visited before closing the SSH terminal, a "500" error appears on the page. I didn't find any error messages in the log file. I have tried starting Thin with nohup and sudo, but they didn't help. If I sign in to the Ubuntu server locally, the problem disappears. But I need to sign in to the server and start Thin over SSH when I'm home.
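    A hedged workaround to try first: even a daemonized process can be bitten by the login session tearing down (a stray SIGHUP, or environment that vanishes with the session). setsid, part of util-linux, fully detaches Thin from the terminal's session:
      setsid thin start -C config/thin.yml < /dev/null > /dev/null 2>&1 &
    If pages still return 500 only after logout, comparing env inside the SSH session with the daemon's /proc/<pid>/environ may reveal which variable the not-yet-rendered pages depend on.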

    Read the article

  • "sh: /usr/sbin/xenstored: not found" - But it's there?

    - by Matt H
    What would cause running the file /usr/sbin/xenstored to print:
      sh: /usr/sbin/xenstored: not found
    even though the file /usr/sbin/xenstored is there and is not a symbolic link? Actually, I should be running this as root, which prints a similarly odd message:
      sudo: unable to execute /usr/sbin/xenstored: No such file or directory
    By the way, xenstored is not a script; it's an ELF executable. My guess is that it's because I haven't got all the dependent libraries installed. However, I would expect it to say something like this:
      ./xenstored: error while loading shared libraries: libxenctrl.so.4.0: cannot open shared object file: No such file or directory
    which is what you get when running xenstored on a system that doesn't have all the required libraries. Why do I get "not found" instead of the much more useful "cannot open shared object file"?
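    A likely explanation, hedged since the binary itself isn't shown: "not found" for an ELF file that plainly exists usually means its interpreter, the dynamic loader path embedded in the ELF header, is missing - for example a 64-bit binary on a 32-bit userland, or vice versa. Missing ordinary shared libraries produce the "cannot open shared object file" message instead. Checking which loader the binary wants:
      file /usr/sbin/xenstored
      readelf -l /usr/sbin/xenstored | grep interpreter
      # if the printed interpreter (e.g. /lib64/ld-linux-x86-64.so.2) doesn't
      # exist on this system, that explains the "not found"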

    Read the article

  • Printer Brother DCP-110C Linux 64-bit drivers

    - by Ondra Žižka
    Hi, I need a 64-bit Linux driver for the DCP-110C (for Ubuntu 10.04 64-bit). I found only 32-bit drivers here: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/index.html and tried to follow those instructions. During the installation, I got this:
      ondra@ondra-doma:~/Downloads$ sudo dpkg -i --force-all dcp110clpr-1.0.2-1.i386.deb
      dpkg: warning: overriding problem because --force enabled:
      package architecture (i386) does not match system (amd64)
      (Reading database ... 257283 files and directories currently installed.)
      Preparing to replace dcp110clpr 1.0.2-1 (using dcp110clpr-1.0.2-1.i386.deb) ...
      Unpacking replacement dcp110clpr ...
      Setting up dcp110clpr (1.0.2-1) ...
      ln: creating symbolic link `/usr/lib/libbrcompij2.so.1.0': File exists
      ln: creating symbolic link `/usr/lib/libbrcompij2.so.1': File exists
      ln: creating symbolic link `/usr/lib/libbrcompij2.so': File exists
    After installation, the printer is listed on the CUPS server but does not work (no command has any effect on the printer, which is, of course, on and connected). Has anyone found a working solution? Thanks, Ondra
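    Since only 32-bit drivers appear to exist for this model, the usual workaround is to give the i386 driver the 32-bit libraries it needs and install both halves of Brother's driver pair; this is hedged because the cupswrapper package name is an assumption based on Brother's naming scheme:
      # 32-bit runtime libraries for i386 packages on amd64 Ubuntu 10.04
      sudo apt-get install ia32-libs
      # the lpr driver and the matching cupswrapper package belong together
      sudo dpkg -i --force-all dcp110clpr-1.0.2-1.i386.deb
      sudo dpkg -i --force-all dcp110ccupswrapper-1.0.2-1.i386.deb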

    Read the article

  • Connect to wired and wireless networks at same time, Ubuntu

    - by Gary Chambers
    Currently, I have a media PC running Ubuntu 10.04 that I am trying to connect via a wired network cable directly to a NAS box, and wirelessly to the router. This works fine after I run:
      sudo /etc/init.d/networking restart
    but I can't get both interfaces to come up on system startup. My /etc/network/interfaces file reads as follows:
      auto eth0
      iface eth0 inet static
        address 10.0.1.2
        netmask 255.255.254.0
        broadcast 10.0.1.255
        network 10.0.1.0
      auto wlan2
      iface wlan2 inet dhcp
    As I say, I know this works, because I can get it going by restarting the network interfaces, but I can't bring them both up on system startup. Does anyone know why this might be?
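    Two hedged things to check: a wireless stanza with no wpa-ssid/wpa-psk lines only works on an open network, and "auto" interfaces are raised once, early in boot, possibly before a slow-loading wifi driver has created the device. Marking the interface hotplug-driven often fixes the timing half:
      allow-hotplug wlan2
      iface wlan2 inet dhcp
    If NetworkManager is also installed (the Ubuntu desktop default), it can fight over interfaces listed in /etc/network/interfaces, so it is worth ensuring wlan2 is managed in exactly one place.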

    Read the article

  • Installing Linux from External Card Reader

    - by Subhamoy Sengupta
    I have this problem: I was experimenting with whether I could use a memory card (SDHC) as a USB drive for all intents and purposes. When I put the card in a USB card reader, I can use it just like a regular USB stick, and it also shows up in the BBS popup menu as a USB stick. I tried to create an installation medium out of it like this:
      sudo dd if=/path/to/image of=/dev/sdb
    When I tried to boot from it, simply nothing happened. The cursor blinked a couple of times and jumped to the GRUB of my pre-existing GNU/Linux installation. What am I missing here? Is this not doable? I tried this with Xubuntu 12.04 and Arch Linux, by the way. I have also tried UNetbootin instead of dd. Nothing happened differently.
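    Two hedged checks (device names are examples; verify with lsblk first): only hybrid ISO images boot when written raw to a device, and some BIOSes simply won't boot from a card reader even when it enumerates as a USB stick:
      lsblk                    # confirm the reader really is /dev/sdb, not /dev/sdb1
      file /path/to/image      # a bootable hybrid image reports a boot sector here
      sudo dd if=/path/to/image of=/dev/sdb bs=4M && sync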

    Read the article

  • Installing ImageMagick on Amazon EC2

    - by Kapil Sharma
    Well, I'm a PHP developer who knows a few Linux commands to get my job done. I need to launch a Symfony 1.4 website on Amazon EC2. Everything is fine except Imagick. ImageMagick itself is installed through the following command:
      sudo yum install ImageMagick
    but its PHP library is not installed/configured, if that doesn't happen with the above command. In PHP I'm using Imagick, but the script is failing on Imagick. I know the problem is with the PHP Imagick extension, but I don't know how to fix it. On my dev box it's as simple as turning it on in WAMP. Can someone please suggest where I should look to confirm whether the Imagick PHP extension is installed and configured correctly?
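    A hedged sketch for a yum-based EC2 box (package names vary between Amazon Linux and other distributions): the ImageMagick C library and the imagick PHP extension are separate installs, and the extension usually comes from PECL:
      sudo yum install ImageMagick-devel php-devel php-pear gcc
      sudo pecl install imagick
      echo "extension=imagick.so" | sudo tee /etc/php.d/imagick.ini
      php -m | grep imagick    # prints "imagick" once the extension loads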

    Read the article

  • Getting kernel errors when manually mounting VirtualBox shared folders

    - by Ross
    Updated: I've rephrased this problem as I understand it a bit more now (and have encountered another problem). I'm using a Fedora 15 guest on a Windows 7 host using VirtualBox. I am trying to mount a partition on the host PC as a shared folder for use in the guest. The folder appears in /media and is accessible when I use the auto-mount feature when setting up the shared folder, but when I attempt to mount without auto-mount I get the following error:
      $ sudo mount.vboxsf data /mnt/host_data
      /sbin/mount.vboxsf: Could not add an entry to the mount table.: Invalid argument
    In addition, a popup appears (part of Fedora/GNOME) reporting a crash in the kernel package:
      WARNING: at lib/list_debug.c:26 __list_add+0x3e/0x81()
    However, the shared folder seems to work; I can certainly browse it (although everything seems to be executable, probably down to a Windows host). Is there something wrong with what I'm doing, or is this a bug (and in which case, should it be reported to the Linux kernel team or to VirtualBox)?
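    A hedged alternative invocation: calling the helper through mount itself, rather than running mount.vboxsf directly, lets mount maintain the mount-table entry and sets sane ownership (assuming the share is named data in the VM settings):
      sudo mount -t vboxsf -o uid=$(id -u),gid=$(id -g) data /mnt/host_data
    The everything-executable symptom is normal for vboxsf backed by an NTFS host; the fmode and dmode mount options can tone it down.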

    Read the article

  • Setting up Tornado with Nginx on Ubuntu 10.04 for production use

    - by DjangoRocks
    Hi all, I understand that there's an nginx configuration file at http://www.friendfeed.com, but I don't really know how to set up Tornado for production use on Ubuntu 10.04 with nginx. Here's my situation and assumptions:
    1. Assuming my Tornado project is set up as such:
      project/
        src/
        static/
        templates/
        project.py
    and I have installed Tornado by downloading the repository from GitHub and then sudo python setup.py install
    2. I've installed nginx and started it based on the instructions here: http://library.linode.com/web-servers/nginx/installation/ubuntu-10.04-lucid
    My questions are: Where does my nginx configuration file go? Within the src/ folder? After configuring nginx, how do I start my Tornado project?
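    A hedged sketch of the usual arrangement (paths and the port are assumptions): the nginx config lives under /etc/nginx/sites-available with a symlink in sites-enabled, never inside the project, and the Tornado process is started separately on the port nginx proxies to:
      # /etc/nginx/sites-available/project
      upstream tornado {
          server 127.0.0.1:8000;
      }
      server {
          listen 80;
          location /static/ { root /path/to/project; }
          location / { proxy_pass http://tornado; }
      }
    Enable it with a symlink into /etc/nginx/sites-enabled, reload nginx, then start the app with something like python project.py --port=8000 (how the port is read depends on the project's own code).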

    Read the article

  • Trouble with NFS file sharing on Synology 211 NAS and Ubuntu Client

    - by Aglystas
    I'm attempting to set up NFS file sharing and keep getting the error:
      mount.nfs: access denied by server while mounting 192.168.1.110:/myshared
    Here is the exact command I'm using to mount:
      sudo mount -o nolock 192.168.1.110:/myshared /home/emiller/MyShared
    I have set 'Enabled NFS' in DSM and set NFS privileges in the Shares section of the control panel. Here is the /etc/exports entry from the NAS:
      volume1/myshared 192.168.1.*(rw,sync,no_wdelay,no_root_squash,insecure_locks,anonuid=0,anongid=0)
    I read some things about hosts.allow and hosts.deny, but it seems that if they are empty they aren't used for anything. I can see the share when I run:
      showmount -e 192.168.1.110
    Any help would be appreciated in this matter.
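    A hedged guess based on the exports line shown: Synology exports the full volume path, so the client has to mount /volume1/myshared rather than the bare share name:
      showmount -e 192.168.1.110    # the path printed here is the one to mount
      sudo mount -o nolock 192.168.1.110:/volume1/myshared /home/emiller/MyShared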

    Read the article

  • "Slave" user accounts in GNU/Linux

    - by Vi
    How do I make one user account act like root for some other user account, e.g. able to read, write and chmod all its files, chown from that account to the master and back, kill/ptrace all its processes, and do everything root can, but limited to that particular slave account? For now I'm simulating this by allowing the "master" user to sudo -u slaveuser and setting setfacl -dRm u:masteruser:rwx ~slaveuser. It is useful because I run most desktop programs in separate user accounts, but sometimes need to move files between them. If it requires some simple kernel patch, that is OK.
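    A minimal sketch of the workaround described above, with the two usernames as placeholders; the sudoers fragment goes in a file such as /etc/sudoers.d/master-slave:
      # let masteruser run any command as slaveuser without a password
      masteruser ALL=(slaveuser) NOPASSWD: ALL
      # existing files, plus a default ACL so newly created files stay reachable
      setfacl -Rm  u:masteruser:rwx ~slaveuser
      setfacl -dRm u:masteruser:rwx ~slaveuser
    True per-account root (ptrace, kill, chown in both directions) has no stock equivalent; chown in particular is root-only on Linux, which is why the question reaches for a kernel patch.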

    Read the article

  • How to automatically accept the license when mounting Mac OS X .dmg files from the command line?

    - by Vitaly Kushner
    I'm automating my Mac installation using Puppet. As part of it I need to install several programs that come in .dmg format. I use the following to mount them:
      sudo /usr/bin/hdiutil mount -plist -nobrowse -readonly -quiet -mountrandom /tmp Program.dmg
    The problem is that some .dmg files come with a license attached, so the script gets stuck accepting the license. (There is no stdin/stdout when running under Puppet, so I can't manually approve it to continue.) Is there a way to pre-approve or force-approve the license?
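    A commonly cited workaround, hedged because behavior varies across OS X releases: hdiutil reads the license answer from stdin, so piping an affirmative reply auto-accepts the agreement:
      yes | sudo /usr/bin/hdiutil mount -plist -nobrowse -readonly -quiet -mountrandom /tmp Program.dmg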

    Read the article

  • Wireless router not connecting -> AP not associated

    - by candido
    I can no longer connect to the internet through my wireless router, after some months on Ubuntu 10.04. I can connect with the same laptop under Windows. My OS is Ubuntu 10.04, Linux 2.6.32-41, arch SMP i686. The wireless network controller is an Atheros AR9285 chipset (PCI Express); kernel module ath9k. I have tried a command-line connection:
      $ sudo /etc/init.d/network-manager stop   # stop the GUI network manager
      $ iwconfig wlan0 essid WLAN_3C key s:C001D20550B3C
      $ ifconfig
      Access Point: NOT-ASSOCIATED
      $ dmesg
      ... AP 00:1a:2b:08:60:49 associated
    Since the OS did associate with the router during boot (the "associated" message), why is connecting after boot and login not possible, either with network-manager or on the command line (the NOT-ASSOCIATED message)? Thanks in advance
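    A hedged manual-association sequence to narrow things down (ESSID and key taken from the question); bringing the interface up and scanning first shows whether the driver can even see the AP:
      sudo /etc/init.d/network-manager stop
      sudo ifconfig wlan0 up
      sudo iwlist wlan0 scan | grep -B2 -A4 WLAN_3C   # is the AP visible at all?
      sudo iwconfig wlan0 essid WLAN_3C key s:C001D20550B3C
      sudo iwconfig wlan0      # "Access Point:" should now show the router's MAC
      sudo dhclient wlan0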

    Read the article

  • How to make VirtualBox headless answer on rdp port?

    - by stiv
    I'd like to run Windows XP over RDP:
      $ VBoxManage modifyvm winxp32 --vrdeport 3389
      $ VBoxHeadless -s winxp32 -v on
      Oracle VM VirtualBox Headless Interface 4.1.18_Debian
      (C) 2008-2012 Oracle Corporation
      All rights reserved.
      (waiting)
    In another window:
      $ telnet localhost 3389
      Trying 127.0.0.1...
      telnet: Unable to connect to remote host: Connection refused
    Yes, I've read about the extension pack:
      $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.20-80170.vbox-extpack
      0%...
      Progress state: NS_ERROR_FAILURE
      VBoxManage: error: Failed to install "Oracle_VM_VirtualBox_Extension_Pack-4.1.20-80170.vbox-extpack": Extension pack 'Oracle VM VirtualBox Extension Pack' is already installed. In case of a reinstallation, please uninstall it first
    I've looked through all the manuals and all the help requests. No success. What's wrong? Any ideas?
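    A hedged observation: the commands above set the VRDE port but never enable the VRDE server itself, which is a separate per-VM switch; with the extension pack already installed (as the error message confirms), this is often the missing piece:
      VBoxManage modifyvm winxp32 --vrde on
      VBoxManage modifyvm winxp32 --vrdeport 3389
      VBoxHeadless -s winxp32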

    Read the article

  • Solr high CPU usage in Amazon EC2

    - by user644745
    I installed Solr 3.6 on my local Windows box and it worked fine. I installed Solr 4.0 on an Amazon EC2 Linux large instance and the CPU usage shot up to 100%, then held at 80-90% on average. I thought it could be because of 4.0, so I installed 3.6 on EC2 again, but again the CPU usage was 80-90% on average. With both versions, Solr works on EC2; I just don't know why the CPU usage is so high. I started the Solr server using:
      sudo nohup java -jar start.jar &
    On my local box Java 1.7 is installed, and on EC2 it is 1.6.0_24. I have mapped the Solr dir to an EBS volume:
      /dev/mapper/vg1-solr 8361916 1935928 6342128 24% /home/ec2-user/SOLR/solr/example/solr
    Is there any known issue?
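    A hedged first diagnostic pass (the heap flags below are examples, not tuned recommendations; jstack ships with the JDK): an unbounded default heap can keep the garbage collector spinning, and thread-level inspection shows whether GC or query work is burning the CPU:
      top -H -p $(pgrep -f start.jar)     # which threads are hot?
      sudo jstack $(pgrep -f start.jar)   # map hot thread IDs to Java stacks
      # restart with an explicit, bounded heap
      sudo nohup java -Xms512m -Xmx1g -jar start.jar &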

    Read the article

  • can't execute scripts compiled with shc

    - by serilain
    I'm trying to use shc to compile a shell script so that I can set the SUID bit on it and obfuscate what it's doing (I'm attempting to have it run as part of all new users' .bashrc). As a test, I wrote a script that's simply:
      #!/bin/bash
      env
    and compiled it using:
      shc -r -f script.sh
    However, when I try to run the resulting script by simply doing ./script.sh.x, even after setting it to 777 (just for testing purposes), I get "Operation not permitted; killed" unless I run it as sudo (which I don't want to have to do). Am I running afoul of some Ubuntu permission that won't let me run binaries created by shc? Thanks!
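    A hedged lead: shc-produced binaries ptrace themselves as an anti-debugging measure, and Ubuntu's Yama ptrace_scope restriction can make that self-trace fail for non-root users, which matches the works-under-sudo symptom:
      cat /proc/sys/kernel/yama/ptrace_scope     # 1 restricts ptrace; 0 permits it
      sudo sysctl -w kernel.yama.ptrace_scope=0  # test only; reconsider before rollout
      mount | grep noexec                        # also rule out a noexec filesystem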

    Read the article
