Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • alias gcc='gcc -fpermissive' or modifying ./configure script

    - by robo
    I am compiling a fairly big project from source. The compilation always ends with: error: invalid conversion from ‘const char*’ to ‘char*’ [-fpermissive]. I already compiled this project a year ago, so I know several solutions: adding a typecast to the appropriate line of C++ code (that led to an endless number of changes in each file, so I looked for the next solution); modifying a makefile to compile that file with the -fpermissive option (I had to modify a lot of lines in each makefile, so I found an even better solution); since "g++" or "gcc" was stored in a variable, I added -fpermissive to those variables. That is the best solution I have: it is sufficient to add the option to each makefile once. Unfortunately this software has a large number of subdirectories, so I need to modify more than 100 makefiles. It took me a whole day a year ago. Is there a faster way to do this? What about alias gcc='gcc -fpermissive'? I am not familiar with aliases, but it should be easy to try. Is the syntax correct? And is alias g++='g++ -fpermissive' correct too? Do I need to export the alias somehow? Will make respect the alias? Or should I change the ./configure script, or ./configure.in, or some other file?
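
    A shell alias is only expanded by your interactive shell; make and ./configure never see it, so the alias approach will not work. A minimal sketch of the usual alternative is to pass the flag through the standard compiler/flag variables once at the top level (CXXFLAGS/CFLAGS/CXX/CC are the autotools conventions; a hand-written build system may use different names):

        # At configure time: bake the flag into all generated makefiles
        ./configure CXXFLAGS="-fpermissive" CFLAGS="-fpermissive"

        # Or at build time only: command-line variables override makefile assignments
        make CXXFLAGS="-fpermissive" CFLAGS="-fpermissive"

        # If the makefiles hard-code the compiler name rather than the flags,
        # overriding the compiler variable often works too
        make CXX="g++ -fpermissive" CC="gcc -fpermissive"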

    Read the article

  • CentOS RPM database trashed, "rpm --rebuilddb" won't fix, can I recover using /var/lib/rpm/ from a 2

    - by user18330
    My RPM database is shot; neither rpm nor yum works. Supposedly "rpm --rebuilddb" will fix it, but it doesn't in my case. This server has three sister servers that are basically identical and have working RPM databases. I tried copying /var/lib/rpm/ from a working server to the sick one, but that didn't fix it either. Any ideas how I can use a good server's RPM database to fix the sick one?
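
    One thing rpm --rebuilddb does not do on its own is clear out stale Berkeley DB environment files, which are a common reason the rebuild appears to do nothing. A minimal sketch of the usual sequence (paths are the CentOS defaults):

        # Back up the current database first
        cp -a /var/lib/rpm /var/lib/rpm.bak

        # Remove the shared Berkeley DB environment files, then rebuild verbosely
        rm -f /var/lib/rpm/__db.*
        rpm -vv --rebuilddb

        # Sanity check
        rpm -qa | wc -l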

    Read the article

  • Are spurious TCP connections on port 53 a problem?

    - by Darren Greaves
    I run a server which, amongst other things, uses tinydns for DNS and axfrdns for handling transfer requests from our secondary DNS (another system). I understand that tinydns uses port 53 on UDP and axfrdns uses port 53 on TCP. I've configured axfrdns to allow connections only from my agreed secondary host. I run logcheck to monitor my logs, and every day I see spurious connections on port 53 (TCP) from seemingly random hosts. They usually turn out to be from ADSL connections. My question is: are these innocent requests or a security risk? I am happy to block repeat offenders using iptables but don't want to block innocent users of one of the websites I host. Thanks, Darren.
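
    If you do decide to filter, a minimal iptables sketch is to accept TCP/53 only from the agreed secondary and drop the rest; ordinary visitors to your websites never open TCP connections to port 53, so they are unaffected. The 192.0.2.10 address is a placeholder for the real secondary:

        # Allow zone transfers from the secondary only
        iptables -A INPUT -p tcp --dport 53 -s 192.0.2.10 -j ACCEPT

        # Optionally log a sample of the rest, then drop
        iptables -A INPUT -p tcp --dport 53 -m limit --limit 5/min -j LOG --log-prefix "tcp53: "
        iptables -A INPUT -p tcp --dport 53 -j DROP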

    Read the article

  • what service to restart for /var/log/auth.log to start

    - by Bond
    Here is the situation: the log files on my server had grown to several gigabytes, so I took a backup of /var/log, then manually went into each subdirectory of /var/log and, for the files that were big, ran cat > /var/log/file_which_is_big, pressed Enter twice and then Ctrl+C, basically overwriting those files with a blank line. Now when I open /var/log/auth.log I don't see any entries (which is expected, since I overwrote it), but even after I exit the SSH session and log in again, no new entries appear in auth.log. Is there any way, other than rebooting the machine, to make sure entries keep getting written to /var/log/auth.log? I am not sure which service writes to this file. This is an Ubuntu 10.04 server.
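
    For what it's worth, on Ubuntu 10.04 /var/log/auth.log is written by rsyslog, and the usual fix after logs have been emptied by hand is simply to restart it so that it reopens its output files. A minimal sketch, plus a one-liner for emptying a busy log file in place in the future:

        # Make rsyslog reopen its log files (no reboot needed)
        sudo service rsyslog restart

        # Empty a log file in place without removing it
        sudo sh -c ': > /var/log/auth.log'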

    Read the article

  • PXE Boot Fedora 17 Error

    - by DrifterDave
    When trying to boot the latest Fedora 17 CD via PXE, I am presented with the following error:

        dracut: fatal: no or empty root= argument

    So I added a root= option to my Fedora menu entry (shown below), but now receive this error instead:

        dracut Warning: Unable to process initqueue

    Any assistance would be greatly appreciated. Fedora.menu:

        LABEL 1
          MENU LABEL fedora 17 (32-bit)
          KERNEL fedora/17/i386/vmlinuz0
          APPEND method=nfs:192.168.1.101:/srv/install/fedora/17/i386/ lang=us keymap=us ip=dhcp ksdevice=eth1 noipv6 root=/dev/ram0 initrd=fedora/17/i386/initrd0.img ramdisk_size=10000
          TEXT HELP
          Install Fedora 17 (32-bit)
          ENDTEXT
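
    For what it's worth, the Fedora 17 installer's initramfs is dracut-based, and dracut wants root= to point at the installer's squashfs image rather than at /dev/ram0. The sketch below is only a guess at the shape of a working entry: it uses dracut's root=live: syntax with the image served over HTTP instead of NFS, and the server address and the LiveOS/squashfs.img path are assumptions, not taken from the question:

        LABEL 1
          MENU LABEL fedora 17 (32-bit)
          KERNEL fedora/17/i386/vmlinuz0
          APPEND initrd=fedora/17/i386/initrd0.img root=live:http://192.168.1.101/fedora/17/i386/LiveOS/squashfs.img ip=dhcp ksdevice=eth1 lang=us keymap=us noipv6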

    Read the article

  • Install Glibc2 using Yum

    - by Nerrve
    I'm trying to install glibc version 2.11, which is needed for OpenOffice 3.4 (https://issues.apache.org/ooo/show_bug.cgi?id=119393), but I can't seem to find it with yum. I already have the following packages installed:

        glibc.i686          2.5-49.el5_5.7    installed
        glibc.x86_64        2.5-49.el5_5.7    installed
        glibc-common.x86_64 2.5-49.el5_5.7    installed
        glibc-devel.x86_64  2.5-49.el5_5.7    installed
        glibc-headers.x86_64 2.5-49.el5_5.7   installed
        libc-client.x86_64  2004g-2.2.1       installed

    and the following available from updates:

        glibc.i686           2.5-81.el5_8.2   updates
        glibc.x86_64         2.5-81.el5_8.2   updates
        glibc-common.x86_64  2.5-81.el5_8.2   updates
        glibc-devel.i386     2.5-81.el5_8.2   updates
        glibc-devel.x86_64   2.5-81.el5_8.2   updates
        glibc-headers.x86_64 2.5-81.el5_8.2   updates
        glibc-utils.x86_64   2.5-81.el5_8.2   updates

    I ran the following to check the version, but it shows something different:

        [root@***** /]# ./lib64/libc.so.6
        GNU C Library stable release version 2.5, by Roland McGrath et al.

    Can someone please help? Thanks! EDIT: I'm running CentOS with kernel 2.6.18-128.1.10.el5.
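
    For what it's worth, on an el5-based system the base and updates repositories stay on glibc 2.5 for the life of the release, so yum will never offer 2.11 there, and replacing glibc out of band is a good way to break the whole system. A couple of commands to confirm what the enabled repositories can actually provide:

        # List every glibc version the enabled repositories offer
        yum --showduplicates list glibc

        # Show what is installed right now
        rpm -q glibc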

    Read the article

  • Why are UDP messages from outside the network received but not delivered?

    - by Warren Pena
    I have an Ubuntu Server 10.04 application I've developed that receives messages over a UDP port. The ultimate purpose of this application is to receive messages sent from workers' 3G modems out in the field. If I use netcat on either another Ubuntu Server or my Vista laptop (both on the same LAN as my test machine) to send a message, the message arrives correctly and appears in my application. However, if I go out to my car and use its 3G modem to send a message from the same Vista laptop, it doesn't work. If I run tcpdump -A, I see the message arrive correctly, but it's never delivered to my application. Clearly, the OS is the one choosing not to deliver the messages (otherwise they wouldn't appear in tcpdump, nor would my app receive them when they come from local machines). I have not installed any firewall software on this machine, nor am I aware of anything installed by default that would block the traffic. sudo iptables --list returns:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    I'm not too familiar with iptables, but it looks to me like that's telling it to not do anything. What could be going on that's preventing my messages from being delivered?
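
    A couple of hedged things worth checking, since the packet clearly reaches the host: that the application socket is bound to all interfaces rather than a single LAN address, and that reverse-path filtering is not silently discarding traffic whose source looks asymmetric to the kernel. The port below is a placeholder:

        # Is the socket bound to 0.0.0.0 (all interfaces) or to a LAN-only address?
        sudo netstat -ulnp | grep ':<your-port>'

        # Reverse-path filtering (1 = strict) can drop packets arriving "from the wrong side"
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter

        # Relax it temporarily for testing
        sudo sysctl -w net.ipv4.conf.all.rp_filter=0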

    Read the article

  • How to configure postfix for per-sender SASL authentication

    - by Marwan
    I have two gmail accounts, and I want to configure my local postfix server as a client which does SASL authentication with smtp.gmail.com:587 using credentials that depend on the sender address. So, let's say that my gmail accounts are [email protected] and [email protected]. If I send a mail with [email protected] in the From header field, then postfix should use the credentials [email protected]:passwd1 to do SASL authentication with the gmail SMTP server. Similarly with [email protected], it should use [email protected]:passwd2. Sounds fairly simple. Well, I followed the official postfix documentation at http://www.postfix.org/SASL_README.html, and I ended up with the following relevant configuration:

        /etc/postfix/main.cf

        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sender_dependent_authentication = yes
        sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
        smtp_tls_security_level = secure
        smtp_tls_CAfile = /etc/ssl/certs/Equifax_Secure_CA.pem
        smtp_tls_CApath = /etc/ssl/certs
        smtp_tls_session_cache_database = btree:/etc/postfix/smtp_scache
        smtp_tls_session_cache_timeout = 3600s
        smtp_tls_loglevel = 1
        tls_random_source = dev:/dev/urandom
        relayhost = smtp.gmail.com:587

        /etc/postfix/sasl_passwd

        [email protected]       [email protected]:passwd1
        [email protected]       [email protected]:passwd2
        smtp.gmail.com:587     [email protected]:passwd1

        /etc/postfix/sender_relay

        [email protected]       smtp.gmail.com:587
        [email protected]       smtp.gmail.com:587

    After I was done with the configuration I ran:

        $ postmap /etc/postfix/sasl_passwd
        $ postmap /etc/postfix/sender_relay
        $ /etc/init.d/postfix restart

    The problem is that when I send a mail from [email protected], the message ends up at the destination with sender address [email protected] and NOT [email protected], which means that postfix always ignores the per-sender configuration and sends the mail using the default credentials (the third line in /etc/postfix/sasl_passwd above). I have checked the configuration multiple times and even compared it to various blog posts addressing the same issue, but found it to be more or less the same as mine. So, can anyone point me in the right direction, in case I'm missing something? Many thanks.
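
    Two hedged checks that often narrow this down: ask postfix's own lookup tool what it sees for a given sender (the address below is a placeholder for one of the real accounts), and keep in mind that Gmail's SMTP servers rewrite the From address to the authenticated account unless the other address has been verified as a "Send mail as" alias, which can produce exactly this symptom even when the per-sender maps are correct.

        # What would postfix look up for this sender? (placeholder address)
        postmap -q "user@example.com" hash:/etc/postfix/sender_relay
        postmap -q "user@example.com" hash:/etc/postfix/sasl_passwd

        # Watch which relay and credentials are actually used during a test send
        tail -f /var/log/mail.log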

    Read the article

  • Weird scp behavior

    - by bryan1967
    I am trying to scp a file, but it returns immediately printing the date and no file is copied:

        [cosmo] Downloads > scp V17530-01_1of2.zip bryan@elphaba:Downloads
        bryan@elphaba's password:
        Sat Apr 10 13:35:41 PDT 2010

    I have never seen this before. I have confirmed that sshd is running on the target system and that the firewall is allowing 22/tcp. Any help on what is going on would be very much appreciated. Thanks, Bryan
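
    This is the classic symptom of the remote account's shell start-up files printing something on non-interactive logins (here, apparently a date), which corrupts the scp protocol stream. A minimal sketch of how to confirm and fix it, assuming the remote shell is bash; the exact file (.bashrc, .cshrc, ...) depends on the shell:

        # A non-interactive remote command must produce no output at all;
        # whatever prints here is what breaks scp
        ssh bryan@elphaba /bin/true

        # In the remote ~/.bashrc, guard anything that prints so it only runs interactively
        case $- in
            *i*) date ;;   # example: only show the date in interactive shells
        esac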

    Read the article

  • Reaver keeps repeating the same PIN

    - by Umair Ayub
    I have been trying to hack a WPA2 Wi-Fi network and so far I am stuck. The problem is that it keeps trying the same PIN over and over again. Here is the last reaver command I entered:

        reaver -i mon0 -b 2C:AB:25:51:F1:CF -vv -c 1 -S -L -f

    It does this (only one PIN, again and again):

        [+] Switching mon0 to channel 1
        [+] Waiting for beacon from 2C:AB:25:51:F1:CF
        [+] Associated with 2C:AB:25:51:F1:CF (ESSID: PTCL-BB)
        [+] Trying pin 12345670
        [+] Sending EAPOL START request
        [+] Received identity request
        [+] Sending identity response
        [!] WARNING: Receive timeout occurred
        [+] Sending WSC NACK
        [!] WPS transaction failed (code: 0x02), re-trying last pin
        [+] Trying pin 12345670
        [+] Sending EAPOL START request
        [+] Received identity request
        [+] Sending identity response
        [+] Received identity request
        [+] Sending identity response
        ^C
        [+] Nothing done, nothing to save.

    Read the article

  • Why doesn't pppd over ssh work here? Why can't I kill pppd?

    - by Peter V. Mørch
    I'm trying to setup a simple ppp tunnel over ssh. It works on several machines just fine. But on one machine, pppd gets "stuck": > pgrep pppd | xargs ps up USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 4178 0.0 0.1 3020 1088 pts/1 Ds+ 05:28 0:00 /usr/sbin/pppd Any attempt to kill it (even sudo kill -9 4178) has no effect that I can see. strace -p 4178 also hangs similarly. After it has been started for a while, I start getting messages in dmesg like shown below. It is started like so from another machine: ssh -t root@server /usr/sbin/pppd passive noauth When I do this to one of the machines that work, the remote end's pppd spits out garbage/binary data to the console (as expected). When I do it to the one that fails, I get no output from pppd, but the ssh session eventually times out. If I instead ssh to the machine, and then run /usr/sbin/pppd passive noauth in a separate step I also get the expected binary output. I now have a couple of questions: What could be up with the one machine where pppd fails? I don't even know where to start looking... What could be the difference between ssh -t root@server /usr/sbin/pppd passive noauth in a single step and ssh root@server and /usr/sbin/pppd passive noauth in two steps? How can it be that I can't kill the process even with sudo kill -9? The only way I know is to reboot. (I've tried searching for something similar but didn't get anywhere so I'm sorry I don't have any more leads) Any ideas? The problem machine runs in debian on VMware "hardware" (as do the ones that work) and it exhibits the problem when cloned and on both debian lenny (original) and squeeze (after upgrade) dmesg entries: [ 1198.727248] INFO: task pppd:4178 blocked for more than 120 seconds. [ 1198.727507] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1198.727904] pppd D ece2dc9c 0 4178 4174 0x00000004 [ 1198.727908] 00000098 00000082 f2503520 ece2dc9c 0000b1e7 00000000 c148d1c0 c148d1c0 [ 1198.727913] f2a06100 f6e071c0 00000000 ece2dc18 f5cd07e0 00000000 ece2d400 ece2dc9d [ 1198.727918] 00c52300 ece2dcbc f67bfef8 ec98e480 f291cec0 00000000 c10cf5b0 c10dfd21 [ 1198.727923] Call Trace: [ 1198.727926] [<c10cf5b0>] ? nameidata_to_filp+0x37/0x41 [ 1198.727929] [<c10dfd21>] ? dput+0x21/0xb7 [ 1198.727932] [<c11cfecc>] ? tty_ldisc_ref_wait+0x5f/0x76 [ 1198.727935] [<c104de7a>] ? wake_up_bit+0x5c/0x5c [ 1198.727938] [<c11cb91b>] ? tty_ioctl+0x85f/0x8ba [ 1198.727941] [<c10fec18>] ? do_lock_file_wait+0x3d/0xd9 [ 1198.727944] [<c1162c97>] ? _copy_from_user+0x2b/0x102 [ 1198.727946] [<c11cb0bc>] ? tty_check_change+0xb9/0xb9 [ 1198.727949] [<c10dbeb7>] ? do_vfs_ioctl+0x485/0x4c7 [ 1198.727952] [<c10db59a>] ? do_fcntl+0x24f/0x3a2 [ 1198.727954] [<c10dbf3a>] ? sys_ioctl+0x41/0x58 [ 1198.727957] [<c12c6a1f>] ? sysenter_do_call+0x12/0x28 [ 1318.457225] INFO: task sshd:4174 blocked for more than 120 seconds. [ 1318.457500] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1318.457896] sshd D f25024cc 0 4174 2393 0x00000000 [ 1318.457901] 00000098 00000086 f2a06940 f25024cc 0000b246 00000000 c148d1c0 c148d1c0 [ 1318.457906] f2503520 f6e071c0 00000000 3f056585 0000000f ece2d4bc 3f056585 f2503520 [ 1318.457911] ec98bb38 ec98bbdc 00000000 00000000 00000000 c12c09b5 f2503520 c10327cb [ 1318.457916] Call Trace: [ 1318.457926] [<c12c09b5>] ? schedule_hrtimeout_range_clock+0x3c/0xd9 [ 1318.457931] [<c10327cb>] ? try_to_wake_up+0x13f/0x13f [ 1318.457935] [<c11cfecc>] ? tty_ldisc_ref_wait+0x5f/0x76 [ 1318.457940] [<c104de7a>] ? 
wake_up_bit+0x5c/0x5c [ 1318.457943] [<c11c9ad3>] ? tty_poll+0x32/0x5e [ 1318.457947] [<c10dd4d5>] ? do_select+0x2a1/0x42e [ 1318.457950] [<c10dcb83>] ? poll_freewait+0x69/0x69 [ 1318.457953] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457955] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457958] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457960] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457963] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457965] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457968] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457971] [<c10429c2>] ? lock_timer_base+0x19/0x35 [ 1318.457974] [<c1042eb5>] ? __mod_timer+0x10c/0x116 [ 1318.457977] [<c1042f89>] ? mod_timer+0x69/0x6e [ 1318.457981] [<c121325d>] ? sk_reset_timer+0xc/0x16 [ 1318.457984] [<c1252f57>] ? tcp_event_new_data_sent+0x66/0x6b [ 1318.457987] [<c1255b85>] ? tcp_write_xmit+0x7a7/0x86a [ 1318.457990] [<c121760d>] ? __alloc_skb+0x50/0xfd [ 1318.457994] [<c12c12bc>] ? _raw_spin_lock_bh+0x8/0x1e [ 1318.457996] [<c1212e98>] ? release_sock+0x10/0xc4 [ 1318.457999] [<c124b543>] ? tcp_sendmsg+0x6dd/0x7b7 [ 1318.458003] [<c1162c97>] ? _copy_from_user+0x2b/0x102 [ 1318.458006] [<c10dd7a0>] ? core_sys_select+0x13e/0x1c3 [ 1318.458009] [<c12102a3>] ? sock_aio_write+0xc0/0xd4 [ 1318.458012] [<c10d0655>] ? do_sync_write+0xa0/0xe4 [ 1318.458016] [<c10b141c>] ? handle_mm_fault+0x222/0x238 [ 1318.458019] [<c10f6096>] ? fsnotify+0x1de/0x1f9 [ 1318.458022] [<c10dd9e8>] ? sys_select+0x6e/0x8f [ 1318.458024] [<c10d105e>] ? sys_write+0x3c/0x63 [ 1318.458028] [<c12c6a1f>] ? sysenter_do_call+0x12/0x28

    Read the article

  • XAMPP with PostgreSQL

    - by fred smith
    I'm looking for a package like XAMPP, but instead of MySQL it would use PostgreSQL. I've done some searching and haven't turned up anything other than doing a full server setup of both.

    Read the article

  • Error when Sending Emails

    - by dallasclark
    A client of mine keeps receiving the following message when sending mail, although their emails are actually sent successfully:

        Your outgoing (SMTP) e-mail server has reported an internal error...
        The server responded: 451 qq read error (#4.3.0)

    In the mail log (/usr/local/psa/var/log/maillog) I see the following:

        /var/qmail/bin/relaylock[3152]: /var/qmail/bin/relaylock

    My SMTP service is set up as follows, if this helps:

        service smtp
        {
            socket_type = stream
            protocol    = tcp
            wait        = no
            disable     = no
            user        = root
            instances   = UNLIMITED
            env         = SMTPAUTH=1
            server      = /var/qmail/bin/tcp-env
            server_args = -Rt0 /var/qmail/bin/relaylock /var/qmail/bin/qmail-smtpd /var/qmail/bin/smtp_auth /var/qmail/bin/true /var/qmail/bin/cmd5checkpw /var/qmail/bin/true
        }

    Read the article

  • What does %st mean in top?

    - by Ben
    Here is an example from my top: Cpu(s): 6.0%us, 3.0%sy, 0.0%ni, 78.7%id, 0.0%wa, 0.0%hi, 0.3%si, 12.0%st I am trying to figure out the significance of the %st field. I read that it means steal cpu and it represents time spent by the hypervisor, but I want to know what that actually means to me. Does it mean I may be on a busy physical server and someone else is using too much CPU on the server and they are taking from my VM? If I am using EBS could it be related to handling EBS I/O at the hypervisor level? Is it related to things running on my VM or is it completely unaffected by me?

    Read the article

  • Unix LVM: how to resize root lvm

    - by Hussein Sabbagh
    I took over a virtual server at work after a co-worker left. He, however, set up the server incorrectly at multiple stages, and I'm cleaning things up as I run into them... I've now realized that the file system is split across 2 logical volumes, both at 50 GB. One is mounted as the root directory and the other as /home. Sadly, the server has used 46 GB of the root LV and I need to expand it. I have already shrunk and remounted the home LV and resized the root LV, but I can't figure out how to unmount the root directory while the machine is running. Obviously this needs to be done before I can finalize the expansion, but I don't know how. I'd appreciate any help or a pointer in the right direction. Thanks in advance. PS: this is a CentOS server.
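
    For what it's worth, if the root filesystem is ext3/ext4 (the CentOS default) you should not need to unmount it at all: shrinking requires an unmounted filesystem, but growing can be done online. A minimal sketch, assuming the usual CentOS volume names and an arbitrary size (check yours with lvdisplay):

        # Extend the root LV by the space freed from /home (name and size are assumptions)
        lvextend -L +40G /dev/VolGroup00/LogVol00

        # Grow the mounted ext3/ext4 filesystem to fill the enlarged LV
        resize2fs /dev/VolGroup00/LogVol00

        df -h /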

    Read the article

  • Debian and Multipath IO problem

    - by tearman
    Basically the situation is: I have a box running Debian. Internally it has an Intel SCSI RAID controller controlling 2 hard drives in RAID1, which is where the OS is installed. Further, I have a QLogic fibre channel adapter that connects the unit to a Fibre Channel SAN. My installation process is to install Debian to the local drives and leave the QLogic firmware out of it for the time being; then, once I get the unit online, I install the firmware and drivers. This flips my internal drives from /dev/sda to /dev/sdc, which is a bit annoying but recoverable (I should probably address them by UUID anyway). Once I'm back online, I have to install multipath-tools (the SAN setup is multipathed). However, once I reboot the machine again, it fails on boot after discovering multipath targets, saying my local drives are busy and cannot be mounted to /root. Any idea what the problem may be here? Or at least how to make multipath ignore the internal drives, or disable it until after the unit boots?
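
    The usual way to keep multipath away from the internal RAID1 drives is a blacklist entry in /etc/multipath.conf, ideally by WWID so it survives the sda/sdc renaming. A sketch, with the WWID below being a placeholder you would replace with the real value:

        # See which devices multipath has grabbed and note the WWIDs of the internal disks
        multipath -ll

        # /etc/multipath.conf: tell multipath to ignore the internal drives
        blacklist {
            wwid 3600508b400105e210000900000490000   # placeholder, replace with the internal disk's WWID
        }

        # Rebuild the initramfs so the blacklist also applies during early boot
        update-initramfs -u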

    Read the article

  • Installed ASUS HD4670, now unable to install ANY Debian due to low memory corruption

    - by Alfabravo
    I have a desktop PC which initially had an Intel D946GZIS motherboard, its chipset as the video controller, some RAM and so on. There I installed Debian alongside Windows XP without a problem. I've bought an ASUS HD 4670 video card and installed it in the PC, and now the installed Debian does not work, while the Ubuntu live CD refuses to run no matter whether I set acpi/apic on or off... it reports low memory corruption at a position just like shown here. With the normal configuration, Debian throws a kernel panic (keyboard lights blinking). Has anyone faced this before? Ideas? Thanks!! (Meanwhile, Debian hides in a VirtualBox :'( ) Edit: I tried Ubuntu 9.10 x64 (since I have a Core 2 Duo at 2 GHz) and it also kernel-panics (flashing Caps and Num LEDs). On screen I can read lines like:

        [ 1.957161] [] rb_erase+0xd6/0x160
        [ 1.957266] [] page_fault+0x25/0x30

    Could it be something about this new video card having DDR3?

    Read the article

  • Bridging Virtual Networking into Real LAN on a OpenNebula Cluster

    - by user101012
    I'm running OpenNebula with 1 cluster controller and 3 nodes. I registered the nodes at the front-end controller and I can start an Ubuntu virtual machine on one of the nodes. However, from my network I cannot ping the virtual machine, and I am not quite sure if I have set up the virtual machine correctly. The nodes all have a br0 interface which is bridged with eth0, with IP addresses in the 192.168.1.x range. The template file I used for the vmnet is:

        NAME = "VM LAN"
        TYPE = RANGED
        BRIDGE = br0                      # Replace br0 with the bridge interface from the cluster nodes
        NETWORK_ADDRESS = 192.168.1.128   # Replace with corresponding IP address
        NETWORK_SIZE = 126
        NETMASK = 255.255.255.0
        GATEWAY = 192.168.1.1
        NS = 192.168.1.1

    However, I cannot reach any of the virtual machines even though Sunstone says the virtual machine is running and onevm list also states that the VM is running. It might be helpful to know that we are using KVM as the hypervisor, and I am not sure whether the virbr0 interface that was automatically created when installing KVM might be a problem.
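
    A couple of hedged checks on the node running the VM: confirm that the VM's tap interface was actually enslaved to br0 rather than to KVM's default NAT bridge virbr0, and that the guest really picked up a 192.168.1.x address:

        # On the node: the VM's vnet/tap device should appear under br0, not under virbr0
        brctl show

        # Inside the guest: check the address and the path to the gateway
        ip addr show
        ping -c 3 192.168.1.1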

    Read the article

  • go back to original version of firefox in ubuntu from beta version

    - by Jack Coroman
    In Ubuntu 9.04 I tried to upgrade from Firefox 3.0 to 3.5 by installing some apt-get packages, and there is a problem! Now Firefox calls itself "Namoroka", the Firefox logo is gone and replaced by a black square in the upper bar, and it says it is a development beta version. I really don't like this version; how can I go back to the stable version of Firefox? I tried apt-get remove firefox-3.5 and apt-get install firefox-3.0 and that did not work. How do I go back to the stable version of Firefox?

    Read the article

  • Unable to access VLAN host from VLAN interface in CentOS

    - by Amrit
    I am playing with VLAN (Virtual LAN) configuration on CentOS 6.4. I have 2 interfaces, eth0 and eth1. I have configured 2 VLAN interfaces, eth0.20 and eth0.30, as follows:

        #file: ifcfg-eth0.20
        #-------------
        VLAN=yes
        DEVICE=eth0.20
        TYPE=Ethernet
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=static
        IPADDR=192.168.20.1
        GATEWAY=192.168.20.1
        NETMASK=255.255.255.0
        USERCTL=no

        #file: ifcfg-eth0.30
        #-------------
        VLAN=yes
        DEVICE=eth0.30
        TYPE=Ethernet
        ONBOOT=yes
        NM_CONTROLLED=no
        BOOTPROTO=static
        IPADDR=192.168.30.1
        GATEWAY=192.168.30.1
        NETMASK=255.255.255.0
        USERCTL=no

    Then I connected a desktop to the eth0 port using a LAN cable and assigned it the IP 192.168.30.2/24. When I try to ping 192.168.30.1 from the 192.168.30.2 machine, it shows destination host unreachable. I am also not able to ping 192.168.130.2 from 192.168.30.1. However, ping -I eth0 192.168.30.2 works fine. Any pointers?
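
    Everything sent via eth0.30 leaves eth0 tagged with VLAN ID 30, so a desktop plugged straight into that port and sending untagged frames will never see it; either the untagged address has to live on plain eth0, or the desktop has to tag VLAN 30 as well. A sketch of the latter, assuming the desktop runs Linux and its NIC is also called eth0:

        # On the desktop: create a matching tagged interface
        ip link add link eth0 name eth0.30 type vlan id 30
        ip addr add 192.168.30.2/24 dev eth0.30
        ip link set eth0.30 up

        # On the CentOS box: confirm tagged frames are really on the wire
        tcpdump -e -i eth0 vlan 30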

    Read the article

  • Why can’t two programs access my webcam simultaneously?

    - by qdii
    I first launch cheese and my webcam turns on. I then run vlc to grab the output of /dev/video0, but it fails with:

        [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy
        [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy
        [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy
        [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy
        [0x7f3eb4000b78] main input error: open of `v4l2:///dev/video0' failed

    Whatever pair of video programs I run (skype, cheese, vlc, etc.), the result is always the same: the second program can no longer use the webcam once the first one has grabbed the output. However, I find this curious, as video4linux states: "In general, V4L2 devices can be opened more than once. When this is supported by the driver, users can for example start a 'panel' application to change controls like brightness or audio volume, while another application captures video and audio." My webcam is seen in lspci as 058f:a014 Alcor Micro Corp. Asus Integrated Webcam, but I don’t even know what the underlying driver is, so I can’t check whether my problem is driver-related or not. Any input would be more than welcome!
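
    In practice most webcam drivers only allow one video-capture open at a time (the multiple-open language in the spec is mainly about control panels), so a common workaround is the v4l2loopback module, which clones the camera into a second device that several programs can read. A sketch, assuming the loopback device shows up as /dev/video1:

        # Load the loopback module (package names vary by distribution)
        sudo modprobe v4l2loopback

        # Feed the real camera into the loopback device, then point cheese, vlc,
        # skype, etc. at /dev/video1 instead of /dev/video0
        ffmpeg -f v4l2 -i /dev/video0 -codec:v rawvideo -pix_fmt yuv420p -f v4l2 /dev/video1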

    Read the article

  • IPv6 address is not working in Ubuntu

    - by Alex Farber
    A telnet connection to the echo service succeeds for the localhost and 127.0.0.1 host names, but fails with ::1:

        alex@u120432:~$ telnet localhost 7
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        123
        123
        ^]
        telnet> q
        Connection closed.
        alex@u120432:~$ telnet ::1 7
        Trying ::1...
        telnet: Unable to connect to remote host: Connection refused
        alex@u120432:~$

    My own program fails as well when it tries to talk to the IPv6 address. Why is the IPv6 address rejected? OS: Ubuntu 12.04, 32-bit.
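
    "Connection refused" on ::1 usually just means nothing is listening on an IPv6 socket for port 7. A sketch of how to check, and, if the echo service is provided by xinetd (as it often is), how to make it bind IPv6 as well; the exact stanza name in /etc/xinetd.d/echo may differ on your system:

        # Look for a tcp6 listener on port 7 (a ':::7' line); if there is none,
        # only IPv4 is being served
        sudo netstat -tlnp | grep ':7 '

        # /etc/xinetd.d/echo: add the IPv6 flag to the stream service, then restart
        service echo
        {
            ...
            flags = IPv6
        }

        sudo service xinetd restart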

    Read the article

  • Problem with PXE boot

    - by user70523
    Hi, I followed this guide for PXE boot: http://www.howtoforge.com/setting-up-a-pxe-install-server-on-ubuntu-9.10-p3. I was able to ping the client from the server, and when I booted the client it got an IP address from the server. But then I got this error:

        PXELinux 3.82 2009-06-09 . . . [other information]
        !PXE Entry point found (we hope) at 9D3B:0109 via plan A
        UNDI code segment at 9D3B len 16C2
        UNDI data segment at 933B len A000
        Getting cached packet 01 02 03 . . . [other information]
        TFTP prefix:
        Trying to load: pxelinux.cfg/ec5db4c0-74fe-d511-b9e7-3d9235afe5a1
        Trying to load: pxelinux.cfg/01-00-17-31-b6-5e-a8
        Trying to load: pxelinux.cfg/0A64491E
        Trying to load: pxelinux.cfg/0A64491
        Trying to load: pxelinux.cfg/0A6449
        Trying to load: pxelinux.cfg/0A644
        Trying to load: pxelinux.cfg/0A64
        Trying to load: pxelinux.cfg/0A6
        Trying to load: pxelinux.cfg/0A
        Trying to load: pxelinux.cfg/0
        Trying to load: pxelinux.cfg/default
        Unable to locate configuration file
        Boot failed: press a key to retry or wait for reset

    I have put all the files mentioned in the link in tftpboot. Can anyone explain what the problem could be? Thanks in advance.
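
    PXELINUX is clearly reaching the TFTP server (it is walking its usual list of config names), so the likely culprits are the location of pxelinux.cfg/default relative to the TFTP root and to pxelinux.0, or its read permissions. A sketch of how to verify, assuming the TFTP root is /var/lib/tftpboot and using a placeholder server address:

        # Expected layout relative to the TFTP root
        ls -l /var/lib/tftpboot/pxelinux.0
        ls -l /var/lib/tftpboot/pxelinux.cfg/default

        # From another host, fetch the file exactly as the client would
        tftp 192.168.1.10 -c get pxelinux.cfg/default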

    Read the article

  • Why don't I have write permission to my vmware virtual network device?

    - by Robert Martin
    I want to allow my VMware machine to force the virtual network it's on into promiscuous mode so I can play around with honeyd. I received an error message that told me to go to http://vmware.com/info?id=161 to allow this behavior. Based on their advice, I did:

        $ groupadd promiscuous
        $ cat /etc/group | grep promiscuous
        promiscuous:x:1002:robert
        $ usermod -a -G promiscuous robert
        $ id robert
        uid=1000(robert) gid=1000(robert) groups=1000(robert),....,1002(promiscuous)
        $ chgrp newgroup /dev/vmnet8
        $ chmod g+rw /dev/vmnet8
        $ ls -l /dev/vmnet8
        crw-rw---- 1 root promiscuous 119, 8 2012-03-29 10:29 /dev/vmnet8

    Looks like I gave RW permission to the promiscuous group and added myself. Except that VMware still gives me an error message saying I cannot enter promiscuous mode. To try out the group thing, I tried:

        $ echo "1" >/dev/vmnet8
        bash: /dev/vmnet8: Permission denied

    That really surprised me: it makes me think that I still haven't properly given myself the correct permissions... What am I missing?
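
    A newly added group only applies to sessions started after the change, so the shell that ran the echo test, and any already-running VMware processes, are still using the old group list. A quick way to confirm and to pick the group up without logging out:

        # 'id' with no arguments shows the groups of the *current* shell;
        # 'promiscuous' will be missing until a new session is started
        id

        # Start a subshell that includes the new group, then retry the test
        newgrp promiscuous
        echo 1 > /dev/vmnet8

        # VMware itself also needs to be restarted from a session that has the group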

    Read the article

  • nginx with stub_status.. need help with nginx.conf

    - by Amar
    Hello, I am trying to set up nginx with stub_status so I can monitor nginx requests etc. with serverdensity.com. I needed to put something like this in nginx.conf:

        server {
            listen 82.113.147.xxx;
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 82.113.147.xxx;
                deny all;
            }
        }

    And with this, monitoring actually works. However, it seems I lost the "include" part of my nginx.conf and now none of the vhosts in sites-enabled work. Here is a bit more of my nginx.conf:

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            server_tokens off;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
            server {
                listen 82.113.147.226;
                location /nginx_status {
                    stub_status on;
                    access_log off;
                    allow 82.113.147.226;
                    deny all;
                }
            }
        }

    Hope someone can help me with this, as I believe it's a minor issue; it's just that I don't see it. Ty.
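
    The conf.d and sites-enabled include lines are still present in the http block shown above, so one low-risk way to rule out an ordering or typo problem is to move the status server into its own file that the existing conf.d include already picks up, and then let nginx point out any remaining syntax error. The file name below is arbitrary:

        # /etc/nginx/conf.d/status.conf
        server {
            listen 82.113.147.226;
            location /nginx_status {
                stub_status on;
                access_log  off;
                allow 82.113.147.226;
                deny  all;
            }
        }

        # Check the whole configuration, then reload
        nginx -t
        /etc/init.d/nginx reload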

    Read the article
