Search Results

Search found 20946 results on 838 pages for 'at command'.


  • xampp mysql and ruby

    - by user115079
    I've installed Ruby and the XAMPP server. Now I am trying to use XAMPP's MySQL for a Ruby application. I copied the XAMPP MySQL library (libmysql) from C:\xampp\mysql\lib to C:\Ruby192\bin (as suggested in a post on this forum). After that, when I try to create a resource using the following command, I get an error.
        Command: rails generate scaffold ShortUrl url:string
        Error: C:/Ruby192/lib/ruby/gems/1.9.1/gems/mysql2-0.3.11-x86-mingw32/lib/mysql2/mysql2.rb:2:in `require': Incorrect MySQL client library version! This gem was compiled for 6.0.0 but the client library is 5.5.16. (RuntimeError)
    I know there is a version mismatch between the Ruby MySQL client and XAMPP's MySQL. What is the better solution: upgrade XAMPP's MySQL or downgrade the Ruby MySQL gem? Personally I want to upgrade XAMPP's MySQL, but I read in some post that XAMPP's MySQL can't be upgraded. Please advise.
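    In case it matters, the route I'm leaning toward for the downgrade option is rebuilding the mysql2 gem against the XAMPP client so the versions match, along these lines (I haven't verified that these are the right flags for my setup):
        gem uninstall mysql2
        gem install mysql2 --platform=ruby -- --with-mysql-dir="C:\xampp\mysql"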

    Read the article

  • "Show In Finder" won't open a new finder window

    - by Gavin Miller
    The "Show In Finder" action isn't working on Mac OS X Mountain Lion. The problem has just started to occur all the time, before it was a bit sporadic, but now it happens all the time. Things that don't work: In the chrome Downloads page clicking any of the "Show in Finder" links. Right clicking a file in XCode and choosing "Show in Finder" Things that work: open . in terminal command-n after command tabbing to Finder. Things I've tried to fix the issue: Opt - Right Click finder in the dock and relauching Restarting my computer Anybody ever experienced this issue?

    Read the article

  • Booting from USB on Mac Air (using setup_mac_usb_boot.sh)

    - by Mike O
    So, I've been working on this for hours and it's getting a little tiring. As some of you may know, installing Ubuntu on Macs is frequently an adventure, and I'm experiencing that right now. The part I'm hung up on at the moment is making a bootable USB stick. I would just use a CD, but my laptop is a MacBook Air (which doesn't have a CD drive), and I don't own an external CD drive. I initially attempted the command-line method from the Ubuntu documentation here: https://help.ubuntu.com/community/How%20to%20install%20Ubuntu%20on%20MacBook%20using%20USB%20Stick However, the resulting stick wasn't recognized by rEFIt even after a number of modifications to the process, so I quickly decided to look elsewhere. I then came across this guide: https://help.ubuntu.com/community/MacBookAir4-2#Basic_Installation_Instructions This ended up working to a large extent. If I choose the supplied GRUB from rEFIt, it brings me to the Ubuntu GRUB menu, asking me to try it, install, or check the disk. And if I choose to boot Linux directly from rEFIt, it brings me to the language selection menu. But when I make my selection from either of these menus it pauses for about ten seconds and then gives me a command-line error message. It begins with "kernel panic - not syncing: timer doesn't work through interrupt" and then shows about eight file names. Does anyone here have any ideas as to what could be causing this? I also tried the script with both Ubuntu 11.10 (the current version when the script was written) and 12.04.
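    For reference, the command-line method from the first link boils down to roughly the following (the disk number and file names are from my attempt, so double-check them):
        hdiutil convert -format UDRW -o ubuntu.img ubuntu-12.04-desktop-amd64.iso
        diskutil list                        # identify the USB stick, e.g. /dev/disk2
        diskutil unmountDisk /dev/disk2
        sudo dd if=ubuntu.img.dmg of=/dev/rdisk2 bs=1m
        diskutil eject /dev/disk2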

    Read the article

  • CentOS 5.7 keeps rebooting after fresh installation

    - by Wagner Maestrelli
    I have just installed CentOS 5.7 x86_64 on a new computer. The installation went on without any issues, but after it finished, the machine started to show some awkward behaviour: it restarts every time it tries to boot. It happens after all the services have been started: the screen just goes black, the monitor shows an error message ("Input not supported"), and then the machine reboots. I took a look at the logs, but I couldn't find anything. Any help?
    Update: Before doing the hardware diagnosis, as pointed out, I decided to make some tests. First, I changed the runlevel to 3 by adding the 3 parameter at the end of the kernel command. Then, after logging in in text mode, I checked the xorg.conf file for problems regarding the screen resolution. Nothing unexpected was set. Well, if there were a problem with it, I shouldn't be able to start the X server from the command line, right? So I typed startx and GNOME started! So it's probably not an issue with the screen resolution, I suppose. Then I selected the "Log Out root..." GNOME menu option and something odd happened: the screen went black, the "Input not supported" monitor error message was displayed and the system rebooted. Yes, the same problem I was having while trying to boot!
    After that, I decided to try yet another test: I removed the rhgb quiet parameters from the kernel command to see if some error would show up. To my surprise, the boot went on without problems! The GNOME login screen showed up, I logged in and the session started. But then I selected the "Shut Down..." menu option and guess what? Same problem: black screen, same monitor error, and the system rebooted. Yes, it rebooted, it did not shut down. I repeated both tests and the behaviour was the same. I really don't know what's going on. It seems to be an issue regarding the changing of the screen mode or something like that. Any ideas? Could this be a hardware problem? Or does it seem to be something regarding the system configuration?
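    For clarity, the kernel line in /boot/grub/grub.conf that I was editing for those tests looks roughly like this (kernel version and volume names are from memory, so treat them as placeholders):
        # original line
        kernel /vmlinuz-2.6.18-274.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        # runlevel 3 test: append a 3
        kernel /vmlinuz-2.6.18-274.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 3
        # verbose boot test: rhgb quiet removed
        kernel /vmlinuz-2.6.18-274.el5 ro root=/dev/VolGroup00/LogVol00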

    Read the article

  • Error: Cannot find a valid baseurl for repo: updates in ffmpeg installation

    - by athomas14super
    Hi, I have a problem installing ffmpeg. I am following this guide: https://www.crucialp.com/resources/tutorials/server-administration/how-to-install-ffmpeg-ffmpeg-php-mplayer-mencoder-flv2tool-LAME-MP3-Encoder-libog.php
        Setting up repositories
        core           100% |=========================| 1.1 kB 00:00
        rpmforge       100% |=========================| 1.1 kB 00:00
        Error: Cannot find a valid baseurl for repo: updates
        [root@02e7709 src]# yum install subversion ruby ncurses-devel
        Loading "installonlyn" plugin
        Setting up Install Process
        Setting up repositories
        core           100% |=========================| 1.1 kB 00:00
        rpmforge       100% |=========================| 1.1 kB 00:00
        Error: Cannot find a valid baseurl for repo: updates
        [root@02e7709 src]# svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg
        -bash: svn: command not found
    So svn is not found, and yum throws "Error: Cannot find a valid baseurl for repo: updates". I am installing on Fedora Core 6, 64-bit.
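    In case it's relevant, the workaround I'm thinking of trying is to skip the broken updates repo while installing the build dependencies, and to double-check its definition (the exact repo file name may differ on my box):
        yum --disablerepo=updates install subversion ruby ncurses-devel
        cat /etc/yum.repos.d/fedora-updates.repo    # check the baseurl/mirrorlist lines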

    Read the article

  • EC2 instance store cloning or conversion to EBS via the GUI management console

    - by devnull
    I have found similar questions here, but the answers are either outdated or command-line based. The case is this: I have an EC2 instance using instance store (this was the only AMI available for Debian 6 in Ireland). Through the AWS GUI I can take a snapshot of the instance volume and even create a volume, but an image made from the snapshot doesn't boot. What is the best solution to either clone an EC2 instance that uses instance store, OR launch a new EBS-backed instance (an identical clone) from the snapshot of the instance store, FROM the AWS GUI management console and not the command line? Before turning this down, consider that there is no similar question on how to do it via the AWS management console. Hint: "can't be done" is not an appropriate answer, since you can create a snapshot of the instance-store-backed instance and/or a volume, and create an AMI from that snapshot.

    Read the article

  • Nagios3: Conditional operators for service checks?

    - by Dave
    I'm trying to set up Nagios to monitor my various servers, using hostgroups to define 'machine roles', against which I run services to check the machines by role. However, I'd like to use conditional operators that would let me run a service check against the intersection of two hostgroups rather than their union, i.e. using &&, ||, or () operators.
    For example, imagine I have the following servers:
    www-eu: Linux WWW (Apache) server, in the EU
    www-us: Windows WWW (IIS) server, in the US (West coast)
    ftp-eu: Linux FTP server, in the EU
    ftp-us: Windows FTP server, in the US
    I would want to create the following hostgroups:
    US-Servers: www-us, ftp-us
    EU-Servers: www-eu, ftp-eu
    WWW-Servers: www-us, www-eu
    FTP-Servers: ftp-us, ftp-eu
    Now say I'm interested in checking the HTTP response time for my web servers. Let's say this particular Nagios service runs from the US (West Coast), and that I have a command called check_http_response_time. This command checks the responsiveness of the HTTP server and takes an argument which defines the max response time before raising critical. My command might look like:
        check_http_response_time $HOSTNAME$ 50
    Traditionally, I can run my checks by specifying a list of hosts or hostgroups:
        define service{
            use                   local-service
            hostgroup_name        WWW-Servers    # Servers = www-us, www-eu
            servicegroups         WWW Checks
            service_description   Check HTTP Response Time
            check_command         check_http_response_time!50
        }
    However, with the above service definition, given that my Nagios service is in US West, I could reasonably expect my EU server to return critical. Really, I want different thresholds for each region (50 for US West, 200 for EU). I would have to permute my service for each host and set their custom thresholds, or alternatively permute my servicegroups by role and region (i.e. WWW-Servers-EU) and run my specific thresholds against those. Though the latter is better, both are much messier than I'd like.
    What I would love, and what this post is asking for, is a way to use hostgroups to perform an intersection using conditional logic, rather than a simple union. It might look like:
        define service{
            use                   local-service
            hostgroup_name        WWW-Servers && US-Servers
            servicegroups         WWW Checks
            service_description   Check HTTP Response Time
            check_command         check_http_response_time!50
        }
    It would then run the check only against servers that are in both WWW-Servers and US-Servers - in my example, just www-us. The benefits of such a feature would be significant for Nagios setups configured at large scale. Is this feature available? If it isn't, will it be available in the future? Is there an alternative way to accomplish this given the most recent Nagios version? Any tips/suggestions are most appreciated! Dave

    Read the article

  • Remote connection IP to use

    - by petwho
    I have two laptops that both run Ubuntu and have the SSH server and client installed. One is usually on my desk at home and one I usually bring to my company. When I'm at home I can easily SSH from one to the other by typing this command (to log in to the other laptop, whose IP address is 192.168.0.105):
        ssh -p 22 [email protected]
    However, when I'm at my company I type the same command and of course it doesn't work. I understand that at home I'm on a LAN, and that from the outside my laptops sit behind my ISP-assigned address (assumed 203.113.131.1), which differs from 192.168.0.107. So could you tell me what IP ssh should use from my laptop at work to connect to my computer at home? Thank you.
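    My rough guess at what I'd need, assuming I set up port forwarding on my home router, is something like this (the public address is just the assumed example above, and the forwarded port is made up):
        # at home: find the router's public address
        curl ifconfig.me
        # on the router: forward external port 2222 to 192.168.0.105 port 22
        # then, from work:
        ssh -p 2222 petwho@203.113.131.1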

    Read the article

  • Programmatically get the WLAN config of a machine and use netsh to setup new profile

    - by Maestro1024
    How can I programmatically get the wireless LAN configuration of a machine and use netsh to set up a new profile? I am having trouble getting the netsh command to set the SSID of a new card. I installed the drivers and plugged the card in. ipconfig says "media is disconnected" (fair enough). I then send the following command:
        netsh wlan connect name=profile1 ssid=myNetwork interface="Wireless Network Connection 2"
    The problem is I get an error: "There is no profile "profile1" assigned to the specified interface." What is a profile for a wireless card? What should I set it to? How can I get my SSID set and connected for the card?
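    For reference, these are the profile-related commands I've been experimenting with (the profile, file, and interface names are from my setup, and I'm not sure this is the right sequence):
        netsh wlan show profiles
        netsh wlan export profile name="profile1" folder=C:\temp
        netsh wlan add profile filename="C:\temp\profile1.xml" interface="Wireless Network Connection 2"
        netsh wlan connect name="profile1" ssid="myNetwork" interface="Wireless Network Connection 2"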

    Read the article

  • USB Mouse and Keyboard not working in Linux 4 Tegra

    - by Sijo
    I am new to Tegra Linux development. I have a Tamontem NG evaluation board with a Tegra 3 chip. I installed the L4T sample file system from the NVIDIA Tegra resources (https://developer.nvidia.com/linux-tegra) and set up the file system as described in the documentation provided on the NVIDIA site. There was already an SD card with L4T running, and I don't want to change the boot loader, so I copied boot.scr.uimg to the root (/) folder and uImage to /boot, and it boots from the existing SD card. While booting, some errors occurred for Bluetooth devices (there is no Bluetooth device on the board), so I disabled Bluetooth with the following command:
        sudo mv /etc/init/bluetooth.conf /etc/init/bluetooth.conf.noexec
    Now the problem is that the mouse and keyboard are not working, so I cannot log in. Even though I installed a desktop, the mouse and keyboard are not working. But the mouse and keyboard are enumerating; the lsusb command shows the USB mouse and keyboard. The installed file system is Ubuntu 13.04, and the Linux kernel version is 3.1. What should I do? Please help. Thanks in advance.

    Read the article

  • VMware ESXi - vSphere - Can't exit VM console access

    - by caleban
    I'm running ESXi 4.1 on a Dell T110 server.
    I connect to ESXi using vSphere.
    vSphere is running inside a Windows 7 VM.
    The Windows 7 VM is running in VMware Fusion on my Mac OS X system.
    When I'm in vSphere, have selected a VM, and click the Console tab, on some systems the VM console won't release me when I press the Control + Command keys. pfSense (FreeBSD) and Ubuntu Server behave like this: I can't exit their console screen, and I have to shut these VMs down to be released from console access. Windows, Ubuntu Desktop, etc. all behave as I'd expect: when I press the Control + Command keys I'm released from the VM console and I'm able to navigate in vSphere. Does anyone know what might be causing this, or a way around it? Thanks in advance.

    Read the article

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application performance. New Relic reports that across our system "Memcached" is taking an average of 12ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests) I can see that some Memcached gets take up to 30ms; I can't find a single Memcached get that returns in less than 10ms.
    More details on the system architecture: we currently have four application servers, each of which has a memcached member, and all four memcached members participate in a memcache cluster. We're running on a cloud hosting provider and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, the responses come back in ~0.5ms.
    Isn't 10ms a slow response time for Memcached? As far as I understand, if you think "Memcache is too slow" then "you're doing it wrong". So am I doing it wrong?
    Here's the output of the memcache-top command:
        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)
        INSTANCE        USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211    37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211    42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211    37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K
        AVERAGE:        39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K
        TOTAL:          0.1GB/ 0.4GB    33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)
    Here is the output of the top command on one machine (it's roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcached):
        top - 21:48:56 up 1 day, 4:56, 1 user, load average: 0.01, 0.06, 0.05
        Tasks: 70 total, 1 running, 69 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
        Mem: 501392k total, 424940k used, 76452k free, 66416k buffers
        Swap: 499996k total, 13064k used, 486932k free, 181168k cached
        PID   USER    PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+   COMMAND
        6519  nobody  20   0  384m  74m  880 S  1.0 15.3 18:22.97  memcached
        3     root    20   0     0    0    0 S  0.3  0.0  0:38.03  ksoftirqd/0
        1     root    20   0 24332 1552  776 S  0.0  0.3  0:00.56  init
        2     root    20   0     0    0    0 S  0.0  0.0  0:00.00  kthreadd
        4     root    20   0     0    0    0 S  0.0  0.0  0:00.00  kworker/0:0
        5     root    20   0     0    0    0 S  0.0  0.0  0:00.02  kworker/u:0
        6     root    RT   0     0    0    0 S  0.0  0.0  0:00.00  migration/0
        7     root    RT   0     0    0    0 S  0.0  0.0  0:00.62  watchdog/0
        8     root     0 -20     0    0    0 S  0.0  0.0  0:00.00  cpuset
        9     root     0 -20     0    0    0 S  0.0  0.0  0:00.00  khelper
        ...output truncated...
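    For what it's worth, here's the sort of quick raw-latency check I've been running from one of the application servers (the host name and key are placeholders):
        # time ten gets against a single memcached member using the text protocol
        for i in $(seq 1 10); do
            ( time printf 'get some_key\r\nquit\r\n' | nc cache1 11211 >/dev/null ) 2>&1 | grep real
        done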

    Read the article

  • ffmpeg opens webcam using YUYV but I want MJPEG

    - by Pavel
    I need ffmpeg to open the webcam (Logitech C910) in MJPEG mode, because the webcam can give ~24 fps using the MJPEG "protocol" and only ~10 fps using YUYV. Can I choose between them on the ffmpeg command line?
        xx@(none) ~ $ v4l2-ctl --list-formats
        ioctl: VIDIOC_ENUM_FMT
            Index       : 0
            Type        : Video Capture
            Pixel Format: 'YUYV'
            Name        : YUV 4:2:2 (YUYV)

            Index       : 1
            Type        : Video Capture
            Pixel Format: 'MJPG' (compressed)
            Name        : MJPEG
    My current command line:
        ffmpeg -y -f alsa -i hw:3,0 -f video4linux2 -r 20 -s 1280x720 -i /dev/video0 -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi
    ffmpeg produces a corrupted h264 stream when I record from the webcam, but a normal h264 stream when I record from x11grab. Other codecs (mjpeg, mpeg4) work well with the webcam... but that is another story.
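    For reference, the variation I've been meaning to try is forcing the input format on the v4l2 device; I'm not certain -input_format is the right option here, so treat this as a guess:
        ffmpeg -y -f alsa -i hw:3,0 -f video4linux2 -input_format mjpeg -r 20 -s 1280x720 -i /dev/video0 -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi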

    Read the article

  • Bash: Reset and Clear Commands

    - by sixtyfootersdude
    I have been using the command reset to clear my terminal, although I am pretty sure this is not what I should be doing; reset, as the name suggests, resets your entire terminal (changes lots of stuff). Here is what I want: I basically want to use the command clear. However, if you clear and then scroll up, you still get tons of stuff from before. In general this is not a problem, but I am looking at gross logs that are long and I want to make sure I am only viewing the most recent one. I know I could use more or something like that, but I prefer this approach.
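    What I'm really after is something like the following, if it's portable; I've seen the [3J escape sequence mentioned for xterm-like terminals but haven't verified it everywhere:
        clear && printf '\033[3J'    # clear the screen, then ask the terminal to wipe its scrollback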

    Read the article

  • GRUB options are not visible when booting on a Samsung ATIV Book 9 Lite running Ubuntu 14.04

    - by mjwittering
    I've managed to install Ubuntu 14.04 on my new Samsung ATIV Book 9 Lite ultrabook. After updating some configurations in the UEFI, installation was very easy. The only issue I believe I'm still experiencing is at boot: when the laptop should be displaying the GRUB boot options, I instead see a black screen with a purple border of about 10px around the edge. I'd like to know how I can update my system so that I see the GRUB boot menu. I've run this command:
        sudo cat /etc/default/grub
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"
    Running sudo efibootmgr was not possible (the command failed).
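    For reference, the change I'm planning to test, based on what I've read about GRUB_HIDDEN_TIMEOUT hiding the menu (I haven't confirmed this is the whole story on UEFI systems):
        # in /etc/default/grub: stop hiding the menu
        #GRUB_HIDDEN_TIMEOUT=0
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        # then regenerate the configuration
        sudo update-grub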

    Read the article

  • IIS7 Windows Server 2008 FTP -> Response: 530 User cannot log in

    - by RSolberg
    I just launched my first IIS FTP site following many of the tutorials from IIS.NET. I'm using IIS Users and Permissions rather than anonymous and/or basic. This is what I'm seeing while trying to establish the connection:
        Status:   Resolving address of ftp.mydomain.com
        Status:   Connecting to ###.###.##.###:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER MyFTPUser
        Response: 331 Password required for MyFTPUser.
        Command:  PASS ********************
        Response: 530 User cannot log in.
        Error:    Critical error
        Error:    Could not connect to server

    Read the article

  • Ctrl+Z and fg to append commands

    - by avilella
    I would like to know what the behaviour of Ctrl+Z and fg is in bash when I want to append commands to be executed after a running command has finished. For example, in the sequence of commands below, I would expect the console to display "1", then "2", then "3", then "4", but I only get the output of the last command, echo 4, after sleep 30 finishes:
        avilella@magneto:~$ sleep 30 && echo 1
        ^Z
        [1]+  Stopped                 sleep 30
        avilella@magneto:~$ fg && sleep 5 && echo 2
        sleep 30
        ^Z
        [1]+  Stopped                 sleep 30
        avilella@magneto:~$ fg && sleep 5 && echo 3
        sleep 30
        ^Z
        [1]+  Stopped                 sleep 30
        avilella@magneto:~$ fg && sleep 5 && echo 4
        sleep 30
        4
    Any ideas?

    Read the article

  • Could not calculate upgrade from Maverick Meerkat to Natty Narwhal

    - by xralf
    I upgraded from Ubuntu Lucid Lynx to Maverick Meerkat with the following commands:
        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install update-manager-core
        sudo vi /etc/update-manager/release-upgrades
    changed the last line to Prompt=normal, then ran:
        sudo do-release-upgrade -d
    This upgrade was OK. I decided to repeat the same steps to upgrade Maverick Meerkat to Natty Narwhal. It ended with this message:
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        Can not mark 'xubuntu-desktop' for upgrade
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager'
        package and include the files in /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Nov 21 09:37:21 2011) ===
        === Command terminated with exit status 1 (Mon Nov 21 09:37:21 2011) ===
    How can I correct it?

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet / similar tools

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions:
    Amazon EC2 Backup Strategy
    Amazon EC2 + EBS: Regular backup plan?
    Simple Backup Strategy for Amazon EC2 instances / volumes?
    And this suggestion: http://alestic.com/2009/09/ec2-consistent-snapshot
    I tried using the command line + crontab (the command line works, but crontab, for some reason, doesn't). I'm still pretty lost. All I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks back, but delete old snapshots as new ones come in, for cost control). And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple: is there a way, using a tool like Puppet, to do this without a painful learning curve (or via other tools like http://ylastic.com)? If yes, how?
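    For context, here is a minimal sketch of the kind of thing I'm after, assuming the AWS CLI is installed and configured; the volume ID, schedule, and 28-day retention are placeholders and I have not tested this:
        # crontab entry: snapshot every night at 02:00
        0 2 * * * /usr/local/bin/ebs-snapshot.sh

        # /usr/local/bin/ebs-snapshot.sh
        #!/bin/bash
        VOL=vol-12345678
        aws ec2 create-snapshot --volume-id "$VOL" --description "nightly $(date +%F)"
        # prune snapshots of this volume older than 28 days
        CUTOFF=$(date -u -d '28 days ago' +%Y-%m-%d)
        aws ec2 describe-snapshots --filters Name=volume-id,Values="$VOL" \
            --query 'Snapshots[].[SnapshotId,StartTime]' --output text |
        while read -r snap start; do
            [ "${start%%T*}" \< "$CUTOFF" ] && aws ec2 delete-snapshot --snapshot-id "$snap"
        done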

    Read the article

  • Strange misleading error [XML-20108 / AC-10006] when doing R12 cloning

    - by [email protected]
    During a recent multi-node to single-node R12 clone, I encountered a strange error while doing the database portion of the clone. The following adclonectx.pl command creates the context file:
        perl adclonectx.pl contextfile=$ORACLE_HOME/appsutil/SOURCE_CONTEXT_FILE.xml template=$ORACLE_HOME/appsutil/template/adxdbctx.tmp pairsfile=$ORACLE_HOME/appsutil/clone/pairsfile.txt initialnode
    When running this command, it dumped the error below:
        file:/tmp/tmpCtxClone.xml<Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.
        AC-10006: Exception - org.xml.sax.SAXParseException: file:/tmp/tmpCtxClone.xml<Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected. thrown while creating OAVars object for file: /tmp/tmpCtxClone.xml
        The new database context file has been created :
          /opt/oracle/product/11.1.0_IOFT/appsutil/IOFT_frws35ta.xml
    At first sight, I suspected an issue with the format of the source XML file, so I compared it with a working XML file; the result was clean. The portion of the error that struck me was:
        thrown while creating OAVars object for file: /tmp//dummy.xml
    Cause: the /tmp filesystem was 100% full.
    Fix: either remove the old files in the /tmp directory, OR export TEMP=/new/location where there is plenty of free space.
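    A quick check along these lines would have caught the problem before re-running adclonectx.pl (the alternate location is only an example):
        df -h /tmp                # confirm that /tmp is indeed full
        export TEMP=/u01/tmp      # point the tool at a filesystem with plenty of free space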

    Read the article

  • OS X 10.6 Snow Leopard no longer mounting an external USB drive

    - by Brant Bobby
    I have a 1TB generic external hard drive containing a single HFS partition. I originally formatted it using Disk Utility and it worked fine. Now, for some reason, it's not auto-mounting when I start up. Using mount at the command line gives the following error:
        $ sudo mount /dev/disk1s2 /Volumes/Test
        /dev/disk1s2 on /Volumes/Test: Incorrect super block.
    ...but if I use the mount_hfs command it works fine, mounts, and is readable:
        $ mount_hfs /dev/disk1s2 /Volumes/Test/
    fsck gives me an error about a bad super block:
        $ fsck /dev/disk1
        ** /dev/rdisk1 (NO WRITE)
        BAD SUPER BLOCK: MAGIC NUMBER WRONG
    ...but fsck_hfs -fn /dev/disk1s2 doesn't find any problems and reports that the volume appears to be OK. In Disk Utility, the drive appears to have a single MS-DOS partition, with a curious notice that it appears to be partitioned for Boot Camp. I have the Boot Camp HFS driver installed in Windows 7, and that OS sees the drive/partition normally. What's wrong with my disk?

    Read the article

  • How do I make my USB Bluetooth dongle work in Ubuntu 11.04? (Can't init device hci0: Connection timed out (110)) [closed]

    - by MaikoID
    I have a USB Bluetooth dongle:
        root@maiko-cce-lin:~# lsusb | grep Bluetooth
        Bus 001 Device 007: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
    It isn't working properly: it hardly ever works, and when it does, it stops working on the next reboot. What I've tried:
    It isn't soft-blocked:
        root@maiko-cce-lin:~# rfkill list
        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no
    The device is recognized by hciconfig:
        root@maiko-cce-lin:~# hciconfig -a
        hci0:   Type: BR/EDR  Bus: USB
                BD Address: 00:1F:81:00:01:1C  ACL MTU: 1021:4  SCO MTU: 180:1
                DOWN
                RX bytes:330 acl:0 sco:0 events:8 errors:0
                TX bytes:24 acl:0 sco:0 commands:30 errors:22
                Features: 0xff 0x3e 0x09 0x76 0x80 0x01 0x00 0x80
                Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
                Link policy:
                Link mode: SLAVE ACCEPT
    But I can't bring the hci interface up:
        root@maiko-cce-lin:~# hciconfig hci up
        Can't init device hci0: Connection timed out (110)
    I don't understand why. The hcitool command doesn't show any device:
        root@maiko-cce-lin:~# hcitool dev
        Devices:
    I've also tried restarting the bluetooth service and running all of the previous commands again, without success:
        root@maiko-cce-lin:~# service bluetooth restart
         * Stopping bluetooth    [ OK ]
         * Starting bluetooth    [ OK ]
    The dongle works if you disconnect it from USB, wait a few seconds and connect it again, so there must be a better solution (one not involving physically removing the dongle!).
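    The closest thing I've found to a software "unplug/replug" is the sysfs unbind/bind trick below; the 1-4 bus-port ID is only an example, and I'd still have to look up the right one for my dongle under /sys/bus/usb/devices:
        # find the dongle's bus-port ID by matching its vendor ID (0a12)
        grep -l 0a12 /sys/bus/usb/devices/*/idVendor
        # "unplug" and "replug" it in software
        echo -n 1-4 | sudo tee /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo -n 1-4 | sudo tee /sys/bus/usb/drivers/usb/bind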

    Read the article

  • Mac OS X 10.5.8: need to save rsync password with ssh-copy-id

    - by Brady
    Hello all. I'll start by saying I'm very new to Mac, but comfortable using the command line thanks to using Linux a lot. I currently have rsync set up to run between a Mac OS X 10.5.8 server and a Linux CentOS 5.5 server. This is the command I'm running on the Mac server:
        rsync -avhe ssh "/Path/To/Data" [email protected]:data/
    As it runs it prompts for a password, but I need it to save the password. After looking around, it seems I need to use:
        ssh-keygen -t dsa
    save the key, and then move it over to the Linux server using:
        ssh-copy-id -i .ssh/id_dsa.pub [email protected]
    But ssh-copy-id doesn't seem to exist on the Mac server. How do I copy this key over? I've tried searching for the answer myself, but the help seems to be all over the place for this. Any help is greatly appreciated. Scott
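    In case it helps, the manual equivalent of ssh-copy-id that I've seen suggested (but haven't tried yet from the Mac) is something like:
        cat ~/.ssh/id_dsa.pub | ssh [email protected] \
            'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'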

    Read the article
