Search Results

Search found 8611 results on 345 pages for 'apt fast'.


  • Debian Bluetooth headphones not working

    - by cYrus
    Hardware: headphones, Bluetooth dongle (maybe not exactly these models). Setup: I tried to follow some guides; here's what I've done so far. Install software: sudo apt-get install bluez-utils bluez-alsa Reboot (just to be sure): $ dmesg | grep -i bluetooth [ 20.268212] Bluetooth: Core ver 2.16 [ 20.268230] Bluetooth: HCI device and connection manager initialized [ 20.268233] Bluetooth: HCI socket layer initialized [ 20.268235] Bluetooth: L2CAP socket layer initialized [ 20.268239] Bluetooth: SCO socket layer initialized [ 20.284685] Bluetooth: RFCOMM TTY layer initialized [ 20.284692] Bluetooth: RFCOMM socket layer initialized [ 20.284693] Bluetooth: RFCOMM ver 1.11 [ 20.335375] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 20.335378] Bluetooth: BNEP filters: protocol multicast The daemon is running: $ /etc/init.d/bluetooth status [ ok ] bluetooth is running. Plug in the dongle: $ dmesg | tail [...] [23108.352034] usb 5-2: new full-speed USB device number 2 using ohci_hcd [23108.571131] usb 5-2: New USB device found, idVendor=0a12, idProduct=0001 [23108.571136] usb 5-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [23108.629042] usbcore: registered new interface driver btusb Put the headphones in pairing mode, and try scanning: $ hcitool scan Scanning ... Found nothing. What's next? What should I try? I'll update this question as soon as you provide me hints.
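
    A rough next step, sketched with the bluez-utils tools (hci0 is an assumption; check the actual device name with hcitool dev):

        hcitool dev                   # confirm the dongle registered as an HCI device at all
        sudo hciconfig hci0 up        # bring the interface up if it is still DOWN
        hciconfig -a                  # state should show UP RUNNING PSCAN ISCAN
        sudo hcitool scan --flush     # re-scan with the headphones freshly put back into pairing mode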

    Read the article

  • Proper web server setup

    - by DMin
    I just got myself a slicehost basic slice to play around with so I can learn how to set up web servers. I have Ubuntu 10.04.2 installed on the server. I was able to successfully get the server up and running from scratch; these were the things I did, following this tutorial. I know this is probably just a starter tutorial, so I was wondering if you guys can tell me what you like to do while setting up production servers. These are the steps that were followed: Update and Upgrade Ubuntu sudo apt-get install apache2 php5-mysql libapache2-mod-php5 mysql-server Back up a copy of apache2.conf and edit it: set 'ServerTokens Full' to 'ServerTokens Prod' and 'ServerSignature On' to 'ServerSignature Off' Back up php.ini and then change “expose_php = On” to “expose_php = Off” Restart Apache Install Shorewall firewall Configure Shorewall to only accept HTTP and SSH connections (in the rules file) Enable shorewall on startup Add the website to the server: sudo usermod -g www-data root sudo chown -R www-data:www-data /var/www sudo chmod -R 775 /var/www I want to make this CommunityWiki but can't seem to find the option to do it. Please feel free to add any feedback on the processes and things I am doing right/wrong. Much appreciated, thanks! :)
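
    For reference, the Apache and PHP edits above can also be scripted; a minimal sketch, assuming the stock Ubuntu 10.04 file layout and that the directives already exist in those files:

        sudo cp /etc/apache2/apache2.conf /etc/apache2/apache2.conf.bak
        sudo sed -i 's/^ServerTokens .*/ServerTokens Prod/' /etc/apache2/apache2.conf
        sudo sed -i 's/^ServerSignature .*/ServerSignature Off/' /etc/apache2/apache2.conf
        sudo cp /etc/php5/apache2/php.ini /etc/php5/apache2/php.ini.bak
        sudo sed -i 's/^expose_php = On/expose_php = Off/' /etc/php5/apache2/php.ini
        sudo /etc/init.d/apache2 restart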

    Read the article

  • Asus PCE-N53 11n N600 PCI-E Adapter on 3.x kernel

    - by CITguy
    Problem: the ASUS PCE-N53 wireless NIC doesn't work with the latest versions of the Linux kernel. How do I get it working on my system? (Note: I'm posting the answer I've found for others to use.) Installing the driver for a Linux 3.x kernel: ASUS provides Linux drivers from their website, but it mentions that the driver supports "Linux Kernel 2.6.x", so it won't work without some modifications to the driver code. Fortunately, an Arch Linux forum mentions similar problems and one user was able to create a patch for kernel 3.8.x that seems to work with kernel 3.11.x. Here's how I got it working: Prerequisites: Ubuntu: sudo apt-get install build-essential Arch: sudo pacman -S base-devel Steps: 1. Download the driver from the ASUS website. The download can be found under "Support Drivers & Tools". 2. Unzip the contents of the downloaded file and cd into the new directory. 3. Patch. The Arch forum mentions a 3.8 patch file that needs to be downloaded: download rt5592sta_fix_64bit_3.8.patch to the current directory, run tar -xvf {driver_source.tar.gz}, cd into the directory created in the previous step, then run patch -p1 < ../rt5592sta_fix_64bit_3.8.patch 4. Compile. NOTE: You will need to use sudo for it to compile properly. sudo make sudo make install sudo modprobe rt5592sta 5. Enjoy. If all is well, you should now have a working card.
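
    The same steps, collected into one hedged sequence (the archive and directory names vary with the driver version ASUS ships, so treat them as placeholders):

        sudo apt-get install build-essential
        unzip ASUS_PCE-N53_Linux.zip && cd ASUS_PCE-N53_Linux        # placeholder name for the ASUS download
        # fetch rt5592sta_fix_64bit_3.8.patch (linked from the Arch forum thread) into this directory
        tar -xvf DPO_RT5592_LinuxSTA_*.tar.gz                        # placeholder for the driver source tarball
        cd DPO_RT5592_LinuxSTA_*
        patch -p1 < ../rt5592sta_fix_64bit_3.8.patch
        sudo make && sudo make install
        sudo modprobe rt5592sta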

    Read the article

  • How do I enable additional debugging output from Ansible and Vagrant?

    - by Brian Lyttle
    I'm investigating Ansible for server and application provisioning. My application is currently provisioned with shell scripts in Vagrant. Rather than rewrite my scripts I've taken a sample and attempted to deploy it. It appears to deploy fine, but I've seeing a failure message after what looks like a series of successful steps: » vagrant provision ~/vm/blvagrant 1 ? [default] Running provisioner: ansible... PLAY [web-servers] ************************************************************ GATHERING FACTS *************************************************************** ok: [192.168.9.149] TASK: [install python-software-properties] ************************************ ok: [192.168.9.149] => {"changed": false, "item": ""} TASK: [add nginx ppa if it ubuntu 10.04 and up] ******************************* ok: [192.168.9.149] => {"changed": false, "item": "", "repo": "ppa:nginx/stable", "state": "present"} TASK: [update apt repo] ******************************************************* ok: [192.168.9.149] => {"changed": false, "item": ""} TASK: [install nginx] ********************************************************* ok: [192.168.9.149] => {"changed": false, "item": ""} TASK: [copy fixed init for nginx] ********************************************* ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0755", "owner": "root", "path": "/etc/init.d/nginx", "size": 2321, "state": "file", "uid": 0} TASK: [service nginx] ********************************************************* ok: [192.168.9.149] => {"changed": false, "item": "", "name": "nginx", "state": "started"} TASK: [write nginx.conf] ****************************************************** ok: [192.168.9.149] => {"changed": false, "gid": 0, "group": "root", "item": "", "mode": "0644", "owner": "root", "path": "/etc/nginx/nginx.conf", "size": 1067, "state": "file", "uid": 0} PLAY RECAP ******************************************************************** 192.168.9.149 : ok=8 changed=0 unreachable=0 failed=0 Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again. How do I go about getting additional debug information? I've already added ansible.verbose = true to my vagrant config which results in the dictionaries being displayed within the output above.
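
    A sketch of ways to squeeze out more detail (the playbook name is an assumption; Vagrant's Ansible provisioner also accepts ansible.verbose = "vvvv" rather than just true, and the direct ansible-playbook run may need the Vagrant SSH key via --private-key):

        VAGRANT_LOG=info vagrant provision                    # Vagrant's own logging
        ANSIBLE_KEEP_REMOTE_FILES=1 vagrant provision         # keep the generated module files on the guest for inspection
        ansible-playbook -i '192.168.9.149,' playbook.yml -u vagrant -vvvv   # run the play directly with maximum verbosity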

    Read the article

  • Git clone/pull across local network

    - by Tom Sarduy
    I'm trying to clone/pull a repository on another PC using Ubuntu Quantal. I have done this on Windows before, but I don't know what the problem is on Ubuntu. I tried these: git clone file:////pc-name/repo/repository.git git clone file:////192.168.100.18/repo/repository.git git clone file:////user:pass@pc-name/repo/repository.git git clone smb://c-pc/repo/repository.git git clone //192.168.100.18/repo/repository.git I always got: Cloning into 'intranet'... fatal: '//c-pc/repo/repository.git' does not appear to be a git repository fatal: The remote end hung up unexpectedly or fatal: repository '//192.168.100.18/repo/repository.git' does not exist More: the other PC requires a username and password. It is not a networking issue; I can access and ping it. I just installed git with apt-get install git (dependencies installed). I'm running git from the terminal (I'm not using git-shell). What is causing this and how do I fix it? Any help would be great! UPDATE: I have cloned the repo on Windows using git clone //192.168.100.18/repo/intranet.git without problems. So the repo is accessible and exists! Maybe the problem is due to user credentials?
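
    For what it's worth, git can't speak smb:// and file:// needs a locally mounted path, so the two usual routes are SSH or mounting the share first; a sketch (the user, share and mount point are assumptions):

        # over SSH, if the other PC runs an SSH server:
        git clone ssh://user@192.168.100.18/repo/repository.git
        # or mount the Windows/Samba share, then clone from the local path:
        sudo apt-get install cifs-utils
        sudo mkdir -p /mnt/repo
        sudo mount -t cifs //192.168.100.18/repo /mnt/repo -o username=user
        git clone /mnt/repo/repository.git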

    Read the article

  • OwnCloud "RSA certificate configured for server" issue, webpage has a redirect loop

    - by jmituzas
    I had Owncloud running on a server that had died; I remember the install being easy. I have migrated servers and Owncloud is one of the last apps to install. OK, I just downloaded and installed the newest version of Owncloud on an Ubuntu 14.04 server with PHP 5.5.9-1; I am trying the manual install. I had tried adding the repo and installing with apt-get install owncloud, but that did not work for me :/ (whereis owncloud reported nothing). It was installed but I was never able to bring up the site. Now for my issue: I finished the manual install from the .tar.bz2, but when it came time to log in I receive "This webpage has a redirect loop"; I get the error from both the Chrome and Safari web browsers. I can't log in at all; with no user, I get the error page. Don't know if it is related or not, but here's a look at the owncloud-error.log: "RSA certificate configured for "mysite.com" Does NOT include an ID which matches the server name" I installed a new SSL cert with the CN set to my ServerName directive in the vhost config file, same error :/ Re-installed Owncloud, same issue... Out of ideas. Thanks in advance, jmituzas
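
    A few checks worth sketching out (hostnames and paths below are placeholders, and the config.php settings named are the ones ownCloud used in that era):

        grep -Ri servername /etc/apache2/sites-enabled/                   # must match the certificate's CN
        openssl x509 -in /etc/ssl/certs/mysite.com.crt -noout -subject    # see what the cert actually says
        sudo a2enmod ssl rewrite headers env dir mime && sudo service apache2 restart
        grep -A5 trusted_domains /var/www/owncloud/config/config.php      # the URL you visit must be listed here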

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (Virtual Machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully it seems like everything is fine; however if, for whatever reason, I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then 7 times out of ten it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows for me to pass -d to the script this lives in): varnishd -a :80 $1 \ -T 127.0.0.1:6082 \ -s malloc,1GB \ -f /home/deploy/mysite.vcl \ -u deploy \ -g deploy \ -p obj_workspace=4096 \ -p sess_workspace=262144 \ -p listen_depth=2048 \ -p overflow_max=2000 \ -p ping_interval=2 \ -p log_hashstring=off \ -h classic,5000009 \ -p thread_pool_max=1000 \ -p lru_interval=60 \ -p esi_syntax=0x00000003 \ -p sess_timeout=10 \ -p thread_pools=1 \ -p thread_pool_min=100 \ -p shm_workspace=32768 \ -p thread_pool_add_delay=1 and the VCL looks like this: # nginx/passenger server, HTTP:81 backend default { .host = "127.0.0.1"; .port = "81"; } sub vcl_recv { # Don't cache the /useradmin or /admin path if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") { pipe; } # If cache is 'regenerating' then allow for old cache to be served set req.grace = 2m; # Forward to cache lookup lookup; } # This should be obvious sub vcl_hit { deliver; } sub vcl_fetch { # See link #16, allow for old cache serving set obj.grace = 2m; if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") { deliver; } remove obj.http.Set-Cookie; remove obj.http.Etag; set obj.http.Cache-Control = "no-cache"; set obj.ttl = 7d; deliver; } Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such inconsistent behaviour.
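
    Some places to look when the child dies silently, as a sketch (the management address comes from the -T option in the startup command above, and tool behaviour may differ slightly on a 2.0.4 build):

        varnishadm -T 127.0.0.1:6082 status     # ask the master process what it thinks of the child
        varnishlog -d                           # replay whatever is still in the shared-memory log
        dmesg | tail -n 30                      # the kernel OOM killer logs here rather than in syslog
        ulimit -a                               # low locked-memory or open-file limits can kill startup quietly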

    Read the article

  • Guide To Setting Up And Accessing PhpMyAdmin On NGINX, Ubuntu 11.04, EC2, Remote MySQL Instance

    - by darkAsPitch
    I have set up a domain name to run on Amazon EC2 running Ubuntu 11.04, nginx and php5-fpm. The domain name works great; I have set up its own sites-available configuration file and symlinked it to sites-enabled. I installed phpmyadmin via sudo apt-get install phpmyadmin and followed the instructions. I then added this server block near the top of my /etc/nginx/nginx.conf file and restarted nginx. server { listen 80; server_name phpmyadmin.domain.com; location / { root /usr/share/phpmyadmin; index index.php; } #make sure all php files are processed by fast_cgi location ~ \.php { # try_files $uri =404; fastcgi_index index.php; fastcgi_pass 127.0.0.1:9000; include fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } I have also added the appropriate DNS A records for phpmyadmin.domain.com. phpmyadmin.domain.com just shows a 404 error code. All other subdomains do not respond at all, so at least something is working here. FYI, I have edited the /etc/phpmyadmin/config.inc.php file so that I can connect to a remote MySQL database. What else do I need to do?
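
    A sketch of things worth verifying; note also that root is set only inside location /, so the PHP location has no document root, and hoisting root /usr/share/phpmyadmin; up to the server{} level is a common fix:

        sudo nginx -t                                          # syntax-check and confirm the file is actually loaded
        curl -I -H 'Host: phpmyadmin.domain.com' http://127.0.0.1/index.php
        ls -l /usr/share/phpmyadmin/index.php                  # confirm the docroot really contains phpMyAdmin
        sudo netstat -lnp | grep :9000                         # is php-fpm listening where fastcgi_pass points?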

    Read the article

  • fglrx-legacy-driver not seeing Radeon HD 4650 AGP

    - by Rocket Hazmat
    I am running Debian Squeeze on an old Dell Dimension 8300 box. It has an AGP Radeon HD 4650 card. I use this machine to mine bitcoins, and today I noticed that the machine had rebooted! My precious uptime! Anyway, my miner wouldn't start, so I figured might as well update my graphics driver, maybe that would fix the issue. I went to amd.com and downloaded the newest driver (12.6 legacy), but after installing it, aticonfig gave an error: aticonfig: No supported adapters detected I uninstalled the driver and figured I'd try to install it from apt. AMD has dropped support for the HD 4000 series in fglrx, forcing me to use fglrx-legacy-driver (currently only in experimental). In order to install this, I had to update libc6 (and some other important packages, like gcc), I had to use their wheezy versions. I finally got fglrx-legacy-driver installed, but I still got: aticonfig: No supported adapters detected Why isn't the driver finding my video card? I have a hunch it has something to do with the fact that it's an AGP video card. Here is the output of lspci -v (why does it say Kernel driver in use: fglrx_pci?): 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller]) Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0028 Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 16 Memory at e0000000 (32-bit, prefetchable) [size=256M] I/O ports at de00 [size=256] Memory at fe9f0000 (32-bit, non-prefetchable) [size=64K] Expansion ROM at fea00000 [disabled] [size=128K] Capabilities: [50] Power Management version 3 Capabilities: [58] AGP version 3.0 Kernel driver in use: fglrx_pci EDIT: fglrx 12.4 seems to work. Thing is, since I am on kernel 3.2, I need to apply this patch to common/lib/modules/fglrx/build_mod/firegl_public.c. I thought ATI dropped support for the 4xxx series after 12.4. Why doesn't 12.6 legacy work?

    Read the article

  • Puppet variables best practice, generalise or specialise?

    - by Andrei Serdeliuc
    I'm trying to figure out which things should be in git within the puppet manifest and which should be in env vars like FACTER_my_var and use that in the manifest instead. Scenario: you are deploying 3 php apps and you've already built all the layers up to the app in other manifests (base system, php extensions, users, etc), and all that's left is installing the correct app (from an apt repo) and creating a vhost. I'm tempted to have something along the lines of: apache::vhost { $::project_hostname: priority => '10', port => '80', docroot => $::project_document_root, logroot => "/var/log/apache2/${$::project_name}", serveradmin => '[email protected]', require => Package[httpd], ssl => false, override => 'all', setenv => ["APP_KERNEL dev"] } This would run on each server, and the FACTER_project_* vars would be set on a per server basis. An obvious restriction of this would be that you can't run more than one app with this specific example. Or would you rather have project_x.pp, project_y.pp which have hardcoded paths and names?
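
    As a sketch of the environment-variable side of this: anything exported as FACTER_* becomes a top-scope fact, so per-server values can be injected at apply time and kept out of git (the values and paths below are examples):

        export FACTER_project_name=appx
        export FACTER_project_hostname=appx.example.com
        export FACTER_project_document_root=/var/www/appx/current
        puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp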

    Read the article

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it: ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks. sysbench --num-threads=4 --test=memory run sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 4 Doing memory operations speed test Memory block size: 1K Memory transfer size: 0M Memory operations type: write Memory scope type: global Threads started! Done. Operations performed: 0 ( 0.00 ops/sec) 0.00 MB transferred (0.00 MB/sec) Test execution summary: total time: 0.0003s total number of events: 0 total time taken by event execution: 0.0000 per-request statistics: min: 18446744073709.55ms avg: 0.00ms max: 0.00ms Threads fairness: events (avg/stddev): 0.0000/0.00 execution time (avg/stddev): 0.0000/0.00

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git, which is designed for handling large files. It makes heavy use of symlinks. The actual large files are moved to the .git/annex directory and the original files are symlinked to there. I am running out of disk space, and need to clean up and see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However, they treat a symlink as essentially empty, and only count the file that it is pointing to as using any space. Which means when I use git-annex, it shows no space used in the main directories and lots of space used in the .git/annex directory. This is not helpful. Is there any (graphical or ncurses-based) disk usage programme for Linux (apt-get installable would be easiest) that is capable (through options or otherwise) of counting a symlink as using up the space that the original file uses up? Many have options for different behaviour for hard links, so it makes sense that some should handle symlinks similarly. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)
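
    Plain du can already do the dereferencing part, which makes for a quick command-line sketch even without a graphical tool (the path is a placeholder):

        du -Lsh ~/annex-checkout                          # -L counts the target's size for every symlink
        du -Lh --max-depth=1 ~/annex-checkout | sort -h   # per-directory breakdown, biggest last
        # ncdu gained a --follow-symlinks option in later releases, if an ncurses view is preferred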

    Read the article

  • Configuring postfix with Gmail

    - by MultiformeIngegno
    This is what I did.. sudo apt-get install postfix This is my /etc/postfix/main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=no smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = tsXXX561.server.topcloud.it alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = [smtp.gmail.com]:587 mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = loopback-only default_transport = smtp relay_transport = smtp inet_protocols = all # SASL Settings smtp_use_tls=yes smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = noanonymous smtp_sasl_tls_security_options = noanonymous smtp_tls_CAfile = /etc/postfix/cacert.pem Then I created the file /etc/mailname with my hostname as content: tsXXX561.server.topcloud.it Then I created the file /etc/postfix/sasl_passwd: [smtp.gmail.com]:587 [email protected]:gmail_password Then sudo postmap /etc/postfix/sasl/passwd sudo cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem service postfix restart Still sends nothing... I'm on Ubuntu Server 12.04.
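
    One detail worth noticing above: main.cf points at /etc/postfix/sasl_passwd while postmap was run against /etc/postfix/sasl/passwd. A sketch of the usual sequence (the test address is a placeholder, and the mail command comes from the mailutils package):

        sudo postmap /etc/postfix/sasl_passwd              # path must match smtp_sasl_password_maps
        sudo chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
        sudo service postfix restart
        echo test | mail -s "postfix relay test" you@example.com
        sudo tail -f /var/log/mail.log                     # watch for SASL authentication or TLS errors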

    Read the article

  • Reset rc.d so software starts at boot again

    - by natli
    I ran the following 2 commands on my VPS box and now it boots without starting any software at all. According to rcconf it's still supposed to start my chosen software (ssh etc.) but it doesn't. update-rc.d vz defaults update-rc.d vzeventd defaults I already tried removing them again with update-rc.d -f vz remove update-rc.d -f vzeventd remove But that didnt't change anything. /etc/rc.local also still correctly lists some scripts I want to run at start-up, but they don't seem to be called either. I expect the top 2 commands to be responsible, but here's everything I did: mkdir /var/openvz-dl cd /var/openvz-dl wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-2.6.32-042stab062.2.x86_64.rpm wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-devel-2.6.32-042stab062.2.x86_64.rpm wget http://download.openvz.org/utils/vzctl/4.0/vzctl-4.0-1.x86_64.rpm wget http://download.openvz.org/utils/vzctl/4.0/vzctl-core-4.0-1.x86_64.rpm wget http://download.openvz.org/utils/ploop/1.5/ploop-1.5-1.x86_64.rpm wget http://download.openvz.org/utils/ploop/1.5/ploop-lib-1.5-1.x86_64.rpm wget http://download.openvz.org/utils/vzquota/3.1/vzquota-3.1-1.x86_64.rpm apt-get install fakeroot alien fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm dpkg -i vz*.deb ploop*.deb --force-overwrite update-rc.d vz defaults update-rc.d vzeventd defaults reboot A huge part of that failed because I was running it on an OpenVZ VPS which has a shared kernel that can't be altered, so I also had to fix the dpkg like so (it was moaning about wanting to install vzkernel with a package not being found); rm /var/lib/dpkg/info/vzkernel* dpkg-reconfigure vzkernel --force dpkg --purge --force-all vzkernel But that didn't fix the boot issue either. How do I make my software start at boot again?
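
    A sketch of how to see what the boot runlevel actually contains and re-register what should start (service names are examples; on Debian/Ubuntu the default runlevel is usually 2):

        runlevel                                   # confirm which runlevel the container boots into
        ls -l /etc/rc2.d/                          # the S??* links are what init will start there
        sudo update-rc.d -f vz remove              # drop the links the failed packages left behind
        sudo update-rc.d -f vzeventd remove
        sudo update-rc.d ssh defaults              # put back the services that should start at boot
        sudo update-rc.d rc.local defaults         # /etc/rc.local is run via the rc.local init script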

    Read the article

  • What could be the reason for a phpMyAdmin login not working at all (no reaction on submit)?

    - by Ivan
    When I open "http://localhost/phpmyadmin/", enter "root" as the user name and my MySQL root password and press go, then if I was using Firefox, I was getting offered to download index.php file (of a zero length), if I was using Opera 11, it said "Connection closed by remote server". Following recommendations I've removed all packages related to phpMyAdmin, PHP, MySQL and Apache and then reinstalled them step-by step (instead of just issuing apt-get install phpmyadmin and relying on the system to install the whole LAMP stack via dependencies as I've done before). The only change I've got was Firefox to stop offering to download index.php - now when I press Ok to submit my password, it just doesn't show any visible reaction at all. What may the reason be and how to fix it? I use up-to-date Xubuntu 11.04. Reinstalling the whole LAMP stack and phpMyAdmin did not help, neither did removing AppArmor. I've tried to use SQLBuddy instead, but there's exactly the same problem. So, I think, the problem is not in phpMyAdmin but in MySQL, Apache or something. MySQL seems to work if I use command line to access it. Apache & PHP seems to work also, as the login page of phpMyAdmin displays correctly.

    Read the article

  • Problem with ubuntu 10.10 running from USB drive

    - by Surjya Narayana Padhi
    I recently downloaded Ubuntu 10.10 and created a USB drive with it. I started to run Ubuntu from that USB drive, but I am facing a lot of problems. I am wondering why it's not as easy as Windows to do all my work in Ubuntu. I always get some error message or have to install something. This time I am getting the following errors. I am trying to download and install Aircrack-ng, so I used the command sudo apt-get install aircrack-ng, but the installation stops with the following error: update-initramfs: deferring update (trigger activated) cp: cannot stat `/vmlinuz': No such file or directory dpkg: error processing bcmwl-kernel-source (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: initramfs-tools bcmwl-kernel-source E: Sub-process /usr/bin/dpkg returned an error code (1) I don't even have the aptitude command installed yet. Are all these errors because I am running Ubuntu from a USB drive? Is there any simple and easy way to go to the Ubuntu Software Center and download all the required essentials in one shot, and then Aircrack-ng? I could not find Aircrack-ng in the Ubuntu Software Center. Can anybody give me detailed steps to solve all my problems above? I am frustrated searching for updates and installations when some things work and some things don't. Can anybody suggest how I should proceed after installing Ubuntu to run on a USB drive, so that I can use the OS like Windows: software downloads, wireless driver, sound, video, documents, C:, D: - everything should be there. Please somebody help.
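
    One hedged workaround, assuming the Broadcom wireless driver from the failing package isn't actually needed on this machine: remove the package whose post-install script needs /vmlinuz, let dpkg finish, then install aircrack-ng (which lives in the universe repository rather than in the Software Center listing):

        sudo apt-get remove --purge bcmwl-kernel-source   # the package whose postinst is failing
        sudo apt-get -f install                           # let dpkg finish configuring initramfs-tools
        sudo apt-get install aircrack-ng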

    Read the article

  • Why is my cron daemon being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (varying anywhere between 1 - 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason: nanosleep({60, 0}, <unfinished ...> +++ killed by SIGKILL +++ There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04 so resources shouldn't be the issue. With cron running, free -m reports: total used free shared buffers cached Mem: 1024 211 812 0 0 0 -/+ buffers/cache: 211 812 Swap: 0 0 0 I also tried removing and reinstalling the cron package using apt-get. Update: I initially thought the problem was a resource issue. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
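
    On an OpenVZ guest, two sketches that often identify the sender of a stray SIGKILL (the audit rule assumes a 64-bit kernel, and auditd may not be permitted inside a container, in which case the bean counters are the main clue):

        cat /proc/user_beancounters        # a non-zero failcnt column means the host's limits killed something
        sudo apt-get install auditd
        sudo auditctl -a exit,always -F arch=b64 -S kill -k cron-kill
        sudo ausearch -k cron-kill         # after the next kill, shows which process sent the signal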

    Read the article

  • Raspberry pi slows down my entire network

    - by gnusouth
    Whenever my Raspberry Pi is connected to the network (via ethernet) the entire network is slowed to a crawl. On my main computer, ping times for google.com go from ~10ms to ~200ms and it takes forever to load web pages. Connections are also slow on the Pi, with an apt-get update showing pathetic speeds on the order of 1KB/s. Turning off the Pi completely removes the drag from the network. I've tried static and dynamic IP addresses for the Pi, but both have the same problems. I'm currently using Raspbian (downloaded today), but also had this problem with Arch Linux. I've checked the connection's duplex with dmesg | grep -i duplex, which shows that the Pi's connection is running at 100Mbps, full-duplex, as expected. My modem/router is a Billion 7404VNPX (an Australian thing); relatively high-end, albeit a bit buggy at times (it will occasionally delete all its firewall settings). It assigns IPs in the range 192.168.1.1 to 192.168.1.20 and has 192.168.1.254 as its own IP. When I assign static IPs I tend to use the 192.168.1.200 area. Does anyone have any idea as to what could be causing this weird slowdown? Or any tests I could try? Thanks
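
    A couple of measurements that can narrow this down, as a sketch (addresses and interface names are assumptions; arping and iperf come from the iputils-arping and iperf packages):

        arping -D -I eth0 192.168.1.200    # from the desktop: duplicate-address check for the Pi's IP
        sudo ethtool eth0                  # on the Pi: confirm 100baseT/Full and that it is not renegotiating
        iperf -s                           # on the Pi ...
        iperf -c 192.168.1.200             # ... and from the desktop, to measure raw LAN throughput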

    Read the article

  • How to connect via SSH to a linux mint system that is connected via OpenVPN

    - by Hilyin
    Is there a way to make SSH port not get sent through VPN so when my computer is connected to a VPN, it can still be remoted in via SSH from its non-VPN IP? I am using Mint Linux 13. Thank you for your help! This is the instructions I followed to setup the VPN: Open Terminal Type: sudo apt-get install network-manager-openvpn Press Y to continue. Type: sudo restart network-manager Download BTGuard certificate (CA) by typing: sudo wget -O /etc/openvpn/btguard.ca.crt http://btguard.com/btguard.ca.crt Click on the Network Manager icon, expand VPN Connections, and choose Configure VPN A Network Connections window will appear with the VPN tab open. Click Add. 8. A Choose A VPN Connection Type window will open. Select OpenVPN in the drop-down menu and click Create.. . In the Editing VPN connection window, enter the following: Connection name: BTGuard VPN Gateway: vpn.btguard.com Optional: Manually select your server location by using ca.vpn.btguard.com for Canada or eu.vpn.btguard.com for Germany. Type: select Password User name: username Password: password CA Certificate: browse and select this file: /etc/openvpn/btguard.ca.crt Click Advanced... near the bottom of the window. Under the General tab, check the box next to Use a TCP connection Click OK, then click Apply. Setup complete! How To Connect Click on the Network Manager icon in the panel bar. Click on VPN Connections Select BTGuard VPN The Network Manager icon will begin spinning. You may be prompted to enter a password. If so, this is your system account keychain password, NOT your BTGuard password. Once connected, the Network Manager icon will have a lock next to it indicating you are browsing securely with BTGuard.
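
    The usual fix is source-based policy routing, so that replies to inbound SSH leave via the LAN gateway instead of the tun0 default route; a sketch, where 192.168.1.50, 192.168.1.1 and eth0 stand in for the box's LAN address, LAN gateway and interface:

        sudo ip rule add from 192.168.1.50 table 128
        sudo ip route add default via 192.168.1.1 dev eth0 table 128
        ip rule show                       # confirm the new rule sits above the main table lookup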

    Read the article

  • Configuring PAM with pam_mount; getting a dlopen() with an HX_Init error

    - by Jamie
    I'm trying to get automounting upon login working on Ubuntu 10.04 Beta 2. I didn't find a package for pam_mount, so I ended up downloading and building it. This required: sudo apt-get install build-essential pkg-config libxml2-dev libssl-dev libpam-dev Additionally, libHX-dev is required, but as of yesterday (23/4/2010) the packaged version (3.2) wasn't up to snuff (3.4 is needed), so I downloaded, compiled and installed that too. cd ./pam_mount-1.36/ && ./configure && make && sudo make install When I tried it (pam_mount) I got this in my auth log: Apr 23 12:18:02 ubuntu sshd[1195]: PAM unable to dlopen(/lib/security/pam_mount.so): /lib/security/pam_mount.so: undefined symbol: HX_init Apr 23 12:18:02 ubuntu sshd[1195]: PAM adding faulty module: /lib/security/pam_mount.so Apr 23 12:18:06 ubuntu sshd[1195]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.20.182 user=jrisk Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): getting password (0x00000388) Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): pam_get_item returned a password Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): user 'jrisk' granted access Apr 23 12:18:06 ubuntu sshd[1195]: Accepted password for jrisk from 192.168.20.182 port 4369 ssh2 Apr 23 12:18:06 ubuntu sshd[1195]: pam_unix(sshd:session): session opened for user jrisk by (uid=0) What do I need to do to get HX_init into the system? This is related to an answer I previously got here.
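
    A sketch of how to confirm which libHX the module actually resolves against (an undefined HX_init symbol usually means the loader is still finding the older 3.2 library instead of the 3.4 build):

        ldd /lib/security/pam_mount.so | grep -i hx   # should point at the libHX 3.4 you built
        sudo ldconfig                                  # register /usr/local/lib after the manual 'make install'
        ldconfig -p | grep -i libhx                    # list every libHX copy the loader knows about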

    Read the article

  • Sudoers scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: Virtual Private Server I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop. The problem: retain control Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example. I need him not to be able to: take away other admin permissions change the root password have control over other security/administrative functions I would like him to still be able to: install software (through apt-get) restart apache access mysql configure mysql/apache reboot edit web development configuration type files in /etc/ Other Standard Setups would be happily considered I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above. Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option, the lists above are more what I'm hoping I can do, I don't know that that setup can be done. I'm sure that people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested as opposed to something I've homegrown.
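
    A hedged starting point rather than a complete policy: a drop-in sudoers fragment built around a command alias (command paths are the usual Ubuntu ones, and anything that can spawn a shell or edit sudoers itself is deliberately left off the list):

        printf '%s\n' \
          'Cmnd_Alias WEBDEV = /usr/bin/apt-get, /usr/sbin/service apache2 *, /usr/sbin/service mysql *, /sbin/reboot' \
          'developer ALL = (root) WEBDEV' | sudo tee /etc/sudoers.d/webdev >/dev/null
        sudo chmod 0440 /etc/sudoers.d/webdev
        sudo visudo -c                          # syntax-check before ending the session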

    Read the article

  • Ubuntu 13.10 - How to disable LVM and cryptsetup? cryptsetup: evms_activate is not available

    - by NeverEndingQueue
    I am trying to remove whole drive encryption from my Ubuntu installation. I've run Ubuntu from a Live CD, mounted the encrypted partition and copied it to another partition, /dev/sda3. sudo cryptsetup luksOpen /dev/sda5 crypt1 sudo dd if=/dev/ubuntu-vg/root of=/dev/sda3 bs=1M After that I've run boot-repair: https://help.ubuntu.com/community/Boot-Repair Added entry to /etc/fstab: UUID=<uuid> / ext4 errors=remount-ro 0 1 Of course I've replaced <uuid> with the blkid result for my /dev/sda3. I've also deleted the overlayfs and tmpfs lines from /etc/fstab. (I just compared it to the content of /etc/fstab in a non-encrypted Ubuntu installation and could not find overlayfs and tmpfs there.) I've chrooted from the LiveCD into my system and rebuilt the initramfs: http://blog.leenix.co.uk/2012/07/evmsactivate-is-not-available-on-boot.html I've also removed cryptsetup using apt-get remove. Basically I can easily mount my system partition from the Live CD (without setting up the encryption and LVM stuff), but cannot boot from it. Instead I see: cryptsetup: evms_activate is not available When I chose Recovery mode I saw this: Begin: Mounting root file system ... Begin: Running /script/local-top ... Reading all physical volumes. This may take a while ... No volume groups found cryptsetup: evms_activate is not available Begin: Waiting for encrytpted source device ... My /etc/crypttab is empty. I am pretty sure that the system tries to find an encrypted partition, searches for LVM volumes, etc. Do you have any ideas what the problem could be or how I can fix it? Thanks
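
    A sketch of the chroot sequence that usually clears the stale cryptsetup/LVM hooks out of the initramfs (device names follow the question; skip the purge of lvm2 if LVM is still wanted elsewhere):

        sudo mount /dev/sda3 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
        sudo chroot /mnt /bin/bash
        apt-get purge cryptsetup lvm2      # inside the chroot
        update-initramfs -u -k all         # rebuild every initramfs without the crypto/LVM scripts
        update-grub
        exit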

    Read the article

  • Dovecot starting and running, but not listening on any port

    - by Dženis Macanovic
    Among other things I'm in charge of a Debian GNU/Linux (Wheezy) DomU for the mail services of the company I work for. Yesterday one HDD that was used for this particular server died. After installing Debian again, Dovecot decided to no longer listen on any ports (checked with netstat -l). Other services (like Postfix and MySQL) work without problems. dovecot -n: # 2.1.7: /etc/dovecot/dovecot.conf # OS: Linux 3.2.0-3-amd64 x86_64 Debian wheezy/sid ext3 auth_mechanisms = plain login disable_plaintext_auth = no first_valid_uid = 150 last_valid_uid = 150 mail_gid = mail mail_location = maildir:/var/vmail/%d/%n mail_uid = vmail namespace inbox { inbox = yes location = prefix = } passdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql } plugin { sieve = ~/.dovecot.sieve sieve_dir = ~/sieve } service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-userdb { group = mail mode = 0666 user = vmail } } service imap-login { inet_listener imaps { port = 993 ssl = yes } } service pop3-login { inet_listener pop3s { port = 995 ssl = yes } } ssl_cert = </etc/ssl/private/mail.crt ssl_key = </etc/ssl/private/mail.key userdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql } protocol imap { mail_max_userip_connections = 25 } UID 150 is vmail (I double-checked file permissions). I didn't install Dovecot from source, but via apt from the official Debian US mirror. There are no messages concerning Dovecot in /var/log/syslog except for: Oct 21 06:36:29 server dovecot: master: Dovecot v2.1.7 starting up (core dumps disabled) Any ideas?
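
    A sketch of the checks that usually explain a silent non-listener (doveconf and the -F foreground switch are part of the Dovecot 2.x tools):

        doveconf protocols                   # must not be empty (imap pop3 ... expected)
        doveconf listen                      # should cover *, ::
        sudo dovecot -F                      # run in the foreground and watch for bind/permission errors
        sudo netstat -lntp | grep dovecot    # confirm 993/995 once it stays up
        sudo tail -n 50 /var/log/mail.log /var/log/mail.err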

    Read the article

  • Pull network or power? (for containing a rooted server)

    - by Aleksandr Levchuk
    When a server gets rooted (e.g. a situation like this), one of the first things that you may decide to do is containment. Some security specialists advise not to enter remediation immediately and to keep the server online until forensics are completed. That advice is usually for APTs; it's different if you have occasional script kiddie breaches. However, you may decide to remediate (fix things) early, and one of the steps in remediation is containment of the server. Quoting from Robert Moir's answer - "disconnect the victim from its muggers". A server can be contained by pulling the network cable or the power cable. Which method is better? Taking into consideration the need for: protecting victims from further damage; executing successful forensics; (possibly) protecting valuable data on the server. Edit: 5 assumptions. Assuming: You detected early: 24 hours. You want to recover early: 3 days of 1 systems admin on the job (forensics and recovery). The server is not a Virtual Machine or a Container able to take a snapshot capturing the contents of the server's memory. You decide not to attempt prosecution. You suspect that the attacker may be using some form of software (possibly sophisticated) and this software is still running on the server.

    Read the article
