Search Results

Search found 14041 results on 562 pages for 'home theater'.

Page 168/562 | < Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >

  • Can't connect to or see my wifi ssid

    - by ant
    Today I installed Ubuntu 12.04 on my laptop. I am unable to see my home SSID or even connect to it. I've tried to connect to it as a hidden SSID, but I always get prompted for authorization although my key is correct. I'm in Europe but my laptop is from the US; I'm not sure if that is relevant. I've read around this site and saw something about setting the channel above 11. I'm not sure I did that correctly: I followed "How to use Wi-Fi channels above 11?" but it didn't help. I'm able to connect with a cable but not via wifi, in either Windows or Linux. Other devices in my home can connect without any issues, even the Kindle. Here is the screenshot from my router, and here is some additional info:

        lspci | grep -i network
        08:00.0 Network controller: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express) (rev 01)

        lspci -nnk | grep -A2 0280
        08:00.0 Network controller [0280]: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01)
            Subsystem: Hewlett-Packard Company U98Z062.10 802.11bgn Wireless Half-size Mini PCIe Card [103c:303f]
            Kernel driver in use: ath9k

        nm-tool
        NetworkManager Tool
        State: connected (global)

        Device: wlan0
        ----------------------------------------------------------------
        Type:              802.11 WiFi
        Driver:            ath9k
        State:             disconnected
        Default:           no
        HW Address:        90:4C:E5:38:79:0D

        Capabilities:
        Wireless Properties
          WEP Encryption:  yes
          WPA Encryption:  yes
          WPA2 Encryption: yes

    I'm not sure what to do next. Any suggestions?
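
    Symptoms like these on an AR9285 are often attributed to the regulatory domain: a US-sold card defaults to the US domain and will not use channels 12-13, which are legal in Europe and common on European routers. A minimal sketch of checking and changing the domain (DE is only an example; use your own country code):

        # Check the current regulatory domain and which channels the card will use
        iw reg get
        iwlist wlan0 channel

        # Temporarily switch to a European regulatory domain
        sudo iw reg set DE

        # Make it persistent on Ubuntu 12.04 via /etc/default/crda
        sudo sed -i 's/^REGDOMAIN=.*/REGDOMAIN=DE/' /etc/default/crda

    If the router is parked on channel 12 or 13, either of these changes (or moving the router to a channel at or below 11) should make the SSID visible again.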

    Read the article

  • How to automatically execute a shell script when logging into Ubuntu

    - by Mike Rowave
    How do I get a script to execute automatically when I log in? Not when the machine starts up, and not for all users, but only when I (or any specific user with the script) log in via the GNOME UI. From reading elsewhere I thought it was .bash_profile in my home directory, but for me it has no effect. When I manually execute it in a terminal window by typing ~/.bash_profile it works, but it won't run automatically when I log in. I'm running Ubuntu 11.04. The file permission on my .bash_profile is -rwx------. No .bash_profile existed in my home directory before I created it today. I seem to remember older versions of Linux having a .profile file for each user, but that doesn't work either. How is it done? Do I need to configure something else to get the .bash_profile to work? Or does the per-user login script need to go in some other file?
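
    ~/.bash_profile is only read by login shells, and a GNOME desktop login is not a login shell, so the file is never sourced there. One common approach is to register the script with the desktop session's autostart mechanism instead; the script path below is only a placeholder:

        # ~/.config/autostart/login-script.desktop
        [Desktop Entry]
        Type=Application
        Name=Per-user login script
        Exec=/home/mike/bin/login-script.sh
        X-GNOME-Autostart-enabled=true

    The script itself needs to be executable (chmod +x) for the Exec= line to run it.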

    Read the article

  • connect to ssh server thru 80 via HTTP proxy?

    - by im_chc
    Hi, please help: I want to connect to my ssh server at home. However, I'm behind a corporate (CORP) firewall, which blocks almost all ports (443, 22, 23, etc.). But port 80 seems not to be blocked, because I am able to surf the web after I log in (i.e. IE is set to CORP's proxy server; I start IE, the CORP intranet portal is displayed, I type in google.com, a dialog pops up for userid + password, login succeeds, and I can surf without restrictions). My ssh server listens on port 443. My question is: is there a way to connect from a computer behind the CORP firewall to the ssh server through port 80, with the ssh server still listening on port 443? Changing the ssh server to listen on port 80 is not an option, because my home ISP blocks 80. Can I use a public proxy which listens on 80? After some research on Google I found that there is something called "connect to SSH through an HTTP proxy" using the Corkscrew software. Is it useful? Or is there some other way to solve the problem?
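
    Corkscrew wraps the ssh connection in an HTTP CONNECT request, so it only works if the corporate proxy allows CONNECT to the target port; since browsers already reach HTTPS sites on 443 through that proxy, CONNECT to 443 is usually permitted. A sketch of the client-side setup (the proxy host, port and credentials file are placeholders):

        # ~/.ssh/config
        Host home
            HostName myhome.example.com
            Port 443
            ProxyCommand corkscrew proxy.corp.example.com 80 %h %p ~/.ssh/proxyauth

        # ~/.ssh/proxyauth holds "username:password" for the proxy, if it requires authentication

    With that in place, "ssh home" goes out through the corporate proxy while the server keeps listening on 443.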

    Read the article

  • How to remove the smell of cigarette from the computer?

    - by Tio
    So I'm a smoker, and of course I smoke in front of the computer when I'm at home. Last Friday I moved out of my mother's house to my own place, and since the computers were always turned on and sat in a room where everybody could smoke, the smell didn't bother me. At my new home I turned them on about 15 minutes ago, and I'm dying because of the smell of cigarettes (this may sound kind of stupid coming from a smoker, but I hope some of you understand). The solution has to be relatively quick, because I can't stay without the desktop and the server for a week, for example. Tomorrow I'm going to open them both and remove all the dust inside; this should clear some of the smell, but probably won't remove it completely. Does anyone know a technique to get rid of the smell?

    Read the article

  • Trying to create a git repo that does an automatic checkout every time someone updates origin

    - by Dane Larsen
    Basically, I have a server with a git repo 'origin'. I'm trying to have another repo auto-pull from origin every time someone pushes code to it. I've been using the hooks in origin, specifically post-receive. So far, my post-receive looks something like this:

        #!/bin/sh
        GIT_DIR=/home/<user>/<test_repo> git pull origin master

    But when I push to origin from another computer, I get the error:

        remote: fatal: Not a git repository: '/home/<user>/<test_repo>'

    However, test_repo most definitely is a git repo. I can cd into it and run 'git pull origin master' and it works fine. Is there an easier way to do what I'm trying to do? If not, what am I doing wrong with this approach? Thanks in advance. Edit, to clarify: the repo is a website in progress, and I'd like to have a version of it available at all times that is fully up to date.
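
    The usual cause is GIT_DIR: the hook runs with GIT_DIR already set by git, and the assignment above points it at the working tree instead of at that tree's .git directory, so the pull looks for a repository in the wrong place. A commonly suggested rework of the hook (paths are the same placeholders as above) clears GIT_DIR and changes into the clone before pulling:

        #!/bin/sh
        # post-receive on 'origin': update the working copy after every push
        unset GIT_DIR
        cd /home/<user>/<test_repo> || exit 1
        git pull origin master

    For publishing a website, an alternative pattern is to skip the second clone entirely and have the hook run "GIT_WORK_TREE=/path/to/site git checkout -f master" against the bare repo.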

    Read the article

  • Fluxbox startup file not working

    - by Jack
    I am placing apps into my fluxbox startup file as per the instructions, however nothing starts up except fluxbox. It doesn't matter what app I try, so it isn't an app problem. Here is my startup file:

        #!/bin/sh
        #
        # fluxbox startup-script:
        #
        # Lines starting with a '#' are ignored.

        # Change your keymap:
        xmodmap "/home/josh/.Xmodmap"

        # Applications you want to run with fluxbox.
        # MAKE SURE THAT APPS THAT KEEP RUNNING HAVE AN ''&'' AT THE END.
        tint2 &
        tilda &

        # And last but not least we start fluxbox.
        # Because it is the last app you have to run it with ''exec'' before it.
        exec fluxbox
        # or if you want to keep a log:
        # exec fluxbox -log "/home/josh/.fluxbox/log"

    I have also tried tests such as "touch ~/testwoked" and such; nothing works. It makes no difference whether the file is executable or not.
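
    ~/.fluxbox/startup is only executed when the session is started through the startfluxbox wrapper; if the login manager launches the fluxbox binary directly, the file is skipped entirely, which matches the "only fluxbox comes up" symptom. A minimal way to force the session through the wrapper is a custom ~/.xsession (assuming the login manager offers a user-defined/Xsession entry):

        #!/bin/sh
        # ~/.xsession: run Fluxbox via its wrapper so ~/.fluxbox/startup is honoured
        exec startfluxbox

    Alternatively, pick the "Fluxbox" session entry provided by the package at the login screen rather than a generic one.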

    Read the article

  • Vacation sends autoreply message to the recipient as well

    - by elitalon
    Hi, I have configured my Postfix server with vacation for a domain. Sending a message to [email protected] causes two events: the message is delivered to the recipient ([email protected]), and an auto-reply is sent to the sender, alerting that [email protected] should be used instead. Everything works well except for one particular drawback: the auto-reply is also sent to the recipient, so it receives two messages in the end. What can I do to avoid that? I'm only using the $TO variable in the custom vacation.msg message. And here is Postfix's master.cf vacation line:

        autoreply unix - n n - - pipe
          flags=Rhu user=vacation argv=/usr/bin/vacation -j -m /home/vacation/.vacation.msg -f /home/vacation/.vacation.db vacation

    I know using -j is a little bit risky according to the man page, but I'm kind of testing here.

    Read the article

  • Oracle Secure Global Desktop - Business Continuity During Snowstorm!

    - by Mohan Prabhala
    Capgemini, one of the world's largest management consulting, outsourcing and professional services companies, is an Oracle Secure Global Desktop customer and uses it to provide secure, remote access to 1) corporate applications centralized in the datacenter and 2) desktops hosted on Oracle VDI. Earlier this month, one of Capgemini's government customers in Holland was advised to avoid traveling to work due to a heavy snowstorm. This resulted in a lot of employees working from home. Thanks to their deployment of the Oracle Secure Global Desktop gateway, employees were able to easily access their corporate applications and desktops from home and anywhere outside of their office. Capgemini reports that during the days of the snowstorm a record number of users leveraged Oracle Secure Global Desktop (servers and gateway). Despite this record usage, Oracle Secure Global Desktop remained perfectly stable and allowed users to seamlessly access their applications and desktops. This is a great example of how Oracle Secure Global Desktop enables employee productivity and business continuity even during severe weather conditions such as snowstorms. We are delighted to have enabled business continuity for Capgemini's customers, and look forward to our continued relationship with Capgemini. This blog has been approved for posting by Capgemini.

    Read the article

  • How to make an ISO copy of Linux-filesystem and user files of VPS Debian based?

    - by moogeek
    Hello! I have a Debian-based VPS with some hosting provider. I want to migrate away from it, and I need to make a full copy of the whole Linux filesystem (and installed packages) plus the home directory with the website files, and then pack/convert it to an ISO image so that I can use it on cloud hosts like Amazon. The problem is that I only have ssh root access; hosting support can't do it for me. The other part of the question: is it possible to enlarge the Linux filesystem without re-installing, using the free space of the home directory? Is this possible to do? I guess it is possible with rsync or something like that. Will my MySQL databases copy together with all the other data? Thanks in advance!
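
    With only root ssh access, the usual approach is to pull the live filesystem over rsync and build an image from the copy locally, dumping MySQL separately so the databases are consistent. A sketch (the local paths, VPS hostname and exclude list are illustrative):

        # Copy the whole filesystem, skipping pseudo-filesystems
        rsync -aAXHv --numeric-ids \
          --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/lost+found"} \
          root@vps.example.com:/ /backup/vps-root/

        # Dump the databases separately (assumes credentials in /root/.my.cnf on the VPS)
        ssh root@vps.example.com 'mysqldump --all-databases' > /backup/all-databases.sql

    Raw MySQL data files copied from a running server are not guaranteed to be consistent, which is why the separate dump matters.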

    Read the article

  • Can I buy a wireless access point that also acts as a DNS nameserver?

    - by Brabster
    Hi, I was wondering whether I can buy a wireless access point/router that also acts as a DNS nameserver for its DHCP clients. I can see the hostnames of my home devices in the DHCP clients table of the router I have, so it doesn't seem like a great leap of the imagination to have a local nameserver on there, something like hostname.home, that automatically publishes those entries to a local zone. But I can't find one that does that. Is there a reason why this shouldn't or can't be done? Or is my Google-fu just weak? Cheers,
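
    Routers that run dnsmasq (much consumer firmware, and anything with OpenWrt/DD-WRT) can do exactly this: dnsmasq registers the hostnames it hands out over DHCP and serves them under a local domain. A sketch of the relevant options (the .home domain and the address range are examples):

        # /etc/dnsmasq.conf on the router
        domain=home              # domain handed to DHCP clients
        local=/home/             # never forward .home queries upstream
        expand-hosts             # answer for bare hostnames with the domain appended
        dhcp-range=192.168.1.100,192.168.1.200,12h

    With that in place, a lease for "laptop" becomes resolvable as laptop.home on the LAN.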

    Read the article

  • Problem with script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error. However, if I copy and paste the same command generated by the script, everything works... Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   #Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
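
    "Expected 2 args, got 6" is the classic word-splitting symptom: the quote characters emitted by exclude() are never re-parsed by the shell, so any excluded path containing whitespace is split into several arguments and duplicity sees more than just SOURCE and DEST. A hedged rework that collects the options in a bash array (reusing the same variables as the script above) avoids the quoting dance:

        # Build the --exclude options as array elements, one pair per file
        EXCLUDES=()
        while IFS= read -r FILE; do
            EXCLUDES+=( --exclude "**${FILE##/*/}" )
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            "${EXCLUDES[@]}" "$SOURCE" "$DEST"

    Each array element survives as a single argument, so paths with spaces no longer break the call.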

    Read the article

  • Ubuntu 12.04 installation on GPT + RAID going into grub rescue

    - by Proy
    I have two 2TB disks. I am installing Ubuntu 12.04 using the alternate version of the server CD. On the partitioning page I have done my partitioning as follows:

        /dev/sda1 - 32 MB               - bios_grub
        /dev/sda2 - 50 GB               - raid device
        /dev/sda3 - 8 GB                - raid device
        /dev/sda4 - balance of the disk - raid device
        /dev/sdb1 - 32 MB               - bios_grub
        /dev/sdb2 - 50 GB               - raid device
        /dev/sdb3 - 8 GB                - raid device
        /dev/sdb4 - balance of the disk - raid device

    After this I have set up the RAID devices:

        /dev/md0 (/dev/sda2 + /dev/sdb2) for /      ext4
        /dev/md1 (/dev/sda3 + /dev/sdb3) for swap
        /dev/md2 (/dev/sda4 + /dev/sdb4) for /home  ext4

    The installation finishes and shows that it is installing grub to /dev/sda and /dev/sdb. But once the system reboots it falls into grub rescue mode. On doing ls I cannot see the md devices, only the hd ones. I also tried booting into rescue mode with the install CD and doing grub-install /dev/sda and /dev/sdb. What am I doing wrong? Why is grub2 not detecting the RAID devices? UPDATE: I just did the same steps with Ubuntu 10.04 and it worked perfectly fine. I wiped out the RAID and partitions and everything and did it from scratch. I think the issue is with Ubuntu 12.04 and the way it partitions 2 TB disks.
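
    One commonly suggested recovery path, assuming the arrays themselves assemble fine from the install CD, is to reinstall GRUB with its GPT and mdraid modules named explicitly while chrooted into the real system; this is a sketch of the standard chroot repair, not a confirmed fix for the 12.04 behaviour described above:

        # From the server CD rescue shell, with /dev/md0 holding /
        mount /dev/md0 /mnt
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys  /mnt/sys
        chroot /mnt grub-install --modules="part_gpt mdraid09 mdraid1x" /dev/sda
        chroot /mnt grub-install --modules="part_gpt mdraid09 mdraid1x" /dev/sdb
        chroot /mnt update-grub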

    Read the article

  • Outlook 2010: How do I mark one recurring event public?

    - by goober
    My office utilizes Outlook 2010 and Exchange for e-mail, and our calendars show free/busy information by default. Background: I work from home once a week, so I have created an event that lists me as tentative for the entire workday, titled "Working from Home - Available Remotely". However, those attempting to schedule a meeting with me won't see this title, and therefore won't think they can schedule an event. As much as I'd like to get out of meetings (!), it's important that folks be able to schedule with me. Question: is there a way to make the title/details public for this one recurring event, so that others attempting to schedule a meeting with me can see that I am available remotely? Attempted solutions: I've tried creating a public calendar and sharing all the details of that calendar. However, my other calendars are not included when someone wants to schedule with me, so I'm shown as free unless someone specifically looks at my public calendar. I've Googled around, to no avail.

    Read the article

  • How to setup an iTunes library to use between two Macs?

    - by stead1984
    As you can tell, this is nowhere near work related. I have an iMac G5 where my iTunes library is currently hosted, and I have also just got a new MacBook Pro. What I want to be able to do is sync my iTunes library from the iMac to the MacBook Pro, so that it is accessible away from my home network; then, if I make any changes to the iTunes library (like changing a track name), it will sync those changes back once I reconnect to the home network. My current iTunes library contains music, videos, podcasts, playlists and iPhone apps. I would also like iTunes to track play counts collectively between the iMac and the MacBook Pro.

    Read the article

  • VNC grey screen and start on boot 12.04

    - by Siriss
    I have 12.04 LTS installed and I am trying to get VNC to work. I want to be able to connect to existing sessions, and have it start on boot. I followed this guide and have left a comment to try and fix my problems, but no dice. I have also tried all the solutions I have found on Google, including the one here, but I could not get it to work (I am missing something easy, I am sure). When I connect to the VNC session I get a grey screen with three checkboxes: "Accept clipboard from viewers", "Send clipboard to viewers", "Send primary selection to viewers". Here is my xstartup:

        #!/bin/sh

        # Uncomment the following two lines for normal desktop:
        unset SESSION_MANAGER
        # exec /etc/X11/xinit/xinitrc
        gnome-session -session=gnome-classic &

        [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        xsetroot -solid grey
        vncconfig -iconic &
        #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        #x-window-manager &

    I have also edited my to include:

        /usr/bin/vncserver -geometry 1024x768

    It does not start on boot, but when I run the command it starts, and I get the grey screen. Any help would be greatly appreciated. Thank you!!
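
    The grey desktop with those three checkboxes is just vncconfig running in an otherwise empty session, which usually means the window-manager line never took effect. Two things worth noting, as a sketch rather than a confirmed fix: gnome-session's option is --session with two dashes, and the line that exec's /etc/vnc/xstartup replaces the script before anything after it runs. A commonly used xstartup for 12.04 looks like this (the exact session name depends on which session packages are installed):

        #!/bin/sh
        # ~/.vnc/xstartup
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        xsetroot -solid grey
        vncconfig -iconic &
        gnome-session --session=gnome-fallback &   # or ubuntu-2d / gnome-classic

    Restart the VNC server afterwards (vncserver -kill :1 && vncserver -geometry 1024x768) so the new xstartup is picked up.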

    Read the article

  • How to install Ubuntu 13.10 on Hybrid Disk alongside Windows 8.1

    - by user205691
    I am having trouble installing Ubuntu 13.10 on an HP Envy 4-1046tx ultrabook. When I bought it, it came with Windows 7 pre-installed, but I upgraded it to 8 and recently to 8.1. Somehow I feel 8.1 is slower, or something went wrong with the upgrade and made my system slow, so I want to try dual-booting Ubuntu 13.10 with Windows 8.1. The system recovery drive has the Windows 7 recovery files. The SSD has 4GB allocated to Windows 8 (I think for hibernation/rapid start); 25GB of the SSD is free and I want to install Ubuntu on this SSD, pointing it to "/". I will also shrink the Windows partition (the only other partition available apart from recovery and SSD) to free up 100GB and allocate this space to /home during the Ubuntu installation. I tried the above steps while on Windows 8, but was not successful: the Ubuntu installation went fine, but GRUB was not loaded. I tried to deploy Linux via EasyBCD, but after that too, selecting Linux at boot would just load GRUB on a command prompt and do nothing. During the Ubuntu installation I also deleted the RAID drivers with sudo dmraid -rE, but Ubuntu still didn't recognize my Windows. I think I am missing some steps, so this time I want to do it right with proper info before starting the process. My requirements: dual boot Ubuntu with Windows 8.1; C:\ shrunk, with Windows keeping 300GB on sda1 and 100GB for /home on sda1, and Ubuntu installed on the 25GB SSD volume sda2 (this is mSATA I think); GRUB or EFI that loads both OSes properly without breaking anything; a swap partition can be added on sda1 if needed (4GB?). I have backed up my drive and have a 16GB USB 3.0 stick with Ubuntu loaded. I hope I have mentioned everything I need and know. All I need now is some guidance on what to do right so that this installation goes as planned :)
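
    Since this machine started life with Windows 7 and was upgraded in place, Windows may be booting in either legacy BIOS or UEFI mode, and GRUB only appears if Ubuntu is installed in the same mode as Windows. A couple of hedged checks:

        # In the Ubuntu live/USB session: was it booted via UEFI?
        [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy/BIOS boot"

        # On a UEFI install, list the firmware boot entries and reorder them afterwards
        sudo efibootmgr -v
        sudo efibootmgr -o 0003,0000     # entry numbers are illustrative

    If Windows turns out to be a legacy/MBR install, boot the USB stick in legacy mode too and let GRUB go to the disk's MBR; if it is UEFI, boot the stick in UEFI mode so GRUB lands on the EFI System Partition.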

    Read the article

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script generates the proper command, but when run within the script it outputs an error. However, if the same command is run manually, everything works...??? Here is the script, based on one easily found with Google:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   #Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    If run, I receive the error:

        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.

    Any help you could offer would be greatly appreciated.

    Read the article

  • Jail Linux user to directory for FTP login

    - by Greg
    I'm planning on using vsftpd as a secure FTP server, but I am having difficulty setting up the Linux users that will be used as FTP logins. The users are required to be "jailed" into a specific directory (and its subdirectories) and have full read/write access. Requirements: user account "admin_ftp" should be jailed to the /var/www directory; other accounts will be added as needed, one per site, e.g. user account "picturegallery_ftp" should be jailed to the /var/www/picturegallery.com directory. I have tried the following, but to no avail:

        # Group to store all ftp accounts in.
        groupadd ftp_accounts

        # Group for single user, with the same name as the username.
        groupadd admin_ftp
        useradd -g admin_ftp -G ftp_accounts admin_ftp

        chgrp -R ftp_accounts /var/www
        chmod -R g+w /var/www

    When I log into FTP using the account admin_ftp, I am given the error message:

        500 OOPS: cannot change directory:/home/admin_ftp

    But didn't I specify the home directory? Extra internets for a guide on how to do this specifically for vsftpd :)
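
    The "500 OOPS" points at the account's home directory, which the useradd above never set or created (no -d, no -m), rather than at the jail. A hedged sketch of pointing the home at the jail and letting vsftpd chroot on it:

        # Point the existing account's home at the directory it should be jailed to
        sudo usermod -d /var/www admin_ftp

        # /etc/vsftpd.conf
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES        # jail each local user to their home directory

    Per-site accounts such as picturegallery_ftp get their own home (e.g. /var/www/picturegallery.com) the same way; note that on newer vsftpd builds a writable chroot root additionally requires allow_writeable_chroot=YES.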

    Read the article

  • Ubuntu 12.04 Server ping gateway responds with destination host unreachable

    - by blckblttkd
    I consider myself fairly adept with Ubuntu and Linux, but this one has me stumped. I built a Xen server using Ubuntu 12.04 as the base operating system; it has multiple domUs running on it. My home network has a statically defined network where I got all the network connectivity going peachy. The server was moved to its permanent home this morning, so the network configuration on the host system had to change. Again another static network, but now I can't ping the upstream gateway from the host. As the VMs use this NIC over a bridge, they too are broken. Ping responds with "Destination Host Unreachable." I simplified the networking down to a simple static configuration as seen below (no bridge or anything) just to get it to work. Here are the contents of my /etc/network/interfaces file:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 216.7.188.228
            gateway 216.7.188.225
            netmask 255.255.255.240
            broadcast 216.7.188.255
            network 216.7.188.0
            dns-nameservers 8.8.8.8 8.8.4.4

    Here is the output of route -n:

        0.0.0.0        216.7.188.225  0.0.0.0          UG  100  0  0  eth0
        216.7.188.224  0.0.0.0        255.255.255.240  U   0    0  0  eth0

    And the results of pinging the gateway:

        PING 216.7.188.225 (216.7.188.225) 56(84) bytes of data.
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
        From 216.7.188.228 icmp_seq=1 Destination Host Unreachable

    Again, this worked flawlessly in the previous network (obviously with different parameters in the interfaces file). I did try using eth1, as there are two NICs on the server, in case the MAC address got flipped on boot-up. No success there. Yes, the cable is in the right port now :) Any thoughts? I appreciate the help!
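
    "Destination Host Unreachable" reported from your own address means ARP for the gateway is failing, which is almost always a layer 1/2 issue: wrong jack or VLAN, a link that is down, or the address configured on the wrong one of the two NICs. A few hedged checks:

        # Is the link up, and on which interface?
        ip link show
        sudo ethtool eth0 | grep "Link detected"

        # Does the gateway answer ARP at all?
        sudo arping -I eth0 -c 3 216.7.188.225

        # 12.04 pins NIC names to MAC addresses; check they were not swapped when the box moved
        cat /etc/udev/rules.d/70-persistent-net.rules

    If arping gets no reply on either NIC, the problem is upstream of the server (port configuration at the new site) rather than in the interfaces file.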

    Read the article

  • After closing the ssh terminal, the thin server is down

    - by Keating Wang
    I have a Rails project that runs on the thin server (1.3.1) on an Ubuntu server. I ssh to the server and start thin with the command 'thin start -C config/thin.yml'. The thin.yml follows:

        port: 3000
        log: log/thin.log
        timeout: 30
        chdir: /home/byht/56platform/dev/tracker
        environment: production
        servers: 1
        daemonize: true

    After thin starts successfully, I visit the project and it works well. Then I close the terminal. I can still visit the pages that have already been visited, but when I visit pages that were not visited before closing the ssh terminal, a "500" error appears on the page. I didn't find any error messages in the log file. I have tried starting thin with nohup and sudo, but they are useless. If I sign in to the Ubuntu server locally, the problem disappears, but I need to sign in to the server over ssh and start thin when I'm at home.
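
    Since thin is daemonized, it should survive the logout, so the 500s on fresh pages are more likely an application error than a dead server process. One hedged way to see what is failing is to watch the Rails log while reproducing the error, and, for a test, to keep a foreground thin attached to a tmux session (this assumes tmux is installed and daemonize is temporarily set to false in thin.yml):

        # Watch the logs while reproducing the 500
        tail -f log/thin.log log/production.log

        # Or run thin in the foreground inside a detached tmux session
        tmux new-session -d -s thin 'cd /home/byht/56platform/dev/tracker && thin start -C config/thin.yml'
        tmux attach -t thin

    If the site keeps working this way after the ssh connection closes, the next step is to pin down what the daemonized process was missing from the login environment.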

    Read the article

  • Possible to get OpenDNS to dereference Host on VPN?

    - by Scott P
    I recently changed ISPs for my home internet, and I am now having trouble getting back into the corporate network from home over the VPN. I have figured out that OpenDNS is resolving the hosts on the VPN incorrectly when I am using TCP/IP. When I browse to one of the hosts on the corporate network, i.e. \\host1, from the file manager, this succeeds. However, when I ping the host, i.e. ping host1, the name resolves to the OpenDNS name server's address instead of the actual host IP address. Does anyone know how to make this work? On a hunch, I turned off typo correction, but this did not help.

    Read the article

  • Encrypt SSD or not?

    - by JamesBradbury
    My desktop machine is running Ubuntu 12.04 (and will probably stay with it until the next LTS). I've got a new 120GB SSD on the way, in addition to my existing 420GB spinning disk. If it makes any difference, I'll be dual-booting with Windows 7 across both disks too. I've read some helpful answers here about /home setup and enabling TRIM, which I intend to follow. So most of my /home will be on the SSD, with only photos, videos and music on the spinning disk. The question is: when I reinstall Ubuntu from CD or USB, should I encrypt the SSD? Specifically: I'm reading that drive wear isn't much of an issue with modern SSDs, as they last decades even if you spam them. Is this true? How big a performance reduction will encryption cause (I have a Sandy Bridge i7, so I guess it can cope)? Is it more important from a security point of view to encrypt an SSD? I think I read somewhere that it may be hard to reliably wipe data from one. By all means answer even if you only know about one of those things.
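
    On the TRIM point specifically: with dm-crypt/LUKS, discard requests are dropped at the encryption layer unless they are explicitly allowed, so an encrypted SSD needs one extra knob besides the usual fstab/fstrim setup. A sketch of what that looks like (the mapping name and UUID are placeholders, and the crypttab discard option needs a reasonably recent cryptsetup):

        # /etc/crypttab: let TRIM pass through the encrypted mapping
        sda2_crypt UUID=<uuid-of-the-luks-partition> none luks,discard

        # /etc/fstab: either mount with discard or run fstrim periodically instead
        /dev/mapper/sda2_crypt  /home  ext4  defaults,noatime,discard  0  2

    The trade-off is that allowing discards reveals which blocks are unused, which slightly weakens the "everything looks like random data" property of full-disk encryption.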

    Read the article

  • Most effective way to do daily standup meeting when a few people are remote

    - by Burhan Ali
    I am a software developer in a small team of seven. We are not an Agile (with a big 'A') team, but we are experimenting with some aspects of agile. One of these is the daily "standup" meeting. The difficulty here is that for two days of the week we have at least one person working from home, so the full team isn't available in the same room. What is the best way to carry out a daily standup in this situation? Some facts that may be relevant: we all work in a single open-plan room; we use Skype in our company; we don't have any video conferencing capability; we all work the same hours, so there are no timezone complexities involved; and the development manager is one of the people who works from home one day a week. Things we have tried: a conference call using Skype, which is tricky for those in the office because you hear people speak in the room and then a split second later through the headset, which can be very distracting; a conference phone, an awful experience that was hard to get working and had poor audio quality; and text-based updates using Skype, which is not as engaging and is no different from just firing off a status email in the morning. I have seen other questions about remote collaboration, but they are mainly about completely remote teams and/or teams that span multiple time zones. We are not affected by either of these problems. What can we do to make our standup meetings better in these circumstances?

    Read the article
