Search Results

Search found 14712 results on 589 pages for 'home hub'.

Page 90/589 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Apache2's recursive directory permission requirement

    - by Sn3akyP3t3
    My experience so far is with Ubuntu 10.04 and 12.04 64-bit, so I'd like to know whether this is an OS-specific problem or not. The issue I've experienced is mostly confusion: once the cause of the problem is identified and corrected, there are no further related problems. The symptom is Error 403 Forbidden. Typically the trigger is attempting to use a directory other than /var/www/ for content. The cause is simply permissions, but it's puzzling why the required permissions must be present on every directory from near the root of the path all the way down to the directory where the content is stored. For example: Alias /example/ "/home/user/permissions/can/be/confusing/with/apache/" <Directory /home/user/permissions/can/be/confusing/with/apache/> Options FollowSymLinks MultiViews AllowOverride None Order allow,deny Allow from all </Directory> With www-data being the user that spawned apache and "user" being a member of the www-data group. Thus, if ownership of /home/user/* is user:user then all that is necessary to display content with apache is permissions of read and execute. So d---r-x--- should suffice, but for practical purposes I'm using drwxr-x--- for most. However, if all directories under /home/user/ have permissions of drwxr-x--- and /home/user/ itself has permissions of drwx------ then content will always fail with error 403. This is strange because it doesn't follow what I would consider the traditional logic of permissions, which should only apply to the current working directory or a particular file in that directory, not to directories further back in the chain. Is this by design or is it a bug?
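
    For what it's worth, this traversal requirement is standard Unix behaviour rather than something Apache invented: to open a file, the accessing user needs the execute (search) bit on every directory in the path, not just the last one. A quick way to audit and fix a path, sketched against the Alias above (adjust the path and which bit you grant to taste):

        # list the permission bits on every component of the path
        namei -l /home/user/permissions/can/be/confusing/with/apache/

        # grant search permission on the one directory blocking traversal,
        # e.g. the drwx------ home directory from the example
        chmod o+x /home/user    # or: chgrp www-data /home/user && chmod g+x /home/user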

    Read the article

  • HP-UX - custom rsync path

    - by stack_zen
    Hi. There is a range of HP-UX 11.11 hosts on which I'm unable to install rsync (I'm limited to a non-privileged user). I've extracted both the rsync binary and libpopt.sl, libiconv.sl, libintl.sl from the depots into one of that user's directories: /home/zenith/rsync/ Problem is, I can't get my RH Linux box to communicate with it: rsync -e --rsync-path=/home/zenith/rsync/rsync --compress=9 -pgtov --filter=+rs_/'*.log' --exclude='*' [email protected]:/home/zenith/service/logs/ /u01/rsync_depot/service/192.102.14.18/ /usr/lib/dld.sl: Can't find path for shared library: libintl.sl /usr/lib/dld.sl: No such file or directory sh: 1644 Abort(coredump) I've added to the remote host's .profile export SHLIB_PATH=/usr/lib:/home/zenith/rsync export PATH=$PATH:/home/zenith/rsync but still, no libintl.sl is found. How can I initialize the correct env variable / get this to work?
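
    A likely culprit is that ~/.profile is only read by login shells, while the rsync that ssh spawns on the remote side runs in a non-interactive shell, so SHLIB_PATH never gets set. A hedged workaround is to set the variable inline through --rsync-path (assuming /usr/bin/env exists on the HP-UX host; zenith@host stands in for the real login):

        rsync --rsync-path='env SHLIB_PATH=/usr/lib:/home/zenith/rsync /home/zenith/rsync/rsync' \
              --compress=9 -pgtov --filter=+rs_/'*.log' --exclude='*' \
              zenith@host:/home/zenith/service/logs/ /u01/rsync_depot/service/192.102.14.18/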

    Read the article

  • Installing 12.04 within 11.04

    - by user288752
    I recently installed 11.04 from an installation disk (overwriting Windows in the process). I know 11.04 is no longer supported, but I had no problems upgrading it to 12.04 (via 11.10) a couple of months ago on another device. This time, though, things are different. I can't upgrade through Update Manager because Ubuntu tells me I have no internet connection, which is obviously incorrect. I have tried to circumvent the problem by downloading the 12.04 ISO from ubuntu.com directly, but now I'm troubled by something else. The download is successful, but after mounting the ISO I can't interact with it. When I try to open the Wubi executable it gives me the following message: Archive: /home/lars/.cache/.fr-7g75Fe/wubi.exe [/home/lars/.cache/.fr-7g75Fe/wubi.exe] End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. zipinfo: cannot find zipfile directory in one of /home/lars/.cache/.fr-7g75Fe/wubi.exe or /home/lars/.cache/.fr-7g75Fe/wubi.exe.zip, and cannot find /home/lars/.cache/.fr-7g75Fe/wubi.exe.ZIP, period. What am I doing wrong here?
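
    The "End-of-central-directory signature not found" error usually means the archive is truncated or corrupt, so it's worth ruling out a bad download before anything else (the file name below is an assumption, use whatever you downloaded):

        md5sum ubuntu-12.04-desktop-amd64.iso   # compare against the hashes published on releases.ubuntu.com

    Also note that wubi.exe is a Windows-side installer; it isn't meant to be run from inside Ubuntu at all.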

    Read the article

  • Unsatisfied Link Error and missing .so files when starting Eclipse

    - by Keidax
    I upgraded to the 12.04 beta yesterday. Now, when I try to start Eclipse, I get the splash screen and then this error message: An error has occurred. See the log file /home/gabriel/.eclipse/org.eclipse.platform_3.7.0_155965261/configuration/1335382319394.log . The log file says something like this: java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons: no swt-gtk-3740 in java.library.path no swt-gtk in java.library.path Can't load library: /home/gabriel/.swt/lib/linux/x86_64/libswt-gtk-3740.so Can't load library: /home/gabriel/.swt/lib/linux/x86_64/libswt-gtk.so followed by many more error messages. The /home/gabriel/.swt/lib/linux/x86_64/ directory exists, but is empty. I also tried reinstalling eclipse with no success. Any ideas?
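
    Eclipse normally extracts those libswt-*.so files into ~/.swt on first launch, so an empty directory suggests that extraction failed. One workaround often suggested on Ubuntu - a sketch, with the package name and library path as assumptions to verify against your release - is to link the distribution's SWT libraries into place:

        sudo apt-get install libswt-gtk-3-jni
        ln -s /usr/lib/jni/libswt-* ~/.swt/lib/linux/x86_64/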

    Read the article

  • How do I compile lbflow 1.1?

    - by Ali.A
    After running "sh ./configure", I encountered another error during installation of the lbflow package (a scientific one). The sequence of operations is here with the error: ./configure --disable-gts sudo make [sudo] password for alireza: make all-recursive make[1]: Entering directory `/home/alireza/lbflow-1.1' Making all in src make[2]: Entering directory `/home/alireza/lbflow-1.1/src' source='lbflow.cpp' object='lbflow-lbflow.o' libtool=no \ DEPDIR=.deps depmode=none /bin/bash ../depcomp \ g++ -DHAVE_CONFIG_H -I. -I. -I.. -c -o lbflow-lbflow.o `test -f 'lbflow.cpp' || echo './'`lbflow.cpp ../depcomp: line 432: exec: g++: not found make[2]: *** [lbflow-lbflow.o] Error 127 make[2]: Leaving directory `/home/alireza/lbflow-1.1/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/alireza/lbflow-1.1' make: *** [all] Error 2 Do you have any idea how to troubleshoot this problem? (And please note that I have installed both g++ and gcc. It says g++: not found, but I installed g++ from Ubuntu Software Center!)
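
    Exit code 127 means the shell running make could not find g++ on its PATH, regardless of what the Software Center shows. A quick sanity check, plus the standard way to pull in the whole toolchain:

        which g++ || echo "g++ is not on PATH"
        sudo apt-get install build-essential    # installs g++, gcc, make and headers
        ./configure --disable-gts && make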

    Read the article

  • How to start /usr/bin/bitcoind on boot?

    - by André
    I'm trying to get /usr/bin/bitcoind to start on boot, but without success. I have this script in /etc/init/bitcoind.conf description "bitcoind" start on filesystem stop on runlevel [!2345] oom never expect daemon respawn respawn limit 10 60 # 10 times in 60 seconds script user=andre home=/home/$user cmd=/usr/bin/bitcoind pidfile=$home/.bitcoin/bitcoind.pid # Don't change anything below here unless you know what you're doing [[ -e $pidfile && ! -d "/proc/$(cat $pidfile)" ]] && rm $pidfile [[ -e $pidfile && "$(cat /proc/$(cat $pidfile)/cmdline)" != $cmd* ]] && rm $pidfile exec start-stop-daemon --start -c $user --chdir $home --pidfile $pidfile --startas $cmd -b -m end script After creating this script I ran: sudo initctl reload-configuration When I restart Ubuntu, bitcoind does not start. I can only start bitcoind by manually running: sudo start bitcoind Any clues on how to start bitcoind on boot?
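
    Newer Ubuntu releases ship a linter for Upstart job files that catches most syntax and stanza problems before you reboot; checking the system log after a failed boot narrows down the rest. A hedged check sequence:

        init-checkconf /etc/init/bitcoind.conf   # comes with the upstart package on recent releases
        grep bitcoind /var/log/syslog            # look for why the job died at boot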

    Read the article

  • Partitioning During Installation, Reinstalling Ubuntu, Backing Up Downloaded Repositories

    - by user209645
    I am new to Ubuntu and I have just finished installing 13.10 amd64 on my PC. This will replace Windows 7 as my only OS. I just want to clarify some issues that have been bugging me. I tried reading posts on the same topics but I just can't wrap my head around it yet. I partitioned my 80GB drive into: /root: 30GB (sorry for the confusion, I actually meant /) /home: 40GB /var : 3GB swap : 4GB (2GB of mem) Please correct me if I'm wrong about these: All of the users' documents are saved in their respective folders in /home. But say I want to clean install (format) Ubuntu; I don't need to make backups of /home and /var as they are on separate partitions. But when re-installing, do I just choose / and format it, and the installer will recognize all the partitions (not making another /home and /var inside /)? Downloaded packages (from all the repositories) and all their dependencies are saved in /var. So after re-installing on the same PC (assuming I'm offline), it will just use the latest updates in /var if I choose to update? And if all the installed apps and their dependencies are in there, all I need to do is re-install them without encountering errors? I have also read that you can back them up using aptoncd and then add the DVD to the sources. So if I download all the high-ranking apps using Synaptic, could I then have an all-in-one DVD installer? Is 30GB for / excessive, given that the bulk of files will be either in /home (personal, downloads, music, videos) or /var (updates, packages, installed apps)? Please excuse me for asking such questions but I really want to explore and learn more.
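
    On the package question specifically: the downloaded .deb files live in /var/cache/apt/archives, so a plain copy is enough to carry them across a reinstall (/media/backup is a placeholder for wherever your backup lives):

        sudo cp -a /var/cache/apt/archives/*.deb /media/backup/apt-archives/
        # after reinstalling, restore them so apt and Synaptic can install from the cache
        sudo cp -a /media/backup/apt-archives/*.deb /var/cache/apt/archives/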

    Read the article

  • Integration of routes that are not resources in an MVC REST style application

    - by Emil Lerch
    I would like to keep my application relatively REST-pure for the sake of consistency, but I'm struggling philosophically with the relatively few views (maybe just one) that I'll need to build that don't relate to resources directly, and therefore don't fit into a REST style. As an example, take the home page. Ruby on Rails seems to bail on its otherwise RESTful approach for this very basic need of all web sites. The home page appears special: You can get it, but a GET at the resource level is supposed to give you a collection of elements. I can imagine this being the list of routes maybe, but that seems a stretch, and doesn't address anything else. Getting the home page by id doesn't seem to make a whole lot of sense - what's the element of a home collection? Again, maybe routes, but a GET on a route would do what? Redirect? This feels odd. You can't delete it (arguably you could allow this for administrators). Adding a second one doesn't make sense, except possibly if the elements were routes. Updating it might make sense for administrators, but AFAIK REST doesn't describe updates on the resource directly, only elements of the resource (this article explicitly says "UNUSED" for PUTs on the resource). Is the "right" thing to do just to special-case these types of things? At the end of the day, I can wrap my head around most of an application being gathered around resources... I can't think of another good example other than a home page, but since that's the start of an application, I think it warrants some thought.

    Read the article

  • Apache denying requests with VirtualHosts

    - by Ross
    This is the error I get in my log: Permission denied: /home/ross/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable My VirtualHost is pretty simple: <VirtualHost 127.0.0.1> ServerName jotter.localhost DocumentRoot /home/ross/www/jotter/public DirectoryIndex index.php index.html <Directory /home/ross/www/jotter/public> AllowOverride all Order allow,deny allow from all </Directory> CustomLog /home/ross/www/jotter/logs/access.log combined ErrorLog /home/ross/www/jotter/logs/error.log LogLevel warn </VirtualHost> Any ideas why this is happening? I can't see why Apache is looking for a .htaccess there and don't know why this should stop the request. Thanks.
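
    Because overrides are enabled somewhere along that path, Apache stats a .htaccess file in each parent directory, and here it cannot even look inside /home/ross. Two hedged fixes, either of which should clear the error:

        # let the apache user traverse the home directory
        chmod o+x /home/ross

        # or tell Apache not to look for overrides above the docroot
        <Directory /home/ross>
            AllowOverride None
        </Directory>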

    Read the article

  • MDM for Tax Authorities

    - by david.butler(at)oracle.com
    In last week’s MDM blog, we discussed MDM in the Public Sector. I want to continue that thread. After all, no industry faces tougher data quality problems than governmental organizations, and few industries suffer more significant down side consequences to poor operations than local, state and federal governments. One key challenge area is taxation. Tax Authorities face a multitude of IT challenges. Firstly, the data used in tax calculations is increasing in volume and complexity. They must improve service by introducing multi-channel contact centers and self-service capabilities. Security concerns necessitate increasingly sophisticated data protection procedures. And cost constraints are driving Tax Authorities to rely on off-the-shelf software for many of their functional areas. Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex. They typically include multiple, disparate operational and analytical systems across which the sum total of data about individual constituents is fragmented. To make matters more complicated, taxation is not carried out by a single jurisdiction, and often sources of income including employers, investments and other sources of taxable income and deductions must also be tracked and shared among tax authorities. Collectively, these systems are involved in tax assessment and collections, risk analysis, scoring, tracking, auditing and investigation case management. The Problem of Constituent Data Management The infrastructure described above makes it very difficult to create a consolidated representation of a given party. Differing formats and data models mean that a constituent may be represented in one way in one system and in a different way in another. Individual records are frequently inaccurate, incomplete, out of date and/or inconsistent with other records relating to the same constituent. When constituent data must be aggregated and scored, information within each system must be rationalized and normalized so the agency can produce a constituent information file (CIF) that provides a single source of truth about that party. If information about that constituent changes, each system in turn must be updated. There have been many attempts to solve this problem with technology: from consolidating transactional systems to conducting manual systems integration projects and superimposing layers of business intelligence and analytics. All these approaches can be successful in solving a portion of the problem at a specific point in time, but without an enterprise perspective, anything gained is quickly lost again. Oracle Constituent Data Mastering for Tax Authorities: A Single View of the Constituent Oracle has a flexible and long-term solution to the problem of securely integrating and managing constituent data. The Oracle Solution for mastering Constituent Data for Tax Authorities is based on two core product offerings: Oracle Customer Hub and – optionally – Oracle Application Integration Architecture (AIA). Customer Hub is a master data management (MDM) product that centralizes, de-duplicates, and enriches constituent data. It unifies fragmented information without disrupting existing business processes or IT investments. Role based data access and privacy rules guarantee maximum security and privacy. Data is continuously and automatically synchronized with all source systems. 
With the Oracle Customer Hub managing the master constituent identity, every department can capture transaction activity against the same record, improving reporting accuracy, employee productivity, reliability of constituent analytics, and day-to-day constituent relationships. Oracle Application Integration Architecture provides a collection of core pre-built processes to support out of the box Master Data Governance across Oracle Customer Hub, Siebel CRM, and Oracle E-Business Suite. It also provides a framework to enable MDM integrations with other Oracle and non-Oracle applications. Oracle AIA removes some of the key inhibitors to implementing a service-oriented architecture (SOA) by providing a pre-built SOA-based middleware foundation as well as industry-optimized service oriented applications, all built around a SOA governance model that encourages effective design and reuse. I encourage you to read Oracle Solution for Mastering Constituents Data for Public Sector – Tax Authorities by Roberto Negro. It is an outstanding whitepaper that describes how the Oracle MDM solution allows you to create a unified, reconciled source of high-quality constituent data and gain an accurate single view of each constituent. This foundation enables you to lower the costs associated with data quality and integration and create a tax organization that is efficient, secure and constituent-centric. Also, don’t forget the upcoming webcast on Thursday, February 10th: Deliver Improved Services to Citizens at Lower Cost to your Organization Our Guest Speaker is Ruben Spekle, from Capgemini. He will also provide insight into Public Sector Master Data Management and Case Management implementations including one that was executed for a Dutch Government Agency. If you are interested in how governmental organizations from around the world are using MDM to advance their cause, click here to register for the webcast.

    Read the article

  • Dovecot Virtual Users and Users Domain Mapping

    - by Stojko
    I have successfully compiled, configured and run Dovecot with the virtual users feature. Here's part of my /etc/dovecot.conf configuration file: mail_location = maildir:/home/%d/%n/Maildir auth default { mechanisms = plain login userdb passwd-file { args = /home/%d/etc/passwd } passdb passwd-file { args = /home/%d/etc/shadow } socket listen { master { path = /var/run/dovecot/auth-worker mode = 0600 } } } I've run into one issue I can't resolve myself. Is there any way to create a user-to-domain mapping and provide the username in mail_location? Examples: 1. currently I have /home/domain.com/user/Maildir 2. I'd like to have /home/USER/domain.com/user/Maildir Can I achieve this somehow? Greets, Stojko
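
    One possible approach, sketched from the Dovecot passwd-file format rather than a tested setup: give each virtual user a home directory in the userdb (field 6 of the passwd-file) and build mail_location from %h, which expands to whatever home the userdb returned:

        mail_location = maildir:%h/Maildir

        # hypothetical /home/%d/etc/passwd entry - field 6 carries the per-user home
        user:{PLAIN}secret:1000:1000::/home/USER/domain.com/user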

    Read the article

  • Lubuntu Desktop messed up for logged in user, but not for guest

    - by RPi Awesomeness
    I recently upgraded my laptop from Lubuntu 12.04 to 14.04.1 and the upgrade process seemed to go fine. However, when I went to log in as my normal user, I encountered an issue. The background loaded up, but none of LXDE or LXPanel showed up, leaving me with an empty desktop and nothing else except two errors. I thought that this was weird, so I just figured something had been messed up and would be fixed by a reboot. But it wasn't. I then tried logging in as guest, and it's just fine. I checked the ~/.xsession-errors file (for my main user, not guest, did it via TTY1) and this is what I got: Script for ibus started at run_im. Script for auto started at run_im. Script for default started at run_im. init: Unable to register as subreaper: Invalid argument init: lxsession main process (1649) killed by TERM signal init: Disconnected from notified D-Bus bus init: job dbus failed to stop init: job upstart-dbus-session-bridge failed to stop init: job upstart-dbus-system-bridge failed to stop init: job upstart-file-bridge failed to stop I also read that sometimes removing the ~/.Xauthority file can help, if the ownership is messed up. ls -l /home/MYUSER/.Xauthority tells me -rw------- 1 MYUSER MYUSER 60 Aug 16 09:57 /home/MYUSER/.Xauthority. Should that be root or something else, or should I try deleting that and ~/.profile? Here's what ~/.profile looks like: # ~/.profile: executed by the command interpreter for login shells. # This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login # exists. # see /usr/share/doc/bash/examples/startup-files for examples. # the files are located in the bash-doc package. # the default umask is set in /etc/profile; for setting the umask # for ssh logins, install and configure the libpam-umask package. #umask 022 # if running bash if [ -n "$BASH_VERSION" ]; then # include .bashrc if it exists if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc" fi fi # set PATH so it includes user's private bin if it exists if [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH" fi Should I post the output of dmesg? I'll try and get a screenshot, but does anyone have any idea what could be causing the desktop (LXDE/LXPanel) not to display? EDIT: I attempted removing the ~/.Xauthority file, but that didn't seem to do anything.
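
    Since the ownership shown is already correct (it should belong to the user, not root), a hedged next step is to rule out stale session state; both files below are regenerated at login, so moving them aside is safe:

        mv ~/.Xauthority ~/.Xauthority.bak
        mv ~/.ICEauthority ~/.ICEauthority.bak   # stale ownership here also breaks desktop sessions
        sudo service lightdm restart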

    Read the article

  • SSH onto Ubuntu box using RSA keys

    - by jex
    I recently installed OpenSSH on one of my Ubuntu machines and I've been running into problems getting it to use RSA keys. I've generated the RSA key on the client (ssh-keygen), and appended the generated public key to both the /home/jex/.ssh/authorized_keys and /etc/ssh/authorized_keys files on the server. However, when I try to log in (ssh -o PreferredAuthentications=publickey jex@host -v [which forces the use of public key for login]) I get the following output: debug1: Host 'pentheon.local' is known and matches the RSA host key. debug1: Found key in /home/jex/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received Banner message debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: /home/jex/.ssh/id_rsa debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Trying private key: /home/jex/.ssh/identity debug1: Trying private key: /home/jex/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey,keyboard-interactive). I'm not entirely sure where I've gone wrong. I am willing to post my /etc/ssh/sshd_config if needed.
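
    sshd silently ignores keys when the home directory, ~/.ssh or authorized_keys are too permissive, and appending to /etc/ssh/authorized_keys does nothing with the AuthorizedKeysFile setting shown. A standard server-side checklist:

        chmod go-w /home/jex
        chmod 700 /home/jex/.ssh
        chmod 600 /home/jex/.ssh/authorized_keys

    If that doesn't do it, running a second daemon in debug mode (sudo /usr/sbin/sshd -d -p 2222) and connecting to that port usually names the exact file it rejects.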

    Read the article

  • Cannot connect to secure wireless with Netgear wna3100 USB

    - by Vince Radice
    I have installed Ubuntu 11.10. I used a wired connection to download and install all of the updates. When I tried to use a Netgear WNA3100 wireless USB network adapter, it failed. After much searching and trying things, I was finally able to get it working by disabling security on my router; I have verified this - with security disabled I can connect. When I enable security (WPA2 PSK), the connection fails. What is necessary to enable security (WPA2 PSK) and still use the Netgear USB interface? Here is the output from the most commonly requested commands: lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 003: ID 0846:9020 NetGear, Inc. WNA3100(v1) Wireless-N 300 [Broadcom BCM43231] lshw -C network *-network description: Ethernet interface product: RTL-8139/8139C/8139C+ vendor: Realtek Semiconductor Co., Ltd. physical id: 3 bus info: pci@0000:02:03.0 logical name: eth0 version: 10 serial: 00:40:ca:44:e6:3e size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=8139too driverversion=0.9.28 duplex=half latency=32 link=no maxlatency=64 mingnt=32 multicast=yes port=MII speed=10Mbit/s resources: irq:19 ioport:c800(size=256) memory:ee011000-ee0110ff memory:40000000-4000ffff *-network description: Wireless interface physical id: 1 logical name: wlan0 serial: e0:91:f5:56:e1:0d capabilities: ethernet physical wireless configuration: broadcast=yes driver=ndiswrapper+bcmn43xx32 driverversion=1.56+,08/26/2009, 5.10.79.30 ip=192.168.1.104 link=yes multicast=yes wireless=IEEE 802.11g iwconfig lo no wireless extensions. eth0 no wireless extensions.
wlan0 IEEE 802.11g ESSID:"vincecarolradice" Mode:Managed Frequency:2.422 GHz Access Point: A0:21:B7:9F:E5:EE Bit Rate=121.5 Mb/s Tx-Power:32 dBm RTS thr:2347 B Fragment thr:2346 B Encryption key:off Power Management:off Link Quality:76/100 Signal level:-47 dBm Noise level:-96 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 ndiswrapper -l bcmn43xx32 : driver installed device (0846:9020) present lsmod | grep ndis ndiswrapper 193669 0 dmesg | grep -e ndis -e wlan [ 907.466392] ndiswrapper version 1.56 loaded (smp=yes, preempt=no) [ 907.838507] ndiswrapper (import:233): unknown symbol: ntoskrnl.exe:'IoUnregisterPlugPlayNotification' [ 907.838955] ndiswrapper: driver bcmwlhigh5 (Netgear,11/05/2009, 5.60.180.11) loaded [ 908.137940] wlan0: ethernet device e0:91:f5:56:e1:0d using NDIS driver: bcmwlhigh5, version: 0x53cb40b, NDIS version: 0x501, vendor: 'NDIS Network Adapter', 0846:9020.F.conf [ 908.141879] wlan0: encryption modes supported: WEP; TKIP with WPA, WPA2, WPA2PSK; AES/CCMP with WPA, WPA2, WPA2PSK [ 908.143048] usbcore: registered new interface driver ndiswrapper [ 908.178826] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 994.015088] usbcore: deregistering interface driver ndiswrapper [ 994.028892] ndiswrapper: device wlan0 removed [ 994.080558] ndiswrapper version 1.56 loaded (smp=yes, preempt=no) [ 994.374929] ndiswrapper: driver bcmn43xx32 (,08/26/2009, 5.10.79.30) loaded [ 994.404366] ndiswrapper (mp_init:219): couldn't initialize device: C0000001 [ 994.404384] ndiswrapper (pnp_start_device:435): Windows driver couldn't initialize the device (C0000001) [ 994.404666] ndiswrapper (mp_halt:262): device e05b6480 is not initialized - not halting [ 994.404671] ndiswrapper: device eth%d removed [ 994.404709] ndiswrapper: probe of 1-5:1.0 failed with error -22 [ 994.406318] usbcore: registered new interface driver ndiswrapper [ 2302.058692] wlan0: ethernet device e0:91:f5:56:e1:0d using NDIS driver: bcmn43xx32, version: 0x50a4f1e, NDIS version: 0x501, vendor: 'NDIS Network Adapter', 0846:9020.F.conf [ 2302.060882] wlan0: encryption modes supported: WEP; TKIP with WPA, WPA2, WPA2PSK; AES/CCMP with WPA, WPA2, WPA2PSK [ 2302.113838] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 2354.611318] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 2355.268902] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 2365.400023] wlan0: no IPv6 routers present [ 2779.226096] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 2779.422343] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 2797.574474] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 2802.607937] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 2803.261315] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 2813.952028] wlan0: no IPv6 routers present [ 3135.738431] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 3139.180963] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3139.816561] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3163.229872] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3163.444542] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3163.758297] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3163.860684] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3205.118732] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3205.139553] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3205.300542] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3353.341402] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 3363.266399] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3363.505475] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3363.506619] ndiswrapper (set_iw_auth_mode:601): setting auth mode to 5 failed (00010003) [ 3363.717203] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3363.779206] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3405.206152] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3405.248624] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3405.577664] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3438.852457] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 3438.908573] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3568.282995] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3568.325237] ndiswrapper (set_iw_auth_mode:601): setting auth mode to 5 failed (00010003) [ 3568.460716] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3568.461763] ndiswrapper (set_iw_auth_mode:601): setting auth mode to 5 failed (00010003) [ 3568.809776] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3568.880641] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3610.122848] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3610.148328] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3610.324502] ADDRCONF(NETDEV_UP): wlan0: link is not ready [ 3636.088798] ndiswrapper (iw_set_auth:1602): invalid cmd 12 [ 3636.712186] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready [ 3647.600040] wlan0: no IPv6 routers present I am using the system now with the router security turned off. When I submit this, I will turn security back on.
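
    Since the card only works through ndiswrapper, one way to test WPA2 outside NetworkManager is wpa_supplicant with the wext backend, which is the interface ndiswrapper exposes (the SSID is taken from the logs above; the passphrase is a placeholder):

        wpa_passphrase "vincecarolradice" 'your-passphrase' | sudo tee /etc/wpa_supplicant.conf
        sudo wpa_supplicant -D wext -i wlan0 -c /etc/wpa_supplicant.conf
        sudo dhclient wlan0   # run in a second terminal once the supplicant reports CONNECTED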

    Read the article

  • Oracle launches a hardware-specific community

    - by Eloy M. Rodríguez
    For those who don't already know it, I'd like to introduce a special-interest group created by the company on Facebook under the name Oracle Hardware Social Media Hub, intended to offer an online meeting place where you can find thousands of experts, customers, partners and recognized Oracle leaders to debate with and discover the latest from Oracle. There you will find a pioneering question-and-answer application called Pregunte al Experto de Oracle (Ask the Oracle Expert), where you can pose questions, contribute comments and even be rewarded for your expert contributions with the title of recognized leader. In the Hardware Hub you can, among other things: Get exclusive members-only content Share your knowledge and experience with a global community Communicate with Oracle experts in an informal setting Discover innovative ways to optimize the performance of your hardware Access content in your own language, including event information, webcasts, white papers and much more.

    Read the article

  • Nest reinvents smoke detectors. Introduces a smart, talking smoke detector that keeps quiet when you wave

    - by Gopinath
    Nest, the leading smart thermostat maker, has introduced a new smart home device today: Nest Protect, a smart, talking smoke & carbon monoxide detector that goes quiet when you wave your hand. Fewer annoyances and more intelligence Smoke detectors have been around for decades and play a major role in protecting homes from fire. But the technology of these devices is stale, with no major innovation for the past several years. With the introduction of Nest Protect, the landscape of smoke detectors is set to change, just as the Nest thermostat redefined its industry two years ago. Nest Protect is internet enabled and equipped with motion- and smoke-detection sensors, so that when it starts beeping you can silence it by waving your hand instead of performing circus feats to turn off the alarm. Everyone who cooks in a home equipped with a smoke detector knows how annoying it is to silence a sensitive detector that goes off far too often. Apart from addressing the annoyances of the regular smoke detector, Nest Protect can talk. It alerts users with clear, actionable instructions when it detects danger. Instead of harsh beeps it actually speaks to you, so you know what is happening. It will tell you what smoke it has detected and in which room. Multiple Nest Protects installed in a home communicate with each other. Let's say there is smoke in the bedroom: the Nest Protect installed there shares this information with all the other Nest Protects in the home, and the kitchen device can alert you that there is smoke in the bedroom. There is an app for that The internet-enabled Nest Protect has an app to view its status and various alerts. When the Protect is running on low battery, it alerts you to replace it soon. If there is smoke at home while you are away, you get message alerts. The app works on all major smartphones as well as tablets. Auto shut-off of gas furnaces/heaters on smoke Apart from forming a network with the other Nest Protect devices installed at home, they can also communicate with a Nest Thermostat if one is installed. When carbon monoxide is detected, it can shut off your gas furnace automatically. With the help of its motion detectors it also improves the Nest Thermostat's auto-away functionality. It looks elegant and costs a lot more than a regular smoke detector Just like the Nest Thermostat, Nest Protect is elegant and adorable. You fall in love with it the moment you see it. It's another masterpiece from the designer of Apple's iPod. All is good with the Nest Protect, except the price: a whopping $129, roughly four times the price of best-selling conventional detectors available at around $30. A one-bedroom apartment would require at least three detectors, so it costs around $390 to equip with Nest Protects compared to the $90 required for conventional smoke detectors. Though the Nest Thermostat is expensive compared to conventional thermostats, it offered real savings through its intelligent auto-away feature, and users were able to see returns on their investment. If Nest Protect can also provide a good return on investment, it will be very successful.

    Read the article

  • BTrFS subvolume / snapshot question

    - by bumbling fool
    I think I'm having difficulty fully understanding subvolumes and snapshots. The /home partition is btrfs. I want to create a "backup" snapshot of /home/user (for example), but user has existed for years (previously ext4, converted with btrfs-convert). I believe you can only make a snapshot of a subvolume, and I checked and there are no "default" subvolumes already present. 1) Is there another way for me to back up /home/user other than creating a subvolume /home/user2 and copying everything from user to user2 in order to snapshot it?
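
    Snapshots do operate on subvolumes, but the top level of a converted filesystem is itself a subvolume, so one hedged option is to snapshot all of /home and read your user's files out of that; a reflink copy is the per-directory alternative. Both are sketches - check that your btrfs-progs version supports the -r flag:

        # read-only snapshot of the whole filesystem
        sudo btrfs subvolume snapshot -r /home /home/snap-$(date +%F)

        # or a cheap copy-on-write duplicate of just one directory
        sudo cp -a --reflink=always /home/user /home/user-backup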

    Read the article

  • L2TP server - site-to-site vpn connection

    - by Pyro
    I am not sure this is the right place for this question, but here goes. We want to connect users on an L2TP VPN connection to users at the other end of a SonicWall site-to-site VPN. Currently we have a SonicWall firewall/router contraption in the home-office that is connected to a far-office over a VPN. Communication between machines in the home-office and far-office is fine. We also have an L2TP server running on the SonicWall that outside users can connect to. This gives them access to machines in the home-office. Communication between outside users and the home-office is fine. However, outside users connected to the home-office via the L2TP server can't communicate with machines in the far-office. Will network bridging or routing be needed, or will this simply be a firewall setting? Thanks for any help or clues you provide! Rob

    Read the article

  • problem with wireless usb keyboard

    - by Sasha
    I have a problem with a wireless keyboard. kern.log, messages and syslog contain nothing but the line below, about 50 times every second: Oct 1 08:14:12 wwserver kernel: [ 1447.978908] usb 7-1.3: input irq status -75 received Because of these messages the disk fills up, so I have to keep deleting the log files. Please help. kern.log file: Oct 1 08:13:53 wwserver kernel: [ 1428.820057] usb 7-1: new full speed USB device using uhci_hcd and address 3 Oct 1 08:13:53 wwserver kernel: [ 1428.977383] usb 7-1: configuration #1 chosen from 1 choice Oct 1 08:13:53 wwserver kernel: [ 1428.980919] hub 7-1:1.0: USB hub found Oct 1 08:13:53 wwserver kernel: [ 1428.982288] hub 7-1:1.0: 4 ports detected Oct 1 08:13:53 wwserver kernel: [ 1429.261317] usb 7-1.3: new full speed USB device using uhci_hcd and address 4 Oct 1 08:13:58 wwserver kernel: [ 1434.408160] usb 7-1.3: configuration #1 chosen from 1 choice Oct 1 08:13:59 wwserver kernel: [ 1434.421484] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1d.2/usb7/7-1/7-1.3/7-1.3:1.0/input/input5 Oct 1 08:13:59 wwserver kernel: [ 1434.421585] generic-usb 0003:046D:C52B.0002: input,hidraw1: USB HID v1.11 Keyboard [Logitech USB Receiver] on usb-0000:00:1d.2-1.3/input0 Oct 1 08:13:59 wwserver kernel: [ 1434.433751] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1d.2/usb7/7-1/7-1.3/7-1.3:1.1/input/input6 Oct 1 08:13:59 wwserver kernel: [ 1434.433933] generic-usb 0003:046D:C52B.0003: input,hiddev96,hidraw2: USB HID v1.11 Mouse [Logitech USB Receiver] on usb-0000:00:1d.2-1.3/input1 Oct 1 08:13:59 wwserver kernel: [ 1434.450210] generic-usb 0003:046D:C52B.0004: hiddev97,hidraw3: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:1d.2-1.3/input2 Oct 1 08:13:59 wwserver kernel: [ 1434.455416] input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1d.2/usb7/7-1/7-1.3/7-1.3:1.3/input/input7 Oct 1 08:13:59 wwserver kernel: [ 1434.455545] generic-usb 0003:046D:C52B.0005: input,hidraw4: USB HID v1.10 Mouse [Logitech USB Receiver] on usb-0000:00:1d.2-1.3/input3 Oct 1 08:14:12 wwserver kernel: [ 1447.964916] usb 7-1.3: input irq status -75 received Oct 1 08:14:12 wwserver kernel: [ 1447.966907] usb 7-1.3: input irq status -75 received Oct 1 08:14:12 wwserver kernel: [ 1447.968906] usb 7-1.3: input irq status -75 received Oct 1 08:14:12 wwserver kernel: [ 1447.970908] usb 7-1.3: input irq status -75 received Oct 1 08:14:12 wwserver kernel: [ 1447.972907] usb 7-1.3: input irq status -75 received Oct 1 08:14:12 wwserver kernel: [ 1447.974907] usb 7-1.3: input irq status -75 received Oct 1 08:14:12 wwserver kernel: [ 1447.976908] usb 7-1.3: input irq status -75 received Oct 1 08:14:12 wwserver kernel: [ 1447.978908] usb 7-1.3: input irq status -75 received
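
    As a stopgap while the hardware fault is tracked down (a flaky receiver, port or hub is the usual suspect, so trying another USB port is worthwhile), rsyslog can discard the flood before it reaches the log files. A sketch assuming Ubuntu's rsyslog; the file name is arbitrary:

        # /etc/rsyslog.d/30-drop-usb-irq.conf
        :msg, contains, "input irq status -75" ~

    followed by a restart:

        sudo service rsyslog restart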

    Read the article

  • SSH as root using public key still prompts for password on RHEL 6.1

    - by Dean Schulze
    I've generated rsa keys with cygwin ssh-keygen and copied them to the server with ssh-copy-id -i id_rsa.pub [email protected] I've got the following settings in my /etc/ssh/sshd_config file RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys PermitRootLogin yes When I ssh [email protected] it still prompts for a password. The output below from /usr/sbin/sshd -d says that a matching keys was found in the .ssh/authorized_keys file, but it still requires a password from the client. I've read a bunch of web postings about permissions on files and directories, but nothing works. Is it possible to ssh with keys in RHEL 6.1 or is this forbidden? The debug output from ssh and sshd is below. $ ssh -v [email protected] OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012 debug1: Connecting to my.ip.address [my.ip.address] port 22. debug1: Connection established. debug1: identity file /home/dschulze/.ssh/id_rsa type 1 debug1: identity file /home/dschulze/.ssh/id_rsa-cert type -1 debug1: identity file /home/dschulze/.ssh/id_dsa type 2 debug1: identity file /home/dschulze/.ssh/id_dsa-cert type -1 debug1: identity file /home/dschulze/.ssh/id_ecdsa type -1 debug1: identity file /home/dschulze/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA 9f:00:e0:1e:a2:cd:05:53:c8:21:d5:69:25:80:39:92 debug1: Host 'my.ip.address' is known and matches the RSA host key. debug1: Found key in /home/dschulze/.ssh/known_hosts:3 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/dschulze/.ssh/id_rsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Offering DSA public key: /home/dschulze/.ssh/id_dsa debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password debug1: Trying private key: /home/dschulze/.ssh/id_ecdsa debug1: Next authentication method: password Here is the server output from /usr/sbin/sshd -d [root@ga2-lab .ssh]# /usr/sbin/sshd -d debug1: sshd version OpenSSH_5.3p1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: rexec_argv[0]='/usr/sbin/sshd' debug1: rexec_argv[1]='-d' debug1: Bind to port 22 on 0.0.0.0. Server listening on 0.0.0.0 port 22. debug1: Bind to port 22 on ::. Server listening on :: port 22. debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8 debug1: inetd sockets after dupping: 3, 3 Connection from 172.60.254.24 port 53401 debug1: Client protocol version 2.0; client software version OpenSSH_6.1 debug1: match: OpenSSH_6.1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: permanently_set_uid: 74/74 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: client->server aes128-ctr hmac-md5 none debug1: kex: server->client aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user root service ssh-connection method none debug1: attempt 0 failures 0 debug1: PAM: initializing for "root" debug1: userauth-request for user root service ssh-connection method publickey debug1: attempt 1 failures 0 debug1: test whether pkalg/pkblob are acceptable debug1: PAM: setting PAM_RHOST to "172.60.254.24" debug1: PAM: setting PAM_TTY to "ssh" debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file /root/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: matching key found: file /root/.ssh/authorized_keys, line 1 Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c debug1: restore_uid: 0/0 Postponed publickey for root from 172.60.254.24 port 53401 ssh2 debug1: userauth-request for user root service ssh-connection method publickey debug1: attempt 2 failures 0 debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: trying public key file /root/.ssh/authorized_keys debug1: fd 4 clearing O_NONBLOCK debug1: matching key found: file /root/.ssh/authorized_keys, line 1 Found matching RSA key: db:b3:b9:b1:c9:df:6d:e1:03:5b:57:d3:d9:c4:4e:5c debug1: restore_uid: 0/0 debug1: ssh_rsa_verify: signature correct debug1: do_pam_account: called Accepted publickey for root from 172.60.254.24 port 53401 ssh2 debug1: monitor_child_preauth: root has been authenticated by privileged process debug1: temporarily_use_uid: 0/0 (e=0/0) debug1: ssh_gssapi_storecreds: Not a GSSAPI mechanism debug1: restore_uid: 0/0 debug1: SELinux support enabled debug1: PAM: establishing credentials PAM: pam_open_session(): Authentication failure debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_pty_req: session 0 alloc /dev/pts/1 ssh_selinux_setup_pty: security_compute_relabel: Invalid argument debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. debug1: Received SIGCHLD. 
debug1: session_by_pid: pid 17323 debug1: session_exit_message: session 0 channel 0 pid 17323 debug1: session_exit_message: release channel 0 debug1: session_pty_cleanup: session 0 release /dev/pts/1 debug1: session_by_channel: session 0 channel 0 debug1: session_close_by_channel: channel 0 child 0 debug1: session_close: session 0 pid 0 debug1: channel 0: free: server-session, nchannels 1 Received disconnect from 172.60.254.24: 11: disconnected by user debug1: do_cleanup debug1: PAM: cleanup debug1: PAM: deleting credentials
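
    Note that the server-side log actually accepts the key; the failure comes later, at pam_open_session, right next to an SELinux relabel error. That points away from the keys themselves. A hedged starting point on the RHEL box:

        # fix SELinux contexts that a copied-in .ssh directory often carries
        restorecon -R -v /root/.ssh

        chmod 700 /root/.ssh
        chmod 600 /root/.ssh/authorized_keys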

    Read the article

  • How to set up a VirtualHost on Amazon EC2 for phpmyadmin

    - by Oudin
    Hi, I'm currently working on setting up a VirtualHost on Amazon EC2 for accessing phpmyadmin, so I can access it at test.example.com as opposed to it being widely available at the default example.com/phpmyadmin. So far I've created a file "testfile" in /etc/apache2/sites-available/ with the code below and enabled it ("a2ensite testfile"). However, I'm not getting the vhost to work: <VirtualHost *:80> ServerAdmin [email protected] ServerName test.example.com ServerAlias test.example.com #DocumentRoot /usr/share/phpmyadmin DocumentRoot /home/user/public_html/folder RewriteEngine On RewriteCond %{HTTP_HOST} !test.example.com RewriteRule (.*) [L] <Directory /home/user/public_html/folder> Options FollowSymLinks DirectoryIndex index.php AllowOverride None <IfModule mod_php5.c> AddType application/x-httpd-php .php php_flag magic_quotes_gpc Off php_flag track_vars On php_flag register_globals Off php_admin_flag allow_url_fopen Off php_value include_path . php_admin_value upload_tmp_dir /var/lib/phpmyadmin/tmp php_admin_value open_basedir /usr/share/phpmyadmin/:/etc/phpmyadmin/:/var/lib/phpmyadmin/ </IfModule> </Directory> # Authorize for setup <Directory /usr/share/phpmyadmin/setup> <IfModule mod_authn_file.c> AuthType Basic AuthName "phpMyAdmin Setup" AuthUserFile /etc/phpmyadmin/htpasswd.setup </IfModule> Require valid-user </Directory> # Disallow web access to directories that don't need it <Directory /usr/share/phpmyadmin/libraries> Order Deny,Allow Deny from All </Directory> <Directory /usr/share/phpmyadmin/setup/lib> Order Deny,Allow Deny from All </Directory> ErrorLog /home/user/public_html/folder/logs/error.log LogLevel warn CustomLog /home/user/public_html/folder/logs/access.log combined </VirtualHost> sudo ln -s /usr/share/phpmyadmin /home/user/public_html/folder The above line creates a link to phpmyadmin in the public folder. Any help on this would be greatly appreciated. Note: example.com will be replaced with my official domain
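
    Beyond the vhost file itself, two things commonly bite here: Apache has to be doing name-based hosting on port 80, and test.example.com has to resolve to the instance. Quick checks with standard Apache/Ubuntu tooling:

        sudo a2ensite testfile
        sudo apache2ctl -S          # shows which vhosts Apache actually loaded, and under which names
        sudo service apache2 reload

        host test.example.com       # must return the EC2 instance's public IP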

    Read the article

  • jWIN JC-AM100 not working

    - by Tony Kilgore
    My jWIN JC-AM100 is not working in Skype: I just see black, and the little light that shows the webcam is in use does not come on either. It works with Cheese, but the image is upside down. =[ lsusb: Bus 002 Device 012: ID 093a:2620 Pixart Imaging, Inc. Bus 002 Device 002: ID 04d9:1702 Holtek Semiconductor, Inc. Bus 002 Device 005: ID 046d:c05a Logitech, Inc. Optical Mouse M90 Bus 002 Device 004: ID 058f:9360 Alcor Micro Corp. 8-in-1 Media Card Reader Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
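
    Old Skype builds speak the V4L1 API and often show black until the libv4l compatibility shim is preloaded; the same library also knows how to flip sensors that mount upside down, like many Pixart chips. A hedged launch command - the library path varies by release and architecture:

        LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so skype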

    Read the article

  • Site migration and SEO impact

    - by John Smith
    I'd greatly appreciate a response on the following question relating to site migration and SEO impact. Here's some background on how my domain name and site are currently configured: My domain name provider has the following settings: host name @ is an A record and points to IP address x.x.x.x host name www is an A record and points to IP address x.x.x.x sub-domain host name new.example.com is an A record and points to IP address x.x.x.x My hosting provider has the following settings: host record @ is an A record and points to IP address x.x.x.x, folder home/public_html/old host record www is a CNAME record and points to example.com sub-domain host record new.example.com points to home/public_html/new I want to: point the domain (example.com AND www.example.com) to the content hosted under folder home/public_html/new, which is currently the content directory for new.example.com retire the content hosted under folder home/public_html/old retire the sub-domain host record new.example.com I believe the easiest method of doing this is: removing the sub-domain host record new.example.com; and changing the following line in the .htaccess file in home/public_html from # Change 'subdirectory' to be the directory you will use for your main domain. RewriteCond %{REQUEST_URI} !^/old/ to # Change 'subdirectory' to be the directory you will use for your main domain. RewriteCond %{REQUEST_URI} !^/new/ But I don't understand how this will impact my SERP - ideally, I'd like it to remain the same. Research on this topic resulted in the following Google page, which was no help, and this related StackExchange question, which suggests that this should not affect my SERP (at least, not permanently). But I wanted to make certain with a more specific example, and hopefully contribute to the community at the same time. I'd appreciate any feedback on this. Is there a better/recommended method to migrate sites this way? Is there an SEO impact?
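
    On the SEO side, the usual way to preserve rankings through a move like this is a permanent redirect from every old URL to its new home, so crawlers can transfer the history before you retire the old host name. A sketch for the new docroot's .htaccess, using this question's host names:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^new\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]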

    Read the article

  • How do I restore a backup of my keyring (containing ssh key passprases, nautilus remote filesystem passwords and wifi passwords)?

    - by con-f-use
    I changed the disk on my laptop and installed Ubuntu on the new disk. The old disk had 12.04, upgraded to 12.10, on it. Now I want to copy my old keyring with WiFi passwords, FTP passwords for Nautilus and ssh key passphrases. I have all the data from the old disk available (it is now a USB disk and I haven't deleted the old data yet or done anything with it - I could still put it in the laptop and boot from it like nothing happened). The old methods of just copying ~/.gconf/... and ~/.gnome2/keyrings won't work. Did I miss something? 1. Edit: I figure one needs to copy files not located in the user's home directory as well. I copied the whole old /home/confus (which is my home directory) to the fresh install, to no effect. That whole copy is now reverted to the fresh install's home directory, so my /home/confus is as it was after the fresh install. 2. Edit: The folder /etc/NetworkManager/system-connections seems to be the place for WiFi passwords. It could be that /usr/share/keyrings is important as well for ssh keys - that's the only sensible thing a search came up with: find /usr/ -name "*keyring*" 3. Edit: Still no ssh and FTP passwords from the keyring. What I did: Convert the old hard drive to a USB drive Put the new drive in the laptop and installed a fresh version of 12.10 there Booted from the old HDD via USB and copied its /etc/NetworkManager/system-connections, ~/.gconf/ and ~/.gnome2/keyrings, ~/.ssh over to the new disk Confirmed that all keys on the old install work Booted from the new disk Result: No passphrase for ssh keys, no FTP passwords in keyring. At least the WiFi passwords are migrated.
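
    One detail that may explain the failed copy: 12.10's gnome-keyring reads ~/.local/share/keyrings rather than the old ~/.gnome2/keyrings location, so it's worth checking both on the old disk. A hedged restore sequence (/media/olddisk is a placeholder for wherever the USB disk mounts):

        cp -a /media/olddisk/home/confus/.gnome2/keyrings/*.keyring ~/.local/share/keyrings/
        sudo cp -a /media/olddisk/etc/NetworkManager/system-connections/. /etc/NetworkManager/system-connections/
        sudo chmod 600 /etc/NetworkManager/system-connections/*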

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >