Search Results

Search found 14041 results on 562 pages for 'home surveillance'.


  • Mac OS X - User home directories shared via NFS

    - by Hugh
    I've run into some problems with how I've got user home directories set up on our system here. Our server is an Xserve, using Open Directory to manage the user accounts. The majority of our workstations are OS X, but there are a few running Linux (CentOS 5.3), and, as time goes on, we expect the proportion of Linux workstations to increase (at some point we expect to move the server side over to Linux too, but for now we're running with what we've already got).

    To ensure that the Linux and OS X workstations both see users' home directories in the same place, I shared the home directories using NFS. On the server end, the home directories are stored in /Volumes/data/company_users, which is mounted on the workstations at /mount/company_users. This works fine on the Linux workstations, but there is some weirdness under OS X. For the user who is logged in through the GUI, it all works just fine. However, if a user tries to SSH into a machine that they are not the primary user on, they often have no access to their own home directory. It looks as though OS X is trying to do something else to the user home directories mount point when you log in through the GUI.

    For example, on this machine (nv001), I (hugh) am logged into the GUI:

        Last login: Mon Mar 8 18:17:52 on ttys011
        [nv001:~] hugh% ls -al /mount/company_users
        total 40
        drwxrwxrwx   26 hugh  wheel   840 27 Jan 19:09 .
        drwxr-xr-x    6 admin admin   204 19 Dec 18:36 ..
        drwx------+ 128 hugh  staff  4308 27 Feb 23:36 hugh
        drwx------+  26 matt  staff   840  4 Dec 14:14 matt
        [nv001:~] hugh%

    So Matt's home directory is accessible to him. However, if I try to switch to him:

        [nv001:~] hugh% su - matt
        Password:
        su: no directory
        [nv001:~] hugh%

    Or:

        [nv001:~] hugh% su matt
        Password:
        tcsh: Permission denied
        tcsh: Trying to start from "/mount/company_users/matt"
        tcsh: Trying to start from "/"
        [nv001:/] matt%

    Does anyone have any idea why it might be doing this? It's causing me all sorts of problems at the moment. The only machine where I can successfully switch users at the moment is the server that the user directories are stored on, where /mount/company_users is actually just a symlink to /Volumes/data/company_users.
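
    For reference, a concrete sketch of the plumbing described above. The server-side line uses the BSD exports(5) syntax that OS X expects, and the client side is a Linux fstab entry; the hostname, network range, and options are illustrative assumptions, not taken from the question:

        # Server (/etc/exports) - hypothetical options
        /Volumes/data/company_users -maproot=nobody -network 192.168.1.0 -mask 255.255.255.0

        # Linux workstation (/etc/fstab) - hypothetical server name
        xserve.example.com:/Volumes/data/company_users  /mount/company_users  nfs  rw,hard,intr  0  0

    On the OS X clients, loginwindow/automount may remount or shadow the same mount point on behalf of the GUI user, which would fit the symptom that only the console user's directory behaves.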

    Read the article

  • Advice needed for a home network setup (hardware & software) to handle many clients and potentially heavy traffic

    - by posdef
    I have recently decided to restructure the home network of our flatshare here. Here's a quick outline of the situation. I envision having the following four devices connected to the router via cable:

    Xbox 360
    IP phone
    Printer
    QNAP server (web, file and multimedia)

    We are three people living here, so on top of that there will be up to 5-6 computers/mobile devices connecting as wireless clients. My goal is to be able to transfer files (when needed) between the computers and the multimedia server, which I can reach via the 360 and play on the TV. I also would like to keep a high level of security; right now I have WPA2 encryption and MAC filtering. I don't believe the web server will get heavy traffic, though I would like it to be responsive. Likewise, I don't have a habit of downloading via torrents etc., but I greatly appreciate my network being responsive and fast, especially when I am browsing or streaming high-quality media.

    Now my questions are:

    Is this setup feasible? Smart? Efficient? Can it be improved somehow?
    My current router (D-Link DI-624) and the previous one (DI-524) used to have spontaneous network drops, which I find highly irritating. I don't trust my router, especially now that it completely crashed when I was test-running the setup by transferring a large media file to the server while the Xbox was playing music from the server and two computers were browsing the net. Do I need to get new hardware, and if so, are there any recommendations for a reliable and fast router?

    Read the article

  • Website hosted at home pingable from outside, but not browseable from outside

    - by Richard DesLonde
    I have a simple setup:

    The server at home has local IP 192.168.1.3.
    IIS is running on the server and the website is up.
    Windows Firewall on the server has an exception rule for port 80 TCP.
    The router has static IP XX.XXX.XX.XXX.
    The router is forwarding TCP port 80 to 192.168.1.3.
    My domain registrar is my DNS host and is pointing to the static IP XX.XXX.XX.XXX of the router.

    Here's what I can and can't do. I can browse the website from within my home network, either by IP or by domain name. I can ping the domain and the IP from outside the network (from a computer at work). I can't browse the website from outside, either by domain name or by IP. Weird. Why can't I browse my website?

    Incidentally, I wasn't sure this question was appropriate for SO, but after finding a few others similar to it on SO, and no comments on those questions saying anything about them being inappropriate, I decided I would post this question. Let me know if this is not appropriate for SO, or is more appropriate for another of the SE websites. Thanks!
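
    Since ping only exercises ICMP, it proves nothing about TCP port 80. A quick hedged check from the outside machine (example.com standing in for the real domain, and the IP placeholder mirroring the question's):

        # Does anything answer on TCP 80 from outside?
        telnet example.com 80

        # Or request the page verbosely, by name and then by IP
        curl -v http://example.com/
        curl -v http://XX.XXX.XX.XXX/

    If these time out while ping succeeds, common culprits are the ISP blocking inbound port 80 on residential lines, or the forwarding rule only matching LAN-originated traffic.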

    Read the article

  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (with giant profiles, especially) all log in nearly simultaneously. I ran across Gluster, and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS.

    My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions, and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing.

    I got the impression that using Gluster would allow data to be redundant across multiple servers and would speed up access to the files under heavy load, while allowing us to dynamically boost storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe my understanding of it is wrong, though. Does anyone else use it, or care to share how feasible this is?
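
    For reference, a minimal sketch of what assembling such a volume looks like with the Gluster CLI; the peer names and brick paths are invented, and replica 2 is just one possible layout:

        # Pool two servers and create a two-way replicated volume
        gluster peer probe server2
        gluster volume create homes replica 2 server1:/bricks/homes server2:/bricks/homes
        gluster volume start homes

        # Grow it later by adding bricks in multiples of the replica count
        gluster volume add-brick homes server3:/bricks/homes server4:/bricks/homes

    (Gluster is peer-to-peer rather than master/slave, so capacity is added by expanding the volume, not by registering servers with a master node.)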

    Read the article

  • Apache Not Accepting a Path in My Home Folder

    - by Promather
    I have been trying to set up an Apache site to use a folder in my home folder, without any success. I followed the steps on this page exactly: https://help.ubuntu.com/community/ApacheMySQLPHP. Yet I did not succeed; I keep getting error 403, which says that the server doesn't have permission to access the requested page.

    I searched forums and many suggested changing the permissions of the folder. I went straight away and set the permissions to 777, but that didn't solve the problem. I made another search and somebody gave me a clue: it could be because my home folder is encrypted. I believe this could be the problem, but:

    What is the relation between encryption and Apache? I would have supposed the Apache server requests the file from the system, rather than trying to access the file bytes directly!
    Is there any way to solve this problem? I don't want to move the folder to /var/www, because I am using this Apache for testing, so I want whatever change I make to be immediately reflected, rather than having to copy files, which is error-prone.
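
    If the encrypted home is indeed the cause, one workaround sketch is to keep the document root outside the encrypted area and symlink it into the home directory, so edits still show up immediately; the paths below are hypothetical:

        # Unencrypted location that Apache's user can traverse
        sudo mkdir -p /srv/www/testsite
        sudo chown $USER: /srv/www/testsite

        # Convenience link for editing from the home directory
        ln -s /srv/www/testsite ~/testsite

    As for the relation between encryption and Apache: the decrypted view of an eCryptfs home is a per-user mount that typically exists only while that user is logged in, so the www-data process can be denied at the mount itself, no matter what chmod says. A 403 can also come from a missing execute (traverse) bit on an ancestor directory such as /home/username, which 777 on the target folder alone doesn't fix.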

    Read the article

  • [MINI HOW-TO] Repair Missing External Hard Drive Database Error in WHS

    - by Mysticgeek
    If you’re using external hard drives with your Windows Home Server, they might get unplugged and create an error. Here we look at running the Repair Wizard to quickly fix the issue.

    If an external drive that is included in your drive pool becomes unplugged or loses power, you might see an error under Home Network Health when opening the WHS Console. To fix the issue, verify the drive has power and is plugged in correctly, then click Repair. The wizard launches, and you’ll need to agree that you may lose data during the repair of the backup database. In this example it was a simple problem where an external drive became unplugged from the server, so you can close out of the wizard. If you look under Server Storage you can see the drive is missing. Once the drive has power and is plugged in correctly, WHS will add it back into the pool, and when finished you’ll see it listed as healthy and good to go.

    Using external drives as part of your storage pool may not be the best way to set up a home-built WHS, but if you do, expect occasional errors such as this one.

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command-line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no fewer than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan.

    The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001.

    So I wonder what happened: 186k start/stop cycles in 7,820 hours is about one start/stop per 2.5 minutes, around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.)

    Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know:

    Why the massive start/stop count? Do I have some sort of configuration issue?
    Could there be a background service that is causing trouble?
    Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this?

    Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    And the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME        FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate  0x002f 200   200   051    Pre-fail Always  -           0
          4 Start_Stop_Count     0x0032 001   001   000    Old_age  Always  -           185875
          9 Power_On_Hours       0x0032 090   090   000    Old_age  Always  -           7820
         12 Power_Cycle_Count    0x0032 100   100   000    Old_age  Always  -           109
        193 Load_Cycle_Count     0x0032 118   118   000    Old_age  Always  -           246833
        194 Temperature_Celsius  0x0022 107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated.
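
    Two of those attributes point at drive power management rather than workload: apm = 127 is the highest APM level that still permits spin-down (128-254 disallow it), and the Load_Cycle_Count of 246,833 looks like aggressive head parking. A hedged way to test that theory; the device name matches the question, but the value 254 is a suggestion rather than a documented fix:

        # Temporarily set the least aggressive APM level
        sudo hdparm -B 254 /dev/sda

        # Then watch whether the counters keep climbing
        sudo smartctl --attributes /dev/sda | grep -E 'Start_Stop|Load_Cycle'

    If they stop, the persistent equivalent is apm = 254 (and no spindown_time) in the /dev/sda block of /etc/hdparm.conf.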

    Read the article

  • Compact DIY Office-in-a-Cart Packs Away Into a Closet

    - by Jason Fitzpatrick
    Many geeks know the pain of losing a home office when a new baby comes along, but not many of them go to such lengths to miniaturize their office. With a little ingenuity, an entire home office now fits inside a heavily modified IKEA work table.

    Ian, an IKEAHacker reader and Los Angeles-area geek, explains the motivation for the build:

        I had to surrender my home office to make room for my new baby boy ;) I took an Ikea stainless steel kitchen “work table”, some Ikea computer tower desk trays, two steel tabletops, and two grated steel shelves to make an “office” that I could pack away into a closet.

    Hit up the link below to check out the full photo set; the build includes quite a few clever design choices like mounted monitors, a ventilation system, and more.

    Home Office In A Box [IKEAHacker]

    Read the article

  • ssh many users to one home

    - by filippo
    Hiya,

    I want to allow some trusted users to scp files into my server (to a specific user), but I do not want to give these users a home, nor an SSH login. I'm having problems understanding the correct settings of users/groups I have to create to allow this to happen. I will give an example.

    Having:

    MyUser@MyServer; MyUser belongs to the group MyGroup; MyUser's home will be, let's say, /home/MyUser
    SFTPGuy1@OtherBox1
    SFTPGuy2@OtherBox2

    They give me their id_dsa.pub's and I add them to my authorized_keys. I reckon then I'd do on my server something like useradd -d /home/MyUser -s /bin/false SFTPGuy1 (and the same for the other), and lastly useradd -G MyGroup SFTPGuy1 (then again, for the other guy).

    I'd expect then the SFTPGuys to be able to sftp -o IdentityFile=id_dsa MyServer and to be taken to MyUser's home... Well, this is not the case. SFTP just keeps asking me for a password. Could someone point out what I am missing?

    Thanks a mil,
    f.

    [EDIT: Messa on Stack Overflow asked me if the authorized_keys file was readable to the other users (members of MyGroup). It's an interesting point; this was my answer: Well, it wasn't (it was 700), but then I changed the permissions of the .ssh dir and the auth file to 750, though still no effect. I guess it's worth mentioning that my home dir (/home/MyUser) is also readable for the group; most dirs are 750 and the specific folder where they'd drop files is 770. Nevertheless, about the auth file, I reckon the authentication would be performed by the local user on MyServer, wouldn't it? If so, I don't understand the need for other users to read it... well, just wondering.]
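
    For what it's worth, the usual OpenSSH pattern for drop-off-only accounts is internal-sftp with a chroot, rather than pointing several users at one shared home. A hedged sketch (the group name and path are illustrative; the chroot directory itself must be root-owned and not group-writable):

        # /etc/ssh/sshd_config (OpenSSH 4.9 or later)
        Subsystem sftp internal-sftp

        Match Group sftponly
            ChrootDirectory /srv/dropbox
            ForceCommand internal-sftp
            AllowTcpForwarding no

    Each SFTPGuy then gets membership in sftponly and drops files into a writable subdirectory such as /srv/dropbox/incoming. Note also that the stock sftp subsystem is started through the account's shell, so a /bin/false shell blocks SFTP logins too; internal-sftp runs inside sshd and sidesteps that.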

    Read the article

  • Unity on Ubuntu 11.10 - The Dash Home button brings up the panel, but is empty

    - by David M. Coe
    The Dash home button brings up a panel that is greyed out, but it is totally empty. It seems to be the very same issue as this one, which is unanswered: "Dash home button brings up blank window".

    /usr/lib/nux/unity_support_test -p returns:

        OpenGL vendor string:   X.Org R300 Project
        OpenGL renderer string: Gallium 0.4 on ATI RV370
        OpenGL version string:  2.1 Mesa 7.11

        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       yes

    I've tried unity --reset, but that doesn't seem to work. Unity seems to reset, but I get the following warning over and over:

        cs space validation failed unity

    What should I do next to try and fix this?

    Edit: Attempted fixes: I've reformatted; that did not work. I've done apt-get remove unity, then apt-get update, then apt-get install unity; that did not work. I've switched to Unity 2D and this seems to work. How can I get regular Unity working, or at least find the error?

    Read the article

  • I deleted all files and folders (including hidden) from /home/username/ now in big trouble

    - by jeffery_the_wind
    I am logged into a remote Ubuntu server, and I accidentally erased the entire /home/username/ directory for the current user. The only thing left is a hidden directory called .gvfs. I don't need any of the Documents/Music/etc.

    Now it is not letting me cd into the /var/www/ directory, which has permissions 666 and is owned by the current user. I am afraid to disconnect from my SSH session because I don't know if I will be able to get back on. Have I permanently created a problem? Is there a way I can replace the most important files in the /home/username/ directory? Thanks!

    ** EDIT ** Thanks everyone for the help. I figured out that the problem with cd into /var/www/ was actually the permissions on the /var/www/ directory itself. It was set to 666; I changed it to 755 and everything was good again. It doesn't look like anything systemic was ruined by deleting the contents of the user folder.
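
    For the "most important files" part, a hedged sketch of repopulating the default dotfiles from the distribution skeleton (username is a placeholder, as in the question):

        # Copy the starter files, including hidden ones, back into place
        sudo cp -r /etc/skel/. /home/username/
        sudo chown -R username:username /home/username
        chmod 755 /home/username

    The edit's 666-to-755 fix also has a one-line explanation: directories need the execute bit to be entered, so a 666 (rw-only) directory refuses cd even to its owner.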

    Read the article

  • Why do the Cucumber features keep running even though they fail?

    - by Millisami
    It's a Rails 2.3.5 app. I'm using RSpec and Cucumber for testing. When I run autospec, it runs correctly with the warning "(Not running features. To run features in autotest, set AUTOFEATURE=true.)", as below:

        [~/rails_apps/automation (campaign)?] ? autospec
        (Not running features. To run features in autotest, set AUTOFEATURE=true.)
        (Not running features. To run features in autotest, set AUTOFEATURE=true.)
        loading autotest/rails_rspec
        /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/pathname.rb:263: warning: `*' interpreted as argument prefix
        /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/bin/ruby /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/rspec-1.3.0/bin/spec --autospec /home/millisami/rails_apps/automation/spec/controllers/campaigns_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/board_spec.rb /home/millisami/rails_apps/automation/spec/models/user_spec.rb /home/millisami/rails_apps/automation/spec/models/campaign_spec.rb /home/millisami/rails_apps/automation/spec/controllers/outlets_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/boards_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_type_spec.rb /home/millisami/rails_apps/automation/spec/models/vendor_spec.rb /home/millisami/rails_apps/automation/spec/controllers/brands_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/vendors_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/dashboard_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/brand_spec.rb /home/millisami/rails_apps/automation/spec/helpers/dashboard_helper_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_spec.rb /home/millisami/rails_apps/automation/spec/models/client_spec.rb /home/millisami/rails_apps/automation/spec/controllers/clients_controller_spec.rb -O spec/spec.opts

    Now, as it suggests, when I run AUTOFEATURE=true autospec, the specs run and the Cucumber features as well. But the problem is that it won't stop: it runs the features, then runs them again and again in a loop. It doesn't stop after a failure. Is this due to the warning "Warning: $KCODE is NONE" as shown below?

        [~/rails_apps/automation (campaign)?] ? AUTOFEATURE=true autospec
        loading autotest/cucumber_rails_rspec
        Warning: $KCODE is NONE.
        /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/treetop-1.4.5/lib/treetop/ruby_extensions/string.rb:31: warning: method redefined; discarding old indent
        /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/pathname.rb:263: warning: `*' interpreted as argument prefix
        /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/core_ext/object/blank.rb:49: warning: method redefined; discarding old blank?
        /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/bin/ruby /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/rspec-1.3.0/bin/spec --autospec [same list of spec files as above] -O spec/spec.opts

    Read the article

  • Effect of HOME on libreoffice when converting to PDF as a non-root user

    - by user1032531
    I installed libreoffice-headless and can convert documents when logged on as root. I then tried doing so as another user; it didn't show an error, but it didn't convert the file, either. I then found that if I get rid of the HOME=/tmp/ayb, it works for the other user.

    Doesn't HOME=/tmp/ayb just make files default to this directory if not specified? (Sorry, I tried to search "Linux HOME", but as you probably expect, received a bunch of non-relevant results.) If not, what is the purpose of specifying HOME? Why does setting HOME prevent it from converting for non-root users? Note that /tmp and /tmp/ayb are both 0777. Thank you.

        [root@desktop ~]# yum install libreoffice-headless
        [root@desktop ~]# yum install libreoffice-writer
        [root@desktop ~]# ls -l
        total 48
        -rwxrwxrwx. 1 NotionCommotion NotionCommotion 48128 Jul 30 02:38 document_34.doc
        [root@desktop ~]# HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        [root@desktop ~]# rm d*.pdf
        rm: remove regular file `document_34.pdf'? y
        [root@desktop ~]# /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        [root@desktop ~]# rm d*.pdf
        rm: remove regular file `document_34.pdf'? y
        [root@desktop ~]# su NotionCommotion
        sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ exit
        exit
        [root@desktop ~]# su NotionCommotion
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
        sh-4.1$ rm d*.pdf
        sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
        sh-4.1$ rm d*.pdf
        rm: cannot remove `d*.pdf': No such file or directory
        sh-4.1$
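
    One shell detail worth untangling here (an observation about semantics, not a confirmed diagnosis): HOME=/tmp/ayb; cmd and HOME=/tmp/ayb cmd are different. With the semicolon, HOME is reassigned for the whole shell session and every later command; without it, the override applies to that single command only:

        # Overrides HOME for this one command; the shell keeps its own HOME
        HOME=/tmp/ayb /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc

        # Reassigns HOME for the rest of the session (what the transcript does)
        HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc

    LibreOffice also keeps its user profile under $HOME, so a HOME the invoking user cannot write to, or a stale root-owned profile left there by the earlier root runs, is a plausible reason a non-root conversion exits silently without producing a PDF.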

    Read the article

  • Changing home directory in cPanel

    - by user30878
    Is it possible (and if so, how?) to change what the home directory is in a hosting environment? For instance, my current home directory is /home/accountname/, but I would like to develop a second version of the site in /home/accountname/cms/ (to avoid filename conflicts, etc.) and then make the domain point there. Do I have to do that through the domain registrar, or can I do it in cPanel?
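
    cPanel normally fixes the main domain's document root at public_html, but a rewrite in its .htaccess can serve the site from a subdirectory. A hedged sketch, assuming the second version lives inside public_html (example.com stands in for the real domain):

        # /home/accountname/public_html/.htaccess (hypothetical)
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/cms/
        RewriteRule ^(.*)$ /cms/$1 [L]

    No registrar change is needed for this; DNS keeps pointing at the same server, and the switch happens inside Apache. (Addon and parked domains, by contrast, do get a per-domain document root setting in cPanel.)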

    Read the article

  • The number of soft 404 errors is increasing because of redirects to the home page

    - by Stevie G
    I have an increase in soft 404 errors. Using Apache, in my .htaccess file I have:

        Redirect 301 /test.html? /page/pop/test
        Redirect 301 /about.html? /about

    I have also tried:

        Redirect 301 http://www.example.co.za/test.html? http://www.example.co.za/services/test

    However, whenever I go to:

        http://www.example.co.za/test.html
        http://www.example.co.za/about.html

    it just redirects to the home page. I also have this in .htaccess:

        RewriteRule ^.*$ index.php [NC,L]
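
    A likely culprit, offered as an educated guess rather than a confirmed diagnosis: mod_rewrite rules in .htaccess generally win over mod_alias Redirect directives, so the catch-all RewriteRule sends /test.html straight to index.php (the home page) before the 301s ever fire, and Google records the 200-with-home-page response as a soft 404. A sketch of doing everything in mod_rewrite, in order:

        RewriteEngine On

        # Emit the 301s first, as rewrite rules
        RewriteRule ^test\.html$ /page/pop/test [R=301,L]
        RewriteRule ^about\.html$ /about [R=301,L]

        # Front controller only for URLs that are not real files or directories
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^.*$ index.php [NC,L]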

    Read the article

  • Some Keyword Optimization Tips For Your Work From Home

    Part of search engine optimization or SEO is keyword optimization. We optimize our keywords for our website because these words or phrases are the ones that will link us to our customers or target audience. By typing these keywords in the search box of popular search engines, customers are able to find us, our website and of course, our business or work from home.

    Read the article

  • Element 1.1 for home theater PCs

    <b>LWN.net:</b> "Element is a lightweight Linux distribution for use on a home theater PC (HTPC). It comes with most of the same video-playback applications one would find in a modern desktop distribution, but the development team has put considerable effort into wrapping the applications in an environment that is easy to navigate from across the room..."

    Read the article

  • Internet At Home

    Networking used to be just for businesses at the office, but one of the biggest changes of the last few years is the increased necessity of a network at home. Ten years ago it was common for houses ... [Author: Chris Holgate - Computers and Internet - April 06, 2010]

    Read the article

  • Encrypted home won't mount automatically nor with ecryptfs-mount-private

    - by Patrik Swedman
    Up until recently my encrypted home worked great, but after a reboot it didn't mount automatically, and when I try to mount it manually I get a mount error:

        patrik@patrik-server:~$ ecryptfs-mount-private
        Enter your login passphrase:
        Inserted auth tok with sig [9af248791dd63c29] into the user session keyring
        mount: Invalid argument
        patrik@patrik-server:~$

    I've also tried with sudo, even though that shouldn't be necessary:

        patrik@patrik-server:/$ sudo ecryptfs-mount-private
        [sudo] password for patrik:
        Enter your login passphrase:
        Inserted auth tok with sig [9af248791dd63c29] into the user session keyring
        fopen: No such file or directory

    I'm using Ubuntu 10.04.4 LTS and I access the machine over SSH with PuTTY.
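
    Some first diagnostics one might try, as a hedged sketch; the paths are the standard eCryptfs layout on Ubuntu, and whether they match this system is an assumption:

        # Control files that ecryptfs-mount-private relies on
        ls -l ~/.ecryptfs
        # expected: Private.mnt, Private.sig, wrapped-passphrase, auto-mount

        # The encrypted payload itself
        ls /home/.ecryptfs/patrik/.Private | head

        # Check that the login passphrase still unwraps the mount passphrase
        ecryptfs-unwrap-passphrase ~/.ecryptfs/wrapped-passphrase

    The sudo run failing with "fopen: No such file or directory" is consistent with it looking for those control files in root's home, which is why sudo doesn't help here.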

    Read the article
