Search Results

Search found 10 results on 1 page for 'berkes'.

  • Encrypted home breaks on login

    - by berkes
    My home directory is encrypted, which breaks login. Gnome and other services try to find all sorts of dot-files (e.g. .ICEauthority), write to them, read from them and so on. They are not found (yet), because at that moment the home directory is still encrypted. I do not have automatic login enabled, since that has known issues with an encrypted home in Ubuntu. When I go through the following steps, there is no problem: boot the system, [ctrl][alt][F1], log in, run ecryptfs-mount-private, [ctrl][alt][F7], done. I can now log in. I may have some setting wrong, but have no idea where. I suspect ecryptfs-mount-private should be run earlier in the boot process, but do not know how to make it so. Some issues that may cause trouble: I have a fingerprint reader, which works for login and PAM, and I have three keyrings in seahorse (not just one), containing passwords from old machines (backups). The suggestion was that the PAM settings are wrong, so here are the relevant parts from /etc/pam.d/common-auth:

        # here are the per-package modules (the "Primary" block)
        auth    [success=3 default=ignore]  pam_fprintd.so
        auth    [success=2 default=ignore]  pam_unix.so nullok_secure try_first_pass
        auth    [success=1 default=ignore]  pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        # here's the fallback if no module succeeds
        auth    requisite                   pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth    required                    pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth    optional                    pam_ecryptfs.so unwrap
        # end of pam-auth-update config

    I am not sure how this configuration works, but it seems that maybe the *optional* in "auth optional pam_ecryptfs.so unwrap" is causing ecryptfs to be ignored?

    Read the article

  • How to make Chrome/Chromium remember passwords in the gnome seahorse keyring?

    - by berkes
    Is it possible to make Chrome or Chromium (as it comes by default in the repos) use the Gnome seahorse keyring as its password vault? I have not found a way to do this for Firefox either, but maybe a solution for Firefox will lead to a solution for Chrome. FYI: Epiphany is properly integrated into Gnome by default and does use the default password vault. It would be great to at least have all passwords in a single, actually secure, place, instead of lying around in my home dir. Even better would be if they could somehow re-use each other's passwords, but that depends on the implementation of this integration, I guess.

    Read the article

  • How to have Thunderbird/Lightning open .ics files

    - by berkes
    Some websites offer .ics (calendar) files. I would like Thunderbird to open these with Lightning, so it can add the events in the .ics file to its calendar. When I use /usr/bin/thunderbird to open the file with, it starts a new message with the .ics file attached. I could download the file and then import it from Lightning, but I would rather have Lightning just open the file. Is there a command-line option for this? Some wrapper script maybe? Should I change something in Firefox?

    Read the article

  • Evolution has no access to couchdb

    - by berkes
    Evolution gives the error "Cannot open addressbook": "We were unable to open this addressbook. This either means you have entered an incorrect URI, or the server is unreachable. Details: Operation not permitted." (rough translation from Dutch). Enabling verbose logging in (desktop)couchdb tells me roughly the same:

        [info] [<0.7875.1>] 127.0.0.1 - - 'PUT' /contacts/ 400
        [debug] [<0.7875.1>] httpd 400 error response: {"error":"invalid_consumer","reason":"Invalid consumer (key or signature method)."}

    It seems that Evolution tries to fetch the contacts, couchdb denies access, and Evolution then fails to do a proper OAuth handshake. This is on Ubuntu 10.10, with its default desktopcouch 1.0.1. Any hints on where to start would be most appreciated :)
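
    In case it helps to take Evolution out of the equation, below is a rough Ruby sketch (using the oauth gem) for replaying a request against desktopcouch by hand. The port, consumer key/secret and token values are placeholders, since desktopcouch picks its port at runtime and keeps the real OAuth credentials in the Gnome keyring; comparing the response with and without valid credentials might show whether the problem is the credentials themselves or the signature method.

        # Hypothetical diagnostic sketch: replay a request against desktopcouch
        # with the oauth gem. The port and all credentials below are placeholders.
        require 'rubygems'
        require 'oauth'

        consumer = OAuth::Consumer.new(
          'CONSUMER_KEY', 'CONSUMER_SECRET',    # placeholders, not real values
          :site => 'http://127.0.0.1:5985'      # placeholder port
        )
        token = OAuth::AccessToken.new(consumer, 'TOKEN', 'TOKEN_SECRET')

        response = token.get('/contacts/')      # same resource Evolution fails on
        puts response.code
        puts response.body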

    Read the article

  • Regain Sudo rights after removing from admin group

    - by berkes
    Hello, I accidentally removed myself from the admin group when editing my user. Now I can no longer use sudo. The error says: "ber is not in the sudoers file. This incident will be reported." I booted into rescue mode, but when going into the root prompt it asks me for the root password. I don't have one, and providing my own password (for the first and only Ubuntu user) is not accepted. My hard disk is encrypted, but only the /home/user part, not the entire disk, afaik. What can I do?

    Read the article

  • Connect to NFS when it becomes available

    - by berkes
    What would be a good way to automatically mount an NFS share when it gets/is available? I have the following: a media server at home, running Ubuntu 10.10 with a GUI *), and a laptop that is often at home, but often on the road or at clients, also running Ubuntu 10.10 with a GUI. What I'd like is my laptop connecting to the NFS share (or any other mountable networked filesystem) so that Banshee sees all the music, new podcast entries (and video) from that media server. I already have firefly (mt-daapd) running, which works, but is flaky on both the server side and the client side. Its biggest downside, though, is that I cannot easily fix metadata on files on the media server this way: DAAP is read-only by design. I can mount the NFS share manually, with sudo mount /media/nfsmultimedia/. I am not looking for a manual or howto on setting up an NFS client and server, merely a way to have this work more transparently. Obviously I'd like the NFS share to be unmounted when the network is no longer available (i.e. when I open my laptop lid at a client's desk). It may be that NFS is not suited for this; in that case, I'd love to hear other options. :) *) Actually, I also have a fileserver, backup server and webserver to which I'd like to connect in a somewhat similar way. Right now I connect to these over SSH, using gvfs.

    Read the article

  • Don't show gwibber messages in indicator applet

    - by berkes
    Gwibber turns the indicator-applet icon blue each time a new tweet appears in my timeline, which is always. The icon is therefore no longer useful as a "there is a new message for you" signal, resulting in many missed jabber/gtalk chats, emails found half an hour late, and so on. How can I mute Gwibber in there? Preferably so that it only turns the new-message icon blue on new replies and private messages in Gwibber, but certainly not on every update.

    Read the article

  • Deploying a sinatra app with passenger gives only 404 page not found. Yet a simple rack app works.

    - by berkes
    I have correctly (or probably not) installed Passenger on Apache 2. Rack works, but Sinatra keeps giving 404s. Here is what works, config.ru:

        app = proc do |env|
          return [200, { "Content-Type" => "text/html" }, "hello <b>world</b>"]
        end
        run app

    Here is what works too: running the app.rb (see below) with ruby app.rb and then looking at localhost:4567/about and at /, restarting the app, gives me a correct hello world. w00t. But then there is the sinatra entering the building. config.ru:

        require 'rubygems'
        require 'sinatra'

        root_dir = File.dirname(__FILE__)

        set :environment, ENV['RACK_ENV'].to_sym
        set :root, root_dir
        set :app_file, File.join(root_dir, 'app.rb')
        disable :run

        run Sinatra::Application

    and an app.rb:

        require 'rubygems'
        require 'sinatra'

        get '/' do
          "Hallo wereld!"
        end

        get '/about' do
          "Hello world, it's #{Time.now} at the server!"
        end

    This keeps giving 404s. /var/log/apache2/error.log lists these correctly as 404s, with something that worries me:

        83.XXXXXXXXX - - [30/May/2010 16:06:52] "GET /about " 404 18 0.0007
        83.XXXXXXXXX - - [30/May/2010 16:06:56] "GET / " 404 18 0.0007

    The thing that worries me is the space after the / and the /about. Would Apache or Sinatra go looking for /[space], like /%20? If anyone knows what this problem relates to, maybe a known bug (that I could not find) or a known gotcha? Maybe I am just being stupid and getting it all wrong? Otherwise, any hints on where to get, read or log more developer data on a running Rack, Sinatra or Passenger app would be helpful too: to see what Sinatra is looking for, for example. Some other information: running Ubuntu 9.04, apache2-mpm-prefork (deb), mod_php5, ruby 1.8.7, passenger 2.2.11, sinatra 1.0.
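
    For comparison, here is a minimal config.ru layout commonly used for classic-style Sinatra apps under Passenger. This is only a sketch, assuming app.rb sits next to config.ru and that defaulting the environment to production is acceptable:

        # config.ru -- minimal sketch for a classic-style Sinatra app under Passenger
        require 'rubygems'
        require 'sinatra'

        # Load the routes defined in app.rb (same directory as this file).
        require File.join(File.dirname(__FILE__), 'app')

        # Passenger runs the server, so Sinatra should not start its own.
        Sinatra::Application.set :run, false
        Sinatra::Application.set :environment, :production

        run Sinatra::Application

    If this minimal variant also returns 404s, the problem is more likely in the Apache/Passenger vhost configuration (for example, DocumentRoot not pointing at the app's public/ directory) than in the Sinatra code itself.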

    Read the article

  • Rails rspec expects Admin::PostsController, which is there.

    - by berkes
    I have a file app/controllers/admin/posts_controller.rb:

        class Admin::PostsController < ApplicationController
          layout 'admin'

          # GET /admin/posts
          def index
            @pposts = Post.paginate :page => params[:page], :order => 'created_at DESC'
          end

          # ...many more standard CRUD/REST methods...
        end

    and an rspec test spec/controllers/admin/posts_controller_spec.rb:

        require 'spec_helper'

        describe Admin::PostsController do
          describe "GET 'index'" do
            it "should be successful" do
              get 'index'
              response.should be_success
            end
          end

          # ...many more tests for all CRUD/REST methods
        end

    However, running that spec throws an error. I have no idea what that error means, nor how to start solving it:

        /home/...../active_support/dependencies.rb:492:in `load_missing_constant': Expected /home/...../app/controllers/admin/posts_controller.rb to define Admin::PostsController (LoadError)

    I may have it all set up wrong, or may be doing something really silly, but all I want is my CRUD actions on /admin, with separate before filters and a separate layout, and to test these controllers. EDIT: ZOMG, I made a terrible copy-paste error in this SO posting. The controller was PostsController, not the PagesController that I pasted in here. The problem still stands, as my code is correct; just the SO post here was wrong.
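
    For what it's worth, that LoadError means the autoloader loaded app/controllers/admin/posts_controller.rb but did not find the constant Admin::PostsController defined in it afterwards (typically because the file actually defines a differently named class, or a stale copy elsewhere on the load path shadows it). One thing worth comparing against is explicit module nesting, sketched below; this assumes, as in the question, that will_paginate provides Post.paginate:

        # app/controllers/admin/posts_controller.rb
        # The file's path must map to the constant it defines:
        # admin/posts_controller.rb  ->  Admin::PostsController
        module Admin
          class PostsController < ApplicationController
            layout 'admin'

            # GET /admin/posts
            def index
              @posts = Post.paginate :page => params[:page], :order => 'created_at DESC'
            end
          end
        end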

    Read the article

  • Get absolute (base) url in sinatra.

    - by berkes
    Right now I do:

        get '/' do
          set :base_url, "#{request.env['rack.url_scheme']}://#{request.env['HTTP_HOST']}"
          # ...
          haml :index
        end

    to be able to use options.base_url in the HAML index.haml. But I am sure there is a far better, DRY way of doing this, yet I cannot see nor find it. (I am new to Sinatra :)) Somehow, outside of get, I don't have request.env available, or so it seems, so putting it in an include did not work. How do you get your base url?
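
    One DRY-er option, sketched below for a classic-style Sinatra app: compute the value in a helper instead of inside each route. request is available inside helpers (and templates render in the same context), so every route and view can call base_url without setting anything:

        require 'rubygems'
        require 'sinatra'

        helpers do
          # Built from the same env values as in the question; memoized per request.
          def base_url
            @base_url ||= "#{request.env['rack.url_scheme']}://#{request.env['HTTP_HOST']}"
          end
        end

        get '/' do
          haml :index   # index.haml can call base_url directly
        end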

    Read the article
