Search Results

Search found 24207 results on 969 pages for 'anonymous users'.


  • CodePlex Daily Summary for Monday, March 26, 2012

    CodePlex Daily Summary for Monday, March 26, 2012

    Popular Releases

    • Quick Performance Monitor: Version 1.8.1: Added option to set main window to be 'Always On Top'. Use context (right-click) menu on graph to toggle.
    • .Net Rest API for Kayako Fusion 4: kayako_rest_api_2012.03.26: Added ability to search for users via organisation/email. This is much quicker than getting all users then filtering.
    • GeoMedia PostGIS data server: PostGIS GDO 1.0.1.1: This is a new version of the GeoMedia PostGIS data server which supports user rights. It means that only those feature classes which the current user has rights to select are visible in GeoMedia. Issues fixed in this release: fixed a problem where the gdo.gfeaturesbase table has been visible in GeoMedia (to hide this table, run the previous version of Database Utilities and uncheck this table in the feature classes list, then load the new release); fixed a problem when coordinate system list has not...
    • Silverlight 4 & 5 Persian DatePicker: Silverlight 4 and 5 Persian DatePicker: Added Silverlight 5 support.
    • Y.Music: Y.Music v.1.0: Beta.
    • Asp.NET Url Router: v1.0: Build for .NET 2.0 and .NET 4.0.
    • menu4web: menu4web 0.0.3
    • ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Final: This release installs both the ArcGIS Editor for OSM Server Component and/or ArcGIS Editor for OSM Desktop components. The Desktop tools allow you to download data from the OpenStreetMap servers and store it locally in a geodatabase. You can then use the familiar editing environment of ArcGIS Desktop to create, modify, or delete data. Once you are done editing, you can post back the edit changes to OSM to make them available to all OSM users. The Server Component allows you to quickly create...
    • Craig's Utility Library: Craig's Utility Library 3.1: This update adds about 60 new extension methods, a couple of new classes, and a number of fixes, including: Additions: added DateSpan class; added GenericDelimited class. Random additions: added static thread-friendly version of Random.Next called ThreadSafeNext. AOP Manager additions: added Destroy function to AOPManager (clears out all data so the system can be recreated; really only useful for testing...). ORM additions: added PagedCommand and PageCount functions to ObjectBaseClass (same as M...
    • XNA Electric Effect: Jason Electric Effect v1.1: The library now includes 3 effect types: Line, Bezier, and CatmullRom, providing different look and feel.
    • DotSpatial: DotSpatial 1.1: This is a minor release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions. Extended -- includes debugging symbols and additional extensions. Just want to run the software? An end user (non-programmer) version is available, branded as MapWindow. Want to add your own feature? Develop a plugin using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components are available as NuGet pa...
    • Change default Share-site group SharePoint Online (Office 365): Change default Share-site group SharePoint Online: By default, when we share a site collection or site with external users, SharePoint Online shows the default SharePoint groups, which are Visitors and Members. By using this feature, you will get a link which you can use to customize the default groups to your custom groups and other default groups.
    • Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.
    • Working with Social Data: Tag Cloud Customization: http://swatipoint.blogspot.com/2011/10/sharepoint-2010-social-featurestagging.html
    • WebDAV for WHS: Version 1.0.67: Added: check whether Remote Web Access is turned on or not; added: check for Add-In updates.
    • Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: The March release of Phalanger 3.0 significantly enhances performance, adds new features, and fixes many issues. See the following list of main improvements. New features: Phalanger Tools installable for Visual Studio 2011 Beta; "filter" extension with several of the most-used filters implemented; DomDocument HTML parser, loadHTML() method; mail() PHP-compatible function; PHP 5.4 T_CALLABLE token; PHP 5.4 "callable" type hint; PCRE: UTF32 characters in range support; configuration supports <c...
    • Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: internationalization; custom authentication provider; access control list for forums and threads. Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01. Visit the Roadmap for more details.
    • BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new: Analysis Services Tabular Smart Diff, Tabular Actions Editor, Tabular HideMemberIf, Tabular Pre-Build ...
    • Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build. New feature - JsonTextReader automatically reads ISO strings as dates. New feature - added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default. New feature - added DateTimeZoneHandling to control reading and writing DateTime time zone details. New feature - added async serialize/deserialize methods to JsonConvert. New feature - added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...

    New Projects

    • ASIVeste: No description available.
    • Author-it Sync Headings Plug-in: Author-it plug-in that allows the user to synchronize the Print, Help, and Web headings with the Description for each selected topic.
    • BlogEngine.Web: BlogEngine.Web is BlogEngine.Net converted to use the Web Application Project (WAP) model.
    • Code Writer Helper: A quick solution to help code generator writers.
    • CodeUITest: Practise CodeUI automation.
    • DAX Studio: Excel Add-In for PowerPivot and Analysis Services Tabular projects that will include an Object Browser, query editing and execution, formula and measure editing, syntax highlighting, integrated tracing, and query execution breakdowns.
    • Fated: Fated is an isometric-viewed, tile-based tactical RPG developed in C# using XNA, to be deployed to Xbox. This includes a character generation core, graphics engine, and storyline parser.
    • iSufe: Targets iOS, Android, and WAP clients; released under GPLv2.
    • Kinect test project: Basic project for my Kinect test application.
    • LoLTimers: LoLTimers by Christian Schubert, 2012. Version 1.0.0.0. This is a small app that lets you keep track of the most important creep camp cooldowns. Developed in Visual Studio C#.
    • London Priority Security Services Ltd: LPSS - London Priority Security Services Ltd.
    • NMCNPM: Code for the nmcnpm group.
    • Office 365 Anonymous Access Manager Sandbox Solution: The sandbox solution enables you to manage anonymous access to lists on Office 365. It allows setting read, modify, and add rights. Additionally, the configuration page adds the necessary events to be able to use moderation when anonymous users are creating a list item. The second feature in the solution enables anonymous access on blog sites; it allows anonymous users to comment on a blog.
    • Office 365 Google Analytics: This sandbox solution enables Google Analytics everywhere in your site collection. This allows you to use Google Analytics reporting on all your Office 365 sites.
    • Office 365 Mobile Access Enables for Public Sites & Blogs: This sandbox solution enables mobile access on Office 365 sites.
    • OwnMFCSolution: MFC test solution.
    • People Data Generator: Need to load a bunch of test data to represent people (e.g. name, address, phone, etc.)? Wish it looked realistic? People Data Generator is what you need. Features: realistic names; realistic addresses, using real towns and postal codes; realistic phone numbers and emails; very extensible.
    • Proventi: With this program you can manage your company's inventory. The program will initially be used within the mini-enterprise Proventi. It is written in VB.Net and uses SQL Server CE for data storage.
    • qCommerce: A project for qSoftware.
    • QuanLyOTo: A C# course project for managing a car garage.
    • RoyaSoft.ir Resources: I use this project for my personal web site :)
    • SGPF: The team has nothing to declare here!
    • SharePoint 2010 Autocomplete Lookup Field: The Autocomplete Lookup field allows type-ahead functionality while entering lookup values in list items.
    • Sharing Photos using SignalR: An MVC application using SignalR that can be used to share photos between friends and get realtime updates. A user connected to the website can upload a photo, which will be automatically broadcast to all clients connected at that point.
    • Sistema Hoteleiro: Sistema Hoteleiro is my final project for the course Arquitetura de Aplicativos Ambiente .NET in the 4th class of the postgraduate specialization in Distributed Systems Architecture offered by the Instituto de Educação Continuada of the Pontifícia Universidade Católica de M
    • Software Revolution: This project is the core information site of a company named Software Revolution, which provides software solutions.
    • tgryi: tgyri
    • vbWSUS: Really decide which updates to install, and when, from a centralized server, globally or per host: installation schedule; updates to install; email results; configure extra Windows Update parameters. Works with a WSUS server or Windows Update from Microsoft. See README.txt for more information! :) The current official website is http://sourceforge.net/projects/vbwsus/
    • XamlCombine: Combines multiple XAML resource dictionaries into one, replaces DynamicResources with StaticResources, and sorts them in order of usage.
    • XNA Shader-free Linear Burn effect: Sample demonstrating a Linear Burn effect in XNA without using custom shaders.
    • zhCms: zhCms.
    • zhtest: My test project.

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

    Installing the driver

    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections

    As you click on the "Add Connection" link in the left pane you will notice that it's now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don't want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there's any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

    The context for remote servers

    Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context; think of it as a class that wraps all the code that you're writing. If you're connecting to a live server, the context will contain the application object itself and all entities present in this application (sources, sinks, subjects, and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class, and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types; more on that later.

    Remoting is not supported

    As you play with the entities exposed by the context you will notice that you can't read and write directly to/from them. If, for instance, you try to dump the content of an entity, you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server, and dumping its content means reading the events produced by this entity into the local process. For example:

    ObservableSource.Dump();

    will yield the following error:

    Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.

    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can't bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries

    You may ask: what's the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let's say that this code creates an entity:

    using (var server = Server.Connect(...))
    {
        var a = server.CreateApplication("WhiteFish");
        var src = a
            .DefineObservable<int>(() => Observable.Range(0, 3))
            .Deploy("ObservableSource");

    If later we want to compose with the source, we have to fetch it and then we can bind something to it:

    a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about this entity: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it's much easier to make sense of what's available.

    Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source: we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int. In the particular case of my example, I had the source produce a range of integers, and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g., an administrator) can pass whatever parameters are appropriate and run the process.

    Proxy Types

    Here's an interesting scenario: what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:

    Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with, so we can compose a query that refers to the anonymous type:

    var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
    var filter = from i in O1
                 where i > threshold
                 select i;
    filter.Deploy("O2");

    You will notice that the anonymous type defined with the statement new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous; the test is whether the LinqPad driver can resolve the type or not. If it's not possible, then a new type will be generated that approximates the type that exists on the server.

    Control metadata

    In addition to composing processes on top of the existing entities, we can do other useful things. We can delete them; nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property: dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information

    So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).

    Regards,
    The StreamInsight Team
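    For reference, here is one way the fragments above could fit together end to end. This is a hedged sketch, not code from the post: DefineObserver, Observer.Create, and Run come from the general StreamInsight 2.1 reactive API rather than this article, and the endpoint address, sink name, and process name are placeholder assumptions.

    using System;
    using System.Reactive;
    using System.Reactive.Linq;
    using System.ServiceModel;
    using Microsoft.ComplexEventProcessing;
    using Microsoft.ComplexEventProcessing.Linq;

    class RemoteCompositionSketch
    {
        static void Main()
        {
            // Placeholder endpoint; substitute your own StreamInsight host address.
            using (var server = Server.Connect(
                new EndpointAddress("http://localhost/StreamInsight/Default")))
            {
                var app = server.CreateApplication("WhiteFish");

                // Deploy the cold source exactly as shown in the post.
                var src = app
                    .DefineObservable(() => Observable.Range(0, 3))
                    .Deploy("ObservableSource");

                // Define and deploy a sink on the remote machine; as noted above,
                // Bind() requires both ends to live on the server.
                var sink = app
                    .DefineObserver(() => Observer.Create<int>(i => Console.WriteLine(i)))
                    .Deploy("ConsoleSink");

                // Bind source to sink and run the resulting process remotely.
                src.Bind(sink).Run("RangeToConsoleProcess");
            }
        }
    }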

    Read the article

  • vsftpd not allowing uploads. 550 response.

    - by Josh
    I've set vsftpd up on a CentOS box. I keep trying to upload files but I keep getting "550 Failed to change directory" and "550 Could not get file size." Here's my vsftpd.conf:

    # The default compiled in settings are fairly paranoid. This sample file
    # loosens things up a bit, to make the ftp daemon more usable.
    # Please see vsftpd.conf.5 for all compiled in defaults.
    #
    # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
    # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
    # capabilities.
    #
    # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
    anonymous_enable=YES
    #
    # Uncomment this to allow local users to log in.
    local_enable=YES
    #
    # Uncomment this to enable any form of FTP write command.
    write_enable=YES
    #
    # Default umask for local users is 077. You may wish to change this to 022,
    # if your users expect that (022 is used by most other ftpd's)
    local_umask=022
    #
    # Uncomment this to allow the anonymous FTP user to upload files. This only
    # has an effect if the above global write enable is activated. Also, you will
    # obviously need to create a directory writable by the FTP user.
    anon_upload_enable=YES
    #
    # Uncomment this if you want the anonymous FTP user to be able to create
    # new directories.
    anon_mkdir_write_enable=YES
    anon_other_write_enable=YES
    #
    # Activate directory messages - messages given to remote users when they
    # go into a certain directory.
    dirmessage_enable=YES
    #
    # The target log file can be vsftpd_log_file or xferlog_file.
    # This depends on setting xferlog_std_format parameter
    xferlog_enable=YES
    #
    # Make sure PORT transfer connections originate from port 20 (ftp-data).
    connect_from_port_20=YES
    #
    # If you want, you can arrange for uploaded anonymous files to be owned by
    # a different user. Note! Using "root" for uploaded files is not
    # recommended!
    #chown_uploads=YES
    #chown_username=whoever
    #
    # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
    # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
    #xferlog_file=/var/log/xferlog
    #
    # Switches between logging into vsftpd_log_file and xferlog_file files.
    # NO writes to vsftpd_log_file, YES to xferlog_file
    xferlog_std_format=NO
    #
    # You may change the default value for timing out an idle session.
    #idle_session_timeout=600
    #
    # You may change the default value for timing out a data connection.
    #data_connection_timeout=120
    #
    # It is recommended that you define on your system a unique user which the
    # ftp server can use as a totally isolated and unprivileged user.
    #nopriv_user=ftpsecure
    #
    # Enable this and the server will recognise asynchronous ABOR requests. Not
    # recommended for security (the code is non-trivial). Not enabling it,
    # however, may confuse older FTP clients.
    #async_abor_enable=YES
    #
    # By default the server will pretend to allow ASCII mode but in fact ignore
    # the request. Turn on the below options to have the server actually do ASCII
    # mangling on files when in ASCII mode.
    # Beware that on some FTP servers, ASCII support allows a denial of service
    # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
    # predicted this attack and has always been safe, reporting the size of the
    # raw file.
    # ASCII mangling is a horrible feature of the protocol.
    #ascii_upload_enable=YES
    #ascii_download_enable=YES
    #
    # You may fully customise the login banner string:
    #ftpd_banner=Welcome to blah FTP service.
    #
    # You may specify a file of disallowed anonymous e-mail addresses. Apparently
    # useful for combatting certain DoS attacks.
    #deny_email_enable=YES
    # (default follows)
    #banned_email_file=/etc/vsftpd/banned_emails
    #
    # You may specify an explicit list of local users to chroot() to their home
    # directory. If chroot_local_user is YES, then this list becomes a list of
    # users to NOT chroot().
    #chroot_list_enable=YES
    # (default follows)
    #chroot_list_file=/etc/vsftpd/chroot_list
    #
    # You may activate the "-R" option to the builtin ls. This is disabled by
    # default to avoid remote users being able to cause excessive I/O on large
    # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
    # the presence of the "-R" option, so there is a strong case for enabling it.
    #ls_recurse_enable=YES
    #
    # When "listen" directive is enabled, vsftpd runs in standalone mode and
    # listens on IPv4 sockets. This directive cannot be used in conjunction
    # with the listen_ipv6 directive.
    listen=YES
    # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
    # sockets, you must run two copies of vsftpd with two configuration files.
    # Make sure, that one of the listen options is commented !!
    #listen_ipv6=YES

    pam_service_name=vsftpd
    userlist_enable=YES
    tcp_wrappers=YES
    log_ftp_protocol=YES
    banner_file=/etc/vsftpd/issue
    local_root=/var/www
    guest_enable=YES
    guest_username=ftpusr
    ftp_username=nobody
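    As a side note (not part of the original question), a tiny .NET client can make the failure reproducible outside an interactive FTP program; the host, path, file name, and credentials below are placeholders. The catch block surfaces the exact 550 status line the server returns, which helps distinguish a permissions problem from a missing or unwritable upload directory.

    using System;
    using System.IO;
    using System.Net;

    class FtpUploadTest
    {
        static void Main()
        {
            // Placeholder host, path, and credentials; try both the anonymous
            // account and a local user to compare the server's replies.
            var request = (FtpWebRequest)WebRequest.Create(
                "ftp://ftp.example.com/upload/test.txt");
            request.Method = WebRequestMethods.Ftp.UploadFile;
            request.Credentials = new NetworkCredential("anonymous", "test@example.com");

            byte[] payload = File.ReadAllBytes("test.txt");
            try
            {
                using (Stream stream = request.GetRequestStream())
                {
                    stream.Write(payload, 0, payload.Length);
                }
                using (var response = (FtpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusDescription);
                }
            }
            catch (WebException ex)
            {
                // A 550 reply surfaces here, e.g. "550 Failed to change directory."
                var response = ex.Response as FtpWebResponse;
                if (response != null)
                {
                    Console.WriteLine(response.StatusDescription);
                }
            }
        }
    }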

    Read the article

  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users shall pay a certain sum of money per hour. Routing and traffic accounting on a per-user basis is done by an openSUSE 10.3 server. Login is done via pppoe, and for each connection the username, bytes_sent, bytes_rcvd, start_time, end_time, etc. are written into a mysql database. Now it has been decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to do the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point where I'm absolutely stuck. Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections and how much they cost via the web interface. For this, we need some kind of user authentication.

    Actual question: How do I develop such a user authentication? Every user has a Linux system user account. With this user name and password, the connection to the pppoe server is made by the client machines. I thought about two possible ways to authenticate users.

    First possibility: Users type username and password in a form. This is then somehow checked. We already have the possibility to change passwords via the web interface. Here are parts of the code.

    Part of the Perl script the homepage is linked to:

    #!/usr/bin/perl
    use CGI;
    use CGI::Carp qw(fatalsToBrowser);
    use lib '../lib';
    use own_perl_module;

    my @error;
    my $data;
    $query = new CGI;
    $username = $query->param('username') || '';
    $oldpasswd = $query->param('oldpasswd') || '';
    $passwd = $query->param('passwd') || '';
    $passwd2 = $query->param('passwd2') || '';
    own_perl_module::connect();
    if ($query->param('submit')) {
        my $benutzer = own_perl_module::select_benutzer(username => $username)
            or push @error, "user not exists";
        push @error, "your password?!?" unless $passwd;
        unless (@error) {
            own_perl_module::update_benutzer($benutzer->{id}, {
                oldpasswd => $oldpasswd,
                passwd => $passwd,
                passwd2 => $passwd2
            }, error => \@error) and push @error, "Password changed.";
        }
    }

    Here's part of the sub update_benutzer in the own_perl_module:

    if ($dat->{passwd} ne '') {
        my $username = $dat->{username} || $select->{username};
        my $system = "./chpasswd.pl '$username' '$dat->{passwd}'"
            . (defined($dat->{oldpasswd}) ? " '$dat->{oldpasswd}'" : undef);
        my $answer = `$system`;
        if ($? != 0) {
            chomp($answer);
            push @$error, $answer || "error changing password ($?)";

    Here's chpasswd.pl:

    #!/usr/bin/perl
    use FileHandle;
    use IPC::Open3;

    local $username = shift;
    local $passwd = shift;
    local $oldpasswd = shift;

    local $chat = {
        'Old Password: $' => sub { print POUT "$oldpasswd\n"; },
        'New password: $' => sub { print POUT "$passwd\n"; },
        'Re-enter new password: $' => sub { print POUT "$passwd\n"; },
        '(.*)\n$' => sub { print "$1\n"; exit 1; }
    };

    local $/ = \1;
    my $command;
    if (defined($oldpasswd)) {
        $command = "sudo -u '$username' /usr/bin/passwd";
    } else {
        $command = "sudo /usr/bin/passwd '$username'";
    }
    $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die;

    my $buffer;
    LOOP: while ($_ = <PERR>) {
        $buffer .= $_;
        foreach (keys(%$chat)) {
            if ($buffer =~ /$_/i) {
                $buffer = undef;
                &{$chat->{$_}};
            }
        }
    }
    exit;

    Could this somehow be adjusted to verify users without changing user passwords?
    The second possibility I see: all pppoe connections are logged in the mysql database. If I could somehow retrieve the username (or uid) of the user connected by pppoe, this could be used to authenticate users. Users could only check their internet connections and costs while they are online (and thus paying money), but this could be tolerated. Here's the line of the script that inserts connections into the database:

    my $username = $ENV{PEERNAME};

    I thought it would be easy to use this variable, but $username seems to always be empty in test scripts (print $username). Any idea how to retrieve the user connected to the pppoe server? Sorry for the long question! Any help would be very much appreciated. :)

    Read the article

  • SMTP mailbox unavailable - intermittent and self-inflicted

    - by user134451
    I have an app that runs daily, sending confirmation emails to dozens of customers. Emails are sent using SMTP with authentication. The app also has some error handling, and occasionally anonymous SMTP is used to notify the webmaster that an e-mail issue has been encountered (a malformed email address, usually). Whenever these warning notifications are sent, the customer notifications that follow throw an error: "Mailbox unavailable. The server response was: 5.7.1 Unable to relay". The customer notification emails are sent, but my app drops into the exception handler, and all subsequent customer notification emails have this problem. Everything is fine the next time the program runs, until a webmaster warning email is sent. Anyone have any ideas what would cause this? My first thought was that the client didn't like being switched back and forth between anonymous and authenticated modes. I created a separate client for each mode, but that didn't help.
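    For what it's worth, below is a minimal sketch of the two-client separation described above, assuming System.Net.Mail; the host, port, addresses, and credentials are placeholders. Keeping the instances separate rules out shared client state, but note that "5.7.1 Unable to relay" is typically the server refusing to relay for an unauthenticated session, so the server's relay restrictions matter as much as the client code.

    using System.Net;
    using System.Net.Mail;

    static class Mailer
    {
        // Separate client instances so the anonymous warning mail cannot leak
        // connection state into the authenticated customer notifications.
        static SmtpClient CreateAuthenticatedClient()
        {
            return new SmtpClient("mail.example.com", 25)      // placeholder host/port
            {
                UseDefaultCredentials = false,
                Credentials = new NetworkCredential("app-user", "app-password")
            };
        }

        static SmtpClient CreateAnonymousClient()
        {
            return new SmtpClient("mail.example.com", 25)      // placeholder host/port
            {
                UseDefaultCredentials = false                  // no Credentials: anonymous
            };
        }

        public static void NotifyCustomer(MailMessage message)
        {
            using (var client = CreateAuthenticatedClient())
            {
                client.Send(message);
            }
        }

        public static void WarnWebmaster(MailMessage message)
        {
            using (var client = CreateAnonymousClient())
            {
                client.Send(message);
            }
        }

        public static void Main()
        {
            using (var message = new MailMessage("app@example.com", "customer@example.com",
                "Order confirmation", "Thanks for your order."))
            {
                NotifyCustomer(message);
            }
        }
    }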

    Read the article

  • Rails Authlogic and Hobo

    - by MrThomas
    Is there anyone out there who has an idea of how to incorporate Hobo as an admin subsite on an existing Rails app running on Authlogic? I've been following this tutorial, but it's not working. Any help or tutorial link, please! Some error code for anyone who fancies a crack:

    ~/dev/copy> ./script/server
    => Booting Mongrel
    => Rails 2.3.4 application starting on http://0.0.0.0:3000
    /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:426:in `load_missing_constant': Expected /Users/Mister/dev/copy/app/controllers/admin/admin_controller.rb to define Admin::AdminController (LoadError)
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:80:in `const_missing'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/inflector.rb:361:in `constantize'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/inflector.rb:360:in `each'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/inflector.rb:360:in `constantize'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/core_ext/string/inflections.rb:162:in `constantize'
    from /opt/local/lib/ruby/gems/1.8/gems/hobo-1.0.0/lib/hobo/model_controller.rb:61:in `all_controllers'
    from /opt/local/lib/ruby/gems/1.8/gems/hobo-1.0.0/lib/hobo/model_controller.rb:57:in `each'
    from /opt/local/lib/ruby/gems/1.8/gems/hobo-1.0.0/lib/hobo/model_controller.rb:57:in `all_controllers'
    from /opt/local/lib/ruby/gems/1.8/gems/hobo-1.0.0/lib/hobo/model_router.rb:97:in `add_routes_for'
    from /opt/local/lib/ruby/gems/1.8/gems/hobo-1.0.0/lib/hobo/model_router.rb:83:in `add_routes'
    from /opt/local/lib/ruby/gems/1.8/gems/hobo-1.0.0/lib/hobo/model_router.rb:83:in `each'
    from /opt/local/lib/ruby/gems/1.8/gems/hobo-1.0.0/lib/hobo/model_router.rb:83:in `add_routes'
    from /opt/local/lib/ruby/gems/1.8/gems/hobo-1.0.0/rails/../lib/hobo.rb:73:in `add_routes'
    from /Users/Mister/dev/copy/config/routes.rb:6
    from /opt/local/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:226:in `draw'
    from /Users/Mister/dev/copy/config/routes.rb:1
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:145:in `load'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:145:in `load'
    from /opt/local/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!'
    from /opt/local/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `each'
    from /opt/local/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:286:in `load_routes!'
    from /opt/local/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/routing/route_set.rb:266:in `reload!'
    from /Users/Mister/.gem/ruby/1.8/gems/rails-2.3.4/lib/initializer.rb:537:in `initialize_routing'
    from /Users/Mister/.gem/ruby/1.8/gems/rails-2.3.4/lib/initializer.rb:188:in `process'
    from /Users/Mister/.gem/ruby/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `send'
    from /Users/Mister/.gem/ruby/1.8/gems/rails-2.3.4/lib/initializer.rb:113:in `run'
    from /Users/Mister/dev/copy/config/environment.rb:11
    from /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
    from /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
    from /opt/local/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
    from /Users/Mister/.gem/ruby/1.8/gems/rails-2.3.4/lib/commands/server.rb:84
    from /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
    from /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
    from ./script/server:3

    The environment and gems in the app:

    RAILS_GEM_VERSION = '2.3.4' unless defined? RAILS_GEM_VERSION
    ENV['RAILS_ENV'] ||= 'development'
    config.gem 'hobo'
    config.gem "RedCloth", :version => ">= 4.2.2"
    config.gem "authlogic"
    config.gem "cancan"
    config.gem "jrails"
    config.gem "peteonrails-vote_fu", :source => "http://gems.github.com", :lib => "vote_fu"

    Routes:

    map.site_search 'search', :controller => 'admin/front', :action => 'search'
    map.admin '/admin', :controller => 'admin/front', :action => 'index'
    Hobo.add_routes(map)

    Read the article

  • ASP.NET roles and Projects

    - by Zyphrax
    EDIT - Rewrote my original question to give a bit more information

    Background info

    At my work I'm working on an ASP.NET web application for our customers. In our implementation we use technologies like Forms authentication with MembershipProviders and RoleProviders. All went well until I ran into some difficulties with configuring the roles, because the roles aren't system-wide, but related to the customer accounts and projects. I can't name our exact setup/formula, because I think our company wouldn't approve of that...

    What's a customer / project?

    Our company provides management information for our customers on a yearly (or other interval) basis. In our systems a customer/contract consists of:

    • one Account: information about the Company
    • per Account, one or more Products: the bundle of management information we'll provide
    • per Product, one or more Measurements: a period of time in which we gather and report the data

    Extranet site setup

    Eventually we want all customers to be able to access their management information with our online system. The extranet consists of two sites:

    • Company site: provides an overview of Account information and the Products
    • Measurement site: after selecting a Measurement, detailed information on that period of time

    The measurement site is the most interesting part of the extranet. We will create submodules for new overviews, reports, and managing and maintaining resources that are important for the research. Our Visual Studio solution consists of a number of projects, with one web application named Portal as the basis. The sites and modules are virtual directories within that application (which makes it easier to share MasterPages, among other things).

    What kind of roles?

    The following users (read: roles) will be using the system:

    • Admins: development users :) (not customer related, full access)
    • Employees: employees of our company (not customer related, full access)
    • Customer SuperUser: top level managers (full access to their account/measurement)
    • Customer ContactPerson: primary contact (full access to their measurement(s))
    • Customer Manager: a department manager (limited access, specific data of a measurement)

    What about ASP.NET users?

    The system will have many ASP.NET users; let's focus on the customer users:

    • Users are not shared between Accounts
    • SuperUser X automatically has access to all (and new) measurements
    • User Y could be Primary contact for Measurement 1, but have no role for Measurement 2
    • User Y could be Primary contact for Measurement 1, but have a Manager role for Measurement 2
    • The department managers are many individual users (per Measurement); if Manager Z had a login for Measurement 1, we would like to use that login again if he participates in Measurement 2

    URL structure

    These are typical URLs in our application:

    • http://host/login - the login screen
    • http://host/project - the account/product overview screen (measurement selection)
    • http://host/project/1000 - measurement (id: 1000) details
    • http://host/project/1000/planning - planning overview (for primary contact/superuser)
    • http://host/project/1000/reports - report downloads (the manager of department X can only access report X)

    We will also create a document URL, where you can request a specific document by its GUID. The system will have to check if the user has rights to the document. The document is related to a Measurement; the User or specific roles have specific rights to the document.

    What's the problem? (finally ;))

    Roles aren't enough to determine whether a user is allowed to see/access/download a specific item. It's not enough to say that a certain navigation item is accessible to Managers. When the user requests Measurement 1000, we have to check that the user not only has a Manager role, but a Manager role for Measurement 1000. Summarized:

    • How can we limit users to their accounts/measurements? (remember superusers see all measurements, some managers only specific measurements)
    • How can we apply roles at a product/measurement level? (user X could be primary contact for measurement 1, but just a manager for measurement 2)
    • How can we limit manager access to the reports screen and only to their department's reports?

    All with the magic of ASP.NET classes, perhaps with a custom RoleProvider implementation, as sketched below.

    Similar Stackoverflow question/problem http://stackoverflow.com/questions/1367483/asp-net-how-to-manage-users-with-different-types-of-roles
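    Below is a minimal sketch of that custom RoleProvider idea. The composite role names (e.g. "Manager:1000") and the UserStore/RoleAssignment types are illustrative assumptions, not the poster's actual schema; only the read-side members are implemented, and the provider would be registered via the <roleManager> section in web.config as usual.

    using System;
    using System.Linq;
    using System.Web.Security;

    // Hypothetical data-access stand-ins for this sketch only.
    public class RoleAssignment
    {
        public string Role;          // e.g. "Manager", "SuperUser"
        public int? MeasurementId;   // null = account-wide role
    }

    public static class UserStore
    {
        public static RoleAssignment[] GetAssignments(string username)
        {
            // A real provider would query the membership database here.
            return new[] { new RoleAssignment { Role = "Manager", MeasurementId = 1000 } };
        }
    }

    public class MeasurementRoleProvider : RoleProvider
    {
        public override string ApplicationName { get; set; }

        // Expose roles as composite names: "Manager:1000" means Manager on
        // Measurement 1000; account-wide roles keep their plain name.
        public override string[] GetRolesForUser(string username)
        {
            return UserStore.GetAssignments(username)
                .Select(a => a.MeasurementId == null
                    ? a.Role
                    : a.Role + ":" + a.MeasurementId)
                .ToArray();
        }

        public override bool IsUserInRole(string username, string roleName)
        {
            return GetRolesForUser(username)
                .Contains(roleName, StringComparer.OrdinalIgnoreCase);
        }

        // Write-side members are not needed for these read-only checks.
        public override void AddUsersToRoles(string[] u, string[] r) { throw new NotSupportedException(); }
        public override void CreateRole(string r) { throw new NotSupportedException(); }
        public override bool DeleteRole(string r, bool p) { throw new NotSupportedException(); }
        public override string[] FindUsersInRole(string r, string m) { throw new NotSupportedException(); }
        public override string[] GetAllRoles() { throw new NotSupportedException(); }
        public override string[] GetUsersInRole(string r) { throw new NotSupportedException(); }
        public override void RemoveUsersFromRoles(string[] u, string[] r) { throw new NotSupportedException(); }
        public override bool RoleExists(string r) { throw new NotSupportedException(); }
    }

    A page behind /project/1000/reports could then extract the measurement id from the URL and call Roles.IsUserInRole("Manager:1000") (plus a SuperUser check) before serving a department's report, which is exactly the "Manager role for Measurement 1000" check described above.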

    Read the article

  • CUDA not working in 64-bit Windows 7

    - by Programmer
    I have CUDA toolkit 4.0 installed on 64-bit Windows 7. I try building my CUDA code:

    #include <iostream>
    #include "cuda_runtime.h"
    #include "cuda.h"

    __global__ void kernel(){
    }

    int main(){
        kernel<<<1,1>>>();
        int c = 0;
        cudaGetDeviceCount(&c);
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        std::cout<<"the name is"<<prop.name;
        std::cout<<"Hello World!"<<c<<std::endl;
        system("pause");
        return 0;
    }

    but the build fails. Below is the build log:

    Build Log
    Rebuild started: Project: god, Configuration: Debug|Win32

    Command Lines
    Creating temporary file "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BAT0000482007500.bat" with contents [
    @echo off
    echo "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" sample.cu
    "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\sample.cu"
    if errorlevel 1 goto VCReportError
    goto VCEnd
    :VCReportError
    echo Project : error PRJ0019: A tool returned an error code from "Compiling with CUDA Build Rule..."
    exit 1
    :VCEnd
    ]
    Creating command line """c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BAT0000482007500.bat"""
    Creating temporary file "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\RSP0000492007500.rsp" with contents [
    /OUT:"C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.exe" /LIBPATH:"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\lib\x64" /MANIFEST /MANIFESTFILE:"Debug\god.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.pdb" /DYNAMICBASE /NXCOMPAT /MACHINE:X86 cudart.lib cuda.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib ".\Debug\sample.cu.obj"
    ]
    Creating command line "link.exe @"c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\RSP0000492007500.rsp" /NOLOGO /ERRORREPORT:PROMPT"

    Output Window
    Compiling with CUDA Build Rule...
    "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" sample.cu
    sample.cu
    sample.cu.obj : error LNK2019: unresolved external symbol _cudaLaunch@4 referenced in function "enum cudaError __cdecl cudaLaunch(char *)" (??$cudaLaunch@D@@YA?AW4cudaError@@PAD@Z)
    sample.cu.obj : error LNK2019: unresolved external symbol ___cudaRegisterFunction@40 referenced in function "void __cdecl _sti_cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1(void)" (?sti__cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1@@YAXXZ)
    sample.cu.obj : error LNK2019: unresolved external symbol _cudaRegisterFatBinary@4 referenced in function "void __cdecl _sti_cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1(void)" (?sti__cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1@@YAXXZ)
    sample.cu.obj : error LNK2019: unresolved external symbol _cudaGetDeviceProperties@8 referenced in function _main
    sample.cu.obj : error LNK2019: unresolved external symbol _cudaGetDeviceCount@4 referenced in function _main
    sample.cu.obj : error LNK2019: unresolved external symbol _cudaConfigureCall@32 referenced in function _main
    C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.exe : fatal error LNK1120: 7 unresolved externals

    Results
    Build log was saved at "file://c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BuildLog.htm"
    god - 8 error(s), 0 warning(s)

    I will be highly obliged if someone could help me. Thanks

    Read the article

  • MySQL and INT auto_increment fields

    - by PHPguy
    Hello folks, I've been developing in LAMP (Linux+Apache+MySQL+PHP) for as long as I can remember. But one question has been bugging me for years now. I hope you can help me find an answer and point me in the right direction. Here is my challenge:

    Say we are creating a community website where we allow our users to register. The MySQL table where we store all users would then look like this:

    CREATE TABLE `users` (
      `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID',
      `name` varchar(20) NOT NULL,
      `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text',
      `email` varchar(64) NOT NULL,
      `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration',
      `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email',
      PRIMARY KEY (`uid`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    So, from this snippet you can see that we have a unique 'uid' field that increments automatically for every new user. As on every good and loyal community website, we need to provide users with the possibility to completely delete their profile if they want to cancel their participation in our community.

    Here comes my problem. Let's say we have 3 registered users: Alice (uid = 1), Bob (uid = 2), and Chris (uid = 3). Now Bob wants to delete his profile and stop using our community. If we delete Bob's profile from the 'users' table, then his missing 'uid' will create a gap which will never be filled again. In my opinion it's a huge waste of uids. I see 3 possible solutions here:

    1) Increase the capacity of the 'uid' field in our table from SMALLINT (int(2)) to, for example, BIGINT (int(8)) and ignore the fact that some of the uids will be wasted.

    2) Introduce a new field 'is_deleted', which will be used to mark deleted profiles (but keep them in the table instead of deleting them) so that their uids can be re-utilized for newly registered users. The table will then look like this:

    CREATE TABLE `users` (
      `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID',
      `name` varchar(20) NOT NULL,
      `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text',
      `email` varchar(64) NOT NULL,
      `is_deleted` int(1) unsigned NOT NULL default '0' COMMENT 'If equal to "1" then the profile has been deleted and will be re-used for new registrations',
      `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration',
      `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email',
      PRIMARY KEY (`uid`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    3) Write a script to shift all following user records once a previous record has been deleted. E.g. in our case, when Bob (uid = 2) decides to remove his profile, we would replace his record with the record of Chris (uid = 3), so that Chris's uid becomes equal to 2, and mark (is_deleted = '1') the old record of Chris as vacant for new users. In this case we keep the chronological order of uids according to registration time, so that older users have lower uids.

    Please advise me which way is the right way to handle the gaps in auto_increment fields. This is just one example with users, but such cases occur very often in my programming experience. Thanks in advance!
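    As an illustration of option 2 (commonly called a soft delete), the delete action becomes an UPDATE instead of a DELETE, so the uid and its row are preserved and no gap appears at all. This is a hedged sketch in C# using MySQL Connector/NET; the connection string is a placeholder, and the same single UPDATE statement applies equally from PHP.

    using MySql.Data.MySqlClient; // MySQL Connector/NET

    class UserRepository
    {
        // Placeholder connection string.
        private const string ConnectionString =
            "Server=localhost;Database=community;Uid=app;Pwd=secret;";

        // Option 2 as a soft delete: the row (and its uid) is kept, only flagged.
        public void DeleteProfile(int uid)
        {
            using (var conn = new MySqlConnection(ConnectionString))
            using (var cmd = new MySqlCommand(
                "UPDATE users SET is_deleted = 1 WHERE uid = @uid", conn))
            {
                cmd.Parameters.AddWithValue("@uid", uid);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

    One design caution: actually handing a flagged uid to a brand-new user (the second half of option 2, and all of option 3) risks old content, logs, and foreign-key references pointing at the wrong person, which is the usual argument for option 1: widen the column and live with the gaps.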

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further.

    Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison

    In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on.

    That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions."

    Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include:

    • Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help
    • The Microsoft Development Network Community Center
    • The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby, and other community approaches to engage diverse audiences using screencasts, wikis, and blogs
    • Cisco's customer support wiki and EMC's community, as well as Symantec's and Intuit's approaches
    • The efforts of Ubuntu, Mozilla, and the FLOSS community generally

    Adobe Writer Bob Bringhurst's Blog

    Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter, and Eddie Awad provide great "how-to" information too.

    But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said. There is less emphasis on formal titles. Anne mentions her own title of "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish."

    Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators based on the amount of community content delivered by people outside the organization!

    To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support.

    Anne emphasized that success with the community model depends on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

    The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford not to explore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model!

    I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle.

    Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • Community Conversation

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Applications User Experience Senior Director Laurie Pattison (left) with Anne Gentle at the User Assistance Europe Conference In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: * Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help * The Microsoft Development Network Community Center * ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. * Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches * The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes are reaching out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. 
    It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said. There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish."

    Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization!

    To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support.

    Anne emphasized that success with the community model depends on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

    The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford not to explore the model's possibilities.
The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • Company Review: Google Products

    Google, Inc. offers an array of products and services to all of its end-users. However, its search capabilities are the foundation of Google's current success and its primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products, and much more. Its product decisions have allowed user demands to be met while maintaining its free-of-charge model. This lets users access Google data free of charge, which, together with the accuracy of the search results, indirectly gives Google a strong competitive advantage over its competitors. According to Google, Inc., it offers the following types of searching capabilities:

    * Alerts - Get email updates on the topics of your choice
    * Blog Search - Find blogs on your favorite topics
    * Books - Search the full text of books
    * Custom Search - Create a customized search experience for your community
    * Desktop - Search and personalize your computer
    * Dictionary - Search for definitions of words and phrases
    * Directory - Search the web, organized by topic or category
    * Earth - Explore the world from your computer
    * Finance - Business info, news, and interactive charts
    * GOOG-411 - Find and connect for free with businesses from your phone
    * Images - Search for images on the web
    * Maps - View maps and directions
    * News - Search thousands of news stories
    * Patent Search - Search the full text of US Patents
    * Product Search - Search for stuff to buy
    * Scholar - Search scholarly papers
    * Toolbar - Add a search box to your browser
    * Trends - Explore past and present search trends
    * Videos - Search for videos on the web
    * Web Search - Search billions of web pages
    * Web Search Features - Find movies, music, stocks, books, and more

    Google's free-of-charge business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. According to the U.S. Department of Transportation Federal Highway Administration, this method is particularly useful in accounting for needs that are not easily articulated or precisely defined. Because QFD is so customer-driven, Google is in a constant state of change, attempting to reengineer its search algorithms and other dependent systems so that end-user requirements are constantly being met.

    Value engineering is a key example of this: Google is constantly trying to improve all aspects of its products, improve system maintainability, and improve system interoperability. The Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle-cost options in design, materials, and processes that achieve the desired level of performance, reliability, and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large-scale systems like Google's search applications include the modular design of a product and/or service and providing accurate value analysis.
    The Open System Joint Task Force defines modular design as a design approach that adheres to four fundamental tenets (cohesiveness, encapsulation, self-containment, and high binding) in order to design a system component as an independently operable unit subject to change. More specifically, M. S. Schmaltz describes modular software design as follows: given a large collection of statements strung together in one partition of in-line code, we segment or divide the statements into logical groups called modules. Each module performs one or two tasks and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems.

    Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. As defined by the Healthcare Financial Management Association, value analysis involves assessing the quality as well as the cost of a product or service. "Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want." (MIT, 2010)

    Google, Inc. encourages an open environment among all employees, also known as Googlers. This is reinforced by cross-functional teams, composed of members from multiple departments, assigned to every project, so that every department, such as marketing, finance, and quality assurance, has input on every project. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee's time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, cross-functional team members come from different organizational units. Cross-functional teams may be permanent or ad hoc.

    Google's search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and lets results be returned to end-users quickly based on specific parameters and search settings. In addition, Google also stores the data that is returned in case other end-users desire the same results using the same customized settings. This allows Google to appear to render search results in virtually real time to the user while allowing for complete customization of the searching criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google plays a key role in ensuring that the search application's product strategy is maintained, simply because the IT staff designs, develops, and maintains all of Google's proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users.
    References:
    http://www.google.com/intl/en/options/
    http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
    http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
    http://www.acq.osd.mil/osjtf/termsdef.html
    http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
    http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
    http://mitsloan.mit.edu/omg/om-definition.php
    http://www.humtech.com/opm/grtl/ols/ols3.cfm
    http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers how they should handle sending IRM-secured documents to external parties. Their concern is that when they use IRM to secure sensitive information they need to share outside their business, third parties may be unable to install the software that enables them to gain access to the information. It is a very legitimate question and one I've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments.

    The operating system does not provide the required level of content security

    The problem arises from what IRM delivers: persistent security for your sensitive information, wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are:

    * Control of the clipboard, either by disabling it completely in the opened document or by allowing the cutting and pasting of information between secured IRM documents but not into insecure applications.
    * Protection against programmatic access to the document. Office documents and PDF documents can be accessed by other applications and scripts. With Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program.
    * Securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel, etc.). This process must be secure so that someone cannot simply get access to the decrypted information.

    The operating system alone just doesn't have the functionality to deliver these types of features. This is why for every IRM technology there must be some extra software installed, and typically this software requires administrative rights to install. The fact is that if you want to have very strong security and access control over a document you are going to send to someone beyond your network infrastructure, there must be some software to provide that functionality.

    Simple installation with Oracle IRM

    The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12 MB in size. This software delivers the functionality for everything a user needs to work with an Oracle IRM solution. It provides the functionality for all formats we support, the storage and transparent synchronization of user rights, and, unique to Oracle, the ability to search inside sealed files stored on the local computer.

    In Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine. You need to rely on them to download and install it themselves. Again, we've made every effort for this manual install process to be as simple as we can. Starting with the small download size of the software itself through to the simple installation process, most end users are able to install and access sealed content very quickly.
    You can see for yourself how easily this is done by walking through our free and easy self-service demonstration of using sealed content.

    How to handle objections and ensure there is value

    However, the fact still remains that end users may object to installing the software, or may simply be unable to install it themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. In Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users.

    First, and I would say of most importance: the content MUST have some value to the person you are asking to install software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send this to contractors. Their initial response would be, "Why on earth are you asking me to download some software just to access your menu!?" A valid objection... there is no value to the user in doing this.

    Now consider the scenario where you are sending one of your contractors their employment contract, which contains their address, social security number, and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled, and a quick download and install of some software is a small task in comparison to dealing with the loss of this information.

    Be clear in communicating this value

    So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try to avoid the issue; you must be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses and also gain respect for being straightforward.

    One customer I worked with, 6 months after the initial deployment of Oracle IRM, called me panicking that the partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and has decided to use IRM to control access to engineering documents," and that if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of their approved list of software in the company.

    Companies are successful when sending sealed content to third parties

    We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content, and therefore 99% of their users are external to their business; one in particular has sold content to hundreds of thousands of external users. Oracle itself uses the technology to secure M&A documents, payroll data, and security assessments which go beyond the traditional enterprise security perimeter.
    Pretty much every company that deploys Oracle IRM will at some point be sending those documents to people outside of the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of which use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed from accessing content from another company.

    The future

    In summary, I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease, as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the process of installation. We are constantly developing new ideas to reduce this hurdle, and maybe one day the operating systems will give us enough rich security functionality that no software installation is needed. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

    Read the article

  • Oracle Identity Manager Role Management With API

    - by mustafakaya
    As an administrator, you use roles to create and manage the records of a collection of users to whom you want to permit access to common functionality, such as access rights, roles, or permissions. Roles can be independent of an organization, span multiple organizations, or contain users from a single organization. Using roles, you can:

    * View the menu items that the users can access through the Oracle Identity Manager Administration Web interface.
    * Assign users to roles.
    * Assign a role to a parent role.
    * Designate status to the users so that they can specify defined responses for process tasks.
    * Modify permissions on data objects.
    * Designate role administrators to perform actions on roles, such as enabling members of another role to assign users to the current role, revoking members from the current role, and so on.
    * Designate provisioning policies for a role. These policies determine if a resource object is to be provisioned to or requested for a member of the role.
    * Assign or remove membership rules to or from the role. These rules determine which users can be assigned/removed as direct members to/from the role.

    In this post, I will share some examples of role management with the Oracle Identity Management API. For role operations you can use the Thor.API.Operations.tcGroupOperationsIntf interface:

        tcGroupOperationsIntf service = getClient().getService(tcGroupOperationsIntf.class);

    Assign a user to a role:

        public void assignRoleByUsrKey(String roleName, String usrKey) throws Exception {
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            service.addMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
        }

    Revoke a user from a role:

        public void revokeRoleByUsrKey(String roleName, String usrKey) throws Exception {
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            service.removeMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
        }

    Get all members of a role:

        public List<User> getRoleMembers(String roleName) throws Exception {
            List<User> userList = new ArrayList<User>();
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            tcResultSet members = service.getAllMemberUsers(Long.parseLong(groupKey));
            for (int i = 0; i < members.getRowCount(); i++) {
                members.goToRow(i);
                long userKey = members.getLongValue("Users.Key");
                User member = oimUserManager.findUserByUserKey(String.valueOf(userKey));
                userList.add(member);
            }
            return userList;
        }

    About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware Team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, always eager to learn new technologies and develop new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.
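    For completeness, here is a minimal usage sketch that ties the methods above together. The role name and user key are placeholders, getClient() is assumed (as above) to return a logged-in OIM client, and the getLogin() accessor is assumed from the OIM User value object:

        // Hypothetical usage of the helpers defined above; "Finance Auditors"
        // and user key "42" are made-up values, not part of the original post.
        tcGroupOperationsIntf service = getClient().getService(tcGroupOperationsIntf.class);

        assignRoleByUsrKey("Finance Auditors", "42");      // add user 42 to the role
        for (User member : getRoleMembers("Finance Auditors")) {
            System.out.println(member.getLogin());         // list the current members
        }
        revokeRoleByUsrKey("Finance Auditors", "42");      // and remove the user again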

    Read the article

  • sqlite 3 opening issue

    - by anonymous
    I'm getting my data, with several similar methods, from an sqlite3 file, as in the following code:

        -(NSMutableArray *) getCountersByID:(NSString *) championID {
            NSMutableArray *arrayOfCounters;
            arrayOfCounters = [[NSMutableArray alloc] init];
            @try {
                NSFileManager *fileManager = [NSFileManager defaultManager];
                NSString *databasePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"DatabaseCounters.sqlite"];
                BOOL success = [fileManager fileExistsAtPath:databasePath];
                if (!success) {
                    NSLog(@"cannot connect to Database! at filepath %@", databasePath);
                } else {
                    NSLog(@"SUCCESS getCountersByID!!");
                }
                if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                    NSString *tempString = [NSString stringWithFormat:@"SELECT COUNTER_ID FROM COUNTERS WHERE CHAMPION_ID = %@", championID];
                    const char *sql = [tempString cStringUsingEncoding:NSASCIIStringEncoding];
                    sqlite3_stmt *sqlStatement;
                    // Note: this first statement is prepared but never finalized...
                    int ret = sqlite3_prepare(database, sql, -1, &sqlStatement, NULL);
                    if (ret != SQLITE_OK) {
                        NSLog(@"Error calling sqlite3_prepare: %d", ret);
                    }
                    // ...because the handle is immediately overwritten by this second prepare.
                    if (sqlite3_prepare_v2(database, sql, -1, &sqlStatement, NULL) == SQLITE_OK) {
                        while (sqlite3_step(sqlStatement) == SQLITE_ROW) {
                            counterList *CounterList = [[counterList alloc] init];
                            CounterList.counterID = [NSString stringWithUTF8String:(char *) sqlite3_column_text(sqlStatement, 0)];
                            [arrayOfCounters addObject:CounterList];
                        }
                    } else {
                        NSLog(@"problem with database prepare");
                    }
                    sqlite3_finalize(sqlStatement);
                } else {
                    NSLog(@"problem with database openning %s", sqlite3_errmsg(database));
                }
            }
            @catch (NSException *exception) {
                NSLog(@"An exception occured: %@", [exception reason]);
            }
            @finally {
                sqlite3_close(database);
                return arrayOfCounters;
            } //end
        }

    Then I'm accessing the data with this and other similar lines of code:

        myCounterList *MyCounterList = [[myCounterList alloc] init];
        countersTempArray = [MyCounterList getCountersByID:"2"];
        [countersArray addObject:[NSString stringWithFormat:@"%@", (((counterList *) [countersTempArray objectAtIndex:i]).counterID)]];

    I'm getting a lot of data, such as image names, and showing combinations of them depending on user input, with code like this:

        UIImage *tempImage = [UIImage imageNamed:[NSString stringWithFormat:@"%@_0.jpg", [countersArray objectAtIndex:0]]];
        [championSelection setBackgroundImage:tempImage forState:UIControlStateNormal];

    My problem: when I run my app for some time and fetch a lot of data, it throws this error: "problem with database openning unable to open database file - error = 24 (Too many open files)". My guess is that I'm opening my database every time getCountersByID is called but not closing it.

    My question: am I using the right approach to open and close the database?

    Similar questions that have not helped me solve this problem: "unable to open database" and "Sqlite Opening Error: Unable to open database".

    UPDATE: I assumed the error shows up because I run these lines of code too often and end up with error 24:

        NSFileManager *fileManager = [NSFileManager defaultManager];
        NSString *databasePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"DatabaseCounters.sqlite"];
        BOOL success = [fileManager fileExistsAtPath:databasePath];

    So I made them global, but sqlite3_errmsg shows the same error 24, although the app runs much faster now. I'll try to debug my app and see what happens.
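    One pattern worth trying here, sketched below, is to open the connection once and reuse it instead of reopening the file on every query. The method name is hypothetical and the sketch assumes the same database ivar used in the question; note also that sqlite3_close fails (and leaves the file handle open) whenever a prepared statement has not been finalized, which the double prepare above makes easy to hit:

        // Minimal sketch (hypothetical name): lazily open one shared connection
        // and reuse it for every query. Each prepared statement must still be
        // finalized, but the open file-handle count stays at one.
        - (sqlite3 *)sharedDatabase {
            if (database == NULL) {
                NSString *databasePath = [[[NSBundle mainBundle] resourcePath]
                    stringByAppendingPathComponent:@"DatabaseCounters.sqlite"];
                if (sqlite3_open([databasePath UTF8String], &database) != SQLITE_OK) {
                    NSLog(@"open failed: %s", sqlite3_errmsg(database));
                    sqlite3_close(database);   // a handle may be allocated even on failure
                    database = NULL;
                }
            }
            return database;
        }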

    Read the article

  • Installing drivers for switchable graphics

    - by Anonymous
    I recently bought a laptop that came with Windows 7 64-bit installed. I have some older (16-bit and 32-bit) software that doesn't work with 64-bit Windows but works just fine with 32-bit. Since I also wanted to get rid of all of the pre-installed spam, I decided to wipe the hard drive and install a fresh copy of Windows 7 32-bit. Now I can't get the graphics cards working.

    This laptop uses switchable graphics: an Intel card and a Radeon card. I first tried installing this driver from Intel, which works for the Intel card. Of course, the Radeon card doesn't work with this driver, and I need it for some of the newer games I have. I also tried this driver. Windows's Device Manager will recognize the Radeon card, but it will still use the Intel card. Also, even though that package says it contains the Intel driver, the Intel card still isn't properly recognized by Windows (leaving me with a nasty 800x600 resolution). On top of that, the Catalyst Control Center won't open (saying "The Catalyst Control Center is not supported by the driver version of your enabled graphics adapter").

    I tried installing HP's driver and then installing Intel's driver on top of it. Device Manager will then recognize both graphics cards properly. However, the laptop still uses the Intel card. The CCC still won't start (saying the same thing as before) and I can't find any way of switching graphics cards. Before formatting, I could right-click the desktop and click "Configure Switchable Graphics". This option hasn't been in the context menu regardless of what driver(s) I've installed. After some research, I found out that this menu entry runs the command "cli.exe Start PowerXpressHybrid". I've tried manually running this command, but I get the same unsupported message from CCC.

    So, does anyone know how I can get this working? I would like to be able to switch between the Intel and the Radeon, but if there's some way to disable the Intel and use only the Radeon, that would be fine. I dual-boot with Linux (the framebuffer uses the Intel card; I haven't even tried getting X set up yet). Here's the output of lspci:

        # lspci -v | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
        01:00.0 VGA compatible controller: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] (prog-if 00 [VGA controller])

    The laptop is an HP Pavilion g6t-1d00. HP doesn't support installing anything but Windows 7 64-bit, so calling tech support isn't an option. Thanks for any help.

    UPDATE: I finally got it working. After a fresh install of Windows 7, I installed the HP driver (the one linked above). Then there's an optional Windows update I installed (I don't remember the exact name, but it'll stick out). After that, graphics switching works just like it's supposed to. Moab, thanks anyway for your help.

    Read the article

  • How to fix an SSD

    - by anonymous
    I have a Samsung 128 GB SSD: MZ5PA128HMCD-01000. When I'm using it, there are always I/O errors. I tried to secure-erase it, but it is impossible to create a new NTFS (or any filesystem...) partition because I still get I/O errors. I thought maybe upgrading the firmware would solve the problem. Unfortunately, I'm not able to upgrade the firmware with the Samsung SSD Magician tool... because Magician says there is no Samsung SSD on my computer... Is there a way to make Magician recognize this Samsung MZ5PA128HMCD-01000 SSD? Is another tool available to flash any firmware onto an SSD? What should I do to fix this SSD?

    Read the article

  • Fedora 16 can connect to samba share using smbclient but not in nautilus 3.2.1

    - by Nathan Jones
    I have a machine running Ubuntu 11.10 Server acting as a Samba server to share my home directory. Everything works fine on my Windows 7 machine, but on my Fedora 16 laptop, if I use Nautilus to try to access the share using smb://192.168.0.8/nathan in the location bar, it just shows the loading cursor and does nothing. It never shows any errors, nothing. Using smbclient works just fine, but I'd like to get it working in Nautilus. I know that there can be problems with SELinux and Samba, so I created a file called booleans.local that contains samba_enable_home_dirs=1 (a quick way to verify that this boolean actually took effect is sketched after the config below). My smb.conf file looks like this:

        # For Unix password sync to work on a Debian GNU/Linux system, the following
        # parameters must be set (thanks to Ian Kahan <<[email protected]> for
        # sending the correct chat script for the passwd program in Debian Sarge).
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .

        # This boolean controls whether PAM will be used for password changes
        # when requested by an SMB client instead of the program listed in
        # 'passwd program'. The default is 'no'.
        pam password change = yes

        # This option controls how unsuccessful authentication attempts are mapped
        # to anonymous connections
        map to guest = bad user

        ########## Domains ###########

        # Is this machine able to authenticate users. Both PDC and BDC
        # must have this setting enabled. If you are the BDC you must
        # change the 'domain master' setting to no
        #
        ; domain logons = yes
        #
        # The following setting only takes effect if 'domain logons' is set
        # It specifies the location of the user's profile directory
        # from the client point of view)
        # The following required a [profiles] share to be setup on the
        # samba server (see below)
        ; logon path = \\%N\profiles\%U
        # Another common choice is storing the profile in the user's home directory
        # (this is Samba's default)
        # logon path = \\%N\%U\profile

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the location of a user's home directory (from the client
        # point of view)
        ; logon drive = H:
        # logon home = \\%N\%U

        # The following setting only takes effect if 'domain logons' is set
        # It specifies the script to run during logon. The script must be stored
        # in the [netlogon] share
        # NOTE: Must be store in 'DOS' file format convention
        ; logon script = logon.cmd

        # This allows Unix users to be created on the domain controller via the SAMR
        # RPC pipe. The example command creates a user account with a disabled Unix
        # password; please adapt to your needs
        ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u

        # This allows machine accounts to be created on the domain controller via the
        # SAMR RPC pipe.
        # The following assumes a "machines" group exists on the system
        ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u

        # This allows Unix groups to be created on the domain controller via the SAMR
        # RPC pipe.
        ; add group script = /usr/sbin/addgroup --force-badname %g

        ########## Printing ##########

        # If you want to automatically load your printer list rather
        # than setting them up individually then you'll need this
        # load printers = yes

        # lpr(ng) printing. You may wish to override the location of the
        # printcap file
        ; printing = bsd
        ; printcap name = /etc/printcap

        # CUPS printing. See also the cupsaddsmb(8) manpage in the
        # cupsys-client package.
        ; printing = cups
        ; printcap name = cups

        ############ Misc ############

        # Using the following line enables you to customise your configuration
        # on a per machine basis. The %m gets replaced with the netbios name
        # of the machine that is connecting
        ; include = /home/samba/etc/smb.conf.%m

        # Most people will find that this option gives better performance.
        # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html
        # for details
        # You may want to add the following on a Linux system:
        # SO_RCVBUF=8192 SO_SNDBUF=8192
        # socket options = TCP_NODELAY

        # The following parameter is useful only if you have the linpopup package
        # installed. The samba maintainer and the linpopup maintainer are
        # working to ease installation and configuration of linpopup and samba.
        ; message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' &

        # Domain Master specifies Samba to be the Domain Master Browser. If this
        # machine will be configured as a BDC (a secondary logon server), you
        # must set this to 'no'; otherwise, the default behavior is recommended.
        # domain master = auto

        # Some defaults for winbind (make sure you're not using the ranges
        # for something else.)
        ; idmap uid = 10000-20000
        ; idmap gid = 10000-20000
        ; template shell = /bin/bash

        # The following was the default behaviour in sarge,
        # but samba upstream reverted the default because it might induce
        # performance issues in large organizations.
        # See Debian bug #368251 for some of the consequences of *not*
        # having this setting and smb.conf(5) for details.
        ; winbind enum groups = yes
        ; winbind enum users = yes

        # Setup usershare options to enable non-root users to share folders
        # with the net usershare command.
        # Maximum number of usershare. 0 (default) means that usershare is disabled.
        ; usershare max shares = 100

        # Allow users who've been granted usershare privileges to create
        # public shares, not just authenticated ones
        usershare allow guests = yes

        #======================= Share Definitions =======================

        # Un-comment the following (and tweak the other settings below to suit)
        # to enable the default home directory shares. This will share each
        # user's home director as \\server\username
        [homes]
        comment = Home Directories
        browseable = yes

        # By default, the home directories are exported read-only. Change the
        # next parameter to 'no' if you want to be able to write to them.
        read only = no

        # File creation mask is set to 0700 for security reasons. If you want to
        # create files with group=rw permissions, set next parameter to 0775.
        ; create mask = 0775

        # Directory creation mask is set to 0700 for security reasons. If you want to
        # create dirs. with group=rw permissions, set next parameter to 0775.
        ; directory mask = 0775

        # By default, \\server\username shares can be connected to by anyone
        # with access to the samba server. Un-comment the following parameter
        # to make sure that only "username" can connect to \\server\username
        # The following parameter makes sure that only "username" can connect
        #
        # This might need tweaking when using external authentication schemes
        valid users = %S

        # Un-comment the following and create the netlogon directory for Domain Logons
        # (you need to configure Samba to act as a domain controller too.)
        ;[netlogon]
        ; comment = Network Logon Service
        ; path = /home/samba/netlogon
        ; guest ok = yes
        ; read only = yes

        # Un-comment the following and create the profiles directory to store
        # users profiles (see the "logon path" option above)
        # (you need to configure Samba to act as a domain controller too.)
        # The path below should be writable by all users so that their
        # profile directory may be created the first time they log on
        ;[profiles]
        ; comment = Users profiles
        ; path = /home/samba/profiles
        ; guest ok = no
        ; browseable = no
        ; create mask = 0600
        ; directory mask = 0700

        [printers]
        comment = All Printers
        browseable = no
        path = /var/spool/samba
        printable = yes
        guest ok = no
        read only = no
        create mask = 0700

        # Windows clients look for this share name as a source of downloadable
        # printer drivers
        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
        browseable = yes
        read only = yes
        guest ok = no

        # Uncomment to allow remote administration of Windows print drivers.
        # You may need to replace 'lpadmin' with the name of the group your
        # admin users are members of.
        # Please note that you also need to set appropriate Unix permissions
        # to the drivers directory for these users to have write rights in it
        ; write list = root, @lpadmin

        # A sample share for sharing your CD-ROM with others.
        ;[cdrom]
        ; comment = Samba server's CD-ROM
        ; read only = yes
        ; locking = no
        ; path = /cdrom
        ; guest ok = yes

        # The next two parameters show how to auto-mount a CD-ROM when the
        # cdrom share is accesed. For this to work /etc/fstab must contain
        # an entry like this:
        #
        # /dev/scd0 /cdrom iso9660 defaults,noauto,ro,user 0 0
        #
        # The CD-ROM gets unmounted automatically after the connection to the
        #
        # If you don't want to use auto-mounting/unmounting make sure the CD
        # is mounted on /cdrom
        #
        ; preexec = /bin/mount /cdrom
        ; postexec = /bin/umount /cdrom

    And my smbusers file:

        <nathan> = <"nathan">

    Any help would be very much appreciated! Thanks!
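    On the SELinux angle mentioned above, a quick sanity check worth running on whichever machine is actually enforcing SELinux is to confirm the boolean really is set; these are standard SELinux policy commands, not anything specific to this setup:

        # Show the current value of the Samba home-dirs boolean
        getsebool samba_enable_home_dirs

        # Persistently enable it if it is still off
        setsebool -P samba_enable_home_dirs on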

    Read the article

  • Linux udev persistent net rule

    - by Anonymous
    I have a Linux system (Slackware Linux 13.0) with two network interfaces. Let's call them NIC0 and NIC1. My goal is to make NIC0 appear as eth0 in the system. I know this can be achieved via udev rules that map network aliases to the MAC addresses of network interfaces. In Slackware Linux, the file /etc/udev/rules.d/70-persistent-net.rules contains such rules.

    The trickiest part of my problem is that I need to fake the MAC address of NIC0. I know I can dynamically change the MAC address of a network interface with the command:

        ifconfig eth0 hw ether <new MAC address>

    Do you see the problem? This supposes that the network interfaces are already set up. So my question is: if I had a udev rule only for NIC1 (the one that shall come up as eth1, with its original MAC address), would that be enough for the system to bring up the other network interface (NIC0) as eth0 by default? That way I could change its MAC address later, after the udev machinery completes and the network aliases are brought up.
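    For reference, entries in 70-persistent-net.rules normally look like the sketch below, one line per interface; the MAC addresses here are placeholders, not values from the question:

        # Hypothetical /etc/udev/rules.d/70-persistent-net.rules entries
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="66:77:88:99:aa:bb", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

    Dropping the NIC0 line and keeping only the NIC1 one is exactly the experiment the question proposes.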

    Read the article

  • NGINX : Proxy pass intercepting 5xx errors - Possible to differentiate between ones fired by the backend vs ones fired by nginx itself?

    - by anonymous-one
    We use proxy_intercept_errors ( http://wiki.nginx.org/HttpProxyModule#proxy_intercept_errors ) with our backends. We intercept a number of status codes, including a few 5xx ones. Our 5xx handlers (each 5xx code has its own) have an access_log, so we can see all the 5xx errors returned to the user in a nice, clean, logged format. The issue with this is that, as it stands now, we cannot tell whether a 5xx was returned to the user by nginx itself or intercepted from our backend. Is there any way to differentiate between the two? Thanks.
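    One avenue worth exploring (a sketch only; the log format name and file paths are hypothetical) is to log nginx's standard $upstream_status variable alongside $status in the error handler's access_log. When a request never reached the backend, $upstream_status is logged as "-", so a 5xx with an empty upstream status was fired by nginx itself, while one carrying a real upstream value was intercepted from the backend:

        # Hypothetical names/paths; $status and $upstream_status are standard
        # nginx variables. log_format belongs in the http{} block, the rest in server{}.
        log_format err5xx '$remote_addr [$time_local] "$request" '
                          'status=$status upstream=$upstream_status';

        error_page 500 502 503 504 /50x.html;

        location = /50x.html {
            internal;
            access_log /var/log/nginx/5xx.log err5xx;
            root /usr/share/nginx/html;
        }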

    Read the article

  • Installing Windows 7 on a BSOD Windows Vista (ASAP)

    - by anonymous
    In a previous question I posted, I asked for help fixing my Windows Vista box because it keeps going to a blue screen. No one seems to have the answer, so now I want to install Windows 7. I want to know if I can install 7 without having to reformat my hard drive and lose all my files. I've already confirmed the hardware is working, because I installed Ubuntu 9.10 on my external hard drive and it runs on my system fine. I tested the memory using Vista's memory test and Ubuntu's memory test. Here's the previous post: http://superuser.com/questions/125897/i-really-need-help-resolving-a-window-vista-bsod-blue-screen-crash-on-my-deskto

    Read the article

  • Remote monitoring with screen capture

    - by Anonymous
    Is there any way I can periodically take screenshots of a remote computer using Nagios? I am experimenting with Nagios, and I am trying to explore different kinds of monitoring. So my question is: apart from using Nagios to monitor CPU usage, bandwidth utilization, uptime, etc., can I monitor a worker's productivity by checking what he is doing on his computer, in the form of image output? Being able to monitor processes would not be of much help to me, as I would only know whether he or she is running firefox.exe; for example, he may be making excessive use of Firefox for Facebook or other things, but claim he is troubleshooting and looking for solutions on forums. I saw a check_vnc script, but I am unable to install the requisite VNC server. Has anyone successfully tried the VNC script and cares to share how to go about it? If not, is there any other way to try this?
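    One possible direction (a sketch only, and a privacy minefield in practice): Nagios will run any script that prints a status line and exits with the usual 0/1/2 codes, so a plugin could grab a remote X display over SSH with ImageMagick's import. The hostname, output path, and key-based SSH access below are all assumptions:

        #!/bin/sh
        # Hypothetical Nagios-style check: capture the remote root window over SSH.
        # Assumes passwordless SSH to $HOST and an X session running on display :0.
        HOST="worker-pc"
        OUT="/var/lib/nagios/screens/${HOST}-$(date +%Y%m%d%H%M).png"
        if ssh "$HOST" "DISPLAY=:0 import -window root png:-" > "$OUT" 2>/dev/null; then
            echo "OK - screenshot saved to $OUT"
            exit 0
        else
            echo "CRITICAL - could not capture a screenshot from $HOST"
            exit 2
        fi

    This sidesteps the VNC server requirement entirely, though it only works against X11 desktops, not Windows workstations.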

    Read the article

  • Can't install any drivers at all on Windows 8. Error 0x000003F9

    - by ABarney
    I suddenly can't install any drivers at all on my Windows 8 Pro x64 install. It doesn't matter what kind of driver it is; nothing will install. Everything ends with error 0x000003F9: "The system has attempted to load or restore a file into the registry, but the specified file is not in a registry file format." When Windows Update tries to install a driver, it just gives error code 800703F9 and says that "Windows Update ran into a problem." I've already done a scan of system files with sfc, tried another user account, done a chkdsk, and a few more things, but nothing works.

    The problem started when I tried to install drivers for my printer earlier today and suddenly started getting messages saying that "Windows Modules Installer has stopped working." I decided to restart and was greeted with the recovery boot options. I shut the computer down, but when I booted it back up, the same thing happened, so I ran a repair of the PC and was able to boot into the OS properly. Then I rebooted into my external drive and did a chkdsk on the Windows 8 install that started acting funny. When I booted back into Windows 8, I wasn't able to install any drivers. They all keep coming up with the same error, and I can't seem to find anything at all on this issue. Any help would be much appreciated.

    Here's an install log from a failed driver install:

        >>>  [Device Install (DiInstallDriver) - F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf]
        >>>  Section start 2012/12/06 20:15:20.714
        cmd: "F:\Windows\System32\InfDefaultInstall.exe" "F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf"
        inf: {SetupCopyOEMInf: F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf} 20:15:20.716
        sto: {Import Driver Package: F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf} 20:15:20.719
        sto: Driver Store = F:\Windows\System32\DriverStore [Online] (6.2.9200)
        sto: Driver Package = F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf
        sto: Architecture = amd64
        sto: Flags = 0x00000000
        inf: Provider = Google, Inc.
        inf: Class GUID = {3f966bd9-fa04-4ec5-991c-d326973b5128}
        inf: Driver Version = 08/27/2012,7.0.0.1
        inf: Catalog File = androidwinusba64.cat
        inf: Version Flags = 0x00000011
        !   sto: Unable to determine presence of driver package 'android_winusb.inf'. Error = 0x000003F9
        flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\amd64\WdfCoInstaller01009.dll' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WdfCoInstaller01009.dll'.
        flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\amd64\WinUSBCoInstaller2.dll' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WinUSBCoInstaller2.dll'.
        flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\android_winusb.inf' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf'.
        flq: Copying 'F:\Android\android-sdk\extras\google\usb_driver\androidwinusba64.cat' to 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\androidwinusba64.cat'.
        pol: {Driver package policy check} 20:15:20.814
        pol: {Driver package policy check - exit(0x00000000)} 20:15:20.814
        sto: {Stage Driver Package: F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf} 20:15:20.815
        !   sto: Unable to determine presence of driver package 'android_winusb.inf'. Error = 0x000003F9
        inf: {Query Configurability: F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf} 20:15:20.820
        inf: Driver package uses WDF.
        inf: Driver package 'android_winusb.inf' is configurable.
        inf: {Query Configurability: exit(0x00000000)} 20:15:20.823
        flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WdfCoInstaller01009.dll' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64\WdfCoInstaller01009.dll'.
        flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\amd64\WinUSBCoInstaller2.dll' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64\WinUSBCoInstaller2.dll'.
        flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\android_winusb.inf' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf'.
        flq: Copying 'F:\Users\ALEXBA~1\AppData\Local\Temp\{5da5e23e-2f82-2b4f-b73d-9d77c2978b0e}\androidwinusba64.cat' to 'F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat'.
        sto: {DRIVERSTORE IMPORT VALIDATE} 20:15:20.875
        sig: {_VERIFY_FILE_SIGNATURE} 20:15:20.881
        sig: Key = android_winusb.inf
        sig: FilePath = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf
        sig: Catalog = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat
        !   sig: Verifying file against specific (valid) catalog failed! (0x800b0109)
        !   sig: Error 0x800b0109: A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.
        sig: {_VERIFY_FILE_SIGNATURE exit(0x800b0109)} 20:15:20.893
        sig: {_VERIFY_FILE_SIGNATURE} 20:15:20.893
        sig: Key = android_winusb.inf
        sig: FilePath = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\android_winusb.inf
        sig: Catalog = F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\androidwinusba64.cat
        sig: Success: File is signed in Authenticode(tm) catalog.
        sig: Error 0xe0000242: The publisher of an Authenticode(tm) signed catalog has not yet been established as trusted.
        sig: {_VERIFY_FILE_SIGNATURE exit(0xe0000242)} 20:15:20.907
        !   sig: Driver package signer is unknown, but user trusts signer.
        sto: {DRIVERSTORE IMPORT VALIDATE: exit(0x00000000)} 20:15:22.701
        sig: Signer Score = 0x0F000000
        sig: Signer Name = Google Inc
        sto: {DRIVERSTORE IMPORT BEGIN} 20:15:22.702
        sto: {DRIVERSTORE IMPORT BEGIN: exit(0x00000000)} 20:15:22.702
        cpy: {Copy Directory: F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}} 20:15:22.703
        cpy: Target Path = F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3
        cpy: {Copy Directory: F:\Windows\System32\DriverStore\Temp\{30801e6d-d30f-2f4b-87dc-c80122d5f248}\amd64} 20:15:22.704
        cpy: Target Path = F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\amd64
        cpy: {Copy Directory: exit(0x00000000)} 20:15:22.705
        cpy: {Copy Directory: exit(0x00000000)} 20:15:22.706
        !   sto: Unable to determine if driver package 'android_winusb.inf' is already registered. Error = 0x000003F9
        idb: {Register Driver Package: F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\android_winusb.inf} 20:15:22.707
        !!!  idb: Failed to create driver package object 'android_winusb.inf_amd64_f7c4b212c9d862a3' in DRIVERS database node. Error = 0x000003F9
        !!!  idb: Failed to register driver package 'F:\Windows\System32\DriverStore\FileRepository\android_winusb.inf_amd64_f7c4b212c9d862a3\android_winusb.inf'. Error = 0x000003F9
        idb: {Register Driver Package: exit(0x000003f9)} 20:15:22.709
        sto: {DRIVERSTORE IMPORT END} 20:15:22.710
        sto: {DRIVERSTORE IMPORT END: exit(0x000003f9)} 20:15:22.710
        sto: Rolled back driver package import.
        !!!  sto: Failed to import driver package into Driver Store. Error = 0x000003F9
        sto: {Stage Driver Package: exit(0x000003f9)} 20:15:22.736
        sto: {Import Driver Package: exit(0x000003f9)} 20:15:22.766

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >