Search Results

Search found 22714 results on 909 pages for 'password tools'.

Page 190/909

  • MVC OnActionExecuting to Redirect

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/12/mvc-onactionexecuting-to-redirect.aspx

    I recently had the following requirements in an MVC application:

    Given a new user that still has the default password
    When they first log in
    Then the user must change their password and optionally provide contact information

    I found that I can override the OnActionExecuting method in a BaseController class.

        public class BaseController : Controller
        {
            [Inject]
            public ISessionManager SessionManager { get; set; }

            protected override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // call the base method first
                base.OnActionExecuting(filterContext);

                // if the user hasn't changed their password yet, force them to the welcome page
                if (!filterContext.RouteData.Values.ContainsValue("WelcomeNewUser"))
                {
                    var currentUser = this.SessionManager.GetCurrentUser();
                    if (currentUser.FusionUser.IsPasswordChangeRequired)
                    {
                        filterContext.Result = new RedirectResult("/welcome");
                    }
                }
            }
        }

    Better yet, you can use an ActionFilterAttribute and apply the attribute to the base controller or to individual controllers.

        /// <summary>
        /// Redirect the user to the welcome page if FusionUser.IsPasswordChangeRequired is true.
        /// </summary>
        public class WelcomePageRedirectActionFilterAttribute : ActionFilterAttribute
        {
            [Inject]
            public ISessionManager SessionManager { get; set; }

            public override void OnActionExecuting(ActionExecutingContext actionContext)
            {
                base.OnActionExecuting(actionContext);

                // if the user hasn't changed their password yet, force them to the welcome page
                if (actionContext.RouteData.Values.ContainsValue("WelcomeNewUser"))
                {
                    return;
                }

                var currentUser = this.SessionManager.GetCurrentUser();
                if (currentUser.FusionUser.IsPasswordChangeRequired)
                {
                    actionContext.Result = new RedirectResult("/welcome");
                }
            }
        }

        [WelcomePageRedirectActionFilterAttribute]
        public class BaseController : Controller
        {
            ...
        }

    The requirement is now met.
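    If the redirect check should apply to every controller, another option - not shown in the original post, so treat it as a hypothetical sketch - is to register the filter globally. Note that with global registration the [Inject]-ed SessionManager property still has to be wired up by the DI container (for example via a Ninject filter binding), which is assumed here rather than shown.

        // App_Start/FilterConfig.cs - hypothetical sketch, not from the original post.
        // Registers the redirect filter once for every controller and action.
        using System.Web.Mvc;

        public class FilterConfig
        {
            public static void RegisterGlobalFilters(GlobalFilterCollection filters)
            {
                // WelcomePageRedirectActionFilterAttribute is the attribute defined above;
                // its SessionManager property is assumed to be populated by the container.
                filters.Add(new WelcomePageRedirectActionFilterAttribute());
            }
        }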

    Read the article

  • How to Configure OpenLDAP on Ubuntu 10.04 Server

    - by user3215
    I am following the Ubuntu server guide to configure OpenLDAP on an Ubuntu 10.04 server, but cannot get it to work. When I run

        sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif

    I get the following error:

        Enter LDAP Password: <entered 'secret' as password>
        adding new entry "dc=don,dc=com"
        ldap_add: Naming violation (64)
            additional info: value of single-valued naming attribute 'dc' conflicts with value present in entry

    When I try the same command again, I get:

        root@avy-desktop:/home/avy# sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    Here is the backend.ldif file:

        # Load dynamic backend modules
        dn: cn=module,cn=config
        objectClass: olcModuleList
        cn: module
        olcModulepath: /usr/lib/ldap
        olcModuleload: back_hdb

        # Database settings
        dn: olcDatabase=hdb,cn=config
        objectClass: olcDatabaseConfig
        objectClass: olcHdbConfig
        olcDatabase: {1}hdb
        olcSuffix: dc=don,dc=com
        olcDbDirectory: /var/lib/ldap
        olcRootDN: cn=admin,dc=don,dc=com
        olcRootPW: secret
        olcDbConfig: set_cachesize 0 2097152 0
        olcDbConfig: set_lk_max_objects 1500
        olcDbConfig: set_lk_max_locks 1500
        olcDbConfig: set_lk_max_lockers 1500
        olcDbIndex: objectClass eq
        olcLastMod: TRUE
        olcDbCheckpoint: 512 30
        olcAccess: to attrs=userPassword by dn="cn=admin,dc=don,dc=com" write by anonymous auth by self write by * none
        olcAccess: to attrs=shadowLastChange by self write by * read
        olcAccess: to dn.base="" by * read
        olcAccess: to * by dn="cn=admin,dc=don,dc=com" write by * read

    And the frontend.ldif file:

        # Create top-level object in domain
        dn: dc=don,dc=com
        objectClass: top
        objectClass: dcObject
        objectclass: organization
        o: Example Organization
        dc: Example
        description: LDAP Example

        # Admin user.
        dn: cn=admin,dc=don,dc=com
        objectClass: simpleSecurityObject
        objectClass: organizationalRole
        cn: admin
        description: LDAP administrator
        userPassword: secret

        dn: ou=people,dc=don,dc=com
        objectClass: organizationalUnit
        ou: people

        dn: ou=groups,dc=don,dc=com
        objectClass: organizationalUnit
        ou: groups

        dn: uid=john,ou=people,dc=don,dc=com
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: shadowAccount
        uid: john
        sn: Doe
        givenName: John
        cn: John Doe
        displayName: John Doe
        uidNumber: 1000
        gidNumber: 10000
        userPassword: password
        gecos: John Doe
        loginShell: /bin/bash
        homeDirectory: /home/john
        shadowExpire: -1
        shadowFlag: 0
        shadowWarning: 7
        shadowMin: 8
        shadowMax: 999999
        shadowLastChange: 10877
        mail: [email protected]
        postalCode: 31000
        l: Toulouse
        o: Example
        mobile: +33 (0)6 xx xx xx xx
        homePhone: +33 (0)5 xx xx xx xx
        title: System Administrator
        postalAddress:
        initials: JD

        dn: cn=example,ou=groups,dc=don,dc=com
        objectClass: posixGroup
        cn: example
        gidNumber: 10000

    Can anyone help me?

    Read the article

  • Unable to format USB drive on 12.04 [daemon inhibited]

    - by santosamaru
    I tried to format my USB drive. The first time it worked and all data was gone, but now I can't save any files to the drive. I then tried to check whether it is working or broken; here is the report:

        santos@santos:~$ sudo badblocks -v /dev/sdb
        [sudo] password for santos:
        Sorry, try again.
        [sudo] password for santos:
        Checking blocks 0 to 7824383
        Checking for bad blocks (read-only test): 0.00% done, 0:00 elapsed. (0/0/0 errdone
        Pass completed, 0 bad blocks found. (0/0/0 errors)
        santos@santos:~$ sudo badblocks -v -w /dev/sdb
        [sudo] password for santos:
        Sorry, try again.
        [sudo] password for santos:
        /dev/sdb is apparently in use by the system; it's not safe to run badblocks!
        santos@santos:~$

    How do I format the drive and fix this issue? I have read the question "Formatting Pen Drive causes 'Daemon Is Inhibited' Error", and as in that case, when I try to move any item from the desktop I get "the destination is read only". I also searched and found this thread: http://ubuntuforums.org/showthread.php?t=1955353, which describes the same problem, but following user13509's suggestion there did not help.

    Read the article

  • The emergence of Atlassian's Bamboo (and a free SQL Source Control license offer!)

    - by David Atkinson
    The rise in demand for database continuous integration has forced me to skill up in various new tools and technologies, particularly build servers. We have been using JetBrains' TeamCity here at Red Gate for a couple of years now, having replaced the ageing CruiseControl.NET, so it was a natural choice for us to use this for our database CI demos. Most of our early adopter customers have also transitioned away from CruiseControl, the majority to TeamCity and Microsoft's TeamBuild. However, more recently, for reasons we've yet to fully comprehend, we've observed a significant surge in the number of evaluators for Atlassian's Bamboo. I installed it a couple of weeks back to satisfy myself that it works seamlessly with Red Gate tools. As you would expect, Bamboo's UI has the same clean feel found in any Atlassian tool (we use JIRA extensively here at Red Gate). In the coming weeks I will post a short step-by-step guide to setting up SQL Server continuous integration using the Red Gate command lines.

    To help us further optimize the integration between these tools, I'd be very keen to hear from any Bamboo users who also use Red Gate tools and who might be willing to participate in usability tests and other similar research in exchange for Amazon vouchers. If you are interested in helping out, please contact me at David dot Atkinson at red-gate.com.

    I recently spoke with Sarah, the product marketing manager for Bamboo, and we ended up having a detailed conversation about database CI, which has been meticulously documented in the form of a blog post on Atlassian's website: http://blogs.atlassian.com/2012/05/database-continuous-integration-redgate/

    We've also managed to persuade Red Gate marketing to provide a great free-tool offer: a free SQL Source Control or SQL Connect license to Atlassian users, provided it is claimed before the end of June! Full details are at the bottom of the post.

    Read the article

  • Can't login to Guest Session or Standard Account?

    - by Rory
    Thanks in advance for any help. I am on Ubuntu 12.04.

    Up until recently my mother has been using the Guest Session when she logs on; now, the Guest Session will not log in. When I try to log in to it, it bounces me back to the login screen. I then tried to log in to a Standard account (Alberta), which I made prior to not being able to log in to Guest, and it turns out I cannot log in to that either - it gives the "Invalid Password" error.

    So then I tried to change her password from my own account (Rory), which is the master account. Under Login Options, where the Password option is, it says "Account disabled" and it will not let me change this; I try to apply a password and get no error, but it still says "Account disabled". Then I tried to delete the Alberta account altogether, but got an error. I also tried sudo userdel -r Alberta but got this error: "userdel: cannot lock /etc/shadow; try again later."

    I don't know what to do now. Any help appreciated.

    Read the article

  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd party software on my PC to periodically connect to my server via FTP(S) and to mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use PuTTY to test the connection to port 21, the system hangs and times out. I get:

        220 Connected to LeServer
        USER lucas
        331 Please specify the password.
        PASS [password]
        <cursor>

    However, when I'm simultaneously logged in (in another session) everything works:

        220 Connected to LeServer
        USER lucas
        331 Please specify the password.
        PASS [password]
        230 Login successful.

    Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes) and nmap gives me:

        Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT
        Interesting ports on xx.xxx.xx.x:
        Not shown: 995 closed ports
        PORT    STATE SERVICE
        21/tcp  open  ftp
        22/tcp  open  ssh
        53/tcp  open  domain
        139/tcp open  netbios-ssn
        445/tcp open  microsoft-ds
        Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds

    My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe. Here are my iptables rules for INPUT:

        lucas@rearden:~$ sudo iptables -L INPUT
        Chain INPUT (policy DROP)
        target                    prot opt source    destination
        fail2ban-ssh              tcp  --  anywhere  anywhere     multiport dports ssh
        ufw-before-logging-input  all  --  anywhere  anywhere
        ufw-before-input          all  --  anywhere  anywhere
        ufw-after-input           all  --  anywhere  anywhere
        ufw-after-logging-input   all  --  anywhere  anywhere
        ufw-reject-input          all  --  anywhere  anywhere
        ufw-track-input           all  --  anywhere  anywhere
        ACCEPT                    tcp  --  anywhere  anywhere     tcp dpt:ftp

    I'm using vsftpd. Any thoughts/resources on how I could fix this? L

    Read the article

  • WCF Versioning, Naming and Endpoint URL

    - by Vinothkumar VJ
    I have a WCF service and a main class library (Lib1). Say I have a Save Profile service: the WCF service gets data (with a predefined data contract) from the client, passes it to the main class library, generates a response and sends it back to the client.

    WCF method: SaveProfile(ProfileDTO profile)

    Current version (1.0) - ProfileDTO has the following:
    UserName
    Password
    FirstName
    DOB (string, yyyy-mm-dd)
    CreatedDate (string, yyyy-mm-dd)

    Next version (2.0) - ProfileDTO has the following:
    UserName
    Password
    FirstName
    DOB (Unix timestamp)
    CreatedDate (Unix timestamp)

    Version 3.0 - ProfileDTO has the following (with a change in UserName and Password length validation):
    UserName
    Password
    FirstName
    DOB (Unix timestamp)
    CreatedDate (Unix timestamp)

    In short, we have a data contract change and a workflow change between each version.

    1. How do I name the methods in the WCF service and the main class library?
    2. Should I follow a specific pattern for ease of development and maintenance?
    3. Do I need different endpoints for different versions?

    In the above example I have a method named "SaveProfile". Do I have to name the methods like "SaveProfile1.0", "SaveProfile2.0", etc.? If so, when there is no change between version 3.0 and 4.0, maintenance becomes difficult. I'm looking for an approach that will ease maintenance.
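    One way to approach this - sketched below purely as an assumption, since the post does not settle on a design; the interface names, XML namespaces and per-version DTO types are illustrative, not from the post - is to keep the method name SaveProfile stable and version the service contract and data contract instead, exposing each major version on its own endpoint.

        // Hypothetical sketch: version the contract, not the method name.
        // Each breaking version gets its own ServiceContract and namespace,
        // typically exposed on its own endpoint, e.g. .../ProfileService/v1 and /v2.
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [ServiceContract(Namespace = "http://example.com/profile/v1")]
        public interface IProfileServiceV1
        {
            [OperationContract]
            void SaveProfile(ProfileDTOV1 profile);   // DOB/CreatedDate as yyyy-mm-dd strings
        }

        [ServiceContract(Namespace = "http://example.com/profile/v2")]
        public interface IProfileServiceV2
        {
            [OperationContract]
            void SaveProfile(ProfileDTOV2 profile);   // DOB/CreatedDate as Unix timestamps
        }

        [DataContract(Namespace = "http://example.com/profile/v1")]
        public class ProfileDTOV1
        {
            [DataMember] public string UserName { get; set; }
            [DataMember] public string Password { get; set; }
            [DataMember] public string FirstName { get; set; }
            [DataMember] public string DOB { get; set; }
            [DataMember] public string CreatedDate { get; set; }
        }

        [DataContract(Namespace = "http://example.com/profile/v2")]
        public class ProfileDTOV2
        {
            [DataMember] public string UserName { get; set; }
            [DataMember] public string Password { get; set; }
            [DataMember] public string FirstName { get; set; }
            [DataMember] public long DOB { get; set; }
            [DataMember] public long CreatedDate { get; set; }
        }

    Under this assumption, a single service class can implement both interfaces and map each DTO onto the current model in the main class library, so a release with no contract change (the hypothetical 3.0 to 4.0 case) needs no new names at all; only a genuinely changed contract gets a new versioned endpoint.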

    Read the article

  • The future continues to be brighter than ever for JD Edwards as the first ERP suite to run on Apple iPad.

    - by mseika
    Announcing JD Edwards Tools: JD Edwards Latest and Greatest - a live demo and webcast of the new Applications User Interface & Tools on Apple iPad, Tuesday, December 6, 2011 at 8:00 a.m. Pacific.

    Oracle's JD Edwards Development Team just completed an exciting new EnterpriseOne User Interface and a massive number of feature innovations for users and system administrators. We are looking forward to demonstrating the new User Interface and Tools. We have a panel of experts lined up just for you, and we will be sure to answer all your questions:

    Lyle Ekdahl – Oracle Group Vice President
    Gary Grieshaber – Oracle Strategy Senior Director
    Brian Stanz – Oracle Development Senior Director

    The future continues to be brighter than ever for JD Edwards as the first ERP suite to run on Apple iPad. Please join us for this important webcast on Tuesday, December 6th, 2011 at 8:00 a.m. Pacific and see why we are so excited about these cool tools that make your work more mobile and efficient.

    Read the article

  • Unable to install mysql in ubuntu

    - by Anand
    I purged my existing mysql-server package from Ubuntu and re-installed it. This was because, after an Ubuntu upgrade, I was unable to start my SQL server. After cleaning up and taking a backup of my data, I re-installed MySQL. During installation, the following message popped up:

        Unable to set password for the MySQL "root" user

        An error occurred while setting the password for the MySQL administrative
        user. This may have happened because the account already has a password, or
        because of a communication problem with the MySQL server.

        You should check the account's password after the package installation.

        Please read the /usr/share/doc/mysql-server-5.5/README.Debian file for more
        information.

    After the installation failed, I received the following errors:

        121109 20:36:18 InnoDB: Initializing buffer pool, size = 128.0M
        121109 20:36:18 InnoDB: Completed initialization of buffer pool
        121109 20:36:18 InnoDB: highest supported file format is Barracuda.
        121109 20:36:18 InnoDB: Waiting for the background threads to start
        121109 20:36:19 InnoDB: 1.1.8 started; log sequence number 2395841
        ERROR: 130 Incorrect file format 'user'
        121109 20:36:19 [ERROR] Aborting
        121109 20:36:19 InnoDB: Starting shutdown...
        121109 20:36:20 InnoDB: Shutdown completed; log sequence number 2395841
        121109 20:36:20 [Note] /usr/sbin/mysqld: Shutdown complete
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         mysql-server-5.5
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Help with setting up AIM account

    - by Aaron
    Can anyone help me configure my AIM account? I have Oneiric and I want to set up my AIM account with what I believe is a chat client (the envelope icon on the top right). It is called Empathy.

    I tried to do this using Broadcast Account. It let me choose the account type (Jabber, Pidgin, AIM, etc.). Once I entered my password for my AIM account, I got a pop-up asking for the master password for my keyring. I didn't know what that was at the time, so I closed that window after trying to enter my account password. Keyring apparently asks for a master password and holds any keys you want to remember in the future. It gave me an error, so I couldn't finish setting up my AIM account. Now I'm trying to get back to that screen, but I can only set up a Twitter or Facebook account.

    Thanks, Aaron

    P.S. Can anyone tell me how to break the message up so it doesn't appear all on one line? I tried 'coding' a \n...seemed to work.

    Read the article

  • libc-bin errors when trying to install php

    - by jonney
    I am trying to update and install PHP on my Ubuntu 12.04 server using the commands below:

        apt-get upgrade php
        apt-get install php5-curl php5-gd php5-mysql php5-pgsql

    However, I receive this error every time:

        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-34-generic with 1.
        run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-34-generic.postinst line 1010.
        dpkg: error processing linux-image-3.2.0-34-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-server:
         linux-image-server depends on linux-image-3.2.0-33-generic; however:
          Package linux-image-3.2.0-33-generic is not configured yet.
        dpkg: error processing linux-image-server (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-server:
         linux-server depends on linux-image-server (= 3.2.0.33.36); however:
          Package linux-image-server is not configured yet.
        dpkg: error processing linux-server (--configure):
         dependency problems - leaving unconfigured
        Setting up libpq5 (9.1.10-0ubuntu12.04) ...
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because MaxReports has already been reached
        Setting up php5-curl (5.3.10-1ubuntu3.8) ...
        Setting up php5-pgsql (5.3.10-1ubuntu3.8) ...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-32-generic
        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-32-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports has already been reached
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         linux-image-3.2.0-33-generic
         linux-image-3.2.0-34-generic
         linux-image-server
         linux-server
         initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure what's wrong or why it can't process the linux-image files.

    Read the article

  • About to smash my keyboard!! Ubuntu 13.1 issues with AMD driver & Audio

    - by DNex
    Let me preface by saying that this is my second day on Linux. I really want to make it work, but these issues are driving me up the wall! I've done exhaustive Google searches but have not been able to figure anything out.

    I am on Ubuntu 13.10, my graphics card is an AMD Radeon HD4200, and my sound card is a Realtek HDMI device. I've tried downloading and installing both drivers, but nothing works.

    Graphics card: when I run the .run file (from http://www2.ati.com/drivers/legacy/amd-driver-installer-catalyst-13.1-legacy-linux-x86.x86_64.zip) I get an error. The fglrx-install log says:

        Check if system has the tools required for installation.
        fglrx installation requires that the system have kernel headers.
        /lib/modules/3.11.0-12-generic/build/include/linux/version.h cannot be found on this system.
        One or more tools required for installation cannot be found on the system.
        Install the required tools before installing the fglrx driver.
        Optionally, run the installer with --force option to install without the tools.
        Forcing install will disable AMD hardware acceleration and may make your system unstable. Not recommended.

    Audio: I have had no audio since my first install. I've tried everything outlined on this site: http://itsfoss.com/fix-sound-ubuntu-1304-quick-tip/ to no avail. I've also downloaded the Realtek HDMI audio drivers for Linux but had no luck.

    Any help would be extremely appreciated.

    Read the article

  • Rosegarden plugin list is empty

    - by Wes
    I've just installed Rosegarden through apt. I've also installed JACK and a low-latency kernel through a PPA (https://launchpad.net/~abogani/+archive/ppa). I've started the JACK server, and through its control panel I can see it is wiring things through to Rosegarden. I've also tried installing qsynth and dssi, both through apt. However, I can't see any plugins in the synth plug-in list, so I'm unable to test whether this works. I've tried launching qsynth before Rosegarden and I've tried a few other things, but I just can't see any plugins. Does anyone know how to get this to work? I'm using Ubuntu 11.04 or 11.10, I think. Here is what I have run:

        sudo apt-get install rosegarden -o APT::Install-Suggests=true
        synaptic
        sudo apt-get install synaptic
        synaptic
        sudo synaptic
        synaptic
        sudo apt-add-repository ppa:abogani/ppa
        sudo apt-get install linux-lowlatency
        sudo apt-get install dssi
        sudo apt-get install alsa-firmware-loaders alsa-tools alsa-tools-gui alsa-firmware
        sudo apt-get install alsa-firmware-loaders alsa-tools alsa-tools-gui
        sudo apt-get install blop caps cmt fil-plugins rev-plugins swh-plugins tap-plugins
        sudo apt-get install blepvco mcp-plugins omins

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32

    - by Microkernel
    I have some strange problems with the Samba server. I am using Samba version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines: one in VirtualBox on Ubuntu and another, an office laptop. The Windows machine in VirtualBox has no issues accessing the shared folders, but the laptop is not able to access all the shared content.

    The issue on the laptop is the following: shared folders on ext3 drives can be accessed without problems, but the contents shared from NTFS and FAT32 drives (mounted ones) are not accessible. When I try to open such a shared folder, it asks for a user name and password, but doesn't accept them when I provide them (even if I provide admin login details). I changed the workgroup value to the domain name on the office laptop, but the problem persists.

    Here is the smb.conf I am using:

        [global]
        workgroup = XXX.XXX.ORG
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        guest ok = Yes

        [homes]
        comment = Home Directories

        [printers]
        comment = All Printers
        path = /var/spool/samba
        read only = No
        create mask = 0700
        printable = Yes
        browseable = No

        [print$]
        comment = Samba server's CD-ROM
        path = /cdrom
        force user = nobody
        force group = nobody
        locking = No

    The workgroup was defined as "HOMENET" before; I changed it to the domain name on the office laptop thinking that was the problem, but to no avail.

    Read the article

  • Java ME SDK 3.0.5 Integrated with NetBeans 7.1.1

    - by SungmoonCho
    NetBeans 7.1.1 now integrates Java ME SDK 3.0.5, so you do not have to download them separately. Java ME SDK is packaged in the NetBeans Mobility Pack, a mobile application development toolkit for NetBeans; therefore, Java ME SDK is no longer a separate menu in NetBeans. If you have not downloaded Java ME SDK yet, simply visit the NetBeans website and download the latest version. If you already have Java ME SDK integrated with NetBeans 7.1 or earlier and want to update the NetBeans IDE to 7.1.1, don't worry: they can co-exist.

    To use NetBeans plug-ins such as Device Selector, the profiler, and the Internationalization Resource Manager, you have to install the "Java ME SDK Tools" plug-in from NetBeans. Here is how:

    1. Go to "Tools - Plug-ins" from the NetBeans menu. There you can find all the plug-ins you can install into NetBeans. Locate "Java ME SDK Tools" in the list.
    2. Follow the instructions to install the Java ME SDK plug-ins.
    3. Once completed, you will see new menu options. For example, you can find Device Selector under Tools - Java ME. (If you used an old version of Java ME SDK, you will notice that there is no top-level 'Java ME' menu any more. This is because all the sub-menus were integrated into the appropriate places in NetBeans.)

    There is one thing to keep in mind: since NetBeans 7.1.1 already includes Java ME SDK 3.0.5 and the Java ME SDK 3.0.5 plug-ins must be installed through the NetBeans plug-in menu, you should not download Java ME SDK 3.0.5 separately and try to integrate it with NetBeans. This may cause issues.

    Read the article

  • Mapping XFCE4/xRDP sessions to users

    - by garrilla
    I have Ubuntu 13.10 with Xubuntu Desktop - XFCE4. I'm trying to use xRDP to allow MS Windows users to log in to the machine with their own users. I've been around the houses with this and have found two halfway solutions, but I can't get them to work as I'd like.

    1) In /etc/xrdp/xrdp.ini I set the port to -1:

        [xrdp1]
        name=sesman-Xvnc
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=-1

    Each time any user logs on, they get a new session - they can never go back to their original session.

    2) In /etc/xrdp/xrdp.ini I set the port to 5912 (e.g.):

        [xrdp1]
        name=sesman-Xvnc
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5912

    Each time any user logs on, they always log on to the same session, irrespective of their logon details.

    I found a midway solution: create a lot of sessions by adding additional options in the xrdp.ini, e.g.

        [xrdp8]
        name=Bob's Logon
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5913

        [xrdp9]
        name=Jill's Logon
        lib=libvnc.so
        username=ask
        password=ask
        ip=127.0.0.1
        port=5914

    and so on, but the problem with this is that Jill can still log into Bob's remote session.

    Is it possible to do what I'm trying to do? Maybe I have to use different tools?

    Read the article

  • What is the role of traditional issue tracker when Scrum / Kanban board is used?

    - by Borek
    From a very high-level view, it seems to me there are generally two types of project management tools:

    1. Traditional issue trackers like FogBugz, JIRA, Bugzilla, Trac, Redmine etc.
    2. Virtual card boards / agile project management tools like Pivotal Tracker, GreenHopper, AgileZen, Trello etc.

    Sure, they overlap in one way or another - e.g. Pivotal Tracker tasks can be imported into JIRA, and GreenHopper itself is implemented on top of the JIRA issue base - but I think one can still see the difference in orientation between these two types of tools.

    Traditional issue trackers seem to be used even in companies otherwise doing agile project management. My question is: why do they do that? I also feel that we should use an issue tracker in my company, but when I think about it, I'm not actually sure why we should need it. For example, Trello development seems to be managed by using Trello itself (see this virtual wall) even though they have access to FogBugz, one of the best issue trackers around. So maybe we don't need a traditional issue tracker when we'll be doing 100% of our work in an agile manner using one of the agile PM tools?

    Read the article

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration.

    Secure Shell, also known as SSH, was developed in 1995 by Tatu Ylönen after Helsinki University of Technology suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provide no security and transmit data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks.

    One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:

        scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote

    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.

        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00

    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes and enabled in the /etc/ssh/sshd_config file:

        AllowTcpForwarding yes

    Once you have this configured, you can connect to the server and set up a local port which you can direct traffic to that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies a local dynamic application-level port forward and the -N specifies not to execute a remote command.

        ssh -D 8989 [email protected] -N

    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine.

        ssh -L 8080:farwebserver.com:80 [email protected]

    You can even run remote commands via ssh, which is quite useful for scripting or remote systems administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.

        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts.

    To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key based authentication. The first step in setting up key based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.

        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+

    Now that you have the key generated on the local system, you should copy it to the target server into a temporary location. The user's home directory is fine for this.

        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub

    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.

        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

    Once the process is complete you are ready to log in. Since you are using key based authentication, you are not prompted for a password when logging into the system.

        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1

    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.

        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    As a security consideration, it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system.

    An even easier way to move keys back and forth is to use ssh-copy-id. Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub.

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file.

        Host dev1
          Hostname somereallylonghostname.somereallylongdomain.com
          Port 28372
          User somereallylongusername12345678

    Let's compare the login process between the two. Which would you want to type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372

        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.

    Read the article

  • Is there such thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and technologies in general in use in organizations today. We, programmers, have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know whether anyone has done serious thinking about how systems are integrated?

    Let me give an example. Hypothetically, let's say I own a company that makes specialized suits à la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays, some touch, some traditional. Oh, and I also have one of these new-fangled LEED Platinum buildings, and it has a number of computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc.

    What I want to know is whether anyone has done any scientific work on trying to figure out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card and to have my phone follow me throughout the building. This problem is also more than a technology challenge: every technology implementation enables certain human behaviours, so the human must also be considered as a part of the system.

    Has anyone done any work on how to effectively weave these components together? FYI: I am not trying to build a system. I want to know if anyone has thoroughly studied the process of doing a large integration project, how they develop their requirements, how they study the human behaviors, etc.

    Read the article

  • Advantages of relational databases over VSAM, ISAM and hierarchical data stores

    - by llaszews
    When migrating companies from legacy environments to the cloud, you invariably run into older hierarchical, flat-file, VSAM, ISAM and other legacy data stores. There are many advantages to moving these databases into a relational database structure. The most important is that most cloud providers run on relational database models; AWS, for example, supports Oracle, SQL Server, and MySQL. The top three 'other reasons' for moving to a relational database are:

    1. Data access – thousands of database access tools, from query creation to business intelligence.
    2. Management and monitoring – hundreds of tools for management and monitoring of the database.
    3. Leverage all the free tools from relational database vendors. Free Oracle database tools include:
       - Application Express – WYSIWYG browser-based application development and deployment.
       - SQL Developer – SQL and PL/SQL development, plus database object maintenance.

    What is interesting is that Big Data NoSQL databases and XML databases are taking us back to the days of VSAM (key-value databases, in the case of NoSQL) and IMS (hierarchical, in the case of XML databases).

    Read the article

  • Are Intel compilers really better than the Microsoft ones?

    - by Rocket Surgeon
    Years ago, I was surprised when I discovered that Intel sells Visual Studio compatible compilers. I tried it in particular for C/C++, as well as the fantastic diagnostic tools. But the code was simply not computationally intensive enough to notice the difference. The only impression was: did Intel really do it for me just now, wow, amazing tools with nanosecond resolution, unbelievable. But the trial ended and the team never seriously considered a purchase.

    From your experience, if license cost does not matter, which vendor is the winner? This is not a broad or vague question or an attempt to spark a holy war; it is about two very visible tools. Nobody likes it when tools have any mysteries or surprises, and choices between best and best are always a pain. I also understand the "grass is always greener" argument.

    I want to hear all the "what if" stories. What if Intel just locally optimizes it for the chip stepping of the month, and not every hardware target will actually work as well as if Microsoft compiled it? What if AMD hardware is the target and everything slows down for no reason? Or, on the other hand, what if Intel's hardware has so many unnoticeable opportunities that Microsoft compiler writers are too slow to adopt them and never implement them in the compiler? What if both are exactly the same, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some third-party shop? And so on. But someone knows some answers.

    Read the article

  • How do I mount a CIFS share via FSTAB and give full RW to Guest

    - by Kendor
    I want to create a Public folder that has full RW access. The problem with my configuration is that Windows users have no issues as guests (they can read, write and delete), but my Ubuntu client can't do the same: we can only read and write, not create or delete.

    Here is the smb.conf from my server:

        [global]
        workgroup = WORKGROUP
        netbios name = FILESERVER
        server string = TurnKey FileServer
        os level = 20
        security = user
        map to guest = Bad Password
        passdb backend = tdbsam
        null passwords = yes
        admin users = root
        encrypt passwords = true
        obey pam restrictions = yes
        pam password change = yes
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        add user script = /usr/sbin/useradd -m '%u' -g users -G users
        delete user script = /usr/sbin/userdel -r '%u'
        add group script = /usr/sbin/groupadd '%g'
        delete group script = /usr/sbin/groupdel '%g'
        add user to group script = /usr/sbin/usermod -G '%g' '%u'
        guest account = nobody
        syslog = 0
        log file = /var/log/samba/samba.log
        max log size = 1000
        wins support = yes
        dns proxy = no
        socket options = TCP_NODELAY
        panic action = /usr/share/samba/panic-action %d

        [homes]
        comment = Home Directory
        browseable = no
        read only = no
        valid users = %S

        [storage]
        create mask = 0777
        directory mask = 0777
        browseable = yes
        comment = Public Share
        writeable = yes
        public = yes
        path = /srv/storage

    The following fstab entry doesn't yield full RW access to the share:

        //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0

    This doesn't work either:

        //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0

    Using the following location in Nemo/Nautilus without the share being mounted does work:

        smb://192.168.0.5/storage/

    Extra info: I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately makes "nobody" the owner, and the group "no group" has read and write, with everyone else as read-only.

    What am I doing wrong?

    Read the article
