Search Results

Search found 91968 results on 3679 pages for 'wp user role'.


  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client-server application design. We have a browser-based management application with many users, so obviously the application includes a user management module. I have always thought that a user table in the database to hold all the login details was good enough. However, a senior developer said that user management should be done at the database server layer, and that anything else is poorly designed. What he meant was: if a user wants to use the application, an account should be created in the user table AND as a user account on the database server. So if I have 50 users of my application, I should have 50 database server logins. I personally think that a single database server account for this database is enough; just grant that account the privileges needed for all the operations the application performs. The users who interact with the application should have their accounts created and managed in the database table, since they belong to the application layer. I don't agree that a database server account needs to be created for every application user in the user table; a single database server user should be enough to handle all the queries sent by the application. I'd really like to hear some suggestions and opinions: am I missing something, such as performance or security issues? Thank you very much.
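    For illustration, here is a minimal sketch of the single-service-account approach the question leans toward. It assumes MySQL, and the account, host, and schema names are purely hypothetical, not anything from the original post:
      # One database login for the whole application, limited to the privileges
      # the application actually needs; application users live in an ordinary
      # "users" table instead of in mysql.user.
      mysql -u root -p -e "
        CREATE USER 'appsvc'@'app.example.com' IDENTIFIED BY 'strong-password';
        GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appsvc'@'app.example.com';"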

    Read the article

  • What is a typical scenario for an end-user report design?

    - by Sebastian
    Hello! I'm wondering what the typical scenario for using an end-user report designer would be. What I'm thinking of is a base report with all the available columns and a basic layout (formatting, column order, etc.), and then letting the user change that format and order and add or remove data from the available columns. Is that a common way to approach what is called an end-user report designer, or am I off track? I know it depends on the user (for example, whether they can handle SQL or not), but is it common to have a scenario where the user builds everything from the SQL query to the formatting? Thanks! Sebastian

    Read the article

  • Cannot log in to Windows

    - by LoRdiE
    When I log in to Windows with a domain user account, it shows the welcome screen for about a minute and then automatically logs off. I suspected a user profile error, so I logged in with the administrator account and created a new local account, but logging in with the local user account behaves the same as the domain account. Only administrator-level accounts can log in to Windows. Does anyone know how I can fix this? Thanks

    Read the article

  • How do I save user-specific data in an ASP.NET site?

    - by Greg McNulty
    I just set up user profiles in ASP.NET 3.5 using wvd. For each user I would like to store data that they will update every day; for example, every time they go for a run they will record time and distance. I also want to let them look up their history of distance and time for any past date. My question is: what does the database schema usually look like for such a setup? ASP.NET created a database for me when I set up user profiles. Do I just add an extra table for every user? Should there be one big table with all users' data? How do I relate a user ID to their specific data? I have never done this before, so any ideas on how this is usually done would be very helpful. Thank you.
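    As a rough, hedged illustration of the shape such a schema usually takes (one shared activity table keyed by the membership user ID, rather than a table per user): every name below is made up except dbo.aspnet_Users, the standard ASP.NET membership users table, and the sqlcmd instance/database names are assumptions.
      # Create one activity table that references the membership UserId.
      sqlcmd -S 'localhost\SQLEXPRESS' -d aspnetdb -E -Q "
        CREATE TABLE dbo.UserActivity (
            ActivityId   INT IDENTITY(1,1) PRIMARY KEY,
            UserId       UNIQUEIDENTIFIER NOT NULL REFERENCES dbo.aspnet_Users (UserId),
            ActivityDate DATETIME     NOT NULL,
            DistanceKm   DECIMAL(6,2) NOT NULL,
            DurationSec  INT          NOT NULL
        );"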

    Read the article

  • UI suggestions on how to display suggested tags for a given text to a user?

    - by Danny
    I am writing a web app that uses a tagging system to organize the user's submitted reports. Part of it uses Ajax to fetch tag suggestions based on the content of the report, and I am looking for suggestions on how to present this information to the user; I'm not quite certain what a friendly way to do this would be. Edit: most of the responses here seem to focus on the user typing in keywords. The idea I'm trying to describe is more about presenting the user with a set of suggested keywords that they may accept or decline without having to type a tag manually (that option is of course still available to them). Something like this, where they can check off or select the tags they like:
      ---------------------------
      | o[tag2] x[foo] o[moo] |
      | x[tag1] o[bar]        |
      ---------------------------

    Read the article

  • OSX root user keeps re-enabling itself on reboot

    - by geodave
    Running Snow Leopard. Completely inexplicably, I seem to have enabled the OS X root user by accident. I honestly have no idea how it happened, but if memory serves I was looking at the login pane (with my two user accounts) when I must have hit something, and suddenly the two accounts were replaced by one that just said "Other..." Clicking the "Other..." account lets me type a username and password, but neither of the normal two accounts works, and since I never set a root password it wouldn't let me in that way either. So I booted into single-user mode and ran these commands:
      /sbin/mount -uw /
      fsck -fy
      launchctl load /System/Library/LaunchDaemons/com.apple.DirectoryServices.plist
      dscl . -passwd /Users/root newpassword
    and that let me log in as root. Then I went to System Preferences > Accounts > Login Options, clicked Join, opened Directory Utility, and in the Edit menu clicked "Disable Root User". Great, I thought, back to normal. Except after rebooting I still only have the "Other..." account visible, and the root password I set beforehand doesn't work anymore! I have to reboot into single-user mode and go through the whole process again just to get back into the system (as root). How on Earth did I accidentally enable this? I didn't even know about Directory Utility before now. And most importantly, why the heck would it be re-enabling the root user on boot? Thanks in advance for any help!
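    An aside, not from the original post: dsenableroot, which ships with OS X, can toggle the root account from Terminal and may be easier to repeat than the Directory Utility steps above ("adminuser" is a placeholder for one of the normal admin accounts):
      dsenableroot -d -u adminuser    # disable the root account (prompts for the admin password)
      dsenableroot -u adminuser       # re-enable it later, if ever needed (prompts for a root password)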

    Read the article

  • How to reset the postgres superuser password on Mac OS X

    - by Andrew Barinov
    I installed Postgres on my Mac running 10.6.8 and I would like to reset the password for the postgres user (I believe this is the superuser password) and then restart it. None of the directions I found work, I think because my username is not recognized by Postgres as having the authority to change the password. (I am on the admin account of my Mac.) Here is what I tried:
      Larson-2:~ larson$ psql -U postgres
      Password for user postgres:
      psql (9.0.4, server 9.1.2)
      WARNING: psql version 9.0, server version 9.1. Some psql features might not work.
      Type "help" for help.
      postgres=# ALTER USER postgres with password 'mypassword'
      postgres-# \q
    and to restart I did:
      Larson-2:~ larson$ su postgres -c 'pg_ctl -D /opt/local/var/db/postgresql84/defaultdb/ restart
    which didn't work, as the password remained the same as before. Can someone provide directions for doing this and for making sure it's recognized by Postgres?
    Update: I went ahead and edited the pg_hba.conf file located in /Library/PostgreSQL/9.1/data and set it as follows:
      # TYPE  DATABASE  USER  ADDRESS       METHOD
      # "local" is for Unix domain socket connections only
      local   all       all                 trust
      # IPv4 local connections:
      host    all       all   127.0.0.1/32  trust
      # IPv6 local connections:
      host    all       all   ::1/128       trust
    However, like before, the password stayed the same after I changed it. I am not sure what further steps I can take from here.
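    For reference, note that in the transcript above the prompt changed from postgres=# to postgres-#: that is psql's continuation prompt, which means the ALTER USER statement was still waiting for a terminating semicolon and was never executed. A minimal sketch of the intended sequence (the data directory is the one mentioned in the question; adjust it to your install):
      psql -U postgres -c "ALTER USER postgres WITH PASSWORD 'newpassword';"   # note the semicolon
      su postgres -c "pg_ctl -D /Library/PostgreSQL/9.1/data restart"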

    Read the article

  • chrooted sftp user with write permissions to /var/www

    - by matthew
    I am getting confused about this setup that I am trying to deploy. I hope someone can lend me a hand: much appreciated. Background info: the server is Debian 6.0, ext3, with Apache2/SSL and Nginx in front as a reverse proxy. I need to provide SFTP access to the Apache root directory (/var/www), making sure that the SFTP user is chrooted to that path with RWX permissions, all without modifying any default permissions in /var/www.
      drwxr-xr-x 9 root root 4096 Nov 4 22:46 www
    Inside /var/www:
      -rw-r----- 1 www-data www-data 177 Mar 11 2012 file1
      drwxr-x--- 6 www-data www-data 4096 Sep 10 2012 dir1
      drwxr-xr-x 7 www-data www-data 4096 Sep 28 2012 dir2
      -rw------- 1 root root 19 Apr 6 2012 file2
      -rw------- 1 root root 3548528 Sep 28 2012 file3
      drwxr-x--- 6 www-data www-data 4096 Aug 22 00:11 dir3
      drwxr-x--- 5 www-data www-data 4096 Jul 15 2012 dir4
      drwxr-x--- 2 www-data www-data 536576 Nov 24 2012 dir5
      drwxr-x--- 2 www-data www-data 4096 Nov 5 00:00 dir6
      drwxr-x--- 2 www-data www-data 4096 Nov 4 13:24 dir7
    What I have tried: I created a new group secureftp, then a new sftp user joined to the secureftp and www-data groups, with a nologin shell and home directory /. I edited sshd_config with:
      Subsystem sftp internal-sftp
      AllowTcpForwarding no
      Match Group secureftp
          ChrootDirectory /var/www
          ForceCommand internal-sftp
    I can log in with the sftp user and list files, but no write action is allowed. The sftp user is in the www-data group, but the group permission bits in /var/www are read/read+execute, so it doesn't work. I've also tried ACLs, but when I apply ACL RWX permissions for the sftp user to /var/www (dirs and files recursively), it changes the Unix permissions as well, which is what I don't want. What can I do here? I was thinking I could let the www-data user itself log in over SFTP, so that it could modify the files and dirs that www-data owns in /var/www, but for some reason I think this would be a stupid move security-wise.
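    A hedged aside on the ACL observation above: setfacl does not actually rewrite the group entry; once a named-user ACL exists, ls -l displays the ACL mask in the group column (with a trailing +), while the original group::r-x entry is still visible in getfacl output. A minimal sketch, with "sftpuser" as a placeholder name:
      setfacl -R -m u:sftpuser:rwX /var/www                            # grant rwx via ACL (X = dirs/executables only)
      find /var/www -type d -exec setfacl -d -m u:sftpuser:rwX {} +    # default ACLs on directories so new files inherit the grant
      getfacl /var/www/dir1                                            # group::r-x is preserved; mask::rwx is what ls shows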

    Read the article

  • Managing arbitrary user permissions under PureFTPd

    - by Sebastián Grignoli
    I need to provide an FTP service that can be web-managed in the simplest way possible. My customer wants to create folders and users, and give the users read-only or read/write access arbitrarily. For example: the folder 'Documents' should be read-only for several users, writable for internal users, and invisible for the rest; the folder 'Pictures' should be read-only for journalists, writable for associates, and invisible for the rest; the folder 'Media' should be read-only, writable, or invisible for arbitrary users specified in the admin tool. There could be a large number of users and folders, and I can't find a good way to accomplish this. I thought I could give each user a home folder containing symlinks to the folders he has read access to, and make the user part of a folder's group when he also has write access, but now I think this wouldn't work: with PureFTPd (or ProFTPd) I can only map a virtual user to one system user, with a single GID per virtual user, while my approach would require several GIDs per user (one for each folder he has write access to). I need to start programming this admin tool and I still don't know which approach would work, if any. Any ideas?
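    For context, a hedged sketch of how PureFTPd virtual users are typically created with the pure-pw backend; it illustrates the single system user and single GID per virtual user mentioned above (all names and paths are hypothetical):
      groupadd ftpgroup
      useradd -g ftpgroup -d /dev/null -s /usr/sbin/nologin ftpuser      # one system account backing all virtual users
      pure-pw useradd journalist1 -u ftpuser -g ftpgroup -d /srv/ftp/journalist1   # -d chroots the virtual user to its home
      pure-pw mkdb                                                       # rebuild the PureDB user database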

    Read the article

  • Win7 - Opening "Programs and Features" as Admin from command line (logged in as regular user)

    - by user1741264
    We have Win7 machines on a domain, and we'd like to open the "Programs and Features" control applet from the command line while a regular user is logged in. Here's the catch: I know how to do this using runas from the command line, BUT after "Programs and Features" opens, I don't truly have the ability to remove a program; I am told that I need to be an admin to do so. Here are the commands I have tried:
      runas /user:%computername%\administrator cmd.exe   (then, in the new cmd window: control appwiz.cpl)
      runas /user:%companydomain%\%domainadminacct% cmd.exe   (then: control appwiz.cpl)
      runas /user:%computername%\administrator cmd.exe   (then: rundll32.exe shell32.dll,Control_RunDLL appwiz.cpl)
      runas /user:%companydomain%\%domainadminacct% cmd.exe   (then: rundll32.exe shell32.dll,Control_RunDLL appwiz.cpl)
    As you can see, I have tried the command with both a local admin account (Administrator) AND a domain admin account, and I have tried both running it as one long command (opening "Programs and Features" directly) AND first launching cmd.exe with admin rights and then launching the "Programs and Features" window from there. The result is the same: the "Programs and Features" window opens, but when I try to perform an uninstall I am told I need admin rights, which leads me to believe that this instance of "Programs and Features" is not truly being run as an admin. I am trying to avoid logging the regular user out. I am also aware that every program has its own uninstaller; I do not want to uninstall that way. I want to use the uninstaller in "Programs and Features". Any help is appreciated.

    Read the article

  • Windows Azure Evolution – Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview versions of Windows 8 and Visual Studio, many people in the community asked whether the Windows Azure tools were available for them. The answer was "no": Microsoft said the Windows Azure tools would only support Visual Studio 2010, and that they would work with Visual Studio 2012 once it was finally released. But now, along with the new Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with the Visual Studio 2012 RC.

    You can get the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also pulls in the other components it needs. To download the latest Windows Azure SDK from the Web Platform Installer, go to the Windows Azure website, click Develop, then .NET, and click the blue "install" button. You then need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting the version you will download an EXE file that installs the Web Platform Installer 4.0 (if you haven't installed it already) and the latest Windows Azure SDK; you can see the version name is June 2012, 1.7. Finally, WebPI detects the dependent components you need to download and begins the installation. If you want to challenge yourself you can download the components and install them manually; the standalone installers are listed on this page with instructions on how to install them and their prerequisites.

    Once the installation has finished you can open Visual Studio 2012 RC, which as usual needs to be run as administrator. If you click the New Project link on the start page and navigate to the Cloud category, you will find that there is no project template available. Is anything wrong? If you change the target framework from the default .NET 4.5 to .NET 4, the Azure project template appears; this is because the Windows Azure instances do not currently support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before, but there are some new role templates in this SDK. First, an ASP.NET MVC 4 web role is available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and Web API scenarios on the cloud. There are also two new worker role templates, "Cache Worker Role" and "Worker Role with Service Bus Queue". "Worker Role with Service Bus Queue" is a worker role with the references needed to access the Windows Azure Service Bus Queue already added, plus some basic sample code in the worker role class that reads messages from the queue when it starts. The "Cache Worker Role" is a worker role with the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching: it allows role instances to use their own memory as an in-memory distributed cache cluster. You can dedicate one or more worker roles as cache clusters, or alternatively use part of the memory of your web roles and worker roles as the cache cluster.

    Let's create an ASP.NET MVC 4 Web Role and press F5 to run it under the local emulator. If you have been working with Azure for a while you will know that the local storage emulator has to be set up before running locally on a fresh SDK installation. In this version, however, when the Azure project starts, Visual Studio checks whether the storage emulator has been initialized and, if not, runs the initializer automatically. As you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature and creates the emulator database and tables in the default local database. You can make the storage emulator use a standard SQL Server default instance with the command "dsinit /instance:.". The dsinit tool is now located at %PROGRAM FILES%\Microsoft SDKs\Windows Azure\Emulator\devstore.

    After Visual Studio has compiled and deployed the package, our website appears in the browser; this is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator uses IIS Express to host the web roles instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code against the storage service, all the same as on SDK 1.6. You can switch to IIS to run your web role in the local emulator: open the Windows Azure project property window and, on the Web page, select "Use IIS Web Server". For more information please have a look at Nuno's blog post.

    There are no massive changes in the role property page in Visual Studio; you can configure your role settings such as endpoints, certificates and local storage as before. One addition is the Caching tab, where you can enable or disable the caching feature and specify how much memory you want to use as the cache cluster. I will cover it in more detail in future posts. The publish and package features are also unchanged: you can publish your project to Azure directly through Visual Studio 2012, or create the package and upload it manually. Below is the SDK version of my deployment, 1.7.30602.1703, in the developer portal.

    Summary: In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes in the Visual Studio tooling in this version, but there are some small enhancements such as ASP.NET MVC 4, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Remote Desktop Services Licensing - Does the server have to have an RDS role?

    - by transistor1
    I recently set up a "micro" size Windows 2008 Datacenter server on Amazon AWS. My small group needs several concurrent RDS users to be able to access the machine. Without installing the "Remote Desktop Server" role, it allows 2 concurrent connections. I read on MS' website that in order to set up multiple users, we needed to install the RDS role. I did so, but now the application we are trying to share is running much slower than it was before. Prior to the role installation it took about 5 seconds to open; now it takes a few minutes to open -- without any other users logged on except me. My assumption is that the RDS role may be too much for this micro instance to handle, and currently, changing to another instance size is not an option (it may be possible later if we receive enough funding). This leads me to the following questions: 1) Is it a sensible assessment that it is the RDS role that is slowing things down, or are there other things I could look at to speed it up? We are talking about a machine with ~600MB of memory. 2) If I revert to the pre-RDS role, is there any legitimate way (in terms of purchasing RDS licenses) to get more than 2 concurrent desktops? I did read this, and am not questioning that the answerer is knowledgeable, but someone else may have other experience. I am also making it clear that we want to do this in a legitimate way. Thanks in advance for any assistance that can be provided! EDIT: if it is helpful in answering the question, the application in question is a Lotus Approach database. Also, I am asking this from a technical perspective, not a legal one: I want to know whether it is possible to install valid licenses without the RDS role.

    Read the article

  • List of Users and Roles using the Membership Provider

    - by Jemes
    I’m trying to produce a view to show a list of users and their role using the built in membership provider. My model and controller are picking up the users and roles but I’m having trouble displaying them in my view. Model public class AdminViewModel { public MembershipUserCollection Users { get; set; } public string[] Roles { get; set; } } Controller public ActionResult Admin() { AdminViewModel viewModel = new AdminViewModel { Users = MembershipService.GetAllUsers(), Roles = RoleService.GetRoles() }; return View(viewModel); } View Inherits="System.Web.Mvc.ViewPage<IEnumerable<Account.Models.AdminViewModel>>" <table> <tr> <td>UserName</td> <td>Email</td> <td>IsOnline</td> <td>CreationDate</td> <td>LastLoginDate</td> <td>LastActivityDate</td> </tr> <% foreach (var item in Model) { %> <tr> <td><%=item.UserName %></td> <td><%=item.Email %></td> <td><%=item.IsOnline %></td> <td><%=item.CreationDate %></td> <td><%=item.LastLoginDate %></td> <td><%=item.LastActivityDate %></td> <td><%=item.ROLE %></td> </tr> <% }%> </table>

    Read the article

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
    By Misha Vaughan, Oracle Applications User Experience

    Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm. The Oracle HCM Cloud user experience is about lightweight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work.

    Release 8
    Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of release 8. In release 8, users will see expanded simplicity in the HCM Cloud user experience, such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and with the user interface text editor, which allows you to update language throughout the UI from one place. If you don't like calling people who work for you "employees," you can use this tool to create a term that is suited to your business. Take a look yourself at what's available now.

    What are people saying?
    Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on release 8: "Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publically about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use." In an interview with Lilley for a blog post by Dennis Howlett (@dahowlett), we probably couldn't have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture. In closing, Howlett said: "There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before."

    Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. "We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content. From a technical point of view, having the abilities to tailor the simplified UI very easy makes it very efficient for us to adjust to specific customer needs. When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it. No training and no explanation required."

    Finally, in a story by Computer Weekly about Oracle customer BG Group, a natural gas exploration and production company based in the UK and with a presence in 20 countries, the author states: "The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology."

    What's next for Oracle's Applications Cloud user experiences?
    This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we've been hard at work for some time now on "what's next." I can't say too much about it, but I can tell you that we've started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of Allergan, our hosts (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries.

    [Photo by Misha Vaughan, Oracle Applications User Experience: Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit.]

    We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition -- how Allergan has designed and deployed a global process, and global tools, along with Oracle and Cognizant, and is now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: "One of the major areas for improvement was on role clarification within the company." She said the company is "empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient."

    Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected on the modern HR problems of getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?" he asked. "Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport."

    [Photo caption: Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go.]

    Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke.

    [Photo caption: Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts.]

    We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle's design strategy is around these ideas. We closed with an excellent reception hosted by ADP Payroll Services at Bistango.

    Want to read more?
    Want to see where our cloud user experience is going next? Read more on the UsableApps web site about our latest design initiative: "Glance, Scan, Commit." Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps web site. You can also find out where we'll be next at the Events page on UsableApps.

    Read the article

  • How to enable and connect to RDP on a Windows Azure Web Role Instance?

    - by Enrique Lima
    We all know there have been some updates to Windows Azure, and one of the biggest, I would say, is the capability to remote into the "OS level" of the image running a role. And I am not talking about the VM Role; I am talking about a Web Role, for example. As developers we use Visual Studio, and when we are getting ready to deploy a project we have the option of enabling this. Here is how:
    1. We publish our project.
    2. On the Deployment dialog, provide all the details for your account, and before clicking OK, click Configure Remote Desktop connections.
    3. Enable connections and the rest of the configuration. Now, here is where there is an extra set of steps. The first thing to know: the certificate used here is different from the other certs you have in place. I created a new one, then went into certmgr.msc, then to Personal, and selected the cert I had just created. I did a right-click, then All Tasks > Export. Because what is needed is a .pfx package, make sure that when exporting you select to export the private key (see the OpenSSL aside after these steps).
    4. Click OK on the Remote Desktop Configuration screen. Now, before you click OK on the Deployment dialog, you will need to visit the Azure Portal and perform the following: go to your hosted services; with the service available, select the Certificates folder location; select Add Certificate from the toolbar (more like an Azure Portal ribbon); and provide the details to upload the recently created .pfx file. That will create the certificate. Click OK on the deployment dialog; this kicks off the deployment process.
    5. Now we need to go to the Windows Azure Portal. Here we will select the deployed Web Role and configure RDP.
    6. Time to test. Click on the instance (not the role); this will make the Remote Access Connect button available. A file will also start to download.
    7. You will then be prompted for the credentials you configured.
    8. Validate connectivity …
    9. Open IIS Manager … From here on, this is a way to manage and work with your instance.
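    A hedged aside on step 3: the post uses certmgr.msc to export the certificate, but an equivalent self-signed certificate packaged as a .pfx containing the private key can also be produced with OpenSSL. The file names and subject below are placeholders, not anything from the original post:
      openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=azure-rdp" -keyout rdp.key -out rdp.cer   # self-signed cert + key
      openssl pkcs12 -export -in rdp.cer -inkey rdp.key -out rdp.pfx                                           # bundle both into a .pfx (prompts for an export password)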

    Read the article

  • Apache-Mina FTPServer Issue - unable to log in to Apache FTP server while using the database user manager

    - by piyush
    I am unable to log in to the Apache FTP server while using the database user manager. When entering the username and password, I get the following errors in the log file:
      [ INFO] 2013-02-07 20:51:07,779 [] [0:0:0:0:0:0:0:1] RECEIVED: USER piyush
      [ INFO] 2013-02-07 20:51:07,781 [piyush] [0:0:0:0:0:0:0:1] SENT: 331 User name okay, need password for piyush.
      [ INFO] 2013-02-07 20:51:07,784 [piyush] [0:0:0:0:0:0:0:1] RECEIVED: PASS *****
      [ WARN] 2013-02-07 20:51:07,785 [piyush] [0:0:0:0:0:0:0:1] User failed to log in
      [ WARN] 2013-02-07 20:51:08,285 [piyush] [0:0:0:0:0:0:0:1] Login failure - piyush
      [ INFO] 2013-02-07 20:51:08,286 [piyush] [0:0:0:0:0:0:0:1] SENT: 530 Authentication failed.
      [ INFO] 2013-02-07 20:51:08,286 [piyush] [0:0:0:0:0:0:0:1] RECEIVED: QUIT
      [ INFO] 2013-02-07 20:51:08,290 [piyush] [0:0:0:0:0:0:0:1] SENT: 221 Goodbye.
      [ INFO] 2013-02-07 20:51:08,291 [piyush] [0:0:0:0:0:0:0:1] CLOSED
    Here is my XML file, ftpd-typical.xml:
      <?xml version="1.0" encoding="UTF-8"?>
      <!--
        Licensed to the Apache Software Foundation (ASF) under one or more contributor
        license agreements. See the NOTICE file distributed with this work for additional
        information regarding copyright ownership. The ASF licenses this file to you under
        the Apache License, Version 2.0 (the "License"); you may not use this file except
        in compliance with the License. You may obtain a copy of the License at
        http://www.apache.org/licenses/LICENSE-2.0
        Unless required by applicable law or agreed to in writing, software distributed
        under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
        CONDITIONS OF ANY KIND, either express or implied. See the License for the
        specific language governing permissions and limitations under the License.
      -->
      <server xmlns="http://mina.apache.org/ftpserver/spring/v1"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:beans="http://www.springframework.org/schema/beans"
              xsi:schemaLocation="
                http://mina.apache.org/ftpserver/spring/v1
                http://mina.apache.org/ftpserver/ftpserver-1.0.xsd
              "
              id="Prometheus">
        <listeners>
          <nio-listener name="default" port="2121" />
        </listeners>
        <db-user-manager encrypt-passwords="salted">
          <data-source>
            <beans:bean class="org.apache.commons.dbcp.BasicDataSource">
              <beans:property name="driverClassName" value="com.mysql.jdbc.Driver" />
              <beans:property name="url" value="jdbc:mysql://localhost/apache_test" />
              <beans:property name="username" value="amy" />
              <beans:property name="password" value="piyush" />
            </beans:bean>
          </data-source>
          <insert-user>INSERT INTO FTP_USER (userid, userpassword, homedirectory, enableflag, writepermission, idletime, uploadrate, downloadrate) VALUES ('{userid}', '{userpassword}', '{homedirectory}', {enableflag}, {writepermission}, {idletime}, {uploadrate}, {downloadrate})</insert-user>
          <update-user>UPDATE FTP_USER SET userpassword='{userpassword}', homedirectory='{homedirectory}', enableflag={enableflag}, writepermission={writepermission}, idletime={idletime}, uploadrate={uploadrate}, downloadrate={downloadrate} WHERE userid='{userid}'</update-user>
          <delete-user>DELETE FROM FTP_USER WHERE userid = '{userid}'</delete-user>
          <select-user>SELECT userid, userpassword, homedirectory, enableflag, writepermission, idletime, uploadrate, downloadrate, maxloginnumber, maxloginperip FROM FTP_USER WHERE userid = '{userid}'</select-user>
          <select-all-users>SELECT userid FROM FTP_USER ORDER BY userid</select-all-users>
          <is-admin>SELECT userid FROM FTP_USER WHERE userid='{userid}' AND userid='admin'</is-admin>
          <authenticate>SELECT userpassword from FTP_USER WHERE userid='{userid}'</authenticate>
        </db-user-manager>
      </server>

    Read the article

  • mount_afp on Linux, user rights

    - by Antonio Sesto
    I need to mount a remote filesystem on a Linux box using the AFP protocol. The Linux box runs an old Debian 4. I downloaded the source code of mount_afp, compiled it and installed it with all the required packages, then created /dev/fuse with the following command (according to the instructions here):
      mknod /dev/fuse c 10 229
    I can mount the remote filesystem as root by executing:
      mount_afp afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/
    but the same command fails when run as a normal user of the local machine. After reading here and there, I created a group fuse and added my normal user U to it:
      [prompt] groups U
      U fuse
    Then I modified the group of /dev/fuse, which now has the following rights:
      0 crwxrwx--- 1 root fuse 10, 229 Feb 8 15:33 /dev/fuse
    However, if user U tries to mount the remote filesystem with the same command as above, U gets the following error:
      Incorrect permissions on /dev/fuse, mode of device is 20770, uid/gid is 0/1007. But your effective uid/gid is 1004/1004
    But user U, with uid 1004, also has gid 1007 (group fuse). I suspect the problem is related to real/effective/etc. IDs, but I do not know how to proceed and could not find any clear instructions. Could you please help me?
    There is also another problem. If I mount /mnt/MOUNTPOINT as root and run ls -l /mnt, I get:
      drwxrwxrwx 15 root root 466 Feb 8 16:34 MONTPOINT
    If I run ls -l /mnt as normal user U, I get:
      ? ?????????? ? ? ? ? ? MOUNTPOINT
    and in fact when I try to cd /mnt/MOUNTPOINT I get:
      $-> cd /mnt/MOUNTPOINT
      -sh: cd: /mnt/MOUNTPOINT: Not a directory
    If I then unmount /mnt/MOUNTPOINT as root and run ls -l /mnt again as normal user U, I get:
      0 drwxr-xr-x 2 root root 6 Feb 8 15:32 MOUNTPOINT/
    After reading Frank's answer, I killed every shell/process running with the privileges of user U. U still cannot mount the remote filesystem, but the error message has changed; now it is "Login error: Authentication failed". The problem is not related to the remote login/password, since the same command works perfectly when run as root on the local machine. Since I cannot get mount_afp to work with normal users, I decided to follow mgorven's suggestion, so I ran the commands:
      mount_afp -o allow_other afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/
    and
      mount_afp -o user=U afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/
    The mount succeeds but user U cannot access the mount point. If U executes ls -l in /mnt:
      U@LOCAL_HOST [/mnt] $-> ls -l
      ls: cannot access MOUNT_POINT: Permission denied
      total 0
      ? ?????????? ? ? ? ? ? MOUNT_POINT
    Is it so hard to get this utility working?
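    A hedged note on the uid/gid error quoted above: what the device check sees is the effective gid of the running shell, and membership in a freshly added group only appears in sessions started after the change (which is consistent with the asker retrying after killing U's processes). A minimal sketch of the usual checks, with U as the placeholder username from the question:
      id                      # does the *current* session actually list the fuse group?
      su - U                  # start a fresh login session for U, then retry the mount
      sg fuse -c "mount_afp afp://USER:PASSWD@REMOTE_SERVER/FOLDER /mnt/MOUNTPOINT/"   # or run just this command with fuse as the active group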

    Read the article

  • Cannot install any Feature/Role on Windows 2008 R2 Standard

    - by Parsa
    I was trying to install the Exchange 2010 prerequisites when I encountered some errors. They all look the same, like this one:
      Error: Installation of [Windows Process Activation Service] Configuration APIs failed. The server needs to be restarted to undo the changes.
    My server is running Windows Server 2008 R2 Standard Edition.
    UPDATE: I tried installing the prerequisites one by one using PowerShell. Now I get errors on the RPC over HTTP proxy:
      Installation of [Web Server (IIS)] Tracing failed. Attempt to install tracing failed with error code 0x80070643. Fatal error during installation.
    Searching for the error code doesn't tell me much more than that something went wrong while trying to update Windows. Installing HTTP Tracing alone also doesn't make any difference.

    Read the article

  • Multi-Role Domain Controllers for Small Offices (< 50 clients)

    - by kce
    Warning: I'm a Linux/*NIX admin, so this is all new to me. I understand that it's not considered a good idea to have only a single domain controller, and that it is also probably a good idea for a domain controller to only do AD/DHCP/DNS (Here). We have two offices, location A with 30 users and location B with 10 users. Our two offices are separated by a WAN that is not particularly robust, so I have been instructed that we need to have standalone services in each office. This means that, according to "best practices", we will need to build a domain controller and a separate file server in each office. Again, I am not knowledgeable in the ways of Windows, but this seems a little unnecessary for an organization of 40 users. People have commented that I could "get away with" running file services on the domain controller as long as the "load is light". That just seems to generate more questions than it answers. What constitutes light load? What are the potential consequences of mixing these roles? Ideally I would prefer to have only one physical machine at each location. The one in location A (the location with IT staff) can act as the primary domain controller and the one in the smaller office can act as the backup domain controller. If either domain controller fails we can still use the other one for authentication (albeit with some latency), and if the WAN connection fails each office still has access to its respective "local" domain controller. If the file services are ALSO run on each server (and synchronized with something like DFS), a similar arrangement in terms of redundancy can be had without having to purchase, build and install two additional separate servers. It's not that I'm averse to that (well, any more averse than I am to the whole thing to begin with), but to my simple mind it just seems, well, a bit overkill. I can definitely see the benefits of functional separation when we're talking about larger organizations, but I need to consider the additional overhead too. None of this excludes having a DRP setup for the domain controller(s). I assume you can lose two domain controllers just as easily as one.

    Read the article

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario:
    Step 1: the user signs up and the user record is inserted into the database.
    Step 2: the user then tries to log in, and the login process queries the database for the user. However, this query hits the slave database, where the user record has not yet been replicated, and it returns an error that the user does not exist.
    This is a pretty trivial example, but I can see how it applies to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?

    Read the article

  • Is it possible for root to execute a command as non-root

    - by adnan kamili
    I am the root user and I want to run an application as another user. Is that possible without switching to the other user? Something like:
      # google-chrome user=abc
    I am actually executing a CLI program as a non-root user. I have set the sticky bit and am using setuid, so the program runs with root privileges. Now I am using system() within the program to invoke a GUI app, but I don't want to run it as root, so I want to temporarily drop root privileges only for that call.
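    For what it's worth, a minimal sketch of the standard ways for root to run a single program as another user ("abc" is the placeholder username from the question):
      sudo -u abc google-chrome        # run one command as user abc
      su - abc -c google-chrome        # same idea via su, with abc's login environment
    Inside a setuid C program, the equivalent is to drop privileges (for example with setuid()/seteuid() to the target uid) before calling system().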

    Read the article

  • Secure method of changing a user's password via Python script/non-interactively

    - by Matthew Rankin
    I've created a Python script using Fabric to configure a freshly built Slicehost Ubuntu slice. In case you're not familiar with Fabric, it uses Paramiko, a Python SSH2 client, to provide remote access "for application deployment or systems administration tasks." One of the first things I have the Fabric script do is create a new admin user and set their password. Unlike Pexpect, Fabric cannot handle interactive commands on the remote system, so I need to set the user's password non-interactively. At present I'm using the chpasswd command to change the password, which transmits the password as clear text over SSH to the remote system.
    Questions: Is my current method of setting the password a security concern? Currently, the drawback I see is that Fabric shows the password as clear text on my local system, as follows: [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd. Since I only run the Fabric script from my laptop, I don't think this is a security issue, but I'm interested in others' input. Is there a better method for setting the user's password non-interactively? Another option would be to use Pexpect from within the Fabric script to set the password.
    Current code:
      # Fabric imports and host configuration excluded for brevity
      root_password = getpass.getpass("Root's password given by SliceManager: ")
      admin_username = prompt("Enter a username for the admin user to create: ")
      admin_password = getpass.getpass("Enter a password for the admin user: ")
      env.user = 'root'
      env.password = root_password
      # Create the admin group and add it to the sudoers file
      admin_group = 'admin'
      run('addgroup {group}'.format(group=admin_group))
      run('echo "%{group} ALL=(ALL) ALL" >> /etc/sudoers'.format(group=admin_group))
      # Create the new admin user (default group=username); add to admin group
      run('adduser {username} --disabled-password --gecos ""'.format(username=admin_username))
      run('adduser {username} {group}'.format(username=admin_username, group=admin_group))
      # Set the password for the new admin user
      run('echo "{username}:{password}" | chpasswd'.format(username=admin_username, password=admin_password))
    Local system terminal I/O:
      $ fab config_rebuilt_slice
      Root's password given by SliceManager:
      Enter a username for the admin user to create: johnsmith
      Enter a password for the admin user:
      [xxx.xx.xx.xxx] run: addgroup admin
      [xxx.xx.xx.xxx] out: Adding group `admin' (GID 1000) ...
      [xxx.xx.xx.xxx] out: Done.
      [xxx.xx.xx.xxx] run: echo "%admin ALL=(ALL) ALL" >> /etc/sudoers
      [xxx.xx.xx.xxx] run: adduser johnsmith --disabled-password --gecos ""
      [xxx.xx.xx.xxx] out: Adding user `johnsmith' ...
      [xxx.xx.xx.xxx] out: Adding new group `johnsmith' (1001) ...
      [xxx.xx.xx.xxx] out: Adding new user `johnsmith' (1000) with group `johnsmith' ...
      [xxx.xx.xx.xxx] out: Creating home directory `/home/johnsmith' ...
      [xxx.xx.xx.xxx] out: Copying files from `/etc/skel' ...
      [xxx.xx.xx.xxx] run: adduser johnsmith admin
      [xxx.xx.xx.xxx] out: Adding user `johnsmith' to group `admin' ...
      [xxx.xx.xx.xxx] out: Adding user johnsmith to group admin
      [xxx.xx.xx.xxx] out: Done.
      [xxx.xx.xx.xxx] run: echo "johnsmith:supersecretpassw0rd" | chpasswd
      [xxx.xx.xx.xxx] run: passwd --lock root
      [xxx.xx.xx.xxx] out: passwd: password expiry information changed.
      Done.
      Disconnecting from [email protected]... done.
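    A hedged sketch of one way to keep the clear-text password out of the echoed remote command: hash the password locally and send only the hash, since chpasswd -e accepts pre-encrypted values (the username below is the question's own placeholder; usermod -p '<hash>' is an equivalent alternative):
      HASH=$(openssl passwd -1 "$ADMIN_PASSWORD")   # crypt-format hash generated on the local machine
      echo "johnsmith:$HASH" | chpasswd -e          # the remote side only ever sees the hash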

    Read the article

  • Chef bash resource not executing as specified user

    - by Arthur Maltson
    I'm writing a Chef cookbook to install Hubot. In the recipe, I do the following:
      bash "install hubot" do
        user hubot_user
        group hubot_group
        cwd install_dir
        code <<-EOH
          wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
          tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
          cd hubot && \
          npm install
        EOH
      end
    However, when I run chef-client on the server installing the cookbook, I get a permission-denied error writing to the home directory of the user that runs chef-client, not the hubot user. For some reason npm is running as the wrong user, not the user specified in the bash resource. I am able to run sudo su - hubot -c "npm install /usr/local/hubot/hubot" manually, and this gets the result I want (it installs Hubot as the hubot user), but it seems chef-client isn't executing the command as the hubot user. Below you'll find the chef-client output. Thank you in advance.
      Saving to: `hubot-2.1.0.tar.gz'
      0K ...... 100% 563K=0.01s
      2012-01-23 12:32:55 (563 KB/s) - `hubot-2.1.0.tar.gz' saved [7115/7115]
      npm ERR! Could not create /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
      npm ERR! Failed creating the tarball.
      npm ERR! couldn't pack /tmp/npm-1327339976597/1327339976597-0.13104878342710435/contents/package to /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
      npm ERR! error installing [email protected] Error: EACCES, permission denied '/home/<user-chef-client-uses>/.npm/log'
      ...
      npm not ok
      ---- End output of "bash" "/tmp/chef-script20120123-25024-u9nps2-0" ----
      Ran "bash" "/tmp/chef-script20120123-25024-u9nps2-0" returned 1

    Read the article
