Search Results

Search found 3919 results on 157 pages for 'role providers'.


  • ASP.NET Universal Providers (System.Web.Providers)

    - by shiju
    The Microsoft Web Platform and Tools (WPT) team has announced the release of the ASP.NET Universal Providers, which allow you to use the Session, Membership, Roles and Profile providers with all editions of SQL Server 2005 and later. This support includes SQL Server Express, SQL Server CE and SQL Azure. The ASP.NET Universal Providers are available as a NuGet package, and the following command will install the package via NuGet: PM> Install-Package System.Web.Providers The support for SQL Azure will help Azure developers easily migrate their ASP.NET applications to the Azure platform. System.Web.Providers.DefaultMembershipProvider is the equivalent of the current SqlMembershipProvider: put the right connection string name in the configuration and it will work with any supported version of SQL Server, based on the connection string. Likewise, System.Web.Providers.DefaultProfileProvider is the equivalent of the existing System.Web.Profile.SqlProfileProvider, and System.Web.Providers.DefaultRoleProvider is the equivalent of the existing System.Web.Security.SqlRoleProvider.
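    Switching an existing application over is then mostly a web.config change. Here is a minimal sketch of the provider registration (the connection string name and the Version/PublicKeyToken values are assumptions - check what the NuGet package writes into your own web.config):

        <membership defaultProvider="DefaultMembershipProvider">
          <providers>
            <add name="DefaultMembershipProvider"
                 type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 connectionStringName="DefaultConnection" applicationName="/" />
          </providers>
        </membership>
        <roleManager enabled="true" defaultProvider="DefaultRoleProvider">
          <providers>
            <add name="DefaultRoleProvider"
                 type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 connectionStringName="DefaultConnection" applicationName="/" />
          </providers>
        </roleManager>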

  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features which are built on top of SimpleMembership, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of the top things you need to know, then dig into a lot more detail. Summary: SimpleMembership has been designed as a replacement for the previous ASP.NET Role and Membership provider system SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController The existing ASP.NET Role and Membership provider system remains supported, as it is part of the ASP.NET core ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools. SimpleMembership: The future of membership for ASP.NET The ASP.NET Membership system was introduced with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you to override some specifics like backing storage) and the ability to store additional profile information (although the additional profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work). The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages: With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really been focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built in security in ASP.NET. So with this release we took that time to create a new built in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created? 
Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles. Part of simplifying membership involved fixing some common problems with ASP.NET Membership. Problems with ASP.NET Membership ASP.NET Membership was very obviously designed around a set of assumptions: Users and user information would most likely be stored in a full SQL Server database or in Active Directory User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider Some problems fall out of these assumptions. Requires Full SQL Server for default cases The default, and most fully featured, ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, rely on SQL Server cache dependencies, and depend on agents for cleanup and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc. Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem. Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those... Custom Membership Providers have to work with a SQL-Server-centric API If you want to work with another database or other membership storage system, you need to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun, since there are a lot of methods that just don't apply. Designed around a specific view of users, roles and profiles The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns: In OAuth and OpenID, the user doesn't have a password Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted, and the membership system worked with your model - not the other way around. Requires specific schema, overflow in blob columns I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema. SimpleMembership as a better membership system As you might have guessed, SimpleMembership was designed to address the above problems. 
Works with your Schema As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema: All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an “ID” column and a “username” column. The important part here is that they can be named whatever you want. For instance, username doesn't have to be an alias; it could be an email column - you just have to tell SimpleMembership to treat that as the “username” used to log in. Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, then a bunch of other columns he wanted in his app. Then we point SimpleMembership at that table with a one-liner: WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true); No other tables are needed, the table can be named anything we want, and it can have pretty much any schema we want as long as we've got an ID and something that we can map to a username. Broaden database support to the whole SQL Server family While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there are places where SQL Server specific SQL statements are being executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point. Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run on Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts? Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system. Easy to use with Entity Framework Code First The problem with ASP.NET Membership's system for storing additional account information is that it's the gatekeeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical example based on the AccountModels.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class. [Table("UserProfile")] public class UserProfile { [Key] [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)] public int UserId { get; set; } public string UserName { get; set; } public DateTime Birthday { get; set; } } Now if I want to access that information, I can just grab the account by username and read the value. 
var context = new UsersContext(); var username = User.Identity.Name; var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username); var birthday = user.Birthday; So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define rather than a bunch of entries in membership tables that are out of your control. How SimpleMembership integrates with ASP.NET Membership Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships): So SimpleMembershipProvider is an implementation of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider: The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates. Membership in the ASP.NET MVC 4 project templates ASP.NET MVC 4 offers six Project Templates: Empty - Really empty, just the assemblies, folder structure and a tiny bit of basic configuration. Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling). Internet - This has both a Home and Account controller and associated views. The Account Controller supports registration and login via either local accounts or OAuth / OpenID providers. Intranet - Like the Internet template, but it's preconfigured for Windows Authentication. Mobile - This is preconfigured using jQuery Mobile and is intended for mobile-only sites. Web API - This is preconfigured for a service backend built on ASP.NET Web API. Out of these templates, only one (the Internet template) uses SimpleMembership. ASP.NET MVC 4 Basic template The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers. 
You can see that configuration in the ASP.NET MVC 4 Basic template's web.config: <profile defaultProvider="DefaultProfileProvider"> <providers> <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" /> </providers> </profile> <membership defaultProvider="DefaultMembershipProvider"> <providers> <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" /> </providers> </membership> <roleManager defaultProvider="DefaultRoleProvider"> <providers> <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" /> </providers> </roleManager> <sessionState mode="InProc" customProvider="DefaultSessionProvider"> <providers> <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" /> </providers> </sessionState> This means that it's business as usual for the Basic template as far as ASP.NET Membership works. ASP.NET MVC 4 Internet template The Internet template has a few things set up to bootstrap SimpleMembership: \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection, which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime) \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity. WebSecurity provides account management services for ASP.NET MVC (and Web Pages). WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider) but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider. Practical example: Create a new ASP.NET MVC 4 application using the Internet application template Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package Run the application, click on Register, add a username and password, and click submit You'll get the following exception in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider". This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above. 
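    The check that produces this exception boils down to a type test. A rough sketch of the idea (illustrative only, not the actual WebMatrix.WebData source):

        // (assumes using System.Web.Security and WebMatrix.WebData)
        // WebSecurity can only work when the configured default membership
        // provider derives from ExtendedMembershipProvider.
        var extended = Membership.Provider as ExtendedMembershipProvider;
        if (extended == null)
        {
            throw new InvalidOperationException(
                "To call this method, the \"Membership.Provider\" property must be an " +
                "instance of \"ExtendedMembershipProvider\".");
        }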
When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks if it can be cast to an ExtendedMembershipProvider before doing anything else. So, what do you do? Options: If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward. If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things: Replace the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with ones from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuth packages, etc.) Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModels classes. Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes. None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET application but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably, the team) know if that's an incorrect assumption. Membership in the ASP.NET 4.5 project template ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses the Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a Membership Adapter to save OAuth data into an EF managed database while still running on top of ASP.NET Membership. Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up. How does this fit in with Universal Providers (System.Web.Providers)? Just to summarize: Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with a database backend from the broader SQL Server family (SQL Azure, SQL Server CE, etc.) rather than just full SQL Server. They don't require agents to handle expired session cleanup and other background tasks; they piggyback these tasks on other calls. Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family. Universal Providers do not work with SimpleMembership. The Universal Providers packages include some web.config transforms which you would normally want when you're using them. What about the Web Site Administration Tool? Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with SimpleMembership. There are two main options there: Use the WebSecurity and OAuthWebSecurity API to manage the users and roles Create a web admin using the above APIs Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even direct database edits (in development, of course).
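    As a practical footnote to the WSAT point, user and role management with SimpleMembership can be scripted against the WebSecurity and Roles APIs directly. A minimal sketch (the user name, password and role are made up; it assumes SimpleMembership has already been initialized, e.g. by the Internet template's InitializeSimpleMembershipAttribute calling WebSecurity.InitializeDatabaseConnection("DefaultConnection", "UserProfile", "UserId", "UserName", autoCreateTables: true)):

        // Create a local account backed by your own UserProfile table.
        WebSecurity.CreateUserAndAccount("alice", "Pa$$w0rd1");

        // SimpleRoles plugs into the standard Roles API.
        if (!Roles.RoleExists("Admin"))
        {
            Roles.CreateRole("Admin");
        }
        Roles.AddUserToRole("alice", "Admin");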

  • Managing User & Role Security with Oracle SQL Developer

    - by thatjeffsmith
    With the advent of SQL Developer v3.0, users have had access to some powerful database administration features. Version 3.1 introduced more powerful features such as an interface to Data Pump and RMAN. Today I want to talk about some very simple but frequently run tasks that SQL Developer can assist with, like: identifying privs granted to users managing role privs assigning new roles and privs to users & roles Before getting started, you’ll need a connection to the database with the proper privileges. The common ROLE used to accomplish this is the ‘DBA’ role. Curious as to what the DBA role is actually comprised of? Let’s find out! Open the DBA Console First make sure you’re connected to the database you want to manage security on with a privileged administrator account. Then open the View menu and select ‘DBA.’ Accessing the DBA panel ‘Create’ a Connection Click on the green ‘+’ button in the DBA panel. It will ask you to choose a previously defined SQL Developer connection. Defining a DBA connection in Oracle SQL Developer Once connected you will see a tree list of DBA features you can start interacting with. Expand the ‘Security’ Tree Node As you click on an object in the DBA panel, the ‘viewer’ will open on the right-hand side, just like you are accustomed to seeing when clicking on a table or stored procedure. Accessing the DBA role If I’m a newly hired Oracle DBA, the first thing I might want to do is become very familiar with the DBA role. People will be asking you to grant them this role or a subset of its privileges. Once you see what the role can do, you will become VERY protective of it. My favorite 3-letter 4-letter word is ‘ANY’ and the DBA role is littered with privileges like this: ANY TABLE privs granted to DBA role So if this doesn’t freak you out, then maybe you should reconsider your career path. Or in other words, don’t be granting this role to ANYONE you don’t completely trust to take care of your database. If I’ve just been assigned a new database to manage, the first thing I might want to look at is just WHO has been assigned the DBA role. SQL Developer makes this easy to ascertain - just click on the ‘User Grantees’ panel. Who has the keys to your car? Making Changes to Roles and Users If you right-click on a user in the tree, you can do individual tasks like grant a sys priv or expire an account. But you can also use the ‘Edit User’ dialog to do a lot of work in one pass. As you click through options in these dialogs, it will build the ‘ALTER USER’ script in the SQL panel, which can then be executed, or copied to the worksheet or to your .SQL file to be run at your discretion. A Few Clicks vs a Lot of Typing These dialogs won’t make you a DBA, but if you’re pressed for time and you’re already in SQL Developer, they can sure help you make up for lost time in just a few clicks!
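    If you’d rather ask the data dictionary directly, the same questions can be answered from a worksheet. A minimal sketch (these are the standard DBA_* dictionary views; you need privileges to query them):

        -- What system privileges make up the DBA role?
        SELECT privilege FROM dba_sys_privs WHERE grantee = 'DBA' ORDER BY privilege;

        -- Which other roles does DBA include?
        SELECT granted_role FROM dba_role_privs WHERE grantee = 'DBA';

        -- WHO has been granted DBA?
        SELECT grantee FROM dba_role_privs WHERE granted_role = 'DBA';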

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer’s Conference (PDC) in 2010 we announced an addition to the Computational Roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them. Virtualization Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can “host” or run several “virtual” computers. These virtual computers can run anywhere - including at a vendor’s location. Some companies refer to this as Cloud Computing since the hardware is operated and maintained elsewhere. IaaS The more detailed definition of this type of computing is called Infrastructure as a Service (IaaS) since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, which runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud. State-ful and Stateless Programming This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide. This is because much of the software developed in this environment maintains “state” - and that requires a little explanation. “State-ful programming” means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation - you expect that the other person picks up the phone, listens to you, and talks back all in a single unit of time. In “Stateless” computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you’re doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the “send” button (the network has the state) and you go to a meeting. The server receives the message and stores it on a mail program’s database (the mail server has the state now) and continues working on other mail. Finally, the other party logs on to their mail client and reads the mail (the other user has the state) and responds to it and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn’t maintain the state, each component does. This is the exact concept behind coding for Windows Azure. 
The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs as Windows Azure does, you have built-in redundancy and recovery. It’s just built into the design. The Difference Between Infrastructure Designs and Platform Designs When you simply take a physical server running software and virtualize it either privately or publicly, you haven’t done anything to allow the code to scale or have recovery. That all has to be handled by adding more code and more Virtual Machines that have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It’s also not as easy to deploy these VMs, and more importantly, you’re often charged on a longer basis to remove them. Your agility in IaaS is more limited. Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of Stateless programming. You don’t worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that isn’t what you want to manage, monitor, maintain or license. You don’t want to deploy an operating system - you want to deploy an application. You want your code to run, and you don’t care how it does that. Another benefit to PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there’s no provisioning, it just happens. And you can stop using them quicker - and the base code for your application does not have to change to make this happen. Windows Azure Roles and Their Use If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn’t involve a user interface you can add a Worker Role. They are just containers that act a certain way. I’ll provide more detail on those later. Note: That’s a general description, so it’s not entirely accurate, but it’s accurate enough for this discussion. So now we’re back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say, Linux, and deploy it to Windows Azure using this Role. But you can’t. That’s not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create the Private or Public Infrastructure as a Service. What the VM Role is actually designed to do is to allow you to have a great deal of control over the system where your code will run. Let’s take an example. You’ve heard about Windows Azure, and Platform programming. You’re convinced it’s the right way to code. But you have a lot of things you’ve written in another way at your company. Re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can code (or already have coded) the software to be “Stateless”; you just need more control over the place where the code runs. That’s the place where a VM Role makes sense. 
Recap Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft’s offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running Stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment of where that code runs.

  • Advice on how to move from a .NET developer role to a web developer role

    - by dermd
    I've been working primarily as a .NET developer for the past 4 years for a financial services company. I've worked on .NET 1.1, 2.0 and 3.5, and have done the 3.5 enterprise app developer cert (not that that's worth a whole lot!). Before that I worked as a Java developer, with a bit of Flex thrown in, for just over a year. My educational background is an electronic and computer engineering degree, a higher diploma in systems analysis as well as one in web development (this was mainly Java - JSP, Spring, etc.) and a science masters in software design and development. I really feel like a change and would like to move to a different field to experience something different. I've done some courses in RoR and played around with it a bit in my spare time. Similarly, I've done various web and mobile courses and done up some mobile webapps along with Android and iOS equivalents (I haven't tried pushing them up to the app stores yet, but it may be worth tidying them up and doing that). I currently work long hours, so I find it hard to find time to work on enough side projects to get a decent portfolio together. But when I do work on the web stuff I find it really enjoyable, so I think it's something I'd like to do full time. However, since my experience is pretty much all .NET and financial services, I find it very hard to get my foot in the door anywhere or get past a phone screen unless they're specifically looking for someone with .NET knowledge. What is the best way to move into a web development role without starting from scratch again? I do think a lot of the skills I have translate over, but I seem to just get matched with .NET jobs whenever I look around. Apart from JS, jQuery, HTML5 and Objective-C, are there any other technologies I should be looking into?

  • Role provider and Role management

    - by AspOnMyNet
    When the CacheRolesInCookie property is set to true in the Web.config file, role information for each user is stored in a cookie. When role management checks to see whether a user is in a particular role, the roles cookie is checked before the role provider is called to check the list of roles at the data source. The cookie is dynamically updated to cache the most recently validated role names. a) As far as I understand the above text, even though role management checks the roles cookie, the role provider still checks the list of roles at the data source? b) The above text talks about role management, which is invoked before the role provider is called. What class acts as the role manager? Thanks
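    For reference, the setting being quoted lives on the roleManager element in Web.config; the "role management" layer doing the cookie check is the static Roles class, wired up by RoleManagerModule, which falls back to the configured provider on a cache miss. A minimal configuration sketch (the provider name and timeout values here are illustrative):

        <roleManager enabled="true"
                     defaultProvider="AspNetSqlRoleProvider"
                     cacheRolesInCookie="true"
                     cookieName=".ASPXROLES"
                     cookieTimeout="30"
                     cookieProtection="All" />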

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in the Windows Azure platform. It’s low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on a worker role. Below are some benefits of using a worker role. - WAWS leverages IIS and IISNode to host Node.js applications, which run in x86 WOW mode. This reduces performance compared with x64 in some cases. - A WACS worker role does not need IIS, hence there’s no IIS restriction, such as the 8000 concurrent request limitation. - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features, which can be changed while the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription. - Since with a WACS worker role we start node by ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its executable file. This means that in Windows Azure we can have a worker role with “node.exe” and the Node.js source files, then start it in the Run method of the worker role entry class. Let’s create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute “node.exe” with our application code, we need to add “node.exe” to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the “Program Files\nodejs” folder, so we can navigate there and add “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, because it’s hosted by IIS and IISNode, and IISNode only accepts “server.js”. But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named “index.js” in the project root. Since we created a C# Windows Azure project, we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file and then rename it with the JavaScript extension. After we add these two files we should set their “Copy to Output Directory” property to “Copy Always” or “Copy if Newer”; otherwise they will not be included in the package when deployed. Let’s paste some very simple Node.js code into “index.js” as below. As you can see, I created a web server listening on port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role starts. This can be done in its Run method. I find the node.exe path and the entry JavaScript file name, then create a new process to run it. Our worker role will wait for the process to exit. 
If everything is OK, once our web server is up the process will stay there listening for incoming requests, and should not terminate. The code in the worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the compute emulator UI the worker role started and executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to Azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role, so we will open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use. In this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js always listens on TCP instead of HTTP. Then change “index.js” so our web server listens on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure, and in the browser we can see our Node.js website running on the WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registered 80 and mapped the 80 endpoint to 81, but our Node.js cannot detect this operation. So when it tries to listen on 80 it will fail, since 80 is already in use.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install the modules we need, and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to do some extra steps to make the modules work. Assume we plan to use “express” in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are just on disk, not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to Azure. Hence we need to add them to the project. In the Solution Explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that’s not enough. We also need to set all files in this module to “Copy always” or “Copy if newer”, so that they are uploaded to Azure with “node.exe” and “index.js”. This is a painful step, since there might be many files in a module. 
So I created a small tool which updates a C# project file, marking all of its items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, and it will set all items to “Copy always”. Then reload the worker role project. Now let’s change “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in the browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js on a worker role as well. Since we can control the version of Node.js, here we can use the x64 version of “node-sqlserver”. This is better than hosting Node.js on WAWS, which only supports x86. Just install the “node-sqlserver” module from NPM, copy “sqlserver.node” from the “Build\Release” folder to the “Lib” folder, include them in the worker role project, and run my tool to set them to “Copy always”. Finally, update “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. This should be changed so that our Node.js code can retrieve the endpoint, but I can tell you it won’t work here. In the next post I will describe another way to execute “node.exe” and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun All documents, related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction This will be the first of a number of posts on ASP.NET extensibility. At this moment I don’t know exactly how many there will be, and I only know a couple of subjects that I want to talk about, so more will come in the next few days. I have the sensation that the providers offered by ASP.NET are not widely known: although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won’t go into the details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction. Canonical These are the most widely known and used providers. Introduced with ASP.NET 2.0, chances are you have used them already. Good support for invoking client side, either from a .NET application or from JavaScript. Lots of server-side controls use them, such as the Login control for example. Membership The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, as well as parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine. 1: <membership defaultProvider="MyProvider"> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly"/> 4: </providers> 5: </membership> Role The Role provider assigns roles to authenticated users. The base class is RoleProvider, and there are three out of the box implementations: XML-based, SQL Server and Windows-based. Also registered in Web.config through the roleManager section, where you can also say if your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider - see the sketch below, after the Profile section. 1: <roleManager defaultProvider="MyProvider"> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly" /> 4: </providers> 5: </roleManager> Profile The Profile provider allows defining a set of properties that will be tied to and made available for authenticated users or even anonymous ones, the latter tracked by means of anonymous identification. The base class is ProfileProvider, and the only included implementation stores these settings in a SQL Server database. Configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations. 1: <profile defaultProvider="MyProvider"> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly"/> 4: </providers> 5: </profile>
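    To give an idea of what such a custom provider involves, here is a minimal sketch of a Role provider that serves roles from a hardcoded in-memory map instead of SQL Server (the class name and data are made up; a real provider would query your own store):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Security;

        public class StaticMapRoleProvider : RoleProvider
        {
            // Hypothetical role data; a real provider would read from its own storage.
            private static readonly Dictionary<string, string[]> UserRoles =
                new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase)
                {
                    { "alice", new[] { "Admin", "User" } },
                    { "bob",   new[] { "User" } }
                };

            public override string ApplicationName { get; set; }

            public override string[] GetRolesForUser(string username)
            {
                string[] roles;
                return UserRoles.TryGetValue(username, out roles) ? roles : new string[0];
            }

            public override bool IsUserInRole(string username, string roleName)
            {
                return GetRolesForUser(username).Contains(roleName, StringComparer.OrdinalIgnoreCase);
            }

            public override string[] GetAllRoles()
            {
                return UserRoles.Values.SelectMany(r => r).Distinct(StringComparer.OrdinalIgnoreCase).ToArray();
            }

            public override bool RoleExists(string roleName)
            {
                return GetAllRoles().Contains(roleName, StringComparer.OrdinalIgnoreCase);
            }

            public override string[] GetUsersInRole(string roleName)
            {
                return UserRoles.Where(kv => kv.Value.Contains(roleName, StringComparer.OrdinalIgnoreCase))
                                .Select(kv => kv.Key)
                                .ToArray();
            }

            public override string[] FindUsersInRole(string roleName, string usernameToMatch)
            {
                return GetUsersInRole(roleName)
                       .Where(u => u.IndexOf(usernameToMatch, StringComparison.OrdinalIgnoreCase) >= 0)
                       .ToArray();
            }

            // Write operations make no sense for a fixed map.
            public override void CreateRole(string roleName) { throw new NotSupportedException(); }
            public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
            public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
        }

    Registering it works exactly like the roleManager example above: add it under the roleManager section and make it the default provider.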
    Basic OK, I didn’t know what to call these, so Basic is probably as good a name as anything else. Not supported client-side (it doesn’t even make sense). Session The Session provider allows storing data tied to the current “session”, which is normally created when a user first accesses the site, even when not yet authenticated, and remains for the whole visit. The base class is SessionStateStoreProviderBase, and the included implementation is capable of storing data in one of three locations: In the process memory (the default, not suitable for web farms or increased reliability); A SQL Server database (best for reliability and clustering); The ASP.NET State Service, which is a Windows Service that is installed with the .NET Framework (OK for clustering). The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations – think, for example, of a distributed cache. 1: <sessionState customProvider="MyProvider"> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly" /> 4: </providers> 5: </sessionState> Resource A not so well known provider, it allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from some place else, such as a database. The base class is ResourceProviderFactory and there’s only one internal implementation, which uses these RESX files. Configuration is through the globalization section. 1: <globalization resourceProviderFactoryType="MyClass, MyAssembly" /> Health Monitoring Health Monitoring is also probably not so well known, and actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires “events” at specific times and when specific things happen, such as when a user logs in or an exception is raised. These are not user interface events; you can create your own and fire them, and by themselves nothing will happen, but the Health Monitoring provider will detect them. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out of the box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 Trace infrastructure or the debugger Trace. Its configuration is achieved through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time. 1: <healthMonitoring> 2: <providers> 3: <add name="MyProvider" type="MyClass, MyAssembly"/> 4: </providers> 5: </healthMonitoring> Sitemap The Sitemap provider allows defining the site’s navigation structure and associated required permissions for each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it, by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply your own source for the site’s structure, which may even be dynamic. Its configuration must be done through the siteMap section. 1: <siteMap defaultProvider="MyProvider"> 2: <providers><add name="MyProvider" type="MyClass, MyAssembly" /> 3: </providers> 4: </siteMap> Web Part Personalization Web Parts are better known by SharePoint users, but since ASP.NET 2.0 they are included in the core Framework. 
Web Parts are server-side controls that offer certain configuration possibilities to the clients visiting the page where they are located. The infrastructure handles this configuration per user or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider, and the only included implementation stores settings in SQL Server. Add new providers through the personalization section. 1: <webParts> 2: <personalization defaultProvider="MyProvider"> 3: <providers> 4: <add name="MyProvider" type="MyClass, MyAssembly"/> 5: </providers> 6: </personalization> 7: </webParts> Build The Build provider is responsible for compiling whatever files are present in your web folder. There’s a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML Schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file so that you have strong typing at development time. Configuration goes in the buildProviders section and it is per extension. 1: <buildProviders> 2: <add extension=".ext" type="MyClass, MyAssembly" /> 3: </buildProviders> New in ASP.NET 4 Not exactly new, since they have existed since 2010, but in ASP.NET terms, still new. Output Cache The Output Cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only implementation is private. Configuration goes in the outputCache section, and on each page and web user control you can choose the provider you want to use. 1: <caching> 2: <outputCache defaultProvider="MyProvider"> 3: <providers> 4: <add name="MyProvider" type="MyClass, MyAssembly"/> 5: </providers> 6: </outputCache> 7: </caching> Request Validation A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is extensible request validation, by means of a Request Validation provider. This means we are not limited to either enabling or disabling request validation for all pages or for a specific page, but we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes in the httpRuntime section. 1: <httpRuntime requestValidationType="MyClass, MyAssembly" /> Browser Capabilities The Browser Capabilities provider is new in ASP.NET 4, although the concept has existed since ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section. 1: <browserCaps provider="MyClass, MyAssembly" /> Encoder The Encoder provider is responsible for encoding every string that is sent to the browser in a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder. 
Encoder

The Encoder provider is responsible for encoding every string that is sent to the browser in a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder. Another implementation takes care of Anti Cross-Site Scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to apply some additional processing to these strings. The configuration goes in the httpRuntime section.

    <httpRuntime encoderType="MyClass, MyAssembly" />

Conclusion

That's about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!

    Read the article

  • Oracle Identity Manager Role Management With API

    - by mustafakaya
As an administrator, you use roles to create and manage the records of a collection of users to whom you want to permit access to common functionality, such as access rights, roles, or permissions. Roles can be independent of an organization, span multiple organizations, or contain users from a single organization. Using roles, you can:

- View the menu items that the users can access through the Oracle Identity Manager Administration Web interface.
- Assign users to roles.
- Assign a role to a parent role.
- Designate status to the users so that they can specify defined responses for process tasks.
- Modify permissions on data objects.
- Designate role administrators to perform actions on roles, such as enabling members of another role to assign users to the current role, revoking members from the current role, and so on.
- Designate provisioning policies for a role. These policies determine if a resource object is to be provisioned to or requested for a member of the role.
- Assign or remove membership rules to or from the role. These rules determine which users can be assigned/removed as direct members to/from the role.

In this post, I will share some examples of role management with the Oracle Identity Manager API. For role operations, you can use the Thor.API.Operations.tcGroupOperationsIntf interface.

    tcGroupOperationsIntf service = getClient().getService(tcGroupOperationsIntf.class);

Assign a user to a role:

    public void assignRoleByUsrKey(String roleName, String usrKey) throws Exception {
        Map<String, String> filter = new HashMap<String, String>();
        filter.put("Groups.Role Name", roleName);
        tcResultSet role = service.findGroups(filter);
        String groupKey = role.getStringValue("Groups.Key");
        service.addMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
    }

Revoke a user from a role:

    public void revokeRoleByUsrKey(String roleName, String usrKey) throws Exception {
        Map<String, String> filter = new HashMap<String, String>();
        filter.put("Groups.Role Name", roleName);
        tcResultSet role = service.findGroups(filter);
        String groupKey = role.getStringValue("Groups.Key");
        service.removeMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
    }

Get all members of a role:

    public List<User> getRoleMembers(String roleName) throws Exception {
        List<User> userList = new ArrayList<User>();
        Map<String, String> filter = new HashMap<String, String>();
        filter.put("Groups.Role Name", roleName);
        tcResultSet role = service.findGroups(filter);
        String groupKey = role.getStringValue("Groups.Key");
        tcResultSet members = service.getAllMemberUsers(Long.parseLong(groupKey));
        for (int i = 0; i < members.getRowCount(); i++) {
            members.goToRow(i);
            long userKey = members.getLongValue("Users.Key");
            User member = oimUserManager.findUserByUserKey(String.valueOf(userKey));
            userList.add(member);
        }
        return userList;
    }

About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, addicted to learning new technologies and developing new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
In this post we will see how a role can be data-secured by multiple Business Units (BUs). Separate data roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that, when mapping these roles in HCM Role Provisioning Rules, fewer entries need to be made. This can facilitate maintenance for enterprises with a large number of Business Units. Note: the example below applies equally if the securing entity is Inventory Organization.

Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" data role in Fusion Applications. This user will be able to access invoices in Vision Operations but will not be able to see invoices in Vision Germany.

Figure 1. A user with a data role restricting them to data from BU: Vision Operations

With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP invoices.

Figure 2. The list of values of Business Units is limited to a single one. This is the effect of the data role granted to that user, as seen in Figure 1.

In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups those Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU view. That condition will later be used to create a data policy for our newly created role. The BU view is a database resource and is accessed from APM, as seen in the search below.

Figure 3. Viewing a database resource in APM

The next step is to create a new condition, in which we define a SQL predicate that includes the two BUs (the IDs below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected.

Figure 4. Custom Role that inherits the Purchase Order Overview Duty

We are now ready to create our data policy. In APM, we search for our newly created role and navigate to "Find Global Policies". We query the role we want to secure and navigate to view its global policies.

Figure 5. The job role we plan on securing

We can see that the role was not defined with a data policy, so we will create one that uses the condition we created earlier.

Figure 6. Creating a new data policy

In the General Information tab, we have to specify the DB resource that the security policy applies to: in our case, this is the BU view.

Figure 7. Data policy definition - selection of the DB resource we will secure by

In the Rules tab, we make the rule applicable to multiple values of the DB resource we selected in the previous tab. This is where we associate the condition we created against the BU view with this data policy, by entering the condition name in the Condition field.

Figure 8. Data policy rule

The last step of defining the data policy consists of explicitly selecting the actions that are governed by this data policy. In this case, for example, we select the actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured data role.

Figure 9. Data policy actions

We can now see a new data policy associated with our role.

Figure 10. Role is now secured by a data policy

We now assign that new role to the user.
Of course this does not have to be done in OIM; it can be done using a provisioning rule in HCM.

Figure 11. Role assigned to the user who was previously granted the Vision Operations-secured role.

Once that user accesses the Invoices work area, this is what they see: in the image below, the list of values of Business Unit returns the two values defined in our data policy, namely Vision Operations and Vision Germany.

Figure 12. The list of values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user, as seen in Figure 11.

    Read the article

  • EMERGENCY - Major Problems After Perl Module Installed via WHM

    - by Russell C.
I attempted to install the perl module Net::Twitter::Role::API::Lists using WHM, and after doing so my whole site came down. It seems that something that was updated with the install isn't functioning correctly, and since our website is written in Perl, none of our site scripts will run. In almost 8 years of working with Perl I've never had any issues arise after installing a perl module, so I have no idea how to even start troubleshooting. The error I see when trying to compile any of our Perl scripts is below. I'd appreciate any advice on what might be wrong and steps on how I can go about resolving it. Thanks in advance for your help! Attribute (+type_constraint) of class MooseX::AttributeHelpers::Counter has no associated methods (did you mean to provide an "is" argument?) at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0x9ae35b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4d7718)', 'Moose::Meta::Attribute=HASH(0x9ae35b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'undef', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 10 require MooseX/AttributeHelpers/Counter.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 23 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6
MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 Attribute (+default) of class MooseX::AttributeHelpers::Counter has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4df4b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4d7718)', 'Moose::Meta::Attribute=HASH(0xa4df4b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'undef', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 10 require MooseX/AttributeHelpers/Counter.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 23 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 Attribute (+type_constraint) of class MooseX::AttributeHelpers::Number has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4ea48c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4f778c)', 'Moose::Meta::Attribute=HASH(0xa4ea48c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'undef', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 9 require MooseX/AttributeHelpers/Number.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 24 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 Attribute (+default) of class MooseX::AttributeHelpers::Number has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4f7804)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4f778c)', 'Moose::Meta::Attribute=HASH(0xa4f7804)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'undef', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 9 require MooseX/AttributeHelpers/Number.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 24 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 Attribute (+type_constraint) of class MooseX::AttributeHelpers::String has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4fdae0)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4fd5c4)', 'Moose::Meta::Attribute=HASH(0xa4fdae0)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4fd5c4)', 'undef', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4fd5c4)', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4fd5c4)', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 10 require MooseX/AttributeHelpers/String.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 25 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_per

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wednesday, April 11th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING's drivers and rationale for the platform approach, the phased implementation strategy, results & metrics, roadmap and recommendations. We greatly appreciate the insight he shared with us all on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager, Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that cater to expediting and streamlining compliance processes such as access certifications.

While we tackled a few questions during the webcast, we have captured the responses to those that we weren't able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING's implementation and strategy.

Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
A. The new and old managers are in the workflow. The tool can check for any Separation of Duties (SOD) violations when both provide similar access. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

Q. What versions of OIM and OIA are being used at ING?
A. OIM 11gR1 and OIA 11gR1; the very latest versions available.

Q. Are you using an entitlements / role catalog?
A. Yes. We use both roles and entitlements.

Q. What specific unexpected benefits did the Identity Warehouse provide ING?
A. The most unanticipated was to help Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

Q. How fine-grained are your applications and entitlements? Did OIA, OIM support that level of granularity?
A. We have some very fine-grained entitlements, but we roll these up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers the Oracle Entitlement Server. We currently do not own this software but are considering it.

Q. Do you allow any individual access, or is everything truly role-based?
A. We are a hybrid environment with roles and individual positive and negative entitlements.

Q. Did you use an Agile methodology like Scrum to deliver functionality during your project?
A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.

Q. How did you handle rolling out the standard ID format to existing users?
A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement types? For example, an app may have different levels of access, and it may need to know the user's country/state to associate them with particular customers.
A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet and interview the business. It is an iterative process to get role consensus.

Q. Great presentation! How many rounds of certifications has ING performed so far?
A. Around 7 quarters, plus constant certifications on transfer.

Q. Did you have executive support from the top down?
A. Yes. The executive support was key to our success.

Q. For your cloud instance, are you using OIA or OIM as SaaS?
A. No. We are just provisioning and deprovisioning to various cloud providers. (ServiceNow is an example.)

Q. How do you ensure a role owner does not get more privileges than intended and thus violate another role? E.g., a DBA role should not get the right to run something as root, as this would affect the root role.
A. We have SOD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

Q. What is your ratio of employees to roles?
A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good, where we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

Q. Is ING using the Oracle On Demand service?
A. No.

Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the Identity Warehouse as well if you haven't reached OIM yet?
A. No, OIM deployment is not required to implement OIA's Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying both OIA and OIM together.

Q. When is the Security Governor product coming out?
A. Oracle Security Governor for Healthcare is available today.

Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform webcast series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register Today. You can also register for a live event at a city near you, where Aberdeen's Derek Brink will discuss the survey results from the recently published report "Analyzing Platform vs. Point Solution Approach in Identity". And you can do a quick (& free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here's the slide deck from our ING webcast: ING webcast platform

    Read the article

  • iphone app with role based login?

    - by chaitanya
Can iPhone apps have role-based login? In my application I have to display content according to the role of the user (employee, visitor). So far I haven't seen any iPhone app with role-based login. Can I develop a role-based login? Is there any restriction on Apple's side for this kind of login when approving the app?

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure.

SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. For an introduction to SQL Azure, check out the post by Pinal here.

In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. The Azure worker role can run background processes, which makes it the ideal tool for handling processes such as synchronization and backup.

First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:

- Scaling out access over multiple databases enables us to handle workloads efficiently.
- As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data.

Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:

- To back up a SQL Azure database on local infrastructure.
- Rather than investing in local infrastructure for increased workloads, such workloads can be handled by the cloud.
- The ability to extend data to datacenters located across the world, to enable efficient data access from remote locations.

Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch.

If you newly add a worker role to the cloud-based project, this is what the code will look like. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf)

Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned: the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one-time step, and the code for it goes in the Setup() function, which is called once when the worker role starts. The second step is the continuous (or on-demand) synchronization of the SQL Azure databases, achieved by propagating changes between them. This is done on a continuous basis by calling the Sync() function in the while loop; the code logic that synchronizes changes between the SQL Azure databases goes in the Sync() function. Discussing the coding part step by step is out of the scope of this article.
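Still, a minimal sketch of that skeleton helps to visualize the shape described above. This is an illustration, not the code from the linked PDF: the method bodies are placeholders and the ten-minute polling interval is an assumption.

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            Setup(); // one-time provisioning: tracking tables, triggers, metadata

            while (true)
            {
                Sync(); // propagate changes between the SQL Azure databases
                Thread.Sleep(TimeSpan.FromMinutes(10)); // illustrative polling interval
            }
        }

        private void Setup()
        {
            // Provision both databases for synchronization (Sync Framework 2.1).
        }

        private void Sync()
        {
            // Synchronize changes between the SQL Azure databases.
        }
    }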
For a full step-by-step treatment, let me suggest a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here). Further, you will reference some libraries before you start coding. Details regarding these are available in the article that I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here.

Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available in CTP; it allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code. However, in some cases, the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you wish to have such functionality now, you have the option of developing custom code using the Sync Framework.

Now, this code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf)

Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server Agent for a SQL Server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a "lightweight SQL Server Agent for SQL Azure"! Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role!

If you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, and we need to store the state of the job ourselves; the choices you have are SQL Azure and Azure tables. Note that this solution requires custom code and is not UI driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that the SQL Server Agent provides, but it does open up an interesting avenue for scheduling some tasks, such as backup and synchronization of SQL Azure databases, by writing custom code in the Azure worker role.

Now, let us see one more possibility – i.e., running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload them locally, then consider the data transfer cost. If you upload them to blobs residing in the same datacenter, then no transfer cost applies, but the cost of the blob storage applies. So, before choosing an option, you need to evaluate your preferences, keeping the cost associated with each option in mind.

In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be developed. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles.
So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or mail me at Paras[at]student-partners[dot]com Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Windows Azure Learning Plan - Compute

    - by BuckWoody
This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the "compute" function of Windows Azure, which includes Configuration Files, the Web Role, the Worker Role, and the VM Role. There is a general programming guide for Windows Azure that you can find here to help with the overall process.

Configuration Files

Configuration files define the environment for a Windows Azure application, similar to an ASP.NET application. This section explains how to work with them.

- General Introduction and Overview: http://blogs.itmentors.com/bill/2009/11/04/configuration-files-and-windows-azure/
- Service Definition File Schema: http://msdn.microsoft.com/en-us/library/ee758711.aspx
- Service Configuration File Schema: http://msdn.microsoft.com/en-us/library/ee758710.aspx

Windows Azure Web Role

The Web Role runs code (such as ASP pages) that requires a user interface.

- Web Role "Boot Camp" Video: https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470854&CountryCode=US
- Web Role Deployment Checklist: http://blogs.infragistics.com/blogs/anton_staykov/archive/2010/06/30/windows-azure-web-role-deployment-checklist.aspx
- Using a Web Role as a Worker Role for Small Applications: http://www.31a2ba2a-b718-11dc-8314-0800200c9a66.com/2010/12/how-to-combine-worker-and-web-role-in.html

Windows Azure Worker Role

The Worker Role is used for code that does not require a direct user interface.

- Worker Role "Boot Camp" Video: https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032470871&CountryCode=US
- Worker Roles versus Web Roles: http://msdn.microsoft.com/en-us/library/gg433012.aspx
- Deploying other applications (like Java) in a Windows Azure Worker Role: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx

Windows Azure VM Role

The Windows Azure VM Role is an operating-system-level mechanism for code deployment.

- VM Role Overview and Details: http://msdn.microsoft.com/en-us/library/gg465398.aspx
- The proper use of the VM Role: http://blogs.msdn.com/b/buckwoody/archive/2010/12/28/the-proper-use-of-the-vm-role-in-windows-azure.aspx

    Read the article

  • ASP.Net forms authentication - multiple providers

    - by Chris Klepeis
I have an ASP.NET 4.0 application, and within it is a folder called "Forum", set up as a sub-application in IIS 7. This forum package implements a custom provider for .NET membership, and the forum runs on .NET 3.5. I'd like to set up the main site so that when users log in, it logs them into both my site and the forum site. The main site and the forum have separate .NET membership tables.

How can I specify which provider to use with forms authentication? Right now I have FormsAuthentication.SetAuthCookie(...); this, however, just uses my default provider and does nothing with the provider for the forum. I tried setting the forum app and my web app to have the same cookie name, as well as setting the machineKey on each: <machineKey validationKey="AutoGenerate" validation="SHA1" /> — no dice. I googled and didn't really come up with any example of how to use multiple providers like I want to. I updated my web.config to have both providers, but this is useless if I cannot specify in my code which one to use.
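For what it's worth, the membership API does let you address a non-default provider by name. Here is a minimal sketch, assuming two providers named "SiteProvider" and "ForumProvider" are registered in web.config (both names are placeholders):

    using System.Web.Security;

    public static class DualLogin
    {
        // Validate against both named membership providers, then issue one
        // forms-authentication ticket for the shared cookie.
        public static bool LoginToBoth(string userName, string password)
        {
            MembershipProvider site  = Membership.Providers["SiteProvider"];
            MembershipProvider forum = Membership.Providers["ForumProvider"];

            if (site.ValidateUser(userName, password) &&
                forum.ValidateUser(userName, password))
            {
                FormsAuthentication.SetAuthCookie(userName, false);
                return true;
            }
            return false;
        }
    }

Whether that single cookie is honored by both applications still depends on matching cookie names, paths, and machine keys across the two apps.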

    Read the article

  • AWS EC2 - How to specify an IAM role for an instance being launched via awscli

    - by Skaperen
    I am using the "aws ec2 run-instances" command (from the awscli package) to launch an instance in AWS EC2. I want to set an IAM role on the instance I am launching. The IAM role is configured and I can use it successfully when launching an instance from the AWS web UI. But when I try to do this using that command, and the "--iam-instance-profile" option, it failed. Doing "aws ec2 run-instances help" shows Arn= and Name= subfields for the value. When I try to look up the Arn using "aws iam list-instance-profiles" it gives this error message: A client error (AccessDenied) occurred: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/shell/i-15c2766d is not authorized to perform: iam:ListInstanceProfiles on resource: arn:aws:iam::xxxxxxxxxxxx:instance-profile/ (where xxxxxxxxxxxx is my AWS 12-digit account number) I looked up the Arn string via the web UI and used that via "--iam-instance-profile Arn=arn:aws:iam::xxxxxxxxxxxx:instance-profile/shell" on the run-instances command, and that failed with: A client error (UnauthorizedOperation) occurred: You are not authorized to perform this operation. If I leave off the "--iam-instance-profile" option entirely, the instance will launch but it will not have the IAM role setting I need. So the permission seems to have something to do with using "--iam-instance-profile" or accessing IAM data. I repeated several times in case of AWS glitches (they happen sometimes) and no success. I suspected that perhaps there is a restriction that an instance with an IAM role is not allowed to launch an instance with a more powerful IAM role. But in this case, the instance I am doing the command in has the same IAM role that I am trying to use. named "shell" (though I also tried using another one, no luck). Is setting an IAM role not even permitted from an instance (via its IAM role credentials)? Is there some higher IAM role permission needed to use IAM roles, than is needed for just launching a plain instance? Is "--iam-instance-profile" the appropriate way to specify an IAM role? Do I need to use a subset of the Arn string, or format it in some other way? Is it possible to set up an IAM role that can do any IAM role accesses (maybe a "Super Root IAM" ... making up this name)? FYI, everything involves Linux running on the instances. Also, I am running all this from an instance because I could not get these tools installed on my desktop. That and I do not want to put my IAM user credentials on any AWS storage as advised by AWS here. after answered: I did not mention the launching instance permission of "PowerUserAccess" (vs. "AdministratorAccess") because I did not realize additional access was needed at the time the question was asked. I assumed that the IAM role was "information" attached to the launch. But it really is more than that. It is a granting of permission.

    Read the article

  • Windows Azure Evolution – Caching (Preview)

    - by Shaun
    Caching is a popular topic when we build high-performance, highly scalable systems, on cloud platforms as well as in on-premise environments. In March 2011 Windows Azure AppFabric Caching was launched into production, providing an in-memory, distributed caching service in the cloud. Now, in the June 2012 update, the cache team has announced a brand-new caching solution for Windows Azure, called Windows Azure Caching (Preview), and the original Windows Azure AppFabric Caching has been renamed Windows Azure Shared Caching.

What's Caching (Preview)
If you have been using Shared Caching you know it is built from a pool of cache servers. To use it, you first create a cache account from the developer portal and specify the size you want, i.e. how much memory you can use to store the data you want cached. You can then add, get and remove cached items in your code through the cache URL. Shared Caching is a multi-tenant system which hosts cached items for all users, so you don't know which server your data is located on. This caching mode works well and covers most cases, but it has some problems. The first is performance. Since Shared Caching is multi-tenant, every cache operation goes through the Shared Caching gateway and is then routed to the server that holds the data you are looking for. Even with internal caching inside the Shared Caching system, each call still takes time to travel between your cloud service and the cache service. Second, the Shared Caching service is a black box to the developer. The only thing we know is the cache endpoint, and that's all. Some may be satisfied with that, since they don't want to care about anything underneath, but if you need to know more and want more control, that's impossible with Shared Caching. The last problem is price and cost-efficiency. You pay based on how much cache you request per month, yet when we host a web role or worker role it seldom consumes all of the memory and CPU in the virtual machine (service instance). With Shared Caching we pay for the cache service while wasting some of our own memory and CPU locally. Because of these issues, Microsoft now offers a new caching mode: Caching (Preview). Instead of a separate cache service, Caching (Preview) leverages the memory and CPU of our cloud services (web roles and worker roles) as the cache cluster. Hence Caching (Preview) runs on the virtual machines which host, or sit near, our cloud applications. With no gateway or routing, and located in the same data center and the same racks, it provides far better performance than Shared Caching. Caching (Preview) works side by side with our application; it is initialized and runs as a Windows service in the virtual machines, invoked by startup tasks from our roles, so we get more insight into it and more control over it. And since Caching (Preview) utilizes the memory and CPU of our existing cloud services, it is free: what we pay is the original compute price, and the resources on each machine are used more efficiently.

Enable Caching (Preview)
It's very simple to enable Caching (Preview) in a cloud service. Let's create a new Windows Azure cloud project in Visual Studio and add an ASP.NET Web Role. Then open the role settings and select the Caching page.
This is where we enable and configure Caching (Preview) on a role. To enable it, just tick the "Enable Caching (Preview Release)" check box. Then we need to specify which caching cluster mode we want to use. There are two caching modes, co-located and dedicated. Co-located mode means we use the memory of the instances that run our cloud service (web role or worker role). With this mode we must specify what percentage of the memory will be used as cache; the default value is 30%, so make sure this does not affect the role's business execution. Dedicated mode uses all the memory in the virtual machine as cache. In fact it reserves some for the operating system, Azure hosting and so on, but it tries to use as much of the available memory as possible for the cache. As you can see, Caching (Preview) is defined per role, which means all instances of a role apply the same setting and act as a single cache pool, and you consume it by specifying the name of the role, which I will demonstrate later. In a Windows Azure project we can have more than one role with Caching (Preview) enabled, which gives us more than one cache. For example, let's say I have a web role with 30% co-located caching and a worker role with dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I have two caches. As the figure above shows, cache 1 is contributed by the three web role instances while cache 2 is contributed by the 2 worker role instances. We can then add items to cache 1 and retrieve them from both web role code and worker role code, but items stored in cache 1 cannot be retrieved from cache 2, since the caches are isolated from each other. Back in Visual Studio, we specify 30% co-located cache and use the local storage emulator to store the cache cluster's runtime status. At the bottom we could define named caches; for now we just use the default one. We have now enabled Caching (Preview) in our web role settings. Next, let's look at how to consume the cache.

Consume Caching (Preview)
Caching (Preview) can only be consumed by roles in the same cloud service. As I mentioned earlier, a cache contributed by a web role can be reached from a worker role if they are in the same cloud service, but you cannot consume a Caching (Preview) cache from another cloud service. This is different from Shared Caching, which is open to any service that has the connection URL and authentication token. To consume Caching (Preview) we need to add some references to our project as well as some configuration in the Web.config, and NuGet makes this easy. Right-click the web role project, select "Manage NuGet Packages", and search for the package "WindowsAzure.Caching". In the package list, install "Windows Azure Caching Preview". It downloads all necessary references from the NuGet repository and updates our Web.config as well. Open the Web.config of the web role and find the "dataCacheClients" node. Under this node we specify the cache clients we are going to use; each cache client uses a role name to identify and find its cache. Since only this web role has Caching (Preview) enabled, I pasted the current role name into the configuration, as in the sketch below. Then, in the default page, I will add some code to show how to use the cache.
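For reference, here is a minimal sketch of what that dataCacheClients section can look like; the identifier attribute carries the role name, and "WebRole1" is an assumed name for illustration:

<dataCacheClients>
  <dataCacheClient name="default">
    <!-- autoDiscover points this client at the cache cluster
         hosted by the role named in "identifier" -->
    <autoDiscover isEnabled="true" identifier="WebRole1" />
  </dataCacheClient>
</dataCacheClients>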
I will have a textbox on the page where the user can enter a name, then press a button to generate an email address for that name. In the backend code I check whether this name has already been added to the cache. If yes, I return the email immediately; otherwise I put the thread to sleep for 2 seconds to simulate latency, then add the entry to the cache and return it to the page.

protected void btnGenerate_Click(object sender, EventArgs e)
{
    // check if name is specified
    var name = txtName.Text;
    if (string.IsNullOrWhiteSpace(name))
    {
        lblResult.Text = "Error. Please specify name.";
        return;
    }

    bool cached;
    var sw = new Stopwatch();
    sw.Start();

    // create the cache factory and cache
    var factory = new DataCacheFactory();
    var cache = factory.GetDefaultCache();

    // check if the name specified is in cache
    var email = cache.Get(name) as string;
    if (email != null)
    {
        cached = true;
        sw.Stop();
    }
    else
    {
        cached = false;
        // simulate the latency
        Thread.Sleep(2000);
        email = string.Format("{0}@igt.com", name);
        // add to cache
        cache.Add(name, email);
    }

    sw.Stop();
    lblResult.Text = string.Format(
        "Cached = {0}. Duration: {1}s. {2} => {3}",
        cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
}

Caching (Preview) can be used on the local emulator, so we just press F5. The first time I enter my name it takes about 2 seconds to get the email back, since it is not in the cache; if I re-enter my name it comes back at once from the cache. Since Caching (Preview) is distributed across all instances of the role, we can scale it out by scaling out our web role. Just use 2 instances, tweak some code to show the current instance ID on the page, and try again. We can then see the cached item retrieved even though it was added by another instance.

Consume Caching (Preview) Across Roles
As I mentioned, Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let's add another web role to the cloud solution and add the same code to its default page. In its Web.config we point the cache client at the cache enabled in the first role, by specifying that role's name. Then we start the solution locally and go to web role 1, enter the name, and let it generate the email. Since there is no cache entry for this name, it takes about 2 seconds but saves the email into the cache. We then go to web role 2 and enter the same name; you can see it retrieve the email saved by web role 1 and return it very quickly. Finally, we can upload the application to Windows Azure and test again. Make sure you have changed the cache cluster status storage account to a real Azure account.
Then if one of the instances fails, the data can be retrieved from its backup instance. To enable backup, open the Caching page in Visual Studio and, for the named cache you want to protect, change the Backup Copies value from 0 to 1. The only valid values for Backup Copies are 0 and 1: "0" means no backup and no high availability, while "1" enables high availability and backs the data up to another instance. There are a few things to keep in mind when using the high availability feature. First, high availability does NOT mean the data in the cache can never be lost. For example, if we have a cache-enabled role with 10 instances and 9 of them fail, most of the cached data will be lost, since the primary and backup instances may fail together. Normally, though, this will not happen, since Microsoft guarantees that the backup copy is placed on an instance in a different fault domain. Second, enabling backup means storing two copies of your data: if 100 MB of memory is enough for your cache, you need at least 200 MB once backup is enabled. Besides high availability, Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than Windows Azure Shared Caching does. It supports a local cache with notifications, and it supports both absolute and sliding-window expiration. Caching (Preview) also supports the Memcached protocol, which means that if you have an application based on Memcached, you can use Caching (Preview) without any code changes; all you need to change is the configuration for how you connect to the cache. As with Windows Azure Shared Caching, Microsoft also offers an out-of-the-box ASP.NET session state provider and output cache provider on top of Caching (Preview).

Summary
Caching is a very important component when we build cloud-based applications. In the June 2012 update Microsoft provides a new cache solution named Caching (Preview). Unlike the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud, which gives more control, more performance and more cost-efficiency. So now we have two caching solutions in Windows Azure, Shared Caching and Caching (Preview). If you need a central cache service that can be used by many cloud services and web sites, you have to use Shared Caching. But if you only need a fast distributed cache running close to your application, Caching (Preview) is the better choice.

Hope this helps, Shaun
All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Running Jetty under Windows Azure Using RoleEntryPoint in a Worker Role

    - by Shawn Cicoria
    This post builds upon the work of Mario Kosmiskas and David C. Chou's prior postings, from here: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx  http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx As Mario points out in his post, when you need more control over the process that starts, it is generally better handled through a RoleEntryPoint, which as of now requires a CLR-based assembly deployed as part of the package to Azure. There were things I especially liked about Mario's post: specifically, the ability to pull down the JRE and Jetty runtimes at role startup and instantiate the process using the extracted bits. The way Mario initialized the Java process (and Jetty) was to take advantage of a role startup task configured as part of the service definition. This is a great, quick way to kick off processes or tasks prior to your role entry point. However, if you need access to service configuration values or role events, that's where RoleEntryPoint comes in. For this PoC sample I moved the logic for retrieving the JRE and Jetty bits to the worker role's OnStart, in addition to moving the process kickoff there. The Run method at this point just loops and reports the status of the Java process. Beyond making things more parameterized, Mario's and David's articles still form the essence of the approach. The solution that accompanies this post provides all the necessary .NET-based Visual Studio projects. In addition, you'll need: 1. Jetty 7 runtime http://www.eclipse.org/jetty/downloads.php 2. JRE http://www.oracle.com/technetwork/java/javase/downloads/index.html Once you have these, the first step is to create archives (zips) of the distributions. For this PoC, the structure of the archives requires that the roots of the archives look as follows: JRE6.zip jetty---.zip Upload the contents to a storage container (block blob); for this example I used /archives as the location. The service configuration has several settings that provide these values through Azure's native configuration support in a worker role, which is the advantage of using RoleEntryPoint. Storage Explorer: You can use development storage for testing this out; the zipped version of the solution is configured for development storage. When you're ready to deploy, you update two settings: one for diagnostics and the other for the storage container where the /archives are stored.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="HostedJetty" osFamily="2" osVersion="*">
  <Role name="JettyWorker">
    <Instances count="1" />
    <ConfigurationSettings>
      <!--<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />-->
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="JettyArchive" value="jetty-distribution-7.3.0.v20110203b.zip" />
      <Setting name="StartRole" value="true" />
      <Setting name="BlobContainer" value="archives" />
      <Setting name="JreArchive" value="jre6.zip" />
      <!--<Setting name="StorageCredentials" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"/>-->
      <Setting name="StorageCredentials" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

For interacting with storage you can use several tools; one that I like is from the Windows Azure CAT team, located here: http://appfabriccat.com/2011/02/exploring-windows-azure-storage-apis-by-building-a-storage-explorer-application/ and shown in the prior picture. At runtime, during role initialization and startup, Azure calls into your RoleEntryPoint. At that time the code dynamically pulls down the 2 archives and extracts them, using the Sharp Zip Lib as Mario demonstrated in his sample. The only difference here is the use of CLR code vs. PowerShell (which is really CLR, but that's another discussion). At this point, once the 2 zips are extracted, the role's file system looks as follows: (Worker Role approot) From there, the OnStart method (which also does the download and unzip using a simple StorageHelper class) kicks off the Java path, and now you have Java! (Task Manager, Jetty Sample Page) A couple of things I'm working on to enhance this: extracting the JRE and Jetty bits not to the appRoot but to a resource location defined as part of the service definition. ServiceDefinition.csdef:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="HostedJetty" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="JettyWorker">
    <Imports>
      <Import moduleName="Diagnostics" />
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
    <Endpoints>
      <InputEndpoint name="JettyPort" protocol="tcp" port="80" localPort="8080" />
    </Endpoints>
    <LocalResources>
      <LocalStorage name="Archives" cleanOnRoleRecycle="false" sizeInMB="100" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

As the concept matures, being able to dynamically update the content or jar files in a running Java solution becomes possible through continued enhancement of this simple model. The Visual Studio 2010 Solution is located here: HostingJavaSln_NDA.zip
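To make the pattern concrete, here is a rough sketch of what such a RoleEntryPoint can look like. This is an editorial illustration, not code from the accompanying solution: the StorageHelper signature, setting names, and paths are assumptions.

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class JettyWorker : RoleEntryPoint
{
    private Process _javaProcess;

    public override bool OnStart()
    {
        // Download and extract the JRE and Jetty archives named in the
        // service configuration (StorageHelper stands in for a small
        // wrapper around the blob client and Sharp Zip Lib).
        var root = Environment.GetEnvironmentVariable("RoleRoot") + @"\approot";
        StorageHelper.DownloadAndExtract(
            RoleEnvironment.GetConfigurationSettingValue("JreArchive"), root);
        StorageHelper.DownloadAndExtract(
            RoleEnvironment.GetConfigurationSettingValue("JettyArchive"), root);

        // Kick off Jetty under the extracted JRE.
        _javaProcess = Process.Start(new ProcessStartInfo
        {
            FileName = Path.Combine(root, @"jre6\bin\java.exe"),
            Arguments = "-jar start.jar",
            WorkingDirectory = Path.Combine(root, "jetty"),
            UseShellExecute = false
        });

        return base.OnStart();
    }

    public override void Run()
    {
        // Loop and report the status of the java process.
        while (true)
        {
            Trace.TraceInformation("Jetty exited: {0}", _javaProcess.HasExited);
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}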

    Read the article

  • Adding a Role to a Responsibility for Use with the Oracle E-Business Suite SDK for Java JAAS Implementation

    - by Juan Camilo Ruiz
    This new post in the series on ADF integration with Oracle E-Business Suite was written by Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team. Following a previous post in the series, a reader asked what to do if you have an existing responsibility assigned to lots of users, instead of the UMX role that the Oracle E-Business Suite SDK for Java JAAS Implementation requires. It would be tedious to assign a new role directly to hundreds or thousands of users, so naturally we'd like to avoid that if possible. Most people don't know this, but it's possible to assign a UMX role to a responsibility in Oracle User Management. Once you do, users with that responsibility will all inherit your UMX role automatically, and you can then proceed with using your UMX role with JAAS for ADF. Here is how to assign a UMX role to a responsibility in Oracle E-Business Suite: In the User Management responsibility, go to the Roles & Role Inheritance page. Search for the responsibility you want. In the search results table, click the "View In Hierarchy" icon for your responsibility. Note that the codes for responsibilities start with FND_RESP, while the codes for roles start with UMX. In the Role Inheritance Hierarchy, click the Add Node icon (green plus +) for your responsibility. Now you will see what appears to be the same page again, but it is slightly different (note the text at the top telling you the role you select will be inherited). This time, either search or expand nodes until you find your custom UMX role, and use Quick Select to choose it. You will be sent back to the first screen, where you should see a confirmation message at the top. On the same page you can verify that the custom UMX role is underneath the responsibility; you may need to expand one or more nodes to see it, and you might see some other inherited roles as well. Now that your users have the UMX role, you can test that the role is being passed through to your ADF application through the Oracle E-Business Suite SDK for Java JAAS feature. Happy coding!

    Read the article

  • Send Email from worker role (Azure) with attachment in C#

    - by simplyvaibh
    I am trying to send an email (in C#) from a worker role (Azure) with an attachment (from blob storage). I am able to send the email, but the attachment (a Word document) is blank. The following function is called from the worker role:

    public void sendMail(string blobName)
    {
        InitStorage(); // Initialize the storage
        var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        container = blobStorage.GetContainerReference("Container Name");
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);

        if (File.Exists("demo.doc"))
            File.Delete("demo.doc");
        FileStream fs = new FileStream("demo.doc", FileMode.OpenOrCreate);
        blob.DownloadToStream(fs);

        Attachment attach = new Attachment(fs, "Report.doc");
        System.Net.Mail.MailMessage Email = new System.Net.Mail.MailMessage("[email protected]", "[email protected]");
        Email.Subject = "Text fax send via email";
        Email.Subject = "Subject Of email";
        Email.Attachments.Add(attach);
        Email.Body = "Body of email";

        System.Net.Mail.SmtpClient client = new SmtpClient("smtp.live.com", 25);
        client.DeliveryMethod = SmtpDeliveryMethod.Network;
        client.EnableSsl = true;
        client.Credentials = new NetworkCredential("[email protected]", Password);
        client.Send(Email);

        fs.Flush();
        fs.Close();
        Email.Dispose();
    }

    Please tell me what I am doing wrong.
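    An editorial note, not part of the original question: the most likely culprit is that DownloadToStream leaves the FileStream positioned at the end of the file, so the Attachment reads zero bytes. A minimal sketch of the fix, rewinding the stream before constructing the attachment:

    blob.DownloadToStream(fs);
    fs.Flush();
    // Rewind so the Attachment reads from the beginning rather than
    // from the end-of-file position left by DownloadToStream.
    fs.Seek(0, SeekOrigin.Begin);
    Attachment attach = new Attachment(fs, "Report.doc");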

    Read the article

  • C# Role Provider for multiple applications

    - by Juventus18
    I'm making a custom RoleProvider that I would like to use across multiple applications in the same application pool. For the administration of roles (create new role, add users to role, etc.) I would like to create a master application that I could log into and use to set the roles for each other application. So, for example, I might have AppA and AppB in my organization, and I need to make an application called AppRoleManager that can set roles for AppA and AppB. I am having trouble implementing my custom RoleProvider because it uses an initialize method that gets the application name from the config file, but I need the application name to be a variable (i.e. "AppA" or "AppB") passed as a parameter. I thought about implementing the required methods and also having additional methods that take the application name as a parameter, but that seems clunky, i.e.:

    public override void CreateRole(string roleName)
    {
        // uses the ApplicationName property of this instance, which is set in web.config
        // creates role in db
    }

    public void CreateRole(string applicationName, string roleName)
    {
        // creates role in db with the specified params
    }

    Also, I would prefer that people be prevented from calling CreateRole(string roleName), because the current instance of the class might have a different application name than intended (what should I do here? throw NotImplementedException?). I tried just writing the class without inheriting from RoleProvider, but inheriting is required by the framework. Any general ideas on how to structure this project? I was thinking of making a wrapper class that uses the role provider and explicitly sets the application name before (and after) any calls to the provider, something like this:

    static class RoleProviderWrapper
    {
        public static void CreateRole(string pApplicationName, string pRoleName)
        {
            Roles.Provider.ApplicationName = pApplicationName;
            Roles.Provider.CreateRole(pRoleName);
            Roles.Provider.ApplicationName = "Generic";
        }
    }

    Is this my best bet?
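    An editorial sketch of one common alternative (names assumed, not from the original question): register one provider instance per application in web.config, each with its own applicationName, and index into the Roles.Providers collection instead of mutating a shared instance.

    // Hypothetical web.config registration, one entry per target application:
    //   <roleManager enabled="true" defaultProvider="AppARoles">
    //     <providers>
    //       <add name="AppARoles" type="MyNamespace.CustomRoleProvider" applicationName="AppA" />
    //       <add name="AppBRoles" type="MyNamespace.CustomRoleProvider" applicationName="AppB" />
    //     </providers>
    //   </roleManager>

    using System.Web.Security;

    public static class AppRoles
    {
        public static void CreateRole(string providerName, string roleName)
        {
            // Each named provider keeps its own ApplicationName, so there is
            // no temporary mutation of a shared provider instance.
            RoleProvider provider = Roles.Providers[providerName];
            provider.CreateRole(roleName);
        }
    }

    This also avoids the thread-safety hazard in the wrapper above, where two concurrent requests could interleave between setting ApplicationName and calling CreateRole.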

    Read the article

  • What email providers are case sensitive? [on hold]

    - by Thanatos
    According to RFC 5321, the local part of an email address is case-sensitive. However, most providers I know of (e.g., GMail) are not case-sensitive. (It's actually more complex than that: GMail also ignores dots (".") in addresses.) Is there a list, or source, describing the various rules, including case sensitivity, for the major email providers? Is there a large-ish email provider that has case-sensitive email addresses?

    Read the article
