Search Results

Search found 7238 results on 290 pages for 'step through'.


  • Oracle Identity Manager ADF Customization

    - by Arda Eralp
    This blog entry walks through an example of customizing the Oracle Identity Manager (OIM) Self Service screen. Before customization, every user who can log in to OIM Self Service sees the "Administration" tab in the left menu. In this example we create a "Manager" role so that only users who hold that role can see the "Administration" tab.
    Step 1: Create the "Manager" role.
    Step 2: Create a sandbox.
    Step 3: Customize the ADF page: select "Customize" on the top menu, switch from "Design" to "Source" view, select the "Administration" tab (shown with a blue rectangle) and edit the component, then set its "visible" property in the expression builder to #{oimcontext.currentUser.roles['Manager'] != null} and apply.
    Step 4: Apply to all and publish the sandbox.
    Notes: the objects in this table can be used in expressions.
    #{oimcontext.currentUser['ATTRIBUTE_NAME']}
    #{oimcontext.currentUser['UDF_NAME']}
    #{oimcontext.currentUser.roles}
    #{oimcontext.currentUser.roles['SYSTEM ADMINISTRATORS'] != null} - Boolean
    #{oimcontext.currentUser.adminRoles['OrclOIMSystemAdministrator'] != null} - Boolean

    Read the article

  • How to capture a Header or Trailer Count Value in a Flat File and Assign to a Variable

    - by Compudicted
    Recently I had several questions about how to process files that carry a header and a trailer record. Typically such files are the product of a data extract from non-Microsoft products, e.g. an Oracle database, encompassing data from various tables, where every row starts with an identifier. For example, such a file's records could look like:
    HDR,INTF_01,OUT,TEST,3/9/2011 11:23
    B1,121156789,DATA TEST DATA,2011-03-09 10:00:00,Y,TEST 18 10:00:44,2011-07-18 10:00:44,Y
    B2,TEST DATA,2011-03-18 10:00:44,Y
    B3,LEG 1 TEST DATA,TRAN TEST,N
    B4,LEG 2 TEST DATA,TRAN TEST,Y
    FTR,4,TEST END,3/9/2011 11:27
    A developer is normally able to split the records using a Conditional Split transformation by employing an expression similar to Output1 -- SUBSTRING(Output1,1,2) == "B1" and so on, but often a verification is required after this step to check whether the number of data records read corresponds to the number specified in the trailer record of the file. This part sometimes trips people up, so I decided to share what I came up with. As an aside, the approach I use is slightly more portable than some others I have seen, because I use a separate DFT that can be copied and pasted onto a new SSIS package designer surface or re-used within the same package, and it can survive several trailer/footer records (!). The first step is to create a Flat File Connection Manager and make sure you get the row split into columns. After you are done with the Flat File connection, add an Aggregate, which is used simply to assign a value to a variable (here the aggregate handles the possibility of multiple footers/headers). The next step is adding a Script Transformation as a destination, which requires very little coding: some variable setup and then the script itself. As you can see, it is important to place your code into the appropriate routine in the script, otherwise the end result may not be as expected. As the last step, you use a regular Script Component to compare the variable value obtained from the DFT above to a package variable value (obtained, say, via a Row Count component) to determine whether the file being processed has the right number of rows.
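    The variable setup and the script body were shown as screenshots in the original post; as a rough sketch of the same idea (the variable name User::TrailerRowCount and the input column name TrailerCount are assumptions, not the author's names), the Script Transformation can capture the trailer count along these lines:
    using System;
    using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

    public class ScriptMain : UserComponent
    {
        private int trailerCount;

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Only trailer (FTR) rows reach this input, so just remember the count column.
            trailerCount = Convert.ToInt32(Row.TrailerCount);
        }

        public override void PostExecute()
        {
            base.PostExecute();
            // Read-write package variables can only be written here, in PostExecute -
            // this is the "appropriate routine" the post refers to.
            Variables.TrailerRowCount = trailerCount;
        }
    }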

    Read the article

  • How do I connect to a wireless network?

    - by Keith Groben
    I just installed 10.10 x64 and cannot even find my wireless network, let alone connect to it. I've searched all over SE and the Ubuntu forums and cannot find out how to do this simple thing. Can someone please give me the answer? It is plugged in right now and is 100% updated. It is a desktop with a wireless card.
    0: phy0: Wireless LAN
    Soft blocked: no
    Hard blocked: no
    Here's the output:
    *-network DISABLED
    description: Wireless interface
    product: RT2860
    vendor: RaLink
    physical id: 0
    bus info: pci@0000:03:00.0
    logical name: wlan0
    version: 00
    serial: 70:1a:04:f4:de:e9
    width: 32 bits
    clock: 33MHz
    capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
    configuration: broadcast=yes driver=rt2800pci driverversion=2.6.35-27-generic firmware=N/A latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
    resources: irq:17 memory:fcff0000-fcffffff
    *-network
    description: Ethernet interface
    product: RTL8111/8168B PCI Express Gigabit Ethernet controller
    vendor: Realtek Semiconductor Co., Ltd.
    physical id: 0
    bus info: pci@0000:02:00.0
    logical name: eth0
    version: 03
    serial: 00:23:54:fd:c2:32
    size: 100MB/s
    capacity: 1GB/s
    width: 64 bits
    clock: 33MHz
    capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
    configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.1.14 latency=0 link=yes multicast=yes port=MII speed=100MB/s
    resources: irq:44 ioport:d800(size=256) memory:fceff000-fcefffff memory:ddffc000-ddffffff memory:fcec0000-fcedffff
    === Update ===
    I have discovered that this is a known issue with the rt2860. I have been following, step by step, the instructions found here: http://www.ctbarker.info/2010/05/ubuntu-1004-wireless-chipsets-and-wpa.html I decided to start over because I was getting stuck on step 5: 'sudo rmmod rt2860sta' was giving me this error: 'ERROR: Module rt2860sta does not exist in /proc/modules'. Since I started over I cannot even get past step 5, 'sudo make'. I get this: 'make: * No targets specified and no makefile found. Stop.' I am lost. Any help would be appreciated.

    Read the article

  • WebMatrix: "The Site Has Stopped" Fix

    - by Tarun Arora
    I just got started with Azure Web Sites by creating a website from the WordPress template. Next I installed WebMatrix so that I could run the website locally. Every time I tried to run my website from WebMatrix I hit the message "The following site has stopped 'xxx'".
    Step 00 - Analysis: It took a bit of time to figure out that WebMatrix makes use of IIS Express, but it was easy to see that IIS Express was not showing up in the system tray when I started WebMatrix. This was a good indication that IIS Express was having trouble starting up. So I opened a CMD prompt and tried to run IISExpress.exe, which resulted in an error message. I then ran IISExpress.exe /trace:Error, which gave a more detailed reason for the failure.
    Step 1 - Fixing "The following site has stopped 'xxx'": Further analysis revealed that the IIS Express config files had been corrupted. So I navigated to C:\Users\<UserName>\Documents\IISExpress\config and deleted the files applicationhost.config, aspnet.config and redirection.config (please take a backup of these files before deleting them). Back in CMD, I ran IISExpress /trace:Error again; IIS Express started successfully and parked itself in the system tray. I opened up WebMatrix and clicked Run, and this time the default site loaded in the browser without any failures.
    Step 2 - Download the WordPress Azure web site using WebMatrix: Because the config files 'applicationhost.config', 'aspnet.config' and 'redirection.config' were deleted, I lost the settings of my Azure-based WordPress site that I had downloaded to run from WebMatrix. This was simple to sort out: open up WebMatrix, go to the Remote tab and click Download; export the PublishSettings file from the Azure Management Portal and upload it in the pop-up you get after clicking Download in the previous step. Now you should have your Azure WordPress website all set up and running from WebMatrix. Enjoy!

    Read the article

  • Installing Linux from a USB pen drive

    - by zulu
    I'm new to Linux and am using Ubuntu 11.04. Now I want to install Ubuntu 12.04, and I have the Ubuntu 12.04 Desktop ISO image. I put the image onto a formatted pen drive and set the boot option to boot from USB, but nothing happened. I searched the net and the Ubuntu website, but nobody gives the complete steps: some say you can install from within Ubuntu, others say you can do a fresh installation from a USB pen drive but you need to make the pen drive bootable first, and so on. My problem is that I don't know the exact steps to install Ubuntu from a USB pen drive. All I want to do is completely remove my Ubuntu 11.04 and install Ubuntu 12.04 from the pen drive. Can anybody tell me how to make a pen drive bootable, and how to install Ubuntu 12.04 from it? Please give me a step-by-step procedure. Thanks in advance.

    Read the article

  • Enemy collision detection with movie clips

    - by user18080
    I have created multiple movie clips with animations within them. It is an obstacle-avoidance game and I cannot seem to get my enemies to register contact with my playableCharacter. The enemies I have created are each embedded on certain levels of my game. I have created an array, enemiesArray, to hold each of my enemies. Here is the code for that:
    // step 1: make sure the array exists
    if (enemiesArray != null && enemiesArray.length != 0)
    {
        // step 2: check all enemies against the villain
        for (var i:int = 0; i < enemiesArray.length; i++)
        {
            // step 3: check for collision
            if (villain.hitTestObject(enemiesArray[i]))
            {
                // step 4: do stuff
                trace("HIT!");
                removeChild(enemiesArray[i]);
                enemiesArray.splice(i, 1);
                removeChild(villain);
                villain = null;
            }
        }
    }
    What I am unsure of is whether my enemiesArray is actually holding the movie clips I intended. If it were, this code would trace "HIT!" every time I ran into an enemy and would kill my character. It is not doing that, however. I am thinking I have to push my movie clips into my array, but I don't know how to do that, or where, for that matter. Any and all help would be much appreciated.

    Read the article

  • Windows partition UNKNOWN after Ubuntu installation attempt at dual boot - How to fix?

    - by user285645
    The idea was to install Windows 7 and Ubuntu with dual boot. However, after installation, GParted shows /dev/sda1 as an 'unknown' filesystem with a size of 278 GB. All my Windows files and data are in this partition. Then there's /dev/sda2 with an 'ext4' filesystem (size 9.54 GB), created during the Ubuntu install. Then there's /dev/sda3 with an 'extended' filesystem (size 10.5 GB), created during the Ubuntu install. Then there's /dev/sda5 with a 'linux-swap' filesystem (size 2 GB), created during the Ubuntu install. Then there's /dev/sda6 with an 'ext4' filesystem (size 8.5 GB), created during the Ubuntu install. My questions are: What exactly does this GParted output mean? How do I recover my previous Windows 7 installation that's in /dev/sda1 (NTFS)? I have some important files I need. Also, I had PGP encryption on the disk before installing Ubuntu; now it just boots straight into Ubuntu... why? How do I uninstall Ubuntu (the "Try Ubuntu" uninstall did not work, and boot-repair did not work)? I have read other topics but no one has provided a proper step-by-step answer on how to recover my 278 GB Windows partition. The testdisk step-by-step procedure did not work; it says the NTFS disk is unrecognized.

    Read the article

  • ADF Logging In Deployed Apps

    - by Duncan Mills
    Harking back to my series on using the ADF logger and the related ADF Insider video, I've had a couple of queries this week about using the logger from Enterprise Manager (EM). I've alluded in those previous materials to how EM can be used, but it's evident that folks need a little help. So in this article I'll quickly look at how you can switch logging on from the EM console for an application and how you can view the output. Before we start, I'm assuming that you have EM up and running; in my case I have a small test install of Fusion Middleware Patchset 5 with an ADF application deployed to a managed server.
    Step 1 - Select your Application: In the EM navigator, select the app you're interested in. At this point you could bring up the context (right mouse click) menu to jump to the logging, but let's do it another way.
    Step 2 - Open the Application Deployment Menu: At the top of the screen, underneath the application name, you'll find a drop-down menu which will take you to the options to view log messages and configure logging.
    Step 3 - Set your Logging Levels: Just like the log configuration within JDeveloper, we can set up transient or permanent (not recommended!) loggers here. In this case I've filtered the class list down to just oracle.demo and set the log level to config. You can now go away and do stuff in the app to generate log entries.
    Step 4 - View the Output: Again from the Application Deployment menu we can jump to the log viewer screen and, as I have here, start to filter down the logging output to the stuff you're interested in. In this case I've filtered by module name. You'll notice here that you can again look at related log messages. Importantly, you'll also see the name of the log file that holds each message, so if you'd rather analyse the log in more detail offline, through the ODL log analyser in JDeveloper, you can see which log to download.

    Read the article

  • Using Telerik's new LINQ implementation to create OData feeds

    This week Telerik released a new LINQ implementation that is simple to use and produces domain models very fast. Built on top of the enterprise-grade OpenAccess ORM, it can connect to any database that OpenAccess can connect to, such as SQL Server, MySQL, Oracle, SQL Azure, VistaDB, etc. While this is a separate LINQ implementation from traditional OpenAccess Entities, you can use the visual designer without ever interacting with OpenAccess; however, you can always hook into the advanced ORM features like caching, fetch plan optimization, etc., if needed. Just to show off how easy our LINQ implementation is to use, I will walk you through building an OData feed using Data Services Update for .NET Framework 3.5 SP1. (Memo to Microsoft: P-L-E-A-S-E hire someone from Apple to name your products.) How easy is it? If you have a fast machine, are skilled with the mouse, and type fast, you can do this in about 60 seconds via three easy steps. (I promise in about 2-3 weeks that you can do this in less than 30 seconds. Stay tuned for that.)
    Step 1 (15-20 seconds): Building your Domain Model. In your web project in Visual Studio, right-click on the project, select Add|New Item and choose Telerik OpenAccess Domain Model as your item template. Give the file a meaningful name as well. Select your database type (SQL Server, SQL Azure, Oracle, MySQL, VistaDB, etc.) and build the connection string. If you already have a Visual Studio connection string saved, this step is trivial. Then select your tables, enter a name for your model and click Finish. In this case I connected to Northwind and selected only Customers, Orders, and Order Details. I named my model NorthwindEntities and will use that in my DataService.
    Step 2 (20-25 seconds): Adding and Configuring your Data Service. In your web project in Visual Studio, right-click on the project, select Add|New Item and choose ADO.NET Data Service as your item template, and name your service. In the code-behind for your Data Service you have to make three small changes: add the name of your Telerik domain model (entered in Step 1) as the DataService type argument (shown on line 6 below as NorthwindEntities), then uncomment line 11 and add a * to show all entities. Optionally, if you want to take advantage of the Data Services 3.5 updates, add line 13 (and change IDataServiceConfiguration to DataServiceConfiguration in line 9).
    1: using System.Data.Services;
    2: using System.Data.Services.Common;
    3:
    4: namespace Telerik.RLINQ.Astoria.Web
    5: {
    6:     public class NorthwindService : DataService<NorthwindEntities>
    7:     {
    8:         // change the IDataServiceConfiguration to DataServiceConfiguration
    9:         public static void InitializeService(DataServiceConfiguration config)
    10:         {
    11:             config.SetEntitySetAccessRule("*", EntitySetRights.All);
    12:             // take advantage of the "Astoria 3.5 Update" features
    13:             config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    14:         }
    15:     }
    16: }
    Step 3 (~30 seconds): Adding the DataServiceKeys. You now have to tell your data service what the primary keys of each entity are. To do this you have to create a new code file and add a few partial classes. If you type fast, use copy and paste from your first entity, and use a refactoring productivity tool, you can add these 6-8 lines of code in about 30 seconds. This is the most tedious step, but don't worry, I've bribed some of the developers and our next update will eliminate this step completely. Just create a partial class for each entity you have mapped and add the [DataServiceKey] attribute on top of it along with the key field name. If you have any complex properties, you will need to make them a primitive type, as I do in line 15. Create this as a separate file; don't manipulate the generated data access classes, in case you want to regenerate them again later (even though that would be much faster.)
    1: using System.Data.Services.Common;
    2:
    3: namespace Telerik.RLINQ.Astoria.Web
    4: {
    5:     [DataServiceKey("CustomerID")]
    6:     public partial class Customer
    7:     {
    8:     }
    9:
    10:     [DataServiceKey("OrderID")]
    11:     public partial class Order
    12:     {
    13:     }
    14:
    15:     [DataServiceKey(new string[] { "OrderID", "ProductID" })]
    16:     public partial class OrderDetail
    17:     {
    18:     }
    19:
    20: }
    Done! Time to run the service. Select the svc file, right-click and choose View in Browser. You will see your OData service and can interact with it in the browser. Now that you have an OData service set up, you can consume it in one of the many ways that OData is consumed: using LINQ, the Silverlight OData client, Excel PowerPivot, PHP, etc. Happy Data Servicing!
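    As a quick illustration of the LINQ option (this is not from the article; the service URL and the typed NorthwindEntities client context generated by Add Service Reference are assumptions), consuming the feed from a .NET client could look roughly like this:
    using System;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            // Typed context generated by "Add Service Reference" against the .svc endpoint.
            var ctx = new NorthwindEntities(
                new Uri("http://localhost:1234/NorthwindService.svc"));

            // The client library translates this LINQ query into an OData URI,
            // e.g. /Customers?$filter=Country eq 'Germany'
            var germanCustomers = from c in ctx.Customers
                                  where c.Country == "Germany"
                                  select c;

            foreach (var customer in germanCustomers)
                Console.WriteLine("{0} - {1}", customer.CustomerID, customer.CompanyName);
        }
    }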

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally to outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.
    What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.
    Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs, and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.
    Extension Attributes: The Challenge. One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be the subject of a future blog entry.
    What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements. So, in broad terms, the steps required to fulfil a client order are as follows:
    Step 1: Prepare metadata. Create a set of database names unique to the client's order. Modify all package connection strings used by SSIS to point to the new databases and file locations.
    Step 2: Create relational databases. Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything. Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases.
    Step 3: Load staging database. Use SSIS to load all data files into the staging database in a parallel operation. Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube.
    Step 4: Load data mart relational database. Load the data from staging into the data mart relational database, again in parallel where possible. Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables.
    Step 5: Load extension tables & attributes. Pivot the extension attributes from their native name/value pairs into proper relational tables. Add the extension attributes to the views used by the OLAP cube.
    Step 6: Deploy & process OLAP cube. Deploy the OLAP database directly to the server using a C# script task in SSIS. Modify the connection string used by the OLAP cube to point to the data mart relational database. Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions. Remove any standard attributes that are not required. Process the OLAP cube.
    Step 7: Backup and drop databases. Drop the staging database as it is no longer required. Back up the data mart relational and OLAP databases and ship these to the client's infrastructure. Drop the data mart relational and OLAP databases from the build server. Mark the order complete. Start processing the next order, ad infinitum.
    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
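    Purely as an illustrative sketch of what Step 6 can look like (this is not the author's code; the server name, database name and single-data-source assumption are all placeholders), a C# script task can re-point and process a deployed SSAS database with AMO along these lines:
    using Microsoft.AnalysisServices;

    public static class CubeBuilder
    {
        public static void RepointAndProcess()
        {
            Server server = new Server();
            server.Connect("Data Source=OLAPSRV01");

            // Find the OLAP database that was just deployed for this client order.
            Database olapDb = server.Databases.FindByName("ClientMart123");

            // Re-point the cube at the client-specific relational data mart.
            olapDb.DataSources[0].ConnectionString =
                "Provider=SQLNCLI10;Data Source=DBSRV01;Initial Catalog=ClientMart123_DM;Integrated Security=SSPI";
            olapDb.Update(UpdateOptions.ExpandFull);

            // Fully process dimensions, measure groups and partitions in one call.
            olapDb.Process(ProcessType.ProcessFull);

            server.Disconnect();
        }
    }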

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices. For example, the "Sales Representative" job has the following two data security rules implemented on an "Opportunity" to restrict the list of Opportunities that are visible to a Sales Representative: they can view all the Opportunities where they are a member of the Opportunity team, and they can view all the Opportunities where they are a resource of a territory in the Opportunity territory team. While the above conditions may represent the most common access requirements for an Opportunity, some customers may have additional access constraints. This blog post explains how to discover the data security implemented in Fusion Applications and how to customize it, with an illustrative example.
    a.) How to discover seeded data security definitions. The Security Reference Manuals explain the function and data security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The security documented for the "Sales Representative" job includes two data security policies that define the list of Opportunities a Sales Representative can view. Here is a sample of the data security policies on an Opportunity:
    Business Object: Opportunity. Policy Description: A Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team. Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity.
    Business Object: Opportunity. Policy Description: A Sales Representative can view an opportunity where they are an opportunity sales team member with view, edit, or full access. Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity.
    Description of columns: "Policy Description" explains the data filters that are implemented as a SQL where clause in a data security grant; "Policy Store Implementation" provides the implementation details of the data security grant for this policy. In this example the Opportunities listed for the "Sales Representative" job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role.
    b.) How to customize data security.
    Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity. Solution: remove the role "Opportunity Territory Resource Duty" from the hierarchy of the "Sales Representative" job role. Best practice: do not modify the seeded role hierarchy; create a custom "Sales Representative" job role and build the role hierarchy with the seeded duty roles.
    Requirement 2: Opportunities must be further restricted based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity. Solution: modify data security policy (2) in the above example as follows: a Sales Representative can view an opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential. Implementation of this policy is more invasive: the seeded SQL where clause of the data security grant on "Opportunity Territory Resource Duty" has to be modified, and the condition that checks for the confidential flag must be added.
    Best practice: do not modify the seeded grant. Create a new grant with the modified condition and end-date the seeded grant.
    c.) Illustrative Example (Implementing Requirement 2). A data security policy contains the following components: Role, Object, Instance Set and Action. Of these four components, the Role and Instance Set are the only ones that are customizable; the Object, and the Actions for that object, are seed data and cannot be modified. To customize the seeded policy "A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team": find the seeded policy; identify the Role, Object, Instance Set and Action components of the policy; create a new custom instance set based on the seeded instance set; end-date the seeded policies; and create a new data security policy with the custom instance set.
    c-1: Find the seeded policy.
    Step 1: Find the role, open it and find its policies.
    Step 2: Click on the Data Security tab, sort by "Resource Name" and find all the policies with the condition "where they are a territory resource in the opportunity territory team". In this example, we can see there are 5 policies for "Opportunity Territory Resource Duty" on the Opportunity object.
    Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object. Find the object using the global search option, open it and click on the "Condition" tab. Sort by display name and find the instance set. Edit the instance set and copy the "SQL Predicate" to a notepad. Create a new instance set with the modified SQL predicate from above.
    Step 4: End-date the seeded data security policies on the duty role and create new policies with your custom instance set. Repeating the navigation from the earlier steps, edit each of the 5 policies and end-date them. Create new custom policies with the same information as the seeded policies in the "General Information", "Roles" and "Action" tabs. In the "Rules" tab, pick the new instance set that was created in Step 3.

    Read the article

  • How to use Ajax : Hovermenu Extender in ASP.NET

    - by SAMIR BHOGAYTA
    // This is a simple method; set any other properties you want in addition.
    Step 1. Take the control that the extender is targeting. When the mouse cursor is over this control, the hover menu popup will be displayed.
    Step 2. Take one panel to display when the mouse is over the target control.
    Step 3. Set the following properties:
    TargetControlID = "ID of the control that the extender is targeting (from Step 1)"
    PopupControlID = "ID of the panel to display when the mouse is over the target control (from Step 2)"
    PopupPosition = Left (default), Right, Top, Bottom, or Center.

    Read the article

  • SharePoint 2010 Hosting :: How to Customize SharePoint 2010 Global Navigation

    - by mbridge
    Requirements: a SharePoint Foundation or SharePoint Server 2010 site and SharePoint Designer 2010.
    Steps:
    1. The first step in my process was to download a starter master page from CodePlex: http://startermasterpages.codeplex.com/ .
    2. Once you have downloaded the starter master page, open up your SharePoint site in SharePoint Designer 2010 and, on the left in the "Site Objects" area, click on the folder "All Files" and drill down to catalogs >> masterpages. Once you are in the masterpage folder, copy and paste the _starter.master into this folder.
    3. The first step in the customization process is to create your custom style sheet. To create it, click on the "All Files" folder and click on "Style Library". Right-click in the style library section and choose Style Sheet. Once the style sheet is created, rename it style.css. Now open the style sheet you created in SharePoint Designer.
    4. In this next step you will copy and paste the SharePoint core styles for the global navigation into your custom style sheet. Copy and paste the CSS below into the style sheet and save the file.
    .s4-tn{ padding:0px; margin:0px; }
    .s4-tn ul.static{ white-space:nowrap; }
    .s4-tn li.static > .menu-item{ /* [ReplaceColor(themeColor:"Dark2")] */ color:#3b4f65; white-space:nowrap; border:1px solid transparent; padding:4px 10px; display:inline-block; height:15px; vertical-align:middle; }
    .s4-tn ul.dynamic{ /* [ReplaceColor(themeColor:"Light2")] */ background-color:white; /* [ReplaceColor(themeColor:"Dark2-Lighter")] */ border:1px solid #D9D9D9; }
    .s4-tn li.dynamic > .menu-item{ display:block; padding:3px 10px; white-space:nowrap; font-weight:normal; }
    .s4-tn li.dynamic > a:hover{ font-weight:normal; /* [ReplaceColor(themeColor:"Light2-Lighter")] */ background-color:#D9D9D9; }
    .s4-tn li.static > a:hover { /* [ReplaceColor(themeColor:"Accent1")] */ color:#44aff6; text-decoration:underline; }
    5. Once you have created the style sheet, go back to the masterpage folder, open the _starter.master file and, in the Customization category, click Edit File.
    6. Next, when the file opens for editing, make sure you view it in split view. Now you are going to search for the reference to our custom master page in the code. Make sure you are scrolled to the top of the code section and press Ctrl+F on the keyboard. This will pop up the Find and Replace tool. In the "Find what" field, copy and paste it, and then click Find Next.
    7. Now, in the code, make the replacement. You have now referenced your custom style sheet in your master page.
    8. The next step is to locate your global navigation control. Make sure you are scrolled to the top of the code section and press Ctrl+F on the keyboard. This will pop up the Find and Replace tool. In the "Find what" field, copy and paste ID="TopNavigationMenuV4" and then click Find Next. Once you find ID="TopNavigationMenuV4", you should see the following block of code, which is the global navigation control: ID="TopNavigationMenuV4" Runat="server" EnableViewState="false" DataSourceID="topSiteMap" AccessKey="" UseSimpleRendering="true" UseSeparateCss="false" Orientation="Horizontal" StaticDisplayLevels="1" MaximumDynamicDisplayLevels="1" SkipLinkText="" CssClass="s4-tn"
    9. In the global navigation code above you should see CssClass="s4-tn". As an additional step you can replace "s4-tn" with your own custom name, like CssClass="MyNav".
    If you change the name of the CSS class, make sure you update your custom style sheet with the new name, as in the example below:
    .MyNav{ padding:0px; margin:0px; }
    .MyNav ul.static{ white-space:nowrap; }
    10. At this point you are ready to brand your global navigation. The next step is to modify your style.css with your customizations to the default SharePoint styles. Have fun styling, and make sure you save your work often. Hope it helps!

    Read the article

  • UppercuT – Custom Extensions Now With PowerShell and Ruby

    Arguably, one of the most powerful features of UppercuT (UC) is the ability to extend any step of the build process with a pre, post, or replace hook. This customization is done in a separate location from the build, so you can upgrade without wondering whether you broke the build. There is a hook before each step of the build has run. There is a hook after. And back to power again, there is a replacement hook. If you don't like what the step is doing and/or you want to replace its entire functionality,...

    Read the article

  • Logic behind a bejeweled-like game

    - by Joe
    In a prototype I am doing, there is a minigame similar to Bejeweled. Using a grid that is a 2D array (int[,]), how can I go about knowing when the user has formed a match? I only care about horizontal and vertical matches. Off the top of my head, I was thinking I would just look in each direction. Something like:
    int item = grid[x,y];
    if (grid[x-1,y] == item)
    {
        int step = x;
        int matches = 2;
        while (grid[step-1,y] == item)
        {
            step++;
            matches++;
        }
        if (matches > 2)
            // remove all matching items
    }
    else if (grid[x+1,y] == item)
        // ....
    else if (grid[x,y-1] == item)
        // ...
    else if (grid[x,y+1] == item)
        // ...
    It seems like there should be a better way. Is there?
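    One cleaner way to express the same check (just a sketch; the method name and bounds handling are mine, not from the question) is to count the length of the run of identical items through the cell on each axis:
    // Returns true if the item at (x, y) is part of a horizontal or vertical
    // run of three or more identical items.
    static bool IsMatchAt(int[,] grid, int x, int y)
    {
        int width = grid.GetLength(0);
        int height = grid.GetLength(1);
        int item = grid[x, y];

        // Horizontal run: walk left, then right, from the cell itself.
        int run = 1;
        for (int i = x - 1; i >= 0 && grid[i, y] == item; i--) run++;
        for (int i = x + 1; i < width && grid[i, y] == item; i++) run++;
        if (run >= 3) return true;

        // Vertical run: walk up, then down.
        run = 1;
        for (int j = y - 1; j >= 0 && grid[x, j] == item; j--) run++;
        for (int j = y + 1; j < height && grid[x, j] == item; j++) run++;
        return run >= 3;
    }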

    Read the article

  • Doing a P2V in OVM 3.0.3

    - by Steen Schmidt
    The other day I was talking to a customer about how you can do a P2V in OVM. I had already written about this topic earlier in my blog, and there is also some good documentation on how you do this. But what about seeing the whole process from start to end? So I have included a link to a demo on the topic. The demo is divided into three steps: Step 1. Target system, Step 2. Import into OVM, and Step 3. Use the new template.

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, either for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So we need to populate the database using the post-deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal refer to the DBaaS Cookbook. In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways: 1) Data Pump, 2) transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS, and you can even plug in your own method as part of the post-deployment script option.
    Using Data Pump to populate databases. These are the steps to be followed to extend DBaaS using the Data Pump methodology:
    The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:
    -- Full export
    expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull
    Figure 1: Full export of the database using Data Pump
    Create a post-deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned.
    -- Full import
    declare
        h1 NUMBER;
    begin
        -- Creating the directory object where the source database dump is backed up.
        execute immediate 'create directory DEST_LOC as''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
        -- Running import
        h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
        dbms_datapump.set_parallel(handle => h1, degree => 1);
        dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
        dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
        dbms_datapump.start_job(handle => h1);
        dbms_datapump.detach(handle => h1);
    end;
    /
    Figure 2: Importing using Data Pump PL/SQL procedures
    Using DBCA, create a template for the production database - include all the init.ora parameters, tablespaces, datafiles and their sizes.
    The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the customized "Create Database Deployment Procedure" flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.
    Figure 3: Using the custom script option for calling the import SQL
    Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.
    Using transportable tablespaces to populate databases. A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:
    The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big-endian to little-endian or vice versa] based on the platforms of the production database and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out if any conversion is required and describes the steps required to convert datafiles and back up the tablespaces.
    The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).
    Create a post-deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned. Here is a sample post-deployment SQL script using transportable tablespaces.
    Using DBCA, create a template for the production database - all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created later.
    The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.
    Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.
    More Information: Database-as-a-Service on Exadata Cloud; Podcast on Database as a Service using Oracle Enterprise Manager 12c; Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide; DBaaS Cookbook; Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal; Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Read the article

  • Problems creating a debdiff

    - by Chris Wilson
    I'm following this guide to create a debdiff for a package I'm patching. Everything goes fine until step number 8, when I attempt to create the debdiff after committing the changes. The package in question is Zim, pulled from Launchpad using bzr branch lp:zim, and according to this guide I should execute the following command to create the debdiff: debdiff zim_0.49.dsc zim_0.49ubuntu1.dsc > zim_0.49ubuntu1.debdiff However, when I actually try to execute this command, I get the following error: debdiff: fatal error at line 314: Can't read file: zim_0.49.dsc Upon inspection of the directory in which the files created by debuild -S (step 6) are deposited, I find zim_0.49ubuntu1_source.changes, zim_0.49ubuntu1.dsc, zim_0.49ubuntu1.tar.gz and zim_0.49ubuntu1_source.build, but no sign of zim_0.49.dsc. I could probably create one by debuilding the package as soon as I check out the code, before starting work, but that would add an extraneous entry to the changelog. Is there a step missing from the guide that creates zim_0.49.dsc, or is the file itself missing from the source?

    Read the article

  • Migrating an Existing ASP.NET App to run on Windows Azure

    - by kaleidoscope
    Converting an existing ASP.NET application to Windows Azure:
    Method 1: 1. Add a Windows Azure Cloud Service to the existing solution. 2. Add a Web Role project to the solution and select the existing ASP.NET project for it.
    Method 2: The other option is to create a new Cloud Service project and add the ASP.NET project (which you want to deploy on Azure) to it using: 1. Solution | Add | Existing Project, then 2. Add | Web Role Project in solution.
    Converting a SQL Server database to SQL Azure: Step 1: Migrate the <Existing Application>.MDF. Step 2: Migrate the ASP.NET providers. Step 3: Change the connection strings (a rough sketch of this step follows below).
    More details can be found at http://blogs.msdn.com/jnak/archive/2010/02/08/migrating-an-existing-asp-net-app-to-run-on-windows-azure.aspx
    Ritesh, D
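    As a rough illustration of Step 3 (this is not from the original post; the setting name "DbConnectionString" is a placeholder), a web role can read its SQL Azure connection string from the service configuration when running in the cloud and fall back to web.config locally:
    using System.Configuration;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class ConnectionStrings
    {
        public static string Database
        {
            get
            {
                if (RoleEnvironment.IsAvailable)
                {
                    // Value defined in ServiceConfiguration.cscfg for the web role.
                    return RoleEnvironment.GetConfigurationSettingValue("DbConnectionString");
                }

                // Local fallback from web.config.
                return ConfigurationManager.ConnectionStrings["DbConnectionString"].ConnectionString;
            }
        }
    }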

    Read the article

  • Wifi problem in Ubuntu on a MacBook Pro after restart

    - by Amro
    I read that thread: http://ubuntuforums.org/showthread.php?t=2011756 and followed it step by step on page 1. Then I was connected, but after I restarted my MacBook I lost the wifi connection again. I don't know why, or what the problem is exactly. Every time I run this command: dmesg | grep -e b43 -e bcma I get this output:
    [ 2012.769684] bcma-pci-bridge 0000:02:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    [ 2012.769701] bcma-pci-bridge 0000:02:00.0: setting latency timer to 64
    [ 2012.769775] bcma: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x25, class 0x0)
    [ 2012.769808] bcma: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x1D, class 0x0)
    [ 2012.769889] bcma: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x13, class 0x0)
    [ 2012.770175] bcma: PMU resource config unknown for device 0x4331
    [ 2012.824527] bcma: Bus registered
    [ 2012.831744] b43-phy0: Broadcom 4331 WLAN found (core revision 29)
    [ 2013.371031] b43-phy0: Loading firmware version 666.2 (2011-02-23 01:15:07)
    To get the connection back, every time I have to re-enter the commands from the "reload driver" step. How can I make Ubuntu see my wifi and bring the wireless device up automatically when I reboot my computer?

    Read the article

  • How to stream on Twitch.Tv

    - by John
    Alright, so I've had Ubuntu 12.04 for over a year now and I still don't know anything about it. The only thing I can do with it is use the internet. I want to start streaming games that I play on my computer to Twitch.tv, but I don't know how. All of the downloads are only for Windows. I found a website that tells you how to do it, but since I know nothing about Linux, I can't do it. I haven't been able to get past the first step yet. Can someone please give me a step-by-step tutorial on how to do it? Please do not think you are being too specific, because I am sure it will help me. The link to the website is this: http://www.creativetux.com/2012/11/streaming-to-twitchtv-with-linux.html

    Read the article

  • Locomotion-system with irregular IK

    - by htaunay
    I'm having some trouble with Locomotion's (the Unity3D asset) IK feet placement. I wouldn't call it "very bad", but it definitely isn't as smooth as the Locomotion System Examples. The strangest behavior (which is probably linked to the problem) is the rendered foot markers that "guess" where the character's next step will be. In the demo, they are smooth and stable. However, in my project, they keep flickering, as if Locomotion changed its "guess" every frame, and sometimes the automatically defined step is too close to the previous step, or sometimes too distant, creating a very irregular pattern. The configuration is (apparently) identical to the human example in the demo, so I'm guessing the problem is my model and/or animation. Problem is, I can't figure out what it is =S Has anyone experienced the same problem? I uploaded a video of the bug to help interpret the issue (excuse the HORRIBLE quality, I was in a hurry).

    Read the article

  • Kill Android Apps without Task Manager

    - by Gopinath
    Android is for geeks. It best fits users who know how to get around sloppy areas and find their way out. If you are a heavy Android user, you will have noticed apps crashing often. A well-written app should not crash, and if it does crash it should exit the process gracefully. But unfortunately Google Play has many apps that don't just crash; they hang in a state where they don't respond and you can't access the application. The only option left to you is to forcefully close them. If you need to forcefully close an app you have two options: the first is to use a task manager application to close it, and the second is to use built-in Android OS features. Here are the steps to forcefully close an Android app without using a task manager:
    Step 1: Go to Settings and select Apps.
    Step 2: Switch to the All apps tab and select the application you want to close.
    Step 3: Touch the Force Stop button to forcefully close the app.
    That's the simplest way to forcefully kill Android apps.

    Read the article

  • Why the Next button doesn't work in the Joomla! 2.5.4 installation [closed]

    - by rahul
    I was trying to run the Joomla! 2.5.4 installation, but I got stuck at the very first step: the Next button doesn't respond to clicks. I tried a previous version, 1.5.26, but there the process hung after the 3rd step; in the 4th step the Next button doesn't work, as before. I don't know what to do and I am in a dilemma. I am using XAMPP for my localhost. Please guide me; I lost a complete day installing Joomla.

    Read the article
