Search Results

Search found 83878 results on 3356 pages for 'google data api'.


  • Google maps z-index problem in IE

    - by Bas van de Lustgraaf
    I'm loading my Google map into <div class="extra" style="display: none;">. As soon as the AJAX request is complete, the map_canvas div is placed inside the hidden div, and the hidden div is made visible with the toggleDown jQuery effect. In FF it works perfectly, but in IE the Google map (map_canvas div) is already visible before the toggleDown effect starts. I think the z-index and the relative position of the map_canvas div, which is loaded into the hidden div, place the map_canvas div on top of the hidden div. What do I have to change to make sure the map_canvas div is not on top of the hidden div? While toggleDown in FF: http://img169.imageshack.us/img169/9274/50485429.jpg While toggleDown in IE: http://img188.imageshack.us/img188/2110/93959677.jpg
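
    A common workaround (a hedged sketch rather than the poster's actual code, assuming the old Maps API v2 GMap2 class and the element names above): avoid display: none while the map initialises by parking the wrapper off-screen, then restore it and let the API re-measure itself.

    // Park the wrapper off-screen instead of display: none, so the map can lay
    // out its tiles (IE is particularly sensitive to hidden containers).
    $('.extra').css({ position: 'absolute', left: '-9999px' });

    var map = new GMap2(document.getElementById('map_canvas'));
    map.setCenter(new GLatLng(52.37, 4.89), 12);

    // When the AJAX request completes, restore the wrapper and animate it in;
    // checkResize() re-reads the container size once it is visible again.
    function revealMap() {
        $('.extra').css({ position: 'static', left: 'auto' }).hide().slideDown(function () {
            map.checkResize();
        });
    }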

    Read the article

  • Call a JavaScript function in a Google Chrome window from a vb.net program

    - by user1464827
    I am having a real problem with this. I want to know if it is possible to run/call a JavaScript function in a Google Chrome web browser window from a VB.NET application. The scenario is that I want to monitor the PC activity (which I know how to do) and then, if a certain event is met (for example, high RAM usage), call a JavaScript function in a Google Chrome web browser window so the website is updated. Sort of like a bridge. The only bit I need to know is the VB.NET code for how to access a Chrome window and invoke a JavaScript function, if it's possible. I assume I will need to use process handlers? Any help is appreciated.
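
    One alternative "bridge" worth considering (a sketch only; the local URL, port, and JSON fields are made up for illustration, and it assumes jQuery is loaded on the page): instead of reaching into the Chrome process from VB.NET, have the page poll a small local HTTP endpoint that the desktop application exposes, and update the DOM when the monitored condition is reported.

    // Poll the desktop app every 5 seconds and react to what it reports.
    setInterval(function () {
        $.getJSON('http://localhost:8080/status', function (status) {
            if (status.highRamUsage) {
                showRamWarning();
            }
        });
    }, 5000);

    // Hypothetical page-side handler for the "high RAM usage" event.
    function showRamWarning() {
        $('#alerts').text('High RAM usage detected on the monitored PC.');
    }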

    Read the article

  • Data driven charts and graphs from xml to svg

    - by garymlewis
    I asked this question a week ago, but did not do a good job of describing the problem. Here's a second attempt. I'd like to produce data-driven charts, graphs, and other data visualizations, starting with data in an xml database and ending up with the visualizations as SVG. Here's an example from the W3C. It uses Javascript to create a stacked bar chart as SVG from xml. I'd like to do something similar but use a graphics library (or ???) instead of js to handle the construction of axes, labels, titles, data points, etc. My question, then: what are the options that I should consider ... things like Raphael I suppose, but initially I'd like to cast a wide net and look at many different options. My experience is all with static data visualizations using statistics packages like R, but eventually I'd like to create interactive data visualizations with html5/css3/svg. Any help would be much appreciated. Thanks.
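
    Libraries such as Raphael wrap most of this up, but as a point of comparison, here is a minimal hand-rolled sketch (placeholder data, no library) of turning an array of values into SVG bars in the browser; parsing the XML source into that array, plus axes and labels, is left out.

    var data = [4, 8, 15, 16, 23];                  // placeholder values
    var svgNS = 'http://www.w3.org/2000/svg';
    var svg = document.createElementNS(svgNS, 'svg');
    svg.setAttribute('width', 300);
    svg.setAttribute('height', 120);

    data.forEach(function (value, i) {
        var rect = document.createElementNS(svgNS, 'rect');
        rect.setAttribute('x', i * 30);             // fixed 30px column pitch
        rect.setAttribute('y', 120 - value * 4);    // scale value to pixel height
        rect.setAttribute('width', 25);
        rect.setAttribute('height', value * 4);
        rect.setAttribute('fill', 'steelblue');
        svg.appendChild(rect);
    });

    document.body.appendChild(svg);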

    Read the article

  • Google CDN not gzipping jquery

    - by thermal7
    If I navigate here: http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js (the Google CDN) I download 70k using Firefox 3.6.3, and I can confirm it is sending Accept-Encoding: gzip. If I use the Microsoft one: http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.min.js I download 30k (and it comes through as Content-Encoding: gzip). I am also experiencing this when using jQuery 1.4.2 on regular sites, e.g. jquery.com. Funnily enough, Stack Overflow, which references jQuery 1.3.2 on the Google CDN, comes through gzipped. Why is this happening? Is it some kind of issue with Google, or am I missing something? I live in Melbourne, Australia.

    Read the article

  • Google Maps and jQuery Tabs

    - by Dom
    Hello all, I have a slight problem with Google Maps included in simple jQuery tabs. Below I pasted the code.

    jQuery:

    $(document).ready(function() {
        //Default Action
        $(".tab_content").hide();
        $("ul.tabs li:first").addClass("active").show();
        $(".tab_content:first").show();

        //On Click Event
        $("ul.tabs li").click(function() {
            $("ul.tabs li").removeClass("active");
            $(this).addClass("active");
            $(".tab_content").hide();
            var activeTab = $(this).find("a").attr("href");
            $(activeTab).fadeIn();
            return false;
        });
    });

    Here is the HTML for the tabs:

    <div class="bluecontainer">
        <ul class="tabs">
            <li><a href="#tab1">Tab1</a></li>
            <li><a href="#tab2">Tab2</a></li>
            <li><a href="#tab3">Tab3</a></li>
            <li><a href="#tab4">Tab4</a></li>
        </ul>
        <div class="tab_container">
            <div id="tab1" class="tab_content">
                <h2>Tab1</h2>
            </div>
            <div id="tab2" class="tab_content">
                <h2>Tab2</h2>
            </div>
            <div id="tab3" class="tab_content">
                <div>google Map</div>
            </div>
            <div id="tab4" class="tab_content">
                <h2>Tab4</h2>
            </div>
        </div>
    </div>

    I really don't know what to do. Is this a general problem with Google Maps, or is there something wrong with my tabs? They work just fine with everything else. Thank you in advance for your help.
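
    For what it's worth, the usual cause is that a map created while its tab is hidden measures a 0x0 container. A hedged sketch of the common fix, assuming Maps API v2 (a GMap2 instance stored in a map variable) and #tab3 as the map tab; the click handler above would change to something like:

    $("ul.tabs li").click(function() {
        $("ul.tabs li").removeClass("active");
        $(this).addClass("active");
        $(".tab_content").hide();
        var activeTab = $(this).find("a").attr("href");
        $(activeTab).fadeIn(function() {
            // Once the map tab is actually visible, let the API re-measure it.
            if (activeTab === "#tab3" && typeof map !== "undefined") {
                map.checkResize();
                map.setCenter(map.getCenter());
            }
        });
        return false;
    });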

    Read the article

  • Passing through lists from jQuery to the service

    - by thedixon
    I'm sure I've done this in another solution, but I can't seem to work out how to do it again, and wondered if anyone can help me. This is my Web API code:

    public class WebController : ApiController
    {
        public void Get(string telephone, string postcode, List<Client> clients)
        {
        }
    }

    And this is how I'm calling it from jQuery:

    function Client(name, age) {
        this.Name = name;
        this.Age = age;
    }

    var Clients = [];
    Clients.push(new Client("Chris", 27));

    $.ajax({
        url: "/api/Web/",
        data: {
            telephone: "999",
            postcode: "xxx xxx",
            clients: Clients
        }
    });

    But the "clients" parameter always comes back as null. I've also tried JSON.stringify(Clients), with the same result. Can anyone see anything obvious I'm missing here?
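
    One common fix (a hedged sketch, not the poster's confirmed solution): complex types generally don't bind from query-string data on a GET, so send the whole payload as a JSON body via POST instead. The server action would then need to accept a matching model from the request body; only the jQuery side is shown here.

    $.ajax({
        url: "/api/Web/",
        type: "POST",
        contentType: "application/json; charset=utf-8",
        data: JSON.stringify({
            Telephone: "999",
            Postcode: "xxx xxx",
            Clients: Clients        // the array built above
        })
    });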

    Read the article

  • Jquery Google Maps Problem

    - by Matias
    Here is the problem. Let's say a jQuery toggle button loads a Google map upon request and hides it later when toggled:

    $('#showmeMap').toggle(function() {
        var map = new GMap2($("#map").get(0));
        var mapCenter = new GLatLng(-2, 20);
        map.setCenter(mapCenter, 12);
        $('#map').show();
    }, function() {
        $('#map').hide();
    });

    Then I add some random markers, and later another function which removes markers from the map:

    $('#destroyMarkersButton').click(function() {
        for (var i = 0; i < gmarkers.length; i++) {
            map.removeOverlay(gmarkers[i]);
        }
    });

    When clicking on the button I get the error "map is undefined". My thought was to define the Google map object globally:

    map = new GMap2($("#map").get(0));

    which works perfectly in Firefox; however, the map fails to load in Internet Explorer! Any suggestions?
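
    A hedged sketch of the usual scoping fix: declare the variable once, with an explicit var, in a scope both handlers can see. Note that IE exposes element ids as global window properties, so an implicit global named map can collide with the div id="map"; using a different name (gmap here) side-steps that.

    var gmap = null;
    var gmarkers = [];

    $(document).ready(function () {
        $('#showmeMap').toggle(function () {
            if (!gmap) {
                gmap = new GMap2($("#map").get(0));
            }
            gmap.setCenter(new GLatLng(-2, 20), 12);
            $('#map').show();
        }, function () {
            $('#map').hide();
        });

        $('#destroyMarkersButton').click(function () {
            for (var i = 0; i < gmarkers.length; i++) {
                gmap.removeOverlay(gmarkers[i]);
            }
        });
    });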

    Read the article

  • How to save coordinates with Google Map?

    - by Pavel
    Hey everyone. I'm currently developing an app which uses tabs and a Google map. What I want to do is get the GPS positions, say 3 of them, store them in a SQL DB (which I'm already doing) and then display them on the map. I already created a canvas and added it to an overlay, but those points disappear when I change tabs, so I'm wondering if there is a way to somehow store those coordinates with the Google map so I can retrieve them and display them nicely whenever I click the "map tab". Can anyone help?

    Read the article

  • Getting time from a cell in Google Spreadsheet

    - by lostInTransit
    Hi, I have created a Google Spreadsheet form in which, for one of the fields, I get the time (hh:mm:ss) in 24-hour format. I am using Google Apps Script to read values from this spreadsheet and put them into another file. But the other file gets an entire date (I don't know from where) and shows a date from 1899. Also, the time is shown in PDT (not my time zone), and even the time it prints is not correct. How do I read the column value as a string and put it in the other file, OR get just the time (and the correct time) from the column value? I am using a simple row[0][3] to get the values from an array after doing range.getValues(). Thanks.
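
    For reference, a hedged Apps Script sketch of two common ways to handle this (sheet and range names are placeholders). Time-only cells come back from getValues() as Date objects pinned to the spreadsheet epoch (hence the 1899 date) in the script's time zone, so either read the displayed text directly or reformat the Date against the spreadsheet's own time zone.

    function readTimes() {
        var ss = SpreadsheetApp.getActiveSpreadsheet();
        var sheet = ss.getSheetByName('Form Responses');
        var range = sheet.getRange(2, 4, sheet.getLastRow() - 1, 1);

        // Option 1: take the text exactly as the cell displays it.
        var asText = range.getDisplayValues();

        // Option 2: reformat the underlying Date using the spreadsheet's time zone.
        var tz = ss.getSpreadsheetTimeZone();
        var asTimes = range.getValues().map(function (row) {
            return Utilities.formatDate(row[0], tz, 'HH:mm:ss');
        });

        Logger.log(asText);
        Logger.log(asTimes);
    }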

    Read the article

  • how to google a symbol keyword like "$?"

    - by ZhengZhiren
    I saw a trick in a book: in a Linux shell, we can use $? to get the return value of a command. For example, we run a command; if it exits normally, the return value is 0. Then we type $? and we get 0 on the screen. I want to Google this kind of usage, so I have to type the two-character symbol $? in the search box, but the search engine just returns nothing to me... I have looked at the Google help page but still can't find a solution. So my question is: how can I search with this kind of keyword? Or if you can give me some advice on the usage of $? or that sort of thing, that would also be appreciated.

    Read the article

  • jQuery .ajax doesn't load Google Adsense

    - by Sahas Katta
    Hey everyone, just ran into an odd issue. I have a simple WP loop, and instead of regular NEXT/BACK pages I use a jQuery-powered $.ajax GET to append the following page to the current page. It works perfectly. However, I chose to insert a Google AdSense unit every 5th story. Unfortunately, the AdSense units brought in with the second, third, etc. page loads don't render. Here's my loop: 10 stories per page, AdSense after the 4th one.

    <?php $count = 0; ?>
    <?php if ( have_posts() ) : ?>
    <?php while ( have_posts() ) : the_post(); ?>
    <?php $count++; ?>
        <div class="card">
            <div class="title">
                <a href="<?php the_permalink(); ?>" title="<?php the_title(); ?>"><span><?php the_title(); ?></span></a>
            </div>
        </div>
        <?php if ($count == 4) : ?>
        <div class="card">
            <!-- ADSENSE CODE HERE (Straight from Google Adsense Panel, no tweaks.) -->
        </div>
        <?php endif; ?>

    As for my jQuery script, here's how that looks:

    $.ajax({
        url: nextPageLink,
        type: 'GET',
        success: function(data) {
            $(data).find('#reviews .card').appendTo('#reviews');
        },
        error: function(xhr, status, error) {
            $('.loadination').addClass('hidden');
        }
    });

    Keep in mind, I just simplified my code to give you guys an example; the code above is just the essentials. All the loading stuff works perfectly. Images, text, links, etc. all load just fine. However, the Google AdSense unit doesn't. Any help would be appreciated. Thanks and Happy Holidays!
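
    A hedged note and sketch: the classic AdSense snippet relies on document.write, which only runs during the initial page parse, so ad code inside AJAX-injected HTML won't render. With the newer asynchronous ad code (an <ins class="adsbygoogle"> element plus the adsbygoogle.js script), you can ask AdSense to fill slots that were inserted dynamically; the selector and the status-attribute check below are assumptions for illustration.

    $.ajax({
        url: nextPageLink,
        type: 'GET',
        success: function (data) {
            $(data).find('#reviews .card').appendTo('#reviews');

            // For each freshly inserted, not-yet-filled ad slot, request an ad.
            $('#reviews ins.adsbygoogle').not('[data-adsbygoogle-status]').each(function () {
                (window.adsbygoogle = window.adsbygoogle || []).push({});
            });
        },
        error: function (xhr, status, error) {
            $('.loadination').addClass('hidden');
        }
    });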

    Read the article

  • How to parse results from google blog search?

    - by Jooj
    Hello! I'm trying to parse the number of results from a Google Blog Search. Could somebody please help me? http://blogsearch.google.com/blogsearch?hl=en&ie=UTF-8&q=a&btnG=Search+Blogs returns a complete page. On the right side you can see "(Results 1 - 10 of about 2,504,830,546 for a. (0.05 seconds))". How could I get 2,504,830,546? Thanks. Regards.
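
    If you end up scraping the HTML (a sketch under that assumption; an official API, if one is available, would be more robust), a regular expression over the results line is one way to pull the count out. Fetching the page itself is out of scope here; html is assumed to hold the page source as a string, and the exact wording of the results line can change.

    function extractResultCount(html) {
        var match = html.match(/of about\s+([\d,]+)/);
        return match ? parseInt(match[1].replace(/,/g, ''), 10) : null;
    }

    // e.g. extractResultCount('Results 1 - 10 of about 2,504,830,546 for a.')
    //      returns 2504830546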

    Read the article

  • OpenId + Bort + google

    - by zakurahime
    Hi, I'm new to using Ruby and I wanted to implement the OpenID feature that came with the Bort template. I used the Google OpenID URL https://www.google.com/accounts/o8/id in the sign up, but it can't get the email that I used in the OpenID login. Here's a part of my code; it's the standard code from the Bort template:

    def create
      logout_keeping_session!
      if using_open_id?
        authenticate_with_open_id(params[:openid_url],
                                  :return_to => open_id_create_url,
                                  :required => [:nickname, :email]) do |result, identity_url, registration|
          if result.successful?
            create_new_user(:identity_url => identity_url,
                            :login => registration['nickname'],
                            :email => registration['email'])
          else
            failed_creation(result.message || "Sorry, something went wrong")
          end
        end
      else
        create_new_user(params[:user])
      end
    end

    I will really appreciate any help on this; I've been stuck with it for a few days now. Thanks.

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices. For example, the “Sales Representative” job has the following two data security rules implemented on an “Opportunity” to restrict the list of Opportunities that are visible to a Sales Representative:

    - Can view all the Opportunities where they are a member of the Opportunity Team
    - Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team

    While the above conditions may represent the most common access requirements of an Opportunity, some customers may have additional access constraints. This blog post explains:

    - How to discover the data security implemented in Fusion Applications
    - How to customize data security
    - An illustrative example

    a.) How to discover seeded data security definitions

    The Security Reference Manuals explain the Function and Data Security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the “Sales Representative” job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of data security policies on an Opportunity:

    Business Object: Opportunity
    Policy Description: A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team
    Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Business Object: Opportunity
    Policy Description: A Sales Representative can view opportunity where they are an opportunity sales team member with view, edit, or full access
    Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Description of columns:

    Policy Description: Explains the data filters that are implemented as a SQL Where Clause in a Data Security Grant.
    Policy Store Implementation: Provides the implementation details of the Data Security Grant for this policy. In this example the Opportunities listed for a “Sales Representative” job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role.

    b.) How to customize data security

    Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity.
    Solution: Remove the role “Opportunity Territory Resource Duty” from the hierarchy of the “Sales Representative” job role.
    Best Practice: Do not modify the seeded role hierarchy. Create a custom “Sales Representative” job role and build the role hierarchy with the seeded duty roles.

    Requirement 2: Opportunities must be more restrictive based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity.
    Solution: Modify the second data security policy in the above example as follows: A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential. Implementation of this policy is more invasive: the seeded SQL where clause of the data security grant on “Opportunity Territory Resource Duty” has to be modified and the condition that checks for the confidential flag must be added.
    Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition. End date the seeded grant.

    c.) Illustrative Example (Implementing Requirement 2)

    A data security policy contains the following components: Role, Object, Instance Set and Action. Of these four components, the Role and Instance Set are the only ones that are customizable; the Object and the Actions for that object are seed data and cannot be modified. To customize the seeded policy “A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team”:

    1. Find the seeded policy
    2. Identify the Role, Object, Instance Set and Action components of the policy
    3. Create a new custom instance set based on the seeded instance set
    4. End date the seeded policies
    5. Create a new data security policy with the custom instance set

    c-1: Find the seeded policy

    Step 1: Find the Role, open it, and find its Policies.
    Step 2: Click on the Data Security tab, sort by “Resource Name”, and find all the policies with the “Condition” “where they are a territory resource in the opportunity territory team”. In this example, we can see there are 5 policies for “Opportunity Territory Resource Duty” on the Opportunity object.
    Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object. Find the object using the global search option, open it and click on the “Condition” tab. Sort by Display Name and find the instance set. Edit the instance set and copy the “SQL Predicate” to a notepad. Create a new instance set with the modified SQL Predicate from above by clicking on the icon as shown below.
    Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set. Repeat the navigation in Step 2, edit each of the 5 policies and end date them, then create new custom policies with the same information as the seeded policies in the “General Information”, “Roles” and “Action” tabs. In the “Rules” tab, pick the new instance set that was created in Step 3.

    Read the article

  • Workaround for datadude deployment bug - NullReferenceException

    - by jamiet
    I have come across a bug in Visual Studio 2010 Database Projects (aka datadude aka DPro aka Visual Studio Database Development Tools aka Visual Studio Team Edition for Database Professionals aka Juneau aka SQL Server Data Tools) that other people may encounter so, for the purposes of googling, I'm writing this blog post about it. Through my own googling I discovered that a Connect bug had already been raised about it (VS2010 Database project deploy - “SqlDeployTask” task failed unexpectedly, NullReferenceException), and coincidentally enough it was raised by my former colleague Tom Hunter (whom I have mentioned here before as the superhuman Tom Hunter) although it has not (at this time) received a reply from Microsoft. Tom provided a repro, namely that this syntactically valid function definition: CREATE FUNCTION [dbo].[Function1]()RETURNS TABLEASRETURN (    WITH cte AS (    SELECT 1 AS [c1]    FROM [$(Database3)].[dbo].[Table1]   )   SELECT 1 AS [c1]   FROM cte) would produce this nasty unhelpful error upon deployment: C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\TeamData\Microsoft.Data.Schema.TSqlTasks.targets(120,5): Error MSB4018: The "SqlDeployTask" task failed unexpectedly.System.NullReferenceException: Object reference not set to an instance of an object.   at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.VariableSubstitution(SqlScriptProperty propertyValue, IDictionary`2 variables, Boolean& isChanged)   at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.ArePropertiesEqual(IModelElement source, IModelElement target, ModelPropertyClass propertyClass, ModelComparerConfiguration configuration)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareProperties(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonChangeDefinition changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean 
compareFromRootElement, ModelComparisonChangeDefinition& changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareAllElementsForOneType(ModelElementClass type, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean compareOrphanedElements)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareStore(ModelStore source, ModelStore target, ModelComparerConfiguration configuration)   at Microsoft.Data.Schema.Build.SchemaDeployment.CompareModels()   at Microsoft.Data.Schema.Build.SchemaDeployment.PrepareBuildPlan()   at Microsoft.Data.Schema.Build.SchemaDeployment.Execute(Boolean executeDeployment)   at Microsoft.Data.Schema.Build.SchemaDeployment.Execute()   at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()   at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)   Done executing task "SqlDeployTask" -- FAILED.  Done building target "DspDeploy" in project "Lloyds.UKTax.DB.UKtax.dbproj" -- FAILED. Done executing task "CallTarget" -- FAILED.Done building target "DBDeploy" in project It turns out there are a certain set of circumstances that need to be met for this error to occur: The object being deployed is an inline function  (may also exist for multistatement and scalar functions - I haven't tested that) That object includes SQLCMD variable references The object has already been deployed successfully Just to reiterate that last bullet point, the error does not occur when you deploy the function for the first time, only on the subsequent deployment.   Luckily I have a direct line into a guy on the development team so I fired off an email on Friday evening and today (Monday) I received a reply back telling me that there is a simple fix, one simply has to remove the parentheses that wrap the SQL statement. So, in the case of Tom's repro, the function definition simpy has to be changed to: CREATE FUNCTION [dbo].[Function1]()RETURNS TABLEASRETURN --(    WITH cte AS (    SELECT 1 AS [c1]    FROM [$(Database3)].[dbo].[Table1]   )   SELECT 1 AS [c1]   FROM cte--) I have commented out the offending parentheses rather than removing them just to emphasize the point. Thereafter the function will deploy fine. 
I tested this out on my own project this morning and can confirm that this fix does indeed work.   I have been told that the bug CAN be reproduced in the Release Candidate (RC) 0 build of SQL Server Data Tools in SQL Server 2010 so am hoping that a fix makes it in for the Release-To-Manufacturing (RTM) build. Hope this helps @jamiet

    Read the article

  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c Launch Webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead, and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow.

    Author: Irem Radzik, Senior Principal Product Director, Oracle

    Organizations that want to use IT as a strategic point of differentiation prefer Oracle’s complete application offering to drive better business performance and optimize their IT investments. These enterprise applications are at the center of business operations, and they contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability.

    Oracle’s data integration products, Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality, provide the core foundation for bringing data together from various business-critical systems to gain a broader, unified view. Going beyond 3rd-party offerings, Oracle’s data integration products facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime.

    Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real time to a dedicated operational reporting environment. This solution allows the application users to offload the resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data allows optimizing the reporting environment and even decreasing costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open system platforms.

    With its real-time data replication capabilities, GoldenGate is also certified to enable application upgrades and database/hardware/OS migration without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management, and JD Edwards for supporting zero-downtime upgrades to the latest app version. GoldenGate synchronizes a parallel, upgraded system with the old version in real time, thus enabling continuous operations during the process.
Oracle GoldenGate is also certified for minimal downtime database migrations for Oracle E-Business Suite and other key applications. GoldenGate’s solution also minimizes the risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate’s bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographically load balancing and high availability for ATG customers. For enabling better business insight, Oracle Data Integration products power Oracle BI Applications with high performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps to integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data-lineage, which helps business users identify report-to-source capabilities. ODI is integrated with Oracle GoldenGate and provides Oracle BI Applications customers the option to use real-time transactional data in analytics, and do so non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics, but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about Oracle Data Integration products latest 12c version in our upcoming launch webcast and access the app-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior.  One of the key topics to understand is the difference between directly using database user and password values versus mapping from WLS user and password to the associated database values.   The direct use of database credentials is relatively new to WLS, based on customer feedback.  Some of the trade-offs are covered in this article. Credential Mapping vs. Database Credentials Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password).  By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and are converted to a database user and password using a credential map associated with the data source.  If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used.  Because of this defaulting mechanism, you should be careful what permissions are granted to the default user.  Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users). To create an entry in the credential map: 1) First create a WLS user.  In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.  2) Second, create the mapping.  In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information. The advantages of using the credential mapping are that: 1) You don’t hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password and 2) It provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed. You can cut down the number of users that have access to a data source to reduce the user maintenance overhead.  For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call.  Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source.  For instance, there may be a ‘Sales’ DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access and a servlet is built that hard-wires the specific data source access credentials in its connection request.  It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source.  This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS. 
The disadvantages of using the credential map are that: 1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries. 2) You can’t share a credential map between data sources so they must be duplicated. Some applications prefer not to use the credential map.  Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map.  This is enabled by setting the “use-database-credentials” to true.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help. Use Database Credentials is not currently supported for Multi Data Source configurations.  When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes: 1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0). 2. oracle-proxy-session (this interaction is first available in 10.3.6.0). 3. set client identifier (this interaction is available by patch in 10.3.6.0).  Note that in the data source schema, the set client identifier feature is poorly named “credential-mapping-enabled”.  The documentation and the console refer to it as Set Client Identifier. To review the behavior of credential mapping and using database credentials: - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used.  If you always specify a user/password when getting a connection, you only need credential map entries for those specific users. - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used.  If you specify a user/password when getting a connection, that user will be used for the credentials.  WLS users are not involved at all in the data source connection process.

    Read the article
