Search Results

Search found 59041 results on 2362 pages for 'data replication'.


  • Development Environment for Testing MySQL Replication

    - by Dave Morris
    Is there an easy way to set up an environment on one machine (or a VM) with MySQL replication? I would like to put together a proof of concept of MySQL replication with one master write instance and two slave instances for reads. I could do it across 2 or 3 VMs running on my computer, but that would really bog down my system; I'd rather have everything running on the same VM. What's the best way to prove out scalability solutions like this in a local dev environment? Thanks for your help, Dave
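    Once two or three mysqld instances are running on the one machine, a quick way to sanity-check the proof of concept is a small script that writes to the master and confirms the row shows up on each read slave. This is only a sketch: the ports, credentials, database name, and the mysql-connector-python driver are assumptions, not part of the original question.

        import time
        import mysql.connector  # assumed driver

        MASTER = {"host": "127.0.0.1", "port": 3306, "user": "root", "password": "secret"}
        SLAVES = [dict(MASTER, port=3307), dict(MASTER, port=3308)]  # two local read slaves

        def replication_smoke_test():
            master = mysql.connector.connect(database="poc", **MASTER)
            cur = master.cursor()
            cur.execute("CREATE TABLE IF NOT EXISTS ping (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(50))")
            cur.execute("INSERT INTO ping (note) VALUES ('hello from master')")
            master.commit()
            time.sleep(2)  # give the slaves a moment to apply the change
            for cfg in SLAVES:
                slave = mysql.connector.connect(database="poc", **cfg)
                scur = slave.cursor()
                scur.execute("SELECT COUNT(*) FROM ping")
                print(f"slave on port {cfg['port']} sees {scur.fetchone()[0]} row(s)")

        replication_smoke_test()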

    Read the article

  • Cassandra replication system - how it works

    - by inquisitor
    Does Cassandra replicate only on writes (with the chosen consistency level)? Is there an auto-replication option for absent nodes, if I want symmetric data on every node? If I plug a new node into the cluster there is no automatic replication - how do I sync data from the other nodes to the new one? If I want multi-master replication (2 nodes) with a slave backup (1 node), as known from MySQL, what is the proper way to set up and manage that on Cassandra (3 nodes)? How about two nodes?

    Read the article

  • Switching MySQL Replication Format

    - by NNN
    I'm currently using statement-based replication. After upgrading to MySQL 5.1, I'm considering using row-based replication. After reading the docs it seems that you can change the format on the master on the fly. Will the slave automatically adapt to whatever type of binary log it is sent? Do I have to make any changes to the slave or master to get ready for the switch, or can I simply modify the binlog_format variable on the master?

    Read the article

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 university community members that their personal information had been breached when a laptop was stolen. To ensure this wouldn't happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments.

    Organizations often copy production data, which contains sensitive information, into non-production environments so they can test applications and systems using "real world" information. Data in non-production environments has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. As in production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand.

    Cornell's applications and databases help carry out the administrative and academic mission of the university. They run Oracle PeopleSoft Campus Solutions, which includes highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee's laptop was stolen. Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and providing free credit checks and identity theft protection services, all of which cost money and took time away from other projects. To avoid this issue in the future, Cornell came up with several options, one of which was to sanitize the testing and training environments.

    "The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us. In the end we chose Oracle's solution based on its architecture and its functionality." – Tony Damiani, Database Administration and Business Intelligence, Cornell University

    The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, while still supporting all the previous activities in the non-production environments. Tony concludes, "What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only. Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal."

    With Oracle Data Masking, organizations like Cornell can:
    • Make application data securely available in non-production environments
    • Prevent application developers and testers from seeing production data
    • Use an extensible template library and policies for data masking automation
    • Gain the benefits of referential integrity so that applications continue to work

    Listen to the podcast to hear the complete interview. Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are a lack of knowledge and a lack of communication within a company. The lack of knowledge in regards to data models or data modeling can take many forms.

    Company Culture Knowledge
    Whether documented or undocumented, the existing business rules of a company can affect how data is modeled. For example, if a company only allows one assigned person per customer to manipulate that customer's record, then a data model that includes an associative table joining customers and employees would be unneeded, because the many-to-many relationship it creates would allow multiple employees to handle one customer.

    Technical Knowledge
    Depending on a data modeler's proficiency, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that models are developed from multiple perspectives of the system, the company, and the original problem. In addition, the tools that a company selects for creating data models can also affect the accuracy of a model if the designers are not familiar with the tools or the tools are too complex for them to use.

    Existing System Knowledge
    In order for a data modeler to model data for an existing system so that new changes can be applied to it, they need to know at least the basic concepts of that system so that they can work within it. This promotes reuse of data and prevents duplication of data.

    Project Knowledge
    This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data before a client has formalized any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want. Further issues can arise when certain stakeholders of a project are not consulted prior to the design, or after the project is over, because this causes misunderstandings and confusion for the end user, and the project may not solve the original problem it was intended to solve.

    One common thread between each type of knowledge is that all of these barriers can be avoided through good communication. For example, if a modeler is new to a company then they should ask longer-serving employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool then they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the current project, but also in future projects they do for the company. Additionally, if a project is not clearly defined before a data modeler is assigned to it, then it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear, so that the data model they create is accurately aligned with the project.

    Read the article

  • Exchange Server 2003 Replication

    - by Campo
    We have 2 Exchange 2003 servers. One is the master and is currently hosting all the mailboxes and public stores. I would like to set up a standby Exchange server that replicates all the mailboxes and public stores from the primary Exchange server. This way, if the primary goes down, we can fail over to the standby. I have searched all over for some guidance but have not been able to find anything detailing this for Exchange 2003. Any help is much appreciated.

    Read the article

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario: step 1: the user signs up and the user record is inserted into the master database; step 2: the user then tries to log in, and the login process queries the database for the user. However, this query hits the slave database, and because the user record has not yet been replicated to the slave it returns an error that the user does not exist. This is a pretty trivial example, but I can see how it can apply to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
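    One common answer is a "read your own writes" rule: reads that must see a user's very recent write go to the master, while everything else goes to the slave. The sketch below is only an illustration of that idea; the get_master/get_slave helpers, the in-memory signup cache, and the 5-second lag window are assumptions, not part of the question.

        import time

        REPLICATION_GRACE_SECONDS = 5   # assumed upper bound on slave lag
        recent_signups = {}             # username -> signup time (illustrative in-memory cache)

        def record_signup(username):
            # Call this right after the INSERT on the master commits.
            recent_signups[username] = time.time()

        def connection_for_login(username, get_master, get_slave):
            """Route the login query to the master while the new user record
            may still be missing on the slave; otherwise read from the slave."""
            signed_up_at = recent_signups.get(username)
            if signed_up_at is not None and time.time() - signed_up_at < REPLICATION_GRACE_SECONDS:
                return get_master()
            return get_slave()

    In a web application the same effect is often achieved with a session flag or sticky routing to the master for a short period after the write.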

    Read the article

  • ZFS replication between 2 ZFS file systems

    - by XO01
    I initially replicated tank/storage1 to usb1/storage1-slave (shown below), and then (deliberately) destroyed the snapshot I replicated from. By doing this, did I lose the ability to incrementally (zfs send -i) replicate between these two file systems? What's the best way to approach syncing these file systems after destroying that snapshot?

        # zfs list
        NAME                  USED  AVAIL  REFER  MOUNTPOINT
        tank                  128G   100G    23K  /tank
        tank/storage1         128G   100G   128G  /tank/storage1
        usb1                  122G   563G    24K  /usb1
        usb1/storage1-slave   122G   563G   122G  /usb1/storage1-slave
        usb1/storage2          21K   563G    21K  /usb1/storage2

    What if I had initially rsync'd tank/storage1 to usb1/storage1-slave, and then decided to replicate incrementally via zfs send -i?
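    For context, an incremental zfs send needs a snapshot that still exists on the sending side and is also present on the receiving side; with no common snapshot left, the usual recovery is to re-seed once with a full send and then keep a rolling base snapshot for later -i sends. A rough sketch of that sequence, driven from Python purely for illustration (dataset names taken from the listing above; run only after verifying the target can be overwritten):

        import subprocess

        def run(cmd):
            print("+", cmd)
            subprocess.run(cmd, shell=True, check=True)

        def reseed_and_resume(src="tank/storage1", dst="usb1/storage1-slave"):
            # One-time full re-seed: -F rolls the target back so it matches the stream.
            run(f"zfs snapshot {src}@base")
            run(f"zfs send {src}@base | zfs receive -F {dst}")
            # From now on incremental sends work again, because @base exists on both sides.
            run(f"zfs snapshot {src}@next")
            run(f"zfs send -i {src}@base {src}@next | zfs receive {dst}")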

    Read the article

  • MySQL slave replication: Seconds_Behind_Master increasing?

    - by geekmenot
    I started a MySQL slave using innobackupex, and Read_Master_Log_Pos and Relay_Log_Pos are updating; however, Seconds_Behind_Master keeps increasing (it's at 496637 currently and climbing). Any ideas on how to fix this?

        mysql> SHOW SLAVE STATUS\G
        *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 10.8.25.111
                      Master_User: repl
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: mysql-bin.005021
              Read_Master_Log_Pos: 279162266
                   Relay_Log_File: mysql-relay-bin.000004
                    Relay_Log_Pos: 378939436
            Relay_Master_Log_File: mysql-bin.004997
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table:
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 0
                       Last_Error:
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 378939290
                  Relay_Log_Space: 26048998487
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: No
               Master_SSL_CA_File:
               Master_SSL_CA_Path:
                  Master_SSL_Cert:
                Master_SSL_Cipher:
                   Master_SSL_Key:
            Seconds_Behind_Master: 497714
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 0
                    Last_IO_Error:
                   Last_SQL_Errno: 0
                   Last_SQL_Error:
      Replicate_Ignore_Server_Ids:
                 Master_Server_Id: 1
        1 row in set (0.00 sec)
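    For what it's worth, a small polling script makes it easier to see whether the slave is slowly catching up or steadily falling further behind. A minimal sketch, assuming the mysql-connector-python driver and a user allowed to run SHOW SLAVE STATUS on the slave; these details are assumptions, not from the question:

        import time
        import mysql.connector  # assumed driver; any DB-API connector would work

        def watch_lag(host, user, password, interval=60):
            conn = mysql.connector.connect(host=host, user=user, password=password)
            cur = conn.cursor(dictionary=True)
            previous = None
            while True:
                cur.execute("SHOW SLAVE STATUS")
                row = cur.fetchone()
                lag = row["Seconds_Behind_Master"]
                if lag is None:
                    print("replication not running (Seconds_Behind_Master is NULL)")
                else:
                    trend = "" if previous is None else ("(catching up)" if lag < previous else "(falling behind)")
                    print(f"Seconds_Behind_Master={lag} {trend}")
                    previous = lag
                time.sleep(interval)

        # Run this against the slave host, not the master:
        # watch_lag("slave-host", "monitor_user", "secret", interval=60)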

    Read the article

  • SqlServer2005 Enterprise Fast Recovery, SqlAgent Availability, and Replication

    - by automatic
    I have a database under SQL Server 2005 Enterprise 64-bit SP3 that is in phase 3 of 3 of recovery after a reboot without a normal shutdown. It looks like, with fast recovery, the database became available when recovery moved into phase 3. However, it seems (based on a message in the SQL Agent log) that SQL Agent is "started" but not available to run jobs until recovery completes. I have other databases on the same server that are transactional publications. It seems to me that if I let users update the published databases, transactions will start to build up in the log, but won't be moved to the distribution database or on to the subscribers because SQL Agent isn't running jobs. Should I be overly concerned about performing updates before recovery completes?

    Read the article

  • Replication from OSX OpenDirectory to OpenLDAP

    - by natacado
    I have an OpenDirectory server running on an OS X Server machine, and I'd like to increase the reliability of the service by adding a slave server. The problem is, I only have one OS X Server but I have plenty of Linux servers available. I'm happy with Apple's tools that integrate with OpenDirectory, but given Apple's recent discontinuation of the Xserve, I'm not particularly interested in continuing with Apple hardware. I remember hearing that OpenDirectory is (now distantly) based on the OpenLDAP code base; is there any way I could replicate from OpenDirectory to OpenLDAP instead of having to purchase another OS X server?

    Read the article

  • Using TrueCrypt with File Replication on Windows Server

    - by neildeadman
    We have a few folders that are set to replicate to a DR file server off-site. One of these folders contains a file that is a TrueCrypt volume container. When this file is mounted in TrueCrypt, the file won't replicate (fair enough!). I'm looking at alternatives to improve this situation. One solution I currently have is a scheduled task that unmounts the volume, and then every morning, as the volume is needed, someone mounts it again. This is slightly painful because the password is known by a few people (I'm not one, and neither are the colleagues who would be performing the mounting), so we'd need to continually get them to come over and type it in. The other idea I had was to have one TrueCrypt container on each server and replicate the contents while they are mounted. I wasn't able to get TrueCrypt to see the mounted volume, so I guess this is a no-go. Any other solutions I have missed, or a fix for the above?

    Read the article

  • MySQL replication, slow resyncing of slave after an error

    - by James Hackett
    I have a slave that got an error about a month or so ago and fell way behind the master. I fixed the error and it is now playing catch-up with the master, but it is going very slowly - about 1.3x real time. The server was using less than 10% of its database resources when these writes first happened, so the speed of the server shouldn't be an issue. Are there any settings I can change to help the slave catch up with the master?

    Read the article

  • How often do you use data structures (i.e. binary trees, linked lists) in your jobs/side projects?

    - by Chris2021
    It seems to me that, for everyday use, more primitive data structures like arrays get the job done just as well as a binary tree would. My question is: how common is it to use these structures when writing code for projects at work or projects that you pursue in your free time? I understand the better insertion/deletion/sorting times for certain structures, but would that really matter that much if you were working with a relatively small amount of data?

    Read the article

  • LVM, Soft RAID1, and Replication?

    - by mtkoan
    Hi all, I am practicing putting together an HA file server. It is a Linux server with two 1.5 TB hard drives. My plan is to use LVM to manage the physical volumes into logical volumes for /, /home, and /var, then use md (soft RAID 1) to mirror the image onto the second HDD, and then use DRBD to mirror the entire setup to another server. Is this overkill? Would I be okay with just md and DRBD? The system will serve users' home directories (~100) and probably some groupware or other local intranet applications. On my own machines I've always separated the root and /home partitions so that if I break something I can easily reinstall the OS. Should I follow that same theory here? If so I need LVM, because I really can't predict where we'll need more space, /var or /home.

    Read the article

  • Selective replication with CouchDB

    - by FRotthowe
    I'm currently evaluating possible solutions to the following problem: a set of data entries must be synchronized between multiple clients, where each client may only view (or even know about the existence of) a subset of the data. Each client "owns" some of the elements, and the decision about who else can read or modify those elements may only be made by the owner. To complicate this situation even more, each element (and each element revision) must have a unique identifier that is equal for all clients. While the latter sounds like a perfect task for CouchDB (and a document-based data model would fit my needs perfectly), I'm not sure whether the authentication/authorization subsystem of CouchDB can handle these requirements: while it should be possible to restrict write access using validation functions, there doesn't seem to be a way to authorize read access. All solutions I've found for this problem propose routing all CouchDB requests through a proxy (or an application layer) that handles authorization. So, the question is: is it possible to implement an authorization layer that filters requests to the database so that access is granted only to documents that the requesting client has read access to, and still use the replication mechanism of CouchDB? Simplified, this would be some kind of "selective replication" where only some of the documents, and not the whole database, are replicated. I would also be thankful for pointers to detailed information about how replication works. The CouchDB wiki and even the "Definitive Guide" book are not too specific about that.
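    One building block worth knowing about is CouchDB's filtered replication: the replicator can be given a filter function so that only documents passing the filter are copied to the target. It does not, on its own, stop a client from reading the source database directly, so read authorization still has to live somewhere the client cannot bypass. A rough sketch of triggering such a filtered pull replication over the HTTP API, assuming a design document _design/auth with a by_owner filter already exists and using the Python requests library; the names and URL are illustrative:

        import requests

        COUCH = "http://localhost:5984"   # assumed server address

        # Filter function stored in the source database under _design/auth, "filters.by_owner":
        #   function(doc, req) { return doc.owner === req.query.owner; }

        def replicate_for_owner(source_db, target_db, owner):
            """Replicate only the documents owned by `owner` into the client's database."""
            body = {
                "source": f"{COUCH}/{source_db}",
                "target": f"{COUCH}/{target_db}",
                "filter": "auth/by_owner",
                "query_params": {"owner": owner},
            }
            resp = requests.post(f"{COUCH}/_replicate", json=body)
            resp.raise_for_status()
            return resp.json()

        print(replicate_for_owner("shared_data", "client_alice", "alice"))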

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to Remove Concurrency Issues in Load Test Scripts

    Some scripts act on information returned by the server, e.g. they act on the first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way. As the load test cases should be carried out in a comparable and straightforward manner, simply cancelling a transaction in case a collision occurs is clearly not an option. If you increase the number of virtual users, that approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user), but later steps would rarely be visited successfully, or at all, depending on the application logic.

    A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue is allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) of the transaction at any one time. Once a virtual user has passed the critical path it dequeues itself from the head of the queue and continues with its actions. In theory this allows virtual users to run all steps of the transaction that are not part of the critical path in parallel. In practice this is rarely the case, and it does not allow adding more than N users to a transaction without causing delays due to virtual users waiting in the queue, N being the total transaction time divided by the sum of the times of all critical steps in the transaction.

    This problem can be mitigated by allowing multiple queues to act on individual segments of the list of actions (e.g. a per-country filter, an ends-with-0..9 filter, etc.), but that requires additional handling of these queues and of the slots for the virtual users at the head of each queue in order to maintain mutually exclusive access to the first element of the list returned by the server at any one time during the load test. Such improved handling of multiple queues and/or multiple slots is beyond the scope of this paper.

    Shared Data Services Pre-Requisites

    Start WebLogic Server to host Shared Data Services
    You will have to make sure that your WebLogic server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If, however, you installed the default package including OLT and OTM, you may follow the instructions below to start and verify the WebLogic installation. To start the WebLogic Server deployed underneath Oracle Load Testing and/or Oracle Test Manager, go to your Start menu, Oracle Application Testing Suite, and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu. To verify the service has been started you can run the Microsoft Management Console for Services by selecting Run from the Start menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service; once it has changed its status from Starting to Started you can proceed to verify the login. Please note that this may take several minutes, up to 10 minutes depending on the strength of your CPU horsepower.

    Verify WebLogic Server user credentials
    You will have to make sure that your WebLogic Server is installed and started. Next open the Oracle WebLogic Server Administration Console at http://localhost:8088/console. It may take a while until the application is deployed and started; a placeholder page may be displayed until the Administration Console has been deployed on the fly. Afterwards you can log in using the username oats and the password that you selected at install time for your Application Testing Suite administrative purposes. This will bring up the Home page of your WebLogic Server. You have actually already verified that you are able to log in with these credentials. However, if you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab. Here you could add users to your WebLogic Server which could be used in the later steps. Details on the Groups required for such a custom user to work exceed this quick overview and have to be chosen with the WebLogic Server Administration Guide in mind.

    Shared Data Services pre-requisites for load testing
    OpenScript Preferences have to be set to enable encryption and to provide a default Shared Data Service connection for playback. These are the pre-requisites you want in place for load testing with Shared Data Services. Please note that the usage of the Connection Parameters (individual directive in the script) for Shared Data Services did not play back reliably in the current version 9.20.0370 of Oracle Load Testing (OLT), and encryption of credentials still seemed to be mandatory as well.

    General Encryption settings
    Select OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.

    Enable global shared data access credentials
    Select OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for the WebLogic Server hosting Shared Data Services. Please note that you may want to replace localhost in Address with the host's real name in case you plan to run load tests with load test agents running on remote systems.

    Queued Processing of Transactions

    Enable Shared Data Services Module in Script Properties
    The Shared Data Services Module has to be enabled for each script that wants to employ the Shared Data Service queue functionality in OpenScript. It can be enabled under the Script menu by selecting Script Properties. On the Script Properties dialog select the Modules section and check Shared Data to enable the Shared Data Service Module for your script. Checking the Shared Data Services option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript:

        @ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;

    Record your script
    Record your script as usual and then add the following pieces for queue handling in the Initialize code block, before the first step and after the last step of your critical path, and in the Finalize code block. The Java code to be added at the individual locations is explained in the following sections in full detail.

    Create a Shared Data Queue in Initialize
    To create a Shared Data Queue, go to the Java view of your script and add the following statements to the initialize() code block:

        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);

    This creates an instance of the Shared Data Queue object named queueA which is maintained for up to 120 minutes. If you want to use the code for multiple scripts, make sure to use a different queue name for each one here and in the subsequent steps. You may even consider using a dynamic queue name based on the filters of the result list being concurrently accessed.

    Prepare a unique id for each iteration
    In order to keep track of individual virtual users in our queue we need to create a unique identifier from the virtual user id and the username in use, right after retrieving the next record from our databank file:

        getDatabank("Usernames").getNextDatabankRecord();
        getVariables().set("usernameValue1",
            "VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
        String usernameValue = getVariables().get("usernameValue1");
        info("Now running virtual user " + usernameValue);

    As you can see from the above code block, we set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}, which is a concatenation of the virtual user id and the iteration number for general uniqueness, plus the username from our databank, the timestamp and a random number to make it even more unique and to ease spotting of errors. Not all of these fields are actually required to make it unique, but adding the queue name may also be considered to help troubleshoot multiple queues. The value is then retrieved with the getVariables().get() method call and assigned to the usernameValue String used throughout the script. Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered in order to remove the concurrency of multiple virtual users running with the same user id and therefore accessing the same "My Inbox" in step 6. This effectively gives each virtual user its own user id from the databank file; make sure you have enough user ids to remove this second hurdle.

    Enqueue and attend the queue before the critical path
    To maintain the right order of virtual users being allowed into the critical path of the transaction, the following pseudo step has to be added in front of the first critical step. In the case of this example that is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us.

        beginStep("[0] Waiting in the Queue", 0);
        {
            info("Enqueued virtual user " + usernameValue + " at the end of queueA");
            sharedData.offerLast("queueA", usernameValue);
            info("Wait until the user is the first in queueA");
            String queueValue1 = null;
            do {
                // we wait for at least 0.7 seconds before we check the head of the
                // queue. This is the time it takes one user to move through the
                // critical path, i.e. pass steps [5] Enter country and [6] Assign to me
                Thread.sleep(700);
                queueValue1 = (String) sharedData.peekFirst("queueA");
                info("The first user in queueA is currently: '" + queueValue1 + "' "
                    + queueValue1.getClass() + " length " + queueValue1.length());
                info("The current user is '" + usernameValue + "' " + usernameValue.getClass()
                    + " length " + usernameValue.length()
                    + ": indexOf " + usernameValue.indexOf(queueValue1)
                    + " equals " + usernameValue.equals(queueValue1));
            } while (queueValue1.indexOf(usernameValue) < 0);
            info("Now the user is the first in queueA");
        }
        endStep();

    This enqueues the username at the tail of our queue. It then waits for at least 700 milliseconds, the time it takes for one user to exit the critical path, and compares the head of the queue with its own username. This last step is repeated while the two are not equal (indexOf less than zero). Once they are equal, indexOf yields a value of zero or larger and we perform the critical steps.

    Dequeue after the critical path
    After the virtual user has left the critical path and completed its last step, the following code needs to dequeue the virtual user. In the case of our example this is right after the action has actually been assigned to the virtual user. This allows the next virtual user to retrieve the list of actions still available and in turn make its selection/assignment.

        info("Get and remove the current user from the head of queueA");
        String pollValue1 = (String) sharedData.pollFirst("queueA");

    The current user is removed from the head of the queue. The next one will now be able to match its username against the head of the queue.

    Clear and destroy the queue to finish
    When the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script, in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason.

        info("Clear queueA");
        sharedData.clearQueue("queueA");
        info("Destroy queueA");
        sharedData.destroyQueue("queueA");

    The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing they will be caught in a loop. I found it better to maintain a separate Reset Queue script which contains only the following code in the initialize() block. I call this script to make sure the queue is cleared in between multiple load test runs. This script could even be added as the first one in a larger scenario, which would execute it only once at the very start of the load test and make sure the queues do not contain any stale entries.

        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);
        info("Clear queueA");
        sharedData.clearQueue("queueA");

    This creates a Shared Data Queue instance of queueA and clears all entries from this queue.

    Monitoring the queue
    While creating the scripts it was useful to monitor the contents, i.e. the current first user in the queue. The following code block makes sure the Shared Data Queue is accessible in the initialize() block:

        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);

    In the run() block the following code continuously monitors the first element of the queue and writes an informational message with the current usernameValue to the Result window:

        info("Monitor the first users in queueA");
        String queueValue1 = null;
        do {
            queueValue1 = (String) sharedData.peekFirst("queueA");
            if (queueValue1 != null)
                info("The first user in queueA is currently: '" + queueValue1 + "' "
                    + queueValue1.getClass() + " length " + queueValue1.length());
        } while (true);

    This script can be run from OpenScript in parallel to a load test performed by Oracle Load Testing. However, it is not recommended to run this in a production load test, as the performance impact is unknown. Accessing the queue's head with the peekFirst() method has been reported with about 2 seconds response time by both OpenScript and OLT. It is advised to log a Service Request to see if this could be lowered in future releases of Application Testing Suite, as pollFirst(), and even offerLast() writing to the tail of the queue, usually returned after an average of 0.1 seconds.

    Debugging the queue
    While debugging the scripts the following was useful to remove single entries from the head of the queue, i.e. the current first user. The following code block makes sure the Shared Data Queue is accessible in the initialize() block:

        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);

    In the run() block the following code removes the first element of the queue and writes an informational message with its value to the Result window:

        info("Get and remove the current user from the head of queueA");
        String pollValue1 = (String) sharedData.pollFirst("queueA");
        info("The first user in queueA was currently: '" + pollValue1 + "' "
            + pollValue1.getClass() + " length " + pollValue1.length());

    References
    Oracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05], Chapter 17 Using the Shared Data Module
    http://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zip
    Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04], Administration Console Online Help - Manage users and groups
    http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm

    Read the article

  • New version of SQL Server Data Tools is now available

    - by jamiet
    If you don't follow the SQL Server Data Tools (SSDT) blog then you may not know that two days ago an updated version of SSDT was released (and by SSDT I mean the database projects, not the SSIS/SSRS/SSAS stuff), along with a new version of the SSDT Power Tools. This release incorporates an updated version of the SQL Server Data Tier Application Framework (aka DAC Framework, aka DacFX), which you can read about on Adam Mahood's blog post SQL Server Data-Tier Application Framework (September 2012) Available. DacFX is essentially all the gubbins that you need to extract and publish .dacpacs, and according to Adam's post it incorporates a new feature that I think is very interesting indeed:

    Extract DACPAC with data – Creates a database snapshot file (.dacpac) from a live SQL Server or Windows Azure SQL Database that contains data from user tables in addition to the database schema. These packages can be published to a new or existing SQL Server or Windows Azure SQL Database using the SqlPackage.exe Publish action. Data contained in the package replaces the existing data in the target database.

    In short, .dacpacs can now include data as well as schema. I'm very excited about this because one of my long-standing complaints about SSDT (and its many forebears) is that whilst it has great support for declarative development of schema it does not provide anything similar for data – if you want to deploy data from your SSDT projects then you have to write post-deployment MERGE scripts. This new feature for .dacpacs does not change that situation yet, however it is a very important pre-requisite, so I am hoping that a feature to provide declaration of data (in addition to the declaration of schema that we have today) is going to light up in SSDT in the not too distant future.

    Read more about the latest SSDT, Power Tools & DacFX releases at:
    • Now available: SQL Server Data Tools - September 2012 update! by Janet Yeilding
    • New SSDT Power Tools! Now for both Visual Studio 2010 and Visual Studio 2012 by Sarah McDevitt
    • SQL Server Data-Tier Application Framework (September 2012) Available by Adam Mahood

    @Jamiet

    Read the article

  • How do I keep a table in sync across multiple SQL Databases?

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. This is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (guid), WorkerName (varchar), and ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-database queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in one database and then use triggers to keep it updated in the other 3 databases. In order to be part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

        SELECT <published_columns> FROM [dbo].[tblPlan]
        WHERE [ClientID] IN
            (SELECT ClientID FROM [dbo].[tblWorkerOwnership] WHERE WorkerID = SUSER_SNAME())

    Filters can be chained together; this next one sits below the first one, so it only pulls from the first one's filtered set.

        SELECT <published_columns> FROM [dbo].[tblPlan]
        INNER JOIN [dbo].[tblHealthAssessmentReview]
            ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app." Go figure!

    Read the article

  • How Oracle Data Integration Customers Differentiate Their Business in Competitive Markets

    - by Irem Radzik
    With data being a central force in driving innovation and competing effectively, data integration has become a key IT approach to remove silos and ensure working with consistent and trusted data. Especially with the release of the 12c version, Oracle Data Integrator and Oracle GoldenGate offer easy-to-use and high-performance solutions that help companies with their critical data initiatives, including big data analytics, moving to cloud architectures, modernizing and connecting transactional systems, and more. In a recent press release we announced the great momentum and analyst recognition Oracle Data Integration products have achieved in the data integration and replication market. In this press release we described some of the key new features of Oracle Data Integrator 12c and Oracle GoldenGate 12c. In addition, a few of our 4500+ customers explained how Oracle's data integration platform helped them achieve their business goals. In this blog post I would like to go over what these customers shared about their experience.

    Land O'Lakes is one of America's premier member-owned cooperatives, and offers an extensive line of agricultural supplies, as well as production and business services. Rich Bellefeuille, manager, ETL & data warehouse for Land O'Lakes, told us how GoldenGate helped them modernize their critical ERP system without impacting service and how they are moving to new projects with Oracle Data Integrator 12c: "With Oracle GoldenGate 11g, we've been able to migrate our enterprise-wide implementation of Oracle's JD Edwards EnterpriseOne, ERP system, to a new database and application server platform with minimal downtime to our business. Using Oracle GoldenGate 11g we reduced database migration time from nearly 30 hours to less than 30 minutes. Given our quick success, we are considering expansion of our Oracle GoldenGate 12c footprint. We are also in the midst of deploying a solution leveraging Oracle Data Integrator 12c to manage our pricing data to handle orders more effectively and provide a better relationship with our clients. We feel we are gaining higher productivity and flexibility with Oracle's data integration products."

    ICON, a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries, highlighted the competitive advantage that a solid data integration foundation brings. Diarmaid O'Reilly, enterprise data warehouse manager, ICON plc, said "Oracle Data Integrator enables us to align clinical trials intelligence with the information needs of our sponsors. It helps differentiate ICON's services in an increasingly competitive drug-development industry." You can find more info on ICON's implementation here.

    A popular use case for Oracle GoldenGate's real-time data integration is offloading operational reporting from critical transaction processing systems. SolarWorld, one of the world's largest solar-technology producers and the largest U.S. solar panel manufacturer, implemented Oracle GoldenGate for real-time data integration of manufacturing data for fast analysis. Russ Toyama, U.S. senior database administrator for SolarWorld, told us real-time data helps their operations and that GoldenGate's solution supports the high performance of their manufacturing systems: "We use Oracle GoldenGate for real-time data integration into our decision support system, which performs real-time analysis for manufacturing operations to continuously improve product quality, yield and efficiency. With reliable and low-impact data movement capabilities, Oracle GoldenGate also helps ensure that our critical manufacturing systems are stable and operate with high performance." You can watch the full interview with SolarWorld's Russ Toyama here.

    Starwood Hotels and Resorts is one of the many customers that found out how well Oracle Data Integration products work with Oracle Exadata. Gordon Light, senior director of information technology for Starwood Hotels, says they had a notable performance gain in loading their Oracle Exadata reporting environment: "We leverage Oracle GoldenGate to replicate data from our central reservations systems and other OLTP databases – significantly decreasing the overall ETL duration. Moving forward, we plan to use Oracle GoldenGate to help the company achieve near-real-time reporting." You can listen to the Starwood Hotels implementation story here.

    Many companies combine the power of Oracle GoldenGate with Oracle Data Integrator to have a single, integrated data integration platform for a variety of use cases across the enterprise. Ufone is another good example of that. The leading mobile communications service provider of Pakistan has improved customer service using timely customer data in its data warehouse. Atif Aslam, head of management information systems for Ufone, says: "Oracle Data Integrator and Oracle GoldenGate help us integrate information from various systems and provide up-to-date and real-time CRM data updates hourly, rather than daily. The applications have simplified data warehouse operations and allowed business users to make faster and better informed decisions to protect revenue in the fast-moving Pakistani telecommunications market." You can read more about Ufone's use case here.

    In our Oracle Data Integration 12c launch webcast back in November we also heard from BT's CTO Surren Parthab about their use of GoldenGate for moving to a private cloud architecture. Surren also shared his perspectives on the Oracle Data Integrator 12c and Oracle GoldenGate 12c releases. You can watch the video here.

    These are only a few examples of leading companies that have made data integration and real-time data access a key part of their data governance and IT modernization initiatives. They have seen real improvements in how their businesses operate and differentiate in today's competitive markets. You can read about other customer examples in our ebook The Path to the Future and access resources including white papers, data sheets, podcasts and more via our Oracle Data Integration resource kit.

    Read the article

  • Adding to the schema of a subscribing database.

    - by NTDLS
    Transactional replication on SQL Server 2005 Enterprise x64 (SP3). I need to add check constraints to a database that is the target of a replication, but I cannot add the check constraints to the publishing database. The problem is that the replication process keeps removing my constraints. How do I prevent this?

    Read the article

  • How to synchronize two (or n) replication processes for MS SQL databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. I need to map some entity across both read-only databases; let's say database A contains orders and database B contains lines. The problem is that replication to one database can lag behind replication to the other, so at the moment of mapping the read-only databases will have inconsistent data. For example: we stored 2 orders with lines at 19:00 and 19:03. The mapping process started at 19:05, but by that moment replication to database A had processed all changes up to 19:03, while replication to database B had only processed changes up to 19:00. After mapping we will have an order entity with the order as of 19:03 and lines as of 19:00. Trouble is guaranteed :) In my particular case both databases have a temporal model, so it is possible to fetch data for any time slice, but the problem is identifying the time of the latest replication. Question: how do you synchronize replication processes for several databases to avoid the situation described above?
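    Since both read-only databases have a temporal model, one way to sketch this is to record a replication watermark in each replica (for example, the commit time of the last applied batch) and run the mapping query as of the older of the two watermarks. This is only an illustration: the replication_watermark table, the column names, and the pyodbc-style "?" parameters are assumptions, not part of the question.

        def latest_watermark(cursor):
            # Each replica is assumed to maintain this single-row table (hypothetical name),
            # updated by the replication/ETL process after every applied batch.
            cursor.execute("SELECT last_applied_at FROM replication_watermark")
            return cursor.fetchone()[0]

        def map_orders_with_lines(cur_a, cur_b):
            """Query both temporal databases as of the older watermark so that the
            orders (database A) and the lines (database B) are mutually consistent."""
            as_of = min(latest_watermark(cur_a), latest_watermark(cur_b))
            cur_a.execute(
                "SELECT order_id, placed_at FROM orders "
                "WHERE valid_from <= ? AND valid_to > ?", (as_of, as_of))
            cur_b.execute(
                "SELECT order_id, line_no, qty FROM order_lines "
                "WHERE valid_from <= ? AND valid_to > ?", (as_of, as_of))
            return as_of, cur_a.fetchall(), cur_b.fetchall()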

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures
    One of the easiest ways to implement security is through structure, in particular the structure in which data is stored. The preferred method for this is the use of user roles; these roles allow specific access to be granted based on what role a user plays in relation to the data they are manipulating. Typical data access actions are defined by the CRUD principle:

    • Create new data
    • Read existing data
    • Update existing data
    • Delete existing data

    Based on the actions assigned to a role, users can manipulate data as they need to perform daily business operations. An example of this can be seen in a hospital, where doctors are assigned Create, Read, Update, and Delete access to their patients' prescriptions so that a doctor can prescribe and adjust any existing prescriptions as necessary. A nurse, however, will only have Read access on a patient's prescriptions so that they know which medicines to give the patient. Notice that nurses do not have access to prescribe new prescriptions or to update or delete existing ones, because only the patient's doctor may perform those actions.

    With user roles comes responsibility: companies need to constantly monitor data access to ensure that each role has the most appropriate access level and that users are not exposed to inappropriate data. This also protects against rogue employees gaining access to critical business information that could be destroyed, altered or stolen. Because of this threat, it is important that all data access is monitored.

    Security Governance
    Current data governance laws regarding security include:

    • Health Insurance Portability and Accountability Act (HIPAA)
    • Sarbanes-Oxley Act
    • Database Breach Notification Act

    The US Department of Health and Human Services defines HIPAA as a Privacy Rule. This legislation protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronically protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety.

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance, and it set specific deadlines for compliance. Sarbanes-Oxley is not a set of standard business rules and does not specify how a company should retain its records; rather, the act outlines which pieces of data are to be stored as well as the storage duration.

    The Database Breach Notification Act requires companies, in the event of a data breach involving personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that this act is only California legislation; however, it affects "any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information," regardless of where the compromised data is located. Any business that maintains even limited interactions with California residents will find itself subject to the Act's provisions.

    Security Policies
    All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas that it services. These policies need to be mandated by a company's Security Officer. For smaller companies, these policies need to come from executives, directors, and owners.
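    To make the doctor/nurse example concrete, the sketch below shows one way the role-to-CRUD mapping described above could be expressed in code. The role names, resource, and permission checks are purely illustrative assumptions, not a reference implementation:

        CREATE, READ, UPDATE, DELETE = "create", "read", "update", "delete"

        # Permissions each role holds on patient prescriptions, mirroring the hospital example.
        ROLE_PERMISSIONS = {
            "doctor": {CREATE, READ, UPDATE, DELETE},
            "nurse": {READ},
        }

        def is_allowed(role, action):
            return action in ROLE_PERMISSIONS.get(role, set())

        def update_prescription(user_role, prescription, new_dosage):
            if not is_allowed(user_role, UPDATE):
                raise PermissionError(f"role '{user_role}' may not update prescriptions")
            prescription["dosage"] = new_dosage
            return prescription

        print(is_allowed("nurse", READ))    # True
        print(is_allowed("nurse", UPDATE))  # False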

    Read the article

  • Database cluster... without Master/Slave?

    - by RadiantHex
    Hi there! I'm wondering if it is possible to have a set of SQL database servers to which data is written and have them replicate to each other while avoiding conflicting information. I imagine that a master/slave structure would be mandatory; I would like to know if a system where the servers have no hierarchy could support replication. Currently I'm using MySQL, but I would be happy to move to another database if needed. Any ideas? :)

    Read the article
