Search Results

Search found 63361 results on 2535 pages for 'dynamic data'.


  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts allow users to call selected methods and to access and manipulate data in a document. This works well; however, in the development version, scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script will no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (whilst not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We've a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out there to someone who might have some real experience of this problem. NB our internal document's data structure is quite complex and it could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages / APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to having to write a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives? *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
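    As a rough sketch of the data facade idea floated at the end of the question, the C# below keeps a stable, script-facing type separate from the internal model it wraps; the type names (InternalNode, ScriptNode) and members are hypothetical, not taken from the application being described.

        using System.Collections.Generic;
        using System.Linq;

        // Internal model: free to change between releases of the application.
        internal class InternalNode
        {
            public string Title;
            public List<InternalNode> Children = new List<InternalNode>();
        }

        // Stable facade type exposed to user scripts; only this surface is versioned publicly.
        public class ScriptNode
        {
            private readonly InternalNode _node;

            internal ScriptNode(InternalNode node) { _node = node; }

            public string Title
            {
                get { return _node.Title; }
                set { _node.Title = value; }
            }

            // Nested collections are wrapped on the fly so scripts never see internal types.
            public IEnumerable<ScriptNode> Children
            {
                get { return _node.Children.Select(child => new ScriptNode(child)); }
            }
        }

    If the internal model is later restructured, only the facade's property bodies change; scripts compiled against ScriptNode keep working.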

    Read the article

  • The dynamic Type in C# Simplifies COM Member Access from Visual FoxPro

    - by Rick Strahl
    I’ve written quite a bit about Visual FoxPro interoperating with .NET in the past both for ASP.NET interacting with Visual FoxPro COM objects as well as Visual FoxPro calling into .NET code via COM Interop. COM Interop with Visual FoxPro has a number of problems but one of them at least got a lot easier with the introduction of dynamic type support in .NET. One of the biggest problems with COM interop has been that it’s been really difficult to pass dynamic objects from FoxPro to .NET and get them properly typed. The only way that any strong typing can occur in .NET for FoxPro components is via COM type library exports of Visual FoxPro components. Due to limitations in Visual FoxPro’s type library support as well as the dynamic nature of the Visual FoxPro language where few things are or can be described in the form of a COM type library, a lot of useful interaction between FoxPro and .NET required the use of messy Reflection code in .NET. Reflection is .NET’s base interface to runtime type discovery and dynamic execution of code without requiring strong typing. In FoxPro terms it’s similar to EVALUATE() functionality albeit with a much more complex API and corresponiding syntax. The Reflection APIs are fairly powerful, but they are rather awkward to use and require a lot of code. Even with the creation of wrapper utility classes for common EVAL() style Reflection functionality dynamically access COM objects passed to .NET often is pretty tedious and ugly. Let’s look at a simple example. In the following code I use some FoxPro code to dynamically create an object in code and then pass this object to .NET. An alternative to this might also be to create a new object on the fly by using SCATTER NAME on a database record. How the object is created is inconsequential, other than the fact that it’s not defined as a COM object – it’s a pure FoxPro object that is passed to .NET. Here’s the code: *** Create .NET COM InstanceloNet = CREATEOBJECT('DotNetCom.DotNetComPublisher') *** Create a Customer Object Instance (factory method) loCustomer = GetCustomer() loCustomer.Name = "Rick Strahl" loCustomer.Company = "West Wind Technologies" loCustomer.creditLimit = 9999999999.99 loCustomer.Address.StreetAddress = "32 Kaiea Place" loCustomer.Address.Phone = "808 579-8342" loCustomer.Address.Email = "[email protected]" *** Pass Fox Object and echo back values ? 
loNet.PassRecordObject(loObject) RETURN FUNCTION GetCustomer LOCAL loCustomer, loAddress loCustomer = CREATEOBJECT("EMPTY") ADDPROPERTY(loCustomer,"Name","") ADDPROPERTY(loCustomer,"Company","") ADDPROPERTY(loCUstomer,"CreditLimit",0.00) ADDPROPERTY(loCustomer,"Entered",DATETIME()) loAddress = CREATEOBJECT("Empty") ADDPROPERTY(loAddress,"StreetAddress","") ADDPROPERTY(loAddress,"Phone","") ADDPROPERTY(loAddress,"Email","") ADDPROPERTY(loCustomer,"Address",loAddress) RETURN loCustomer ENDFUNC Now prior to .NET 4.0 you’d have to access this object passed to .NET via Reflection and the method code to do this would looks something like this in the .NET component: public string PassRecordObject(object FoxObject) { // *** using raw Reflection string Company = (string) FoxObject.GetType().InvokeMember( "Company", BindingFlags.GetProperty,null, FoxObject,null); // using the easier ComUtils wrappers string Name = (string) ComUtils.GetProperty(FoxObject,"Name"); // Getting Address object – then getting child properties object Address = ComUtils.GetProperty(FoxObject,"Address");    string Street = (string) ComUtils.GetProperty(FoxObject,"StreetAddress"); // using ComUtils 'Ex' functions you can use . Syntax     string StreetAddress = (string) ComUtils.GetPropertyEx(FoxObject,"AddressStreetAddress"); return Name + Environment.NewLine + Company + Environment.NewLine + StreetAddress + Environment.NewLine + " FOX"; } Note that the FoxObject is passed in as type object which has no specific type. Since the object doesn’t exist in .NET as a type signature the object is passed without any specific type information as plain non-descript object. To retrieve a property the Reflection APIs like Type.InvokeMember or Type.GetProperty().GetValue() etc. need to be used. I made this code a little simpler by using the Reflection Wrappers I mentioned earlier but even with those ComUtils calls the code is pretty ugly requiring passing the objects for each call and casting each element. Using .NET 4.0 Dynamic Typing makes this Code a lot cleaner Enter .NET 4.0 and the dynamic type. Replacing the input parameter to the .NET method from type object to dynamic makes the code to access the FoxPro component inside of .NET much more natural: public string PassRecordObjectDynamic(dynamic FoxObject) { // *** using raw Reflection string Company = FoxObject.Company; // *** using the easier ComUtils class string Name = FoxObject.Name; // *** using ComUtils 'ex' functions to use . Syntax string Address = FoxObject.Address.StreetAddress; return Name + Environment.NewLine + Company + Environment.NewLine + Address + Environment.NewLine + " FOX"; } As you can see the parameter is of type dynamic which as the name implies performs Reflection lookups and evaluation on the fly so all the Reflection code in the last example goes away. The code can use regular object ‘.’ syntax to reference each of the members of the object. You can access properties and call methods this way using natural object language. Also note that all the type casts that were required in the Reflection code go away – dynamic types like var can infer the type to cast to based on the target assignment. As long as the type can be inferred by the compiler at compile time (ie. the left side of the expression is strongly typed) no explicit casts are required. Note that although you get to use plain object syntax in the code above you don’t get Intellisense in Visual Studio because the type is dynamic and thus has no hard type definition in .NET . 
The above example calls a .NET Component from VFP, but it also works the other way around. Another frequent scenario is an .NET code calling into a FoxPro COM object that returns a dynamic result. Assume you have a FoxPro COM object returns a FoxPro Cursor Record as an object: DEFINE CLASS FoxData AS SESSION OlePublic cAppStartPath = "" FUNCTION INIT THIS.cAppStartPath = ADDBS( JustPath(Application.ServerName) ) SET PATH TO ( THIS.cAppStartpath ) ENDFUNC FUNCTION GetRecord(lnPk) LOCAL loCustomer SELECT * FROM tt_Cust WHERE pk = lnPk ; INTO CURSOR TCustomer IF _TALLY < 1 RETURN NULL ENDIF SCATTER NAME loCustomer MEMO RETURN loCustomer ENDFUNC ENDDEFINE If you call this from a .NET application you can now retrieve this data via COM Interop and cast the result as dynamic to simplify the data access of the dynamic FoxPro type that was created on the fly: int pk = 0; int.TryParse(Request.QueryString["id"],out pk); // Create Fox COM Object with Com Callable Wrapper FoxData foxData = new FoxData(); dynamic foxRecord = foxData.GetRecord(pk); string company = foxRecord.Company; DateTime entered = foxRecord.Entered; This code looks simple and natural as it should be – heck you could write code like this in days long gone by in scripting languages like ASP classic for example. Compared to the Reflection code that previously was necessary to run similar code this is much easier to write, understand and maintain. For COM interop and Visual FoxPro operation dynamic type support in .NET 4.0 is a huge improvement and certainly makes it much easier to deal with FoxPro code that calls into .NET. Regardless of whether you’re using COM for calling Visual FoxPro objects from .NET (ASP.NET calling a COM component and getting a dynamic result returned) or whether FoxPro code is calling into a .NET COM component from a FoxPro desktop application. At one point or another FoxPro likely ends up passing complex dynamic data to .NET and for this the dynamic typing makes coding much cleaner and more readable without having to create custom Reflection wrappers. As a bonus the dynamic runtime that underlies the dynamic type is fairly efficient in terms of making Reflection calls especially if members are repeatedly accessed. © Rick Strahl, West Wind Technologies, 2005-2010Posted in COM  FoxPro  .NET  CSharp  
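    As a self-contained illustration of the difference described above (without COM or FoxPro in the picture), the following C# sketch reads the same property once through raw Reflection and once through dynamic; the Customer class here is hypothetical.

        using System;
        using System.Reflection;

        class Customer { public string Company { get; set; } }

        class Program
        {
            static void Main()
            {
                object boxed = new Customer { Company = "West Wind Technologies" };

                // Reflection: string-based member lookup plus an explicit cast.
                string viaReflection = (string)boxed.GetType().InvokeMember(
                    "Company",
                    BindingFlags.GetProperty | BindingFlags.Public | BindingFlags.Instance,
                    null, boxed, null);

                // dynamic: the member lookup happens at runtime, no Reflection code or cast needed.
                dynamic d = boxed;
                string viaDynamic = d.Company;

                Console.WriteLine(viaReflection + " / " + viaDynamic);
            }
        }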

    Read the article

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 University community members that their personal information had been breached when a laptop was stolen. To ensure this wouldn’t happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments. Organizations often copy production data, that contains sensitive information, into non-production environments so they can test applications and systems using “real world” information. Data in non-production has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. Similar to production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand. Cornell’s applications and databases help carry out the administrative and academic mission of the university. They are running Oracle PeopleSoft Campus Solutions that include highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee’s laptop was stolen.  Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and provide free credit checks and identity theft protection services—all of which cost money and took time away from other projects. To avoid this issue in the future Cornell came up with several options; one of which was to sanitize the testing and training environments. “The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us.  In the end we chose Oracle’s solution based on its architecture and its functionality.” – Tony Damiani, Database Administration and Business Intelligence, Cornell University The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, but still support all the previous activities in the non-production environments. Tony concludes,  “What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only. 
Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal.” With Oracle Data Masking organizations like Cornell can: Make application data securely available in non-production environments Prevent application developers and testers from seeing production data Use an extensible template library and policies for data masking automation Gain the benefits of referential integrity so that applications continue to work Listen to the podcast to hear the complete interview.  Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are a lack of knowledge and a lack of communication within a company. A lack of knowledge in regard to data models or data modeling can take many forms. Company Culture Knowledge Whether documented or undocumented, existing business rules of a company can affect how data is modeled. For example, if a company only allows 1 assigned person per customer to manipulate that customer's record, then a data model that includes an association table joining customers and employees would be unneeded, because such a table allows for a many-to-many relationship between Customers and Employees and therefore the possibility of multiple employees handling one customer. Technical Knowledge Depending on the data modeler's proficiency in modeling data, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that the models are developed from multiple perspectives of a system, company and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of the model if designers are not familiar with the tools or the tools are too complex for them to use. Existing System Knowledge In order for a data modeler to model data for an existing system so that new changes can be applied to it, they need to at least know the basic concepts of the system so that they can work within it. This will promote reusability of data and prevent the chance of duplicating data. Project Knowledge This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data prior to a client formalizing any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want. In addition, issues can arise when certain stakeholders of a project are not consulted prior to the design or after the project is over, because this can cause misunderstandings and confusion for the end user, and the original problem the project was intended to solve may go unsolved. One common thread between each type of knowledge barrier is that they can all be avoided through the use of good communication. For example, if a modeler is new to a company then they should ask older employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool then they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the project, but also help them in future projects that they do for the company. Additionally, if a project is not clearly defined prior to a data modeler being assigned to it, then it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear so that the data model that is created is accurately aligned with the project.
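    As a small illustration of the business-rule point above, the two hypothetical C# shapes below show how the rule "one assigned employee per customer" removes the need for a many-to-many association:

        // Without the rule: many employees can handle many customers,
        // so a separate association (a join table in the database) is required.
        class CustomerEmployeeAssignment
        {
            public int CustomerId;
            public int EmployeeId;
        }

        // With the rule "exactly one assigned employee per customer",
        // a single foreign key on the customer is enough and the join table is unneeded.
        class Customer
        {
            public int CustomerId;
            public int AssignedEmployeeId;
        }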

    Read the article

  • How to get better at solving Dynamic programming problems

    - by newbie
    I recently came across this question: "You are given a boolean expression consisting of a string of the symbols 'true', 'false', 'and', 'or', and 'xor'. Count the number of ways to parenthesize the expression such that it will evaluate to true. For example, there is only 1 way to parenthesize 'true and false xor true' such that it evaluates to true." I knew it is a dynamic programming problem, so I tried to come up with a solution on my own, which is as follows. Suppose we have an expression A.B.C.....D where '.' represents any of the operations and, or, xor and the capital letters represent true or false. Let's say the number of ways for this expression of size K to produce a true is N. When a new boolean value E is added to this expression there are 2 ways to parenthesize this new expression: 1. ((A.B.C.....D).E) ie. with all possible parenthesizations of A.B.C.....D we add E at the end. 2. (A.B.C.(D.E)) ie. evaluate D.E first and then find the number of ways this expression of size K can produce true. Suppose T[K] is the number of ways the expression with size K produces true; then T[K]=val1+val2+val3 where val1,val2,val3 are calculated as follows. 1) when E is grouped with D: i) it does not change the value of D, or ii) it inverses the value of D. In the first case val1=T[K]=N (as this reduces to the initial A.B.C....D expression). In the second case re-evaluate dp[K] with the value of D reversed and that is val1. 2) when E is grouped with the whole expression. //val2 contains the number of 'true' E will produce with expressions which gave 'true' among all parenthesized instances of A.B.C.......D i) if true.E = true then val2 = N ii) if true.E = false then val2 = 0 //val3 contains the number of 'true' E will produce with expressions which gave 'false' among all parenthesized instances of A.B.C.......D iii) if false.E=true then val3=( 2^(K-2) - N ) = M ie. number of ways the expression with size K produces a false [ 2^(K-2) is the number of ways to parenthesize an expression of size K ]. iv) if false.E=false then val3 = 0 This is the basic idea I had in mind, but when I checked the solution at http://people.csail.mit.edu/bdean/6.046/dp/dp_9.swf the approach there was completely different. Can someone tell me what I am doing wrong and how I can get better at solving DP so that I can come up with solutions like the one given above myself? Thanks in advance.
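    For comparison, the standard interval-DP formulation (often called boolean parenthesization) tracks, for every sub-expression, how many parenthesizations evaluate to true and how many to false, then combines left and right halves around each possible last operator. The C# sketch below is my own illustration, not the linked MIT solution; the operator characters '&', '|' and '^' are an arbitrary encoding of and, or and xor.

        using System;

        class BooleanParenthesization
        {
            // values[i] is the i-th literal; ops[i] is the operator between values[i] and values[i + 1].
            static long CountWaysTrue(bool[] values, char[] ops)
            {
                int n = values.Length;
                long[,] T = new long[n, n]; // ways sub-expression i..j evaluates to true
                long[,] F = new long[n, n]; // ways sub-expression i..j evaluates to false

                for (int i = 0; i < n; i++)
                {
                    T[i, i] = values[i] ? 1 : 0;
                    F[i, i] = values[i] ? 0 : 1;
                }

                for (int len = 2; len <= n; len++)
                {
                    for (int i = 0; i + len - 1 < n; i++)
                    {
                        int j = i + len - 1;
                        for (int k = i; k < j; k++) // ops[k] is the last operator applied
                        {
                            long lt = T[i, k], lf = F[i, k];
                            long rt = T[k + 1, j], rf = F[k + 1, j];
                            long total = (lt + lf) * (rt + rf);

                            switch (ops[k])
                            {
                                case '&': // and: true only when both sides are true
                                    T[i, j] += lt * rt;
                                    F[i, j] += total - lt * rt;
                                    break;
                                case '|': // or: false only when both sides are false
                                    F[i, j] += lf * rf;
                                    T[i, j] += total - lf * rf;
                                    break;
                                case '^': // xor: true when the two sides differ
                                    T[i, j] += lt * rf + lf * rt;
                                    F[i, j] += lt * rt + lf * rf;
                                    break;
                            }
                        }
                    }
                }
                return T[0, n - 1];
            }

            static void Main()
            {
                // "true or false and true": both parenthesizations evaluate to true, so this prints 2.
                Console.WriteLine(CountWaysTrue(new[] { true, false, true }, new[] { '|', '&' }));
            }
        }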

    Read the article

  • How often do you use data structures (ie Binary Trees, Linked Lists) in your jobs/side projects?

    - by Chris2021
    It seems to me that, for everyday use, more primitive data structures like arrays get the job done just as well as a binary tree would. My question is how common it is to use these structures when writing code for projects at work or projects that you pursue in your free time. I understand the better insertion time/deletion time/sorting time for certain structures, but would that really matter that much if you were working with a relatively small amount of data?
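    As a rough C# illustration of the trade-off being asked about, the asymptotic advantage of a tree-based structure only becomes visible once the collection gets large; with a handful of items a plain list is effectively just as fast:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                const int n = 1000000;
                List<int> list = Enumerable.Range(0, n).ToList(); // linear search: O(n) per lookup
                SortedSet<int> tree = new SortedSet<int>(list);   // balanced tree: O(log n) per lookup

                Stopwatch sw = Stopwatch.StartNew();
                bool inList = list.Contains(n - 1);               // worst case: scans the whole list
                Console.WriteLine("List.Contains (" + inList + "): " + sw.Elapsed);

                sw.Restart();
                bool inTree = tree.Contains(n - 1);
                Console.WriteLine("SortedSet.Contains (" + inTree + "): " + sw.Elapsed);
            }
        }

    For ten items either call is effectively instant, which is why the choice usually only matters for large or frequently accessed collections.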

    Read the article

  • Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism For the latest code go to http://rapidioc.codeplex.com/ Before getting too involved in generating the proxy, I thought it would be worth while going through the intended design, this is important as the next step is to start creating the constructors for the proxy. Each proxy derives from a specified type The proxy has a corresponding constructor for each of the base type constructors The proxy has overrides for all methods and properties marked as Virtual on the base type For each overridden method, there is also a private method whose sole job is to call the base method. For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method. The following class diagram shows the main classes and interfaces involved in the interception process. I’ll go through each of them to explain their place in the overall proxy.   IProxy Interface The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy interface to be cast as an IProxy and then simply add Interceptors by calling it’s AddInterceptor method. This is done internally within the proxy building process so the consumer of the API doesn’t need knowledge of this. IInterceptor Interface The IInterceptor interface has one method: Handle. The handle method accepts a IMethodInvocation parameter which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section MethodInvocation. IMethodInvocation Interface & MethodInvocation class The MethodInvocation will contain one main method and multiple helper properties. Continue Method The method Continue() has two functions hidden away from the consumer. When Continue is called, if there are multiple Interceptors, the next Interceptors Handle method is called. If all Interceptors Handle methods have been called, the Continue method then calls the base class method. Properties The MethodInvocation will contain multiple helper properties including at least the following: Method Name (Read Only) Method Arguments (Read and Write) Method Argument Types (Read Only) Method Result (Read and Write) – this property remains null if the method return type is void Target Object (Read Only) Return Type (Read Only) DefaultInterceptor class The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code: DefaultInterceptor namespace Rapid.DynamicProxy.Interception {     /// <summary>     /// Default interceptor for the proxy.     /// </summary>     /// <typeparam name="TBase">The base type.</typeparam>     public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class     {         /// <summary>         /// Handles the specified method invocation.         /// </summary>         /// <param name="methodInvocation">The method invocation.</param>         public void Handle(IMethodInvocation<TBase> methodInvocation)         {             methodInvocation.Continue();         }     } } This is automatically created in the proxy and is the first interceptor that each method override calls. It’s sole function is to ensure that if no interceptors have been added, the base method is still called. 
Custom Interceptor Example A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Coninue() method. This means that any overriden methods within the user class will run within a transaction scope. MyInterceptor public class MyInterceptor : IInterceptor<User<int, IRepository>> {     public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation)     {         if (methodInvocation.Name == "set_FirstName")         {             Logger.Log("First name seting to: " + methodInvocation.Arguments[0]);         }         using (TransactionScope scope = new TransactionScope())         {             methodInvocation.Continue();         }         if (methodInvocation.Name == "set_FirstName")         {             Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]);         }     } } Overridden Method Example To show a taster of what the overridden methods on the proxy would look like, the setter method for the property FirstName used in the above example would look something similar to the following (this is not real code but will look similar): set_FirstName public override void set_FirstName(string value) {     set_FirstNameBaseMethodDelegate callBase =         new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod);     object[] arguments = new object[] { value };     IMethodInvocation<User<IRepository>> methodInvocation =         new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors);          this.Interceptors[0].Handle(methodInvocation); } As you can see, a delegate instance is created which calls to a private method on the class, the private method calls the base method and would look like the following: calls base setter private void set_FirstNameProxyGetBaseMethod(string value) {     base.set_FirstName(value); } The delegate is invoked when methodInvocation.Continue() is called within an interceptor. The set_FirstName parameters are loaded into an object array. The current instance, delegate, method name and method arguments are passed into the methodInvocation constructor (there will be more data not illustrated here passed in when created including method info, return types, argument types etc.) The DefaultInterceptor’s Handle method is called with the methodInvocation instance as it’s parameter. Obviously methods can have return values, ref and out parameters etc. in these cases the generated method override body will be slightly different from above. I’ll go into more detail on these aspects as we build them. Conclusion I hope this has been useful, I can’t guarantee that the proxy will look exactly like the above, but at the moment, this is pretty much what I intend to do. Always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what’s going on. Cheers, Sean.
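    To make the Continue() chaining described above concrete, here is a small standalone C# sketch of the same idea; it is illustrative only and deliberately much simpler than the actual Rapid.DynamicProxy types (no generics and no generated delegates).

        using System;
        using System.Collections.Generic;

        public interface IInterceptor { void Handle(MethodInvocation invocation); }

        public class MethodInvocation
        {
            private readonly IList<IInterceptor> _interceptors;
            private readonly Action _callBase; // stands in for the generated delegate that calls the base method
            private int _next;

            public string Name { get; private set; }

            public MethodInvocation(string name, IList<IInterceptor> interceptors, Action callBase)
            {
                Name = name;
                _interceptors = interceptors;
                _callBase = callBase;
            }

            // Calls the next interceptor in the chain, or the base method once all interceptors have run.
            public void Continue()
            {
                if (_next < _interceptors.Count)
                    _interceptors[_next++].Handle(this);
                else
                    _callBase();
            }
        }

        public class LoggingInterceptor : IInterceptor
        {
            public void Handle(MethodInvocation invocation)
            {
                Console.WriteLine("before " + invocation.Name);
                invocation.Continue();
                Console.WriteLine("after " + invocation.Name);
            }
        }

        class Program
        {
            static void Main()
            {
                var interceptors = new List<IInterceptor> { new LoggingInterceptor() };
                var invocation = new MethodInvocation("set_FirstName", interceptors,
                    () => Console.WriteLine("base set_FirstName called"));
                invocation.Continue(); // before -> base call -> after
            }
        }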

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest ScriptsSome scripts act on information returned by the server, e.g. act on first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way.As the load test cases should be carried out in a comparable and straight forward manner simply cancel a transaction in case a collision occurs is clearly not an option. In case you increase the number of virtual users this approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user) but later steps would be rarely visited successfully or at all, depending on the application logic.A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue will be allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) in your transaction at any one time.Once a virtual user has passed the critical path it will dequeue himself from the head of the queue and continue with his actions. This does theoretically allow virtual users to run in parallel all steps of the transaction which are not part of the critical path.In practice it has been seen this is rarely the case, though it does not allow adding more than N users to perform a transaction without causing delays due to virtual users waiting in the queue. N being the time of the total transaction divided by the sum of the time of all critical steps in this transaction.While this problem can be circumvented by allowing multiple queues to act on individual segments of the list of actions, e.g. per country filter, ends with 0..9 filter, etc.This would require additional handling of these additional queues of slots for the virtual users at the head of the queue in order to maintain the mutually exclusive access to the first element in the list returned by the server at any one time of the load test. Such an improved handling of multiple queues and/or multiple slots is above the subject of this paper.Shared Data Services Pre-RequisitesStart WebLogic Server to host Shared Data ServicesYou will have to make sure that your WebLogic server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If however you installed the default package including OLT and OTM, you may follow the instructions below to start and verify WebLogic installation.To start the WebLogic Server deployed underneath of Oracle Load Testing and/or Oracle Test Manager you can go to your Start menu, Oracle Application Testing Suite and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu.To verify the service has been started you can run the Microsoft Management Console for Services by Selecting Run from the Start Menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service, once it has changed it status from Starting to Started you can proceed to verify the login. Please note that this may take several minutes, I would say up to 10 minutes depending on the strength of your CPU horse-power.Verify WebLogic Server user credentialsYou will have to make sure that your WebLogic Server is installed and started. 
Next open the Oracle WebLogic Server Adminstration Console on http://localhost:8088/console.It may take a while until the application is deployed and started. It may display the following until the Administration Console has been deployed on the fly.Afterwards you can login using the username oats and the password that you selected during install time for your Application Testing Suite administrative purposes.This will bring up the Home page of you WebLogic Server. You have actually verified that you are able to login with these credentials already. However if you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab.Here you could add users to your WebLogic Server which could be used in the later steps. Details on the Groups required for such a custom user to work are exceeding this quick overview and have to be selected with the WebLogic Server Adminstration Guide in mind.Shared Data Services pre-requisites for Load testingOpenScript Preferences have to be set to enable Encryption and provide a default Shared Data Service Connection for Playback.These are pre-requisites you want to use for load testing with Shared Data Services.Please note that the usage of the Connection Parameters (individual directive in the script) for Shared Data Services did not playback reliably in the current version 9.20.0370 of Oracle Load Testing (OLT) and encryption of credentials still seemed to be mandatory as well.General Encryption settingsSelect OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.Enable global shared data access credentialsSelect OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for your WebLogic Server to host Shared Data Services.Please note, that you may want to replace the localhost in Address with the hosts realname in case you plan to run load tests with Loadtest Agents running on remote systems.Queued Processing of TransactionsEnable Shared Data Services Module in Script PropertiesThe Shared Data Services Module has to be enabled for each Script that wants to employ the Shared Data Service Queue functionality in OpenScript. It can be enabled under the Script menu selecting Script Properties. On the Script Properties Dialog select the Modules section and check Shared Data to enable Shared Data Service Module for your script. 
Checking the Shared Data Services option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript.@ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;Record your scriptRecord your script as usual and then add the following things for Queue handling in the Initialize code block, before the first step and after the last step of your critical path and in the Finalize code block.The java code to be added at individual locations is explained in the following sections in full detail.Create a Shared Data Queue in InitializeTo create a Shared Data Queue go to the Java view of your script and enter the following statements to the initialize() code block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);This will create an instantiation of the Shared Data Queue object named queueA which is maintained for upto 120 minutes.If you want to use the code for multiple scripts, make sure to use a different queue name for each one here and in the subsequent steps. You may even consider to use a dynamic queueName based on filters of your result list being concurrently accessed.Prepare a unique id for each IterationIn order to keep track of individual virtual users in our queue we need to create a unique identifier from the virtual user id and the used username right after retrieving the next record from our databank file.getDatabank("Usernames").getNextDatabankRecord();getVariables().set("usernameValue1","VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");String usernameValue = getVariables().get("usernameValue1");info("Now running virtual user " + usernameValue);As you can see from the above code block, we have set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}} which is a concatenation of the virtual user id and the iterationnumber for general uniqueness; as well as the username from our databank, the timestamp and a random number for making it further unique and ease spotting of errors.Not all of these fields are actually required to make it really unique, but adding the queue name may also be considered to help troubleshoot multiple queues.The value is then retrieved with the getVariables.get() method call and assigned to the usernameValue String used throughout the script.Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered to remove concurrency of multiple virtual users running with the same userid and therefor accessing the same "My Inbox" in step 6. This will effectively give each virtual user a userid from the databank file. Make sure you have enough userids to remove this second hurdle.Enqueue and attend Queue before Critical PathTo maintain the right order of virtual users being allowed into the critical path of the transaction the following pseudo step has to be added in front of the first critical step. 
In the case of this example this is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us.beginStep("[0] Waiting in the Queue", 0);{info("Enqueued virtual user " + usernameValue + " at the end of queueA");sharedData.offerLast("queueA", usernameValue);info("Wait until the user is the first in queueA");String queueValue1 = null;do {// we wait for at least 0.7 seconds before we check the head of the// queue. This is the time it takes one user to move through the// critical path, i.e. pass steps [5] Enter country and [6] Assign// to meThread.sleep(700);queueValue1 = (String) sharedData.peekFirst("queueA");info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );info("The current user is '"+ usernameValue + "' " + usernameValue.getClass() + " length " + usernameValue.length() + ": indexOf " + usernameValue.indexOf(queueValue1) + " equals " + usernameValue.equals(queueValue1) );} while ( queueValue1.indexOf(usernameValue) < 0 );info("Now the user is the first in queueA");}endStep();This will enqueue the username to the tail of our Queue. It will will wait for at least 700 milliseconds, the time it takes for one user to exit the critical path and then compare the head of our queue with it's username. This last step will be repeated while the two are not equal (indexOf less than zero). If they are equal the indexOf will yield a value of zero or larger and we will perform the critical steps.Dequeue after Critical PathAfter the virtual user has left the critical path and complete its last step the following code block needs to dequeue the virtual user. In the case of our example this is right after the action has been actually assigned to the virtual user. This will allow the next virtual user to retrieve the list of actions still available and in turn let him make his selection/assignment.info("Get and remove the current user from the head of queueA");String pollValue1 = (String) sharedData.pollFirst("queueA");The current user is removed from the head of the queue. The next one will now be able to match his username against the head of the queue.Clear and Destroy Queue for FinishWhen the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason.info("Clear queueA");sharedData.clearQueue("queueA");info("Destroy queueA");sharedData.destroyQueue("queueA");The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing they will be caught in a loop.I found it better to maintain a separate Reset Queue script which contained only the following code in the initialize() block. I use to call this script to make sure the queue is cleared in between multiple Loadtest runs. This script could also even be added as the first in a larger scenario, which would execute it only once at very start of the Loadtest and make sure the queues do not contain any stale entries.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);info("Clear queueA");sharedData.clearQueue("queueA");This will create a Shared Data Queue instance of queueA and clear all entries from this queue.Monitoring QueueWhile creating the scripts it was useful to monitor the contents, i.e. the current first user in the Queue. 
The following code block will make sure the Shared Data Queue is accessible in the initialize() block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);In the run() block the following code will continuously monitor the first element of the Queue and write an informational message with the current username Value to the Result window.info("Monitor the first users in queueA");String queueValue1 = null;do {queueValue1 = (String) sharedData.peekFirst("queueA");if (queueValue1 != null)info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );} while ( true );This script can be run from OpenScript parallel to a loadtest performed by the Oracle Load Test.However it is not recommend to run this in a production loadtest as the performance impact is unknown. Accessing the Queue's head with the peekFirst() method has been reported with about 2 seconds response time by both OpenScript and OTL. It is advised to log a Service Request to see if this could be lowered in future releases of Application Testing Suite, as the pollFirst() and even offerLast() writing to the tail of the Queue usually returned after an average 0.1 seconds.Debugging QueueWhile debugging the scripts the following was useful to remove single entries from its head, i.e. the current first user in the Queue. The following code block will make sure the Shared Data Queue is accessible in the initialize() block.info("Create queueA with life time of 120 minutes");sharedData.createQueue("queueA", 120);In the run() block the following code will remove the first element of the Queue and write an informational message with the current username Value to the Result window.info("Get and remove the current user from the head of queueA");String pollValue1 = (String) sharedData.pollFirst("queueA");info("The first user in queueA was currently: '" + pollValue1 + "' " + pollValue1.getClass() + " length " + pollValue1.length() );ReferencesOracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05]Chapter 17 Using the Shared Data Modulehttp://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zipOracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04]Administration Console Online Help - Manage users and groupshttp://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm
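    Outside of OpenScript, the same gating idea (enqueue yourself, wait until you are at the head, run the critical steps, then dequeue) can be sketched in a few lines. The C# below is only an in-process conceptual analogue of the offerLast/peekFirst/pollFirst calls used above, not a replacement for the Shared Data Service.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        class QueueGateDemo
        {
            static readonly ConcurrentQueue<string> Gate = new ConcurrentQueue<string>();

            static void RunVirtualUser(string userId)
            {
                Gate.Enqueue(userId);                            // offerLast: join the tail of the queue

                string head;
                while (!Gate.TryPeek(out head) || head != userId)
                    Thread.Sleep(100);                           // peekFirst: wait until we are first

                Console.WriteLine(userId + " runs the critical steps");
                Thread.Sleep(200);                               // the critical path

                Gate.TryDequeue(out head);                       // pollFirst: let the next user in
            }

            static void Main()
            {
                Task.WaitAll(
                    Task.Run(() => RunVirtualUser("VU_1")),
                    Task.Run(() => RunVirtualUser("VU_2")),
                    Task.Run(() => RunVirtualUser("VU_3")));
            }
        }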

    Read the article

  • New version of SQL Server Data Tools is now available

    - by jamiet
    If you don’t follow the SQL Server Data Tools (SSDT) blog then you may not know that two days ago an updated version of SSDT was released (and by SSDT I mean the database projects, not the SSIS/SSRS/SSAS stuff) along with a new version of the SSDT Power Tools. This release incorporates an updated version of the SQL Server Data Tier Application Framework (aka DAC Framework, aka DacFX) which you can read about on Adam Mahood’s blog post SQL Server Data-Tier Application Framework (September 2012) Available. DacFX is essentially all the gubbins that you need to extract and publish .dacpacs and according to Adam’s post it incorporates a new feature that I think is very interesting indeed: Extract DACPAC with data – Creates a database snapshot file (.dacpac) from a live SQL Server or Windows Azure SQL Database that contains data from user tables in addition to the database schema. These packages can be published to a new or existing SQL Server or Windows Azure SQL Database using the SqlPackage.exe Publish action. Data contained in the package replaces the existing data in the target database. In short, .dacpacs can now include data as well as schema. I’m very excited about this because one of my long-standing complaints about SSDT (and its many forebears) is that whilst it has great support for declarative development of schema it does not provide anything similar for data – if you want to deploy data from your SSDT projects then you have to write Post-Deployment MERGE scripts. This new feature for .dacpacs does not change that situation yet; however, it is a very important pre-requisite, so I am hoping that a feature to provide declaration of data (in addition to declaration of schema, which we have today) is going to light up in SSDT in the not too distant future. Read more about the latest SSDT, Power Tools & DacFX releases at: Now available: SQL Server Data Tools - September 2012 update! by Janet Yeilding New SSDT Power Tools! Now for both Visual Studio 2010 and Visual Studio 2012 by Sarah McDevitt SQL Server Data-Tier Application Framework (September 2012) Available by Adam Mahood @Jamiet

    Read the article

  • Is there a real difference between dynamic analysis and testing?

    - by user970696
    Often testing is regarded as a dynamic analysis of software. Yet while writing my thesis, the reviewer noted to me that dynamic analysis is about analyzing the program behind the scenes - e.g. profiling - and that it is not the same as testing, because it is "analysis" which looks inside and observes. I know that "static analysis" is not testing; should we then also separate this "dynamic analysis" from testing? Some books do refer to dynamic analysis in this sense. I would maybe say that testing is one means of dynamic analysis?

    Read the article

  • ASP.NET Dynamic Data Browser Compatibility

    - by Petras
    Could any experienced users of Dynamic Data comment on whether there are issues with it in: Internet Explorer 6 Safari Chrome Opera We are looking to use it on a public facing website and good old IE6 has many important users in government departments and large companies so it has to work there. The other browsers could also become an issue.

    Read the article

  • Faking a dynamic schema in Core Data?

    - by Gouldsc
    From reading the Apple Docs on Core Data, I've learned that you should not use Core Data when you need a dynamic schema. If I wanted to provide the user the ability to create their own properties, in a core data model would it work if I created some "dummy" attributes like "custom decimal 1", "custom decimal 2", "custom text 1", "custom text 2" etc that the user could name and use for their own purposes? Obviously this won't work for relationships, but for simple properties it seems like a reasonable workaround. Will creating a bunch of dummy attributes on my entities that go unused by most users noticeably decrease performance for them? Have any of you tried something like this? Thanks!

    Read the article

  • How Oracle Data Integration Customers Differentiate Their Business in Competitive Markets

    - by Irem Radzik
    With data being a central force in driving innovation and competing effectively, data integration has become a key IT approach to remove silos and ensure working with consistent and trusted data. Especially with the release of 12c version, Oracle Data Integrator and Oracle GoldenGate offer easy-to-use and high-performance solutions that help companies with their critical data initiatives, including big data analytics, moving to cloud architectures, modernizing and connecting transactional systems and more. In a recent press release we announced the great momentum and analyst recognition Oracle Data Integration products have achieved in the data integration and replication market. In this press release we described some of the key new features of Oracle Data Integrator 12c and Oracle GoldenGate 12c. In addition, a few from our 4500+ customers explained how Oracle’s data integration platform helped them achieve their business goals. In this blog post I would like to go over what these customers shared about their experience. Land O’Lakes is one of America’s premier member-owned cooperatives, and offers an extensive line of agricultural supplies, as well as production and business services. Rich Bellefeuille, manager, ETL & data warehouse for Land O’Lakes told us how GoldenGate helped them modernize their critical ERP system without impacting service and how they are moving to new projects with Oracle Data Integrator 12c: “With Oracle GoldenGate 11g, we've been able to migrate our enterprise-wide implementation of Oracle’s JD Edwards EnterpriseOne, ERP system, to a new database and application server platform with minimal downtime to our business. Using Oracle GoldenGate 11g we reduced database migration time from nearly 30 hours to less than 30 minutes. Given our quick success, we are considering expansion of our Oracle GoldenGate 12c footprint. We are also in the midst of deploying a solution leveraging Oracle Data Integrator 12c to manage our pricing data to handle orders more effectively and provide a better relationship with our clients. We feel we are gaining higher productivity and flexibility with Oracle's data integration products." ICON, a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries, highlighted the competitive advantage that a solid data integration foundation brings. Diarmaid O’Reilly, enterprise data warehouse manager, ICON plc said “Oracle Data Integrator enables us to align clinical trials intelligence with the information needs of our sponsors. It helps differentiate ICON’s services in an increasingly competitive drug-development industry."  You can find more info on ICON's implementation here. A popular use case for Oracle GoldenGate’s real-time data integration is offloading operational reporting from critical transaction processing systems. SolarWorld, one of the world’s largest solar-technology producers and the largest U.S. solar panel manufacturer, implemented Oracle GoldenGate for real-time data integration of manufacturing data for fast analysis. Russ Toyama, U.S.
    senior database administrator for SolarWorld told us real-time data helps their operations and GoldenGate’s solution supports high performance of their manufacturing systems: “We use Oracle GoldenGate for real-time data integration into our decision support system, which performs real-time analysis for manufacturing operations to continuously improve product quality, yield and efficiency. With reliable and low-impact data movement capabilities, Oracle GoldenGate also helps ensure that our critical manufacturing systems are stable and operate with high performance."  You can watch the full interview with SolarWorld's Russ Toyama here. Starwood Hotels and Resorts is one of the many customers that found out how well Oracle Data Integration products work with Oracle Exadata. Gordon Light, senior director of information technology for StarWood Hotels, says they had notable performance gain in loading Oracle Exadata reporting environment: “We leverage Oracle GoldenGate to replicate data from our central reservations systems and other OLTP databases – significantly decreasing the overall ETL duration. Moving forward, we plan to use Oracle GoldenGate to help the company achieve near-real-time reporting.” You can listen about Starwood Hotels' implementation here. Many companies combine the power of Oracle GoldenGate with Oracle Data Integrator to have a single, integrated data integration platform for variety of use cases across the enterprise. Ufone is another good example of that. The leading mobile communications service provider of Pakistan has improved customer service using timely customer data in its data warehouse. Atif Aslam, head of management information systems for Ufone says: “Oracle Data Integrator and Oracle GoldenGate help us integrate information from various systems and provide up-to-date and real-time CRM data updates hourly, rather than daily. The applications have simplified data warehouse operations and allowed business users to make faster and better informed decisions to protect revenue in the fast-moving Pakistani telecommunications market.” You can read more about Ufone's use case here. In our Oracle Data Integration 12c launch webcast back in November we also heard from BT’s CTO Surren Parthab about their use of GoldenGate for moving to private cloud architecture. Surren also shared his perspectives on Oracle Data Integrator 12c and Oracle GoldenGate 12c releases. You can watch the video here. These are only a few examples of leading companies that have made data integration and real-time data access a key part of their data governance and IT modernization initiatives. They have seen real improvements in how their businesses operate and differentiate in today’s competitive markets.
    You can read about other customer examples in our Ebook: The Path to the Future and access resources including white papers, data sheets, podcasts and more via our Oracle Data Integration resource kit.

    Read the article

  • TStringList, Dynamic Array or Linked List in Delphi?

    - by lkessler
    I have a choice. I have an array of ordered strings that I need to store and access. It looks like I can choose between using: A TStringList A Dynamic Array of strings, and A Linked List of strings In what circumstances is each of these better than the others? Which is best for small lists (under 10 items)? Which is best for large lists (over 1000 items)? Which is best for huge lists (over 1,000,000 items)? Which is best to minimize memory use? Which is best to minimize loading and/or access time? For reference, I am using Delphi 2009.

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this through the use of User Roles, these Roles allow for specific access to be granted based on what role a user plays in relation to the data that they are manipulating. Typical data access actions are defined by the CRUD Principle. CRUD Principle: Create New Data Read Existing Data Update Existing Data Delete Existing Data Based on the actions assigned to a role assigned, User can manipulate data as they need to preform daily business operations.  An example of this can be seen in a hospital where doctors have been assigned Create, Read, Update, and Delete access to their patient’s prescriptions so that a doctor can prescribe and adjust any existing prescriptions as necessary. However, a nurse will only have Read access on the patient’s prescriptions so that they will know what medicines to give to the patients. If you notice, they do not have access to prescribe new prescriptions, update or delete existing prescriptions because only the patient’s doctor has access to preform those actions. With User Roles comes responsibility, companies need to constantly monitor data access to ensure that the proper roles have the most appropriate access levels to ensure users are not exposed to inappropriate data.  In addition this also protects rouge employees from gaining access to critical business information that could be destroyed, altered or stolen. It is important that all data access is monitored because of this threat. Security Governance Current Data Governance laws regarding security Health Insurance Portability and Accountability Act (HIPAA) Sarbanes-Oxley Act Database Breach Notification Act The US Department of Health and Human Services defines HIIPAA as a Privacy Rule. This legislation protects the privacy of individually identifiable health information. Currently, HIPAA   sets the national standards for securing electronically protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety. In 2002 after the wake of the Enron and World Com Financial scandals Senator Paul Sarbanes and Representative Michael Oxley lead the creation of the Sarbanes-Oxley Act. This act administered by the Securities and Exchange Commission (SEC) dramatically altered corporate financial practices and data governance. In addition, it also set specific deadlines for compliance. The Sarbanes-Oxley is not a set of standard business rules and does not specify how a company should retain its records; In fact, this act outlines which pieces of data are to be stored as well as the storage duration. The Database Breach Notification Act requires companies, in the event of a data breach containing personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that this act is only California legislation. However, it does affect “any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information,” regardless of where the compromised data is located.  This will force any business that maintains at least limited interactions with California residents will find themselves subject to the Act’s provisions. 
    Security Policies. All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas it serves. These policies need to be mandated by a company's Security Officer; for smaller companies, they need to come from executives, directors, and owners.

    Read the article

  • Excel Help: Data Input Help

    - by B-Ballerl
    Every day I download data from a site that has rows each filled with individual data for clients. I'm able to input the data into Excel as a whole, but after that I'm having trouble figuring out how to put it into a chart. For example: web visit time. Say Client 1 stayed for 5 min, increasing his total time on the site to 20 min, and Client 2 stayed for 0 min, keeping his total of 10 min; they were both registered on New Year's Eve, R1's last login was today and R2's was yesterday (R for some reason represents Client, no idea why...). Client 3 hasn't been on since he registered, keeping his total at 4 min. So my data would look something like this for today (20110104):
    R1,20101231,20110104,20
    R2,20101231,20110103,10
    R3,20101231,20101231,4
    And this for the day before (20110103):
    R1,20101231,20110102,15
    R2,20101231,20110103,10
    R3,20101231,20101231,4
    I get about 200+ client rows each day, and even the names in the client list change. Is it possible to import the data each day and fill it into an Excel sheet where the client number sits in a table on the left-hand side, and the amount of time (whole number, e.g. 4) spent on the site each day extends to the right under its specific date (see picture)? I've managed to create a manual sheet but have been unsuccessful at getting Excel to do any of it for me. Here are two pictures:

    Read the article

  • Recover data from hard drive with partitions (but not most data) overwritten

    - by Macha
    I have a 500GB hard drive I've been keeping around to recover data from; I removed it from a failing NAS that got sort of... erratic at the end. I finally got rid of the NAS when a firmware update removed the partition table. Fast forward to a week ago, when I was building a new PC, and a mixup resulted in me placing the hard drive in question in the new PC and installing Windows XP on the first 100GB. I'm presuming any data on that first 100GB is now gone, but for the rest of it, is there any way I can recover it at home, as professional data recovery is currently too expensive? I have a blank 1TB HDD that I can store any images of that hard drive on. The problem was definitely with the NAS and not the hard drive, as the hard drive took a successful install of Windows when mistakenly placed in the new PC, and there were clearly broken capacitors in the NAS's circuitry. The data I want to recover (in order of priority) is: High: some JPGs of family photos. Medium: some RAW files (there are also JPG versions of all of these). Low: some MP3s, AVIs, and ISOs; I can re-rip most of these if need be, but it'd be handy not to have to. (I don't need a backup lecture, and if you can hold it in from nagging Jeff Atwood for it, you can hold it in from nagging me for it.) In short: the partition tables are gone and overwritten. The data is not overwritten, except for an amount equal to the size of a Windows XP SP3 installation.

    Read the article

  • Why Oracle Delivers More Value than IBM in Data Integration Solutions

    - by irem.radzik(at)oracle.com
    For data integration projects, IT organizations look for a robust but easy-to-use solution, one that simplifies enterprise data architecture while providing exceptional value, not one that adds complexity and costs. This is a major challenge today for customers who are using IBM InfoSphere products like DataStage or Change Data Capture, whereas Oracle consistently delivers higher value with its data integration products, such as Oracle Data Integrator and Oracle GoldenGate. There are many differentiators for Oracle's data integration offering in comparison to IBM's. Here are the top five: lower cost of ownership; higher performance in both real-time and bulk data movement; ease of use and flexibility; reliability; and a complete, open, and integrated middleware offering. Architectural differences between the products contribute a great deal to these differentiators. First of all, Oracle's ETL architecture does not require a middle-tier transformation server, something IBM does require. Not only does an additional transformation server cost more to manage, including energy costs, it also adds a performance bottleneck. In addition, IBM's data integration products are complex and often require lengthy professional services engagements to integrate, which translates to higher costs and delayed time to market. Then there's the reliability factor: our customers choose Oracle GoldenGate over IBM's InfoSphere Change Data Capture product because Oracle GoldenGate is designed for mission-critical systems that require guaranteed data delivery and automatic recovery in case of process interruptions. On Thursday we will discuss these key differentiators in detail and provide examples of customers that chose Oracle over IBM for data integration projects. Join us on Thursday, Feb 10th at 11am PT to learn how Oracle delivers more value than IBM in data integration solutions.

    Read the article

  • Stopping duplicate H1 and title from dynamic content

    - by codemonkey
    I have a web site with lots of dynamically (database-driven) created pages. These pages are basically used to show uploaded images. The pages look a bit like this:
    URL: http://www.mywebsite.com/page-id/page-title/
    H1: View from the sea
    This is a big issue because I might have 10 other pages with the title 'View from the sea'. I know the simple solution would be to make sure the pages are named differently, but I have lots of users on the web site, so it's not that simple. What do you guys think about putting the page-id with the page-title in the H1 tag, so it might read '437 - View from the sea'? I need to differentiate the H1 titles. I think using the page-id would help, but if anyone has a better solution that would be great! Thanks in advance

    Read the article

  • adding tagged / dynamic pages in sitemap

    - by sam
    I've got a blog that's been running for about a year. I've made about 200 posts, and there should be about 220 pages to index (additional pages for about/contact, etc.). When I crawl the site I get 1900 pages because of all the pages related to the tags I've used in my posts; 70% of these pages contain only one blog post. When submitting my sitemap to Google, should I exclude all pages with /tagged/ in the URL so I'll only be submitting unique pages, or should I submit the full sitemap?

    Read the article

  • Is it good or bad to have dynamic content in page titles and/or description

    - by Gunjan
    On a local listing website, I append the number of search results found to the description meta tag of the page (not in the title currently), as I think this is valuable for users, e.g. "Find address, phone numbers, blah blah blah for 21 outlets in locality. some more stuff after this..." As more places are added to the database, the description for the same page will change frequently. Is this good or bad for SEO? And how about doing the same for title tags?

    Read the article

  • Are dynamic languages at disadvantage for agile development?

    - by Gerenuk
    From what I've read, agile development often involves refactoring or reverse engineering code into diagrams. Of course there is much more to it than that, but if we consider the practices that rely on these two methods, are dynamically typed languages at a disadvantage? It seems static typing would make refactoring and reverse engineering much easier. Is refactoring or (automated) reverse engineering hard, if not impossible, in dynamically typed languages? What do real-world projects tell us about the use of dynamically typed languages with agile methodologies?

    Read the article

  • Issue with dynamic array Queue data structure with void pointer

    - by Nazgulled
    Hi, maybe there's no way to solve this the way I'd like, but I don't know everything, so I'd better ask... I've implemented a simple Queue with a dynamic array so the user can initialize it with however many items they want. I'm also trying to use a void pointer to allow any data type, but that's the problem. Here's my code:

    typedef void * QueueValue;

    typedef struct sQueueItem {
        QueueValue value;
    } QueueItem;

    typedef struct sQueue {
        QueueItem *items;
        int first;
        int last;
        int size;
        int count;
    } Queue;

    void queueInitialize(Queue **queue, size_t size) {
        *queue = xmalloc(sizeof(Queue));
        QueueItem *items = xmalloc(sizeof(QueueItem) * size);
        (*queue)->items = items;
        (*queue)->first = 0;
        (*queue)->last = 0;
        (*queue)->size = size;
        (*queue)->count = 0;
    }

    Bool queuePush(Queue * const queue, QueueValue value, size_t val_sz) {
        if(isNull(queue) || isFull(queue)) return FALSE;
        queue->items[queue->last].value = xmalloc(val_sz);
        memcpy(queue->items[queue->last].value, value, val_sz);
        queue->last = (queue->last+1) % queue->size;
        queue->count += 1;
        return TRUE;
    }

    Bool queuePop(Queue * const queue, QueueValue *value) {
        if(isEmpty(queue)) return FALSE;
        *value = queue->items[queue->first].value;
        free(queue->items[queue->first].value);
        queue->first = (queue->first+1) % queue->size;
        queue->count -= 1;
        return TRUE;
    }

    The problem lies in the queuePop function. When I call it, I lose the value because I free it right away. I can't seem to solve this dilemma. I want my library to be generic and modular: the user should not have to care about allocating and freeing memory; that's the library's job. How can the user still get the value from queuePop while letting the library handle all memory allocation and freeing?
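
    One way to resolve the dilemma (a minimal sketch, not taken from the linked answer, and assuming the Queue/QueueItem definitions, helpers, and <stdlib.h> from the question are in scope) is to transfer ownership of the allocated block to the caller instead of freeing it inside queuePop, and have the library expose a matching release call so allocation and freeing both stay inside the library. The name queueValueRelease is hypothetical:

    /* Pop hands the still-allocated value to the caller instead of freeing it. */
    Bool queuePop(Queue * const queue, QueueValue *value) {
        if(isEmpty(queue)) return FALSE;
        *value = queue->items[queue->first].value;   /* transfer ownership to caller */
        queue->items[queue->first].value = NULL;     /* the queue no longer owns it */
        queue->first = (queue->first+1) % queue->size;
        queue->count -= 1;
        return TRUE;
    }

    /* The library still performs the free; the caller just hands the value back when done. */
    void queueValueRelease(QueueValue value) {
        free(value);
    }

    The trade-off is that the caller must remember to call queueValueRelease after using the value. The alternative is to have queuePop copy the data into a caller-supplied buffer (passing its size, as queuePush already does), which keeps every malloc/free inside the library at the cost of an extra copy.
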

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >