Search Results

Search found 65872 results on 2635 pages for 'core data migration'.


  • SQL Developer Data Modeler v3.3 Early Adopter: Search

    - by thatjeffsmith
    photo: Stuck in Customs via photopin cc The next version of Oracle SQL Developer Data Modeler is now available as an Early Adopter (read, beta) release. There are many new major feature enhancements to talk about, but today’s focus will be on the brand new Search mechanism. Data, data, data – SO MUCH data Google has made countless billions of dollars around a very efficient and intelligent search business. People have become accustomed to having their data accessible AND searchable. Data models can have thousands of entities or tables, each having dozens of attributes or columns. Imagine how hard it could be to find what you’re looking for here. This is the challenge we have tackled head-on in v3.3. Same location as the Search toolbar in Oracle SQL Developer (and most web browsers) Here’s how it works: Search as you type – wicked fast as the entire model is loaded into memory Supports regular expressions (regex) Results loaded to a new panel below Search across designs, models Search EVERYTHING, or filter by type Save your frequent searches Save your search results as a report Open common properties of objects in search results and edit basic properties on-the-fly Want to just watch the video? We have a new Oracle Learning Library resource available now which introduces the new and improved Search mechanism in SQL Developer Data Modeler. Go watch the video and then come back. Some Screenshots This will be a pretty easy feature to pick up. Search is intuitive – we’ve already learned how to search. Now we just have a better interface for it in SQL Developer Data Modeler. But just in case you need a couple of pointers… The SYS data dictionary in model form with Search Results If I type ‘translation’ in the search dialog, then the results will come up as hits are ‘resolved.’ By default, everything is searched, although I can filter the results after-the-fact. You can see where the search finds a match in the ‘Content’ column Save the Results as a Report If you limit the search results to a category and a model, then you can save the results as a report. All of the usual suspects You can optionally include the search string, which displays at the top of the report as ‘PATTERN.’ You can save your common reporting setups as a template and reuse those as well. Here’s a sample HTML report: Yes, I like to search my search results report! Two More Ways to Search You can search ‘in context’ by opening the ‘Find’ dialog from an active design. You can do this using the ‘Search’ toolbar button or from a model context menu. Searching a specific model Instead of bringing up the old modal Find dialog, you now get to use the new and improved Search panel. Notice there’s no ‘Model’ drop-down to select and that the active Search form is now in the Search panel versus the search toolbar up top. What else is new in SQL Developer Data Modeler version 3.3? All kinds of goodies. You can send your model to Excel for quick edits/reviews and suck the changes back into your model, you can share objects between models, and much much more. You’ll find new videos and blog posts on the subject in the next few days and weeks. Enjoy! If you have any feedback or want to report bugs, please visit our forums.

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Relational Database and NoSQL database in the Big Data Story. In this article we will understand the role of Key-Value Pair Databases and Document Databases in the Big Data Story. Now we will look at a few examples of operational databases. Relational Databases (Yesterday’s post) NoSQL Databases (Yesterday’s post) Key-Value Pair Databases (This post) Document Databases (This post) Columnar Databases (Tomorrow’s post) Graph Databases (Tomorrow’s post) Spatial Databases (Tomorrow’s post) Key Value Pair Databases Key Value Pair Databases are also known as KVP databases. A key is a field name or attribute, an identifier; the content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts. They do not have a schema, which makes them very flexible as well as scalable. The disadvantage of Key Value Pair (KVP) databases is that they do not follow ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication, and high availability. In KVP databases the data is stored as strings. Here is a simple example of what a Key Value database looks like: Name = Pinal Dave; Color = Blue; Twitter = @pinaldave; Name = Nupur Dave; Movie = The Hero. As the number of users grows, it starts getting difficult to manage the entire database. As there is no specific schema or set of rules associated with the database, there is a chance that the database grows exponentially as well. It is crucial to select the right Key Value Pair database, one which offers an additional set of tools to manage the data and provides finer control over various business aspects of it. Riak Riak is one of the most popular Key Value databases. It is known for its scalability and performance with high volume, high velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will discuss Riak further in future blog posts. Key Value databases are a good choice for social media, communities, and caching layers connecting other databases. In simpler words, whenever we require flexible data storage with scalability in mind, KVP databases are a good option to consider. Document Database There are two different kinds of document databases: 1) full document content (web pages, Word docs, etc.) and 2) storage of document components. It is the second type of document database we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON (BSON) for the structure of the documents. JSON is a very easy format to understand and it is very easy for applications to write. There are two major JSON structures used for document databases: 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases. MongoDB MongoDB databases are made up of collections. Each collection is built of documents and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available and supports query services as well as MapReduce. It is often used in high volume content management systems. CouchDB CouchDB databases are composed of documents which consist of fields and attachments, as well as a description (metadata) of the document. It supports ACID properties. The main attraction of CouchDB is that it will continue to operate even when network connectivity is sketchy. Due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which change very frequently. A good example of document database usage is real time analytics in social networking or content management systems. Tomorrow In tomorrow’s blog post we will discuss various other Operational Databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
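
    The structures described above are easy to see in a few lines of code. As a quick illustration only (this sketch is not from the original post), here is a minimal Python example of a key-value pair mapping and a JSON-style document that combines name-value pairs with an ordered list:

        import json

        # A key-value pair (KVP) store at its simplest: keys identify values, no schema.
        kvp_store = {
            "Name": "Pinal Dave",
            "Color": "Blue",
            "Twitter": "@pinaldave",
        }

        # A document-style record of the kind MongoDB or CouchDB holds: name-value
        # pairs plus an ordered list, expressed here as plain JSON.
        document = {
            "name": "Nupur Dave",
            "movie": "The Hero",
            "tags": ["social", "media", "caching"],  # an ordered list
        }

        print(kvp_store["Twitter"])             # look a value up by its key
        print(json.dumps(document, indent=2))   # serialize the document as JSON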

    Read the article

  • The data reader returned by the store data provider does not have enough columns

    - by molgan
    Hello, I get the following error when I try to execute a stored procedure: "The data reader returned by the store data provider does not have enough columns" When I execute it in SQL Server Management Studio like this: DECLARE @return_value int, @EndDate datetime EXEC @return_value = [dbo].[GetSomeDate] @SomeID = 91, @EndDate = @EndDate OUTPUT SELECT @EndDate as N'@EndDate' SELECT 'Return Value' = @return_value GO it returns the value properly.... @SomeDate = '2010-03-24 09:00' And in my app I have: if (_entities.Connection.State == System.Data.ConnectionState.Closed) _entities.Connection.Open(); using (EntityCommand c = new EntityCommand("MyAppEntities.GetSomeDate", (EntityConnection)this._entities.Connection)) { c.CommandType = System.Data.CommandType.StoredProcedure; EntityParameter paramSomeID = new EntityParameter("SomeID", System.Data.DbType.Int32); paramSomeID.Direction = System.Data.ParameterDirection.Input; paramSomeID.Value = someID; c.Parameters.Add(paramSomeID); EntityParameter paramSomeDate = new EntityParameter("SomeDate", System.Data.DbType.DateTime); paramSomeDate.Direction = System.Data.ParameterDirection.Output; c.Parameters.Add(paramSomeDate); int retval = c.ExecuteNonQuery(); return (DateTime?)c.Parameters["SomeDate"].Value; } Why does it complain about columns? I googled the error and someone said something about removing RETURN in the sp, but I don't have any RETURN there. The last line is like SELECT @SomeDate = D.SomeDate FROM .... /M

    Read the article

  • ASP.NET server data persistence

    - by Wayne Werner
    Hi, I'm not really sure exactly how the question should be phrased, so please be patient if I ask the wrong thing. I'm writing an ASP.NET application using VB as the code-behind language. I have a data access class that connects to the DB to run the query (parameterized, of course), and another class to perform the validation tasks - I access this class from my aspx page. What I would like is to be able to store the data server-side and wait for the user to choose from a few options based on the validity of the data. But unless my understanding is completely off, having persistent data objects on the server will cause problems when multiple users connect? My ultimate goal is that once the data has been validated the end user can't modify it. Currently I'm validating the data, but I still have to retrieve it from the web form AFTER the user says OK, which obviously leaves open the possibility of injecting bad data either accidentally (unlikely) or on purpose (also unlikely for the user, but I'd prefer not to take the chance). So am I completely off in my understanding? If so, can someone point me to a resource that provides some instructions on keeping persistent data on the server, or provide some instruction? Thanks!

    Read the article

  • SQLAuthority News – Best Practices for Data Warehousing with SQL Server 2008 R2

    - by pinaldave
    An integral part of any BI system is the data warehouse: a central repository of data that is regularly refreshed from the source systems. The new data is transferred at regular intervals by extract, transform, and load (ETL) processes. This whitepaper talks about best practices for data warehousing. It discusses ETL, analysis, and reporting as well as the relational database. The main focus of the whitepaper is on ‘architecture’ and ‘performance’. Download Best Practices for Data Warehousing with SQL Server 2008 R2 Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Data Warehousing, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • A big move is in the works for the Oracle data mining interface, Oracle Data Mining

    - by Fekete Zoltán
    As we could already see from the teaser in last autumn's Oracle OpenWorld news and presentations, Oracle is preparing a big move on the data mining front (Oracle Data Mining), namely by extending the graphical interface of its highly usable data mining engine. If we look closely at this latter link, the existing graphical interface already runs under the name Oracle Data Miner Classic. So how can Oracle Data Mining be used? - Oracle Data Miner (a GUI freely downloadable from OTN) - from Java and PL/SQL, via the Oracle Data Mining JDeveloper and SQL Developer Extensions - from an Excel front end, via the Oracle Spreadsheet Add-In for Predictive Analytics - via the ODM Connector for mySAP BW Oracle Data Mining technical information.

    Read the article

  • Google I/O 2012 - Big Data: Turning Your Data Problem Into a Competitive Advantage

    Ju-kay Kwek, Navneet Joneja Can businesses get practical value from web-scale data without building proprietary web-scale infrastructure? This session will explore how new Google data services can be used to solve key data storage, transformation and analysis challenges. We will look at concrete case studies demonstrating how real life businesses have successfully used these solutions to turn data into a competitive business asset. For all I/O 2012 sessions, go to developers.google.com From: GoogleDevelopers Time: 52:39

    Read the article

  • Data-Driven SOA with Oracle Data Integrator

    - by Irem Radzik
    By Mike Eisterer. Data integration is more than simply moving data in bulk or in real time; it is also about unifying information for improved business agility and integrating it in today’s service-oriented architectures. SOA enables organizations to easily define services which may then be discovered and leveraged by varying consumers. These consumers may be applications, customer-facing portals, or complex business rules which assemble services to automate processes. Data as a foundational service provider is a key component of today’s successful SOA implementations. Oracle offers the broadest and most integrated portfolio of products to help you define, organize, orchestrate and consume data services. If you are attending Oracle OpenWorld next week, you will have ample opportunity to see the latest Oracle Data Integrator live in action and work with it yourself in two Hands-on Labs. Visit the hands-on lab to gain experience firsthand: Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab (HOL10480) Wed Oct 3rd 11:45AM Marriott Marquis - Salon 1/2 To learn more about Oracle Data Integrator, please visit our introduction Hands-on Lab: Introduction to Oracle Data Integrator (HOL10481) Mon Oct 1st 3:15PM, Marriott Marquis - Salon 1/2 If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • Core Data vs. SQLitePersistentObjects

    - by Macatomy
    I'm creating an iPhone app and I'm trying to choose between 2 solutions for a persistent store: Core Data, or SQLitePersistentObjects. Basically, all my app needs is a way to store an array of model objects and then load them again to display in a UITableView. It's nothing too complicated. Core Data seems to have a much higher learning curve than the simple-to-use SQLitePersistentObjects. Are there any obvious benefits of using Core Data over SQLitePersistentObjects in my case?

    Read the article

  • Can Core Data be used on Linux?

    - by glenc
    This might be a stupid question, but I was wondering whether or not you can use the Core Data libraries on Linux at all? I'm planning how to build the server side of an iPhone app that I'm working on, and have found that you can use PyObjC to get access to Core Data in a Python environment, e.g. use Core Data in a TurboGears web application. At this point I'm thinking that you would have to run the web server on Mac OSX, because I can't find any evidence on the internet that you can access the Objective-C libraries on Linux. I've always written webapps on Linux but will obviously make the jump to an OSX server if it allows me to use the same datastore implementation on the iPhone and the server, the only job remaining being the Core Data <- Web Services XML translation that has to happen on the wire.

    Read the article

  • How can Core Data store an NSData?

    - by mystify
    The documentation says that Core Data properties can only store NSString, NSNumber and NSDate types. However, a lot of Core Data users claim Core Data can also store an NSData type. But I wasn't able to see that in the documentation, although the Xcode Data Modeler allows you to choose a data type called "binary" (which seems to be NSData). Did I miss something? Is there a hidden place in the documentation that indeed lists NSData for binary stuff?

    Read the article

  • Subsetting a data frame in a function using another data frame as parameter

    - by lecodesportif
    I would like to submit a data frame to a function and use it to subset another data frame. This is the basic data frame: foo <- data.frame(var1= c('1', '1', '1', '2', '2', '3'), var2=c('A', 'A', 'B', 'B', 'C', 'C')) I use the following function to find out the frequencies of var2 for specified values of var1. foobar <- function(x, y, z){ a <- subset(x, (x$var1 == y)) b <- subset(a, (a$var2 == z)) n=nrow(b) return(n) } Examples: foobar(foo, 1, "A") # returns 2 foobar(foo, 1, "B") # returns 1 foobar(foo, 3, "C") # returns 1 This works. But now I want to submit a data frame of values to foobar. Instead of the above examples, I would like to submit df to foobar and get the same results as above (2, 1, 1): df <- data.frame(var1=c('1','1','3'), var2=c("A", "B", "C")) When I change foobar to accept two arguments like foobar(foo, df) and use y[, c(var1)] and y[, c(var2)] instead of the two parameters x and y, it still doesn't work. What is the right way to do this?

    Read the article

  • Linux core dumps are too large!

    - by themoondothshine
    Hey guys, Recently I've been noticing an increase in the size of the core dumps generated by my application. Initially, they were just around 5MB in size and contained around 5 stack frames, and now I have core dumps of 2GB and the information contained within them is no different from the smaller dumps. Is there any way I can control the size of core dumps generated? Shouldn't they be at least smaller than the application binary itself? Binaries are compiled in this way: Compiled in release mode with debug symbols (i.e., the -g compiler option in GCC). Debug symbols are copied onto a separate file and stripped from the binary. A GNU debug symbols link is added to the binary. At the beginning of the application, there's a call to setrlimit which sets the core limit to infinity -- Is this the problem?
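
    The setrlimit call mentioned at the end is what governs the maximum core file size: with the limit set to infinity, nothing caps how large a dump can grow. As a language-agnostic sketch only (the application in the question is a C++/GCC binary, where RLIMIT_CORE behaves the same way), here is how the limit can be inspected and capped using Python's resource module:

        import resource

        # RLIMIT_CORE controls the maximum size (in bytes) of core files this
        # process may produce. RLIM_INFINITY (what the application sets) means
        # no cap at all; a finite value truncates larger dumps.
        soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
        print("current core limit (soft, hard):", soft, hard)

        # Cap core dumps at 100 MB (the value is only an illustration).
        cap = 100 * 1024 * 1024
        new_soft = cap if hard == resource.RLIM_INFINITY else min(cap, hard)
        resource.setrlimit(resource.RLIMIT_CORE, (new_soft, hard))

    A truncated core can still be missing the frames you need, so on Linux the per-process coredump filter (/proc/<pid>/coredump_filter, documented in core(5)) may be worth a look as well for excluding large memory mappings instead of cutting the file short.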

    Read the article

  • using core data with web services

    - by Jayshree
    Hi. I am a noob in Xcode. I am developing an iPhone app where I need to send and receive data from a web service, and I need to store that data temporarily in my app. I don't want to use SQLite, so I was wondering if I should use Core Data for this purpose. I read some articles but I still don't have a clear picture of how to do it, because I have used Core Data only with SQLite. I want to do the following things: I will receive table data from a web service. I have to perform certain calculations on those fields. I will send the data back in XML format to the server. How do I convert the XML data into int, date or any other data type? And how do I store it in managed data objects? Can anyone please help me with this? Thanks for your time.

    Read the article

  • Managing Data Dependencies of Java Classes that Load Data from the Classpath at Runtime

    - by Martin Potthast
    What is the simplest way to manage dependencies of Java classes on data files present in the classpath? More specifically: How should data dependencies be annotated? Perhaps using Java annotations (e.g., @Data)? Or rather some build entries in a build script or a properties file? Is there a build tool that integrates and evaluates such information (Ant, Scons, ...)? Do you have examples? Consider the following scenario: A few lines of Ant create a Jar from my sources that includes everything found on the classpath. Then jarjar is used to remove all .class files that are not necessary to execute, say, class Foo. The problem is that all the data files that class Bar depends upon are still there in the Jar. The ideal deployment script, however, would recognize that the data files on which only class Bar depends can be removed, while data files on which class Foo depends must be retained. Any hints?

    Read the article

  • C# or windows equivalent of OS X's Core Data?

    - by Nektarios
    I'm late to the boat and have only just now started using Core Data in OS X / Cocoa - it's incredible and is really changing the way I look at things. Is there an equivalent technology in C# or the modern Windows frameworks? i.e. having managed data types where you get saving, data management, deleting, searching all for free? Also wondering if there's anything like this on Linux.

    Read the article

  • Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly

    - by Eyla
    Greetings, I have Windows Server 2008 Server Core. I want to configure this server to host PHP websites using IIS 7. I installed and configured IIS 7 to run PHP using the steps on this website: http://blogs.msdn.com/b/philpenn/archive/2009/07/19/deploying-iis-7-5-fastcgi-php-on-server-core.aspx Now I'm facing a problem: when I request my PHP website I get this error. Server Error 500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed. I checked the event log and found these details too: Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" could not be found. Please use sxstrace.exe for detailed diagnosis. I searched for this error and found a solution, which is to install the Microsoft Visual C++ 2008 SP1 Redistributable Package (x86). I installed it but I'm still getting the same error. Please help me solve this problem and let me know if you want more info about my issue.

    Read the article

  • I need help choosing between two configurations of the Dell Studio 14

    - by Adnan
    There are two configurations of the Dell Studio 14 (1458) which I'm looking at: Config 1: Core i7-720QM @ 1.6 GHz; ATI Mobility Radeon HD 5450 1GB; 4GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $999 Config 2: Core i5-430M; ATI Mobility Radeon HD 4530 512MB; 4GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $874 What I want to know is, would Config 1 still be able to do decent gaming (maybe some Starcraft II), and is there a great performance difference between the i5 and i7 processors? Is the $130 extra worth it for the i7 and the better graphics card? I do more than just basic computing. I plan on getting into web design (specifically using Photoshop and Dreamweaver), and I wish to do gaming. I know Config 1 is the better value, but I want to be sure that the $130 more is truly worth it. I don't have too much money and want to spend as wisely as possible, yet I am a computer geek and plan on doing a lot more than the average user.

    Read the article

  • Is it appropriate to run a complex enterprise-system configuration and migration project in a similar way to a Scrum development project?

    - by AndyM
    I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and determined to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project, with the first sprints targeted at building the minimal possible functionality needed (across all functional areas), and then iteratively deepening the level of functionality according to stakeholder feedback. I think this will de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?

    Read the article

  • Best way to implement user-powered data validation

    - by vegetables
    I run a product recommendation engine and I'm hitting a few snags. I'm looking to see if anyone has any recommendations on what I should do to minimize these issues. Here's how the site works: Users come to the site and are presented with product recommendations based on some criteria. If a user knows of a product that is not in our system, they can add it by providing the product name and manufacturer. We take that information, and: Hit one API to gather all the product meta-data (and to validate the product spelling, etc). If the product is not in this first API, we do not allow it in our system. Use the information from step 1 to hit another API for pricing information (gathered from many places online). For the sake of discussion, assume that I am searching both APIs in the most efficient/successful manner possible. For the most part, this works very well. I'd say ~80% of our data is perfectly accurate, but there are a few issues: Sometimes the pricing API (Step 2) doesn't have any information for the product. The way the pricing API is built, it will always return something (theoretically, the closest possible match), and there's no guarantee that the product name is spelled exactly the same way in both APIs, so there's no automated way of knowing if it's the right product. When the pricing API finds the right product, occasionally it has outdated, or even invalid pricing data (e.g. if it screen-scraped the wrong price from a website). Since the site was fairly small at first, I was able to manually verify every product that was added to the website. However, the site has grown to the point where this is taking several hours per day, and is just not an efficient use of my time. So, my question is: Aside from hiring someone (or getting an intern) to validate all the data manually, what would be the best system for letting my user base self-manage the data? Specifically, how can I allow users to edit the data while minimizing the risk of someone ambushing my website, or accidentally setting the data incorrectly?
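
    One of the gaps described above is that there is no automated way of knowing whether the pricing API returned the right product when the two APIs spell the name differently. Purely as a hedged illustration (this is not part of the original workflow), a simple string-similarity check such as Python's difflib could flag low-confidence matches for manual or user review instead of accepting them blindly; the names and threshold below are hypothetical:

        from difflib import SequenceMatcher

        def name_similarity(a: str, b: str) -> float:
            """Return a rough 0..1 similarity score between two product names."""
            return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

        # Hypothetical names returned by the metadata API and the pricing API.
        metadata_name = "Acme SuperWidget 3000 (Black)"
        pricing_name = "Acme SuperWidget 3000"

        score = name_similarity(metadata_name, pricing_name)
        if score < 0.8:  # the threshold is an assumption; tune it against real data
            print(f"Low-confidence match ({score:.2f}): queue for review")
        else:
            print(f"Accepted match ({score:.2f})")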

    Read the article

  • data handling with javascript

    - by Vincent Warmerdam
    Python has a very neat package called pandas which allows for quick data transformation: tables, aggregation, that sort of thing. A lot of this functionality can also be found in the Python itertools module. The plyr package in R is also very similar. Usually one would use this functionality to produce a table which is later visualized with a plot. I am personally very fond of d3, and I would like to allow the user to first indicate what type of data aggregation he wants on the dataset before it is visualized. The visualisation in question involves making a heatmap where the user gets to select the size of the bins of the heatmap beforehand (I want d3 to project this through Leaflet). I want to visually select the ideal size of the bins for the heatmap. The way I work now is that I take the dataset, aggregate it with Python and then manually load it into d3. This is a process that takes a lot of human effort, and I was wondering if the data aggregation can be done through the JavaScript of the browser. I couldn't find a package for JavaScript specifically built for data, suggesting (to me) that this is a bad idea and that one should not use JavaScript for the data handling. Is there a good module/package for JavaScript to handle data aggregation? Is it a good/bad idea to do the data aggregation in JavaScript (performance-wise)?
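
    For reference, here is a minimal sketch of the pandas-side aggregation step described above (binning points into a grid and emitting JSON records that d3 could consume); the column names and bin size are assumptions made purely for illustration:

        import pandas as pd

        # Hypothetical point data; in practice this comes from the real dataset.
        df = pd.DataFrame({
            "lat": [52.37, 52.38, 52.36, 52.40],
            "lon": [4.89, 4.90, 4.88, 4.95],
        })

        bin_size = 0.01  # heatmap bin width in degrees, chosen arbitrarily here

        # Snap each point to the lower-left corner of its bin, then count per bin.
        binned = (
            df.assign(lat_bin=(df["lat"] // bin_size) * bin_size,
                      lon_bin=(df["lon"] // bin_size) * bin_size)
              .groupby(["lat_bin", "lon_bin"])
              .size()
              .reset_index(name="count")
        )

        # Emit JSON records that a d3/Leaflet heatmap layer could read directly.
        print(binned.to_json(orient="records"))

    Whether the same grouping is done in the browser instead is mostly a question of dataset size; the sketch only mirrors the server-side step the post already describes.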

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently. Other test functions represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with the NUnit parametrized test case (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize its GUI's tree structure, so that the "test functions" become the children of the "input data". For example: File #1 CheckFileTypeTest GetFileTopLevelStructureTest CompleteProcessTest StageOneTest StageTwoTest StageThreeTest File #2 CheckFileTypeTest GetFileTopLevelStructureTest CompleteProcessTest StageOneTest StageTwoTest StageThreeTest This will be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?

    Read the article

  • Test Data in a Distributed System

    - by Davin Tryon
    A question that has been vexing me lately is how to effectively test (end-to-end) features in a distributed system. Particularly, how to effectively manage (through time) test data for feature testing. The system in question is a typical SOA setup. The composition is done in JavaScript when calling several REST APIs. Each service is built as an independent block. Each service has some kind of persistent storage (SQL Server in most cases). The main issue at the moment is how to approach test data when testing end-to-end features. Functional end-to-end testing occurs through the UI, and it is therefore necessary for test data to be set up before the test run (this could be manual or automated testing). As is typical in a distributed system, identifiers from one service are used as a link in another service. So, some level of synchronization needs to be present in the data to effectively test. What is the best way to manage and set up this data after a successful deployment to a test environment? For example, is it better to manage this test data inside each service? Or package it together with the testing suite? Does that testing suite exist as a separate project? I'm interested in design guidance about how to store and manage this test data as the application features evolve.

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail. Data Continuity Checklist: Disaster Recovery Plan/Policy, Backups, Redundancy, Trained Staff. Business Continuity Policies In order to protect data in case of any emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of the existing data and/or limit the amount of data that loses continuity. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or parts of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policies that IT must maintain redundant hardware in case of any hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained in the disaster recovery policy to ensure that all parties involved are knowledgeable enough to execute the recovery plan. Business Continuity Procedures Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow. Standard Business Continuity Procedures: Back up data and test the backups to ensure that they work; hire knowledgeable and trainable staff; offer training on new and existing systems; regularly monitor, test, maintain, and upgrade existing system hardware and applications; maintain redundancy regarding all data and critical business functionality.

    Read the article

  • How to choose how to store data?

    - by Eldros
    Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime. - Chinese Proverb I could ask what kind of data storage I should use for my actual project, but I want to learn to fish, so I don't need to ask for a fish each time I begin a new project. So far I have used two methods to store data in my non-game projects: XML files and relational databases. I know that there are also other kinds of databases, of the NoSQL kind. However, I don't know whether there are more choices available to me, or how to choose in the first place, aside from arbitrarily picking one. So the question is the following: How should I choose the kind of data storage for a game project? And I would be interested in the following criteria when choosing: The size of the project. The platform targeted by the game. The complexity of the data structure. Added: Portability of data amongst many projects. Added: How often the data should be accessed. Added: Multiple types of data for the same application. Any other point you think is of interest when deciding what to use. EDIT I know about "Would it be better to use XML/JSON/Text or a database to store game content?", but thought it didn't address exactly my point. Now if I am wrong, I would gladly be shown the error in my ways.

    Read the article
