Search Results

Search found 60488 results on 2420 pages for 'saving data'.


  • SSIS Snack: Data Flow Source Adapters

    - by andyleonard
    Introduction Configuring a Source Adapter in a Data Flow Task couples (binds) the Data Flow to an external schema. This has implications for dynamic data loads. "Why Can't I...?" I'm often asked a question similar to the following: "I have 17 flat files with different schemas that I want to load to the same destination database - how many Data Flow Tasks do I need?" I reply "17 different schemas? That's easy, you need 17 Data Flow Tasks." In his book Microsoft SQL Server 2005 Integration Services...(read more)

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first-hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted of reading large amounts of data (tens of gigabytes), processing it, and writing it back to a file or an Oracle 11gR2 database table. They are CPU-, memory- and IO-bound tests. Given the main focus of this project (CPU performance), bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks, and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        […]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. the L2ARC): once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, IO clearly becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline.

    For the database load tests, only the best IO configuration (using external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing single-thread performance

    Seven different tests were performed on both servers. Since only one thread (and thus one file) was read, no IO bottleneck was involved; all data was served from the ZFS cache.

    - Read File → Filter → Write File: Read a file, filter the data, and write the filtered data to a new file. The filter is set on the "Status" column: only lines with status "A" are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: Read a file and insert into a single Oracle table.
    - Average: Read a file, compute the average of a numeric column, and write the result to a new file.
    - Division & Square Root: Read a file, perform a division and square root on a numeric column, and write the result data to a new file.
    - Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: Read a file, transform it, and write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: Read a file, sort a numeric and an alphanumeric column, and write the result data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second). Improvement is the percentage of improvement between the T5140 and the T4-1.

        Test                   | T4-1 (Time s.) | T5140 (Time s.) | Improvement | T4-1 (Throughput) | T5140 (Throughput)
        Read/Filter/Write      | 125            | 806             | 645%        | 100               | 16
        Read/Load Database     | 195            | 1111            | 570%        | 64                | 11
        Average                | 96             | 557             | 580%        | 130               | 22
        Division & Square Root | 161            | 1054            | 655%        | 78                | 12
        Oracle DB Dump         | 164            | 945             | 576%        | 76                | 13
        Transform              | 159            | 1124            | 707%        | 79                | 11
        Sort                   | 251            | 1336            | 532%        | 50                | 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs.

    Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
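
    To make the Read File → Filter → Write File test concrete, here is a minimal sketch of the same logic in TypeScript (the actual Talend jobs generate Java; the file names and the column layout below simply follow the sample data above):

        // Sketch of the benchmark's filter test: stream the semicolon-delimited
        // file line by line and keep only rows whose Cust_Status column
        // (index 7 in the sample layout) is "A".
        import { createReadStream, createWriteStream } from "node:fs";
        import { createInterface } from "node:readline";

        async function filterFile(input: string, output: string): Promise<number> {
          const out = createWriteStream(output);
          let kept = 0;
          const lines = createInterface({ input: createReadStream(input) });
          for await (const line of lines) {
            if (line.split(";")[7] === "A") {   // the "Status" filter from the test
              out.write(line + "\n");
              kept++;
            }
          }
          out.end();
          return kept;
        }

        // Example: filterFile("customers.csv", "filtered.csv").then(n => console.log(n));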

    Read the article

  • Oracle Database 11g Helps Control Exponential Data Growth

    - by [email protected]
    The 2010 ESG annual customer survey is now available. As part of it, ESG interviewed 300 customers about their IT priorities and, unsurprisingly, "Manage Data Growth" is top of the list. Perhaps less self-evident is the proposed solution to target this prime concern: "Often overlooked because it is a database platform, Oracle Database 11g offers additional capabilities such as automatic storage management (ASM), advanced data compression, and data protection that make managing data growth much easier for organizations of any size." The paper goes on to discuss these capabilities and highlights their potential benefits. Oracle Database 11g Helps Control Exponential Database Growth - a worthwhile read for anyone having to deal with rapidly increasing amounts of data. Download your free copy here.

    Read the article

  • Does software testing methodology rely on flawed data?

    - by Konrad Rudolph
    It’s a well-known fact in software engineering that the cost of fixing a bug increases exponentially the later in development that bug is discovered. This is supported by data published in Code Complete and adapted in numerous other publications. However, it turns out that this data never existed. The data cited by Code Complete apparently does not show such a cost / development time correlation, and similar published tables only showed the correlation in some special cases and a flat curve in others (i.e. no increase in cost). Is there any independent data to corroborate or refute this? And if true (i.e. if there simply is no data to support this exponentially higher cost for late discovered bugs), how does this impact software development methodology?

    Read the article

  • Is data integrity possible without normalization?

    - by shuniar
    I am working on an application that requires the storage of location information such as city, state, zip code, latitude, and longitude. I would like to ensure that:

    - Location data is accurate: "Detroit, CA" is wrong (Detroit IS NOT in California); "Detroit, MI" is right (Detroit IS in Michigan)
    - Cities and states are spelled correctly: "California", not "Calefornia"; "Detroit", not "Detriot"
    - Cities and states are named consistently: valid values are "CA" and "Detroit"; invalid values are "Cali", "california", "DET", "d-town", and "The D"

    Also, since city/zip data is not guaranteed to be static, updating this data in a normalized fashion could be difficult, whereas it could be implemented as a de facto location if it is denormalized. A couple of thoughts come to mind:

    - A collection of reference tables that store a list of all states and the most common cities and zip codes, and that can grow over time. The application would search the database for an exact or similar match and recommend corrections.
    - Use some sort of service to validate the location data before it is stored in the database.

    Is it possible to fulfill these requirements without normalization, and if so, should I denormalize this data?
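
    As a sketch of the first idea (reference data plus a correction step), validation might look something like this; the reference contents and the suggestion step are hypothetical, and in practice the sets would be loaded from the reference tables described above:

        // Sketch: validate city/state input against in-memory reference data
        // before storing it. Reference rows are hard-coded here for brevity.
        const states = new Set(["CA", "MI"]);
        const citiesByState: Record<string, Set<string>> = {
          MI: new Set(["Detroit", "Ann Arbor"]),
          CA: new Set(["Los Angeles", "San Francisco"]),
        };

        function validateLocation(city: string, state: string): string[] {
          const errors: string[] = [];
          if (!states.has(state)) {
            errors.push(`Unknown state code: ${state}`);   // catches "Cali", "california"
          } else if (!citiesByState[state]?.has(city)) {
            // Exact miss: a real implementation could look for a close match
            // (e.g. edit distance) to suggest "Detroit" for "Detriot".
            errors.push(`City "${city}" not found in state ${state}`);
          }
          return errors;
        }

        console.log(validateLocation("Detroit", "CA"));  // reports the city/state mismatch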

    Read the article

  • How to enable SSIS as data source type on SQL Server Reporting Services SSRS 2008 R2

    When you create a data source in SSRS 2008 R2 (Nov CTP), you won't be able to get SSIS listed as a data source type. Applications that are already using SSIS as a data source, or that require it as one, are therefore stuck. Let's learn how to enable SSIS and get it listed back as a data source in SSRS 2008 R2.

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access to the data through restoring the files to another server: with Transparent Data Encryption in place, that requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains fully, and guides you through the process of setting it up.

    Read the article

  • Forbes Article on Big Data and Java Embedded Technology

    - by hinkmond
    Whoa, cool! Forbes magazine has an online article about what I've been blogging about all this time: Big Data and Java Embedded Technology, tying it all together with a big bow, connecting small devices to the data center. See: Billions of Java Embedded Devices. Here's a quote: "By the end of the decade we could see tens of billions of new Internet-connected devices... with billions of Internet-connected devices generating Big Data, [these] are the next big thing. ... That's why Oracle has put together an ecosystem of solutions for this new, Big Data-oriented device-to-data center world: secure, powerful, and adaptable embedded Java for intelligent devices, integrated middleware..." This is the next big thing. Java SE Embedded Technology is something to watch for in the new year. Start developing for it now to get a head start... Hinkmond

    Read the article

  • Google I/O 2010 - Data migration in App Engine

    Google I/O 2010 - Data migration in App Engine (App Engine 201), presented by Matthew Blain. Learn about the App Engine bulk loader and see an example of migrating data from an external data source into the App Engine datastore--and back out. Do you have data stored in a traditional, relational DB which you'd like to upload to App Engine? This session will teach you how. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 44:26.

    Read the article

  • Is there a recommended approach to handle saving data in response to within-site navigation without

    - by Carvell Fenton
    Hello all,

    Preamble to scope my question: I have a web app (or site, this is an internal LAN site) that uses jQuery and AJAX extensively to dynamically load the content section of the UI in the browser. A user navigates the app using a navigation menu. Clicking an item in the navigation menu makes an AJAX call to PHP, and PHP then returns the content that is used to populate the central content section.

    One of the pages served back by PHP has a table form, set up like a spreadsheet, that the user enters values into. This table is always kept in sync with data in the database. So, when the table is created, it is populated with the relevant database data. Then, when the user makes a change in a "cell", that change is immediately written back to the database, so the table and database are always in sync. This approach was taken to reassure users that the data they entered has been saved (long story...), and to spare them having to click a save button of some kind.

    So, this always-in-sync idea is great, except that a user can enter a value in a cell, not take focus out of the cell, and then take any number of actions that would cause that last value to be lost: e.g. navigate to another section of the site via the navigation menu, log out of the app, close the browser, etc.

    End of preamble, on to the issue: I initially thought that wasn't a problem, because I would just track what data was "dirty" or not saved, and then in the onunload event I would do a final write to the database. Herein lies the rub: because of my clever (or not so clever, not sure) use of AJAX and dynamic loading of the content section, the user never actually leaves the original URL, or page, when the above actions are taken, with the exception of closing the browser. Therefore, the onunload event does not fire, and I am back to losing the last data again.

    My question: is there a recommended way to handle figuring out if a person is navigating away from a "section" of your app when content is dynamically loaded this way? I can come up with a solution I think, that involves globals and tracking the currently viewed page, but I thought I would check if there might be a more elegant solution out there, or a change I could make in my design, that would make this work. Thanks in advance as always!
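
    One way to attack this, sketched below with today's browser APIs (which postdate this question) and hypothetical endpoint/element names: keep a buffer of unsaved cell edits and flush it both on menu clicks (the AJAX "navigation") and on real page exits.

        // Sketch: track dirty (unsaved) cell edits and flush them before any
        // action that swaps out the content section, plus on real page exits.
        const dirty = new Map<string, string>();      // cellId -> latest value

        function onCellInput(cellId: string, value: string): void {
          dirty.set(cellId, value);                   // normally saved on blur
        }

        function flushDirty(): void {
          if (dirty.size === 0) return;
          // sendBeacon survives navigation/unload, unlike an async XHR
          navigator.sendBeacon("/save-cells", JSON.stringify(Object.fromEntries(dirty)));
          dirty.clear();
        }

        // 1. AJAX navigation: flush before replacing the content section.
        document.querySelectorAll("#nav a").forEach((a) =>
          a.addEventListener("click", flushDirty)
        );
        // 2. Real exits (close tab/browser, hard navigation, logout redirect).
        window.addEventListener("pagehide", flushDirty);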

    Read the article

  • R: How can I use apply on rows of a data.frame and get out $column_name?

    - by John
    I'm trying to access $a using the following example:

        df <- data.frame(a=c("x","x","y","y"), b=c(1,2,3,4))
        > df
          a b
        1 x 1
        2 x 2
        3 y 3
        4 y 4

        test_fun <- function (data.frame_in) {
          print(data.frame_in[1])
        }

    I can now access $a if I use an index for the first column:

        > apply(df, 1, test_fun)
          a
        "x"
          a
        "x"
          a
        "y"
          a
        "y"
        [1] "x" "x" "y" "y"

    But I cannot access column $a with the $ notation; I get the error "$ operator is invalid for atomic vectors":

        test_fun_2 <- function (data.frame_in) {
          print(data.frame_in$a)
        }
        > apply(df, 1, test_fun_2)
        Error in data.frame_in$a : $ operator is invalid for atomic vectors

    Is this not possible?

    Read the article

  • Importing Data from Google Analytics

    - by Adam Tannon
    I am planning on building a web app with many different public-facing HTTP servers, each of which will have Google Analytics (GA) installed on it. I'd like to create a "dashboard" app that consolidates the GA data into one screen. I've been perusing the documentation for this so-called GA API, but I can't tell what the end result of the GA API is:

    - Does the GA API allow me to do exactly what I am looking for it to do? Or...
    - Does the GA API do something entirely different (like allow me to share my data with Google+ or something else weird)?

    Since an API can be used to CRUD any kind of data, I guess I'm asking which way the GA API goes: is it for querying (reading) data from 1+ server instances, or is it for modifying data on those servers or somewhere else? Thanks in advance!
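
    For what it's worth, the GA API is primarily a reporting/query API: you read metrics and dimensions per profile rather than modify anything. A rough sketch of that querying direction, with the endpoint shape assumed from the Core Reporting API of that era and an already-obtained OAuth token (both are assumptions):

        // Sketch: pull sessions/pageviews for each property's profile and
        // merge the responses into one dashboard payload (read-only).
        async function fetchProfileStats(profileId: string, token: string) {
          const params = new URLSearchParams({
            ids: `ga:${profileId}`,
            "start-date": "30daysAgo",
            "end-date": "today",
            metrics: "ga:sessions,ga:pageviews",
          });
          const res = await fetch(
            `https://www.googleapis.com/analytics/v3/data/ga?${params}`,
            { headers: { Authorization: `Bearer ${token}` } }
          );
          return res.json();
        }

        async function dashboard(profileIds: string[], token: string) {
          return Promise.all(profileIds.map((id) => fetchProfileStats(id, token)));
        }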

    Read the article

  • Using one data source across multiple views in Kendo UI SPA

    - by user3731783
    I am trying to build a Kendo UI SPA. I have two views. View 1 (appListView) shows Application Details in a grid, and view 2 (activityView) has a dropdown for application names and a grid that shows the activity for the selected application.

    As I am loading all the application details on the loading of view 1, I would like to re-use those details to populate the dropdown on view 2. Please see my code below. Everything works fine, but when I go to view 2 it makes a call to the service again to get the application details. I would like to use the existing data if it is already loaded; if the user comes to view 2 directly, then it should get the application data also. I am not sure what I am missing in the code.

    View Markup:

        <script id="appListView" type="text/x-kendo-template">
          <h3 data-bind="html: displayName"></h3>
          <div data-role="grid"
               data-editable="{'mode':'popup'}"
               data-bind="source: items"
               data-columns="[
                 {'field': 'Name'},
                 {'field': 'ContactEmail', 'title': 'Contact Email'}
               ]">
          </div>
        </script>

        <script id="activityView" type="text/x-kendo-template">
          <div>
            Activity for Application&nbsp;&nbsp;
            <input name="AppName" data-role="dropdownlist"
                   data-source="appsModel.items"
                   data-text-field="Name" data-value-field="Id"
                   data-option-label="Choose an application name"
                   style="width:250px;" />
          </div>
          <div id="Activities" data-role="grid"
               data-bind="source: items" data-auto-bind="false"
               data-columns="[
                 {'field': 'Domain', 'title': 'Domain'},
                 {'field': 'ActivityType', 'title': 'Activity Type'}
               ]">
          </div>
        </script>

    js with DataSource and View Model:

        //data sources
        var applications = new kendo.data.DataSource({
          schema: { model: { id: "Id" } },
          serverFiltering: true,
          transport: {
            read: { url: '/api/App', dataType: 'json', type: 'GET' }
          }
        });

        var activities = new kendo.data.DataSource({
          schema: { model: { id: "Id" } },
          transport: {
            read: { url: '/api/Activity', dataType: 'json', type: 'GET' },
            parameterMap: function (data, type) {
              if (type == "read") {
                return 'appId=' + $("#AppName").val();
              }
            }
          }
        });

        //Models
        var appsModel = kendo.observable({
          items: applications,
          displayName: 'My Applications'
        });

        var activityModel = kendo.observable({
          items: activities,
          onAppChange: function (t) {
            $("#Activities").data("kendoGrid").dataSource.read();
          },
          displayName: 'Application Activities'
        });

        //views
        var layout = new kendo.Layout("layout-template");
        var appListView = new kendo.View("appListView", { model: appsModel });
        var activityView = new kendo.View("activityView", { model: activityModel });

    Thank you for taking time to read this long question.
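
    One pattern that may help, sketched here (not verified against a specific Kendo UI version): share the single DataSource and populate it lazily with fetch(), guarded so that only the first view to need the data actually hits /api/App.

        // Sketch: lazy, shared loading of the applications DataSource so that
        // whichever view shows first triggers the one and only read.
        declare const kendo: any;            // Kendo UI global (assumed loaded)

        const applications = new kendo.data.DataSource({
          schema: { model: { id: "Id" } },
          transport: { read: { url: "/api/App", dataType: "json", type: "GET" } },
        });

        let appsLoaded = false;
        function ensureAppsLoaded(): Promise<void> {
          if (appsLoaded) return Promise.resolve();     // reuse cached items
          return new Promise((resolve) => {
            applications.fetch(() => { appsLoaded = true; resolve(); });
          });
        }

        // Call this from both views' init/show handlers; only the first call
        // actually issues the HTTP request.
        ensureAppsLoaded().then(() => {
          // bind the grid and the dropdown to `applications` here
        });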

    Read the article

  • Displaying a Sorted, Paged, and Filtered Grid of Data in ASP.NET MVC

    Over the past couple of months I've authored five articles on displaying a grid of data in an ASP.NET MVC application. The first article in the series focused on simply displaying data. This was followed by articles showing how to sort, page, and filter a grid of data. We then examined how to both sort and page a single grid of data. This article looks at how to add the final piece to the puzzle: we'll see how to combine sorting, paging and filtering when displaying data in a single grid. As with its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more! Read More >

    Read the article

  • How to implement Self-host WCF data services (http://localhost:1234/myDataService.svc/...)

    - by warmcold
    I have a project that needs to implement WCF Data Services (OData) to retrieve data from a control system (a .NET Framework application). The WCF Data Service needs to be hosted by the .NET application (no ASP.NET and no IIS). I have seen many WCF Data Service examples recently; they are all hosted by an ASP.NET application. I have also seen self-host (console application) examples, but they are for a WCF Service (not a WCF Data Service). Here is my question: is it possible to have a standalone .NET application host WCF Data Services (http://localhost:1234/mydataservice.svc/...)? If yes, can someone provide an example? Thanks.

    Read the article

  • Writing a Data Access Layer (DAL) for SQL Server

    In this tip, I am going to show you how you can create a Data Access Layer (to store, retrieve and manage data in a relational database) in ADO.NET. I will show how you can make it data-provider independent, so that you don't have to re-write your data access layer if the data storage source changes, and so that you can reuse it in other applications that you develop.

    Read the article

  • URGENT: Patches Needed to Prevent Data Corruption in Oracle Payments

    - by LuciaC
    Development is seeing a number of datafix bugs being logged related to PPR committing data in Payments (IBY) with the corresponding payments missing in Payables. These bugs have been investigated and fixed; however, customers need to proactively apply these fixes to prevent data corruption. There are two root-cause patches available for this case of partial data commit. It is critical that all R12/12.1 Payments customers apply the following two patches ASAP:

    a) Patch 11699958: R12: Error during PPR Leads to Incomplete Data Commit and Inconsistent Status (Doc ID 1338425.1)
    b) Patch 15867522: Confirmed PPR Batches Show Payment Initiated - Data Exist Only in IBY Tables (Doc ID 1506611.1)

    Read the article

  • Will using HTTPS prevent clients from accessing any confidential data inside JavaScript?

    - by saveing saving
    I am working on an ASP.NET MVC web application; in the view I wrote the following JavaScript, which calls an external web service:

        <script type="text/javascript">
          $(function() {
            $.getJSON("https://MyERPsystem.com/jw/web/json/hr/getsalary/byid?master_username=superadmin&password_hash=9449B5ABCFA9AFDA36B801351ED3DF66&employeeid=A200121",
              {
                //code goes here
              },
              function(data) {
                $.each(data.items, function(i, item) {
                  //code goes here
                });
              });
          });
        </script>

    So if the external web service implements HTTPS, does this mean that the master_username and password_hash inside the JavaScript cannot be seen by external users? Best Regards
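
    Worth noting: HTTPS only protects the request in transit; credentials embedded in client-side JavaScript are still visible to anyone who views the page source or the browser's network tab. A common remedy is to call the ERP service from your own server instead (in an ASP.NET MVC app this would be a controller action; the Node/TypeScript sketch below, with hypothetical route and environment-variable names, just illustrates the shape):

        // Sketch: server-side proxy so the credentials never reach the browser.
        import express from "express";

        const app = express();

        app.get("/api/salary/:employeeId", async (req, res) => {
          // Credentials come from server-side configuration only.
          const url =
            "https://MyERPsystem.com/jw/web/json/hr/getsalary/byid" +
            `?master_username=${process.env.ERP_USER}` +
            `&password_hash=${process.env.ERP_HASH}` +
            `&employeeid=${encodeURIComponent(req.params.employeeId)}`;

          const upstream = await fetch(url);   // Node 18+ global fetch
          res.json(await upstream.json());     // relay the ERP response
        });

        app.listen(3000);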

    Read the article

  • The Latest In Master Data Management

    Today master data continues to expand while data quality becomes more important. The challenge of clean data is not new, but the stakes and complexities are higher than ever. Fortunately, Oracle has a solution: Oracle Master Data Management. Hear from Pascal Laik, VP Oracle MDM Product Strategy, about the benefits of Master Data Management, the solutions that Oracle offers, why they are unique, and what benefits customers are deriving from Oracle MDM products. Learn about the latest product in the Oracle MDM family and where the Oracle MDM strategy is heading.

    Read the article

  • How to store and update data table on client side (iOS MMO)

    - by farseer2012
    I'm currently developing an iOS MMO game with cocos2d-x; the game depends on many data tables (Excel files) given by the designers. These tables contain data like how much gold/crystal it costs to upgrade a building (barracks, laboratory, etc.). We have about 10 tables, each with about 50 rows of data. My question is how to store those tables on the client side, and how to update them once they have been modified on the server side?

    My opinion: use SQLite to store the data on the client side; the server will parse the Excel files and send the data to the client in JSON format, then the client will parse the JSON string and save it to the SQLite file. Is there any better method? I find that some games store CSV files on the client side; how do they update those files? Could the server send a whole file directly to the client?
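
    One common refinement of that idea, sketched below with hypothetical endpoint and field names: stamp each table with a version number so the client can ask for only the tables that changed instead of re-downloading everything.

        // Sketch: version-stamped table sync. The client reports the versions
        // it has; the server replies with just the tables that are newer.
        interface TableData {
          name: string;      // e.g. "barracks_upgrade_costs"
          version: number;   // bumped whenever designers edit the Excel source
          rows: Record<string, string | number>[];
        }

        async function syncTables(
          localVersions: Record<string, number>
        ): Promise<TableData[]> {
          const res = await fetch("https://game.example.com/tables/sync", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(localVersions),   // { tableName: version, ... }
          });
          // The client then rewrites only these tables in its SQLite store.
          return res.json() as Promise<TableData[]>;
        }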

    Read the article

  • Get your picture on the screen at MIX11: Help me create a repository of sample data

    - by Laurent Bugnion
    Here is your chance to get your picture on the big screen during my MIX11 presentation in April this year. I need to create a small repository of sample data for my demos. So instead of tapping into my imagination and creating dummy users (or reusing past information I already used in other demos), I thought I would appeal to the amazing community: send me an email with the following information, and I will include the first 30 users in my sample data repository and use your info in my demo.

    - First Name
    - Last Name
    - Date of birth
    - Picture
    - Link to Facebook profile (optional)

    Disclaimer: The data will only be running locally on my hard drive. The demos will, however, be filmed and the videos made public. By providing this information, you explicitly consent to this data being used in demos at MIX11 and possibly in following conferences. The data will only be used for demo purposes. Thanks for your help!!

    Laurent Bugnion (GalaSoft)

    Read the article

  • Using VBA to model data in Autodesk Inventor?

    - by user108478
    I have a close friend who is using a specific device that records the dimensions of an object as it is eroded and outputs the dimensional data to an Excel sheet. The object is spherical in nature but is eroded from the top and bottom, so the shape is constantly changing, and a single formula for surface area and volume would not work. This is where Inventor comes in. My friend can plug the dimensional data into Inventor and it immediately returns the surface area and volume. The erosion process takes several minutes to complete and records data at very short intervals, so it would be very arduous to plug in the data thousands of times. Since Inventor supports macros and VBA, is there a way to feed the data into Inventor and output the results to another spreadsheet? Any suggestions would be appreciated.

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Validating Data

    To continue our series, let's look at data validation in our business applications. Updating data is great, but when you enable data updates you often need to check the data to ensure it is valid. RIA Services has a clean, prescriptive pattern for handling this. First, let's look at what you get for free: the value for any field entered has to be valid for the range of that data type. For example, you never need to write code to ensure someone didn't type "forty-two" into...

    Read the article

  • How to deal with a ten-day debugging session? [on hold]

    - by smonff
    Ten days ago I fixed a bug in a large application, and the hotfix caused some data to disappear from the user's point of view. The data is not deleted, but has been set to a hidden status. It should be possible to get the data back, but it is proving hard: I've already spent ten days understanding and reproducing the problem (mostly with complex SQL queries, but sometimes it is necessary to update the database to test the application logic). Is ten days a normal amount of time for this kind of problem? Should we keep going and retrieve the data, or should we tell these users "sorry for the loss, but your data has disappeared"? Any advice on when to stop searching for a way to solve the issue?

    Read the article
