Search Results

Search found 68762 results on 2751 pages for 'data transfer objects'.


  • SQLAuthority News – Best Practices for Data Warehousing with SQL Server 2008 R2

    - by pinaldave
    An integral part of any BI system is the data warehouse—a central repository of data that is regularly refreshed from the source systems. The new data is transferred at regular intervals by extract, transform, and load (ETL) processes. This whitepaper covers best practices for data warehousing, discussing ETL, analysis, and reporting as well as the relational database, with its main focus on ‘architecture’ and ‘performance’. Download Best Practices for Data Warehousing with SQL Server 2008 R2 Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Data Warehousing, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • A big move is coming for the Oracle data mining interface, Oracle Data Mining

    - by Fekete Zoltán
    As we could already see from the teasers in last autumn's Oracle OpenWorld news and presentations, Oracle is preparing a big move on the data mining front (Oracle Data Mining), namely by extending the graphical interface of its highly usable data mining engine. If you look closely at that latter link, the existing graphical interface already runs under the name Oracle Data Miner Classic. So how can Oracle Data Mining be used? - Oracle Data Miner (a GUI freely downloadable from OTN) - from Java and PL/SQL, via the Oracle Data Mining JDeveloper and SQL Developer Extensions - from an Excel front end, via the Oracle Spreadsheet Add-In for Predictive Analytics - ODM Connector for mySAP BW. Oracle Data Mining technical information.

    Read the article

  • Google I/O 2012 - Big Data: Turning Your Data Problem Into a Competitive Advantage

    Presented by Ju-kay Kwek and Navneet Joneja. Can businesses get practical value from web-scale data without building proprietary web-scale infrastructure? This session will explore how new Google data services can be used to solve key data storage, transformation and analysis challenges. We will look at concrete case studies demonstrating how real life businesses have successfully used these solutions to turn data into a competitive business asset. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Duration: 52:39.

    Read the article

  • Data-Driven SOA with Oracle Data Integrator

    - by Irem Radzik
    By Mike Eisterer. Data integration is more than simply moving data in bulk or in real time; it is also about unifying information for improved business agility and integrating it into today’s service-oriented architectures. SOA enables organizations to easily define services which may then be discovered and leveraged by varying consumers. These consumers may be applications, customer-facing portals, or complex business rules that assemble services to automate processes. Data as a foundational service provider is a key component of today’s successful SOA implementations. Oracle offers the broadest and most integrated portfolio of products to help you define, organize, orchestrate and consume data services. If you are attending Oracle OpenWorld next week, you will have ample opportunity to see the latest Oracle Data Integrator live in action and work with it yourself in two offered hands-on labs. Visit the hands-on lab to gain experience firsthand: Oracle Data Integrator and Oracle SOA Suite: Hands-on Lab (HOL10480), Wed Oct 3rd 11:45 AM, Marriott Marquis, Salon 1/2. To learn more about Oracle Data Integrator, please visit our introductory hands-on lab: Introduction to Oracle Data Integrator (HOL10481), Mon Oct 1st 3:15 PM, Marriott Marquis, Salon 1/2. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • iPhone: save log files and transfer to PC

    - by Dhanesh
    Hi, in my iPhone application there is a requirement to save a few log files on the phone (the files are generated at run time). Then I have to copy/transfer the files to the PC. Can anyone suggest the best way to do this? Please consider this a newbie question. Thanks in advance. ~Dhanesh

    Read the article

  • File transfer through SQL Server connection

    - by wasim
    I have a text file sitting on a client machine and want to move it to the database server (MS SQL 2008), but I don't have any access to the server except through the SQL Server client. Can I transfer this file to the server using the SQL client connection?
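
    One common workaround in this situation is to push the file's bytes over the existing connection into a staging table on the server, since a plain client connection cannot write to the server's file system directly. A minimal C# sketch of that idea follows; the dbo.FileStaging table and its columns are hypothetical and would need to be created first, and you would still need someone with file-system access (or a tool such as bcp) to turn the stored row back into a file on the server if that is required.

        using System.Data.SqlClient;
        using System.IO;

        class FileUploader
        {
            // Pushes a local file into a hypothetical dbo.FileStaging(Name nvarchar(260), Content varbinary(max)) table.
            static void Upload(string connectionString, string localPath)
            {
                byte[] content = File.ReadAllBytes(localPath);   // read the client-side file

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "INSERT INTO dbo.FileStaging (Name, Content) VALUES (@name, @content)", connection))
                {
                    command.Parameters.AddWithValue("@name", Path.GetFileName(localPath));
                    command.Parameters.AddWithValue("@content", content);

                    connection.Open();
                    command.ExecuteNonQuery();                   // the file contents now live in a table on the server
                }
            }
        }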

    Read the article

  • Subsetting a data frame in a function using another data frame as parameter

    - by lecodesportif
    I would like to submit a data frame to a function and use it to subset another data frame. This is the basic data frame:

        foo <- data.frame(var1= c('1', '1', '1', '2', '2', '3'), var2=c('A', 'A', 'B', 'B', 'C', 'C'))

    I use the following function to find out the frequencies of var2 for specified values of var1:

        foobar <- function(x, y, z){
          a <- subset(x, (x$var1 == y))
          b <- subset(a, (a$var2 == z))
          n=nrow(b)
          return(n)
        }

    Examples:

        foobar(foo, 1, "A") # returns 2
        foobar(foo, 1, "B") # returns 1
        foobar(foo, 3, "C") # returns 1

    This works. But now I want to submit a data frame of values to foobar. Instead of the above examples, I would like to submit df to foobar and get the same results as above (2, 1, 1):

        df <- data.frame(var1=c('1','1','3'), var2=c("A", "B", "C"))

    When I change foobar to accept two arguments, like foobar(foo, df), and use y[, c(var1)] and y[, c(var2)] instead of the two parameters y and z, it still doesn't work. What is the right way to do this?

    Read the article

  • What are pinned objects?

    - by sagie
    Hi. I am trying to find a memory leak using ANTS Memory Profiler, and I've encountered a new term: pinned objects. Can someone give me a good and simple explanation of what these objects are, how I can pin/unpin objects, and how to detect who pinned them? Thanks
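
    For context, a pinned object is one the .NET garbage collector is not allowed to move in memory, typically because native code holds a raw pointer to it; heavy or long-lived pinning fragments the managed heap, which is why memory profilers flag it. A minimal sketch of how code usually pins an object explicitly (the buffer and the native call are placeholders):

        using System;
        using System.Runtime.InteropServices;

        class PinningExample
        {
            static void Main()
            {
                byte[] buffer = new byte[4096];

                // Pin the array so the GC cannot relocate it while unmanaged code uses its address.
                GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
                try
                {
                    IntPtr address = handle.AddrOfPinnedObject();
                    // ... pass 'address' to native code here ...
                }
                finally
                {
                    handle.Free();   // unpin as soon as possible to avoid heap fragmentation
                }

                // The C# 'fixed' statement pins in the same way, scoped to its block.
            }
        }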

    Read the article

  • Out-Of-Memory while doing Core Data migration

    - by Kamchatka
    Hello, I'm migrating a Core Data model between two versions of an application. I was storing binary data as blobs in the previous version and I want to take it out of the blobs for performance. My issue is that during the migration Core Data seems to load everything into memory, which leads to low-memory warnings and then to my app being killed. Apple's documentation suggests the following: http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/vmCustomizingTheProcess.html#//apple_ref/doc/uid/TP40005510-SW9 However, that technique seems to rely on applying different mappings to the large objects. In my case, all the objects are basically the same and the same mapping has to be applied to each of them, so I don't see how I could apply their technique. How should I handle a migration with very large objects?

    Read the article

  • Managing Data Dependecies of Java Classes that Load Data from the Classpath at Runtime

    - by Martin Potthast
    What is the simplest way to manage dependencies of Java classes on data files present in the classpath? More specifically: How should data dependencies be annotated? Perhaps using Java annotations (e.g., @Data)? Or rather some build entries in a build script or a properties file? Is there a build tool that integrates and evaluates such information (Ant, Scons, ...)? Do you have examples? Consider the following scenario: A few lines of Ant create a Jar from my sources that includes everything found on the classpath. Then jarjar is used to remove all .class files that are not necessary to execute, say, class Foo. The problem is that all the data files that class Bar depends upon are still there in the Jar. The ideal deployment script, however, would recognize that the data files on which only class Bar depends can be removed, while data files on which class Foo depends must be retained. Any hints?

    Read the article

  • Best way to implement user-powered data validation

    - by vegetables
    I run a product recommendation engine and I'm hitting a few snags. I'm looking to see if anyone has any recommendations on what I should do to minimize these issues. Here's how the site works: Users come to the site and are presented with product recommendations based on some criteria. If a user knows of a product that is not in our system, they can add it by providing the product name and manufacturer. We take that information, and: (1) hit one API to gather all the product meta-data (and to validate the product spelling, etc.); if the product is not in this first API, we do not allow it in our system. (2) Use the information from step 1 to hit another API for pricing information (gathered from many places online). For the sake of discussion, assume that I am searching both APIs in the most efficient/successful manner possible. For the most part, this works very well. I'd say ~80% of our data is perfectly accurate, but there are a few issues: Sometimes the pricing API (step 2) doesn't have any information for the product. The way the pricing API is built, it will always return something (theoretically, the closest possible match), and there's no guarantee that the product name is spelled exactly the same way in both APIs, so there's no automated way of knowing if it's the right product. When the pricing API finds the right product, occasionally it has outdated, or even invalid, pricing data (e.g. if it screen-scraped the wrong price from a website). Since the site was fairly small at first, I was able to manually verify every product that was added to the website. However, the site has grown to the point where this is taking several hours per day, and is just not an efficient use of my time. So, my question is: Aside from hiring someone (or getting an intern) to validate all the data manually, what would be the best system for letting my user base self-manage the data? Specifically, how can I allow users to edit the data while minimizing the risk of someone ambushing my website or accidentally setting the data incorrectly?

    Read the article

  • data handling with javascript

    - by Vincent Warmerdam
    Python has a very neat package called pandas which allows for quick data transformation: tables, aggregation, that sort of thing. A lot of this type of functionality can also be found in Python's itertools module. The plyr package in R is also very similar. Usually one would use this functionality to produce a table which is later visualized with a plot. I am personally very fond of d3, and I would like to allow the user to first indicate what type of data aggregation he wants on the dataset before it is visualized. The visualisation in question involves making a heatmap where the user gets to select the size of the bins of the heatmap beforehand (I want d3 to project this through Leaflet). I want to visually select the ideal size of the bins for the heatmap. The way I work now is that I take the dataset, aggregate it with Python and then manually load it into d3. This is a process that takes a lot of human effort, and I was wondering if the data aggregation can be done through the JavaScript of the browser. I couldn't find a package for JavaScript specifically built for data, suggesting (to me) that this is a bad idea and that one should not use JavaScript for the data handling. Is there a good module/package for JavaScript to handle data aggregation? Is it a good or bad idea to do the data aggregation in JavaScript (performance-wise)?

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently. Other test functions represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with the NUnit parametrized test case (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:

        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest

    This will be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?
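
    For reference, a minimal sketch of the TestCaseSource approach described above, shown with NUnit 3-style syntax; the TestData folder, file pattern, and test body are placeholders, and SetName only controls how each generated case is labeled in the runner, not the parent/child grouping the question is really asking about.

        using System.Collections.Generic;
        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class LegacyFileTests
        {
            // Placeholder: enumerate whatever input data files the legacy software consumes.
            public static IEnumerable<TestCaseData> InputDataFiles()
            {
                foreach (string path in Directory.GetFiles("TestData", "*.dat"))
                {
                    yield return new TestCaseData(path).SetName("File " + Path.GetFileName(path));
                }
            }

            [TestCaseSource(nameof(InputDataFiles))]
            public void CheckFileTypeTest(string inputFile)
            {
                Assert.That(File.Exists(inputFile), "input file should exist before the legacy tool is invoked");
                // ... invoke the legacy software and assert on its output here ...
            }
        }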

    Read the article

  • Test Data in a Distributed System

    - by Davin Tryon
    A question that has been vexing me lately is how to effectively test (end-to-end) features in a distributed system; particularly, how to effectively manage (through time) test data for feature testing. The system in question is a typical SOA setup. The composition is done in JavaScript with calls to several REST APIs. Each service is built as an independent block. Each service has some kind of persistent storage (SQL Server in most cases). The main issue at the moment is how to approach test data when testing end-to-end features. Functional end-to-end testing occurs through the UI, and it is therefore necessary for test data to be set up before the test run (this could be manual or automated testing). As is typical in a distributed system, identifiers from one service are used as a link in another service. So, some level of synchronization needs to be present in the data to test effectively. What is the best way to manage and set up this data after a successful deployment to a test environment? For example, is it better to manage this test data inside each service? Or package it together with the testing suite? Does that testing suite exist as a separate project? I'm interested in design guidance about how to store and manage this test data as the application features evolve.

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance
    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail. Data continuity checklist:
    - Disaster recovery plan/policy
    - Backups
    - Redundancy
    - Trained staff

    Business Continuity Policies
    In order to protect data in case of an emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of the existing data and/or limit the amount of data that is not continuous. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or parts of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policies that IT must maintain redundant hardware in case of any hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy to ensure that all parties involved are knowledgeable enough to execute the recovery plan.

    Business Continuity Procedures
    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow:
    - Back up data and test the backups to ensure that they work
    - Hire knowledgeable and trainable staff
    - Offer training on new and existing systems
    - Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    - Maintain redundancy for all data and critical business functionality

    Read the article

  • How to choose how to store data?

    - by Eldros
    Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime. - Chinese Proverb. I could ask what kind of data storage I should use for my actual project, but I want to learn to fish, so I don't need to ask for a fish each time I begin a new project. Until now I have used two methods to store data in my non-game projects: XML files and relational databases. I know that there are also other kinds of databases, of the NoSQL kind. However, I wouldn't know if there are more choices available to me, or how to choose in the first place, aside from arbitrarily picking one. So the question is the following: How should I choose the kind of data storage for a game project? I would be interested in the following criteria when choosing:
    - The size of the project
    - The platform targeted by the game
    - The complexity of the data structure
    - (Added) Portability of data among many projects
    - (Added) How often the data should be accessed
    - (Added) Multiple types of data for the same application
    - Any other point you think is of interest when deciding what to use
    EDIT: I know about "Would it be better to use XML/JSON/Text or a database to store game content?", but I thought it didn't address exactly my point. Now, if I am wrong, I would gladly be shown the error in my ways.

    Read the article

  • Overly accessible and incredibly resource hungry relationships between business objects. How can I f

    - by Mike
    Hi. Firstly, this might seem like a long question. I don't think it is... the code is just an overview of what I'm currently doing. It doesn't feel right, so I am looking for constructive criticism, warnings about pitfalls, and suggestions for what I can do. I have a database with business objects. I need to access properties of parent objects. I need to maintain some sort of state through business objects. If you look at the classes, I don't think the access modifiers are right, and I don't think it's structured very well. Most of the relationships are modelled with public properties. SubAccount.Account.User.ID <-- all of those are public. Is there a better way to model a relationship between classes than this, so it's not so "public"? The other part of this question is about resources: if I were to make a User.GetUserList() function that returns a List, and I had 9000 users, then when I call the GetUsers method it will make 9000 User objects, and inside that it will make 9000 new AccountCollection objects. What can I do to make this project not so resource hungry? Please find the code below and rip it to shreds.

        public class User
        {
            public string ID { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string PhoneNo { get; set; }
            public AccountCollection accounts { get; set; }

            public User()
            {
                accounts = new AccountCollection(this);
            }

            public static List<User> GetUsers()
            {
                return Data.GetUsers();
            }
        }

        public class AccountCollection : IEnumerable<Account>
        {
            private User user;

            public AccountCollection(User user)
            {
                this.user = user;
            }

            public IEnumerable<Account> GetEnumerator()
            {
                return Data.GetAccounts(user);
            }
        }

        public class Account
        {
            public User User { get; set; } // This is public so that the subaccount can access its Account's User's ID
            public int ID;
            public string Name;

            public Account(User user)
            {
                this.User = user;
            }
        }

        public class SubAccountCollection : IEnumerable<SubAccount>
        {
            public Account account { get; set; }

            public SubAccountCollection(Account account)
            {
                this.account = account;
            }

            public IEnumerable<SubAccount> GetEnumerator()
            {
                return Data.GetSubAccounts(account);
            }
        }

        public class SubAccount
        {
            public Account account { get; set; } // this is public so that my Data class can access the account, to get the account's user's ID.
            public int ID;
            public string Name;

            public SubAccount(Account account)
            {
                this.account = account;
            }

            public Report GenerateReport()
            {
                return Data.GetReport(this);
            }
        }

        public static class Data
        {
            public static List<SubAccount> GetSubAccounts(Account account)
            {
                using (var dc = new databaseDataContext())
                {
                    List<SubAccount> query = (from a in dc.Accounts
                        where a.UserID == account.User.ID // this is getting the account's user's ID
                        select new SubAccount(account) { ID = a.ID, Name = a.Name }).ToList();
                    return query;
                }
            }

            public static List<Account> GetAccounts(User user)
            {
                using (var dc = new databaseDataContext())
                {
                    List<Account> query = (from a in dc.Accounts
                        where a.UserID == user.ID // this is getting the user's ID
                        select new Account(user) { ID = a.ID, Name = a.Name }).ToList();
                    return query;
                }
            }

            public static Report GetReport(SubAccount subAccount)
            {
                Report report = new Report();
                // database access code here
                // need to get the user id of the subaccount's account for data querying.
                // i've got the subaccount, but how should i get the user id.
                // i would imagine something like this: int accountID = subAccount.Account.User.ID;
                // but this would require the subaccount's Account property to be public.
                // i do not want this to be accessible from my other project (UI).
                // reading up on internal seems to do the trick, but within my code it still feels
                // public. I could restrict the property to read, and only private set.
                return report;
            }

            public static List<User> GetUsers()
            {
                using (var dc = new databaseDataContext())
                {
                    var query = (from u in dc.Users
                        select new User
                        {
                            ID = u.ID,
                            FirstName = u.FirstName,
                            LastName = u.LastName,
                            PhoneNo = u.PhoneNo
                        }).ToList();
                    return query;
                }
            }
        }
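
    As a side note on the "not so public" part of the question: one option the comments above already hint at is to keep the property readable everywhere but make its setter internal (or private), so only the assembly that owns the business objects can wire up the relationship. A tiny, hypothetical sketch of that idea:

        public class SubAccount
        {
            // Readable from any project, but only code inside this assembly (e.g. the Data class)
            // can assign it, so the UI project cannot rewire the relationship.
            public Account Account { get; internal set; }

            internal SubAccount(Account account)
            {
                Account = account;
            }
        }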

    Read the article

  • Fastest way to compress a database or .bak file and transfer it

    - by Nai
    As per the question title. I wonder if there are special programmes or commands that make zipping up a .bak file and transferring it super quick. I read about xp_cmdshell here but I'm not sure about the speed. My .bak file is about 12 gigs at the moment. Related to this is the possibility of using Red Gate's SQL Data Compare to just transfer the differential data across the network pipeline, but I have never used SQL Data Compare before and I'm not sure how it goes about doing INSERTs on tables with primary keys and such. Also, not sure about the speed. Does anyone have any experience with this programme or similar programmes? Cheers!
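
    Two things may be worth noting here. SQL Server 2008 Enterprise (and SQL Server 2008 R2 Standard and up) can compress the backup itself with BACKUP DATABASE ... WITH COMPRESSION, which usually beats zipping a full-size .bak afterwards. If the .bak already exists and has to be zipped client-side, a minimal C# sketch using the framework's built-in ZipFile class (available from .NET 4.5) might look like the following; the paths are placeholders:

        using System.IO.Compression;   // requires a reference to System.IO.Compression.FileSystem

        class BackupZipper
        {
            static void Main()
            {
                const string bakPath = @"D:\Backups\MyDatabase.bak"; // placeholder path to the existing backup
                const string zipPath = @"D:\Backups\MyDatabase.zip"; // placeholder destination archive

                using (ZipArchive archive = ZipFile.Open(zipPath, ZipArchiveMode.Create))
                {
                    // Optimal trades CPU time for a smaller file to push across the network.
                    archive.CreateEntryFromFile(bakPath, "MyDatabase.bak", CompressionLevel.Optimal);
                }
            }
        }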

    Read the article

  • Data Masking Pack 12.1.0.3 Certified with E-Business Suite 12.1.3

    - by Elke Phelps (Oracle Development)
    I'm pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12.1.0.3. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments. You may scramble data in E-Business Suite cloned environments with EM 12.1.0.3 using the following template:
    - E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 18462641)

    What does data masking do in E-Business Suite environments? Application data masking does the following:
    - De-identify the data: scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
    - Mask sensitive data: mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
    - Maintain data validity: provide a fully functional application.

    How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that allows the application to continue to function. The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.

    Is there a charge for this? Yes. You must purchase licenses for the Oracle Data Masking Pack to use the Oracle E-Business Suite 12.1.3 template. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.

    References: Additional details and requirements are provided in the following:
    - Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1 Data Masking Tool (My Oracle Support Note 1481916.1)
    - Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2)

    Related Articles:
    - Scrambling Sensitive Data in E-Business Suite
    - E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    Read the article

  • Oracle Data Warehouse and Big Data Magazine MAY Edition for Customers + Partners

    - by KLaker
    The latest edition of our monthly data warehouse and big data magazine for Oracle customers and partners is now available. The content for this magazine is taken from the various data warehouse and big data Oracle product management blogs, Oracle press releases, videos posted on Oracle Media Network and Oracle Facebook pages. Click here to view the May Edition. Please share this link to our magazine with your customers and partners: http://flip.it/fKOUS. This magazine is optimized for display on tablets and smartphones using the Flipboard app, which is available from the Apple App Store and Google Play store.

    Read the article

  • Why do transfer objects need to implement Serializable?

    - by smaye81
    I realized today that I have blindly just followed this requirement for years without ever really asking why. Today, I ran across a NotSerializableException with a model object I created from scratch and I realized enough is enough. I was told this was because of session replication between load-balanced servers, but I know I've seen other objects at session scope that do not implement Serializable. Is this the real reason?

    Read the article

  • Version control and data provenance in charts, slides, and marketing materials that derive from code output

    - by EMS
    I develop as part of a small team that mostly does research and statistics stuff. But from the output of our code, other teams often create promotional materials, slides, presentations, etc. We run into a big problem because the marketing team (non-programmers) tend to use Excel, Adobe products, or other tools to carry out their work, and just want easy-to-use data formats from us. This leads to data provenance problems. We see email chains with attachments from 6 months ago and someone is saying "Hey, who generated this data. Can you generate more of it with the recent 6 months of results added in?" I want to help the other teams effectively use version control (my team uses it reasonably well for the code, but every other team classically comes up with many excuses to avoid it). For version controlling a software project where the participants are coders, I have some reasonable understanding of best practices and what to do. But for getting a team of marketing professionals to version control marketing materials and associate metadata about the software used to generate the data for the charts, I'm a bit at a loss. Some of the goals I'd like to achieve: Data that supported a material should never be associated with a person. As in, it should never be the case that someone says "Hey Person XYZ, I see you sent me this data as an attachment 6 months ago, can you update it for me?" Rather, data should be associated with the code and code-version of any code that was used to get it, and perhaps a team of many people who may maintain that code. Then references for data updates are about executing a specific piece of code, with a known version number. I'd like this to be a process that works easily with the tech that the marketing team already uses (e.g. Excel files, Adobe file, whatever). I don't want to burden them with needing to learn a bunch of new stuff just to use version control. They are capable folks, so learning something is fine. Ideally they could use our existing version control framework, but there are some issues around that. I think knowing some general best practices will be enough though, and I can handle patching that into the way our stuff works now. Are there any goals I am failing to think about? What are the time-tested ways to do something like this?

    Read the article

  • What does it mean to treat data as an asset?

    What does it mean to treat data as an asset? When considering this concept, we must define what data is and how it can be considered an asset. Data can easily be defined as a collection of stored truths that are open to interpretation and manipulation. Expanding on this definition, data can be viewed as a set of captured facts, measurements, and ideas used to make decisions. Furthermore, InvestorsWords.com defines an asset as any item of economic value owned by an individual or corporation. Now let’s apply this definition of asset to our definition of data, and ask the following question. Can facts, measurements and ideas be items that are of economic value owned by an individual or corporation? The obvious answer is yes; data can be bought and sold like commodities or analyzed to make smarter business decisions. We can look at the economic value of data in one of two ways. First, data can be sold as a commodity that can take the form of goods like eBooks, training, music, movies, and so on. Customers are willing to pay to gain access to this data for their consumption. This directly implies that there is an economic value for data in the form of a commodity, because customers see a value in obtaining it. Second, data can be used to make smarter business decisions that allow companies to become more profitable and/or reduce their potential for risk in regard to how they operate. In the past I have worked at companies where we had to analyze previous sales activities in conjunction with current activities to determine how the company was performing for the quarter. In addition, trends can be formulated from existing data that allow companies to forecast, so that they can make strategic business decisions based on sound forecasted data. Companies that truly value their data are constantly trying to grow and upgrade their data and supporting applications, because it is the lifeblood of a company. If we look at an eBook retailer, for example, imagine if they lost all of their data. They would in essence be forced out of business because they would have nothing to sell. In turn, if we look at a company that was using data to facilitate better decision-making processes and it lost all of its data, then it could be losing potential revenue and/or increasing its losses by making important business decisions virtually in the dark, compared to when they were made on solid data.

    Read the article

  • Using java to send/receive different objects through UDP

    - by AAA
    Hello everyone, I am writing a program in Java where two or more machines communicate using UDP. My application sends objects after serializing them over the network to the other machine, where they are deserialized and dealt with. I have been successful in sending one kind of object so far. My problem is that I want the sender to be able to send different kinds of objects, and for the receiver to be able to receive them and cast them again to their appropriate types. However, since UDP allocates a byte buffer and then receives the data into that buffer, it is impossible to cast or detect the type of the received object, as different objects have different sizes. Is there a way that I can send different kinds of objects using UDP and then receive them at the other end? (I don't ask for code here, just some ideas) Thanks

    Read the article

  • Domain transfer and New Hosting Management

    - by Anubhav Saini
    I wanted to migrate from my older registrar to GoDaddy, mainly because the current registrar/hosting provider doesn't support .NET. My old registrar gave me control over the domain and hosting account, so basically I have everything I would need (I know the theory only). I applied for a transfer of the domain, bought a hosting package from GoDaddy and uploaded the new web site. So I am waiting for the domain transfer, and it tells me that I have to wait 5-7 days for approval. Okay. But today my old registrar told/taunted me that I really didn't need to apply for a transfer. What could I have done differently? My domain expires on the 15th. I don't know much about how all of this really works, but I am guessing he meant, "you should have waited 15 days and let it expire, after which you should buy the domain as it has expired". Is that really so (I doubt it), or are there other ways I could have got the same result without transferring the domain (like changing DNS entries)? I have read just about all of the documentation available on Namecheap/GoDaddy/Whois about domain transfers, but maybe because I am new to this it is all confusing to me. I would also like to know what to do with DNS settings after the transfer succeeds. I want to kill the old website. So, which nameserver settings do I need to change: the new one, the old one, or both? I have the old host + old domain registrar + old working site on one hand, and on the other hand the new site + pending domain transfer + new DNS settings.

    Read the article
