Search Results

Search found 60085 results on 2404 pages for 'data flow diagrams'.

Page 12/2404

  • Integrating Data Mining into your BI Solution (Presentation)

    I recently gave a live meeting presentation to the UK User Group on Integrating Data Mining into your BI Solution. In it I talk about and demo ways of using your data mining models inside Integration Services, Analysis Services and Reporting Services. This is the first in a series of presentations I will be doing for the UG as I try to get the word out that Data Mining can be for the masses. You can download my deck and my live meeting recording from here.

    Read the article

  • Inside Sweden’s Nuclear Bunker Turned Data Center

    - by Jason Fitzpatrick
    A data center inside a decommissioned nuclear bunker is interesting enough, but one that looks as futuristic and awesome as the center under Stockholm begs to be seen. A hundred feet under the city of Stockholm is a decommissioned nuclear bunker that the government had previously leased out intermittently for various events, but it was never put to serious or extended use. Not until, that is, Jon Karlung discovered the location and brought his vision of an ultra-modern, stylish, and secure data center to life. The passage from Wired's write-up of their photo tour that best encapsulates the feel of the bunker is: Most often data centers are built in boxy warehouses, so Bahnhof stands out as perhaps the world's most stylish. In fact, it inspired Cisco IT Architect Douglas Alger to write a book on the world's best-looking data centers. "The idea that people were sitting in a design meeting and said, 'what we need for our data center is waterfalls,' that must have been a very fascinating discussion," Alger says. Hit up the link below for the full photo tour. Deep Inside the James Bond Villain Lair That Actually Exists [Wired]

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures:
    - A persistent data structure that is described using DDL and implemented as RDBMS tables and columns.
    - A set of domain objects that consist primarily of data structures, usually combined with business-rule-level logic, implemented in a programming language such as Java.
    - A set of service-layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language.
    - UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures.
    But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data-tracking spreadsheet that listed all of the data structures and walked the relationships from tier to tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent:
    - database table and column data structures
    - domain object data structures
    - service-layer interface methods and parameter data structures
    - screen & UI component data structures
    A sketch of the kind of cross-tier registry such a tool might maintain appears below.
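
    The following minimal Java sketch is hypothetical (it is not an existing tool): each named data structure maps, per tier, to the artifacts that depend on it, so changing one structure yields the impact list asked for above. Every name in it is an illustrative placeholder.

        import java.util.*;

        /** Hypothetical cross-tier dependency registry (sketch, not a real tool). */
        public class DependencyRegistry {
            public enum Tier { DATABASE, DOMAIN, SERVICE, UI }

            // structure name -> tier -> dependent artifact names
            private final Map<String, EnumMap<Tier, Set<String>>> deps = new HashMap<>();

            public void register(String structure, Tier tier, String dependent) {
                deps.computeIfAbsent(structure, k -> new EnumMap<>(Tier.class))
                    .computeIfAbsent(tier, t -> new TreeSet<>())
                    .add(dependent);
            }

            /** Everything, in every tier, that depends on the given structure. */
            public Map<Tier, Set<String>> dependentsOf(String structure) {
                return deps.getOrDefault(structure, new EnumMap<>(Tier.class));
            }

            public static void main(String[] args) {
                DependencyRegistry r = new DependencyRegistry();
                r.register("Customer", Tier.DATABASE, "CUSTOMER.EMAIL");
                r.register("Customer", Tier.DOMAIN, "com.example.Customer#email");
                r.register("Customer", Tier.SERVICE, "CustomerService.updateEmail(String)");
                r.register("Customer", Tier.UI, "customer-edit screen: email field");
                System.out.println(r.dependentsOf("Customer"));
            }
        }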

    Read the article

  • Big Data Accelerator

    - by Jean-Pierre Dijcks
    For everyone who does not regularly listen to earnings calls, Oracle's Q4 call was interesting (as it mostly is). One of the announcements in the call was the Big Data Accelerator from Oracle (Seeking Alpha link here; the quote below is slightly tweaked for correctness): "The big data accelerator includes some of the standard open source software, HDFS, the file system and a number of other pieces, but also some Oracle components that we think can dramatically speed up the entire map-reduce process. And will be particularly attractive to Java programmers [...]. There are some interesting applications they do, ETL is one. Log processing is another. We're going to have a lot of those features, functions and pre-built applications in our big data accelerator." Not much else we can say right now; more on this (and Big Data in general) at Openworld!

    Read the article

  • Dokuwiki: Moving just the data directory to another server

    - by amit
    I have installed DokuWiki on IIS7. As per my team's requirement, we have to move just the data directory to another server location, e.g.:
    - IIS7-installed DokuWiki location: C:\inetpub\wwwroot\dokuwiki\conf
    - desired data location on the other server: U:\Archive\LP_Archive\SH_Systems\DEV01\dokuwiki
    To do that I followed the pointers on dokuwiki install iis7. As per the above link, I tried adding IUSR to the data folder permissions, but it's failing due to my insufficient privileges. And without that IUSR permission set on the data folder, I am getting the error "The datadir ('pages') at is not found, isn't accessible or writable". Is there any other way to make it work? Is there any other account than IUSR I can use?

    Read the article

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days; in particular, the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward.

    In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss with Dan his specific definition of the term. However, in practical retail terms, I would say it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes".

    The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost in "real time" rather than through infrequent surveys that provide a "rear view" picture of your consumer behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack, and committing a re-buy. Or you could monitor when the preference for a specific mobile device has declined and hence markdowns should be considered; or how demand for a specific ready-made food typically flows across regions, and manage the inventory accordingly.

    Search, blogging, website and store data may all need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail) but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process Buyers, Merchandisers and Supply Chain managers are following. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance.

    I would love to hear your opinions on these trends and where you think Retail is heading to exploit these topics - please email me: [email protected]

    Read the article

  • Extracting data from the internet

    - by Ankiov Spetsnaz
    I would like to extract data from the internet like www.mozenda.com does, but I want to write my own program to do it. The specific data I'm looking for is various event data. Based on my research, I think a custom web crawler is my answer, but I would like to confirm that, and to get suggestions on building custom web crawlers if a crawler is indeed the answer. Personally, I would prefer Java, and I'm planning on using Glassfish technology if that matters...
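
    For illustration, here is a minimal sketch of such a crawler in Java, assuming the open-source jsoup library for fetching and parsing pages; the seed URL, crawl budget, and selectors are placeholders, and a real crawler would also need politeness (robots.txt, rate limiting), error handling, and scoping to the target sites.

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;
        import java.util.*;

        /** Minimal breadth-first crawler sketch using jsoup (assumed dependency). */
        public class EventCrawler {
            public static void main(String[] args) throws Exception {
                Queue<String> frontier = new ArrayDeque<>();
                Set<String> seen = new HashSet<>();
                frontier.add("http://example.com/events"); // placeholder seed URL
                int pagesLeft = 50;                        // crude crawl budget

                while (!frontier.isEmpty() && pagesLeft-- > 0) {
                    String url = frontier.poll();
                    if (!seen.add(url)) continue;          // skip already-visited pages
                    Document doc = Jsoup.connect(url).get();
                    // A real crawler would extract event data here, e.g. with a
                    // site-specific selector such as doc.select("div.event").
                    System.out.println(doc.title() + " <- " + url);
                    for (Element link : doc.select("a[href]")) {
                        frontier.add(link.absUrl("href")); // enqueue outgoing links
                    }
                }
            }
        }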

    Read the article

  • HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database

    - by Jane Story
    When working with a Hyperion Profitability and Cost Management (HPCM) Standard Costing application, there can often be a requirement to check data or allocated results using reporting tools, e.g. Smart View. To do this, you retrieve data directly from the Essbase databases related to your HPCM model. Running reports is covered in Chapter 9 of the HPCM User documentation. The aim of this blog is to provide a quick guide to finding this data for reporting in the HPCM-generated Essbase database in v11.1.2.2.x of HPCM.

    In order to retrieve data from an HPCM-generated Essbase database, it is important to understand each of the following dimensions in the Essbase database and where data is located within them:
    - Measures dimension: identifies Measures.
    - AllocationType dimension: identifies Direct Allocation data or Genealogy Allocation data.
    - Point of View (POV) dimensions: there must be at least one, and at most four.
    - Business dimensions: Stage Business dimensions, identified by the Stage prefix, and Intra-Stage dimensions, identified by the _Intra suffix.

    Essbase outlines and reporting are explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s02.html. For additional details on reporting measures, please review this section of the documentation: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/apas03.html. Reporting requirements in HPCM quite often start with identifying non-balanced items in the Stage Balancing report; the following documentation link provides help with identifying some of the items within that report: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/generatestagebalancing.html

    The following are some types of data upon which you may want to report: Stage data (Direct Input, Assigned Input Data, Assigned Output Data, Idle Cost/Revenue, Unassigned Cost/Revenue, Over-Driven Cost/Revenue), Direct Allocation data, and Genealogy Allocation data.

    Stage Data. Stage data consists of: Direct Input, i.e. input data, the starting point of your allocation (e.g. in Stage 1); Assigned Input Data, i.e. the cost/revenue received from a prior stage (i.e. stage 2 and higher); and Assigned Output Data, i.e. for each stage, the data that will be assigned forward (post-stage data). Reporting on this data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s03.html. Select the following members:
    - Measures: for Direct Input, CostInput and RevenueInput; for Assigned Input (from previous stages), CostReceivedPriorStage and RevenueReceivedPriorStage; for Assigned Output (to subsequent stages), CostAssignedPostStage and RevenueAssignedPostStage.
    - AllocationType: DirectAllocation.
    - POV: one member from each POV dimension.
    - Stage Business dimensions: any members of the stage business dimensions for the stage whose data you wish to see.
    - All other dimensions: NoMember.

    Idle/Unassigned/OverDriven. To view Idle, Unassigned or Overdriven Cost/Revenue, first select the stage for which you want to view this data. If multiple stages have unassigned/idle data, resolve the earliest stage first and re-run the calculation, as differences in early stages will create unassigned/idle data in later stages. Select the following members:
    - Measures: IdleCost and IdleRevenue; UnAssignedCost and UnAssignedRevenue; OverDrivenCost and OverDrivenRevenue.
    - AllocationType: DirectAllocation.
    - POV: one member from each POV dimension.
    - Stage Business dimensions: all the stage business dimensions in the stage with Unassigned/Idle/Overdriven data; zoom in on each dimension to find the individual members that carry the data.
    - All other dimensions: NoMember.

    Direct Allocation Data. Direct allocation data shows the data received by a destination intersection from a source intersection where a direct assignment(s) exists. Reporting on direct allocation data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s04.html. Select the following members:
    - Measures: CostReceivedPriorStage.
    - AllocationType: DirectAllocation.
    - POV: one member from each POV dimension.
    - Stage Business dimensions: any members for the SOURCE stage business dimensions and the DESTINATION stage business dimensions for the direct allocations in the stage you wish to report on.
    - All other dimensions: NoMember.

    Genealogy Allocation Data. Genealogy allocation data shows the indirect data relationships between stages. Genealogy calculations run in the HPCM Reporting database only. Reporting on genealogy data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s05.html. Select the following members:
    - Measures: CostReceivedPriorStage.
    - AllocationType: GenealogyAllocation (IndirectAllocation in 11.1.2.1 and prior versions).
    - POV: one member from each POV dimension.
    - Stage Business dimensions: any stage business dimension members from the STARTING stage, the INTERMEDIATE stage(s), and the ENDING stage in the genealogy.
    - All other dimensions: NoMember.

    Notes. If you still don't see data after checking the above, verify that the calculation has been run. A couple of indicators can help with that: compare the size of the Essbase cube before and after the calculation to confirm a calculation ran against the database you are examining; export the Essbase data to a text file to confirm that some data exists; and examine the date and time in the task area to see when, if ever, calculations were run and what choices were used (e.g. genealogy choices). If data does not exist where you expect it, it could be that no calculations/genealogy were run, that no calculations ran successfully, or that the model/data at the feeder location were absent or incompatible, resulting in no allocation (e.g. no driver data).

    Smart View Invocation from HPCM. From version 11.1.2.2.350 of HPCM (this version will be GA shortly), it is possible to invoke Smart View directly from HPCM. There is guided navigation before the Smart View invocation, and it is then possible to see the selected value(s) in Smart View.

    Click to Download HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database (Right click or option-click the link and choose "Save As..." to download this pdf file)

    Read the article

  • Exporting Master Data from Master Data Services

    This white paper describes how to export master data from Microsoft SQL Server Master Data Services (MDS) using a subscription view, and how to import the master data into an external system using SQL Server Integration Services (SSIS). The white paper provides a step-by-step sample for creating a subscription view and an SSIS package.

    Read the article

  • Google Webmaster Tools Data Highlighter says "Failed to load data, please try again later"

    - by George Garside
    I seem to be unable to access the data highlighter in Google Webmaster Tools since I attempted to start a new highlight on a page. Clicking the red Start Highlighting button to open the tagger did nothing, so I refreshed. Now, the page loads without the middle content section, then a few seconds later shows the following error: Failed to load data, please try again later. I can't get any of the middle section to load, even the list of current pages/page sets that have been highlighted—this error shows. I thought it may be a Google service outage, but other sites' data highlighters work fine. It also seems coincidental that it stopped working after I attempted to start highlighting—I was able to list the existing pages and page sets fine before that, and still am able to access the service on other sites. I've tried clearing browser data and have tried Google Chrome as well—same problem. What's happened?

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is. What is NewSQL? NewSQL stands for the new generation of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database; it is about vendors who support emerging data products with relational database properties (like ACID, transactions, etc.) along with high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: "NewSQL" is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as 'ScalableSQL' to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term 'NewSQL' in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL.

    Categories of NewSQL. There are three major categories of NewSQL:
    - New Architecture: each node owns a subset of the data, and queries are split into smaller queries sent to the nodes to process the data, e.g. NuoDB, Clustrix, VoltDB.
    - MySQL Engines: highly optimized storage engines that keep the MySQL interface, e.g. InnoDB, Akiban.
    - Transparent Sharding: systems that automatically split a database across multiple nodes, e.g. ScaleArc; a minimal sketch of the routing idea appears below.

    Summary. In simple words, NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL. Tomorrow: In tomorrow's blog post we will discuss the role of cloud computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
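
    To make the Transparent Sharding category concrete, here is a minimal, hypothetical Java sketch of the routing idea behind it: the application issues one logical query, and a router picks the physical node from the sharding key. Real products do far more (re-sharding, cross-shard joins, failover); the node list and the modulo scheme here are illustrative assumptions only.

        import java.util.List;

        /** Hypothetical shard router: maps a sharding key to one of N database nodes. */
        public class ShardRouter {
            private final List<String> nodes; // e.g. JDBC URLs of the physical shards

            public ShardRouter(List<String> nodes) {
                this.nodes = nodes;
            }

            /** Pick a node by hashing the sharding key (illustrative modulo scheme). */
            public String nodeFor(String shardKey) {
                int bucket = Math.floorMod(shardKey.hashCode(), nodes.size());
                return nodes.get(bucket);
            }

            public static void main(String[] args) {
                ShardRouter router = new ShardRouter(List.of(
                    "jdbc:mysql://node0/app", "jdbc:mysql://node1/app", "jdbc:mysql://node2/app"));
                // The same customer id always routes to the same shard:
                System.out.println(router.nodeFor("customer-42"));
            }
        }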

    Read the article

  • The Best Articles for Backing Up and Syncing Your Data

    - by Lori Kaufman
    World Backup Day is March 31st and we decided to provide you with some useful information to make backing up your data easier. We've published articles about backing up various types of data and settings, both offline and online. There are all kinds of settings on your computer to back up in addition to your personal data, such as Wi-Fi passwords, drivers, and settings for programs like web browsers, Office, and Windows Live Writer. There are also many tools available to help you keep your data and settings backed up.

    Read the article

  • Big Data Learning Resources

    - by Lara Rubbelke
    I have recently had several requests from people asking for resources to learn about Big Data and Hadoop. Below is a list of resources that I typically recommend. I'll update this list as I find more resources. Let's crowdsource this... Tell me your favorite resources and I'll get them on the list! Books and Whitepapers: Planning for Big Data (free e-book) - a great primer on the general Big Data space; this is always my recommendation for people who are new to Big Data and are trying to understand it....(read more)

    Read the article

  • Distortion in format of data in WordPad file when shifted from Windows XP to Windows 7

    - by Harpreet
    I have many data files which were set to open in WordPad in Windows XP. Those files have a particular format, like the following:

        Name of data file
        No. of data columns
        Name of data in column_1
        Name of data in column_2
        ...
        Name of data in column_n
        column_1 column_2 column_3 ... column_n

    Now my computer has been reformatted and the OS changed to Windows 7; however, when I open my data files in WordPad, the format above is no longer preserved and appears distorted. Does anyone know how to restore the format shown above, which is what the data used to look like in XP? I have attached a snapshot of the new, distorted format as seen in WordPad on Windows 7. The snapshot shows 100 column names, yet only 5 data columns are shown when there should actually be 100.
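
    To make the described layout concrete, here is a minimal Java sketch of a reader for such a file, assuming whitespace-separated values (an assumption; the original delimiter is not stated) and a placeholder path:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        /** Reads the header-plus-columns layout described above (assumed whitespace-separated). */
        public class DataFileReader {
            public static void main(String[] args) throws IOException {
                List<String> lines = Files.readAllLines(Paths.get("sample.dat")); // placeholder path
                String fileName = lines.get(0).trim();               // line 1: name of data file
                int columns = Integer.parseInt(lines.get(1).trim()); // line 2: number of columns
                List<String> names = lines.subList(2, 2 + columns);  // next n lines: column names
                System.out.println(fileName + " has " + columns + " columns: " + names);
                for (String row : lines.subList(2 + columns, lines.size())) {
                    String[] fields = row.trim().split("\\s+");      // one data row per line
                    // process fields[0] .. fields[columns - 1] here
                }
            }
        }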

    Read the article

  • Data Loading Issues? Try the new Demantra Data Load Guided Resolution

    - by user702295
    Hello! Do you have data loading issues? Perhaps you are trying the new partial schema export tool. New to Demantra: the Data Load Guided Resolution, document 1461899.1. This interactive guide will help you quickly locate known solutions to previously discovered issues, from performance, ORA and ODPM errors to collections-related issues that have no known hard error number. The guide covers diagnosis of data being imported into Demantra and data being exported from Demantra. Contact me with any questions or suggestions. Thank you!

    Read the article

  • how to recover deleted ntfs partition with data entirely while installing ubuntu 13.04

    - by Anson Varghese
    I installed Ubuntu 13.04 onto my HP 2231tx computer. During installation all of my data was erased; I didn't know all three of my partitions would be deleted, and I was shocked to find that all of my personal data was gone. I didn't know what to do to resolve this problem, so I searched Google for an answer. I found a program called TestDisk and used it to recover about half of my data, but the recovered data did not include my personal photos and videos. Is there a way to recover the other half?

    Read the article

  • E-Book on big data (featuring Analysts, Customers and more)

    - by Jean-Pierre Dijcks
    As we are gearing up for Openworld, here is a nice E-book on big data to start paging through. It contains Gartner's take on big data, customer and partner interviews and a lot more good info. Enjoy the read so you come prepared for Openworld!! Read the E-Book here. For those coming to Oracle Openworld (or the Americas Cup races around the same time), you can find big data sessions via this URL. Enjoy!!

    Read the article

  • Choices in Architecture, Design, Algorithms, Data Structures for effective RDF Reasoning and Querying in a Big Data Environment [on hold]

    - by user2891213
    As part of my academic project I would like to know what choices in architecture, design, algorithms, and data structures we need in order to provide effective and efficient RDF reasoning and querying in a Big Data environment. Basically I want to get info regarding the points below (a small querying sketch follows the list):
    - The systems and software needed for an appropriate architecture.
    - The kind of API layer(s) we would need on top of the Big Data stores to make this possible.
    - The indexing structures we will need.
    - The appropriate algorithms, including algorithms for query planning across Big Data stores.
    - The performance analysis and cost models we will need to justify the design decisions made along the way.
    Can anyone please provide pointers? Thanks, David
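
    As a concrete starting point on the querying side only, here is a minimal sketch using Apache Jena (an assumed framework; the question does not name one): it loads a small RDF file into an in-memory model and runs a SPARQL query against it. A Big Data deployment would swap the in-memory model for a scalable store, which is exactly where the architecture, indexing, and query-planning choices above come in.

        import org.apache.jena.query.*;
        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;

        /** Minimal SPARQL-over-RDF sketch with Apache Jena (assumed framework). */
        public class RdfQueryDemo {
            public static void main(String[] args) {
                Model model = ModelFactory.createDefaultModel();
                model.read("data.rdf"); // placeholder path to a small RDF file

                String sparql = "SELECT ?s ?o WHERE { ?s ?p ?o } LIMIT 10";
                try (QueryExecution qe =
                        QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
                    ResultSet results = qe.execSelect();
                    while (results.hasNext()) {
                        QuerySolution row = results.next();
                        System.out.println(row.get("s") + " -> " + row.get("o"));
                    }
                }
            }
        }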

    Read the article

  • Data Structure Behind Amazon S3's Keys (Filtering Data Structure)

    - by dimo414
    I'd like to implement a data structure similar to the lookup functionality of Amazon S3. For those of you who don't know what I'm talking about, Amazon S3 stores all files at the root, but allows you to look up groups of files by common prefixes in their names, thereby replicating the power of a directory tree without the complexity of it. The catch is, both lookup and filter operations are O(1) (or close enough that even on very large buckets - S3's disk equivalents - both operations might as well be O(1)). So in short, I'm looking for a data structure that functions like a hash map, with the added benefit of efficient (at the very least not O(n)) filtering. The best I can come up with is extending HashMap so that it also contains a (sorted) list of contents, doing a binary search for the range that matches the prefix, and returning that set. This seems slow to me, but I can't think of any other way to do it. Does anyone know either how Amazon does it, or a better way to implement this data structure?
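
    For what it's worth, here is a minimal sketch of exactly the sorted-contents idea described above, using Java's TreeMap (a red-black tree): insertion and exact lookup are O(log n), and prefix filtering is O(log n) to locate the range plus the size of the output, rather than O(n). The keys and values are placeholders.

        import java.util.NavigableMap;
        import java.util.SortedMap;
        import java.util.TreeMap;

        /** Sketch of S3-style prefix filtering over flat keys with a sorted map. */
        public class PrefixIndex {
            private final NavigableMap<String, String> objects = new TreeMap<>();

            public void put(String key, String value) {
                objects.put(key, value); // O(log n); keys are kept sorted
            }

            public String get(String key) {
                return objects.get(key); // O(log n) exact lookup
            }

            /** Live view of all entries whose key starts with the given prefix. */
            public SortedMap<String, String> filter(String prefix) {
                // Every key >= prefix and < prefix + the maximal char lies in range.
                return objects.subMap(prefix, prefix + Character.MAX_VALUE);
            }

            public static void main(String[] args) {
                PrefixIndex idx = new PrefixIndex();
                idx.put("photos/2010/beach.jpg", "...");
                idx.put("photos/2011/ski.jpg", "...");
                idx.put("docs/readme.txt", "...");
                System.out.println(idx.filter("photos/").keySet());
                // -> [photos/2010/beach.jpg, photos/2011/ski.jpg]
            }
        }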

    Read the article

  • Best Data Structure For Time Series Data

    - by TriParkinson
    Hi all, I wonder if someone could take a minute out of their day to give their two cents on my problem. I would like some suggestions on what would be the best data structure for representing, on disk, a large set of time-series data. The main priority is speed of insertion, with the other priorities in decreasing order: speed of retrieval, size on disk, size in memory, speed of removal. I have seen that B+ trees are often used in databases because of their fast search times, but how about fast insertion times? Is a linked list really the way to go? Thanks in advance for your time, Tri
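
    Not an authoritative answer, but one common pattern is worth sketching: since time-series samples usually arrive in timestamp order, an append-only log gives O(1) amortized insertion while retrieval stays O(log n) via binary search over the already-sorted timestamps; the same layout works on disk as a sequentially appended file. The in-memory Java sketch below makes illustrative assumptions (long timestamps, double values, in-order arrival).

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        /** Append-only time-series store sketch: fast inserts, binary-search reads. */
        public class TimeSeriesLog {
            private final List<Long> timestamps = new ArrayList<>();
            private final List<Double> values = new ArrayList<>();

            /** O(1) amortized, assuming samples arrive in non-decreasing time order. */
            public void append(long timestamp, double value) {
                timestamps.add(timestamp);
                values.add(value);
            }

            /** O(log n) point lookup via binary search over the sorted timestamps. */
            public Double valueAt(long timestamp) {
                int i = Collections.binarySearch(timestamps, timestamp);
                return i >= 0 ? values.get(i) : null;
            }

            public static void main(String[] args) {
                TimeSeriesLog log = new TimeSeriesLog();
                log.append(1000L, 21.5);
                log.append(1060L, 21.7);
                System.out.println(log.valueAt(1060L)); // prints 21.7
            }
        }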

    Read the article
