Search Results

Search found 63372 results on 2535 pages for 'data center'.


  • what's the key difference between data management and data governance?

    - by Sid Xing
    I just read some articles about these two concepts, and I thought they had similar goals, but DG seems to be more about process management by following some best practices. So my first question is about the difference between DG & DM; I'm confused. There are so many concepts around data management: data quality, data security, data governance, data profiling, data integration, master data management, metadata management... It seems like none of them is EXACTLY separate; they all overlap. My second question is really a request for your suggestions to help me better understand the relation between these concepts. I appreciate your help.


  • Possibly simple PHP/MYSQL issue with retrieving and showing data

    - by envoys
    I have been racking my brains over this for a while now. Here is the data I have in the SQL database as an example:

        ID | TYPE | DATA
        1  | TXT  | TEST
        2  | PHP  | php
        3  | JS   | JAVASCRIPT

    That is just an example; there are multiple listings for TXT, PHP and JS throughout the table. What I want to do is retrieve all the data and display it in separate drop-down/select boxes. Meaning, select box one would list all data with type TXT, select box two would list all data with type PHP, and select box three would list all data with type JS. The only way I have come up with is to run individual SQL queries for each different type. I know there is a way to do it all in one query and then display it the way I want to, but I just can't seem to figure out how, and I know it's going to drive me nuts when someone helps and I see just how they did it. Thanks for the input.
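
    The single-query approach is to SELECT every row ordered by TYPE and split the result set into one bucket per type in application code. A rough sketch of that idea, written in Python rather than PHP (the logic carries over directly; the table name "items" and the SQLite connection are made up for illustration):

        import sqlite3  # stand-in for the MySQL connection used in the question

        conn = sqlite3.connect("example.db")
        rows = conn.execute("SELECT id, type, data FROM items ORDER BY type, data")

        buckets = {}                                   # one list of options per TYPE
        for row_id, row_type, row_data in rows:
            buckets.setdefault(row_type, []).append(row_data)

        # render one select box per type: TXT, PHP, JS
        for row_type, values in buckets.items():
            print(f'<select name="{row_type.lower()}">')
            for value in values:
                print(f"  <option>{value}</option>")
            print("</select>")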


  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview

    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

    - Different record types in one list that use different parsing rules
    - Hierarchical lists, for example customers with nested orders
    - Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers
    - Complex headers, such as multiple header lines or parseable information in the header
    - Skipping of lines
    - Conditional or choice fields

    Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) file technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting table.

    The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, and the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

    How to Create an nXSD File

    Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. nXSD is valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
    The following is a sample nXSD schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
                    xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
                    elementFormDefault="qualified" attributeFormDefault="unqualified"
                    nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
          <xsd:element name="Root">
            <xsd:complexType><xsd:sequence>
              <xsd:element name="Header">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
              <xsd:element name="Customer" maxOccurs="unbounded">
                <xsd:complexType><xsd:sequence>
                  <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
                  <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
                </xsd:sequence></xsd:complexType>
              </xsd:element>
            </xsd:sequence></xsd:complexType>
          </xsd:element>
        </xsd:schema>

    The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and more.

    nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.

    The nXSD schema in this example describes a file with a header row containing data and 3 string fields per row delimited by commas, for example:

        Redwood City Downtown Branch, 06/01/2011
        Ebeneezer Scrooge, Sandy Lane, Atherton
        Tiny Tim, Winton Terrace, Menlo Park

    The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships.
    The tables for this example are:

    Table ROOT (1 row):
      ROOTPK         Primary key for root element
      SNPSFILENAME   Name of the file
      SNPSFILEPATH   Path of the file
      SNPSLOADDATE   Date of load

    Table HEADER (1 row):
      ROOTFK         Foreign key to ROOT record
      ROWORDER       Order of row in native document
      BRANCH         Data
      BRANCHORDER    Order of Branch within row
      LISTDATE       Data
      LISTDATEORDER  Order of ListDate within row

    Table ADDRESS (2 rows):
      ROOTFK         Foreign key to ROOT record
      ROWORDER       Order of row in native document
      NAME           Data
      NAMEORDER      Order of Name within row
      STREET         Data
      STREETORDER    Order of Street within row
      CITY           Data
      CITYORDER      Order of City within row

    Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

    How to Create a Complex File Data Server in ODI

    After creating the nXSD file and a test data file, and storing them on a file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, and the external database, can be set in the JDBC URL. The use of an external database defined by dbprops is optional, but is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse engineering the model will fail. All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

    How to Create Models Based on a Complex File Schema

    Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores (tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets, you have to make sure that all dependent tables have matching PK-FK pairs (the same applies to the XML driver as well).

    Debugging and Error Handling

    There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the Data Server.
    If either the nXSD has an error or the data does not comply with the schema, an error will be displayed. Sample error message:

        Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

    Complex File FAQ

    Q: Is the size of the native file limited by available memory?
    A: No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

    Q: Should I always use the complex file driver instead of the file driver in ODI now?
    A: No. Use the file technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver.

    Q: Are we generating XML out of flat files before we write it into a database?
    A: We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed as Java objects that just use the nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

    Q: Is the nXSD file interchangeable with SOA Suite?
    A: Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

    Q: Can I start the Native Format Builder from ODI Studio?
    A: No, the Native Format Builder has to be started from a JDeveloper instance with BPEL. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors.

    Q: When is the database data written back to the native file?
    A: Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading, so that no unnecessary write-backs occur.

    Q: Is the nXSD metadata part of the ODI Master or Work Repository?
    A: No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system.

    Q: Where can I find sample nXSD files?
    A: The Application Server Adapter Users Guide contains nXSD samples for various different use cases.
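
    To make the JDBC URL mentioned above concrete, a data server definition might look like the following sketch. The property names (f for the native file, d for the nXSD schema, re for the root element, s for the relational schema name, ro for read-only) follow the Complex File driver reference cited above, but the paths and schema name here are invented for illustration; consult Appendix C for the authoritative list:

        jdbc:snps:complexfile?f=/data/customers.csv&d=/data/customers.xsd&re=Root&s=CUSTOMERS&ro=true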


  • Data Recovery needed with a Sector Zero problem on an HDD

    - by Jay Robins
    I left my computer (running XP) copying files to an external drive. Came back a few hours later, and the laptop was pretty hot and had frozen (ironic, ain't it). Forced a reboot, and the laptop HDD hasn't worked since. Set it up in an external enclosure, but XP can't mount the HDD to a drive letter, though it is able to recognize the manufacturer and drive size (Hitachi, 320 GB). No noise or rattling when the drive is spinning, but I can't get anything off of it, since I can't mount it or see much of anything. A computer repair shop ran some software tests, says it came back with a "Zero Sector bad" message, and that I need to send it to a professional data recovery service. Any other options or ideas, before I have to spend thousands of dollars to recover my data? Any help would be GREATLY appreciated!!! I'm desperate and a poor student! Thanks in advance, -Jay
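
    One low-cost first step, offered as a suggestion rather than a guaranteed fix: if the drive still spins and is detected, image it with GNU ddrescue from a Linux live CD and run recovery tools against the image instead of the failing disk (the device name /dev/sdb is an assumption; check which device the enclosure gets first):

        ddrescue -n /dev/sdb hitachi.img hitachi.map     # first pass, skip the bad areas
        ddrescue -r3 /dev/sdb hitachi.img hitachi.map    # then retry bad sectors 3 times

    Professional recovery remains the safest route if the data is irreplaceable, since every read from a failing drive risks making things worse.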


  • Data Archiving vs not

    - by Recursion
    For the sake of data integrity, is it wiser to archive your files or just leave them unarchived? No compression is being used. My thinking is that if you leave your files unarchived, any corruption will only hurt a smaller number of files. Whereas if you archive, let's say, all of your documents, even the slightest corruption can make the entire archive unrecoverable. So what's the best way to keep a clean file system but not be subject to data corruption?


  • Excel Graph: How can I turn the data below into a 'time-based' graph?

    - by Mike
    In my spreadsheet I am collecting the time periods when certain values have been changed. The user is restricted to 4 time periods. I would like to show the data based on those time periods. I've included a mock-up of the data and the type of graph I would like to create. I've tried to create it for the last hour but am obviously missing something, so I thought I'd ask around. http://i48.tinypic.com/55lezr.jpg Many thanks for any help. Mike P.S. How do I make this image appear in the message and not as a link?


  • Why do standards place limits on data transfer rates?

    - by Mehrdad
    This is a rather general question about hardware and standards in general: why do they place limits on data transfer rates, and disallow manufacturers from exceeding those rates? (E.g. 54 Mbit/s for Wireless G, 150 Mbit/s for Wireless N, ...) Why not allow for some sort of handshake protocol, whereby the devices being connected agree upon the maximum throughput that they each support, and use that value instead? Why does there have to be a hard-coded limit, which would require a new standard for every single improvement to a data rate?



  • Converge Voice and Data networks using SonicWall

    - by skinneejoe
    We are looking to converge VoIP and data traffic onto a single wire so that our client's VoIP phones pass data through to the user's computer. We are speccing out a new SonicWall NSA appliance to handle routing functions, and layer 2 switches to manage VLANs. Not a huge network; medium sized. What should I know about converging the networks onto a single wire? Obviously I'll want to prioritize voice traffic; is this handled solely in the SonicWall with QoS configurations, or do the layer 2 switches need to be configured differently? Any other pitfalls I should be aware of, or any good resources for learning more?
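
    Typically both ends play a part: the phones tag their frames (an 802.1Q VLAN tag plus an 802.1p/DSCP priority marking), the layer 2 switches must carry both VLANs on the port and trust that marking, and the firewall applies QoS at layer 3. As a hedged illustration only, here is generic Cisco-style syntax for a combined voice/data access port; SonicWall-managed gear exposes the same concepts through its own interface, and the VLAN numbers are made up:

        interface GigabitEthernet0/1
         switchport mode access
         switchport access vlan 10     ! untagged data VLAN to the user's PC
         switchport voice vlan 20      ! tagged voice VLAN for the phone
         mls qos trust cos             ! trust the phone's 802.1p priority marking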


  • Does data mining qualify as an abuse?

    - by Hybryd
    Hi all, today I had a strange experience with my ISP. They disabled my password for internet connection, and when I called them, they enabled it again, but they didn't say why it happened. In the last couple of days I was running a data-mining script that I made for one forum, to get some useful info about the business that I'm in. So I thought: maybe my ISP figured that 10,000 page requests in a couple of hours to the same site may be some kind of attack. What do you think, does it qualify as an attack? Is it even OK to data mine in that way?
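
    For scale: 10,000 requests in two hours is more than one per second, sustained, which many sites and ISPs do treat as hostile traffic. A common courtesy is to throttle the crawler; here is a minimal sketch of the idea (the forum URL and the parsing step are placeholders):

        import time
        import urllib.request

        def process(html: bytes) -> None:
            """Placeholder for the actual scraping logic."""
            pass

        for n in range(1, 10001):
            url = f"https://forum.example.com/page/{n}"     # hypothetical URL
            with urllib.request.urlopen(url) as resp:
                process(resp.read())
            time.sleep(5)                                   # throttle: ~720 requests/hour

    Checking the site's robots.txt and terms of use before resuming the crawl is also worth doing.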


  • Excel data range - to sum series within date range

    - by Mark
    I have a set of data that I would like to manipulate, but my problem is not straightforward. In this data I have date ranges that include multiple entries of the same date on some days and not on others. What I need to accomplish is to manage a trading account so that no more than 1% of the account is put at risk on any given day (retrospectively). To do this, when a series of trades falls on the same day, I need to total the risk associated with each of those trades so that I can limit the total risk of the combined trades by limiting the position size I take in each. Here is a sample set of the data I am working with. As you can see, there are 5 trades on Jan 3. Each of these trades comes with a risk value. I need to add the risk values of these 5 trades so that I can compare the total to an account value and then determine if I should take more than 1 position in each trade. As you can see, there are different numbers of trades on the 4th, 5th, 6th and 9th. I need the values returned in each row so that I can further manipulate them in the spreadsheet. I am not new to Excel, but cannot come up with a solution here; your input is much appreciated. Forgive the presentation below; I cannot upload a pic (new user) and the format does not carry across from Excel, so I have aligned the first several lines manually with dots. Thx.

        Date ............. Pair ....... L/S ...... Initial Risk .......Win ......Loss ....BE. ....Avg Gain Avg Loss pips/swing
        1/3/2012 ....EUR/USD ....S .............15 ................1 ..................................10 ..........................15
        1/3/2012 ....USD/CHF .....L ............15 ..........................................1 ..........0
        1/3/2012 ....AUD/USD ....S .............15 ................1 .................................16 ...........................18
        1/3/2012 ....NZD/USD ....S .............15 ................1 ...................................7 .............................8
        1/3/2012 ....AUD/JPY .... S .............10 ................1 .................................25 ............................20
        1/4/2012 ....EUR/USD ....L .............20 ................1 .................................19 ...........................19
        1/4/2012 ....USD/CHF ....S ............ 15 ................1 .................................17 ...........................20
        1/4/2012 EUR/JPY L 20 1 0
        1/5/2012 EUR/USD L 15 1 10 20
        1/5/2012 GBP/USD L 20 1 15 20
        1/5/2012 USD/CHF S 15 1 0
        1/5/2012 USD/JPY S 10 1 7 10
        1/5/2012 USD/CAD S 15 1 28 36
        1/5/2012 AUD/USD L 15 1 20 20
        1/6/2012 USD/CAD S 15 1 5 -10
        1/6/2012 EUR/JPY L 15 1 7 7
        1/9/2012 AUD/USD S 15 1 22 30
        1/9/2012 NZD/USD S 15 1 10 15
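
    Assuming the dates end up in column A and Initial Risk in column D (column letters are assumptions based on the layout above), a same-day risk total can be computed on every row with SUMIF, which adds up Initial Risk for all rows whose date matches the current row's date:

        =SUMIF($A$2:$A$100, A2, $D$2:$D$100)

    Entered in row 2 and filled down, each row then shows the combined risk of all trades sharing its date, which can be compared against 1% of the account value to size the positions.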


  • URGENT: help recovering lost data

    - by Niels Kristian
    I have made a directory with sudo mkdir /ssd; the directory was supposed to be mounted to a RAID array called md3. This was done by adding the following line to fstab:

        /dev/md3 /ssd auto defaults 0 0

    Then, after I had used the directory for a while, I realized that I had forgotten to run sudo mount -a. Then I did run it, and now the data is gone. I tried to uncomment the line in fstab and run sudo mount -a again, but that didn't get my data back. What can I do!? Content of fstab:

        proc /proc proc defaults 0 0
        none /dev/pts devpts gid=5,mode=620 0 0
        /dev/md/0 none swap sw 0 0
        /dev/md/1 /boot ext3 defaults 0 0
        /dev/md/2 / ext4 defaults 0 0
        /dev/md3 /ssd auto defaults 0 0
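
    A likely explanation, offered as a hypothesis: before mount -a was run, the files were being written to the root filesystem inside the empty /ssd directory; mounting /dev/md3 on top of /ssd then hid them rather than deleted them. A hedged way to check:

        sudo umount /ssd    # unmount md3; this exposes the directory underneath again
        ls -la /ssd         # files written before 'mount -a' should reappear here

    If the files show up, move them elsewhere, remount with sudo mount -a, and copy them onto the mounted array.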


  • Recover data from SD card

    - by Paul Tarjan
    I have a 2 GB Kingston microSD card which is about 3 years old. I put it in a reader today in my Windows Vista computer, wrote a 32 MB file onto it, safely removed it, and then tried to read it elsewhere. Nothing. Putting it back in Vista, it now says "You need to format the disk in drive F: before you can use it." What should I do? I have access to many computers and OSes if your recommendations need that. I would be very sad if I lost all the contents of the card. Most of the data is backed up, but there are a few things that aren't. :( Doing a

        # dd if=/dev/sdg of=~/tmp/sd.bin

    gives me a 2 gig file, and grepping the file it seems like lots of my data is still there. How can I put it back together?
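
    Two things worth trying against that image, suggested with the caveat that success depends on how damaged the filesystem is (the mount point is an assumption):

        sudo mount -o loop,ro sd.bin /mnt    # try the image read-only via a loop device
        photorec sd.bin                      # if mounting fails, carve files out of the image

    PhotoRec (shipped with TestDisk) scans the raw image for known file signatures, so it can often recover photos and documents even when the filesystem metadata is gone; it will, however, lose the original file names.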


  • How to have Excel data validation display different data in a drop-down than is actually validated

    - by Memitim
    How can I provide a user with a drop-down menu in a cell that displays the contents from one column but actually writes the value from a different column to the cell, and validates against the values from that second column? I have a bit of code that very nearly does this (credit: DV0005 from the Contextures site):

        Private Sub Worksheet_Change(ByVal Target As Range)
            On Error GoTo errHandler
            If Target.Cells.Count > 1 Then GoTo exitHandler
            If Target.Column = 10 Then
                If Target.Value = "" Then GoTo exitHandler
                Application.EnableEvents = False
                ' Replace the chosen value with the one from the next column over
                Target.Value = Worksheets("Measures").Range("B1") _
                    .Offset(Application.WorksheetFunction _
                    .Match(Target.Value, Worksheets("Measures").Range("Measures"), 0) - 1, 1)
            End If
        exitHandler:
            Application.EnableEvents = True
            Exit Sub
        errHandler:
            Resume exitHandler
        End Sub

    The drop-down displays the values from one column, for example column B, but when a value is selected it actually writes the value from the same row of column C to the cell. However, data validation is actually validating against column B, so if I manually enter something from column C in the cell and try to move to another cell, data validation throws an error.


  • Recovering data from a corrupted disk

    - by r_honey
    I use an external hard disk to back up my data (it has 3 partitions). Last week when I plugged it in, the OS (Windows 7) hung and I had to force-reboot the machine. When I turned it back on, the system just did not detect the hard disk. That was last Sunday, and I had to give up after some time. Now I return a week later, and when I plug it back in, the OS detects the disk as well as all 3 partitions on it. But it says all 3 are unformatted, and I can't access any of them. Is there any way to recover data from the 3 partitions? (I tried PC File Recovery and Recuva from Piriform, but neither detected these partitions.)
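
    One more tool worth trying before giving up, suggested rather than guaranteed: TestDisk (from the same project as PhotoRec) is designed for exactly this case of lost or "unformatted" partitions, and can often rebuild a damaged partition table or boot sector:

        testdisk /dev/sdX    # or run testdisk_win.exe on Windows; pick the disk,
                             # then Analyse -> Quick Search to look for the lost partitions

    Working on a sector-level copy of the disk rather than the original is safer if the data matters.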


  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality, but rather to the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and some entries were made by accident on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that data troubles may alarm some managers at the customer, and that some processes on the customer's side may not work anymore, because their processes have somewhat adapted to our system. Personally, I consider data migrations an integral part of software development; data migration can be seen as being to data what refactoring is to code. I think that data migration is essential for creating software that evolves; without it, we would have to create painful software which works around a bad data structure. I am asking you: what are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties caused by them? Any other interesting thoughts that belong to this topic?


  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern web frameworks that claim to be MVC ones also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between the data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is appealing, but I also understand that, as usual, it comes with a price. In particular:

    1. Live binding is quite heavy, either with dirty checking (high CPU consumption) or with Object.observe() (high memory consumption, with high CPU load in some scenarios).
    2. There is a "frame" part in the word "framework", which means there are some boundaries/limits that can be hard to overcome if you need slightly more than it was designed for.

    The usual time split: 90% of the features are built in 10% of the project time, but the remaining 10% take 90% of the project time. I suspect (a.k.a. an educated guess) that those MVC things are not helping to implement more functionality in less time. If so, the motivation for using them is not quite clear. As an example: last week I wanted to find a virtual list idea/solution. I found one in vanilla JavaScript that is 120 LOC. An implementation of the same thing in AngularJS is about 420 LOC, and most of the code there seems like a fight with the framework itself... So my question is: what benefits does that MVC stuff, or data binding, give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, then what exactly?
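
    To ground point 1 above: observer-style binding (the KnockoutJS approach) notifies subscribers on every write, while dirty checking (AngularJS 1.x) re-compares every watched value on each digest cycle. A minimal, language-agnostic sketch of the observer approach, written in Python purely for illustration:

        class Observable:
            """A model value that notifies bound views when it changes."""

            def __init__(self, value=None):
                self._value = value
                self._subscribers = []

            def subscribe(self, fn):
                self._subscribers.append(fn)

            @property
            def value(self):
                return self._value

            @value.setter
            def value(self, new):
                if new != self._value:
                    self._value = new
                    for fn in self._subscribers:   # push the change to every bound view
                        fn(new)

        name = Observable("world")
        name.subscribe(lambda v: print(f"view updated: hello, {v}"))
        name.value = "data binding"   # assignment triggers the bound "view"

    The CPU/memory trade-off in the question falls directly out of this design choice: observers pay memory per subscription, while dirty checking pays CPU per comparison pass.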



  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I've been learning about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, or an entire code repository as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that that was supposed to be part of Vista/7, but that fell off the feature list too. Sure, any program can store a binary file and have any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?


  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or PowerPoint. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

    So, how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points, even single data cells only.

    TIP: It is not necessary to highlight and copy the row or column descriptions.

    Next, from the Smart View ribbon select Copy Data Point. Then switch to the Word or PowerPoint document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu, the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text. From now on, for any subsequent updates of the data shown in your documents, you only need to refresh the data by clicking the Refresh button on the Smart View menu, without copying and pasting the content again.

    As you might realize when trying out this feature on your own, there won't be any Point of View shown in the Office document. You have also seen in the example, where only a single data cell was copied, that no member names or row/column descriptions are copied either, although these are usually required in an ad-hoc report in order to define exactly where data comes from and how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or PowerPoint document as well. They are stored in the background for each individual data cell copied, and can be made visible by double-clicking the data cell. So for each cell/number, the complete connection information is stored along with the exact member/cell intersection from the database.

    And that's not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. By selecting the Manage POV option from the Smart View menu in Word or PowerPoint, the POV Manager – Queries window opens. You can now change your selection for each dimension from the original POV, either by double-clicking the dimension member in the lower right box under POV, or by selecting the Member Selector icon on the top right-hand side of the window.

    After confirming your changes, you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.

    TIP: Build your original report in a way that dimensions you might want to change from within Word or PowerPoint are already placed in the POV.

    And there is another really nice feature I wouldn't like to miss mentioning: using Dynamic Data Points in the way described above, you will never again lose or need to search for the original Excel sheet from which values were taken and copied as data points into an Office document, because from even one single data cell Smart View is able to recreate the entire original report content with just a few clicks. Select one of the numbers from within your Word or PowerPoint document by double-clicking, then select the Visualize in Excel option from the Smart View menu. Excel will open, and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary, as well as at least read access to the retrieved data. Apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features.

    So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases, or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.


  • Suggestion for a Data Structure!

    - by Jay
    I have the following requirements for a data structure:

    - Direct access to an element with the help of a key (the key will be an integer; its range is also the same as the integer range)
    - Avoid memory allocation in chunks (allocate contiguous memory for the data structure, including the data)
    - Should be able to grow the data structure size dynamically

    Which data structure would you suggest? Any pointers in the right direction will also be of help.
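
    One structure that fits all three requirements is an open-addressing hash table backed by a single flat array: O(1) keyed access, one allocation for the table plus entries, and growth by reallocating a larger array. Here is a rough Python sketch of the idea (in C or C++ this would be a single malloc'd array of structs, which is what actually gives the contiguous-memory property; all names are illustrative):

        class FlatHashTable:
            """Open addressing with linear probing in one flat array of slots."""

            _EMPTY = object()   # sentinel marking an unused slot

            def __init__(self, capacity=8):
                self._slots = [self._EMPTY] * capacity   # the single backing array
                self._count = 0

            def _probe(self, key):
                """Find the slot holding `key`, or the empty slot where it belongs."""
                i = hash(key) % len(self._slots)
                while self._slots[i] is not self._EMPTY and self._slots[i][0] != key:
                    i = (i + 1) % len(self._slots)       # linear probing
                return i

            def put(self, key, value):
                if (self._count + 1) * 4 > len(self._slots) * 3:   # grow at 75% load
                    self._grow()
                i = self._probe(key)
                if self._slots[i] is self._EMPTY:
                    self._count += 1
                self._slots[i] = (key, value)

            def get(self, key):
                slot = self._slots[self._probe(key)]
                if slot is self._EMPTY:
                    raise KeyError(key)
                return slot[1]

            def _grow(self):
                old = self._slots
                self._slots = [self._EMPTY] * (len(old) * 2)   # dynamic growth
                self._count = 0
                for slot in old:
                    if slot is not self._EMPTY:
                        self.put(*slot)

    The full integer key range is handled by hashing rather than by sizing the array to the key space, which keeps memory proportional to the number of stored elements.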


  • How can Core Data store an NSData?

    - by mystify
    The documentation says that Core Data properties can only store NSString, NSNumber and NSDate types. However, a lot of Core Data users claim Core Data can also store an NSData type. But I wasn't able to see that in the documentation, although the Xcode data modeler allows you to choose a data type called "binary" (which seems to be NSData). Did I miss something? Is there a hidden place in the documentation that indeed lists NSData for binary stuff?

