Search Results

Search found 58566 results on 2343 pages for 'data modelling'.

Page 82 of 2343

  • Most efficient way to rebuild a tree structure from data

    - by Ahsan
    Have a question on recursively populating JsTree using the .NET wrapper available via NuGet; any help would be greatly appreciated. The .NET class JsTree3Node has a property named Children, which holds a list of JsTree3Nodes, and the pre-existing table containing the node structure looks like this:

        NodeId  ParentNodeId  Data         AbsolutePath
        1       NULL          News         /News
        2       1             Financial    /News/Financial
        3       2             StockMarket  /News/Financial/StockMarket

    I have an EF data context for the database, so my code currently looks like this:

        var parentNode = new JsTree3Node(Guid.NewGuid().ToString());
        foreach (var nodeItem in context.Nodes)
        {
            // What is the most efficient logic to do this recursively?
            parentNode.Children.Add(nodeItem.Data);
        }

    As the inline comment says, what would be the most efficient way to load the JsTree data onto the parentNode object? I can change the existing node table to suit the logic, so feel free to suggest any changes to improve performance.
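
    One commonly suggested approach (a hedged sketch, not the poster's code): skip recursion entirely and rebuild the tree in two linear passes over a dictionary keyed on NodeId. The EF property names and the Text label property are assumptions; substitute whatever JsTree3Node actually exposes.

        // Sketch: O(n) tree rebuild, no recursion. Pass 1 creates one node
        // per row; pass 2 attaches each node to its parent via the lookup.
        var lookup = new Dictionary<int, JsTree3Node>();
        foreach (var row in context.Nodes)
            lookup[row.NodeId] = new JsTree3Node(row.NodeId.ToString())
            {
                Text = row.Data // 'Text' is an assumed label property
            };

        var root = new JsTree3Node(Guid.NewGuid().ToString());
        foreach (var row in context.Nodes)
        {
            var node = lookup[row.NodeId];
            if (row.ParentNodeId == null)
                root.Children.Add(node);  // top-level node
            else
                lookup[row.ParentNodeId.Value].Children.Add(node);
        }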

    Read the article

  • Cloud Data Protection Tips and Guidelines

    Cloud Data Protection is an effective way to save and secure your data (must be less than 1 TB) and keep your PC healthy. But one may get bewildered by the hundreds of data backup ... [Author: Susan Brown - Computers and Internet - April 01, 2010]

    Read the article

  • Webcast: ODI and Successful Strategies for Optimizing Your Data Warehouse

    - by antonio romero
    A new public webcast for ODI, “Successful Strategies for Optimizing Your Data Warehouse”, is scheduled for March 3rd at 10am PT/1pm ET. In this webcast, Mala Narasimharajan from the product marketing team and Denis Gray from the product management team will be presenting ODI’s strong value proposition for data warehousing solutions. You can find the registration link below.
    Live webcast: Successful Strategies for Optimizing Your Data Warehouse
    March 3, 2011, 1pm ET/10am PT
    Registration link: http://www.oracle.com/us/dm/66153-wwmk10035379mpp011-se-309154.html

    Read the article

  • Are there any data remanence issues with flash storage devices?

    - by matt
    I am under the impression that, unlike magnetic storage, once data has been deleted from a flash drive it is gone for good, but I'm looking to confirm this. This actually relates to my smartphone, not my computer, but I figure it would be the same for any flash-type memory. Basically, I have done a "Factory Reset" on the phone, which wipes the flash ROM clean, but I'm wondering: is it really clean, or is the next person who has my phone, if they are savvy enough, going to be able to get all my passwords and whatnot? And yes, I am wearing my tinfoil hat so the CIA satellites can't read my thoughts, so I'm covered there.

    Read the article

  • SQL Azure Data Sync

    - by kaleidoscope
    The Microsoft Sync Framework Power Pack for SQL Azure contains a series of components that improve the experience of synchronizing with SQL Azure, including runtime components that optimize performance and simplify the process of synchronizing with the cloud. SQL Azure Data Sync allows developers and DBAs to:
    · Link existing on-premises data stores to SQL Azure.
    · Create new applications in Windows Azure without abandoning existing on-premises applications.
    · Extend on-premises data to remote offices, retail stores and mobile workers via the cloud.
    · Take Windows Azure and SQL Azure based web applications offline to provide an “Outlook-like” cached-mode experience.
    The Microsoft Sync Framework Power Pack for SQL Azure is comprised of the following:
    · SqlAzureSyncProvider
    · SQL Azure Offline Visual Studio Plug-In
    · SQL Azure Data Sync Tool for SQL Server
    · New SQL Azure Events Automated Provisioning
    Geeta
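
    For context, a minimal sketch of what a sync run looks like with the underlying Sync Framework 2.x providers. The scope name and connection strings are placeholders, the scope is assumed to be already provisioned, and the Power Pack's SqlAzureSyncProvider may be substituted for the remote provider; treat this as an illustration, not the Power Pack's documented API.

        // Hedged sketch: sync one provisioned scope between an on-premises
        // SQL Server and SQL Azure using Microsoft Sync Framework 2.x.
        using System.Data.SqlClient;
        using Microsoft.Synchronization;
        using Microsoft.Synchronization.Data.SqlServer;

        using (var local = new SqlConnection("Data Source=.;Initial Catalog=Shop;Integrated Security=True"))
        using (var azure = new SqlConnection("Server=tcp:myserver.database.windows.net;Database=Shop;User ID=u;Password=p;"))
        {
            var orchestrator = new SyncOrchestrator
            {
                LocalProvider  = new SqlSyncProvider("OrdersScope", local),
                RemoteProvider = new SqlSyncProvider("OrdersScope", azure),
                Direction      = SyncDirectionOrder.UploadAndDownload
            };
            var stats = orchestrator.Synchronize(); // upload/download counts
        }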

    Read the article

  • Data Holder Framework

    - by csharp-source.net
    Data Holder is an open source .NET object/relational mapper written in C#. It provides typed data encapsulation and database persistence for .NET applications. It also contains a wizard for generating the data objects and the persistence C# code. Right now it has a persistence implementation only for MS SQL Server 2000/2005.

    Read the article

  • how to do loop for array which have different data for each array

    - by Suriani Salleh
    I have this XML file that I need to convert from XML to MySQL. If it had only one array, I would know how to do it; my question is how to extract these two arrays, since each array has different data values. For example, the first array has pmIntervalTxEthMaxUtilization data (0, 74, 0, 0, 48), and the second array has pmIntervalRxPowerLevel data (-79, -68, -52) and pmIntervalTxPowerLevel data (13, 11, -55). Can someone help guide me in writing PHP code to extract this XML file into MySQL?

        <mi>
          <mts>20130618020000</mts>
          <gp>900</gp>
          <mt>pmIntervalRxUndersizedFrames</mt>  <!-- this is the first array -->
          <mt>pmIntervalRxUnicastFrames</mt>
          <mt>pmIntervalTxUnicastFrames</mt>
          <mt>pmIntervalRxEthMaxUtilization</mt>
          <mt>pmIntervalTxEthMaxUtilization</mt>
          <mv>
            <moid>port:1:3:23-24</moid>
            <sf>FALSE</sf>
            <r>0</r>  <!-- the data for the 1st array I want to insert in the DB -->
            <r>0</r>
            <r>0</r>
            <r>5</r>
            <r>0</r>
          </mv>
        </mi>
        <mi>
          <mts>20130618020000</mts>
          <gp>900</gp>
          <mt>pmIntervalRxSES</mt>  <!-- this is the second array -->
          <mt>pmIntervalRxPowerLevel</mt>
          <mt>pmIntervalTxPowerLevel</mt>
          <mv>
            <moid>client:1:3:23-24</moid>
            <sf>FALSE</sf>
            <r>0</r>  <!-- the data for the 2nd array I want to insert in the DB -->
            <r>-79</r>
            <r>13</r>
          </mv>
        </mi>

    This is the code I wrote for one array; I don't know how to write code for two arrays, because the fields appear twice and have different data values in each array:

        // Loop through the specified xpath
        foreach ($xml->mi->mv as $subchild) {
            $port_no  = $subchild->moid;
            $rx_ses   = $subchild->r[0];
            $rx_es    = $subchild->r[1];
            $tx_power = $subchild->r[10];
            // dump into database;
        }

    I have done a little research on it, and this is the outcome:

        $i = 0;
        while ($i < 5) {
            // Loop through the specified xpath
            foreach ($xml->md->mi->mv as $subchild) {
                $port_no = $subchild->moid;
                $rx_uni  = $subchild->r[10];
                $tx_uni  = $subchild->r[11];
                $rx_eth  = $subchild->r[16];
                $tx_eth  = $subchild->r[17];
                // dump into database;
                $i++;
                if ($i == 5) break;
            }
        }

        // Loop through the specified xpath
        foreach ($xml->mi->mv as $subchild) {
            $port_no  = $subchild->moid;
            $rx_ses   = $subchild->r[0];
            $rx_es    = $subchild->r[1];
            $tx_power = $subchild->r[10];
            // dump into database;
        }
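
    One hedged way to handle both blocks generically (a sketch only, assuming the <mi> elements sit directly under the root; adjust the path if there is an <md> wrapper): pair each <mt> name with the <r> value at the same position, so the column names come from the data itself.

        // Sketch: one loop handles every <mi>, whatever metrics it carries.
        $xml = simplexml_load_file('report.xml'); // filename is a placeholder
        foreach ($xml->mi as $mi) {
            $names = [];
            foreach ($mi->mt as $mt) { $names[] = (string)$mt; }
            $values = [];
            foreach ($mi->mv->r as $r) { $values[] = (string)$r; }
            // Requires equal counts of <mt> and <r>, as in the sample above.
            $row  = array_combine($names, $values);
            $port = (string)$mi->mv->moid;
            // INSERT INTO ... using $port and $row's name => value pairs
        }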

    Read the article

  • Oracle Vanquisher: A Data Center Optimization Adventure to Debut at Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    Heat. Downtime. Site-wide outages. Legacy hardware. Security holes. These are all threats to your data center. What if you could vanquish them to simplify your IT and accelerate business innovation and growth? Find out how: play Oracle Vanquisher, a new data center optimization video game that will be showcased at Oracle OpenWorld (Hardware DEMOgrounds, Moscone South Hall). Playing Oracle Vanquisher, you'll be armed with a cool Oracle vacuum pack suit and a strategic IT roadmap. You'll thwart threats and optimize your data center to increase your company’s stock price and boost your company’s position. Of course, optimizing your data center is far more than a great game. For more information, visit the Oracle Optimized Data Center homepage or check out these targeted Oracle OpenWorld keynotes and sessions:
    Keynotes:
    · Shift Complexity, with Oracle President Mark Hurd. Monday, October 1, 8:00 a.m. - 9:30 a.m., Moscone North, Hall D
    · Oracle Cloud Infrastructure and Engineered Systems: Fast, Reliable, Virtualized, with Oracle Executive Vice President John Fowler. Wednesday, October 3, 8:00 a.m. - 9:45 a.m., Moscone North, Hall D
    Sessions: Oracle Linux, Oracle Optimized Solutions, Oracle Solaris, SPARC Servers, Storage, SPARC SuperCluster, Oracle VM Server, Virtualization, Desktop Virtualization

    Read the article

  • How to restore missing calendar data from Lightning/Thunderbird

    - by dev9
    Today, out of nowhere, all my events and tasks disappeared from my Thunderbird. However, I have a full backup of the .thunderbird folder. How can I restore my calendar data? I reverted these files to previous versions:
    /home/me/.thunderbird/xxx.default/calendar-data/local.sqlite
    /home/me/.thunderbird/xxx.default/prefs.js
    but I still cannot see any data in my Thunderbird. What else should I do?
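
    A commonly suggested recovery path (a hedged sketch; xxx.default is the placeholder profile name from the question): restore the calendar database while Thunderbird is fully closed, so Lightning does not overwrite the restored file on exit.

        # Close Thunderbird first, then restore the calendar database
        # from the backup copy (backup location is a placeholder).
        cp ~/backup/.thunderbird/xxx.default/calendar-data/local.sqlite \
           ~/.thunderbird/xxx.default/calendar-data/local.sqlite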

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. A lost write occurs when an I/O subsystem acknowledges the completion of a block write even though the write did not occur in persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures and volume manager errors. In the demo, a data block lost write occurs when an I/O subsystem acknowledges the completion of the block write, while in fact the write did not occur in the persistent storage. When a primary database lost-write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate that a primary lost write has occurred (preventing the corruption from spreading to the standby database).
    Links:
    · MOS note 1302539.1, "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"
    · Demo
    · MAA Best Practices
    Rene Kundersma
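
    As background, this detection depends on the DB_LOST_WRITE_PROTECT initialization parameter being enabled on both the primary and the standby; a minimal sketch of turning it on (exact setting and overhead trade-offs are deployment-specific, see the MOS note above):

        -- Run on both primary and standby. TYPICAL records buffer cache
        -- reads in the redo log; FULL also covers direct-path reads,
        -- at additional overhead.
        ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE = BOTH;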

    Read the article

  • What to choose API based server or Socket based server for data driven application

    - by Imdad
    I am working on a project that has a desktop application for Mac/Cocoa, a native application for iPhone, and another native application for iPad. All the applications do almost the same thing. They are data-driven applications, and every communication with the server is made via a RESTful API developed in PHP. When a user logs in, a lot of data is fetched from the server, and to remain in sync with the server, polling is done. As there is a lot of data to poll, it makes the applications slower and unreliable. A possible solution that comes to my mind is to use a socket-based server. My question is: will it reasonably improve performance? And which (socket) technology would be good as a server-side solution for a data-driven application? I have heard a lot about Node.js. Please give your suggestions.
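
    Since Node.js came up, a minimal hedged sketch of the push model that usually replaces polling (assumes Node.js with the 'ws' WebSocket package; names and the delta shape are placeholders, and whether it helps depends on where the polling cost actually is):

        // Hedged sketch: push deltas to connected clients instead of polling.
        // Assumes Node.js and the 'ws' package (npm install ws).
        const WebSocket = require('ws');
        const wss = new WebSocket.Server({ port: 8080 });

        wss.on('connection', (socket) => {
          // Send the initial snapshot once, then rely on pushed deltas.
          socket.send(JSON.stringify({ type: 'snapshot', data: [] }));
        });

        // Call this wherever server-side data actually changes.
        function broadcastDelta(delta) {
          const payload = JSON.stringify({ type: 'delta', data: delta });
          wss.clients.forEach((client) => {
            if (client.readyState === WebSocket.OPEN) client.send(payload);
          });
        }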

    Read the article

  • Looking for a Lead SQL Developer with a passion for data

    - by simonsabin
    Data is a huge part of what we do, and I need someone who has a passion for data to lead our SQL team. Do you have experience with SQL and want to lead a team working in an agile environment with aggressive CI processes? Do you have a passion for data and want to use technology to solve problems? Then you are just the person I am looking for. The role is based in London, working for one of the top tech companies in Europe. Contact me through my blog or LinkedIn ( http://uk.linkedin.com/in/simonsabin )...(read more)

    Read the article

  • Oracle Enterprise Data Quality: A Leader in Customer Satisfaction

    - by Mala Narasimharajan
    It’s always good to hear feedback from practitioners – the ones who are in the trenches, who have experienced both the good and the bad sides of enterprise software. Gartner recently released a report that surveyed 260 data quality professionals from around the world and found that most expressed considerable satisfaction with their data quality tool vendors as a whole. However, a couple of key findings stand out, including Datanomic (acquired by Oracle) leading the pack in terms of overall customer satisfaction among data quality tools. Read all about it right here http://bit.ly/Ay45SG

    Read the article

  • An introduction to Oracle Retail Data Model with Claudio Cavacini

    - by user801960
    In this video, Claudio Cavacini of Oracle Retail explains the Oracle Retail Data Model, a solution that combines pre-built data mining, online analytical processing (OLAP) and dimensional models to deliver industry-specific metrics and insights that improve a retailer’s bottom line. Claudio shares how the Oracle Retail Data Model (ORDM) delivers retailer and market insight quickly and efficiently, allowing retailers to provide a truly multi-channel approach and consequently an effective customer experience. The rapid implementation of ORDM results in predictable costs and timescales, giving retailers a higher return on investment. Please visit our website for further information on Oracle Retail Data Model.

    Read the article

  • Create named criteria in EJB Data control

    - by shantala.sankeshwar
    This article gives the detailed steps on creating named criteria in an EJB data control. Note that this feature is available in JDeveloper version 11.1.2.0.0.
    Use case: suppose we have defined an EJB entity object and we would like to filter the entity object based on some criteria; this filtering can be achieved by creating named criteria in the EJB data control.
    Implementation steps:
    1. Create a Java EE web application with entities from the Emp table.
    2. Create a session bean and generate a data control for it.
    3. Edit empFindAll in the DataControls.dcx file.
    4. Create a simple named criteria, deptno >= 20, by clicking on the '+' icon.
    5. Refresh the data controls and create a new .jspx page.
    6. Drop EmpCriteria onto the page as an ADF Query Panel with Table.
    7. Run the page and click the Search button; the Emp table will show the filtered records.

    Read the article

  • Recover files from NTFS drive with bad sectors

    - by Martin
    A few nights ago I created a backup of my data on an external 500 GB NTFS USB hard drive. I then formatted my computer, reinstalled Ubuntu and started transferring the data back from the external HDD. Unfortunately some files have become corrupted and Ubuntu is unable to copy them over; the same issue happens if I log in using Windows 7. Disk Utility detects with SMART that there are "a few bad sectors". Some files are perfectly intact, but other files cannot be accessed (nor read, copied...) although they are displayed within Nautilus and show the correct file size. Is there anything I can do to recover this data? I have thought of using TestDisk, but this utility seems more useful for repairing lost partitions or deleted files. I have also thought of using ddrescue so I could at least have a low-level copy of the disk, but I am not sure what use to make of it in order to recover the data.
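
    Since ddrescue came up: the usual two-step flow (a sketch; the device name and file names are placeholders, so check yours with lsblk first) is to image the failing partition once, then recover files from the image instead of hammering the dying disk.

        # Image the failing partition, retrying bad sectors 3 times; the
        # map file lets ddrescue resume where it left off if interrupted.
        sudo ddrescue -d -r3 /dev/sdb1 rescue.img rescue.map

        # Mount the image read-only and copy files out of it.
        sudo mkdir -p /mnt/rescue
        sudo mount -o loop,ro rescue.img /mnt/rescue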

    Read the article

  • Ubuntu took away permissions from my Data partition

    - by RobinJ
    The pangolin has struck again. The bug of the day for today is Ubuntu taking away my permissions on my Data partition (NTFS). One moment everything worked fine; the next moment I couldn't chmod anything anymore. chown throws no errors or warnings at all, but nothing changes either, and chmod keeps saying "Operation not permitted". I've been messing around with /etc/fstab as suggested by other answers on AskUbuntu, but none of them seem to have the desired effect. This is my current line:
    UUID=25D7D681409A96B7 /media/Data ntfs defaults,umask=000,gid=46,permissions,users,auto,exec 0 0
    For reference, this is the original one (right after the problem started occurring):
    UUID=25D7D681409A96B7 /media/Data ntfs defaults,umask=007,gid=46 0 0
    What do I need to do so I am the owner of my own hard drive again? I want to be able to just use chmod and chown (without sudo) without being told that some mysterious alien has taken over control of my Data partition. I can still read and write, but execution permissions seem to be the problem.
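
    For context: NTFS has no native Linux permission store, so ntfs-3g synthesizes ownership from the mount options, and chmod/chown cannot change anything unless a user mapping is set up. A hedged sketch of an fstab line that at least makes one user the owner (uid 1000 is an assumption; check yours with the id command):

        # NTFS ownership is fixed at mount time; uid= sets the owner.
        UUID=25D7D681409A96B7 /media/Data ntfs-3g defaults,uid=1000,gid=46,umask=022 0 0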

    Read the article

  • Principles of an extensible data proxy

    - by Wesley
    There is a growing industry now with more than 30 companies playing in the Backend-as-a-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data as well as legacy PC data through established connectors; SAP, for example, provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing channel partners and consultants to build robust integration applications that can consume whatever data sources the client wants to expose. I just happen to be close to finishing a cloud-based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a cloud service which the client can configure with VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc.). I can also provide a framework that allows people to build their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public-facing web service that could securely be consumed by applications to show data through HTML5 on any device. My gut says this isn't as hard as it sounds. Almost all of the 30+ companies (with more popping up almost weekly) have come into existence in the last 18 months or so, which tells me either the root technology, or the skillset to create the technology, is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology.
    Links: The Register: WTF is BaaS | One Minute Video from Kony on their BaaS
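
    As a sketch of what "extensible" might mean concretely (entirely hypothetical names, in C#; no real product's API): a proxy core that only knows a connector contract, with one implementation registered per data source.

        // Hypothetical connector contract for an extensible data proxy.
        using System.Collections.Generic;

        public interface IDataConnector
        {
            string SourceName { get; } // e.g. "SAP", "OracleDB" (illustrative)
            IEnumerable<IDictionary<string, object>> Query(
                string entity, IDictionary<string, object> parameters);
        }

        public class DataProxy
        {
            private readonly Dictionary<string, IDataConnector> _connectors = new();

            public void Register(IDataConnector connector) =>
                _connectors[connector.SourceName] = connector;

            // The public-facing service layer calls this, then serializes
            // the normalized rows to JSON (or SOAP) for client apps.
            public IEnumerable<IDictionary<string, object>> Fetch(
                string source, string entity, IDictionary<string, object> args) =>
                _connectors[source].Query(entity, args);
        }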

    Read the article

  • Oracle Product Leader Named a Leader in Gartner MQ for MDM of Product Data Solutions

    - by Mala Narasimharajan
    Gartner recently named Oracle a leader in the MQ report for MDM of Product Data Solutions, with the following key points:
    · Strong MDM portfolio covering multiple data domains, industries and use cases
    · Oracle PDH can be a good fit for Oracle EBS customers and can form part of a multidomain solution
    · Deep MDM of product data functionality
    · Evolving support for information stewardship
    For more information on the report, visit Oracle's Analyst Relations blog at http://blog.us.oracle.com/dimdmar/. To learn more about Oracle's product information solutions for master data management, click here.

    Read the article

  • Is there a way to change the date format used when InfoPath saves the form data to xml?

    - by Robert
    I have an InfoPath form template that has some Date Picker controls in it, bound to elements in an XML data source. I know I can change the display format of the date by going into the Date Picker properties and setting the date format, but this format is only used for display purposes while the form is being filled out. When the form is saved as an XML file, the date is always stored in the format YYYY-MM-DD. Is there a way to change the date format that gets serialized to XML? I'm using InfoPath 2007.

    Read the article

  • How do I simplify terrain with tunnels or overhangs?

    - by KKlouzal
    I'm attempting to store vertex data in a quadtree with C++, such that far-away vertices can be combined to simplify the object and speed up rendering. This works well with a reasonably flat mesh, but what about terrain with overhangs or tunnels? How should I represent such a mesh in a quadtree? After the initial generation, each mesh is roughly 130,000 polygons, and about 300 of these meshes are lined up to create the surface of a planetary body. A fully generated planet is upwards of 10,000,000 polygons before applying any culling to the individual meshes, so this second optimization is vital for the project. The rest of my confusion comes from my inexperience with vertex data:
    · How do I properly loop through the vertex data to group vertices into specific quads?
    · How do I conclude from vertex data what a quad's maximum size should be?
    · How many quads should the quadtree include?
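
    A hedged sketch of the usual first answer to the grouping questions (in C# for brevity; the structure translates directly to C++): bucket vertices by XZ position and let a capacity threshold decide quad size empirically. Note this also illustrates why overhangs and tunnels break a pure quadtree: vertices stacked vertically land in the same XZ cell, which is why such features are typically handled with an octree or per-patch local projections instead.

        // Sketch: fixed-depth quadtree over the XZ plane. A cell subdivides
        // only while it holds more than maxVerts vertices, so quad size
        // falls out of vertex density rather than being chosen up front.
        using System.Collections.Generic;
        using System.Numerics;

        class QuadNode
        {
            public float MinX, MinZ, Size;     // square cell in the XZ plane
            public List<Vector3> Verts = new();
            public QuadNode[] Children;        // null while this is a leaf

            public void Insert(Vector3 v, int maxVerts, int maxDepth)
            {
                if (Children == null)
                {
                    Verts.Add(v);
                    if (Verts.Count <= maxVerts || maxDepth == 0) return;
                    Subdivide(maxVerts, maxDepth); // split and redistribute
                    return;
                }
                ChildFor(v).Insert(v, maxVerts, maxDepth - 1);
            }

            void Subdivide(int maxVerts, int maxDepth)
            {
                float h = Size / 2;
                Children = new QuadNode[4];
                for (int i = 0; i < 4; i++)
                    Children[i] = new QuadNode {
                        MinX = MinX + (i % 2) * h, MinZ = MinZ + (i / 2) * h, Size = h };
                foreach (var v in Verts) ChildFor(v).Insert(v, maxVerts, maxDepth - 1);
                Verts.Clear();
            }

            QuadNode ChildFor(Vector3 v) =>
                Children[(v.X >= MinX + Size / 2 ? 1 : 0) + (v.Z >= MinZ + Size / 2 ? 2 : 0)];
        }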

    Read the article

  • What data counters / meters are available?

    - by Santosh
    Actually, I have a wireless 3G modem that works well on Windows-based operating systems; its interface software is Windows-centric. It can still connect to the internet on Ubuntu or another Linux-based operating system, but it won't show the data counter (the interface which shows how much data has been transferred, and at what speed). If I continue to surf the internet on Linux, I won't have any idea how much data has been used, and it could become heavy on my pocket. So I just want software that lets me know how much data has been transferred; if there is a limiter that warns or disconnects me when I reach a predefined number of MBs, even better. Please let me know if there is any software or script or something like that already out there.
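
    One frequently recommended option is vnStat, a per-interface traffic accounting tool. A hedged sketch (assumes a Debian/Ubuntu system and that the modem shows up as ppp0; check the actual interface name with the ip link command):

        # Install the traffic accounting tool and query the modem interface.
        sudo apt-get install vnstat
        sudo vnstat -u -i ppp0   # initialize the database (older versions)
        vnstat -i ppp0           # cumulative totals for ppp0
        vnstat -i ppp0 --live    # live transfer-rate view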

    Read the article

  • OData Query Option top Forces Data To Be Sorted By Primary Key

    This post shows a simple WCF Data Service (formerly known as ADO.NET Data Services) project that retrieves data using the Reflection Provider for accessing data. It goes on to show that using $top... This site is a resource for asp.net web programming, with examples by Peter Kellner of techniques for high-performance programming...
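
    To make the title's claim concrete, a hedged sketch with hypothetical Northwind-style URIs (the implicit-ordering behavior is the post's subject; verify it against your own service):

        GET /Northwind.svc/Products?$top=3
            (no $orderby given: WCF Data Services applies an implicit
             ordering on the primary key, so rows come back by ProductID)
        GET /Northwind.svc/Products?$orderby=ProductName&$top=3
            (an explicit $orderby takes precedence over the implicit
             key ordering)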

    Read the article

  • A new Excel 2010 book for Data Analysis

    - by Marco Russo (SQLBI)
    Microsoft Press just announced the printing of Microsoft Excel 2010: Data Analysis and Business Modeling, which is the third edition of the book written by Wayne L. Winston, covering many data analysis and modeling techniques with a very clear problem-solution approach, including a good statistical explanation whenever necessary. I suggest this book as a good complement to our Microsoft PowerPivot for Excel 2010: Give Your Data Meaning!...(read more)

    Read the article
