Search Results

Search found 58566 results on 2343 pages for 'data modelling'.


  • Validating Petabytes of Data with Regularity and Thoroughness

    - by rickramsey
    by Brian Zents When former Intel CEO Andy Grove said “only the paranoid survive,” he wasn’t necessarily talking about tape storage administrators, but it’s a lesson they’ve learned well. After all, tape storage is the last line of defense against data loss, so tape administrators are extra cautious about making sure their data is secure. Not surprisingly, we are often asked for ways to validate tape media and the files on them. In the past, an administrator could validate the media, but doing so was often tedious, disruptive, or both. The debut of the Data Integrity Validation (DIV) and Library Media Validation (LMV) features in the Oracle T10000C drive helped eliminate many of these pains. Also available with the Oracle T10000D drive, these features use hardware-assisted CRC checks that not only ensure the data is written correctly the first time, but also do so much more efficiently. Traditionally, a CRC check takes at least 25 seconds per 4GB file with a 2:1 compression ratio, but the T10000C/D drives can reduce the check to a maximum of nine seconds because the entire check is contained within the drive; no data needs to be sent to a host application. That is a time savings of at least 64 percent (going from 25 seconds to 9 is a (25 − 9)/25 = 64 percent reduction), which is extremely beneficial over the course of checking an entire 8.5TB T10000D tape. While the DIV and LMV features are better than anything else out there, what storage administrators really need is a way to check petabytes of data with regularity and thoroughness. With the launch of Oracle StorageTek Tape Analytics (STA) 2.0 in April, there is finally a solution that addresses this longstanding need. STA bundles these features into one interface to automate all media validation activities across all Oracle SL3000 and SL8500 tape libraries in an environment. And best of all, the validation process can be associated with the health checks an administrator would already be doing through STA. In fact, STA validates the media based on any of the following policies:
      - Random Selection – Randomly selects media for validation whenever a validation drive in the standalone library or library complex is available.
      - Media Health = Action – Selects media that have had a specified number of successive exchanges (one to five) resulting in an Exchange Media Health of “Action.”
      - Media Health = Evaluate – Selects media that have had a specified number of successive exchanges (one to five) resulting in an Exchange Media Health of “Evaluate.”
      - Media Health = Monitor – Selects media that have had a specified number of successive exchanges (one to five) resulting in an Exchange Media Health of “Monitor.”
      - Extended Period of Non-Use – Selects media that have not had an exchange for a specified number of days (365 to 1,095, or one to three years).
      - Newly Entered – Selects media that have recently been entered into the library.
      - Bad MIR Detected – Selects media with an exchange resulting in a “Bad MIR Detected” error. A bad media information record (MIR) indicates degraded high-speed access on the media.
    To avoid disrupting host operations, an administrator designates certain drives for media validation operations. If a host requests a file from media currently being validated, the host’s request takes priority. To ensure that the administrator really knows it is the media that is bad, as opposed to the drive, STA includes drive calibration and qualification features.
    In addition, validation requests can be re-prioritized or cancelled as needed. To ensure that a specific tape isn’t validated too often, STA prevents a tape from being validated twice within 24 hours via any of the policies described above; a tape can be validated more often if the administrator manually initiates the validation. When the validations are complete, STA reports the results. STA does not simply report a “good” or “bad” status; it also reports whether media is degraded, so the administrator can migrate the data before there is a true failure. From that point, the administrators’ paranoia is relieved, as they have the information they need to make a sound decision about the health of the tapes in their environment.
    About the photograph: taken by Rick Ramsey in Death Valley, California, May 2014. - Brian

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    Consuming SharePoint 2010 data in Windows Phone 7 applications built with the CTP version of the developer tools is quite a challenge. The issue is that SharePoint 2010 data is not available anonymously; users need to authenticate before they can access it. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application, I thought “Hey, this should be easy!” because Windows Phone 7 development is based on Silverlight and SharePoint 2010 has a Client Object...

    Read the article

  • What can Go chan do that a list cannot?

    - by alpav
    I want to know in which situations a Go chan makes code much simpler than using a list, queue, or array of the kind usually available in all languages. As Rob Pike stated in one of his talks about the Go lexer, Go channels help organize data flow between structures that are not homomorphic. I am interested in a simple Go code sample with chan that becomes MUCH more complicated in another language (for example C#) where chan is not available. I am not interested in samples that use chan merely to increase performance by avoiding the wait between generating a list and consuming it (which can be solved by chunking), or as a way to organize a thread-safe queue or thread-safe communication (which can easily be solved with locking primitives). I am interested in a sample that makes code structurally simpler regardless of the size of the data; if no such sample exists, then one where the size of the data matters. I suspect the desired sample would involve bi-directional communication between a generator and a consumer. Also, if someone could add the tag [channel] to the list of available tags, that would be great.
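    For contrast, here is a minimal sketch (not from the original question; all names are illustrative) of the kind of plumbing the asker alludes to: a generator and a consumer exchanging values bi-directionally in C# using two BlockingCollection<T> instances. In Go, the two queues, the background task, and the completion signaling would collapse into two chans, a goroutine, and a close().

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class PingPong
    {
        static void Main()
        {
            // Two one-directional queues stand in for a pair of Go chans.
            var requests = new BlockingCollection<int>(boundedCapacity: 1);
            var replies = new BlockingCollection<int>(boundedCapacity: 1);

            // Generator: sends a value, then waits for the consumer's feedback
            // before deciding what to send next (the bi-directional part).
            var generator = Task.Run(() =>
            {
                int value = 1;
                for (int i = 0; i < 5; i++)
                {
                    requests.Add(value);
                    value = replies.Take() + 1;  // next value depends on the reply
                }
                requests.CompleteAdding();       // Go equivalent: close(requests)
            });

            // Consumer: doubles each value and feeds it back to the generator.
            foreach (var v in requests.GetConsumingEnumerable())
            {
                Console.WriteLine("got " + v);
                replies.Add(v * 2);
            }

            generator.Wait();
        }
    }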

    Read the article

  • SQLAuthority News – Download Whitepaper – Enabling and Securing Data Entry with Analysis Services Writeback

    - by pinaldave
    SQL Server Analysis Services has many commonly requested features, and many of them already exist in the product. Secure data entry is an important one, and SSAS supports it through its writeback feature. Analysis Services is a tool for aggregating information and providing business users with the ability to analyze and support decision making in their business. By using the built-in writeback feature in Analysis Services, business users can also modify their data points to perform what-if analysis or supplement any existing data. The techniques described in this article derive from the author’s professional experience in the design and development of complex financial analysis applications used by various business groups in a large multinational company. Download Whitepaper Enabling and Securing Data Entry with Analysis Services Writeback. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology
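    As a quick, hedged illustration of what writeback looks like from code (the cube, dimension, and member names below are invented, not taken from the whitepaper): MDX's UPDATE CUBE statement writes a what-if value into a cell, and ADOMD.NET can issue it from C#.

    using Microsoft.AnalysisServices.AdomdClient;

    class WritebackDemo
    {
        static void Main()
        {
            using (var conn = new AdomdConnection(
                "Data Source=localhost;Catalog=FinanceDB"))
            {
                conn.Open();
                var cmd = conn.CreateCommand();

                // Write a what-if value back to one cell; the allocation
                // clause spreads it across the leaf-level members below it.
                cmd.CommandText =
                    "UPDATE CUBE [Budget] " +
                    "SET ([Measures].[Amount], [Account].[Marketing], [Time].[2011]) = 50000 " +
                    "USE_EQUAL_ALLOCATION";
                cmd.ExecuteNonQuery();

                // Until the session commits the transaction, the change is
                // visible only to this session - which is what makes
                // what-if analysis safe for business users.
            }
        }
    }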

    Read the article

  • Any experience on open source database synchronization open source solutions? [on hold]

    - by Boris Pavlovic
    I'm considering a few open source database synchronization solutions. The system in need of data synchronization is composed of instances of different types of databases, i.e. a heterogeneous system. There are a few candidates:
      - SymmetricDS
      - Talend's Data Integration, with support for data synchronization
      - Continuent's Tungsten Replicator
      - Daffodil Replicator OS
    Do you have any real-world experience with any of these tools?

    Read the article

  • System Variables, Stored Procedures or Functions for Meta Data

    - by BuckWoody
    Whenever you want to know something about SQL Server’s configuration, whether that’s the Instance itself or a database, you have a few options. If you want to know “dynamic” data, such as how much memory or CPU is consumed or what a particular query is doing, you should be using the Dynamic Management Views (DMVs) that you can read about here: http://msdn.microsoft.com/en-us/library/ms188754.aspx  But if you’re looking for how much memory is installed on the server, the version of the Instance, the drive letters of the backups and so on, you have other choices. The first of these is system variables. You access these with a SELECT statement, and they are useful when you need a discrete value for use, say, in another query or to put into a table. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms173823.aspx You also have a few stored procedures you can use. These often bring back a lot more data, pre-formatted for the screen. You access these with the EXECUTE syntax. It is a bit more difficult to take the data they return and get a single value or place the results in another table, but it is possible. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms187961.aspx Yet another option is to use a system function, which you access with a SELECT statement and which also brings back a discrete value that you can use in a test or place in another table. You can read about those here: http://msdn.microsoft.com/en-us/library/ms187812.aspx  By the way, many of these constructs simply query tables in the master or msdb databases for the Instance, or the system tables in a user database. You can get much of the information there as well, and there are even system views in each database to show you the metadata dealing with structure – more on that here: http://msdn.microsoft.com/en-us/library/ms186778.aspx  Some of these choices are the only way to get at a certain piece of data. But others overlap – you can use one or the other, and they both come back with the same data. So, like many Microsoft products, you have multiple ways to do the same thing. And that’s OK – just research what each is used for and how it’s intended to be used, and you’ll be able to select (pun intended) the right choice.
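    To make the difference between these access patterns concrete, here is a minimal C# sketch (the connection string is a placeholder) that reads a system variable and a system function with SELECT, then runs a system stored procedure with EXECUTE:

    using System;
    using System.Data.SqlClient;

    class MetadataDemo
    {
        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=localhost;Database=master;Integrated Security=true"))
            {
                conn.Open();

                // System variable: a discrete value, easy to reuse in a query.
                var cmd = new SqlCommand("SELECT @@VERSION", conn);
                Console.WriteLine(cmd.ExecuteScalar());

                // System function: also a discrete, SELECT-able value.
                cmd.CommandText = "SELECT SERVERPROPERTY('Edition')";
                Console.WriteLine(cmd.ExecuteScalar());

                // System stored procedure: EXECUTE syntax, returns a
                // pre-formatted, multi-column result set.
                cmd.CommandText = "EXECUTE sp_helpdb";
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader["name"]);
                    }
                }
            }
        }
    }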

    Read the article

  • How can I restore my deleted calendar data in Windows Live Mail ?

    - by Cheryl
    I have been using the calendar in Windows Live Mail for months now. When I logged on yesterday, all of the data and appointments I have been inputting for months were gone; the calendar is completely blank. I have performed a system restore with no luck. I have run in safe mode with no luck. I have run a check disk in DOS with no luck. Any other ideas on what I can do to find my calendar info? Could the file be corrupt? I rely on this calendar and all of my info is gone, so any help will be greatly appreciated.

    Read the article

  • 'Getting Started with Oracle Data Integrator 11g: A Hands-On Tutorial' book is now available

    - by Julien Testut
    We are pleased to announce the availability of the first book on Oracle Data Integrator published by Packt Publishing:  Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial Authors: Peter C. Boyd-Bowman, Christophe Dupupet, Denis Gray, David Hecksel,  Julien Testut, Bernard Wheeler. Congratulations to everyone who contributed to this book! You can get more information about 'Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial' including the table of contents and a sample chapter at http://www.packtpub.com/oracle-data-integrator-11g-getting-started/book. The book is available on Amazon.com, Amazon.co.uk, Barnes & Noble and Safari Books Online.

    Read the article

  • Webcast Replay Available: Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by BillSawyer
    I am pleased to release the replay and presentation for the ATG Live Webcast: Scrambling Sensitive Data in EBS 12 Cloned Environments (Presentation). Eric Bing, Senior Director, Jagan Athreya, Enterprise Manager Product Management, and Elke Phelps, Senior Principal Product Manager, discussed the Oracle E-Business Suite Template for Data Masking Pack, and how it can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. (July 2012)
    Finding other recorded ATG webcasts: The catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • SQL SERVER – CSVExpress and Quick Data Load

    - by pinaldave
    One of the newest ETL tools is CSVexpress.com.  This is a program that can quickly load any CSV file into ODBC-compliant databases using data integration.  For those of you familiar with databases and how they operate, the question that comes to mind might be what use this program will have in your life. I have written an earlier article on this subject: SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress. You might know that RDBMSs have support for loading CSV files into tables – but it is not quite as easy as one click of a button.  First of all, most databases have a command line interface, and you need the file and a configuration script in order to load up.  You also need to know enough to write the script – which for novices can be extremely daunting.  On top of all this, if you work with more than one type of RDBMS, you need to know the ins and outs of uploading and writing scripts for more than one program. So you might begin to see how useful CSVexpress.com might be!  There are many other tools that enable uploading files to a database.  They can be very fancy – some can generate configuration files automatically, others load the data directly.  Again, novices will be able to tell you why these aren’t the most useful programs in the world.  You see, these programs were created with SQL in mind, not for uploading data.  If you don’t have large amounts of data to upload, getting the configurations right can be a long process and you will have to check the generated code yourself.  Not exactly “easy to use” for novices. That makes CSVexpress.com one of the best new tools available for everyone – but especially people who don’t want to learn a lot of new material all at once.  CSVexpress has an easy-to-navigate graphical user interface and no scripting or coding is required.  There are built-in constraints and data validations, and you can configure transforms and reject records right there on the screen.  But the best thing of all – it’s free! That’s right, you can download CSVexpress for free from www.csvexpress.com and start easily uploading and configuring files almost immediately.  If you’re currently happy with your method of data loading, keep up the good work.  For the rest of us, there’s CSVexpress.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
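    For a sense of the manual path that such tools automate, here is a hedged, minimal C# sketch of loading a CSV file into an ODBC data source by hand (the DSN, table, column, and file names are invented for illustration):

    using System;
    using System.Data.Odbc;
    using System.IO;

    class CsvLoader
    {
        static void Main()
        {
            using (var conn = new OdbcConnection("DSN=SalesDb"))
            {
                conn.Open();

                // ODBC uses positional '?' parameter markers.
                var cmd = new OdbcCommand(
                    "INSERT INTO SalesStaging (Region, Amount) VALUES (?, ?)", conn);
                cmd.Parameters.Add("@region", OdbcType.VarChar, 50);
                cmd.Parameters.Add("@amount", OdbcType.Decimal);

                foreach (var line in File.ReadLines("sales.csv"))
                {
                    // Naive split: real CSV needs quoting and escaping rules,
                    // which is exactly the drudgery GUI loaders take care of.
                    var fields = line.Split(',');
                    cmd.Parameters[0].Value = fields[0];
                    cmd.Parameters[1].Value = decimal.Parse(fields[1]);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }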

    Read the article

  • Google I/O 2010 - Data pipelines with Google App Engine

    Google I/O 2010 - Building high-throughput data pipelines with Google App Engine (App Engine 301, Brett Slatkin). This session covers how to build, test, and maintain large-scale data pipelines on Google App Engine, including maximizing efficiency, productionization, and how to deal with changing requirements. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 01:01:52

    Read the article

  • Defense Manpower Data Center Wins Award for Excellence in the Workforce

    - by Peggy Chen
    The Defense Manpower Data Center milConnect website recently won the 2012 Excellence.gov Award for Excellence in the Workforce. Defense Manpower Data Center milConnect is a centralized, online resource that provides military service members (active and retired) and their families (over 42 million in total) quick access to their profile, health care enrollments, benefits, and other military-related topics. An easy-to-use, safe, and secure website, milConnect also provides service members with convenient access to their personnel and service-related information. The self-service website allows users to quickly and easily find and, where applicable, update their information in the Defense Enrollment Eligibility Reporting System (DEERS), and milConnect transmits information to and from one reliable source safely and securely.  The Defense Manpower Data Center (DMDC) maintains the largest, most comprehensive central repository of personnel, manpower, casualty, pay, entitlement, personnel security, person identity and attributes, survey, testing, training, and financial data in the Department of Defense (DoD) – one of the largest systems of record in the world. milConnect had the challenge of modernizing the user experience for over 42 million users. With records in over 22 applications and 25 interfaces in hundreds of existing systems, milConnect needed to reduce the complexity of multiple authentication sources as well as consolidate access to existing systems with sensitive information. It accomplished this using a service-oriented architecture, enterprise security, and access and identity management for self-service access on a massive scale. By providing 24x7x365 secure access and handling over 5 million transactions daily, milConnect, built on Oracle WebCenter, has not only streamlined and improved the customer experience for military personnel and families, it has also done so while cutting costs, allowing self-service access, and promoting electronic government. Congrats to Defense Manpower Data Center and milConnect!

    Read the article

  • Details on Oracle's Primavera P6 Reporting Database R2

    - by mark.kromer
    Here are the details of our announcement for the new Oracle data warehouse product for Primavera P6, called P6 Reporting Database R2. This DW product includes the ETL, data warehouse star schemas, and ODS that you'll need to build an enterprise reporting solution for your projects and portfolios. This product is included on a restricted-license basis with the new Primavera P6 Analytics R1 product from Oracle, because those Analytics are built in OBIEE on top of this data warehouse product.

    Read the article

  • Low coupling processing big quantities of data

    - by vitalik
    Usually I achieve low coupling by creating classes that exchange lists, sets, and maps between them. Now I am developing a batch application and I can't put all the data inside a data structure because there isn't enough memory. I have to read and process one chunk of data and then go on to the next one. So keeping the coupling low is much more difficult, because I have to check somewhere whether there is still data to read, etc. What I am using now is: Source - Process - Persist. The classes that process have to ask the Source classes whether there are more rows to read. What are the best practices and/or useful patterns in such situations? I hope I am explaining myself clearly; if not, tell me.
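    One common way to keep those stages decoupled (a hedged sketch, not the asker's code; all names are illustrative) is to have the source expose a lazy IEnumerable<T> built with yield return. The processing and persisting classes then depend only on IEnumerable, never ask the source "are there more rows?", and only one chunk is in memory at a time:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Pipeline
    {
        // Source: yields rows lazily, chunk by chunk, so consumers
        // never need to check whether more data remains.
        static IEnumerable<string> ReadRows()
        {
            for (int chunk = 0; chunk < 3; chunk++)        // stand-in for paged reads
            {
                foreach (var row in FetchChunk(chunk))
                {
                    yield return row;
                }
            }
        }

        static IEnumerable<string> FetchChunk(int n)
        {
            return Enumerable.Range(0, 2).Select(i => "row " + n + "." + i);
        }

        // Process: a pure transformation, coupled only to IEnumerable.
        static IEnumerable<string> Transform(IEnumerable<string> rows)
        {
            return rows.Select(r => r.ToUpper());
        }

        // Persist: consumes the stream; nothing is ever materialized in full.
        static void Main()
        {
            foreach (var row in Transform(ReadRows()))
            {
                Console.WriteLine(row);
            }
        }
    }

    This is the shape that iterator/pipes-and-filters style patterns formalize: each stage pulls from the previous one, so end-of-data is signaled by the enumeration ending rather than by an explicit flag.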

    Read the article

  • Learning PostgreSql: bulk loading data

    - by Alexander Kuznetsov
    In this post we shall start loading data in bulk. For better performance of inserts, we shall load data into a table without constraints and indexes. This sounds familiar. There is a bulk copy utility, and it is very easy to invoke from C#. The following code feeds the output from a T-SQL stored procedure into a PostgreSql table: using (var pgTableTarget = new PgTableTarget(PgConnString, "Data.MyPgTable", GetColumns())) using (var conn = new SqlConnection(connectionString)) { conn.Open...(read more)
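    The snippet above is truncated by the aggregator, and PgTableTarget is the author's own helper class. For context, here is a separate, hedged sketch of the same idea – bulk loading into PostgreSql from C# – using Npgsql's COPY support (this assumes a recent Npgsql; the connection string and table are invented for illustration):

    using Npgsql;

    class BulkLoad
    {
        static void Main()
        {
            using (var conn = new NpgsqlConnection(
                "Host=localhost;Database=warehouse;Username=loader"))
            {
                conn.Open();

                // COPY ... FROM STDIN streams rows into the table far faster
                // than row-by-row INSERTs; per the post's advice, the target
                // table has no constraints or indexes yet.
                using (var writer = conn.BeginTextImport(
                    "COPY data.my_pg_table (id, payload) FROM STDIN"))
                {
                    for (int i = 0; i < 1000; i++)
                    {
                        // COPY text format is tab-separated, one row per line.
                        writer.WriteLine(i + "\t" + "value" + i);
                    }
                }
            }
        }
    }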

    Read the article

  • Best Design Pattern for Coupling User Interface Components and Data Structures

    - by szahn
    I have a Windows desktop application with a tree view. Due to the lack of a sound data-binding solution for a tree view, I've implemented my own layer of abstraction on it to bind nodes to my own data structure. The requirements are as follows: populate a tree view with nodes that resemble fields in a data structure; when a node is clicked, display the appropriate control to modify the value of that property in the instance of the data structure. The tree view is populated with instances of custom TreeNode classes that inherit from TreeNode. The responsibility of each custom TreeNode class is to (1) format the node text to represent the name and value of the associated field in my data structure, (2) return the control used to modify the property value, (3) get the value of the field from the control, and (4) set the field's value from the control. My custom TreeNode implementation has a property called "Control" which retrieves the proper custom control in the form of the base control. The control instance is stored in the custom node and instantiated upon first retrieval. So each custom node has an associated custom control which extends a base abstract control class. Example TreeNode implementation:

    //The Tree Node Base Class
    public abstract class TreeViewNodeBase : TreeNode
    {
        public abstract CustomControlBase Control { get; }

        public TreeViewNodeBase(ExtractionField field)
        {
            UpdateControl(field);
        }

        public virtual void UpdateControl(ExtractionField field)
        {
            Control.UpdateControl(field);
            UpdateCaption(FormatValueForCaption());
        }

        public virtual void SaveChanges(ExtractionField field)
        {
            Control.SaveChanges(field);
            UpdateCaption(FormatValueForCaption());
        }

        public virtual string FormatValueForCaption()
        {
            return Control.FormatValueForCaption();
        }

        public virtual void UpdateCaption(string newValue)
        {
            this.Text = Caption;
            this.LongText = newValue;
        }
    }

    //The tree node implementation class
    public class ExtractionTypeNode : TreeViewNodeBase
    {
        private CustomDropDownControl control;

        public override CustomControlBase Control
        {
            get
            {
                if (control == null)
                {
                    control = new CustomDropDownControl();
                    control.label1.Text = Caption;
                    control.comboBox1.Items.Clear();
                    control.comboBox1.Items.AddRange(
                        Enum.GetNames(typeof(ExtractionField.ExtractionType)));
                }
                return control;
            }
        }

        public ExtractionTypeNode(ExtractionField field) : base(field) { }
    }

    //The custom control base class
    public abstract class CustomControlBase : UserControl
    {
        public abstract void UpdateControl(ExtractionField field);
        public abstract void SaveChanges(ExtractionField field);
        public abstract string FormatValueForCaption();
    }

    //The custom control generic implementation (view)
    public partial class CustomDropDownControl : CustomControlBase
    {
        public CustomDropDownControl()
        {
            InitializeComponent();
        }

        public override void UpdateControl(ExtractionField field)
        {
            //Nothing to do here
        }

        public override void SaveChanges(ExtractionField field)
        {
            //Nothing to do here
        }

        public override string FormatValueForCaption()
        {
            //Nothing to do here
            return string.Empty;
        }
    }

    //The custom control specific implementation
    public class FieldExtractionTypeControl : CustomDropDownControl
    {
        public override void UpdateControl(ExtractionField field)
        {
            comboBox1.SelectedIndex =
                comboBox1.FindStringExact(field.Extraction.ToString());
        }

        public override void SaveChanges(ExtractionField field)
        {
            field.Extraction = (ExtractionField.ExtractionType)Enum.Parse(
                typeof(ExtractionField.ExtractionType),
                comboBox1.SelectedItem.ToString());
        }

        public override string FormatValueForCaption()
        {
            return string.Empty;
        }
    }

    The problem is that I have "generic" controls which inherit from CustomControlBase. These are just "views" with no logic. Then I have specific controls that inherit from the generic controls. I don't have any functions or business logic in the generic controls because the specific controls should govern how data is associated with the data structure. What is the best design pattern for this?

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by J Swaroop
    Cross-posting Dain Hansen's excellent recap of the Big Data/Fast Data announcement during OOW: For those of you who may have missed it, today’s second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world – and not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle’s Cloud and, more specifically, Oracle’s expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle’s Cloud applications. What this means is that Oracle’s Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It’s an important element of our strategy at Oracle that supports the idea that whether your requirements are for private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn’t enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi took an important step in the definition of this term, explaining that he believes it’s a convergence of four essential technology elements:
      - Event processing for event filtering and business rules – with Oracle Event Processing
      - Data transformation and loading – with Oracle Data Integrator
      - Real-time replication and integration – with Oracle GoldenGate
      - Analytics and data discovery – with Oracle Business Intelligence
    Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results.  Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we’re seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we’re seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using Event Processing and Big Data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.

    Read the article

  • SQL – Business Intelligence: Derive Data or Information?

    - by Pinal Dave
    We all know the value of information in our lives. Whether it’s a personal decision or a business-initiated one, people need it. But the question is: who is to make the distinction between data and information? We all come across a whole lot of data daily, which may or may not be significant. We filter what’s required and forget about the rest. Information is filtered and distilled data, and filtering and distillation can also alter its actual meaning and natural state. Therefore, in this blog we discover some ways to ensure that we’re using business intelligence derived from the right information for making critical management decisions. Four key questions managers must ask themselves before making a decision:
      1. Am I working with data or information?
      2. What is its context?
      3. How recent is it?
      4. How was it derived, or what is the source?
    The first question is probably the most important. You must know what you’re dealing with here. If you see use of adjectives and conclusions drawn, it’s information, not raw data. Your very next concern must be whether this is guised to present a particular viewpoint or perspective. It makes a lot of difference if you take a decision based on someone’s propaganda that distorts the real facts. Therefore, the context and the intentions of the distillation process must be clear to you. The next consideration is whether the data is recent enough to hold any value. Since it has a very short shelf life, you must ensure that its context and value are not lost over time. The last consideration is how it was derived in the first place. The observer effect is what calls the shots here. The source can change the context to a great extent if the collection methodology and purpose are not clear. Gathering intelligence for decision making requires users to be keen observers and not take the information provided at face value alone. These probing questions will allow you to make sure that you’re working with clean and accurate data devoid of any influence or manipulation. Only then can you be sure of deriving true business intelligence for your organization. BI technology is also a great way to ensure accuracy of reports. The SQL BI Platform provides advanced tools and techniques for all your BI needs and concerns. Koenig Solutions offers this course along with a host of other Business Intelligence and IT courses on all the latest technologies available in the market today. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • This Year's SQL Christmas Card

    - by Mike C
    This year's Christmas Card is similar to last year's. I used the geometry data type again for a spatial data design. Just download the attachment, unzip the .SQL script and run it in SSMS. Then look at the Spatial Data preview tab for the result. Also don't forget to visit http://www.noradsanta.org/ if your kids want to track Santa. Merry Christmas, Happy Holidays and have a great new year!...(read more)

    Read the article

  • Shared Data Source name error underscore characters added

    - by mick
    The name of our shared data source in RS (Report Server) is "AF1 Live Database" (no underscore characters – just spaces between words), and it is the same in Report Builder in VS. However, the following error pops up when the RDL of this report is uploaded onto our company site and run: The report server cannot process the report or shared dataset. The shared data source 'AF1_Live_Database' for the report server or SharePoint site is not valid. Browse to the server or site and select a shared data source. (rsInvalidDataSourceReference) We have no idea why the error reports the shared data source as 'AF1_Live_Database' with underscore characters. As this appears to be the problem that keeps the report from running, we are seeking your help. Thanks.

    Read the article

  • SQL Server Integration Services 2008: Importing Excel Data Using Derived Column Transformation

    The complexity involved in transferring data between Excel and SQL Server results from different and sometimes incompatible data types. The Import and Export wizard mitigates potential issues introduced by these incompatibilities by taking advantage of Data Conversion Transformation. Marcin Policht describes another approach that produces an equivalent outcome by employing Derived Column Transformation instead.

    Read the article

  • Getting a handle on mobile data

    - by Eric Jensen
    written by Ashok Joshi The proliferation of mobile devices in the corporate world is both a blessing and a challenge.  Mobile devices improve productivity and the velocity of business for the end users; on the other hand, IT departments need to manage the corporate data and applications that run on these devices. Oracle Database Mobile Server (DMS for short) provides a simple and effective way to deal with the management challenge.  DMS supports data synchronization between a central Oracle database server and data on mobile devices.  It also provides authentication, encryption, and application and device management.  Finally, DMS is a highly scalable solution that can be used to manage hundreds of thousands of devices.   Here’s a simplified outline of how such a solution might work. Each device runs local sync and management agents that handle bidirectional data flow with an Oracle enterprise backend, run remote commands, and provide status to the management console. For example, mobile admins could monitor multiple networks of mobile devices, upgrade their software remotely, and even destroy the local database on a compromised device. DMS supports either Oracle Berkeley DB or SQLite for device-local storage, and runs on a wide variety of mobile platforms. The schema for the device-local database is pretty simple – it contains the name of the application that’s installed on the device as well as details such as product name, version number, time of last access, etc. Each mobile user has an account on the monitoring system.  DMS supports authentication via the Oracle database authentication mechanisms or, alternately, via an external authentication server such as Oracle Identity Management. DMS also provides the option of encrypting the data on disk as well as while it is being synchronized. Whenever a device connects with DMS, it sends the list of all local application changes to the server, and the server updates the central repository with this information.  Synchronization can be triggered on demand, whenever there’s a change on the device (e.g. a new application installed or an existing application removed), or via a rule-based schedule (e.g. every Saturday). Synchronization is very fast and efficient, since only the changes are propagated.  This includes resume capability: should synchronization be interrupted for any reason, the next synchronization will resume where the previous one left off. If the device should be lost or stolen, DMS has the capability to remove the applications and/or data from the device. This ability to control access to sensitive data and applications is critical in the corporate environment.
    The central repository also allows the IT manager to track the kinds of applications that mobile users use and recommend patches and upgrades, while still allowing the mobile user full control over what applications s/he downloads and uses on the device.  This is useful since most devices are used for corporate as well as personal information. In certain restricted-use scenarios, the IT manager can also control whether a certain application can be installed on a mobile device.  Should an unapproved application be installed, it can easily be removed the next time the device connects with the central server. Oracle Database Mobile Server provides a simple, effective, highly secure, and scalable solution for managing the data and applications for the mobile workforce.

    Read the article
