Search Results

Search found 11778 results on 472 pages for 'online sources'.


  • SSIS and StreamInsight Working Together.

    I have been thinking a lot recently about what it would be like to have StreamInsight and SSIS working together. Well, the CAT team have produced a paper on some of our options here. Here are some of my thoughts.

    There is of course a slight mismatch in their types of usage. StreamInsight is an event stream processing engine capable of operating on new data in the sub-second timeframe. The engine allows you to do real-time analytics and take decisions on events that have potentially only just happened. SSIS, on the other hand, is a batch processing engine. In general I do not like having to invoke the same package more than once every 90 seconds or so, as it can start to get expensive. Usually when doing batch processing we have an hour or longer of grace before we have to move data from A -> B.

    StreamInsight operates on streams of data. Before anyone mentions it: yes, I know StreamInsight is equally adept at using the IEnumerable interface, but I would argue that live streaming and real-time analytics is a primary goal of the product. SSIS does not have an "Always On" button.

    I particularly do not like the idea of embedding StreamInsight inside SSIS as a transform. It means StreamInsight becomes a batch processing engine, because it can only operate when the SSIS package is running, and SSIS is in charge of when that happens. If I am to have StreamInsight within SSIS then I prefer to have StreamInsight on the adapters. This way you can force the adapters to stay open and introduce events into your pipeline. SSIS has a much richer set of transforms out of the box than StreamInsight, and although "Always On" was not a design goal of SSIS, I have used it like this and it works just fine.

    SSIS being called from within StreamInsight: now that excites me. For a while now I have been thinking about what it would be like to decouple the Data Flow task from the SSIS package and expose it as something with which you can interact. Anything could instantiate this version of a DFT, as it would expose one or more input interfaces and one or more output interfaces. I can imagine that this would be a big hit when moving to "The Cloud" as well; I could see the Data Flow task being hosted in Azure AppFabric or some such layer. StreamInsight would be able to take advantage of this too. I am interested to see where this goes and will be pressing for more meat around the subject when I visit Redmond soon.

    Read the article

  • Class Design and Structure Online Web Store

    - by Phorce
    I hope I have asked this in the right forum. Basically, we're designing an online store and I am designing the class structure for ordering a product, and I want some clarification on what I have so far. A customer comes, selects their product, chooses the quantity, and selects 'Purchase' (I am using the Facade pattern, so subsystems execute when this action is performed). My class structure: Order, Product, Customer. There is no inheritance, more association: Order has Product, Customer has Order. Does this structure look OK? I've noticed that I don't handle the "Quantity" separately; I was just going to add this into the Product class, but do you think it should be a class of its own? Hope someone can help.
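
    One common way to resolve the quantity question is to introduce an order-line class that ties a Product to a quantity inside an Order. Below is a minimal Python sketch, purely for illustration; the OrderItem name and all of its fields are assumptions, not part of the original design.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Product:
            name: str
            price: float

        @dataclass
        class OrderItem:
            # Holding the quantity here, one line per product, is one common
            # answer to the "should quantity be a class of its own?" question.
            product: Product
            quantity: int

            def line_total(self) -> float:
                return self.product.price * self.quantity

        @dataclass
        class Order:
            items: List[OrderItem] = field(default_factory=list)

            def add(self, product: Product, quantity: int) -> None:
                self.items.append(OrderItem(product, quantity))

            def total(self) -> float:
                return sum(item.line_total() for item in self.items)

        @dataclass
        class Customer:
            name: str
            orders: List[Order] = field(default_factory=list)

    This keeps quantity out of Product, where it would wrongly become a property of the product itself, and everything remains association rather than inheritance.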

    Read the article

  • Unable to add/remove programs in Ubuntu 12.04 LTS?

    - by Manish Kumar Chauhan
    My problem is as follows: I am unable to add or remove any program using the Update Manager, the Synaptic Package Manager, or the terminal. Update Manager asks for a partial upgrade, and while updating the software-center 5.6.2 catalog there is no progress beyond the line "this may take a moment". Synaptic is unable to obtain an exclusive lock, and similarly I can't run the terminal command sudo apt-get update. If I try to break the lock using the command sudo fuser -cuk /var/lib/dpkg/lock; sudo rm -f /var/lib/dpkg/lock, it turns off my monitor display and I have to restart the whole system. Note: this whole trouble started when I found the Ubuntu Software Center missing after adding a repository, and I reinstalled it.

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures

    One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this is the use of user roles; these roles allow specific access to be granted based on what role a user plays in relation to the data that they are manipulating. Typical data access actions are defined by the CRUD principle:

    - Create new data
    - Read existing data
    - Update existing data
    - Delete existing data

    Based on the actions assigned to a role, users can manipulate data as they need to perform daily business operations. An example of this can be seen in a hospital, where doctors are assigned Create, Read, Update, and Delete access to their patients' prescriptions so that a doctor can prescribe and adjust any existing prescriptions as necessary. A nurse, however, will only have Read access on a patient's prescriptions so that they know what medicines to give to the patient. Notice that nurses do not have access to prescribe new prescriptions, or to update or delete existing ones, because only the patient's doctor has access to perform those actions. (A minimal code sketch of this idea appears at the end of this entry.)

    With user roles comes responsibility: companies need to constantly monitor data access to ensure that the proper roles have the most appropriate access levels and that users are not exposed to inappropriate data. This also protects against rogue employees gaining access to critical business information that could be destroyed, altered, or stolen. It is important that all data access is monitored because of this threat.

    Security Governance

    Current data governance laws regarding security:

    - Health Insurance Portability and Accountability Act (HIPAA)
    - Sarbanes-Oxley Act
    - Database Breach Notification Act

    The US Department of Health and Human Services defines HIPAA as a Privacy Rule. This legislation protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronically protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety.

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance, and it set specific deadlines for compliance. Sarbanes-Oxley is not a set of standard business rules and does not specify how a company should retain its records; rather, it outlines which pieces of data are to be stored as well as the storage duration.

    The Database Breach Notification Act requires companies, in the event of a data breach containing personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that while this act is only California legislation, it affects "any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information," regardless of where the compromised data is located. Any business that maintains even limited interactions with California residents will find itself subject to the Act's provisions.
    Security Policies

    All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas that it serves. These types of policies need to be mandated by a company's Security Officer. In smaller companies, these policies need to come from executives, directors, and owners.
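
    As a minimal illustration of the role-based CRUD access described above, here is a hedged Python sketch. The role names and permission assignments mirror the hospital example, but the data structures themselves are assumptions invented for this illustration, not any particular product's API.

        from enum import Flag, auto

        class Crud(Flag):
            # The four CRUD actions described above.
            CREATE = auto()
            READ = auto()
            UPDATE = auto()
            DELETE = auto()

        # Hypothetical role assignments for the hospital example: doctors get
        # full CRUD on prescriptions, nurses get Read only.
        ROLE_PERMISSIONS = {
            "doctor": Crud.CREATE | Crud.READ | Crud.UPDATE | Crud.DELETE,
            "nurse": Crud.READ,
        }

        def is_allowed(role: str, action: Crud) -> bool:
            # Deny by default when the role is unknown.
            return action in ROLE_PERMISSIONS.get(role, Crud(0))

        assert is_allowed("doctor", Crud.DELETE)
        assert not is_allowed("nurse", Crud.CREATE)

    A real system would load these assignments from its security catalog and, in line with the monitoring point above, audit every access check.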

    Read the article

  • The Best Text to Speech (TTS) Software Programs and Online Tools

    - by Lori Kaufman
    Text to Speech (TTS) software allows you to have text read aloud to you. This is useful for struggling readers and for writers when editing and revising their work. You can also convert eBooks to audiobooks so you can listen to them on long drives. We've posted some websites here where you can find some good TTS software programs and online tools that are free or at least have free versions available.

    Read the article

  • Is online freelance work viable by American standards?

    - by JoelFan
    I've always been curious about trying out online freelance sites... it would be nice to work from home, feel independent, get to choose what I want to work on, get to work on different technologies, lose the PHB, etc. However, I never really gave them a chance, because I'm used to American rates and assumed that I would be competing with people from India, Russia, China, etc. who would severely undercut me, making it unviable for me. Am I correct in this assumption, or should I give it a shot? What kind of hourly rate would I be able to expect for short-term programming work?

    Read the article

  • Extending SSIS with custom Data Flow components (Presentation)

    Download the slides and sample code from my Extending SSIS with custom Data Flow components presentation, first presented at the SQLBits II (The SQL) Community Conference.

    Abstract

    Get some real-world insights into developing data flow components for SSIS. This starts with an introduction to the data flow pipeline engine, and explains the real differences between adapters and the three sub-types of transformation. Understanding how the different types of component behave and manage data is key to writing components of your own, and probably should be required knowledge for anyone building packages at all. Using sample code throughout, I will show you how to write components, as well as highlighting best practice and lessons learned. The sample code includes fully working example projects for source, destination and transformation components.

    Presentation & Samples (358KB): Extending SSIS with custom Data Flow components.zip

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are lack of knowledge and lack of communication within a company. The lack of knowledge in regards to data models or data modeling can take many forms.

    Company Culture Knowledge

    Whether documented or undocumented, a company's existing business rules can affect how data is modeled. For example, if a company only allows one assigned person per customer to manipulate that customer's record, then a data model that includes an associative table joining customers and employees would be unneeded, because it would allow for a many-to-many relationship between customers and employees. (A small sketch of this rule appears at the end of this entry.)

    Technical Knowledge

    Depending on the data modeler's proficiency, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that models are developed from multiple perspectives of a system, the company, and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of the model if designers are not familiar with the tools, or the tools are too complex for the designer to use.

    Existing System Knowledge

    In order for a data modeler to model data for an existing system, so that new changes can be applied to it, they need to know at least the basic concepts of the system so that they can work within it. This promotes reusability of data and prevents the chance of duplicating data.

    Project Knowledge

    This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data prior to a client formalizing any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want. Additional issues can arise when certain stakeholders of a project are not consulted prior to the design, or after the project is over, because it can cause misunderstandings and confusion for the end user, as well as possibly not solving the original problem the project was intended to solve.

    One common thread between each type of knowledge is that these barriers can all be avoided through good communication. For example, if a modeler is new to a company, they should ask longer-tenured employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool, they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the current project, but also in future projects they do for the company. Additionally, if a project is not clearly defined prior to a data modeler being assigned to it, then it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear, so that the data model they create is accurately aligned with the project.
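
    To make the Company Culture example above concrete, here is a minimal sketch in Python rather than a modeling notation; the class and field names are invented for illustration only.

        from dataclasses import dataclass

        @dataclass
        class Employee:
            name: str

        @dataclass
        class Customer:
            name: str
            # The business rule "one assigned person per customer" is captured
            # by a single reference; an associative Customer<->Employee table
            # would wrongly permit a many-to-many relationship.
            assigned_employee: Employee

        alice = Employee("Alice")
        acme = Customer("Acme Corp", assigned_employee=alice)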

    Read the article

  • Online medical image processing grand challenges

    - by taltos
    Hello! I moved my question from Stack Overflow here; I cherish the hope that I will be luckier. I'm currently working on my thesis, and I'm looking for one or more online medical image processing grand challenges. I already know this site, but I need a challenge with a microscopic image dataset (cells, chromosomes, bacteria, viruses, etc.) and a classification or recognition objective, like karyotyping. Maybe someone is working in this field, or their university has run the kind of challenge I'm looking for, and can help me. Thank you!

    Read the article

  • Where can I find programming work online?

    - by explorest
    I have set up an ideal, quiet, non-interrupting environment at home. I am extremely productive here. I don't want to leave my home, not my room, not even my couch. How/where do I find work online so that I don't have to travel to it? Kindly post about your own personal experiences. Have you done it full time from home? Where and how? I am outside the United States, in a third world country, so lower pay is not an issue. The issue is the work environment.

    Read the article

  • Review: Data Modeling 101

    I recently read "Data Modeling 101" by Scott W. Ambler, in which he gives an overview of fundamental data modeling skills. I think this article is excellent for anyone who is just starting to learn data modeling, or refreshing their skills. Scott defines data modeling as the act of exploring data-oriented structures. He goes on to explain how data models are actually used by defining three different types of models:

    - Conceptual data models
    - Logical data models (LDMs)
    - Physical data models (PDMs)

    He further expands on modeling by exploring common data modeling notations, because there are no industry standards for the practice of data modeling. Scott then explains how to actually model data by expanding on entities, attributes, identifiers, and relationships, which are the basic building blocks of data models. In addition, he discusses the value of normalization for reducing redundancy, and of denormalization for performance. Finally, he discusses ways in which developers and DBAs can become better data modelers through practice and by seeking guidance from more experienced data modelers.

    Read the article

  • What does it mean to treat data as an asset?

    What does it mean to treat data as an asset? When considering this concept, we must define what data is and how it can be considered an asset. Data can easily be defined as a collection of stored truths that are open to interpretation and manipulation. Expanding on this definition, data can be viewed as a set of captured facts, measurements, and ideas used to make decisions. Furthermore, InvestorsWords.com defines an asset as any item of economic value owned by an individual or corporation. Now let's apply this definition of an asset to our definition of data, and ask the following question: can facts, measurements, and ideas be items of economic value owned by an individual or corporation? The obvious answer is yes; data can be bought and sold like commodities, or analyzed to make smarter business decisions.

    We can look at the economic value of data in one of two ways. First, data can be sold as a commodity in the form of goods like eBooks, training, music, movies, and so on. Customers are willing to pay to gain access to this data for their consumption. This directly implies that there is economic value in data as a commodity, because customers see value in obtaining it. Secondly, data can be used to make smarter business decisions that allow companies to become more profitable and/or reduce their potential for risk in regards to how they operate. In the past I have worked at companies where we had to analyze previous sales activities in conjunction with current activities to determine how the company was performing for the quarter. In addition, trends can be formulated from existing data that allow companies to forecast, so that they can make strategic business decisions based on sound forecasted data.

    Companies that truly value their data are constantly trying to grow and upgrade their data and its supporting applications, because data is the lifeblood of a company. If we look at an eBook retailer, for example, imagine if they lost all of their data: they would in essence be forced out of business because they would have nothing to sell. In turn, if a company that was using data to facilitate better decision making lost all of its data, it could lose potential revenue and/or increase its losses by making important business decisions virtually in the dark, compared to when they were made on solid data.

    Read the article

  • Update Manager won't open (error related to pythonverbose)

    - by Mateus Machado Luna
    I'm having an issue with update-manager. Last night, my computer restarted suddenly during the update process. Now update-manager won't open, and it keeps appearing in the notifier with a message warning that an error occurred. The error is the same one displayed when I try to open it in the terminal:

        Error in sitecustomize; set PYTHONVERBOSE for traceback:
        EOFError: EOF read where not expected
        Traceback (most recent call last):
          File "/usr/bin/update-manager", line 26, in <module>
            from __future__ import print_function
        EOFError: EOF read where not expected

    I've already seen some questions here, but most of them are related to problems with PPAs and the sources.list file. This seems to be a bug in update-manager itself. I've already tried removing it and installing it again, but the problem persists. I also noticed another bug: my software center doesn't open either. The message for it is similar to the other one:

        Error in sitecustomize; set PYTHONVERBOSE for traceback:
        EOFError: EOF read where not expected
        Traceback (most recent call last):
          File "/usr/lib/command-not-found", line 5, in <module>
            from __future__ import absolute_import, print_function
        EOFError: EOF read where not expected

    For now I'm using apt-get update && upgrade for updating, and Synaptic for source management. But I would really like to fix this. Can anyone help? I'm on Ubuntu 12.10, Gnome-remix, 64-bit.

    Read the article

  • Online accounts advanced setting with Empathy (13.10)

    - by uruloke
    The new Online Accounts doesn't have the advanced settings that the Empathy accounts had. How do I change the Google server to connect to? I read this at https://wiki.gnome.org/Empathy/FAQ:

        "I can't connect to my Google Talk account. Your router is probably blocking DNS SRV requests. If possible you should try to fix it. If you can't, the easiest work around is to set "talk.google.com" in the "Server" field of the advanced section of the account."

    So I think this might fix my problem; or maybe all I need is an option to change the port it connects to. Also, does anyone know how to join IRC channels with Empathy? I have installed the plugin, but I don't know how to join a channel.

    Read the article

  • Online & Offline in Web Chat Application

    - by Mohammed Safeer
    I am stuck in the middle of developing a web chat application using PHP. I used Comet for the chat, and the technique of updating the database when someone logs out, thus displaying "offline" to the user on the other side. My problem is: if someone closes the browser without logging out, how does the user on the other side know the person has gone offline? How can I set online and offline icons in a PHP web chat application when someone closes the chat window without logging out? Would WebSockets in PHP solve this problem? All suggestions welcome.
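
    A common answer to this problem, independent of PHP or WebSockets, is a heartbeat: the client pings the server every few seconds, the server records a last-seen timestamp, and a user is shown as offline once that timestamp is older than some threshold. The sketch below illustrates the idea in Python rather than PHP; the 30-second threshold and the in-memory store are assumptions chosen for the example.

        import time

        HEARTBEAT_TIMEOUT = 30  # seconds without a ping before a user counts as offline

        last_seen = {}  # user id -> timestamp of the user's last heartbeat

        def heartbeat(user_id):
            # Called whenever the client pings (e.g. every 10 seconds via AJAX).
            last_seen[user_id] = time.time()

        def is_online(user_id):
            # A user who closes the browser simply stops pinging and drops to
            # offline after the timeout; no explicit logout is required.
            ts = last_seen.get(user_id)
            return ts is not None and (time.time() - ts) < HEARTBEAT_TIMEOUT

    In a real deployment the timestamps would live in the database or a shared cache, so any page can render the online/offline icon from the same data.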

    Read the article

  • Udacity: Teaching thousands of students to program online using App Engine

    Join Fred Sauer & Iein Valdez as they talk with Steve Huffman, founder of Reddit and Hipmunk, and Chris Chew, senior software engineer at Udacity. Steve will share his experience teaching a course on web development using App Engine at Udacity, and Chris will talk about his experience building Udacity itself using App Engine. Submit your questions for Steve and Chris to answer live on air.

    Read the article

  • Data Generator Source Adapter

    This component needs little explanation. It generates random integer (DT_I4) and string (DT_WSTR) data and places them in the pipeline. You specify how many columns of each you would like, and for any string columns you pass a fixed length value. You then need to specify how many rows in total you require to be generated. This component is used by us to do testing of the pipeline and components downstream. Previously we would have used a script component (as a source) to generate the rows, but found ourselves rewriting the code too often, so created this component.

    Screenshots: SQL Server 2005 Integration Services; SQL Server 2008/2012 Integration Services.

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Data Generator Source from the list.

    Downloads

    The Data Generator Source Adapter is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    - Data Generator Source Adapter for SQL Server 2005
    - Data Generator Source Adapter for SQL Server 2008
    - Data Generator Source Adapter for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 3.0.0.30 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
    - Version 2.0.0.29 - SQL Server 2008 February 2008 CTP. Includes support for upgrade of 2005 packages. Simplified user interface. (4 Mar 2008)
    - Version 2.0.0.27 - SQL Server 2008 November 2007 CTP. String columns will now use the default system code page. Previously string columns always used 1252. (15 Feb 2008)

    SQL Server 2005
    - Version 1.1.0.23 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. (12 Jun 2006)
    - Version 1.0.0.0 - SQL Server 2005 IDW 16 Sept CTP. Public release. (6 Oct 2005)
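
    To make the idea concrete, here is a rough Python analogue of what the component does; this is not the component's implementation, just an illustration of generating N rows of random integers and fixed-length strings.

        import random
        import string

        def generate_rows(row_count, int_columns, str_columns, str_length):
            """Yield rows of random 32-bit integers (like DT_I4) and fixed-length
            random strings (like DT_WSTR), mimicking the adapter's output."""
            for _ in range(row_count):
                ints = [random.randint(-2**31, 2**31 - 1) for _ in range(int_columns)]
                strs = ["".join(random.choices(string.ascii_letters, k=str_length))
                        for _ in range(str_columns)]
                yield ints + strs

        # Example: 5 rows with 2 integer columns and 1 string column of length 8.
        for row in generate_rows(5, 2, 1, 8):
            print(row)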

    Read the article

  • Connecting People, Processes, and Content: An Online Event

    - by Brian Dirking
    This morning we announced a new online event, "Transform Your Business by Connecting People, Processes, and Content." At this event you will learn how an integrated approach to business process management (BPM), portals, content management, and collaboration can help you make more accurate and timely decisions based on the collective knowledge across your organization. But more than that, this event will focus on how customers have been successful in transforming to a social enterprise. We've blogged about a few of them in the past few weeks: Balfour Beatty, New Look, Texas A&M. This event will give you an opportunity to learn about other customers and their successes, as well as an opportunity to:

    - Watch Oracle executives participate in a roundtable discussion on the state of the social enterprise
    - Hear industry experts discuss best practices and case studies of leveraging BPM, portals, and content management to transform and improve business processes
    - Engage the experts by having your questions answered in real time

    Register today and learn how Oracle Fusion Middleware provides the most complete, open, integrated, and best-of-breed solution in the industry for transforming your business.

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance

    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.

    Data Continuity Checklist:

    - Disaster recovery plan/policy
    - Backups
    - Redundancy
    - Trained staff

    Business Continuity Policies

    In order to protect data in case of an emergency, a company needs to put in place a disaster recovery plan, and policies that can be executed by IT staff, to ensure the continuity of existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during, and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or part of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policy that IT must maintain redundant hardware in case of hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy, to ensure that everyone involved is knowledgeable enough to execute the recovery plan.

    Business Continuity Procedures

    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow.

    Standard Business Continuity Procedures:

    - Back up data, and test the backups to ensure that they work (see the sketch below)
    - Hire knowledgeable and trainable staff
    - Offer training on new and existing systems
    - Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    - Maintain redundancy for all data and critical business functionality
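
    On the "test the backups" point, even a trivial automated check beats none. Here is a hedged Python sketch that verifies a backup copy against its source by comparing checksums; the SHA-256 choice and the example paths are assumptions for illustration.

        import hashlib
        from pathlib import Path

        def sha256_of(path):
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        def verify_backup(source_dir, backup_dir):
            """Return the relative paths of files whose backup copy is missing
            or whose contents differ from the source."""
            src, dst = Path(source_dir), Path(backup_dir)
            problems = []
            for f in src.rglob("*"):
                if f.is_file():
                    rel = f.relative_to(src)
                    copy = dst / rel
                    if not copy.exists() or sha256_of(f) != sha256_of(copy):
                        problems.append(str(rel))
            return problems

        # Hypothetical usage:
        # print(verify_backup("/data/live", "/mnt/backup/data"))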

    Read the article

  • Register now for the FREE Tech Days Online Conference January 20th

    - by Eric Nelson
    The perfect solution to the "January blues" is a good solid few hours learning about great technology. The 'Build an app for that' Online Conference is exactly that, featuring demo-rich sessions on building applications for the browser, Windows 7, and Windows Phone 7. There are three tracks letting you choose which sessions are most relevant to you, whether you're just considering client development with Silverlight or you've already got stuck into an advanced project. We'll also explore new form factors such as phone and slate, and how to develop touch-based applications. Finally, we'll cover the important subject of how to create beautifully designed user interfaces. Register now. Agenda:

    Read the article

  • How do I get the source code of packages installed through apt-get?

    - by dustyprogrammer
    I am assuming that all applications installed through apt-get are open source; for those that are available in that manner, where can I get the source code for these applications, and how do I update them? I have a couple of applications I use regularly that aren't being actively developed any longer, and I would like to add features. Where would I go to get the rights to update these applications? Mainly hellanzb, in my case. Please and thank you.

    Read the article

  • Oracle VM at the IOUG Virtualization SIG – Online Symposium

    - by Chris Kawalek
    Join the Oracle VM product managers and product experts for a day full of best practices and information on the latest product updates. A sampling of what you can expect:

    - Best practices from a customer's perspective on deployment of Oracle VM and Oracle RAC.
    - How to simplify and accelerate the onboarding of your applications to the cloud with Oracle Virtual Assemblies and Enterprise Manager 12c.
    - The latest how-to and demo of the DeployCluster tool on Oracle VM 3.

    Date: Tomorrow, November 7th, 10am CDT - 2:50pm CDT

    Register for this free online event today!

    Read the article

  • General Policies and Procedures for Maintaining the Value of Data Assets

    Here is a general list of policies and procedures for maintaining the value of data assets.

    Data Backup Policies and Procedures

    Backups are very important when dealing with data, because there is always the chance of losing data due to faulty hardware or user activity. This being said, a strategic backup system should be mandatory for all companies. In the real world, some companies that I have worked for do not really have a good data backup plan, and typically when companies take this kind of approach, the data is not really recoverable. Unfortunately, when companies do not regularly test their backup plans they get a false sense of security, because they think that they are covered. However, I can tell you from personal and professional experience that a backup plan/system is never fully implemented until it is regularly tested, prior to the time when it actually needs to be used.

    Disaster Recovery Plan

    Expanding on backup policies and procedures, a company also needs a disaster recovery plan in order to protect its data in case of a catastrophic disaster. Disaster recovery plans typically encompass how to restore all of a company's data and infrastructure back to an operational status, and most also include time estimates for how long each step of the plan should take to execute. It is important to note that disaster recovery plans, just like backup plans, are never fully implemented until they have been tested. They should be tested regularly, so that the business can be confident of losing no data, or minimal data, in a catastrophic disaster.

    Firewall Policies and Content Filters

    One way companies can protect their data is by using a firewall to separate their internal network from the outside. Firewalls allow network access to be enabled or disabled as data passes through, by applying various defined restrictions. Furthermore, firewalls can also be used to prevent access from the internal network to the outside by these same factors.

    Common firewall restrictions:

    - Destination/sender IP address
    - Destination/sender host names
    - Domain names
    - Network ports

    Companies may also want to restrict what their network users view on the internet through content filters. Content filters allow a company to track what web pages a person has accessed, and can restrict users' access based on established rules set up in the filter. This device and/or software can block access to domains or specific URLs based on a few factors.

    Common content filter criteria:

    - Known malicious sites
    - Specific page content
    - Page content theme

    Anti-Virus/Malware Policies

    Fortunately, most companies utilize antivirus programs on all computers and servers, for good reason: viruses have been known to corrupt or invalidate data, destroy data, and steal data. Anti-virus applications are a great way to prevent malicious applications from gaining access to a company's data. However, anti-virus programs must be constantly updated, because new viruses are always being created, and the anti-virus vendors need to distribute updates to their applications so that they can catch and remove them.

    Data Validation Policies and Procedures

    Data validation is very important to ensure that only accurate information is stored. The existence of invalid data can cause major problems when businesses attempt to use data for knowledge-based decisions and for performance reporting.

    Data Scrubbing Policies and Procedures

    Data scrubbing is valuable to companies in one of two ways. First, it can be used to clean data prior to analysis for report generation. Second, it allows companies to remove things like personally identifiable information from data prior to transmitting it between multiple environments, or when the information is sent to an external location. An example of this can be seen with medical records, in regards to HIPAA laws that prohibit the storage of specific personal and medical information. Additionally, I have professionally run into a scenario where the Canadian government does not allow any Canadian's personal information to be stored on a server not located in Canada.

    Encryption Practices

    The use of encryption is very valuable when a company needs to store any personal information. It allows users with the appropriate access levels to view or confirm the existence or accuracy of data within a system, by either decrypting the information or encrypting a piece of data and comparing it to the stored version. (A small sketch of this appears at the end of this entry.) Additionally, if for some unforeseen reason the data got into the wrong hands, they would have to decrypt it before they could even read it. Encryption adds an additional layer of protection around the data itself.

    Standard Normalization Practices

    The use of standard data normalization practices is very important when dealing with data, because it can prevent a lot of potential issues by eliminating unnecessary data duplication. Issues caused by data duplication include excess use of data storage, an increased chance of invalid data, and overuse of data processing.

    Network and Database Security/Access Policies

    Every company has some form of network/data access policy, even if that policy is to have none. These policies help secure data from being seen by inappropriate users, and prevent data from being updated or deleted by unauthorized users. In addition, without a good security policy there is a large potential for data to be corrupted by unassuming users, or even stolen.

    Data Storage Policies

    Data storage policies are very important, depending on how they are implemented, especially when a company is trying to use them in conjunction with other policies like data backups. I have worked at companies where all network user folders are constantly backed up, so if a user wanted to ensure the existence of a piece of data in the form of a file, they had to store that file in their network folder. Conversely, I have also worked in places where a user's entire profile is backed up whenever they log on or off of the network.

    Training Policies

    One of the biggest ways to prevent data loss, and to ensure that data will remain a company asset, is training. Properly training employees on how to work within systems that access data is crucial, and reduces the chances of invalidating data.
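
    As a small illustration of the verify-by-comparison idea in the Encryption Practices section above, here is a hedged Python sketch using a salted one-way hash; the PBKDF2 parameters and the example value are assumptions chosen for illustration, not a statement of any particular company's practice.

        import hashlib
        import hmac
        import os

        def protect(value, salt=None):
            # One-way transform: store the salt and digest, never the raw value.
            salt = salt if salt is not None else os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 100_000)
            return salt, digest

        def matches(candidate, salt, stored_digest):
            # Confirm the accuracy or existence of a value without decrypting
            # anything: re-derive the digest and compare in constant time.
            _, digest = protect(candidate, salt)
            return hmac.compare_digest(digest, stored_digest)

        salt, stored = protect("123-45-6789")  # e.g. a sensitive identifier
        assert matches("123-45-6789", salt, stored)
        assert not matches("000-00-0000", salt, stored)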

    Read the article

  • Sorting data in the SSIS Pipeline (Video)

    In this post I want to show a couple of ways to order the data that comes into the pipeline. A number of people have asked me about this, primarily because there are a number of ways to do it, but also because some components in the pipeline take sorted inputs. One of the methods I show is visually easy to understand, and the other is less visual but potentially more performant.

    Read the article

  • Brand New Oracle WebLogic 12c Online Launch Event, December 1st, 18:00 GMT

    - by swalker
    The brand new WebLogic 12c will be released on December 1st 2011. Please join Hasan Rizvi on December 1 as he unveils the next generation of the industry's #1 application server and cornerstone of Oracle's cloud application foundation: Oracle WebLogic Server 12c. Hear, with your fellow IT managers, architects, and developers, how the new release of Oracle WebLogic Server is:

    - Designed to help you seamlessly move into the public or private cloud with an open, standards-based platform
    - Built to drive higher value for your current infrastructure and significantly reduce development time and cost
    - Optimized to run your solutions for Java Platform, Enterprise Edition (Java EE); Oracle Fusion Middleware; and Oracle Fusion Applications
    - Enhanced with transformational platforms and technologies such as Java EE 6, Oracle's Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder

    Don't miss this online launch event on December 1st, 18:00 GMT. Register Now

    For regular information, become a member of the WebLogic Partner Community: please register at http://www.oracle.com/partners/goto/wls-emea

    Read the article
