Search Results

Search found 144001 results on 5761 pages for 'sql server data tools'.


  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 university community members that their personal information had been breached when a laptop was stolen. To ensure this wouldn't happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments.

    Organizations often copy production data that contains sensitive information into non-production environments so they can test applications and systems using "real world" information. Data in non-production environments has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. As in production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand.

    Cornell's applications and databases help carry out the administrative and academic mission of the university. The university runs Oracle PeopleSoft Campus Solutions, which includes highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university.

    Several years ago, Cornell experienced a data breach when an employee's laptop was stolen. Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and providing free credit checks and identity theft protection services, all of which cost money and took time away from other projects.

    To avoid this issue in the future, Cornell came up with several options, one of which was to sanitize the testing and training environments. "The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market space and figure out which one would work best for us. In the end we chose Oracle's solution based on its architecture and its functionality." – Tony Damiani, Database Administration and Business Intelligence, Cornell University

    The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, while still supporting all the previous activities in the non-production environments. Tony concludes, "What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. Another key point to keep in mind is that there was zero impact on the production environment; Oracle Data Masking works in non-production environments only. Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal."

    With Oracle Data Masking, organizations like Cornell can:

    - Make application data securely available in non-production environments
    - Prevent application developers and testers from seeing production data
    - Use an extensible template library and policies for data masking automation
    - Gain the benefits of referential integrity so that applications continue to work

    Listen to the podcast to hear the complete interview. Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.
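    The underlying idea, irreversibly replacing identifying values before data leaves production, can be illustrated with a generic T-SQL sketch. This is not Oracle Data Masking itself (which automates this with mask templates and keeps masked values consistent so referential integrity survives); the database, table, and column names here are hypothetical:

        -- Generic masking sketch run against a hypothetical non-production copy.
        -- Real masking tools keep masked values consistent across tables so
        -- foreign keys and joins continue to work.
        UPDATE NonProd.dbo.Persons
        SET    FirstName = 'First' + CAST(PersonID AS VARCHAR(10)),
               LastName  = 'Last'  + CAST(PersonID AS VARCHAR(10)),
               SSN       = RIGHT('000000000'
                           + CAST(ABS(CHECKSUM(NEWID()) % 1000000000) AS VARCHAR(9)), 9);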

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are a lack of knowledge and a lack of communication within a company. The lack of knowledge in regards to data models or data modeling can take many forms.

    Company Culture Knowledge
    Whether documented or undocumented, the existing business rules of a company can affect how data is modeled. For example, if a company allows only one assigned person per customer to manipulate that customer's record, then a data model that includes an associative table joining customers and employees would be unneeded, because such a table would allow multiple employees to handle a customer through a many-to-many relationship between Customers and Employees (see the DDL sketch at the end of this article).

    Technical Knowledge
    Depending on the data modeler's proficiency in modeling data, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that models are developed from the perspectives of the system, the company, and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of the model if designers are not familiar with the tools or the tools are too complex for the designer to use.

    Existing System Knowledge
    In order for a data modeler to model data for an existing system so that new changes can be applied, they need to know at least the basic concepts of the system so that they can work within it. This promotes reusability of data and prevents duplication of data.

    Project Knowledge
    This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data before a client has formalized any requirements. Usually when this happens I have to make several iterations to a model, and the client still does not know exactly what they want. Additional issues can arise when certain stakeholders of a project are not consulted before the design or after the project is over, because this can cause misunderstandings and confusion for the end user, and the project may not solve the problem it was intended to solve.

    One common thread between each type of knowledge is that these barriers can all be avoided through good communication. For example, if a modeler is new to a company, they should ask longer-tenured employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not familiar with a specific data modeling tool, they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the current project, but also in future projects they do for the company. Additionally, if a project is not clearly defined before a data modeler is assigned to it, it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear, so that the data model that is created accurately aligns with the project.
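    As promised above, here is a minimal DDL sketch of the customer-assignment rule; the table and column names are hypothetical. A single foreign key on Customers enforces "one assigned employee per customer", so no Customer/Employee join table, and therefore no many-to-many relationship, can creep into the model:

        -- Hypothetical schema: the business rule is captured structurally.
        CREATE TABLE dbo.Employees (
            EmployeeID INT NOT NULL PRIMARY KEY
        );

        CREATE TABLE dbo.Customers (
            CustomerID         INT NOT NULL PRIMARY KEY,
            AssignedEmployeeID INT NOT NULL
                REFERENCES dbo.Employees (EmployeeID)  -- exactly one handler
        );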

    Read the article

  • SQL Error Log Message: 'ACCESS_METHODS_SCAN_RANGE_GENERATOR'

    - by Chirag
    One of our SQL Server 2005 Enterprise servers running on Windows Server 2003 became unresponsive, and on reboot I saw these errors logged from before it went down:

        Date     17/09/2009 10:16:22
        Log      SQL Server (Archive #1 - 17/09/2009 10:17:00)
        Source   spid111
        Message  Timeout occurred while waiting for latch: class 'ACCESS_METHODS_SCAN_RANGE_GENERATOR', id 000000002A761760, type 4, Task 0x000000000E609EB8 : 14, waittime 600, flags 0x1a, owning task 0x000000000E6129B8. Continuing to wait.

    Does anyone know what this error points to or relates to? Many thanks in advance.
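    This latch class is associated with parallel scans handing out page ranges to worker threads. As a starting point, one hedged diagnostic (SQL Server 2005 and later; counters reset when the instance restarts) is to check whether waits on this class keep accumulating:

        -- Cumulative waits on the latch class named in the error message.
        SELECT latch_class,
               waiting_requests_count,
               wait_time_ms,
               max_wait_time_ms
        FROM   sys.dm_os_latch_stats
        WHERE  latch_class = 'ACCESS_METHODS_SCAN_RANGE_GENERATOR';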

    Read the article

  • Alter Stored Procedure in SQL Replication

    - by Refracted Paladin
    How do I properly ALTER a stored procedure in a SQL Server 2005 merge replication setup? I just need to add a column. I have already added it to the table successfully, and I now need to add it to a stored procedure. I did so, but now synchronization fails with the following error:

        Insert Error: Column name or number of supplied values does not match table definition.
        (Source: MSSQLServer, Error number: 213)
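    Error 213 usually means an INSERT that omits its column list is now out of step with the widened table. A hedged sketch, with hypothetical names, of the fragile pattern and the fix:

        -- Inside the stored procedure (names hypothetical):
        DECLARE @ID INT, @Name NVARCHAR(100), @CreatedOn DATETIME;

        -- Fragile: relies on the table having exactly these columns, in this
        -- order; breaks with error 213 as soon as a column is added.
        INSERT INTO dbo.Customers VALUES (@ID, @Name, @CreatedOn);

        -- Robust: names the columns, so a newly added column simply takes
        -- its default (or NULL) during synchronization.
        INSERT INTO dbo.Customers (CustomerID, Name, CreatedOn)
        VALUES (@ID, @Name, @CreatedOn);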

    Read the article

  • SQL commands when SQL only exists on network

    - by chama
    I'm trying to find a list of all SQL Servers on the network using the osql -L command at the command prompt. This command only works when SQL Server is installed on the computer I'm working on. Is there any way to run this command when SQL Server is not installed on that particular computer but is installed somewhere on the network? Thank you! EDIT: I'm writing a program in Java, so the easy enumerations you can do in the .NET framework won't work for me.

    Read the article

  • Setting up a SQL server outside the domain

    - by user41013
    Hi, I'm very new to this. I've set up a machine, "SQLBOX", that is on the network but not joined to the domain. I have installed an instance of SQL Server 2008 and am trying to connect to it from another machine on the network, but I get "Cannot connect to SQLBOX". From SQLBOX I can connect to SQL Servers on the domain, but not vice versa. Any help would be greatly appreciated. Sorry if the description isn't great. Thanks

    Read the article

  • Cancel/Kill SQL Server BACKUP in SUSPENDED state (WRITELOG)

    - by Sebastian Seifert
    I have a SQL Server 2008 R2 Express instance on which backups are made by running sqlmaint from the Windows Task Scheduler. Several backups ran into an error and got stuck in the SUSPENDED state with wait type WRITELOG. How can I get these backup processes to stop so they release their resources? Simply killing the processes doesn't work: a killed process stays in KILLED/ROLLBACK for a long time, and this didn't change for several hours.
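    One way to see what the stuck sessions are doing, and to ask SQL Server how far a kill has progressed, is sketched below; the session id is a placeholder. Note that a session waiting on WRITELOG may not finish rolling back until the underlying log I/O problem clears:

        -- Find backup sessions and their wait state.
        SELECT session_id, status, command, wait_type, percent_complete
        FROM   sys.dm_exec_requests
        WHERE  command LIKE 'BACKUP%';

        KILL 64;                   -- session id is a placeholder
        KILL 64 WITH STATUSONLY;   -- reports estimated rollback completion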

    Read the article

  • Transaction Log filling up on SQL database set to Simple

    - by Will
    We have a database on a SQL Server 2005 instance that is set to the Simple recovery model. The log file has an initial size of 1 MB and is set to grow by 10% when it needs to. We keep running into an issue where the transaction log fills up and we need to shrink it. What could cause the transaction log to fill up when the database is set to Simple recovery and unrestricted growth is allowed?
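    Even in the Simple recovery model the log must retain everything back to the oldest active transaction, so a long-running or orphaned transaction (or replication, mirroring, or an in-flight backup) can keep it growing. SQL Server will report what is currently blocking truncation; the database name below is a placeholder:

        -- Why can't the log be truncated right now? Typical answers in
        -- SIMPLE recovery are ACTIVE_TRANSACTION or REPLICATION.
        SELECT name, log_reuse_wait_desc
        FROM   sys.databases
        WHERE  name = 'YourDatabase';

        -- The oldest open transaction in the current database, if any.
        DBCC OPENTRAN;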

    Read the article

  • Schema compare with MS Data Tools in VS2008

    - by rdkleine
    When performing a schema compare with db_owner rights on the target database, I get the following error:

        The user does not have permission to perform this action.

    Using SQL Server Profiler, I figured out that this error occurs when a query is executed against the system view [sys].[dm_database_encryption_keys]. Since I am specifically ignoring all object types except Tables, one would presume the schema compare doesn't need access to the database encryption keys. Also note: http://social.msdn.microsoft.com/Forums/en-US/vstsdb/thread/c11a5f8a-b9cc-454f-ba77-e1c69141d64b/

    One solution would be to GRANT VIEW SERVER STATE to the database user, but in my case I'm not hosting the database services and won't get rights to the server state. I also tried excluding the DatabaseEncryptionKey element in the compare file:

        <PropertyElementName>
          <Name>Microsoft.Data.Schema.Sql.SchemaModel.SqlServer.ISql100DatabaseEncryptionKey</Name>
          <Value>ExcludedType</Value>
        </PropertyElementName>

    Does anyone have a workaround for this?
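    For anyone who does control the server, the grant the compare needs is a server-level permission, since dm_database_encryption_keys is a server-scoped dynamic management view; the login name below is a placeholder:

        -- Run as a server administrator; grants read access to server-scoped
        -- dynamic management views such as sys.dm_database_encryption_keys.
        USE master;
        GRANT VIEW SERVER STATE TO [YourLogin];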

    Read the article

  • How to control order of assignment for new identity column in SQL Server?

    - by alpav
    I have a table with a CreateDate datetime field, default(getdate()), and no identity column. I would like to add an identity(1,1) field that reflects the same order as the existing CreateDate values (ordering by either column would give the same results). How can I do that? I guess that if I create a clustered index on the CreateDate field and then add the identity column it will work (though I'm not sure that's guaranteed); is there a good/better way? I am interested in SQL Server 2005, but I guess the answer will be the same for SQL Server 2008 and SQL Server 2000.
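    The approach mentioned in the question, clustering on CreateDate before adding the identity, is a commonly used trick: when the column is added, identity values are handed out as the table is processed in clustered-index order. That ordering is widely observed but, as far as I know, not documented as guaranteed, so treat this sketch (hypothetical names) accordingly:

        -- Cluster on the date first so the physical order matches CreateDate.
        CREATE CLUSTERED INDEX IX_MyTable_CreateDate
            ON dbo.MyTable (CreateDate);

        -- Adding the identity now assigns values in (observed) clustered order.
        ALTER TABLE dbo.MyTable
            ADD RowID INT IDENTITY(1,1) NOT NULL;

        -- Sanity check: both orderings should return the same sequence.
        SELECT RowID, CreateDate
        FROM   dbo.MyTable
        ORDER BY CreateDate, RowID;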

    Read the article

  • Where should the partitioning column go in the primary key on SQL Server?

    - by Bialecki
    Using SQL Server 2005 and 2008. I've got a potentially very large table (potentially hundreds of millions of rows) consisting of the following columns:

        CREATE TABLE (
            date  SMALLDATETIME,
            id    BIGINT,
            value FLOAT
        )

    The table is being partitioned on the date column in daily partitions. The question then is: should the primary key be (date, id) or (id, date)? I can imagine that SQL Server is smart enough to know that it's already partitioning on date, and therefore, if I'm always querying for whole chunks of days, I can have date second in the primary key. Or I can imagine that SQL Server will need that column first in the primary key to get the benefit of partitioning. Can anyone lend some insight into which way the table should be keyed?
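    One relevant rule: a unique index (including the primary key) on a partitioned table must contain the partitioning column somewhere in its key, but it does not have to be first; partition elimination comes from the partition scheme, not from key position. A hedged sketch with assumed names and boundary values:

        -- Daily partitions on [date]; the PK must include [date] but may
        -- lead with it or not, depending on the dominant query pattern.
        CREATE PARTITION FUNCTION pfDaily (SMALLDATETIME)
            AS RANGE RIGHT FOR VALUES ('20100101', '20100102', '20100103');

        CREATE PARTITION SCHEME psDaily
            AS PARTITION pfDaily ALL TO ([PRIMARY]);

        CREATE TABLE dbo.Measurements (
            [date] SMALLDATETIME NOT NULL,
            id     BIGINT        NOT NULL,
            value  FLOAT         NULL,
            CONSTRAINT PK_Measurements
                PRIMARY KEY CLUSTERED ([date], id)
        ) ON psDaily ([date]);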

    Read the article

  • Does anyone have a backup strategy for SQL Azure databases?

    - by Pete
    I'm using SQL Azure on a project and it works great. The problem is that the usual backup features do not exist. I have exported the database a couple of times using SQLAzureMW (http://sqlazuremw.codeplex.com/), but this tool is now choking when trying to download the database data with bcp. In any case, it's not as nice a solution as SQL Server backups. Is anyone aware of a commercial or open source tool, or other technique, for making reliable backups of SQL Azure databases? This is really a showstopper.
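    One technique available natively at the time of writing is the SQL Azure database copy, which produces a transactionally consistent copy that can serve as a point-in-time backup; server and database names below are placeholders:

        -- Run against the destination server's master database.
        CREATE DATABASE MyDb_Copy_20100615
            AS COPY OF myserver.MyDb;

        -- Copy progress can be monitored while it runs.
        SELECT * FROM sys.dm_database_copies;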

    Read the article

  • What is the cost in bytes for the overhead of a sql_variant column in SQL Server?

    - by Elan
    I have a table which contains many columns of the float data type with 15-digit precision. Each column consumes 8 bytes of storage. Most of the time the data does not require this amount of precision and could be stored as the real data type. In many cases the value can be 0, in which case I could get away with storing a single byte. My goal here is to optimize storage space requirements, which is an issue I am facing because of the SQL Server Express 4 GB database size limit. If byte-sized (tinyint), real, and float values are stored in a sql_variant column, there is obviously some overhead involved in storing these values. What is the cost of this overhead? I would then need to evaluate whether I would actually end up with significant space savings (or not) by switching to sql_variant columns. Thanks, Elan
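    A sql_variant value carries its base value plus metadata (the base type, and for some types precision/scale or collation), so the per-row cost depends on the base type stored. Rather than relying on a fixed figure, one can measure empirically; a disposable sketch:

        -- Compare on-disk size of the same values stored as FLOAT vs sql_variant.
        CREATE TABLE dbo.TestFloat   (v FLOAT);
        CREATE TABLE dbo.TestVariant (v SQL_VARIANT);

        INSERT INTO dbo.TestFloat
        SELECT TOP (100000) CAST(0.5 AS FLOAT)
        FROM sys.all_objects a CROSS JOIN sys.all_objects b;

        INSERT INTO dbo.TestVariant
        SELECT TOP (100000) CAST(0.5 AS FLOAT)   -- stored as FLOAT inside the variant
        FROM sys.all_objects a CROSS JOIN sys.all_objects b;

        EXEC sp_spaceused 'dbo.TestFloat';
        EXEC sp_spaceused 'dbo.TestVariant';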

    Read the article

  • What is the Sql Server equivalent for Oracle's DBMS_ASSERT?

    - by dotNetYum
    DBMS_ASSERT is one of the keys to preventing SQL injection attacks in Oracle. I tried a cursory search: is there any SQL Server 2005/2008 equivalent for this functionality? I am looking for a specific implementation that has a counterpart for each of the respective Oracle package members of DBMS_ASSERT:

    - NOOP
    - SIMPLE_SQL_NAME
    - QUALIFIED_SQL_NAME
    - SCHEMA_NAME

    I know the best practices for preventing injection, bind variables being one of them. But in this question I am specifically looking for a good way to sanitize input in scenarios where bind variables were not used. Do you have any specific implementations? Is there a library that is actually a SQL Server port of the Oracle package?
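    I'm not aware of a full port, but the closest built-in counterpart to the name-asserting members is QUOTENAME, which brackets an identifier and doubles any embedded closing delimiter (and returns NULL for input longer than 128 characters, itself a useful sanity check):

        DECLARE @name SYSNAME;
        SET @name = N'Orders]; DROP TABLE Users; --';

        -- Brackets the identifier; the embedded ] is doubled, neutralizing it.
        SELECT QUOTENAME(@name);

        -- Quote as a string literal instead (embedded quotes doubled).
        SELECT QUOTENAME(@name, '''');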

    Read the article

  • OData.org updated, gives clues about future SQL Azure enhancements

    - by jamiet
    The OData website at http://www.odata.org/home has been updated today to provide a much more engaging page than the previous sterile attempt. Moreover, it's now chock-full of information about the progress of OData, including a blog, a list of products that produce OData feeds, some live OData feeds that you can hit up today, a list of OData-compliant clients, and an FAQ. Most interestingly, SQL Azure is listed as a producer of OData feeds:

        If you have a SQL Azure database account you can easily expose an OData feed through a simple configuration portal. You can select authenticated or anonymous access and expose different OData views according to permissions granted to the specified SQL Azure database user.

    A preview of this upcoming SQL Azure service is available at https://www.sqlazurelabs.com/OData.aspx; it enables you to select one of your existing SQL Azure databases and, with a few clicks, turn it into an OData feed. It looks as though SQL Azure will soon be added to the stable of products that natively support OData, good news indeed.

    @Jamiet

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the first transaction log backup in the same folder. On Tuesday, before I take the second transaction log backup, I move the first one from folder1 into folder2, then take the second transaction log backup and put it in folder1. The same thing happens on Wednesday, Thursday, and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2.

    My question is: when SQL Server is about to take, say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second, and third) so that the new backup only includes the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups?

    I am asking because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup. Example: say you have a full backup, and then you take a transaction log backup that is, say, 200 MB. If you immediately take another transaction log backup, this second backup should be considerably smaller than the first, because no or almost no transactions occurred between the two backups, right? At least, that's what I've been assuming. What happens in my case is that the second backup is pretty much the same size as the first, and I am wondering if the reason is that I moved the first transaction log backup to a different folder, so SQL Server now thinks all I have is a full backup, gathers all the transactions that happened since the full backup, and puts them in the second transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
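    For what it's worth, the log backup chain is tracked by log sequence numbers recorded in the database itself (plus history in msdb), not by scanning the files left in a folder, so moving older .trn files has no effect on what the next backup contains. A minimal illustration with placeholder names and paths:

        BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb_full.bak';  -- Sunday
        BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_mon.trn';        -- Monday: log since the full
        BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_tue.trn';        -- Tuesday: log since Monday,
                                                                   -- even if MyDb_mon.trn was moved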

    Read the article

  • Linked SQL Server to Oracle

    - by Jerid
    I get this error when executing a SQL Server query against a linked server to Oracle 9i, but only when connected via Remote Desktop. When running the query from my desktop, there is no problem. Is this a tnsnames issue?

        Server: Msg 7399, Level 16, State 1, Line 1
        OLE DB provider 'MSDAORA' reported an error.
        [OLE/DB provider returned message: Unspecified error]
        [OLE/DB provider returned message: ORA-01041: internal error. hostdef extension doesn't exist]
        OLE DB error trace [OLE/DB Provider 'MSDAORA' ICommandText::Execute returned 0x80004005: ].

    Read the article

  • How to Protect Sensitive (HIPAA) SQL Server Standard Data and Log Files

    - by Quesi
    I am dealing with electronic personal health information (ePHI or PHI), and HIPAA regulations require that only authorized users can access ePHI. Column-level encryption may be of value for some of the data, but I need the ability to do LIKE searches on some of the PHI fields, such as name. Transparent Data Encryption (TDE) is a feature of SQL Server 2008 for encrypting database and log files. As I understand it, this prevents someone who gains access to the MDF, LDF, or backup files from being able to do anything with them, because they are encrypted. TDE is available only in the Enterprise and Developer editions of SQL Server, and Enterprise is cost-prohibitive for my particular scenario. How can I get similar protection on SQL Server Standard? Is there a way to encrypt the database and backup files (is there a third-party tool)? Or, just as good, is there a way to prevent the files from being used if the disk were attached to another machine (Linux or Windows)? Administrator access to the files from the same machine is fine; I just want to prevent any issues if the disk were removed and hooked up to another machine. What are some of the solutions for this that are out there?
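    One option on Standard edition is cell-level encryption (available since SQL Server 2005), sketched below with placeholder names; the trade-off is exactly the one raised above, since encrypted columns cannot be searched with LIKE:

        -- One-time key setup.
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase';
        CREATE CERTIFICATE PhiCert WITH SUBJECT = 'PHI column encryption';
        CREATE SYMMETRIC KEY PhiKey
            WITH ALGORITHM = AES_256
            ENCRYPTION BY CERTIFICATE PhiCert;

        -- Encrypt a hypothetical varbinary column holding SSNs.
        OPEN SYMMETRIC KEY PhiKey DECRYPTION BY CERTIFICATE PhiCert;

        UPDATE dbo.Patients
        SET    SSN_enc = ENCRYPTBYKEY(KEY_GUID('PhiKey'), SSN);

        -- Decrypt on read.
        SELECT CONVERT(VARCHAR(11), DECRYPTBYKEY(SSN_enc)) AS SSN
        FROM   dbo.Patients;

        CLOSE SYMMETRIC KEY PhiKey;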

    Read the article

  • Accessing SQL Server over Workgroup

    - by Brian
    Hello, I have two machines:

    A: Windows Server 2008
    B: Windows 7

    They are in the same workgroup, and I have enabled network discovery. On the server, SQL Server is installed with a SQL Server account (mixed mode is enabled). I'm trying to connect to this server from the Win 7 machine in the workgroup, but no go. Do I have to reference the server by something other than the machine name? How do I successfully establish that connection? I am a n00b to this type of thing... Thanks.

    Read the article
