Search Results

Search found 30270 results on 1211 pages for 'database diagramming'.

Page 3/1211 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • How to Automate your Database Documentation

    - by Jonathan Hickford
    In my previous post, “Automating Deployments with SQL Compare command line”, I looked at how teams can automate the deployment and post-deployment validation of SQL Server databases using the command line versions of Red Gate tools. In this post I’m looking at another use for the command line tools, namely using them to generate up-to-date documentation with every database change.
    There are many reasons why up-to-date documentation is valuable. For example, when somebody new has to work on or administer a database for the first time, or when a new database comes into service, having database documentation reduces the risk of making incorrect decisions when making changes. Documentation is also very useful to business intelligence analysts when writing reports, for example in SSRS. There are a couple of great examples on this site discussing why up-to-date documentation is valuable: Database Documentation – Lands of Trolls: Why and How? and Database Documentation Using SQL Doc. The short answer is that it can save you time and reduce risk when you need that most!
    SQL Doc is a fast, simple tool that automatically generates database documentation. It can create documents as HTML, Word or PDF files. The documentation contains information about object definitions and dependencies, along with any other information you want to associate with each object. The SQL Doc GUI, which is included in Red Gate’s SQL Developer Bundle and SQL Toolbelt, allows you to add additional notes to objects, and customise which objects are shown in the docs. These settings can be saved as a .sqldoc project file. The SQL Doc command line can use this project file to automatically update the documentation every time the database is changed, ensuring that the documentation is always up to date.
    The simplest way to keep documentation up to date is probably to use a scheduled task to run a script every day. However, if you have a source controlled database, or are using a Continuous Integration (CI) server or a build server, it may make more sense to use that instead. If you’re using SQL Source Control or SSDT Database Projects to help version control your database, you can automatically update the documentation after each change is made to the source control repository that contains your database. To get this automation in place, you can use the functionality of a Continuous Integration (CI) server, which can trigger commands to run when a source control repository has changed. A CI server will also capture and save the documentation that is created as an artifact, so you can always find the exact documentation for a specific version of the database. This forms an always up-to-date data dictionary. If you don’t already have a CI server in place there are several you can use, such as the free open source Jenkins or the free starter editions of TeamCity. I won’t cover setting these up in this article, but there is information about using CI servers for automating database tasks on the Red Gate Database Delivery webpage. You may be interested in Red Gate’s SQL CI utility (part of the SQL Automation Pack), which is an easy way to update a database with the latest changes from source control.
    The PowerShell example below shows how to create the documentation from a database. That database might be your integration database or a shared development database that is always up to date with the latest changes.
$serverName = "server\instance" $databaseName = "databaseName" # If you want to document multiple databases use a comma separated list $userName = "username" $password = "password" # Path to SQLDoc.exe $SQLDocPath = "C:\Program Files (x86)\Red Gate\SQL Doc 3\SQLDoc.exe" $arguments = @( "/server:$($serverName)", "/database:$($databaseName)", "/username:$($userName)", "/password:$($password)", "/filetype:html", "/outputfolder:.", # "/project:$args[0]", # If you already have a .sqldoc project file you can pass it as an argument to this script. Values in the project will be overridden with any options set on the command line "/name:$databaseName Report", "/copyrightauthor:$([Environment]::UserName)" ) write-host $arguments & $SQLDocPath $arguments There are several options you can set on the command line to vary how your documentation is created. For example, you can document multiple databases or exclude certain types of objects. In the example above, we set the name of the report to match the database name, and use the current Windows user as the documentation author. For more examples of how you can customise the report from the command line please see the SQL Doc command line documentation If you already have a .sqldoc project file, or wish to further customise the report by including or excluding specific objects, you can use this project on the command line. Any settings you specify on the command line will override the defaults in the project. For details of what you can customise in the project please see the SQL Doc project documentation. In the example above, the line to use a project is commented out, but you can uncomment this line and then pass a path to a .sqldoc project file as an argument to this script.  Conclusion Keeping documentation about your databases up to date is very easy to set up using SQL Doc and PowerShell. By using a CI server to run this process you can trigger the documentation to be run on every change to a source controlled database, and keep historic documentation available. If you are considering more advanced database automation, e.g. database unit testing, change script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have any questions.

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried, for performance (this is in the trunk, not in Release V1.0). The main concept behind it is to create a View Model class which has only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to improve performance even further when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/.
    Setting up a view
    Let's say you have an entity that stores lots of data about a game result, for example:
    GameScore entity
    public class GameScore : IRapidEntity
    {
        public Guid Id { get; set; }
        public string GamerId { get; set; }
        public string Name { get; set; }
        public Double Score { get; set; }
        public Byte[] ThumbnailAvatar { get; set; }
        public DateTime DateAdded { get; set; }
    }
    On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties.
    GameScoreView
    public class GameScoreView : IRapidView
    {
        public Guid Id { get; set; }
        public Double Score { get; set; }
        public DateTime DateAdded { get; set; }
    }
    Now that you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax.
    View Setup
    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
    }
    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property, this is done automatically; a view model id will always be the same as its corresponding entity.
    Adding Filters
    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days; you could add a filter like the following:
    Add single filter
    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
            .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
    }
    If you wanted to further limit the data, you could also say only scores above 100:
    Add multiple filters
    public MainPage()
    {
        RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
            .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
            .AddFilter(x => x.Score > 100);
    }
    Querying the view model
    So the important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the query method directly or perform further querying on the result if required.
    Querying the View
    public void DisplayScores()
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        List<GameScoreView> scores = repository.Query<GameScoreView>();

        // display logic
    }
    Further Filtering
    public void TodaysScores()
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

        // display logic
    }
    Retrieving the actual entity
    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information: you can take the Id populated in the returned GameScoreView and use it directly on the repository to retrieve the full entity.
    Get Full Entity
    public void GetFullEntity(Guid gameScoreViewId)
    {
        RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
        GameScore fullEntity = repository.GetById(gameScoreViewId);

        // display logic
    }
    Synchronising the View
    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, you could run this at each application start up.
    Synchronise the view
    public void MyUpgradeTasks()
    {
        RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
    }
    It’s worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.
    Summary
    I really hope you like this feature; it will be great for performance and I believe it supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days, I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.

    Read the article

  • database----database normalization

    - by runeveryday
    Someone told me the following table doesn't satisfy second normal form (2NF), but I don't know why. I am a newbie at database design; I have read some tutorials covering 3NF, but I can't understand 2NF and 3NF well. I hope someone can explain it to me. Thank you.
    +--------+--------+--------+
    |   pk   |   pk   |  row   |
    +--------+--------+--------+
    |   A    |   B    |   C    |
    +--------+--------+--------+
    |   A    |   D    |   C    |
    +--------+--------+--------+
    |   A    |   E    |   C    |
    +--------+--------+--------+
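    For illustration only (this is not part of the original question): a minimal sketch of the partial dependency and one possible 2NF decomposition, assuming the non-key column depends only on the first key column, exactly as the sample rows suggest. The table and column names are just the placeholders used above.
    -- Not in 2NF: C depends only on A, which is just part of the composite key (A, B).
    CREATE TABLE original_table (
        a VARCHAR(10) NOT NULL,
        b VARCHAR(10) NOT NULL,
        c VARCHAR(10) NOT NULL,
        PRIMARY KEY (a, b)
    );

    -- 2NF decomposition: the attribute that depends only on A moves to its own table.
    CREATE TABLE a_detail (
        a VARCHAR(10) NOT NULL PRIMARY KEY,
        c VARCHAR(10) NOT NULL
    );

    CREATE TABLE a_b (
        a VARCHAR(10) NOT NULL REFERENCES a_detail (a),
        b VARCHAR(10) NOT NULL,
        PRIMARY KEY (a, b)
    );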

    Read the article

  • SQL SERVER – T-SQL Script to Take Database Offline – Take Database Online

    - by pinaldave
    Blog reader Joyesh Mitra recently left a comment on one of my very old posts, SQL SERVER – 2005 Take Off Line or Detach Database, which I had written focusing on taking the database offline. However, I did not include how to bring the offline database back online in that post. The reason I did not write it was that I thought it was a very simple script that almost everyone knows. However, it seems that what I considered simple is not so obvious to everyone. We all have different expertise and we all try to learn new things, so I see no reason not to write about the script to take the database online.
    -- Create Test DB
    CREATE DATABASE [myDB]
    GO
    -- Take the Database Offline
    ALTER DATABASE [myDB] SET OFFLINE WITH ROLLBACK IMMEDIATE
    GO
    -- Take the Database Online
    ALTER DATABASE [myDB] SET ONLINE
    GO
    -- Clean up
    DROP DATABASE [myDB]
    GO
    Joyesh, let me know if this answers your question.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • April download ranking (Database edition)

    - by rika.tokumichi
    (The original post is in Japanese; only the details below survived extraction.) This is the monthly download ranking from the OTN Japan staff, this time for the Database category, covering the period April 1 – April 30:
    1: Oracle Database 11g Release 2 (Download)
    2: Oracle Database 10g Express Edition (Download)
    3: Oracle SQL Developer 2.1 (2.1.0.63.73) (Download)
    4: Oracle Database 11g Release 1 (Download)
    5: Oracle Database 10g Release 2 (Download)
    Oracle Database 11g Release 2 for Windows took the top spot this time. Related links: the "11g R2 on Windows" installation and operation guide (PDF), and an Oracle Database 11g R2 for Windows seminar scheduled for June 15.

    Read the article

  • Instant database snapshot

    - by raj
    My product uses an Oracle 9 database as its backend. Every week a new release of the product is launched, which needs to fire some DML and DDL queries against the database. I usually test the product release in a dummy database before applying it to the main database: I create a database dump using the exp command, import it into the dummy database using imp, and then test the product in the dummy database and check for errors. This exp and imp cycle takes about 3 hours to complete. Is there any alternative, such as an instant snapshot of the live database (one that is independent of the live database)? Or is there an option to keep the dummy database in sync with the original database at all times? This could be done by making the product fire its DML and DDL queries against both databases, but that would be a HUGE performance problem. How can I overcome this?

    Read the article

  • SQL SERVER – guest User and MSDB Database – Enable guest User on MSDB Database

    - by pinaldave
    I have written a few articles recently on the subject of the guest account. Here’s a quick list of these articles: SQL SERVER – Disable Guest Account – Serious Security Issue SQL SERVER – Force Removing User from Database – Fix: Error: Could not drop login ‘test’ as the user is currently logged in. SQL SERVER – Detecting guest User Permissions – guest User Access Status One piece of advice I gave in all three blog posts was: disable the guest user in user-created databases. Additionally, I mentioned that one should leave the guest account enabled in the MSDB database. I got many questions asking if there is any specific reason why this should be kept enabled, questions like, “Why does the MSDB database need the guest user?” Honestly, I did not know that the concept of the guest user would create so much interest among readers. So now let’s turn this blog post into a question-and-answer format.
    Q: What will happen if the guest user is disabled in the MSDB database?
    A: Lots of bad things will happen. Error 916 - Logins can connect to this instance of SQL Server but they do not have specific permissions in a database to receive the permissions of the guest user.
    Q: How can I determine if the guest user is enabled or disabled for any specific database?
    A: There are many ways to do this. Make sure that you run each of these methods in the context of the database. For example, for the msdb database you can run the following code:
    USE msdb;
    SELECT name, permission_name, state_desc
    FROM sys.database_principals dp
    INNER JOIN sys.server_permissions sp ON dp.principal_id = sp.grantee_principal_id
    WHERE name = 'guest' AND permission_name = 'CONNECT'
    There are many other methods to detect the guest user status. Read them here: Detecting guest User Permissions – guest User Access Status
    Q: What is the default status of the guest user account in a database?
    A: Enabled in master, TempDb, and MSDB. Disabled in the model database.
    Q: Why is the default status of the guest user disabled in the model database?
    A: It is not recommended to enable the guest user in user databases, as it can introduce a serious security threat. It can seriously damage the database if configured incorrectly. Read more here: Disable Guest Account – Serious Security Issue
    Q: How do I disable the guest user?
    A: REVOKE CONNECT FROM guest
    Q: How do I enable the guest user?
    A: GRANT CONNECT TO guest
    Did I miss any critical question in the list? Please leave your question as a comment and I will add it to this list. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • The Debut of Oracle Database Firewall at RSA 2011

    - by Troy Kitch
    We're very proud of the coverage and headlines Oracle Database Firewall made this past week during RSA Conference 2011 in San Francisco. In case you missed our previous post, we announced the availability of this latest addition to the Oracle Defense-in-Depth database security solutions. The announcement was picked up by many publications including eWeek, CRN, InformationWeek and more. Here is just some of the press on this very important security solution:
    "It's rare to find a new product category these days, but I think a new product from Oracle fills the bill. In the crowded enterprise security field, that's saying something." Enterprise System Journal: A New Approach to Database Security By James E. Powell
    "Databases and the content they store are among the most valuable IT assets - and the most targeted by hackers. In an effort to help secure databases, Oracle today is launching the new Oracle Database Firewall as an approach to defend databases against SQL injection and other database attacks." Database Journal: Oracle Debuts Database Firewall (also appeared in InternetNews.com) By Sean Michael Kerner
    "Oracle Database Firewall understands SQL-statement formats, and can be configured to blacklist and whitelist traffic based on source. When it detects suspicious statements within SQL traffic -- ones that might indicate SQL injection attacks, for example -- it can replace them with neutral statements that will keep the session running without allowing potentially harmful traffic through." Network World: Oracle Database Firewall defuses SQL injection attacks By Tim Green
    "The firewall uses "SQL grammar analysis" to prevent SQL injection attacks and other attempts to grab information. The Oracle Database Firewall features white and black lists policies, exceptions and rules that mark the time of day, IP address, application and user." ZDNet: RSA Roundup: Oracle Database Firewall By Larry Dignan
    "The database giant announced Oracle Database Firewall on Feb. 14 at the RSA Conference in San Francisco. The firewall application establishes a "defensive perimeter" around databases by monitoring and enforcing normal application behavior in real-time, the company said." eWEEK: Oracle Database Firewall Delivers Vendor-Agnostic Security By Fahmida Y. Rashid

    Read the article

  • Questions and considerations to ask client for designing a database

    - by Julia
    Hi guys! As the title says, I would like to hear your advice on the most important questions to consider and to ask end users before designing a database for their application. We are building a database-oriented app, with special attention to database security (access control, encryption, integrity, backups). The database will also keep some personal information about people, which is considered sensitive under law, so security must be good. I have worked on school projects with databases, but this is my first time working "in the real world", where database security has real implications. I have found some advice and questions to ask on the internet, but here I always get the best answers. All help appreciated! Thank you!

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry’s most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services.
    Spatial Features
    The geospatial data features of the Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. The Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence.
    Network Data Model Graph Features
    The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capability such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis.
    RDF Semantic Graph Features
    RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL.
    Video: Oracle Spatial and Graph Overview
    Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle installer (OUI). The Oracle Universal Installer (OUI) is used to install Oracle Database software. OUI is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases.
    For installing Oracle Database 12c you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check whether Spatial is available in your database using "select comp_id, version, status, comp_name from dba_registry where comp_id='SDO';"
    One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filter optimizations for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements.
    Usage
    To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, do either of the following:
    Enter the following statement from a suitably privileged account:
    ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE;
    Add the following to the database initialization file (xxxinit.ora):
    SPATIAL_VECTOR_ACCELERATION = TRUE;
    To set the value for the current session, enter the following statement from a suitably privileged account:
    ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE;
    Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)

    Read the article

  • How could RDBMSes be considered a fad?

    - by StuperUser
    Completing my Computing A-level in 2003, getting a degree in Computing in 2007, and learning my trade in a company with a lot of SQL usage, I was brought up on the idea of relational databases being used for storage. So, despite being relatively new to development, I was taken aback to read a comment (on Is LinqPad site quote "Tired of querying in antiquated SQL?" accurate?) that said: [Some devs] despise [SQL] and think that it and RDBMS are a fad Obviously, a competent dev will use the right tool for the right job and won't create a relational database when e.g. a flat file or another solution for storage is appropriate, but RDBMSes are useful in a massive number of circumstances, so how could they be considered a fad?

    Read the article

  • Single Table Inheritance (Database Inheritance design options) pros and cons and in which case it is used

    - by Yosef
    Hi, today I studied two database inheritance design approaches: 1. Single Table Inheritance 2. Class Table Inheritance. In my student opinion, Single Table Inheritance makes the database smaller than the other approaches because it uses only one table. But I read that the preferred approach is Class Table Inheritance, according to Bill Karwin. My question is: what are the pros and cons of Single Table Inheritance, and in which cases is it used? Thanks, Yosef
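    For readers unfamiliar with the two patterns, a minimal sketch in SQL may help (the Vehicle/Car/Truck names are illustrative only and are not from the question):
    -- Single Table Inheritance: one table holds every subtype; columns that do not
    -- apply to a given subtype stay NULL, and a discriminator column records the type.
    CREATE TABLE vehicle_sti (
        id           INT PRIMARY KEY,
        vehicle_type VARCHAR(10) NOT NULL,  -- 'CAR' or 'TRUCK' (discriminator)
        door_count   INT NULL,              -- used only by cars
        payload_kg   INT NULL               -- used only by trucks
    );

    -- Class Table Inheritance: a shared base table plus one table per subtype,
    -- joined on the shared primary key.
    CREATE TABLE vehicle (
        id INT PRIMARY KEY
    );

    CREATE TABLE car (
        id         INT PRIMARY KEY REFERENCES vehicle (id),
        door_count INT NOT NULL
    );

    CREATE TABLE truck (
        id         INT PRIMARY KEY REFERENCES vehicle (id),
        payload_kg INT NOT NULL
    );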

    Read the article

  • What is a database file system?

    - by Ravi
    I have very little idea of what a database file system is. Can somebody here explain what a database file system actually is, and what its applications are? How is it different from a conventional file system? How can I build one?

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, This question is related to the schema that can be found in one of my other questions here. Basically in my database I store users, locations, sensors amongst other things. All of these things are editable in the system by users, and deletable. However - when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change. There are also non-editable items in the database, such as "readings". They are more of a log really. Readings are logged against sensors, because it is the reading for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes for a location or sensor were at the time of the reading. Basically I should be able to reconstruct the data for any point in time.
    Now, I've done this before and got it working well by adding the following columns to each editable table:
    valid_from
    valid_to
    edited_by
    If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency. I can possibly avoid triggers by using the extension to the PostgreSQL database. This provides a column type called "period" which allows you to store a period of time between two dates, and then allows you to use CHECK constraints to prevent overlapping periods. That might be an answer. I am wondering though if there is another way.
    I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current" - i.e. only bother to check constraints on records where the valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this?
    PS - the title also mentions auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
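    As a concrete illustration of the non-overlap idea, here is a minimal sketch using modern PostgreSQL range types and an exclusion constraint - a different mechanism from the "period" extension mentioned above, but it gives the same guarantee; the table and column names are invented for the example:
    -- btree_gist lets plain equality (sensor_id) take part in a GiST exclusion
    -- constraint alongside the range type.
    CREATE EXTENSION IF NOT EXISTS btree_gist;

    CREATE TABLE sensor_history (
        sensor_id   integer   NOT NULL,
        upper_limit numeric   NOT NULL,
        lower_limit numeric   NOT NULL,
        edited_by   integer   NOT NULL,
        validity    tstzrange NOT NULL,
        -- no two history rows for the same sensor may have overlapping validity periods
        EXCLUDE USING gist (sensor_id WITH =, validity WITH &&)
    );

    -- The "current" row is the one whose validity range has no upper bound.
    SELECT *
    FROM sensor_history
    WHERE sensor_id = 42
      AND upper_inf(validity);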

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to Shrinking Database. I wrote about why Shrinking Database is not good. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server SQL SERVER – What the Business Says Is Not What the Business Wants I received many comments on Why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example.
    Create a test database
    Create two tables and populate with data
    Check the size of both the tables
    Size of database is very low
    Check the Fragmentation of one table
    Fragmentation will be very low
    Truncate another table
    Check the size of the table
    Check the fragmentation of the one table
    Fragmentation will be very low
    SHRINK Database
    Check the size of the table
    Check the fragmentation of the one table
    Fragmentation will be very HIGH
    REBUILD index on one table
    Check the size of the table
    Size of database is very HIGH
    Check the fragmentation of the one table
    Fragmentation will be very low
    Here is the script for the same.
    USE MASTER
    GO
    CREATE DATABASE ShrinkIsBed
    GO
    USE ShrinkIsBed
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Create FirstTable
    CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Create Clustered Index on ID
    CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable
    ( [ID] ASC ) ON [PRIMARY]
    GO
    -- Create SecondTable
    CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Create Clustered Index on ID
    CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable
    ( [ID] ASC ) ON [PRIMARY]
    GO
    -- Insert One Hundred Thousand Records
    INSERT INTO FirstTable (ID,FirstName,LastName,City)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
    'Bob',
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
         ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Insert One Hundred Thousand Records
    INSERT INTO SecondTable (ID,FirstName,LastName,City)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
    'Bob',
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
         WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
         ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and Fragmentation.
    You can clearly see that after TRUNCATE, the size of the database is not reduced and is still the same as before the TRUNCATE operation. After the Shrinking database operation, we were able to reduce the size of the database. If you notice the fragmentation, it is considerably high.
    The major problem with the Shrink operation is that it increases fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index on the database. Let us rebuild the index and observe fragmentation and database size.
    -- Rebuild Index on FirstTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size increases to way more than the original. Before rebuilding, the size of the database was 5 MB, and after rebuilding, it is around 20 MB. A regular index rebuild is done in the same user database where the index is placed, and this usually increases the size of the database. Look at the irony of Shrinking the database.
    One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which causes the size of the database to increase way beyond the original size of the database (before shrinking). So, by Shrinking, one usually does not gain what one was looking for. Rebuilding the index is not the best suggestion either, as that will make the database grow again. I have always remembered the excellent post from Paul Randal regarding why Shrinking the database is bad. I suggest everyone read that for accuracy and interesting conversation. Let us run the following script where we Shrink the database and REORGANIZE.
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Shrink the Database
    DBCC SHRINKDATABASE (ShrinkIsBed);
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Rebuild Index on FirstTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    You can see that REORGANIZE does not increase the size of the database or remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation using the demo. Read the blog post of Paul Randal. The following script will clean up the database:
    -- Clean up
    USE MASTER
    GO
    ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    DROP DATABASE ShrinkIsBed
    GO
    There are a few valid cases for Shrinking the database as well, but they are not covered in this blog post. We will cover that area some other time in the future. Additionally, one can rebuild the index in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you Shrinking your database? Well, when are you going to stop Shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Very large database, very small portion most being retrieved in real time

    - by ming yeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size. My memory buffer is 8GB. Most of my data is rarely retrieved, or is mainly retrieved by backend processes. I would very much prefer to keep it around because some features require it. Some of the data (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure that the latter is always kept in memory? (There is more than enough space for these.) More info: We are on Ruby on Rails. The database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.

    Read the article

  • Database change management tools?

    - by zodeus
    We are currently in the process of solidifying a database change management process. We run MySQL 5 on RedHat 5. I have selected LiquiBase as the tool because it's open source and allows us to expand its functionality later if needed. It also seems to be one of the few free projects that are still active. Has anyone here had any experience using LiquiBase or other DB versioning tools? Company background: We are a SaaS company providing a 24/7 hosted application. There are dozens of instances running different versions of the same database, and we need a way to manage the deploy process as it's starting to get out of control. The databases have hundreds of tables and we typically do releases once every 3 months.
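    For readers who have not seen the tool, a minimal sketch of what a Liquibase changelog can look like, here using Liquibase's SQL-formatted changelog style (the table and author names are made up, and the exact options available depend on the Liquibase version in use):
    --liquibase formatted sql

    --changeset jdoe:1
    CREATE TABLE customer (
        id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(100) NOT NULL
    );
    --rollback DROP TABLE customer;

    --changeset jdoe:2
    ALTER TABLE customer ADD COLUMN email VARCHAR(255) NULL;
    --rollback ALTER TABLE customer DROP COLUMN email;
    Each changeset is recorded by Liquibase once applied, so running the same changelog against instances on different database versions only applies the changesets they are missing.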

    Read the article

  • best way to store 1:1 user relationships in relational database

    - by aharon
    What is the best way to store user relationships, e.g. friendships, that must be bidirectional (you're my friend, thus I'm your friend) in a relational database, e.g. MySQL? I can think of two ways:
    1. Every time a user friends another user, I'd add two rows to the database: row A consisting of the user id of the initiating user followed by the UID of the accepting user in the next column, and row B would be the reverse.
    2. You'd only add one row, UID(initiating user) followed by UID(accepting user), and then just search through both columns when trying to figure out whether user 1 is a friend of user 2.
    Surely there is something better?
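    One common variation on the single-row approach (a sketch only; the table and column names are invented, and note that older MySQL versions parse but do not enforce CHECK constraints) is to store each friendship once, with the smaller user id always in the first column so a pair can never appear twice in either order:
    CREATE TABLE friendship (
        user_a INT NOT NULL,
        user_b INT NOT NULL,
        PRIMARY KEY (user_a, user_b),
        -- canonical ordering: the lower id always goes in user_a,
        -- so (1, 2) and (2, 1) can never both exist
        CHECK (user_a < user_b)
    );

    -- "Is user 1 a friend of user 2?" becomes a single lookup after ordering the pair:
    SELECT 1
    FROM friendship
    WHERE user_a = LEAST(1, 2)
      AND user_b = GREATEST(1, 2);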

    Read the article

  • Need Database Help - A second opinion - thank you

    - by user287745
    I have designed an ER model, normalized it to BCNF, and converted it into tables using VS08. My problem is that I do not know where to get the normalized database checked to see that it has no normalization mistakes and cannot be normalized further. Please do not give answers such as "ask a friend" or "ask your professor" - I do not have these resources available, and it is very hard and time-consuming to wait for the relevant person to be available. So are there any sites where I can ask other designers - people like you - to check the normalized database? Please note: it should be free. Sorry about my accept rate; I was not aware of accepting answers. All help is appreciated. Thank you.

    Read the article

  • Database Firewall

    - by ???02
    (The original post is in Japanese and only partially recoverable; the points below are what survives.) The post introduces Oracle Database Firewall, released in 2011, as a first line of defense against SQL injection attacks that reach the database through web applications over HTTP - a class of attack the post notes is consistently among the most reported web vulnerabilities (it refers to IPA figures) and affects both B-to-B and B-to-C sites. Oracle Database Firewall sits in front of the database and inspects the SQL statements sent to it; based on white-list and black-list policies it passes or blocks each statement. Its analysis is based on SQL grammar (the SQL standard, ISO/IEC 9075) rather than simple pattern matching, and it can protect Oracle Database, SQL Server, DB2 and Sybase. It can be deployed in-line in blocking mode or, via a SPAN (mirror) port, in monitoring-only mode, and the post contrasts this database-side approach with IDS/IPS products and with WAF (Web Application Firewall) products that protect the web tier. Posted by Oracle Direct.

    Read the article

  • Update mysql database with arpwatch textfile database

    - by bVector
    I'm looking to keep arpwatch entries in a MySQL database to cross-reference with other information I'm storing based on MAC addresses. I've manually imported the arpwatch database into my MySQL database, but being a novice with databases I'm not sure of the best way to continually update the database with new entries without creating duplicates. None of the fields can be unique on its own, as even the time is duplicated frequently. I'm not interested in the actual arpwatch events like flip flop or new station, just the mac/ip/time pairings. Would a simple bash (or SQL) script do the trick? Would it be possible to make the MAC address plus the time a composite key of some sort? The database is called utility and the table is arpwatch, with columns mac, ip, time. A separate table named 'hosts', with columns mac, ip, type, hostname, location, notes, has mac as the primary key. This table will correlate the different IP addresses that a MAC had over time using the arpwatch table. The initial import was done with MySQL Workbench using INSERT INTO commands with creative search and replace on the text file.
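    A minimal sketch of the composite-key idea mentioned above (assumptions: the existing utility.arpwatch table has no primary key yet, and the example MAC/IP/time values are invented):
    -- Make (mac, time) the primary key so the same pairing can only be stored once.
    ALTER TABLE utility.arpwatch
        ADD PRIMARY KEY (mac, time);

    -- Re-running an import then silently skips rows that are already present:
    INSERT IGNORE INTO utility.arpwatch (mac, ip, time)
    VALUES ('aa:bb:cc:dd:ee:ff', '192.168.1.10', '2012-05-01 10:15:00');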

    Read the article

  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain the data history I always read the values with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 if the data is edited (so that I can keep the previous data). But the problem is, if I update it to 0, I can't insert the same key into primaryKeyColumn again, since the column is validated as a primary key. How can I manage this kind of validation? What mistake did I make in this design?
    primaryKeyColumn ValueColumn StatusColumn
    ---------------- ----------- ------------
    2                Name1       1
    3                Name2       1
    4                Name3       0
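    One common fix, shown here as a sketch only (the column names follow the question, the rest is assumed, and the filtered-index syntax is SQL Server's), is to give the table a surrogate key and enforce uniqueness of the business key only for the current row:
    CREATE TABLE ValueHistory (
        HistoryId    INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key, always unique
        BusinessKey  INT          NOT NULL,          -- the old "primaryKeyColumn" value
        ValueColumn  VARCHAR(100) NOT NULL,
        StatusColumn BIT          NOT NULL           -- 1 = current, 0 = historical
    );

    -- Only one *current* row per business key; historical rows may repeat the key freely.
    CREATE UNIQUE INDEX UX_ValueHistory_Current
        ON ValueHistory (BusinessKey)
        WHERE StatusColumn = 1;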

    Read the article

  • Opinions on sensor / reading / alert database design

    - by Mark
    I've asked a few questions lately regarding database design, probably too many ;-) However I believe I'm slowly getting to the heart of the matter with my design and am slowly boiling it down. I'm still wrestling with a couple of decisions regarding how "alerts" are stored in the database. In this system, an alert is an entity that must be acknowledged, acted upon, etc.
    Initially I related readings to alerts like this (very cut down):
    [Location]
        LocationId
    [Sensor]
        SensorId
        LocationId
        UpperLimitValue
        LowerLimitValue
    [SensorReading]
        SensorReadingId
        Value
        Status
        Timestamp
    [SensorAlert]
        SensorAlertId
    [SensorAlertReading]
        SensorAlertId
        SensorReadingId
    The last table associates readings with the alert, because it is the readings that dictate whether the sensor is in alert or not. The problem with this design is that it allows readings from many sensors to be associated with a single alert - whereas each alert is for a single sensor only and should only have readings for that sensor associated with it (should I be bothered that the DB allows this though?). I thought, to simplify things, why even bother with the SensorAlertReading table? Instead I could do this:
    [Location]
        LocationId
    [Sensor]
        SensorId
        LocationId
    [SensorReading]
        SensorReadingId
        SensorId
        Value
        Status
        Timestamp
    [SensorAlert]
        SensorAlertId
        SensorId
        Timestamp
    [SensorAlertEnd]
        SensorAlertId
        Timestamp
    Basically I'm not associating readings with the alert now - instead I just know that an alert was active between a start and end time for a particular sensor, and if I want to look up the readings for that alert I can do so. Obviously the downside is I no longer have any constraint stopping me deleting readings that occurred during the alert, but I'm not sure that the constraint is necessary. Now, looking in from the outside as a developer / DBA, would that make you want to be sick or does it seem reasonable? Is there perhaps another way of doing this that I may be missing? Thanks.
    EDIT: Here's another idea - it works in a different way. It stores each sensor state change, going from normal to alert, in a table, and then readings are simply associated with a particular state. This seems to solve all the problems - what d'ya think? (The only thing I'm not sure about is calling the table "SensorState"; I can't help thinking there's a better name (maybe SensorReadingGroup?)):
    [Location]
        LocationId
    [Sensor]
        SensorId
        LocationId
    [SensorState]
        SensorStateId
        SensorId
        Timestamp
        Status
        IsInAlert
    [SensorReading]
        SensorReadingId
        SensorStateId
        Value
        Timestamp
    There must be an elegant solution to this!
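    For what it's worth, a rough DDL sketch of that last layout (the names follow the post; the column types and the T-SQL flavour are assumptions):
    CREATE TABLE Location (
        LocationId INT PRIMARY KEY
    );

    CREATE TABLE Sensor (
        SensorId   INT PRIMARY KEY,
        LocationId INT NOT NULL REFERENCES Location (LocationId)
    );

    -- Each row marks a change of state for a sensor; readings hang off the state
    -- they were taken in, so alert periods and their readings stay linked.
    CREATE TABLE SensorState (
        SensorStateId INT PRIMARY KEY,
        SensorId      INT      NOT NULL REFERENCES Sensor (SensorId),
        [Timestamp]   DATETIME NOT NULL,
        Status        INT      NOT NULL,
        IsInAlert     BIT      NOT NULL
    );

    CREATE TABLE SensorReading (
        SensorReadingId INT PRIMARY KEY,
        SensorStateId   INT      NOT NULL REFERENCES SensorState (SensorStateId),
        Value           FLOAT    NOT NULL,
        [Timestamp]     DATETIME NOT NULL
    );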

    Read the article

  • Oracle Database Security Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision, and IRM nicely complements the overall Oracle set of security solutions. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database, to documents on the desktop! With the recent release of Oracle IRM 11g I was tasked with configuring demonstration and evaluation environments, and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments I thought I would put together a simple video demonstrating how Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following, which highlights these database security options.
    Transparent Data Encryption protecting the communication from the Oracle IRM server to the Database server. Encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end.
    Transparent Data Encryption protecting the Oracle IRM database schema. Encryption is used to provide confidentiality of the IRM data whilst it resides at rest in the database table space.
    Database Vault is used to ensure only the Oracle IRM service has access to query and update the information that resides in the database. This is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database whilst retaining their ability to administrate the database. The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company financial data or documents pertaining to a merger or acquisition.

    Read the article
