Search Results

Search found 31483 results on 1260 pages for 'database migration'.


  • Server Migration Checklist II

    - by merrillaldrich
    Easy Breezy Login Audit for your Ol’ 2000 Server
    In the last post on this topic I put up the preparatory steps I’ve been using for server migrations. Here I am posting some code that has worked well for us to trace who and what is connecting to our older SQL Server 2000 machines. It’s a simple audit of login events, tracing the login name, host name, database, and last login time for connections to the server, and it gave us valuable insight into who was really using the machines and which databases might...(read more)

    Read the article

  • Database migration pattern for Java?

    - by Eno
    I'm working on some database migration code in Java. I'm also using a factory pattern so I can use different kinds of databases, and each kind of database I use implements a common interface. What I would like to do is have a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare it against a constant in my app to decide whether to migrate or not, and between which versions of the schema). To make this automatic, I was thinking the test should live inside (or be called from) the constructor. OK, fair enough, that's simple enough. My problem is that I don't want the test to run every single time I instantiate a database object (it runs a query, so having it run on every construction is not efficient). So maybe this should be a class static method? I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration test runs only once OR is super-efficient.
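    A minimal sketch of the version check such a migration routine might issue (the table name and version numbers are hypothetical, not from the original post; a static initializer or a once-guarded static method would run this a single time per process):

        -- one-row table recording the current schema version
        CREATE TABLE schema_version (version INT NOT NULL);
        INSERT INTO schema_version (version) VALUES (1);

        -- the query the one-time check issues:
        SELECT version FROM schema_version;

        -- if the value is below the constant compiled into the app,
        -- run the migration scripts for the intervening versions, then:
        UPDATE schema_version SET version = 2;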

    Read the article

  • Instant database snapshot

    - by raj
    My product uses an Oracle 9 database in its backend. Every week a new release of the product is launched, which fires some DML and DDL queries against the database. I usually test a product release in a dummy database before applying it to the main database. I create a database dump using the exp command, then import it into the dummy database using imp. Then I test the product in the dummy database and check if there are any errors. This exp and imp cycle takes about 3 hours to complete. Is there any alternative, such as an instant snapshot of the live database (which would be independent of the live one)? Or is there any option to keep the dummy database in sync with the original database at all times? This could be done by making the product fire its DML and DDL queries against both databases, but that would be a HUGE performance problem. How can I overcome this?

    Read the article

  • SQL SERVER – guest User and MSDB Database – Enable guest User on MSDB Database

    - by pinaldave
    I have written a few articles recently on the subject of the guest account. Here’s a quick list of these articles: SQL SERVER – Disable Guest Account – Serious Security Issue; SQL SERVER – Force Removing User from Database – Fix: Error: Could not drop login ‘test’ as the user is currently logged in; SQL SERVER – Detecting guest User Permissions – guest User Access Status. One piece of advice I gave in all three blog posts was: disable the guest user in user-created databases. Additionally, I mentioned that one should leave the guest account enabled in the msdb database. I got many questions asking if there is any specific reason why it should be kept enabled, questions like, “What is the reason that the msdb database needs the guest user?” Honestly, I did not know that the concept of the guest user would create so much interest in the readers. So now let’s turn this blog post into a questions-and-answers format.
    Q: What will happen if the guest user is disabled in the msdb database?
    A: Lots of bad things will happen. Error 916 - Logins can connect to this instance of SQL Server but they do not have specific permissions in a database to receive the permissions of the guest user.
    Q: How can I determine if the guest user is enabled or disabled for any specific database?
    A: There are many ways to do this. Make sure that you run each of these methods in the context of the database in question. For example, for the msdb database, you can run the following code:
    USE msdb;
    SELECT name, permission_name, state_desc
    FROM sys.database_principals dp
    INNER JOIN sys.database_permissions p ON dp.principal_id = p.grantee_principal_id
    WHERE name = 'guest' AND permission_name = 'CONNECT'
    There are many other methods to detect the guest user status. Read them here: Detecting guest User Permissions – guest User Access Status.
    Q: What is the default status of the guest user account in a database?
    A: Enabled in master, tempdb, and msdb. Disabled in the model database.
    Q: Why is the default status of the guest user disabled in the model database?
    A: It is not recommended to enable the guest user in user databases, as it can introduce a serious security threat. It can seriously damage the database if configured incorrectly. Read more here: Disable Guest Account – Serious Security Issue.
    Q: How do I disable the guest user?
    A: REVOKE CONNECT FROM guest
    Q: How do I enable the guest user?
    A: GRANT CONNECT TO guest
    Did I miss any critical question in the list? Please leave your question as a comment and I will add it to this list.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • The Debut of Oracle Database Firewall at RSA 2011

    - by Troy Kitch
    We're very proud of the coverage and headlines Oracle Database Firewall made this past week during RSA Conference 2011 in San Francisco. In case you missed our previous post, we announced the availability of this latest addition to the Oracle Defense-in-Depth database security solutions. The announcement was picked up by many publications, including eWeek, CRN, InformationWeek and more. Here is just some of the press on this very important security solution: "It's rare to find a new product category these days, but I think a new product from Oracle fills the bill. In the crowded enterprise security field, that's saying something." Enterprise System Journal: A New Approach to Database Security By James E. Powell "Databases and the content they store are among the most valuable IT assets - and the most targeted by hackers. In an effort to help secure databases, Oracle today is launching the new Oracle Database Firewall as an approach to defend databases against SQL injection and other database attacks." Database Journal: Oracle Debuts Database Firewall (also appeared in InternetNews.com) By Sean Michael Kerner "Oracle Database Firewall understands SQL-statement formats, and can be configured to blacklist and whitelist traffic based on source. When it detects suspicious statements within SQL traffic -- ones that might indicate SQL injection attacks, for example -- it can replace them with neutral statements that will keep the session running without allowing potentially harmful traffic through." Network World: Oracle Database Firewall defuses SQL injection attacks By Tim Green "The firewall uses "SQL grammar analysis" to prevent SQL injection attacks and other attempts to grab information. The Oracle Database Firewall features white and black lists policies, exceptions and rules that mark the time of day, IP address, application and user." ZDNet: RSA Roundup: Oracle Database Firewall By Larry Dignan "The database giant announced Oracle Database Firewall on Feb. 14 at the RSA Conference in San Francisco. The firewall application establishes a "defensive perimeter" around databases by monitoring and enforcing normal application behavior in real-time, the company said." eWEEK: Oracle Database Firewall Delivers Vendor-Agnostic Security By Fahmida Y. Rashid

    Read the article

  • Questions and considerations to ask client for designing a database

    - by Julia
    Hi guys! So, as the title says, I would like to hear your advice on the most important questions to consider and ask end-users before designing a database for their application. We are to build a database-oriented app, with special attention paid to db security (access control, encryption, integrity, backups)... The database will also keep some personal information about people, which is considered sensitive by law, so security must be good. I have worked on school projects with databases, but this is my first time working "in the real world", where this db security has real implications. I found some advice and questions to ask on the internet, but here I always get the best ones. All help appreciated! Thank you!

    Read the article

  • Oracle Database 12c Spatial: Vector Performance Acceleration

    - by Okcan Yasin Saygili-Oracle
    Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. This allows organizations to make better decisions, respond to customers more effectively, and reduce operational costs – increasing ROI and creating competitive advantage. Oracle Database, the industry’s most advanced database, includes native location capabilities, fully integrated in the kernel, for fast, scalable, reliable and secure spatial and massive graph applications. It is a foundation for deploying enterprise-wide spatial information systems and location-enabled business applications. Developers can extend existing Oracle-based tools and applications, since they can easily incorporate location information directly in their applications, workflows, and services.
    Spatial Features
    The geospatial data features of the Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. The Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence.
    Network Data Model Graph Features
    The Network Data Model graph explicitly stores and maintains a persistent data model with network connectivity and provides network analysis capability such as shortest path, nearest neighbors, within cost and reachability. It loads partitioned networks into memory on demand, overcoming the limitations of in-memory analysis. Partitioning massive networks into manageable sub-networks simplifies the network analysis.
    RDF Semantic Graph Features
    RDF Semantic Graph has native support for World Wide Web Consortium standards. It has open, scalable, and secure features for storing RDF/OWL ontologies and data; native inference with OWL 2, SKOS and user-defined rules; and querying RDF/OWL data with SPARQL 1.1, Java APIs, and SPARQL graph patterns in SQL.
    Video: Oracle Spatial and Graph Overview
    Oracle Spatial is embedded in the Oracle Database product, so we can use the Oracle installer (OUI). The Oracle Universal Installer (OUI) is used to install Oracle Database software. OUI is a graphical user interface utility that enables you to view the Oracle software that is installed on your machine, install new Oracle Database software, and delete Oracle software that you no longer need to use. Online Help is available to guide you through the installation process. One of the installation options is to create a database. If you select database creation, OUI automatically starts Oracle Database Configuration Assistant (DBCA) to guide you through the process of creating and configuring a database. If you do not create a database during installation, you must invoke DBCA after you have installed the software to create a database. You can also use DBCA to create additional databases.
    To install Oracle Database 12c, you may check the Installing Oracle Database Software and Creating a Database tutorial under the Oracle Database 12c 2-Day DBA Series. You can always check whether Spatial is available in your database using "select comp_id, version, status, comp_name from dba_registry where comp_id='SDO';"
    One of the most notable improvements with Oracle Spatial and Graph 12c can be seen in performance increases in vector data operations. Enabling the Spatial Vector Acceleration feature (available with the Spatial option) dramatically improves the performance of commonly used vector data operations, such as sdo_distance, sdo_aggr_union, and sdo_inside. With 12c, these operations also run more efficiently in parallel than in prior versions through the use of metadata caching. For organizations that have been facing processing limitations, these enhancements enable developers to make a small set of configuration changes and quickly realize significant performance improvements. Results include improved index performance, enhanced geometry engine performance, optimized secondary filters for Spatial operators, and improved CPU and memory utilization for many advanced vector functions. Vector performance acceleration is especially beneficial when using Oracle Exadata Database Machine and other large-scale systems. Oracle Spatial and Graph vector performance acceleration builds on general improvements available to all SDO_GEOMETRY operations in these areas: caching of index metadata, concurrent update mechanisms, and optimized spatial predicate selectivity and cost functions. These optimizations enable more efficient use of CPU, memory, and partitioning, resulting in substantial query performance improvements.
    Usage: To accelerate the performance of spatial operators, it is recommended that you set the SPATIAL_VECTOR_ACCELERATION database system parameter to the value TRUE. (This parameter is authorized for use only by licensed Oracle Spatial users, and its default value is FALSE.) You can set this parameter for the whole system or for a single session. To set the value for the whole system, do either of the following: enter the statement ALTER SYSTEM SET SPATIAL_VECTOR_ACCELERATION = TRUE; from a suitably privileged account, or add SPATIAL_VECTOR_ACCELERATION = TRUE to the database initialization file (xxxinit.ora). To set the value for the current session, enter ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE; from a suitably privileged account.
    Check out the complete list of new features on Oracle.com @ http://www.oracle.com/technetwork/database/options/spatialandgraph/overview/index.html Spatial and Graph Data Sheet (PDF) Spatial and Graph White Paper (PDF)
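    As a small illustration of the session-level setting described above, here is a minimal sketch of it in use (the CITIES table and its NAME and LOCATION columns are hypothetical, invented for this example):

        ALTER SESSION SET SPATIAL_VECTOR_ACCELERATION = TRUE;

        -- distance between two stored point geometries; for geodetic data
        -- the 0.005 tolerance is in meters
        SELECT SDO_GEOM.SDO_DISTANCE(a.location, b.location, 0.005) AS distance_m
        FROM cities a, cities b
        WHERE a.name = 'Boston' AND b.name = 'New York';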

    Read the article

  • How could RDBMSes be considered a fad?

    - by StuperUser
    Completing my Computing A-level in 2003, getting a degree in Computing in 2007, and learning my trade in a company with a lot of SQL usage, I was brought up on the idea of relational databases being used for storage. So, despite being relatively new to development, I was taken aback to read a comment (on Is LinqPad site quote "Tired of querying in antiquated SQL?" accurate?) that said: [Some devs] despise [SQL] and think that it and RDBMS are a fad. Obviously, a competent dev will use the right tool for the right job and won't create a relational database when e.g. a flat file or another storage solution is appropriate, but RDBMSes are useful in a massive number of circumstances, so how could they be considered a fad?

    Read the article

  • Single Table Inheritance (Database Inheritance design options) pros and cons and in which case it us

    - by Yosef
    Hi, I am studying two database design inheritance approaches today: 1. Single Table Inheritance, 2. Class Table Inheritance. In my student opinion, Single Table Inheritance makes the database smaller than the other approaches because it uses only one table. But I have read that the more favored approach is Class Table Inheritance, according to Bill Karwin. My question is: what are the pros and cons of Single Table Inheritance, and in which cases is it used? Thanks, Yosef
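    A minimal sketch of the two approaches side by side (the issue/bug/feature example is illustrative, loosely in the spirit of Bill Karwin's writing, not taken from the original question):

        -- Single Table Inheritance: one table for the whole hierarchy,
        -- a type column, and NULLable columns used only by some subtypes
        CREATE TABLE issues (
            issue_id   INT PRIMARY KEY,
            issue_type VARCHAR(10) NOT NULL,   -- 'BUG' or 'FEATURE'
            summary    VARCHAR(200) NOT NULL,
            severity   INT,                    -- used only by bugs
            sponsor    VARCHAR(100)            -- used only by features
        );

        -- Class Table Inheritance: a base table plus one table per subtype,
        -- each subtype table sharing the base table's primary key
        CREATE TABLE issues_base (
            issue_id INT PRIMARY KEY,
            summary  VARCHAR(200) NOT NULL
        );
        CREATE TABLE bugs (
            issue_id INT PRIMARY KEY REFERENCES issues_base (issue_id),
            severity INT NOT NULL
        );
        CREATE TABLE features (
            issue_id INT PRIMARY KEY REFERENCES issues_base (issue_id),
            sponsor  VARCHAR(100) NOT NULL
        );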

    Read the article

  • What is a database file system?

    - by Ravi
    I have very little idea of what a database file system is. Can somebody here explain to me what a database file system actually is, and what its applications are? How is it different from a conventional file system? How can I build one?

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically, in my database I store users, locations, and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change. There are also non-editable items in the database, such as "readings". They are more of a log, really. Readings are logged against sensors, because a reading is for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time. Now, I've done this before and got it working well by adding the following columns to each editable table: valid_from, valid_to, edited_by. If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency. I can possibly avoid triggers by using an extension to PostgreSQL. This provides a column type called "period" which allows you to store a period of time between two dates, and then allows you to use CHECK constraints to prevent overlapping periods. That might be an answer. I am wondering, though, if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current" - i.e. only bother to check constraints on records where valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this? PS - the title also mentions auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
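    A minimal sketch of the valid_from/valid_to idea using PostgreSQL's built-in range types (a newer alternative to the "period" extension mentioned above; the sensor_history table and its columns are hypothetical):

        -- needed so the GiST index can also compare the plain integer column
        CREATE EXTENSION IF NOT EXISTS btree_gist;

        CREATE TABLE sensor_history (
            sensor_id INT     NOT NULL,
            name      TEXT    NOT NULL,
            edited_by INT     NOT NULL,
            validity  TSRANGE NOT NULL,
            -- no two rows for the same sensor may have overlapping validity periods
            EXCLUDE USING gist (sensor_id WITH =, validity WITH &&)
        );

        -- the current row uses an open-ended upper bound instead of a 9999-12-31 sentinel
        SELECT * FROM sensor_history
        WHERE sensor_id = 42 AND upper_inf(validity);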

    Read the article

  • Wordpress database migration

    - by Sean Cull
    Hi everyone! I've looked around the Wordpress forums about this and didn't find anything, so I thought I might try here. If you have a staging/dev Wordpress setup used for testing new plugins and such, how do you go about migrating the data in the staging database back to the production database? Is there a "Wordpress best practices" way to do this, or am I limited to having to manually migrate tables from one database to the other?
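    One common manual approach is to import a dump of the staging tables and then fix up the site URLs; a minimal sketch (the URLs are placeholders, and note that options stored as serialized PHP need a serialization-aware search-replace tool rather than plain SQL):

        -- after importing the staging dump into the production database:
        UPDATE wp_options
        SET option_value = 'https://www.example.com'
        WHERE option_name IN ('siteurl', 'home');

        -- rewrite absolute links in post content (safe for plain text,
        -- NOT safe for serialized data)
        UPDATE wp_posts
        SET post_content = REPLACE(post_content,
            'https://staging.example.com', 'https://www.example.com');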

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to Shrinking Database. I wrote about why Shrinking Database is not good:
    SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server
    SQL SERVER – What the Business Says Is Not What the Business Wants
    I received many comments on why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example:
    Create a test database
    Create two tables and populate with data
    Check the size of both the tables (size of database is very low)
    Check the fragmentation of one table (fragmentation will be very low)
    Truncate another table
    Check the size of the table
    Check the fragmentation of the one table (fragmentation will be very low)
    SHRINK the database
    Check the size of the table
    Check the fragmentation of the one table (fragmentation will be very HIGH)
    REBUILD the index on one table
    Check the size of the table (size of database is very HIGH)
    Check the fragmentation of the one table (fragmentation will be very low)
    Here is the script for the same.
    USE MASTER
    GO
    CREATE DATABASE ShrinkIsBed
    GO
    USE ShrinkIsBed
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Create FirstTable
    CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Create Clustered Index on ID
    CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ([ID] ASC) ON [PRIMARY]
    GO
    -- Create SecondTable
    CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Create Clustered Index on ID
    CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ([ID] ASC) ON [PRIMARY]
    GO
    -- Insert One Hundred Thousand Records
    INSERT INTO FirstTable (ID, FirstName, LastName, City)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Insert One Hundred Thousand Records
    INSERT INTO SecondTable (ID, FirstName, LastName, City)
    SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and fragmentation.
    -- TRUNCATE FirstTable
    TRUNCATE TABLE FirstTable
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Shrink the Database
    DBCC SHRINKDATABASE (ShrinkIsBed);
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the shrink operation, we were able to reduce the size of the database. If you look at the fragmentation, though, it is considerably high. The major problem with the shrink operation is that it increases fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size.
    -- Rebuild Index on SecondTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size increases to way more than the original. Before rebuilding, the size of the database was 5 MB; after rebuilding, it is around 20 MB. A regular index rebuild takes place in the same user database where the index is located, which usually increases the size of the database. Look at the irony of shrinking the database.
    One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which leads to the size of the database increasing way beyond the original size of the database (before shrinking). Well, by shrinking, one usually does not gain what one was looking for. Rebuilding the index is not the best suggestion either, as that will make the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and the interesting conversation. Let us run the following script, where we shrink the database and then REORGANIZE.
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Shrink the Database
    DBCC SHRINKDATABASE (ShrinkIsBed);
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Reorganize Index on SecondTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    You can see that REORGANIZE does not increase the size of the database or remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation using a demo. Read the blog post of Paul Randal. The following script will clean up the database:
    -- Clean up
    USE MASTER
    GO
    ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    DROP DATABASE ShrinkIsBed
    GO
    There are a few valid cases for shrinking a database as well, but they are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild indexes in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it?
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Ranking hit after site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed Google Webmaster Tools completely, including 301 redirects from every existing URL to the new domain, and then submitted a change of address. Traffic continued as normal, but a few days after submitting the change of address, traffic plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and for the keywords I had targeted and performed well with before, I am now ranking much, much lower. In some cases, for low-competition keywords, I've only lost a few places; for higher-competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that this would not affect my rankings too severely, but it has now been a month since the move and recovery seems to be very slow, if happening at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and now in its place is a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that seem to fit exactly the same page structure as the previous site, I have seen a drop in rankings.

    Read the article

  • Ranking hit after WP site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed WMT completely, including 301 redirects from every existing URL to the new domain, and then submitted a change of address. Traffic continued as normal, but a few days after submitting the change of address, traffic plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and for the keywords I had targeted and performed well with before, I am now ranking much, much lower. In some cases, for low-competition keywords, I've only lost a few places; for higher-competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that this would not affect my rankings too severely, but it has now been a month since the move and recovery seems to be very slow, if happening at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and now in its place is a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that seem to fit exactly the same page structure as the previous site, I have seen a drop in rankings. Any help would be greatly appreciated. Thanks!

    Read the article

  • Very large database, very small portion most being retrieved in real time

    - by ming yeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size, and my memory buffer is 8GB. Most of my data is rarely retrieved, or mainly retrieved by backend processes. I would very much prefer to keep it around because some features require it. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure that the latter is always kept in memory? (There is more than enough space for it.) More info: we are on Ruby on Rails. The database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.

    Read the article

  • Database change management tools?

    - by zodeus
    We are currently in the process of solidifying a database change management process. We run MySQL 5 on RedHat 5. I have selected Liquibase as the tool because it's open source and allows us to expand its functionality later if needed. It also seems to be one of the few free projects that are still active. Has anyone here had any experience using Liquibase or other DB versioning tools? Company background: we are an SaaS company providing a 24/7 hosted application. There are dozens of instances running different versions of the same database, and we need a way to manage the deploy process, as it's starting to get out of control. The databases have hundreds of tables, and we typically do releases once every 3 months.
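    For reference, a minimal sketch of what a Liquibase changelog can look like in its SQL format (the table name, author, and changeset id are made up for illustration):

        --liquibase formatted sql

        --changeset alice:1
        CREATE TABLE customer (
            id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL
        );
        --rollback DROP TABLE customer;

    Liquibase records applied changesets in its DATABASECHANGELOG table, so each changeset runs exactly once per database, which is what makes deploying across many instances at different versions tractable.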

    Read the article

  • best way to store 1:1 user relationships in relational database

    - by aharon
    What is the best way to store user relationships, e.g. friendships, that must be bidirectional (you're my friend, thus I'm your friend) in a relational database, e.g. MySQL? I can think of two ways: 1. Every time a user friends another user, I add two rows to the database: row A consisting of the user ID of the initiating user, followed by the UID of the accepting user in the next column; row B would be the reverse. 2. I add only one row, the UID of the initiating user followed by the UID of the accepting user, and then search through both columns when trying to figure out whether user 1 is a friend of user 2. Surely there is something better?
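    A minimal sketch of the second (single-row) option, storing the smaller UID first so each pair appears exactly once (note that MySQL 5 parses but does not enforce CHECK constraints, so the ordering would have to be guaranteed in application code or a trigger):

        CREATE TABLE friendship (
            user_low  INT NOT NULL,  -- always the smaller user id of the pair
            user_high INT NOT NULL,  -- always the larger user id of the pair
            PRIMARY KEY (user_low, user_high),
            CHECK (user_low < user_high)
        );

        -- all friends of user 42, whichever column they landed in:
        SELECT CASE WHEN user_low = 42 THEN user_high ELSE user_low END AS friend_id
        FROM friendship
        WHERE 42 IN (user_low, user_high);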

    Read the article

  • Need Database Help - A second opinion - thank you

    - by user287745
    I have designed an ER model, normalized it to BCNF, and converted it into tables using VS08. My problem is that I do not know where to get the normalized database checked to see if it has no mistakes in normalization (i.e., cannot be further normalized). Please do not give answers such as "ask a friend" or "ask your professor"; I do not have these resources available, and it is very hard and really time-consuming waiting for the relevant person to be available. So, are there any sites where I can ask for help from other designers, people like you, to check the normalized database? Please note: it should be free. Sorry about my accept rate; I was not aware of accepting answers. All help is appreciated. Thank you.

    Read the article

  • Scanstate.exe Migration Error

    - by user1438147
    I'm trying to create a migration store on a server using ScanState, and this is the error I receive:
    C:\USMT>scanstate.exe \\(Server_Name)\migration\mystore /f:migdocs.xml /f: migapp.xml /v:13 /f:scan.log
    Failed. An error occurred processing the command line.
    scanstate.exe \\svdataitfl1\migration\mystore ##ERROR## -- /f:migdocs.xml /f migapp.xml /v:13 /f:scan.log
    Undefined or incomplete command line option
    ScanState return code: 11
    I can't seem to find the answer to this... I need some help.

    Read the article

  • A Model for Planning Your Oracle BPM 10g Migration by Kris Nelson

    - by JuergenKress
    As the Oracle SOA Suite and BPM Suite 12c products enter beta, many of our clients are starting to discuss migrating from the Oracle 10g or prior platforms. With BPM Suite 11g, Oracle introduced a major change in architecture, with a strong focus on integration with SOA and an entirely new technology stack. In addition, there were fresh new UIs and a renewed business focus, with an improved Process Composer and features like Adaptive Case Management. While very beneficial to both technology and the business, the fundamental change in architecture does pose clear migration challenges for clients who have made investments in the 10g platform. Some of the key challenges facing 10g customers include: managing in-process instance migration and running multiple process engines; migration of user interfaces and other code within the environment that may not be automated; growing or finding technical staff with both 10g and 12c experience; and managing migration projects while continuing to move the business forward and meet day-to-day responsibilities. As a former practitioner in a mixed 10g/11g shop, I wrestled with many of these challenges as we tried to plan ahead for the migration. Luckily, there is migration tooling on the way from Oracle and several approaches you can use in planning your migration efforts. In addition, you already have a defined and visible process on the current platform, which will be invaluable as you migrate.
    A Migration Model
    This model presents several options across a value and investment spectrum. The goal of the AVIO Migration Model is to kick-start discussions within your company and assist in creating a plan of action to take advantage of the new platform. As with all models, this is a framework for discussion, and certain processes or situations may not fit. Please contact us if you have specific questions or want to discuss migration efforts in your situation. Read the complete article here.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: Kris Nelson, ACM, Adaptive Case Management, Community, Oracle SOA, Oracle BPM, OPN, Jürgen Kress

    Read the article

  • Database Firewall

    - by ???02
    [Japanese-language article; most of the text has been lost to character-encoding corruption. The recoverable fragments describe Oracle Database Firewall, announced in 2011: it monitors SQL traffic between web applications and the database, passing or blocking statements against whitelist/blacklist policies; it uses grammar-based analysis of SQL (per the ISO/IEC 9075 SQL standard) to defend against SQL injection; it supports Oracle Database, SQL Server, DB2, and Sybase; it can be deployed in-line or in monitoring mode via a SPAN (port-mirroring) port; and it is positioned alongside IDS/IPS and WAF (Web Application Firewall) products. Source: Oracle Direct.]

    Read the article

  • Update mysql database with arpwatch textfile database

    - by bVector
    I'm looking to keep arpwatch entries in a MySQL database to cross-reference with other information I'm storing based on MAC addresses. I've manually imported the arpwatch database into my MySQL database, but being a novice with databases, I'm not sure of the best way to continually update the database with new entries without creating duplicates. None of the fields can be unique, as even the time is duplicated frequently. I'm not interested in the actual arpwatch events like flip flop or new station, just the mac/ip/time pairings. Would a simple bash (or SQL) script do the trick? Would it be possible to make the MAC address plus the time a composite key of some sort? The database is called utility, the table is arpwatch, and the columns are mac, ip, and time. A separate table named 'hosts', with columns mac, ip, type, hostname, location, and notes, has mac as the primary key. This table will correlate the different IP addresses that a MAC had over time using the arpwatch table. The initial import was done with MySQL Workbench using INSERT INTO commands, with creative search-and-replace on the text file.
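    A minimal sketch of the composite-key idea (MySQL syntax; the sample values and the arp.dat path are illustrative, and the LOAD DATA line assumes a tab-separated dump):

        -- make each (mac, time) pairing unique
        ALTER TABLE utility.arpwatch ADD PRIMARY KEY (mac, time);

        -- re-imports now skip rows that already exist instead of duplicating them
        INSERT IGNORE INTO utility.arpwatch (mac, ip, time)
        VALUES ('00:11:22:33:44:55', '192.168.1.10', 1338503000);

        -- or load arpwatch's text database directly:
        LOAD DATA LOCAL INFILE '/var/lib/arpwatch/arp.dat'
        IGNORE INTO TABLE utility.arpwatch
        FIELDS TERMINATED BY '\t' (mac, ip, time);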

    Read the article

  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain the data history, I always read the values with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 if the data is edited (so that I can keep the previous data). But the problem is that once I update it to 0, I can't insert the same key into primaryKeyColumn again, since the column is validated as a primary key. How can I manage this kind of validation? What mistake did I make in this design?
    primaryKeyColumn  ValueColumn  StatusColumn
    ----------------  -----------  ------------
    2                 Name1        1
    3                 Name2        1
    4                 Name3        0
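    One common fix is to widen the primary key so history rows do not collide, e.g. a composite key over the business key and a version number; a minimal sketch (the table and column names here are illustrative, not from the original question):

        CREATE TABLE item_history (
            businessKey  INT NOT NULL,   -- the value formerly in primaryKeyColumn
            version      INT NOT NULL,   -- incremented on every edit
            valueColumn  VARCHAR(100),
            statusColumn INT NOT NULL,   -- 1 = current, 0 = historical
            PRIMARY KEY (businessKey, version)
        );

        -- current rows only:
        SELECT * FROM item_history WHERE statusColumn = 1;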

    Read the article
