Search Results

Search found 3097 results on 124 pages for 'ben alter'.

Page 13 of 124

  • SQL SERVER – FIX: ERROR Msg 5169, Level 16: FILEGROWTH cannot be greater than MAXSIZE for file

    - by pinaldave
    I am writing this blog post right after resolving this error for one of our systems. Recently a friend of mine, who is an expert in infrastructure as well as private cloud, was working on a SQL Server installation. Please note that he is seriously expert in what he does, but he had never worked with SQL Server before and had absolutely no experience with its installation. He was modifying a database file and kept getting the following error. As soon as he saw me, he asked me where the max file size setting is so he could change it. Let us quickly re-create the scenario he was facing. Error Message: Msg 5169, Level 16, State 1, Line 1 FILEGROWTH cannot be greater than MAXSIZE for file 'NewDB'. Creating the Scenario: CREATE DATABASE [NewDB] ON PRIMARY (NAME = N'NewDB', FILENAME = N'D:\NewDB.mdf' , SIZE = 4096KB, FILEGROWTH = 1024KB, MAXSIZE = 4096KB) LOG ON (NAME = N'NewDB_log', FILENAME = N'D:\NewDB_log.ldf', SIZE = 1024KB, FILEGROWTH = 10%) GO Now let us see the exact command that was raising the error for him. USE [master] GO ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024MB ) GO Workaround / Fix / Solution: The reason for the error is very simple: he was trying to set the filegrowth to a value much higher than the maximum file size specified for the database. There are two ways we can fix it. Method 1: Reduce the filegrowth to a value lower than the maxsize of the file USE [master] GO ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024KB ) GO Method 2: Increase the maxsize of the file so it is greater than the new filegrowth USE [master] GO ALTER DATABASE [NewDB] MODIFY FILE ( NAME = N'NewDB', FILEGROWTH = 1024MB, MAXSIZE = 4096MB) GO I think this blog post will help everybody who is facing similar issues. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
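    As a quick aside, before modifying a file it can help to check its current settings; one way (a minimal sketch) is to query sys.master_files, where size, max_size and growth are reported in 8 KB pages (growth is a percentage when is_percent_growth = 1):

        -- Inspect the current size, growth increment and cap for NewDB's files
        SELECT name, size, max_size, growth, is_percent_growth
        FROM sys.master_files
        WHERE database_id = DB_ID('NewDB');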

    Read the article

  • Recovering a datafile that was accidentally removed at the OS level

    - by ??-Oracle
    Summary: 1. A datafile is accidentally removed (renamed) at the OS level while the database is open; 2. Operations on the affected tablespace fail (ORA-01116/ORA-27041) and the instance can no longer be shut down cleanly; 3. The lost file can be recreated with alter database create datafile and then recovered from the redo logs, without data loss. First, set up some test data: SQL> create table test_b (id number(10)) tablespace bbb; Table created. SQL> insert into test_b values (1); 1 row created. SQL> commit; NAME --------------------------------------------------------------------------------     FILE# ---------- /opt/oracle/oradata/R11203/aaa.dbf        10 /opt/oracle/oradata/R11203/bbb.dbf        11 11 rows selected. SQL> host (switch to the OS and rename the datafile, to simulate losing it) bash-4.2$ ls -al /opt/oracle/oradata/R11203/bbb.dbf -rw-rw----   1 oracle     dba        10493952 Apr  4 09:53 /opt/oracle/oradata/R11203/bbb.dbf bash-4.2$ mv /opt/oracle/oradata/R11203/bbb.dbf /opt/oracle/oradata/R11203/bbb.dbf.bak bash-4.2$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Fri Apr 4 09:55:03 2014 Copyright (c) 1982, 2011, Oracle.  All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> alter tablespace bbb read only; alter tablespace bbb read only * ERROR at line 1: ORA-01116: error in opening database file 11 ORA-01110: data file 11: '/opt/oracle/oradata/R11203/bbb.dbf' ORA-27041: unable to open file HPUX-ia64 Error: 2: No such file or directory Additional information: 3 SQL> shutdown immediate; ORA-01116: error in opening database file 11 ORA-01110: data file 11: '/opt/oracle/oradata/R11203/bbb.dbf' ORA-27041: unable to open file HPUX-ia64 Error: 2: No such file or directory Additional information: 3 SQL> select status from v$instance; STATUS ------------ OPEN SQL> alter system switch logfile; System altered. SQL> / System altered. SQL> / System altered. SQL> / System altered. SQL> / System altered. SQL> / System altered. SQL> shutdown immediate; ORA-01116: error in opening database file 11 ORA-01110: data file 11: '/opt/oracle/oradata/R11203/bbb.dbf' ORA-27041: unable to open file HPUX-ia64 Error: 2: No such file or directory Additional information: 3 SQL> shutdown abort; ORACLE instance shut down. At this point the database can be started in mount state: SQL> startup mount; ORACLE instance started. Total System Global Area  329859072 bytes Fixed Size                  2182336 bytes Variable Size             285213504 bytes Database Buffers           37748736 bytes Redo Buffers                4714496 bytes Database mounted. Use the alter database create datafile <> as ... command to recreate the lost datafile: SQL> alter database create datafile 11; Database altered. Then recover the datafile from the redo logs: SQL> recover datafile 11; Media recovery complete. SQL> alter database open; Database altered.
SQL> select * from test_b;        ID ----------         1
Name="Colorful List Accent 3"/ UnhideWhenUsed="false" Name="Colorful Grid Accent 3"/ UnhideWhenUsed="false" Name="Light Shading Accent 4"/ UnhideWhenUsed="false" Name="Light List Accent 4"/ UnhideWhenUsed="false" Name="Light Grid Accent 4"/ UnhideWhenUsed="false" Name="Medium Shading 1 Accent 4"/ UnhideWhenUsed="false" Name="Medium Shading 2 Accent 4"/ UnhideWhenUsed="false" Name="Medium List 1 Accent 4"/ UnhideWhenUsed="false" Name="Medium List 2 Accent 4"/ UnhideWhenUsed="false" Name="Medium Grid 1 Accent 4"/ UnhideWhenUsed="false" Name="Medium Grid 2 Accent 4"/ UnhideWhenUsed="false" Name="Medium Grid 3 Accent 4"/ UnhideWhenUsed="false" Name="Dark List Accent 4"/ UnhideWhenUsed="false" Name="Colorful Shading Accent 4"/ UnhideWhenUsed="false" Name="Colorful List Accent 4"/ UnhideWhenUsed="false" Name="Colorful Grid Accent 4"/ UnhideWhenUsed="false" Name="Light Shading Accent 5"/ UnhideWhenUsed="false" Name="Light List Accent 5"/ UnhideWhenUsed="false" Name="Light Grid Accent 5"/ UnhideWhenUsed="false" Name="Medium Shading 1 Accent 5"/ UnhideWhenUsed="false" Name="Medium Shading 2 Accent 5"/ UnhideWhenUsed="false" Name="Medium List 1 Accent 5"/ UnhideWhenUsed="false" Name="Medium List 2 Accent 5"/ UnhideWhenUsed="false" Name="Medium Grid 1 Accent 5"/ UnhideWhenUsed="false" Name="Medium Grid 2 Accent 5"/ UnhideWhenUsed="false" Name="Medium Grid 3 Accent 5"/ UnhideWhenUsed="false" Name="Dark List Accent 5"/ UnhideWhenUsed="false" Name="Colorful Shading Accent 5"/ UnhideWhenUsed="false" Name="Colorful List Accent 5"/ UnhideWhenUsed="false" Name="Colorful Grid Accent 5"/ UnhideWhenUsed="false" Name="Light Shading Accent 6"/ UnhideWhenUsed="false" Name="Light List Accent 6"/ UnhideWhenUsed="false" Name="Light Grid Accent 6"/ UnhideWhenUsed="false" Name="Medium Shading 1 Accent 6"/ UnhideWhenUsed="false" Name="Medium Shading 2 Accent 6"/ UnhideWhenUsed="false" Name="Medium List 1 Accent 6"/ UnhideWhenUsed="false" Name="Medium List 2 Accent 6"/ UnhideWhenUsed="false" Name="Medium Grid 1 Accent 6"/ UnhideWhenUsed="false" Name="Medium Grid 2 Accent 6"/ UnhideWhenUsed="false" Name="Medium Grid 3 Accent 6"/ UnhideWhenUsed="false" Name="Dark List Accent 6"/ UnhideWhenUsed="false" Name="Colorful Shading Accent 6"/ UnhideWhenUsed="false" Name="Colorful List Accent 6"/ UnhideWhenUsed="false" Name="Colorful Grid Accent 6"/ UnhideWhenUsed="false" QFormat="true" Name="Subtle Emphasis"/ UnhideWhenUsed="false" QFormat="true" Name="Intense Emphasis"/ UnhideWhenUsed="false" QFormat="true" Name="Subtle Reference"/ UnhideWhenUsed="false" QFormat="true" Name="Intense Reference"/ UnhideWhenUsed="false" QFormat="true" Name="Book Title"/ /* Style Definitions */ table.MsoNormalTable {mso-style-name:????; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin:0cm; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:10.5pt; mso-bidi-font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-font-kerning:1.0pt;}

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database into a source control system, and Running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests into the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll mostly be using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage; Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our setup from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/ or downloadable separately from www.tsqlt.org – though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin onto your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: Though we’ve installed the SQL Test product, we haven’t yet installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown on the left-hand side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But before we proceed: we’ve made an update to our database, so we should again check this into source control, adding comments as required:   It’s also worth a quick check that your build still runs with the new additions: (And a quick check of the RedGateAppCI database shows that the changes have been made.)   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a valid email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data into my solitary table – this can be done a number of ways, but I’ll do it in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option:   In the next screen, link the data in the Email table by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t, of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really those of the Fab Four) – just a very basic check of the format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted into the query window, to replace the default given by tSQLt: -- Basic email check test ALTER PROCEDURE [MyChecks].[test Check Email Addresses] AS BEGIN SET NOCOUNT ON         Declare @Output VarChar(max)     Set @Output = ''       SELECT  @Output = @Output + Email + Char(13) + Char(10) FROM dbo.Email WHERE email NOT LIKE '%_@__%.__%'       If @Output > ''         Begin             Set @Output = Char(13) + Char(10)                           + @Output             EXEC tSQLt.Fail @Output         End   END;   Once this script is entered, hit Execute to add the Stored Procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test into source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change into SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run-through of the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  
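    As an aside to the test-running steps above: if you prefer running tests from a plain query window rather than the SQL Test dialogue, tSQLt also exposes stored procedures for this (a minimal sketch, using the test class created in this article):

        -- Run every test in the MyChecks test class
        EXEC tSQLt.Run 'MyChecks';
        -- Or run the tests in all registered test classes
        EXEC tSQLt.RunAll;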

    Read the article

  • UIView Animation: PartialCurl ...bug during rotate?

    - by itai alter
    Hello all, a short question. I've created an app for the iPad, much like a utility app for the iPhone (one mainView, one flipSideView). The animation between them is UIModalTransitionStylePartialCurl. shouldAutorotateToInterfaceOrientation returns YES. If I rotate the device BEFORE entering the FlipSide, everything is fine and the PartialCurl is displayed correctly. But if I enter the FlipSide and then rotate the device, the UI elements rotate and position themselves just fine, but the actual "page curl" stays in its initial orientation. It just won't budge :) Is this a known issue? Am I doing something wrong? Thanks!

    Read the article

  • Dragging a Sprite (Cocos2D) while Chipmunk is simulating

    - by itai alter
    Hello all! I have a simple project built with Cocos2D and Chipmunk. So far it's just a Ball (body, shape & sprite) bouncing on the Ground (a static line segment at the bottom of the screen). I implemented the ccTouchesBegan/Moved/Ended methods to drag the ball around. I've tried both: cpBodySlew(ballBody, touchPoint, 1.0/60.0f); and ballBody->p = cpv(touchPoint.x, touchPoint.y); and while the Ball does follow my dragging, it's still affected by gravity and tries to go down (which causes velocity problems and others). Does anyone know the preferred way to drag an active Body while the physics simulation is going on? Do I need to somehow stop the simulation and turn it back on afterwards? Thanks!

    Read the article

  • Selecting an entire photo album with UIImagePickerController

    - by itai alter
    Hello all, I was wondering if there's a way to select an entire photo album with UIImagePickerController. What I have now is a UIImagePickerController with sourceType of PhotoLibrary. It shows the albums, and navigates inside an album to select a single image... What I want to do is, when the user selects the album, instead of going inside the album, load all of its images into an array so I can do a timed slideshow of them. The delegate (didFinishPickingImage) lets me run code only after the user has already gone into the album and selected a single image. Is there a way to do this? I couldn't find any information on it. Thanks!

    Read the article

  • accessing parsed JSON on the iPhone SDK

    - by itai alter
    Hello All! I've been following the great tutorial about the iPhone, JSON and the Flickr API, and I did manage to access the parsed JSON info just fine. Now I'm trying to do the same thing with the Twitter API, and I am able to get the JSON info and parse it, but I can't seem to access it like in Flickr. I noticed that the JSON info retrieved from Twitter is a little different from Flickr's. The Flickr JSON info starts straight with a curly brace ({), while the Twitter JSON info starts with a square bracket and then a curly brace ([{). I understand that this means there's an array inside the JSON info, but I don't know how to access it. In the Flickr example, I access the objects like so (the second line takes the number of pages Flickr has reported): NSDictionary *results = [jsonString JSONValue]; pagesString = [[results objectForKey:@"photos"] objectForKey:@"pages"]; but I can't seem to access the Twitter response in the same way... Does anyone know of a solution? (here's an example of the Twitter JSON response: api.twitter.com/1/statuses/public_timeline.json ) Thanks a bunch!

    Read the article

  • NSTimer freezes the app until it gets fired again?

    - by itai alter
    Hello all, I have a simple app with a button, a UIImageView and an NSTimer. The timer fires every 5 seconds, repeatedly, to update the ImageView with a new image, while the button simply stops the timer and switches to another View. The problem is that when I press the button, nothing happens for a few seconds (until the timer fires again). Is there a way to make the button stop the timer and do its job at any given time, instead of only between intervals of the timer? Thanks!

    Read the article

  • Saving a project as an .ipa

    - by itai alter
    Hello all! I wrote an app for the iPad, but I don't currently own an iPad. I would like to save my project as an .ipa file (assuming it's .ipa for the iPad, like the iPhone) so I can send it to a friend with a jailbroken iPad to test it on an actual device before I release it to the App Store. Is there any way I can do this? Thanks a bunch!

    Read the article

  • Stored procedure to remove FK of a given table

    - by Nicole
    I need to create a stored procedure that: Accepts a table name as a parameter, Finds its dependencies (FKs), Removes them, Truncates the table. I created the following so far, based on http://www.mssqltips.com/sqlservertip/1376/disable-enable-drop-and-recreate-sql-server-foreign-keys/ . My problem is that the following script successfully does 1 and 2 and generates queries to alter tables, but does not actually execute them. In other words, how can I execute the resulting "ALTER TABLE ..." queries to actually remove the FKs? CREATE PROCEDURE DropDependencies(@TableName VARCHAR(50)) AS BEGIN SELECT 'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) + '.[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT ' + name FROM sys.foreign_keys WHERE referenced_object_id=object_id(@TableName) END EXEC DropDependencies 'TableName' Any idea is appreciated! Update: I added the cursor to the SP but I still get an error: "Msg 203, Level 16, State 2, Procedure DropRestoreDependencies, Line 75 The name 'ALTER TABLE [dbo].[ChildTable] DROP CONSTRAINT [FK__ChileTable__ParentTable__745C7C5D]' is not a valid identifier." Here is the updated SP: CREATE PROCEDURE DropRestoreDependencies(@schemaName sysname, @tableName sysname) AS BEGIN SET NOCOUNT ON DECLARE @operation VARCHAR(10) SET @operation = 'DROP' --ENABLE, DISABLE, DROP DECLARE @cmd NVARCHAR(1000) DECLARE @FK_NAME sysname, @FK_OBJECTID INT, @FK_DISABLED INT, @FK_NOT_FOR_REPLICATION INT, @DELETE_RULE smallint, @UPDATE_RULE smallint, @FKTABLE_NAME sysname, @FKTABLE_OWNER sysname, @PKTABLE_NAME sysname, @PKTABLE_OWNER sysname, @FKCOLUMN_NAME sysname, @PKCOLUMN_NAME sysname, @CONSTRAINT_COLID INT DECLARE cursor_fkeys CURSOR FOR SELECT Fk.name, Fk.OBJECT_ID, Fk.is_disabled, Fk.is_not_for_replication, Fk.delete_referential_action, Fk.update_referential_action, OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name, schema_name(Fk.schema_id) AS Fk_table_schema, TbR.name AS Pk_table_name, schema_name(TbR.schema_id) Pk_table_schema FROM sys.foreign_keys Fk LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id --inner join WHERE TbR.name = @tableName AND schema_name(TbR.schema_id) = @schemaName OPEN cursor_fkeys FETCH NEXT FROM cursor_fkeys INTO @FK_NAME,@FK_OBJECTID, @FK_DISABLED, @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE, @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER WHILE @@FETCH_STATUS = 0 BEGIN -- create statement for dropping FK and also for recreating FK IF @operation = 'DROP' BEGIN -- drop statement SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + '] DROP CONSTRAINT [' + @FK_NAME + ']' EXEC @cmd -- create process DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT -- create cursor to get FK columns DECLARE cursor_fkeyCols CURSOR FOR SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name, COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name FROM sys.foreign_keys Fk LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id INNER JOIN sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID WHERE TbR.name = @tableName AND schema_name(TbR.schema_id) = @schemaName AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008 ORDER BY Fk_Cl.constraint_column_id OPEN cursor_fkeyCols FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME SET @COUNTER = 1 SET @FKCOLUMNS = '' SET @PKCOLUMNS = '' WHILE @@FETCH_STATUS = 0 BEGIN IF @COUNTER > 1 BEGIN SET @FKCOLUMNS = @FKCOLUMNS + ',' SET @PKCOLUMNS = 
@PKCOLUMNS + ',' END SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']' SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']' SET @COUNTER = @COUNTER + 1 FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME END CLOSE cursor_fkeyCols DEALLOCATE cursor_fkeyCols END FETCH NEXT FROM cursor_fkeys INTO @FK_NAME,@FK_OBJECTID, @FK_DISABLED, @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE, @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER END CLOSE cursor_fkeys DEALLOCATE cursor_fkeys END To run it, use: EXEC DropRestoreDependencies dbo, ParentTable
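    For what it's worth, the quoted Msg 203 error is characteristic of EXEC @cmd without parentheses: T-SQL then treats the contents of the variable as a stored procedure name rather than as a batch to execute. A minimal sketch of the usual dynamic-SQL pattern (variable name taken from the question):

        -- EXEC @cmd looks up a procedure *named* after the string's contents,
        -- which produces "The name '...' is not a valid identifier".
        -- Wrapping the variable in parentheses executes it as a batch instead:
        EXEC (@cmd);
        -- or, generally preferred:
        EXEC sp_executesql @cmd;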

    Read the article

  • MySQL forgot about automatically creating an index for a foreign key?

    - by bobo
    After running the following SQL statements, you will see that MySQL has automatically created the non-unique index question_tag_tag_id_tag_id on the tag_id column for me after the first ALTER TABLE statement has run. But after the second ALTER TABLE statement has run, I think MySQL should also automatically create another non-unique index question_tag_question_id_question_id on the question_id column for me. But as you can see from the SHOW INDEXES statement output, it's not there. Why does MySQL forget about the second ALTER TABLE statement? By the way, since I have already created a unique index question_id_tag_id_idx covering both the question_id and tag_id columns, is creating a separate index for each of them redundant? mysql> DROP DATABASE mydatabase; Query OK, 1 row affected (0.00 sec) mysql> CREATE DATABASE mydatabase; Query OK, 1 row affected (0.00 sec) mysql> USE mydatabase; Database changed mysql> CREATE TABLE question (id BIGINT AUTO_INCREMENT, html TEXT, PRIMARY KEY(id)) ENGINE = INNODB; Query OK, 0 rows affected (0.05 sec) mysql> CREATE TABLE tag (id BIGINT AUTO_INCREMENT, name VARCHAR(10) NOT NULL, UNIQUE INDEX name_idx (name), PRIMARY KEY(id)) ENGINE = INNODB; Query OK, 0 rows affected (0.05 sec) mysql> CREATE TABLE question_tag (question_id BIGINT, tag_id BIGINT, UNIQUE INDEX question_id_tag_id_idx (question_id, tag_id), PRIMARY KEY(question_id, tag_id)) ENGINE = INNODB; Query OK, 0 rows affected (0.00 sec) mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_tag_id_tag_id FOREIGN KEY (tag_id) REFERENCES tag(id); Query OK, 0 rows affected (0.10 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql> ALTER TABLE question_tag ADD CONSTRAINT question_tag_question_id_question_id FOREIGN KEY (question_id) REFERENCES question(id); Query OK, 0 rows affected (0.13 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql> SHOW INDEXES FROM question_tag; +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | question_tag | 0 | PRIMARY | 1 | question_id | A | 0 | NULL | NULL | | BTREE | | | question_tag | 0 | PRIMARY | 2 | tag_id | A | 0 | NULL | NULL | | BTREE | | | question_tag | 0 | question_id_tag_id_idx | 1 | question_id | A | 0 | NULL | NULL | | BTREE | | | question_tag | 0 | question_id_tag_id_idx | 2 | tag_id | A | 0 | NULL | NULL | | BTREE | | | question_tag | 1 | question_tag_tag_id_tag_id | 1 | tag_id | A | 0 | NULL | NULL | | BTREE | | +--------------+------------+----------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ 5 rows in set (0.01 sec) mysql>
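    One likely explanation, for what it's worth: InnoDB creates an index for a foreign key only when no existing index can serve it, i.e. when the FK column is not the leftmost column of any index on the table. Here question_id is already the leftmost column of both PRIMARY and question_id_tag_id_idx, so the second foreign key can reuse them; tag_id is leftmost in neither, hence the extra index after the first ALTER TABLE. By the same logic, the automatically created tag_id index is not redundant, while a separate index on question_id alone would be. A quick sanity check (hypothetical queries):

        -- question_id is the leftmost column of question_id_tag_id_idx, so the
        -- optimizer can use that index here; no dedicated FK index is needed.
        EXPLAIN SELECT * FROM question_tag WHERE question_id = 1;
        -- tag_id is not leftmost in PRIMARY or the unique index, so this lookup
        -- relies on the index MySQL created for the tag_id foreign key.
        EXPLAIN SELECT * FROM question_tag WHERE tag_id = 1;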

    Read the article

  • ASP Classic Named Parameter in Paramaterized Query: Must declare the scalar variable

    - by My Alter Ego
    I'm trying to write a parameterized query in ASP Classic, and it's starting to feel like I'm beating my head against a wall. I'm getting the following error: Must declare the scalar variable "@something". I would swear that is what the hello line does, but maybe I'm missing something... <% OPTION EXPLICIT %> <!-- #include file="../common/adovbs.inc" --> <% Response.Buffer=false dim conn,connectionString,cmd,sql,rs,parm connectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Data Source=.\sqlexpress;Initial Catalog=stuff" set conn = server.CreateObject("adodb.connection") conn.Open(connectionString) set cmd = server.CreateObject("adodb.command") set cmd.ActiveConnection = conn cmd.CommandType = adCmdText cmd.CommandText = "select @something" cmd.NamedParameters = true cmd.Prepared = true set parm = cmd.CreateParameter("@something",advarchar,adParamInput,255,"Hello") call cmd.Parameters.append(parm) set rs = cmd.Execute if not rs.eof then Response.Write rs(0) end if %>

    Read the article

  • __proto__ of a function

    - by alter
    If I have a class called Person: var Person = function(fname, lname){ this.fname = fname; this.lname = lname; } Person.prototype.mname = "Test"; var p = new Person('Alice','Bob'); Now, p.__proto__ refers to the prototype of Person, but when I try to do Person.__proto__, it points to function(), and Person.constructor points to Function(). Can someone explain the difference between function() and Function(), and why the prototype of the Function() class is a function()?

    Read the article

  • How does an object know about its parent in JavaScript?

    - by alter
    Let's suppose I made a class called Person. var Person = function(fname){this.fname = fname;}; pObj is the object I made from this class. var pObj = new Person('top'); Now I add a property to the Person class, say lname. Person.prototype.lname = "Thomson"; Now pObj.lname gets me "Thomson". My question is: when pObj doesn't find the property lname on itself, how does it know where to look?

    Read the article

  • previousFailureCount always stays at 0 (zero)

    - by itai alter
    Hello all, First of all, I'd like to thank the good people who helped me around on this site. Thanks. I'm trying to detect a failed authentication attempt in my app... I'm using the didReceiveAuthenticationChallenge method, and then checking if [challenge previousFailureCount] is equal to 0. The problem is that it always stays at zero, even if the username and password that I send with the credentials are incorrect. I couldn't find any info about this kind of issue; any help will be much appreciated. Thanks!

    Read the article

  • Useful Excel keyboard shortcuts

    - by Ben Lings
    What keyboard shortcuts do you use in Excel? Things I've discovered recently and found very useful are: Shift + Space - select the current row Ctrl + Space - select the current column Ctrl + Shift + Space - select the block of contiguous cells Ctrl + + - Insert (as in the context menu). If the current row is selected, will insert a new row. Ctrl + - - Delete (as in the context menu). If the current row is selected, will delete entire row. What (apart from the normal cut, copy, paste, etc) do you use? Ctrl + 1 to open the Format dialog. Shift + F2 to add/edit a cell comment. Shift + F2, followed by Esc to select the current cell comment, which can then be moved around with the arrow keys or deleted by pressing Del. Ctrl + arrow key to move to the last non-blank cell in a series. This is usually the edge of a table, but not if you have blank cells in the path. Pressing End followed by an arrow key does the same thing. Alt + F11 to open the VBA editor. Alt + = to start a SUM() formula and go straight to selecting cells to be summed. Ctrl + G or F5 to jump to a cell by typing its coordinates (e.g. C3) Ctrl + Home to jump to the top left, usually A1 unless you you are in a frozen split view, in which case it will jump to the top left of the "data" area. Ctrl + ; and Ctrl + Shift + ; to insert the current date and time, respectively. I know Ben Lings already posted this one, but I find it indispensable.

    Read the article

  • Problems Installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output: root@myserver:~# apt-get install trac Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: trac: Depends: python-setuptools (> 0.5) but it is not installable Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed Depends: python-subversion but it is not installable Depends: libjs-jquery but it is not installable Recommends: python-pygments (= 0.6) but it is not installable or enscript but it is not installable Recommends: python-tz but it is not installable E: Broken packages I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions, but this produced a very similar output. I have main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to solve the issue or find a solution! Thanks, Ben

    Read the article

  • Can you "swap" the Sysprep answer file in Windows 7

    - by Ben
    I have a load of new Lenovo laptops which I am due to distribute in my company. We are distributed across multiple locations and I want to ship the laptops "boxed" and untouched by IT hands for distribution. We are using LANDesk to do all the software distribution and provisioning, but are currently falling at the first hurdle: when booted, the laptops kick into the Lenovo mini-setup wizard. I assume this is because they have been sysprepped at Lenovo. In order to keep with our (almost) zero-touch strategy, I want the users to PXE boot into a PE of some sort, which will run a script on startup that replaces the sysprep answer file with one of my own (i.e. prepopulated with product key, company info etc.) and then reboots to complete Sysprep. The plan is that this will run, and then install the LANDesk agent as a post-sysprep task, which in turn will complete the provisioning. Anyone have any experience / know any pitfalls to look out for / can suggest a suitable, PXE-bootable PE environment? Apologies for the verbosity of the question - it takes a bit of explaining! Thanks in advance, Ben

    Read the article

  • Howto: SaaS / PHP Application / Tenants / Security

    - by Ben Fransen
    Hi all, Being completely new to the web hosting corner, I have a few questions on how to implement/set up a webserver for a SaaS application. I'm about to rent my own server for a new product (CMS) I'm launching in two months. Developing the system wasn't that much of a wild ride for me, but implementing it correctly is. So let's say this is my situation: I want to host 10 websites for 8 clients. There are 6 single sites, and two clients have two websites they can manage with my software. The CMS must be placed on the server too; all clients connect to 1 system. The database must be placed. Depending on the contract a client makes, the client gets some storage. How to measure the used storage over the DB, file system and email? Clients may not, in any case, be able to somehow get outside their directory, but from the CMS directory the CMS must be able to create files and dirs in a client's directory (for templates, image galleries, widgets, etc, etc). I was thinking about a dir structure like this: ./CMS/ [all CMS files] ./Websites/*/ [all websites] My hosting provider will install updates to the OS (CentOS, latest) and the admin panel (DirectAdmin). Is there anybody with experience on this topic? Or do you have some thoughts about it? Please join the conversation since I'm completely new to this. Ben

    Read the article

  • How do I make wallpaper fit both monitors in dual monitor setup?

    - by Ben
    I am deploying some custom corporate wallpaper as part of a Windows 7 rollout. Some people will be using dual monitors, and the additional monitors may be either 4:3 or widescreen. I want to use the same wallpaper on both screens (i.e. 2 copies of the same wallpaper, not stretched across both.) If I set the background to "stretch", it uses the aspect ratio of the primary monitor to stretch the wallpaper on both monitors. So, for example, if I have a dual monitor setup using a 4:3 TFT as primary and my (widescreen) laptop LCD as secondary - the image shows on the laptop LCD in 4:3, with a black stripe down either side. I've only noticed this as an issue with my "custom" wallpaper. Both the default MS wallpaper and the built in Lenovo wallpaper don't seem to have this issue. Is this by using "trickery" such as using an image larger than the largest resolution you will have and centering it? (i.e. so you crop out part of the image.) Or can this be done "properly"? I don't want to use 3rd party software to do this, but would happily do a bit of Powershell scripting if this would solve the issue. Thanks in advance, Ben

    Read the article

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first highlight my requirements: we have a storage array and 2 servers at present. We get the data files from remote servers onto our server, and on both servers we run our application to access that data and produce a final result, as per our requirements. In the future, maybe after 3-4 months, we may add more servers to the current cluster pool to handle more data load from the remote data senders. So my requirement is to mount the same storage partition on 2-3 servers (it might be 4-5 more servers in the future). My application reads data from the storage partition and writes back to it. Is there any bottleneck / limitation in RHCS, GFS2 or anything else? We are new to RHCS + GFS and all. Is there a better approach, or some more lightweight way to deal with our requirement? What is the best OS version for this? How's RHEL 6.4 64-bit? Please share some case studies or guide references from past experience with such environments. Regards, Ben

    Read the article

  • SQL SERVER – Disable Clustered Index and Data Insert

    - by pinaldave
    Earlier today I received the following email. “Dear Pinal, [Removed unrelated content] We looked at your script and found that in your script for disabling indexes, you only included the non-clustered indexes during the bulk insert and missed disabling the clustered indexes. Our DBA [name removed] changed your script a bit and included all the clustered indexes. Since then our application is not working. When DBA [name removed] tried to enable the clustered indexes again, he faced an incorrect syntax error. We are in a deep problem [word replaced] [Removed identity of organization and few unrelated items]“ I have replied to my client and helped them fix the problem. What really caught my attention is the concept of disabling the clustered index. Let us try to learn a lesson from this experience. In this case, there was no need to disable the clustered index at all. I had done the necessary work when I was called in to work on the tuning project. I had removed unused indexes, created a few optimal indexes, and written a script to disable a few selected high-cost indexes when bulk insert (and similar) operations are performed. There was another script which rebuilt all the indexes as well. The solution worked till they included the clustered index in the disabling script. A clustered index is, in fact, the table data itself, physically ordered according to one or more keys (columns) (there is more to it, but that is beyond the scope of this article); without a clustered index, the table is a heap. When a clustered index is disabled, the data rows of the table cannot be accessed. This means no insert is possible. When a non-clustered index is disabled, its data is physically deleted, but the definition of the index is kept in the system. For the same reason, even reorganizing a disabled index is not possible; the disabled index has to be rebuilt. Now let us come to the second part of the question, regarding the error received when the clustered index is ‘enabled’. This is a very common question I receive on the blog. (The following statement is written keeping the syntax of T-SQL in mind.) Disabled indexes cannot be ‘enabled’; they have to be rebuilt. It is intuitive to think that something which we have ‘disabled’ can be ‘enabled’, but the syntax for this is ‘rebuild’. This issue has been explained here: SQL SERVER – How to Enable Index – How to Disable Index – Incorrect syntax near ‘ENABLE’. Let us go over an example where inserting data is not possible while the clustered index is disabled. USE AdventureWorks GO -- Create Table CREATE TABLE [dbo].[TableName]( [ID] [int] NOT NULL, [FirstCol] [varchar](50) NULL, CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO -- Create Nonclustered Index CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC) GO -- Populate Table INSERT INTO [dbo].[TableName] SELECT 1, 'First' UNION ALL SELECT 2, 'Second' UNION ALL SELECT 3, 'Third' GO -- Disable Nonclustered Index ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE GO -- Insert Data should work fine INSERT INTO [dbo].[TableName] SELECT 4, 'Fourth' UNION ALL SELECT 5, 'Fifth' GO -- Disable Clustered Index ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE GO -- Insert Data will fail INSERT INTO [dbo].[TableName] SELECT 6, 'Sixth' UNION ALL SELECT 7, 'Seventh' GO /* Error: Msg 8655, Level 16, State 1, Line 1 The query processor is unable to produce a plan because the index 'PK_TableName' on table or view 'TableName' is disabled. 
*/ -- Reorganizing Index will also throw an error ALTER INDEX [PK_TableName] ON [dbo].[TableName] REORGANIZE GO /* Error: Msg 1973, Level 16, State 1, Line 1 Cannot perform the specified operation on disabled index 'PK_TableName' on table 'dbo.TableName'. */ -- Rebuilding should work fine ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD GO -- Insert Data should work fine INSERT INTO [dbo].[TableName] SELECT 6, 'Sixth' UNION ALL SELECT 7, 'Seventh' GO -- Clean Up DROP TABLE [dbo].[TableName] GO I hope this example is clear enough. There are a few additional posts I wrote years ago; I am listing them here. SQL SERVER – Enable and Disable Index Non Clustered Indexes Using T-SQL SQL SERVER – Enabling Clustered and Non-Clustered Indexes – Interesting Fact Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
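    As a side note (not from the original post): you can check which indexes on a table are currently disabled with a quick catalog query, for example:

        -- List each index on the table along with its disabled flag
        SELECT name, type_desc, is_disabled
        FROM sys.indexes
        WHERE object_id = OBJECT_ID('dbo.TableName');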

    Read the article

  • Exadata X3, 11.2.3.2 and Oracle Platinum Services

    - by Rene Kundersma
    Oracle recently announced an Exadata hardware update. The overall architecture will remain the same; however, some interesting hardware refreshes have been made, especially for the storage server (X3-2L). Each cell will now have 1600GB of flash; this means an X3-2 full rack will have 20.3 TB of total flash! For all the details I would like to refer to the Oracle Exadata product page: www.oracle.com/exadata Together with the announcement of the X3 generation, a new Exadata release, 11.2.3.2, has been made available. New Exadata systems will be shipped with this release, and existing installations can be updated to it. As always there is a storage cell patch and a patch for the compute node, which again needs to be applied using YUM. Instructions and requirements for patching existing Exadata compute nodes to 11.2.3.2 using YUM can be found in the patch README. Depending on the release you have installed on your compute nodes, the README will direct you to a particular section in MOS note 1473002.1. MOS 1473002.1 should only be followed with the instructions from the 11.2.3.2 patch README. As with 11.2.3.1.0 and 11.2.3.1.1, instructions are included to prepare your systems to use YUM for the first time in case you are still on release 11.2.2.4.2 or earlier. You will also find these one-time setup instructions in MOS note 1473002.1. By default, compute nodes that are updated to 11.2.3.2.0 will have the UEK kernel. Before 11.2.3.2.0, the 'compatible kernel' was used for the compute nodes. For 11.2.3.2.0, customers have the choice to replace the UEK kernel with the Exadata standard 'compatible kernel', which is also in the ULN 11.2.3.2 channel. The recommendation is to use the kernel that is installed by default. One of the other great new things 11.2.3.2 brings is Writeback FlashCache (wbfc). By default, wbfc is disabled after the upgrade to 11.2.3.2. Enable wbfc after patching on the storage servers of your test environment and see the improvements it brings for your applications. Writeback FlashCache can be enabled by dropping the existing FlashCache, stopping the cellsrv process and changing the FlashCacheMode attribute of the cell. Of course, stopping cellsrv can only be done in a controlled manner. Steps: drop flashcache; alter cell shutdown services cellsrv (again, cellsrv can only be stopped in a controlled manner); alter cell flashCacheMode = WriteBack; alter cell startup services cellsrv; create flashcache all. Going back to WriteThrough FlashCache is also possible, but only after flushing the FlashCache: alter cell flashcache all flush. The last item I would like to highlight in particular is already from a while ago, but a great thing to emphasize: Oracle Platinum Services. On top of the remote fault monitoring with faster response times, Oracle has included update and patch deployment services. These services are delivered by Oracle Advanced Customer Support at no additional cost for qualified Oracle Premier Support customers. References: 11.2.3.2.0 README Exadata YUM Repository Population, One-Time Setup Configuration and YUM upgrades  1473002.1 Oracle Platinum Services
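    For convenience, the enable sequence above as a CellCLI session (commands exactly as listed in the post; run on each storage cell, stopping cellsrv only in a controlled manner):

        CellCLI> drop flashcache
        CellCLI> alter cell shutdown services cellsrv
        CellCLI> alter cell flashCacheMode = WriteBack
        CellCLI> alter cell startup services cellsrv
        CellCLI> create flashcache all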

    Read the article
