Search Results

Search found 10060 results on 403 pages for 'column'.


  • SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

    - by pinaldave
    No! It is not a SQL Server issue or an SSMS issue. It is simply how things work, and there is an easy trick to resolve it. It is very common that when users copy a resultset to Excel, the floating point or decimal digits go missing. The solution is simple and requires only a small adjustment in Excel. By default Excel is smart: when it detects that a pasted value is numeric, it changes the column format to accommodate it. Since trailing zeros after the decimal point carry no value, Excel automatically hides them. To prevent this from happening, convert the columns to Text format so the original formatting is preserved. Here is how you can do it. Right-click the corner between A and 1; this selects the complete spreadsheet. If you want to change the format of a single column, select that column the same way. In the menu click on Format Cells… to bring up the formatting dialog. By default the selected format will be General; change it to Text. This changes the format of all the selected cells to Text. Now paste the values from SSMS into Excel once again. This time it preserves the decimal values from SSMS. Solved! Do you know any other trick to preserve the decimal values? Leave a comment please. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: Excel
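    A minimal T-SQL illustration of the kind of resultset involved (the values here are made up): pasted into an Excel column left in the General format the trailing zeros disappear, while a Text-formatted column keeps them as-is.

    -- Hypothetical query whose trailing decimal zeros Excel would hide
    -- when the destination column is formatted as General.
    SELECT CAST(12.50  AS DECIMAL(10, 2)) AS Amount   -- shows as 12.5 in General format
    UNION ALL
    SELECT CAST(100.00 AS DECIMAL(10, 2));            -- shows as 100 in General format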

    Read the article

  • Big level objects collision system for 2d game

    - by Aristarhys
    I read many variants today and picked up some general knowledge, so here are the steps of my thinking in pictures (horrible paint.net ones). We need a grid system so that we only check things that are nearby, perform a cheap check to rule out the deep check, and only at the end do a deep check such as per-pixel collision. Step 1 - Let p1 and p2 be sprites. First we just do a circle collision check; because of the large distance between p1 and p2 it fails, and so of course we don't need to test more deeply. But if we have not 2 but 20 objects, why do we even circle-test something so far outside of our view? Step 2 - Add a basic column system: now we don't bother with p2 at all if it's in a column far from p1's column, so we skip even the circle test. But p3 is in the same column, so we do the circle test, which of course fails. Step 3 - Improve the column system into a grid system with a grid cell size matching the p1, p2, p3 collision boxes, so we also cut out things far above or below p1. And this is all great until BIG objects arrive, which are some kind of platforms. They are much bigger than a grid cell. The circle test for them succeeds, but the deep check against the whole big object fails. And that is the part I can't get. How do I store the grid position of a big object? As 4 grid coordinates for the big object's corners? And if one of them is close to p1, do a circle check against the centre of the big object, then a deep check if that succeeds? Am I doing it wrong? My possible solution:

    Read the article

  • Layout Columns - Equal Height

    - by Kyle
    I remember first starting out using tables for layouts and learning that I should not be doing that. I am working on a new site and cannot seem to get equal-height columns without using tables. Here is an example of the attempt with div tags. <div class="row"> <div class="column">column1</div> <div class="column">column2</div> <div class="column">column3</div> <div style="clear:both"></div> </div> What I tried was making the columns float left and setting their widths to 33%, which works fine; I use the clear:both div so that the row is the size of the tallest column, but the columns end up with different heights based on how much content they have. I have found many fixes, which mostly involve CSS hacks that just make it look right, but that's not what I want. I thought of doing it in JavaScript, but then it would look different for those who disable their JavaScript. The only true way of doing it that I can think of is using tables, since the cells in the same row all have equal heights. But I know it's bad to use tables. After searching forever I then came across this: http://intangiblestyle.com/lab/equal-height-columns-with-css/ What it seems to do is exactly the same as tables, since it just sets the display properties to behave like tables. Would using that be just as bad as using tables? I honestly can't find anything else that I could do. edit @Su' I have looked into "faux columns" and do not think that is what I want. I think I would be able to implement better designs for my site using the display:table method. I posted this question because I just wasn't sure if I should use it, since I have always heard it's bad to use tables in website layouts.

    Read the article

  • How to Create a tree type CVL in Content Server (UCM)

    - by rajeev.y.ranjan-oracle
    Steps to create a tree choice list:
    1) Create a table "tblStates" with columns "stateID" and "stateName". Click on "Add Recommended".
    2) Create another table "tblCities" with columns "cityID", "stateID" and "cityName".
    3) Create two views on these tables, namely "tblstateview" and "tblcityview".
    4) In "tblstateview" add two rows: JH and MH in the stateID column, Jharkhand and Maharashtra in the stateName column.
    5) Similarly, in "tblcityview" add two rows: BO and RA in the cityID column, JH and MH in the stateID column, and Bokaro and Mumbai in the cityName column.
    6) Create a relationship with parent info "tblStates" and stateID, and child info "tblCities" and stateID.
    7) Create a metadata field named "Newtest": enable the option list, go to Configure, select "Use tree", and click on "Go edit definition".
    8) Tree definition at level 1: a) choose "tblstateview"; b) choose the relation "newstatecity". At level 2: a) choose "tblcityview".
    Log out of the Native UI and Content UI and test the tree created with the metadata field "Newtest".
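    For reference, a rough SQL sketch of the two supporting tables and sample rows described in the steps above (the column types here are assumptions; in UCM you would normally let the Configuration Manager applet create the tables for you):

    CREATE TABLE tblStates (
      stateID   VARCHAR(10) PRIMARY KEY,
      stateName VARCHAR(100)
    );

    CREATE TABLE tblCities (
      cityID   VARCHAR(10) PRIMARY KEY,
      stateID  VARCHAR(10) REFERENCES tblStates (stateID),
      cityName VARCHAR(100)
    );

    INSERT INTO tblStates (stateID, stateName) VALUES ('JH', 'Jharkhand');
    INSERT INTO tblStates (stateID, stateName) VALUES ('MH', 'Maharashtra');

    INSERT INTO tblCities (cityID, stateID, cityName) VALUES ('BO', 'JH', 'Bokaro');
    INSERT INTO tblCities (cityID, stateID, cityName) VALUES ('RA', 'MH', 'Mumbai');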

    Read the article

  • Can someone explain the (reasons for the) implications of column vs row major in multiplication/concatenation?

    - by sebf
    I am trying to learn how to construct view and projection matrices, and keep running into difficulties in my implementation owing to my confusion about the two standards for matrices. I know how to multiply a matrix, and I can see that transposing before multiplication would completely change the result, hence the need to multiply in a different order. What I don't understand though is what's meant by 'notational convention' only - from the articles here and here the authors appear to assert that it makes no difference to how the matrix is stored, or transferred to the GPU, but on the second page that matrix is clearly not equivalent to how it would be laid out in memory for row-major; and if I look at a populated matrix in my program I see the translation components occupying the 4th, 8th and 12th elements. Given that: "post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices." Why in the following snippet of code: Matrix4 r = t3 * t2 * t1; Matrix4 r2 = t1.Transpose() * t2.Transpose() * t3.Transpose(); does r != r2, and why does pos3 != pos for: Vector4 pos = wvpM * new Vector4(0f, 15f, 15f, 1); Vector4 pos3 = wvpM.Transpose() * new Vector4(0f, 15f, 15f, 1); Does the multiplication process change depending on whether the matrices are row or column major, or is it just the order (for an equivalent effect)? One thing that isn't helping this become any clearer is that when provided to DirectX, my column-major WVP matrix is used successfully to transform vertices with the HLSL call: mul(vector,matrix) which should result in the vector being treated as row-major, so how can the column-major matrix provided by my math library work?
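    One identity is worth keeping in view when reasoning about the snippets above: transposition reverses the order of a product, so transposing each factor without also reversing the multiplication order cannot give back the original result. In LaTeX notation:

    (T_3 T_2 T_1)^{\top} = T_1^{\top} T_2^{\top} T_3^{\top}, \qquad (M v)^{\top} = v^{\top} M^{\top}

    So t1.Transpose() * t2.Transpose() * t3.Transpose() equals the transpose of (t3 * t2 * t1) rather than the product itself, and multiplying wvpM.Transpose() by a column vector corresponds to the row-vector product v^T * wvpM, which is why pos3 differs from pos unless the vector's role (row vs column) is switched as well.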

    Read the article

  • What is the best way to slide a panel in WPF?

    - by Kris Erickson
    I have a fairly simple UserControl that I have made (pardon my Xaml I am just learning WPF) and I want to slide the off the screen. To do so I am animating a translate transform (I also tried making the Panel the child of a canvas and animating the X position with the same results), but the panel moves very jerkily, even on a fairly fast new computer. What is the best way to slide in and out (preferably with KeySplines so that it moves with inertia) without getting the jerkyness. I only have 8 buttons on the panel, so I didn't think it would be too much of a problem. Here is the Xaml I am using, it runs fine in Kaxaml, but it is very jerky and slow (as well as being jerkly and slow when run compiled in a WPF app). <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="1002" Height="578"> <UserControl.Resources> <Style TargetType="Button"> <Setter Property="Control.Padding" Value="4"/> <Setter Property="Control.Margin" Value="10"/> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Grid Name="backgroundGrid" Width="210" Height="210" Background="#00FFFFFF"> <Grid.BitmapEffect> <BitmapEffectGroup> <DropShadowBitmapEffect x:Name="buttonDropShadow" ShadowDepth="2"/> <OuterGlowBitmapEffect x:Name="buttonGlow" GlowColor="#A0FEDF00" GlowSize="0"/> </BitmapEffectGroup> </Grid.BitmapEffect> <Border x:Name="background" Margin="1,1,1,1" CornerRadius="15"> <Border.Background> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <LinearGradientBrush.GradientStops> <GradientStop Offset="0" Color="#FF0062B6"/> <GradientStop Offset="1" Color="#FF0089FE"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> </Border.Background> </Border> <Border Margin="1,1,1,0" BorderBrush="#FF000000" BorderThickness="1.5" CornerRadius="15"/> <ContentPresenter HorizontalAlignment="Center" Margin="{TemplateBinding Control.Padding}" VerticalAlignment="Center" Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}"/> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> <Canvas> <Grid x:Name="Panel1" Height="578" Canvas.Left="0" Canvas.Top="0"> <Grid.RenderTransform> <TransformGroup> <TranslateTransform x:Name="panelTranslate" X="0" Y="0"/> </TransformGroup> </Grid.RenderTransform> <Grid.RowDefinitions> <RowDefinition Height="287"/> <RowDefinition Height="287"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition x:Name="Panel1Col1"/> <ColumnDefinition x:Name="Panel1Col2"/> <ColumnDefinition x:Name="Panel1Col3"/> <ColumnDefinition x:Name="Panel1Col4"/> <!-- Set width to 0 to hide a column--> </Grid.ColumnDefinitions> <Button x:Name="Panel1Product1" Grid.Column="0" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"> <Button.Triggers> <EventTrigger RoutedEvent="Button.Click" SourceName="Panel1Product1"> <EventTrigger.Actions> <BeginStoryboard> <Storyboard> <DoubleAnimation BeginTime="00:00:00.6" Duration="0:0:3" From="0" Storyboard.TargetName="panelTranslate" Storyboard.TargetProperty="X" To="-1000"/> </Storyboard> </BeginStoryboard> </EventTrigger.Actions> </EventTrigger> </Button.Triggers> </Button> <Button x:Name="Panel1Product2" Grid.Column="0" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product3" Grid.Column="1" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product4" 
Grid.Column="1" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product5" Grid.Column="2" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product6" Grid.Column="2" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product7" Grid.Column="3" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/> <Button x:Name="Panel1Product8" Grid.Column="3" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/> </Grid> </Canvas> </UserControl>

    Read the article

  • CSS issue with elements spanning columns

    - by bigFoot
    Hi folks. Overview: I'm trying to create a relatively simple page layout, detailed below, and running into problems no matter how I try to approach it. Concept:
    - A standard-size-block layout. I'll quote unit widths: each content block is 240px square with 5px of margin around it.
    - A left column of fixed width of 1 unit (245px - 1 block + margin to the left). No problems here.
    - A right column of variable width to fill the remaining space. No problems here either.
    - In the left column, a number of 1unit x 1unit blocks fixed down the column. Also some blank space at the top - again, not a problem.
    - In the right column: a number of free-floating blocks of standard unit sizes which float around and fill the space given to them by the browser window. No problems here.
    - Lastly, a single element, 2 units wide, which sits half in the left column and half in the right column, and which the blocks in the right column still float around. Here be dragons.
    Please see here for a diagram: http://is.gd/bPUGI
    Problem: No matter how I approach this, it goes wrong. Below is the code for my existing attempt at a solution. My current problem is that the 1x1 blocks on the right do not respect the 2x1 block, and as a result half of the 2x1 block is overwritten by a 1x1 block in the right-hand column. I'm aware that this is almost certainly an issue with position: absolute taking things out of flow. However, I can't really find a way round that which doesn't just throw up another problem instead.
    Code: <html> <head> <title>wat</title> <style type="text/css"> body { background: #ccc; color: #000; padding: 0px 5px 5px 0px; margin: 0px; } #leftcol { width: 245px; margin-top: 490px; position: absolute; } #rightcol { left: 245px; position: absolute; } #bigblock { float: left; position: relative; margin-top: -240px; background: red; } .cblock { margin: 5px 0px 0px 5px; float: left; overflow: hidden; display: block; background: #fff; } .w1 { width: 240px; } .w2 { width: 485px; } .l1 { height: 240px; } </style> </head> <body> <div class="cblock w2 l1" id="bigblock"> <h1>DRAGONS</h1> <p>Here be they</p> </div> <div id="leftcol"> <div class="cblock w1 l1"> <h1>Left 1</h1> <p>1x1 block</p> </div> </div> <div id="rightcol"> <div class="cblock w1 l1"> <h1>Right 1</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 2</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 3</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 4</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 5</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 6</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 7</h1> <p>1x1 block</p> </div> </div> </body> </html>
    Constraints: One final note: I need cross-browser compatibility, though I'm more than happy to enforce this with JS if necessary. That said, if a CSS-only solution exists, I'd be extremely happy. Thanks in advance!

    Read the article

  • text-align:left / align:left in the Google Android browser is making the text a column of around 20% of the page - how can this be fixed?

    - by user981220
    Hello, I have a problem with text-align: left or align: left. No matter what I put, the text gets stuck on the left side of the page in a big column. How can this be fixed? Example of the code: <div class="maintext02"><span><p>text</p> .maintext02 { font-family:Georgia, "Times New Roman", Times, serif; color:#545454; line-height:25px; font-size:12px; width:943px; padding-top:15px; padding-bottom:15px; text-align:left; }

    Read the article

  • Many to Many delete in NHibernate: two parents with a common association

    - by Joshua Grippo
    I have 3 top level entities in my app: Circuit, Issue, Document Circuits can contain Documents and Issues can contain Documents. When I delete a Circuit, I want it to delete the documents associated with it, unless it is used by something else. I would like this same behavior with Issues. I have it working when the only association is in the same table in the db, but if it is in another table, then it fails due to foreign key constraints. ex 1(This will cascade properly, because there is only a foreign constraint from Circuit to Document) Document1 exists. Circuit1 exists and contains a reference to Document1. If I delete Circuit1 then it deletes Document1 with it. ex 2(This will cascade properly, because there is only a foreign constraint from Circuit to Document.) Document1 exists. Circuit1 exists and contains a reference to Document1. Circuit2 exists and contains a reference to Document1. If I delete Circuit1 then it is deleted, but Document1 is not deleted because Circuit2 exists. If I then delete Circuit2, then Document1 is deleted. ex 3(This will throw an error, because when it deletes the Circuit it sees that there are no other circuits that reference the document so it tries to delete the document. However it should not, because there is an Issue that has a foreign constraint to the document.) Document 1 exists. Circuit1 exists and contains a reference to Document1. Issue1 exists and contains a reference to Document1. If I delete Circuit1, then it fails, because it tries to delete Document1, but Issues1 still has a reference. DB: This think won't let upload an image, so here is the ERD to the DB: http://lh3.ggpht.com/_jZWhe7NXay8/TROJhOd7qlI/AAAAAAAAAGU/rkni3oEANvc/CircuitIssues.gif Model: public class Circuit { public virtual int CircuitID { get; set; } public virtual string CJON { get; set; } public virtual IList<Document> Documents { get; set; } } public class Issue { public virtual int IssueID { get; set; } public virtual string Summary { get; set; } public virtual IList<Model.Document> Documents { get; set; } } public class Document { public virtual int DocumentID { get; set; } public virtual string Data { get; set; } } Mapping Files: <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Circuit" table="Circuit"> <id name="CircuitID"> <column name="CircuitID" not-null="true"/> <generator class="identity" /> </id> <property name="CJON" column="CJON" type="string" not-null="true"/> <bag name="Documents" table="CircuitDocument" cascade="save-update,delete-orphan"> <key column="CircuitID"/> <many-to-many class="Document"> <column name="DocumentID" not-null="true"/> </many-to-many> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Issue" table="Issue"> <id name="IssueID"> <column name="IssueID" not-null="true"/> <generator class="identity" /> </id> <property name="Summary" column="Summary" type="string" not-null="true"/> <bag name="Documents" table="IssueDocument" cascade="save-update,delete-orphan"> <key column="IssueID"/> <many-to-many class="Document"> <column name="DocumentID" not-null="true"/> </many-to-many> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Document" table="Document"> <id name="DocumentID"> <column name="DocumentID" 
not-null="true"/> <generator class="identity" /> </id> <property name="Data" column="Data" type="string" not-null="true"/> </class> </hibernate-mapping> Code: using (ISession session = sessionFactory.OpenSession()) { var doc = new Model.Document() { Data = "Doc" }; var circuit = new Model.Circuit() { CJON = "circ" }; circuit.Documents = new List<Model.Document>(new Model.Document[] { doc }); var issue = new Model.Issue() { Summary = "iss" }; issue.Documents = new List<Model.Document>(new Model.Document[] { doc }); session.Save(circuit); session.Save(issue); session.Flush(); } using (ISession session = sessionFactory.OpenSession()) { foreach (var item in session.CreateCriteria<Model.Circuit>().List<Model.Circuit>()) { session.Delete(item); } //this flush fails, because there is a reference to a child document from issue session.Flush(); foreach (var item in session.CreateCriteria<Model.Issue>().List<Model.Issue>()) { session.Delete(item); } session.Flush(); }
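    A rough SQL sketch of the schema implied by the ERD and mapping files above (the exact column types are assumptions), which makes the failure in example 3 easier to see: cascading a Circuit delete down to its Documents fails when a row in IssueDocument still references the same DocumentID.

    CREATE TABLE Document (
      DocumentID INT IDENTITY(1,1) PRIMARY KEY,
      Data       NVARCHAR(255) NOT NULL
    );

    CREATE TABLE Circuit (
      CircuitID INT IDENTITY(1,1) PRIMARY KEY,
      CJON      NVARCHAR(255) NOT NULL
    );

    CREATE TABLE Issue (
      IssueID INT IDENTITY(1,1) PRIMARY KEY,
      Summary NVARCHAR(255) NOT NULL
    );

    -- Link tables: a Document can be referenced from both sides.
    CREATE TABLE CircuitDocument (
      CircuitID  INT NOT NULL REFERENCES Circuit (CircuitID),
      DocumentID INT NOT NULL REFERENCES Document (DocumentID)
    );

    CREATE TABLE IssueDocument (
      IssueID    INT NOT NULL REFERENCES Issue (IssueID),
      DocumentID INT NOT NULL REFERENCES Document (DocumentID)
    );

    -- Example 3: Circuit1 and Issue1 both reference Document1. Deleting
    -- Circuit1 with cascade-to-Document fails, because the IssueDocument
    -- row still holds a foreign key to that DocumentID.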

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn’t smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code. When I created a database I’ve been using to experiment around with various different OR/Ms recently I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there’s a good reason why it fails, can you spot it below? (read on :-}) Here is the original database layout: There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model the *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix): The relationship looks perfectly fine, both in the designer as well as in the XML document: <Table Name="dbo.wws_Item_Categories" Member="ItemCategories"> <Type Name="ItemCategory"> <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" /> <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" /> </Type> </Table> <Table Name="dbo.wws_Categories" Member="Categories"> <Type Name="Category"> <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" /> <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" /> <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" /> <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" /> <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" /> <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" /> </Type> </Table> However when looking at the code generated these navigation properties (also on Item) are completely missing: [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class ItemCategory : Westwind.BusinessFramework.EntityBase { private System.Guid _ItemId; private System.Guid _CategoryId; public ItemCategory() { } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public System.Guid ItemId { get { return this._ItemId; } set { if ((this._ItemId != value)) { 
this._ItemId = value; } } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)] public System.Guid CategoryId { get { return this._CategoryId; } set { if ((this._CategoryId != value)) { this._CategoryId = value; } } } } Notice that the Item and Category association properties which should be EntityRef properties are completely missing. They’re there in the model, but the generated code – not so much. So what’s the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain’t it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a Pk to the database and then importing results in this model layout: which properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in a table in order to recognize a many to many relation. EF actually handles the M->M relation directly without the intermediate link entity unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId in the database which shows up in LINQ to SQL like this: This also work in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach as it also guarantees uniqueness of the keys and so helps in database integrity. It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be when they were not. It’s actually a well documented feature of L2S that each entity in the model requires a Pk but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issue with L2S of course – you have to play by its rules and once you hit one of those rules there’s no way around them – you’re stuck with what it requires which in this case meant changing the database.© Rick Strahl, West Wind Technologies, 2005-2010Posted in ADO.NET  LINQ  
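    A sketch of the two fixes discussed above, in T-SQL (the table and column names follow the post; the constraint names and the uniqueidentifier type for the new Id column are assumptions): either add a surrogate primary key to the link table, or make the two foreign key columns a composite primary key.

    -- Option 1: surrogate key on the link table, which is what the post adds first
    ALTER TABLE dbo.wws_Item_Categories
      ADD Id UNIQUEIDENTIFIER NOT NULL
          CONSTRAINT DF_ItemCategories_Id DEFAULT NEWID();

    ALTER TABLE dbo.wws_Item_Categories
      ADD CONSTRAINT PK_ItemCategories PRIMARY KEY (Id);

    -- Option 2 (the approach mentioned in the comments update): a composite key
    -- on both foreign key columns, which also guarantees each pairing is unique.
    -- ALTER TABLE dbo.wws_Item_Categories
    --   ADD CONSTRAINT PK_ItemCategories PRIMARY KEY (ItemId, CategoryId);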

    Read the article

  • SQL University: Database testing and refactoring tools and examples

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing SQLU part 2 - What and why of database refactoring SQLU part 3 - Database testing and refactoring tools and examples This is the third and last part of the series and in it we'll take a look at tools we can test and refactor with, plus an example of both. Tools of the trade First a few thoughts about how to go about testing a database. I'm firmly against any testing tools that go into the database itself or need an extra database. Unit tests for the database and applications using the database should all be in one place using the same technology. By using database specific frameworks we fragment our tests into many places and increase test system complexity. Let's take a look at some testing tools. 1. NUnit, xUnit, MbUnit All three are .Net testing frameworks meant to unit test .Net applications. But we can test databases with them just fine. I use NUnit because I've always used it for work and personal projects. One day this might change. So the thing to remember is to be flexible if something better comes along. All three are quite similar and you should be able to switch between them without much problem. 2. TSQLUnit As much as this framework is helpful for the non-C#-savvy folks, I don't like it for the reason I stated above. It lives in the database and thus fragments the testing infrastructure. Also it appears that it's not being actively developed anymore. 3. DbFit I haven't had the pleasure of trying this tool just yet but it's on my to-do list. From what I've read and heard, Gojko Adzic (@gojkoadzic on Twitter) has done a remarkable job with it. 4. Redgate SQL Refactor and Apex SQL Refactor Neither of these refactoring tools is free; however, if you have hardcore refactoring planned they are worthwhile looking into. I've only used Red Gate's Refactor and was quite impressed with it. 5. Reverting the database state I've talked before about ways to revert a database to its pre-test state after unit testing. This still holds and I haven't changed my mind. Also make sure to read the comments as they are quite informative. I especially like the idea of setting up and tearing down the schema for each test group with NHibernate. Testing and refactoring example We'll take a look at a simple schema and data test for a view, and at refactoring the SELECT * in that view. We'll use a single table PhoneNumbers with ID and Phone columns. Then we'll refactor the Phone column into 3 columns: Prefix, Number and Suffix. Lastly we'll remove the original Phone column. Then we'll check how the view behaves with tests in NUnit. The comments in the code explain the problem so be sure to read them. I'm assuming you know NUnit and C#.
T-SQL Code:

USE tempdb
GO

CREATE TABLE PhoneNumbers
(
    ID INT IDENTITY(1,1),
    Phone VARCHAR(20)
)
GO

INSERT INTO PhoneNumbers(Phone)
SELECT '111 222333 444' UNION ALL
SELECT '555 666777 888'
GO

-- notice we don't have WITH SCHEMABINDING
CREATE VIEW vPhoneNumbers
AS
    SELECT * FROM PhoneNumbers
GO

-- Let's take a look at what the view returns.
-- If we add new columns and rows both tests will fail.
SELECT * FROM vPhoneNumbers
GO
-- DoesViewReturnCorrectColumns test will SUCCEED
-- DoesViewReturnCorrectData test will SUCCEED

-- refactor to split the Phone column into 3 parts
ALTER TABLE PhoneNumbers ADD Prefix VARCHAR(3)
ALTER TABLE PhoneNumbers ADD Number VARCHAR(6)
ALTER TABLE PhoneNumbers ADD Suffix VARCHAR(3)
GO

-- update the new columns
UPDATE PhoneNumbers
   SET Prefix = LEFT(Phone, 3),
       Number = SUBSTRING(Phone, 5, 6),
       Suffix = RIGHT(Phone, 3)
GO

-- remove the old column
ALTER TABLE PhoneNumbers DROP COLUMN Phone
GO

-- This returns unexpected results!
-- It returns 2 columns, ID and Phone, even though
-- we don't have a Phone column anymore.
-- Notice that the data is from the Prefix column.
-- This is a danger of SELECT *
SELECT * FROM vPhoneNumbers
-- DoesViewReturnCorrectColumns test will SUCCEED
-- DoesViewReturnCorrectData test will FAIL

-- for a fix we have to call sp_refreshview
-- to refresh the view definition
EXEC sp_refreshview 'vPhoneNumbers'

-- after the refresh the view returns 4 columns;
-- this breaks the input/output behavior of the database,
-- which refactoring MUST NOT do
SELECT * FROM vPhoneNumbers
-- DoesViewReturnCorrectColumns test will FAIL
-- DoesViewReturnCorrectData test will FAIL

-- to fix the input/output behavior change problem
-- we have to concat the 3 columns into one named Phone
ALTER VIEW vPhoneNumbers
AS
SELECT ID, Prefix + ' ' + Number + ' ' + Suffix AS Phone
FROM PhoneNumbers
GO

-- now it works as expected
SELECT * FROM vPhoneNumbers
-- DoesViewReturnCorrectColumns test will SUCCEED
-- DoesViewReturnCorrectData test will SUCCEED

-- clean up
DROP VIEW vPhoneNumbers
DROP TABLE PhoneNumbers

C# test code:

[Test]
public void DoesViewReturnCorrectColumns()
{
    // conn is a valid SqlConnection to the server's tempdb
    // note the SET FMTONLY ON with which we return only schema and no data
    using (SqlCommand cmd = new SqlCommand("SET FMTONLY ON; SELECT * FROM vPhoneNumbers", conn))
    {
        DataTable dt = new DataTable();
        dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
        // test returned schema: number of columns, column names and data types
        Assert.AreEqual(dt.Columns.Count, 2);
        Assert.AreEqual(dt.Columns[0].Caption, "ID");
        Assert.AreEqual(dt.Columns[0].DataType, typeof(int));
        Assert.AreEqual(dt.Columns[1].Caption, "Phone");
        Assert.AreEqual(dt.Columns[1].DataType, typeof(string));
    }
}

[Test]
public void DoesViewReturnCorrectData()
{
    // conn is a valid SqlConnection to the server's tempdb
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM vPhoneNumbers", conn))
    {
        DataTable dt = new DataTable();
        dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
        // test returned data: number of rows and their values
        Assert.AreEqual(dt.Rows.Count, 2);
        Assert.AreEqual(dt.Rows[0]["ID"], 1);
        Assert.AreEqual(dt.Rows[0]["Phone"], "111 222333 444");
        Assert.AreEqual(dt.Rows[1]["ID"], 2);
        Assert.AreEqual(dt.Rows[1]["Phone"], "555 666777 888");
    }
}

With this simple example we've seen how a very simple schema can cause a lot of problems in the whole application/database system if it doesn't have tests. Imagine what would happen if some outside process depended on that view.
It would get wrong data and propagate it silently throughout the system. And that is not good. So have tests at least for the crucial parts of your systems. And with that we conclude the Database Testing and Refactoring week at SQL University. Hope you learned something new and enjoy the learning weeks to come. Have fun!

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 Complete Series of Database Coding Standards and Guidelines SQL SERVER Database Coding Standards and Guidelines – Introduction SQL SERVER – Database Coding Standards and Guidelines – Part 1 SQL SERVER – Database Coding Standards and Guidelines – Part 2 SQL SERVER Database Coding Standards and Guidelines Complete List Download Explanation and Example – SELF JOIN Sometimes all of the data you require is contained within a single table, but the data you need to extract is related to other data in the same table. Examples of this type of data relate to employee information, where the table may have both an employee's ID number for each record and also a field that holds the ID number of that employee's supervisor or manager. To retrieve the data, the table is required to relate/join to itself. Insert Multiple Records Using One Insert Statement – Use of UNION ALL This is a very interesting question I received from a new developer: how can I insert multiple values into a table using only one insert? When multiple records are to be inserted into a table, the following is the common way to do it using T-SQL. Function to Display Current Week Date and Day – Weekly Calendar A straight blog post with a script to find the current week's dates and days based on the parameters passed to the function. 2008 In my beginning years, I had almost the same confusion that many developers have in their earlier years. Here are two of the interesting questions which I attempted to answer in my early years. Even if you are an experienced developer, you may still like to read the following two questions: Order Of Column In Index Order of Conditions in WHERE Clauses Example of DISTINCT in Aggregate Functions Have you ever used DISTINCT with an aggregate function? Here is a simple example of how users can do it. Create a Comma Delimited List Using SELECT Clause From Table Column A straight-to-script example where I explained how to do something easily and quickly. Compound Assignment Operators SQL SERVER 2008 introduced the new concept of compound assignment operators. Compound assignment operators have been available in many other programming languages for quite some time. A compound assignment operator is an operator where a variable is operated upon and assigned on the same line. PIVOT and UNPIVOT Table Examples Here is a very interesting question – the answer to the question can be both YES and NO. "If we PIVOT any table and UNPIVOT that table do we get our original table?" Read the blog post to get the explanation of the question above. 2009 What is Interim Table – Simple Definition of Interim Table The interim table is a table that is generated by joining two tables and is not the final result table. In other words, when two tables are joined they create an interim table as the resultset, but the resultset is not final yet. It may be possible that more tables are about to be joined onto the interim table, and more operations are still to be applied on that table (e.g. ORDER BY, HAVING etc). Besides, it may be possible that there is no interim table; sometimes the final table is what is generated when the query is run.
    2010 Stored Procedure and Transactions If a stored procedure is transactional, then it should roll back the complete transaction when it encounters any error. Well, that does not happen in this case, which proves that a stored procedure does not by itself provide transactional behavior to a batch of T-SQL. Generate Database Script for SQL Azure When talking about SQL Azure, the most common complaint I hear is that the script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, but not any more. If you have SQL Server 2008 R2 installed you can follow the guideline below to generate a script which is compatible with SQL Azure. Convert IN to EXISTS – Performance Talk It is NOT necessarily true that every time IN is replaced by EXISTS it gives better performance. However, in the case listed above it does for sure give better performance. You can read about this subject in the associated blog post. Subquery or Join – Various Options – SQL Server Engine Knows the Best In every performance tuning exercise, I hear the conversation among developers where some prefer subqueries and some prefer joins. In this two-part blog post, I explain the same in detail with examples. Part 1 | Part 2 Merge Operations – Insert, Update, Delete in Single Execution MERGE is a new feature that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just updates it, and similarly, when the data is unmatched, it is inserted. 2011 Puzzle – Statistics are not Updated but are Created Once Here is the quick scenario of my setup. Create a table. Insert 1,000 records. Check the statistics. Now insert 10 times more, 10,000 records. Check the statistics – they will NOT be updated – WHY? Question to You – When to use a Function and When to use a Stored Procedure Personally, I believe that they are both different things - they cannot be compared. I can say it would be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably many times, and in real life (i.e., production environments) I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. 2012 In the year 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Guess the Next Value – Puzzle 4 Simple Example to Configure Resource Governor – Introduction to Resource Governor Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server. If there are different workloads running on SQL Server, and each of the workloads needs different resources, or when workloads are competing for resources with each other and affecting the performance of the whole server, Resource Governor is a very important feature.
    Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video SELECT * has several drawbacks: it retrieves unnecessary columns and increases network traffic; when new columns are added, views need to be refreshed manually; it leads to usage of a sub-optimal execution plan; it uses the clustered index in most cases instead of the optimal index; and it is difficult to debug. SQL SERVER – Load Generator – Free Tool From CodePlex The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well. A Puzzle – Swap Value of Column Without Case Statement Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example, if the value in the gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male'. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
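    As a quick illustration of two of the tricks mentioned above, here is a hedged T-SQL sketch (the table is hypothetical, and the linked posts may use different solutions): a multi-row insert with UNION ALL, and one possible CASE-free way to swap the Gender value.

    -- Hypothetical table used for both examples
    CREATE TABLE #People (ID INT, Gender VARCHAR(10));

    -- Insert multiple records using one INSERT statement (UNION ALL)
    INSERT INTO #People (ID, Gender)
    SELECT 1, 'male' UNION ALL
    SELECT 2, 'female' UNION ALL
    SELECT 3, 'male';

    -- One way to swap 'male'/'female' without a CASE statement:
    -- REPLACE removes the current value from the concatenated pair,
    -- leaving only the other value behind.
    UPDATE #People
       SET Gender = REPLACE('malefemale', Gender, '');

    SELECT * FROM #People;
    DROP TABLE #People;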

    Read the article

  • Control-Break Style ADF Table - Comparing Values with Previous Row

    - by Steven Davelaar
    Sometimes you need to display data in an ADF Faces table in a control-break layout style, where rows should be "indented" when the break column has the same value as in the previous row. In the screen shot below, you see how the table breaks on both the RegionId column as well as the CountryId column. To implement this I didn't use fancy SQL statements. The table is based on a straightforward Locations ViewObject that is based on the Locations entity object and the Countries reference entity object, and the join query was automatically created by adding the reference EO. To get the indentation in the ADF Faces table, we simple use two rendered properties on the RegionId and CountryId outputText items:  <af:column sortProperty="RegionId" sortable="false"            headerText="#{bindings.LocationsView1.hints.RegionId.label}"            id="c5">   <af:outputText value="#{row.RegionId}" id="ot2"                  rendered="#{!CompareWithPreviousRowBean['RegionId']}">     <af:convertNumber groupingUsed="false"                       pattern="#{bindings.LocationsView1.hints.RegionId.format}"/>   </af:outputText> </af:column> <af:column sortProperty="CountryId" sortable="false"            headerText="#{bindings.LocationsView1.hints.CountryId.label}"            id="c1">   <af:outputText value="#{row.CountryId}" id="ot5"                  rendered="#{!CompareWithPreviousRowBean['CountryId']}"/> </af:column> The CompareWithPreviousRowBean managed bean is defined in request scope and is a generic bean that can be used for all the tables in your application that needs this layout style. As you can see the bean is a Map-style bean where we pass in the name of the attribute that should be compared with the previous row. The get method in the bean that is called returns boolean false when the attribute has the same value in the same row. Here is the code of the get method:  public Object get(Object key) {   String attrName = (String) key;   boolean isSame = false;   // get the currently processed row, using row expression #{row}   JUCtrlHierNodeBinding row = (JUCtrlHierNodeBinding) resolveExpression(getRowExpression());   JUCtrlHierBinding tableBinding = row.getHierBinding();   int rowRangeIndex = row.getViewObject().getRangeIndexOf(row.getRow());   Object currentAttrValue = row.getRow().getAttribute(attrName);   if (rowRangeIndex > 0)   {     Object previousAttrValue = tableBinding.getAttributeFromRow(rowRangeIndex - 1, attrName);     isSame = currentAttrValue != null && currentAttrValue.equals(previousAttrValue);   }   else if (tableBinding.getRangeStart() > 0)   {     // previous row is in previous range, we create separate rowset iterator,     // so we can change the range start without messing up the table rendering which uses     // the default rowset iterator     int absoluteIndexPreviousRow = tableBinding.getRangeStart() - 1;     RowSetIterator rsi = null;     try     {       rsi = tableBinding.getViewObject().getRowSet().createRowSetIterator(null);       rsi.setRangeStart(absoluteIndexPreviousRow);       Row previousRow = rsi.getRowAtRangeIndex(0);       Object previousAttrValue = previousRow.getAttribute(attrName);       isSame = currentAttrValue != null && currentAttrValue.equals(previousAttrValue);     }     finally     {       rsi.closeRowSetIterator();     }   }   return isSame; } The row expression defaults to #{row} but this can be changed through the rowExpression  managed property of the bean.  You can download the sample application here.

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data dictionary language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements. MySQL before the InnoDB Plugin Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time. Fast Index Creation in the InnoDB Plugin of MySQL 5.1 MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=0. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine. The ALTER TABLE interface in MySQL 5.6 The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods: handler::check_if_supported_inplace_alter Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed. handler::prepare_inplace_alter_table InnoDB uses this method to set up the data dictionary cache for upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash. 
handler::inplace_alter_table In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of table. Threads that modify the table will also write the changes to the log. 
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
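    A small sketch of how the new interface is exercised from SQL (the table and column names are made up); the LOCK and ALGORITHM clauses let you state your expectations explicitly, and the statement fails up front if they cannot be met:

    -- Online secondary index creation: reads and writes to t stay allowed
    ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE;

    -- Dropping an index is a quick in-place operation
    ALTER TABLE t DROP INDEX idx_a, ALGORITHM=INPLACE, LOCK=NONE;

    -- Force the old table-copying behaviour; requires at least a shared lock
    ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=COPY, LOCK=SHARED;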

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
    This blog explained some of the key features of JPA 2.1 earlier. Since then Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) will provide more details about this new feature in JPA 2.1. Schema Generation refers to generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema depending upon the credentials and authorization of the user. This helps in prototyping of your application where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. This is also useful in environments that require provisioning database on demand, e.g. in a cloud. This feature will allow your JPA domain object model to be directly generated in a database. The generated schema may need to be tuned for actual production environment. This usecase is supported by allowing the schema generation to occur into DDL scripts which can then be further tuned by a DBA. The following set of properties in persistence.xml or specified during EntityManagerFactory creation controls the behaviour of schema generation. Property Name Purpose Values javax.persistence.schema-generation-action Controls action to be taken by persistence provider "none", "create", "drop-and-create", "drop" javax.persistence.schema-generation-target Controls whehter schema to be created in database, whether DDL scripts are to be created, or both "database", "scripts", "database-and-scripts" javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target Controls target locations for writing of scripts. Writers are pre-configured for the persistence provider. Need to be specified only if scripts are to be generated. java.io.Writer (e.g. MyWriter.class) or URL strings javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source Specifies locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider. java.io.Reader (e.g. MyReader.class) or URL strings javax.persistence.sql-load-script-source Specifies location of SQL bulk load script. java.io.Reader (e.g. MyReader.class) or URL string javax.persistence.schema-generation-connection JDBC connection to be used for schema generation javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version Needed if scripts are to be generated and no connection to target database. Values are those obtained from JDBC DatabaseMetaData. javax.persistence.create-database-schemas Whether Persistence Provider need to create schema in addition to creating database objects such as tables, sequences, constraints, etc. "true", "false" Section 11.2 in the JPA 2.1 specification defines the annotations used for schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, @JoinColumn, are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from entity name and entity name (which can be specified explicitly as well) is defaulted from the class name. However annotations may be used to override or customize the values. The following entity class: @Entity public class Employee {    @Id private int id;    private String name;     . . .     
@ManyToOne     private Department dept; } is generated in the database with the following attributes: Maps to EMPLOYEE table in default schema "id" field is mapped to ID column as primary key "name" is mapped to NAME column with a default VARCHAR(255). The length of this field can be easily tuned using @Column. @ManyToOne is mapped to DEPT_ID foreign key column. Can be customized using JOIN_COLUMN. In addition to these properties, couple of new annotations are added to JPA 2.1: @Index - An index for the primary key is generated by default in a database. This new annotation will allow to define additional indexes, over a single or multiple columns, for a better performance. This is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example: @Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})@Entity public class Employee {    . . .} The generated table will have a default index on the primary key. In addition, two new indexes are defined on the NAME column (default ascending) and the foreign key that maps to the department in descending order. @ForeignKey - It is used to define foreign key constraint or to otherwise override or disable the persistence provider's default foreign key definition. Can be specified as part of JoinColumn(s), MapKeyJoinColumn(s), PrimaryKeyJoinColumn(s). For example: @Entity public class Employee {    @Id private int id;    private String name;    @ManyToOne    @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))    private Manager manager;     . . . } In this entity, the employee's manager is mapped by MANAGER_ID column in the MANAGER table. The value of foreignKeyDefinition would be a database specific string. A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. JPA 2.1 Expert Group has released Early Draft 2 of the specification. Section 9.4 and 11.2 provide all details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.
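    As a rough illustration of what the persistence provider might emit for the Employee entity above, here is a hedged SQL sketch of the generated DDL; the exact identifier names, column types, and constraint syntax vary by provider and target database.

    CREATE TABLE EMPLOYEE (
        ID      INTEGER NOT NULL,
        NAME    VARCHAR(255),
        DEPT_ID INTEGER,
        PRIMARY KEY (ID)
    );

    -- Additional indexes requested via @Index(columnList="NAME") and
    -- @Index(columnList="DEPT_ID DESC")
    CREATE INDEX IDX_EMPLOYEE_NAME ON EMPLOYEE (NAME);
    CREATE INDEX IDX_EMPLOYEE_DEPT ON EMPLOYEE (DEPT_ID DESC);

    -- Foreign key for the @ManyToOne relationship to DEPARTMENT
    ALTER TABLE EMPLOYEE
      ADD CONSTRAINT FK_EMPLOYEE_DEPT
      FOREIGN KEY (DEPT_ID) REFERENCES DEPARTMENT (ID);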

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file.

Example Usages
- Cache warming for SQL Server Analysis Services
- Reading the flight recorder
- Finding the longest running queries on a server
- Analyzing statements for CPU or memory, by user or some other criteria you choose

Properties
The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.

- AccessMode (Enumeration) - Determines how the Filename property is interpreted. The values available are DirectInput and Variable.
- Filename (String) - Holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

Trace Column Definition
Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file this may explain them and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately, the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.

TraceDataColumnsSQL.xml - SQL Server Database Engine Trace Columns
TraceDataColumnsAS.xml - SQL Server Analysis Services Trace Columns

The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.
"C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
"C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"

If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so that you can fix any further issues caused by usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below.

Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

Installation
The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

Downloads
Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008

Version History
SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Read the article

  • Bootstrap responsive CSS [migrated]

    - by savolai
    I have a four column design and I am using Bootstrap. The design renders fine in a single column on mobile devices, but in "(min-width: 768px) and (max-width: 979px)" I get four columns even though there is room for only two. So clearly, the rows/spans setup would need to be rethought for those sizes. The only way I can imagine doing this is to use semantic CSS classes in the HTML and only include the grid classes in the CSS using LESS, and then, depending on screen size, include different grid classes to achieve a four or two column layout. I'm not sure this would work either, though. Is this the way to go, or am I overcomplicating it? Thanks! Also at: https://groups.google.com/forum/#!topic/twitter-bootstrap/R5jEp0oQ_-E

    Read the article

  • Oracle remains the database market leader in 2011

    - by Anne Manke
    With the publication of the current edition of "Market Share: All Software Markets, Worldwide 2011", the world's leading market analysis firm Gartner confirms Oracle's market leadership in relational database management systems (RDBMS). Over the past year, Oracle even extended its lead over its competitors in the RDBMS market, with stable growth of 18%: its market share rose from 48.2% in 2010 to 48.8% in 2011. This puts the gap to Oracle's strongest challenger, IBM, at 28.6 percentage points.

Vendor | Revenue 2010 ($USM) | Revenue 2011 ($USM) | Growth 2010 | Growth 2011 | Share 2010 | Share 2011
Oracle | 9,990.5 | 11,787.0 | 10.9% | 18.0% | 48.2% | 48.8%
IBM | 4,300.4 | 4,870.4 | 5.4% | 13.3% | 20.7% | 20.2%
Microsoft | 3,641.2 | 4,098.9 | 10.1% | 12.6% | 17.6% | 17.0%
SAP/Sybase | 744.4 | 1,101.1 | 12.8% | 47.9% | 3.6% | 4.6%
Teradata | 754.7 | 882.3 | 16.9% | 16.9% | 3.6% | 3.7%

Source: Gartner’s “Market Share: All Software Markets, Worldwide 2011,” March 29, 2012, by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie

    Read the article

  • How do I Fix SQL Server error: Order by items must appear in the select list if Select distinct is specified

    - by Paula DiTallo 2007-2009 All Rights Reserved
    There's more than one reason why you may receive this error, but the most common is that the column list in your ORDER BY clause doesn't correlate with the columns specified in your select list when you happen to be using DISTINCT. This is usually easy to spot and resolve. A more obscure reason may be that you are using a function around one of the selected columns, but omitting the same function around the same column name in the ORDER BY clause. Here's an example:

select distinct upper(columnA)
  from [evaluate].[testTable]
 order by columnA asc

This statement will cause the "Order by items must appear in the select list if SELECT DISTINCT is specified." error to appear not because DISTINCT was used, but because the ORDER BY clause did not use the upper() function around columnA. To correct this error, do this:

select distinct upper(columnA)
  from [evaluate].[testTable]
 order by upper(columnA) asc
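For the more common case, here is a minimal sketch against the same table (columnB is a hypothetical second column): ordering by a column that does not appear in the DISTINCT select list raises the same error, and the fix is to either add that column to the select list or order by a column that is already selected.

-- Fails: columnB is not in the DISTINCT select list
select distinct columnA
  from [evaluate].[testTable]
 order by columnB asc

-- Works: the ordering column now appears in the select list
select distinct columnA, columnB
  from [evaluate].[testTable]
 order by columnB asc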

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 30 (sys.dm_server_registry)

    - by Tamarick Hill
    The sys.dm_server_registry DMV provides SQL Server configuration and installation information that is currently stored in your Windows Registry. It is a very simple DMV that returns only three columns. The first column returned is the registry_key. The second is the value_name, which is the name of the actual registry key value. The third and final column is the value_data, which is the value of the registry key data. Let's have a look at the information this DMV returns, as well as some key values from the Windows Registry.

SELECT * FROM sys.dm_server_registry

The same values can be seen using RegEdit to view the registry. This DMV provides you with a quick and easy way to view SQL Server instance registry values. For more information about this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/hh204561.aspx Follow me on Twitter @PrimeTimeDBA
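As a quick, hedged example of filtering the three columns this DMV returns, the query below pulls the network port configuration straight from the registry without opening RegEdit. The value names 'TcpPort' and 'TcpDynamicPorts' are assumptions based on a typical TCP/IP-enabled instance and may differ on yours.

-- Value names assumed; adjust to your instance's network configuration
SELECT registry_key, value_name, value_data
FROM sys.dm_server_registry
WHERE value_name IN (N'TcpPort', N'TcpDynamicPorts');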

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited to pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on:

- Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
- Overall throughput of the SPARC T4-1 server using multiple threads

The tests consisted of reading large amounts of data --tens of gigabytes--, processing them and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

Multi-thread: Testing throughput on the Oracle T4-1
The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure:

customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
[…]

The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache, a.k.a. the L2ARC: once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

Single-thread: Testing the single-thread performance
Seven different tests were performed on both servers.
Given that only one thread, and thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

- Read File -> Filter -> Write File: Read a file, filter the data, write the filtered data to a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
- Read File -> Load Database Table: Read a file, insert into a single Oracle table.
- Average: Read a file, compute the average of a numeric column, write the result to a new file.
- Division & Square Root: Read a file, perform a division and square root on a numeric column, write the result data to a new file.
- Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
- Transform: Read a file, transform, write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
- Sort: Read a file, sort a numeric and an alphanumeric column, write the result data to a new file.

The following table and graph present the final results of the tests. The throughput unit is thousand lines processed per second (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.

Test | T4-1 (Time s.) | T5140 (Time s.) | Improvement | T4-1 (Throughput) | T5140 (Throughput)
Read/Filter/Write | 125 | 806 | 645% | 100 | 16
Read/Load Database | 195 | 1111 | 570% | 64 | 11
Average | 96 | 557 | 580% | 130 | 22
Division & Square Root | 161 | 1054 | 655% | 78 | 12
Oracle DB Dump | 164 | 945 | 576% | 76 | 13
Transform | 159 | 1124 | 707% | 79 | 11
Sort | 251 | 1336 | 532% | 50 | 9

The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion in an Oracle DB is faster, with more than 650,000 rows per second, without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article

  • Error while installing GNU Octave packages

    - by carllacan
    I want to install the GNU Octave optim package, but I keep receiving errors in the process. Apparently I need to install some other packages first, one of which is the general package. However, when I try to, I receive this error:

octave:17> pkg install general-1.3.2.tar.gz
make: /usr/bin/mkoctfile: Command not found
make: *** [__exit__.oct] Error 127
'make' returned the following error:
make: Entering directory `/tmp/oct-CGIPo9/general/src'
/usr/bin/mkoctfile __exit__.cc
make: Leaving directory `/tmp/oct-CGIPo9/general/src'
error: called from `pkg>configure_make' in file /usr/share/octave/3.6.1/m/pkg/pkg.m near line 1391, column 9
error: called from:
error: /usr/share/octave/3.6.1/m/pkg/pkg.m at line 834, column 5
error: /usr/share/octave/3.6.1/m/pkg/pkg.m at line 383, column 9

    Read the article

  • Display folder sizes in file manager

    - by wim
    In the nautilus (or nemo) file manager, the "Size" column shows the file size for files and the number of items contained in a folder for subdirectories. The number of items is not that important for me; it would be more useful if I could make this column show the total size contained under the directory. I had an extension on Windows called foldersize which shows what I mean. I think it involved a service which ran in the background monitoring filesystem modifications in order to make sure the column was kept up to date. I am interested to know if there is any similar extension for nautilus; I would also be open to switching to another file manager to get this functionality. I am aware of the Disk Usage Analyser in Ubuntu, but what I'm looking for is a solution with file manager integration.

    Read the article

  • vb.net and mysql connectivity [closed]

    - by kalpana
    I have used ADODB with an ODBC connection to connect VB.NET to MySQL, and I have fetched table values into a recordset. I want to fetch the values of only one column (for example, table name: login, column name: password, and the values in the password column are "manage", "sales", "general") and display them in text boxes. I have written code, but it's not working.

Dim conn As New ADODB.Connection
Dim res As New ADODB.Recordset
conn.Open("test", "root", "root")
res = conn.Execute("select password from login")
textbox1.text = res(0).value
textbox2.text = res(1).value
textbox3.text = res(2).value

I am getting data in textbox1, but the other values are not getting inserted into textbox2 and textbox3. I am getting the error: "Item cannot be found in the collection corresponding to the requested name or ordinal."

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >