Search Results

Search found 11644 results on 466 pages for 'column defaults'.


  • Should this variable be released or not? (iPhone SDK)

    - by psebos
    Hi, I have the following piece of code from a book. In the loadPrefs function, the NSString *userTimeZone is released before the end of the function. Why? The string was not created with alloc, and I assume that stringForKey: returns an autoreleased NSString. Am I missing something, or is this an error in the book? (I'm new to Objective-C.) The only thing the documentation for stringForKey: mentions is: Special Considerations: The returned string is immutable, even if the value you originally set was a mutable string. The code: - (void) loadPrefs { timeZoneName = DefaultTimeZonePref; NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults]; NSString *userTimeZone = [defaults stringForKey: TimeZonePrefKey]; if (userTimeZone != NULL) timeZoneName = userTimeZone; [userTimeZone release]; show24Hour = [defaults boolForKey:TwentyFourHourPrefKey]; } Thanks!
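    For reference, here is a minimal sketch (not from the book) of how the method would look without the extra release, assuming stringForKey: follows the usual Cocoa convention of returning an object the caller does not own:

        - (void)loadPrefs
        {
            timeZoneName = DefaultTimeZonePref;
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            NSString *userTimeZone = [defaults stringForKey:TimeZonePrefKey];
            if (userTimeZone != nil)
                timeZoneName = userTimeZone;   // no release: this code never took ownership of the string
            show24Hour = [defaults boolForKey:TwentyFourHourPrefKey];
        }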

    Read the article

  • text-align:left in the Google Android browser squeezes the text into a column around 20% of the page width; how can this be fixed?

    - by user981220
    Hello, I have a problem with text-align: left (or align: left). No matter what I put, the text gets stuck on the left side of the page in a narrow column. How can this be fixed? Example of the code: <div class="maintext02"><span><p>text</p></span></div> .maintext02 { font-family:Georgia, "Times New Roman", Times, serif; color:#545454; line-height:25px; font-size:12px; width:943px; padding-top:15px; padding-bottom:15px; text-align:left; }
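    One thing worth checking here (an assumption on my part, not stated in the question) is whether the page declares a mobile viewport and whether the fixed 943px width is forcing the Android browser to reflow the text into a narrow strip. A hedged sketch of that kind of fix:

        <!-- in the page head: let the browser use the device width instead of a scaled desktop layout -->
        <meta name="viewport" content="width=device-width, initial-scale=1">

        /* in the stylesheet: keep the desktop width but allow the block to shrink on narrow screens */
        .maintext02 {
            width: 943px;
            max-width: 100%;
            text-align: left;
        }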

    Read the article

  • Many-to-many delete in NHibernate with two parents sharing a common association

    - by Joshua Grippo
    I have 3 top level entities in my app: Circuit, Issue, Document Circuits can contain Documents and Issues can contain Documents. When I delete a Circuit, I want it to delete the documents associated with it, unless it is used by something else. I would like this same behavior with Issues. I have it working when the only association is in the same table in the db, but if it is in another table, then it fails due to foreign key constraints. ex 1(This will cascade properly, because there is only a foreign constraint from Circuit to Document) Document1 exists. Circuit1 exists and contains a reference to Document1. If I delete Circuit1 then it deletes Document1 with it. ex 2(This will cascade properly, because there is only a foreign constraint from Circuit to Document.) Document1 exists. Circuit1 exists and contains a reference to Document1. Circuit2 exists and contains a reference to Document1. If I delete Circuit1 then it is deleted, but Document1 is not deleted because Circuit2 exists. If I then delete Circuit2, then Document1 is deleted. ex 3(This will throw an error, because when it deletes the Circuit it sees that there are no other circuits that reference the document so it tries to delete the document. However it should not, because there is an Issue that has a foreign constraint to the document.) Document 1 exists. Circuit1 exists and contains a reference to Document1. Issue1 exists and contains a reference to Document1. If I delete Circuit1, then it fails, because it tries to delete Document1, but Issues1 still has a reference. DB: This think won't let upload an image, so here is the ERD to the DB: http://lh3.ggpht.com/_jZWhe7NXay8/TROJhOd7qlI/AAAAAAAAAGU/rkni3oEANvc/CircuitIssues.gif Model: public class Circuit { public virtual int CircuitID { get; set; } public virtual string CJON { get; set; } public virtual IList<Document> Documents { get; set; } } public class Issue { public virtual int IssueID { get; set; } public virtual string Summary { get; set; } public virtual IList<Model.Document> Documents { get; set; } } public class Document { public virtual int DocumentID { get; set; } public virtual string Data { get; set; } } Mapping Files: <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Circuit" table="Circuit"> <id name="CircuitID"> <column name="CircuitID" not-null="true"/> <generator class="identity" /> </id> <property name="CJON" column="CJON" type="string" not-null="true"/> <bag name="Documents" table="CircuitDocument" cascade="save-update,delete-orphan"> <key column="CircuitID"/> <many-to-many class="Document"> <column name="DocumentID" not-null="true"/> </many-to-many> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Issue" table="Issue"> <id name="IssueID"> <column name="IssueID" not-null="true"/> <generator class="identity" /> </id> <property name="Summary" column="Summary" type="string" not-null="true"/> <bag name="Documents" table="IssueDocument" cascade="save-update,delete-orphan"> <key column="IssueID"/> <many-to-many class="Document"> <column name="DocumentID" not-null="true"/> </many-to-many> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Document" table="Document"> <id name="DocumentID"> <column name="DocumentID" 
not-null="true"/> <generator class="identity" /> </id> <property name="Data" column="Data" type="string" not-null="true"/> </class> </hibernate-mapping> Code: using (ISession session = sessionFactory.OpenSession()) { var doc = new Model.Document() { Data = "Doc" }; var circuit = new Model.Circuit() { CJON = "circ" }; circuit.Documents = new List<Model.Document>(new Model.Document[] { doc }); var issue = new Model.Issue() { Summary = "iss" }; issue.Documents = new List<Model.Document>(new Model.Document[] { doc }); session.Save(circuit); session.Save(issue); session.Flush(); } using (ISession session = sessionFactory.OpenSession()) { foreach (var item in session.CreateCriteria<Model.Circuit>().List<Model.Circuit>()) { session.Delete(item); } //this flush fails, because there is a reference to a child document from issue session.Flush(); foreach (var item in session.CreateCriteria<Model.Issue>().List<Model.Issue>()) { session.Delete(item); } session.Flush(); }

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn’t smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code. When I created a database I’ve been using to experiment around with various different OR/Ms recently I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there’s a good reason why it fails, can you spot it below? (read on :-}) Here is the original database layout: There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model the *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix): The relationship looks perfectly fine, both in the designer as well as in the XML document: <Table Name="dbo.wws_Item_Categories" Member="ItemCategories"> <Type Name="ItemCategory"> <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" /> <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" /> </Type> </Table> <Table Name="dbo.wws_Categories" Member="Categories"> <Type Name="Category"> <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" /> <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" /> <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" /> <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" /> <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" /> <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" /> </Type> </Table> However when looking at the code generated these navigation properties (also on Item) are completely missing: [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class ItemCategory : Westwind.BusinessFramework.EntityBase { private System.Guid _ItemId; private System.Guid _CategoryId; public ItemCategory() { } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public System.Guid ItemId { get { return this._ItemId; } set { if ((this._ItemId != value)) { 
this._ItemId = value; } } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)] public System.Guid CategoryId { get { return this._CategoryId; } set { if ((this._CategoryId != value)) { this._CategoryId = value; } } } } Notice that the Item and Category association properties which should be EntityRef properties are completely missing. They’re there in the model, but the generated code – not so much. So what’s the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain’t it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a Pk to the database and then importing results in this model layout: which properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in a table in order to recognize a many to many relation. EF actually handles the M->M relation directly without the intermediate link entity unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId in the database which shows up in LINQ to SQL like this: This also work in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach as it also guarantees uniqueness of the keys and so helps in database integrity. It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be when they were not. It’s actually a well documented feature of L2S that each entity in the model requires a Pk but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issue with L2S of course – you have to play by its rules and once you hit one of those rules there’s no way around them – you’re stuck with what it requires which in this case meant changing the database.© Rick Strahl, West Wind Technologies, 2005-2010Posted in ADO.NET  LINQ  
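    As a rough illustration of the "small business object method" mentioned above for writing to the link table, here is a hedged C# sketch using the entity names from this model (the DataContext type name is assumed):

        // insert one many-to-many link by writing a row into the link entity directly
        public void AddItemToCategory(WebStoreDataContext context, Guid itemId, Guid categoryId)
        {
            var link = new ItemCategory { ItemId = itemId, CategoryId = categoryId };
            context.ItemCategories.InsertOnSubmit(link);
            context.SubmitChanges();
        }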

    Read the article

  • SQL University: Database testing and refactoring tools and examples

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Database testing and refactoring tools and examples. This is the third and last part of the series, and in it we’ll take a look at tools we can test and refactor with, plus an example of both.
    Tools of the trade
    First a few thoughts about how to go about testing a database. I'm firmly against any testing tools that go into the database itself or need an extra database. Unit tests for the database and applications using the database should all be in one place using the same technology. By using database-specific frameworks we fragment our tests into many places and increase test system complexity. Let’s take a look at some testing tools.
    1. NUnit, xUnit, MbUnit: All three are .Net testing frameworks meant to unit test .Net applications. But we can test databases with them just fine. I use NUnit because I’ve always used it for work and personal projects. One day this might change, so the thing to remember is to be flexible if something better comes along. All three are quite similar and you should be able to switch between them without much problem.
    2. TSQLUnit: As much as this framework is helpful for the non-C#-savvy folks, I don’t like it for the reason I stated above. It lives in the database and thus fragments the testing infrastructure. Also it appears that it’s not being actively developed anymore.
    3. DbFit: I haven’t had the pleasure of trying this tool just yet, but it’s on my to-do list. From what I’ve read and heard, Gojko Adzic (@gojkoadzic on Twitter) has done a remarkable job with it.
    4. Redgate SQL Refactor and Apex SQL Refactor: Neither of these refactoring tools is free; however, if you have hardcore refactoring planned they are worth looking into. I’ve only used Red Gate’s Refactor and was quite impressed with it.
    5. Reverting the database state: I’ve talked before about ways to revert a database to its pre-test state after unit testing. This still holds and I haven’t changed my mind. Also make sure to read the comments as they are quite informative. I especially like the idea of setting up and tearing down the schema for each test group with NHibernate.
    Testing and refactoring example
    We’ll take a look at a simple schema and data test for a view, and at refactoring the SELECT * in that view. We’ll use a single table PhoneNumbers with ID and Phone columns. Then we’ll refactor the Phone column into 3 columns: Prefix, Number and Suffix. Lastly we’ll remove the original Phone column. Then we’ll check how the view behaves with tests in NUnit. The comments in the code explain the problem, so be sure to read them. I’m assuming you know NUnit and C#.
    T-SQL code:

        USE tempdb
        GO
        CREATE TABLE PhoneNumbers
        (
            ID INT IDENTITY(1,1),
            Phone VARCHAR(20)
        )
        GO
        INSERT INTO PhoneNumbers(Phone)
        SELECT '111 222333 444' UNION ALL
        SELECT '555 666777 888'
        GO
        -- notice we don't have WITH SCHEMABINDING
        CREATE VIEW vPhoneNumbers
        AS
            SELECT * FROM PhoneNumbers
        GO
        -- Let's take a look at what the view returns
        -- If we add new columns and rows both tests will fail
        SELECT * FROM vPhoneNumbers
        GO
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will SUCCEED

        -- refactor to split Phone column into 3 parts
        ALTER TABLE PhoneNumbers ADD Prefix VARCHAR(3)
        ALTER TABLE PhoneNumbers ADD Number VARCHAR(6)
        ALTER TABLE PhoneNumbers ADD Suffix VARCHAR(3)
        GO
        -- update the new columns
        UPDATE PhoneNumbers
        SET    Prefix = LEFT(Phone, 3),
               Number = SUBSTRING(Phone, 5, 6),
               Suffix = RIGHT(Phone, 3)
        GO
        -- remove the old column
        ALTER TABLE PhoneNumbers DROP COLUMN Phone
        GO
        -- This returns unexpected results!
        -- It returns 2 columns ID and Phone even though
        -- we don't have a Phone column anymore.
        -- Notice that the data is from the Prefix column.
        -- This is a danger of SELECT *
        SELECT * FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will FAIL

        -- for a fix we have to call sp_refreshview
        -- to refresh the view definition
        EXEC sp_refreshview 'vPhoneNumbers'
        -- after the refresh the view returns 4 columns
        -- this breaks the input/output behavior of the database
        -- which refactoring MUST NOT do
        SELECT * FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will FAIL
        -- DoesViewReturnCorrectData test will FAIL

        -- to fix the input/output behavior change problem
        -- we have to concat the 3 columns into one named Phone
        ALTER VIEW vPhoneNumbers
        AS
        SELECT ID, Prefix + ' ' + Number + ' ' + Suffix AS Phone
        FROM   PhoneNumbers
        GO
        -- now it works as expected
        SELECT * FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will SUCCEED

        -- clean up
        DROP VIEW vPhoneNumbers
        DROP TABLE PhoneNumbers

    C# test code:

        [Test]
        public void DoesViewReturnCorrectColumns()
        {
            // conn is a valid SqlConnection to the server's tempdb
            // note the SET FMTONLY ON with which we return only schema and no data
            using (SqlCommand cmd = new SqlCommand("SET FMTONLY ON; SELECT * FROM vPhoneNumbers", conn))
            {
                DataTable dt = new DataTable();
                dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));

                // test returned schema: number of columns, column names and data types
                Assert.AreEqual(dt.Columns.Count, 2);
                Assert.AreEqual(dt.Columns[0].Caption, "ID");
                Assert.AreEqual(dt.Columns[0].DataType, typeof(int));
                Assert.AreEqual(dt.Columns[1].Caption, "Phone");
                Assert.AreEqual(dt.Columns[1].DataType, typeof(string));
            }
        }

        [Test]
        public void DoesViewReturnCorrectData()
        {
            // conn is a valid SqlConnection to the server's tempdb
            using (SqlCommand cmd = new SqlCommand("SELECT * FROM vPhoneNumbers", conn))
            {
                DataTable dt = new DataTable();
                dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));

                // test returned data: number of rows and their values
                Assert.AreEqual(dt.Rows.Count, 2);
                Assert.AreEqual(dt.Rows[0]["ID"], 1);
                Assert.AreEqual(dt.Rows[0]["Phone"], "111 222333 444");
                Assert.AreEqual(dt.Rows[1]["ID"], 2);
                Assert.AreEqual(dt.Rows[1]["Phone"], "555 666777 888");
            }
        }

    With this simple example we’ve seen how a very simple schema can cause a lot of problems in the whole application/database system if it doesn’t have tests. Imagine what would happen if some outside process depended on that view.
It would get wrong data and propagate it silently throughout the system. And that is not good. So have tests at least for the crucial parts of your systems. And with that we conclude the Database Testing and Refactoring week at SQL University. Hope you learned something new and enjoy the learning weeks to come. Have fun!

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Complete Series of Database Coding Standards and Guidelines SQL SERVER Database Coding Standards and Guidelines – Introduction SQL SERVER – Database Coding Standards and Guidelines – Part 1 SQL SERVER – Database Coding Standards and Guidelines – Part 2 SQL SERVER Database Coding Standards and Guidelines Complete List Download Explanation and Example – SELF JOIN When all of the data you require is contained within a single table, but data needed to extract is related to each other in the table itself. Examples of this type of data relate to Employee information, where the table may have both an Employee’s ID number for each record and also a field that displays the ID number of an Employee’s supervisor or manager. To retrieve the data tables are required to relate/join to itself. Insert Multiple Records Using One Insert Statement – Use of UNION ALL This is very interesting question I have received from new developer. How can I insert multiple values in table using only one insert? Now this is interesting question. When there are multiple records are to be inserted in the table following is the common way using T-SQL. Function to Display Current Week Date and Day – Weekly Calendar Straight blog post with script to find current week date and day based on the parameters passed in the function.  2008 In my beginning years, I have almost same confusion as many of the developer had in their earlier years. Here are two of the interesting question which I have attempted to answer in my early year. Even if you are experienced developer may be you will still like to read following two questions: Order Of Column In Index Order of Conditions in WHERE Clauses Example of DISTINCT in Aggregate Functions Have you ever used DISTINCT with the Aggregation Function? Here is a simple example about how users can do it. Create a Comma Delimited List Using SELECT Clause From Table Column Straight to script example where I explained how to do something easy and quickly. Compound Assignment Operators SQL SERVER 2008 has introduced new concept of Compound Assignment Operators. Compound Assignment Operators are available in many other programming languages for quite some time. Compound Assignment Operators is operator where variables are operated upon and assigned on the same line. PIVOT and UNPIVOT Table Examples Here is a very interesting question – the answer to the question can be YES or NO both. “If we PIVOT any table and UNPIVOT that table do we get our original table?” Read the blog post to get the explanation of the question above. 2009 What is Interim Table – Simple Definition of Interim Table The interim table is a table that is generated by joining two tables and not the final result table. In other words, when two tables are joined they create an interim table as resultset but the resultset is not final yet. It may be possible that more tables are about to join on the interim table, and more operations are still to be applied on that table (e.g. Order By, Having etc). Besides, it may be possible that there is no interim table; sometimes final table is what is generated when the query is run. 
2010 Stored Procedure and Transactions If Stored Procedure is transactional then, it should roll back complete transactions when it encounters any errors. Well, that does not happen in this case, which proves that Stored Procedure does not only provide just the transactional feature to a batch of T-SQL. Generate Database Script for SQL Azure When talking about SQL Azure the most common complaint I hear is that the script generated from stand-along SQL Server database is not compatible with SQL Azure. This was true for some time for sure but not any more. If you have SQL Server 2008 R2 installed you can follow the guideline below to generate a script which is compatible with SQL Azure. Convert IN to EXISTS – Performance Talk It is NOT necessary that every time when IN is replaced by EXISTS it gives better performance. However, in our case listed above it does for sure give better performance. You can read about this subject in the associated blog post. Subquery or Join – Various Options – SQL Server Engine Knows the Best Every single time whenever there is a performance tuning exercise, I hear the conversation from developer where some prefer subquery and some prefer join. In this two part blog post, I explain the same in the detail with examples. Part 1 | Part 2 Merge Operations – Insert, Update, Delete in Single Execution MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just update it, and similarly, when the data is unmatched, it is inserted. 2011 Puzzle – Statistics are not updated but are Created Once Here is the quick scenario about my setup. Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated – WHY? Question to You – When to use Function and When to use Stored Procedure Personally, I believe that they are both different things - they cannot be compared. I can say, it will be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably at many times and in real life (i.e., production environment). I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. 2012 In year 2012 I had two interesting series ran on the blog. If there is no fun in learning, the learning becomes a burden. For the same reason, I had decided to build a three part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Guess the Next Value – Puzzle 4 Simple Example to Configure Resource Governor – Introduction to Resource Governor Resource Governor is a feature which can manage SQL Server Workload and System Resource Consumption. We can limit the amount of CPU and memory consumption by limiting /governing /throttling on the SQL Server. If there are different workloads running on SQL Server and each of the workload needs different resources or when workloads are competing for resources with each other and affecting the performance of the whole server resource governor is a very important task. 
Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video  Retrieves unnecessary columns and increases network traffic When a new columns are added views needs to be refreshed manually Leads to usage of sub-optimal execution plan Uses clustered index in most of the cases instead of using optimal index It is difficult to debug SQL SERVER – Load Generator – Free Tool From CodePlex The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries again SQL Server using different login account and different application name. The interface of the tool is extremely easy to use and very intuitive as well. A Puzzle – Swap Value of Column Without Case Statement Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example if the value in the gender column is ‘male’ swap it with ‘female’ and if the value is ‘female’ swap it with ‘male’. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
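    As a quick aside, the compound assignment operators mentioned in the 2008 list above can be sketched in a couple of lines of T-SQL (this snippet is mine, not taken from the linked articles):

        DECLARE @total INT = 10;
        SET @total += 5;    -- shorthand for SET @total = @total + 5
        SET @total *= 2;    -- @total is now 30
        SELECT @total;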

    Read the article

  • Control-Break Style ADF Table - Comparing Values with Previous Row

    - by Steven Davelaar
    Sometimes you need to display data in an ADF Faces table in a control-break layout style, where rows should be "indented" when the break column has the same value as in the previous row. In the screen shot below, you see how the table breaks on both the RegionId column as well as the CountryId column. To implement this I didn't use fancy SQL statements. The table is based on a straightforward Locations ViewObject that is based on the Locations entity object and the Countries reference entity object, and the join query was automatically created by adding the reference EO. To get the indentation in the ADF Faces table, we simple use two rendered properties on the RegionId and CountryId outputText items:  <af:column sortProperty="RegionId" sortable="false"            headerText="#{bindings.LocationsView1.hints.RegionId.label}"            id="c5">   <af:outputText value="#{row.RegionId}" id="ot2"                  rendered="#{!CompareWithPreviousRowBean['RegionId']}">     <af:convertNumber groupingUsed="false"                       pattern="#{bindings.LocationsView1.hints.RegionId.format}"/>   </af:outputText> </af:column> <af:column sortProperty="CountryId" sortable="false"            headerText="#{bindings.LocationsView1.hints.CountryId.label}"            id="c1">   <af:outputText value="#{row.CountryId}" id="ot5"                  rendered="#{!CompareWithPreviousRowBean['CountryId']}"/> </af:column> The CompareWithPreviousRowBean managed bean is defined in request scope and is a generic bean that can be used for all the tables in your application that needs this layout style. As you can see the bean is a Map-style bean where we pass in the name of the attribute that should be compared with the previous row. The get method in the bean that is called returns boolean false when the attribute has the same value in the same row. Here is the code of the get method:  public Object get(Object key) {   String attrName = (String) key;   boolean isSame = false;   // get the currently processed row, using row expression #{row}   JUCtrlHierNodeBinding row = (JUCtrlHierNodeBinding) resolveExpression(getRowExpression());   JUCtrlHierBinding tableBinding = row.getHierBinding();   int rowRangeIndex = row.getViewObject().getRangeIndexOf(row.getRow());   Object currentAttrValue = row.getRow().getAttribute(attrName);   if (rowRangeIndex > 0)   {     Object previousAttrValue = tableBinding.getAttributeFromRow(rowRangeIndex - 1, attrName);     isSame = currentAttrValue != null && currentAttrValue.equals(previousAttrValue);   }   else if (tableBinding.getRangeStart() > 0)   {     // previous row is in previous range, we create separate rowset iterator,     // so we can change the range start without messing up the table rendering which uses     // the default rowset iterator     int absoluteIndexPreviousRow = tableBinding.getRangeStart() - 1;     RowSetIterator rsi = null;     try     {       rsi = tableBinding.getViewObject().getRowSet().createRowSetIterator(null);       rsi.setRangeStart(absoluteIndexPreviousRow);       Row previousRow = rsi.getRowAtRangeIndex(0);       Object previousAttrValue = previousRow.getAttribute(attrName);       isSame = currentAttrValue != null && currentAttrValue.equals(previousAttrValue);     }     finally     {       rsi.closeRowSetIterator();     }   }   return isSame; } The row expression defaults to #{row} but this can be changed through the rowExpression  managed property of the bean.  You can download the sample application here.

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data dictionary language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements. MySQL before the InnoDB Plugin Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time. Fast Index Creation in the InnoDB Plugin of MySQL 5.1 MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=0. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine. The ALTER TABLE interface in MySQL 5.6 The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods: handler::check_if_supported_inplace_alter Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed. handler::prepare_inplace_alter_table InnoDB uses this method to set up the data dictionary cache for upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash. 
handler::inplace_alter_table In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of table. Threads that modify the table will also write the changes to the log. 
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
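    For illustration, a hedged example of requesting the behaviour described above with the MySQL 5.6 syntax (table and index names are invented):

        -- build a secondary index online, allowing concurrent writes to the table
        ALTER TABLE orders ADD INDEX idx_customer (customer_id), ALGORITHM=INPLACE, LOCK=NONE;

        -- force the old table-copying behaviour instead
        ALTER TABLE orders ADD INDEX idx_customer (customer_id), ALGORITHM=COPY, LOCK=SHARED;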

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
    This blog explained some of the key features of JPA 2.1 earlier. Since then Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) will provide more details about this new feature in JPA 2.1. Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema depending upon the credentials and authorization of the user. This helps in prototyping of your application where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. This is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature will allow your JPA domain object model to be directly generated in a database. The generated schema may need to be tuned for the actual production environment. This use case is supported by allowing the schema generation to occur into DDL scripts which can then be further tuned by a DBA. The following set of properties, in persistence.xml or specified during EntityManagerFactory creation, controls the behaviour of schema generation.
    javax.persistence.schema-generation-action
        Purpose: Controls the action to be taken by the persistence provider.
        Values: "none", "create", "drop-and-create", "drop"
    javax.persistence.schema-generation-target
        Purpose: Controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both.
        Values: "database", "scripts", "database-and-scripts"
    javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target
        Purpose: Controls target locations for writing of scripts. Writers are pre-configured for the persistence provider. Need to be specified only if scripts are to be generated.
        Values: java.io.Writer (e.g. MyWriter.class) or URL strings
    javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source
        Purpose: Specifies locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider.
        Values: java.io.Reader (e.g. MyReader.class) or URL strings
    javax.persistence.sql-load-script-source
        Purpose: Specifies the location of a SQL bulk load script.
        Values: java.io.Reader (e.g. MyReader.class) or URL string
    javax.persistence.schema-generation-connection
        Purpose: JDBC connection to be used for schema generation.
    javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version
        Purpose: Needed if scripts are to be generated and there is no connection to the target database.
        Values: Those obtained from JDBC DatabaseMetaData.
    javax.persistence.create-database-schemas
        Purpose: Whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc.
        Values: "true", "false"
    Section 11.2 in the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However annotations may be used to override or customize the values. The following entity class: @Entity public class Employee {    @Id private int id;    private String name;     . . .     
@ManyToOne     private Department dept; } is generated in the database with the following attributes: Maps to EMPLOYEE table in default schema "id" field is mapped to ID column as primary key "name" is mapped to NAME column with a default VARCHAR(255). The length of this field can be easily tuned using @Column. @ManyToOne is mapped to DEPT_ID foreign key column. Can be customized using JOIN_COLUMN. In addition to these properties, couple of new annotations are added to JPA 2.1: @Index - An index for the primary key is generated by default in a database. This new annotation will allow to define additional indexes, over a single or multiple columns, for a better performance. This is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example: @Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})@Entity public class Employee {    . . .} The generated table will have a default index on the primary key. In addition, two new indexes are defined on the NAME column (default ascending) and the foreign key that maps to the department in descending order. @ForeignKey - It is used to define foreign key constraint or to otherwise override or disable the persistence provider's default foreign key definition. Can be specified as part of JoinColumn(s), MapKeyJoinColumn(s), PrimaryKeyJoinColumn(s). For example: @Entity public class Employee {    @Id private int id;    private String name;    @ManyToOne    @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))    private Manager manager;     . . . } In this entity, the employee's manager is mapped by MANAGER_ID column in the MANAGER table. The value of foreignKeyDefinition would be a database specific string. A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. JPA 2.1 Expert Group has released Early Draft 2 of the specification. Section 9.4 and 11.2 provide all details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.
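    To tie a few of these properties together, here is a hedged persistence.xml sketch (my own example, using the property names as listed in this early draft; names and values may change in the final specification):

        <persistence-unit name="samplePU">
          <properties>
            <!-- drop and recreate the schema at EntityManagerFactory creation,
                 writing DDL both to the database and to script files -->
            <property name="javax.persistence.schema-generation-action" value="drop-and-create"/>
            <property name="javax.persistence.schema-generation-target" value="database-and-scripts"/>
            <property name="javax.persistence.ddl-create-script-target" value="file:///tmp/create.sql"/>
            <property name="javax.persistence.ddl-drop-script-target" value="file:///tmp/drop.sql"/>
            <!-- optionally seed the schema with data after creation -->
            <property name="javax.persistence.sql-load-script-source" value="META-INF/load.sql"/>
          </properties>
        </persistence-unit>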

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox.  It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow.  From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU, memory by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. Property Type Description AccessMode Enumeration This property determines how the Filename property is interpreted. The values available are: DirectInput Variable Filename String This property holds the path for trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately API methods that expose the schema of a trace file have known issues and are unreliable, put simply the data often differs from what was specified. To overcome these limitations the component uses  some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml  - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml    - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. 
    You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.
    Downloads
    Trace Sources for SQL Server 2005
    Trace Sources for SQL Server 2008
    Version History
    SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)
    Screenshots

    Read the article

  • Bootstrap responsive CSS [migrated]

    - by savolai
    I have a four-column design and I am using Bootstrap. The design renders fine as a single column on mobile devices, but at "(min-width: 768px) and (max-width: 979px)" I get four columns even though there is room for only two. So clearly the rows/spans setup would need to be rethought for those sizes. The only way I can imagine doing this is to use semantic CSS classes in the HTML and include the grid classes only in the CSS via LESS, and then, depending on screen size, mix in different grid classes to achieve a four- or two-column layout. I'm not sure this would work either, though. Is this the way to go, or am I overcomplicating it? Thanks! Also at: https://groups.google.com/forum/#!topic/twitter-bootstrap/R5jEp0oQ_-E
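    A hedged sketch of the media-query idea described above, assuming Bootstrap 2's default .span* grid classes and an invented wrapper class: in the 768-979px range the four spans are forced to roughly half the row width so they wrap into two rows of two.

        /* assumption: the four columns are .span3 elements inside a .four-col-row container */
        @media (min-width: 768px) and (max-width: 979px) {
          .four-col-row .span3 {
            width: 48%;
            margin-left: 2%;
          }
        }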

    Read the article

  • Oracle remains the database market leader in 2011

    - by Anne Manke
    With the publication of the latest edition of "Market Share: All Software Markets, Worldwide 2011", Gartner, the world's leading market research firm, confirms Oracle's market leadership in relational database management systems (RDBMS). Over the past year Oracle even extended its lead over its competitors in the RDBMS market, with stable growth of 18%: its market share rose from 48.2% in 2010 to 48.8% in 2011. That puts the gap to Oracle's closest rival, IBM, at 28.6 percentage points.

                    Revenue 2010 ($USM)  Revenue 2011 ($USM)  Growth 2010  Growth 2011  Share 2010  Share 2011
    Oracle          9,990.5              11,787.0             10.9%        18.0%        48.2%       48.8%
    IBM             4,300.4              4,870.4              5.4%         13.3%        20.7%       20.2%
    Microsoft       3,641.2              4,098.9              10.1%        12.6%        17.6%       17.0%
    SAP/Sybase      744.4                1,101.1              12.8%        47.9%        3.6%        4.6%
    Teradata        754.7                882.3                16.9%        16.9%        3.6%        3.7%

    Source: Gartner's "Market Share: All Software Markets, Worldwide 2011," March 29, 2012, by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 30 (sys.dm_server_registry)

    - by Tamarick Hill
    The sys.dm_server_registry DMV is used to provide SQL Server configuration and installation information that is currently stored in your Windows Registry. It is a very simple DMV that returns only three columns. The first column returned is the registry_key. The second column returned is the value_name, which is the name of the actual registry key value. The third and final column returned is the value_data, which is the value of the registry key data. Let's have a look at the information this DMV returns, as well as some key values from the Windows Registry: SELECT * FROM sys.dm_server_registry The same values can be confirmed with RegEdit. This DMV provides you with a quick and easy way to view SQL Server instance registry values. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/hh204561.aspx Follow me on Twitter @PrimeTimeDBA
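    A minimal sketch of how the DMV might be filtered in practice (the LIKE pattern below is an illustrative assumption, not something from the original post):
    -- Sketch: list only the registry entries that describe the instance's network configuration.
    -- Adjust the filter pattern to the keys you are interested in.
    SELECT registry_key,
           value_name,
           value_data
    FROM   sys.dm_server_registry
    WHERE  registry_key LIKE N'%SuperSocketNetLib%';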

    Read the article

  • How do I Fix SQL Server error: Order by items must appear in the select list if Select distinct is specified

    - by Paula DiTallo 2007-2009 All Rights Reserved
    There's more than one reason why you may receive this error, but the most common reason is that your ORDER BY column list doesn't correlate with the values specified in your SELECT column list when you happen to be using DISTINCT. This is usually easy to spot and resolve. A more obscure reason may be that you are using a function around one of the selected columns but omitting the same function around that column name in the ORDER BY clause. Here's an example:
    select distinct upper(columnA)
    from [evaluate].[testTable]
    order by columnA asc
    This statement will cause the "Order by items must appear in the select list if SELECT DISTINCT is specified." error to appear not because DISTINCT was used, but because the ORDER BY clause did not utilize the upper() function around columnA. To correct this error, do this:
    select distinct upper(columnA)
    from [evaluate].[testTable]
    order by upper(columnA) asc
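    A variant that avoids repeating the expression (a sketch using the same hypothetical table and column names as the example above): alias the expression in the select list and order by the alias.
    select distinct upper(columnA) as columnA_upper  -- the alias satisfies the DISTINCT rule
    from [evaluate].[testTable]
    order by columnA_upper asc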

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on:
    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads
    The tests consisted of reading large amounts of data --tens of gigabytes--, processing it and writing it back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations. Multi-thread: Testing throughput on the Oracle T4-1. The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1 GB file containing 12.5M lines with the following structure:
    customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
    1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
    2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
    3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
    4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
    […]
    The results of these tests: unsurprisingly, up to 16 threads all files fit in the ZFS cache, a.k.a. L2ARC: once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads. Single-thread: Testing the single-thread performance. Seven different tests were performed on both servers.
    Given that only one thread, and thus one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache.
    Read File -> Filter -> Write File: Read file, filter data, write the filtered data to a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
    Read File -> Load Database Table: Read file, insert into a single Oracle table.
    Average: Read file, compute the average of a numeric column, write the result to a new file.
    Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data to a new file.
    Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
    Transform: Read file, transform, write the result data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    Sort: Read file, sort a numeric and an alphanumeric column, write the result data to a new file.
    The following table presents the final results of the tests. Throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.
    Test                    T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
    Read/Filter/Write       125             806              645%         100                16
    Read/Load Database      195             1111             570%         64                 11
    Average                 96              557              580%         130                22
    Division & Square Root  161             1054             655%         78                 12
    Oracle DB Dump          164             945              576%         76                 13
    Transform               159             1124             707%         79                 11
    Sort                    251             1336             532%         50                 9
    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article

  • Error while installing GNU Octave packages

    - by carllacan
    I want to install the GNU Octave optim package, but I keep receiving errors in the process. Apparently I need to install some other packages first, one of which is the general package. However, when I try to, I receive this error:
    octave:17> pkg install general-1.3.2.tar.gz
    make: /usr/bin/mkoctfile: Command not found
    make: *** [__exit__.oct] Error 127
    'make' returned the following error:
    make: Entering directory `/tmp/oct-CGIPo9/general/src'
    /usr/bin/mkoctfile __exit__.cc
    make: Leaving directory `/tmp/oct-CGIPo9/general/src'
    error: called from `pkg>configure_make' in file /usr/share/octave/3.6.1/m/pkg/pkg.m near line 1391, column 9
    error: called from:
    error: /usr/share/octave/3.6.1/m/pkg/pkg.m at line 834, column 5
    error: /usr/share/octave/3.6.1/m/pkg/pkg.m at line 383, column 9

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read SQL Server 2005 and 2008 Profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file.
    Example usages:
    - Cache warming for SQL Server Analysis Services
    - Reading the flight recorder
    - Finding the longest running queries on a server
    - Analyzing statements for CPU, memory, by user or some other criteria you choose
    Properties. The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio.
    - AccessMode (Enumeration): Determines how the Filename property is interpreted. The values available are DirectInput and Variable.
    - Filename (String): Holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.
    Trace Column Definition. Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file this may explain them and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately, the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string.
    TraceDataColumnsSQL.xml - SQL Server Database Engine Trace Columns
    TraceDataColumnsAS.xml - SQL Server Analysis Services Trace Columns
    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g.
    "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml"
    "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml"
    If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below:
    Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".
    Installation. The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations.
    You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.
    Downloads: Trace Sources for SQL Server 2005 | Trace Sources for SQL Server 2008
    Version history:
    SQL Server 2008 - Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    SQL Server 2005 - Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)
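    As a point of comparison, and separate from the component itself, a .trc file can also be read directly in T-SQL via the built-in sys.fn_trace_gettable function. A minimal sketch (the file path below is a placeholder, not a path from the original post):
    -- Sketch: load a Profiler trace file and look for the longest running statements.
    SELECT TOP (20)
           TextData,
           Duration,
           CPU,
           Reads,
           Writes
    FROM   sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
    WHERE  Duration IS NOT NULL
    ORDER  BY Duration DESC;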

    Read the article

  • Display folder sizes in file manager

    - by wim
    In the nautilus (or nemo) file manager, the "Size" column shows the file size for files and, for subdirectories, the number of items contained in the folder. The number of items is not that important for me; it would be more useful if I could make this column show the total size contained under the directory. I had an extension on Windows called FolderSize which did what I mean: I think it involved a service which ran in the background monitoring filesystem modifications in order to make sure the column was kept up to date. I am interested to know if there is any similar extension for nautilus; I would also be open to switching to another file manager to get this functionality. I am aware of the Disk Usage Analyser in Ubuntu, but what I'm looking for is a solution with file manager integration.

    Read the article

  • Ubuntu confuses my partitions

    - by Diego
    I have 3 relevant partitions split between 2 disks:
    sda2: Windows 1 partition
    sda3: Ubuntu partition
    sdb1: Data partition
    I was using pysdm to add a label to my partitions and somehow I seem to have screwed up my installation. Now, every time I access the Data partition mounted at /media/Data I see the files in my Windows partition, and vice versa. I've tried unmounting and remounting correctly to no avail; it seems that wherever I mount sda2, if I access that folder I get the files in sdb1, and vice versa. Anyone know what may have happened and how to solve this? Update: This is the result of blkid:
    /dev/sda1: LABEL="System Reserved" UUID="C62603F02603E073" TYPE="ntfs"
    /dev/sda2: LABEL="Windows" UUID="00A6D498A6D49010" TYPE="ntfs"
    /dev/sda5: UUID="033cac3b-6f77-4f09-a629-495dc866866a" TYPE="ext4"
    /dev/sdb1: LABEL="Data" UUID="BCD83AE3D83A9B98" TYPE="ntfs"
    These are the contents of my fstab file:
    UUID=033cac3b-6f77-4f09-a629-495dc866866a / ext4 errors=remount-ro,user_xattr 0 1
    /dev/sda1 /media/Boot_old ntfs defaults 0 0
    /dev/sda2 /media/Windows ntfs defaults 0 0
    /dev/sdb1 /media/Data ntfs nls=iso8859-1,ro,users,umask=000 0 0

    Read the article

  • vb.net and mysql connectivity [closed]

    - by kalpana
    I have used ADODB with ODBC database connectivity for connecting VB.NET to MySQL. I have fetched table values into a recordset. I want to fetch the values of only one column (for example, table name login, column name password, and the values in the password column are "manage", "sales", "general"). I want to show these values in text boxes. I have written code but it's not working.
    Dim conn As New ADODB.Connection
    Dim res As New ADODB.Recordset
    conn.Open("test", "root", "root")
    res = conn.Execute("select password from login")
    textbox1.text = res(0).value
    textbox2.text = res(1).value
    textbox3.text = res(2).value
    I am getting data in textbox1, but the other data is not getting inserted into textbox2 and textbox3. I am getting the error: "Item cannot be found in the collection corresponding to the requested name or ordinal."

    Read the article

  • To Bit or Not To Bit

    - by Johnm
    'Twas a long day of troubleshooting and firefighting and now, with most of the office vacant, you face a blank scripting window to create a new table in your database. Many questions circle your mind like dirty water gurgling down the bathtub drain: "How normalized should this table be?", "Should I use an identity column?", "NVarchar or Varchar?", "Should this column be NULLABLE?", "I wonder what apple blue cheese bacon cheesecake tastes like?" Well, there are times when the mind goes its own direction. A Bit About Bit. At some point during your table creation efforts you will encounter the decision of whether to use the bit data type for a column. The bit data type is an integer data type that recognizes only the values of 1, 0 and NULL as valid. This data type is often utilized to store yes/no or true/false values. An example of its use would be a column called [IsGasoline] which would be intended to contain the value of 1 if the row's subject (a car) had a gasoline engine and a 0 if the subject did not have a gasoline engine. The bit data type can even be found in some of the system tables of SQL Server. For example, the sysssispackages table in the msdb database, which contains SQL Server Integration Services package information for the packages stored in SQL Server, contains a column called [IsEncrypted]. A value of 1 indicates that the package has been encrypted while a value of 0 indicates that it is not. I have learned that the most effective way to disperse the crowd that surrounds the office coffee machine is to engage in SQL Server debates. The bit data type has been one of the most recurring, as well as the most enjoyable, of these topics. It contains a practical side and a philosophical side. Practical Consideration. This data type certainly has its place and is a valuable option for database design; but it is often used in situations where the answer is really not a pure true/false response. In addition, true/false values are not very informative or scalable. Let's use the previously noted [IsGasoline] column for illustration. On the surface it appears to be a rather simple question when evaluating a car: "Does the car have a gasoline engine?" If the person entering data is entering a row for a Jeep Liberty, the response would be a 1 since it has a gasoline engine. If the person entering data is entering a row for a Chevrolet Volt, the response would be a 0 since it has an electric engine. What happens when a person is entering a row for the gasoline/electric hybrid Toyota Prius? Would one person's conclusion be consistent with another person's conclusion? The argument could be made that the current intent for the database is to be used only for pure gasoline and pure electric engines; but this is where the scalability issue comes into play. With the use of a bit data type, a database modification and data conversion would be required if the business decided to take on hybrid engines. Whereas, alternatively, if the int data type were used as a foreign key to a reference table containing the engine type options, the change to include the hybrid option would only require an entry in the reference table. Philosophical Consideration. Since the bit data type is often used for true/false or yes/no data (also called Boolean) it presents a philosophical conundrum of what to do about the allowance of the NULL value.
    The inclusion of NULL in a true/false or yes/no response simply violates the logical principle of bivalence, which states that "every proposition is either true or false". If NULL is not true, then it must be false. The mathematical laws of Boolean logic support this concept by stating that the only valid values in this scenario are 1 and 0. There is another way to look at this conundrum: NULL is also considered to be the absence of a response. In other words, it is equivalent to "undecided". Anyone who watches the news can tell you that polls always include an "undecided" option. This could be considered a valid option in the world of yes/no/dunno. Throughout all of these considerations I have discovered one absolute certainty: When you have found a person, or group of persons, who are willing to entertain a philosophical debate of the bit data type, you have found some true friends.
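    A short T-SQL sketch of the two designs discussed above (the table and column names extend the article's [IsGasoline] example and are otherwise assumptions):
    -- Design 1 (bit): fine while the answer really is binary.
    CREATE TABLE dbo.Car_BitDesign
    (
        CarID      INT IDENTITY(1,1) PRIMARY KEY,
        ModelName  NVARCHAR(100) NOT NULL,
        IsGasoline BIT NOT NULL          -- no room for a hybrid option later
    );
    -- Design 2 (reference table): adding "Hybrid" is just one more row, no schema change.
    CREATE TABLE dbo.EngineType
    (
        EngineTypeID INT IDENTITY(1,1) PRIMARY KEY,
        Description  NVARCHAR(50) NOT NULL
    );
    CREATE TABLE dbo.Car_ReferenceDesign
    (
        CarID        INT IDENTITY(1,1) PRIMARY KEY,
        ModelName    NVARCHAR(100) NOT NULL,
        EngineTypeID INT NOT NULL
            REFERENCES dbo.EngineType (EngineTypeID)
    );
    INSERT INTO dbo.EngineType (Description)
    VALUES (N'Gasoline'), (N'Electric'), (N'Hybrid');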

    Read the article

  • What are the default mount settings for mount / fstab?

    - by John Craick
    What are the default mounting options for a non root partition ? The man entry for mount says ... defaults - use default options: rw, suid, dev, exec, auto, nouser, and async. ... so that might be what we expect to see. But, unless I'm missing something, that's not what happens. I have an ext3 partition labelled "NewHome20G" which is seen as /dev/sdc6 by the system. This we can see from ... root@john-pc1204:~# blkid | grep NewHome20G /dev/sdc6: LABEL="NewHome20G" UUID="d024bad5-906c-46c0-b7d4-812daf2c9628" TYPE="ext3" I have an entry in fstab as follows ... root@john-pc1204:~# cat /etc/fstab | grep NewHome LABEL=NewHome20G /media/NewHome20G ext3 rw,nosuid,nodev,exec,users 0 2 Note the option settings that are specified in that fstab line. Now I look at how the partition is actually mounted after boot up ... root@john-pc1204:~# mount -l | grep sdc6 /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G] ... so, when the filesystem gets mounted the exec & users options I specified seem to have been ignored. Just to be sure, I unmount sdc6, remount it and look at the mount options again ... root@john-pc1204:~# umount /dev/sdc6 root@john-pc1204:~# mount /dev/sdc6 root@john-pc1204:~# mount -l | grep sdc6 /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G] .... same result Now I unmount the partition again, remount it specifying the exec option and look at the result ... root@john-pc1204:~# umount /dev/sdc6 root@john-pc1204:~# mount /dev/sdc6 -o exec root@john-pc1204:~# mount -l | grep sdc6 /dev/sdc6 on /media/NewHome20G type ext3 (rw,nosuid,nodev) [NewHome20G] ... and here the exec option has finally taken effect and the noexec setting has vanished. Just for interest, I re-mount the partition with the defaults option root@john-pc1204:~# umount /dev/sdc6 root@john-pc1204:~# mount /dev/sdc6 -o defaults root@john-pc1204:~# mount -l | grep sdc6 /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G] The noexec is back, so it looks very like rw,noexec,nosuid,nodev are the default options which is NOT what man says. Why does this matter ? I have a folder full of useful scripts stored on a data disk. Because that disk is mounted noexec those scripts won't run, even though they have all been set with chmod 777. I can work round this in several ways but it's disappointing that the man entry seems to be wrong. Have I missed something obvious here or have the default options in Ubuntu changed from what they were a few versions ago ?

    Read the article

  • Ubuntu took away permissions from my Data partition

    - by RobinJ
    The pangolin has struck again. The bug of the day for today is Ubuntu taking away my permissions on my Data partition (NTFS). One moment everything worked fine, the next moment I couldn't chmod anything anymore. chown throws no errors or warnings at all, but nothing has changed either. chmod keeps saying Operation not permitted. I've been messing around with /etc/fstab as suggested by other answers on AskUbuntu, but none of them seem to have the desired effect. This is my current line: UUID=25D7D681409A96B7 /media/Data ntfs defaults,umask=000,gid=46,permissions,users,auto,exec 0 0 For reference, this is the original one: UUID=25D7D681409A96B7 /media/Data ntfs defaults,umask=007,gid=46 0 0 (right after the problem started occuring) What do I need to do so I am the owner of my own hard drive again? I want to be able to just use chmod and chown (without sudo) without being told that some mysterious alien has taken over control of my Data partition. I can still read and write, but execution permissions seem to be the problem.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #004

    - by pinaldave
    Here is the list of curated articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.
    2006 - Auto Generate Script to Delete Deprecated Fields in Current Database
    Early in my career, every time I had to drop a column, I had a hard time doing it because I was scared that the column might be needed somewhere in the code. Due to this fear I never dropped any column; I just renamed the column. If the column which I renamed was needed afterwards, it was very easy to rename it back again. However, it is not recommended to keep the deleted column renamed in the database. At regular intervals I used to drop the columns which were prefixed with a specific word. This script is 6 years old but still works. Give it a look; I am open to improvements.
    2007 - Shrinking Truncate Log File – Log Full – Part 2
    Shrinking a database or mdf file is indeed a bad thing and it creates lots of problems. However, once in a while there is a legitimate requirement to shrink the log file – a very rare one. On those rare occasions, shrinking or truncating the log file may be the only solution. However, one should make sure to take a backup before and after the truncate or shrink, as in case of a disaster they can be very useful. Remember that truncating the log file will break the log chain and can create a major issue during restore. Anyway, use this feature with caution.
    2008 - Simple Use of Cursor to Print All Stored Procedures of Database Including Schema
    This is a very interesting requirement I used to face in my early career days: I needed to print all the stored procedures of my database. Interestingly enough, I had written a cursor to do so. Today when I look back at this stored procedure, I believe there is a much cleaner way to do the same task; however, I still use this SP quite often when I have to document all the stored procedures of my database.
    Interesting Observation about Order of Resultset without ORDER BY
    In the industry, many developers avoid using the ORDER BY clause to display the result in a particular order, thinking that an index enforces the order. In this interesting example, I demonstrate that without using ORDER BY, the same table and a similar query can return different results. The query optimizer always returns results using whichever method is optimized for performance. The lesson: there is no order unless ORDER BY is used.
    2009 - Size of Index Table – A Puzzle to Find Index Size for Each Index on Table
    I asked this puzzle earlier, where I asked how to find the index size for each of the tables. The puzzle was very well received and lots of interesting answers were received. To answer this question I have written the following blog posts. I suggest this weekend you try to solve this problem and see if you can come up with a better solution. If not, well, here are the solutions. Solution 1 | Solution 2 | Solution 3
    Understanding Table Hints with Examples
    Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query. The SQL Server query optimizer is a very smart tool and it makes a better selection of execution plan.
    Suggesting hints to the query optimizer should be attempted only when absolutely necessary and by experienced developers who know exactly what they are doing (or in development, as a way to experiment and learn).
    Interesting Observation – TOP 100 PERCENT and ORDER BY
    I have seen developers and DBAs using TOP very casually when they have to use the ORDER BY clause. Theoretically, there is no need for ORDER BY in the view at all. All the ordering should be done outside the view, and the view should just have the SELECT statement in it. It was quite common to save this extra typing by including the ordering inside the view. In several instances developers want a complete resultset, so they include TOP 100 PERCENT along with ORDER BY, assuming that this will simulate the SELECT statement with ORDER BY.
    2010 - SQLPASS Nov 8-11, 2010, Seattle – An Alternative Look at Experience
    In 2010 I attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11, 2010 in Seattle. I have only one expression for the event - Best Summit Ever. Instead of writing about my usual routine or the event, I wrote about the interesting things I did and how I felt about them! When I go back and read it, I feel that this was the best event I attended in 2010.
    Change Database Access to Single User Mode Using SSMS
    The image says it all.
    2011 - SQL Server 2012 has introduced new analytic functions. These functions were long awaited and I am glad that they are now here. Before, when any of these functions was needed, people used to write long T-SQL code to simulate them. But now there's no need to do so. Having the native functions available also helps performance as well as readability. The functions, each covered both on SQLAuthority and in MSDN, are: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
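    A quick sketch of two of the analytic functions mentioned above (the table and column names are hypothetical, chosen only to illustrate the syntax):
    -- LAG/LEAD return the previous/next row's value over the given ordering, without a self-join.
    SELECT OrderID,
           OrderDate,
           OrderTotal,
           LAG(OrderTotal)  OVER (ORDER BY OrderDate) AS PreviousOrderTotal,
           LEAD(OrderTotal) OVER (ORDER BY OrderDate) AS NextOrderTotal
    FROM   dbo.Orders;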

    Read the article

  • C# Tip - Rendering HTML in a Gridview cell

    - by BobPalmer
    Just a quick tip for working with the gridview in ASP.Net.  If your data column contains HTML text, you've probably seen something like this in your gridview after pulling the data: <font color="red">First Item</font><br/><font color="green">Second Item</font><br/><font color="blue">Third Item</font> To have the relevant column render in HTML, just go to your gridview property pages, find the column you need rendered in HTML, and click 'convert this Field into a TemplateField'.  The result is that as a template field, HTML within your bound data value will be rendered properly.  So our example above would transform into: First Item Second Item Third Item I primarily use this technique for enabling HTML content in comment fields, and to insert line breaks when building the data for these fields. Hope this helps out! -Bob

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >