Search Results

Search found 191 results on 8 pages for 'dbms'.


  • How do I do nested transactions in NHibernate?

    - by Gavin Schultz-Ohkubo
    Can I do nested transactions in NHibernate, and how do I implement them? I'm using SQL Server 2008, so support is definitely in the DBMS. I find that if I try something like this:

        using (var outerTX = UnitOfWork.Current.BeginTransaction())
        {
            using (var nestedTX = UnitOfWork.Current.BeginTransaction())
            {
                // ... do stuff
                nestedTX.Commit();
            }
            outerTX.Commit();
        }

    then by the time it comes to outerTX.Commit() the transaction has become inactive, and results in an ObjectDisposedException on the session AdoTransaction. Are we therefore supposed to create nested NHibernate sessions instead? Or is there some other class we should use to wrap around the transactions (I've heard of TransactionScope, but I'm not sure what that is)? I'm now using Ayende's UnitOfWork implementation (thanks Sneal). Forgive any naivety in this question, I'm still new to NHibernate. Thanks!

    EDIT: I've discovered that you can use TransactionScope, such as:

        using (var transactionScope = new TransactionScope())
        {
            using (var tx = UnitOfWork.Current.BeginTransaction())
            {
                // ... do stuff
                tx.Commit();
            }
            using (var tx = UnitOfWork.Current.BeginTransaction())
            {
                // ... do stuff
                tx.Commit();
            }
            transactionScope.Complete(); // TransactionScope uses Complete(), not Commit()
        }

    However I'm not all that excited about this, as it locks us in to using SQL Server, and also I've found that if the database is remote then you have to worry about having MSDTC enabled... one more component to go wrong. Nested transactions are so useful and easy to do in SQL that I kind of assumed NHibernate would have some way of emulating the same...
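
    As an aside on what the DBMS itself offers: SQL Server's "nested" transactions are really savepoints, which is what any wrapper layer ends up emulating. A minimal T-SQL sketch (the savepoint name is arbitrary):

        BEGIN TRANSACTION;
        SAVE TRANSACTION inner_work;        -- start of the "nested" scope
        -- ... do stuff ...
        -- undo only the inner scope if needed:
        -- ROLLBACK TRANSACTION inner_work;
        COMMIT TRANSACTION;                 -- one real commit for the whole unit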


  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches:

      1. All data is stored in one database, giving me the advantage of a single point of updates.
      2. Separated databases, so each client has its own database, giving me the advantage of being able to measure bandwidth.

    Option 1 has the disadvantage when it comes to measuring bandwidth, while option 2 loses the single-point-of-update structure. Are there any generic approaches for creating a sort of update system? So my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database?) Do you guys have some thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen


  • Add new types to Go

    - by nevalu
    I'm trying to add new types that can be managed/used like the Go core types. Creating new types is very useful for validating data before sending it to a non-SQL DBMS, or for checking data from a form. Go uses universal constants to define them at the global level:

        var DateType = universe.DefineType("date", universePos, &dateType{})

    In this case they're defined to be called from a package like types:

        var Date = &dateType{}

    I get these errors:

        test.go:58: o.lit undefined (cannot refer to unexported field lit)
        test.go:62: *dateType is not Type
                missing Pos() token.Position

    The code is based on:

        http://github.com/tav/go/blob/master/src/pkg/exp/eval/value.go
        http://github.com/tav/go/blob/master/src/pkg/exp/eval/type.go

        package main

        import (
            "exp/eval"
            "fmt"
            // "go/token"
        )

        // http://github.com/tav/go/blob/master/src/pkg/exp/eval/value.go

        type DateValue interface {
            eval.Value
            Get(*eval.Thread) string
            Set(*eval.Thread, string)
        }

        /* Date */

        type dateV string

        func (v *dateV) String() string { return fmt.Sprint(*v) }

        func (v *dateV) Assign(t *eval.Thread, o eval.Value) {
            *v = dateV(o.(DateValue).Get(t))
        }

        func (v *dateV) Get(*eval.Thread) string { return string(*v) }

        func (v *dateV) Set(t *eval.Thread, x string) { *v = dateV(x) }

        // http://github.com/tav/go/blob/master/src/pkg/exp/eval/type.go

        type Type interface {
            eval.Type
            // isDate returns true if this is a date type.
            isDate() bool
        }

        /* Common type */

        type commonType struct{} // added

        func (commonType) isDate() bool { return false }

        /* Date */

        type dateType struct {
            commonType
        }

        // * It should not be a universal constant
        //var universePos = token.Position{"<universe>", 0, 0, 0} // added
        //var DateType = universe.DefineType("date", universePos, &dateType{})

        var Date = &dateType{}

        func (t *dateType) compat(o Type, conv bool) bool {
            t2, ok := o.lit().(*dateType)
            return ok && t == t2
        }

        func (t *dateType) lit() Type { return t }

        func (t *dateType) isDate() bool { return true }

        func (t *dateType) String() string { return "<date>" }

        func (t *dateType) Zero() eval.Value {
            res := dateV("")
            return &res
        }

        /* Named types */
        /*
        type NamedType struct {
            eval.NamedType
            Def Type
        }*/

        type NamedType struct { // added
            // token.Position
            Name string
            // Underlying type. If incomplete is true, this will be nil.
            // If incomplete is false and this is still nil, then this is
            // a placeholder type representing an error.
            Def Type
            // True while this type is being defined.
            incomplete bool
            methods    map[string]eval.Method
        }

        func (t *NamedType) isDate() bool { return t.Def.isDate() }

        /* *********************** */

        func main() {
            print("foo")
        }


  • Cache Class compilation error using parent-child relationships and cache sql storage

    - by Fred Altman
    I have the global listed below, from which I'm trying to create a couple of Cache classes using SQL storage:

        ^WHEAIPP(1,26,1)=2
        ^WHEAIPP(1,26,1,1)="58074^^SMSNARE^58311"
                        2)="58074^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,29,1)=2
        ^WHEAIPP(1,29,1,1)="58074^^SMSNARE^58311"
                        2)="58074^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,93,1)=2
        ^WHEAIPP(1,93,1,1)="58884^^SSNARE^58948"
                        2)="58884^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,166,1)=2
        ^WHEAIPP(1,166,1,1)="58407^^SMSNARE^58420"
                         2)="58407^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,324,1)=2
        ^WHEAIPP(1,324,1,1)="58884^^SSNARE^58948"
                         2)="58884^59128^MPHILLIPS^59135"
        ^WHEAIPP(1,419,1)=3
        ^WHEAIPP(1,419,1,1)="59707^^SSNARE^59708"
                         2)="59707^^MPHILLIPS^59910,58000^^^^"
                         3)="59707^59981^SSNARE^60117,53241^^^^"

    The first two subscripts of the global (Hmo and Keen) make a unique entry. The third subscript (Seq) has a property (IppLineCount) which is the number of IppLines in the fourth subscript level (Seq2). I created the class WIppProv below, which is the parent class:

        /// <PRE>
        /// ============================
        /// Generated Class Definition
        /// Table: WMCA_B_IPP_PROV
        /// Generated by: FXALTMAN
        /// Generated on: 05/21/2012 13:46:41
        /// Generator: XWESTblClsGenV2
        /// ----------------------------
        /// </PRE>
        Class XFXA.MCA.WIppProv Extends (%Persistent, %XML.Adaptor)
            [ ClassType = persistent, Inheritance = right, ProcedureBlock, StorageStrategy = SQLMapping ]
        {
        /// .HMO
        Property Hmo As %Integer;

        /// .KEEN
        Property Keen As %Integer;

        /// .SEQ
        Property Seq As %String;

        Property IppLineCount As %Integer;

        Index iMaster On (Hmo, Keen, Seq) [ IdKey, Unique ];

        Relationship IppLines As XFXA.MCA.WIppProvLine [ Cardinality = many, Inverse = relWIppProv ];

        <Storage name="SQLMapping">
          <DataLocation>^WHEAIPP</DataLocation>
          <ExtentSize>1000000</ExtentSize>
          <SQLMap name="DBMS">
            <Data name="IppLineCount"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>1</Piece> </Data>
            <Global>^WHEAIPP</Global>
            <PopulationType>full</PopulationType>
            <Subscript name="1"> <AccessType>Sub</AccessType> <Expression>{Hmo}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript>
            <Subscript name="2"> <AccessType>Sub</AccessType> <Expression>{Keen}</Expression> </Subscript>
            <Subscript name="3"> <AccessType>Sub</AccessType> <LoopInitValue>1</LoopInitValue> <Expression>{Seq}</Expression> </Subscript>
            <Type>data</Type>
          </SQLMap>
          <StreamLocation>^XFXA.MCA.WIppProvS</StreamLocation>
          <Type>%Library.CacheSQLStorage</Type>
        </Storage>
        }

    This class compiles fine. Next I created the WIppProvLine class listed below and made a parent-child relationship between the two:

        /// Used to represent a single line of IPP data
        Class XFXA.MCA.WIppProvLine Extends (%Persistent, %XML.Adaptor)
            [ ClassType = persistent, Inheritance = right, ProcedureBlock, StorageStrategy = SQLMapping ]
        {
        /// .CLM_AMT_ALLOWED node: 0 piece: 6<BR>
        /// This field should be used in conjunction with the Claim Operator field to
        /// define a whole claim dollar amount at which a particular claim should be
        /// flagged with a Pend status.
        Property ClmAmtAllowed As %String;

        /// .CLM_LINE_AMT_ALLOWED node: 0 piece: 8<BR>
        /// This field should be used in conjunction with the Clm Line Operator field to
        /// define a claim line dollar amount at which a particular claim should be flagged
        /// with a Pend status.
        Property ClmLineAmtAllowed As %String;

        /// .CLM_LINE_OP node: 0 piece: 7<BR>
        /// A new Table/Column Reference that gives the SIU (Special Investigative Unit)
        /// the ability to look for claim line dollars above, below, or equal to a set
        /// amount.
        Property ClmLineOp As %String;

        /// .CLM_OP node: 0 piece: 5<BR>
        /// A new Table/Column Reference that gives the SIU (Special Investigative Unit)
        /// the ability to look for claim dollars above, below, or equal to a set amount.
        Property ClmOp As %String;

        Property EffDt As %Date;

        Property Hmo As %Integer;

        /// .IPP_REASON node: 0 piece: 10<BR>
        /// IPP Reason Code
        Property IppCode As %Integer;

        Property Keen As %Integer;

        /// .LAST_CHG_DT node: 0 piece: 4<BR>
        /// Last Changed Date
        Property LastChgDt As %Date;

        /// .PX_DX_CDE_FLAG node: 0 piece: 9<BR>
        /// A Flag to indicate whether or not Procedure Codes or Diagnosis Codes are to be
        /// associated with this SIU Flag Type Entry. If the Flag = Y, then control would
        /// jump to a new screen where the user can enter the necessary codes.
        Property PxDxCdeFlag As %String;

        Property Seq As %String;

        Property Seq2 As %String;

        Index iMaster On (Hmo, Keen, Seq, Seq2) [ IdKey, PrimaryKey, Unique ];

        /// .TERM_DT node: 0 piece: 2<BR>
        /// Term Date
        Property TermDt As %Date;

        /// .USER_INI node: 0 piece: 3
        Property UserIni As %String;

        Relationship relWIppProv As XFXA.MCA.WIppProv [ Cardinality = one, Inverse = IppLines ];

        Index relWIppProvIndex On relWIppProv;

        //Index NewIndex1 On (RelWIppProv, Seq2) [ IdKey, PrimaryKey, Unique ];

        <Storage name="SQLMapping">
          <ExtentSize>1000000</ExtentSize>
          <SQLMap name="DBMS">
            <ConditionalWithHostVars></ConditionalWithHostVars>
            <Data name="ClmAmtAllowed"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>6</Piece> </Data>
            <Data name="ClmLineAmtAllowed"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>8</Piece> </Data>
            <Data name="ClmLineOp"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>7</Piece> </Data>
            <Data name="ClmOp"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>5</Piece> </Data>
            <Data name="EffDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>1</Piece> </Data>
            <Data name="Hmo"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>11</Piece> </Data>
            <Data name="IppCode"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>10</Piece> </Data>
            <Data name="LastChgDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>4</Piece> </Data>
            <Data name="PxDxCdeFlag"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>9</Piece> </Data>
            <Data name="TermDt"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>2</Piece> </Data>
            <Data name="UserIni"> <Delimiter>"^"</Delimiter> <Node>+0</Node> <Piece>3</Piece> </Data>
            <Global>^WHEAIPP</Global>
            <Subscript name="1"> <AccessType>Sub</AccessType> <Expression>{Hmo}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript>
            <Subscript name="2"> <AccessType>Sub</AccessType> <Expression>{Keen}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript>
            <Subscript name="3"> <AccessType>Sub</AccessType> <Expression>{Seq}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript>
            <Subscript name="4"> <AccessType>Sub</AccessType> <Expression>{Seq2}</Expression> <LoopInitValue>1</LoopInitValue> </Subscript>
            <Type>data</Type>
          </SQLMap>
          <StreamLocation>^XFXA.MCA.WIppProvLineS</StreamLocation>
          <Type>%Library.CacheSQLStorage</Type>
        </Storage>
        }

    When I try to compile this one I get the following error:

        ERROR #5502: Error compiling SQL Table 'XFXA_MCA.WIppProvLine
        %msg: Table XFXA_MCA.WIppProvLine has the following unmapped (not defined on the data map) fields: relWIppProv'
        ERROR #5030: An error occurred while compiling class XFXA.MCA.WIppProvLine
        Detected 1 errors during compilation in 2.745s.

    What am I doing wrong? Thanks in advance, Fred


  • MySQL FULLTEXT aggravation

    - by southof40
    Hi - I'm having problems with case-sensitivity in MySQL FULLTEXT searches. I've just followed the FULLTEXT example in the MySQL docs at http://dev.mysql.com/doc/refman/5.1/en/fulltext-boolean.html . I'll post it here for ease of reference:

        CREATE TABLE articles (
            id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
            title VARCHAR(200),
            body TEXT,
            FULLTEXT (title, body)
        );

        INSERT INTO articles (title, body) VALUES
            ('MySQL Tutorial', 'DBMS stands for DataBase ...'),
            ('How To Use MySQL Well', 'After you went through a ...'),
            ('Optimizing MySQL', 'In this tutorial we will show ...'),
            ('1001 MySQL Tricks', '1. Never run mysqld as root. 2. ...'),
            ('MySQL vs. YourSQL', 'In the following database comparison ...'),
            ('MySQL Security', 'When configured properly, MySQL ...');

        SELECT * FROM articles
        WHERE MATCH (title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    My problem is that the example shows that SELECT returning the first and fifth rows ('..DataBase..' and '..database..'), but I only get one row ('database')! The example doesn't state what collation the table in the example had, but I have ended up with latin1_general_cs on the title and body columns of my example table. My version of MySQL is 5.1.39-log and the connection collation is utf8_unicode_ci. I'd be really grateful if someone could suggest why my experience differs from the example in the manual! Be grateful for any advice.
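
    The collation is almost certainly the difference: FULLTEXT matching follows the column collation, and the _cs suffix makes comparisons case-sensitive. A hedged sketch of the fix, assuming the articles table above (latin1_swedish_ci is MySQL's case-insensitive default for latin1, and the ALTER rebuilds the FULLTEXT index as a side effect):

        ALTER TABLE articles
            MODIFY title VARCHAR(200) CHARACTER SET latin1 COLLATE latin1_swedish_ci,
            MODIFY body  TEXT         CHARACTER SET latin1 COLLATE latin1_swedish_ci;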


  • PHP: tips/resources/patterns for learning to implement a basic ORM

    - by BoltClock
    I've seen various MVC frameworks as well as standalone ORM frameworks for PHP, as well as other ORM questions here; however, most of the questions ask for existing frameworks to get started with, which is not what I'm looking for. (I have also read this SO question but I'm not sure what to make of it, and the answers are vague.) Instead, I figured I'd learn best by getting my hands dirty and actually writing my own ORM, even a simple one. Except I don't really know how to get started, especially since the code I see in other ORMs is so complicated. With my PHP 5.2.x (this is important) MVC framework I have a basic custom database abstraction layer, which has:

      - Very simple methods like connect($host, $user, $pass, $base), query($sql, $binds), etc.
      - Subclasses for each DBMS that it supports
      - A class (and respective subclasses) to represent SQL result sets

    But it does not have:

      - Active Record functionality, which I assume is an ORM thing (correct me if I'm wrong)

    I've read up a little about ORM, and from my understanding they provide a means to further abstract data models from the database itself by representing data as nothing more than PHP-based classes/objects; again, correct me if I am wrong or have missed something. Still, I'd like some simple tips from anyone else who's dabbled more or less with ORM frameworks. Is there anything else I need to take note of, simple example code for me to refer to, or resources I can read? Thanks a lot in advance!


  • What is the best method for updating all changed data in EF 4?

    - by Soul_Master
    I'm trying to create a method that can copy any changed data from a changed data object (generated by ASP.NET MVC) to the old data object (retrieved from the current data in the DBMS), like the following code:

        public static bool UpdateSomeData(SomeEntities context, SomeModelType changedData)
        {
            var oldData = GetSomeModelTypeById(context, changedData.ID);
            UpdateModel(oldData, changedData);
            return context.SaveChanges() > 0;
        }

    I'm trying to write the method so it saves any changed data without affecting other, unchanged data, like the following source code:

        public static void UpdateModel<TModel>(TModel oldData, TModel changedData)
        {
            foreach (var pi in typeof(TModel).GetProperties()
                .Where
                (
                    // Ignore the ID property for security reasons
                    x => x.Name.ToUpper() != "ID"
                        && x.CanRead
                        && x.CanWrite
                        &&
                        (
                            // It must be a primitive type or Guid
                            x.PropertyType.FullName.StartsWith("System")
                            && !x.PropertyType.FullName.StartsWith("System.Collection")
                            && !x.PropertyType.FullName.StartsWith("System.Data.Entity.DynamicProxies")
                        )
                ))
            {
                var oldValue = pi.GetValue(oldData, null);
                var newValue = pi.GetValue(changedData, null);

                if (!oldValue.Equals(newValue))
                {
                    pi.SetValue(oldData, newValue, null);
                }
            }
        }

    I am not sure about the above method, because it is such an ugly way to update data. A recent bug showed me that if you update some property like the navigation properties (related data from another table), it will remove the current record from the database. I don't understand why that happened, but it is very dangerous for me. So, do you have any idea for this question, to reassure me about updating data from ASP.NET MVC? Thanks,


  • oracle plsql select pivot without dynamic sql to group by

    - by kayhan yüksel
    To whom it may respond to: we would like to use SELECT with the PIVOT option on an 11g R2 Oracle DBMS. Our query is like:

        select * from
          (SELECT o.ship_to_customer_no, ol.item_no, ol.amount
             FROM t_order o, t_order_line ol
            WHERE o.NO = ol.order_no
              and ol.item_no in (select distinct(item_no) from t_order_line))
        pivot --xml
          (SUM(amount) FOR item_no IN (select distinct(item_no) as item_no_ from t_order_line));

    As can be seen, XML is commented out. If run as PIVOT XML, it gives the correct output in XML format, but we are required to get the data as unformatted pivot data. As written, this statement throws:

        ORA-00936: missing expression

    Any resolutions or ideas would be welcomed. Best Regards

    -------- If we can get the result of this into a sys_refcursor using execute immediate, it will be solved. --------

    The procedure:

        PROCEDURE pr_test2 (deneme OUT sys_refcursor)
        IS
            v_sql NVARCHAR2 (4000) := '';
            TYPE v_items IS TABLE OF NVARCHAR2 (30);
            v_pivot_items NVARCHAR2 (4000) := '';
        BEGIN
            FOR i IN (SELECT DISTINCT (item_no) AS items FROM t_order_line)
            LOOP
                v_pivot_items := ',''' || i.items || '''' || v_pivot_items;
            END LOOP;

            v_pivot_items := LTRIM (v_pivot_items, ',');

            v_sql := 'begin select * from
                (SELECT o.ship_to_customer_no, ol.item_no, ol.amount
                   FROM t_order o, t_order_line ol
                  WHERE o.NO = ol.order_no
                    and OL.ITEM_NO in (select distinct(item_no) from t_order_line))
                pivot --xml
                (SUM(amount) FOR item_no IN (' || v_pivot_items || '));end;';

            open DENEME for select v_sql from dual;

    Kayhan YÜKSEL
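
    For what it's worth, a hedged sketch of how this usually gets solved: the non-XML PIVOT clause needs a literal IN-list known at parse time, so the list has to be concatenated into the dynamic SQL, and the cursor opened FOR that string directly (not FOR a query that merely selects the string, and with no begin/end wrapper):

        CREATE OR REPLACE PROCEDURE pr_test2 (deneme OUT SYS_REFCURSOR)
        IS
            v_pivot_items VARCHAR2 (4000);
        BEGIN
            -- Build the literal IN-list up front (LISTAGG is available in 11gR2)
            SELECT LISTAGG ('''' || item_no || '''', ',')
                       WITHIN GROUP (ORDER BY item_no)
              INTO v_pivot_items
              FROM (SELECT DISTINCT item_no FROM t_order_line);

            -- Open the cursor directly for the dynamic statement
            OPEN deneme FOR
                'select * from
                   (SELECT o.ship_to_customer_no, ol.item_no, ol.amount
                      FROM t_order o, t_order_line ol
                     WHERE o.no = ol.order_no)
                 pivot (SUM(amount) FOR item_no IN (' || v_pivot_items || '))';
        END pr_test2;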


  • [Database] How to model this one-to-one relation?

    - by pbean
    I have several entities which represent different types of users who need to be able to log in to a particular system. Additionally, they have different types of information associated with them. For example: a "general user", which has an e-mail address, and an "admin user", which has a workstation number (note that this is a hypothetical case). Both entities also share common properties like first name, surname, address and telephone number. Finally, they naturally need to have a (unique) user name and a password to log in. In the application, the user just has to fill in his user name and password, and the functionality of the application changes slightly according to the type of the user. You can imagine that the username needs to be unique for this to work. How should I model this effectively? I can't just create two tables, because then I can't enforce a unique constraint on the user name. I also can't put them all in just one table, because they have different types of specific information associated with them. I think I might need 3 separate tables: one for "users" (with user name and password), one for the "general users" and another one for the "admin users". But how would the relations between these work? Or is there another solution? (By the way, the target DBMS is MySQL, so I don't think generalization is supported in the database system itself.)
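
    One minimal sketch of that three-table design in MySQL (column names are hypothetical; the subtype tables reuse the parent's primary key, so the unique username lives in exactly one place):

        CREATE TABLE users (
            user_id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            username   VARCHAR(50)  NOT NULL UNIQUE,   -- the single point of uniqueness
            password   VARCHAR(100) NOT NULL,
            user_type  ENUM('general', 'admin') NOT NULL,
            first_name VARCHAR(50),
            surname    VARCHAR(50),
            address    VARCHAR(200),
            telephone  VARCHAR(30)
        ) ENGINE=InnoDB;

        CREATE TABLE general_users (
            user_id INT NOT NULL PRIMARY KEY,           -- same key as users: a 1:1 subtype
            email   VARCHAR(100) NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users (user_id)
        ) ENGINE=InnoDB;

        CREATE TABLE admin_users (
            user_id        INT NOT NULL PRIMARY KEY,
            workstation_no INT NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users (user_id)
        ) ENGINE=InnoDB;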


  • What database systems should a startup company consider?

    - by Am
    Right now I'm developing the prototype of a web application that aggregates a large number of text entries from a large number of users. This data must be frequently displayed back and often updated. At the moment I store the content inside a MySQL database and use the NHibernate ORM layer to interact with the DB. I've got tables defined for users, roles, submissions, tags, notifications, etc. I like this solution because it works well and my code looks nice and sane, but I'm also worried about how MySQL will perform once the size of our database reaches a significant number. I feel that it may struggle to perform join operations fast enough. This has made me think about non-relational database systems such as MongoDB, CouchDB, Cassandra or Hadoop. Unfortunately I have no experience with any of them. I've read some good reviews of MongoDB and it looks interesting. I'm happy to spend the time and learn if one turns out to be the way to go. I'd much appreciate anyone offering points or issues to consider when going with a non-relational DBMS.


  • What is the correct way to increment a field making up part of a composite key

    - by Tr1stan
    I have a bunch of tables whose primary key is made up of the foreign keys of other tables (a composite key). As a very cut-down version, the attributes might look like this:

        A[aPK, SomeFields]  1:M  B[bPK, aFK, SomeFields]  1:M  C[cPK, bFK, aFK, SomeFields]

    As data, this could look like:

        A[aPK, SomeFields]:
        1, Foo
        2, Bar

        B[bPK, aFK, SomeFields]:
        1, 1, FooData1
        2, 1, FooData2
        1, 2, BarData1
        2, 2, BarData2

        C[cPK, bFK, aFK, SomeFields]:
        1, 1, 1, FooData1More
        2, 1, 1, FooData1More
        1, 2, 1, FooData2More
        2, 2, 1, FooData2More
        1, 1, 2, BarData1More
        2, 1, 2, BarData1More
        1, 2, 2, BarData2More
        2, 2, 2, BarData2More

    I've got this running on an MSSQL DBMS and I'm looking for the best way to increment the left-most column in each table when a new tuple is added to it. I can't use the auto-increment Identity Specification option, as that has no idea that it is part of a composite key. I also don't want to use an aggregate function such as MAX(field)+1, as this will have adverse effects with multiple users inputting data, rolling back, etc. There might, however, be a nice trigger-based option here, but I'm not sure. This must be a common issue, so I'm hoping that someone has a lovely solution. As an aside, which may or may not affect the answer, I'm using Entity Framework 1.0 as my ORM, within a C# MVC application.
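
    On the trigger-based idea, here is a hedged T-SQL sketch for table B from the example (an INSTEAD OF trigger; the UPDLOCK/HOLDLOCK hints serialize concurrent inserts for the same parent so MAX()+1 cannot hand out duplicates, and a rollback simply releases the locks):

        CREATE TRIGGER trg_B_AssignKey ON B
        INSTEAD OF INSERT
        AS
        BEGIN
            INSERT INTO B (bPK, aFK, SomeFields)
            SELECT ISNULL(mx.maxPK, 0)
                     -- ROW_NUMBER handles multi-row inserts for the same parent
                     + ROW_NUMBER() OVER (PARTITION BY i.aFK ORDER BY (SELECT NULL)),
                   i.aFK,
                   i.SomeFields
              FROM inserted i
              CROSS APPLY (SELECT MAX(b.bPK) AS maxPK
                             FROM B b WITH (UPDLOCK, HOLDLOCK)
                            WHERE b.aFK = i.aFK) mx;
        END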


  • QSqlQuery UPDATE/INSERT DateTime with server's time (eg CURRENT_TIMESTAMP)

    - by Skinniest Man
    I am using QSqlQuery to insert data into a MySQL database. Currently all I care about is getting this to work with MySQL, but ideally I'd like to keep this as platform-independent as possible. What I'm after, in the context of MySQL, is to end up with code that effectively executes something like the following query:

        UPDATE table SET time_field=CURRENT_TIMESTAMP() WHERE id='5'

    The following code is what I have attempted, but it fails:

        QSqlQuery query;
        query.prepare("INSERT INTO table SET time_field=? WHERE id=?");
        query.addBindValue("CURRENT_TIMESTAMP()");
        query.addBindValue(5);
        query.exec();

    The error I get is:

        Incorrect datetime value: 'CURRENT_TIMESTAMP()' for column 'time_field' at row 1
        QMYSQL3: Unable to execute statement.

    I am not surprised, as I assume Qt is doing some type checking when it binds values. I have dug through the Qt documentation as well as I know how, but I can't find anything in the API designed specifically for supporting MySQL's CURRENT_TIMESTAMP() function, or that of any other DBMS. Any suggestions?
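
    One observation, offered as a sketch rather than a definitive answer: bound parameters are always treated as data, never as SQL, so the function call has to live in the statement text itself. The standard-SQL spelling (no parentheses) also keeps it reasonably portable:

        UPDATE some_table
           SET time_field = CURRENT_TIMESTAMP   -- evaluated server-side; standard SQL, not MySQL-specific
         WHERE id = ?                           -- only real data values get bound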


  • Why am I returning empty records when querying MySQL with PHP?

    - by Brian Bolton
    I created the following script to query a table and return the first 30 results. The query returns 30 results, but they do not have any text or information. Why would this be? The table stores Vietnamese characters. The database is MySQL 4. Here's the page: http://saomaidanang.com/recentposts.php Here's the code:

        <?php
        header('Content-Type: text/html; charset=utf-8');

        // CONNECTION INFO
        $dbms = 'mysql';
        $dbhost = 'xxxxx';
        $dbname = 'xxxxxxx';
        $dbuser = 'xxxxxxx';
        $dbpasswd = 'xxxxxxxxxxxx';

        $conn = mysql_connect($dbhost, $dbuser, $dbpasswd) or die('Error connecting to mysql');
        mysql_select_db($dbname, $conn);

        // QUERY
        $result = mysql_query("SET NAMES utf8");
        $cmd = 'SELECT * FROM `phpbb_posts_text` ORDER BY `phpbb_posts_text`.`post_subject` DESC LIMIT 0, 30';
        $result = mysql_query($cmd);
        ?>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html dir="ltr">
        <head>
            <title>recent posts</title>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
        <p>
        <?php
        // DISPLAY
        // Note: mysql_fetch_row() returns a numerically indexed array, so
        // $myrow['post_subject'] would always be empty; mysql_fetch_assoc()
        // returns string keys. Also, after SET NAMES utf8 the data is already
        // UTF-8, so a further utf8_encode() would double-encode it.
        while ($myrow = mysql_fetch_assoc($result)) {
            echo 'post subject:';
            echo $myrow['post_subject'];
            echo 'post text:';
            echo $myrow['post_text'];
        }
        ?>
        </p>
        </body>


  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index allows one to quickly find a string which matches the query. Now, I have to search a big table for the strings which do not match. Of course, the normal index does not help, and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                  QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance if almost all the tuples match). But, here, "not equal" searches are really slower. Any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                  QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                              QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
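
    Two standard workarounds, offered as sketches rather than guaranteed wins (the planner may still prefer a seq scan when most rows qualify): rewrite <> as two ranges so the btree can serve both halves, or build a partial index for the minority value, which fits the tld(email) != 'fr' case well (this assumes tld() is declared IMMUTABLE):

        -- Two range scans instead of one <> filter
        SELECT person FROM EmailsPersons WHERE tld(email) < 'fr'
        UNION ALL
        SELECT person FROM EmailsPersons WHERE tld(email) > 'fr';

        -- A partial index holding only the rows the query wants
        CREATE INDEX emailspersons_not_fr_idx
            ON EmailsPersons (person)
            WHERE tld(email) <> 'fr';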


  • Need some clarification on the ANSI/SPARC 3-tier database architecture.

    - by Moonshield
    Hi there, I'm currently revising for a databases exam and looking over some past papers, but there's one question that I'm slightly unsure about and was wondering if someone could offer some assistance.

        "Describe EACH of the THREE levels of the ANSI SPARC 3 level architecture. Your answer should include the purpose of EACH of the schemas, the level of abstraction they provide and the software tools that would be used to access and support them."

    As I understand it (although please correct me if I'm wrong): the internal schema specifies the physical storage of the data; the conceptual schema specifies the structure of the database and the domains; and the external schemas are how the database is viewed by "users" (applications, etc.). As for the abstraction, I understand that the conceptual layer means that the physical data storage can be altered without the end user being affected, and likewise the external schemas mean that the conceptual schema can change without affecting the existing user views. The bit that I'm not sure about is what tools are used to access and support each layer. Would the internal schema be handled by the DBMS, the conceptual schema handled by some sort of DDL interpreter, and the external schema handled by a DML interpreter (or have I misunderstood what each level does)? Any assistance would be greatly appreciated. Thanks, Moonshield


  • Guidance required: First time gonna work with a real high-end database (size = 50GB).

    - by claws
    I got a project of designing a database. This is going to be my first big-scale project. The good thing about it is that the information is mostly organized and currently stored in text files. The size of this information is 50GB. There are going to be a few million records in each table, and around 50 tables. I need to provide a web interface for searching and browsing. I'm going to use the MySQL DBMS. I've never worked with a database bigger than 200MB before, so speed and performance were never a concern, but I followed things like normalization and indexes. I never used any kind of testing/benchmarking/query optimization/whatever, because I never had to care about them. But here the purpose of creating the database is to make it quickly searchable, so I need to consider all possible aspects in the design. I was browsing the archives and found:

        http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases
        http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers

    I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?


  • Oracle user-defined aggregate function for a varray of varchar

    - by baju
    I am trying to write an aggregate function for a varray, and I get this error code when I'm trying to use it with data from the DB:

        ORA-00600 internal error code, arguments: [kodpunp1], [], [], [], [], [], [], [], [], [], [], []
        [koxsihread1], [0], [3989], [45778], [], [], [], [], [], [], [], []

    The code of the function is really simple (in fact it does nothing):

        create or replace TYPE "TEST_VECTOR" as varray(10) of varchar(20)

        ALTER TYPE "TEST_VECTOR" MODIFY LIMIT 4000 CASCADE

        create or replace type Test as object(
            lastVector TEST_VECTOR,
            STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number,
            MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number,
            MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number,
            MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number
        );

        create or replace type body Test is
            STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number is
            begin
                sctx := Test(TEST_VECTOR());
                return ODCIConst.Success;
            end;

            MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number is
            begin
                self.lastVector := value;
                return ODCIConst.Success;
            end;

            MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number is
            begin
                return ODCIConst.Success;
            end;

            MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number is
            begin
                returnValue := self.lastVector;
                return ODCIConst.Success;
            end;
        end;

        create or replace FUNCTION test_fn (input TEST_VECTOR) RETURN TEST_VECTOR
            PARALLEL_ENABLE AGGREGATE USING Test;

    Next I create some test data:

        create table t1_test_table(
            t1_id number not null,
            t1_value TEST_VECTOR not null,
            Constraint PRIMARY_KEY_1 PRIMARY KEY (t1_id)
        )

        insert into t1_test_table (t1_id, t1_value) values (1, TEST_VECTOR('x','y','z'))

    Now everything is prepared to perform queries:

        Select test_fn(TEST_VECTOR('y','x')) from dual

    The query above works well.

        Select test_fn(t1_value) from t1_test_table where t1_id = 1

    The version of the Oracle DBMS I use is 11.2.0.3.0. Has anyone tried to do such a thing? What could be the reason that it does not work, and how can it be solved? Thanks in advance for help.


  • Dealing with a large number of text strings

    - by Fadrian
    My project, when it is running, will collect a large number of string text blocks (about 20K of them, and the largest I have seen is about 200K of them) in a short span of time and store them in a relational database. Each of the string texts is relatively small, averaging about 15 short lines (about 300 characters). The current implementation is in C# (VS2008), .NET 3.5, and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns of the project, but the priority is performance first, then storage. I am looking for answers to these:

      - Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage?
      - Do you know what would be the best compression algorithm/library to use in this context that gives me the best performance? Currently I just use the standard GZip in the .NET framework.
      - Do you know any best practices to deal with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET framework (it is a big project and this requirement is only a small part of it).

    EDITED: I will keep adding to this to clarify the points raised. I don't need text indexing or searching on these texts; I just need to be able to retrieve them at a later stage for display as a text block, using their primary key. I have a working solution implemented as above, and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence every optimization I can do will help.


  • Conceptual data modeling: Is RDF the right tool? Other solutions?

    - by paprika
    I'm planning a system that combines various data sources and lets users do simple queries on these. A part of the system needs to act as an abstraction layer that knows all connected data sources: the user shouldn't [need to] know about the underlying data "providers". A data provider could be anything: a relational DBMS, a bug tracking system, ..., a weather station. They are hooked up to the query system through a common API that defines how to "offer" data. The type of queries a certain data provider understands is given by its "offer" (e.g. I know these entities, I can give you aggregates of type X for relationship Y, ...). My concern right now is the unification of the data: the various data providers need to agree on a common vocabulary (e.g. the name of the entity "customer" could vary across different systems). Thus, defining a high-level representation of the entities and their relationships is required. So far I have the following requirements: I need to be able to define objects and their properties/attributes. Further, arbitrary relations between these objects need to be represented: a verb that defines the nature of the relation (e.g. "knows"), the multiplicity (e.g. 1:n) and the direction/navigability of the relation. It occurs to me that RDF is a viable option, but is it "the right tool" for this job? What other solutions/frameworks exist for semantic data modeling that have a machine-readable representation, and why are they better suited for this task? I'm grateful for every opinion and pointer to helpful resources.


  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the Cloud in the Big Data story. In this article we will understand the role of operational databases in supporting the Big Data story. Even though we keep on talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can't just exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.

    Real World Example

    Think about it this way: you are using Facebook and you have just updated your information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes of their friends, your change of relationship will also show up in the same list. Now here is the question: do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship changes with your family members is also because of the technology used in Big Data? Actually, Facebook uses MySQL to do various updates in the timeline as well as for the various events we trigger on their homepage. It is really difficult to part from operational databases in any real-world business. Now we will see a few examples of operational databases:

      - Relational Databases (this blog post)
      - NoSQL Databases (this blog post)
      - Key-Value Pair Databases (tomorrow's post)
      - Document Databases (tomorrow's post)
      - Columnar Databases (the day after's post)
      - Graph Databases (the day after's post)
      - Spatial Databases (the day after's post)

    Relational Databases

    We have discussed the RDBMS role in the Big Data story in detail earlier, so we will not cover it extensively here. Relational databases are pretty much everywhere in most businesses which have been around for many years. The importance and existence of relational databases are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open source and widely accepted database, I suggest trying MySQL, as it has been very popular in the last few years. I also suggest you try out PostgreSQL. Besides many other essential qualities, PostgreSQL has a very interesting licensing policy. The PostgreSQL license allows modifications and distribution of the application in open or closed (source) form. One can make any modifications and keep them private as well as contribute to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.

    Nonrelational Databases (NoSQL)

    We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL. There are plenty of NoSQL databases out in the market, and selecting the right one is always very challenging. Here are a few of the properties which are essential to consider when selecting the right NoSQL database for operational purposes:

      - Data and Query Model
      - Persistence of Data and Design
      - Eventual Consistency
      - Scalability

    Though all of the above properties are interesting to have in any NoSQL database, the one which most attracts me is Eventual Consistency.

    Eventual Consistency

    An RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as the key mechanism for ensuring data consistency, whereas a nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which expects the unexpected, often. In a large distributed system there are always various nodes joining and various nodes being removed, as such systems often run on commodity servers. This happens either intentionally or accidentally. Even though one or more nodes are down, it is expected that the entire system still functions normally. Applications should be able to do various updates as well as retrieval of data successfully, without any issue. Additionally, this also means that the system is expected to return the same updated data at any time from all the functioning nodes. Irrespective of when any node joins the system, if it is marked to hold some data it should eventually contain the same updated data. As per Wikipedia, eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words: informally, if no additional updates are made to a given data item, all reads of that item will eventually return the same value.

    Tomorrow

    In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • NetBeans, JSF, and MySQL Primary Keys using AUTO_INCREMENT

    - by MarkH
    I recently had the opportunity to spin up a small web application using JSF and MySQL. Having developed JSF apps with Oracle Database back-ends before and possessing some small familiarity with MySQL (sans JSF), I thought this would be a cakewalk. Things did go pretty smoothly... but there was one little "gotcha" that took more time than the few seconds it really warranted.

    The Problem

    Every DBMS has its own way of automatically generating primary keys, and each has its pros and cons. For the Oracle Database, you use a sequence and point your Java classes to it using annotations that look something like this:

        @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="POC_ID_SEQ")
        @SequenceGenerator(name="POC_ID_SEQ", sequenceName="POC_ID_SEQ", allocationSize=1)

    Between creating the actual sequence in the database and making sure you have your annotations right (watch those typos!), it seems a bit cumbersome. But it typically "just works", without fuss. Enter MySQL. Designating an integer-based field as PRIMARY KEY and using the keyword AUTO_INCREMENT makes the same task seem much simpler. And it is, mostly. But while NetBeans cranks out a superb "first cut" for a basic JSF CRUD app, there are a couple of small things you'll need to bring to the mix in order to be able to actually (C)reate records. The (RUD) performs fine out of the gate.

    The Solution

    Omitting all design considerations and activity (!), here is the basic sequence of events I followed to create, then resolve, the JSF/MySQL "Primary Key Perfect Storm":

      1. Fire up NetBeans.
      2. Create JSF project.
      3. Create Entity Classes from Database.
      4. Create JSF Pages from Entity Classes.
      5. Test run.
      6. Try to create record and hit error.

    It's a simple fix, but one that was fun to find in its completeness. :-) Even though you've told it what to do for a primary key, a MySQL table requires a gentle nudge to actually generate that new key value. Two things are needed to make the magic happen. First, you need to ensure the following annotation is in place in your Java entity classes:

        @GeneratedValue(strategy = GenerationType.IDENTITY)

    All well and good, but the real key is this: in your controller class(es), you'll have a create() function that looks something like this, minus the comment line and the setId() call shown here:

        public String create() {
            try {
                // Assign 0 to ID for MySQL to properly auto_increment the primary key.
                current.setId(0);
                getFacade().create(current);
                JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("CategoryCreated"));
                return prepareCreate();
            } catch (Exception e) {
                JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
                return null;
            }
        }

    Setting the current object's primary key attribute to zero (0) prior to saving it tells MySQL to get the next available value and assign it to that record's key field. Short and simple... but not inherently obvious if you've never used that particular combination of NetBeans/JSF/MySQL before. Hope this helps! All the best, Mark

  • Data Source Security Part 1

    - by Steve Felts
    I've written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6. The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection, because it's all available from the data source descriptor (or wallet):

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string, because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following:

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, it's supposed to take a database user and associated password, but different vendors implement this differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update, and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description. It also captures information about the compatibility of these features with one another.

      - User authentication (default)
        Description: Default getConnection(user, password) behavior - validate the input and use the user/password in the descriptor.
        Can be used with: Set client identifier
        Can't be used with: Proxy Session, Identity pooling, Use database credentials

      - Use database credentials
        Description: Instead of using the credential mapper, use the supplied user and password directly.
        Can be used with: Set client identifier, Proxy session, Identity pooling
        Can't be used with: User authentication, Multi Data Source

      - Set Client Identifier
        Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
        Can be used with: Everything

      - Proxy Session
        Description: Set a light-weight proxy user associated with the connection (Oracle-only).
        Can be used with: Set client identifier, Use database credentials
        Can't be used with: Identity pooling, User authentication

      - Identity pooling
        Description: Heterogeneous pool of connections owned by specified users.
        Can be used with: Set client identifier, Use database credentials
        Can't be used with: Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases - oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.


  • Local LINQtoSQL Database For Your Windows Phone 7 Application

    - by Tim Murphy
    There aren't many applications that are of value without having some form of data store. In Windows Phone development we have a few options. You can store text directly to isolated storage. You can also use a number of third-party libraries to create or mimic databases in isolated storage. With Mango we gained the ability to have a native .NET database approach which uses LINQ to SQL. In this article I will try to bring together the components needed to implement this last type of data store and fill in some of the blanks that I think other articles have left out.

    Defining A Database

    The first thing you are going to need to do is define classes that represent your tables and a data context class that is used as the overall database definition. The table class consists of column definitions as you would expect. They can have relationships and constraints as with any relational DBMS. Below is an example of a table definition. First you will need to add some assembly references to the code file:

        using System.ComponentModel;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;

    You can then add the table class and its associated columns. It needs to implement INotifyPropertyChanged and INotifyPropertyChanging. Each level of the class needs to be decorated with the attribute appropriate for that part of the definition. Where the class represents the table, the properties represent the columns. In this example you will see that the column is marked as a primary key, not nullable, with an auto-generated value. You will also notice that the column property's set method uses the NotifyPropertyChanging and NotifyPropertyChanged methods in order to make sure that the proper events are fired:

        [Table]
        public class MyTable : INotifyPropertyChanged, INotifyPropertyChanging
        {
            public event PropertyChangedEventHandler PropertyChanged;
            private void NotifyPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            public event PropertyChangingEventHandler PropertyChanging;
            private void NotifyPropertyChanging(string propertyName)
            {
                if (PropertyChanging != null)
                {
                    PropertyChanging(this, new PropertyChangingEventArgs(propertyName));
                }
            }

            private int _TableKey;
            [Column(IsPrimaryKey = true, IsDbGenerated = true, DbType = "INT NOT NULL Identity",
                    CanBeNull = false, AutoSync = AutoSync.OnInsert)]
            public int TableKey
            {
                get { return _TableKey; }
                set
                {
                    NotifyPropertyChanging("TableKey");
                    _TableKey = value;
                    NotifyPropertyChanged("TableKey");
                }
            }
        }

    The last part of the database definition that needs to be created is the data context. This is a simple class that takes an isolated storage location connection string in its constructor and then instantiates tables as public properties:

        public class MyDataContext : DataContext
        {
            public MyDataContext(string connectionString) : base(connectionString)
            {
                MyRecords = this.GetTable<MyTable>();
            }

            public Table<MyTable> MyRecords;
        }

    Creating A New Database Instance

    Now that we have a database definition, it is time to create an instance of the data context within our Windows Phone app. When your app fires up it should check if the database already exists and create an instance if it does not. I would suggest that this be part of the constructor of your ViewModel:

        db = new MyDataContext(connectionString);
        if (!db.DatabaseExists())
        {
            db.CreateDatabase();
        }

    The next thing you have to know is how the connection string for isolated storage should be constructed. The main sticking point I have found is that the database cannot be created unless the file mode is read/write. You may have different connection strings, but the initial one needs to be similar to the following:

        string connString = "Data Source = 'isostore:/MyApp.sdf'; File Mode = read write";

    Using Your Database

    Now that you have done all the up-front work, it is time to put the database to use. To make your life a little easier and keep proper separation between your view and your viewmodel, you should add a couple of methods to the viewmodel. These will do the CRUD work of your application. What you will notice is that the SubmitChanges method is the secret sauce in all of the methods that change data:

        private MyDataContext myDb;

        private ObservableCollection<MyTable> _viewRecords;
        public ObservableCollection<MyTable> ViewRecords
        {
            get { return _viewRecords; }
            set
            {
                _viewRecords = value;
                NotifyPropertyChanged("ViewRecords");
            }
        }

        public void LoadMedstarDbData()
        {
            var tempItems = from MyTable myRecord in myDb.MyRecords
                            select myRecord;
            ViewRecords = new ObservableCollection<MyTable>(tempItems);
        }

        public void SaveChangesToDb()
        {
            myDb.SubmitChanges();
        }

        public void AddMyTableItem(MyTable newScan)
        {
            myDb.MyRecords.InsertOnSubmit(newScan);
            myDb.SubmitChanges();
        }

        public void DeleteMyTableItem(MyTable newScan)
        {
            myDb.MyRecords.DeleteOnSubmit(newScan);
            myDb.SubmitChanges();
        }

    Updating An Existing Database

    What happens when you need to change the structure of your database? Unfortunately you have to add code to your application that checks the version of the database, which over time will create some pollution in your code base. On the other hand, it does give you control of the update. In this example you will see the DatabaseSchemaUpdater in action. Assuming we added a "Notes" field to the MyTable structure, the following code will check if the database is the latest version and add the field if it isn't:

        if (!myDb.DatabaseExists())
        {
            myDb.CreateDatabase();
        }
        else
        {
            DatabaseSchemaUpdater dbUpdater = myDb.CreateDatabaseSchemaUpdater();
            if (dbUpdater.DatabaseSchemaVersion < 2)
            {
                dbUpdater.AddColumn<MyTable>("Notes");
                dbUpdater.DatabaseSchemaVersion = 2;
                dbUpdater.Execute();
            }
        }

    Summary

    This approach does take a fairly large amount of work, but I think the end product is robust and very native for .NET developers. It turns out to be worth the investment.


  • Daylight saving time and Timezone best practices

    - by Oded
    I am hoping to make this question and the answers to it the definitive guide to dealing with daylight saving time, in particular for dealing with the actual change-overs. If you have anything to add, please do.

    Many systems are dependent on keeping accurate time; the problem is with changes to time due to daylight saving: moving the clock forward or backwards. For instance, one has business rules in an order-taking system that depend on the time of the order; if the clock changes, the rules might not be as clear. How should the time of the order be persisted? There is of course an endless number of scenarios; this one is simply an illustrative one.

      - How have you dealt with the daylight saving issue?
      - What assumptions are part of your solution? (looking for context here)
      - As important, if not more so: what did you try that did not work? Why did it not work?

    I would be interested in programming, OS, data persistence and other pertinent aspects of the issue. General answers are great, but I would also like to see details, especially if they are only available on one platform.

    Summary of answers and other data: (please add yours)

    Do:
      - Always persist time according to a unified standard that is not affected by daylight saving. GMT and UTC have been mentioned by different people.
      - Include the local time offset (including the DST offset) in stored timestamps.
      - Remember that DST offsets are not always an integer number of hours (e.g. Indian Standard Time is UTC+05:30).
      - If using Java, use JodaTime: http://joda-time.sourceforge.net/
      - Create a table TZOffsets with three columns: RegionClassId, StartDateTime, and OffsetMinutes (int, in minutes). See answer; a sketch follows this list.
      - Check if your DBMS needs to be shut down during the transition.
      - Business rules should always work on civil time.
      - Internally, keep timestamps in something like civil-time-seconds-from-epoch. See answer.
      - Only convert to local times at the last possible moment.

    Don't:
      - Do not use JavaScript date and time calculations in web apps unless you ABSOLUTELY have to.

    Testing:
      - When testing, make sure you test countries in the Western and Eastern hemispheres, with both DST in progress and not, and a country that does not use DST (6 in total).

    Reference:
      - Olson database, aka Tz_database - ftp://elsie.nci.nih.gov/pub
      - Sources for Time Zone and DST - http://www.twinsun.com/tz/tz-link.htm
      - ISO format (ISO 8601) - http://en.wikipedia.org/wiki/ISO_8601
      - Mapping between Olson database and Windows TimeZone Ids, from the Unicode Consortium - http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/windows_tzid.html
      - TimeZone page on Wikipedia - http://en.wikipedia.org/wiki/Tz_database
      - StackOverflow questions tagged dst - http://stackoverflow.com/questions/tagged/dst
      - StackOverflow questions tagged timezone - http://stackoverflow.com/questions/tagged/timezone

    Other:
      - Lobby your representative to end the abomination that is DST. We can always hope...
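
    A minimal SQL sketch of the TZOffsets table mentioned above (the three columns come from the summary; the primary key and the MySQL-flavored LIMIT syntax are my own assumptions):

        CREATE TABLE TZOffsets (
            RegionClassId INT      NOT NULL,
            StartDateTime DATETIME NOT NULL,
            OffsetMinutes INT      NOT NULL,  -- minutes, because offsets are not always whole hours (e.g. UTC+05:30)
            PRIMARY KEY (RegionClassId, StartDateTime)
        );

        -- Offset in force for a region at a given UTC instant:
        SELECT OffsetMinutes
          FROM TZOffsets
         WHERE RegionClassId = ? AND StartDateTime <= ?
         ORDER BY StartDateTime DESC
         LIMIT 1;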


  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems, and I think I have the fix, but I'm not certain my understanding is correct. In its simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is:

        recdate  date
        rectime  time
        system   varchar(20)
        count    integer
        accum1   integer
        accum2   integer

    There are a lot more accumulators than that, but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected into the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count. Now we also have a view over that table, specified as follows:

        create view blah_v (recdate, rectime, system, count, accum1, accum2) as
            select distinct recdate, rectime, system, count,
                value (case when count > 0 then accum1 / count end, 0),
                value (case when count > 0 then accum2 / count end, 0)
            from blah;

    In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them, so don't bother telling me they're rubbish; you're preaching to the choir). We've noticed that the time taken to do:

        select distinct recdate from XX

    varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is actually loading all the rows in and sorting them so as to remove duplicates. That's fair enough; it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means that it's impossible to have duplicates anyway. We've validated the problem since: if we create another view without the distinct, it performs at the same speed as the underlying table. I just wanted to confirm my understanding that a select distinct cannot return duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.
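
    For the record, the reasoning holds: a SELECT over a single table that includes all primary key columns cannot produce duplicate rows, so the DISTINCT is logically a no-op but still costs a sort. The rewritten view, dropping only the keyword, would be:

        create view blah_v (recdate, rectime, system, count, accum1, accum2) as
            select recdate, rectime, system, count,
                value (case when count > 0 then accum1 / count end, 0),
                value (case when count > 0 then accum2 / count end, 0)
            from blah;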

