Search Results

Search found 24675 results on 987 pages for 'table'.


  • SQL Convert Nvarchar(255) to DateTime problem

    - by steven
Hi, I'm using SQL Server 2008. I have 2 tables: Table 1 and Table 2.

Table 1 has one column, OldDate, which is nvarchar(255), NULL. Table 2 has one column, NewDate, which is datetime, NOT NULL.

Example data in Table 1:

26/07/03
NULL
NULL
23/07/2003
7/26/2003
NULL
28/07/03

When I try CAST(OldDate as datetime) I get this error: Arithmetic overflow error converting expression to data type datetime. I need to insert OldDate into NewDate with no errors, and I can't skip any rows. Any help would be appreciated.
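A minimal sketch of one way through (not the asker's confirmed fix): branch on the string's shape and pick a CONVERT style per format. TRY_CONVERT isn't available until SQL Server 2012, hence the pattern checks; the '19000101' sentinel is an assumption, needed only because NewDate is NOT NULL while OldDate allows NULL.

INSERT INTO Table2 (NewDate)
SELECT CASE
         WHEN OldDate IS NULL THEN '19000101'                         -- sentinel; NewDate is NOT NULL (assumption)
         WHEN OldDate LIKE '__/__/__'   THEN CONVERT(datetime, OldDate, 3)    -- style 3   = dd/mm/yy   (26/07/03)
         WHEN OldDate LIKE '__/__/____' THEN CONVERT(datetime, OldDate, 103)  -- style 103 = dd/mm/yyyy (23/07/2003)
         ELSE CONVERT(datetime, OldDate, 101)                         -- style 101 = mm/dd/yyyy (7/26/2003)
       END
FROM Table1;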


  • An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause

    - by brz dot net
I have to find the indentid from the status table based on the two conditions below:

1. If there is more than one record with the same indentid in the status table, and the same indentid has a count > 1 in the feasibilitystatus table, then I don't want to display the record.
2. If there is only one record for the indentid in the status table, and the same indentid has a count > 0 in the feasibilitystatus table, then I don't want to display the record.

Query:

select distinct s.indentid
from status s
where s.status='true'
and s.indentid not in
    (select case when count(s.indentid)>1
            then (select indentid from feasibilitystatus group by indentid having count(indentid)>1)
            else (select indentid from feasibilitystatus group by indentid having count(indentid)>0)
        end as indentid
     from status)

Error: An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.
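One hedged way to express both exclusion rules without an aggregate in the WHERE clause is to precompute the per-indentid counts in derived tables and join to them. This is a sketch against the column names in the question, not a tested answer:

select distinct s.indentid
from status s
join (select indentid, count(*) as cnt
      from status group by indentid) sc on sc.indentid = s.indentid
left join (select indentid, count(*) as fcnt
           from feasibilitystatus group by indentid) fc on fc.indentid = s.indentid
where s.status = 'true'
  and not ( (sc.cnt > 1 and coalesce(fc.fcnt, 0) > 1)    -- rule 1: duplicate indentid + feasibility count > 1
         or (sc.cnt = 1 and coalesce(fc.fcnt, 0) > 0) )  -- rule 2: single indentid + feasibility count > 0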


  • using structures with multidimensional tables

    - by gem
I have a table of structures, and these structures are two-dimensional tables of constants. Can you show me how to get the values in the table of constants? (Note: the following is just an example.)

typedef struct {
    unsigned char ** Type1;
    unsigned char ** Type2;
} Formula;

typedef struct {
    Formula tformula[size];
} table;

const table Values = {
    (unsigned char**) &(default_val1),
    (unsigned char**) &(default_val2)
};

const unsigned char default_val1[4][4] = {
    {0,1,2,3},
    {4,5,6,7},
    {8,9,0,11},
    {12,13,14,15}
};

const unsigned char default_val2[4][4] = {
    {15,16,17,13},
    {14,15,16,17},
    {18,19,10,21},
    {22,23,24,25}
};
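For what it's worth, a hedged sketch of how the example can be made to compile and read a value back. The unsigned char** casts are replaced by pointers-to-2D-array, since a [4][4] array's memory layout doesn't match a pointer-to-pointer; everything else follows the question's names:

#include <stdio.h>

typedef unsigned char Grid[4][4];

typedef struct {
    const Grid *Type1;   /* pointer to a 4x4 table of constants */
    const Grid *Type2;
} Formula;

static const Grid default_val1 = {
    {0,1,2,3}, {4,5,6,7}, {8,9,0,11}, {12,13,14,15}
};
static const Grid default_val2 = {
    {15,16,17,13}, {14,15,16,17}, {18,19,10,21}, {22,23,24,25}
};

static const Formula Values = { &default_val1, &default_val2 };

int main(void)
{
    /* dereference the pointer-to-array first, then index row/column */
    printf("%d\n", (*Values.Type1)[2][3]);   /* prints 11 */
    printf("%d\n", (*Values.Type2)[0][0]);   /* prints 15 */
    return 0;
}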


  • Dynamic "WHERE IN" on IQueryable (linq to SQL)

    - by user320235
I have a LINQ to SQL query returning rows from a table into an IQueryable object.

IQueryable<MyClass> items = from table in DBContext.MyTable
                            select new MyClass
                            {
                                ID = table.ID,
                                Col1 = table.Col1,
                                Col2 = table.Col2
                            };

I then want to perform a SQL "WHERE ... IN ..." query on the results. This works fine using the following (returning results with IDs ID1, ID2, or ID3):

sQuery = "ID1,ID2,ID3";
string[] aSearch = sQuery.Split(',');
items = items.Where(i => aSearch.Contains(i.ID));

What I would like to be able to do is perform the same operation, but not have to specify the i.ID part. So if I have the field name I want to apply the "WHERE IN" clause to as a string, how can I use this in the .Contains() method?
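A hedged sketch of the usual expression-tree approach (names here are illustrative, not from the thread): build the i => values.Contains(i.Property) lambda at runtime from the property name string. It assumes the targeted column is a string property.

using System;
using System.Linq;
using System.Linq.Expressions;

public static class QueryableExtensions
{
    // items.WhereIn("ID", aSearch) behaves like items.Where(i => aSearch.Contains(i.ID))
    public static IQueryable<T> WhereIn<T>(this IQueryable<T> source,
                                           string propertyName, string[] values)
    {
        ParameterExpression param = Expression.Parameter(typeof(T), "i");
        MemberExpression property = Expression.Property(param, propertyName);
        MethodCallExpression contains = Expression.Call(
            typeof(Enumerable), "Contains", new[] { typeof(string) },
            Expression.Constant(values), property);
        return source.Where(Expression.Lambda<Func<T, bool>>(contains, param));
    }
}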


  • Getting a query to index seek (rather than scan)

    - by PaulB
Running the following query (SQL Server 2000), the execution plan shows that it used an index seek, and Profiler shows it's doing 71 reads with a duration of 0:

select top 1 id from table where name = '0010000546163' order by id desc

Contrast that with the following, which uses an index scan with 8500 reads and a duration of about a second:

declare @p varchar(20)
select @p = '0010000546163'
select top 1 id from table where name = @p order by id desc

Why is the execution plan different? Is there a way to change the second method to seek? Thanks.

EDIT: The table looks like

CREATE TABLE [table] (
    [Id] [int] IDENTITY (1, 1) NOT NULL,
    [Name] [varchar] (13) COLLATE Latin1_General_CI_AS NOT NULL
)

Id is the primary clustered key. There is a non-unique index on Name and a unique composite index on Id/Name. There are other columns; I left them out for brevity.
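On SQL Server 2000, one hedged workaround is an index hint, since the optimizer compiles the plan before it knows the variable's value. The index name below is an assumption, not from the question:

declare @p varchar(13)   -- matching the column's declared length
select @p = '0010000546163'

select top 1 id
from [table] with (index (IX_table_Name))   -- IX_table_Name: assumed name of the index on Name
where name = @p
order by id desc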


  • Getting all dates in a MySQL result

    - by PankajK
I have a MySQL table called user(id, name, join_on); join_on is a date field. What I want is to show how many users have been created each day. I can use GROUP BY, but it will only give me the dates when users were added, like:

4/12/10  5 users added
4/13/10  2 users added
4/15/10  7 users added

Here the date 4/14/10 is missing, and I want a listing of all dates in one month. I have one solution: create another table only for holding dates, and LEFT JOIN that table to my users table on join_on to get the total result. But I don't want to do that, as it means creating and populating a date table. Please suggest a different approach for doing so. Thank you.
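If a permanent calendar table is off the table, a hedged alternative is to generate the month's dates inline from a small digits cross join and LEFT JOIN the per-day counts onto it (the '2010-04-01' anchor is just an example):

SELECT d.day, COALESCE(u.cnt, 0) AS users_added
FROM (
    SELECT DATE('2010-04-01') + INTERVAL (a.i + 10 * b.i) DAY AS day
    FROM (SELECT 0 AS i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
          UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) a
    CROSS JOIN (SELECT 0 AS i UNION SELECT 1 UNION SELECT 2 UNION SELECT 3) b
) d
LEFT JOIN (
    SELECT join_on, COUNT(*) AS cnt FROM user GROUP BY join_on
) u ON u.join_on = d.day
WHERE d.day < DATE('2010-04-01') + INTERVAL 1 MONTH
ORDER BY d.day;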


  • Not sure how to use Decode, NVL, and/or isNull (or something else?) in this situation

    - by RSW
I have a table of orders for particular products, and a table of products that are on sale. (It's not an ideal database structure, but that's out of my control.) What I want to do is outer join the order table to the sale table via product number, but I don't want to include any particular data from the sale table; I just want a Y in the output if the join exists, or an N if it doesn't. Can anyone explain how I can do this in SQL? Thanks in advance!
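A hedged sketch of the standard answer (table and column names are assumed, since the question doesn't give them): LEFT JOIN on product number and CASE on the sale side's key, which is NULL exactly when no match exists.

SELECT o.product_no,
       CASE WHEN s.product_no IS NULL THEN 'N' ELSE 'Y' END AS on_sale
FROM orders o
LEFT JOIN (SELECT DISTINCT product_no FROM sale_products) s
       ON s.product_no = o.product_no;

-- In Oracle the same test can be spelled DECODE(s.product_no, NULL, 'N', 'Y')
-- or NVL2(s.product_no, 'Y', 'N'), matching the functions in the title.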


  • Designing a database with a flexible user profile

    - by Mughrabi
I am working on a design where I can have flexible attributes for users, and I am confused about how to continue the design of the schema. I made a table where I keep the system-needed information:

Table name: users
- id
- username
- password

Now I wish to create a profile table with a one-to-one relation, holding all the other attributes such as email, first name, last name, etc. My question is: is there a way to add a third table in which profiles will be flexible? In other words, if my clients need to create a new attribute, they won't need any customization to the code.
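A hedged sketch of the usual entity-attribute-value (EAV) shape for that third table; the names are placeholders, not a recommendation from the thread:

CREATE TABLE profile_attributes (
    id   INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(64) NOT NULL UNIQUE        -- e.g. 'email', 'first_name'
);

CREATE TABLE user_profile_values (
    user_id      INT NOT NULL,
    attribute_id INT NOT NULL,
    value        VARCHAR(255),
    PRIMARY KEY (user_id, attribute_id),
    FOREIGN KEY (user_id)      REFERENCES users(id),
    FOREIGN KEY (attribute_id) REFERENCES profile_attributes(id)
);

-- A client "creates a new attribute" by inserting a profile_attributes row;
-- no schema change, at the cost of harder querying and weak typing.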


  • MySQL database structure...

    - by Patrick
I'm creating a members site, and I'm currently working on the user preference settings. Should I create a table with all the preference fields (about 17 fields), or should I include them in the main member table along with the account settings? Is there a limit to how many fields I should have in a table? Currently the member table has about 21 fields... not sure if it's okay to add another 17 fields when I can easily just put them in another table. It'll take more coding to pull up the data, though... any suggestions?
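For reference, a hedged sketch of the separate-table option being weighed (names are placeholders): a 1:1 preferences table keyed by the member's id, pulled in with a single JOIN.

CREATE TABLE member_preferences (
    member_id  INT PRIMARY KEY,              -- same value as members.id (1:1)
    newsletter TINYINT(1) NOT NULL DEFAULT 1,
    -- ...the other ~16 preference fields...
    FOREIGN KEY (member_id) REFERENCES members(id)
);

SELECT m.*, p.*
FROM members m
JOIN member_preferences p ON p.member_id = m.id
WHERE m.id = 42;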


  • Copy one object to another

    - by developer
Hi all, I have 2 tables, user and userprofile. The userprofile table has a lot of fields similar to the user table. What I need to do is, on the click of a button, copy all the fields of the user table to the userprofile table.

UserModel AttData = UserModel[0];
DataServices.Save((UserProfile)AttData.ProfileModel.Instance);

UserModel[0] contains all the data in the user table. I want to copy this data to AttData.ProfileModel.Instance. How can I do that?
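A hedged, generic sketch (DataServices and the model classes are the project's own, so this only shows the copying step): reflection can move same-named readable properties from the user object onto the profile instance.

using System;
using System.Reflection;

public static class ObjectCopier
{
    public static void CopyMatchingProperties(object source, object target)
    {
        foreach (PropertyInfo sp in source.GetType().GetProperties())
        {
            PropertyInfo tp = target.GetType().GetProperty(sp.Name);
            if (tp != null && sp.CanRead && tp.CanWrite &&
                tp.PropertyType.IsAssignableFrom(sp.PropertyType))
            {
                tp.SetValue(target, sp.GetValue(source, null), null);
            }
        }
    }
}

// usage sketch: ObjectCopier.CopyMatchingProperties(UserModel[0], AttData.ProfileModel.Instance);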


  • Java JTable, how to change cell data (write text in)?

    - by Bob Owuor
I'm looking to change a cell's data in a JTable. How can I do this? When I execute the following code I get errors.

JFrame f = new JFrame();
final JTable table = new JTable(10, 5);
TableModelListener tl = new TableModelListener() {
    public void tableChanged(TableModelEvent e) {
        table.setValueAt("hello world", 2, 2);
    }
};
table.getModel().addTableModelListener(tl);
f.add(table);
f.pack();
f.setVisible(true);

I have also tried the line below, but it still doesn't work. What gives?

table.getModel().setValueAt("hello world", 2, 2);
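A hedged reading of why the first version fails: setValueAt inside tableChanged fires another TableModelEvent, which re-enters the listener and recurses until the stack blows. Setting the value once, outside the listener, works with the default model; a minimal sketch:

import javax.swing.JFrame;
import javax.swing.JTable;
import javax.swing.SwingUtilities;

public class TableDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame f = new JFrame();
                JTable table = new JTable(10, 5);  // DefaultTableModel underneath
                table.getModel().setValueAt("hello world", 2, 2);  // row 2, column 2
                f.add(table);
                f.pack();
                f.setVisible(true);
            }
        });
    }
}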


  • Historical Rolling Daily sum

    - by user2980057
I have a table consisting of dates and the amount of revenue recorded for each day, going back about 12 years. What I would like to do with this data is create a new table with dates and prior 7-day revenue numbers. Any guidance would be greatly appreciated. Below is an example of what my source table and my results would need to look like.

Source table:

DATE       | Revenue
12/31/2013 | 200
12/30/2013 | 300
12/29/2013 | 400
12/28/2013 | 100
12/27/2013 | 200
12/26/2013 | 150
12/25/2013 | 350
12/24/2013 | 450
12/23/2013 | 200
12/22/2013 | 300
12/21/2013 | 100
12/20/2013 | 300

Resulting table:

DATE       | 7DayRev
12/31/2013 | 1700
12/30/2013 | 1950
12/29/2013 | 1850
12/28/2013 | 1750
12/27/2013 | 1750
12/26/2013 | 1850
ETC......
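A hedged sketch (assuming SQL Server syntax and a source table revenue(rev_date, revenue), both assumptions): a correlated subquery sums each day together with the six days before it. On engines with window functions, SUM(revenue) OVER (ORDER BY rev_date ROWS 6 PRECEDING) does the same in one pass.

SELECT r.rev_date,
       (SELECT SUM(r7.revenue)
        FROM revenue r7
        WHERE r7.rev_date BETWEEN DATEADD(day, -6, r.rev_date) AND r.rev_date
       ) AS seven_day_rev
FROM revenue r
ORDER BY r.rev_date DESC;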


  • how do I refactor this to make single function calls?

    - by stack.user.1
I've been using this for a while, updating MySQL as needed. However, I'm not too sure of the syntax... and need to migrate the SQL to an array. Particularly the line

database::query("CREATE TABLE $name($query)");

Does this translate to

CREATE TABLE bookmark(name VARCHAR(64), url VARCHAR(256), tag VARCHAR(256), id INT)

This is my... guess. Is this correct?

class table extends database {
    private function create($name, $query) {
        database::query("CREATE TABLE $name($query)");
    }

    public function make($type) {
        switch ($type) {
            case "credentials":
                self::create('credentials', 'id INT NOT NULL AUTO_INCREMENT, flname VARCHAR(60), email VARCHAR(32), pass VARCHAR(40), PRIMARY KEY(id)');
                break;
            case "booomark":
                self::create('boomark', 'name VARCHAR(64), url VARCHAR(256), tag VARCHAR(256), id INT');
                break;
            case "tweet":
                self::create('tweet', 'time INT, fname VARCHAR(32), message VARCHAR(128), email VARCHAR(64)');
                break;
            default:
                throw new Exception('Invalid Table Type');
        }
    }
}
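Yes: PHP interpolates $name and $query inside the double-quoted string, so the call does expand to the CREATE TABLE statement shown (modulo the "boomark"/"booomark" spellings). As for migrating the SQL to an array, a hedged sketch: keep the column lists in one map and let make() do the lookup, so each table is a single entry rather than a switch case.

class table extends database
{
    private static $schemas = array(
        'credentials' => 'id INT NOT NULL AUTO_INCREMENT, flname VARCHAR(60), email VARCHAR(32), pass VARCHAR(40), PRIMARY KEY(id)',
        'bookmark'    => 'name VARCHAR(64), url VARCHAR(256), tag VARCHAR(256), id INT',
        'tweet'       => 'time INT, fname VARCHAR(32), message VARCHAR(128), email VARCHAR(64)',
    );

    public function make($type)
    {
        if (!isset(self::$schemas[$type])) {
            throw new Exception('Invalid Table Type');
        }
        database::query("CREATE TABLE $type(" . self::$schemas[$type] . ")");
    }
}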


  • Select distinct ... inner join vs. select ... where id in (...)

    - by Tonio
I'm trying to create a subset of a table (as a materialized view), defined as those records which have a matching record in another materialized view. For example, let's say I have a Users table with user_id and name columns, and a Log table with entry_id, user_id, activity, and timestamp columns. First I create a materialized view of the Log table, selecting only those rows with timestamp some_date. Now I want a materialized view of the Users referenced in my snapshot of the Log table. I can either create it as

select * from Users where user_id in (select user_id from Log_mview)

or I can do

select distinct u.* from Users u inner join Log_mview l on u.user_id = l.user_id

(I need the distinct to avoid multiple hits from users with multiple log entries). The former seems cleaner and more elegant, but takes much longer. Am I missing something? Is there a better way to do this?
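One hedged middle ground: EXISTS expresses "has a matching row" directly, needs no DISTINCT, and optimizers (Oracle included) typically plan it as a semi-join.

SELECT u.*
FROM Users u
WHERE EXISTS (SELECT 1
              FROM Log_mview l
              WHERE l.user_id = u.user_id);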


  • Why isn't INT more efficient than UNIQUEIDENTIFIER (according to the execution plan)?

    - by ck
I have a parent table and a child table where the columns that join them together are of the UNIQUEIDENTIFIER type. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to be INTs instead, and have rebuilt the indexes so that they are essentially the same structure and can be queried in the same way.

When I query for a known 20 records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. a 50/50 cost split for the batch. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation?

EDIT: The question is not about which is more efficient, but why the query execution plan shows both queries as having the same cost.
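A hedged way to probe the "same cost" puzzle: the percentages in a plan are estimates, so compare actual I/O and elapsed time instead. The table names and key values below are invented placeholders for the two schemas being compared.

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT p.ID, c.ChildID
FROM dbo.ParentGuid p
JOIN dbo.ChildGuid c ON c.ParentID = p.ID
WHERE p.ID = '9FC95275-6A71-4E7F-8C8D-26FD2F91B6F6';  -- hypothetical GUID key

SELECT p.ID, c.ChildID
FROM dbo.ParentInt p
JOIN dbo.ChildInt c ON c.ParentID = p.ID
WHERE p.ID = 42;                                       -- hypothetical int key

-- Compare the 'logical reads' and 'elapsed time' lines in the Messages tab
-- rather than the plan's relative batch cost.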


  • has_many in rails

    - by user363243
I have a 'users' table which stores all of my website's users, some of whom are technicians. The users table has 'first_name' and 'last_name' fields as well as other fields. The other table is called 'service_tickets' and has a foreign key from users called technician_id. This gives me a real problem, because when I am looking at the service_tickets table, the related user is actually a technician. That's why the service_tickets table has a technician_id field and not a user_id field. I am trying to accomplish something like this:

t = service_ticket.find_by_id(7)
t.technician.first_name  # notice how I don't do t.user.first_name

Is this possible in Rails? I can't seem to get it to work... Thank you for your help!
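The usual Rails answer is a belongs_to that points back at the users table under the :technician name; a hedged sketch in Rails 2/3 syntax:

class ServiceTicket < ActiveRecord::Base
  # technician_id on service_tickets references users.id
  belongs_to :technician, :class_name => 'User', :foreign_key => 'technician_id'
end

t = ServiceTicket.find_by_id(7)
t.technician.first_name   # no t.user needed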


  • JavaScript not working in IE but works in Firefox/Chrome

    - by user1290528
So I have the following PHP page with a JavaScript that gets the total of items based on their quantity, then inputs the total into a text box for each item. In IE the text boxes are being filled with $NaN, while in Firefox and Chrome the text boxes are filled with the correct values. Any help would be greatly appreciated.

<?php echo $_SESSION['SESS_MEMBER_ID']; require_once('auth.php'); ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1" http-equiv="Content-Type">
<title>Breakfast Menu</title>
<link href="loginmodule.css" rel="stylesheet" type="text/css">
<script type='text/javascript'>
var totalarray = new Array();
var totalarray2 = new Array();
var runningtotal = 0;
var runningtotal2 = 0;
var discount = .2;
var discounttotal = 0;
var discount1 = 0;
runningtotal = runningtotal * 1;
runningtotal2 = runningtotal2 * 1;

function displayResult(price, init) {
    var newstring = "quantity" + init;
    var totstring = "total" + init;
    var quantity = document.getElementById(newstring).value;
    var quantity = parseFloat(quantity);
    var test = price * quantity;
    var test = test.toFixed(2);
    document.getElementById(newstring).value = quantity;
    document.getElementById(totstring).value = "$" + test;
    totalarray[init] = test;
    getTotal();
}

function getTotal() {
    runningtotal = 0;
    var i = 0;
    for (i = 0; i < totalarray.length; i++) {
        totalarray[i] = totalarray[i] * 1;
        runningtotal = runningtotal + totalarray[i];
        discounttotal = totalarray[i] * discount;
        discounttotal = totalarray[i] - discounttotal;
        // This line is where IE shows its first error:
        document.getElementById('totalcost').value = "$" + runningtotal.toFixed(2);
    }
    var orderpart1 = document.getElementById('totalcost').value;
    var orderpart1 = orderpart1.substr(1);
    var orderpart1 = orderpart1 * 1;
    var orderpart2 = document.getElementById('totalcost2').value;
    var orderpart2 = orderpart2.substr(1);
    var orderpart2 = orderpart2 * 1;
    var ordertot = orderpart1 + orderpart2;
    document.getElementById('ordertotal').value = "$" + ordertot.toFixed(2)
}

function displayResult2(price2, init2) {
    var newstring2 = "quantity2" + init2;
    var totstring2 = "total2" + init2;
    var quantity2 = document.getElementById(newstring2).value;
    var quantity2 = parseFloat(quantity2);
    var test2 = price2 * quantity2;
    var test2 = test2.toFixed(2);
    document.getElementById(newstring2).value = quantity2;
    document.getElementById(totstring2).value = "$" + test2;
    totalarray2[init2] = test2;
    getTotal2();
}

function getTotal2() {
    runningtotal2 = 0;
    var i = 0;
    for (i = 0; i < totalarray2.length; i++) {
        totalarray2[i] = totalarray2[i] * 1;
        runningtotal2 = runningtotal2 + totalarray2[i];
        // This is where IE shows its second error:
        document.getElementById('totalcost2').value = "$" + runningtotal2.toFixed(2);
    } // IE shows the second error here
    var orderpart1 = document.getElementById('totalcost').value;
    var orderpart1 = orderpart1.substr(1);
    var orderpart1 = orderpart1 * 1;
    var orderpart2 = document.getElementById('totalcost2').value;
    var orderpart2 = orderpart2.substr(1);
    var orderpart2 = orderpart2 * 1;
    var ordertot = orderpart1 + orderpart2;
    document.getElementById('ordertotal').value = "$" + ordertot.toFixed(2);
}
</script>
</head>
<body>
<?php include("newnew.php"); ?>
<td style="vertical-align: top; width: 80%; height:80%;"><br>
<div style="text-align: center;">
<form action="testplaceorder.php" method="post" onSubmit="return confirm('Are you sure?');">
<h4>Employee Breakfast Order Form</h4>
<h1 align="left">Breakfest Foods</h1>
<table border='0' cellpadding='0' cellspacing='0'>
<tr>
<td>
<table width="100%" border="1">
<tr>
<th>Item&nbsp&nbsp&nbsp&nbsp&nbsp</th>
<th>Price&nbsp&nbsp&nbsp&nbsp&nbsp </th>
<th>Quantity&nbsp&nbsp&nbsp&nbsp&nbsp</th>
<th>Total&nbsp&nbsp&nbsp&nbsp&nbsp</th>
</tr>
<?php
mysql_connect("localhost", "seniorproject", "farmingdale123") or die(mysql_error());
mysql_select_db("fsenior") or die(mysql_error());
$result = mysql_query("SELECT name, price,foodid FROM Food where foodtype='br'") or die(mysql_error());
$init = 0;
while(list($name, $price, $brId) = mysql_fetch_row($result)) {
    echo "<tr> <td>$name</td> <td>\$$price</td> <td><select name='quantity$init' id='quantity$init' onchange='displayResult($price,$init)'><option>0</option><option>1</option><option>2</option><option>3</option><option>4</option><option>5</option><option>6</option><option>7</option><option>8</option><option>9</option></td> <td><input name='total$init' type='text' id='total$init' readonly='readonly' value='\$0.00'></td> </tr>";
    echo "<script type='text/javascript'>displayResult($price,$init);</script>";
    $foodname = "'SESS_FOODNAME_" . $init . "'";
    $foodid = "'SESS_FOODID_" . $init."'";
    $_SESSION[$foodname] = $name;
    $_SESSION[$foodid] = $brId;
    $init = $init+1;
}
$_SESSION['SESS_INIT'] = $init;
?>
<tr> <td></td> <td></td> <td>Total Cost</td> <td><input name='totalcost' type='text' id='totalcost' readonly='readonly' value='$0.00'></td> </tr>
<tr><td></td><td></td><td>Discount</td><td><input name='discountvalue1' id ='discountvalue1' type='text' readonly='readonly' value='20%'></td> </tr>
<tr><td></td><td></td><td>Total After Discount</td><td><input name='discounttotal1' id ='discounttotal1' type='text' readonly='readonly' value='$0.00'></td></tr>
</table>
<tr> <td><br></td> </tr>
</table>
<h1 align="left">Breakfest Drinks</h1>
<table border='0' cellpadding='0' cellspacing='0'>
<tr>
<td>
<table width="100%" border="1">
<tr>
<th>Item&nbsp&nbsp&nbsp&nbsp&nbsp</th>
<th>Price&nbsp&nbsp&nbsp&nbsp&nbsp </th>
<th>Quantity&nbsp&nbsp&nbsp&nbsp&nbsp</th>
<th>Total&nbsp&nbsp&nbsp&nbsp&nbsp</th>
</tr>
<?php
mysql_connect("localhost", "****", "***") or die(mysql_error());
mysql_select_db("fsenior") or die(mysql_error());
$result2 = mysql_query("SELECT drinkname, price,drinkid FROM Drinks where drinktype='br'") or die(mysql_error());
$init2 = 0;
while(list($name2, $price2, $brId2) = mysql_fetch_row($result2)) {
    echo "<tr> <td>$name2</td> <td>\$$price2</td> <td><select name='quantity2$init2' id='quantity2$init2' onchange='displayResult2($price2,$init2)'><option>0</option><option>1</option><option>2</option><option>3</option><option>4</option><option>5</option><option>6</option><option>7</option><option>8</option><option>9</option></td> <td><input name='total2$init2' type='text' id='total2$init2' readonly='readonly' value='\$0.00'></td> </tr>";
    echo "<script type='text/javascript'>displayResult2($price2,$init2);</script>";
    $drinkname = "'SESS_DRINKNAME_" . $init2 . "'";
    $drinkid = "'SESS_DRINKID_" . $init2."'";
    $_SESSION[$drinkname] = $name2;
    $_SESSION[$drinkid] = $brId2;
    $init2 = $init2+1;
}
$_SESSION['SESS_INIT2'] = $init2;
?>
<tr> <td></td> <td></td> <td>Total Cost</td> <td><input name='totalcost2' type='text' id='totalcost2' readonly='readonly' value='$0.00'></td> </tr>
</table>
<tr> <td><br></td> </tr>
</table>
<table border="2">
<tr><td>Total Order Cost:</td><td>
<?php echo "<input name='ordertotal' type='text' id='ordertotal' readonly='readonly' value='\$0.00'></td></table>"; ?>
<p align="left"><input type='submit' name='submit' value='Submit'/></p>
</form>
</div></td>
</tr>
</tbody>
</table></td>
</tr>
</tbody>
</table>
</body>
</html>
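A hedged guess at the IE failure (not confirmed in the thread): the <option> tags carry no value attribute, and IE 7/8 return an empty string for select.value in that case, so parseFloat('') yields NaN and every total becomes $NaN. Either emit value='0' through value='9' on the options, or read the selected option's text as a fallback:

function selectedQuantity(id) {
    var sel = document.getElementById(id);
    var opt = sel.options[sel.selectedIndex];
    // older IE: opt.value is "" when the attribute is missing; fall back to the text
    return parseFloat(opt.value !== "" ? opt.value : opt.text);
}
// then, in displayResult: var quantity = selectedQuantity(newstring);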


  • Subterranean IL: Pseudo custom attributes

    - by Simon Cooper
Custom attributes were designed to make the .NET framework extensible; if a .NET language needs to store additional metadata on an item that isn't expressible in IL, then an attribute could be applied to the IL item to represent this metadata. For instance, the C# compiler uses DecimalConstantAttribute and DateTimeConstantAttribute to represent compile-time decimal or datetime constants, which aren't allowed in pure IL, and FixedBufferAttribute to represent fixed struct fields.

How attributes are compiled

Within a .NET assembly are a series of tables containing all the metadata for items within the assembly; for instance, the TypeDef table stores metadata on all the types in the assembly, and MethodDef does the same for all the methods and constructors. Custom attribute information is stored in the CustomAttribute table, which has references to the IL item the attribute is applied to, the constructor used (which implies the type of attribute applied), and a binary blob representing the arguments and name/value pairs used in the attribute application. For example, the following C# class:

[Obsolete("Please use MyClass2", true)]
public class MyClass {
    // ...
}

corresponds to the following IL class definition:

.class public MyClass {
    .custom instance void [mscorlib]System.ObsoleteAttribute::.ctor(string, bool)
        = { string('Please use MyClass2') bool(true) }
    // ...
}

and results in the following entry in the CustomAttribute table:

TypeDef(MyClass)
MemberRef(ObsoleteAttribute::.ctor(string, bool))
blob -> { string('Please use MyClass2') bool(true) }

However, there are some attributes that don't compile in this way.

Pseudo custom attributes

Just like there are some concepts in a language that can't be represented in IL, there are some concepts in IL that can't be represented in a language. This is where pseudo custom attributes come into play. The most obvious of these is SerializableAttribute. Although it looks like an attribute, it doesn't compile to a CustomAttribute table entry; it instead sets the serializable bit directly within the TypeDef entry for the type. This flag is fully expressible within IL; this C#:

[Serializable]
public class MySerializableClass {}

compiles to this IL:

.class public serializable MySerializableClass {}

For those interested, a full list of pseudo custom attributes is available here. For the rest of this post, I'll be concentrating on the ones that deal with P/Invoke.

P/Invoke attributes

P/Invoke is built right into the CLR at quite a deep level; there are 2 metadata tables within an assembly dedicated solely to p/invoke interop, and many more that affect it. Furthermore, all the attributes used to specify p/invoke methods in C# or VB have their own keywords and syntax within IL. For example, the following C# method declaration:

[DllImport("mscorsn.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.U1)]
private static extern bool StrongNameSignatureVerificationEx(
    [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath,
    [MarshalAs(UnmanagedType.U1)] bool fForceVerification,
    [MarshalAs(UnmanagedType.U1)] ref bool pfWasVerified);

compiles to the following IL definition:

.method private static pinvokeimpl("mscorsn.dll" lasterr winapi)
    bool marshal(unsigned int8)
    StrongNameSignatureVerificationEx(
        string marshal(lpwstr) wszFilePath,
        bool marshal(unsigned int8) fForceVerification,
        bool& marshal(unsigned int8) pfWasVerified)
    cil managed preservesig {}

As you can see, all the p/invoke and marshal properties are specified directly in IL, rather than using attributes. And, rather than creating entries in CustomAttribute, a whole bunch of metadata is emitted to represent this information. This single method declaration results in the following metadata being output to the assembly:

- A MethodDef entry containing basic information on the method
- Four ParamDef entries for the 3 method parameters and return type
- An entry in ModuleRef to mscorsn.dll
- An entry in ImplMap linking ModuleRef and MethodDef, along with the name of the function to import and the pinvoke options (lasterr winapi)
- Four FieldMarshal entries containing the marshal information for each parameter.

Phew!

Applying attributes

Most of the time, when you apply an attribute to an element, an entry in the CustomAttribute table will be created to represent that application. However, some attributes represent concepts in IL that aren't expressible in the language you're coding in, and can instead result in a single bit change (SerializableAttribute and NonSerializedAttribute), or many extra metadata table entries (the p/invoke attributes) being emitted to the output assembly.
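To round out the serialization example, a hedged sketch following the article's pattern (not taken from it): NonSerializedAttribute likewise becomes the notserialized flag on the field's FieldDef entry rather than a CustomAttribute row.

.class public serializable MySerializableClass {
    // [NonSerialized] on a C# field compiles to the notserialized flag
    .field private notserialized int32 cachedValue
}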


  • Dynamic Class Inheritance For PHP

    - by VirtuosiMedia
I have a situation where I think I might need dynamic class inheritance in PHP 5.3, but the idea doesn't sit well and I'm looking for a different design pattern to solve my problem if it's possible.

Use Case

I have a set of DB abstraction layer classes that dynamically compile SQL queries, with one DAL class for each DB type (MySQL, MsSQL, Oracle, etc.). Each table in the database has its own class that extends the appropriate DAL class. The idea is that you interact with the table classes, but never directly use the DAL class. If you want to support a different DB type for your app, you don't need to rewrite any queries or even any code; you simply change a setting that swaps one DAL class out for another... and that's it. To give you a better idea of how this is used, you can take a look at the DAL class, the table classes, and how they are used on this StackExchange Code Review page.

Issues

The strategy that I had used previously was to have all of the DAL classes share the same class name. This eliminated autoloading, so I had to manually load the appropriate DAL class in a switch statement. However, this approach presents some problems for testing and documentation purposes, so I'd like to find a different way to solve the problem of loading the correct DAL class more elegantly.

Update to clarify the issue

The problem basically boils down to inconsistencies in the class name (pre-PHP 5.3) or class namespace (PHP 5.3) and its location in the directory structure. At this point, all of my DAL classes have the same name, DBObject, but reside in different folders: MySQL, Oracle, etc. My table classes all extend DBObject, but which DBObject they extend varies depending on which one has been loaded. Basically, I'm trying to have my cake and eat it too. The table classes act as a stable API and extend a dynamic backend, the DAL (DBObject) classes. It works great, but I outsmarted myself: because of the inconsistencies with the class names and their locations, I can't autoload the DBObject, which makes running unit tests and generating API docs impossible for the DBObject classes, because the tests and docs rely on auto-loading. Just loading the appropriate DBObject into memory using a factory method won't work, because there will be times when I need to load multiple DBObjects for testing. Because the classes currently share a name, this causes a "class is already defined" error. I can make exceptions for the DBObjects in my test code, obviously, but I'm looking for something a little less hacky, as there may be future instances where something similar would need to be done.

Solutions?

Worst case scenario, I can continue my current strategy, but I don't like it very much, especially as I'll soon be converting my code to PHP 5.3. I suspect that I can use some sort of dynamic inheritance via either namespaces (preferred) or a dynamic class extension, but I haven't been able to find good examples of this implemented in the wild. In your answers, please suggest either an alternate pattern that would work for this use case or an example of dynamic inheritance done right. Please assume PHP 5.3 with namespaced code. Any code examples are greatly encouraged.

The preferred constraints for the solution are:

- The DAL class can be autoloaded.
- The DAL classes don't share the exact same namespace, but share the same class name. As an example, I would prefer to use classes named DbObject that use namespaces like Vm\Db\MySql and Vm\Db\Oracle.
- Table classes don't have to be rewritten with a change in DB type.
- The appropriate DB type is determined via a single setting only. That setting is the only thing that should need to change to interchange DB types.
- Ideally, the setting check should occur only once per page load, but I'm flexible on that.
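One hedged sketch of the namespace-based route, using PHP 5.3's class_alias (not necessarily the asker's eventual choice): alias the configured driver's DbObject to a neutral name once at bootstrap. Table classes extend the alias, autoloading still sees each concrete class in its own namespace, and the driver remains a single setting.

<?php
// bootstrap.php -- $driver is the single setting
$driver = 'MySql';   // or 'Oracle', ...
class_alias("Vm\\Db\\{$driver}\\DbObject", 'Vm\\Db\\DbObject');

// UsersTable.php -- written once, never touched on a driver change
namespace Vm\Db\Tables;

class UsersTable extends \Vm\Db\DbObject
{
    // table-specific API...
}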


  • SQL SERVER – Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5

    - by pinaldave
In August 2011 we ran a contest where every day we gave away one book, for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received lots of questions about whether we are doing something similar this month. Absolutely; instead of running a month-long contest, we are doing something more interesting. We are giving away a USD 198 gift every day this week: the Joes 2 Pros 5-volume (book) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, a total of 2 giveaways (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

How to Win:

- Read the question
- Read the hints
- Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India residents only)

2 winners will be randomly selected and announced on August 20th.

Question of the Day:

Which of the following keywords will force the query to use indexes created on views?

a) ENCRYPTION
b) SCHEMABINDING
c) NOEXPAND
d) CHECK OPTION

Query Hints: BIG HINT POST

Usually, the assumption is that a query on the table will use an index on the table and a query on the view will use the index created on the view. However, that is a misconception. It does not happen this way. In fact, if you notice the image, you will find that both of them (table and view) use the index created on the table. The index created on the view is not used. The reason for this is listed in BOL: the cost of using the indexed view may exceed the cost of getting the data from the base tables, or the query is so simple that a query against the base tables is fast and easy to find. This often happens when the indexed view is defined on small tables. You can use the NOEXPAND hint if you want to force the query processor to use the indexed view. This may require you to rewrite your query if you don't initially reference the view explicitly. You can get the actual cost of the query with NOEXPAND and compare it to the actual cost of the query plan that doesn't reference the view. If they are close, this may give you the confidence that the decision of whether or not to use the indexed view doesn't matter.

Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 4.

- SQL Joes 2 Pros Development Series – Structured Error Handling
- SQL Joes 2 Pros Development Series – SQL Server Error Messages
- SQL Joes 2 Pros Development Series – Table-Valued Functions
- SQL Joes 2 Pros Development Series – Table-Valued Store Procedure Parameters
- SQL Joes 2 Pros Development Series – Easy Introduction to CHECK Options
- SQL Joes 2 Pros Development Series – Introduction to Views
- SQL Joes 2 Pros Development Series – All about SQL Constraints

Next Step: Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India)

Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked any particular blog post, and I will announce the winner along with the main contest.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
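For context, a hedged illustration of the hint the post's hints describe (the view name is invented):

SELECT v.CustomerID, v.TotalDue
FROM dbo.vCustomerTotals AS v WITH (NOEXPAND);  -- forces the indexed view's index to be used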


  • How do I fix a corrupted harddrive after failed upgrade?

    - by Nil
The problem originated when I was trying to fix this problem. Things went horribly, horribly wrong and I ended up with a new problem altogether. The last thing I did was run sudo apt-get install, and that caused my system to freeze. I restarted my computer and it would not boot from the hard drive. I ran a copy of Ubuntu 12.10 from a flash drive that I had and ran gparted to see if my partitions were all there. It returned this message:

Invalid partition table on /dev/sda -- wrong signature 5208.

The drive appeared as a 2TiB unallocated drive with an error. The drive had 4 partitions before (plus random unallocated space): a fat32 partition; an ext4 partition, which contained Ubuntu 13.04/13.10 (I don't even know which one at this point); an extended partition, which contained a swap partition for my Ubuntu partition (I was meaning to move that Ubuntu partition into the extended partition, never got around to it); and another partition (I don't remember how I formatted it). I should also mention this is a 1TB hard drive.

So in short, I have a corrupted partition table on the primary hard drive I boot from. How can I fix this?

I tried mounting the drive with sudo mount /dev/sda1 /media/ubuntu, then I changed my directory to said folder and tried to list files, and this monstrosity happened:

$ ls
ls: cannot access ??w?j^?.: Input/output error
ls: cannot access ??(? ?x?.|: Input/output error
ls: cannot access 6W_@?)?._??: Input/output error
ls: cannot access HB0v???.A}?: Input/output error
ls: cannot access ???.?X: Input/output error
ls: cannot access t)?.+?l: Input/output error
ls: cannot access ?h@ ?.@ : Input/output error
ls: cannot access >? @??.???: Input/output error
ls: cannot access m???.??: Input/output error
ls: cannot access @ if??a?: Input/output error
ls: cannot access ?M!vN$?.??n: Input/output error
ls: cannot access ?o? ??.Bm`: Input/output error
ls: cannot access ?:I??? M. : Input/output error
ls: cannot access W??.??: Input/output error
ls: cannot access ?: Input/output error
ls: cannot access ?W?s??: Input/output error
ls: cannot access ?v?k?.???: Input/output error
ls: cannot access 5?$<N??: Input/output error
.x????.??i: Input/output error
ls: cannot access je????.j?1: Input/output error
XjD?.???: Input/output error
ls: cannot access W??n???.?: Input/output error
ls: cannot access ?^x.$"?: Input/output error
ls: cannot access !??*!??j.??: Input/output error
ls: cannot access '-??k?^?.???: Input/output error
ls: cannot access b?w?w?b.\??: Input/output error
ls: cannot access o????"z.??B: Input/output error
ls: cannot access ??b?h.?3-: Input/output error
ls: cannot access ??.$7: Input/output error
ls: cannot access )??K.bk: Input/output error
ls: cannot access s??z?.?(?: Input/output error
ls: cannot access ?F@?0?.@?: Input/output error
.?D: Input/output error
.??: Input/output error
ls: cannot access?????. @: Input/output error
ls: cannot access ?/?? ?.??: No such file or directory
ls: cannot access rk?p4q(?.?k: Input/output error

This looks promising. This is the output of fdisk -l:

$ sudo fdisk -l /dev/sda
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: invalid flag 0x5208 of partition table 5 will be corrected by w(rite)

Disk /dev/sda: 2199.0 GB, 2199023132672 bytes
255 heads, 63 sectors/track, 267349 cylinders, total 4294967056 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x44fdfe06

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1       113305600   894715903   390705152    c  W95 FAT32 (LBA)
/dev/sda2       894715904  1489307647   297295872   83  Linux
/dev/sda3      1489309694  1497307135     3998721    5  Extended
/dev/sda4      1497309184  1953523711   228107264    7  HPFS/NTFS/exFAT
/dev/sda5 ?    3013257822  3688738171   337740175   aa  Unknown
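For readers in the same spot, a hedged sketch of the usual recovery route (not advice from an accepted answer): image the disk first, then let TestDisk search for the old partition boundaries and rewrite a sane table.

# back up the raw disk before touching the partition table
sudo ddrescue /dev/sda /media/backup/sda.img /media/backup/sda.log

# scan for lost partitions and (optionally) write a repaired table
sudo apt-get install testdisk
sudo testdisk /dev/sda      # Analyse -> Quick Search / Deeper Search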


  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics.  We've been running sp_updatestats but apparently it wasn't sampling enough of the table to get us what we needed.  I have a pretty limited window at night where I can hammer the disks while this runs.  The script below just calls UPDATE STATISTICS on all tables that "need" updating.  It defines need as any table whose statistics are older than the number of days you specify (30 by default).  It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes).  That means it won't start processing a new table after this time, but it might take longer than this to finish what it's doing.  It always processes the oldest statistics first, so it will eventually get to all of them.  It defaults to sampling 25% of the table.  I'm not sure that's a good default, but it works for now.  I've tested this in SQL Server 2005 and SQL Server 2008.  I liked the way Michelle parameterized her re-index script and I took the same approach.

CREATE PROCEDURE dbo.UpdateStatistics (
    @timeLimit smallint = 60
    ,@debug bit = 0
    ,@executeSQL bit = 1
    ,@samplePercent tinyint = 25
    ,@printSQL bit = 1
    ,@minDays tinyint = 30
)
AS
/*******************************************************************
 Copyright Bill Graziano 2010
*******************************************************************/
SET NOCOUNT ON;
PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'

IF OBJECT_ID('tempdb..#status') IS NOT NULL
    DROP TABLE #status;

CREATE TABLE #status (
    databaseID INT
    , databaseName NVARCHAR(128)
    , objectID INT
    , page_count INT
    , schemaName NVARCHAR(128) NULL
    , objectName NVARCHAR(128) NULL
    , lastUpdateDate DATETIME
    , scanDate DATETIME
    CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED (databaseID, objectID)
);

DECLARE @SQL NVARCHAR(MAX);
DECLARE @dbName nvarchar(128);
DECLARE @databaseID INT;
DECLARE @objectID INT;
DECLARE @schemaName NVARCHAR(128);
DECLARE @objectName NVARCHAR(128);
DECLARE @lastUpdateDate DATETIME;
DECLARE @startTime DATETIME;
SELECT @startTime = GETDATE();

DECLARE cDB CURSOR READ_ONLY FOR
    select [name] from master.sys.databases where database_id > 4

OPEN cDB
FETCH NEXT FROM cDB INTO @dbName
WHILE (@@fetch_status <> -1)
BEGIN
    IF (@@fetch_status <> -2)
    BEGIN
        SELECT @SQL = '
            use ' + QUOTENAME(@dbName) + '
            select DB_ID() as databaseID
                , DB_NAME() as databaseName
                , t.object_id
                , sum(used_page_count) as page_count
                , s.[name] as schemaName
                , t.[name] AS objectName
                , COALESCE(d.stats_date, ''1900-01-01'')
                , GETDATE() as scanDate
            from sys.dm_db_partition_stats ps
            join sys.tables t on t.object_id = ps.object_id
            join sys.schemas s on s.schema_id = t.schema_id
            join (
                SELECT object_id, MIN(stats_date) as stats_date
                FROM (
                    select object_id, stats_date(object_id, stats_id) as stats_date
                    from sys.stats) as d
                GROUP BY object_id
            ) as d ON d.object_id = t.object_id
            where ps.row_count > 0
            group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'')
        '
        SET ANSI_WARNINGS OFF;
        Insert #status
        EXEC (@SQL);
        SET ANSI_WARNINGS ON;
    END
    FETCH NEXT FROM cDB INTO @dbName
END
CLOSE cDB
DEALLOCATE cDB

DECLARE cStats CURSOR READ_ONLY FOR
    SELECT databaseID
        , databaseName
        , objectID
        , schemaName
        , objectName
        , lastUpdateDate
    FROM #status
    WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
    ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC

OPEN cStats
FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
WHILE (@@fetch_status <> -1)
BEGIN
    IF (@@fetch_status <> -2)
    BEGIN
        IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
        BEGIN
            PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***';
            GOTO __DONE;
        END
        SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName)
            + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;';
        IF @printSQL = 1
            PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'
        IF @executeSQL = 1
        BEGIN
            EXEC (@SQL);
        END
    END
    FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
END
__DONE:
CLOSE cStats
DEALLOCATE cStats
PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
GO
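A hedged usage sketch (the parameter names come straight from the procedure above): preview the commands first, then run with a tighter window and heavier sampling.

-- dry run: print the UPDATE STATISTICS commands without executing them
EXEC dbo.UpdateStatistics @executeSQL = 0, @printSQL = 1;

-- real run: 45-minute window, 50 percent sample, tables stale 14+ days
EXEC dbo.UpdateStatistics @timeLimit = 45, @samplePercent = 50, @minDays = 14;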


  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
First of all – if you are going to say this is a very old subject, I agree, this is a very (very) old subject. I believe in earlier times this used to be our only option to monitor log space. As new versions of SQL Server were released, we were all equipped with DMVs, performance counters, Extended Events and many more new enhancements. However, during all these years, I have always used DBCC SQLPERF(logspace) to get the details of the logs. It may be because when I started my career I remembered this command and it did what I wanted all the time. Recently I received an interesting question and I thought I should request your help. However, before I request your help, let us see the traditional usage of DBCC SQLPERF(logspace).

Every time I have to get the details of the log, I run the following script. Additionally, I like to store the details of when the log file snapshot was taken as well, so I can go back and know the status of log file growth. This gives me a fair estimation of when the log file was growing.

CREATE TABLE dbo.logSpaceUsage (
    id INT IDENTITY (1,1),
    logDate DATETIME DEFAULT GETDATE(),
    databaseName SYSNAME,
    logSize DECIMAL(18,5),
    logSpaceUsed DECIMAL(18,5),
    [status] INT
)
GO
INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status])
EXEC ('DBCC SQLPERF(logspace)')
GO
SELECT * FROM dbo.logSpaceUsage
GO

I used to record the details of log file growth every hour of the day, and then we used to plot charts using Reporting Services (and Excel in much earlier times). Well, if you look at the script above, it is a very simple script. Now here is the puzzle for you.

Puzzle 1: Write a script based on the table which gives you the time period with the highest growth, based on the data stored in the table.

Puzzle 2: Write a script based on the table which gives you the amount of log file growth from the beginning of the table to the latest recording of the data.

You may have to run the above script at some interval to get various data samples of the log file to answer the above puzzles. To make things simple, I am giving you a sample script with expected answers listed below for both of the puzzles. Here is the sample query for the puzzle:

-- This is sample query for puzzle
CREATE TABLE dbo.logSpaceUsage (
    id INT IDENTITY (1,1),
    logDate DATETIME DEFAULT GETDATE(),
    databaseName SYSNAME,
    logSize DECIMAL(18,5),
    logSpaceUsed DECIMAL(18,5),
    [status] INT
)
GO
INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status])
SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0
UNION ALL
SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0
UNION ALL
SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0
UNION ALL
SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0
UNION ALL
SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0
UNION ALL
SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0
UNION ALL
SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0
GO

Expected Result of Puzzle 1: You will notice that there are two entries for database SampleDB3, as there were two instances of the log file growing by the same value.

Expected Result of Puzzle 2: Well, please leave a comment with a valid answer and I will post valid answers with due credit next week. Not to mention that winners will get a surprise gift from me.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: DBCC
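No spoilers intended for the contest, but as a hedged sketch of one shape an answer to Puzzle 1 could take (pair each snapshot with the previous one per database, then rank the jumps; this is a reader's sketch, not the official answer):

SELECT cur.databaseName,
       prev.logDate AS growthStart,
       cur.logDate  AS growthEnd,
       cur.logSize - prev.logSize AS growth
FROM dbo.logSpaceUsage cur
JOIN dbo.logSpaceUsage prev
  ON prev.databaseName = cur.databaseName
 AND prev.id = (SELECT MAX(p.id) FROM dbo.logSpaceUsage p
                WHERE p.databaseName = cur.databaseName AND p.id < cur.id)
ORDER BY growth DESC;   -- top row(s) = period(s) of highest growth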


  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
I am using the CodeIgniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), then does some processing, and then stores the results again in the database. The complete aforesaid process can be considered a single task. A new task can be assigned while another task is running; the newly assigned task will be added to a queue. So if I can track the progress of the controller, I can show the status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for tasks running, and "Done" for tasks that are completed.

Main Issue: The first thing I need to find is an algorithm to track how much of the controller method has completed execution. For instance, this PHP script tracks the progress of an array being counted. There, the current state and the state after total execution are known, so it is possible to track progress. But I am not able to devise anything analogous in my case. Maybe what I am trying to achieve is programmatically not possible. If it's not possible, then suggest a workaround or a completely new approach. If some details are missing, you can mention them. Sorry for my ignorance; this is my first post here. I welcome you to point out my mistakes.

EDIT:

Database outline: The URL(s) and keyword(s) are first entered by the user and stored in database tables called link_master and keyword_master respectively. Then keywords are extracted from all the links present in this table, compared with the keywords entered by the user, and their frequency is calculated, which is the final result. The results are stored in another table called link_result. Now sub-links are extracted from the domain links and stored in a table called sub_link_master. Again the keywords are extracted from these sub-links, and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can differ. Only the cardinality of the link_result table can be known in advance, which will be equal to the number of keyword(s) multiplied by the number of URL(s). I insert multiple records at a time using this resource.

Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function which I used for extracting keywords, generating results, and extracting valid sub-links. There is an insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records; the more records there are, the more time it takes to execute. Consider this scenario:

Number of Domain Links = 1
Number of Keywords = 3
Number of Domain Link Results generated = 3 (3 x 1, as described in the question)
Number of Sub Links generated = 41
Number of Sub Link Results = 117 (41 x 3 = 123, but some links are not valid or searchable)
Approximate time taken for the above process to complete = 55 seconds

The above result is for a single link. I want to track the progress of the above results getting stored in the database. When all results are stored, the task is complete; while results are still getting stored, the task is In Progress. I am not clear how I can track this progress.
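A hedged sketch of one workable scheme (the tasks table and the task_id columns are additions, not part of the schema above): once crawlLink has found the sub-links, the expected row count is fixed (links x keywords), so progress is simply rows-written over rows-expected, polled from the database.

// assumed bookkeeping table: tasks(id, expected_results, status)
public function progress($taskId)
{
    $task = $this->db->get_where('tasks', array('id' => $taskId))->row();
    if (!$task || $task->status === 'pending') {
        return 0;
    }

    // count results written so far (assumes a task_id column on both result tables)
    $this->db->where('task_id', $taskId);
    $done = $this->db->count_all_results('link_result');
    $this->db->where('task_id', $taskId);
    $done += $this->db->count_all_results('sub_link_result');

    return min(100, (int) floor(100 * $done / max(1, $task->expected_results)));
}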


  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and I've spent the last 10 years working on the Oracle GoldenGate product. So most of my posting will probably be focused on OGG.

One question that comes up all the time is around active-active replication with Oracle GoldenGate. How do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog, but here is a simple architecture:

The two most common resolution routines are host based resolution and timestamp based resolution.

Host based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host based resolution will work just fine.

Timestamp based resolution, on the other hand, is a little trickier. In this case, you can decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record? Or vice versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated on the table. Most homegrown applications can always be customized to include these requirements, but it's a little more difficult with 3rd party applications, and might even be impossible for large ERP type applications.

If your database has these features (primary keys for host based resolution, or primary keys and timestamp columns for timestamp based resolution), then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user that had their data overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exception table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.

Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that can reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and foolproof. And I'll cover these in my next blog.
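As a concrete taste of the timestamp method, here is a hedged sketch of what it can look like in a GoldenGate Replicat parameter file (the table and column names are invented; the post itself doesn't show syntax):

MAP app.orders, TARGET app.orders,
  COMPARECOLS (ON UPDATE ALL),
  RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_updated)));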

