Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.


  • PHP PDO fetch null

    - by Jacob
    How do you check whether a column's value is NULL? Example code:

        $db = DBCxn::getCxn();
        $sql = "SELECT exercise_id, author_id, submission, result, submission_time,
                       total_rating_votes, total_rating_values
                FROM submissions
                LEFT OUTER JOIN submission_ratings ON submissions.exercise_id = submission_ratings.exercise_id
                WHERE id = :id";
        $st = $db->prepare($sql);
        $st->bindParam(":id", $this->id, PDO::PARAM_INT);
        $st->execute();
        $row = $st->fetch();
        if ($this->total_rating_votes == null) // this doesn't seem to work even though there is no record in submission_ratings
        {
            ...
        }
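
    A sketch of how the NULL produced by the unmatched LEFT OUTER JOIN side can be handled in the query itself; the table and column names come from the question, and the COALESCE default of 0 is an assumption:

        SELECT exercise_id,
               author_id,
               submission,
               result,
               submission_time,
               -- COALESCE replaces the NULL from an unmatched join with 0, so the caller
               -- can test for 0 instead of distinguishing NULL from a missing row
               COALESCE(total_rating_votes, 0)  AS total_rating_votes,
               COALESCE(total_rating_values, 0) AS total_rating_values
        FROM submissions
        LEFT OUTER JOIN submission_ratings ON submissions.exercise_id = submission_ratings.exercise_id
        WHERE id = :id;

    On the PHP side, the fetched value can also be tested directly with $row['total_rating_votes'] === null; note that the code above tests a property of $this, which fetch() does not populate.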

    Read the article

  • Oracle (PL/SQL): Is UPDATE RETURNING concurrent?

    - by Jaap
    I'm using a table with a counter to ensure unique IDs on a child element. I know it is usually better to use a sequence, but I can't use one because I have a lot of counters (a customer can create a couple of buckets and each of them needs its own counter; they have to start at 1 because my customer requires "human readable" keys). I'm creating records (let's call them items) whose primary key is (bucket_id, num = counter). I need to guarantee that the bucket_id / num combination is unique, so using a sequence as the primary key won't fix my problem. The creation of rows doesn't happen in PL/SQL, so I need to claim the number up front (by the way: gaps are not against the requirements). My solution was:

        UPDATE bucket
        SET counter = counter + 1
        WHERE id = param_id
        RETURNING counter INTO num_forprikey;

    PL/SQL returns num_forprikey so the item record can be created. Question: will I always get a unique num_forprikey even if users concurrently ask for new items in a bucket?
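
    A minimal PL/SQL sketch of the pattern, assuming a BUCKET table with ID and COUNTER columns (names adapted from the question). The UPDATE takes a row lock on the bucket's row, so a concurrent session running the same statement blocks until the first transaction commits and then performs its own increment:

        DECLARE
          param_id      bucket.id%TYPE := 42;      -- the bucket being claimed from (42 is just an example)
          num_forprikey bucket.counter%TYPE;
        BEGIN
          -- The row lock taken here serializes concurrent callers on the same bucket,
          -- so each caller gets its own distinct counter value back in RETURNING.
          UPDATE bucket
             SET counter = counter + 1
           WHERE id = param_id
          RETURNING counter INTO num_forprikey;

          -- num_forprikey can now be used as the NUM part of the (bucket_id, num) primary key.
          COMMIT;   -- the row lock is released on commit (or rollback)
        END;
        /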

    Read the article

  • Best data store for billions of rows

    - by Jody Powlette
    I need to be able to store small bits of data (approximately 50-75 bytes) for billions of records (~3 billion/month for a year). The only requirements are fast inserts, fast lookups of all records with the same GUID, and the ability to access the data store from .NET. I'm a SQL Server guy and I think SQL Server can do this, but with all the talk about BigTable, CouchDB, and other NoSQL solutions, it's sounding more and more like an alternative to a traditional RDBMS may be best, due to optimizations for distributed queries and scaling. I tried Cassandra, but the .NET libraries don't currently compile or are all subject to change (along with Cassandra itself). I've looked into many of the NoSQL data stores available, but can't find one that meets my needs as a robust, production-ready platform. If you had to store 36 billion small, flat records so that they're accessible from .NET, what would you choose and why?
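
    If it stays in SQL Server, a sketch of a table laid out for this access pattern (all names are illustrative, not from the question): a clustered index leading on the GUID keeps all rows for one GUID physically together, so the "all records with this GUID" lookup becomes a single range scan.

        -- Illustrative only: a 50-75 byte payload keyed by a GUID that is looked up as a group.
        CREATE TABLE dbo.Readings
        (
            RecordGuid  UNIQUEIDENTIFIER NOT NULL,   -- lookup key shared by related rows
            RecordedAt  DATETIME2(0)     NOT NULL,
            Payload     VARBINARY(75)    NOT NULL    -- the small opaque data blob
        );

        -- Clustering on (RecordGuid, RecordedAt) co-locates all rows for one GUID.
        CREATE CLUSTERED INDEX CIX_Readings
            ON dbo.Readings (RecordGuid, RecordedAt);

    Whether random-GUID insert patterns hold up at ~3 billion rows/month is exactly the trade-off the question is asking about; partitioning by month is the usual next step on the SQL Server side.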

    Read the article

  • Updating rows using "in" operator in "where" clause

    - by doublep
    Hi. I stumbled upon SQL behavior I don't understand. I needed to update several rows in a table at once, and started by just finding them:

        SELECT *
        FROM some_table
        WHERE field1 IN (SELECT ...)

    This returned a selection of about 60 rows. Now I was pretty confident I got the subquery right, so I modified the first part only:

        UPDATE some_table
        SET field2 = some_value
        WHERE field1 IN (SELECT ...)

    In other words, everything after the WHERE was exactly as in the first query. However, it resulted in 0 rows updated, whereas I would expect those 60. Note that the statement above would actually change field2, i.e. I verified that some_value was not already present in the selected rows. The subquery was a modestly complicated piece of SQL involving 2 (different) tables, 1 view, joins and its own WHERE clause. In case this matters, it happened on Oracle Database 10g. So, the question is: why didn't the UPDATE touch the rows returned by the SELECT?
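
    Without seeing the subquery it's hard to say why, but one way to narrow it down is to run the count and the UPDATE back to back in the same session and transaction, so both statements are guaranteed to see the same data. A diagnostic sketch, where the "(SELECT ...)" placeholder stands for the original, unchanged subquery:

        SELECT COUNT(*)
        FROM some_table
        WHERE field1 IN (SELECT ...);     -- expect ~60

        UPDATE some_table
        SET field2 = some_value
        WHERE field1 IN (SELECT ...);     -- byte-for-byte the same subquery

        SELECT COUNT(*)
        FROM some_table
        WHERE field2 = some_value;        -- verify the change before COMMIT

    If the counts still disagree inside one transaction, the difference has to be in the statement text itself rather than in the data changing between runs.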

    Read the article

  • DBCC CHECKDB WITH DATA_PURITY gives out of range error

    - by Mark Allison
    Hi there, I have restored a SQL Server 2000 database onto SQL Server 2005 and then run DBCC CHECKDB WITH DATA_PURITY, and I get this error:

        Msg 2570, Level 16, State 3, Line 2
        Page (1:19558), slot 13 in object ID 181575685, index ID 1, partition ID 293374720802816,
        alloc unit ID 11899744092160 (type "In-row data"). Column "NumberOfShares" value is out of
        range for data type "numeric". Update column to a legal value.

    The column NumberOfShares is a numeric(19,6) data type. If I run the following

        select max(NumberOfShares) from AUDIT_Table
        select min(NumberOfShares) from AUDIT_Table

    I get:

        22678647.839110
        -1845953000.000000

    These values are inside the bounds of a numeric(19,6), so I'm not sure why the DBCC check fails. Any ideas how to find out why it fails? Do I need to use DBCC PAGE? How would you troubleshoot this? Thanks, Mark.
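
    MIN/MAX won't necessarily surface the bad value, because the corrupt bytes may not compare or display the way the declared type suggests. The error message already names the exact page and slot, so one way to inspect it is DBCC PAGE (a sketch; replace YourDatabase with the restored database's name):

        -- Route DBCC PAGE output to the client session instead of the error log.
        DBCC TRACEON (3604);

        -- Database name, file 1, page 19558 (both from the error message), print option 3 = per-row detail.
        DBCC PAGE ('YourDatabase', 1, 19558, 3);

        -- Slot 13 in the output is the offending row; its key columns identify which row to
        -- UPDATE to a legal numeric(19,6) value before re-running DBCC CHECKDB WITH DATA_PURITY.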

    Read the article

  • Rails Named Scope and overlapping conditions

    - by Tumtu
    Hi everyone, I have a question about Rails SQL generation:

        class Organization < ActiveRecord::Base
          has_many :people
          named_scope :active, :conditions => { :active => 'Yes' }
        end

        class Person < ActiveRecord::Base
          belongs_to :organization
        end

    Rails SQL for all active people in the first organization:

        Organization.first.people.active.all

        Organization Load (0.0ms)  SELECT TOP 1 * FROM [organizations]
        Person Load (0.0ms)  SELECT * FROM [people]
          WHERE ((([people].[active] = 'Yes') AND ([people].organization_id = 1))
            AND ([people].organization_id = 1))

    Why does Rails generate the "[people].organization_id = 1" condition twice? Does someone know how to make it DRY? e.g.

        SELECT * FROM [people]
        WHERE (([people].[active] = 'Yes') AND ([people].organization_id = 1))

    Read the article

  • INSERT INTO statement that copies rows and auto-increments non-identity key ID column

    - by AmoebaMan17
    Given a table that has three columns:

        ID (primary key, not auto-incrementing)
        GroupID
        SomeValue

    I am trying to write a single SQL INSERT INTO statement that will make a copy of every row that has one GroupID into a new GroupID. Example beginning table:

        ID | GroupID | SomeValue
        ------------------------
        1  | 1       | a
        2  | 1       | b

    Goal after I run a simple INSERT INTO statement:

        ID | GroupID | SomeValue
        ------------------------
        1  | 1       | a
        2  | 1       | b
        3  | 2       | a
        4  | 2       | b

    I thought I could do something like:

        INSERT INTO MyTable
        (
            [ID]
            ,[GroupID]
            ,[SomeValue]
        )
        (
            SELECT
                (SELECT MAX(ID) + 1 FROM MyTable)
                ,@NewGroupID
                ,[SomeValue]
            FROM MyTable
            WHERE ID = @OriginalGroupID
        )

    This causes a primary key violation, since it ends up reusing the same MAX(ID) + 1 value multiple times. Is my only recourse a bunch of INSERT statements inside a T-SQL WHILE loop with an incrementing counter value? I also don't have the option of turning ID into an auto-incrementing identity column, since that would break code I don't have the source for.
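
    A sketch of a single set-based INSERT that hands out consecutive IDs by combining MAX(ID) with ROW_NUMBER() (SQL Server 2005 and later; table and variable names follow the question):

        INSERT INTO MyTable (ID, GroupID, SomeValue)
        SELECT
            -- each copied row gets MAX(ID) + 1, + 2, + 3, ... instead of the same value every time
            (SELECT MAX(ID) FROM MyTable) + ROW_NUMBER() OVER (ORDER BY ID),
            @NewGroupID,
            SomeValue
        FROM MyTable
        WHERE GroupID = @OriginalGroupID;   -- copy every row of the source group

    If other sessions might insert at the same time, the MAX(ID) read would still need protecting (for example with a serializable transaction or a table lock), otherwise two concurrent copies could collide.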

    Read the article

  • Run SQL*Plus commands on Application Express

    - by pesantos
    Hi, I am new to PL/SQL and I'm trying to execute the commands that I learned in the course:

        VARIABLE area NUMBER
        DECLARE
          radius NUMBER(2) := &s_radius;
          pi CONSTANT NUMBER := 3.14;
        BEGIN
          :area := pi * radius * radius;
        END;

    I understand that I can run this using SQL*Plus, but I remember my teacher running it from the web browser using Application Express. I try to run the same commands there, under Home > SQL > SQL Commands, but I keep getting the error "ORA-00900: invalid SQL statement". Can you help me run it in Application Express, or point me to a way I can use an editor to run these course exercises? Thanks!
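
    VARIABLE and the &substitution prompt are SQL*Plus features rather than SQL or PL/SQL, which is why APEX's SQL Commands window rejects the script with ORA-00900. A sketch of an equivalent pure PL/SQL block that runs there, with the bind variable replaced by a local variable and the prompt replaced by a literal (4 is just an example radius):

        DECLARE
          radius NUMBER(2)       := 4;      -- takes the place of the &s_radius prompt
          pi     CONSTANT NUMBER := 3.14;
          area   NUMBER;                    -- takes the place of the SQL*Plus bind VARIABLE
        BEGIN
          area := pi * radius * radius;
          DBMS_OUTPUT.PUT_LINE('area = ' || area);
        END;

    SQL Commands normally shows the DBMS_OUTPUT text in the results area below the statement.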

    Read the article

  • Problem updating through LINQtoSQL in MVC application using StructureMap, Repository Pattern and UoW

    - by matt
    I have an ASP.NET MVC application using LINQ to SQL for data access. I am trying to use the Repository and Unit of Work patterns, with a service layer consuming the repositories and unit of work. I am experiencing a problem when attempting to perform updates through a particular repository. My application architecture is as follows.

    My service class:

        public class MyService
        {
            private IRepositoryA _RepositoryA;
            private IRepositoryB _RepositoryB;
            private IUnitOfWork _unitOfWork;

            public MyService(IRepositoryA ARepositoryA, IRepositoryB ARepositoryB, IUnitOfWork AUnitOfWork)
            {
                _unitOfWork = AUnitOfWork;
                _RepositoryA = ARepositoryA;
                _RepositoryB = ARepositoryB;
            }

            public void PerformActionOnObject(Guid AID)
            {
                MyObject obj = _RepositoryA.GetRecords().WithID(AID);
                obj.SomeProperty = "Changed to new value";
                _RepositoryA.UpdateRecord(obj);
                _unitOfWork.Save();
            }
        }

    Repository interface:

        public interface IRepositoryA
        {
            IQueryable<MyObject> GetRecords();
            bool UpdateRecord(MyObject obj);
        }

    Repository LINQ to SQL implementation:

        public class LINQtoSQLRepositoryA : IRepositoryA
        {
            private MyDataContext _DBContext;

            public LINQtoSQLRepositoryA(IUnitOfWork AUnitOfWork)
            {
                _DBContext = AUnitOfWork as MyDataContext;
            }

            public IQueryable<MyObject> GetRecords()
            {
                return from records in _DBContext.MyTable
                       select new MyObject
                       {
                           ID = records.ID,
                           SomeProperty = records.SomeProperty
                       };
            }

            public bool UpdateRecord(MyObject AObj)
            {
                MyTableRecord record = (from u in _DBContext.MyTable
                                        where u.ID == AObj.ID
                                        select u).SingleOrDefault();
                if (record == null)
                {
                    return false;
                }
                record.SomeProperty = AObj.SomeProperty;
                return true;
            }
        }

    Unit of work interface:

        public interface IUnitOfWork
        {
            void Save();
        }

    Unit of work implemented in a data context extension:

        public partial class MyDataContext : DataContext, IUnitOfWork
        {
            public void Save()
            {
                SubmitChanges();
            }
        }

    StructureMap registry:

        public class DataServiceRegistry : Registry
        {
            public DataServiceRegistry()
            {
                // Unit of work
                For<IUnitOfWork>()
                    .HttpContextScoped()
                    .TheDefault.Is.ConstructedBy(() => new MyDataContext());

                // RepositoryA
                For<IRepositoryA>()
                    .Singleton()
                    .Use<LINQtoSQLRepositoryA>();

                // RepositoryB
                For<IRepositoryB>()
                    .Singleton()
                    .Use<LINQtoSQLRepositoryB>();
            }
        }

    My problem is that when I call PerformActionOnObject on my service object, the update never fires any SQL. I think this is because the data context in the unit-of-work object is different from the one in RepositoryA where the data is changed. So when the service calls Save() on its IUnitOfWork, the underlying data context does not hold any updated data, and no UPDATE SQL is fired. Is there something I've done wrong in the StructureMap registry setup? Or is there a more fundamental problem with the design? Many thanks.

    Read the article

  • SSIS package from SQL Agent failed

    - by Pramodtech
    I have a simple package which reads data from a CSV file and loads it into a SQL table. The file is located on another server and is shared; I use a UNC path in the package. The package is scheduled using a SQL Agent job. The job worked fine for 1 week and then suddenly started giving this error:

        The file name "\\124.0.48.173\basel2\Commercial\Input\ACBS_GSU.csv" specified in the connection was not valid.
        End Error
        Error: 2010-04-20 16:15:07.19
        Code: 0xC0202070
        Source: ACBS_GSU Connection manager "CSV file conection"
        Description: Connection "CSV file conection" failed validation.

    Any help will be appreciated.

    Read the article

  • Why does PHP show an error for my SQL query

    - by ZincX
    UPDATE: My mistake - I made a typo. Never mind this question.

    I'm using PHP to update a MySQL database. The resultant query, when I print it out on my webpage before executing, is as follows:

        INSERT INTO perch2_content_items
        (itemOrder, regionID, pageID, itemRev, itemID, itemJSON, itemSearch)
        SELECT MAX(itemOrder)+1, 105, 81, 11, 118, 'json', 'search'
        FROM perch2_content_items
        WHERE regionID=105

    When I copy and paste this query directly into the phpMyAdmin SQL interface, it works fine. The table gets updated. However, when I try to execute it using my PHP code as follows, it throws an error.

        $insertToPerch = "INSERT INTO perch2_content_items
            (itemOrder, regionID, pageID, itemRev, itemID, itemJSON, itemSearch)
            SELECT MAX(itemOrder)+1, $regionID, $pageID, $regionRev, $newItemID, 'json', 'search'
            FROM perch2_content_items
            WHERE regionID=$regionID";
        mysql_query(insertToPerch) or die(mysql_error());

    The error I'm getting is:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL
        server version for the right syntax to use near 'insertToPerch' at line 1

    Can anybody help me figure out why it is failing?

    Read the article

  • Loop through multiple tables to execute the same query

    - by pcvnes
    Hi, I have a database in which a new table is created each day to log process instances. The tables are named MESSAGE_LOG_YYYYMMDD. Currently I want to execute the same query sequentially against all of those tables. I wrote the PL/SQL below, but got stuck at the SELECT inside the loop (line 10). How can I execute the SQL statement successfully against all the tables here?

        DECLARE
          CURSOR all_tables IS
            SELECT table_name
            FROM all_tables
            WHERE TABLE_NAME like 'MESSAGE_LOG_2%'
            ORDER BY TABLE_NAME;
        BEGIN
          FOR msglog IN all_tables
          LOOP
            SELECT count(*) FROM TABLE msglog.TABLE_NAME;
          END LOOP;
        END;
        /

    Cheers, Peter
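
    A static SELECT can't take its table name from a variable; the usual PL/SQL workaround is dynamic SQL with EXECUTE IMMEDIATE. A sketch of the loop rewritten that way, printing each count with DBMS_OUTPUT:

        DECLARE
          CURSOR log_tables IS
            SELECT table_name
            FROM   all_tables
            WHERE  table_name LIKE 'MESSAGE_LOG_2%'
            ORDER  BY table_name;
          v_count NUMBER;
        BEGIN
          FOR t IN log_tables LOOP
            -- The table name is spliced into the statement text, so each iteration runs
            -- the same query against a different MESSAGE_LOG_YYYYMMDD table.
            EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || t.table_name INTO v_count;
            DBMS_OUTPUT.PUT_LINE(t.table_name || ': ' || v_count);
          END LOOP;
        END;
        /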

    Read the article

  • XQuery get value from attribute

    - by Steven
    Hi, I have some XML and need to extract values using SQL:

        <?xml version="1.0" ?>
        <fields>
          <field name="fld_AccomAttic">
            <value>0</value>
          </field>
          <field name="fld_AccomBathroom">
            <value>1</value>
          </field>
        </fields>

    I need to get each column name (e.g. fld_AccomAttic) and its value. The XML is held in a SQL Server 2005 DB. I have used XQuery before and it has worked. Can anyone show me how to extract these values? I'm baffled as to why I am unable to do this. Thanks, Sp
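
    Assuming the document sits in an xml column or variable, one way on SQL Server 2005 is to shred it with nodes() and pull the attribute and element text out with value(). A sketch, where @doc stands in for the actual column:

        DECLARE @doc xml;
        SET @doc = N'<fields>
                       <field name="fld_AccomAttic"><value>0</value></field>
                       <field name="fld_AccomBathroom"><value>1</value></field>
                     </fields>';

        -- nodes() produces one row per <field>; value() extracts the name attribute and the <value> text.
        SELECT f.x.value('@name',             'varchar(100)') AS column_name,
               f.x.value('(value/text())[1]', 'int')          AS column_value
        FROM   @doc.nodes('/fields/field') AS f(x);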

    Read the article

  • Problem with updating multiple rows which are in conflict with unique index

    - by GUZ
    I am using Microsoft SQL Server and I have a master-detail scenario where I need to store the order of the details. So in the Detail table I have ID, MasterID, Position and some other columns. There is also a unique index on (MasterID, Position). It works OK except in one case: when I have some existing details and I change their order, for example when I swap the detail at position 3 with the detail at position 2. When I save the detail at position 2 (which in the database has Position equal to 3), SQL Server protests because of the index uniqueness constraint. How can I solve this problem in a reasonable way? Thank you in advance, Lukasz Glaz
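
    One way around this in SQL Server is to swap both positions in a single UPDATE statement: uniqueness is enforced for the statement as a whole, not row by row, so the intermediate duplicate never exists. A sketch using the question's column names, with positions 2 and 3 as the example and @MasterID standing for the master whose details are being reordered:

        -- Swap the two details of one master in one statement, so the unique index
        -- on (MasterID, Position) never sees both rows at the same position.
        UPDATE Detail
        SET    Position = CASE Position WHEN 2 THEN 3
                                        WHEN 3 THEN 2 END
        WHERE  MasterID = @MasterID
          AND  Position IN (2, 3);

    If the data access layer insists on issuing one UPDATE per row, a common alternative is to first move the affected rows to temporary (e.g. negative) positions and then to their final ones.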

    Read the article

  • SQL join from multiple tables

    - by Kenny Anderson
    Hi all. We've got a system (MS SQL 2008 R2-based) that has a number of "input" databases and one "output" database. I'd like to write a query that reads from the output DB and JOINs it to data in one of the source DBs. However, the source table may be one or more individual tables :( The name of the source DB is included in the output DB; ideally, I'd like to do something like the following (pseudo-SQL ahoy):

        SELECT output.UID, output.description, input.data
        FROM output.dbo.description
        LEFT JOIN (SELECT input.UID, input.data
                   FROM [output.sourcedb].dbo.datatable) AS input
            ON input.UID = output.UID

    Is there any way to do something like the above - "dynamically" specify the database and table to be joined on for each row in the query?
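
    The database part of a three-part name can't be a variable in a static query, so this usually ends up as dynamic SQL: read the source-database name out of the output DB, build the statement text, and run it with sp_executesql. A sketch that handles one source database at a time (per-row sources would need a loop over the distinct names, UNIONing the results); the sourcedb column name is an assumption based on the pseudo-SQL above:

        DECLARE @sourcedb sysname,
                @sql      nvarchar(max);

        -- The output DB records which input DB the data came from.
        SELECT TOP (1) @sourcedb = sourcedb
        FROM   [output].dbo.description;

        -- Splice the database name into the three-part name; QUOTENAME guards against bad names.
        SET @sql = N'
            SELECT o.UID, o.description, i.data
            FROM   [output].dbo.description AS o
            LEFT JOIN ' + QUOTENAME(@sourcedb) + N'.dbo.datatable AS i
                   ON i.UID = o.UID;';

        EXEC sys.sp_executesql @sql;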

    Read the article

  • Top 3 Max entries for a Combination with a condition

    - by Asharmb
    I am new to the SQL side, so if this question sounds very easy please bear with me. I have 4 columns in a SQL table; let's say A, B, C, D. For any B/C combination I may get any number of rows. I need to get at most 3 rows per B/C combination (which in turn give me 3 unique values of A for that combination), and those selected rows should have the top 3 max values of D compared to the other entries for that combination. There can be any number of B/C combinations, and the above logic should apply to all of them.
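
    On SQL Server 2005+ (and most other engines with window functions) this is usually done with ROW_NUMBER() partitioned by the B/C combination and ordered by D descending. A sketch using the column letters from the question; the table name MyTable is an assumption:

        WITH ranked AS
        (
            SELECT A, B, C, D,
                   -- restart the numbering for every B/C combination, biggest D first
                   ROW_NUMBER() OVER (PARTITION BY B, C ORDER BY D DESC) AS rn
            FROM   MyTable
        )
        SELECT A, B, C, D
        FROM   ranked
        WHERE  rn <= 3;        -- at most 3 rows (the top 3 values of D) per combination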

    Read the article

  • SQLite subquery syntax/error/difference from MySQL

    - by Rudie
    I was under the impression this is valid SQLite syntax:

        SELECT *,
               (SELECT amount AS target
                FROM target_money
                WHERE start_year <= p.bill_year AND start_month <= p.bill_month
                ORDER BY start_year ASC, start_month ASC
                LIMIT 1) AS target
        FROM payments AS p;

    But I guess it's not, because SQLite returns this error:

        no such column: p.bill_year

    What's wrong with how I refer to p.bill_year? Yes, I am positive the payments table has a column bill_year. Am I crazy, or is this just valid SQL syntax? It would work in MySQL, wouldn't it?? I don't have any other SQL engine present so I can't test others, but I thought SQLite was quite standard-like.

    Read the article

  • Problem saving Text file in database using Hibernate

    - by Marquinio
    I'm having a problem saving large text files to a MySQL database. If the text file size is around 5KB it saves successfully. If the file is 148KB then I get this error from Hibernate:

        org.hibernate.exception.DataException: Could not execute JDBC batch update

    This is the SQL shown by Hibernate:

        Hibernate: insert into file_table (ID,FILE) values (?, ?)

    In my Hibernate mapping file I'm using java.sql.Blob to store the file. Does anyone know why it fails to save a 148KB file, but if I open that same file and cut it down to around 5KB, it saves successfully? I thought the default limit was something like 2GB? This is weird. Thanks.
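
    Two usual suspects when a ~5 KB file saves but a ~148 KB one fails are the MySQL column type (a plain BLOB/TEXT tops out at 65,535 bytes) and the server's max_allowed_packet limit. A hedged sketch of checking and fixing both on the MySQL side; the table and column names follow the Hibernate INSERT shown above:

        -- 1. Check the current column type; a plain BLOB holds at most 65,535 bytes.
        SHOW COLUMNS FROM file_table LIKE 'FILE';

        -- Widen it so multi-hundred-KB files fit (MEDIUMBLOB allows up to 16 MB).
        ALTER TABLE file_table MODIFY `FILE` MEDIUMBLOB;

        -- 2. Check the packet limit; the whole INSERT must fit in one packet.
        SHOW VARIABLES LIKE 'max_allowed_packet';

        -- Raise it (needs SUPER privilege; also set it in my.cnf so it survives restarts).
        SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;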

    Read the article

  • How to cast in XML for aggregate functions

    - by renegm
    In SQL Server 2008 I need to execute a query like this:

        DECLARE @x AS xml
        SET @x=N'<r><c>First Text</c></r><r><c>Other Text</c></r>'
        SELECT @x.query('fn:max(r/c)')

    But it returns nothing (apparently because it converts xdt:untypedAtomic to numeric). How do I "cast" r/c to varchar? Something like:

        SELECT @x.query('fn:max(«CAST(r/c «AS varchar(20))»)')

    Edit: using nodes() the MAX function is the T-SQL one, not the fn:max function. In this code:

        DECLARE @x xml;
        SET @x = '';
        SELECT @x.query('fn:max((1, 2))');
        SELECT @x.query('fn:max(("First Text", "Other Text"))');

    both queries return the expected results: 2 and "Other Text". So fn:max can evaluate string expressions given ad hoc, but the first query doesn't work. How do I force string arguments to fn:max?

    Read the article

  • Updating Many-to-Many relationship with LinqToSQL

    - by Noffie
    If I had, for example, a many-to-many mapping table called "RolesToUsers" between a Users and a Roles table, here is how I do it:

        // DataContext is db, usr is a User entity.
        // newUserRolesMappings is a collection with the desired new mappings, probably
        // derived by looking at selections in a checkbox list of Roles on a User Edit page.
        db.RolesToUsers.DeleteAllOnSubmit(usr.RolesToUsers);
        usr.RolesToUsers.Clear();
        usr.RolesToUsers.AddRange(newUserRolesMappings);

    I used the SQL profiler once, and this seems to generate very intelligent SQL - it will only drop the rows which are no longer in the mapping relationship, and only add rows which did not already exist in the relationship. It doesn't blindly do a complete clearing and re-construction of the relationship, as I thought it would. The internet is surprisingly quiet on the subject, and the query "LinqToSQL many-to-many" mostly just turns up articles about how the LinqToSQL data mapper doesn't "support" it very well. How does everyone else update many-to-many relationships with LinqToSQL?

    Read the article

  • Dynamic query to immediately execute?

    - by Curtis White
    I am using the MSDN Dynamic LINQ to SQL package. It allows using strings for queries, but the returned type is an IQueryable and not an IQueryable<T>, so I do not have the ToList() method. How can I make this execute immediately without manually enumerating over the IQueryable? My goal is to databind in the Selecting event of a LINQ to SQL data source, and that throws a DataContext-disposed exception. I can set the query as the DataSource on a GridView, though. Any help greatly appreciated! Thanks. The Dynamic LINQ to SQL library is the one from the samples that come with Visual Studio.

    Read the article

  • My database has suddenly been deleted on the server - how do I recover it?

    - by user2728312
    I'm running an application on a Windows server that connects to a SQL Server database. Today, when I opened SQL Server Management Studio, I was surprised that the database was not in the list of databases! I don't know the reason. I searched the server's files but I can't find the database, and it's not in the recycle bin either. I put my database at C:\db\myWeb.mdf and suddenly it's been removed! Can anyone tell me how to recover the database?

    Read the article
