Search Results

Search found 35019 results on 1401 pages for 'sql documentation'.

Page 667/1401

  • Alter multiple tables' columns length

    - by gdoron
    So, we just found out that 254 tables in our Oracle DBMS have a column named "Foo" with the wrong length: NUMBER(10) instead of NUMBER(3). That Foo column is part of the PK of those tables, and other tables have foreign keys pointing to it. What I did was: backed up the table with a temp table; disabled the foreign keys to the table; disabled the PK containing the Foo column; nulled the Foo column for all the rows; restored all of the above. But now we found out it's not just a couple of tables but 254. Is there an easy way (or at least an easier one than this) to alter the columns' length? P.S. I have DBA permissions.
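
    A minimal PL/SQL sketch of one possible shortcut, assuming every affected column really is named FOO, the dependent foreign keys are already disabled, and the column has been nulled out (Oracle only lets you shrink a NUMBER's precision while the column holds no data); everything else mirrors the question:

      BEGIN
        FOR t IN (SELECT table_name
                    FROM user_tab_columns
                   WHERE column_name = 'FOO'
                     AND data_precision = 10) LOOP
          -- shrink the precision; the column must be empty for this to succeed
          EXECUTE IMMEDIATE
            'ALTER TABLE ' || t.table_name || ' MODIFY (FOO NUMBER(3))';
        END LOOP;
      END;
      /

    The same loop pattern over user_constraints can generate the DISABLE/ENABLE CONSTRAINT statements if doing that by hand for 254 tables is impractical.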

    Read the article

  • MYSQL: How to limit inner join?

    - by Sergii Rechmp
    I need some help with my query. I have 2 tables: all (art | serie) and sootv (name | art | foo). I need to get a result like name | serie. My query is: SELECT t2.NAME, t1.serie FROM ( SELECT * FROM `all` WHERE `serie` LIKE '$serie' ) t1 INNER JOIN sootv t2 ON t1.art = t2.art; It works, but the sootv table contains rows like abc | 1 | 5 and abc | 1 | 6, so I get the same result twice. That's not what I need. How can I get only one result, abc | 1? Thanks.
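
    A hedged sketch: if the duplicate rows differ only in columns that are never selected, SELECT DISTINCT (or an equivalent GROUP BY) collapses them; table and column names are taken from the question:

      SELECT DISTINCT t2.name, t1.serie
      FROM (SELECT art, serie FROM `all` WHERE serie LIKE '$serie') t1
      INNER JOIN sootv t2 ON t1.art = t2.art;

    If only one row per name is wanted even when serie differs, a GROUP BY t2.name with an aggregate over serie would be needed instead.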

    Read the article

  • Get the first and last posts in a thread

    - by Grampa
    I am trying to code a forum website and I want to display a list of threads. Each thread should be accompanied by info about the first post (the "head" of the thread) as well as the last. My current database structure is the following: threads table: id - int, PK, not NULL, auto-increment name - varchar(255) posts table: id - int, PK, not NULL, auto-increment thread_id - FK for threads The tables have other fields as well, but they are not relevant for the query. I am interested in querying threads and somehow JOINing with posts so that I obtain both the first and last post for each thread in a single query (with no subqueries). So far I am able to do it using multiple queries, and I have defined the first post as being: SELECT * FROM threads t LEFT JOIN posts p ON t.id = p.thread_id ORDER BY p.id LIMIT 0, 1 The last post is pretty much the same except for ORDER BY id DESC. Now, I could select multiple threads with their first or last posts, by doing: SELECT * FROM threads t LEFT JOIN posts p ON t.id = p.thread_id ORDER BY p.id GROUP BY t.id But of course I can't get both at once, since I would need to sort both ASC and DESC at the same time. What is the solution here? Is it even possible to use a single query? Is there any way I could change the structure of my tables to facilitate this? If this is not doable, then what tips could you give me to improve the query performance in this particular situation?
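
    A hedged sketch of one common approach: derive each thread's first and last post ids with MIN/MAX, then join back to posts twice. It does use a derived table, so it only helps if the "no subqueries" constraint really means no correlated per-row subqueries; names follow the question:

      SELECT t.id AS thread_id, t.name,
             fp.*,     -- first post of the thread
             lp.*      -- last post of the thread
      FROM threads t
      JOIN (SELECT thread_id, MIN(id) AS first_id, MAX(id) AS last_id
              FROM posts
             GROUP BY thread_id) x ON x.thread_id = t.id
      JOIN posts fp ON fp.id = x.first_id
      JOIN posts lp ON lp.id = x.last_id;

    An index on posts (thread_id, id) keeps the MIN/MAX aggregation cheap.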

    Read the article

  • Question about Transact SQL syntax

    - by Yousui
    Hi guys, The following code works like a charm: BEGIN TRY BEGIN TRANSACTION COMMIT TRANSACTION END TRY BEGIN CATCH IF @@TRANCOUNT > 0 ROLLBACK; DECLARE @ErrorMessage NVARCHAR(4000), @ErrorSeverity int; SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY(); RAISERROR(@ErrorMessage, @ErrorSeverity, 1); END CATCH But this code gives an error: BEGIN TRY BEGIN TRANSACTION COMMIT TRANSACTION END TRY BEGIN CATCH IF @@TRANCOUNT > 0 ROLLBACK; RAISERROR(ERROR_MESSAGE(), ERROR_SEVERITY(), 1); END CATCH Why?

    Read the article

  • Does Table.InsertOnSubmit create a copy of the original table?

    - by Bryan
    Using InsertOnSubmit seems to have some memory overhead. I have a System.Data.Linq.Table<User> table. When I do table.InsertOnSubmit(user) and then int count = table.Count(), the memory usage of my application increases by roughly the size of the User table, but the count is the number of items before user was inserted. So I'm guessing that an enumeration after InsertOnSubmit creates a copy of the table. Is that true?

    Read the article

  • How to use SQL trigger to record the affected column's row number

    - by Freeman
    I want to have an 'updateinfo' table in order to record every update/insert/delete operation on another table. In Oracle I've written this:
      CREATE TABLE updateinfo
      (
        rnumber NUMBER(10),
        tablename VARCHAR2(100 BYTE),
        action VARCHAR2(100 BYTE),
        update_date DATE
      );
      DROP TRIGGER TRI_TABLE;
      CREATE OR REPLACE TRIGGER TRI_TABLE
      AFTER DELETE OR INSERT OR UPDATE ON demo
      REFERENCING NEW AS NEW OLD AS OLD
      FOR EACH ROW
      BEGIN
        if inserting then
          insert into updateinfo(rnumber,tablename,action,update_date) values(rownum,'demo','insert',sysdate);
        elsif updating then
          insert into updateinfo(rnumber,tablename,action,update_date) values(rownum,'demo','update',sysdate);
        elsif deleting then
          insert into updateinfo(rnumber,tablename,action,update_date) values(rownum,'demo','delete',sysdate);
        end if;
        -- EXCEPTION
        --   WHEN OTHERS THEN
        --     Consider logging the error and then re-raise
        --     RAISE;
      END TRI_TABLE;
    But when I check updateinfo, the rnumber column is zero in every row. Is there any way to record the correct row number?
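
    ROWNUM inside a row-level trigger is not a stable identifier for the affected row (hence the zeros), so the usual workaround is to take the value from a sequence, or simply to log the affected row's own key via :new / :old. A hedged sketch using a sequence, keeping the question's names:

      CREATE SEQUENCE updateinfo_seq;

      CREATE OR REPLACE TRIGGER tri_table
      AFTER DELETE OR INSERT OR UPDATE ON demo
      FOR EACH ROW
      DECLARE
        v_action updateinfo.action%TYPE;
      BEGIN
        IF INSERTING THEN
          v_action := 'insert';
        ELSIF UPDATING THEN
          v_action := 'update';
        ELSE
          v_action := 'delete';
        END IF;

        INSERT INTO updateinfo (rnumber, tablename, action, update_date)
        VALUES (updateinfo_seq.NEXTVAL, 'demo', v_action, SYSDATE);
      END;
      /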

    Read the article

  • Fill data gaps - UNION, PARTITION BY, or JOIN?

    - by Dave Jarvis
    Problem: There are data gaps that need to be filled. Would like to avoid UNION or PARTITION BY if possible.
    Query statement: The SELECT statement reads as follows:
      SELECT count( r.incident_id ) AS incident_tally, r.severity_cd, r.incident_typ_cd
      FROM report_vw r
      GROUP BY r.severity_cd, r.incident_typ_cd
      ORDER BY r.severity_cd, r.incident_typ_cd
    Data sources: The severity codes and incident type codes are from severity_vw and incident_type_vw. The columns are: incident_tally, severity_cd, incident_typ_cd.
    Actual result data:
      36 0 ENVIRONMENT
      1 1 DISASTER
      27 1 ENVIRONMENT
      4 2 SAFETY
      1 3 SAFETY
    Required result data:
      36 0 ENVIRONMENT
      0 0 DISASTER
      0 0 SAFETY
      27 1 ENVIRONMENT
      0 1 DISASTER
      0 1 SAFETY
      0 2 ENVIRONMENT
      0 2 DISASTER
      4 2 SAFETY
      0 3 ENVIRONMENT
      0 3 DISASTER
      1 3 SAFETY
    Question: How would you use UNION, PARTITION BY, or LEFT JOIN to fill in the zero counts?
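
    A hedged sketch of the LEFT JOIN route: cross join the two code views to build every severity/type combination, then left join the aggregated counts onto it and default the gaps to zero. It assumes severity_vw exposes a severity_cd column and incident_type_vw an incident_typ_cd column:

      SELECT COALESCE(r.incident_tally, 0) AS incident_tally,
             s.severity_cd,
             t.incident_typ_cd
      FROM severity_vw s
      CROSS JOIN incident_type_vw t
      LEFT JOIN (SELECT count(incident_id) AS incident_tally,
                        severity_cd,
                        incident_typ_cd
                   FROM report_vw
                  GROUP BY severity_cd, incident_typ_cd) r
             ON r.severity_cd = s.severity_cd
            AND r.incident_typ_cd = t.incident_typ_cd
      ORDER BY s.severity_cd, t.incident_typ_cd;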

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how I'm going to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:
      1: Scotland
      2: --- West Central
      3: ------ Glasgow
      4: ------ Etc
      5: --- North East
      6: ------ Ayrshire
      7: ------ Etc
    A user can search somewhere specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are: (1) keep a note of each record's children in the database (i.e. cat 1 would have 2, 3, 4 in its children field) and query against that record with a SELECT * FROM Jobs WHERE Category IN Areas.childrenField; (2) use a recursive function to find all results that have a relation to the selected area. The problems I see with both are: holding this data in the DB means having to keep track of all changes to the structure; recursion is slow and inefficient. Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
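
    SQL Server 2005 supports recursive common table expressions, which push the recursion into a single set-based query instead of repeated application-side calls; a hedged sketch assuming an Areas table with Id and ParentId columns (ParentId is an assumption about the schema):

      WITH AreaTree AS (
          SELECT Id
          FROM Areas
          WHERE Id = @SelectedAreaId        -- e.g. the id for Scotland
          UNION ALL
          SELECT a.Id
          FROM Areas a
          INNER JOIN AreaTree t ON a.ParentId = t.Id
      )
      SELECT j.*
      FROM Jobs j
      WHERE j.Category IN (SELECT Id FROM AreaTree);

    Materialised-path or nested-set columns are the usual alternatives when the tree changes rarely but is read constantly.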

    Read the article

  • SqlException: Login failed for user

    - by Younes
    I use a DBML for my data access layer to provide the data that I need in my app. When I connect from Server Explorer everything seems fine: I choose Windows authentication and the connection test shows everything works just fine. But when I build my solution and run it on IIS, it says that I'm using a login that is not working. How do I solve this issue?

    Read the article

  • How to get details about the DTS Step in a running job?

    - by vikasde
    I have scheduled a DTS package to run from a scheduled job. The DTS has several steps in it. Whenever the job is running and I take a look at the jobs section in Enterprise Manager, the status always displays "Executing Job Step 1...", although it is running all the steps properly. How do I know which step the DTS is currently running?

    Read the article

  • Insert not working

    - by user1642318
    I've searched everywhere and tried all suggestions but still no luck when running the following code. Note that some code is commented out; that's just me trying different things.
      SqlConnection connection = new SqlConnection("Data Source=URB900-PC\SQLEXPRESS;Initial Catalog=usersSQL;Integrated Security=True");
      string password = PasswordTextBox.Text;
      string email = EmailTextBox.Text;
      string firstname = FirstNameTextBox.Text;
      string lastname = SurnameTextBox.Text;
      //command.Parameters.AddWithValue("@UserName", username);
      //command.Parameters.AddWithValue("@Password", password);
      //command.Parameters.AddWithValue("@Email", email);
      //command.Parameters.AddWithValue("@FirstName", firstname);
      //command.Parameters.AddWithValue("@LastName", lastname);
      command.Parameters.Add("@UserName", SqlDbType.VarChar);
      command.Parameters["@UserName"].Value = username;
      command.Parameters.Add("@Password", SqlDbType.VarChar);
      command.Parameters["@Password"].Value = password;
      command.Parameters.Add("@Email", SqlDbType.VarChar);
      command.Parameters["@Email"].Value = email;
      command.Parameters.Add("@FirstName", SqlDbType.VarChar);
      command.Parameters["@FirstName"].Value = firstname;
      command.Parameters.Add("@LasttName", SqlDbType.VarChar);
      command.Parameters["@LasttName"].Value = lastname;
      SqlCommand command2 = new SqlCommand("INSERT INTO users (UserName, Password, UserEmail, FirstName, LastName)" + "values (@UserName, @Password, @Email, @FirstName, @LastName)", connection);
      connection.Open();
      command2.ExecuteNonQuery();
      //command2.ExecuteScalar();
      connection.Close();
    When I run this, fill in the textboxes and hit the button, I get: Must declare the scalar variable "@UserName". Any help would be greatly appreciated. Thanks.

    Read the article

  • Why is this postgresql query so slow?

    - by user315975
    I'm no database expert, but I have enough knowledge to get myself into trouble, as is the case here. This query
      SELECT DISTINCT p.*
      FROM points p, areas a, contacts c
      WHERE ( p.latitude > 43.6511659465 AND p.latitude < 43.6711659465 AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889)
        AND p.resource_type = 'Contact'
        AND c.user_id = 6
    is extremely slow. The points table has fewer than 2000 records, but it takes about 8 seconds to execute. There are indexes on the latitude and longitude columns. Removing the clause concerning the resource_type and user_id makes no difference. The latitude and longitude fields are both formatted as number(15,10) -- I need the precision for some calculations. There are many, many other queries in this project where points are compared, but no execution-time problems. What's going on?
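
    One thing worth checking: the FROM clause lists points, areas and contacts, but the WHERE clause never relates them, so the planner has to build a cross product of the three tables before DISTINCT collapses it. A hedged sketch of the shape the query probably wants, assuming points is meant to be joined to contacts through a column such as resource_id (that join column is an assumption), and dropping areas because nothing in the original query references it:

      SELECT DISTINCT p.*
      FROM points p
      JOIN contacts c ON c.id = p.resource_id      -- assumed join column
      WHERE p.latitude  > 43.6511659465 AND p.latitude  < 43.6711659465
        AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889
        AND p.resource_type = 'Contact'
        AND c.user_id = 6;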

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query: UPDATE Indexer.Pages SET LastError=NULL where LastError is not null; Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema: CREATE TABLE indexer.pages ( id uuid NOT NULL, url character varying(1024) NOT NULL, firstcrawled timestamp with time zone NOT NULL, lastcrawled timestamp with time zone NOT NULL, recipeid uuid, html text NOT NULL, lasterror character varying(1024), missingings smallint, CONSTRAINT pages_pkey PRIMARY KEY (id ), CONSTRAINT indexer_pages_uniqueurl UNIQUE (url ) ); I also have two indexes: CREATE INDEX idx_indexer_pages_missingings ON indexer.pages USING btree (missingings ) WHERE missingings > 0; and CREATE INDEX idx_indexer_pages_null ON indexer.pages USING btree (recipeid ) WHERE NULL::boolean; There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
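
    If the single long-running statement itself is the problem, one common mitigation (a hedged sketch, not tuned to this schema) is to clear the column in primary-key batches so each transaction, and the vacuum work it generates, stays small:

      -- repeat until it reports 0 rows updated
      UPDATE Indexer.Pages
      SET LastError = NULL
      WHERE id IN (SELECT id
                     FROM Indexer.Pages
                    WHERE LastError IS NOT NULL
                    LIMIT 10000);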

    Read the article

  • Stop invalid data in a attribute with foreign key constraint using triggers?

    - by Eternal Learner
    How do I specify a trigger which checks whether the data inserted into a table's foreign key attribute actually exists in the referenced table? If it exists, no action should be performed; otherwise the trigger should delete the inserted tuple. E.g. consider two tables R(A int Primary Key) and S(B int Primary Key, A int Foreign Key References R(A)). I have written a trigger like this:
      Create Trigger DelS
      BEFORE INSERT ON S
      FOR EACH ROW
      BEGIN
        Delete FROM S where New.A <> ( Select * from R;) );
      End;
    I am sure I am making a mistake while specifying the inner subquery within the BEGIN and END blocks of the trigger. My question is: how do I make such a trigger?
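
    A hedged sketch of the usual shape (MySQL 5.5+ syntax assumed, since the question's trigger looks like MySQL): a BEFORE INSERT trigger cannot delete the row it is firing for, but it can look up the referenced table and abort the insert with SIGNAL when no match exists:

      DELIMITER //
      CREATE TRIGGER chk_s_fk
      BEFORE INSERT ON S
      FOR EACH ROW
      BEGIN
        IF NOT EXISTS (SELECT 1 FROM R WHERE R.A = NEW.A) THEN
          SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'S.A must reference an existing R.A';
        END IF;
      END//
      DELIMITER ;

    Where the engine supports it (e.g. InnoDB), a real FOREIGN KEY constraint does the same job declaratively.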

    Read the article

  • Rails: how to compare datetime?

    - by fenec
    Hello, I have games in my SQLite DB with the attribute starting_date (t.date :starting_date). I would like to find all the games that have already started, so I am using these lines of code:
      Game.find :all, :conditions => "starting_date <= #{Date.today}"
      Game.find_by_sql("SELECT * FROM "games" WHERE (created_at < 2010-05-13)")
    The result is nil, even though I know that I have games that have already started, like this one:
      #<Game id: 1, team_1_id: 2, team_2_id: 1, status: 2, team_1_points: nil, team_2_points: nil, starting_date: "2010-05-05", winner: 1, sport: "football", country: nil, league: "calcio", created_at: "2010-04-07 00:09:21", updated_at: "2010-05-13 00:57:19">
    What am I doing wrong here?
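
    A hedged sketch of the likely culprit: both conditions interpolate the date without quotes, so the database sees integer arithmetic (2010-05-13 evaluates to 1992) rather than a date literal. Quoting the value, or better, letting Rails bind it (:conditions => ["starting_date <= ?", Date.today]), produces SQL of this shape:

      -- date literals must be quoted, otherwise 2010-05-13 is just subtraction
      SELECT * FROM games WHERE starting_date <= '2010-05-13';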

    Read the article

  • Best way to model Customer <--> Address

    - by Jen
    Every Customer has a physical address and an optional mailing address. What is your preferred way to model this?
    Option 1. Customer has foreign key to Address: Customer (id, phys_address_id, mail_address_id); Address (id, street, city, etc.)
    Option 2. Customer has one-to-many relationship to Address, which contains a field to describe the address type: Customer (id); Address (id, customer_id, address_type, street, city, etc.)
    Option 3. Address information is de-normalized and stored in Customer: Customer (id, phys_street, phys_city, etc., mail_street, mail_city, etc.)
    One of my overriding goals is to simplify the object-relational mappings, so I'm leaning towards the first approach. What are your thoughts?
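
    A hedged DDL sketch of Option 1, with the mailing address nullable to capture that it is optional (column types are illustrative):

      CREATE TABLE Address (
        id      INT PRIMARY KEY,
        street  VARCHAR(255) NOT NULL,
        city    VARCHAR(100) NOT NULL
        -- etc.
      );

      CREATE TABLE Customer (
        id              INT PRIMARY KEY,
        phys_address_id INT NOT NULL REFERENCES Address(id),
        mail_address_id INT NULL REFERENCES Address(id)   -- optional mailing address
      );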

    Read the article

  • Speed up a web service for autocomplete and avoid too many method calls

    - by jphenow
    So I've got my jQuery autocomplete 'working', but it's a little fidgety: I call the web service method each time a keydown() fires, so I get lots of methods hanging, and sometimes to get the "auto" to work I have to type it out and backspace a bit because, I'm assuming, it got its return value a little slowly. I've limited the query results to 8 to minimize time. Is there anything I can do to make this a little snappier? This thing seems near useless if I don't get it a little more responsive.
    javascript:
      $("#clientAutoNames").keydown(function () { $.ajax({ type: "POST", url: "WebService.asmx/LoadData", data: "{'input':" + JSON.stringify($("#clientAutoNames").val()) + "}", contentType: "application/json; charset=utf-8", dataType: "json", success: function (data) { if (data.d != null) { var serviceScript = data.d; } $("#autoNames").html(serviceScript); $('#clientAutoNames').autocomplete({ minLength: 2, source: autoNames, delay: 100, focus: function (event, ui) { $('#project').val(ui.item.label); return false; }, select: function (event, ui) { $('#clientAutoNames').val(ui.item.label); $('#projectid').val(ui.item.value); $('#project-description').html(ui.item.desc); pkey = $('#project-id').val; return false; } }) .data("autocomplete")._renderItem = function (ul, item) { return $("<li></li>") .data("item.autocomplete", item) .append("<a>" + item.label + "<br>" + item.desc + "</a>") .appendTo(ul); } } }); });
    WebService.asmx:
      <WebMethod()> _ Public Function LoadData(ByVal input As String) As String Dim result As String = "<script>var autoNames = [" Dim sqlOut As Data.SqlClient.SqlDataReader Dim connstring As String = *Datasource* Dim strSql As String = "SELECT TOP 2 * FROM v_Clients WHERE (SearchName Like '" + input + "%') ORDER BY SearchName" Dim cnn As Data.SqlClient.SqlConnection = New Data.SqlClient.SqlConnection(connstring) Dim cmd As Data.SqlClient.SqlCommand = New Data.SqlClient.SqlCommand(strSql, cnn) cnn.Open() sqlOut = cmd.ExecuteReader() Dim c As Integer = 0 While sqlOut.Read() result = result + "{" result = result + "value: '" + sqlOut("ContactID").ToString() + "'," result = result + "label: '" + sqlOut("SearchName").ToString() + "'," 'result = result + "desc: '" + title + " from " + company + "'," result = result + "}," End While result = result + "];</script>" sqlOut.Close() cnn.Close() Return result End Function
    I'm sure I'm just going about this slightly wrong or not striking a better balance of calls or something. Any help greatly appreciated!

    Read the article

  • Select *, max(date) works in phpMyAdmin but not in my code

    - by kdobrev
    OK, my statement executes well in phpMyAdmin, but not the way I expect it to in my PHP page. This is my statement:
      SELECT egid, group_name, limit, MAX( date )
      FROM employee_groups
      GROUP BY egid
      ORDER BY egid DESC;
    This is my table:
      CREATE TABLE employee_groups (
        egid int(10) unsigned NOT NULL,
        date date NOT NULL,
        group_name varchar(50) NOT NULL,
        limit smallint(5) unsigned NOT NULL,
        PRIMARY KEY (egid,date)
      ) ENGINE=MyISAM DEFAULT CHARSET=cp1251;
    I want to extract the most recent list of groups, e.g. if a group has been changed I want to have only the last change. And I need it as a list (all groups).
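
    A hedged sketch of the usual fix: MySQL does not guarantee that the non-aggregated columns (group_name, limit) come from the row that holds MAX(date), so join back to the per-group maximum explicitly (limit is a reserved word, so it is safer backquoted):

      SELECT g.egid, g.group_name, g.`limit`, g.`date`
      FROM employee_groups g
      INNER JOIN (SELECT egid, MAX(`date`) AS max_date
                    FROM employee_groups
                   GROUP BY egid) m
              ON m.egid = g.egid AND m.max_date = g.`date`
      ORDER BY g.egid DESC;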

    Read the article

  • Double Inner Join generates unexpected error

    - by Itamar Marom
    In my database I have three tables: Users: UserID (Auto Numbering), UserName, UserPassword and a few other unimportant fields. PrivateMessages: MessageID (Auto Numbering), SenderID and a few other fields defining the message content. MessageStatus: MessageID, ReceiverID, MessageWasRead (Boolean) What I need is a query to which I input a user's id and I get all the private messages he has received. In addition, I also need to receive each message's sender UserName. For this I wrote the following query: SELECT Users.*, PrivateMessages.*, MessageStatus.* FROM PrivateMessages INNER JOIN Users ON PrivateMessages.SenderID = Users.UserID INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageID WHERE MessageStatus.ReceiverID=[@userid]; But for some reason when I try saving it in my Access database, I get the following error (translated to English by me, since my office is in a different language): Syntax error (missing operator) at expression: "PrivateMessages.SenderID = Users.UserID INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageI". Any ideas what could cause this? Thanks.
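
    A hedged sketch of the usual Access fix: the Jet/ACE SQL dialect requires each additional JOIN to be wrapped in parentheses, so the same query is typically written as:

      SELECT Users.*, PrivateMessages.*, MessageStatus.*
      FROM (PrivateMessages
      INNER JOIN Users ON PrivateMessages.SenderID = Users.UserID)
      INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageID
      WHERE MessageStatus.ReceiverID = [@userid];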

    Read the article

  • Reading from an write-only(OUT) parameter in pl/sql

    - by sqlgrasshopper5
    When I tried writing to a read-only (IN) parameter of a function, Oracle complained with an error. But that is not the case when reading from a write-only (OUT) parameter of a function: Oracle silently allows this without any error. What is the reason for this behaviour? The following code executes without any assignment happening to the "so" variable:
      create or replace function foo(a OUT number) return number is
        so number;
      begin
        so := a; --no assignment happens here
        a := 42;
        dbms_output.put_line('HiYA there');
        dbms_output.put_line('VAlue:' || so);
        return 5;
      end;
      /
      declare
        somevar number;
        a number := 6;
      begin
        dbms_output.put_line('Before a:'|| a);
        somevar := foo(a);
        dbms_output.put_line('After a:' || a);
      end;
      /
    Here's the output I got:
      Before a:6
      HiYA there
      VAlue:
      After a:42

    Read the article
