Search Results

Search found 84007 results on 3361 pages for 'sql system table'.


  • SQL Binary Microsoft Access - Combining two tables if specific field values are equal

    - by Jordan
    I am new to Microsoft Access and SQL but have a decent programming background, and I believe this problem should be relatively simple. I have two tables that I have imported into Access. A little context: one table is huge and contains generic, global data; the other is still big but contains specific, regional data. There is only one common field (or column) between the two tables. Let's call this common field CF. The other fields in both tables are different. I'll take you through one iteration of what I need to do. I need to take each CF value in the regional, smaller table and find the matching CF value in the larger, global table. After finding the match, I need to take the whole "record" or "row" from the global data and copy it over to the corresponding record in the smaller regional table (this will involve adding the new fields to the regional table). I need to do this for all CF values in the regional, smaller table. I was advised to use SQL and a binary search, but I am unfamiliar with both. Let me know if you have any questions. I appreciate the help!
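
    A binary search isn't needed here - a join does the matching, and Access will use an index on CF if one exists. A minimal make-table sketch in Access SQL, where RegionalTable, GlobalTable, RegionalCombined, Field1 and Field2 are all hypothetical names standing in for the real tables and the global columns to copy (notes stay out here because Access SQL has no comment syntax):

        SELECT R.*, G.Field1, G.Field2
        INTO RegionalCombined
        FROM RegionalTable AS R
        INNER JOIN GlobalTable AS G
            ON R.CF = G.CF;

    SELECT ... INTO builds a new table with the combined fields in one pass, which sidesteps adding each global field to the regional table by hand.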

    Read the article

  • Create table and call it from sql

    - by user1770816
    I have a PL/SQL function which creates a new temporary table. For creating the table I use EXECUTE IMMEDIATE. When I run my function in Oracle SQL Developer everything is OK; the function creates the temp table without errors. But when I call it from SQL: Select function_name from table_name I get these exceptions: ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML ORA-06512: at "SYSTEM.GET_USERS", line 10 14552. 00000 - "cannot perform a DDL, commit or rollback inside a query or DML" *Cause: DDL operations like creation tables, views etc. and transaction control statements such as commit/rollback cannot be performed inside a query or a DML statement. Update: Sorry, I'm writing from a tablet PC and have trouble formatting text. My function: CREATE OR REPLACE FUNCTION GET_USERS ( USERID IN VARCHAR2 ) RETURN VARCHAR2 AS request VARCHAR2(520) := 'CREATE GLOBAL TEMPORARY TABLE '; BEGIN request := request || 'temp_table_' || userid || '(user_name varchar2(50), user_id varchar2(20), is_administrator varchar2(5))' || ' ON COMMIT PRESERVE ROWS'; EXECUTE IMMEDIATE (request); RETURN 'true'; END GET_USERS;
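
    One common workaround, sketched below, is to declare the function as an autonomous transaction so the DDL and its implicit commit happen outside the calling query; only the pragma line is new, the rest is the corrected function from the question:

        CREATE OR REPLACE FUNCTION GET_USERS ( USERID IN VARCHAR2 ) RETURN VARCHAR2 AS
          PRAGMA AUTONOMOUS_TRANSACTION;  -- the DDL's implicit commit stays in its own transaction
          request VARCHAR2(520) := 'CREATE GLOBAL TEMPORARY TABLE ';
        BEGIN
          request := request || 'temp_table_' || userid
                  || '(user_name varchar2(50), user_id varchar2(20), is_administrator varchar2(5))'
                  || ' ON COMMIT PRESERVE ROWS';
          EXECUTE IMMEDIATE request;
          RETURN 'true';
        END GET_USERS;

    Worth noting: an Oracle global temporary table is a permanent object whose data is private per session, so the usual design is to create it once at install time rather than one table per user at runtime.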

    Read the article

  • How do I use a concatenation of 2 columns in a SQL DB in ASP.NET properly?

    - by user293357
    I'm using LinqToSql like this with a CheckBoxList in ASP.NET: var teachers = from x in dc.teachers select x; cbl.DataSource = teachers; cbl.DataTextField = "name"; cbl.DataValueField = "teacherID"; cbl.DataBind(); I want to display both "firstname" and "name" in the DataTextField however. I found this solution but I'm using LINQ: http://stackoverflow.com/questions/839223/concatenate-two-fields-in-a-dropdown How do I do this?
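
    One option that avoids touching the LINQ projection is to expose the concatenation from the database side, for example as a computed column - a sketch that assumes firstname and name are character columns on the teachers table:

        ALTER TABLE teachers
        ADD DisplayName AS (firstname + ' ' + name);  -- computed on read, nothing stored

    After refreshing the model, LinqToSql sees DisplayName as an ordinary property, so cbl.DataTextField = "DisplayName" works unchanged; the alternative is to project an anonymous type containing the concatenated string and bind to that.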

    Read the article

  • [MySQL] Load data from .csv applying regex before insert into table

    - by Gabriel L. Oliveira
    I know there is a command to import .csv data into a MySQL table, and I'm using this one: LOAD DATA INFILE "file.csv" INTO TABLE foo FIELDS TERMINATED BY "," LINES TERMINATED BY "\r\n"; The data inside this .csv are lines like this example: 08/e0/Breast_Cancer_Res_2001_Nov_2_3(1)_55-60.tar.gz Breast Cancer Res. 2001 Nov 2; 3(1):55-60 PMC13900 b0/ac/Breast_Cancer_Res_2001_Nov_9_3(1)_61-65.tar.gz Breast Cancer Res. 2001 Nov 9; 3(1):61-65 PMC13901 I just want the first part (the .tar.gz path), always on the pattern (letter or number)(letter or number)/(letter or number)(letter or number)/... and the part starting with 'PMC', always on the pattern PMC(number...), where 'number' means a digit from 0 to 9 and a letter means a letter from a to z (both upper and lower case). So, applying the LOAD DATA and the regex, and inserting the resulting entries into my SQL table, the result table should be: 1 08/e0/Breast_Cancer_Res_2001_Nov_2_3(1)_55-60.tar.gz PMC13900 2 b0/ac/Breast_Cancer_Res_2001_Nov_9_3(1)_61-65.tar.gz PMC13901 What should be the SQL command to do all this?
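
    LOAD DATA can do the reshaping itself: list the incoming fields as user variables and derive the table's columns in a SET clause. A sketch, assuming the three fields (path, citation, PMC id) are tab-separated and that the destination columns are named path and pmc - adjust both to the real file and schema:

        LOAD DATA INFILE 'file.csv'
        INTO TABLE foo
        FIELDS TERMINATED BY '\t'
        LINES TERMINATED BY '\r\n'
        (@path, @citation, @pmc)
        SET path = @path,       -- keep the .tar.gz path
            pmc  = TRIM(@pmc);  -- keep the PMC id; the citation is simply dropped

    On MySQL 8.0+ an actual regex is available too, e.g. SET pmc = REGEXP_SUBSTR(@pmc, 'PMC[0-9]+'), if the fields need real pattern matching rather than positional splitting.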

    Read the article

  • Concurrent usage of table causing issues

    - by Sven
    Hello! In our current project we interface with a third-party data provider. They need to insert data into a table of ours. This inserting can be frequent: every 1 min, every 5 min, every 30 min, depending on the amount of new data they need to provide. They use the READ COMMITTED isolation level. On our end we have an application, a Windows service, that calls a web service every 2 minutes to see if there is new data in this table. Our isolation level is REPEATABLE READ. We retrieve the records and update a column on these rows. Now the problem is that sometimes this third-party provider needs to insert a lot of data, let's say 5000 records. They do this in transactions (5 rows per transaction), but they don't close the connection; they do one transaction and then the next until all records are inserted. This causes issues for our process: we receive a timeout. If this goes on for a long time the database becomes completely unstable. For instance, they may have stopped, but the table somehow still stays unavailable. When I try to do a select on the table, I get several records, but at a certain moment I don't get any response anymore. It just says retrieving data, but nothing comes until I get a timeout exception. The only solution is to restart the database, and then I see the other records. How can we solve this? What is the ideal isolation level setting in this scenario?
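
    One setting worth testing here - a sketch, not a guaranteed fix - is row versioning, so your readers see the last committed version of rows instead of queuing behind the provider's chain of transactions:

        -- the second statement waits for exclusive access to the database
        ALTER DATABASE OurDb SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE OurDb SET READ_COMMITTED_SNAPSHOT ON;

    OurDb is a placeholder. With READ_COMMITTED_SNAPSHOT on, READ COMMITTED readers stop taking shared locks; your service runs at REPEATABLE READ, though, so it would also need to move to SNAPSHOT isolation to benefit.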

    Read the article

  • Sql Server 2005 Check Constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code:

        --Create 2 Sales tables with constraints based on the SaleDate
        create table Sales1(SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010'))
        GO
        create table Sales2(SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010'))
        GO
        --Insert some data into Sales1
        insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50)
        insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60)
        GO
        --Insert some data into Sales2
        insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10)
        insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20)
        GO
        --Create a view that combines these 2 tables
        create VIEW [dbo].[Sales]
        AS
        SELECT SaleDate, Amount FROM Sales1
        UNION ALL
        SELECT SaleDate, Amount FROM Sales2
        GO
        --Get the results
        --Query 1
        select * from Sales where SaleDate < '31 Mar 2010'
        -- the execution plan for this query only looks at Sales2 (which is good)
        --Query 2
        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'
        select * from Sales where SaleDate < @SaleDate
        -- the execution plan for this query looks at Sales1 and Sales2 (which is NOT good)

    Looking at the execution plans you will see that the two queries are different. For Query 1 the only table that is accessed is Sales2 (which is good). For Query 2 both tables are accessed (which is bad). Why are these execution plans different, and how do I get Query 2 to only access the relevant table when variables are used? I have tried adding indexes on the SaleDate column and that does not seem to help.
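
    One workaround to test is OPTION (RECOMPILE), which compiles the statement with the variable's runtime value, letting the optimizer apply the check constraints just as it does for the literal:

        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'
        select * from Sales where SaleDate < @SaleDate
        OPTION (RECOMPILE)  -- plan is built for this exact value, so Sales1 can be eliminated

    Without the hint, the cached plan has to be safe for any future value of @SaleDate, which is why both branches of the view appear in it.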

    Read the article

  • With SQL can you use a sub-query in a WHERE LIKE clause?

    - by Jason
    I'm not even sure how to phrase this, as it sounds weird conceptually, but I'll give it a try. Basically I'm looking for a way to create a query that is essentially a WHERE ... IN ... LIKE ... SELECT statement. As an example, if I wanted to find all user records with a hotmail.com email address, I could do something like: SELECT UserEmail FROM Users WHERE (UserEmail LIKE '%hotmail.com') But what if I wanted to use a subquery as the matching criteria? Something like this: SELECT UserEmail FROM Users WHERE (UserEmail LIKE (SELECT '%' + Domain FROM Domains)) Is that even possible? If so, what's the right syntax?
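
    Since LIKE takes a single pattern and the subquery returns a set, the usual rewrite is a join against the pattern table - a sketch using the tables from the question:

        SELECT u.UserEmail
        FROM Users AS u
        JOIN Domains AS d
          ON u.UserEmail LIKE '%' + d.Domain  -- one match attempt per domain row

    If an email can match more than one domain and you want each user once, add DISTINCT or use WHERE EXISTS instead of the join.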

    Read the article

  • Why are there performance differences when a SQL function is called from .Net app vs when the same c

    - by Dan Snell
    We are having a problem in our test and dev environments with a function that at times runs quite slowly when called from a .NET application. When we call this function directly from Management Studio it works fine. Here are the differences when profiled: From the application: CPU: 906, Reads: 61853, Writes: 0, Duration: 926. From SSMS: CPU: 15, Reads: 11243, Writes: 0, Duration: 31. Now we have determined that when we recompile the function, the performance returns to what we expect, and the performance profile when run from the application matches what we get when we run it from SSMS. It will start slowing down again at what appear to be random intervals. We have not seen this in prod, but that may be in part because everything is recompiled there on a weekly basis. So what might cause this sort of behavior?
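
    One classic culprit for "fast in SSMS, slow from the app" - an assumption to verify, not a diagnosis - is that the two connections use different SET options, so they get separate cached plans, and the application's plan was compiled (parameter sniffing) for unrepresentative values. A quick check is to reproduce the app's settings in SSMS:

        -- SSMS defaults to SET ARITHABORT ON; most .NET connections run with it OFF,
        -- which maps the same statement to a different cached plan
        SET ARITHABORT OFF;
        SELECT dbo.YourFunction(@SomeParam);  -- hypothetical call; substitute the real signature

    If the slowdown appears under the app's settings, that also explains why recompiling "fixes" it: the skewed plan is discarded until the next unlucky compile.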

    Read the article

  • Stairway to T-SQL DML Level 11: How to Delete Rows from a Table

    You may have data in a database that was inserted into a table by mistake, or you may have data in your tables that is no longer of value. In either case, when you have unwanted data in a table you need a way to remove it. The DELETE statement can be used to eliminate data in a table that is no longer needed. In this article you will see the different ways to use the DELETE statement to identify and remove unwanted data from your SQL Server tables.
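
    As a taste of the statement the article covers, the simplest form removes whatever a predicate matches (the table and cutoff here are illustrative):

        DELETE FROM dbo.SalesHistory
        WHERE SaleDate < '2005-01-01';  -- only rows matching the WHERE clause are removed

    Leaving off the WHERE clause deletes every row in the table, so running the predicate as a SELECT first is a cheap sanity check.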

    Read the article

  • custom sorting or ordering a table without resorting the whole shebang

    - by fuugus
    For ten years we've been using the same custom sorting on our tables. I'm wondering if there is another solution that involves fewer updates, especially since today we'd like to have a replication/publication date and wouldn't like our replication to replicate unnecessary entries. I had a look into nested sets, but it doesn't seem to do the job for us. Base table:

        id | a_sort
        ---+-------
         1 | 10
         2 | 20
         3 | 30

    After inserting an entry at the second position with insert into table (a_sort) values(15):

        id | a_sort
        ---+-------
         1 | 10
         2 | 20
         3 | 30
         4 | 15

    Ordering the table with select * from table order by a_sort and resorting all the a_sort entries, updating at least id=(2,3,4), will of course produce the desired output:

        id | a_sort
        ---+-------
         1 | 10
         4 | 20
         2 | 30
         3 | 40

    The column names, the column count, datatypes, a possible join, possible triggers and the way the resorting is done are irrelevant to the problem; we've also found some pretty neat ways to do this task fast. The question is only: how the heck can we reduce the updates in the db to 1 or 2 max? Seems like an awfully common problem. The captain obvious in me thought once "use an a_sort float(53), insert using a fixed value of ordervaluefirstentry+abs(ordervaluefirstentry-ordervaluenextentry)/2"... but this would only allow around 1040 "in between" entries - so never resorting seems a bit problematic ;)
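
    One scheme that gets the normal case down to a single INSERT is to leave gaps and renumber only when a gap closes - a sketch in T-SQL with the example's column names (sorted_table is a stand-in name):

        -- normal case: one INSERT, zero UPDATEs - take the midpoint of the two neighbours
        INSERT INTO sorted_table (a_sort) VALUES ((10 + 20) / 2);  -- lands between 10 and 20

        -- rare case, when no integer midpoint is left: renumber everything in one statement
        UPDATE t
        SET    t.a_sort = x.rn * 10
        FROM   sorted_table AS t
        JOIN  (SELECT id, ROW_NUMBER() OVER (ORDER BY a_sort) AS rn
               FROM   sorted_table) AS x
          ON   x.id = t.id;

    An integer gap of 10 absorbs three or four midpoint inserts between the same two neighbours before a renumber is due; a wider initial gap (say 1000) postpones it further, and the renumber is one set-based UPDATE rather than a row-by-row resort.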

    Read the article

  • How to synchronize two (or n) replication processes for MS SQL databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. We need to map some entity across both read-only databases; let's say database A contains orders and database B contains lines. The problem is that replication to one database can lag behind replication to the other, and at the moment of mapping the read-only databases will have inconsistent data. For example: we stored 2 orders with lines at 19:00 and 19:03. The mapping process started at 19:05, but by the moment of mapping, database A's replication had processed all changes up to 19:03, while database B's replication had processed only changes up to 19:00. After mapping we will have an order entity with the order as of 19:03 and lines as of 19:00. Trouble is guaranteed :) In my particular case both databases have a temporal model, so it is possible to fetch data for every time slice, but the problem is identifying the time of the latest replication. Question: How do we synchronize replication processes for several databases to avoid the situation described above?

    Read the article

  • Delete trigger does not catch table truncation

    - by Tomaz.tsql
    This sample shows that truncating a table will not fire a delete trigger.

        USE AdventureWorks;
        GO
        -- STAGING
        IF EXISTS (SELECT * FROM sys.objects WHERE name = 'test_del_trigger_log' AND type = 'U')
            DROP TABLE test_del_trigger_log;
        GO
        IF EXISTS (SELECT * FROM sys.objects WHERE name = 'test_del_trigger' AND type = 'U')
            DROP TABLE test_del_trigger;
        GO
        CREATE TABLE test_del_trigger
            (id INT IDENTITY(1,1)
            ,tkt VARCHAR(10)
            CONSTRAINT pk_test_del_trigger PRIMARY KEY (id)
            );
        GO
        INSERT INTO...(read more)
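
    The excerpt is cut off, but the behaviour it demonstrates condenses to this:

        TRUNCATE TABLE test_del_trigger;  -- deallocates the data pages; no DELETE trigger fires
        DELETE FROM test_del_trigger;     -- logs individual row deletes; the DELETE trigger fires

    TRUNCATE behaves like a DDL operation, which is also why it requires ALTER permission and refuses to run against a table referenced by a foreign key.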

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that is missing the ON DELETE CASCADE setting. The tricky part is that there can be indirect relationships buried up to 5 levels deep (example: ... great-grandchild -FK3-> grandchild -FK2-> child -FK1-> main table). We need to dig up the leaf tables and columns, not just the very first level. The 'good' part about this is that execution speed isn't a concern; it'll be run on a backup copy of the production db to fix any relational issues for the future. I did SELECT * FROM sys.foreign_keys but that gives me the name of the constraint - not the names of the child and parent tables and the columns in the relationship (the juicy bits). Plus the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below. The way we're adding constraints in SQL Server: ALTER TABLE [dbo].[UserEmailPrefs] WITH CHECK ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] FOREIGN KEY([UserId]) REFERENCES [dbo].[UserMasterTable] ([UserId]) ON DELETE CASCADE GO ALTER TABLE [dbo].[UserEmailPrefs] CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] GO The comments in this SO question inspired this question.
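
    The names you're after live in sys.foreign_key_columns; joined to sys.foreign_keys it exposes both tables, both columns and the delete action. A sketch for the first level - covering all 5 levels means feeding this into a recursive CTE:

        SELECT fk.name                                               AS constraint_name,
               OBJECT_NAME(fkc.parent_object_id)                     AS child_table,
               COL_NAME(fkc.parent_object_id, fkc.parent_column_id)  AS child_column,
               OBJECT_NAME(fkc.referenced_object_id)                 AS parent_table,
               COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS parent_column,
               fk.delete_referential_action_desc                     AS on_delete
        FROM   sys.foreign_keys AS fk
        JOIN   sys.foreign_key_columns AS fkc
          ON   fkc.constraint_object_id = fk.object_id
        WHERE  fk.delete_referential_action_desc <> 'CASCADE';  -- FKs missing ON DELETE CASCADE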

    Read the article

  • Does SQL Server guarantee sequential inserting of an identity column?

    - by balpha
    In other words, is the following "cursoring" approach guaranteed to work:

        1. retrieve rows from the DB
        2. save the largest ID from the returned records for later, e.g. in LastMax
        3. later, "SELECT * FROM MyTable WHERE Id > {0}", LastMax

    In order for that to work, I have to be sure that every row I didn't get in step 1 has an Id greater than LastMax. Is this guaranteed, or can I run into weird race conditions?
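
    The race is real: identity values are allocated when the INSERT runs, but rows become visible in commit order, and the two can differ. A sketch of the failure, assuming MyTable's remaining columns have defaults:

        -- session 1
        BEGIN TRAN;
        INSERT INTO MyTable DEFAULT VALUES;  -- allocated Id = 5, not yet committed

        -- session 2, meanwhile
        INSERT INTO MyTable DEFAULT VALUES;  -- allocated Id = 6, autocommitted

        -- the reader runs now, sees only Id = 6, and records LastMax = 6

        -- session 1
        COMMIT;  -- Id = 5 appears, but LastMax is already 6: row 5 is skipped forever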

    Read the article

  • MSSQL / T-SQL : How to update equal percentages of a resultset?

    - by Kent Comeaux
    I need a way to take a resultset of KeyIDs and divide it up as equally as possible and update records differently for each division based on the KeyIDs. In other words, there is SELECT KeyID FROM TableA WHERE (some criteria exists) I want to update TableA 3 different ways by 3 equal portions of KeyIDs. UPDATE TableA SET FieldA = Value1 WHERE KeyID IN (the first 1/3 of the SELECT resultset above) UPDATE TableA SET FieldA = Value2 WHERE KeyID IN (the second 1/3 of the SELECT resultset above) UPDATE TableA SET FieldA = Value3 WHERE KeyID IN (the third 1/3 of the SELECT resultset above) or something to that effect. Thanks for any and all of your responses.
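
    NTILE does the equal division in one pass - a sketch where Value1-Value3 and the criteria are the placeholders from the question:

        WITH Portions AS (
            SELECT KeyID,
                   NTILE(3) OVER (ORDER BY KeyID) AS Portion  -- 1, 2 or 3, as equal as possible
            FROM   TableA
            WHERE  (some criteria exists)
        )
        UPDATE a
        SET    a.FieldA = CASE p.Portion WHEN 1 THEN Value1
                                         WHEN 2 THEN Value2
                                         ELSE Value3 END
        FROM   TableA AS a
        JOIN   Portions AS p ON p.KeyID = a.KeyID;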

    Read the article

  • Insert Into Two SQL Tables From XML Maintaining Relationship

    - by Thx
    I am looking to insert records from xml into two different tables. For example:

        <Root>
          <A>
            <AValue>1</AValue>
            <Children>
              <B>
                <BValue>2</BValue>
              </B>
            </Children>
          </A>
        </Root>

    would insert a record into table A:

        AID | AValue
        ----+-------
        #   | 1

    and also insert a record into table B:

        BID | AID                   | BValue
        ----+-----------------------+-------
        #   | # (same as AID above) | 2

    I have this:

        DECLARE @idoc INT
        DECLARE @doc NVARCHAR(MAX)
        SET @doc = '
        <Root>
          <A>
            <AValue>1</AValue>
            <Children>
              <B>
                <BValue>2</BValue>
              </B>
            </Children>
          </A>
        </Root>
        '
        EXEC sp_xml_preparedocument @idoc OUTPUT, @doc

        CREATE TABLE #A
            ( AID INT IDENTITY(1, 1)
            , AValue INT
            )
        INSERT INTO #A
        SELECT *
        FROM OPENXML (@idoc, '/Root/A', 2)
             WITH (AValue INT)

        CREATE TABLE #B
            ( BID INT IDENTITY(1, 1)
            , AID INT
            , BValue INT
            )
        INSERT INTO #B
        SELECT *
        FROM OPENXML (@idoc, '/Root/A/Children/B', 2)
             WITH ( AID INT, BValue INT )

        SELECT * FROM #A
        SELECT * FROM #B
        DROP TABLE #A
        DROP TABLE #B

    Thanks!
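
    The missing piece is carrying the parent's key into the child insert. OPENXML's WITH clause accepts an XPath pattern per column, so each <B> row can reach back up to its parent's AValue, and the insert can join to #A to pick up the generated AID - a sketch that assumes AValue uniquely identifies each <A> node:

        INSERT INTO #B (AID, BValue)
        SELECT a.AID, x.BValue
        FROM   OPENXML (@idoc, '/Root/A/Children/B', 2)
               WITH ( AValue INT '../../AValue',  -- climb from <B> back to the parent <A>
                      BValue INT ) AS x
        JOIN   #A AS a ON a.AValue = x.AValue;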

    Read the article

  • copy rows with special condition

    - by pooria_googooli
    I have a table with a lot of columns, for example: ID, Fname, Lname, Tel, Mob, Email, Job, Code, Company, ... The ID column is an autonumber column. I want to copy every row in this table back into the same table, changing the company column value to 12 in the copied rows. I don't want to write out all of the column names, because I have a lot of tables with a lot of columns. I tried this code:

        declare @c int;
        declare @i int;
        select * into CmDet from CmDet;
        select @C= count(id) from CmDet;
        while @i < @C
        begin
            UPDATE CmDet SET company = 12 WHERE company = 11
            set @i += 1
        end

    but I got this error:

        Msg 2714, Level 16, State 6, Line 3
        There is already an object named 'CmDet' in the database.

    I changed the code to this:

        declare @c int
        declare @i int
        insert into CmDet select * from CmDet;
        select @C= count(id) from CmDet;
        while @i < @C
        begin
            UPDATE CmDet SET company = 12 WHERE company = 11
            set @i += 1
        end

    and I got this error:

        Msg 8101, Level 16, State 1, Line 3
        An explicit value for the identity column in table 'CmDet' can only be specified when a column list is used and IDENTITY_INSERT is ON.

    What should I do?
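
    One way to avoid typing the columns for any table is to build the list from the catalog, skipping the identity column - a sketch that assumes the column to overwrite is literally named company:

        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        -- every column except the identity column, in column order
        SELECT @cols = STUFF((SELECT ',' + QUOTENAME(name)
                              FROM sys.columns
                              WHERE object_id = OBJECT_ID('CmDet')
                                AND is_identity = 0
                              ORDER BY column_id
                              FOR XML PATH('')), 1, 1, '');

        -- same list on the INSERT side; on the SELECT side swap company for the literal 12
        SET @sql = 'INSERT INTO CmDet (' + @cols + ') SELECT '
                 + REPLACE(@cols, '[company]', '12') + ' FROM CmDet';

        EXEC sp_executesql @sql;

    Because the identity column is left out of both lists, SQL Server assigns new ID values and the IDENTITY_INSERT error goes away; no loop is needed, since the single INSERT...SELECT copies every row.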

    Read the article

  • How to synchronize two (or n) replication processes for SQL Server databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. We need to map some entity across both read-only databases; let's say database A contains orders and database B contains lines. The problem is that replication to one database can lag behind replication to the other, and at the moment of mapping the read-only databases will have inconsistent data. For example: we stored 2 orders with lines at 19:00 and 19:03. The mapping process started at 19:05, but by the moment of mapping, database A's replication had processed all changes up to 19:03, while database B's replication had processed only changes up to 19:00. After mapping we will have an order entity with the order as of 19:03 and lines as of 19:00. Trouble is guaranteed :) In my particular case both databases have a temporal model, so it is possible to fetch data for every time slice, but the problem is identifying the time of the latest replication. Question: How do we synchronize replication processes for several databases to avoid the situation described above? Or, in other words, how do we compare the time of the last replication in each database? UPD: The only way I see to synchronize is to continuously write timestamps into service tables in each database and to check these timestamps on the replicated servers. Is that an acceptable solution?
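
    A sketch of the timestamp idea from the update, with illustrative names: each publisher maintains a heartbeat row that replicates along with the data, and the mapper only trusts rows up to the older of the two heartbeats:

        -- on each publisher, refreshed every few seconds by a job, and replicated
        CREATE TABLE dbo.ReplicationHeartbeat
        (
            SourceDb      sysname      NOT NULL PRIMARY KEY,
            LastPublished datetime2(3) NOT NULL
        );

        -- on the mapping side: the common safe point across both subscriptions
        SELECT MIN(LastPublished) AS SafeUpTo
        FROM (SELECT LastPublished FROM ReadOnlyA.dbo.ReplicationHeartbeat
              UNION ALL
              SELECT LastPublished FROM ReadOnlyB.dbo.ReplicationHeartbeat) AS h;

    Mapping only rows stamped at or before SafeUpTo keeps orders and lines mutually consistent, at the cost of lagging by at most one heartbeat interval - and since both models are temporal, the time-slice fetch can use that cutoff directly.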

    Read the article

  • Help Optimizing MySQL Table (~ 500,000 records).

    - by Pyrite
    I have a MySQL table that collects player data from various game servers (Urban Terror). The bot that collects the data runs 24/7, and currently the table is up to about 475,000+ records. Because of this, querying this table from PHP has become quite slow. I wonder what I can do on the database side of things to make it as optimized as possible, so I can then focus on the application that queries the database. The table is as follows:

        CREATE TABLE IF NOT EXISTS `people` (
          `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(40) NOT NULL,
          `ip` int(4) unsigned NOT NULL,
          `guid` varchar(32) NOT NULL,
          `server` int(4) unsigned NOT NULL,
          `date` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `Person` (`name`,`ip`,`guid`),
          KEY `server` (`server`),
          KEY `date` (`date`),
          KEY `PlayerName` (`name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='People that Play on Servers' AUTO_INCREMENT=475843 ;

    I'm storing the IPv4 addresses (ip and server) as 4-byte integers, using the MySQL functions INET_ATON() and INET_NTOA() to encode and decode; I heard this way is faster than varchar(15). The guid is an md5sum, 32-char hex. Date is stored as a Unix timestamp. I have a unique key on name, ip and guid, so as to avoid duplicates of the same player. Do I have my keys set up right? Is the way I'm storing data efficient? Here is the code that queries this table. You search for a name, ip, or guid, and it takes the results of the query and cross-references other records that match the name, ip, or guid from the results of the first query, and does this for each field. This is kind of hard to explain, but basically: if I search for one player by name, I'll see every other name he has used, every IP he has used and every GUID he has used. <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> Search: <input type="text" name="query" id="query" /><input type="submit" name="btnSubmit" value="Submit" /> </form> <?php if (!empty($_POST['query'])) { ?> <table cellspacing="1" id="1up_people" class="tablesorter" width="300"> <thead> <tr> <th>ID</th> <th>Player Name</th> <th>Player IP</th> <th>Player GUID</th> <th>Server</th> <th>Date</th> </tr> </thead> <tbody> <?php function super_unique($array) { $result = array_map("unserialize", array_unique(array_map("serialize", $array))); foreach ($result as $key => $value) { if ( is_array($value) ) { $result[$key] = super_unique($value); } } return $result; } if (!empty($_POST['query'])) { $query = trim($_POST['query']); $count = 0; $people = array(); $link = mysql_connect('localhost', 'mysqluser', 'yea right!'); if (!$link) { die('Could not connect: ' . 
mysql_error()); } mysql_select_db("1up"); $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name LIKE \"%$query%\" OR INET_NTOA(ip) LIKE \"%$query%\" OR guid LIKE \"%$query%\")"; $result = mysql_query($sql, $link); if (!$result) { die(mysql_error()); } // Now take the initial results and parse each column into its own array while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } // now for each name, ip, guid in results, find additonal records $people2 = array(); foreach ($people AS $person) { $ip = $person['ip']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (ip = \"$ip\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people2[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } $people3 = array(); foreach ($people AS $person) { $guid = $person['guid']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (guid = \"$guid\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people3[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } $people4 = array(); foreach ($people AS $person) { $name = $person['name']; $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name = \"$name\")"; $result = mysql_query($sql, $link); while ($row = mysql_fetch_array($result, MYSQL_NUM)) { $name = htmlspecialchars($row[1]); $people4[] = array( 'id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5] ); } } // Combine people and people2 into just people $people = array_merge($people, $people2); $people = array_merge($people, $people3); $people = array_merge($people, $people4); $people = super_unique($people); foreach ($people AS $person) { $date = ($person['date']) ? date("M d, Y", $person['date']) : 'Before 8/1/10'; echo "<tr>\n"; echo "<td>".$person['id']."</td>"; echo "<td>".$person['name']."</td>"; echo "<td>".$person['ip']."</td>"; echo "<td>".$person['guid']."</td>"; echo "<td>".$person['server']."</td>"; echo "<td>".$date."</td>"; echo "</tr>\n"; $count++; } // Find Total Records //$result = mysql_query("SELECT id FROM 1up_people", $link); //$total = mysql_num_rows($result); mysql_close($link); } ?> </tbody> </table> <p> <?php echo $count." Records Found for \"".$_POST['query']."\" out of $total"; ?> </p> <?php } $time_stop = microtime(true); print("Done (ran for ".round($time_stop-$time_start)." seconds)."); ?> Any help at all is appreciated! Thank you.
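
    For reference, the integer round trip the schema relies on (these are the exact MySQL built-ins):

        SELECT INET_ATON('192.168.0.1');  -- 3232235521, fits the 4-byte unsigned int column
        SELECT INET_NTOA(3232235521);     -- '192.168.0.1'

    One caveat in the search code above: wrapping the column in a function, as in INET_NTOA(ip) LIKE "%$query%", stops MySQL from using the index on ip; when the query is an exact address, comparing ip = INET_ATON('the.exact.ip.here') keeps the lookup indexed.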

    Read the article

  • Most efficient way to maintain a 'set' in SQL Server?

    - by SEVEN YEAR LIBERAL ARTS DEGREE
    I have ~2 million rows of data, each row with an artificial PK and two ID fields (so: PK, ID1, ID2). I have a unique constraint (and index) on ID1+ID2. I get two sorts of updates, both with a distinct ID1 per update:

        1. 100-1000 rows of all-new data (ID1 is new)
        2. 100-1000 rows of largely, but not necessarily completely, overlapping data (ID1 already exists, maybe with new ID1+ID2 pairs)

    What's the most efficient way to maintain this 'set'? Here are the options as I see them:

        1. Delete all the rows with ID1, insert all the new rows (yikes)
        2. Query all the existing rows from the set of new data ID1+ID2, only insert the new rows
        3. Insert all the new rows, ignore inserts that trigger unique constraint violations

    Any thoughts?
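
    Option 2 can be a single set-based statement instead of a row-by-row check - a sketch where MySet is the 2-million-row table and #batch is the incoming 100-1000 rows, both placeholder names:

        INSERT INTO dbo.MySet (ID1, ID2)
        SELECT b.ID1, b.ID2
        FROM   #batch AS b
        WHERE  NOT EXISTS (SELECT 1
                           FROM   dbo.MySet AS t
                           WHERE  t.ID1 = b.ID1
                             AND  t.ID2 = b.ID2);

    Since each batch carries a single ID1, the NOT EXISTS probes all land in one narrow range of the unique index, which is about as cheap as the duplicate check can get.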

    Read the article
