Search Results

Search found 31129 results on 1246 pages for 'number of rows of tiles'.


  • Strange data swapping error occurs when I attempt to update rows in my table from another table in my database

    - by Wesley
    So I have a table of data that is 10,000 rows long. Several of the columns simply describe the location of the content held in one column (it's for a book), so only one column has the actual content. Right now, the content column is filled for only the first 6,000 of the 10,000 rows; for rows 6,000-10,000 it is null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following (writing big_table for the 10,000-row table and small_table for the table holding the content for rows 6,000-10,000):

        UPDATE big_table
        SET content_column = (SELECT content
                              FROM small_table
                              WHERE big_table.id = small_table.id)

    This kind of works: it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... pretty strange, I thought, anyway. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
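    A hedged sketch of the usual fix (big_table and small_table remain placeholders for the asker's actual schema): the correlated subquery returns NULL for the 6,000 rows that have no match in the second table, so either restrict the UPDATE with a WHERE clause, or use MySQL's multi-table UPDATE, which only touches matching rows:

        -- Standard SQL: only update rows that actually have a match
        UPDATE big_table
        SET content_column = (SELECT s.content
                              FROM small_table s
                              WHERE s.id = big_table.id)
        WHERE EXISTS (SELECT 1 FROM small_table s WHERE s.id = big_table.id);

        -- MySQL multi-table form: non-matching rows are left untouched
        UPDATE big_table b
        JOIN small_table s ON s.id = b.id
        SET b.content_column = s.content;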

    Read the article

  • Faster way to know the total number of rows in a MySQL database?

    - by Starx
    If I need to know the total number of rows in a table of the database, I do something like:

        $query = "SELECT * FROM tablename WHERE link='1';";
        $result = mysql_query($query);
        $rows = mysql_fetch_array($result);
        $count = count($rows);

    So, as you see, the total number of rows is recovered by scanning through the entire table. Is there a better way?
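    A hedged sketch of the usual answer, using the same (now deprecated) mysql_* API as the question: let the database count instead of fetching every row.

        $query = "SELECT COUNT(*) FROM tablename WHERE link='1'";
        $result = mysql_query($query);
        $count = mysql_result($result, 0);   // single scalar, no rows transferred

    With an index on link, MySQL can often answer this from the index alone.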

    Read the article

  • MySQL command-line tool: How to find out number of rows affected by a DELETE?

    - by ambivalence
    I'm trying to run a script that deletes a bunch of rows in a MySQL (InnoDB) table in batches, by executing the following in a loop:

        mysql --user=MyUser --password=MyPassword MyDatabase < SQL_FILE

    where SQL_FILE contains a DELETE FROM ... LIMIT X command. I need to keep running this loop until there are no more matching rows. But unlike running in the mysql shell, the above command does not return the number of rows affected. I've tried -v and -t but neither works. How can I find out how many rows the batch script affected? Thanks!
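    A hedged sketch of one common approach: MySQL's ROW_COUNT() function returns the number of rows changed by the immediately preceding statement, so appending it to the batch file makes the client print a parseable count (my_table and some_condition stand in for the asker's actual DELETE):

        -- contents of SQL_FILE
        DELETE FROM my_table WHERE some_condition LIMIT 1000;
        SELECT ROW_COUNT();

    Running the file with --skip-column-names (-N) leaves just the number on stdout, which the surrounding shell loop can test against 0.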

    Read the article

  • True / false evaluation doesn't work as expected in Scheme

    - by ron
    I'm trying to compare two booleans:

        (if (equal? #f (string->number "123b")) "not a number" "indeed a number")

    When I run this at the DrRacket command line I get "not a number". However, when I put that piece of code into my larger code, the function doesn't return that string ("not a number"). Here's the code:

        (define (testing x y z)
          (define badInput "ERROR")
          (if (equal? #f (string->number "123b")) "not a number" "indeed a number")
          (display x))

    From the command line, (testing "123" 1 2) displays 123. Why? Furthermore, how can I return a value whenever I choose? Here is my "real" problem: I want to check the user's input, and return the error message if I need to before the rest of the code is executed, because otherwise I would run my algorithm on incorrect input:

        (define (convert originalNumber s_oldBase s_newBase)
          (define badInput "ERROR")
          ; Input check - if one of the inputs is not a number then return ERROR
          (if (equal? #f (string->number originalNumber)) badInput)
          (if (equal? #f (string->number s_oldBase)) badInput)
          (if (equal? #f (string->number s_newBase)) badInput)
          (define oldBase (string->number s_oldBase))
          (define newBase (string->number s_newBase))
          (define outDecimal (convertIntoDecimal originalNumber oldBase))
          (define result "")    ; holds the new number
          (define remainder 0)  ; remainder for each iteration
          (define whole 0)      ; the whole number after dividing
          (define temp 0)
          (do ()
              ((= outDecimal 0))                          ; stop when the decimal value reaches 0
            (set! whole (quotient outDecimal newBase))    ; calc the whole number
            (set! temp (* whole newBase))
            (set! remainder (- outDecimal temp))          ; calc the remainder
            (set! result (appending result remainder))    ; append the result
            (set! outDecimal (+ whole 0)))                ; set outDecimal = whole
          (if (> 1 0) (string->number (list->string (reverse (string->list result))))))

    This code won't run as posted since it uses another method that I didn't attach to the post (but it's irrelevant to the problem). Please take a look at those three ifs... I want to return "ERROR" if the user passes an incorrect value, for example (convert "23asb4" "b5" "9"). Thanks
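    A hedged sketch of the standard fix: a Scheme function body evaluates its expressions in sequence and returns only the value of the last one, so a one-armed if in the middle is computed and then discarded. To bail out early, make the validation the value of the whole function with cond (do-conversion is a hypothetical helper standing in for the original do-loop, which is elided here as in the question):

        (define (convert originalNumber s_oldBase s_newBase)
          (cond ((not (string->number originalNumber)) "ERROR")
                ((not (string->number s_oldBase))      "ERROR")
                ((not (string->number s_newBase))      "ERROR")
                (else
                 ;; all inputs are valid - run the conversion algorithm here
                 (do-conversion originalNumber
                                (string->number s_oldBase)
                                (string->number s_newBase)))))

    Because (string->number "123b") is #f, the first failing clause becomes the function's return value and nothing after it runs.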

    Read the article

  • Why is it an issue that it takes 2 digits to represent the number 10 in decimal?

    - by Crizly
    So we use hexadecimal, which has the advantage of going up to 15 in single digits A-F, but why is it an issue that it takes 2 digits to represent the number 10 in decimal? I was reading up about hexadecimal and I came across these two lines:

        Base 16 suggests the digits 0 to 15, but the problem we face is that it requires 2 digits to represent 10 to 15. Hexadecimal solves this problem by using the letters A to F.

    My question is: why do we care how many digits it takes to represent 15? What is the importance of this number, and how does denoting it with a single character have any value?
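    For context, a short illustration of my own (not part of the original question): each hex digit corresponds to exactly four bits, so byte values line up cleanly when every digit is a single character.

        1010 1111 (binary) = AF (hex) = 175 (decimal)

    If the values 10-15 were spelled with two characters, a numeral like "1015" would be ambiguous - it could mean the digit pair (10, 15), or (1, 0, 1, 5), or (10, 1, 5) - so positional notation only works when each digit is one symbol.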

    Read the article

  • In SQL, find the combinations of rows whose sum adds up to a specific amount (or an amount in another table)

    - by SamH
    Table_1: D_ID integer, Deposit_amt integer
    Table_2: Total_ID, Total_amt integer

    Is it possible to write a select statement to find all the rows in Table_1 whose Deposit_amt values sum to a Total_amt in Table_2? There are multiple rows in both tables. Say the first row in Table_2 has Total_amt = 100. I would want to know that in Table_1 the rows with D_ID 2, 6, 12 summed to 100, the rows with D_ID 2, 3, 42 summed to 100, etc. Help appreciated; let me know if I need to clarify. I am asking this because someone, as part of her job, has a list of transactions and a list of totals, and she needs to find the possible lists of transactions that could have created each total. I agree this sounds dangerous, as finding a combination of transactions that sums to a total does not guarantee that they created the total. I wasn't aware it is an NP-complete problem.
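    A hedged sketch for the restricted case of fixed-size combinations (here exactly three rows; the general subset-sum problem needs recursive SQL or application code, and the search space grows combinatorially):

        SELECT t.Total_ID, a.D_ID, b.D_ID, c.D_ID
        FROM Table_1 a
        JOIN Table_1 b ON b.D_ID > a.D_ID      -- avoid duplicate orderings
        JOIN Table_1 c ON c.D_ID > b.D_ID
        JOIN Table_2 t
          ON a.Deposit_amt + b.Deposit_amt + c.Deposit_amt = t.Total_amt;

    Repeating the pattern for sizes 1, 2, 4, ... and UNIONing the results covers small combination sizes, but the self-join cost climbs quickly with table size.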

    Read the article

  • Creating new Entities from Stored Procedure

    - by SK
    I have a stored procedure that retrieves existing rows from a table and also creates new rows that match the table definition and mapped entity (.NET 3.5 Entity Framework). These new rows are not written to the database in the stored procedure. The stored procedure executes, but the new rows - the ones that do not actually exist in the database - will not load their navigation properties successfully.

    E.g., database rows:

        key, data, FK
        1,   xxx,  a
        2,   xxx,  b

    Returned rows from the stored procedure:

        key, data, FK
        1,   xxx,  a
        2,   xxx,  b
        3,   yyy,  a
        4,   yyy,  b

    The entity will load FK entities a and b for rows 1 and 2, but for rows 3 and 4 the FK entity is null. Do I somehow need to add the new rows to the data context? Or turn off tracking?

    Read the article

  • how to query sqlite for certain rows, i.e. dividing results into pages (perl DBI)

    - by user1380641
    Sorry for my noob question; I'm currently writing a Perl web application with an SQLite database behind it. I would like to be able to show query results in my app which might run to thousands of rows. These should be split into pages, and routing should be like /webapp/N, where N is the page number. What is the correct way to query the SQLite db using DBI in order to fetch only the relevant rows? For instance, if I show 25 rows per page, I want to query the db for rows 1-25 for the first page, 26-50 for the second page, etc. Thanks in advance!
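    A hedged sketch using DBI placeholders with SQLite's LIMIT/OFFSET (the database file, table, and column names are placeholders, not the asker's actual schema):

        use DBI;

        my $dbh = DBI->connect("dbi:SQLite:dbname=app.db", "", "",
                               { RaiseError => 1 });

        my $per_page = 25;
        my $page     = $ARGV[0] // 1;            # the N from /webapp/N
        my $offset   = ($page - 1) * $per_page;

        # Fetch one page as an array of hashrefs; ORDER BY keeps pages stable.
        my $rows = $dbh->selectall_arrayref(
            "SELECT * FROM results ORDER BY id LIMIT ? OFFSET ?",
            { Slice => {} }, $per_page, $offset,
        );

    Note that OFFSET still walks past the skipped rows inside SQLite, so for very deep pages keyset pagination (WHERE id > ? ... LIMIT 25) scales better.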

    Read the article

  • [MYSQL] select statement that combines similar rows with certain ids?

    - by vegatron
    Hi, I have a warehouse_products table which records how many of each product are in the warehouses. So let's say I have 20 records/rows in the table; some rows may contain the same product id but a different warehouse. I need to create a select statement that gives every product one row, and in this row I must have the quantity in warehouse A and warehouse B, so in the end I will get, for example, 10 rows that contain all the data.

        id  domain_id  warehouse_id  wh_product_id  quantity
        2   1          2             2              84
        3   1          1             3              221
        4   1          3             3              0
        5   1          1             3              14
        6   1          1             2              73
        7   1          1             1              123
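    A hedged sketch using conditional aggregation (assuming warehouse_id 1 and 2 are warehouses A and B; adjust to the real ids):

        SELECT wh_product_id,
               SUM(CASE WHEN warehouse_id = 1 THEN quantity ELSE 0 END) AS qty_warehouse_a,
               SUM(CASE WHEN warehouse_id = 2 THEN quantity ELSE 0 END) AS qty_warehouse_b
        FROM warehouse_products
        GROUP BY wh_product_id;

    Each product collapses to one row, with the per-warehouse quantities pivoted into columns.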

    Read the article

  • polymorphism in C++

    - by user550413
    I am trying to implement these two functions:

        Number& DoubleClass::operator+(Number& x);
        Number& IntClass::operator+(Number& x);

    I am not sure how to do it (the intended behavior is explained in the comments below):

        class IntClass;
        class DoubleClass;

        class Number {
            // return a Number object that's the result of x+this, when x is
            // either IntClass or DoubleClass
            virtual Number& operator+(Number& x) = 0;
        };

        class IntClass : public Number {
        private:
            int my_number;
        public:
            // return a Number object that's the result of x+this.
            // The actual class of the returned object depends on x.
            // If x is IntClass, then the result is IntClass.
            // If x is DoubleClass, then the result is DoubleClass.
            Number& operator+(Number& x);
        };

        class DoubleClass : public Number {
        private:
            double my_number;
        public:
            // return a DoubleClass object that's the result of x+this.
            // This should work if x is either IntClass or DoubleClass.
            Number& operator+(Number& x);
        };
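    A hedged sketch of the classic double-dispatch technique, which is one standard way to solve this (not necessarily what the original assignment expected): the virtual operator+ bounces the call back through a second set of virtual overloads on the argument, so both operand types are known at the final call site.

        class IntClass;
        class DoubleClass;

        class Number {
        public:
            virtual ~Number() {}
            virtual Number& operator+(Number& x) = 0;  // first dispatch: on *this
            virtual Number& addTo(IntClass& x) = 0;    // second dispatch: on the argument
            virtual Number& addTo(DoubleClass& x) = 0;
        };

        class IntClass : public Number {
            int my_number;
        public:
            explicit IntClass(int n) : my_number(n) {}
            int value() const { return my_number; }
            Number& operator+(Number& x) { return x.addTo(*this); }
            Number& addTo(IntClass& x);        // Int + Int    -> IntClass
            Number& addTo(DoubleClass& x);     // Double + Int -> DoubleClass
        };

        class DoubleClass : public Number {
            double my_number;
        public:
            explicit DoubleClass(double n) : my_number(n) {}
            double value() const { return my_number; }
            Number& operator+(Number& x) { return x.addTo(*this); }
            Number& addTo(IntClass& x);        // Int + Double    -> DoubleClass
            Number& addTo(DoubleClass& x);     // Double + Double -> DoubleClass
        };

        // Caution: each result is heap-allocated and never freed here; a real
        // design would return by value or by smart pointer.
        Number& IntClass::addTo(IntClass& x)       { return *new IntClass(x.value() + my_number); }
        Number& IntClass::addTo(DoubleClass& x)    { return *new DoubleClass(x.value() + my_number); }
        Number& DoubleClass::addTo(IntClass& x)    { return *new DoubleClass(x.value() + my_number); }
        Number& DoubleClass::addTo(DoubleClass& x) { return *new DoubleClass(x.value() + my_number); }

    With this shape, a + b calls a.operator+(b), which calls b.addTo(a); overload resolution on a's static type plus virtual dispatch on b's dynamic type picks the right combination.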

    Read the article

  • Common and maximum number of virtual machines per server?

    - by Rabarberski
    For a project I am trying to get real-life estimates of the number of virtual machines per server, both typical and maximal. Of course, the maximum number of VMs depends on the type of applications (disk intensive, network intensive, ...) and on the server hardware (number of cores, memory, ...), but it would still be useful to know whether a typical maximum is about 10, 20 or 30 VMs per server. Can anybody give practical numbers?

    Read the article

  • Is there a way to bundle PDF files into a Kindle-friendly file?

    - by Maciej Swic
    I'm downloading PDF approach plates from Navigraph, and I have a folder per airport with files named after their corresponding approach/departure etc. Now I'd like to take such a folder with a bunch of PDF files, automatically generate an index, and combine them into a single .mobi file that I can send to my Kindle. The index can be very simple and consist of just the file names (without the extension); tapping an index item should jump to the correct page for that chart. I know there is a host of apps that combine comic-book JPGs into ebooks, but is there anything that does the above, please?

    Read the article

  • Can mod_fcgid maintain a hard-minimum number of available appserver processes?

    - by user9795
    ...and if so, how? I'm using Apache2 + mod_fcgid to serve a Perl Catalyst application on a box that I own, and I'd like mod_fcgid to maintain a minimum number of spun-up processes ready to go. The docs say that FcgidMinProcessesPerClass only enforces "a minimum number of processes that will be retained in a process class after finishing requests". How do I get Apache to start up with a certain number of appserver subprocesses on an idle server, without using artificial load to get there?
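    For reference, a hedged sketch of the directives in play (values illustrative; as the quoted docs say, neither directive pre-spawns processes on an idle server, which is exactly the asker's problem - mod_fcgid spawns on demand):

        # httpd.conf / vhost config
        FcgidMinProcessesPerClass 4    # floor kept only *after* processes have been spawned
        FcgidMaxProcessesPerClass 16   # ceiling per application class

    Hitting the minimum therefore still requires at least that many concurrent requests to have arrived at some point.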

    Read the article

  • What is the max supported number of SATA devices (using cable adapters) on a Dell SAS 6/iR adapter?

    - by Zac B
    I've got a Dell SAS 6/iR PCI-E adapter. I don't have a multiplier backplane. I'm planning on connecting SATA (non-SAS) drives. If I buy cable adapters only (ones that split a SAS connector on the card to a certain number of SATA cables), how many drives can I connect to this card?

    The way I see it, there are two limitations: a limitation imposed by the theoretical max number of devices supported on the card (which I've dug through the specs to find, but haven't seen yet), and a limitation imposed by the number of SAS plugs on the card multiplied by the number of SATA cables that come out of the highest-multiplying splitter I can buy. The answer to my question would be the minimum of those two limitations. I've seen 4x SATA coming out of some splitters; are there any that have more?

    Alternatively, if this is an RTFM question, does anyone have a good link to a "this is how SAS works, this is how you figure out the max number of devices, and this is how the concepts of 'ports', 'lanes', 'endpoint devices', and 'connectors' all relate in SAS-land" document? I've looked around in the Dell docs, but haven't found anything that explains this to someone at my level of understanding of SAN/enterprise storage technologies. Cheers!

    Read the article

  • How are the starting and ending row numbers of a Range obtained?

    - by Robert Kerr
    Given a user-selected Range, what is the simplest way to determine the starting row number and the ending row number? Range.Address returns a string containing any number of possible formats; there has to be something simpler. Desired:

        Dim oRange As Range
        Dim startRow As Integer
        Dim endRow As Integer

        Set oRange = Range("A1:X50")
        startRow = oRange.Address.StartRow
        endRow = oRange.Address.EndRow

    Of course, those properties do not exist. I also want to do the same to return column letters.
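    A hedged sketch using standard Range members (Row, Rows.Count, Columns.Count, Cells, and Address are all part of the Excel object model; Long is used because worksheets have more rows than Integer can hold):

        Dim oRange As Range
        Dim startRow As Long, endRow As Long
        Dim startCol As String, endCol As String

        Set oRange = Range("A1:X50")

        startRow = oRange.Row                           ' first row: 1
        endRow = oRange.Row + oRange.Rows.Count - 1     ' last row: 50

        ' Column letters: take a cell address like "$X$1" and keep the letter part.
        startCol = Split(oRange.Cells(1, 1).Address, "$")(1)                   ' "A"
        endCol = Split(oRange.Cells(1, oRange.Columns.Count).Address, "$")(1)  ' "X"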

    Read the article

  • Wondering about the Windows 7 serial number my laptop has, and its uses.

    - by overmann
    So that's the serial number of my pre-installed Windows copy, I take it. But am I allowed to use it again when, say, my system gets crippled by a sneaky virus? If I format my computer and install Windows Starter again from a USB drive (speculating - I've never formatted before, but I suppose it's completely possible), is that serial number still valid? I'm talking about the number printed on the back of my laptop.

    Read the article

  • What is the typical maximum number of database connections for Oracle running on a Windows server?

    - by Sake
    We are maintaining a database server that serves a large number of clients, each typically running several client applications. The total number of connections to the database server (Oracle 9i) reaches 800 at peak load, and the Windows 2003 server is starting to run out of memory. We are now planning to move to 64-bit Windows in order to gain higher memory capacity.

    As a developer, I suggest moving to a multi-tier architecture with connection pooling, which I believe is the natural solution to this problem. However, in order to support my idea, I want information on the following: what exactly is the typical number of connections allowed for an Oracle database? What is the problem when the number of connections is too high - too much memory consumption, too many sockets opened, or too much context switching between threads? To be a little more specific, how could an Oracle Forms application scale to thousands of users without facing this problem? Should Oracle RAC be applied in this case?

    I'm sure the answer depends on quite a number of factors, like the exact spec of the hardware being used; I'm expecting a rough estimate or some experience from the real world.

    Read the article

  • How do I find the Serial Number of a USB Drive?

    - by jamuraa
    I'm trying to re-enable USB autoplay in a secure way, by installing a program on each of the computers that I use so that I can run my launcher (PStart in this case) whenever I plug in my specific USB drive. The tool that I'm using to enable this - AutoRunGuard - needs the serial number of the USB drive that I am using. I can't figure out where to find this in Windows. Ideally I would not need to install and run a separate program for this (seemingly) simple task. Since this is a pretty easy question, bonus points if you also tell me how to discover it in Linux. What steps do I need to take to retrieve a USB drive's serial number?

    UPDATE: Just in case people come here looking for the answer for AutoRunGuard, I discovered that it wants not the USB device serial number but the volume serial number. The volume serial can be found by going into the command line, navigating to the drive, and executing dir; it appears in the top two lines. Use it without the dash.
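    A hedged sketch of the commands involved (device paths are examples, not the asker's actual drive):

        REM Windows: volume serial number of drive E:
        C:\> vol E:

        # Linux: volume serial/UUID, and the hardware serial of the USB device
        $ blkid /dev/sdb1
        $ udevadm info --query=property --name=/dev/sdb | grep ID_SERIAL
        $ lsusb -v | grep -i iSerial

    Note that vol and dir report the same volume serial, while udevadm/lsusb report the serial burned into the USB hardware itself - two different numbers, as the update above points out.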

    Read the article

  • how to know which display number to export in the DISPLAY variable when sshing to a server

    - by insidepower
    When I ssh to a server using -X, I'm always confused about which display number I should export. It seems to me the display number may already be in use by something, so all I can do is try

        export DISPLAY=localhost:0 && xclock
        export DISPLAY=localhost:1 && xclock
        export DISPLAY=localhost:2 && xclock
        export DISPLAY=localhost:... && xclock

    until the clock appears, and then use that display number. Each time I log in to the server, the display number that correctly tunnels the GUI data is different. I know many similar questions have been asked and answered, but I couldn't find the answer to this one; does anyone know? Thanks!
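    A hedged note on what usually resolves this: with ssh -X (or -Y), sshd sets DISPLAY for the session automatically, so there is nothing to guess - read the variable instead of overwriting it:

        $ ssh -X user@server
        $ echo $DISPLAY    # typically something like localhost:10.0
        $ xclock           # uses the forwarded display as-is

    Manually exporting DISPLAY=localhost:0 points at a different, non-forwarded X server, which is why the working number appears to move around between logins.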

    Read the article

  • How to auto dial 9+Number on Cisco 7941 redial?

    - by NotDan
    Is it possible to set up a Cisco 7941 phone to dial 9 before the redial number? When I view missed calls and try to redial one of the numbers, it always fails because it doesn't dial 9 first. I have to write the number down and then manually dial the 9 and then the number.

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates the memory requirement at compile time; when a stored procedure or another plan-caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spills to tempdb and hence poor performance. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article, which covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill to tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation/overestimation of memory for the Hash Match operation; Plan Caching and Query Memory Part I covers underestimation/overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    To read additional articles I wrote click here. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table with a predicate on State:

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which selects 1% of the data:

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, so the memory estimate is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which selects 99% of the data:
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is way off from the actual number of rows, 792000, because the estimate is based on the first set of parameter values supplied to the stored procedure, 'WA' in our case. This underestimation leads to a spill to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory, and the estimated number of rows, 792000, matches the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate, which avoids the spill to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA':

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because the estimate of 792000 rows was based on parameter value 'NY' rather than 'WA' (8000 rows): the estimate is based on the first set of parameter values supplied after the recompile, 'NY' in this case. This overestimation degrades the performance of the Hash Match operation, and it can also affect other concurrently executing queries that need memory, so overestimation is not recommended. The estimated number of rows, 792000, is much more than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint:

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY':

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate like before: the estimated number of rows, 8000, is similar to the actual number of rows, 8000.
    The execution with parameter value 'NY' also has a good estimate and memory grant like before, because the stored procedure was recompiled with the current set of parameter values: the estimated number of rows, 792000, is similar to the actual number of rows, 792000. The compilation time and compilation CPU of 1 ms are not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY':

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY':

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms we observed previously; this is because of the overestimation of memory. The execution with parameter value 'NY' has the optimal execution time like before. The execution with parameter value 'WA' overestimates the rows because of the optimize for hint value of 'NY': unlike before, more memory than needed was granted based on the hint. The execution with parameter value 'NY' has a good estimate because of the hint - the estimated number of rows, 792000, is similar to the actual number of rows, 792000 - and an optimal amount of memory was granted. There was no Hash Warning in SQL Profiler.

    This article covers underestimation/overestimation of memory for the Hash Match operation; Plan Caching and Query Memory Part I covers underestimation/overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory needs of other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can actually lead to poor performance.

    Summary: a cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this with the recompile hint, at the cost of compilation overhead; however, in most cases it is better to pay for compilation than to spill a sort to tempdb, which can be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one sorts more data than the hint anticipates, this will still lead to a spill; on the other side, there is also the possibility of overestimation causing unnecessary memory pressure for other concurrently executing queries, and in the case of Hash Match operations this overestimation of memory might itself lead to poor performance.
    When the values used in the optimize for hint have been archived from the database, the estimate will be wrong, leading to the worst performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in that case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3-day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the biggest asks I have always received from developers is whether there is any way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert values into a table, they write multiple insert statements. First of all, this is not only boring, it is also very time consuming; additionally, one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table.

        -- Insert Multiple Values into SQL Server
        CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));

    Method 1: Traditional INSERT...VALUES

        -- Method 1 - Traditional Insert
        INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First');
        INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second');
        INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third');

        -- Clean up
        TRUNCATE TABLE #SQLAuthority;

    Method 2: INSERT...SELECT

        -- Method 2 - Select Union Insert
        INSERT INTO #SQLAuthority (ID, Value)
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third';

        -- Clean up
        TRUNCATE TABLE #SQLAuthority;

    Method 3: SQL Server 2008+ Row Construction

        -- Method 3 - SQL Server 2008+ Row Construction
        INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third');

        -- Clean up
        DROP TABLE #SQLAuthority;

    Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL; SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor.

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can, and if we like your idea we promise to share educational material with you.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article
