Search Results

Search found 7311 results on 293 pages for 'rows'.


  • How to SELECT the last 10 rows of an SQL table which has no ID field?

    - by Edward Tanguay
    I have a MySQL table with 25,000 rows. It was imported from a CSV file, so I want to look at the last ten rows to make sure everything was imported. However, since there is no ID column, I can't say: SELECT * FROM big_table ORDER BY id DESC. What SQL statement would show me the last 10 rows of this table? The structure of the table is simply this: the columns are A, B, C, D, ..., AA, AB, AC, ... (like Excel) and all fields are of type TEXT.
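
    A hedged sketch of two possible approaches (neither is from the question; big_table follows the name above, and the id column is my addition). Without an ordering column, "last 10 rows" is not well defined in SQL, so the reliable fix is to add a surrogate key; the LIMIT-with-offset form merely relies on physical insert order, which the server does not guarantee.

      -- Reliable: add an AUTO_INCREMENT key (numbered in the order MySQL reads the existing rows), then sort on it.
      ALTER TABLE big_table ADD id INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
      SELECT * FROM big_table ORDER BY id DESC LIMIT 10;

      -- Quick check only: skip the first 24,990 rows and show the remaining 10 in whatever order the engine returns them.
      SELECT * FROM big_table LIMIT 24990, 10;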

    Read the article

  • Faster way to know the total number of rows in MySQL database?

    - by Starx
    If I need to know the total number of rows in a database table, I do something like this: $query = "SELECT * FROM tablename WHERE link='1';"; $result = mysql_query($query); $rows = mysql_fetch_array($result); $count = count($rows); So, as you can see, the row count is recovered by scanning through the entire table. Is there a better way?
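
    A hedged aside: as written, mysql_fetch_array() returns only the first row of the result set, so count($rows) is really the number of fields in that row, not the number of matching rows. A standard alternative (plain SQL, not from the original post) is to let the server do the counting:

      SELECT COUNT(*) AS row_count FROM tablename WHERE link = '1';

    The query returns a single row whose row_count column holds the total, so only one value crosses the wire instead of the whole table.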

    Read the article

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in m

    - by Wesley
    So I have a table of data that is 10,000 lines long. Several of the columns in the table simply describe information about one of the columns, meaning that only one column has the content and the rest of the columns describe the location of the content (it's for a book). Right now, only 6,000 of the 10,000 rows have their content column filled in; rows 6,000-10,000's content column simply says null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following: UPDATE table(10,000) SET content_column = (SELECT content FROM table(6,000-10,000) WHERE table(10,000).id = table(6,000-10,000).id) Which kind of works; the only problem is that it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... Pretty strange, I thought, anyway. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
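
    A hedged sketch of one way to avoid this (the table names book_rows and extra_content are hypothetical stand-ins for the two tables described above). The correlated subquery returns NULL for every row that has no match in the second table, which is exactly rows 1-6,000; restricting the UPDATE to rows that do have a match leaves their existing content intact.

      -- Portable form: guard the update with WHERE EXISTS so unmatched rows are left alone.
      UPDATE book_rows
      SET content_column = (SELECT e.content FROM extra_content e WHERE e.id = book_rows.id)
      WHERE EXISTS (SELECT 1 FROM extra_content e WHERE e.id = book_rows.id);

      -- MySQL-style join update, same effect:
      UPDATE book_rows b
      JOIN extra_content e ON e.id = b.id
      SET b.content_column = e.content;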

    Read the article

  • MySQL command-line tool: How to find out number of rows affected by a DELETE?

    - by ambivalence
    I'm trying to run a script that deletes a bunch of rows in a MySQL (InnoDB) table in batches, by executing the following in a loop: mysql --user=MyUser --password=MyPassword MyDatabase < SQL_FILE where SQL_FILE contains a DELETE FROM ... LIMIT X command. I need to keep running this loop until there are no more matching rows. But unlike running in the mysql shell, the above command does not return the number of rows affected. I've tried -v and -t but neither works. How can I find out how many rows the batch script affected? Thanks!
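
    One hedged possibility (not from the original question): have SQL_FILE report the count itself by selecting MySQL's ROW_COUNT() immediately after the DELETE. The mysql client prints the result of the SELECT even when reading from a file, so the calling script can parse the number and stop the loop when it reaches zero. The table name and predicate below are placeholders.

      DELETE FROM my_table WHERE some_condition LIMIT 1000;
      SELECT ROW_COUNT() AS rows_deleted;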

    Read the article

  • In SQL find the combination of rows whose sum add up to a specific amount (or amt in other table)

    - by SamH
    Table_1: D_ID integer, Deposit_amt integer. Table_2: Total_ID, Total_amt integer. Is it possible to write a select statement to find all the rows in Table_1 whose Deposit_amt values sum to a Total_amt in Table_2? There are multiple rows in both tables. Say the first row in Table_2 has Total_amt = 100. I would want to know that in Table_1 the rows with D_ID 2, 6, 12 sum to 100, the rows with D_ID 2, 3, 42 sum to 100, etc. Help appreciated. Let me know if I need to clarify. I am asking this question because someone, as part of her job, has a list of transactions and a list of totals, and she needs to find the possible lists of transactions that could have created each total. I agree this sounds dangerous, as finding a combination of transactions that sums to a total does not guarantee that they created the total. I wasn't aware it is an NP-complete problem.
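
    A hedged sketch of a brute-force recursive CTE (SQL Server 2008+ syntax assumed; table and column names are taken from the question). Because this is the subset-sum problem, the number of combinations grows exponentially, so it is only practical for small tables; the running-sum bound prunes branches that already exceed the largest total.

      DECLARE @max_total int = (SELECT MAX(Total_amt) FROM Table_2);

      WITH combos (last_id, running_sum, deposit_ids) AS (
          SELECT D_ID, Deposit_amt, CAST(D_ID AS varchar(max))
          FROM Table_1
          UNION ALL
          SELECT t.D_ID,
                 c.running_sum + t.Deposit_amt,
                 c.deposit_ids + ',' + CAST(t.D_ID AS varchar(max))
          FROM combos c
          JOIN Table_1 t ON t.D_ID > c.last_id        -- keep IDs ascending to avoid duplicate sets
          WHERE c.running_sum + t.Deposit_amt <= @max_total
      )
      SELECT t2.Total_ID, c.deposit_ids, c.running_sum
      FROM combos c
      JOIN Table_2 t2 ON t2.Total_amt = c.running_sum
      OPTION (MAXRECURSION 0);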

    Read the article

  • Creating new Entities from Stored Procedure

    - by SK
    I have a stored procedure that retrieves existing rows from a table and also creates and includes new rows that match the table definition and the mapped entity (.NET 3.5 Entity Framework). These new rows are not written to the database in the stored procedure. The stored procedure executes, but the new rows that were created will not load their navigation properties successfully, i.e. the rows that do not actually exist in the database. For example, the database rows are:

      key, data, FK
      1, xxx, a
      2, xxx, b

    The rows returned from the stored procedure are:

      key, data, FK
      1, xxx, a
      2, xxx, b
      3, yyy, a
      4, yyy, b

    The entity will load FK entities a and b for rows 1 and 2, but for rows 3 and 4 the FK entity is null. Do I somehow need to add the new rows to the data context? Or turn off tracking?

    Read the article

  • how to query sqlite for certain rows, i.e. dividing it into pages (perl DBI)

    - by user1380641
    Sorry for my noob question. I'm currently writing a Perl web application with an SQLite database behind it. I would like to be able to show query results in my app which might run to thousands of rows - these should be split into pages, and routing should be like /webapp/N, where N is the page number. What is the correct way to query the SQLite db using DBI in order to fetch only the relevant rows? For instance, if I show 25 rows per page, I want to query the db for rows 1-25 on the first page, 26-50 on the second page, etc. Thanks in advance!
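
    A hedged sketch of the usual SQLite paging pattern (the table and column names below are placeholders, not from the question): LIMIT/OFFSET with a deterministic ORDER BY, where the offset is computed from the page number, e.g. (N - 1) * 25 in the Perl code, and bound as a placeholder through DBI.

      SELECT *
      FROM results
      ORDER BY id              -- paging needs a stable order
      LIMIT 25 OFFSET ?;       -- bind (page_number - 1) * 25 here

    Note that OFFSET still makes SQLite walk past the skipped rows, so for very deep pages a keyset approach (WHERE id > last_seen_id ... LIMIT 25) scales better.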

    Read the article

  • [MYSQL] select statement that combines similar rows with certain ids?

    - by vegatron
    Hi, I have a warehouse_products table which defines how many of each product are in the warehouses. So let's say I have 20 records/rows in the table; some rows may contain the same product id but in a different warehouse. I need to create a select statement that gives every product one row, and in this row I must have the quantity in warehouse A and warehouse B, so in the end I will get, for example, 10 rows that contain all the data:

      id  domain_id  warehouse_id  wh_product_id  quantity
      2   1          2             2              84
      3   1          1             3              221
      4   1          3             3              0
      5   1          1             3              14
      6   1          1             2              73
      7   1          1             1              123
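
    A hedged sketch of one common way to flatten this into one row per product: conditional aggregation. Which warehouse_id corresponds to warehouse A or B is my assumption (1 and 2 below); adjust to the real mapping.

      SELECT wh_product_id,
             SUM(CASE WHEN warehouse_id = 1 THEN quantity ELSE 0 END) AS qty_warehouse_a,
             SUM(CASE WHEN warehouse_id = 2 THEN quantity ELSE 0 END) AS qty_warehouse_b
      FROM warehouse_products
      GROUP BY wh_product_id;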

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates Memory requirement at compile time, when stored procedure or other plan caching mechanisms like sp_executesql or prepared statement are used, the memory requirement is estimated based on first set of execution parameters. This is a common reason for spill over tempdb and hence poor performance. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article which covers an introduction and Query memory for Sort. In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a query does not change significantly based on predicates.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts  Let’s create a Customer’s State table that has 99% of customers in NY and the rest 1% in WA.Customers table used in Part I of this article is also used here.To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'. --Example provided by www.sqlworkshops.com drop table CustomersState go create table CustomersState (CustomerID int primary key, Address char(200), State char(2)) go insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers update CustomersState set State = 'NY' where CustomerID % 100 != 1 update CustomersState set State = 'WA' where CustomerID % 100 = 1 go update statistics CustomersState with fullscan go   Let’s create a stored procedure that joins customers with CustomersState table with a predicate on State. --Example provided by www.sqlworkshops.com create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1) end go  Let’s execute the stored procedure first with parameter value ‘WA’ – which will select 1% of data. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' goThe stored procedure took 294 ms to complete.  The stored procedure was granted 6704 KB based on 8000 rows being estimated.  The estimated number of rows, 8000 is similar to actual number of rows 8000 and hence the memory estimation should be ok.  There was no Hash Warning in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Now let’s execute the stored procedure with parameter value ‘NY’ – which will select 99% of data. 
-Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  The stored procedure took 2922 ms to complete.   The stored procedure was granted 6704 KB based on 8000 rows being estimated.    The estimated number of rows, 8000 is way different from the actual number of rows 792000 because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘WA’ in our case. This underestimation will lead to spill over tempdb, resulting in poor performance.   There was Hash Warning (Recursion) in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Let’s recompile the stored procedure and then let’s first execute the stored procedure with parameter value ‘NY’.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts, www.sqlworkshops.com/webcasts for further details.   exec sp_recompile CustomersByState go --Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  Now the stored procedure took only 1046 ms instead of 2922 ms.   The stored procedure was granted 146752 KB of memory. The estimated number of rows, 792000 is similar to actual number of rows of 792000. Better performance of this stored procedure execution is due to better estimation of memory and avoiding spill over tempdb.   There was no Hash Warning in SQL Profiler.   Now let’s execute the stored procedure with parameter value ‘WA’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go  The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms.    This stored procedure was granted more memory (146752 KB) than necessary (6704 KB) based on parameter value ‘NY’ for estimation (792000 rows) instead of parameter value ‘WA’ for estimation (8000 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘NY’ in this case. This overestimation leads to poor performance of this Hash Match operation, it might also affect the performance of other concurrently executing queries requiring memory and hence overestimation is not recommended.     The estimated number of rows, 792000 is much more than the actual number of rows of 8000.  Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined data range.Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, recompile) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 297 ms and 1102 ms in line with previous optimal execution times.   The stored procedure with parameter value ‘WA’ has good estimation like before.   Estimated number of rows of 8000 is similar to actual number of rows of 8000.   
The stored procedure with parameter value ‘NY’ also has good estimation and memory grant like before because the stored procedure was recompiled with current set of parameter values.  Estimated number of rows of 792000 is similar to actual number of rows of 792000.    The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.   There was no Hash Warning in SQL Profiler.   Let’s recreate the stored procedure with optimize for hint of ‘NY’. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, optimize for (@State = 'NY')) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 353 ms with parameter value ‘WA’, this is much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory. The stored procedure with parameter value ‘NY’ has optimal execution time like before.   The stored procedure with parameter value ‘WA’ has overestimation of rows because of optimize for hint value of ‘NY’.   Unlike before, more memory was estimated to this stored procedure based on optimize for hint value ‘NY’.    The stored procedure with parameter value ‘NY’ has good estimation because of optimize for hint value of ‘NY’. Estimated number of rows of 792000 is similar to actual number of rows of 792000.   Optimal amount memory was estimated to this stored procedure based on optimize for hint value ‘NY’.   There was no Hash Warning in SQL Profiler.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. 
When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case.   I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.  Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.   Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan
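
    The article recommends DBCC FREEPROCCACHE with a specific plan_handle rather than sp_recompile. A hedged sketch of how one might locate the handle for the CustomersByState procedure used above (the DMV query is my own construction, not from the article; run it in the database that owns the procedure):

      SELECT cp.plan_handle
      FROM sys.dm_exec_cached_plans cp
      CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
      WHERE st.dbid = DB_ID() AND st.objectid = OBJECT_ID('dbo.CustomersByState');

      -- Evict just that plan, substituting the handle returned above:
      -- DBCC FREEPROCCACHE (0x...);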

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the biggest asks I have always received from developers is whether there is any way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert values into a table they have to write multiple INSERT statements. First of all, this is not only boring, it is also very time consuming. Additionally, one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table.

      -- Insert Multiple Values into SQL Server
      CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));

    Method 1: Traditional Method of INSERT…VALUE

      -- Method 1 - Traditional Insert
      INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First');
      INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second');
      INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third');
      -- Clean up
      TRUNCATE TABLE #SQLAuthority;

    Method 2: INSERT…SELECT

      -- Method 2 - Select Union Insert
      INSERT INTO #SQLAuthority (ID, Value)
      SELECT 1, 'First'
      UNION ALL
      SELECT 2, 'Second'
      UNION ALL
      SELECT 3, 'Third';
      -- Clean up
      TRUNCATE TABLE #SQLAuthority;

    Method 3: SQL Server 2008+ Row Construction

      -- Method 3 - SQL Server 2008+ Row Construction
      INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third');
      -- Clean up
      DROP TABLE #SQLAuthority;

    Related Tips in SQL in Sixty Seconds:
    SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL
    SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’. i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variables and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a  slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE) If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well- slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1) . To this was added eight rows with sequential numbers up to 9. When this was joined to a single-tow Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increase, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Tables Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
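
    For readers who want to reproduce the flavour of the random-number experiments described above, here is a hedged sketch of my own (it is not Phil Factor's original script): two table-variable heaps filled with random integers, joined with and without OPTION (RECOMPILE) so the difference in the chosen plans can be compared.

      -- Fill two table-variable heaps with 30,000 random integers each.
      DECLARE @a TABLE (n int);
      DECLARE @b TABLE (n int);

      INSERT INTO @a (n)
      SELECT TOP (30000) ABS(CHECKSUM(NEWID())) % 100000
      FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;

      INSERT INTO @b (n)
      SELECT TOP (30000) ABS(CHECKSUM(NEWID())) % 100000
      FROM sys.all_columns c1 CROSS JOIN sys.all_columns c2;

      -- Without the hint the optimizer has to guess at the table-variable cardinalities...
      SELECT COUNT(*) FROM @a a JOIN @b b ON a.n = b.n;

      -- ...with it, the plan is compiled against the actual row counts.
      SELECT COUNT(*) FROM @a a JOIN @b b ON a.n = b.n OPTION (RECOMPILE);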

    Read the article

  • How do I print values of an array in Three Rows?

    - by Ramkumar
    I need this output:

      1 3 5
      2 4 6

    I want to use an array function like array(1,2,3,4,5,6). If I edit this array to array(1,2,3), the output needs to show:

      1 2 3

    The concept is a maximum of 3 columns only. If we give array(1,2,3,4,5), the output should be:

      1 3 5
      2 4

    Suppose we give array(1,2,3,4,5,6,7,8,9); then the output is:

      1 4 7
      2 5 8
      3 6 9

    That is, a maximum of 3 columns only. Depending upon the given input, the rows will be created with 3 columns. Is this possible with PHP? I am doing a small research & development exercise with array functions. I think this is possible. Will you help me? For more info:
    * input: array(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
    * output:
      1 6 11
      2 7 12
      3 8 13
      4 9 14
      5 10 15

    Read the article

  • What would you do if you just had this code dumped in your lap?

    - by chickeninabiscuit
    Man, I just had this project given to me - expand on this they say. This is an example of ONE function: <?php //500+ lines of pure wonder. function page_content_vc($content) { global $_DBH, $_TPL, $_SET; $_SET['ignoreTimezone'] = true; lu_CheckUpdateLogin(); if($_SESSION['dash']['VC']['switch'] == 'unmanned' || $_SESSION['dash']['VC']['switch'] == 'touchscreen') { if($content['page_name'] != 'vc') { header('Location: /vc/'); die(); } } if($_GET['l']) { unset($_SESSION['dash']['VC']); if($loc_id = lu_GetFieldValue('ID', 'Location', $_GET['l'])) { if(lu_CheckPermissions('vc', $loc_id)) { $timezone = lu_GetFieldValue('Time Zone', 'Location', $loc_id, 'ID'); if(strlen($timezone) > 0) { $_SESSION['time_zone'] = $timezone; } $_SESSION['dash']['VC']['loc_ID'] = $loc_id; header('Location: /vc/'); die(); } } } if($_SESSION['dash']['VC']['loc_ID']) { $timezone = lu_GetFieldValue('Time Zone', 'Location', $_SESSION['dash']['VC']['loc_ID'], 'ID'); if(strlen($timezone) > 0) { $_SESSION['time_zone'] = $timezone; } $loc_id = $_SESSION['dash']['VC']['loc_ID']; $org_id = lu_GetFieldValue('record_ID', 'Location', $loc_id); $_TPL->assign('loc_id', $loc_id); $location_name = lu_GetFieldValue('Location Name', 'Location', $loc_id); $_TPL->assign('LocationName', $location_name); $customer_name = lu_GetFieldValue('Customer Name', 'Organisation', $org_id); $_TPL->assign('CustomerName', $customer_name); $enable_visitor_snap = lu_GetFieldValue('VisitorSnap', 'Location', $loc_id); $_TPL->assign('EnableVisitorSnap', $enable_visitor_snap); $lacps = explode("\n", lu_GetFieldValue('Location Access Control Point', 'Location', $loc_id)); array_walk($lacps, 'trim_value'); if(count($lacps) > 0) { if(count($lacps) == 1) { $_SESSION['dash']['VC']['lacp'] = $lacps[0]; } else { if($_GET['changeLACP'] && in_array($_GET['changeLACP'], $lacps)) { $_SESSION['dash']['VC']['lacp'] = $_GET['changeLACP']; header('Location: /vc/'); die(); } else if(!in_array($_SESSION['dash']['VC']['lacp'], $lacps)) { $_SESSION['dash']['VC']['lacp'] = $lacps[0]; } $_TPL->assign('LACP_array', $lacps); } $_TPL->assign('current_LACP', $_SESSION['dash']['VC']['lacp']); $_TPL->assign('showContractorSearch', true); /* if($contractorStaff = lu_GetTableRow('ContractorStaff', $org_id, 'record_ID', 'record_Inactive != "checked"')) { foreach($contractorStaff['rows'] as $contractor) { $lacp_rights = lu_OrganiseCustomDataFunctionMultiselect($contractor[lu_GetFieldName('Location Access Rights', 'ContractorStaff')]); if(in_array($_SESSION['dash']['VC']['lacp'], $lacp_rights)) { $_TPL->assign('showContractorSearch', true); } } } */ } $selectedOptions = explode(',', lu_GetFieldValue('Included Fields', 'Location', $_SESSION['dash']['VC']['loc_ID'])); $newOptions = array(); foreach($selectedOptions as $selOption) { $so_array = explode('|', $selOption, 2); if(count($so_array) > 1) { $newOptions[$so_array[0]] = $so_array[1]; } else { $newOptions[$so_array[0]] = "Both"; } } if($newOptions[lu_GetFieldName('Expected Length of Visit', 'Visitor')]) { $alert = false; if($visitors = lu_OrganiseVisitors( lu_GetTableRow('Visitor', 'checked', lu_GetFieldName('Checked In', 'Visitor'), lu_GetFieldName('Location for Visit', 'Visitor').'="'.$_SESSION['dash']['VC']['loc_ID'].'" AND '.lu_GetFieldName('Checked Out', 'Visitor').' 
!= "checked"'), false, true, true)) { foreach($visitors['rows'] as $key => $visitor) { if($visitor['expected'] && $visitor['expected'] + (60*30) < time()) { $alert = true; } } } if($alert == true) { $_TPL->assign('showAlert', 'red'); } else { //$_TPL->assign('showAlert', 'green'); } } $_TPL->assign('switch', $_SESSION['dash']['VC']['switch']); if($_SESSION['dash']['VC']['switch'] == 'touchscreen') { $_TPL->assign('VC_unmanned', true); } if($_GET['check'] == 'in') { if($_SESSION['dash']['VC']['switch'] == 'touchscreen') { lu_CheckInTouchScreen(); } else { lu_CheckIn(); } } else if($_GET['check'] == 'out') { if($_SESSION['dash']['VC']['switch'] == 'touchscreen') { lu_CheckOutTouchScreen(); } else { lu_CheckOut(); } } else if($_GET['switch'] == 'unmanned') { $_SESSION['dash']['VC']['switch'] = 'unmanned'; if($_GET['printing'] == true && (lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "No" && lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "")) { $_SESSION['dash']['VC']['printing'] = true; } else { $_SESSION['dash']['VC']['printing'] = false; } header('Location: /vc/'); die(); } else if($_GET['switch'] == 'touchscreen') { $_SESSION['dash']['VC']['switch'] = 'touchscreen'; if($_GET['printing'] == true && (lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "No" && lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "")) { $_SESSION['dash']['VC']['printing'] = true; } else { $_SESSION['dash']['VC']['printing'] = false; } header('Location: /vc/'); die(); } else if($_GET['switch'] == 'manned') { if($_POST['password']) { if(md5($_POST['password']) == $_SESSION['dash']['password']) { unset($_SESSION['dash']['VC']['switch']); //setcookie('email', "", time() - 3600); //setcookie('location', "", time() - 3600); header('Location: /vc/'); die(); } else { $_TPL->assign('switchLoginError', 'Incorrect Password'); } } $_TPL->assign('switchLogin', 'true'); } else if($_GET['m'] == 'visitor') { lu_ModifyVisitorVC(); } else if($_GET['m'] == 'enote') { lu_ModifyEnoteVC(); } else if($_GET['m'] == 'medical') { lu_ModifyMedicalVC(); } else if($_GET['print'] == 'label' && $_GET['v']) { lu_PrintLabelVC(); } else { unset($_SESSION['dash']['VC']['checkin']); unset($_SESSION['dash']['VC']['checkout']); $_TPL->assign('icon', 'GroupCheckin'); if($_SESSION['dash']['VC']['switch'] != 'unmanned' && $_SESSION['dash']['VC']['switch'] != 'touchscreen') { $staff_ids = array(); if($staffs = lu_GetTableRow('Staff', $_SESSION['dash']['VC']['loc_ID'], 'record_ID')) { foreach($staffs['rows'] as $staff) { $staff_ids[] = $staff['ID']; } } if($_GET['view'] == "tomorrow") { $dateStart = date('Y-m-d', mktime(0, 0, 0, date("m") , date("d")+1, date("Y"))); $dateEnd = date('Y-m-d', mktime(0, 0, 0, date("m") , date("d")+1, date("Y"))); } else if($_GET['view'] == "month") { $dateStart = date('Y-m-d', mktime(0, 0, 0, date("m"), date("d"), date("Y"))); $dateEnd = date('Y-m-d', mktime(0, 0, 0, date("m"), date("d")+30, date("Y"))); } else if($_GET['view'] == "week") { $dateStart = date('Y-m-d', mktime(0, 0, 0, date("m"), date("d"), date("Y"))); $dateEnd = date('Y-m-d', mktime(0, 0, 0, date("m"), date("d")+7, date("Y"))); } else { $dateStart = date('Y-m-d'); $dateEnd = date('Y-m-d'); } if(lu_GetFieldValue('Enable Survey', 'Location', $_SESSION['dash']['VC']['loc_ID']) == 'checked' && lu_GetFieldValue('Add Survey', 'Location', $_SESSION['dash']['VC']['loc_ID']) == 'checked') { $_TPL->assign('enableSurvey', true); } 
//lu_GetFieldName('Checked In', 'Visitor') //!= "checked" //date('d/m/Y'), lu_GetFieldName('Date of Visit', 'Visitor') if($visitors = lu_OrganiseVisitors(lu_GetTableRow('Visitor', $_SESSION['dash']['VC']['loc_ID'], lu_GetFieldName('Location for Visit', 'Visitor'), lu_GetFieldName('Checked In', 'Visitor').' != "checked" AND '.lu_GetFieldName('Checked Out', 'Visitor').' != "checked" AND '.lu_GetFieldName('Date of Visit', 'Visitor').' >= "'.$dateStart.'" AND '.lu_GetFieldName('Date of Visit', 'Visitor').' <= "'.$dateEnd.'"'))) { foreach($visitors['days'] as $day => $visitors_day) { foreach($visitors_day['rows'] as $key => $visitor) { $visitors['days'][$day]['rows'][$key]['visiting'] = lu_GetTableRow('Staff', $visitor['record_ID'], 'ID'); $visitors['days'][$day]['rows'][$key]['visiting']['notify'] = $_DBH->getRow('SELECT * FROM lu_notification WHERE ent_ID = "'.$visitor['record_ID'].'"'); } } //array_dump($visitors); $_TPL->assign('visitors', $visitors); } if($_GET['conGroup']) { if($_GET['action'] == 'add') { $_SESSION['dash']['VC']['conGroup'][$_GET['conGroup']] = $_GET['conGroup']; } else { unset($_SESSION['dash']['VC']['conGroup'][$_GET['conGroup']]); } } if(count($_SESSION['dash']['VC']['conGroup']) > 0) { if($conGroupResult = lu_GetTableRow('ContractorStaff', '1', '1', ' ID IN ('.implode(',', $_SESSION['dash']['VC']['conGroup']).')')) { if($_POST['_submit'] == 'Check-In Group >>') { $form = lu_GetForm('VisitorStandard'); $standarddata = array(); foreach($form['items'] as $key=>$item) { $standarddata[$key] = $_POST[lu_GetFieldName($item['name'], 'Visitor')]; } foreach($conGroupResult['rows'] as $conStaff) { $data = $standarddata; foreach($form['items'] as $key=>$item) { if($key != 'ID' && $key != 'record_ID' && $conStaff[lu_GetFieldName(lu_GetNameField($key, 'Visitor'), 'ContractorStaff')]) { $data[$key] = $conStaff[lu_GetFieldName(lu_GetNameField($key, 'Visitor'), 'ContractorStaff')]; } } $data['record_ID'] = $data[lu_GetFieldName('Visiting', 'Visitor')]; $data[lu_GetFieldName('Date of Visit', 'Visitor')] = date('Y-m-d'); $data[lu_GetFieldName('Time of Visit', 'Visitor')] = date('H:i'); $data[lu_GetFieldName('Checked In', 'Visitor')] = 'checked'; $data[lu_GetFieldName('Location for Visit', 'Visitor')] = $_SESSION['dash']['VC']['loc_ID']; $data[lu_GetFieldName('ConStaff ID', 'Visitor')] = $conStaff['ID']; $data[lu_GetFieldName('From', 'Visitor')] = lu_GetFieldValue('Legal Name', 'Contractor', $conStaff[lu_GetFieldName('Contractor', 'ContractorStaff')]); $id = lu_UpdateData($form, $data); lu_VisitorCheckIn($id); //array_dump($data); //array_dump($id); } unset($_SESSION['dash']['VC']['conGroup']); header('Location: /vc/'); die(); } if(count($conGroupResult['rows'])) { foreach($conGroupResult['rows'] as $key => $cstaff) { $conGroupResult['rows'][$key]['contractor'] = lu_GetTableRow('Contractor', $cstaff[lu_GetFieldName('Contractor', 'ContractorStaff')], 'ID'); } $_TPL->assign('conGroupResult', $conGroupResult); } $conGroupForm = lu_GetForm('VisitorConGroup'); $conGroupForm = lu_OrganiseVisitorForm($conGroupForm, $_SESSION['dash']['VC']['loc_ID'], 'Contractor'); $secure_options_array = lu_GetSecureOptions($org_id); if($secure_options_array[$_SESSION['dash']['VC']['loc_ID']]) { $conGroupForm['items'][lu_GetFieldName('Secure Area', 'Visitor')]['options']['values'] = $secure_options_array[$_SESSION['dash']['VC']['loc_ID']]; $conGroupForm['items'][lu_GetFieldName('Secure Area', 'Visitor')]['name'] = 'Secure Area'; } else { unset($conGroupForm['items'][lu_GetFieldName('Secure Area', 'Visitor')]); 
} if($secure_options_array) { $form['items'][lu_GetFieldName('Secure Area', 'Visitor')]['options']['values'] = $secure_options_array; $form['items'][lu_GetFieldName('Secure Area', 'Visitor')]['name'] = 'Secure Area'; } else { unset($form['items'][lu_GetFieldName('Secure Area', 'Visitor')]); } $_TPL->assign('conGroupForm', $conGroupForm); $_TPL->assign('hideFormCancel', true); } } if($_GET['searchVisitors']) { $_TPL->assign('searchVisitorsQuery', $_GET['searchVisitors']); $where = ''; if($_GET['searchVisitorsIn'] == 'Yes') { $where .= ' AND '.lu_GetFieldName('Checked In', 'Visitor').' = "checked"'; $_TPL->assign('searchVisitorsIn', 'Yes'); } else { $where .= ' AND '.lu_GetFieldName('Checked In', 'Visitor').' != "checked"'; $_TPL->assign('searchVisitorsIn', 'No'); } if($_GET['searchVisitorsOut'] == 'Yes') { $where = ''; $where .= ' AND '.lu_GetFieldName('Checked Out', 'Visitor').' = "checked"'; $_TPL->assign('searchVisitorsOut', 'Yes'); } else { $where .= ' AND '.lu_GetFieldName('Checked Out', 'Visitor').' != "checked"'; $_TPL->assign('searchVisitorsOut', 'No'); } if($searchVisitors = lu_OrganiseVisitors(lu_GetTableRow('Visitor', $_GET['searchVisitors'], '#search#', lu_GetFieldName('Location for Visit', 'Visitor').'="'.$_SESSION['dash']['VC']['loc_ID'].'"'.$where))) { foreach($searchVisitors['rows'] as $key => $visitor) { $searchVisitors['rows'][$key]['visiting'] = lu_GetTableRow('Staff', $visitor['record_ID'], 'ID'); } $_TPL->assign('searchVisitors', $searchVisitors); } else { $_TPL->assign('searchVisitorsNotFound', true); } } else if($_GET['searchStaff']) { if($_POST['staff_id']) { if(lu_CheckPermissions('staff', $_POST['staff_id'])) { $_DBH->query('UPDATE '.lu_GetTableName('Staff').' SET '.lu_GetFieldName('Current Location', 'Staff').' = "'.$_POST['current_location'].'" WHERE ID="'.$_POST['staff_id'].'"'); } } $locations = lu_GetTableRow('Location', $org_id, 'record_ID'); if(count($locations['rows']) > 1) { $_TPL->assign('staffLocations', $locations); } $loc_ids = array(); foreach($locations['rows'] as $location) { $loc_ids[] = $location['ID']; } // array_dump($locations); // array_dump($_POST); $_TPL->assign('searchStaffQuery', $_GET['searchStaff']); $where = ' AND record_Inactive != "checked"'; if($_GET['searchStaffIn'] == 'Yes' && $_GET['searchStaffOut'] != 'Yes') { $where .= ' AND ('.lu_GetFieldName('Staff Status', 'Staff').' = "" OR '.lu_GetFieldName('Staff Status', 'Staff').' = "On-Site")'. $_TPL->assign('searchStaffIn', 'Yes'); $_TPL->assign('searchStaffOut', 'No'); } else if($_GET['searchStaffOut'] == 'Yes' && $_GET['searchStaffIn'] != 'Yes') { $where .= ' AND ('.lu_GetFieldName('Staff Status', 'Staff').' != "" AND '.lu_GetFieldName('Staff Status', 'Staff').' != "On-Site")'. 
$_TPL->assign('searchStaffOut', 'Yes'); $_TPL->assign('searchStaffIn', 'No'); } else { $_TPL->assign('searchStaffOut', 'Yes'); $_TPL->assign('searchStaffIn', 'Yes'); } if($searchStaffs = lu_GetTableRow('Staff', $_GET['searchStaff'], '#search#', 'record_ID IN ('.implode(',', $loc_ids).')'.$where, lu_GetFieldName('First Name', 'Staff').','.lu_GetFieldName('Surname', 'Staff'))) { $_TPL->assign('searchStaffs', $searchStaffs); } else { $_TPL->assign('searchStaffNotFound', true); } } else if($_GET['searchContractor']) { $_TPL->assign('searchContractorQuery', $_GET['searchContractor']); //$where = ' AND '.lu_GetTableName('ContractorStaff').'.record_Inactive != "checked"'; $where = ' '; if($_GET['searchContractorIn'] == 'Yes' && $_GET['searchContractorOut'] != 'Yes') { $where .= ' AND ('.lu_GetFieldName('Onsite Status', 'ContractorStaff').' = "Onsite")'; $_TPL->assign('searchContractorIn', 'Yes'); $_TPL->assign('searchContractorOut', 'No'); } else if($_GET['searchContractorOut'] == 'Yes' && $_GET['searchContractorIn'] != 'Yes') { $where .= ' AND ('.lu_GetFieldName('Onsite Status', 'ContractorStaff').' != "Onsite")'. $_TPL->assign('searchContractorOut', 'Yes'); $_TPL->assign('searchContractorIn', 'No'); } else { $_TPL->assign('searchContractorOut', 'Yes'); $_TPL->assign('searchContractorIn', 'Yes'); } $join = 'LEFT JOIN '.lu_GetTableName('Contractor').' ON '.lu_GetTableName('Contractor').'.ID = '.lu_GetTableName('ContractorStaff').'.'.lu_GetFieldName('Contractor', 'ContractorStaff'); $extrasearch = array ( lu_GetTableName('Contractor').'.'.lu_GetFieldName('Legal Name', 'Contractor') ); if($searchContractorResult = lu_GetTableRow('ContractorStaff', $_GET['searchContractor'], '#search#', lu_GetTableName('ContractorStaff').'.record_ID = "'.$org_id.'" '.$where, lu_GetFieldName('First Name', 'ContractorStaff').','.lu_GetFieldName('Surname', 'ContractorStaff'), $join, $extrasearch)) { /* foreach($searchContractorResult['rows'] as $key=>$contractor) { $lacp_rights = lu_OrganiseCustomDataFunctionMultiselect($contractor[lu_GetFieldName('Location Access Rights', 'ContractorStaff')]); if(!in_array($_SESSION['dash']['VC']['lacp'], $lacp_rights)) { unset($searchContractorResult['rows'][$key]); } } */ if(count($searchContractorResult['rows'])) { foreach($searchContractorResult['rows'] as $key => $cstaff) { /* if($cstaff[lu_GetFieldName('Onsite_Status', 'Contractor')] == 'Onsite')) { if($visitor['rows'][0][lu_GetFieldName('ConStaff ID', 'Visitor')]) { $_DBH->query('UPDATE '.lu_GetTableName('ContractorStaff').' SET '.lu_GetFieldName('Onsite Status', 'ContractorStaff').' 
= "" WHERE ID="'.$visitor['rows'][0][lu_GetFieldName('ConStaff ID', 'Visitor')].'"'); } } */ if($cstaff[lu_GetFieldName('SACN Expiry Date', 'ContractorStaff')] != '0000-00-00') { if(strtotime($cstaff[lu_GetFieldName('SACN Expiry Date', 'ContractorStaff')]) < time()) { $searchContractorResult['rows'][$key]['sacn_expiry'] = true; } else { $searchContractorResult['rows'][$key]['sacn_expiry'] = false; } } else { $searchContractorResult['rows'][$key]['sacn_expiry'] = false; } if($cstaff[lu_GetFieldName('Induction Valid Until', 'ContractorStaff')] != '0000-00-00') { if(strtotime($cstaff[lu_GetFieldName('Induction Valid Until', 'ContractorStaff')]) < time()) { $searchContractorResult['rows'][$key]['induction_expiry'] = true; } else { $searchContractorResult['rows'][$key]['induction_expiry'] = false; } } else { $searchContractorResult['rows'][$key]['induction_expiry'] = false; } $searchContractorResult['rows'][$key]['contractor'] = lu_GetTableRow('Contractor', $cstaff[lu_GetFieldName('Contractor', 'ContractorStaff')], 'ID'); } $_TPL->assign('searchContractorResult', $searchContractorResult); } else { $_TPL->assign('searchContractorNotFound', true); } } else { $_TPL->assign('searchContractorNotFound', true); } } $occupancy = array(); $occupancy['staffNumber'] = $_DBH->getOne('SELECT count(*) FROM '.lu_GetTableName('Staff').' WHERE record_ID = "'.$_SESSION['dash']['VC']['loc_ID'].'" AND record_Inactive != "checked" AND '.lu_GetFieldName('Ignore Counts', 'Staff').' != "checked"'); $occupancy['staffNumberOnsite']= $_DBH->getOne( 'SELECT count(*) FROM '.lu_GetTableName('Staff').' WHERE ( (record_ID = "'.$_SESSION['dash']['VC']['loc_ID'].'" AND ('.lu_GetFieldName('Staff Status', 'Staff').' = "" OR '.lu_GetFieldName('Staff Status', 'Staff').' = "On-Site")) OR '.lu_GetFieldName('Current Location', 'Staff').' = "'.$_SESSION['dash']['VC']['loc_ID'].'") AND record_Inactive != "checked" AND '.lu_GetFieldName('Ignore Counts', 'Staff').' != "checked"'); $occupancy['visitorsOnsite'] = $_DBH->getOne('SELECT count(*) FROM '.lu_GetTableName('Visitor').' WHERE '.lu_GetFieldName('Location for Visit', 'Visitor').' = "'.$_SESSION['dash']['VC']['loc_ID'].'" AND '.lu_GetFieldName('Checked In', 'Visitor').' = "checked" AND '.lu_GetFieldName('Checked Out', 'Visitor').' != "checked"'); $_TPL->assign('occupancy', $occupancy); if($enotes = lu_GetTableRow('Enote', $org_id, 'record_ID', lu_GetFieldName('Note Emailed', 'Enote').' = "0000-00-00" AND '.lu_GetFieldName('Note Passed On', 'Enote').' 
!= "Yes"')) { $_TPL->assign('EnoteNotice', true); } if($medical = lu_GetTableRow('MedicalRoom', $_SESSION['dash']['VC']['loc_ID'], 'record_ID', 'record_Inactive != "Yes"')) { $_TPL->assign('MedicalNotice', true); } if(lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "No" && lu_GetFieldValue('Printing', 'Location', $_SESSION['dash']['VC']['loc_ID']) != "") { $_TPL->assign('UnmannedPrinting', true); } } else { if($_SESSION['dash']['VC']['printing'] == true) { $_TPL->assign('UnmannedPrinting', true); } } // enable if contractor check-in buttons should be enabled if(lu_GetFieldValue('Enable Contractor Check In', 'Location', $_SESSION['dash']['VC']['loc_ID']) == "checked") { $_TPL->assign('ContractorCheckin', true); } } if($_SESSION['dash']['entity_id'] && $_GET['fixupCon'] == 'true') { $conStaffs = lu_GetTableRow('ContractorStaff', $_SESSION['dash']['ModifyConStaffs']['org_ID'], 'record_ID', '', lu_GetFieldName('First Name', 'ContractorStaff').','.lu_GetFieldName('Surname', 'ContractorStaff')); foreach($conStaffs['rows'] as $key => $cstaff) { if($cstaff[lu_GetFieldName('Site Access Card Number', 'ContractorStaff')] && $cstaff[lu_GetFieldName('Site Access Card Type', 'ContractorStaff')]) { echo $cstaff['ID'].' '; $_DBH->query('UPDATE '.lu_GetTableName('Visitor').' SET '.lu_GetFieldName('Site Access Card Number', 'Visitor').' = "'.$cstaff[lu_GetFieldName('Site Access Card Number', 'ContractorStaff')].'", '.lu_GetFieldName('Site Access Card Type', 'Visitor').' = "'.$cstaff[lu_GetFieldName('Site Access Card Type', 'ContractorStaff')].'" WHERE '.lu_GetFieldName('ConStaff ID', 'Visitor').'="'.$cstaff['ID'].'"'); } } } } else { if($_SESSION['dash']['staffs']) { foreach($_SESSION['dash']['staffs']['rows'] as $staff) { if($staff[lu_GetFieldName('Reception Manager', 'Staff')] == 'checked') { $loc_id = $staff['record_ID']; unset($_SESSION['dash']['VC']); if($loc_id = lu_GetFieldValue('ID', 'Location', $loc_id)) { $_SESSION['dash']['VC']['loc_ID'] = $loc_id; header('Location: /vc/'); die(); } } } } $_TPL->assign('mode', 'public'); } $content['page_content'] = $_TPL->fetch('modules/vc.htm'); return $content; } ?> die();die();die();die();die(); This question will probably be closed - i just need some support from my coding brothers and sisters. *SOB*

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc.. I also listed a module named “Wind (zh-CN)”, which is created by one of my friend, Jeff Zhao (zh-CN). Now I would like to use a separated post to introduce this module since I feel it brings a new async programming style in not only Node.js but JavaScript world. If you know or heard about the new feature in C# 5.0 called “async and await”, or you learnt F#, you will find the “Wind” brings the similar async programming experience in JavaScript. By using “Wind”, we can write async code that looks like the sync code. The callbacks, async stats and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s firstly back to my second post in this series. As I mentioned in that post, when we wanted to read some records from SQL Server we need to open the database connection, and then execute the query. In Node.js all IO operation are designed as async callback pattern which means when the operation was done, it will invoke a function which was taken from the last parameter. For example the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming if we need to copy some data from this database to another then we need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In the real project the logic would be more complicated. This means our application might be messed up and the business process will be fragged by many callback functions. I would like call this “Dense Callback Phobia”. This might be a challenge how to make code straightforward and easy to read, something like below. 
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 
1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. 
By using Wind there will be almost no callbacks, and the code will be very easy to understand. Currently Wind is still under development and being improved. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues   Source code can be downloaded here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What can be used instead of Datatable in LINQ

    - by Kabi
    I have a SQL query that returns a DataTable: var routesTable = _dbhelper.Select("SELECT [RouteId],[UserId],[SourceName],[CreationTime] FROM [Routes] WHERE UserId=@UserId AND RouteId=@RouteId", inputParams); and then we can work with the DataTable object routesTable: if (routesTable.Rows.Count == 1) { result = new Route(routeId) { Name = (string)routesTable.Rows[0]["SourceName"], Time = routesTable.Rows[0]["CreationTime"] is DBNull ? new DateTime() : Convert.ToDateTime(routesTable.Rows[0]["CreationTime"]) }; result.TrackPoints = GetTrackPointsForRoute(routeId); } I want to change this code to LINQ, but I don't know how to simulate the DataTable in LINQ. I wrote this part: Route result = null; aspnetdbDataContext aspdb = new aspnetdbDataContext(); var Result = from r in aspdb.RouteLinqs where r.UserId == userId && r.RouteId==routeId select r; .... but I don't know how to change this part: if (routesTable.Rows.Count == 1) { result = new Route(routeId) { Name = (string)routesTable.Rows[0]["SourceName"], Time = routesTable.Rows[0]["CreationTime"] is DBNull ? new DateTime() : Convert.ToDateTime(routesTable.Rows[0]["CreationTime"]) }; Would you please tell me how I can do this? EDIT: here is the whole block of code in its original form: public Route GetById(int routeId, Guid userId) { Route result = null; var inputParams = new Dictionary<string, object> { {"UserId", userId}, {"RouteId", routeId} }; var routesTable = _dbhelper.Select("SELECT [RouteId],[UserId],[SourceName],[CreationTime] FROM [Routes] WHERE UserId=@UserId AND RouteId=@RouteId", inputParams); if (routesTable.Rows.Count == 1) { result = new Route(routeId) { Name = (string)routesTable.Rows[0]["SourceName"], Time = routesTable.Rows[0]["CreationTime"] is DBNull ? new DateTime() : Convert.ToDateTime(routesTable.Rows[0]["CreationTime"]) }; result.TrackPoints = GetTrackPointsForRoute(routeId); } return result; }
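    One possible direction (only a sketch, not a verified implementation): let LINQ to SQL fetch the single matching entity and project it into the Route object. This assumes the generated RouteLinq class exposes SourceName and a nullable CreationTime property matching the columns selected in the original SQL; adjust the property names to whatever the designer actually generated.

        public Route GetById(int routeId, Guid userId)
        {
            Route result = null;
            aspnetdbDataContext aspdb = new aspnetdbDataContext();

            // SingleOrDefault() returns null when no row matches (and throws if more
            // than one does), which stands in for the routesTable.Rows.Count == 1 check.
            var row = aspdb.RouteLinqs
                           .SingleOrDefault(r => r.UserId == userId && r.RouteId == routeId);

            if (row != null)
            {
                result = new Route(routeId)
                {
                    Name = row.SourceName,
                    // assumes CreationTime is DateTime?; a NULL in the database becomes new DateTime()
                    Time = row.CreationTime ?? new DateTime()
                };
                result.TrackPoints = GetTrackPointsForRoute(routeId);
            }

            return result;
        }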

    Read the article

  • Search option: jqGrid + PHP

    - by Felix Guerrero
    Hi, I'm trying to get work this code: $("#list").jqGrid({ url: 'docente.php', datatype: 'json', mtype: 'POST', postData: {c : "", n: "", a: "", d: "", t: "", e: "", f: "", g: $('#selectD').value(), p: "", call: "report"}, colNames: ['C&eacute;dula','Nombres', 'Apellidos', 'Direcci&oacute;n', 'E-mail','Tel&eacute;fono', 'Profesi&oacute;n'], colModel: [ { name:'rows.cedula', index: 'cedula', search:true, jsonmap: 'cedula', width: 150, align: 'left', sortable:true}, { name:'rows.nombre', index: 'nombre', jsonmap: 'nombre', width: 150, align: 'left'}, { name:'rows.apellido', index: 'apellido', jsonmap: 'apellido', width: 240, align: 'left'}, { name:'rows.direccion', index: 'direccion', jsonmap: 'direccion', width: 330, align: 'left'}, { name:'rows.email', index: 'email', jsonmap: 'email',width: 200, align: 'left'}, { name:'rows.telefono', index: 'telefono', jsonmap: 'telefono', width: 170, align: 'left'}, { name:'rows.descripcion_profesion', index: 'descripcion_profesion', jsonmap: 'descripcion_profesion',width: 120, align: 'left'}], pager: '#pager', rowNum: 8, autowidth: true, rowList: [10, 20], sortname: 'cedula', sortorder: 'asc', viewrecords: true, caption: 'Docentes', jsonReader : { root: "rows", repeatitems: false }, height: 350, width: 900 }); $("#list").jqGrid('navGrid','#pager',{search:true, searchtext: "buscar",edit:false,add:false,del:false}); The HMTL I'm using: <div id="reporte"> <table id="list"></table> <div id="pager"> </div> <div> And the PHP code: $total_pages = 0; $limit=10; $count = Profesor::CountAll(); if($count> 0) $total_pages= ceil($count/$limit); else $total_pages=0; $cnx = new PDO("mysql:host=localhost;dbname=appsms","root",""); $rows = array(); $strQuery="select cedula, nombre, apellido, direccion, telefono, email, descripcion_profesion from Profesor join Profesion on " . " (profesor.profesion_id = profesion.id) and (profesor.grado_id_grado = '" .$this->grado . "') order by cedula;"; $stmt = $cnx->prepare($strQuery); $stmt->execute(array("reporte")); $rows = $stmt->fetchAll(PDO::FETCH_ASSOC); $response->page = 1; $response->total= $total_pages; $response->records = sizeof($rows); $i=0; foreach($rows as $result){ $response->rows[$i] = $result; $i++; } echo json_encode($response); The JSON I get: {"page":1,"total":1,"records":2, "rows":[{"cedula":"v-108984103","nombre":"Soneia","apellido":"Rond\u00f3n Contreras","direccion":"El Rosal, calle 2-44","telefono":"04544247008457","email":"[email protected]","descripcion_profesion":"Docente"}, {"cedula":"v-135254741","nombre":"Judith","apellido":"Rangel M\u00e1rquez","direccion":"Sabaneta","telefono":"04247644152499","email":"","descripcion_profesion":"Docente"}]} It loads the query from PHP but the search option doesn't work neither the sorting function on "cedula"'s column. What I need to do?. Thanks in advance.

    Read the article

  • edit row in gridview

    - by user576998
    Hi.I would like to help me with my code. I have 2 gridviews. In the first gridview the user can choose with a checkbox every row he wants. These rows are transfered in the second gridview. All these my code does them well.Now, I want to edit the quantity column in second gridview to change the value but i don't know what i must write in edit box. Here is my code: using System; using System.Data; using System.Data.SqlClient; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Collections; public partial class ShowLand : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { BindPrimaryGrid(); BindSecondaryGrid(); } } private void BindPrimaryGrid() { string constr = ConfigurationManager.ConnectionStrings["conString"].ConnectionString; string query = "select * from Land"; SqlConnection con = new SqlConnection(constr); SqlDataAdapter sda = new SqlDataAdapter(query, con); DataTable dt = new DataTable(); sda.Fill(dt); gridview2.DataSource = dt; gridview2.DataBind(); } private void GetData() { DataTable dt; if (ViewState["SelectedRecords1"] != null) dt = (DataTable)ViewState["SelectedRecords1"]; else dt = CreateDataTable(); CheckBox chkAll = (CheckBox)gridview2.HeaderRow .Cells[0].FindControl("chkAll"); for (int i = 0; i < gridview2.Rows.Count; i++) { if (chkAll.Checked) { dt = AddRow(gridview2.Rows[i], dt); } else { CheckBox chk = (CheckBox)gridview2.Rows[i] .Cells[0].FindControl("chk"); if (chk.Checked) { dt = AddRow(gridview2.Rows[i], dt); } else { dt = RemoveRow(gridview2.Rows[i], dt); } } } ViewState["SelectedRecords1"] = dt; } private void SetData() { CheckBox chkAll = (CheckBox)gridview2.HeaderRow.Cells[0].FindControl("chkAll"); chkAll.Checked = true; if (ViewState["SelectedRecords1"] != null) { DataTable dt = (DataTable)ViewState["SelectedRecords1"]; for (int i = 0; i < gridview2.Rows.Count; i++) { CheckBox chk = (CheckBox)gridview2.Rows[i].Cells[0].FindControl("chk"); if (chk != null) { DataRow[] dr = dt.Select("id = '" + gridview2.Rows[i].Cells[1].Text + "'"); chk.Checked = dr.Length > 0; if (!chk.Checked) { chkAll.Checked = false; } } } } } private DataTable CreateDataTable() { DataTable dt = new DataTable(); dt.Columns.Add("id"); dt.Columns.Add("name"); dt.Columns.Add("price"); dt.Columns.Add("quantity"); dt.Columns.Add("total"); dt.AcceptChanges(); return dt; } private DataTable AddRow(GridViewRow gvRow, DataTable dt) { DataRow[] dr = dt.Select("id = '" + gvRow.Cells[1].Text + "'"); if (dr.Length <= 0) { dt.Rows.Add(); dt.Rows[dt.Rows.Count - 1]["id"] = gvRow.Cells[1].Text; dt.Rows[dt.Rows.Count - 1]["name"] = gvRow.Cells[2].Text; dt.Rows[dt.Rows.Count - 1]["price"] = gvRow.Cells[3].Text; dt.Rows[dt.Rows.Count - 1]["quantity"] = gvRow.Cells[4].Text; dt.Rows[dt.Rows.Count - 1]["total"] = gvRow.Cells[5].Text; dt.AcceptChanges(); } return dt; } private DataTable RemoveRow(GridViewRow gvRow, DataTable dt) { DataRow[] dr = dt.Select("id = '" + gvRow.Cells[1].Text + "'"); if (dr.Length > 0) { dt.Rows.Remove(dr[0]); dt.AcceptChanges(); } return dt; } protected void CheckBox_CheckChanged(object sender, EventArgs e) { GetData(); SetData(); BindSecondaryGrid(); } private void BindSecondaryGrid() { DataTable dt = (DataTable)ViewState["SelectedRecords1"]; gridview3.DataSource = dt; gridview3.DataBind(); } } and the source code is <asp:GridView ID="gridview2" runat="server" 
AutoGenerateColumns="False" DataKeyNames="id" DataSourceID="SqlDataSource5"> <Columns> <asp:TemplateField> <HeaderTemplate> <asp:CheckBox ID="chkAll" runat="server" onclick = "checkAll(this);" AutoPostBack = "true" OnCheckedChanged = "CheckBox_CheckChanged"/> </HeaderTemplate> <ItemTemplate> <asp:CheckBox ID="chk" runat="server" onclick = "Check_Click(this)" AutoPostBack = "true" OnCheckedChanged = "CheckBox_CheckChanged" /> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField="id" HeaderText="id" InsertVisible="False" ReadOnly="True" SortExpression="id" /> <asp:BoundField DataField="name" HeaderText="name" SortExpression="name" /> <asp:BoundField DataField="price" HeaderText="price" SortExpression="price" /> <asp:BoundField DataField="quantity" HeaderText="quantity" SortExpression="quantity" /> <asp:BoundField DataField="total" HeaderText="total" SortExpression="total" /> </Columns> </asp:GridView> <asp:SqlDataSource ID="SqlDataSource5" runat="server" ConnectionString="<%$ ConnectionStrings:ConnectionString %>" SelectCommand="SELECT * FROM [Land]"></asp:SqlDataSource> <br /> </div> <div> <asp:GridView ID="gridview3" runat="server" AutoGenerateColumns = "False" DataKeyNames="id" EmptyDataText = "No Records Selected" > <Columns> <asp:BoundField DataField = "id" HeaderText = "id" /> <asp:BoundField DataField = "name" HeaderText = "name" ReadOnly="True" /> <asp:BoundField DataField = "price" HeaderText = "price" DataFormatString="{0:c}" ReadOnly="True" /> <asp:TemplateField HeaderText="quantity"> <EditItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("quantity")%>'</asp:TextBox> </EditItemTemplate> <ItemTemplate> <asp:Label ID="Label1" runat="server" Text='<%# Bind("quantity") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField = "total" HeaderText = "total" DataFormatString="{0:c}" ReadOnly="True" /> <asp:CommandField ShowEditButton="True" /> </Columns> </asp:GridView> <asp:Label ID="totalLabel" runat="server"></asp:Label> <br /> </div> </form> </body> </html>
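    One possible way to make the quantity column editable (a sketch only, building on the code above): wire up the second grid's edit events and write the new value back into the DataTable kept in ViewState["SelectedRecords1"]. This assumes gridview3 is given OnRowEditing, OnRowCancelingEdit and OnRowUpdating attributes in the markup pointing at these handlers, and that "TextBox1" is the textbox declared in the quantity column's EditItemTemplate.

        protected void gridview3_RowEditing(object sender, GridViewEditEventArgs e)
        {
            gridview3.EditIndex = e.NewEditIndex;
            BindSecondaryGrid();
        }

        protected void gridview3_RowCancelingEdit(object sender, GridViewCancelEditEventArgs e)
        {
            gridview3.EditIndex = -1;
            BindSecondaryGrid();
        }

        protected void gridview3_RowUpdating(object sender, GridViewUpdateEventArgs e)
        {
            DataTable dt = (DataTable)ViewState["SelectedRecords1"];
            TextBox txtQuantity = (TextBox)gridview3.Rows[e.RowIndex].FindControl("TextBox1");

            // write the edited quantity back into the cached table
            dt.Rows[e.RowIndex]["quantity"] = txtQuantity.Text;
            // recompute the "total" column here as well if it should stay equal to price * quantity
            dt.AcceptChanges();
            ViewState["SelectedRecords1"] = dt;

            gridview3.EditIndex = -1;
            BindSecondaryGrid();
        }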

    Read the article

  • How do I print out objects in an array in python?

    - by Jonathan
    I'm writing a code which performs a k-means clustering on a set of data. I'm actually using the code from a book called collective intelligence by O'Reilly. Everything works, but in his code he uses the command line and i want to write everything in notepad++. As a reference his line is >>>kclust=clusters.kcluster(data,k=10) >>>[rownames[r] for r in k[0]] Here is my code: from PIL import Image,ImageDraw def readfile(filename): lines=[line for line in file(filename)] # First line is the column titles colnames=lines[0].strip( ).split('\t')[1:] rownames=[] data=[] for line in lines[1:]: p=line.strip( ).split('\t') # First column in each row is the rowname rownames.append(p[0]) # The data for this row is the remainder of the row data.append([float(x) for x in p[1:]]) return rownames,colnames,data from math import sqrt def pearson(v1,v2): # Simple sums sum1=sum(v1) sum2=sum(v2) # Sums of the squares sum1Sq=sum([pow(v,2) for v in v1]) sum2Sq=sum([pow(v,2) for v in v2]) # Sum of the products pSum=sum([v1[i]*v2[i] for i in range(len(v1))]) # Calculate r (Pearson score) num=pSum-(sum1*sum2/len(v1)) den=sqrt((sum1Sq-pow(sum1,2)/len(v1))*(sum2Sq-pow(sum2,2)/len(v1))) if den==0: return 0 return 1.0-num/den class bicluster: def __init__(self,vec,left=None,right=None,distance=0.0,id=None): self.left=left self.right=right self.vec=vec self.id=id self.distance=distance def hcluster(rows,distance=pearson): distances={} currentclustid=-1 # Clusters are initially just the rows clust=[bicluster(rows[i],id=i) for i in range(len(rows))] while len(clust)>1: lowestpair=(0,1) closest=distance(clust[0].vec,clust[1].vec) # loop through every pair looking for the smallest distance for i in range(len(clust)): for j in range(i+1,len(clust)): # distances is the cache of distance calculations if (clust[i].id,clust[j].id) not in distances: distances[(clust[i].id,clust[j].id)]=distance(clust[i].vec,clust[j].vec) #print 'i' #print i #print #print 'j' #print j #print d=distances[(clust[i].id,clust[j].id)] if d<closest: closest=d lowestpair=(i,j) # calculate the average of the two clusters mergevec=[ (clust[lowestpair[0]].vec[i]+clust[lowestpair[1]].vec[i])/2.0 for i in range(len(clust[0].vec))] # create the new cluster newcluster=bicluster(mergevec,left=clust[lowestpair[0]], right=clust[lowestpair[1]], distance=closest,id=currentclustid) # cluster ids that weren't in the original set are negative currentclustid-=1 del clust[lowestpair[1]] del clust[lowestpair[0]] clust.append(newcluster) return clust[0] def kcluster(rows,distance=pearson,k=4): # Determine the minimum and maximum values for each point ranges=[(min([row[i] for row in rows]),max([row[i] for row in rows])) for i in range(len(rows[0]))] # Create k randomly placed centroids clusters=[[random.random( )*(ranges[i][1]-ranges[i][0])+ranges[i][0] for i in range(len(rows[0]))] for j in range(k)] lastmatches=None for t in range(100): print 'Iteration %d' % t bestmatches=[[] for i in range(k)] # Find which centroid is the closest for each row for j in range(len(rows)): row=rows[j] bestmatch=0 for i in range(k): d=distance(clusters[i],row) if d<distance(clusters[bestmatch],row): bestmatch=i bestmatches[bestmatch].append(j) # If the results are the same as last time, this is complete if bestmatches==lastmatches: break lastmatches=bestmatches # Move the centroids to the average of their members for i in range(k): avgs=[0.0]*len(rows[0]) if len(bestmatches[i])>0: for rowid in bestmatches[i]: for m in range(len(rows[rowid])): avgs[m]+=rows[rowid][m] for j in 
range(len(avgs)): avgs[j]/=len(bestmatches[i]) clusters[i]=avgs return bestmatches

    Read the article

  • odp.net SQL query retrieve set of rows from two input arrays.

    - by Karl Trumstedt
    I have a table with a primary key consisting of two columns. I want to retrieve a set of rows based on two input arrays, each corresponding to one primary key column. select pkt1.id, pkt1.id2, ... from PrimaryKeyTable pkt1, table(:1) t1, table(:2) t2 where pkt1.id = t1.column_value and pkt1.id2 = t2.column_value I then bind the values with two int[] in odp.net. This returns all different combinations of my resulting rows. So if I am expecting 13 rows I receive 169 rows (13*13). The problem is that each value in t1 and t2 should be linked. Value t1[4] should be used with t2[4] and not all the different values in t2. Using distinct solves my problem, but I'm wondering if my approach is wrong. Anyone have any pointers on how to solve this the best way? One way might be to use a for-loop accessing each index in t1 and t2 sequentially, but I wonder what will be more efficient. Edit: actually distinct won't solve my problem, it just did it based on my input-values (all values in t2 = 0)

    Read the article

  • How to dynamic adding rows into asp.net table ?

    - by user359706
    How can I add rows to a table from the server side? if (!Page.IsPostBack) { Session["table"] = TableId; } else { TableId = (Table)Session["table"]; } protected void btnAddinRow_Click(object sender, EventArgs e) { num_row = (TableId.Rows).Count; TableRow r = new TableRow(); TableCell c1 = new TableCell(); TableCell c2 = new TableCell(); TextBox t = new TextBox(); t.ID = "textID" + num_row; t.EnableViewState = true; r.ID = "newRow" + num_row; c1.ID = "newC1" + num_row; c2.ID = "newC2" + num_row; c1.Text = "New Cell - " + num_row; c2.Controls.Add(t); r.Cells.Add(c1); r.Cells.Add(c2); TableId.Rows.Add(r); Session["table"] = TableId; } When debugging I can see the correct row count in "TableId", but the rows are not drawn. Have you got an idea about this issue? Thanks
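    A likely cause (a sketch of one common fix, not a verified answer): dynamically created rows are discarded after every request, so they have to be rebuilt on each postback before the page renders; keeping the Table control in Session does not bring them back. One approach is to remember only the row count (for example in ViewState) and recreate the rows in Page_Load every time, roughly like this, assuming TableId is the Table control from the markup:

        protected void Page_Load(object sender, EventArgs e)
        {
            // rebuild previously added rows on every request, postback or not
            int rowCount = ViewState["RowCount"] == null ? 0 : (int)ViewState["RowCount"];
            for (int i = 0; i < rowCount; i++)
            {
                AddRow(i);
            }
        }

        protected void btnAddinRow_Click(object sender, EventArgs e)
        {
            int rowCount = ViewState["RowCount"] == null ? 0 : (int)ViewState["RowCount"];
            AddRow(rowCount);
            ViewState["RowCount"] = rowCount + 1;
        }

        private void AddRow(int index)
        {
            TableRow r = new TableRow();
            TableCell c1 = new TableCell();
            TableCell c2 = new TableCell();
            TextBox t = new TextBox();
            t.ID = "textID" + index;   // stable IDs let posted textbox values survive postbacks
            r.ID = "newRow" + index;
            c1.ID = "newC1" + index;
            c2.ID = "newC2" + index;
            c1.Text = "New Cell - " + index;
            c2.Controls.Add(t);
            r.Cells.Add(c1);
            r.Cells.Add(c2);
            TableId.Rows.Add(r);
        }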

    Read the article

  • Excel: Is it possible to have rows with sub-totals that doesn't ruin the total-total?

    - by Svish
    I have an Excel sheet which looks something like this: date text value other-value date text value other-value date text value other-value date text value other-value totals sum other-sum The sum is of course a sum of all the values above it. What I am wondering is whether there is a way I can add rows with a sub-total at arbitrary places throughout that list, for example a sub-total for each new month. My problem is of course that the total sum will become way too large because it will also sum the rows with the sub-totals. Does Excel have a feature that can help me have the sub-totals and a correct grand total? I'm using Excel 2010.
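    One built-in feature that may be worth a look (a suggestion, not a tested walkthrough): the SUBTOTAL function ignores any other SUBTOTAL results inside its range, so sub-total rows are not double-counted by the grand total. Assuming the values sit in column C with data in rows 2 to 400:

        Month sub-total row:  =SUBTOTAL(9, C2:C32)
        Grand total row:      =SUBTOTAL(9, C2:C400)

    The 9 selects SUM as the aggregation, and the grand total skips every nested SUBTOTAL cell within C2:C400.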

    Read the article

  • How do I combine data from multiple rows in excel to one cell?

    - by Steve
    I have a list of product skus in one column in excel. I have thousands of these skus that need to be combined in one cell separated by commas with no spaces. There are too many rows of data to use the concatenate function. Not sure how to get this done. Here's an example of what I'm working with but with 6,000+ more rows. I'm using Excel 2003. A 140-12 1074-156 903-78 876-65 349-09 986-43 237-12 342-11 450-187 677-133

    Read the article

  • Can I insert rows next to a locked column in Excel?

    - by Tom
    If I lock cells A1:A3000, is there a way to insert rows in columns B-Z? I highlight them and I don't get the option to insert even though it is selected in the lock options. (Bottom line is that I need column A static, not to move.) Any ideas? Is it even possible? Better yet, is there any way to have formulas in column A static, as I insert rows in column B? Column A formulas change cell location when I do so.

    Read the article

  • How can I get cross-browser consistent behavior for TR heights within a table with a set height? [migrated]

    - by Dan
    I have an arbitrary number of tables with an arbitrary number of rows in each, and all tables are the same height. My initial approach was to just set the overall height of the table and hope the rows were smart enough to distribute themselves appropriately. That's not the case. I have 4 different behaviors going on with 4 browsers, but I need them all to render at the very least in a similar way. Safari & Chrome (WebKit): All rows are equal height, creating scroll bars as needed and fitting within the table height. Firefox: All rows are the height necessary to fit their content, with the remaining rows overflowing out of the table. Additionally, if the content of the rows does not take up all of the height, only the part of the table with content in it takes the background (though it seems, through use of Firebug, that the actual table [and TR] extend to the bottom of the proper table height). IE: All rows are the height necessary to fit their content, with the remaining rows overflowing out of the table. Obviously this only covers one version of each browser, and additional variation would likely appear with more being tested. Ideally, the browser would render TRs with less content smaller than those with more content, while still scrolling within the variable-height TRs when the overall table height is not enough. I could potentially see a solution to achieve that with JS, but can it be done with CSS? Or, if not, can the behavior that WebKit displays be made to work across the browsers? Thanks! PS: Example can be found here.

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >