Search Results

Search found 7311 results on 293 pages for 'rows'.


  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.
    2007
    Find Table without Clustered Index – Find Table with no Primary Key
    A clustered index is a very important concept for any table; it impacts performance heavily. Here is a quick script to find tables without a clustered index.
    Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types
    Question: "Is VARCHAR(MAX) big enough to store the TEXT field?" Answer: "Yes, VARCHAR(MAX) is big enough to accommodate a TEXT field. The TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server. SQL Server 2005 provides backward compatibility for these data types, but it is recommended to use the new data types, which are VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX)."
    Limiting Result Sets by Using TABLESAMPLE – Examples
    Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and are not in any order. The sampling can be based on a percentage or on a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set; a quick sketch follows below.
    User Defined Functions (UDF) Limitations
    UDFs have their own advantages and uses, but in this article we look at their limitations: things a UDF cannot do, and why stored procedures are considered more flexible than UDFs. This blog post is a good read to learn the limitations of UDFs.
    Change Database Compatible Level – Backward Compatibility
    For a long time SQL Server stayed on compatibility level 80, which is that of SQL Server 2000. However, as soon as SQL Server 2005 was introduced, compatibility became a major issue. Since Microsoft has been releasing new versions every 2-3 years, changing the compatibility level is an ever-popular topic. In this blog post, we learn how to do it using T-SQL. We can also do the same using SSMS, and here is the blog post for that: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio.
    Constraint on VARCHAR(MAX) Field To Limit It Certain Length
    "How can I limit a VARCHAR(MAX) field to a maximum length of 12500 characters?" The question was valid, as the application allowed only 12500 characters. This requirement is a bit strange, but if someone wants to do the same, they can, as described in this blog post.
    2008
    UNPIVOT Table Example
    Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the concept in very simple words.
    Create Default Constraint Over Table Column
    A simple, straight-to-the-script blog post – I still use it quite often for my own reference.
    UDF – Get the Day of the Week Function
    It took me four iterations to find this very simple function, which gets the day of the week in a single line.
    2009
    Find Hostname and Current Logged In User Name
    There are two tricks listed in this blog post with which users can find the hostname and the currently logged-in user name immediately and very easily.
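    As a quick, hedged illustration of the TABLESAMPLE clause summarized above (the Orders table and the sample sizes here are hypothetical):

    -- Sample roughly 10 percent of the table's pages; the row count is
    -- approximate and varies between runs because sampling is page-based.
    SELECT * FROM Orders TABLESAMPLE (10 PERCENT);

    -- Or ask for an approximate number of rows instead of a percentage.
    SELECT * FROM Orders TABLESAMPLE (100 ROWS);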
    Interesting Observation of Logon Trigger On All Servers
    When I was doing a project, I made an interesting observation: a logon trigger executing multiple times. It was absolutely unexpected for me! As I was logging in only once, I naturally expected the entry only once. However, it fired multiple times on different threads – indeed an eccentric phenomenon at first sight!
    Difference Between Candidate Keys and Primary Key
    One needs to be very careful in selecting the Primary Key, as an incorrect selection can adversely impact the database architecture and future normalization. For a Candidate Key to qualify as a Primary Key, it should be non-NULL and unique in any domain. I have observed quite often that Primary Keys are seldom changed. I would like to have your feedback on not changing a Primary Key.
    Create Multiple Filegroup For Single Database
    Why should one create multiple filegroups for a database, and what are the advantages of doing so? In this blog post, I explain this in detail.
    List All Objects Created on All Filegroups in Database
    In this blog post we discuss the essential question – "How can I find which object belongs to which filegroup? Is there any way to know this?"
    2010
    DATE and TIME in SQL Server 2008
    When DATE is converted to DATETIME it adds the time of midnight. When TIME is converted to DATETIME it adds the date of 1900, and this is something one wants to consider if you are going to run scripts from SQL Server 2008 against an earlier version with CONVERT.
    Disabled Index and Update Statistics
    If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time statistics are updated for the system, the statistics for disabled indexes are also updated.
    Precision of SMALLDATETIME – A 1 Minute Precision
    The precision of the datatype SMALLDATETIME is 1 minute. It discards the seconds, rounding up or down any seconds greater than zero.
    2011
    Getting Columns Headers without Result Data – SET FMTONLY ON
    SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON, the result set has only the headers of the results but no data (see the sketch at the end of this section).
    Copy Database from Instance to Another Instance – Copy Paste in SQL Server
    SQL Server has a feature which copies a database from one instance to another, and it can be automated as well using SSIS. Make sure you have SQL Server Agent turned on, as this feature will create a job.
    Puzzle – SELECT * vs SELECT COUNT(*)
    If you have ever wondered why SELECT * gives an error when executed alone but SELECT COUNT(*) does not, you should read this blog post.
    Creating All New Database with Full Recovery Model
    This blog post is based on a very interesting story where the user wants to do something by default for every single new database created. The model database is a secret weapon which should be used very carefully and with proper evaluation. Used carefully, it can be very beneficial when we need a newly created database to behave in a certain fashion.
    2012
    In the year 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a three-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz.
    Guess the Next Value – Puzzle 1
    Guess the Next Value – Puzzle 2
    Guess the Next Value – Puzzle 3
    Can anyone remember their final day of schooling? This is probably a silly question because – of course you can! Many people mark this as the most exciting, happiest day of their life. It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read the five-part series on the subject of developer training:
    Developer Training – Importance and Significance – Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary – Part 5
    Reference: Pinal Dave (http://blog.sqlauthority.com)
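    As a minimal sketch of the SET FMTONLY behaviour described in the 2011 list above (the Customers table is hypothetical):

    SET FMTONLY ON;
    SELECT * FROM Customers;  -- returns only the column headers (metadata), no data rows
    SET FMTONLY OFF;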

    Read the article

  • What's wrong with my destructor?

    - by Ahmed Sharara
    // Sparse Array Assignment.cpp : Defines the entry point for the console application.
    //
    #include "stdafx.h"
    #include <iostream>
    using namespace std;

    struct node {
        int row;
        int col;
        int value;
        node* next_in_row;
        node* next_in_col;
    };

    class MultiLinkedListSparseArray {
    private:
        char* logfile;
        node** rowPtr;
        node** colPtr; // used in constructor
        node* find_node(node* out);
        node* ins_node(node* ins, int col);
        node* in_node(node* ins, node* z);
        node* get(node* in, int row, int col);
        bool exist(node* so, int row, int col);
        node* dummy;
        int rowd, cold;
        // add anything you need
    public:
        MultiLinkedListSparseArray(int rows, int cols);
        ~MultiLinkedListSparseArray();
        void setCell(int row, int col, int value);
        int getCell(int row, int col);
        void display();
        void log(char* s);
        void dump();
    };

    MultiLinkedListSparseArray::MultiLinkedListSparseArray(int rows, int cols) {
        rowPtr = new node*[rows + 1];
        colPtr = new node*[cols + 1];
        for (int n = 0; n <= rows; n++) rowPtr[n] = NULL;
        for (int i = 0; i <= cols; i++) colPtr[i] = NULL;
        rowd = rows; cold = cols;
    }

    MultiLinkedListSparseArray::~MultiLinkedListSparseArray() {
        cout << "array is deleted" << endl;
        for (int i = rowd; i >= 0; i--) {
            for (int j = cold; j >= 0; j--) {
                if (exist(rowPtr[i], i, j))
                    delete get(rowPtr[i], i, j);
            }
        }
        // it stops in the last loop & doesn't show the "done" word
        cout << "done" << endl;
        delete[] rowPtr;
        delete[] colPtr;
        delete dummy;
    }

    void MultiLinkedListSparseArray::log(char* s) {
        logfile = s;
    }

    void MultiLinkedListSparseArray::setCell(int row, int col, int value) {
        if (exist(rowPtr[row], row, col)) {
            (*get(rowPtr[row], row, col)).value = value;
        } else {
            if (rowPtr[row] == NULL) {
                rowPtr[row] = new node;
                (*rowPtr[row]).value = value;
                (*rowPtr[row]).row = row;
                (*rowPtr[row]).col = col;
                (*rowPtr[row]).next_in_row = NULL;
                (*rowPtr[row]).next_in_col = NULL;
            } else if ((*find_node(rowPtr[row])).col < col) {
                node* out;
                out = find_node(rowPtr[row]);
                (*out).next_in_row = new node;
                (*((*out).next_in_row)).col = col;
                (*((*out).next_in_row)).row = row;
                (*((*out).next_in_row)).value = value;
                (*((*out).next_in_row)).next_in_row = NULL;
            } else if ((*find_node(rowPtr[row])).col > col) {
                node* ins;
                ins = in_node(rowPtr[row], ins_node(rowPtr[row], col));
                node* g = (*ins).next_in_row;
                (*ins).next_in_row = new node;
                (*((*ins).next_in_row)).col = col;
                (*(*ins).next_in_row).row = row;
                (*(*ins).next_in_row).value = value;
                (*(*ins).next_in_row).next_in_row = g;
            }
        }
    }

    int MultiLinkedListSparseArray::getCell(int row, int col) {
        return (*get(rowPtr[row], row, col)).value;
    }

    void MultiLinkedListSparseArray::display() {
        for (int i = 1; i <= 5; i++) {
            for (int j = 1; j <= 5; j++) {
                if (exist(rowPtr[i], i, j))
                    cout << (*get(rowPtr[i], i, j)).value << " ";
                else
                    cout << "0" << " ";
            }
            cout << endl;
        }
    }

    node* MultiLinkedListSparseArray::find_node(node* out) {
        while ((*out).next_in_row != NULL)
            out = (*out).next_in_row;
        return out;
    }

    node* MultiLinkedListSparseArray::ins_node(node* ins, int col) {
        while (!((*ins).col > col))
            ins = (*ins).next_in_row;
        return ins;
    }

    node* MultiLinkedListSparseArray::in_node(node* ins, node* z) {
        while ((*ins).next_in_row != z)
            ins = (*ins).next_in_col;
        return ins;
    }

    node* MultiLinkedListSparseArray::get(node* in, int row, int col) {
        dummy = new node;
        dummy->value = 0;
        while ((*in).col != col) {
            if ((*in).next_in_row == NULL) {
                return dummy;
            }
            in = (*in).next_in_row;
        }
        return in;
    }

    bool MultiLinkedListSparseArray::exist(node* so, int row, int col) {
        if (so == NULL) return false;
        while ((*so).col != col) {
            if ((*so).next_in_row == NULL) return false;
            so = (*so).next_in_row;
        }
        return true;
    }

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said "generally" and not "always"; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I'll demonstrate such a scenario, one that is pretty common when building data warehouses. Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}: Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.) Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate] and we need to use those values to look up [Customer].[CustomerId]. The logic will be: look up a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate]. The conventional approach to this would be to use a fully cached lookup, but that isn't an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators. Let's take a look at the dataflow: Notice these are all synchronous components. This approach works just fine; however, it does have the limitation that it has to issue a SQL statement against your lookup set for every row, thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow; that's not good. OK, that's the obvious method. Let's now look at a different way of achieving this using an asynchronous Merge Join transform coupled with a Conditional Split. I've shown it post-execution so that I can include the row counts, which help to illustrate what is going on here: Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join). I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo. For those that can't be bothered watching the video, I'll tell you the results here: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary. The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors.
It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup is performing a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow. I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet
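    For reference, here is a sketch of the per-row statement the non-cached Lookup described above effectively issues (the column and table names follow the example; the ? markers stand for the incoming [CustomerName] and [EffectiveDate] values that SSIS binds on each row):

    SELECT [CustomerId]
    FROM   [Customer]
    WHERE  [CustomerName] = ?
      AND  [SCDStartDate] <= ?
      AND  ? <= [SCDEndDate];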

    Read the article

  • EXCEL VBA STUDENTS DATABASE [on hold]

    - by BENTET
    I am developing an Excel database to record student details. The headings of the table are Date, Year, Payment Slip No., Student Number, Name, Fees, Amount Paid, Balance and Previous Balance. I have been able to put together some code which is working, but there are some setbacks that I want addressed. I developed a userform for each programme of the institution and assigned each to a specific sheet, but whenever I add a record, it does not go to the assigned sheet – it goes to the active sheet. I also want to hide all sheets and work only with the userforms when the workbook is opened. One problem I am also facing is the update code: whenever I update a record on a specific row, it edits the record on the first row rather than the record I edited. This is the code I have built so far. I am virtually a novice in programming.

    Private Sub cmdAdd_Click()
        Dim lastrow As Long
        lastrow = Sheets("Sheet4").Range("A" & Rows.Count).End(xlUp).Row
        Cells(lastrow + 1, "A").Value = txtDate.Text
        Cells(lastrow + 1, "B").Value = ComBox1.Text
        Cells(lastrow + 1, "C").Value = txtSlipNo.Text
        Cells(lastrow + 1, "D").Value = txtStudentNum.Text
        Cells(lastrow + 1, "E").Value = txtName.Text
        Cells(lastrow + 1, "F").Value = txtFees.Text
        Cells(lastrow + 1, "G").Value = txtAmountPaid.Text
        txtDate.Text = ""
        ComBox1.Text = ""
        txtSlipNo.Text = ""
        txtStudentNum.Text = ""
        txtName.Text = ""
        txtFees.Text = ""
        txtAmountPaid.Text = ""
    End Sub

    Private Sub cmdClear_Click()
        txtDate.Text = ""
        ComBox1.Text = ""
        txtSlipNo.Text = ""
        txtStudentNum.Text = ""
        txtName.Text = ""
        txtFees.Text = ""
        txtAmountPaid.Text = ""
        txtBalance.Text = ""
    End Sub

    Private Sub cmdClearD_Click()
        txtDate.Text = ""
        ComBox1.Text = ""
        txtSlipNo.Text = ""
        txtStudentNum.Text = ""
        txtName.Text = ""
        txtFees.Text = ""
        txtAmountPaid.Text = ""
        txtBalance.Text = ""
    End Sub

    Private Sub cmdClose_Click()
        Unload Me
    End Sub

    Private Sub cmdDelete_Click()
        'declare the variables
        Dim findvalue As Range
        Dim cDelete As VbMsgBoxResult
        'check for values
        If txtStudentNum.Value = "" Or txtName.Value = "" Or txtDate.Text = "" Or ComBox1.Text = "" Or txtSlipNo.Text = "" Or txtFees.Text = "" Or txtAmountPaid.Text = "" Or txtBalance.Text = "" Then
            MsgBox "There is not data to delete"
            Exit Sub
        End If
        'give the user a chance to change their mind
        cDelete = MsgBox("Are you sure that you want to delete this student", vbYesNo + vbDefaultButton2, "Are you sure????")
        If cDelete = vbYes Then
            'delete the row
            Set findvalue = Sheet4.Range("D:D").Find(What:=txtStudentNum, LookIn:=xlValues)
            findvalue.EntireRow.Delete
        End If
        'clear the controls
        txtDate.Text = ""
        ComBox1.Text = ""
        txtSlipNo.Text = ""
        txtStudentNum.Text = ""
        txtName.Text = ""
        'txtFees.Text = ""
        txtAmountPaid.Text = ""
        txtBalance.Text = ""
    End Sub

    Private Sub cmdSearch_Click()
        Dim lastrow As Long
        Dim currentrow As Long
        Dim studentnum As String
        lastrow = Sheets("Sheet4").Range("A" & Rows.Count).End(xlUp).Row
        studentnum = txtStudentNum.Text
        For currentrow = 2 To lastrow
            If Cells(currentrow, 4).Text = studentnum Then
                txtDate.Text = Cells(currentrow, 1)
                ComBox1.Text = Cells(currentrow, 2)
                txtSlipNo.Text = Cells(currentrow, 3)
                txtStudentNum.Text = Cells(currentrow, 4).Text
                txtName.Text = Cells(currentrow, 5)
                txtFees.Text = Cells(currentrow, 6)
                txtAmountPaid.Text = Cells(currentrow, 7)
                txtBalance.Text = Cells(currentrow, 8)
            End If
        Next currentrow
        txtStudentNum.SetFocus
    End Sub

    Private Sub cmdSearchName_Click()
        Dim lastrow As Long
        Dim currentrow As Long
        Dim studentname As String
        lastrow = Sheets("Sheet4").Range("A" & Rows.Count).End(xlUp).Row
        studentname = txtName.Text
        For currentrow = 2 To lastrow
            If Cells(currentrow, 5).Text = studentname Then
                txtDate.Text = Cells(currentrow, 1)
                ComBox1.Text = Cells(currentrow, 2)
                txtSlipNo.Text = Cells(currentrow, 3)
                txtStudentNum.Text = Cells(currentrow, 4)
                txtName.Text = Cells(currentrow, 5).Text
                txtFees.Text = Cells(currentrow, 6)
                txtAmountPaid.Text = Cells(currentrow, 7)
                txtBalance.Text = Cells(currentrow, 8)
            End If
        Next currentrow
        txtName.SetFocus
    End Sub

    Private Sub cmdUpdate_Click()
        Dim tdate As String
        Dim tlevel As String
        Dim tslipno As String
        Dim tstudentnum As String
        Dim tname As String
        Dim tfees As String
        Dim tamountpaid As String
        Dim currentrow As Long
        Dim lastrow As Long
        'If Cells(currentrow, 5).Text = studentname Then
        'txtDate.Text = Cells(currentrow, 1)
        lastrow = Sheets("Sheet4").Range("A" & Columns.Count).End(xlUp).Offset(0, 1).Column
        For currentrow = 2 To lastrow
            tdate = txtDate.Text
            Cells(currentrow, 1).Value = tdate
            txtDate.Text = Cells(currentrow, 1)
            tlevel = ComBox1.Text
            Cells(currentrow, 2).Value = tlevel
            ComBox1.Text = Cells(currentrow, 2)
            tslipno = txtSlipNo.Text
            Cells(currentrow, 3).Value = tslipno
            txtSlipNo = Cells(currentrow, 3)
            tstudentnum = txtStudentNum.Text
            Cells(currentrow, 4).Value = tstudentnum
            txtStudentNum.Text = Cells(currentrow, 4)
            tname = txtName.Text
            Cells(currentrow, 5).Value = tname
            txtName.Text = Cells(currentrow, 5)
            tfees = txtFees.Text
            Cells(currentrow, 6).Value = tfees
            txtFees.Text = Cells(currentrow, 6)
            tamountpaid = txtAmountPaid.Text
            Cells(currentrow, 7).Value = tamountpaid
            txtAmountPaid.Text = Cells(currentrow, 7)
        Next currentrow
        txtDate.SetFocus
        ComBox1.SetFocus
        txtSlipNo.SetFocus
        txtStudentNum.SetFocus
        txtName.SetFocus
        txtFees.SetFocus
        txtAmountPaid.SetFocus
        txtBalance.SetFocus
    End Sub

    I was also thinking: could I develop something that uses only one userform to send data to different sheets in the workbook?

    Read the article

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
    The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn more about the design, the implementation and getting started with all of the new MySQL Cluster 7.3 features from the comfort and convenience of your own device, including:
    - Foreign Key constraints in MySQL Cluster
    - Node.js NoSQL API
    - Auto-installation of high-performance distributed clusters
    We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience.
    Q. What Foreign Key actions are supported?
    A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION, SET NULL.
    Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes?
    A. They are implemented in the data nodes, and therefore can be enforced for both the SQL and NoSQL APIs.
    Q. Are they compatible with the InnoDB Foreign Key implementation?
    A. Yes, with the following exceptions:
    - InnoDB doesn't support "No Action" constraints; MySQL Cluster does.
    - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter.
    - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
    - You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it then this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you'd want the rows in the child table to be updated with the new FK value but the implicit delete of the row from the parent table would remove the associated rows from the child table, and the subsequent implicit insert into the parent wouldn't reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.
    Q. Does adding or dropping Foreign Keys cause downtime due to a schema change?
    A. Nope, this is an online operation. MySQL Cluster supports a number of online schema changes, e.g. adding and dropping indexes, adding columns, etc.
    Q. Where can I see an example of Node.js with MySQL Cluster?
    A. Check out the tutorial and download the code from GitHub.
    Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2?
    A. Yes to both!
    Q. Can I get a demo?
    A. Check out the tutorial. You can download the code from http://labs.mysql.com/ – go to the Select Build drop-down box.
    Q. What is the minimum internet speed required for a geo-distributed cluster with synchronous replication?
    A. If you're splitting your cluster between sites, then we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes.
    Q. Where can one learn more about the PayPal project with MySQL Cluster?
    A. Take a look at the following – you'll find press coverage, a video and slides from their keynote presentation.
    So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features and the things you want to see!
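    As a minimal sketch of the new Foreign Key support (hypothetical tables; the syntax is standard MySQL, and because the constraint lives in the data nodes it is enforced for SQL and NoSQL access alike):

    CREATE TABLE parent (
      id INT NOT NULL PRIMARY KEY
    ) ENGINE=ndbcluster;

    CREATE TABLE child (
      id INT NOT NULL PRIMARY KEY,
      parent_id INT,
      INDEX (parent_id),
      FOREIGN KEY (parent_id) REFERENCES parent (id)
        ON DELETE CASCADE
        ON UPDATE RESTRICT
    ) ENGINE=ndbcluster;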

    Read the article

  • How do I use setFilmSize in panda3d to achieve the correct view?

    - by lhk
    I'm working with Panda3d and recently switched my game to isometric rendering. I moved the virtual camera accordingly and set an orthographic lens. Then I implemented the classes "Map" and "Canvas". A canvas is a dynamically generated mesh: a flat quad. I'm using it to render the in-game graphics. Since the game itself is still set in a 3d coordinate system I'm planning to rely on these canvases to draw sprites. I could have named this class "Tile", but as I'd like to use it for non-tile sketches (enemies, environment) as well, I thought canvas would describe its function better. Map does exactly what its name suggests. Its constructor receives the number of rows and columns and then creates a standard isometric map. It uses the Canvas class for tiles. I'm planning to write a map importer that reads a file to create maps on the fly. Here's the Canvas implementation:

    class Canvas:
        def __init__(self, texture, vertical=False, width=1, height=1):
            # create the mesh
            format = GeomVertexFormat.getV3t2()
            format = GeomVertexFormat.registerFormat(format)
            vdata = GeomVertexData("node-vertices", format, Geom.UHStatic)
            vertex = GeomVertexWriter(vdata, 'vertex')
            texcoord = GeomVertexWriter(vdata, 'texcoord')
            # add the vertices for a flat quad
            vertex.addData3f(1, 0, 0)
            texcoord.addData2f(1, 0)
            vertex.addData3f(1, 1, 0)
            texcoord.addData2f(1, 1)
            vertex.addData3f(0, 1, 0)
            texcoord.addData2f(0, 1)
            vertex.addData3f(0, 0, 0)
            texcoord.addData2f(0, 0)
            prim = GeomTriangles(Geom.UHStatic)
            prim.addVertices(0, 1, 2)
            prim.addVertices(2, 3, 0)
            self.geom = Geom(vdata)
            self.geom.addPrimitive(prim)
            self.node = GeomNode('node')
            self.node.addGeom(self.geom)
            # this is the handle for the canvas
            self.nodePath = NodePath(self.node)
            self.nodePath.setSx(width)
            self.nodePath.setSy(height)
            if vertical:
                self.nodePath.setP(90)
            # the most important part: "drawing" the image
            self.texture = loader.loadTexture("" + texture + ".png")
            self.nodePath.setTexture(self.texture)

    Now the code for the Map class:

    class Map:
        def __init__(self, rows, columns, size):
            self.grid = []
            for i in range(rows):
                self.grid.append([])
                for j in range(columns):
                    # create a canvas for the tile. For testing, the texture is preset
                    tile = Canvas(texture="../assets/textures/flat_concrete", width=size, height=size)
                    x = (i - 1) * size
                    y = (j - 1) * size
                    # set the tile up for rendering
                    tile.nodePath.reparentTo(render)
                    tile.nodePath.setX(x)
                    tile.nodePath.setY(y)
                    # and store it for later access
                    self.grid[i].append(tile)

    And finally the usage:

    def loadMap(self):
        self.map = Map(10, 10, 1)

    This function is called within the constructor of the World class. The instantiation of World is the entry point to the execution. The code is pretty straightforward and runs well. Sadly the output is not as expected: the map should have equal width and height, but it's stretched weirdly. With orthographic rendering I expected the map to be a perfect square. What did I do wrong?
    UPDATE: I've changed the viewport. This is how I set up the orthographic camera:

    lens = OrthographicLens()
    lens.setFilmSize(40, 20)
    base.cam.node().setLens(lens)

    You can change the "aspect" by modifying the parameters of setFilmSize. I don't know exactly how they are related to window size and screen resolution, but after testing a little the values above seem to work for me. Now everything is rendered correctly as long as I don't resize the window. Every change of the window's size, as well as switching to fullscreen, destroys the correct rendering.
    I know that implementing a listener for resize events is not in the scope of this question. However, I wonder why I need to make the film's height two times smaller than its width when my window is square! Can you tell me how to find out the correct setting for the film size?
    UPDATE 2: I can imagine that it's hard to envision the behaviour of the game. At first glance the obvious solution is to pass the window's width and height in pixels to setFilmSize. There are two problems with that approach:
    1) The parameters for setFilmSize are in-game units. You'll get a far too big view if you pass the pixel size.
    2) For some strange reason the image is distorted if you pass equal values for width and height, as in setFilmSize(800, 800). You'll have to stress your eyes, but you'll see what I mean.

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble
    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.
    Why Columnstore?
    If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.
    App: MSIT SONAR Aggregations
    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore.
    The Win
    Compared to performance using classic indexing (which resulted in the expected query plan selection, including partition elimination), query performance with a SQL Server 2012 nonclustered columnstore increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.
    Details
    The table provides the raw data & summarizes the performance deltas.

                                      Logical Reads (8K pages)   CPU (ms)   Durn (ms)
    Columnstore                       160,323                    20,360     9,786
    Conventional Table & Indexes      9,053,423                  549,608    193,903
    Δ                                 x56                        x27        x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio.
    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent features in this series document performance enhancements that are even more significant.
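    For readers who want to try this themselves, a hedged sketch of creating the kind of nonclustered columnstore index discussed here (the table and column names are hypothetical, not the actual SONAR schema; note that in SQL Server 2012 a table with a nonclustered columnstore index becomes read-only, which is one reason the nightly partition-based refresh pattern above is such a good fit):

    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactPerf
    ON dbo.FactPerf (ServerId, CounterId, SampleTime, SampleValue);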

    Read the article

  • Intern Screening - Software 'Quiz'

    - by Jeremy1026
    I am in charge of selecting a new software development intern for a company that I work with. I wanted to throw a little 'quiz' at the applicants before moving forward with interviews, so as to weed out the group a little bit and find some people who can demonstrate some skill. I put together the following quiz to send to applicants. It focuses only on PHP, but that is because about 95% of the work will be done in it. I'm hoping to get some feedback on A) whether it's a good idea to send this to applicants, and B) whether it can be improved upon.

    # 1. FizzBuzz
    # Write a small application that does the following:
    # Counts from 1 to 100
    # For multiples of 3 output "Fizz"
    # For multiples of 5 output "Buzz"
    # For multiples of 3 and 5 output "FizzBuzz"
    # For numbers that are not multiples of 3 nor 5 output the number.
    <?php ?>

    # 2. Arrays
    # Create a multi-dimensional array that contains
    # keys for 'id', 'lot', 'car_model', 'color', 'price'.
    # Insert three sets of data into the array.
    <?php ?>

    # 3. Comparisons
    # Without executing the code, tell if the expressions
    # below will return true or false.
    <?php
    if ((strpos("a","abcdefg")) == TRUE) echo "True"; else echo "False"; //True or False?
    if ((012 / 4) == 3) echo "True"; else echo "False"; //True or False?
    if (strcasecmp("abc","ABC") == 0) echo "True"; else echo "False"; //True or False?
    ?>

    # 4. Bug Checking
    # The code below is flawed. Fix it so that the code
    # runs properly without producing any Errors, Warnings
    # or Notices, and returns the proper value.
    <?php
    //Determine how many parts are needed to create a 3D pyramid.
    function find_3d_pyramid($rows) {
        //Loop through each row.
        for ($i = 0; $i < $rows; $i++) {
            $lastRow++;
            //Append the latest row to the running total.
            $total = $total + (pow($lastRow, 3));
        }
        //Return the total.
        return $total;
    }
    $i = 3;
    echo "A pyramid consisting of $i rows will have a total of " . find_3d_pyramid($i) . " pieces.";
    ?>

    # 5. Quick Examples
    # Create a small example to complete the task
    # for each of the following problems.
    # Create an md5 hash of "Hello World";
    # Replace all occurrences of "_" with "-" in the string "Welcome_to_the_universe."
    # Get the current date and time, in the following format, YYYY/MM/DD HH:MM:SS AM/PM
    # Find the sum, average, and median of the following set of numbers. 1, 3, 5, 6, 7, 9, 10.
    # Randomly roll a six-sided die 5 times. Store the 5 rolls into an array.
    <?php ?>

    Read the article

  • OpenCV Mat creation memory leak

    - by Royi Freifeld
    My memory is getting full fairly quickly when using the next piece of code. Valgrind shows a memory leak, but everything is allocated on the stack and (supposed to be) freed once the function ends.

    void mult_run_time(int rows, int cols) {
        Mat matrix(rows, cols, CV_32SC1);
        Mat row_vec(cols, 1, CV_32SC1);

        /* initialize vector and matrix */
        for (int col = 0; col < cols; ++col) {
            for (int row = 0; row < rows; ++row) {
                matrix.at<unsigned long>(row, col) = rand() % ULONG_MAX;
            }
            row_vec.at<unsigned long>(1, col) = rand() % ULONG_MAX;
        }
        /* end initialization of vector and matrix */

        matrix * row_vec;
    }

    int main() {
        for (int row = 0; row < 20; ++row) {
            for (int col = 0; col < 20; ++col) {
                mult_run_time(row, col);
            }
        }
        return 0;
    }

    Valgrind shows that there is a memory leak in the line Mat row_vec(cols, 1, CV_32SC1):

    ==9201== 24,320 bytes in 380 blocks are definitely lost in loss record 50 of 50
    ==9201==    at 0x4026864: malloc (vg_replace_malloc.c:236)
    ==9201==    by 0x40C0A8B: cv::fastMalloc(unsigned int) (in /usr/local/lib/libopencv_core.so.2.3.1)
    ==9201==    by 0x41914E3: cv::Mat::create(int, int const*, int) (in /usr/local/lib/libopencv_core.so.2.3.1)
    ==9201==    by 0x8048BE4: cv::Mat::create(int, int, int) (mat.hpp:368)
    ==9201==    by 0x8048B2A: cv::Mat::Mat(int, int, int) (mat.hpp:68)
    ==9201==    by 0x80488B0: mult_run_time(int, int) (mat_by_vec_mult.cpp:26)
    ==9201==    by 0x80489F5: main (mat_by_vec_mult.cpp:59)

    Is it a known bug in OpenCV or am I missing something?

    Read the article

  • Cocoa equivalent of the Carbon method getPtrSize

    - by Michael Minerva
    I need to translate a Carbon method into Cocoa, and I am having trouble finding any documentation about what the Carbon method getPtrSize really does. From the code I am translating it seems that it returns the byte representation of an image, but that doesn't really match up with the name. Could someone give me a good explanation of this method or link me to some documentation that describes it? The code I am translating is in a Common Lisp implementation called MCL that has a bridge to Carbon (I am translating into CCL, which is a Common Lisp implementation with a Cocoa bridge). Here is the MCL code (#_ before a method call means that it is a Carbon method):

    (defmethod COPY-CONTENT-INTO ((Source inflatable-icon) (Destination inflatable-icon))
      ;; check for size compatibility to avoid disaster
      (unless (and (= (rows Source) (rows Destination))
                   (= (columns Source) (columns Destination))
                   (= (#_getPtrSize (image Source)) (#_getPtrSize (image Destination))))
        (error "cannot copy content of source into destination inflatable icon: incompatible sizes"))
      ;; given that they are the same size only copy content
      (setf (is-upright Destination) (is-upright Source))
      (setf (height Destination) (height Source))
      (setf (dz Destination) (dz Source))
      (setf (surfaces Destination) (surfaces Source))
      (setf (distance Destination) (distance Source))
      ;; arrays
      (noise-map Source)       ;; accessor makes array if needed
      (noise-map Destination)  ;; accessor makes array if needed
      (dotimes (Row (rows Source))
        (dotimes (Column (columns Source))
          (setf (aref (noise-map Destination) Row Column)
                (aref (noise-map Source) Row Column))
          (setf (aref (altitudes Destination) Row Column)
                (aref (altitudes Source) Row Column))))
      (setf (connectors Destination) (mapcar #'copy-instance (connectors Source)))
      (setf (visible-alpha-threshold Destination) (visible-alpha-threshold Source))
      ;; copy Image: slow byte copy
      (dotimes (I (#_getPtrSize (image Source)))
        (%put-byte (image Destination) (%get-byte (image Source) i) i))
      ;; flat texture optimization: do not copy texture-id -> destination should get its own texture id from OpenGL
      (setf (is-flat Destination) (is-flat Source))
      ;; do not compile flat textures: the display list overhead slows things down by about 2x
      (setf (auto-compile Destination) (not (is-flat Source)))
      ;; to make change visible we have to reset the compiled flag
      (setf (is-compiled Destination) nil))

    Read the article

  • Is there more than one jQuery Autocomplete widget?

    - by Cheeso
    I thought there was only one – included in jQuery UI and documented here. I know there are third-party autocomplete widgets that plug in to jQuery, like the one from devbridge. But I would describe that as an autocomplete widget for jQuery, rather than the jQuery autocomplete widget. But on Stack Overflow, I see questions asking about an autocomplete widget that does not use the syntax described in the jQuery UI documentation. For example:
    jquery.autocomplete.js - how does autocomplete work?
    Jquery AutoComplete Plugin calling
    Help with jquery autocomplete and json response
    The jQuery UI syntax looks like this:

    $("#input1").autocomplete({
        source: function(req, responseFn) { ... },
        select: function(value, data) { ... }
    });

    Whereas some of those other questions have a syntax like this:

    $("#city").autocomplete("CUList.asmx/GetCUList", {
        dataType: 'jsonp',
        parse: function(data) {
            var rows = new Array();
            for (var i = 0; i < data.length; i++) {
                rows[i] = { data: data[i], value: data[i].CUName, result: data[i].CUName };
            }
            return rows;
        },
        formatItem: function(row, i, n) {
            return row.CUName + ', ' + row.CUCity;
        },
        max: 50
    });

    What's the explanation for the discrepancy? People ask about "jquery autocomplete" without specifying which one. With no direction, shouldn't I assume THE jQuery UI autocomplete?

    Read the article

  • Zend Framework, Zend_Form_Element how to set custom name?

    - by ikso
    Hello, I have a form where some fields look like rows, so I can add/delete them using JS. For example:

    Field with ID=1 (existing row):
    <input id="id[1]" type="text" name="id[1]" value="1" />
    <input id="name[1]" type="text" name="name[1]" value="100" />

    Field with ID=2 (existing row):
    <input id="id[2]" type="text" name="id[2]" value="2" />
    <input id="name[2]" type="text" name="name[2]" value="200" />

    New row created by default (to allow adding one more row to the existing rows):
    <input id="id[n0]" type="text" name="id[n0]" value="" />
    <input id="name[n0]" type="text" name="name[n0]" value="" />

    New row created by JS:
    <input id="id[n1]" type="text" name="id[n1]" value="" />
    <input id="name[n1]" type="text" name="name[n1]" value="" />

    So when we process the form, we will know which rows to update and which to add (if the index starts with "n" it is a new element; if the index is a number it is an existing element). I tried subforms... but do I have to create a subform for each field? I used the following code:

    $subForm = new Zend_Form_SubForm();
    $subForm->addElement('Text', 'n0');
    $this->addSubForm($subForm, 'pid');

    $subForm = new Zend_Form_SubForm();
    $subForm->addElement('Text', 'n0');
    $this->addSubForm($subForm, 'name');

    What is the best way to do this?
    1) Use subforms?
    2) Extend Zend/Form/Decorator/ViewHelper.php to use names like name[nX]?
    3) Other solutions?
    Thanks.

    Read the article

  • jqGrid with JSON data renders table as empty

    - by jgreep
    I'm trying to create a jqGrid, but the table is empty. The table renders, but the data doesn't show. The data I'm getting back from the PHP call is:

    { "page":"1", "total":1, "records":"10", "rows":[
      {"id":"2:1","cell":["1","image","Chief Scout","Highest Award test","0"]},
      {"id":"2:2","cell":["2","image","Link Badge","When you are invested as a Scout, you may be eligible to receive a Link Badge. (See page 45)","0"]},
      {"id":"2:3","cell":["3","image","Pioneer Scout","Upon completion of requirements, the youth is invested as a Pioneer Scout","0"]},
      {"id":"2:4","cell":["4","image","Voyageur Scout Award","Voyageur Scout Award is the right after Pioneer Scout.","0"]},
      {"id":"2:5","cell":["5","image","Voyageur Citizenship","Learning about and caring for your community.","0"]},
      {"id":"2:6","cell":["6","image","Fish and Wildlife","Demonstrate your knowledge and involvement in fish and wildlife management.","0"]},
      {"id":"2:7","cell":["7","image","Photography","To recognize photography knowledge and skills","0"]},
      {"id":"2:8","cell":["8","image","Recycling","Demonstrate your knowledge and involvement in Recycling","0"]},
      {"id":"2:10","cell":["10","image","Voyageur Leadership ","Show leadership ability","0"]},
      {"id":"2:11","cell":["11","image","World Conservation","World Conservation Badge","0"]}
    ]}

    The JavaScript configuration looks like so:

    $("#"+tableId).jqGrid({
        url: 'getAwards.php?id=' + classId,
        dataType: 'json',
        mtype: 'POST',
        colNames: ['Id','Badge','Name','Description',''],
        colModel: [
            {name:'awardId', width:30, sortable:true, align:'center'},
            {name:'badge', width:40, sortable:false, align:'center'},
            {name:'name', width:180, sortable:true, align:'left'},
            {name:'description', width:380, sortable:true, align:'left'},
            {name:'selected', width:0, sortable:false, align:'center'}
        ],
        sortname: "awardId",
        sortorder: "asc",
        pager: $('#'+tableId+'_pager'),
        rowNum: 15,
        rowList: [15,30,50],
        caption: 'Awards',
        viewrecords: true,
        imgpath: 'scripts/jqGrid/themes/green/images',
        jsonReader: {
            root: "rows",
            page: "page",
            total: "total",
            records: "records",
            repeatitems: true,
            cell: "cell",
            id: "id",
            userdata: "userdata",
            subgrid: {root:"rows", repeatitems: true, cell:"cell"}
        },
        width: 700,
        height: 200
    });

    The HTML looks like:

    <table class="awardsList" id="awardsList2" class="scroll" name="awardsList" />
    <div id="awardsList2_pager" class="scroll"></div>

    I'm not sure that I needed to define jsonReader, since I've tried to keep to the default. If the PHP code will help, I can post it too.

    Read the article

  • Faster way to transfer table data from linked server

    - by spender
    After much fiddling, I've managed to install the right ODBC driver and have successfully created a linked server on SQL Server 2008, by which I can access my PostgreSQL db from SQL Server. I'm copying all of the data from some of the tables in the PgSQL DB into SQL Server using merge statements that take the following form:

    with mbRemote as (
        select *
        from openquery(someLinkedDb, 'select * from someTable')
    )
    merge into someTable mbLocal
    using mbRemote
    on mbLocal.id = mbRemote.id
    when matched
        /*edit*/
        /*clause below really speeds things up when many rows are unchanged*/
        /*can you think of anything else?*/
        and not (mbLocal.field1 = mbRemote.field1
             and mbLocal.field2 = mbRemote.field2
             and mbLocal.field3 = mbRemote.field3
             and mbLocal.field4 = mbRemote.field4)
        /*end edit*/
    then update set
        mbLocal.field1 = mbRemote.field1,
        mbLocal.field2 = mbRemote.field2,
        mbLocal.field3 = mbRemote.field3,
        mbLocal.field4 = mbRemote.field4
    when not matched then insert (
        id, field1, field2, field3, field4
    ) values (
        mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4
    )
    when not matched by source then delete;

    After this statement completes, the local (SQL Server) copy is fully in sync with the remote (PgSQL server). A few questions about this approach: Is it sane? It strikes me that an update will be run over all fields in local rows that haven't necessarily changed. The only prerequisite is that the local and remote id fields match. Is there a more fine-grained approach, or a way of constraining the merge statement to only update rows that have actually changed?
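    On the question of updating only rows that have actually changed: one common refinement (a sketch, not tested against this schema) is to compare whole rows with EXCEPT inside the WHEN MATCHED clause; unlike the chained = comparisons above, EXCEPT also treats NULLs sensibly:

    with mbRemote as (
        select * from openquery(someLinkedDb, 'select * from someTable')
    )
    merge into someTable mbLocal
    using mbRemote on mbLocal.id = mbRemote.id
    when matched and exists (
        select mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4
        except
        select mbLocal.field1, mbLocal.field2, mbLocal.field3, mbLocal.field4
    ) then update set
        mbLocal.field1 = mbRemote.field1,
        mbLocal.field2 = mbRemote.field2,
        mbLocal.field3 = mbRemote.field3,
        mbLocal.field4 = mbRemote.field4
    when not matched then insert (id, field1, field2, field3, field4)
        values (mbRemote.id, mbRemote.field1, mbRemote.field2, mbRemote.field3, mbRemote.field4)
    when not matched by source then delete;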

    Read the article

  • Atomic UPSERT in SQL Server 2005

    - by rabidpebble
    What is the correct pattern for doing an atomic "UPSERT" (UPDATE where exists, INSERT otherwise) in SQL Server 2005? I see a lot of code on SO (e.g. see http://stackoverflow.com/questions/639854/tsql-check-if-a-row-exists-otherwise-insert) with the following two-part pattern:

    UPDATE ...
    FROM ...
    WHERE <condition> -- race condition risk here

    IF @@ROWCOUNT = 0
        INSERT ...

    or

    IF (SELECT COUNT(*) FROM ... WHERE <condition>) = 0 -- race condition risk here
        INSERT ...
    ELSE
        UPDATE ...

    where <condition> will be an evaluation of natural keys. None of the above approaches seems to deal well with concurrency. If I cannot have two rows with the same natural key, it seems like all of the above risk inserting rows with the same natural keys in race condition scenarios. I have been using the following approach, but I'm surprised not to see it anywhere in people's responses, so I'm wondering what is wrong with it:

    INSERT INTO <table>
    SELECT <natural keys>, <other stuff...>
    FROM <table>
    WHERE NOT EXISTS -- race condition risk here?
    ( SELECT 1 FROM <table> WHERE <natural keys> )

    UPDATE ... WHERE <natural keys>

    (Note: I'm assuming that rows will not be deleted from this table. Although it would be nice to discuss how to handle the case where they can be deleted – are transactions the only option? Which level of isolation?) Is this atomic? I can't locate where this would be documented in the SQL Server documentation.
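    For completeness, the pattern most often cited as safe under concurrency (a sketch with a hypothetical dbo.Widgets table and WidgetName natural key) takes an update lock plus a key-range lock up front, so a competing session cannot insert the same key between the check and the write:

    BEGIN TRAN;

    UPDATE dbo.Widgets WITH (UPDLOCK, HOLDLOCK)
    SET    Quantity = @Quantity
    WHERE  WidgetName = @WidgetName;  -- the natural key

    IF @@ROWCOUNT = 0
        INSERT INTO dbo.Widgets (WidgetName, Quantity)
        VALUES (@WidgetName, @Quantity);

    COMMIT TRAN;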

    Read the article

  • Javascript html5 database transaction problem in loops

    - by Marek
    I'm hitting my head on this and I will be glad for any help. I need to store in a database a context (a list of string ids) for another page. I open a page with a list of artworks, and this page saves those artwork ids into the database. When I click on an artwork I open its web page, where I can access the database to know the context and refer to the next and prev artwork. This is my code to retrieve the context:

    updateContext = function () {
        alert("updating context");
        try {
            mydb.transaction(
                function(transaction) {
                    transaction.executeSql("select artworks.number from artworks, collections where collections.id = artworks.section_id and collections.short_name in ('cro', 'cra', 'crp', 'crm');", [], contextDataHandler, errorHandler);
                });
        } catch(e) {
            alert(e.message);
        }
    }

    The contextDataHandler function then iterates through the results and fills the current_context table again:

    contextDataHandler = function(transaction, results) {
        try {
            mydb.transaction(
                function(transaction) {
                    transaction.executeSql("drop table current_context;", [], nullDataHandler, errorHandler);
                    transaction.executeSql("create table current_context(id String);", [], nullDataHandler, errorHandler);
                }
            )
        } catch(e) {
            alert(e.message);
        }
        var s = "";
        for (var i = 0; i < results.rows.length; i++) {
            var item = results.rows.item(0);
            s += item['number'] + " ";
            mydb.transaction(
                function(tx) {
                    tx.executeSql("insert into current_context(id) values (?);", [item['number']]);
                }
            )
        }
        alert(s);
    }

    The result is: I get the current_context table deleted, recreated, and filled, but all the rows are filled with the LAST artwork id. The query to retrieve the artwork ids works, so I think it's a transaction problem, but I can't figure out where. Thanks for any help.

    Read the article

  • mysql - query group/concat question??

    - by tom smith
    I have a situation where I return results with multiple rows. I'm looking for a way to return a single row, with the multiple items in separate cols within the row. My initial query:

    SELECT a.name, a.city, a.address, a.abbrv, b.urltype, b.url
    FROM jos__universityTBL as a
    LEFT JOIN jos__university_urlTBL as b on b.universityID = a.ID
    WHERE a.stateVAL = 'CA'

    ...my output:

    | University Of Southern Califor | Los Angeles | | usc  | 2 | http://web-app.usc.edu/ws/soc/api/ |
    | University Of Southern Califor | Los Angeles | | usc  | 4 | http://web-app.usc.edu/ws/soc/api/ |
    | University Of Southern Califor | Los Angeles | | usc  | 1 | www.usc.edu |
    | San Jose State University      | San Jose    | | sjsu | 2 | http://info.sjsu.edu/home/schedules.html |
    | San Jose State University      | San Jose    | | sjsu | 4 | https://cmshr.sjsu.edu/psp/HSJPRDF/EMPLOYEE/HSJPRD/c/COMMUNITY_ACCESS.CLASS_SEARCH.GBL?FolderPath=PORTAL_ROOT_OBJECT.PA_HC_CLASS_SEARCH |
    | San Jose State University      | San Jose    | | sjsu | 1 | www.sjsu.edu |

    My tbl schema...

    mysql> describe jos_universityTBL;
    +----------------+--------------+------+-----+---------+----------------+
    | Field          | Type         | Null | Key | Default | Extra          |
    +----------------+--------------+------+-----+---------+----------------+
    | name           | varchar(50)  | NO   | UNI |         |                |
    | repos_dir_name | varchar(50)  | NO   |     |         |                |
    | city           | varchar(20)  | YES  |     |         |                |
    | stateVAL       | varchar(5)   | NO   |     |         |                |
    | address        | varchar(50)  | NO   |     |         |                |
    | abbrv          | varchar(20)  | NO   |     |         |                |
    | childtbl       | varchar(200) | NO   |     |         |                |
    | userID         | int(10)      | NO   |     | 0       |                |
    | ID             | int(10)      | NO   | PRI | NULL    | auto_increment |
    +----------------+--------------+------+-----+---------+----------------+
    9 rows in set (0.00 sec)

    mysql> describe jos_university_urlTBL;
    +--------------+--------------+------+-----+---------+----------------+
    | Field        | Type         | Null | Key | Default | Extra          |
    +--------------+--------------+------+-----+---------+----------------+
    | universityID | int(10)      | NO   |     | 0       |                |
    | urltype      | int(5)       | NO   |     | 0       |                |
    | url          | varchar(200) | NO   | MUL |         |                |
    | actionID     | int(5)       | YES  |     | 0       |                |
    | status       | int(5)       | YES  |     | 0       |                |
    | ID           | int(10)      | NO   | PRI | NULL    | auto_increment |
    +--------------+--------------+------+-----+---------+----------------+
    6 rows in set (0.00 sec)

    I'm really trying to get something like:

    |<<the concatenated urltype-url>>| usc | losangeles | usc.edu | 1-u1 | 2-u2 | 3-u2 |

    Any pointers would be helpful!
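    Since the title mentions group/concat, one way to collapse the URL rows into a single row per university is MySQL's GROUP_CONCAT (a sketch against the schema shown; note GROUP_CONCAT truncates at group_concat_max_len, 1024 bytes by default):

    SELECT a.name, a.city, a.abbrv,
           GROUP_CONCAT(CONCAT(b.urltype, '-', b.url)
                        ORDER BY b.urltype SEPARATOR ' | ') AS urls
    FROM jos__universityTBL AS a
    LEFT JOIN jos__university_urlTBL AS b ON b.universityID = a.ID
    WHERE a.stateVAL = 'CA'
    GROUP BY a.ID;

    This yields one concatenated column rather than true separate columns (which would require pivoting), but it matches the "1-u1 | 2-u2" shape sketched above.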

    Read the article

  • ng-grid checkbox with filtering

    - by WilliamLou
    I have a huge table with ng-grid, and I have enabled a checkbox to select all rows. Is there a way to combine this selectAll feature with the filter box? That is, when I filter the rows, I want to click the checkbox so that only the filtered rows are selected; once I clear the filter, those selected rows should remain selected, so that I can add more rows to the selection with other filters. I created a Plunker code template here, and copy the code here as well:

    // main.js
    var app = angular.module('myApp', ['ngGrid']);
    app.controller('MyCtrl', function($scope) {
        $scope.mySelections = [];
        $scope.myData = [
            {name: "Moroni", age: 50}, {name: "Tiancum", age: 43}, {name: "Jacob", age: 27}, {name: "Nephi", age: 29}, {name: "Enos", age: 34},
            {name: "Moroni", age: 50}, {name: "Tiancum", age: 43}, {name: "Jacob", age: 27}, {name: "Nephi", age: 29}, {name: "Enos", age: 34},
            {name: "Moroni", age: 50}, {name: "Tiancum", age: 43}, {name: "Jacob", age: 27}, {name: "Nephi", age: 29}, {name: "Enos", age: 34},
            {name: "Moroni", age: 50}, {name: "Tiancum", age: 43}, {name: "Jacob", age: 27}, {name: "Nephi", age: 29}, {name: "Enos", age: 34}
        ];
        $scope.filterOptions = { filterText: '' };
        $scope.gridOptions = {
            data: 'myData',
            checkboxHeaderTemplate: '<input class="ngSelectionHeader" type="checkbox" ng-model="allSelected" ng-change="toggleSelectAll(allSelected)"/>',
            showSelectionCheckbox: true,
            selectWithCheckboxOnly: false,
            selectedItems: $scope.mySelections,
            multiSelect: true,
            filterOptions: $scope.filterOptions
        };
    });

    Read the article

  • GridView doesn't remember state between postbacks

    - by Ryan
    Hi, I have a simple ASP.NET page with a databound grid (bound to an object source). The grid is within a page of a wizard and has a 'select' checkbox for each row. In one stage of the wizard, I bind the GridView:

        protected void Wizard1_NextButtonClick(object sender, WizardNavigationEventArgs e)
        {
            ...
            // Bind and display matches
            GridViewMatches.EnableViewState = true;
            GridViewMatches.DataSource = getEmailRecipients();
            GridViewMatches.DataBind();

    And when the finish button is clicked, I iterate through the rows and check what's selected:

        protected void Wizard1_FinishButtonClick(object sender, WizardNavigationEventArgs e)
        {
            // Set the selected values, depending on the checkboxes on the grid.
            foreach (GridViewRow gr in GridViewMatches.Rows)
            {
                Int32 personID = Convert.ToInt32(gr.Cells[0].Text);
                CheckBox selected = (CheckBox) gr.Cells[1].FindControl("CheckBoxSelectedToSend");

    But at this stage GridViewMatches.Rows.Count == 0! I don't re-bind the grid; I shouldn't need to, right? I expect the view state to maintain the state. (Also, if I do rebind the grid, my selection checkboxes will be cleared.)

    NB: This page also dynamically adds user controls in the OnInit method. I have heard that this might mess with the view state, but as far as I can tell I am doing it correctly, and the view state for those added controls seems to work (values are persisted between postbacks). Thanks a lot in advance for any help! Ryan

    UPDATE: Could this be to do with the fact that I am setting the data source programmatically? I wondered if the ASP.NET engine was databinding the grid during the page lifecycle to a data source that was not yet defined. (In a test page, the GridView is 'automatically' databound.) I don't want the grid to be re-bound; I just want the values from the view state from the previous post! Also, I have ViewStateEncryptionMode="Never" in the page header; this was to resolve an occasional 'Invalid Viewstate Validation MAC' message. Thanks again, Ryan
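
    A common workaround, sketched below rather than a verified fix for this exact page (the Session key and the cast target are placeholders), is to stash the bound list in Session and rebind on every postback during Page_Load, before the wizard event handlers run. Checkboxes in rows recreated during Load still pick up their posted values, because ASP.NET runs a second post-data pass after Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (IsPostBack && Session["matchedRecipients"] != null)
            {
                // Rebuild the rows so Rows.Count is non-zero in FinishButtonClick;
                // the checkboxes re-apply their posted state after this rebind.
                GridViewMatches.DataSource = Session["matchedRecipients"];
                GridViewMatches.DataBind();
            }
        }

        // ...and in Wizard1_NextButtonClick, keep the data for later postbacks:
        // Session["matchedRecipients"] = getEmailRecipients();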

    Read the article

  • cakephp paginate using mysql SQL_CALC_FOUND_ROWS

    - by michael
    I'm trying to make CakePHP's paginate take advantage of the SQL_CALC_FOUND_ROWS feature in MySQL to return a count of total rows while using LIMIT. Hopefully, this can eliminate the double query of paginateCount(), then paginate(). See http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_found-rows

    I've put this in my app_model.php, and it basically works, but it could be done better. Can someone help me figure out how to override paginate/paginateCount so it executes only one SQL statement?

        /**
         * Overridden paginateCount method, using SQL_CALC_FOUND_ROWS
         */
        public function paginateCount($conditions = null, $recursive = 0, $extra = array()) {
            $options = array_merge(compact('conditions', 'recursive'), $extra);
            $options['fields'] = "SQL_CALC_FOUND_ROWS `{$this->alias}`.*";
            // Q: how do you get the SAME limit value used in paginate()?
            $options['limit'] = 1; // ideally, should return same rows as in paginate()
            // Q: can we somehow get the query results to paginate directly, without the extra query?
            $cache_results_from_paginateCount = $this->find('all', $options);
            /*
             * note: we must run the query to get the count, but it will be cached for
             * multiple paginates, so add a comment to the query
             */
            $found_rows = $this->query("/* do not cache for {$this->alias} */ SELECT FOUND_ROWS();");
            $count = array_shift($found_rows[0][0]);
            return $count;
        }

        /**
         * Overridden paginate method
         */
        public function paginate($conditions, $fields, $order, $limit, $page = 1, $recursive = null, $extra = array()) {
            $options = array_merge(compact('conditions', 'fields', 'order', 'limit', 'page', 'recursive'), $extra);
            // Q: can we somehow get $cache_results_from_paginateCount directly from paginateCount()?
            return $cache_results_from_paginateCount; // paginate in only 1 SQL statement
            return $this->find('all', $options); // ideally, we could skip this call entirely
        }
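
    One way to get down to a single data query (a sketch against the CakePHP 1.3-era model API, unverified) is to invert the responsibilities: run the real SQL_CALC_FOUND_ROWS query in paginate(), and have paginateCount() only execute SELECT FOUND_ROWS(). This relies on Controller::paginate() invoking the model's paginate() before paginateCount(), which is worth confirming against your CakePHP version:

        public function paginate($conditions, $fields, $order, $limit, $page = 1, $recursive = null, $extra = array()) {
            // The LIMIT/page handling stays exactly as Cake passes it in;
            // we only swap the field list to ask MySQL for the full count.
            $fields = array("SQL_CALC_FOUND_ROWS `{$this->alias}`.*");
            $options = array_merge(compact('conditions', 'fields', 'order', 'limit', 'page', 'recursive'), $extra);
            return $this->find('all', $options);
        }

        public function paginateCount($conditions = null, $recursive = 0, $extra = array()) {
            // Must run on the same connection, immediately after the find() above
            $result = $this->query('SELECT FOUND_ROWS() AS `total`;');
            return (int) $result[0][0]['total'];
        }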

    Read the article

  • Submitting a multidimensional array via POST with php

    - by Fireflight
    I have a PHP form that has a known number of columns (e.g. top diameter, bottom diameter, fabric, colour, quantity), but an unknown number of rows, as users can add rows as they need. I've discovered how to take each of the fields (columns) and place them into an array of their own:

        <input name="topdiameter['+current+']" type="text" id="topdiameter'+current+'" size="5" />
        <input name="bottomdiameter['+current+']" type="text" id="bottomdiameter'+current+'" size="5" />

    So what I end up with in the HTML is:

        <tr>
            <td><input name="topdiameter[0]" type="text" id="topdiameter0" size="5" /></td>
            <td><input name="bottomdiameter[0]" type="text" id="bottomdiameter0" size="5" /></td>
        </tr>
        <tr>
            <td><input name="topdiameter[1]" type="text" id="topdiameter1" size="5" /></td>
            <td><input name="bottomdiameter[1]" type="text" id="bottomdiameter1" size="5" /></td>
        </tr>

    ...and so on. What I would like to do now is take all the rows and columns, put them into a multidimensional array, and email the contents of that to the client (preferably in a nicely formatted table). I haven't been able to really comprehend how to combine all those inputs and selects into a nice array. At this point I'm tempted to fall back on several 1-D arrays, although I have the idea that using a single 2-D array would be better practice than using several 1-D arrays.
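
    PHP will build the nested structure for you if you put the row index first and the field name second in each input's name attribute. A sketch (the "rows" name and the helper variables are illustrative):

        <!-- markup: one sub-array per table row, keyed by column name -->
        <input name="rows[0][topdiameter]" type="text" size="5" />
        <input name="rows[0][bottomdiameter]" type="text" size="5" />
        <input name="rows[1][topdiameter]" type="text" size="5" />
        <input name="rows[1][bottomdiameter]" type="text" size="5" />

        <?php
        // $_POST['rows'] now arrives as a 2-D array: iterate rows, then
        // columns, to build the HTML table for the email body
        $html = '<table>';
        foreach ($_POST['rows'] as $i => $row) {
            $html .= '<tr><td>' . htmlspecialchars($row['topdiameter']) . '</td>'
                   . '<td>' . htmlspecialchars($row['bottomdiameter']) . '</td></tr>';
        }
        $html .= '</table>';
        ?>

    With your existing per-column names, the equivalent is to loop over $_POST['topdiameter'] and read $_POST['bottomdiameter'][$i] alongside each entry, since the numeric indexes line up across the parallel arrays.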

    Read the article

  • Shake based application

    - by Sid
    Hi friends, I am developing an application which deletes rows from a table view when the user shakes the iPhone. I have created a navigation-based project. Now, when the user shakes the iPhone, I want the title of the navigation bar to change to "DELETE" and a delete button to appear on the navigation bar on the same view (this should happen only when the user shakes the iPhone); otherwise, when a user selects a particular row, it should move to the next view. I have written the following code but it's not working. Please help me out.

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            //NSLog(@"hiiiii");
            if (isShaked == NO) {
                // logic to move to next view goes here.
            } else {
                self.title = @"Delete Rows";
                delete = [[UIBarButtonItem alloc] initWithTitle:@"Delete rows"
                                                          style:UIBarButtonItemStyleBordered
                                                         target:self
                                                         action:@selector(deleteItemsSelected)];
                self.navigationItem.rightBarButtonItem = self.delete;
                MyTableCell *targetCustomCell = (MyTableCell *)[tableView cellForRowAtIndexPath:indexPath];
                [targetCustomCell checkAction];
                [self.tempArray addObject:[myModal.listOfStates objectAtIndex:indexPath.row]];
                //[delete addTarget:self action:@selector(deleteItemsSelected:) forControlEvents:UIControlEventTouchUpInside];
                self.tempTableView = tableView;
            }
        }

        - (void)deleteItemsSelected {
            //for(int i = 0; i <= [tempArray count]; i++)
            //{
            //}
            [myModal.listOfStates removeObjectsInArray:tempArray];
            [tempTableView reloadData];
        }

    checkAction is a custom cell method which is used to put a tick mark on the selected row.
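
    Nothing in the code above ever sets isShaked, so the delete branch can never run. A sketch of the standard UIKit shake hook (the view controller must be first responder to receive motion events; isShaked and the delete button are assumed from the code above):

        - (BOOL)canBecomeFirstResponder {
            return YES; // required so this controller receives motion events
        }

        - (void)viewDidAppear:(BOOL)animated {
            [super viewDidAppear:animated];
            [self becomeFirstResponder];
        }

        - (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
            if (motion == UIEventSubtypeMotionShake) {
                isShaked = YES;
                // show the delete UI as soon as the shake happens,
                // instead of waiting for a row selection
                self.title = @"DELETE";
                self.navigationItem.rightBarButtonItem = self.delete;
            }
        }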

    Read the article

  • UTF-8 character encoding battles json_encode()

    - by Dave Jarvis
    Quest: I am looking to fetch rows that have accented characters. The encoding for the column (NAME) is latin1_swedish_ci.

    The Code: The following query returns Abord â Plouffe using phpMyAdmin:

        SELECT C.NAME
        FROM CITY C
        WHERE C.REGION_ID=10
        AND C.NAME_LOWERCASE LIKE '%abor%'
        ORDER BY C.NAME
        LIMIT 30

    The following displays expected values (the function is called db_fetch_all($result)):

        while( $row = mysql_fetch_assoc( $result ) ) {
            foreach( $row as $value ) {
                echo $value . " ";
                $value = utf8_encode( $value );
                echo $value . " ";
            }
            $r[] = $row;
        }

    The displayed values:

        5482 5482 Abord â Plouffe Abord â Plouffe

    The array is then encoded using json_encode:

        $rows = db_fetch_all( $result );
        echo json_encode( $rows );

    Problem: The web browser receives the following value:

        {"ID":"5482","NAME":null}

    Instead of:

        {"ID":"5482","NAME":"Abord â Plouffe"}

    (Or the encoded equivalent.)

    Question: The documentation states that json_encode() works on UTF-8. I can see the values being encoded from LATIN1 to UTF-8. After the call to json_encode(), however, the value becomes null. How do I make json_encode() encode the UTF-8 values properly? One possible solution is to use the Zend Framework, but I'd rather not if it can be avoided.
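
    Two plausible fixes, sketched below. Note a likely culprit in the loop above: foreach ($row as $value) gives you a copy, so the utf8_encode() result is discarded and the raw latin1 bytes still reach json_encode(), which returns null for any string that is not valid UTF-8:

        // Option 1: make the connection return UTF-8, so no re-encoding is needed
        mysql_set_charset('utf8', $link); // call once after mysql_connect()

        // Option 2: actually write the converted value back into the row
        while ($row = mysql_fetch_assoc($result)) {
            foreach ($row as $key => $value) {
                $row[$key] = utf8_encode($value); // latin1 -> UTF-8
            }
            $r[] = $row;
        }
        echo json_encode($r);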

    Read the article

  • How to best handle exception to repeating calendar events

    - by blcArmadillo
    I'm working on a project that will require me to implement a calendar. I'm trying to come up with a system that is very flexible: it can handle repeating events, exceptions to repeats, etc. I've looked at the schemas of applications like iCal, Lotus Notes, and Mozilla to get an idea of how to go about implementing such a system. Currently I'm having trouble deciding the best way to handle exceptions to repeating events. I've used databases quite a bit but don't have a ton of experience with optimization, so I'm not sure which of the two methods I'm considering would be better in terms of overall performance and ability to query/search:

    1. Breaking the repeating event: change the end date on the current row for the repeating event, insert a new row with the exception, and add another row continuing the old sequence.

    2. Simply adding an exception: add a new row with some field that marks it as an override.

    So here is why I can't decide. Method one will result in a lot more rows, since each edit requires two extra rows as opposed to only one row with the second method. On the other hand, I think the query to find an event would be much simpler, and thus possibly faster(?), using the first method. The second method seems like it will require more calculating on the application server, since once you get the data you'll have to remove the intersection of the two rows. I know databases are often the bottleneck for websites, and while I'm sure a lot of you are thinking either is fine because your project will probably never get large enough for the difference in efficiency to really matter, I'd still like to implement the best solution. So which method would you pick, or would you do something completely different? Also, as a side note, I'll be using MySQL and PHP. If there is another technology that you think would be better suited for this, especially in the database area, please mention it. Thanks for the advice.
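
    For reference, a minimal MySQL sketch of option 2, with all table and column names illustrative: the exception row overrides or cancels a single occurrence, so expanding a series means generating occurrences from the repeat rule and then applying any matching exception rows in application code.

        CREATE TABLE event (
            id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            title       VARCHAR(200) NOT NULL,
            starts_at   DATETIME NOT NULL,
            ends_at     DATETIME NOT NULL,
            repeat_rule VARCHAR(255) NULL  -- e.g. an iCalendar RRULE string; NULL = one-off event
        );

        CREATE TABLE event_exception (
            id            INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            event_id      INT UNSIGNED NOT NULL,
            occurrence_on DATE NOT NULL,     -- which occurrence is overridden
            new_starts_at DATETIME NULL,     -- NULL means the occurrence is cancelled
            new_ends_at   DATETIME NULL,
            FOREIGN KEY (event_id) REFERENCES event (id)
        );

    This keeps the series row intact at one extra row per edit, which is exactly the trade-off described above: less storage and simpler writes than method one, at the cost of merging exceptions on the application side.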

    Read the article
