Search Results

Search found 11153 results on 447 pages for 'count zero'.

Page 54/447

  • Does anyone get zero-height select fields in Firefox 3.6.3?

    - by user350635
    If you open this HTML in Firefox 3.6.3 (confirmed in some earlier versions too), and click the drawStuff() link repeatedly, it doesn't render the contents of the last div consistently. Looking more closely it seems like it's rendering select fields with height=0. Any idea why this would happen? <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <title> A Page </title> <script type="text/javascript"> function drawStuff() { for (var i = 1; i <= 5; i++) { var curHtmlArr = []; for (var j = 0; j < 5; j++){ curHtmlArr.push("<select>"); curHtmlArr.push(getOptgroup()); curHtmlArr.push(getOptgroup()); curHtmlArr.push(getOptgroup()); curHtmlArr.push("<\/select>"); } var foobar = document.getElementById('elem_' + i); foobar.innerHTML = curHtmlArr.join(''); } } function getOptgroup(){ var htmlArr = []; htmlArr.push('<optgroup label="Whatever">'); for (var ii = 0; ii < 32; ii++){ htmlArr.push(' <option value="' + ii + '"> Blah ' + "<\/option>"); } htmlArr.push("<\/optgroup>"); return htmlArr.join(''); } </script> </head> <body> <table border=1 style="width:900px;" summary="A Table"> <tr> <td> <div id="elem_1"></div> </td> <td> <div id="elem_2"></div> </td> <td> <div id="elem_3"></div> </td> <td> <div id="elem_4"></div> </td> <td> <div>abc</div> <div id="elem_5"></div> </td> </tr> </table> <a href="javascript:drawStuff()"> drawStuff() </a> <script type="text/javascript"> drawStuff(); </script> </body> </html>

    Read the article

  • Global variable in a recursive function: how to keep it at zero?

    - by Grammin
    So if I have a recursive function with a global variable var_: int var_; void foo() { if(var_ == 3) return; else var_++; foo(); } and then I have a function that calls foo() like so: void bar() { foo(); return; } what is the best way to set var_ = 0 every time foo is called from somewhere other than within itself? I know I could just do: void bar() { var_ =0; foo(); return; } but I'm using the recursive function a lot and I don't want to call foo and forget to set var_=0 at a later date. Does anyone have any suggestions on how to solve this? Thanks, Josh
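
    One common way around this is to drop the global entirely and carry the counter through the recursion, with a thin public wrapper that supplies the starting value so callers can never forget to reset it. A rough sketch of that idea (written in Java here purely for illustration; the same shape works in C++ with a private helper that takes an int parameter):

      public class Foo {
          // public entry point: every call starts the counter at zero
          public static void foo() {
              fooImpl(0);
          }

          // recursive worker: the counter is a parameter, not a global
          private static void fooImpl(int depth) {
              if (depth == 3) {
                  return;
              }
              fooImpl(depth + 1);
          }

          public static void main(String[] args) {
              foo();   // callers never touch the counter
              foo();   // each call starts fresh automatically
          }
      }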

    Read the article

  • How to give position zero of spinner a prompt value?

    - by Eugene H
    The database is then transferring the data to a spinner which I want to leave position 0 blank so I can add a item to the spinner with no value making it look like a prompt. I have been going at it all day. FAil after Fail MainActivity public class MainActivity extends Activity { Button AddBtn; EditText et; EditText cal; Spinner spn; SQLController SQLcon; ProgressDialog PD; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); AddBtn = (Button) findViewById(R.id.addbtn_id); et = (EditText) findViewById(R.id.et_id); cal = (EditText) findViewById(R.id.et_cal); spn = (Spinner) findViewById(R.id.spinner_id); spn.setOnItemSelectedListener(new OnItemSelectedListenerWrapper( new OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) { SQLcon.open(); Cursor c = SQLcon.readData(); if (c.moveToPosition(pos)) { String name = c.getString(c .getColumnIndex(DBhelper.MEMBER_NAME)); String calories = c.getString(c .getColumnIndex(DBhelper.KEY_CALORIES)); et.setText(name); cal.setText(calories); } SQLcon.close(); // closing database } @Override public void onNothingSelected(AdapterView<?> parent) { // TODO Auto-generated method stub } })); SQLcon = new SQLController(this); // opening database SQLcon.open(); loadtospinner(); AddBtn.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { new MyAsync().execute(); } }); } public void loadtospinner() { ArrayList<String> al = new ArrayList<String>(); Cursor c = SQLcon.readData(); c.moveToFirst(); while (!c.isAfterLast()) { String name = c.getString(c.getColumnIndex(DBhelper.MEMBER_NAME)); String calories = c.getString(c .getColumnIndex(DBhelper.KEY_CALORIES)); al.add(name + ", Calories: " + calories); c.moveToNext(); } ArrayAdapter<String> aa1 = new ArrayAdapter<String>( getApplicationContext(), android.R.layout.simple_spinner_item, al); spn.setAdapter(aa1); // closing database SQLcon.close(); } private class MyAsync extends AsyncTask<Void, Void, Void> { @Override protected void onPreExecute() { super.onPreExecute(); PD = new ProgressDialog(MainActivity.this); PD.setTitle("Please Wait.."); PD.setMessage("Loading..."); PD.setCancelable(false); PD.show(); } @Override protected Void doInBackground(Void... 
params) { String name = et.getText().toString(); String calories = cal.getText().toString(); // opening database SQLcon.open(); // insert data into table SQLcon.insertData(name, calories); return null; } @Override protected void onPostExecute(Void result) { super.onPostExecute(result); loadtospinner(); PD.dismiss(); } } } DataBase public class SQLController { private DBhelper dbhelper; private Context ourcontext; private SQLiteDatabase database; public SQLController(Context c) { ourcontext = c; } public SQLController open() throws SQLException { dbhelper = new DBhelper(ourcontext); database = dbhelper.getWritableDatabase(); return this; } public void close() { dbhelper.close(); } public void insertData(String name, String calories) { ContentValues cv = new ContentValues(); cv.put(DBhelper.MEMBER_NAME, name); cv.put(DBhelper.KEY_CALORIES, calories); database.insert(DBhelper.TABLE_MEMBER, null, cv); } public Cursor readData() { String[] allColumns = new String[] { DBhelper.MEMBER_ID, DBhelper.MEMBER_NAME, DBhelper.KEY_CALORIES }; Cursor c = database.query(DBhelper.TABLE_MEMBER, allColumns, null, null, null, null, null); if (c != null) { c.moveToFirst(); } return c; } } Helper public class DBhelper extends SQLiteOpenHelper { // TABLE INFORMATTION public static final String TABLE_MEMBER = "member"; public static final String MEMBER_ID = "_id"; public static final String MEMBER_NAME = "name"; public static final String KEY_CALORIES = "calories"; // DATABASE INFORMATION static final String DB_NAME = "MEMBER.DB"; static final int DB_VERSION = 2; // TABLE CREATION STATEMENT private static final String CREATE_TABLE = "create table " + TABLE_MEMBER + "(" + MEMBER_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + MEMBER_NAME + " TEXT NOT NULL," + KEY_CALORIES + " INT NOT NULL);"; public DBhelper(Context context) { super(context, DB_NAME, null, DB_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(CREATE_TABLE); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { // TODO Auto-generated method stub db.execSQL("DROP TABLE IF EXISTS " + TABLE_MEMBER); onCreate(db); } }
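
    One approach that keeps the rest of the code above unchanged: insert a placeholder entry at index 0 of the list before attaching the adapter, treat position 0 as the prompt, and shift cursor lookups by one. A sketch of just the two methods that would change (same field and helper names as in the question; whether a blank row is acceptable as a prompt is a separate design question):

      public void loadtospinner() {
          ArrayList<String> al = new ArrayList<String>();
          al.add("");   // position 0: blank "prompt" row with no real record behind it
          Cursor c = SQLcon.readData();
          c.moveToFirst();
          while (!c.isAfterLast()) {
              String name = c.getString(c.getColumnIndex(DBhelper.MEMBER_NAME));
              String calories = c.getString(c.getColumnIndex(DBhelper.KEY_CALORIES));
              al.add(name + ", Calories: " + calories);
              c.moveToNext();
          }
          ArrayAdapter<String> aa1 = new ArrayAdapter<String>(
                  getApplicationContext(), android.R.layout.simple_spinner_item, al);
          spn.setAdapter(aa1);
          SQLcon.close();
      }

      @Override
      public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) {
          if (pos == 0) {
              return;   // the blank prompt row was selected, nothing to load
          }
          SQLcon.open();
          Cursor c = SQLcon.readData();
          if (c.moveToPosition(pos - 1)) {   // shift by one to skip the placeholder
              et.setText(c.getString(c.getColumnIndex(DBhelper.MEMBER_NAME)));
              cal.setText(c.getString(c.getColumnIndex(DBhelper.KEY_CALORIES)));
          }
          SQLcon.close();
      }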

    Read the article

  • How do I get an array to just count how many numbers there are instead of counting the value of each number?

    - by Chad Loos
    //This is the output of the program *** start of 276 2D Arrays_03.cpp program *** Number Count Total 1 3 3 2 6 9 3 15 24 4 6 30 5 9 39 *** end of 276 2D Arrays_03.cpp program *** #include <iostream> #include <string> #include <iomanip> using namespace std; const int COLUMN_SIZE = 13; int main(void) { const int ROW_SIZE = 3; const int COUNT_SIZE = 5; void countValues(const int[][COLUMN_SIZE], const int, int[]); void display(const int [], const int); int numbers[ROW_SIZE][COLUMN_SIZE] = {{1, 3, 4, 5, 3, 2, 3, 5, 3, 4, 5, 3, 2}, {2, 1, 3, 4, 5, 3, 2, 3, 5, 3, 4, 5, 3}, {3, 4, 5, 3, 2, 1, 3, 4, 5, 3, 2, 3, 5}}; int counts[COUNT_SIZE] = {0}; string choice; cout << "*** start of 276 2D Arrays_03.cpp program ***" << endl; cout << endl; countValues(numbers, ROW_SIZE, counts); display(counts, COUNT_SIZE); cout << endl; cout << endl; cout << "*** end of 276 2D Arrays_03.cpp program ***" << endl << endl; cin.get(); return 0; } // end main() This is the function where I need to count each of the values. I know how to sum rows and cols, but I'm not quite sure of the code to just tally the values themselves. void countValues(const int numbers[][COLUMN_SIZE], const int ROW_SIZE, int counts[]) This is what I have so far. { for (int index = 0; index < ROW_SIZE; index++) counts[index]; {
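
    If the goal is a tally of how many times each value 1..5 appears (rather than a sum of the values), the usual trick is to let the value itself pick the slot in the counts array and increment that slot while walking every row and column. A rough sketch of that tallying idea, written in Java only to show the shape (the assignment itself is C++, where the same double loop fits inside the countValues signature above):

      public class CountValues {
          public static void main(String[] args) {
              int[][] numbers = {
                  {1, 3, 4, 5, 3, 2, 3, 5, 3, 4, 5, 3, 2},
                  {2, 1, 3, 4, 5, 3, 2, 3, 5, 3, 4, 5, 3},
                  {3, 4, 5, 3, 2, 1, 3, 4, 5, 3, 2, 3, 5}
              };
              int[] counts = new int[5];                 // counts[0] tallies value 1, counts[1] value 2, ...
              for (int row = 0; row < numbers.length; row++) {
                  for (int col = 0; col < numbers[row].length; col++) {
                      counts[numbers[row][col] - 1]++;   // the value itself selects which counter to bump
                  }
              }
              for (int value = 1; value <= counts.length; value++) {
                  System.out.println(value + " appears " + counts[value - 1] + " times");
              }
          }
      }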

    Read the article

  • Null pointer to struct which has zero size (empty)... Is it good practice?

    - by ProgramWriter
    Hi all. I have a null struct, for example: struct null_type { NullType& someNonVirtualMethod() { return *this; } }; And in some function I need to pass a reference to this type. Reason: template <typename T1 = null_type, typename T2 = null_type, ... > class LooksLikeATupleButItsNotATuple { public: LooksLikeATupleButItsNotATuple(T1& ref1 = defParamHere, T2& ref2 = andHere..) : _ref1(ref1), _ref2(ref2), ... { } void someCompositeFunctionHere() { _ref1.someNonVirtualMethod(); _ref2.someNonVirtualMethod(); ... } private: T1& _ref1; T2& _ref2; ...; }; Is it good practice to use a null reference as a default parameter?: *static_cast<NullType*>(0) It works on MSVC, but I have some doubts...

    Read the article

  • Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host

    - by Paul J. Warner
    I am having an issue with a program where after 6 mins +- 5 secs we get the above exception. Some more info about the exception stacktrace is below. This all happens pretty religiously, 6 mins goes by and bam the following 3 exeptions. We have the application installed in 2 other environments and it is working fine there. I am hoping to find some server settings either IIS 6 or Server 2003 settings that may be causing this issue to occur. I have reviewed some of the similar questions and don't see very many answers. I am hoping that maybe the information I have provided may help a little bit. 208741,Exception,,,,2011-06-21 00:30:14.193,SERVERNAME,2624,1,CLIENTNAME,The underlying connection was closed: An unexpected error occurred on a receive. , at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at Microsoft.Web.Services3.WebServicesClientProtocol.GetResponse(WebRequest request, IAsyncResult result) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1 208742,Exception,,,,2011-06-21 00:30:14.227,SERVERNAME,2624,1,CLIENTNAME,Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. , at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1 208743,Exception,,,,2011-06-21 00:30:14.287,SERVERNAME,2624,1,CLIENTNAME,An existing connection was forcibly closed by the remote host , at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size),-691097507,62,1

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMWare ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this server, which seems to have caused the server to run really slowly without any obvious resource issues. Examples of the slow performance: The time taken to bring up the password prompt over ssh is now a few seconds when it was previously instantaneous. The time taken to unzip a zip file which was previously a few seconds now takes around 30 seconds. The time taken to compile vmware tools has increased by a similar factor. Neither the VMWare console nor the monitoring commands report any issues with high CPU or memory usage, but something is obviously slowing the server down. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom Edit: As per your questions I've looked at some of the performance indicators on both the VM host and the VM guest. Firstly I tried reserving the full amount of memory (3gb) for this VM – no other machines on this server have any memory reservation. The swap in rate and swap out rate for the VM host and guest are now both zero. Balloon memory on the guest is zero and on the host is 3.5gb (total memory on the host is 12gb). The swap rate for the guest is also zero. Swap used by the host is 200mb on average. Compression and decompression rates for the host and guest are zero. Command aborts for the host are zero. Read latency is very low – maximum 10ms, average 0.8ms. Write latency is higher – a few spikes to 170ms but mostly around 25ms – is this bad? Queue command latency is zero. Physical disk read latency averages 5ms but is often 10ms. Physical disk write latency averages 15ms but is often 20ms. I hope this helps - let me know if you need any more information.

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }

    Read the article

  • puzzled with java if else performance

    - by user1906966
    I am doing an investigation on a method's performance and finally identified the overhead was caused by the "else" portion of the if else statement. I have written a small program to illustrate the performance difference even when the else portion of the code never gets executed: public class TestIfPerf { public static void main( String[] args ) { boolean condition = true; long time = 0L; int value = 0; // warm up test for( int count=0; count<10000000; count++ ) { if ( condition ) { value = 1 + 2; } else { value = 1 + 3; } } // benchmark if condition only time = System.nanoTime(); for( int count=0; count<10000000; count++ ) { if ( condition ) { value = 1 + 2; } } time = System.nanoTime() - time; System.out.println( "1) performance " + time ); time = System.nanoTime(); // benchmark if else condition for( int count=0; count<10000000; count++ ) { if ( condition ) { value = 1 + 2; } else { value = 1 + 3; } } time = System.nanoTime() - time; System.out.println( "2) performance " + time ); } } and run the test program with java -classpath . -Dmx=800m -Dms=800m TestIfPerf. I performed this on both Mac and Linux Java with 1.6 latest build. Consistently the first benchmark, without the else is much faster than the second benchmark with the else section even though the code is structured such that the else portion is never executed because of the condition. I understand that to some, the difference might not be significant but the relative performance difference is large. I wonder if anyone has any insight to this (or maybe there is something I did incorrectly). Linux benchmark (in nano) performance 1215488 performance 2629531 Mac benchmark (in nano) performance 1667000 performance 4208000
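
    A likely explanation, offered as a hypothesis rather than a diagnosis: the difference comes from how the measurement is set up, not from the else branch. The warm-up loop only exercises the if/else shape, the first timed loop is a new shape that the JIT may compile (or dead-code-eliminate) part-way through its own measurement, and value is never consumed, so either loop body is a candidate for removal. A sketch of a fairer comparison that warms up both loop shapes and publishes the result:

      public class TestIfPerf2 {
          private static volatile int sink;   // written after each run so the loop result stays observable

          static long timeIfOnly(boolean condition, int iterations) {
              int value = 0;
              long start = System.nanoTime();
              for (int count = 0; count < iterations; count++) {
                  if (condition) { value = 1 + 2; }
              }
              sink = value;
              return System.nanoTime() - start;
          }

          static long timeIfElse(boolean condition, int iterations) {
              int value = 0;
              long start = System.nanoTime();
              for (int count = 0; count < iterations; count++) {
                  if (condition) { value = 1 + 2; } else { value = 1 + 3; }
              }
              sink = value;
              return System.nanoTime() - start;
          }

          public static void main(String[] args) {
              final int n = 10000000;
              // warm up BOTH shapes so each method is compiled before it is timed
              for (int i = 0; i < 20; i++) { timeIfOnly(true, n); timeIfElse(true, n); }
              System.out.println("1) if only : " + timeIfOnly(true, n));
              System.out.println("2) if/else : " + timeIfElse(true, n));
          }
      }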

    Read the article

  • Java Matcher groups: Understanding The difference between "(?:X|Y)" and "(?:X)|(?:Y)"

    - by user358795
    Can anyone explain: Why the two patterns used below give different results? (answered below) Why the 2nd example gives a group count of 1 but says the start and end of group 1 is -1? public void testGroups() throws Exception { String TEST_STRING = "After Yes is group 1 End"; { Pattern p; Matcher m; String pattern="(?:Yes|No)(.*)End"; p=Pattern.compile(pattern); m=p.matcher(TEST_STRING); boolean f=m.find(); int count=m.groupCount(); int start=m.start(1); int end=m.end(1); System.out.println("Pattern=" + pattern + "\t Found=" + f + " Group count=" + count + " Start of group 1=" + start + " End of group 1=" + end ); } { Pattern p; Matcher m; String pattern="(?:Yes)|(?:No)(.*)End"; p=Pattern.compile(pattern); m=p.matcher(TEST_STRING); boolean f=m.find(); int count=m.groupCount(); int start=m.start(1); int end=m.end(1); System.out.println("Pattern=" + pattern + "\t Found=" + f + " Group count=" + count + " Start of group 1=" + start + " End of group 1=" + end ); } } Which gives the following output: Pattern=(?:Yes|No)(.*)End Found=true Group count=1 Start of group 1=9 End of group 1=21 Pattern=(?:Yes)|(?:No)(.*)End Found=true Group count=1 Start of group 1=-1 End of group 1=-1
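
    The behaviour follows from precedence: | binds more loosely than anything else in a regex, so (?:Yes)|(?:No)(.*)End is read as "(?:Yes)" OR "(?:No)(.*)End". The test string matches the bare Yes alternative, group 1 takes no part in that match, and start(1)/end(1) therefore return -1, while groupCount() still reports 1 because it counts the groups defined in the pattern, matched or not. A small sketch of the usual fix, wrapping the whole alternation in its own group:

      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class AlternationDemo {
          public static void main(String[] args) {
              String input = "After Yes is group 1 End";
              // the alternation sits inside one group, so (.*)End applies to both alternatives
              Matcher m = Pattern.compile("(?:(?:Yes)|(?:No))(.*)End").matcher(input);
              if (m.find()) {
                  // prints the same span as the original "(?:Yes|No)(.*)End" pattern: 9..21
                  System.out.println("group 1 = [" + m.group(1) + "] at " + m.start(1) + ".." + m.end(1));
              }
          }
      }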

    Read the article

  • Downloading a webpage in C# example

    - by Chris
    I am trying to understand some example code on this web page: (http://www.csharp-station.com/HowTo/HttpWebFetch.aspx) that downloads a file from the internet. The piece of code quoted below goes through a loop getting chunks of data and saving them to a string until all the data has been downloaded. As I understand it, "count" contains the size of the downloaded chunk and the loop runs until count is 0 (an empty chunk of data is downloaded). My question is, isn't it possible that count could be 0 without the file being completely downloaded? Say the network connection is interrupted: the stream may not have any data to read on a pass of the loop, count would be 0, and the download would end prematurely. Or does resStream.Read stop the program until it gets data? Is this the correct way to save a stream? int count = 0; do { // fill the buffer with data count = resStream.Read(buf, 0, buf.Length); // make sure we read some data if (count != 0) { // translate from bytes to ASCII text tempString = Encoding.ASCII.GetString(buf, 0, count); // continue building the string sb.Append(tempString); } } while (count > 0); // any more data to read?

    Read the article

  • Finding an odd perfect number

    - by Coin Bird
    I wrote these two methods to determine if a number is perfect. My prof wants me to combine them to find out if there is an odd perfect number. I know there isn't one(that is known), but I need to actually write the code to prove that. The issue is with my main method. I tested the two test methods. I tried debugging and it gets stuck on the number 5, though I can't figure out why. Here is my code: public class Lab6 { public static void main (String[]args) { int testNum = 3; while (testNum != sum_of_divisors(testNum) && testNum%2 != 0) testNum++; } public static int sum_of_divisors(int numDiv) { int count = 1; int totalDivisors = 0; while (count < numDiv) if (numDiv%count == 0) { totalDivisors = totalDivisors + count; count++; } else count++; return totalDivisors; } public static boolean is_perfect(int numPerfect) { int count = 1; int totalPerfect = 0; while (totalPerfect < numPerfect) { totalPerfect = totalPerfect + count; count++; } if (numPerfect == totalPerfect) return true; else return false; } }
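
    As posted, the main loop is probably the place to look: it steps by 1, so testNum stops being odd after a single increment and the two loop conditions work against each other, while is_perfect sums consecutive integers rather than divisors and is never called. A minimal sketch of one way to combine the ideas, stepping over odd candidates only, reusing the divisor sum as the perfection test, and bounding the search so the program is guaranteed to finish (the bound is arbitrary and chosen only for illustration):

      public class OddPerfectSearch {
          // sum of the proper divisors of numDiv, as in sum_of_divisors above
          static int sumOfDivisors(int numDiv) {
              int total = 0;
              for (int d = 1; d < numDiv; d++) {
                  if (numDiv % d == 0) {
                      total += d;
                  }
              }
              return total;
          }

          public static void main(String[] args) {
              final int limit = 1000000;               // arbitrary cap so the loop terminates
              for (int n = 3; n <= limit; n += 2) {    // odd candidates only
                  if (n == sumOfDivisors(n)) {
                      System.out.println("Odd perfect number found: " + n);
                      return;
                  }
              }
              System.out.println("No odd perfect number up to " + limit);
          }
      }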

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!

    Read the article

  • XNA: Huge Tile Map, long load times

    - by Zach
    Recently I built a tile map generator for a game project. What I am very proud of is that I finally got it to the point where I can have a GIANT 2D map build perfectly on my PC. About 120000pixels by 40000 pixels. I can go larger actually, but I have only 1 draw back. #1 ram, the map currently draws about 320MB of ram and I know the Xbox allows 512MB I think? #2 It takes 20 mins for the map to build then display on the Xbox, on my PC it take less then a few seconds. I need to bring that 20 minutes of generating from 20 mins to how ever little bit I can, and how can a lower the amount of RAM usage while still being able to generate my map. Right now everything is stored in Jagged Arrays, each piece generating in a size of 1280x720 (the mother piece). Up to the amount that I need, every block is exactly 40x40 pixels however the blocks get removed from a List or regenerated in a List depending how close the mother piece is to the player. Saving A LOT of CPU, so at all times its no more then looping through 5184 some blocks. Well at least I'm sure of this. But how can I lower my RAM usage without hurting the size of the map, and how can I lower these INSANE loading times? EDIT: Let me explain my self better. Also I'd like to let everyone know now that I'm inexperienced with many of these things. So here is an example of the arrays I'm using. Here is the overall in a shorter term: int[][] array = new int[30][]; array[0] = new int[] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 }; array[1] = new int[] { 1, 3, 3, 3, 3, 1, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 }; that goes on for around 30 arrays downward. Now for every time it hits a 1, it goes and generates a tile map 1280x720 and it does that exactly the way it does it above. This is how I loop through those arrays: for (int i = 0; i < array.Length; i += 1) { for (int h = 0; h < array[i].Length; h += 1) { } { Now how the tiles are drawn and removed is something like this: public void Draw(SpriteBatch spriteBatch, Vector2 cam) { if (cam.X >= this.Position.X - 1280) { if (cam.X <= this.Position.X + 2560) { if (cam.Y >= this.Position.Y - 720) { if (cam.Y <= this.Position.Y + 1440) { if (visible) { if (once == 0) { once = 1; visible = false; regen(); } } for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles[i].Draw(spriteBatch, cam); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles[i].Draw(spriteBatch, cam); } } else { once = 0; for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); } } } else { once = 0; for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); } } } else { once = 0; for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); } } } else { once = 0; for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); } for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); } } } } If you guys still need more information just ask in the comments.

    Read the article

  • What is an achievable way of setting content budgets (e.g. polygon count) for level content in a 3D title?

    - by MrCranky
    In answering this question for swquinn, the answer raised a more pertinent question that I'd like to hear answers to. I'll post our own strategy (promise I won't accept it as the answer), but I'd like to hear others. Specifically: how do you go about setting a sensible budget for your content team. Usually one of the very first questions asked in a development is: what's our polygon budget? Of course, these days it's rare that vertex/poly count alone is the limiting factor, instead shader complexity, fill-rate, lighting complexity, all come into play. What the content team want are some hard numbers / limits to work to such that they have a reasonable expectation that their content, once it actually gets into the engine, will not be too heavy. Given that 'it depends' isn't a particularly useful answer, I'd like to hear a strategy that allows me to give them workable limits without being a) misleading, or b) wrong.

    Read the article

  • How to retrieve row count of one-to-many relation while also including original entity?

    - by kaa
    Say I have two entities Foo and Bar where Foo has-many Bars, class Foo { int ImportantNumber { get; set; } IEnumerable<Bar> Bars { get; set; } } class FooDTO { Foo Foo { get; set; } int BarCount { get; set; } } How can I efficiently sum up the number of Bars per Foo in a DTO using a single query, preferably only with the Criteria interface? I have tried any number of ways to get the original entity out of a query with SetProjection but no luck. The current theory is to do something like SELECT Foo.*, BarCounts.counts FROM Foo LEFT JOIN ( SELECT fooId, COUNT(*) as counts FROM Bar GROUP BY fooId ) AS BarCounts ON Foo.id=BarCounts.fooId but with Criteria, and I just can't seem to figure out how.

    Read the article

  • jquery selector to count the number of visible table rows?

    - by sprugman
    I've got this html: <table> <tr style="display:table-row"><td>blah</td></tr> <tr style="display:none"><td>blah</td></tr> <tr style="display:none"><td>blah</td></tr> <tr style="display:table-row"><td>blah</td></tr> <tr style="display:table-row"><td>blah</td></tr> </table> I need to count the number of rows that don't have display:none. How can I do that?

    Read the article

  • MS SQL Server 15MM rows, simple COUNT query. 15+ seconds?

    - by john
    We took over a website from another company after a client decided to switch. We have a table that grows by about 25k records a day, and is currently at 15MM records. The table looks something like: id (PK, int, not null) member_id (int, not null) another_id (int, not null) date (datetime, not null) SELECT COUNT(id) FROM tbl can take up to 15 seconds. A simple inner join on 'another_id' takes over 30 seconds. I can't imagine why this is taking so long. Any advice? SQL Server 2005 Express

    Read the article
