Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 60/572 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • Animate multiple entities

    - by Robert
    I'm trying to animate multiple (3) entities using one model (IQM format). It works, but performance is really bad because I'm calling the animate function for each entity in my game loop (I think the problem is there). What's the best way to animate multiple entities (with different animations, of course) in OpenGL? I could try building one VBO per entity for better performance, but I don't think that's the best way to do it.

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while, good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton).

    Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only held in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and Bw-trees (which avoid locking through the use of pointers), and each update is handled as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley

    Read the article

  • Is there a generally accepted maximum number of http requests for a web page load?

    - by MorganTiley
    I'm looking into optimizing my web application's client-side performance but cannot figure out a good target for the number of HTTP requests per page. How does YSlow calculate its grade? This doesn't seem to be documented. Also, it seems many sites like linkedin.com and amazon.com get an F grade but still load quite fast. How do they fail the test yet still perform well? Gmail gets an A grade, and it has 43 unprimed/10 primed requests.

    Read the article

  • Very slow performance deserializing using datacontractserializer in a Silverlight Application.

    - by caryden
    Here is the situation: a Silverlight 3 application hits an ASP.NET-hosted WCF service to get a list of items to display in a grid. Once the list is brought down to the client it is cached in IsolatedStorage. This is done by using the DataContractSerializer to serialize all of these objects to a stream, which is then zipped and encrypted. When the application is relaunched, it first loads from the cache (reversing the process above) and then deserializes the objects using the DataContractSerializer.ReadObject() method. All of this was working wonderfully under all scenarios until recently, with the entire "load from cache" path (decrypt/unzip/deserialize) taking hundreds of milliseconds at most. On some development machines, but not all (all machines Windows 7), the deserialize process - that is, the call to ReadObject(stream) - takes several minutes and seems to lock up the entire machine, BUT ONLY WHEN RUNNING IN THE DEBUGGER in VS2008. Running the Debug-configuration code outside the debugger shows no problem. One thing that looks suspicious is that when you turn on breaking on exceptions, you can see that ReadObject() throws many, many System.FormatExceptions indicating that a number was not in the correct format. When I turn off "Just My Code", thousands of these get dumped to the screen. None go unhandled. These occur both on the read back from the cache AND on deserialization at the conclusion of a web service call to get the data from the WCF service. HOWEVER, these same exceptions occur on my laptop development machine, which does not experience the slowness at all. And FWIW, my laptop is really old and my desktop is a 4-core, 6 GB RAM beast. Again, no problems unless running under the debugger in VS2008. Anyone else seen this? Any thoughts? Here is the bug report link: https://connect.microsoft.com/VisualStudio/feedback/details/539609/very-slow-performance-deserializing-using-datacontractserializer-in-a-silverlight-application-only-in-debugger
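
    For readers who haven't used this pattern, a stripped-down sketch of the cache round trip described above might look roughly like the following (full-.NET flavour rather than Silverlight, with the zip and encrypt steps omitted; ItemCache and the List<string> payload are placeholder names, not the poster's actual types):

        using System.Collections.Generic;
        using System.IO;
        using System.IO.IsolatedStorage;
        using System.Runtime.Serialization;

        public static class ItemCache
        {
            // Serialize the object graph straight into isolated storage.
            public static void Save(List<string> items, string fileName)
            {
                var serializer = new DataContractSerializer(typeof(List<string>));
                using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
                using (var stream = store.CreateFile(fileName))
                {
                    serializer.WriteObject(stream, items);
                }
            }

            // Read it back; this is the equivalent of the ReadObject() call that is slow under the debugger.
            public static List<string> Load(string fileName)
            {
                var serializer = new DataContractSerializer(typeof(List<string>));
                using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
                using (var stream = store.OpenFile(fileName, FileMode.Open))
                {
                    return (List<string>)serializer.ReadObject(stream);
                }
            }
        }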

    Read the article

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: my database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: here are the table definitions for the example I used in my question: create table Order ( Id int identity(1, 1) primary key, ProductName ntext null ) create table Customer ( Id int identity(1, 1) primary key, OrderId int null references Order (Id) )
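
    If the whole database really is only a couple of megabytes, one sketch of the "load it into RAM and query there" idea is below: materialize both tables once with ToList() and then use LINQ to Objects for the lookups. This is illustrative only - CatalogCache and the trimmed-down POCOs are made-up stand-ins for the generated LinqToSql classes, and it ignores change tracking and saving back to the database:

        using System.Collections.Generic;
        using System.Linq;

        // Minimal stand-ins for the generated entity classes.
        class Customer { public int Id; public int? OrderId; public string Name; }
        class Order    { public int Id; public string ProductName; }

        static class CatalogCache
        {
            public static List<Customer> Customers;
            public static List<Order> Orders;

            // Call once at startup, e.g. Load(db.Customers.ToList(), db.Orders.ToList());
            public static void Load(List<Customer> customers, List<Order> orders)
            {
                Customers = customers;
                Orders = orders;
            }

            // All subsequent lookups are LINQ to Objects - no SQL CE round trip per query.
            public static IEnumerable<Customer> CustomersWhoOrdered(string productName)
            {
                return from c in Customers
                       join o in Orders on c.OrderId equals (int?)o.Id
                       where o.ProductName == productName
                       select c;
            }
        }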

    Read the article

  • Using Windows PowerShell 1.0 or 2.0 to evaluate performance of executable files.

    - by Andry
    Hello! I am writing a simple script in Windows PowerShell in order to evaluate the performance of executable files. The important hypothesis is the following: I have an executable file; it can be an application written in any possible language (.NET or not: Visual Prolog, C++, C, anything that can be compiled as an .exe file). I want to profile it by getting execution times. I did this:

        Function Time-It {
            Param ([string]$ProgramPath, [string]$Arguments)
            $Watch = New-Object System.Diagnostics.Stopwatch
            $NsecPerTick = (1000 * 1000 * 1000) / [System.Diagnostics.Stopwatch]::Frequency
            Write-Output "Stopwatch created! NSecPerTick = $NsecPerTick"
            $Watch.Start() # Starts the timer
            [System.Diagnostics.Process]::Start($ProgramPath, $Arguments)
            $Watch.Stop() # Stops the timer
            # Collecting timings
            $Ticks = $Watch.ElapsedTicks
            $NSecs = $Watch.ElapsedTicks * $NsecPerTick
            Write-Output "Program executed: time is: $Nsecs ns ($Ticks ticks)"
        }

    This function uses a stopwatch. The function accepts a program path; the stopwatch is started, the program runs, and the stopwatch is then stopped. Problem: System.Diagnostics.Process.Start is asynchronous, so the next instruction (stopping the watch) does not wait for the application to finish. A new process is created... I need to stop the timer once the program ends. I thought about the Process class, thinking it held some info regarding the execution times... no luck. How can I solve this?
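
    The usual fix is to hold on to the Process object that Start() returns and call WaitForExit() on it before stopping the stopwatch (in the script above, that would mean something like $Process = [System.Diagnostics.Process]::Start(...) followed by $Process.WaitForExit()). The same pattern in C#, as a rough sketch, since it is the same System.Diagnostics types underneath:

        using System;
        using System.Diagnostics;

        class TimeIt
        {
            static void Main(string[] args)
            {
                string programPath = args[0];
                string arguments = args.Length > 1 ? args[1] : "";

                var watch = Stopwatch.StartNew();
                using (Process p = Process.Start(programPath, arguments))
                {
                    p.WaitForExit();   // block until the child process has finished
                }
                watch.Stop();

                double nsPerTick = 1e9 / Stopwatch.Frequency;
                Console.WriteLine("Elapsed: " + (watch.ElapsedTicks * nsPerTick) + " ns (" + watch.ElapsedMilliseconds + " ms)");
            }
        }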

    Read the article

  • C# performance methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving data? The final output data should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving data. A few dodgy ways I can think of are: have a List and a buffer; after the buffer is full, add it to the list, and at the end call list.ToArray() to get the byte[]. Or: write the buffer to a memory stream, and after it's complete construct a byte[] of stream.Length and read it all back in order to get the byte[] output. Is there a more efficient/better way of doing this?
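
    A minimal sketch of the second option (the MemoryStream one), which in practice is usually the simpler of the two; if the sender tells you the total length up front, allocating the final byte[] once and filling it directly would avoid the extra copy that ToArray() makes:

        using System.IO;
        using System.Net.Sockets;

        static class SocketReader
        {
            // Drain a connected socket into a byte[]: read into a fixed buffer
            // and append to a MemoryStream until the peer shuts the connection down.
            public static byte[] ReceiveAll(Socket socket)
            {
                var buffer = new byte[8192];
                using (var ms = new MemoryStream())
                {
                    int read;
                    while ((read = socket.Receive(buffer)) > 0)
                    {
                        ms.Write(buffer, 0, read);   // copy only the bytes actually received
                    }
                    return ms.ToArray();             // one final allocation for the result
                }
            }
        }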

    Read the article

  • When should we use Views, Temporary Tables and Direct Queries ? What are the Performance issues in a

    - by Shantanu Gupta
    I want to know the performance implications of using views, temp tables and direct queries in a stored procedure. I have a table that gets created every time a trigger fires. I know this trigger will fire very rarely, and only once at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to it, i.e. it is a read-only table. I have to join this table's data with multiple other tables and fetch results for further queries, say select * from triggertable.

    By using a temp table:
        select ... into #tx from triggertable join t2 join t3 and so on
        select a,b,c from #tx --do something
        select d,e,f from #tx --do something
        --and so on, around 6-7 queries in a row in a stored procedure.

    By using a view:
        create view viewname as ( select ... from triggertable join t2 join t3 and so on )
        select a,b,c from viewname --do something
        select d,e,f from viewname --do something
        --and so on, around 6-7 queries in a row in a stored procedure.
    This view can be used in other places as well, so I would create it in the database rather than inside the stored procedure.

    By using direct queries:
        select a,b,c from ( select ... from triggertable join t2 join t3 join ... ) --do something
        select d,e,f from ( select ... from triggertable join t2 join t3 join ... ) --do something
        --and so on, around 6-7 queries in a row in a stored procedure.

    Now I can use a view, a temporary table, or direct queries for all the upcoming queries. Which would be best in this case?

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all. First - this is not meant to be an ignorant "which is better" war thread... But rather, I generally need help in making an architecture decision / argument to put forward to my boss. Skipping the details - I would simply love to find the results from anyone who has done performance comparisons of shell scripts vs [insert general-purpose programming language (interpreted) here], such as C# or Java... Surprisingly, I have spent some time on Google and searching here without finding any of this data. Has anyone ever done these comparisons for different use cases: hitting a database in some number of loops doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD ops - and also, without hitting a database, just a regular 50k-iteration loop comparison doing different types of calculations, and things of that nature? In particular, right now I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted GPPL would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations / instructions / etc... Before you ask "why not just write a quick test yourself?", the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question on here of the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in a near-perpetual deadline crunch as it is ;). Thanks so much in advance,

    Read the article

  • What should I use to increase performance. View/Query/Temporary Table

    - by Shantanu Gupta
    I want to know the performance implications of using views, temp tables and direct queries in a stored procedure. I have a table that gets created every time a trigger fires. I know this trigger will fire very rarely, and only once at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to it, i.e. it is a read-only table. I have to join this table's data with multiple other tables and fetch results for further queries, say select * from triggertable.

    By using a temp table:
        select ... into #tx from triggertable join t2 join t3 and so on
        select a,b,c from #tx --do something
        select d,e,f from #tx --do something
        --and so on, around 6-7 queries in a row in a stored procedure.

    By using a view:
        create view viewname as ( select ... from triggertable join t2 join t3 and so on )
        select a,b,c from viewname --do something
        select d,e,f from viewname --do something
        --and so on, around 6-7 queries in a row in a stored procedure.
    This view can be used in other places as well, so I would create it in the database rather than inside the stored procedure.

    By using direct queries:
        select a,b,c from ( select ... from triggertable join t2 join t3 join ... ) --do something
        select d,e,f from ( select ... from triggertable join t2 join t3 join ... ) --do something
        --and so on, around 6-7 queries in a row in a stored procedure.

    Now I can use a view, a temporary table, or direct queries for all the upcoming queries. Which would be best in this case?

    Read the article

  • Does breaking chained Select()s in LINQ to objects hurt performance?

    - by Justin
    Take the following pseudo C# code: using System; using System.Data; using System.Linq; using System.Collections.Generic; public IEnumerable<IDataRecord> GetRecords(string sql) { // DB logic goes here } public IEnumerable<IEmployer> Employers() { string sql = "select EmployerID from employer"; var ids = GetRecords(sql).Select(record => (record["EmployerID"] as int?) ?? 0); return ids.Select(employerID => new Employer(employerID) as IEmployer); } Would it be faster if the two Select() calls were combined? Is there an extra iteration in the code above? Is the following code faster? public IEnumerable<IEmployer> Employers() { string sql = "select EmployerID from employer"; return Query.Records(sql).Select(record => new Employer((record["EmployerID"] as int?) ?? 0) as IEmployer); } I think the first example is more readable if there is no difference in performance.

    Read the article

  • C++ map performance - Linux (30 sec) vs Windows (30 mins) !!!

    - by sonofdelphi
    I need to process a list of files. The processing action should not be repeated for the same file. The code I am using for this is - using namespace std; vector<File*> gInputFileList; //Can contain duplicates, File has member sFilename map<string, File*> gProcessedFileList; //Using map to avoid linear search costs void processFile(File* pFile) { File* pProcessedFile = gProcessedFileList[pFile->sFilename]; if(pProcessedFile != NULL) return; //Already processed foo(pFile); //foo() is the action to do for each file gProcessedFileList[pFile->sFilename] = pFile; } void main() { size_t n= gInputFileList.size(); //Using array syntax (iterator syntax also gives identical performance) for(size_t i=0; i<n; i++){ processFile(gInputFileList[i]); } } The code works correctly, but... My problem is that when the input size is 1000, it takes 30 minutes - HALF AN HOUR - on Windows/Visual Studio 2008 Express (both Debug and Release builds). For the same input, it takes only 40 seconds to run on Linux/gcc! What could be the problem? The action foo() takes only a very short time to execute, when used separately. Should I be using something like vector::reserve for the map?

    Read the article

  • How costly performance-wise are these actions in iPhone objective-C?

    - by Alex Gosselin
    This is really a few questions in one. I'm wondering what the performance cost is for these things, as I haven't really been following any sort of best practice for them. The answers may also be useful to other readers. (1) If I need the Core Data managed object context, is it bad to use #import "myAppDelegate.h" //farther down in the code: NSManagedObjectContext *context = [(myAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext]; as opposed to leaving the warning you get if you don't cast the delegate? (2) What is the cheapest way to hard-code a string? I have been using return @"myString"; on occasion in some functions where I need to pass it to a variety of places; is it better to do it this way: static NSString *str = @"myString"; return str; (3) How costly is it to subclass an object I wrote vs. making a new one, in general? (4) When I am using Core Data and navigating through a hierarchy of some sort, is it necessary to turn things back into faults somehow after I read some info from them, or is this done automatically? Thanks for any help.

    Read the article

  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible Duplicate: Which of these two for loops is more efficient in terms of time and cache performance Below are two programs that are almost identical except that I switched the i and j variables around. They both run in different amounts of time. Could someone explain why this happens? Version 1 #include <stdio.h> #include <stdlib.h> main () { int i,j; static int x[4000][4000]; for (i = 0; i < 4000; i++) { for (j = 0; j < 4000; j++) { x[j][i] = i + j; } } } Version 2 #include <stdio.h> #include <stdlib.h> main () { int i,j; static int x[4000][4000]; for (j = 0; j < 4000; j++) { for (i = 0; i < 4000; i++) { x[j][i] = i + j; } } }
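
    The usual explanation is memory layout: C stores a 2D array row by row, so the version whose inner loop varies the rightmost index walks through memory sequentially, while the other one jumps 4000 ints ahead on every write and defeats the CPU cache. The effect is easy to reproduce in C# as well - a rough timing sketch (absolute numbers will vary by machine):

        using System;
        using System.Diagnostics;

        class LoopOrder
        {
            const int N = 4000;
            static int[,] x = new int[N, N];   // row-major layout, like the C array above

            static void Main()
            {
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++)        // inner loop varies the last index:
                    for (int j = 0; j < N; j++)    // consecutive writes touch adjacent memory
                        x[i, j] = i + j;
                Console.WriteLine("sequential access: " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                for (int j = 0; j < N; j++)        // inner loop varies the first index:
                    for (int i = 0; i < N; i++)    // each write lands N ints away from the previous one
                        x[i, j] = i + j;
                Console.WriteLine("strided access:    " + sw.ElapsedMilliseconds + " ms");
            }
        }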

    Read the article

  • ASP.NET Performance tip- Combine multiple script file into one request with script manager

    - by Jalpesh P. Vadgama
    We all need JavaScript for our web applications, and we store our JavaScript code in .js files. Now, if we have more than one .js file, the browser will create a new request for each .js file, which is a little overhead in terms of performance. If you have a very big enterprise application you will have a lot of overhead from this. The ASP.NET ScriptManager provides a feature to combine multiple JavaScript files into one request, but you must remember that this feature is available only with .NET Framework 3.5 SP1 or higher. Let's take a simple example. I have two JavaScript files, JScript1.js and JScript2.js, each containing a separate function. //JScript1.js function Task1() { alert('task1'); } Here is the other file. //JScript2.js function Task2() { alert('task2'); } Now I am adding the script references with the ScriptManager and using these functions in my code like this. <form id="form1" runat="server"> <asp:ScriptManager ID="myScriptManager" runat="server" > <Scripts> <asp:ScriptReference Path="~/JScript1.js" /> <asp:ScriptReference Path="~/JScript2.js" /> </Scripts> </asp:ScriptManager> <script language="javascript" type="text/javascript"> Task1(); Task2(); </script> </form> Now let's test it in Firefox with the Lori plug-in, which shows how many requests are made. Here is the output: you can see there are 5 requests. Now let's do the same thing with the ASP.NET ScriptManager combined-script feature, like the following. <form id="form1" runat="server"> <asp:ScriptManager ID="myScriptManager" runat="server" > <CompositeScript> <Scripts> <asp:ScriptReference Path="~/JScript1.js" /> <asp:ScriptReference Path="~/JScript2.js" /> </Scripts> </CompositeScript> </asp:ScriptManager> <script language="javascript" type="text/javascript"> Task1(); Task2(); </script> </form> Now let's run it and see how many requests there are. As you can see, we now have only 4 requests compared to 5 earlier, so the ScriptManager combined multiple scripts into one request. So if you have lots of JavaScript files, you can save loading time by combining multiple script files into one request. Hope you liked it. Stay tuned for more! Happy programming. Technorati Tags: ASP.NET, ScriptManager, Microsoft Ajax

    Read the article

  • [.NET Performance Counter] "System.ComponentModel.Win32Exception: Access is denied" is thrown when c

    - by Ricky
    Hi guys: Calling PerformanceCounterCategory.Create() below on my machine throws this exception: System.ComponentModel.Win32Exception: Access is denied. Do you know what the issue is? Thank you! if (!PerformanceCounterCategory.Exists("MyCategory")) { CounterCreationDataCollection counters = new CounterCreationDataCollection(); CounterCreationData avgDurationBase = new CounterCreationData(); avgDurationBase.CounterName = "average time per operation base"; avgDurationBase.CounterHelp = "Average duration per operation execution base"; avgDurationBase.CounterType = PerformanceCounterType.AverageBase; counters.Add(avgDurationBase); // create new category with the counters above PerformanceCounterCategory.Create("MyCategory", "Sample category for Codeproject", PerformanceCounterCategoryType.SingleInstance, counters); }

    Read the article

  • Any significant performance improvement by using bitwise operators instead of plain int sums in C#?

    - by tunnuz
    Hello, I started working with C# a few weeks ago and I'm now in a situation where I need to build up a "bit set" flag to handle different cases in an algorithm. I have thus two options: enum RelativePositioning { LEFT = 0, RIGHT = 1, BOTTOM = 2, TOP = 3, FRONT = 4, BACK = 5 } pos = ((eye.X < minCorner.X ? 1 : 0) << RelativePositioning.LEFT) + ((eye.X > maxCorner.X ? 1 : 0) << RelativePositioning.RIGHT) + ((eye.Y < minCorner.Y ? 1 : 0) << RelativePositioning.BOTTOM) + ((eye.Y > maxCorner.Y ? 1 : 0) << RelativePositioning.TOP) + ((eye.Z < minCorner.Z ? 1 : 0) << RelativePositioning.FRONT) + ((eye.Z > maxCorner.Z ? 1 : 0) << RelativePositioning.BACK); Or: enum RelativePositioning { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8, FRONT = 16, BACK = 32 } if (eye.X < minCorner.X) { pos += RelativePositioning.LEFT; } if (eye.X > maxCorner.X) { pos += RelativePositioning.RIGHT; } if (eye.Y < minCorner.Y) { pos += RelativePositioning.BOTTOM; } if (eye.Y > maxCorner.Y) { pos += RelativePositioning.TOP; } if (eye.Z > maxCorner.Z) { pos += RelativePositioning.FRONT; } if (eye.Z < minCorner.Z) { pos += RelativePositioning.BACK; } I could have used something as ((eye.X > maxCorner.X) << 1) but C# does not allow implicit casting from bool to int and the ternary operator was similar enough. My question now is: is there any performance improvement in using the first version over the second? Thank you Tommaso
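
    As written, the second version adds enum members with +, which C# will not accept without casts (and the first version shifts by enum members, which also needs casts). Any performance difference between shifting and branching here is almost certainly negligible next to the comparisons themselves, so readability can decide; the conventional shape for this kind of bit set is a [Flags] enum combined with |=. A rough sketch (the Classify signature and the test values are invented for illustration):

        using System;

        [Flags]
        enum RelativePositioning
        {
            None   = 0,
            Left   = 1,
            Right  = 2,
            Bottom = 4,
            Top    = 8,
            Front  = 16,
            Back   = 32
        }

        static class Example
        {
            // Same flag-building logic using |= on a [Flags] enum.
            static RelativePositioning Classify(double eyeX, double eyeY, double eyeZ,
                                                double minX, double minY, double minZ,
                                                double maxX, double maxY, double maxZ)
            {
                var pos = RelativePositioning.None;
                if (eyeX < minX) pos |= RelativePositioning.Left;
                if (eyeX > maxX) pos |= RelativePositioning.Right;
                if (eyeY < minY) pos |= RelativePositioning.Bottom;
                if (eyeY > maxY) pos |= RelativePositioning.Top;
                if (eyeZ < minZ) pos |= RelativePositioning.Front;
                if (eyeZ > maxZ) pos |= RelativePositioning.Back;
                return pos;
            }

            static void Main()
            {
                var pos = Classify(-1, 5, 0, 0, 0, 0, 4, 4, 4);
                Console.WriteLine(pos);                                    // prints "Left, Top"
                Console.WriteLine((pos & RelativePositioning.Left) != 0);  // True: testing one flag
            }
        }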

    Read the article

  • How to counter the "one true language" perspective?

    - by Rob Wells
    How do you work with someone when they haven't been able to see that there is a range of other languages out there beyond "The One True Path"? I mean someone who hasn't realised that the modern software professional has a range of tools in his toolbox. The person whose knee jerk reaction is, for example, "We must do this is C++!" "Everything must be done in C++!" What's the best approach to open people up to the fact that "not everything is a nail"? How may I introduce them to having a well-equipped toolbox, selecting the best tool for the job at hand?

    Read the article

  • Sharepoint hit counter is not displayed.

    - by stckvrflw
    I followed the instructions here: http://support.microsoft.com/kb/825532. After that, when I preview my page, I can't see the hit counter. I learned that it may be related to the site's permissions, but I couldn't find out how to change them. Is it really related to permissions? If so, what should I do? Any external solution would also help, except this one (http://hitcounter.codeplex.com/), which I couldn't make work.

    Read the article

  • EF4 Import/Lookup thousands of records - my performance stinks!

    - by Dennis Ward
    I'm trying to setup something for a movie store website (using ASP.NET, EF4, SQL Server 2008), and in my scenario, I want to allow a "Member" store to import their catalog of movies stored in a text file containing ActorName, MovieTitle, and CatalogNumber as follows: Actor, Movie, CatalogNumber John Wayne, True Grit, 4577-12 (repeated for each record) This data will be used to lookup an actor and movie, and create a "MemberMovie" record, and my import speed is terrible if I import more than 100 or so records using these tables: Actor Table: Fields = {ID, Name, etc.} Movie Table: Fields = {ID, Title, ActorID, etc.} MemberMovie Table: Fields = {ID, CatalogNumber, MovieID, etc.} My methodology to import data into the MemberMovie table from a text file is as follows (after the file has been uploaded successfully): Create a context. For each line in the file, lookup the artist in the Actor table. For each Movie in the Artist table, lookup the matching title. If a matching Movie is found, add a new MemberMovie record to the context and call ctx.SaveChanges(). The performance of my implementation is terrible. My expectation is that this can be done with thousands of records in a few seconds (after the file has been uploaded), and I've got something that times out the browser. My question is this: What is the best approach for performing bulk lookups/inserts like this? Should I call SaveChanges only once rather than for each newly created MemberMovie? Would it be better to implement this using something like a stored procedure? A snippet of my loop is roughly this (edited for brevity): while ((fline = file.ReadLine()) != null) { string [] token = fline.Split(separator); string Actor = token[0]; string Movie = token[1]; string CatNumber = token[2]; Actor found_actor = ctx.Actors.Where(a => a.Name.Equals(actor)).FirstOrDefault(); if (found_actor == null) continue; Movie found_movie = found_actor.Movies.Where( s => s.Title.Equals(title, StringComparison.CurrentCultureIgnoreCase)).FirstOrDefault(); if (found_movie == null) continue; ctx.MemberMovies.AddObject(new MemberMovie() { MemberProfileID = profile_id, CatalogNumber = CatNumber, Movie = found_movie }); try { ctx.SaveChanges(); } catch { } } Any help is appreciated! Thanks, Dennis
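
    One reshaping that usually helps with this kind of import is to stop issuing a query (and a SaveChanges) per line: read the whole file first, fetch the candidate movies in a single round trip, index them in a Dictionary, and commit once at the end. A rough sketch against the same names used in the snippet above - it assumes the context exposes a Movies set with an Actor navigation property and that actor/title pairs are unique, so treat it as illustrative rather than drop-in code:

        // Read the catalog lines up front.
        var lines = new List<string[]>();
        string fline;
        while ((fline = file.ReadLine()) != null)
            lines.Add(fline.Split(separator));

        var actorNames = lines.Select(t => t[0].Trim()).Distinct().ToList();

        // One query for all the movies we might need, keyed by "actor|title" for O(1) lookups.
        var movieLookup = ctx.Movies.Include("Actor")
            .Where(m => actorNames.Contains(m.Actor.Name))
            .ToDictionary(m => m.Actor.Name + "|" + m.Title,
                          StringComparer.CurrentCultureIgnoreCase);

        foreach (var token in lines)
        {
            Movie found;
            if (!movieLookup.TryGetValue(token[0].Trim() + "|" + token[1].Trim(), out found))
                continue;   // unknown actor or title: skip, as the original loop does

            ctx.MemberMovies.AddObject(new MemberMovie
            {
                MemberProfileID = profile_id,
                CatalogNumber = token[2].Trim(),
                Movie = found
            });
        }

        ctx.SaveChanges();   // one commit for the whole import instead of one per row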

    Read the article

  • Recalculate Counter Cache of 120k Records [Rails / ActiveRecord]

    - by Sebastian
    The situation is the following: I have a poi model, which has many pictures (1:n). I want to recalculate the counter_cache column, because the values are inconsistent. I've tried iterating over each record in Ruby, but this takes much too long and sometimes quits with "segmentation fault" errors. So I wonder, is it possible to do this with a raw SQL query?

    Read the article

  • c# Counter requires 2 button clicks to update

    - by marko.ivanovski.nz
    Hi, I have a problem that has been bugging me all day. In my code I have the following: private int rowCount { get { return (int)ViewState["rowCount"]; } set { ViewState["rowCount"] = value; } } and a button event protected void addRow_Click(object sender, EventArgs e) { rowCount = rowCount + 1; } Then on Page_Load I read that value and create controls accordingly. I understand the button event fires AFTER the Page_Load fires so the value isn't updated until the next postback. Real nightmare. Here's the entire code: protected void Page_Load(object sender, EventArgs e) { string xmlValue = ""; //To read a value from a database if (xmlValue.Length > 0) { if (!Page.IsPostBack) { DataSet ds = XMLToDataSet(xmlValue); Table dimensionsTable = DataSetToTable(ds); tablePanel.Controls.Add(dimensionsTable); DataTable dt = ds.Tables["Dimensions"]; rowCount = dt.Rows.Count; colCount = dt.Columns.Count; } else { tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount))); } } else { if (!Page.IsPostBack) { rowCount = 2; colCount = 4; } tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount))); } } protected void submit_Click(object sender, EventArgs e) { resultsLabel.Text = Server.HtmlEncode(DataSetToStringXML(TableToDataSet((Table)tablePanel.Controls[0]))); } protected void addColumn_Click(object sender, EventArgs e) { colCount = colCount + 1; } protected void addRow_Click(object sender, EventArgs e) { rowCount = rowCount + 1; } public DataSet TableToDataSet(Table table) { DataSet ds = new DataSet(); DataTable dt = new DataTable("Dimensions"); ds.Tables.Add(dt); //Add headers for (int i = 0; i < table.Rows[0].Cells.Count; i++) { DataColumn col = new DataColumn(); TextBox headerTxtBox = (TextBox)table.Rows[0].Cells[i].Controls[0]; col.ColumnName = headerTxtBox.Text; col.Caption = headerTxtBox.Text; dt.Columns.Add(col); } for (int i = 0; i < table.Rows.Count; i++) { DataRow valueRow = dt.NewRow(); for (int x = 0; x < table.Rows[i].Cells.Count; x++) { TextBox valueTextBox = (TextBox)table.Rows[i].Cells[x].Controls[0]; valueRow[x] = valueTextBox.Text; } dt.Rows.Add(valueRow); } return ds; } public Table DataSetToTable(DataSet ds) { DataTable dt = ds.Tables["Dimensions"]; Table newTable = new Table(); //Add headers TableRow headerRow = new TableRow(); for (int i = 0; i < dt.Columns.Count; i++) { TableCell headerCell = new TableCell(); TextBox headerTxtBox = new TextBox(); headerTxtBox.ID = "HeadersTxtBox" + i.ToString(); headerTxtBox.Font.Bold = true; headerTxtBox.Text = dt.Columns[i].ColumnName; headerCell.Controls.Add(headerTxtBox); headerRow.Cells.Add(headerCell); } newTable.Rows.Add(headerRow); //Add value rows for (int i = 0; i < dt.Rows.Count; i++) { TableRow valueRow = new TableRow(); for (int x = 0; x < dt.Columns.Count; x++) { TableCell valueCell = new TableCell(); TextBox valueTxtBox = new TextBox(); valueTxtBox.ID = "ValueTxtBox" + i.ToString() + i + x + x.ToString(); valueTxtBox.Text = dt.Rows[i][x].ToString(); valueCell.Controls.Add(valueTxtBox); valueRow.Cells.Add(valueCell); } newTable.Rows.Add(valueRow); } return newTable; } public DataSet DefaultDataSet(int rows, int cols) { DataSet ds = new DataSet(); DataTable dt = new DataTable("Dimensions"); ds.Tables.Add(dt); DataColumn nameCol = new DataColumn(); nameCol.Caption = "Name"; nameCol.ColumnName = "Name"; nameCol.DataType = System.Type.GetType("System.String"); dt.Columns.Add(nameCol); DataColumn widthCol = new DataColumn(); widthCol.Caption = "Width"; widthCol.ColumnName = "Width"; widthCol.DataType = 
System.Type.GetType("System.String"); dt.Columns.Add(widthCol); if (cols > 2) { DataColumn heightCol = new DataColumn(); heightCol.Caption = "Height"; heightCol.ColumnName = "Height"; heightCol.DataType = System.Type.GetType("System.String"); dt.Columns.Add(heightCol); } if (cols > 3) { DataColumn depthCol = new DataColumn(); depthCol.Caption = "Depth"; depthCol.ColumnName = "Depth"; depthCol.DataType = System.Type.GetType("System.String"); dt.Columns.Add(depthCol); } if (cols > 4) { int newColCount = cols - 4; for (int i = 0; i < newColCount; i++) { DataColumn newCol = new DataColumn(); newCol.Caption = "New " + i.ToString(); newCol.ColumnName = "New " + i.ToString(); newCol.DataType = System.Type.GetType("System.String"); dt.Columns.Add(newCol); } } for (int i = 0; i < rows; i++) { DataRow newRow = dt.NewRow(); newRow["Name"] = "Name " + i.ToString(); newRow["Width"] = "Width " + i.ToString(); if (cols > 2) { newRow["Height"] = "Height " + i.ToString(); } if (cols > 3) { newRow["Depth"] = "Depth " + i.ToString(); } dt.Rows.Add(newRow); } return ds; } public DataSet XMLToDataSet(string xml) { StringReader sr = new StringReader(xml); DataSet ds = new DataSet(); ds.ReadXml(sr); return ds; } public string DataSetToStringXML(DataSet ds) { XmlDocument _XMLDoc = new XmlDocument(); _XMLDoc.LoadXml(ds.GetXml()); StringWriter sw = new StringWriter(); XmlTextWriter xw = new XmlTextWriter(sw); XmlDocument xml = _XMLDoc; xml.WriteTo(xw); return sw.ToString(); } private int rowCount { get { return (int)ViewState["rowCount"]; } set { ViewState["rowCount"] = value; } } private int colCount { get { return (int)ViewState["colCount"]; } set { ViewState["colCount"] = value; } } Thanks in advance, Marko
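
    The two-click behaviour is a page-lifecycle issue rather than a ViewState one: by the time addRow_Click runs, Page_Load has already rebuilt the table from the old rowCount, so the new value only shows up on the next postback. One way around it, as a rough sketch reusing the helpers above, is to capture the current table, grow it, and rebuild the panel inside the click handler itself (addColumn_Click would need the analogous change, and TableToDataSet must run before the panel is cleared so the user's typed values survive):

        protected void addRow_Click(object sender, EventArgs e)
        {
            rowCount = rowCount + 1;

            // Capture what the user has typed so far, append a blank row,
            // then rebuild the table so this request already shows the extra row.
            DataSet current = TableToDataSet((Table)tablePanel.Controls[0]);
            DataTable dt = current.Tables["Dimensions"];
            dt.Rows.Add(dt.NewRow());

            tablePanel.Controls.Clear();
            tablePanel.Controls.Add(DataSetToTable(current));
        }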

    Read the article

  • Writing a simple incrementing counter in rails

    - by Trip
    For every Card, I would like to attach a special number that increments by one. I assume I can do this all in the controller: def create @card = Card.new(params[:card]) @card.SpecNum = @card.SpecNum ++ ... end Or maybe I'm missing something obvious, and the best bet is to add an auto-increment column in MySQL. The problem is that the number has to start at a specific value, 1020. Any ideas?

    Read the article
