Search Results

Search found 6532 results on 262 pages for 'computed columns'.


  • Is NaN suitable for communicating that an invalid parameter was involved in a calculation?

    - by Arman
    I am currently working on a numerical processing system that will be deployed in a performance-critical environment. It takes inputs in the form of numerical arrays (these use the Eigen library, but for the purpose of this question that's perhaps immaterial), and performs a range of numerical computations (matrix products, concatenations, etc.) to produce outputs. All arrays are allocated statically and their sizes are known at compile time. However, some of the inputs may be invalid. In these exceptional cases, we still want the computation to run, and we still want the outputs not "polluted" by invalid values to be used. To give an example, let's take the following trivial case (this is pseudo-code): Matrix a = {1, 2, NaN, 4}; // this is the "input" matrix Scalar b = 2; Matrix output = b * a; // this results in {2, 4, NaN, 8} The idea here is that 2, 4 and 8 are usable values, but the NaN should signal to the recipient of the data that the entry was involved in an operation with an invalid value and should be discarded (this will be detected via a std::isfinite(value) check before the value is used). Is this a sound way of communicating and propagating unusable values, given that performance is critical and heap allocation is not an option (and neither are other resource-consuming constructs such as boost::optional or pointers)? Are there better ways of doing this? At this point I'm quite happy with the current setup, but I was hoping to get some fresh ideas or productive criticism of the current implementation.
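
    What this relies on is IEEE 754 quiet-NaN propagation: any arithmetic involving NaN yields NaN, so the marker travels through the computation for free. A minimal illustration of that behaviour (C# rather than C++/Eigen; double.IsNaN/IsInfinity plays the role of std::isfinite):

        using System;

        double[] a = { 1, 2, double.NaN, 4 };   // "input" with one invalid entry
        double b = 2;
        double[] output = new double[a.Length];
        for (int i = 0; i < a.Length; i++)
            output[i] = b * a[i];               // NaN propagates: {2, 4, NaN, 8}

        foreach (double v in output)
            if (!double.IsNaN(v) && !double.IsInfinity(v))  // the std::isfinite check
                Console.WriteLine(v);           // only 2, 4 and 8 get used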


  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?" The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices. Question If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full-on recomputation for the entire grid? (Which is N(N-1)/2 or N^2 depending on the implementation.) Update If it is not possible to solve this in closed form, then maintaining a separate mapping of each cell and every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts; by doing so the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions, and would also require a resumable ray-casting algorithm.
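
    The inverse mapping from the update can piggyback on the ray casting you already do, since the DDA traversal enumerates exactly the cells each ray crosses. A rough C# sketch of the idea (traverse and castRay are assumed stand-ins for your existing DDA routines; cells are flat indices into the grid):

        using System;
        using System.Collections.Generic;

        class VisibilityCache
        {
            // crossing[c] = every cell pair whose line of sight passes through cell c
            private readonly Dictionary<int, List<(int A, int B)>> crossing =
                new Dictionary<int, List<(int A, int B)>>();
            private readonly Dictionary<(int A, int B), bool> visible =
                new Dictionary<(int A, int B), bool>();

            // traverse(a, b) enumerates the cells the DDA walks over from a to b;
            // castRay(a, b) is the boolean visibility test.
            public void Build(int cellCount,
                              Func<int, int, IEnumerable<int>> traverse,
                              Func<int, int, bool> castRay)
            {
                for (int a = 0; a < cellCount; a++)
                    for (int b = a + 1; b < cellCount; b++)
                    {
                        visible[(a, b)] = castRay(a, b);
                        foreach (int c in traverse(a, b))
                        {
                            if (!crossing.TryGetValue(c, out var list))
                                crossing[c] = list = new List<(int A, int B)>();
                            list.Add((a, b));
                        }
                    }
            }

            // Only the edges whose ray crosses the modified cell get recomputed.
            public void CellChanged(int c, Func<int, int, bool> castRay)
            {
                if (!crossing.TryGetValue(c, out var affected)) return;
                foreach (var (a, b) in affected)
                    visible[(a, b)] = castRay(a, b);
            }
        }

    E_delta for a modified cell c is then just crossing[c], at the cost of storing every traversal once.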


  • How are lookaheads propagated in the "channel" method of building an LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I'll start from my understanding of building channels for the NFA: channels are built once; they are like water channels with a current; you "drop" lookahead symbols in the right places (sources) of the channel, and they propagate with the "current"; when a symbol propagates there are no barriers (the only things needed for propagation are the presence of a channel and its direction/current), i.e. a lookahead cannot just die out of the blue. Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state. How the DFA is made out of this NFA is not perfectly clear to me -- the authors of the mentioned book write about preserving channels, but I see no purpose if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source if the DFA state does not include the source NFA state? I assume not -- the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But when you take a look at the DFA presented in the book (the pdf is from the errata): DFA for LALR (fig. 9.34 in the book, p. 301), you will see there are items without eof in the lookahead. The grammar for this DFA is: S -> E E -> E - T E -> T T -> ( E ) T -> n So how was it computed? When was eof dropped, and on what condition? Update It is a textual pdf, so here are the two interesting states (in the DFA; # is eof): State 1: S ---> •E [#] E ---> •E-T [#-] E ---> •T [#-] T ---> •n [#-] T ---> •(E) [#-] State 6: T ---> (•E) [#-)] E ---> •E-T [-)] E ---> •T [-)] T ---> •n [-)] T ---> •(E) [-)] The arc from 1 to 6 is labeled (.


  • Bulk inserting: what's the best way to go about it? + Help me fully understand what I found so far

    - by chobo2
    Hi, so I saw this post here and read it, and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx The first way uses 2 ado.net 2.0 features: BulkInsert and BulkCopy. The second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However, as one person pointed out in the posts, that's just going around the issue at the cost of performance (nothing wrong with that in my opinion). First I will talk about the 2 ways in the first tutorial. I am using VS2010 Express, .net 4.0, MVC 2.0, SQL Server 2005. Is ado.net 2.0 the most current version? Based on the technology I am using, are there some updates to what I am going to show that would improve it somehow? Is there anything that these tutorials left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So the first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like, I am sending 500,000 records, so I did a batch size of 500,000. Next, why does it crash when I do this? If I set the batch size to 1000, it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: The time it took to insert 500,000 records with an insert batch size of 1000 was "2 mins and 54 seconds". Of course this is no official time; I sat there with a stopwatch (I am sure there are better ways, but I was too lazy to look up what they were). So I find that kinda slow compared to all my other ones (except the linq to sql insert one), and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BatchCopy had no problem with a 500,000 batch size. So again, why make it smaller than the number of records you want to send? I found that with BatchCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster than the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serialized to an xml document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects, even though it is kinda redundant. I don't get how the SP works. Like, I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I do not even know how to take this example SP and change it to fit my tables, since like I said I don't know what is going on. I also don't know what would happen if the object has more tables in it. Like, say I have a ProductName table that has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds. The last way of course was just using linq to do it all, and it was pretty bad. /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it down to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationships, for either way. I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying it yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes linq to sql nicer as you've got objects, especially if you're first parsing data from an xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }
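
    For comparison, a minimal, self-contained SqlBulkCopy variation (the connection string is a placeholder; the table is the same TBL_TEST_TEST). BatchSize only controls how many rows go to the server per round trip, which is why it does not need to match the total row count; UseInternalTransaction makes each batch commit on its own instead of holding all 500,000 rows in one transaction:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class BulkCopySketch
        {
            // Placeholder connection string; substitute your own.
            const string ConnectionString = "Data Source=.;Initial Catalog=Test;Integrated Security=True";

            static void Main()
            {
                DataTable rows = new DataTable();
                rows.Columns.Add("NAME");
                for (int i = 0; i < 500000; i++)
                    rows.Rows.Add("Name : " + i);

                using (SqlBulkCopy sbc = new SqlBulkCopy(ConnectionString,
                           SqlBulkCopyOptions.UseInternalTransaction))
                {
                    sbc.DestinationTableName = "TBL_TEST_TEST";
                    sbc.ColumnMappings.Add("NAME", "NAME");
                    // Commit every 5,000 rows; tune this number rather than
                    // matching it to the total record count.
                    sbc.BatchSize = 5000;
                    sbc.WriteToServer(rows);
                }
                Console.WriteLine("done");
            }
        }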


  • Learning a programming language for computing arithmetic functions on integers

    - by asd
    Hi, I know something about Pascal, Mathematica and Matlab, but I don't have any idea about the C, C++, C# languages. I want to learn one of the languages that are fast and exact enough to compute some arithmetic functions for large numbers (for example larger than 10^3000). I asked somebody, and he said he used C++ and computed the sequence in question in less than 10 min. I want to get to know C, C++, C# and the visual kinds of these programs, and to know which is better for my goal. Let f be an arithmetic function and A = {k1, k2, ..., kn} a set of integers in increasing order. Now I want to start with k1 and compare f(ki) with f(k1). If f(ki) > f(k1), put ki as k1. Now start with ki, and compare f(kj) with f(ki), for j > i. If f(kj) > f(ki), put kj as ki, and repeat this procedure. At the end we will have a subsequence B = {L1, ..., Lm} of A with this property: f(L(i+1)) > f(L(i)), for any 1 <= i <= m-1. I have written code for this program in Mathematica, and it takes some hours to compute f of the ki's, or the set B, for large numbers. For example, let f be the divisor function of integers. Do you know how to write the code for my purpose in Mathematica or Matlab? Mathematica is preferable.
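
    In any of those languages the scan itself is the easy part; an arbitrary-precision integer type does the heavy lifting. A sketch in C# (System.Numerics.BigInteger handles integers well beyond 10^3000 exactly; F here is only a placeholder, since the real cost is in evaluating your arithmetic function):

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        class RecordScan
        {
            static BigInteger F(BigInteger k) => k % 1000;  // placeholder for f

            // Keeps exactly the elements whose f-value beats every earlier one,
            // i.e. the subsequence B with f(L1) < f(L2) < ... < f(Lm).
            static List<BigInteger> Records(IEnumerable<BigInteger> a)
            {
                var b = new List<BigInteger>();
                BigInteger? bestValue = null;
                foreach (BigInteger k in a)
                {
                    BigInteger fk = F(k);
                    if (bestValue == null || fk > bestValue.Value)
                    {
                        b.Add(k);
                        bestValue = fk;   // k becomes the new "k1"
                    }
                }
                return b;
            }

            static void Main()
            {
                var a = new List<BigInteger> { 3, 7, 5, 20, 11, 50 };
                foreach (var k in Records(a)) Console.WriteLine(k);
            }
        }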


  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
    I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments that I have with other people on the project is the method of calling the Twitter API. Before, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets was the heaviest thing to do, since we don't store the tweets in our database, so viewing the tweets means that we have to call the API every time. One of the people in the project suggested that we call the tweets through Javascript instead to lessen the load on the server. We used GET search, which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now. When the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our Javascript requests to the API. As said here: We don't support or recommend performing OAuth directly through Javascript -- it's insecure and puts your application at risk. The only acceptable way to perform it is if you kept all keys and secrets server-side, computed the OAuth signatures and parameters server side, then issued the request client-side from the server-generated OAuth values. If we do exactly what Twitter suggests, the only difference between this and doing everything server-side is that our server won't have to contact the Twitter API anymore every time the user wants to view tweets. Here's how I would picture what's happening every time the user makes a request. If we do it through Javascript, it would be harder on my part because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth all the trouble. Doing it through Ruby on Rails would be very easy, since the Twitter gem does most of the grunt work already, so I'm really encouraging the other people in the project to agree with me. Is the difference in performance trivial, or is it significant enough to switch to Javascript?
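
    For what it's worth, the server-side computation Twitter describes boils down to a few lines. A rough sketch of just the OAuth 1.0a HMAC-SHA1 signature step (in C# rather than Ruby, for illustration; nonce/timestamp generation and header assembly are omitted, and all parameter values would be your own; in Rails the oauth gem does the equivalent):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        class OAuthSigner
        {
            static string Encode(string s) => Uri.EscapeDataString(s);

            public static string Sign(string method, string url,
                SortedDictionary<string, string> parameters,   // sorted by key, per spec
                string consumerSecret, string tokenSecret)
            {
                // 1. Parameter string from the sorted, percent-encoded parameters.
                string paramString = string.Join("&",
                    parameters.Select(p => Encode(p.Key) + "=" + Encode(p.Value)));

                // 2. Signature base string: METHOD&encoded-url&encoded-params.
                string baseString = method.ToUpperInvariant() + "&" +
                                    Encode(url) + "&" + Encode(paramString);

                // 3. Signing key: consumer secret and token secret joined by '&'.
                string key = Encode(consumerSecret) + "&" + Encode(tokenSecret);

                using (var hmac = new HMACSHA1(Encoding.ASCII.GetBytes(key)))
                    return Convert.ToBase64String(
                        hmac.ComputeHash(Encoding.ASCII.GetBytes(baseString)));
            }
        }

    The client then attaches the returned values to its own request, so the secrets never leave the server.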


  • Where to place the R code for R+Sweave+LaTeX workflow

    - by claytontstanley
    I spent the last week learning 3 new tools: R, Sweave, and LaTeX. One question that came to my mind, though, when working through my first project: where do I place the majority of the R code? The tutorials that I read online placed the majority of the R code in the LaTeX .Rnw file. However, I find having a bunch of R calculations in the LaTeX file distracting. What I do find extremely helpful (of course) is to call out to R code in the LaTeX file and embed the result. So the workflow I've been using is to place 99% of my R code in my .R file. I run that file first, save a bunch of calculations as objects, and output the .Rout file once finished (to save the work). Then when running Sweave, I load up that .Rout file, so that I have the majority of my calculations already completed and in the Sweave R session. Then my LaTeX callouts to R are quite simple: just give me the XTable stored in 'res.table', or give me the result of an already-computed calculation stored in the variable 'res'. So I push towards the minimal amount of R code in the LaTeX file possible, to achieve the desired result (embedding stats results in the LaTeX writeup). Does anyone have any experience with this approach? I'm just worried I might run into trouble further down the line, when I start really trying to load up and leverage this workflow.


  • How to notify client about updated UpdatePanel content on server side

    - by csh1981
    I have a problem with UpdatePanel.Update(), which works initially but then stops. I have wrestled with this problem for some time, and some background is needed, so please read ahead. I have an ASP.net application in which I have a subpage that displays computed information in graphs. Each graph is embedded in an UpdatePanel. The graph is a user control that uses the standard asp:Chart for display. My task is to enable this page with AJAX capabilities so the page is responsive during postbacks. When I access this page from another page, during the initial page rendering, I use a wait dialog for each graph and a pageload event on the client side. In the client event, a hidden button is clicked, which a server event handles (the hidden button is inside an UpdatePanel, so the postback is asynchronous). Each graph is computed, and the UpdatePanels are in turn updated with the Chart content. This is done using UpdatePanel.Update, and it is successful. However, I also have some RadioButtons on the page. These are dynamically created. The purpose of them is to switch graph type: to show the same data in a different way. The same type of time-consuming computation is needed in order to do so. I subscribe to each RadioButton's OnCheckedChanged event, and the postback is asynchronous since the radiobuttons are inside an UpdatePanel. In the server event handler I determine the type of graph and use this as an input to the Chart control. I then remove the old Chart control from my Panel, add a new Chart, and then call UpdatePanel.Update(). But with no success. Nothing happens, no errors, nothing. Why is this? I think this is strange, because if I compute all the Chart data in the initial rendering instead of using the "wait dialog" solution described earlier, then I can select graph types successfully and all subsequent AJAX requests work as intended. Also, the same code (computing the chart, removal, and adding the Chart control to the Panel and UpdatePanel.Update()) is hit during the initial rendering of the page, and it works only the first time. Here is the method that computes the graph, adds it to the panel, and updates the UpdatePanel: public void UpdateGraph(GraphType type, GraphMapper mapper) { //Panel is the content of UpdatePanelGraph's Panel.Controls.Clear(); chart = new Chart(type, mapper); //Computation happens inside here panel.Controls.Add(chart); //UpdatePanelGraph is in UpdateMode Conditional and has //ChildrenAsTriggers set to false UpdatePanelGraph.Update(); } I really need a way for these radiobuttons to work, possibly using some client-side JavaScript or another way of handling things on the server side. I have thought about using a JavaScript postback call on the UpdatePanel instead of UpdatePanel.Update(). However, the issue I have here is how to notify the client side when the server side has finished computing the graph. A plausible explanation of the strange behavior would also be much appreciated. Any help appreciated, thanks.
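
    A guess at the cause, with a sketch (GraphType and the panel/chart fields are taken from the code above; CreateMapper is an assumed helper): dynamically created controls must be recreated on every postback, before their events fire, or the posted-back control tree no longer matches the one the previous request rendered and the update silently goes nowhere. Something along these lines, as a starting point rather than a definitive fix:

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            if (IsPostBack)
                RecreateChart();  // rebuild before OnCheckedChanged runs
        }

        private void RecreateChart()
        {
            // The last-selected graph type has to survive the postback
            // (e.g. in ViewState), because the Chart must exist again
            // *before* the radio button event calls UpdateGraph.
            if (ViewState["GraphType"] != null)
            {
                GraphType type = (GraphType)ViewState["GraphType"];
                panel.Controls.Clear();
                chart = new Chart(type, CreateMapper());  // CreateMapper: assumed helper
                panel.Controls.Add(chart);
            }
        }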


  • Where to put business logic in MVC design?

    - by BriskLabs Pakistan
    I have created a simple MVC Java application that adds records through data forms to a database. My app collects data; it also validates it and stores it. This is because the data is being sourced online from different users. The data is mostly numeric in nature. Now, on the numeric data being stored in the database (SQL Server), I want my app to be able to perform computations... and display the results. The user is not interested in how the computations are done, so they must be encapsulated. The user must only be able to view the simple computed data, which is, for example, (A column data - B column data) / C column data, etc., and the app should just display it. I know how to write stored procedures for the same, but I want a 3-tier app. I want the data that I put into the database as a record to be worked upon by performing calculations on it. However, the original data should remain unaffected, while the new data, post-calculations, must be stored as a new entity record in the database. Where should I write the code for this background calculation, given that it is the rules and business logic? In new Java beans files?


  • XNA hlsl tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering, and at the main draw phase it gathers plenty of data from the objects it draws. In order to save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To make it clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the RenderTarget. So now I need one more piece of information stored in the alpha channel, but I cannot make the shader read it properly! No matter what I write into alpha, it ends up being 1 (255)! I tried: saving the textures in PNG/TGA formats; turning off pre-computed alpha in the model's properties. Out of every texture available to me (we use a Diffuse map, Normal Map and Specular Map) I was only able to read alpha successfully from the Diffuse Map! Here is how I add specular and normal maps to my model's material in the content processor: if (geometry.Material.Textures.ContainsKey(normalMapKey)) { ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey]; geometry.Material.Textures.Remove("NormalMap"); geometry.Material.Textures.Add("NormalMap", texRef); } ... foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures) { if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap")) mat.Textures.Add(texture.Key, texture.Value); } In the shader I obviously use: float4 data = tex2D(specularMapSampler, TexCoords); so data.a is always 1 in my case; could you suggest a reason?


  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts, and would like to have a look at the possibility of a software application of that concept. I'd like to write a simulation of Japanese Multiplication to get benchmarks on large calculations utilizing the shortcut vs. traditional CPU multiplication. I'm curious as to whether it makes sense to try this. My Question: I'd like to know whether or not a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept. By utilizing a simulation of Japanese Multiplication, is a program actually capable of improving calculation speed? Or am I doomed from the start? The answer to this question isn't required to determine whether or not the experiment will succeed, but rather whether or not it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU arithmetic unit can. I think this would be a very interesting finding, if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result via fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
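
    To make the experiment concrete: the line-counting trick is the schoolbook digit convolution, so simulating it costs O(n^2) single-digit products plus carry additions, which is exactly what long multiplication does. A sketch of what the simulation would compute (a starting point for benchmarking against ordinary multiplication, not an optimized implementation):

        using System;

        class LatticeMultiply
        {
            // Digits are least-significant first, e.g. 123 -> {3, 2, 1}.
            static int[] Multiply(int[] a, int[] b)
            {
                int[] result = new int[a.Length + b.Length];
                for (int i = 0; i < a.Length; i++)
                    for (int j = 0; j < b.Length; j++)
                        result[i + j] += a[i] * b[j];   // count the "intersections"
                for (int k = 0; k < result.Length - 1; k++)
                {                                       // propagate the carries
                    result[k + 1] += result[k] / 10;
                    result[k] %= 10;
                }
                return result;
            }

            static void Main()
            {
                int[] p = Multiply(new[] { 3, 2, 1 }, new[] { 2, 4 }); // 123 * 42
                Array.Reverse(p);
                Console.WriteLine(string.Join("", p)); // prints 05166, i.e. 5166
            }
        }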


  • Execution plan warnings–All that glitters is not gold

    - by Dave Ballantyne
    In a previous post, I showed you the new execution plan warnings related to implicit and explicit conversions. Pretty much as soon as I hit 'post', I noticed something rather odd happening. This statement: select top(10) SalesOrderHeader.SalesOrderID, SalesOrderNumber from Sales.SalesOrderHeader join Sales.SalesOrderDetail on SalesOrderHeader.SalesOrderID = SalesOrderDetail.SalesOrderID throws the "Type conversion may affect cardinality estimation" warning. I've done no such conversion in my statement, so why would that be? Well, SalesOrderNumber is a computed column, "(isnull(N'SO'+CONVERT([nvarchar](23),[SalesOrderID],0),N'*** ERROR ***'))", so that's where the conversion is. Wait!!! Am I saying that every type conversion will throw the warning? Thankfully, no. It only appears for columns that are used in predicates, even if the predicate/join condition is fine, and the column is indexed (and/or, presumably, has statistics). Hopefully this won't lead to too many wild goose chases, but it is definitely something to bear in mind. If you want to see this fixed, then upvote my connect item here.


  • Best Terminology for a particular php-based site architecture? [closed]

    - by hen3ry
    For a site... o whose overall look-and-feel is generated by one php page ("index.php"). o in which "index.php" provides for all pages served the following required components: the DOCTYPE, opening html tag, the head section, the opening body tag, the end-body tag, and end-html tag. o which uses computed hierarchical navigation menus within "index.php" to offer visitors access to the site content. o in which all content is stored in individual files that contain "headerless html". (The DOCTYPE, etc., being provided by "index.php" as described above.) Q1: what term best describes this architecture? I'm seeking a concise descriptor that is useful in conversation and definitive as a search term, in whole-web searches and in searching here on Pro Webmasters. Q2: what term best describes the individual content files? Same general goals for the descriptor as above. As you see above, I couldn't avoid using the term "headerless html", my best choice. But this term does not seem to be in general use. I've found some people use this term to describe files such as my content files, but others use it quite differently.


  • Java code optimization leads to numerical inaccuracies and errors

    - by rano
    I'm trying to implement a version of the Fuzzy C-Means algorithm in Java, and I'm trying to do some optimization by computing just once everything that can be computed just once. This is an iterative algorithm, and regarding the updating of the clusters x pixels membership matrix U, this is the update rule I want to optimize: u(i,j) = 1 / sum_k ( ||x_i - v_j|| / ||x_i - v_k|| )^(2/(m-1)), where the x are the elements of a matrix X (pixels x features) and v belongs to the matrix V (clusters x features). And m is a parameter that ranges from 1.1 to infinity. The distance used is the Euclidean norm (the code below stores squared distances in D, so the exponent there becomes 1/(m-1)). If I had to implement this formula in a banal way I'd do: for(int i = 0; i < X.length; i++) { int count = 0; for(int j = 0; j < V.length; j++) { double num = D[i][j]; double sumTerms = 0; for(int k = 0; k < V.length; k++) { double thisDistance = D[i][k]; sumTerms += Math.pow(num / thisDistance, (1.0 / (m - 1.0))); } U[i][j] = (float) (1f / sumTerms); } } In this way some optimization is already done: I precomputed all the possible squared distances between X and V and stored them in a matrix D. But that is not enough, since I'm cycling through the elements of V two times, resulting in two nested loops. Looking at the formula, the numerator of the fraction is independent of the sum, so I can compute the numerator and denominator independently, and the denominator can be computed just once for each pixel. So I came to a solution like this: int nClusters = V.length; double exp = (1.0 / (m - 1.0)); for(int i = 0; i < X.length; i++) { int count = 0; for(int j = 0; j < nClusters; j++) { double distance = D[i][j]; double denominator = D[i][nClusters]; double numerator = Math.pow(distance, exp); U[i][j] = (float) (1f / (numerator * denominator)); } } Where I precomputed the denominator into an additional column of the matrix D while I was computing the distances: for (int i = 0; i < X.length; i++) { for (int j = 0; j < V.length; j++) { double sum = 0; for (int k = 0; k < nDims; k++) { final double d = X[i][k] - V[j][k]; sum += d * d; } D[i][j] = sum; D[i][V.length] += Math.pow(1 / D[i][j], exp); } } By doing so I encounter numerical differences between the 'banal' computation and the second one that lead to different numerical values in U (not in the first iterations, but soon enough). I guess that the problem is that exponentiating very small numbers to high powers (the elements of U can range from 0.0 to 1.0 and exp, for m = 1.1, is 10) leads to very small values, whereas dividing the numerator by the denominator and THEN exponentiating the result seems to be better numerically. The problem is it involves many more operations. Am I doing something wrong? Is there a possible solution to get the code both optimized and numerically stable? Any suggestion or criticism will be appreciated.
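
    One standard stabilization to try (a sketch in C# syntax mirroring the variables above, untested against your data): normalize each row of D by its row maximum before exponentiating. Only the ratios D[i][j]/D[i][k] enter the formula, so the result is mathematically identical to the factored version, but every Math.Pow argument stays near 1 instead of underflowing:

        double exp = 1.0 / (m - 1.0);
        for (int i = 0; i < X.Length; i++)
        {
            double max = 0;
            for (int k = 0; k < nClusters; k++)
                max = Math.Max(max, D[i][k]);      // row maximum of squared distances

            double denominator = 0;                // sum_k (max / D[i][k])^exp, terms >= 1
            for (int k = 0; k < nClusters; k++)
                denominator += Math.Pow(max / D[i][k], exp);

            for (int j = 0; j < nClusters; j++)    // (D[i][j]/max)^exp lies in (0, 1]
                U[i][j] = (float)(1.0 / (Math.Pow(D[i][j] / max, exp) * denominator));
        }

    The per-row cost is still O(nClusters) extra work instead of the banal version's O(nClusters^2), so the optimization is preserved.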


  • snapping an angle to the closest cardinal direction

    - by Josh E
    I'm developing a 2D sprite-based game, and I'm finding that I'm having trouble with making the sprites rotate correctly. In a nutshell, I've got spritesheets for each of 5 directions (the other 3 come from just flipping the sprite horizontally), and I need to clamp the velocity/rotation of the sprite to one of those directions. My sprite class has a pre-computed list of radians corresponding to the cardinal directions like this: protected readonly List<float> CardinalDirections = new List<float> { MathHelper.PiOver4, MathHelper.PiOver2, MathHelper.PiOver2 + MathHelper.PiOver4, MathHelper.Pi, -MathHelper.PiOver4, -MathHelper.PiOver2, -MathHelper.PiOver2 + -MathHelper.PiOver4, -MathHelper.Pi, }; Here's the positional update code: if (velocity == Vector2.Zero) return; var rot = ((float)Math.Atan2(velocity.Y, velocity.X)); TurretRotation = SnapPositionToGrid(rot); var snappedX = (float)Math.Cos(TurretRotation); var snappedY = (float)Math.Sin(TurretRotation); var rotVector = new Vector2(snappedX, snappedY); velocity *= rotVector; //...snip private float SnapPositionToGrid(float rotationToSnap) { if (rotationToSnap == 0) return 0.0f; var targetRotation = CardinalDirections.First(x => (x - rotationToSnap >= -0.01 && x - rotationToSnap <= 0.01)); return (float)Math.Round(targetRotation, 3); } What am I doing wrong here? I know that the SnapPositionToGrid method is far from what it needs to be - the .First(..) call is on purpose so that it throws on no match, but I have no idea how I would go about accomplishing this, and unfortunately, Google hasn't helped too much either. Am I thinking about this the wrong way, or is the answer staring at me in the face?
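
    One possible simplification (an assumption about the intent, not an XNA requirement): since the targets are evenly spaced multiples of PiOver4, snapping can be done by rounding instead of searching a list with a fixed 0.01 tolerance, which throws whenever the angle falls between entries:

        private float SnapToCardinal(float rotation)
        {
            const float step = MathHelper.PiOver4;             // 8 directions
            float snapped = (float)Math.Round(rotation / step) * step;
            return MathHelper.WrapAngle(snapped);              // keep in (-Pi, Pi]
        }

    That removes both the tolerance problem and the need for the precomputed list.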


  • Mouse Speed in GLUT and OpenGL?

    - by CroCo
    I would like to simulate a point that moves in 2D. The input should be the speed of the mouse, so the new position will be computed as follows: new_position = old_position + delta_time*mouse_velocity As far as I know, in GLUT there is no function to acquire the current speed of the mouse between each frame. What I've done so far to compute delta_time is the following: void Display() { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glColor3f(1.0f, 0.0f, 0.0f); static int delta_t, current_t, previous_t; current_t = glutGet(GLUT_ELAPSED_TIME); delta_t = current_t - previous_t; std::cout << delta_t << std::endl; previous_t = current_t; glutSwapBuffers(); } Where should I go from here? (Note: I have to get the speed of the mouse because I'm modeling a system.) Edit: Based on the above code, delta_time fluctuates a lot: 34 19 2 20 1 20 0 16 1 1 10 21 0 13 1 19 34 0 13 0 6 1 14 Why does this happen?


  • Creating movement path displays in a top-down 2d RTS

    - by nihohit
    My game is a top-down 2d RTS coded in C# using SFML's libraries. I want a selected unit to display its movement path on the map. Currently, after the path is computed as a list of directions ({left, up, down, down, down, left}, as an example), it's sent to the graphical component to create its UI equivalent, and here I'm having some problems. So far I've considered three ways to do it: compute the size of the image (in the example above it'll be a 3*2 rectangle) and create an invisible rectangle, and then go over the directions list and mark each spot with a visible point, so as to get a continuous line. This system is slightly problematic because of the amount of large images that I need to save, but mostly because I have a lot of fine detail onscreen, and a continuous line obstructs the view. Again, compute the size of the image, but now create several (let's say 4) invisible images of that size, and then instead of a single continuous line I'll switch between the four images, in each of which only a fourth of the spots appear, in a way which creates a path animation. This is nicer on the eye, but here the memory demands and the time needed to compute each such image-loop are significant. Just create a list of single markers, each on a different spot on the path. This is very quick & easy on memory, but too sparse. Is there a simple or resource-light system to create path-animations?
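
    A fourth, resource-light option (a sketch; the SFML.Net type names may differ between binding versions): skip pre-rendered images entirely and, each frame, walk the direction list and draw one small marker sprite at every Nth tile, offset by a time-based phase so the dots appear to flow along the path:

        using System.Collections.Generic;
        using SFML.System;

        static class PathDots
        {
            // stepOffsets holds one pixel offset per direction in the path list.
            public static IEnumerable<Vector2f> MarkerPositions(
                Vector2f start, IReadOnlyList<Vector2f> stepOffsets,
                int spacing, float seconds, float dotsPerSecond)
            {
                // phase in [0, spacing) makes the markers crawl along the path
                int phase = (int)(seconds * dotsPerSecond) % spacing;
                Vector2f pos = start;
                for (int i = 0; i < stepOffsets.Count; i++)
                {
                    if (i % spacing == phase)
                        yield return pos;   // draw one marker sprite here
                    pos += stepOffsets[i];
                }
            }
        }

    The memory cost is one marker texture, and the per-frame work is a single pass over the path.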


  • Context switches much slower in new linux kernels

    - by Michael Goldshteyn
    We are looking to upgrade the OS on our servers from Ubuntu 10.04 LTS to Ubuntu 12.04 LTS. Unfortunately, it seems that the latency to run a thread that has become runnable has significantly increased from the 2.6 kernel to the 3.2 kernel. In fact, the latency numbers we are getting are hard to believe. Let me be more specific about the test. We have a program that has two threads. The first thread gets the current time (in ticks using RDTSC) and then signals a condition variable once a second. The second thread waits on the condition variable and wakes up when it is signaled. It then gets the current time (in ticks using RDTSC). The difference between the time in the second thread and the time in the first thread is computed and displayed on the console. After this the second thread waits on the condition variable once more. So, we get a thread-to-thread signaling latency measurement once a second as a result. In Linux 2.6.32, this latency is somewhere on the order of 2.8-3.5 us, which is reasonable. In Linux 3.2.0, this latency is somewhere on the order of 40-100 us. I have excluded any differences in hardware between the two hosts. They run on identical hardware (dual socket X5687 {Westmere-EP} processors running at 3.6 GHz with hyperthreading, speedstep and all C states turned off). We are changing the affinity to run both threads on physical cores of the same socket (i.e., the first thread is run on Core 0 and the second thread is run on Core 1), so there is no bouncing of threads on cores or bouncing/communication between sockets. The only difference between the two hosts is that one is running Ubuntu 10.04 LTS with kernel 2.6.32-28 (the fast context switch box) and the other is running the latest Ubuntu 12.04 LTS with kernel 3.2.0-23 (the slow context switch box). Have there been any changes in the kernel that could account for this ridiculous slowdown in how long it takes for a thread to be scheduled to run?
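
    For reference, the measurement protocol in rough code (the original test is presumably C/C++ with pthreads and raw RDTSC; this C# analog only makes the signal-then-timestamp structure concrete, with Stopwatch standing in for the TSC and Monitor for the condition variable):

        using System;
        using System.Diagnostics;
        using System.Threading;

        class WakeupLatency
        {
            static readonly object Cond = new object();
            static long signalTicks;

            static void Main()
            {
                var waiter = new Thread(() =>
                {
                    while (true)
                        lock (Cond)
                        {
                            Monitor.Wait(Cond);                 // wait on the "condition variable"
                            long wake = Stopwatch.GetTimestamp();
                            double us = (wake - signalTicks) * 1e6 / Stopwatch.Frequency;
                            Console.WriteLine("wakeup latency: " + us.ToString("F1") + " us");
                        }
                });
                waiter.Start();

                while (true)
                {
                    Thread.Sleep(1000);                         // once a second
                    lock (Cond)
                    {
                        signalTicks = Stopwatch.GetTimestamp(); // timestamp just before the signal
                        Monitor.Pulse(Cond);                    // signal the waiter
                    }
                }
            }
        }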


  • Simplifying data search using .NET

    - by Peter
    A tutorial on the asp.net site gives an example of using LINQ to create a search feature on a movie site using MVC. The code looks like this - public ActionResult Index(string movieGenre, string searchString) { var GenreLst = new List<string>(); var GenreQry = from d in db.Movies orderby d.Genre select d.Genre; GenreLst.AddRange(GenreQry.Distinct()); ViewBag.movieGenre = new SelectList(GenreLst); var movies = from m in db.Movies select m; if (!String.IsNullOrEmpty(searchString)) { movies = movies.Where(s => s.Title.Contains(searchString)); } if (!string.IsNullOrEmpty(movieGenre)) { movies = movies.Where(x => x.Genre == movieGenre); } return View(movies); } I have seen similar examples in other tutorials, and I have tried them in a real-world business app that I develop/maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand, I keep adding more and more conditions, which looks and feels unpleasant/repetitive. How can I refactor this pattern? One idea I have is to create a column in every table that is "searchable", which could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like: if (!String.IsNullOrEmpty(searchString)) { movies = movies.Where(s => s.SearchColumn.Contains(searchString)); } What are the performance/design/architecture implications of doing this? I have also tried using procedures that use dynamic queries, but then I have just moved the ugliness to the database. E.g. CREATE PROCEDURE [dbo].[search_music] @title as varchar(50), @genre as varchar(50) AS -- set the variables to null if they are empty IF @title = '' SET @title = null IF @genre = '' SET @genre = null SELECT m.* FROM view_Music as m WHERE (title = @title OR @title IS NULL) AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL) ORDER BY Id desc OPTION (RECOMPILE) Any suggestions? Tips?
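
    Before resorting to a concatenated search column or dynamic SQL, one refactor worth considering (a sketch against the same Movie model; minYear and m.Year are hypothetical extras added only to show the pattern, and it needs using System.Linq.Expressions): collect the optional filters as expressions and apply only those whose term was supplied. Deferred execution means the chain still composes into one SQL query:

        public ActionResult Index(string movieGenre, string searchString, int? minYear)
        {
            var filters = new List<Expression<Func<Movie, bool>>>();

            if (!String.IsNullOrEmpty(searchString))
                filters.Add(m => m.Title.Contains(searchString));
            if (!String.IsNullOrEmpty(movieGenre))
                filters.Add(m => m.Genre == movieGenre);
            if (minYear.HasValue)                        // hypothetical extra criterion
                filters.Add(m => m.Year >= minYear.Value);

            IQueryable<Movie> movies = db.Movies;
            foreach (var f in filters)                   // nothing executes until the
                movies = movies.Where(f);                // query is enumerated
            return View(movies);
        }

    Each new criterion becomes one line in the filter list instead of another bespoke branch scattered through the action method.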


  • Is it possible/likely to be paid fairly without a college degree? [closed]

    - by user20134
    Some back story, and then my question: I took a "break" from getting a university education last year to work full time as a back-end developer on a GIS application at $10.50 an hour. Later that year I was hired on by a fairly prestigious organization to work on their GIS application for a meager salary + rockin' benefits (not that I need them). I agreed to work on this project through Summer 2012. I don't feel like I'm being fairly compensated for my time. Other team members make between 3-5 times as much as I do, and their work isn't 3-5 times as good as mine, nor do they have 3-5 times as much output. I don't think this is a rectifiable situation within this institution. They've got a set of personnel charts, and the way pay gets computed, I make less money than any of the janitors (who are very good, and very nice people to boot, and I'm glad they get paid so well. I wish everyone got livable wages). I'm pretty bright, but school's a drag. I don't want mega bucks, I just want $40k/yr (localized to the southeastern United States) so I can save enough money to travel, or maybe "finish [my] education". My question is this: Are people without degrees ever compensated commensurately with people who have degrees? As someone who never "finished their education", how badly do you think this has hurt you? How do you navigate the job seeking and hiring process? As someone who hires programmers, do you pay more for diplomas? Is that an institutional necessity, or based on your own value judgement?


  • ScrollViewer not visible in WPF DataGrid

    - by cre-johnny07
    I have a datagrid in a grid but the scrollviewer is not visibile even though I made it auto. Below in my code. I can't figure out where's the problem. <Grid Grid.Row="0" Grid.Column="0"> <Grid.RowDefinitions> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"></ColumnDefinition> <ColumnDefinition Width="Auto"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock Text="Doctor Name" Grid.Row="0" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Doctor Address" Grid.Row="1" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Entry Note" Grid.Row="2" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Join Date" Grid.Row="3" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Default Discount" Grid.Row="4" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Discount Valid Till" Grid.Row="5" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Employee Name" Grid.Row="6" Grid.Column="0" Margin="5,5,0,0"/> <Grid Grid.Row="7" Grid.Column="0" Grid.ColumnSpan="2"> <Grid.ColumnDefinitions> <ColumnDefinition></ColumnDefinition> <ColumnDefinition></ColumnDefinition> <ColumnDefinition></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock Text="Report Type" Grid.Row="0" Grid.Column="0" Margin="5,5,0,0"/> <ComboBox Grid.Row="0" Grid.Column="1" Name="cmbReportType" Text="{Binding CurrentEntity.ReportType}"/> <Button Grid.Row="0" Grid.Column="2" Name="btnAddDetail" Content="Add Details" Command="{Binding AddDetailsCommand}"/> </Grid> <TextBox Grid.Row="0" Grid.Column="1" Margin="5,5,0,0" Width="190" Name="txtDocName" Text="{Binding CurrentEntity.RefName}"/> <TextBox Grid.Row="1" Grid.Column="1" Margin="5,5,0,0" Width="190" Height="75" Name="txtDocAddress" Text="{Binding CurrentEntity.RefAddress}"/> <TextBox Grid.Row="2" Grid.Column="1" Margin="5,5,0,0" Width="190" Height="100" Name="txtEntryNote" Text="{Binding CurrentEntity.EntryNotes}"/> <Custom:DatePicker Grid.Row="3" Grid.Column="1" Margin="5,3,0,0" Width="125" Name="dtpJoinDate" Height="24" HorizontalAlignment="Left" VerticalAlignment="Top" SelectedDate="{Binding CurrentEntity.DateStarted}" SelectedDateFormat="Short"/> <TextBox Grid.Row="4" Grid.Column="1" Height="25" Width="75" Name="txtDefaultDiscount" HorizontalAlignment="Left" Margin="5,0,0,0" VerticalAlignment="Top" Text="{Binding CurrentEntity.DefaultDiscount}"/> <Custom:DatePicker Grid.Row="5" Grid.Column="1" Margin="5,3,0,0" Width="125" Name="dtpValidTill" Height="24" HorizontalAlignment="Left" VerticalAlignment="Top" SelectedDate="{Binding CurrentEntity.DefaultDiscountValidTill}" SelectedDateFormat="Short"/> <ComboBox Grid.Row="6" Grid.Column="1" Margin="5,3,0,0" Width="190" Height="30" Name="cmbEmployeeName" ItemsSource="{Binding Employees}" DisplayMemberPath="FullName" SelectedIndex="{Binding SelecteIndex}"> </ComboBox> <Custom:DataGrid Grid.Row="8" Grid.Column="0" Grid.ColumnSpan="2" ItemsSource="{Binding XYZ}" AutoGenerateColumns="False" Name="grdTestDept"> <Custom:DataGrid.Columns> <Custom:DataGridTextColumn Binding="{Binding dep_id}" Width="40" Header="ID"/> <Custom:DataGridTextColumn 
Binding="{Binding dep_name}" Width="125" Header="Name"/> <Custom:DataGridTextColumn Binding="{Binding default_data}" Width="100" Header="Default Data"/> </Custom:DataGrid.Columns> </Custom:DataGrid> </Grid> <Grid Grid.Row="0" Grid.Column="1" Grid.RowSpan="9"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" MinWidth="43"></ColumnDefinition> <ColumnDefinition Width="Auto" MinWidth="150"></ColumnDefinition> <ColumnDefinition Width="Auto" MinWidth="50"></ColumnDefinition> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="34*" ></RowDefinition> <RowDefinition Height="337.88*"></RowDefinition> </Grid.RowDefinitions> <TextBlock Text="Name: " Grid.Row="0" Grid.Column="0" Margin="5,4,0,0" /> <cc:ValueEnabledCombo Grid.Column="1" x:Name="cmbfilEmployeeName" Width="150" Height="30" Margin="5,4,0,0" VerticalAlignment="Top" SelectedIndex="0" ItemsSource="{Binding Employees}" DisplayMemberPath="FullName" SelectedValuePath="EmployeeId" cc:ValueEnabledCombo.SelectionChanged="{Binding SelectionChangedCommand}"> </cc:ValueEnabledCombo> <Button Grid.Column="2" Name="btnReport" Width="50" Content="Report" Height="28" Margin="5,4,0,0" Command="{Binding ReportCommand}" VerticalAlignment="Top" /> <Grid Grid.Row="1" Grid.Column="0" Grid.ColumnSpan="3"> <Custom:DataGrid ItemsSource="{Binding DoctorList}" AutoGenerateColumns="False" Name="grdDoctor" ScrollViewer.HorizontalScrollBarVisibility="Auto" ScrollViewer.VerticalScrollBarVisibility="Auto"> <Custom:DataGrid.Columns> <Custom:DataGridTextColumn Binding="{Binding RefName}" Width="Auto" Header="Doctor Name"/> <Custom:DataGridTextColumn Binding="{Binding EmployeeFullName}" Width="Auto" Header="Employee Name"/> </Custom:DataGrid.Columns> </Custom:DataGrid> </Grid> </Grid> </Grid>


  • xslt cookbook example not working

    - by Liza dawson
    Hi I am working on this from xslt cookbook type my.xml <?xml version="1.0" encoding="UTF-8"?> <people> <person name="Al Zehtooney" age="33" sex="m" smoker="no"/> <person name="Brad York" age="38" sex="m" smoker="yes"/> <person name="Charles Xavier" age="32" sex="m" smoker="no"/> <person name="David Williams" age="33" sex="m" smoker="no"/> <person name="Edward Ulster" age="33" sex="m" smoker="yes"/> <person name="Frank Townsend" age="35" sex="m" smoker="no"/> <person name="Greg Sutter" age="40" sex="m" smoker="no"/> <person name="Harry Rogers" age="37" sex="m" smoker="no"/> <person name="John Quincy" age="43" sex="m" smoker="yes"/> <person name="Kent Peterson" age="31" sex="m" smoker="no"/> <person name="Larry Newell" age="23" sex="m" smoker="no"/> <person name="Max Milton" age="22" sex="m" smoker="no"/> <person name="Norman Lamagna" age="30" sex="m" smoker="no"/> <person name="Ollie Kensington" age="44" sex="m" smoker="no"/> <person name="John Frank" age="24" sex="m" smoker="no"/> <person name="Mary Williams" age="33" sex="f" smoker="no"/> <person name="Jane Frank" age="38" sex="f" smoker="yes"/> <person name="Jo Peterson" age="32" sex="f" smoker="no"/> <person name="Angie Frost" age="33" sex="f" smoker="no"/> <person name="Betty Bates" age="33" sex="f" smoker="no"/> <person name="Connie Date" age="35" sex="f" smoker="no"/> <person name="Donna Finster" age="20" sex="f" smoker="no"/> <person name="Esther Gates" age="37" sex="f" smoker="no"/> <person name="Fanny Hill" age="33" sex="f" smoker="yes"/> <person name="Geta Iota" age="27" sex="f" smoker="no"/> <person name="Hillary Johnson" age="22" sex="f" smoker="no"/> <person name="Ingrid Kent" age="21" sex="f" smoker="no"/> <person name="Jill Larson" age="20" sex="f" smoker="no"/> <person name="Kim Mulrooney" age="41" sex="f" smoker="no"/> <person name="Lisa Nevins" age="21" sex="f" smoker="no"/> </people> type generic-attr-to-csv.xslt <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:csv="http://www.ora.com/XSLTCookbook/namespaces/csv"> <xsl:param name="delimiter" select=" ',' "/> <xsl:output method="text" /> <xsl:strip-space elements="*"/> <xsl:template match="/"> <xsl:for-each select="$columns"> <xsl:value-of select="@name"/> <xsl:if test="position( ) != last( )"> <xsl:value-of select="$delimiter/> </xsl:if> </xsl:for-each> <xsl:text>&#xa;</xsl:text> <xsl:apply-templates/> </xsl:template> <xsl:template match="/*/*"> <xsl:variable name="row" select="."/> <xsl:for-each select="$columns"> <xsl:apply-templates select="$row/@*[local-name(.)=current( )/@attr]" mode="csv:map-value"/> <xsl:if test="position( ) != last( )"> <xsl:value-of select="$delimiter"/> </xsl:if> </xsl:for-each> <xsl:text>&#xa;</xsl:text> </xsl:template> <xsl:template match="@*" mode="map-value"> <xsl:value-of select="."/> </xsl:template> </xsl:stylesheet> type my.xsl <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:csv="http://www.ora.com/XSLTCookbook/namespaces/csv"> <xsl:import href="generic-attr-to-csv.xslt"/> <!--Defines the mapping from attributes to columns --> <xsl:variable name="columns" select="document('')/*/csv:column"/> <csv:column name="Name" attr="name"/> <csv:column name="Age" attr="age"/> <csv:column name="Gender" attr="sex"/> <csv:column name="Smoker" attr="smoker"/> <!-- Handle custom attribute mappings --> <xsl:template match="@sex" mode="csv:map-value"> <xsl:choose> <xsl:when test=".='m'">male</xsl:when> <xsl:when test=".='f'">female</xsl:when> <xsl:otherwise>error</xsl:otherwise> 
</xsl:choose> </xsl:template> </xsl:stylesheet> Using the Apache Xalan parser: D:\Test>java org.apache.xalan.xslt.Process -in my.xml -xsl my.xsl -out my.csv [Fatal Error] generic-attr-to-csv.xslt:15:6: The value of attribute "select" associated with an element type "xsl:value-of" must not contain the '<' character. file:///D:/Test/generic-attr-to-csv.xslt; Line #15; Column #6; org.xml.sax.SAXParseException: The value of attribute "select" associated with an element type "xsl:value-of" must not contain the '<' character. java.lang.NullPointerException at org.apache.xalan.transformer.TransformerImpl.createSerializationHandler(TransformerImpl.java:1171) at org.apache.xalan.transformer.TransformerImpl.createSerializationHandler(TransformerImpl.java:1060) at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1268) at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1251) at org.apache.xalan.xslt.Process.main(Process.java:1048) Exception in thread "main" java.lang.RuntimeException at org.apache.xalan.xslt.Process.doExit(Process.java:1155) at org.apache.xalan.xslt.Process.main(Process.java:1128) Any ideas what I am doing wrong?

    Read the article

  • ASP.NET Creating a Rich Repeater, DataBind wiping out custom added controls...

    - by tonyellard
    So... I had this clever idea that I'd create my own Repeater control that implements paging and sorting by inheriting from Repeater and extending its capabilities. I found bits and pieces of information on how to go about this, and everything seemed OK. I created a WebControlLibrary to house my custom controls. Along with the enriched repeater, I created a composite control that acts as the "pager bar", with forward, back, and page selection. The pager bar works 100% on its own, properly firing a page-changed event when the user interacts with it. The rich repeater databinds without issue, but when the databind fires (when I call base.DataBind()), the control collection is cleared out and my pager bars are removed. This breaks the viewstate for the pager bars, leaving them unable to fire their events properly or maintain their state. I've tried adding the controls back to the collection after base.DataBind() fires, but that doesn't solve the issue; I start to get very strange results, including problems with altering the hierarchy of the control tree (resolved by adding [ViewStateModeById]). Before I go back to the drawing board and create a second composite control which contains a repeater and the pager bars (so that the repeater isn't responsible for the pager bars' viewstate), are there any thoughts on how to resolve the issue? In the interest of share and share alike, the code for the repeater itself is below; the pager bars aren't as significant, since the issue is really maintaining state for any additional child controls. (Forgive the roughness of some of the code... it's still a work in progress.)

      using System;
      using System.Collections.Generic;
      using System.ComponentModel;
      using System.Text;
      using System.Data;
      using System.Web;
      using System.Web.UI;
      using System.Web.UI.WebControls;

      [ViewStateModeById]
      public class SortablePagedRepeater : Repeater, INamingContainer {
          private SuperRepeaterPagerBar topBar = new SuperRepeaterPagerBar();
          private SuperRepeaterPagerBar btmBar = new SuperRepeaterPagerBar();

          protected override void OnInit(EventArgs e) {
              Page.RegisterRequiresControlState(this);
              InitializeControls();
              base.OnInit(e);
              EnsureChildControls();
          }

          protected void InitializeControls() {
              topBar.ID = this.ID + "__topPagerBar";
              topBar.NumberOfPages = this._currentProperties.numOfPages;
              topBar.CurrentPage = this.CurrentPageNumber;
              topBar.PageChanged += new SuperRepeaterPagerBar.PageChangedEventHandler(PageChanged);
              btmBar.ID = this.ID + "__btmPagerBar";
              btmBar.NumberOfPages = this._currentProperties.numOfPages;
              btmBar.CurrentPage = this.CurrentPageNumber;
              btmBar.PageChanged += new SuperRepeaterPagerBar.PageChangedEventHandler(PageChanged);
          }

          protected override void CreateChildControls() {
              EnsureDataBound();
              this.Controls.Add(topBar);
              this.Controls.Add(btmBar);
              //base.CreateChildControls();
          }

          private void PageChanged(object sender, int newPage) {
              this.CurrentPageNumber = newPage;
          }

          public override void DataBind() {
              //pageDataSource();
              //DataBind removes all controls from control collection...
              base.DataBind();
              Controls.Add(topBar);
              Controls.Add(btmBar);
          }

          private void pageDataSource() {
              //Create paged data source
              PagedDataSource pds = new PagedDataSource();
              pds.PageSize = this.ItemsPerPage;
              pds.AllowPaging = true;

              // first get a PagedDataSource going and perform sort if possible...
              if (base.DataSource is System.Collections.IEnumerable) {
                  pds.DataSource = (System.Collections.IEnumerable)base.DataSource;
              } else if (base.DataSource is System.Data.DataView) {
                  DataView data = (DataView)DataSource;
                  if (this.SortBy != null && data.Table.Columns.Contains(this.SortBy)) {
                      data.Sort = this.SortBy;
                  }
                  pds.DataSource = data.Table.Rows;
              } else if (base.DataSource is System.Data.DataTable) {
                  DataTable data = (DataTable)DataSource;
                  if (this.SortBy != null && data.Columns.Contains(this.SortBy)) {
                      data.DefaultView.Sort = this.SortBy;
                  }
                  pds.DataSource = data.DefaultView;
              } else if (base.DataSource is System.Data.DataSet) {
                  DataSet data = (DataSet)DataSource;
                  if (base.DataMember != null && data.Tables.Contains(base.DataMember)) {
                      if (this.SortBy != null && data.Tables[base.DataMember].Columns.Contains(this.SortBy)) {
                          data.Tables[base.DataMember].DefaultView.Sort = this.SortBy;
                      }
                      pds.DataSource = data.Tables[base.DataMember].DefaultView;
                  } else if (data.Tables.Count > 0) {
                      if (this.SortBy != null && data.Tables[0].Columns.Contains(this.SortBy)) {
                          data.Tables[0].DefaultView.Sort = this.SortBy;
                      }
                      pds.DataSource = data.Tables[0].DefaultView;
                  } else {
                      throw new Exception("DataSet doesn't have any tables.");
                  }
              } else if (base.DataSource == null) {
                  // don't do anything?
              } else {
                  throw new Exception("DataSource must be of type System.Collections.IEnumerable. The DataSource you provided is of type " + base.DataSource.GetType().ToString());
              }

              if (pds != null && base.DataSource != null) {
                  //Make sure that the page doesn't exceed the maximum number of pages available
                  if (this.CurrentPageNumber >= pds.PageCount) {
                      this.CurrentPageNumber = pds.PageCount - 1;
                  }
                  //Set up paging values...
                  btmBar.CurrentPage = topBar.CurrentPage = pds.CurrentPageIndex = this.CurrentPageNumber;
                  this._currentProperties.numOfPages = btmBar.NumberOfPages = topBar.NumberOfPages = pds.PageCount;
                  base.DataSource = pds;
              }
          }

          public override object DataSource {
              get { return base.DataSource; }
              set {
                  //init(); //reset paging/sorting values since we've potentially changed data sources.
                  base.DataSource = value;
                  pageDataSource();
              }
          }

          protected override void Render(HtmlTextWriter writer) {
              topBar.RenderControl(writer);
              base.Render(writer);
              btmBar.RenderControl(writer);
          }

          [Serializable]
          protected struct CurrentProperties {
              public int pageNum;
              public int itemsPerPage;
              public int numOfPages;
              public string sortBy;
              public bool sortDir;
          }

          protected CurrentProperties _currentProperties = new CurrentProperties();

          protected override object SaveControlState() {
              return this._currentProperties;
          }

          protected override void LoadControlState(object savedState) {
              this._currentProperties = (CurrentProperties)savedState;
          }

          [Category("Status")]
          [Browsable(true)]
          [NotifyParentProperty(true)]
          [DefaultValue("")]
          [Localizable(false)]
          public string SortBy {
              get { return this._currentProperties.sortBy; }
              set {
                  //If sorting by the same column, swap the sort direction.
                  if (this._currentProperties.sortBy == value) {
                      this.SortAscending = !this.SortAscending;
                  } else {
                      this.SortAscending = true;
                  }
                  this._currentProperties.sortBy = value;
              }
          }

          [Category("Status")]
          [Browsable(true)]
          [NotifyParentProperty(true)]
          [DefaultValue(true)]
          [Localizable(false)]
          public bool SortAscending {
              get { return this._currentProperties.sortDir; }
              set { this._currentProperties.sortDir = value; }
          }

          [Category("Status")]
          [Browsable(true)]
          [NotifyParentProperty(true)]
          [DefaultValue(25)]
          [Localizable(false)]
          public int ItemsPerPage {
              get { return this._currentProperties.itemsPerPage; }
              set { this._currentProperties.itemsPerPage = value; }
          }

          [Category("Status")]
          [Browsable(true)]
          [NotifyParentProperty(true)]
          [DefaultValue(1)]
          [Localizable(false)]
          public int CurrentPageNumber {
              get { return this._currentProperties.pageNum; }
              set {
                  this._currentProperties.pageNum = value;
                  pageDataSource();
              }
          }
      }
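    One way out, sketched below. Repeater.DataBind() rebuilds the item hierarchy and clears the Controls collection, and on postback view state is matched to child controls by position before DataBind runs, so pager bars re-added after the clear can no longer line up with their saved state. Making the pager bars siblings of an inner Repeater inside a composite control sidesteps the problem entirely. This is a minimal, hedged sketch: PagedRepeaterPanel and its member names are illustrative, SuperRepeaterPagerBar is assumed to be the pager-bar control from the post, and the paging/sorting plumbing is omitted.

      using System.Web.UI;
      using System.Web.UI.WebControls;

      public class PagedRepeaterPanel : CompositeControl {
          // The repeater and the pager bars are siblings, so binding the
          // inner repeater never removes the pager bars from the tree.
          private readonly Repeater inner = new Repeater();
          private readonly SuperRepeaterPagerBar topBar = new SuperRepeaterPagerBar();
          private readonly SuperRepeaterPagerBar btmBar = new SuperRepeaterPagerBar();

          public ITemplate ItemTemplate { set { inner.ItemTemplate = value; } }
          public object DataSource { set { inner.DataSource = value; } }

          protected override void CreateChildControls() {
              Controls.Clear();
              topBar.ID = "topPagerBar";
              inner.ID = "innerRepeater";
              btmBar.ID = "btmPagerBar";
              Controls.Add(topBar);
              Controls.Add(inner);
              Controls.Add(btmBar);
          }

          public override void DataBind() {
              EnsureChildControls();
              inner.DataBind();   // only the inner repeater's items are rebuilt
          }
      }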

    Read the article

  • Ajax Tabs implementation problem

    - by SmartDev
    Hi, I have implemented Ajax tabs with four tabs in the container, and each tab holds a GridView with paging and sorting. The tabs look good and I can see the grids, but sorting only works on my first tab: if I click on any other tab and then click on its grid, it jumps back to the first tab again. One more thing: I want to change the background color of each tab. Can anyone help, please? Here is my source code:

      <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
        <asp:ScriptManager ID="ScMMyTabs" runat="server">
        </asp:ScriptManager>
        <cc1:TabContainer ID="TCMytabs" ActiveTabIndex="0" runat="server">
          <cc1:TabPanel ID="TpMyreq" runat="server" CssClass="TabBackground" HeaderText="My request">
            <ContentTemplate>
              <table>
                <tr><td>
                  <asp:Button ID="btnexportMyRequestCsu" runat="server" Text="Export To Excel"
                      CssClass="LabelDisplay" OnClick="btnexportMyRequestCsu_Click" />
                </td></tr>
                <tr><td>
                  <asp:GridView ID="GdvMyrequest" runat="server" CssClass="Mytabs" BackColor="White"
                      BorderColor="White" BorderStyle="Ridge" BorderWidth="2px" CellPadding="3"
                      CellSpacing="1" GridLines="None"
                      OnPageIndexChanging="GdvMyrequest_PageIndexChanging"
                      OnSorting="GdvMyrequest_Sorting"
                      EmptyDataText="No request found for this user">
                    <RowStyle BackColor="#DEDFDE" ForeColor="Black" />
                    <FooterStyle BackColor="#C6C3C6" ForeColor="Black" />
                    <PagerStyle BackColor="Control" ForeColor="Gray" HorizontalAlign="Left" />
                    <SelectedRowStyle BackColor="#9471DE" Font-Bold="True" ForeColor="White" />
                    <HeaderStyle BackColor="#4A3C8C" Font-Bold="True" ForeColor="#E7E7FF" />
                    <PagerSettings Position="TopAndBottom" />
                    <Columns>
                      <asp:TemplateField>
                        <HeaderTemplate>Row No</HeaderTemplate>
                        <ItemTemplate><%# Container.DataItemIndex + 1 %></ItemTemplate>
                      </asp:TemplateField>
                    </Columns>
                  </asp:GridView>
                </td></tr>
                <tr><td>
                  <asp:Label ID="lblmessmyrequestAhk" runat="server" CssClass="labelmess"></asp:Label>
                </td></tr>
              </table>
            </ContentTemplate>
          </cc1:TabPanel>
          <cc1:TabPanel ID="TpMyPaymentCc" runat="server" HeaderText="Payments Credit Card">
            <ContentTemplate>
              <table>
                <tr><td>
                  <asp:GridView ID="GdvmypaymentsCc" runat="server" CssClass="Mytabs" BackColor="White"
                      BorderColor="White" BorderStyle="Ridge" BorderWidth="2px" CellPadding="3"
                      CellSpacing="1" GridLines="None"
                      OnPageIndexChanging="GdvmypaymentsCc_PageIndexChanging"
                      OnSorting="GdvmypaymentsCc_Sorting"
                      EmptyDataText="No Data">
                    <RowStyle BackColor="#DEDFDE" ForeColor="Black" />
                    <FooterStyle BackColor="#C6C3C6" ForeColor="Black" />
                    <PagerStyle BackColor="Control" ForeColor="Gray" HorizontalAlign="Left" />
                    <SelectedRowStyle BackColor="#9471DE" Font-Bold="True" ForeColor="White" />
                    <HeaderStyle BackColor="#4A3C8C" Font-Bold="True" ForeColor="#E7E7FF" />
                    <PagerSettings Position="TopAndBottom" />
                  </asp:GridView>
                </td></tr>
                <tr><td>
                  <asp:Label ID="lblmessmypaymentsCsu" runat="server" CssClass="labelmess"></asp:Label>
                </td></tr>
              </table>
            </ContentTemplate>
          </cc1:TabPanel>
          <cc1:TabPanel ID="TpMyPaymentsCk" runat="server" HeaderText="Payments Check">
            <ContentTemplate>
              <asp:GridView ID="GdvmypaymentsCk" runat="server" CssClass="Mytabs" BackColor="White"
                  BorderColor="White" BorderStyle="Ridge" BorderWidth="2px" CellPadding="3"
                  CellSpacing="1" GridLines="None"
                  OnPageIndexChanging="GdvmypaymentsCk_PageIndexChanging"
                  OnSorting="GdvmypaymentsCk_Sorting"
                  EmptyDataText="No Data">
                <RowStyle BackColor="#DEDFDE" ForeColor="Black" />
                <FooterStyle BackColor="#C6C3C6" ForeColor="Black" />
                <PagerStyle BackColor="Control" ForeColor="Gray" HorizontalAlign="Left" />
                <SelectedRowStyle BackColor="#9471DE" Font-Bold="True" ForeColor="White" />
                <HeaderStyle BackColor="#4A3C8C" Font-Bold="True" ForeColor="#E7E7FF" />
                <PagerSettings Position="TopAndBottom" />
              </asp:GridView>
            </ContentTemplate>
          </cc1:TabPanel>
          <cc1:TabPanel ID="TpMyCalls" runat="server" HeaderText="Calls">
            <ContentTemplate>
              <table>
                <tr><td>
                  <asp:GridView ID="GdvSelectcallsP" runat="server" CssClass="Mytabs" BackColor="White"
                      BorderColor="White" BorderStyle="Ridge" BorderWidth="2px" CellPadding="3"
                      CellSpacing="1" GridLines="None"
                      OnPageIndexChanging="GdvSelectcallsP_PageIndexChanging"
                      OnRowDataBound="GdvSelectcallsP_RowDataBound"
                      OnSorting="GdvSelectcallsP_Sorting">
                    <RowStyle BackColor="#DEDFDE" ForeColor="Black" />
                    <FooterStyle BackColor="#C6C3C6" ForeColor="Black" />
                    <PagerStyle BackColor="#C6C3C6" ForeColor="Black" HorizontalAlign="Right" />
                    <SelectedRowStyle BackColor="#9471DE" Font-Bold="True" ForeColor="White" />
                    <HeaderStyle BackColor="#4A3C8C" Font-Bold="True" ForeColor="#E7E7FF" />
                    <Columns>
                      <asp:TemplateField>
                        <HeaderTemplate>Row No</HeaderTemplate>
                        <ItemTemplate><%# Container.DataItemIndex + 1 %></ItemTemplate>
                      </asp:TemplateField>
                    </Columns>
                  </asp:GridView>
                </td></tr>
                <tr><td>
                  <asp:Label ID="lblmessselectcallsAhkP" runat="server" CssClass="labelmess"></asp:Label>
                </td></tr>
              </table>
            </ContentTemplate>
          </cc1:TabPanel>
        </cc1:TabContainer>
      </asp:Content>
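    Two hedged suggestions, assuming the Ajax Control Toolkit TabContainer. The jump back to the first tab is a common symptom of the sort/page click running as a full postback, after which the container falls back to its declarative ActiveTabIndex="0"; a frequently suggested remedy is to wrap each tab's content in an UpdatePanel (the page already has the required ScriptManager) so those clicks stay asynchronous and the container's active-tab state survives. A sketch for one tab, with an illustrative UpdatePanel ID:

      <cc1:TabPanel ID="TpMyPaymentCc" runat="server" HeaderText="Payments Credit Card">
        <ContentTemplate>
          <asp:UpdatePanel ID="UpPaymentsCc" runat="server" UpdateMode="Conditional" ChildrenAsTriggers="true">
            <ContentTemplate>
              <!-- the existing GridView markup for this tab goes here, unchanged -->
            </ContentTemplate>
          </asp:UpdatePanel>
        </ContentTemplate>
      </cc1:TabPanel>

    For per-tab background colors, each TabPanel accepts its own CssClass (TpMyreq already uses CssClass="TabBackground"); defining one CSS class per panel, e.g. .TabBackground { background-color: #DEDFDE; }, colors the panel body, while the header strip itself is styled through the toolkit's theme classes, whose names vary by toolkit version, so check the rendered markup.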

    Read the article

  • Detect bugs in this ASP.NET VB master page, default.aspx, and details.aspx code [closed]

    - by ITGURU2011
    Please help me detect bugs, cosmetic issues, information-design issues, and programming issues in the code below for my master page, default.aspx page, and details.aspx page. Also, please suggest ways to make it work better. I have separated the three pages under their names.

    Master Page

      <%@ Master Language="VB" CodeFile="Limo.master.vb" Inherits="Limo" %>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml">
      <head runat="server">
        <title>Untitled Page</title>
        <asp:ContentPlaceHolder id="ContentPlaceHolder2" runat="server">
        </asp:ContentPlaceHolder>
        <link href="StyleSheet.css" rel="stylesheet" type="text/css" />
        <style type="text/css">
          .style3 {
            color: #0000CC;
            font-family: Constantia;
            font-size: xx-large;
            font-weight: normal;
          }
        </style>
      </head>
      <body>
        <form id="form1" runat="server">
          <div class="ExternalDiv">
            <div class="HeaderDiv">
              <h1 class="style3">Limousines</h1>
              <p class="style3">&nbsp;</p>
              <div class="MenuDiv">
              </div>
              <div class="ContentDiv">
                <asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server">
                </asp:ContentPlaceHolder>
              </div>
            </div>
          </div>
        </form>
      </body>
      </html>

    default.aspx page

      <%@ Page Language="VB" MasterPageFile="~/Limo.master" AutoEventWireup="false" CodeFile="default.aspx.vb" Inherits="list" title="List" %>
      <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder2" Runat="Server">
      </asp:Content>
      <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
        <div style="height: 1343px; width: 727px">
          <asp:GridView ID="GridView1" runat="server" AllowSorting="True" AutoGenerateColumns="False"
              DataSourceID="SqlDataSource2"
              style="top: 134px; left: 12px; position: absolute; height: 1337px; width: 531px">
            <Columns>
              <asp:BoundField DataField="Limo_Types" HeaderText="Limo_Types" SortExpression="Limo_Types" />
              <asp:HyperLinkField DataNavigateUrlFields="Limo_Types"
                  DataNavigateUrlFormatString="Details.aspx?tag={0}"
                  DataTextField="Limo_Types" HeaderText="Click for Detail" />
              <asp:ImageField DataImageUrlField="Images" DataImageUrlFormatString="images/{0}"
                  HeaderImageUrl="~/images/6.jpg" HeaderText="Thumbnail">
                <ControlStyle Height="200px" Width="200px" />
                <HeaderStyle Height="200px" Width="200px" />
                <ItemStyle Height="200px" Width="200px" />
              </asp:ImageField>
            </Columns>
          </asp:GridView>
          <asp:SqlDataSource ID="SqlDataSource1" runat="server"></asp:SqlDataSource>
          <asp:SqlDataSource ID="SqlDataSource2" runat="server"
              ConnectionString="<%$ ConnectionStrings:ConnectionString5 %>"
              ProviderName="<%$ ConnectionStrings:ConnectionString5.ProviderName %>"
              SelectCommand="SELECT [Limo_Types], [Images] FROM [tag]">
          </asp:SqlDataSource>
        </div>
      </asp:Content>

    details.aspx page

      <%@ Page Language="VB" MasterPageFile="~/Limo.master" AutoEventWireup="false" CodeFile="Details.aspx.vb" Inherits="Details" title="Details Page" %>
      <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataSourceID="SqlDataSource1"
            AllowSorting="True" BackColor="White" BorderColor="#999999" BorderStyle="Solid" BorderWidth="1px"
            CellPadding="3" ForeColor="Black" GridLines="Vertical">
          <Columns>
            <asp:BoundField DataField="Limo_Types" HeaderText="Limo_Types" SortExpression="Limo_Types" />
            <asp:BoundField DataField="Name" HeaderText="Name" SortExpression="Name" />
            <asp:BoundField DataField="Price" HeaderText="Price" SortExpression="Price" />
            <asp:BoundField DataField="Description" HeaderText="Description" SortExpression="Description" />
            <asp:BoundField DataField="Color" HeaderText="Color" SortExpression="Color" />
            <asp:ImageField DataImageUrlField="Image" DataImageUrlFormatString="images/{0}"
                HeaderImageUrl="~/App_Data/images/1.jpg" HeaderText="Image"
                AccessibleHeaderText="Image" AlternateText="Image">
              <ControlStyle Height="300px" Width="300px" />
            </asp:ImageField>
          </Columns>
          <FooterStyle BackColor="#CCCCCC" />
          <PagerStyle BackColor="#999999" ForeColor="Black" HorizontalAlign="Center" />
          <SelectedRowStyle BackColor="#000099" Font-Bold="True" ForeColor="White" />
          <HeaderStyle BackColor="Black" Font-Bold="True" ForeColor="White" />
          <AlternatingRowStyle BackColor="#CCCCCC" />
        </asp:GridView>
        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:ConnectionString %>"
            ProviderName="<%$ ConnectionStrings:ConnectionString.ProviderName %>"
            SelectCommand="SELECT [Limo_Types], [Name], [Price], [Image], [Description], [Color] FROM [Query1] WHERE ([Limo_Types] = ?)">
          <SelectParameters>
            <asp:QueryStringParameter Name="Limo_Types" QueryStringField="tag" Type="String" />
          </SelectParameters>
        </asp:SqlDataSource>
      </asp:Content>
      <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder2" Runat="Server">
      </asp:Content>
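    A few issues a first review turns up. The details grid's header image points at ~/App_Data/images/1.jpg, but ASP.NET never serves files from App_Data over HTTP, so that header image cannot render; the row images already come from images/{0}, so the header image presumably belongs there too (assuming the file exists under ~/images). A corrected ImageField:

      <asp:ImageField DataImageUrlField="Image" DataImageUrlFormatString="images/{0}"
          HeaderImageUrl="~/images/1.jpg" HeaderText="Image"
          AccessibleHeaderText="Image" AlternateText="Image">
        <ControlStyle Height="300px" Width="300px" />
      </asp:ImageField>

    Smaller points: the master page's <title> is still "Untitled Page"; the empty <p class="style3">&nbsp;</p> under the heading is layout filler better handled with CSS margins; default.aspx declares an empty, unused SqlDataSource1 that can be removed; and the absolutely positioned GridView (position: absolute inside a fixed-size div) is likely to overlap the master page layout at other screen sizes, so letting it flow normally is safer.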

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >