Search Results

Search found 13151 results on 527 pages for 'performance counters'.

  • SQL Injection Protection for dynamic queries

    - by jbugeja
    The typical controls against SQL injection flaws are bind variables (the cfqueryparam tag), validation of string data, and moving the actual SQL into stored procedures. That is all fine and I agree, but what if the site is a legacy one that features a lot of dynamic queries? Rewriting all the queries is a herculean task, and it would require an extensive period of regression and performance testing. I was thinking of using a dynamic SQL filter and calling it before calling cfquery for the actual execution. I found one filter on CFLib.org (http://www.cflib.org/udf/sqlSafe):

        <cfscript>
        /**
         * Cleans string of potential sql injection.
         *
         * @param string String to modify. (Required)
         * @return Returns a string.
         * @author Bryan Murphy ([email protected])
         * @version 1, May 26, 2005
         */
        function metaguardSQLSafe(string) {
            // the chr() sequences build the HTML entities &#45;&#45; and &#39;,
            // i.e. the filter encodes "-- " and single quotes
            var sqlList = "-- ,'";
            var replacementList = "#chr(38)##chr(35)##chr(52)##chr(53)##chr(59)##chr(38)##chr(35)##chr(52)##chr(53)##chr(59)# , #chr(38)##chr(35)##chr(51)##chr(57)##chr(59)#";
            return trim(replaceList(string, sqlList, replacementList));
        }
        </cfscript>

    This seems to be quite a simple filter, and I would like to know whether there are ways to improve it, or whether there is a better solution altogether.
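
    For comparison, the direction the standard controls point in: even inside a legacy dynamic query, each user-supplied value can usually be wrapped in cfqueryparam without restructuring the surrounding SQL. A minimal sketch (the datasource, table, and field names are invented for illustration):

        <cfquery name="getUser" datasource="myDSN">
            SELECT user_id, user_name
            FROM users
            WHERE user_name = <cfqueryparam value="#form.username#" cfsqltype="cf_sql_varchar" maxlength="50">
              AND status_id = <cfqueryparam value="#url.status#" cfsqltype="cf_sql_integer">
        </cfquery>

    Unlike a blacklist filter, this parameterizes the value itself, so there is nothing left to escape.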

  • Tips on creating user interfaces and optimizing the user experience

    - by Saif Bechan
    I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side, as people can buy certain items and services. In my opinion, a good blend of user interface, speed, and security is essential for these types of websites.

    It is fairly easy to use Ajax and JavaScript nowadays to do almost everything, as there are a lot of libraries available, such as jQuery and others. But this can bring performance and incompatibility issues, which can lead to users simply going to the next website.

    The overall look of the website is important too: where to place certain buttons; where to place certain types of articles, such as FAQ and support; where and how to display error messages so that the user sees them without being bothered by them. An overall color scheme is important as well.

    The basic question is: how do you create an interface that triggers a user to buy/use your services?

    I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important; when the colors on a website are irritating, you just want to click away. I have not found any articles that explain those concepts. Does anyone have tips and/or resources with articles that guide you in making the correct choices for your website?

  • Async.Parallel or Array.Parallel.map?

    - by gurteen2
    Hello. I'm trying to implement a pattern I read on Don Syme's blog (http://blogs.msdn.com/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx), which suggests that there are opportunities for massive performance improvements from leveraging asynchronous I/O. I am currently trying to take a piece of code that works one way, using Array.Parallel.map, and see if I can somehow achieve the same result using Async.Parallel, but I really don't understand Async.Parallel and cannot get anything to work.

    I have a piece of code (simplified below to illustrate the point) that successfully retrieves an array of data for one cusip (a price series, for example):

        let getStockData cusip =
            let D = DataProvider()
            let arr = D.GetPriceSeries(cusip)
            arr

        let data = Array.Parallel.map (fun x -> getStockData x) stockCusips

    So this approach constructs an array of arrays by making a connection over the internet to my data vendor for each stock (which could be as many as 3000), and returns an array of arrays (one per stock, with a price series for each one). I admittedly don't understand what goes on underneath Array.Parallel.map, but I am wondering if this is a scenario where resources are wasted under the hood, and it could actually be faster using asynchronous I/O.

    To test this out, I have attempted to write the function using asyncs, and I think the function below follows the pattern in Don Syme's article using the URLs, but it won't compile with let!:

        let getStockDataAsync cusip =
            async {
                let D = DataProvider()
                let! arr = D.GetData(cusip)
                return arr
            }

    The error I get is:

        This expression was expected to have type Async<'a> but here has type obj

    It compiles fine with let instead of let!, but I had thought the whole point was that you need the exclamation point in order for the command to run without blocking a thread.

    So the first question really is: what's wrong with my syntax above in getStockDataAsync? And at a higher level, can anyone offer some additional insight about asynchronous I/O, and whether the scenario I have presented would benefit from it, making it potentially much, much faster than Array.Parallel.map? Thanks so much.
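
    One reading of that error: let! expects an Async<'T> on its right-hand side, but GetData is an ordinary synchronous call returning obj. A sketch of how the pieces would line up, assuming the DataProvider type from the question; note that wrapping a synchronous call like this only moves the blocking onto a thread-pool thread, whereas true non-blocking I/O would need the vendor API to expose an asynchronous entry point (e.g. a Begin/End pair usable with Async.FromBeginEnd):

        let getStockDataAsync cusip =
            async {
                let D = DataProvider()
                // GetData is synchronous, so wrap it to give let! an Async<'T> to bind
                let! arr = async { return D.GetData(cusip) }
                return arr
            }

        // run all the requests in parallel and collect the results
        let data =
            stockCusips
            |> Array.map getStockDataAsync
            |> Async.Parallel
            |> Async.RunSynchronously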

  • Why use short-circuit code?

    - by Tim Lytle
    Related questions: Benefits of using short-circuit evaluation; Why would a language NOT use short-circuit evaluation?; Can someone explain this line of code please? (Logic & Assignment operators)

    There are questions about the benefits of a language using short-circuit evaluation, but I'm wondering what the benefits are for a programmer. Is it just that it can make code a little more concise? Or are there performance reasons?

    I'm not asking about situations where two entities need to be evaluated anyway, for example:

        if ($user->auth() AND $model->valid()) {
            $model->save();
        }

    To me the reasoning there is clear: since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose:

        if (is_string($userid) AND strlen($userid) > 10) {
            // do something
        }

    because it wouldn't be wise to call strlen() with a non-string value.

    What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page:

        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    This could have been:

        if (!defined('APPLICATION_PATH')) {
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
        }

    or even as a single statement:

        if (!defined('APPLICATION_PATH'))
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?

  • SQL Server 2008, join or no join?

    - by Patrick
    Just a small question regarding joins. I have a table with around 30 fields, and I was thinking about making a second table to store 10 of those fields, then joining them back in with the main data. The 10 fields I was planning to store in a second table are never queried directly; they are just settings for the data in the first table. Something like:

        Table 1: Id, Data1, Data2, Data3, ...
        Table 2: Id (same id as table one), Settings1, Settings2, Settings3, ...

    Is this a bad solution? Should I just use one table? How much performance impact does it have? All entries in Table 1 would then also have an entry in Table 2.

    A small update is in order. Most of the Data fields are of type varchar, and two of them are of type text. How is indexing treated? My plan is to index two data fields: email (varchar 50) and author (varchar 20). And yes, all records in Table 1 will have a record in Table 2. Most of the settings fields are of the bit type, around 80%; the rest are a mix of int and varchar. The varchars can be null.
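
    A sketch of the split being described, with invented types where the question doesn't specify them; keeping Table 2 keyed on the same Id is what makes the join a cheap clustered-key lookup:

        CREATE TABLE Table1 (
            Id     INT IDENTITY PRIMARY KEY,
            Email  VARCHAR(50),
            Author VARCHAR(20),
            Data1  VARCHAR(255),
            Data2  TEXT
            -- ... remaining data columns
        );

        CREATE TABLE Table2 (
            Id        INT PRIMARY KEY REFERENCES Table1 (Id),  -- same key: strict 1:1
            Settings1 BIT,
            Settings2 INT,
            Settings3 VARCHAR(50) NULL
            -- ... remaining settings columns
        );

        -- the two indexes mentioned in the update
        CREATE INDEX IX_Table1_Email  ON Table1 (Email);
        CREATE INDEX IX_Table1_Author ON Table1 (Author);

        -- reading the settings back
        SELECT t1.Id, t1.Email, t1.Author, t2.Settings1, t2.Settings2, t2.Settings3
        FROM Table1 AS t1
        JOIN Table2 AS t2 ON t2.Id = t1.Id;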

  • Manual drag-drop operations in Flex

    - by Yarin
    This is a two-part problem.

    A) I'm implementing several irregular drag-drop operations in Flex (e.g. a DataGrid ItemRenderer into a Tree). My preference was modifying DragManager operations to meet my needs, and in fact using DragManager lets me do everything I need, but I'm having serious issues with performance. For example, dragging anything over a many-columned DataGrid, whether the drag was initiated with DragManager.doDrag or just using native ListBase drag-drop functionality, slows the drag movement to a crawl. This happens even if the DataGrid is disabled and not listening for any move/drag events. On the other hand, if the drag is initiated by calling .startDrag() on the Sprite, the drag is smooth and performs great over DataGrids and everything else. So part A is: is there a reason why .startDrag() operations work so well, while drags initiated through DragManager.doDrag suffer so badly over certain components?

    B) If the solution is indeed to handle drag-drops using .startDrag(), how would I go about determining which component the mouse is over when the drag is released? In my example, the dragged object is brought to the top level of the display list, and so is being moved around in stage coordinates. mouseMove and mouseOver events don't fire on the components I'm dragging over, because the mouse is constantly over the dragged component, so I would need some sort of stage-coordinate-to-visible-component conversion. Any thoughts on this? Thanks a lot! Yarin
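
    For part B, one avenue worth testing: DisplayObjectContainer.getObjectsUnderPoint() answers exactly this question without needing mouse events on the targets. A sketch, with handler and field names invented (flash.geom.Point and flash.display.DisplayObject imports assumed):

        private function onDragRelease(event:MouseEvent):void {
            var pt:Point = new Point(stage.mouseX, stage.mouseY);
            // getObjectsUnderPoint lists hits bottom-to-top; reverse for front-to-back
            var hits:Array = stage.getObjectsUnderPoint(pt);
            hits.reverse();
            for each (var obj:DisplayObject in hits) {
                // skip the dragged sprite itself and anything inside it
                if (obj != draggedSprite && !draggedSprite.contains(obj)) {
                    trace("dropped over: " + obj.name);
                    // walk up obj.parent to find the owning Flex component
                    break;
                }
            }
        }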

  • Ruby on Rails export to CSV - maintain MySQL select statement order

    - by zekial
    I'm exporting some data from MySQL to a CSV file using FasterCSV, and I'd like the columns in the output CSV to be in the same order as the select statement in my query. Example:

        rows = Data.find(:all, :select => 'name, age, height, weight')

        headers = rows[0].attributes.keys

        FasterCSV.generate do |csv|
          csv << headers
          rows.each do |r|
            csv << r.attributes.values
          end
        end

    CSV output:

        height,weight,name,age
        74,212,bob,23
        70,201,fred,24
        ...

    I want the CSV columns in the same order as my select statement, so obviously the attributes method is not going to work. Any ideas on the best way to ensure that the columns in my CSV file are in the same order as the select statement? There is a lot of data, and performance is an issue. The select statement is not static. I realize I could loop through the column names within the rows.each loop, but that seems kind of dirty.
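
    One direction, sketched with the question's own models: keep the column list in a single array so the SELECT, the header row, and the value order all come from the same place:

        columns = %w[name age height weight]

        rows = Data.find(:all, :select => columns.join(', '))

        FasterCSV.generate do |csv|
          csv << columns
          rows.each do |r|
            # one hash lookup per column keeps the per-row cost low
            csv << columns.map { |c| r.attributes[c] }
          end
        end

    Since the select statement is not static, columns would be built wherever the query is built, and everything downstream follows it.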

  • WCF Service Throttling

    - by Mubashar Ahmad
    I have a WCF service deployed in a console app with BasicHttpBinding, SSL enabled on the port using the netsh command, and the following attribute set:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    I have also set the throttling behavior:

        <serviceThrottling maxConcurrentCalls="2147483647"
                           maxConcurrentSessions="2147483647"
                           maxConcurrentInstances="2147483647" />

    On the other side, I have created a test client (for load testing) that starts multiple clients simultaneously (multiple threads) and performs transactions against the server. Everything seems fine and works properly, but on the server the CPU utilization doesn't grow, so I added some logging to view the number of concurrent calls on the server and found that it never went over 6. I have reviewed the performance counter logging code more than twice and it seems fine to me. So I want to ask where the problem lies in this situation. One more thing: I haven't specified any ContextMode or ConcurrencyMode yet.

    After this post, I noticed that whenever I start another instance of the test client, my concurrent server calls counter increases by 2: if I am running only one instance, the maximum concurrent received calls will be 2; with two instances the value goes to 4; and so on. Is there a limit on the number of WCF calls from one process?

    Added on 17 March: Today I ran another test with one test client (with 50 concurrent users) on the same machine the server runs on, and this time I got exactly the result I wanted, i.e. maximum concurrent calls received by the server = 50. But I need the same from other machines as well. Can anybody help me with this? - Mubashar
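
    The "2 calls per client process" pattern points at the client side rather than at server throttling: System.Net allows only two concurrent HTTP connections per host per process by default, and connections to localhost are exempt from that limit, which would also explain why the same-machine test reached 50. A sketch of the usual fix in the test client (the limit value is an example):

        // in the test client, before opening any channels/proxies
        System.Net.ServicePointManager.DefaultConnectionLimit = 100;

    or, equivalently, in the test client's app.config:

        <system.net>
          <connectionManagement>
            <add address="*" maxconnection="100" />
          </connectionManagement>
        </system.net>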

  • C++/CLI managed thread cleanup

    - by Guillermo Prandi
    Hi. I'm writing a managed C++/CLI library wrapper for the MySQL embedded server. The MySQL C library requires me to call mysql_thread_init() for every thread that will be using it, and mysql_thread_end() for each thread that exits after using it. Debugging any given VB.Net project I can see at least seven threads; I suppose my library will see only one thread if VB doesn't explicitly create worker threads itself (any confirmation on that?). However, I need clients of my library to be able to create worker threads if they need to, so my library must be thread-aware to some degree.

    The first option I could think of is to expose EnterThread() and LeaveThread() methods in my class, so that client code calls them explicitly at the beginning of, and before exiting, its DoWork() method. This should work if (1) .NET doesn't "magically" create threads the user isn't aware of, and (2) the user is careful enough to have the methods called in a try/finally structure of some sort. However, I don't much like having the user handle things manually like that, and I wonder if I could give her a hand with it.

    In a pure Win32 C/C++ DLL I would have the DllMain DLL_THREAD_ATTACH and DLL_THREAD_DETACH pseudo-events, and I could use them to call mysql_thread_init() and mysql_thread_end() as needed, but there seems to be no such thing in C++/CLI managed code. At the expense of some performance (not much, I think) I can use TLS to detect the "usage from a new thread" case, but I can imagine no mechanism for the thread-exit case.

    So, my questions are: (1) could .NET create application threads without the user being aware of them? and (2) is there any mechanism in managed C++/CLI I could use similar to DLL_THREAD_ATTACH / DLL_THREAD_DETACH? Thanks in advance.
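
    One way to soften option one, sketched here: wrap the enter/leave pair in a C++/CLI ref class with a destructor, which the compiler surfaces to .NET callers as IDisposable, so a C# client gets the try/finally for free via using (the class name is invented; error handling omitted):

        // scope guard: construct at the top of the worker method, dispose on exit
        public ref class MySqlThreadScope sealed
        {
        public:
            MySqlThreadScope()  { mysql_thread_init(); }
            ~MySqlThreadScope() { mysql_thread_end(); }   // destructor == Dispose() in C++/CLI
        };

    A C# caller would then write: using (var scope = new MySqlThreadScope()) { DoWork(); }. This still relies on the caller remembering the guard, but it cannot leak the mysql_thread_end() call on exceptions.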

  • Still about SSD potential... write and read speed

    - by Macroideal
    Hi gurus, I have been working with SSDs (solid state disks) for several months, and problems and questions keep hitting me unexpectedly, since I am new to SSDs. These days I have been testing the read/write speed of an SSD, which I care about most, but the results turned out worse than I expected.

    Three kinds of read/write were implemented in my test:

    1. Read and write directly from and into the SSD, opening the SSD as a whole device (in Windows, opening the raw volume path \\.\G: with _open). This can be very tricky: you have to write data whose size is a multiple of 512 bytes, at disk positions that are multiples of 512 bytes. So if you want to write just one byte, or four bytes, you have to write at least a whole sector at a time.

    2. Read and write data from and into files located on the SSD.

    3. Read and write data from and into files on a mechanical disk.

    I compared the approaches above and found that the SSD performs worse than the mechanical disk. So I am wondering where I can get at the potential performance of the SSD, since SSDs are said to be the future substitute for mechanical disks. Nevertheless, when I test the SSD with a professional hard-disk tool, the SSD is about twice as fast as the mechanical disk. So, why? Thanks very much. If you know any SSD tips, please follow up.

  • Multiple meshes with one geometry and different textures. Error

    - by user1821834
    I have a loop where I create multiple meshes, because each mesh has its own texture:

        var geoCube = new THREE.CubeGeometry(voxelSize, voxelSize, voxelSize);
        var geometry = new THREE.Geometry();

        for (var i = 0; i < voxels.length; i++) {
            var voxel = voxels[i];
            var object;
            color = voxel.color;
            // returns the texture with a color and a text for each face of the geometry
            texture = almacen.textPlaneTexture(voxel.texto, color, voxelSize);
            material = new THREE.MeshBasicMaterial({ map: texture });
            object = new THREE.Mesh(geoCube, material);
            THREE.GeometryUtils.merge(geometry, object);
        }

        // add the merged geometry to the scene
        mesh = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial());
        mesh.geometry.computeFaceNormals();
        mesh.geometry.computeVertexNormals();
        mesh.geometry.computeTangents();
        scene.add(mesh);

    But now I get this error from the three.js code:

        Uncaught TypeError: Cannot read property 'map' of undefined

    in the function:

        function bufferGuessUVType ( material ) { ... }

    Update: For now I have removed the merge solution and I can use a unique geometry for all the voxels. Although I think that if I merged the meshes, the app would have better performance...
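
    A hedged reading of the error: MeshFaceMaterial constructed with no arguments has no materials array to index into, so bufferGuessUVType receives undefined. Under the pre-r50 API this code appears to use, one direction to try is collecting the per-voxel materials and tagging each cube face with a materialIndex before merging (a sketch to adapt, not a drop-in fix):

        var materials = [];
        for (var i = 0; i < voxels.length; i++) {
            var voxel = voxels[i];
            var texture = almacen.textPlaneTexture(voxel.texto, voxel.color, voxelSize);
            materials.push(new THREE.MeshBasicMaterial({ map: texture }));

            // tag the faces so the merged geometry remembers which material each face uses
            for (var f = 0; f < geoCube.faces.length; f++) {
                geoCube.faces[f].materialIndex = i;
            }
            THREE.GeometryUtils.merge(geometry, new THREE.Mesh(geoCube, materials[i]));
        }
        var mesh = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial(materials));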

  • Retrieving varbinary output from a query in SQL Server into classic ASP

    - by user303526
    Hi, I'm trying to retrieve a varbinary output value from a query running on SQL Server 2005 into classic ASP. The ASP execution just fails at the part of the code that simply reads the varbinary output into a string, so I guess it has to be handled some other way.

    Actually, I'm trying to set (sp_setapprole) and unset (sp_unsetapprole) application roles for a database connection. First I set the approle, then I run my required queries, and finally I unset the approle. It is during unsetting that I need the cookie (varbinary) value in my ASP code, so that I can build a query like 'exec sp_unsetapprole @cookie'. At this stage, though, I don't have the cookie (varbinary) value.

    The reason I'm doing this is that I used to get an 'sp_setapprole was not invoked correctly' error when trying to set app roles. I've disabled pooling by appending 'OLE DB Services = -2;Pooling=False' to my connection string. I know pooling helps performance, but here I'm facing big problems.

    Please help me retrieve a varbinary value into a classic ASP file, or suggest another way to set and unset app roles. Solutions either way are appreciated. Thanks, Nandagopal
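
    One route that avoids converting the cookie to a string at all: keep it inside ADO as an output parameter, then feed the same variant back into sp_unsetapprole. A sketch, assuming an open ADODB.Connection named cn and the ADO constants from adovbs.inc; the role name and password are placeholders:

        Dim cmd, cookie
        Set cmd = Server.CreateObject("ADODB.Command")
        Set cmd.ActiveConnection = cn
        cmd.CommandType = adCmdStoredProc
        cmd.CommandText = "sp_setapprole"
        cmd.Parameters.Append cmd.CreateParameter("@rolename", adVarChar, adParamInput, 128, "myAppRole")
        cmd.Parameters.Append cmd.CreateParameter("@password", adVarChar, adParamInput, 128, "myPassword")
        cmd.Parameters.Append cmd.CreateParameter("@encrypt", adVarChar, adParamInput, 10, "none")
        cmd.Parameters.Append cmd.CreateParameter("@fCreateCookie", adBoolean, adParamInput, , True)
        cmd.Parameters.Append cmd.CreateParameter("@cookie", adVarBinary, adParamOutput, 8000)
        cmd.Execute , , adExecuteNoRecords

        cookie = cmd.Parameters("@cookie").Value   ' stays a byte array; never becomes a string

        ' ... run the real queries on cn here ...

        Set cmd = Server.CreateObject("ADODB.Command")
        Set cmd.ActiveConnection = cn
        cmd.CommandType = adCmdStoredProc
        cmd.CommandText = "sp_unsetapprole"
        cmd.Parameters.Append cmd.CreateParameter("@cookie", adVarBinary, adParamInput, 8000, cookie)
        cmd.Execute , , adExecuteNoRecords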

  • Understanding WordProcessingML tags and avoiding unnecessary tags

    - by rithanyalaxmi
    Hi, I am using the MS Word API to generate .docx files containing data fetched from a DB, applying the respective styles, fonts, symbols, etc. If the data fetched from the DB is quite large, there is a problem displaying that data in the .docx file. I found that internally MS Word 2007 writes some tags which may not be needed to display the data. Hence I am figuring out which MS Word tags are actually necessary when converting to an .xml file, so that I can avoid the unnecessary tags and build only those needed to display the data. So I am planning to write my own .xml with just the MS Word tags that are needed, rather than generating the .xml from a .docx file.

    My queries are:

    1) Is it right that MS Word generates some tags which may not be needed during the conversion of .docx to document.xml, and that this makes it heavy? If so, what are those tags, so that I can avoid them when writing my own .xml file?

    2) Please share links for understanding WordProcessingML tags and their purposes: which tags are needed and which are not?

    3) Is my approach of writing a new .xml similar to document.xml (the .docx conversion) a worthy way forward, so that I can build the .xml with only the tags I need and improve the performance of the data display?

    Please shed some light on this. Thanks in advance, Rithu
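
    For reference, a minimal main document part really is small. A sketch of a document.xml that Word 2007 will open, with only the optional paragraph and run properties beyond the required skeleton (a valid .docx package additionally needs [Content_Types].xml and the relationship parts):

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
          <w:body>
            <w:p>
              <w:pPr>
                <w:pStyle w:val="Heading1"/> <!-- optional: paragraph style -->
              </w:pPr>
              <w:r>
                <w:rPr>
                  <w:b/> <!-- optional: bold run -->
                </w:rPr>
                <w:t>Hello from the database</w:t>
              </w:r>
            </w:p>
          </w:body>
        </w:document>

    Much of what Word itself adds beyond this (rsid attributes, w:proofErr markers, w:lastRenderedPageBreak) is bookkeeping that can be omitted when generating the file yourself.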

  • In Python epoll, can I avoid errno.EWOULDBLOCK / errno.EAGAIN?

    - by davyzhang
    I wrote an epoll wrapper in Python. It works fine, but recently I found that the performance is not ideal when sending large packets. I looked into the code and found there are actually a LOT of these errors:

        Traceback (most recent call last):
          File "/Users/dawn/Documents/workspace/work/dev/server/sandbox/single_point/tcp_epoll.py", line 231, in send_now
            num_bytes = self.sock.send(self.response)
        error: [Errno 35] Resource temporarily unavailable

    Previously I silenced them, as the documentation suggests, so my sending function was written this way:

        def send_now(self):
            '''send message at once'''
            st = time.time()
            times = 0
            while self.response != '':
                try:
                    num_bytes = self.sock.send(self.response)
                    l.info('msg wrote %s %d : %r size %r',
                           self.ip, self.port, self.response[:num_bytes], num_bytes)
                    self.response = self.response[num_bytes:]
                except socket.error, e:
                    if e[0] in (errno.EWOULDBLOCK, errno.EAGAIN):
                        # here I printed it, but I silence it in normal days
                        # print 'would block, again %r' % tb.format_exc()
                        break
                    else:
                        l.warning('%r %r socket error %r', self.ip, self.port, tb.format_exc())
                        # must break or it causes a dead loop
                        break
                except:
                    # other exceptions
                    l.warning('%r %r msg write error %r', self.ip, self.port, tb.format_exc())
                    break
                times += 1
            et = time.time()

    I googled it, and it says this is caused by the network send buffer temporarily running out. So how can I detect this condition manually and efficiently, instead of letting it reach the exception phase? It costs too much time to raise and handle the exception.
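
    The usual epoll answer is to stop treating a full send buffer as an exceptional path: send only what the kernel will take, and when data is left over, subscribe to EPOLLOUT so epoll tells you when to continue, instead of retrying into EAGAIN. A sketch in the same Python 2 style, assuming the wrapper keeps its epoll object in self.epoll (a name invented here):

        import errno
        import select
        import socket

        def send_now(self):
            '''send as much as the kernel will take, without spinning on EAGAIN'''
            while self.response != '':
                try:
                    num_bytes = self.sock.send(self.response)
                    self.response = self.response[num_bytes:]
                except socket.error, e:
                    if e.args[0] in (errno.EWOULDBLOCK, errno.EAGAIN):
                        break                  # buffer full: not an error, just stop
                    raise
            if self.response != '':
                # wake up when the buffer drains, then call send_now again
                self.epoll.modify(self.sock.fileno(), select.EPOLLIN | select.EPOLLOUT)
            else:
                self.epoll.modify(self.sock.fileno(), select.EPOLLIN)

    This does not eliminate the final EAGAIN that signals "buffer full", but it reduces it to at most one per burst instead of one per send attempt.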

  • How best to handle exceptions to repeating calendar events

    - by blcArmadillo
    I'm working on a project that will require me to implement a calendar. I'm trying to come up with a system that is very flexible: it can handle repeating events, exceptions to repeats, etc. I've looked at the schemas for applications like iCal, Lotus Notes, and Mozilla to get an idea of how to go about implementing such a system. Currently I'm having trouble deciding the best way to handle exceptions to repeating events. I've used databases quite a bit but don't have a ton of experience with optimization, so I'm not sure which of the two methods I'm considering would be better in terms of overall performance and the ability to query/search:

    1. Breaking the repeating event: changing the end date on the current row for the repeating event, inserting a new row with the exception, and adding another row continuing the old sequence.

    2. Simply adding an exception: adding a new row with some field that marks it as an override.

    Here is why I can't decide. Method one results in many more rows, since each edit requires two extra rows as opposed to only one row with the second method. On the other hand, I think the query to find an event would be much simpler, and thus possibly faster, with the first method. The second method seems like it would require more computation on the application server, since once you get the data you have to remove the intersection of the two rows.

    I know databases are often the bottleneck for websites, and while I'm sure many of you are thinking either is fine because my project will probably never get large enough for the difference in efficiency to really matter, I'd still like to implement the best solution. So which method would you pick, or would you do something completely different? As a side note, I'll be using MySQL and PHP. If there is another technology you think would be better suited, especially in the database area, please mention it. Thanks for the advice.
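
    For concreteness, a sketch of method two as a MySQL schema (all names invented for illustration). The override table stays small because a row exists only where an occurrence deviates from its rule:

        CREATE TABLE events (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            title     VARCHAR(255) NOT NULL,
            starts_at DATETIME NOT NULL,
            ends_at   DATETIME NOT NULL,
            rrule     VARCHAR(255) NULL  -- e.g. 'FREQ=WEEKLY;BYDAY=MO'; NULL for one-off events
        );

        CREATE TABLE event_overrides (
            event_id      INT NOT NULL,
            occurs_on     DATETIME NOT NULL,   -- which generated occurrence is overridden
            new_starts_at DATETIME NULL,       -- NULL when the occurrence is cancelled
            new_ends_at   DATETIME NULL,
            cancelled     TINYINT(1) NOT NULL DEFAULT 0,
            PRIMARY KEY (event_id, occurs_on),
            FOREIGN KEY (event_id) REFERENCES events (id)
        );

    Expanding a date range then means generating occurrences from rrule in PHP and patching them against one indexed lookup per event, which is exactly the "more calculating on the application server" trade-off described above.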

  • iPad receiving memory warning with low memory use

    - by Fer
    I have a UIWebView displaying some HTML; the HTML contains several images and text, but just displaying it gives me a memory warning. So I did some tests: the same HTML with the images at full size, and then with the same images reduced by 50% (I used Preview to reduce all the images by 50%).

    The surprising part is the 50% test: you can see that even with 16 images, the memory peak is 4.90 MB. That's really surprising. Notice that these values are not always the same; they change, but there's not a huge difference between runs. In the 50% case, with 8 and 16 images, although the memory is low, a memory warning still sometimes appears, but the performance improvement over the full-size images is noticeable.

    ("standing still" = memory after scrolling through the whole article)

    Full size:
        1 image   = [standing still 5 MB]     [rotating 5.6 MB]
        2 images  = [standing still 6.99 MB]  [rotating 7.7 MB]
        3 images  = [standing still 9.04 MB]  [rotating 10.9 MB]
        4 images  = [standing still 10.89 MB] [rotating 13.20 MB]
        8 images  = [standing still 23.14 MB] [rotating 25.20 MB] (sometimes crashes)
        16 images = [standing still 27.14 MB, and the app crashes]

    50% size:
        1 image   = [standing still 3.2 MB]   [rotating 3.67 MB]
        2 images  = [standing still 3.2 MB]   [rotating 3.70 MB]
        3 images  = [standing still 3.3 MB]   [rotating 3.79 MB]
        4 images  = [standing still 3.3 MB]   [rotating 3.80 MB]
        8 images  = [standing still 4.29 MB]  [rotating 4.63 MB] (sometimes crashes)
        16 images = [standing still 4.79 MB]  [rotating 4.90 MB] (sometimes crashes)

    My question is: the app sometimes crashed with 16 small images. Why, when the measured memory was so much lower? And what is the limit on memory use? These numbers are only helpful if you also know the maximum, but the maximum seems different between the two tests: 13.2 MB works for the large images and 3.8 MB for the small ones, and anything higher sometimes crashes. That makes no sense.

  • WPF: exposing a calculated property for binding (as a DependencyProperty)

    - by kubal5003
    Hello, I have a complex WPF control that for some reasons (i.e. performance) is not using dependency properties but simple C# properties (at least at the top level these are exposed as properties). The goal is to make it possible to bind to some of those top-level properties. I guess I should declare them as DPs (right? or is there some other way to achieve this?).

    I started reading on MSDN about DependencyProperties and DependencyObjects and found an example:

        public class MyStateControl : ButtonBase
        {
            public MyStateControl() : base() { }

            public Boolean State
            {
                get { return (Boolean)this.GetValue(StateProperty); }
                set { this.SetValue(StateProperty, value); }
            }

            public static readonly DependencyProperty StateProperty =
                DependencyProperty.Register("State", typeof(Boolean),
                    typeof(MyStateControl), new PropertyMetadata(false));
        }

    If I'm right, this code forces the property to be backed by the dependency property store, which restricts it to being a simple property with a store (from a functional point of view, not technically), instead of being able to calculate the property value each time the getter is called and to set other properties/fields each time the setter is called. What can I do about that? Is there any way to make those two worlds meet at some point?
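
    If these properties only need to act as binding sources (other things bind to them, rather than styles or animations binding into them), a sketch of an alternative: keep the computed getter and raise INotifyPropertyChanged whenever an input of the computation changes. ComputeState and its inputs here are placeholders, and System.ComponentModel is assumed:

        public class MyStateControl : ButtonBase, INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            // calculated on every read: no backing store required
            public bool State
            {
                get { return ComputeState(); }
            }

            private bool ComputeState()
            {
                return IsEnabled && IsPressed;   // placeholder computation
            }

            // call this from whatever mutates the inputs of ComputeState
            protected void NotifyStateChanged()
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("State"));
            }
        }

    A read-only DependencyProperty (DependencyProperty.RegisterReadOnly) is the other route, and is required if the property must be the target of a style, trigger, or animation.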

  • Do I really need an ORM?

    - by alchemical
    We're about to begin development on a mid-size ASP.NET MVC 2 web site. For a typical page, we grab data and throw it up on the web page, i.e. there is not much pre-processing of the data before it is sent to the UI. We're now making the decision whether or not to use an ORM and, if yes, which one. We had been looking at EF2, a.k.a. EF4 (the ASP.NET Entity Framework in VS 2010), as one possibility.

    However, I'm thinking a simple solution in this case may be to just use DataTables. The reasoning is that we don't plan to move the data around or process it a lot once we fetch it, so I'm not sure there is that much value in having strongly-typed objects as DTOs. This way we also avoid mapping altogether, which I think simplifies the code and allows for faster development.

    I should mention that budget is an issue on this project, as well as speed of execution. We are striving for simplicity anywhere we can, to keep the budget smaller, the schedule shorter, and performance fast. We haven't fully decided yet, but are currently leaning towards no ORM. Will we be OK with the no-ORM approach, or is an ORM worth it?

  • WCF for shared data access

    - by Audrius
    Hi all, I have a little experience with WCF and would like your opinion/suggestions on how the following problem can be solved:

    A web service needs to be accessible from multiple clients simultaneously, and the service needs to return results from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we are talking about a couple of thousand or more queries per minute.

    My initial draft approach was to use a Windows service as a WCF host, with the service contract implemented by a class decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) that holds a list object and custom locking for accessing it. So basically I have a WCF service singleton with a list = shared data - multiple clients. What I do not like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right".

    What I really, really want is a Windows service running an instance of the IP-list container class, a second service running the WCF service contract implementation, and a way for the latter to query the former nicely, with minimal blocking. But using another WCF channel between them would not really take me far from the initial draft implementation, would it? What approach would you take? The project is still at a very early stage, so a complete design re-do is not out of the question. All ideas are appreciated. Thanks!

    UPDATE: The data set will change dynamically. The web service will have a separate method to add an IP or IP range, and on top of that a scheduled task will trigger a data cleanup every 10-15 minutes according to some rules.

    UPDATE 2: A separate benchmark project will be kicked off that will use MSSQL as a data backend (instead of an in-memory list).
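
    For the singleton draft itself, the "custom locking" is the part worth sketching: validation reads vastly outnumber updates here, so a ReaderWriterLockSlim keeps the thousands of read queries per minute from serializing behind each other. The contract and member names below are invented:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class IpListService : IIpListService
        {
            private readonly HashSet<string> ips = new HashSet<string>();
            private readonly ReaderWriterLockSlim gate = new ReaderWriterLockSlim();

            public bool IsListed(string ip)
            {
                gate.EnterReadLock();
                try { return ips.Contains(ip); }   // many readers run in parallel
                finally { gate.ExitReadLock(); }
            }

            public void Add(string ip)
            {
                gate.EnterWriteLock();
                try { ips.Add(ip); }               // writers are exclusive
                finally { gate.ExitWriteLock(); }
            }
        }

    The scheduled cleanup would take the write lock the same way Add does.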

  • What's the alternative to ReadOnlyCollection when using lazy="extra"?

    - by Kunjan
    I am trying to use lazy="extra" for the child collection of trades on my Client object. The trades are an ISet<Trade> Trades, exposed as ReadOnlyCollection<Trade>, because I do not want anyone to modify the collection directly. As a result, I have added AddTrade and RemoveTrade methods.

    Now I have a client search page where I need to show the trade count, and on the client details page I have a tab where I need to show all the trades for the client in a paged grid view.

    What I want to achieve is: on the search page, when I call client.Trades.Count, NHibernate should fire only a select count(*) query; hence I am using lazy="extra". But because I am using a ReadOnlyCollection, NHibernate fires a count query plus a separate query that loads the child trades collection completely. Also, I cannot include the trades in my initial search request, as this would disturb the paging: a counterparty can have n trades, which would produce n rows when I am searching clients only. So the child collections have to be loaded lazily.

    The second problem is on the client details page's trades grid view, where I have enabled paging for performance reasons. By nature, NHibernate loads the entire collection of trades as the user pages back and forth. Ideally I want to control this by fetching only the trades specific to the page the user is on. How can I achieve this?

    I came across this very good article: http://stackoverflow.com/questions/876976/implementing-ipagedlistt-on-my-models-using-nhibernate. But I am not sure it will work for me, since lazy="extra" currently doesn't work as expected with the ReadOnlyCollection. If I went ahead and implemented the solution this way, and further enhanced it by making the List/Set immutable, would lazy="extra" give me the same problem as with ReadOnlyCollections?
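
    The ReadOnlyCollection wrapper is what defeats lazy="extra": building it copies or enumerates the persistent set, so the collection initializes before Count is ever asked. A sketch of one workaround, assuming the set is mapped to the field with access="field.camelcase": let the entity delegate straight to the NHibernate-managed field, which extra-lazy collections can intercept:

        public class Client
        {
            // mapped as <set name="Trades" access="field.camelcase" lazy="extra" ...>
            private ISet<Trade> trades = new HashedSet<Trade>();

            // the persistent set turns this into SELECT COUNT(*) under lazy="extra"
            public virtual int TradeCount
            {
                get { return trades.Count; }
            }

            // expose without wrapping so enumeration also stays lazy
            public virtual IEnumerable<Trade> Trades
            {
                get { return trades; }
            }

            public virtual void AddTrade(Trade trade) { trades.Add(trade); }
            public virtual void RemoveTrade(Trade trade) { trades.Remove(trade); }
        }

    Returning IEnumerable<Trade> keeps callers from mutating the collection without copying it. For the paged grid, a separate query (e.g. session.CreateFilter(client.Trades, "") with SetFirstResult/SetMaxResults) fetches just one page rather than initializing the whole set.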

  • How can I combine several Expressions into a fast method?

    - by chillitom
    Suppose I have the following expressions:

        Expression<Action<T, StringBuilder>> expr1 = (t, sb) => sb.Append(t.Name);
        Expression<Action<T, StringBuilder>> expr2 = (t, sb) => sb.Append(", ");
        Expression<Action<T, StringBuilder>> expr3 = (t, sb) => sb.Append(t.Description);

    I'd like to be able to compile these into a method/delegate equivalent to the following:

        void Method(T t, StringBuilder sb)
        {
            sb.Append(t.Name);
            sb.Append(", ");
            sb.Append(t.Description);
        }

    What is the best way to approach this? I'd like it to perform well, ideally with performance equivalent to the above method.

    UPDATE: So, whilst it appears that there is no way to do this directly in C# 3, is there a way to convert an expression to IL so that I can use it with System.Reflection.Emit?
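
    Short of emitting IL, one sketch that works in C# 3 / .NET 3.5 is to compile each expression once and chain the resulting delegates; the per-call overhead is one array walk and a delegate invocation per step, which in practice lands close to the hand-written method (System.Linq assumed):

        static Action<T, StringBuilder> Combine<T>(
            params Expression<Action<T, StringBuilder>>[] expressions)
        {
            // pay the Compile() cost once, up front
            var compiled = expressions.Select(e => e.Compile()).ToArray();
            return (t, sb) =>
            {
                foreach (var step in compiled)
                    step(t, sb);
            };
        }

        // usage: var method = Combine(expr1, expr2, expr3); method(t, sb);

    In .NET 4, Expression.Block makes it possible to splice the three bodies into a single lambda and compile that into one method, which removes even the per-step delegate call.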

  • Crosstab/Cube/Pivot Components for Delphi

    - by Anagoge
    I'm looking for a Delphi VCL crosstab/cube/pivot-cube/OLAP grid component for Delphi 2009, 2010, or XE. I'm willing to sacrifice advanced features to get something open/free (or very cheap if I must) to make it easier to collaborate with any future developers, without anyone having to purchase more components than I already use, since this will just be used in one screen. If there isn't anything appropriate out there, I may try to implement something simple on my own.

    I can live with some fairly basic features: drag and drop to configure dimensions, sort by a column, allow totals/min/max for a column, and (optionally) expand/collapse or drill down to sub-categories. Blazing performance and enterprise scalability are not required, since there should be fewer than 2000 source rows.

    There appear to be several decent options in the commercial space (ExpressPivotCube, FastCube, HierCube), but they are all a few hundred dollars. This project already uses existing installations of Excel 2007 and SQL Server 2005/2008, so I might consider leveraging those, though I'd prefer a native Delphi component if possible.

    There are also the very old Decision Cube components included in Delphi's Source\xtab directory, but they apparently no longer support Unicode compilers (Delphi 2009+): I got dozens of Unicode-related compilation errors while test-compiling that source in Delphi XE. Those components also still link to the long-deprecated BDE! Has anyone modified Decision Cube to support Unicode / pure TDataSet? The online tutorials I found were incomplete and silent on the dozens of BDE/Unicode compilation errors I see, so I might have to tackle that on my own.

    Does anyone have suggestions on where to start for a free/cheap basic crosstab/pivot grid component?

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small I/O library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, I could go for a larger buffer that reads an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness and performance)? Thanks in advance for any ideas, Adam
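
    For scale, a sketch of the loop in question: a buffer in the tens of kilobytes usually saturates a local disk while still giving frequent progress events, and 80 KB is a common choice because it also keeps the array under the ~85 KB large-object-heap threshold. Treat the number as a starting point to measure, not a rule:

        const int BufferSize = 80 * 1024;

        public static void Copy(Stream source, Stream target, Action<long> onProgress)
        {
            var buffer = new byte[BufferSize];
            long totalBytes = 0;
            int bytesRead;
            while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                target.Write(buffer, 0, bytesRead);
                totalBytes += bytesRead;
                onProgress(totalBytes);   // drive the progress event from here
            }
        }

    For the network case the same loop applies, but responsiveness comes from Read returning whatever has arrived (often much less than the buffer), so buffer size matters less there than the frequency of reads.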

  • x86 opcode alignment references and guidelines

    - by mrjoltcola
    I'm generating some opcodes dynamically in a JIT compiler, and I'm looking for guidelines on opcode alignment.

    1) I've read comments that briefly "recommend" alignment by adding nops after calls.
    2) I've also read about using nops to optimize sequences for parallelism.
    3) I've read that alignment of ops is good for cache performance.

    Usually these comments don't give any supporting references. It's one thing to read a blog or a comment that says "it's a good idea to do such and such", but it's another to actually write a compiler that implements specific op sequences and realize that most material online, especially blogs, is not useful for practical application. So I'm a believer in finding things out myself (disassembly, etc., to see what real-world apps do). This is one case where I need some outside info.

    I notice compilers will usually start an odd-byte instruction immediately after whatever previous instruction sequence there was. So the compiler is not taking any special care in most cases. I see a nop here or there, but usually it seems nops are used sparingly, if at all. How critical is opcode alignment? Can you provide references for cases that I can actually use for implementation? Thanks.

  • When is LINQ (to objects) Overused?

    - by Mystagogue
    My career started as a hard-core functional-paradigm developer (Lisp), and now I'm a hard-core .NET/C# developer. Of course I'm enamored with LINQ. However, I also believe in (1) using the right tool for the job and (2) preserving the KISS principle: of the 60+ engineers I work with, perhaps only 20% have hours of LINQ / functional-paradigm experience, and 5% have 6 to 12 months of such experience. In short, I feel compelled to stay away from LINQ unless I'm hampered in achieving a goal without it (where replacing 3 lines of O-O code with one line of LINQ is not a "goal").

    But now one of the engineers, with 12 months of LINQ / functional-paradigm experience, is using LINQ to Objects, or at least lambda expressions anyway, in every conceivable location in production code. My various appeals to the KISS principle have not yielded any results. Therefore:

    What published studies can I appeal to next? What "coding standard" guidelines have others concocted with some success? Are there published LINQ performance issues I could point out?

    In short, I'm trying to achieve my first goal, KISS, by indirect persuasion. Of course this problem could be extended to countless other areas (such as overuse of extension methods). Perhaps there is an "uber" guide, highly regarded (e.g. published studies, etc.), that takes a broader swing at this. Anything?
