Search Results

Search found 13249 results on 530 pages for 'performance tuning'.

  • Why is OracleDataAdapter.Fill() Very Slow?

    - by John Gietzen
    I am using a pretty complex query to retrieve some data out of one of our billing databases. I'm running into an issue where the query completes fairly quickly when executed in SQL Developer, but never seems to finish when run through the OracleDataAdapter.Fill() method. I'm only trying to read about 1000 rows, and the query completes in SQL Developer in about 20 seconds. What could be causing such a drastic difference in performance? I have tons of other queries that run quickly using the same function.

    Here is the code I'm using to execute the query:

        using Oracle.DataAccess.Client;
        ...
        public DataTable ExecuteExternalQuery(string connectionString, string providerName, string queryText)
        {
            DbConnection connection = null;
            DbCommand selectCommand = null;
            DbDataAdapter adapter = null;

            switch (providerName)
            {
                case "System.Data.OracleClient":
                case "Oracle.DataAccess.Client":
                    connection = new OracleConnection(connectionString);
                    selectCommand = connection.CreateCommand();
                    adapter = new OracleDataAdapter((OracleCommand)selectCommand);
                    break;
                ...
            }

            DataTable table = null;
            try
            {
                connection.Open();

                selectCommand.CommandText = queryText;
                selectCommand.CommandTimeout = 300000;
                selectCommand.CommandType = CommandType.Text;

                table = new DataTable("result");
                table.Locale = CultureInfo.CurrentCulture;
                adapter.Fill(table);
            }
            finally
            {
                adapter.Dispose();

                if (connection.State != ConnectionState.Closed)
                {
                    connection.Close();
                }
            }

            return table;
        }

    And here is the general outline of the SQL I'm using:

        with trouble_calls as
        (
            select work_order_number, account_number, date_entered
              from work_orders
             where date_entered >= sysdate - (15 + 31) -- Use the index to limit the number of rows scanned
               and wo_status not in ('Cancelled')
               and wo_type = 'Trouble Call'
        )
        select account_number, work_order_number, date_entered
          from trouble_calls wo
         where wo.date_entered >= sysdate - 15
           and (
                   select count(*)
                     from trouble_calls repeat
                    where wo.account_number = repeat.account_number
                      and wo.work_order_number <> repeat.work_order_number
                      and wo.date_entered - repeat.date_entered between 0 and 30
               ) >= 1
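
    A common explanation for this kind of gap is fetch size: SQL Developer only pulls the first screenful of rows to display, while Fill() drains the entire cursor, paying one network round trip per fetch batch, so a small fetch buffer hurts badly on wide rows. In ODP.NET the knob is OracleCommand.FetchSize; below is a minimal sketch of the same tuning idea in Python with cx_Oracle, where the connection string and query are placeholders, not values from the question:

        # Sketch: batch fetching when a client drains a whole result set,
        # as DataAdapter.Fill() does. Connection string and SQL are placeholders.
        import cx_Oracle

        conn = cx_Oracle.connect("user/password@billingdb")  # hypothetical DSN
        cur = conn.cursor()

        # Fetch rows in large batches instead of paying one network round trip
        # per small default batch; the ODP.NET counterpart is OracleCommand.FetchSize.
        cur.arraysize = 1000

        cur.execute("select account_number, work_order_number, date_entered "
                    "from work_orders where date_entered >= sysdate - 15")
        rows = cur.fetchall()  # drains the cursor, like adapter.Fill(table)
        print(len(rows), "rows fetched")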

  • UIView animation vs. Core Animation

    - by Tom Irving
    I'm trying to animate a view sliding into view and bouncing once it hits the side of the screen. A basic example of the slide I'm doing is as follows:

        // The view is added with a rect making it off screen.
        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationDuration:0.07];
        [UIView setAnimationCurve:UIViewAnimationCurveLinear];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationDidStopSelector:@selector(animationDidStop:finished:context:)];
        [theView setFrame:CGRectMake(-5, 0, theView.frame.size.width, theView.frame.size.height)];
        [UIView commitAnimations];

    More animations are then called in the didStop selector to make the bounce effect. The problem is that when more than one view is being animated, the bounce becomes jerky and, well, doesn't bounce anymore. Before I start reading up on how to do this in Core Animation (I understand it's a little more difficult), I'd like to know if there is actually an advantage to using Core Animation rather than UIView animations. If not, is there something I can do to improve performance?

  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using an UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays.

    We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay with the amount of data being returned in UpdatePanel responses. Within the UpdatePanel control there is a table that contains a number of ListBox controls. The real problem is that EVERY OTHER TIME the response (from making ListBox selections) alternates from 30K to 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 goes CPU bound while it is presumably processing this data.

    So irrespective of whether we should be using an UpdatePanel at all, we'd like to figure out why every other async postback response is larger by a factor of more than 10 than the previous one. When the response is in the 30K range, IE updates the display within a second. On the alternate requests, the response time is well over 10 times longer. Any idea why this alternating behavior should be happening with an UpdatePanel?

  • Chart controls for ASP.NET

    - by tolism7
    I am looking for people's opinions and experience on using chart controls within an ASP.NET application (Web Forms or MVC) primarily, but also in any kind of project. I am currently doing my research and I have a pretty big list of controls to evaluate. My list includes (in no particular order):

    ASP.NET controls:

    - DevExpress XtraCharts (http://demos.devexpress.com/XtraChartsDemos/)
    - Dundas Chart for .NET (http://www.dundas.com/)
    - Telerik RadChart for ASP.NET AJAX (http://www.telerik.com/)
    - ComponentArt Charting & Visualization for ASP.NET (http://www.componentart.com/)
    - Infragistics WebChart (http://www.infragistics.com/dotnet/netadvantage/aspnet.aspx#Overview)
    - .net Charting (http://www.dotnetcharting.com/)
    - Chart Control for .Net Framework (Microsoft's) (http://weblogs.asp.net/scottgu/archive/2008/11/24/new-asp-net-charting-control-lt-asp-chart-runat-quot-server-quot-gt.aspx)

    Flash controls:

    - FusionCharts v3 (http://www.fusioncharts.com/)
    - XML/SWF Charts (http://www.maani.us/xml%5Fcharts/index.php)
    - amCharts (http://www.amcharts.com/)
    - AnyChart (http://www.anychart.com/home/)

    JavaScript:

    - Flot (http://code.google.com/p/flot/)
    - Flotr (http://solutoire.com/flotr/)
    - jqPlot (http://www.jqplot.com/index.php)

    (If I missed any that are worth comparing against the above, please let me know.)

    What I am looking for is opinions on using any of the above, so I can form my own and help others do the same, based on what I read here. I do not care which one is better. What I care about is why someone likes one of the above and what these controls offer as a distinct advantage. I am interested in developers' opinions, and I would like to find out which things are difficult to do with any of the above controls and which things are easy to achieve. AJAX compatibility (built in to the controls but also manual), ASP.NET compatibility, input capabilities, data-binding options, performance, and how much code one needs to write in order to create a chart are some of the things that I would want to read about. I have already done my research on StackOverflow for relevant questions, but there is nothing at the level of detail that I would want in order to make a responsible decision.

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based, with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers providing EJB services. Some of the portlets use our own home-grown MVC pattern and some are written in JSF.

    Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJBs on the back-end servers. It was written on the NetBeans Platform and uses the WebSphere application client library to establish communication with the EJBs. The really painful bit was getting the client to use secure JAAS/SSL communications. But after that was resolved, we found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to:

    - Enormous performance advantage (CORBA vs. HTTP, cutting out the Portal Server middleman)
    - Development is simplified and faster due to the use of NetBeans' visual designer and Swing's generally robust architecture
    - The debug cycle is shortened by not having to deploy your client application to a test server
    - No mishmash of technologies as with web-based development (Struts, JSF, jQuery, HTML, JSTL, etc.)

    After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, you'd be crazy not to consider the NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

  • Force float left with no line break no matter what

    - by Tesserex
    I'm guessing this isn't possible, but here goes. I have two tables, and I'm trying to get them to sit side by side so that they look like one table. The reason for this, instead of using one larger table, is that the data in the second table needs to be handled on a column basis, not a row basis, for performance reasons like caching and AJAX-fetching data. So rather than have to reload the whole table for a single column, I decided to break the column out into a separate table, but have it visually seem like a single table.

    I can't find a way to forcibly put the second table next to the first. I can float them, but when the first table is too wide, the second one breaks to the next line. Here's the kicker: the width of the first table is dynamic. So I can't just set a huge width on their container. Well, I could set a huge width, like 1000%, but then I have a huge ugly horizontal scroll bar.

    So is there any way to tell the second table "Stay on that same line, no matter what! And line up right next to the previous element please!"

  • SQL Server 2008, join or no join?

    - by Patrick
    Just a small question regarding joins. I have a table with around 30 fields, and I was thinking about making a second table to store 10 of those fields, then just joining them in with the main data. The 10 fields that I am planning to store in a second table do not get queried directly; they are just some settings for the data in the first table. Something like:

        Table 1: Id, Data1, Data2, Data3, etc.
        Table 2: Id (same id as table one), Settings1, Settings2, Settings3

    Is this a bad solution? Should I just use one table? How much performance impact does it have? All entries in Table 1 would then also have an entry in Table 2.

    A small update is in order. Most of the Data fields are of type varchar and 2 of them are of type text. How is indexing treated? My plan is to index 2 data fields: email (varchar 50) and author (varchar 20). And yes, all records in Table 1 will have a record in Table 2. Most of the settings fields are of the bit type, around 80%. The rest is a mix between int and varchar. The varchars can be null.

  • Code review: is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development.

    What are the things you look for when doing a code review? These are the things I came up with:

    - Enforce FxCop rules (we are a Microsoft shop)
    - Check for performance (any tools?) and security (thinking about using OWASP Code Crawler) and thread safety
    - Adhere to naming conventions
    - The code should cover edge cases and boundary conditions
    - Should handle exceptions correctly (do not swallow exceptions)
    - Check if the functionality is duplicated elsewhere
    - Method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects / avoid temporal coupling)
    - Do not pass/return nulls in methods
    - Avoid dead code
    - Document public and protected methods/properties/variables

    What other things do you generally look for? I am trying to see if we can quantify the review process, so that it would produce identical output when reviewed by different persons. Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would differ from one reviewer to another)?

    The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max to do a review. (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, but you get the general idea: the reviewer should not spend days reviewing someone's code.)

    What other objective/quantifiable systems do you follow to identify good/bad code written by developers?

    Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin

  • Solve a classic map-reduce problem with OpenCL?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) with OpenCL, namely, the AMD implementation. But the result bothers me.

    Let me brief you on the problem first. There are two types of data that flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (Map), and then sum up the overall score for every feature (Reduce). There are around 10k features and 30k samples.

    I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation consists of random memory access (picking some of the 9000+ dimensions and doing plus/subtraction calculations). Since I cannot coalesce memory access, it costs.

    Then, I tried to decompose the problem by samples. The problem is that to sum up the overall score, all threads are competing for a few score variables. It keeps overwriting the score, which turns out to be incorrect. (I cannot compute individual scores first and sum them up later because that would require 10k * 30k * 4 bytes.)

    The first method I tried gives me the same performance as on an i7 860 CPU with 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to the ray tracing problem (for which you carry out calculations on millions of rays against millions of triangles). Any ideas?
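
    The contention described in the second decomposition is usually cured by a two-stage reduction: each worker (or work-group, on the GPU) accumulates into its own private partial-sum array, and the partials are summed once at the end, so nothing ever competes for shared score variables. A rough CPU-side sketch of that shape in Python, where the scoring function is a stand-in for the real random-access math:

        # Sketch: two-stage reduction -- private partial sums per worker, one
        # final sum -- mirroring per-work-group accumulators in an OpenCL kernel.
        # The scoring function is a stand-in for the real random-access math.
        from multiprocessing import Pool
        import numpy as np

        N_FEATURES = 10_000

        def score_one_sample(sample):
            return np.random.rand(N_FEATURES)  # placeholder scoring

        def partial_scores(sample_chunk):
            # Map: accumulate into a private 10k-element array (small per worker,
            # unlike the full 10k x 30k matrix).
            local = np.zeros(N_FEATURES)
            for sample in sample_chunk:
                local += score_one_sample(sample)
            return local

        if __name__ == "__main__":
            n_samples, chunk = 30_000, 1_000
            chunks = [range(i, i + chunk) for i in range(0, n_samples, chunk)]
            with Pool() as pool:
                # Reduce: the partials are summed exactly once at the end.
                totals = sum(pool.map(partial_scores, chunks))
            print(totals[:5])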

  • SQL Injection Protection for dynamic queries

    - by jbugeja
    The typical controls against SQL injection flaws are to use bind variables (the cfqueryparam tag), validation of string data, and turning to stored procedures for the actual SQL layer. This is all fine and I agree; however, what if the site is a legacy one and it features a lot of dynamic queries? Then rewriting all the queries is a herculean task and requires an extensive period of regression and performance testing.

    I was thinking of using a dynamic SQL filter and calling it prior to calling cfquery for the actual execution. I found one filter on CFLib.org (http://www.cflib.org/udf/sqlSafe):

        <cfscript>
        /**
         * Cleans string of potential sql injection.
         *
         * @param string String to modify. (Required)
         * @return Returns a string.
         * @author Bryan Murphy ([email protected])
         * @version 1, May 26, 2005
         */
        function metaguardSQLSafe(string) {
            var sqlList = "-- ,'";
            var replacementList = "#chr(38)##chr(35)##chr(52)##chr(53)##chr(59)##chr(38)##chr(35)##chr(52)##chr(53)##chr(59)# , #chr(38)##chr(35)##chr(51)##chr(57)##chr(59)#";
            return trim(replaceList( string , sqlList , replacementList ));
        }
        </cfscript>

    This seems to be quite a simple filter, and I would like to know if there are ways to improve it or to come up with a better solution.
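
    Blacklist filters like the one above are easy to slip past (alternate encodings, keywords the list never anticipated), which is why bind variables remain the robust fix even for dynamic SQL. A minimal illustration of the difference in Python, with sqlite3 standing in for any DB-API driver and a made-up schema:

        # Sketch: bind variables instead of string filtering.
        # The schema here is hypothetical; sqlite3 stands in for any DB-API driver.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("create table users (id integer, name text)")
        conn.execute("insert into users values (?, ?)", (1, "o'brien"))

        attacker_input = "x' OR '1'='1"  # would slip past naive quote filters

        # The value travels out-of-band as a bind variable; it is never spliced
        # into the SQL text, so quotes and comment markers in it are inert.
        rows = conn.execute("select id from users where name = ?",
                            (attacker_input,)).fetchall()
        print(rows)  # [] -- the injected string matched no row literally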

  • Tips on creating user interfaces and optimizing the user experience

    - by Saif Bechan
    I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side, as people can buy certain items and services. In my opinion, a good blend of user interface, speed and security is essential for these types of websites.

    It is fairly easy to use AJAX and JavaScript nowadays to do almost everything, as there are a lot of libraries available, such as jQuery and others. But this can have some performance and incompatibility issues, which can lead to users just going to the next website.

    The overall look of the website is important too: where to place certain buttons; where to place certain types of articles such as FAQ and support; where and how to display error messages so that the user sees them without being bothered by them. An overall color scheme is important too.

    The basic question is: how do you create an interface that triggers a user to buy/use your services?

    I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important. When the colors on a website are irritating, you just want to click away. I have not found any articles that explain these concepts. Does anyone have any tips and/or resources where I can get some articles that guide you in making the correct choices for your website?

  • Async.Parallel or Array.Parallel.map?

    - by gurteen2
    Hello. I'm trying to implement a pattern I read on Don Syme's blog (http://blogs.msdn.com/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx), which suggests that there are opportunities for massive performance improvements from leveraging asynchronous I/O. I am currently trying to take a piece of code that "works" one way, using Array.Parallel.map, and see if I can somehow achieve the same result using Async.Parallel, but I really don't understand Async.Parallel and cannot get anything to work.

    I have a piece of code (simplified below to illustrate the point) that successfully retrieves an array of data for one cusip (a price series, for example):

        let getStockData cusip =
            let D = DataProvider()
            D.GetPriceSeries(cusip)

        let data = Array.Parallel.map (fun x -> getStockData x) stockCusips

    So this approach constructs an array of arrays, by making a connection over the internet to my data vendor for each stock (which could be as many as 3000), and returns an array of arrays (1 per stock, with a price series for each one). I admittedly don't understand what goes on underneath Array.Parallel.map, but am wondering if this is a scenario where there are resources wasted under the hood, and it actually could be faster using asynchronous I/O.

    So to test this out, I have attempted to write this function using asyncs, and I think the function below follows the pattern in Don Syme's article using the URLs, but it won't compile with "let!":

        let getStockDataAsync cusip =
            async {
                let D = DataProvider()
                let! arr = D.GetData(cusip)
                return arr
            }

    The error I get is:

        This expression was expected to have type Async<'a> but here has type obj

    It compiles fine with "let" instead of "let!", but I had thought the whole point was that you need the exclamation point in order for the command to run without blocking a thread.

    So the first question really is: what's wrong with my syntax above, in getStockDataAsync? And at a higher level, can anyone offer some additional insight about asynchronous I/O and whether the scenario I have presented would benefit from it, making it potentially much, much faster than Array.Parallel.map? Thanks so much.
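
    For what it's worth, let! can only bind a value that already has type Async<'T>, so if GetData is an ordinary synchronous call the compiler rightly complains; the method has to return an async computation (or be wrapped in one) before Async.Parallel buys anything. The fan-out pattern itself looks like this rough Python asyncio analog, where the fetch function and CUSIPs are invented:

        # Sketch: fan out I/O-bound requests without a thread per call, the
        # asyncio analog of Async.Parallel |> Async.RunSynchronously.
        # fetch_price_series is invented; real code would await an async client.
        import asyncio

        async def fetch_price_series(cusip):
            await asyncio.sleep(0.1)       # simulated vendor latency
            return [cusip, 1.0, 2.0, 3.0]  # fake price series

        async def main():
            cusips = ["037833100", "594918104", "68389X105"]  # hypothetical CUSIPs
            # All requests are in flight at once, awaited together.
            series = await asyncio.gather(*(fetch_price_series(c) for c in cusips))
            print(len(series), "series retrieved concurrently")

        asyncio.run(main())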

  • Why use short-circuit code?

    - by Tim Lytle
    Related questions: "Benefits of using short-circuit evaluation", "Why would a language NOT use short-circuit evaluation?", "Can someone explain this line of code please? (Logic & Assignment operators)"

    There are questions about the benefits of a language using short-circuit code, but I'm wondering what the benefits are for a programmer. Is it just that it can make code a little more concise? Or are there performance reasons?

    I'm not asking about situations where two entities need to be evaluated anyway. For example:

        if($user->auth() AND $model->valid()){
            $model->save();
        }

    To me the reasoning there is clear - since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose:

        if(is_string($userid) AND strlen($userid) > 10){
            //do something
        };

    Because it wouldn't be wise to call strlen() with a non-string value.

    What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page:

        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    This could have been:

        if(!defined('APPLICATION_PATH')){
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
        }

    Or even as a single statement:

        if(!defined('APPLICATION_PATH'))
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?
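
    For comparison, the same idiom reads naturally outside PHP too: the right-hand side runs only when the left-hand side is falsy, which is exactly what the explicit if spells out. A tiny Python sketch with an invented config dict standing in for defined()/define():

        # Sketch: a short-circuit operator doubling as control flow.
        # config is an invented dict standing in for defined()/define().
        config = {}

        # "defined(...) || define(...)", Python-style: the right side only
        # runs when the left side is falsy.
        config.get("APPLICATION_PATH") or config.update(APPLICATION_PATH="/app")

        # The explicit equivalent:
        if not config.get("APPLICATION_PATH"):
            config["APPLICATION_PATH"] = "/app"

        print(config["APPLICATION_PATH"])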

  • Manual drag-drop operations in Flex

    - by Yarin
    This is a two-part problem.

    A) I'm implementing several irregular drag-drop operations in Flex (e.g. a DataGrid ItemRenderer into a Tree). My preference was modifying DragManager operations to meet my needs, and in fact using DragManager allows me to do everything I need, but I'm having serious issues with performance. For example, dragging anything over a many-columned DataGrid, whether the drag was initiated with DragManager.doDrag or just using native ListBase drag-drop functionality, slows the drag movement to a crawl. This happens even if the DataGrid is disabled and not listening for any move/drag events. On the other hand, if the drag is initiated by calling .startDrag() on the Sprite, the drag is smooth and performs great over DataGrids and everything else. So part A would be: is there a reason why .startDrag() operations work so well, while drags initiated through DragManager.doDrag suffer so badly when over certain components?

    B) If indeed the solution is to handle drag-drops using .startDrag(), how would I go about determining what component the mouse is over when the drag is released? In my example, my dragged object is brought up to the top level of the display list, and so is being moved around in stage coordinates. mouseMove and mouseOver events don't fire on the components I'm dragging over because the mouse is constantly over the dragged component, so I would need some sort of stage.coordinate - visibleComponentAtThatCoordinate conversion. Any thoughts on this?

    Thanks a lot! -- Yarin

  • Ruby on Rails export to CSV - maintain MySQL select statement order

    - by zekial
    I'm exporting some data from MySQL to a CSV file using FasterCSV. I'd like the columns in the output CSV to be in the same order as the select statement in my query. Example:

        rows = Data.find(
          :all,
          :select => 'name, age, height, weight'
        )

        headers = rows[0].attributes.keys

        FasterCSV.generate do |csv|
          csv << headers
          rows.each do |r|
            csv << r.attributes.values
          end
        end

    CSV output:

        height,weight,name,age
        74,212,bob,23
        70,201,fred,24
        ...

    I want the CSV columns in the same order as my select statement. Obviously the attributes method is not going to work. Any ideas on the best way to ensure that the columns in my CSV file will be in the same order as the select statement? I've got a lot of data and performance is an issue. The select statement is not static. I realize I could loop through the column names within the rows.each loop, but it seems kinda dirty.
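
    The root cause is that an attributes hash makes no ordering promise, so the dependable fix is one explicit column list driving both the header row and every value row. A sketch of that idea using Python's csv module, with column names borrowed from the example above:

        # Sketch: one explicit column list drives header and row order,
        # instead of trusting a hash's iteration order.
        import csv, io, operator

        columns = ["name", "age", "height", "weight"]  # same order as the SELECT
        rows = [
            {"height": 74, "weight": 212, "name": "bob", "age": 23},
            {"height": 70, "weight": 201, "name": "fred", "age": 24},
        ]

        pick = operator.itemgetter(*columns)  # extracts values in column order

        out = io.StringIO()
        writer = csv.writer(out)
        writer.writerow(columns)
        for r in rows:
            writer.writerow(pick(r))
        print(out.getvalue())
        # name,age,height,weight
        # bob,23,74,212
        # fred,24,70,201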

  • WCF Service Throttling

    - by Mubashar Ahmad
    Dear All, I have a WCF service deployed in a console app with BasicHttpBinding and SSL enabled on the port using the NetSH command, and moreover the following attribute is set as well:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    And also I have set the throttling behavior as:

        <serviceThrottling maxConcurrentCalls="2147483647"
                           maxConcurrentSessions="2147483647"
                           maxConcurrentInstances="2147483647" />

    On the other hand, I have created a test client (for load testing) that initiates multiple clients simultaneously (multiple threads) and performs transactions on the server. Everything seems fine and working properly, but on the server the CPU utilization doesn't grow, so I added some logging to view the number of concurrent calls to the server and found that it never went over 6. I have reviewed the performance counter logging code more than twice and it seems fine to me.

    So I want to ask where the problem is in this situation. One more thing: I haven't specified any kind of ContextMode or ConcurrencyMode yet.

    After this post I noticed that whenever I start another instance of the test client, my concurrent server calls counter increases by 2: if I am running only 1 instance the maximum concurrent received calls will be 2, and if there are two instances the same value goes to 4, and so on. Is there a limit on the number of WCF calls from one process?

    Looking for help, Mubashar

    *Added on 17 March* Today I ran another test with one test client (with 50 concurrent users) on the same machine on which the server is running. This time I am getting exactly the result I wanted: maximum concurrent calls received by the server = 50. But I need to get the same behavior from other machines as well. Can anybody help me on this?

  • C++/CLI managed thread cleanup

    - by Guillermo Prandi
    Hi. I'm writing a managed C++/CLI library wrapper for the MySQL embedded server. The MySQL C library requires me to call mysql_thread_init() for every thread that will be using it, and mysql_thread_end() for each thread that exits after using it.

    Debugging any given VB.Net project I can see at least seven threads; I suppose my library will see only one thread if VB doesn't explicitly create worker threads itself (any confirmation on that?). However, I need clients of my library to be able to create worker threads if they need to, so my library must be thread-aware to some degree.

    The first option I could think of is to expose some "EnterThread()" and "LeaveThread()" methods in my class, so the client code will explicitly call them at the beginning of, and before exiting, their DoWork() method. This should work if (1) .Net doesn't "magically" create threads the user isn't aware of and (2) the user is careful enough to have the methods called in a try/finally structure of some sort. However, I don't like having the user handle things manually like that, and I wonder if I could give her a hand on that matter.

    In a pure Win32 C/C++ DLL I do have the DllMain DLL_THREAD_ATTACH and DLL_THREAD_DETACH pseudo-events, and I could use them for calling mysql_thread_init() and mysql_thread_end() as needed, but there seems to be no such thing in managed C++/CLI code. At the expense of some performance (not much, I think) I can use TLS for detecting the "usage from a new thread" case, but I can imagine no mechanism for the thread-exiting case.

    So, my questions are: (1) could .net create application threads without the user being aware of them? and (2) is there any mechanism I could use similar to DLL_THREAD_ATTACH / DLL_THREAD_DETACH from managed C++/CLI?

    Thanks in advance.
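
    The TLS idea generalizes: initialize lazily on first use per thread, and tie the cleanup to a scope (try/finally, or a disposable wrapper) rather than to thread death, since most managed runtimes expose no reliable per-thread exit hook. A loose Python sketch of that shape, with the mysql_* calls faked:

        # Sketch: per-thread lazy init plus scoped cleanup, standing in for
        # mysql_thread_init()/mysql_thread_end(). The mysql_* functions are fakes.
        import threading
        from contextlib import contextmanager

        _tls = threading.local()

        def mysql_thread_init(): print("init", threading.current_thread().name)
        def mysql_thread_end():  print("end ", threading.current_thread().name)

        @contextmanager
        def library_thread_scope():
            # Lazy init: the first use on this thread pays the init cost.
            if not getattr(_tls, "ready", False):
                mysql_thread_init()
                _tls.ready = True
            try:
                yield
            finally:
                # Scoped cleanup replaces the missing DLL_THREAD_DETACH hook.
                mysql_thread_end()
                _tls.ready = False

        def worker():
            with library_thread_scope():
                pass  # per-thread library calls go here

        threads = [threading.Thread(target=worker) for _ in range(3)]
        for t in threads: t.start()
        for t in threads: t.join()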

  • Still about SSD potential: write and read speed

    - by Macroideal
    Hi gurus, I have been working with SSDs (solid state disks) for several months. Problems and questions keep hitting me unexpectedly, since I am new to SSDs. These days I was testing the write/read speed of an SSD, which is what I always cared about; however, the results turned out not as good as I expected, or even worse.

    Three kinds of read/write were implemented in my test:

    1. Read and write directly from and into the SSD, opening the SSD as a whole device. In Windows: _open("\\:g", ***). This can be very tricky and hairy: you have to write data with a size that is a multiple of 512 bytes, at a disk position that is a multiple of 512 bytes. So if you want to write just one byte or 4 bytes, you have to write at least a whole sector at a time.
    2. Read and write data from and into files located on the SSD.
    3. Read and write data from and into files on a mechanical disk.

    I compared the practices above and found that the SSD performs worse than the mechanical disk. So I am wondering where I can get at the potential performance of the SSD, since the SSD is said to be a substitute for the mechanical disk in the future. Nevertheless, when I tested the SSD with a professional hard-disk tool, the SSD was about twice as fast as the mechanical disk. So, why?

    Thanks very much. If you know any SSD tips, follow me.

  • Multiple meshes with one geometry and different textures. Error

    - by user1821834
    I have a loop where I create multiple meshes with different geometry, because each mesh has its own texture:

        ....
        var geoCube = new THREE.CubeGeometry(voxelSize, voxelSize, voxelSize);
        var geometry = new THREE.Geometry();

        for( var i = 0; i < voxels.length; i++ ){
            var voxel = voxels[i];
            var object;

            color = voxel.color;
            // Returns the texture with a color and a text for each face of the geometry
            texture = almacen.textPlaneTexture(voxel.texto, color, voxelSize);
            material = new THREE.MeshBasicMaterial({ map: texture });
            object = new THREE.Mesh(geoCube, material);

            THREE.GeometryUtils.merge( geometry, object );
        }

        // Add the merged geometry to the scene
        mesh = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial() );
        mesh.geometry.computeFaceNormals();
        mesh.geometry.computeVertexNormals();
        mesh.geometry.computeTangents();
        scene.add( mesh );
        ....

    But now I get this error in the Three.js JavaScript code:

        Uncaught TypeError: Cannot read property 'map' of undefined

    in the function:

        function bufferGuessUVType ( material ) {
            ....
        }

    Update: Finally I have removed the merge solution and I can use a unique geometry for all the voxels. Although I think that if I merged the meshes the app would have better performance...

  • Retrieving varbinary output from a query in SQL Server into classic ASP

    - by user303526
    Hi, I'm trying to retrieve a varbinary output value from a query running on SQL Server 2005 into classic ASP. The ASP execution just fails when it comes to the part of the code that simply takes a varbinary output into a string, so I guess we've got to handle it some other way.

    Actually, I'm trying to set (sp_setapprole) and unset (sp_unsetapprole) application roles for a database connection. First I'd set the approle, then I'd run my required queries, and finally unset the approle. During unsetting is when I need the cookie (varbinary) value in my ASP code, so that I can create a query like 'exec sp_unsetapprole @cookie'. Well, at this stage I don't have the cookie (varbinary) value.

    The reason I'm doing this is that I used to get 'sp_setapprole was not invoked correctly' errors when trying to set app roles. I've disabled pooling by appending 'OLE DB Services = -2;Pooling=False' to my connection string. I know pooling helps performance-wise, but here I'm facing big problems.

    Please help me out with retrieving a varbinary value into a classic ASP file, or suggest a way to set and unset app roles. Solutions either way are appreciated.

    Thanks, Nandagopal

  • Understanding WordProcessingML tags and avoid unnecessary tags

    - by rithanyalaxmi
    Hi, I am using the MS Word API to generate .docx files which contain data fetched from a DB, to which I apply the respective styles, fonts, symbols, etc. If the data fetched from the DB is quite large, then there is a problem displaying it in the .docx file. I found that internally MS Word 2007 writes some content through tags which may not be needed to display the data. Hence I am figuring out which MS Word tags are actually necessary when converting to an .xml file, so that I can avoid the unnecessary tags and build only those needed to display the data. So I am planning to write my own .xml with just the MS Word tags that are needed, rather than generating the .xml from a .docx file.

    My queries are:

    1) Is it right that MS Word generates some tags which may not be needed during the conversion of .docx to document.xml, and that this is what makes it heavy? If so, what are those tags, so that I can avoid them when writing my own .xml file?

    2) Please send links for understanding MS Word tags and their advantages: which tags are needed and which are not?

    3) Is my approach of writing a new .xml similar to document.xml (from the .docx conversion) a worthy one to go forward with, so that I can build the .xml with only the tags I need and improve the performance of the data display?

    Please shed some light on this. Thanks in advance.

    Thanks, Rithu

  • How to best handle exception to repeating calendar events

    - by blcArmadillo
    I'm working on a project that will require me to implement a calendar. I'm trying to come up with a system that is very flexible: it can handle repeating events, exceptions to repeats, etc. I've looked at the schemas for applications like iCal, Lotus Notes, and Mozilla to get an idea of how to go about implementing such a system. Currently I'm having trouble deciding the best way to handle exceptions to repeating events. I've used databases quite a bit but don't have a ton of experience with really optimizing everything, so I'm not sure which of the two methods I'm considering would be optimal in terms of overall performance and the ability to query/search:

    1. Breaking the repeating event. So: changing the end date on the current row for the repeating event, inserting a new row with the exception, and adding another row continuing the old sequence.

    2. Simply adding an exception. So: adding a new row with some field that marks it as an override.

    So here is why I can't decide. Method one will result in a lot more rows, since each edit requires 2 extra rows as opposed to only one row with the second method. On the other hand, I think the query to find an event would be much simpler, and thus possibly faster(?), using the first method. The second method seems like it will require more calculating on the application server, since once you get the data you'll have to remove the intersection of the two rows. (A sketch of that substitution step follows below.)

    I know databases are often the bottleneck for websites, and while I'm sure a lot of you are thinking either is fine because my project will probably never get large enough for the difference in efficiency to really matter, I'd still like to implement the best solution. So which method would you pick, or would you do something completely different?

    As a side note, I'll be using MySQL and PHP. If there is another technology that you think would be better suited for this, especially in the database area, please mention it. Thanks for the advice.
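
    To make method two concrete, here is a minimal sketch (with an invented row shape) of the substitution the application server would perform: expand the recurrence rule, then replace or drop any occurrence that has an override row keyed to its date:

        # Sketch of method two: expand a weekly rule, then apply override rows.
        # The row shapes are invented, just to show the substitution step.
        from datetime import date, timedelta

        event = {"start": date(2010, 5, 3), "repeat_days": 7, "count": 6,
                 "title": "Team sync"}

        # Override rows keyed by the occurrence date they replace;
        # None means that occurrence was cancelled.
        overrides = {
            date(2010, 5, 17): {"date": date(2010, 5, 18), "title": "Team sync (moved)"},
            date(2010, 5, 31): None,
        }

        def occurrences(event, overrides):
            d = event["start"]
            for _ in range(event["count"]):
                if d in overrides:
                    if overrides[d] is not None:       # edited instance
                        yield overrides[d]["date"], overrides[d]["title"]
                    # else: cancelled instance, emit nothing
                else:
                    yield d, event["title"]            # plain repeat
                d += timedelta(days=event["repeat_days"])

        for when, title in occurrences(event, overrides):
            print(when, title)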

  • In Python epoll, can I avoid errno.EWOULDBLOCK / errno.EAGAIN?

    - by davyzhang
    I wrote an epoll wrapper in Python. It works fine, but recently I found that the performance is not ideal when sending large packets. I looked into the code and found there are actually a LOT of these errors:

        Traceback (most recent call last):
          File "/Users/dawn/Documents/workspace/work/dev/server/sandbox/single_point/tcp_epoll.py", line 231, in send_now
            num_bytes = self.sock.send(self.response)
        error: [Errno 35] Resource temporarily unavailable

    I previously silenced them, as the documentation suggests, so my sending function was done this way:

        def send_now(self):
            '''send message at once'''
            st = time.time()
            times = 0
            while self.response != '':
                try:
                    num_bytes = self.sock.send(self.response)
                    l.info('msg wrote %s %d : %r size %r', self.ip, self.port, self.response[:num_bytes], num_bytes)
                    self.response = self.response[num_bytes:]
                except socket.error, e:
                    if e[0] in (errno.EWOULDBLOCK, errno.EAGAIN):
                        # here I printed it, but I silence it in normal days
                        # print 'would block, again %r', tb.format_exc()
                        break
                    else:
                        l.warning('%r %r socket error %r', self.ip, self.port, tb.format_exc())
                        # must break or cause dead loop
                        break
                except:
                    # other exceptions
                    l.warning('%r %r msg write error %r', self.ip, self.port, tb.format_exc())
                    break
                times += 1
            et = time.time()

    I googled it, and it says this is caused by the network send buffer temporarily running out. So how can I manually and efficiently detect this condition, instead of letting it reach the exception phase? Because it costs too much time to raise and handle the exception.
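
    The usual epoll answer is to stop calling send() once EAGAIN appears and let the kernel announce when the socket is writable again: register the fd for EPOLLOUT, keep the unsent tail buffered, and resume on the event. A minimal sketch of that shape in Python 3 syntax (Linux-only, simplified to a single non-blocking socket):

        # Sketch: on EAGAIN, wait for EPOLLOUT instead of spinning on send().
        # Linux-only; sock is assumed non-blocking; simplified to one socket.
        import errno, select

        def send_all(sock, data):
            ep = select.epoll()
            ep.register(sock.fileno(), select.EPOLLOUT)
            try:
                while data:
                    try:
                        n = sock.send(data)
                        data = data[n:]
                    except OSError as e:
                        if e.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
                            raise
                        # Kernel send buffer is full: block here until the fd
                        # is writable again, instead of raising over and over.
                        ep.poll()
            finally:
                ep.unregister(sock.fileno())
                ep.close()

    Inside the wrapper's own event loop, the equivalent move is epoll.modify(fd, EPOLLIN | EPOLLOUT) when a send would block, then modifying back to EPOLLIN once the per-connection buffer drains.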

  • iPad receiving memory warning with low memory use

    - by Fer
    I have a UIWebView with an HTML document; the HTML contains several images and text, but just displaying it gives me a memory warning. So I did some tests.

    I tested the same HTML with the images at full size, and then with the same images reduced to 50% of their original size (for the reduced set, I went to Preview and scaled all images down by 50%). The surprising part is the 50% test: you can see that even with 16 images, the memory peak is 4.90MB. That's really surprising. Note that these values are not always the same; they change, but there's not a huge difference between runs. In the 50% case, with 8 and 16 images, a memory warning sometimes still appears even though memory use is low, but the performance improvement over the full-size images is noticeable.

    ("standing still" = memory after scrolling the whole article)

    Full size:

    - 1 image = [standing still 5MB] [rotating 5.6MB]
    - 2 images = [standing still 6.99MB] [rotating 7.7MB]
    - 3 images = [standing still 9.04MB] [rotating 10.9MB]
    - 4 images = [standing still 10.89MB] [rotating 13.20MB]
    - 8 images = [standing still 23.14MB] [rotating 25.20MB] (sometimes crashes)
    - 16 images = [standing still 27.14MB and the app crashes]

    50%:

    - 1 image = [standing still 3.2MB] [rotating 3.67MB]
    - 2 images = [standing still 3.2MB] [rotating 3.70MB]
    - 3 images = [standing still 3.3MB] [rotating 3.79MB]
    - 4 images = [standing still 3.3MB] [rotating 3.80MB]
    - 8 images = [standing still 4.29MB] [rotating 4.63MB] (sometimes crashes)
    - 16 images = [standing still 4.79MB] [rotating 4.90MB] (sometimes crashes)

    My question is: the app sometimes crashed with 16 small images. Why? The memory use was much lower. What is the limit on memory use? These numbers are helpful only if you also know the maximum, but the maximum seemed different between the two sets: 13.2MB works for large images and 3.8MB for small images, and anything higher sometimes crashes. That makes no sense.

  • WPF Exposing a calculated property for binding (as DependencyProperty)

    - by kubal5003
    Hello, I have a complex WPF control that for some reasons (i.e. performance) is not using dependency properties but simple C# properties (at least at the top level these are exposed as properties). The goal is to make it possible to bind to some of those top-level properties. I guess I should declare them as DPs (right? or is there some other way to achieve this?).

    I started reading on MSDN about DependencyProperties and DependencyObjects and found an example:

        public class MyStateControl : ButtonBase
        {
            public MyStateControl() : base() { }

            public Boolean State
            {
                get { return (Boolean)this.GetValue(StateProperty); }
                set { this.SetValue(StateProperty, value); }
            }

            public static readonly DependencyProperty StateProperty = DependencyProperty.Register(
                "State", typeof(Boolean), typeof(MyStateControl), new PropertyMetadata(false));
        }

    If I'm right, this code forces the property to be backed by a DependencyProperty, which restricts it to being a simple property with a store (from a functional point of view, not technically), instead of being able to calculate the property value each time the getter is called and to set other properties/fields each time the setter is called.

    What can I do about that? Is there any way I could make those two worlds meet at some point?
