Search Results

Search found 19055 results on 763 pages for 'high performance'.

Page 643/763 | < Previous Page | 639 640 641 642 643 644 645 646 647 648 649 650  | Next Page >

  • Simple iPhone drawing app with Quartz 2D

    - by Mr guy 4
    I am making a simple iPhone drawing program as a personal side project. I capture touch events in a subclassed UIView and render the actual drawing to a separate CGLayer. After each render I call [self setNeedsLayout], and in the drawRect: method I draw the CGLayer to the screen context. This all works great and performs decently for drawing rectangles. However, I just want a simple "freehand" mode like a lot of other iPhone applications have. The way I thought to do this was to create a CGMutablePath and simply:

      CGMutablePathRef path;

      - (void)touchBegan {
          path = CGPathCreateMutable();
      }

      - (void)touchMoved {
          CGPathMoveToPoint(path, NULL, x, y);
          CGPathAddLineToPoint(path, NULL, x, y);
      }

      - (void)drawRect:(CGContextRef)context {
          CGContextBeginPath(context);
          CGContextAddPath(context, path);
          CGContextStrokePath(context);
      }

    However, after drawing for more than a second, performance degrades miserably. I would just draw each line into the off-screen CGLayer, if it were not for the variable opacity! The less-than-100% opacity causes dots to be left on the screen where the lines connect. I have looked at CGContextSetBlendMode(), but alas I cannot find an answer. Can anyone point me in the right direction? Other iPhone apps are able to do this very efficiently.

    Read the article

  • MS SQL 2008, join or no join?

    - by Patrick
    Just a small question regarding joins. I have a table with around 30 fields, and I was thinking about making a second table to store 10 of those fields; I would then just join them in with the main data. The 10 fields that I was planning to store in the second table do not get queried directly; they are just settings for the data in the first table. Something like:

      Table 1: Id, Data1, Data2, Data3, etc.
      Table 2: Id (same id as table one), Settings1, Settings2, Settings3

    Is this a bad solution? Should I just use one table? How much performance impact does it have? All entries in Table 1 would also have an entry in Table 2. A small update is in order: most of the Data fields are of type varchar and two of them are of type text. How is indexing treated? My plan is to index two data fields, email (varchar 50) and author (varchar 20). And yes, all records in Table 1 will have a record in Table 2. Most of the settings fields are of the bit type, around 80%; the rest is a mix between int and varchar. The varchars can be null.
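
    Not part of the original question, but a minimal sketch of how the described one-to-one split would be consumed from C# with SqlClient, assuming hypothetical table and column names; with Table2.Id as both primary key and foreign key to Table1.Id, the join is a cheap clustered-index seek per row:

      using System;
      using System.Data.SqlClient;

      class JoinSketch
      {
          static void Main()
          {
              const string sql = @"
                  SELECT t1.Id, t1.Email, t1.Author, t2.Settings1, t2.Settings2
                  FROM   Table1 t1
                  JOIN   Table2 t2 ON t2.Id = t1.Id
                  WHERE  t1.Email = @email;";

              using (var conn = new SqlConnection("Server=.;Database=Example;Integrated Security=true"))
              using (var cmd = new SqlCommand(sql, conn))
              {
                  cmd.Parameters.AddWithValue("@email", "someone@example.com");
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                          Console.WriteLine("{0}: {1}", reader["Id"], reader["Settings1"]);
                  }
              }
          }
      }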

    Read the article

  • Lucene.NET 2.9 and BitArray/DocIdSet

    - by Paul Knopf
    I found a great example of grabbing facet counts on a base query. It stores the BitArray of the base query to improve performance each time a facet gets counted.

      var genreQuery = new TermQuery(new Term("genre", genre));
      var genreQueryFilter = new QueryFilter(genreQuery);
      BitArray genreBitArray = genreQueryFilter.Bits(searcher.GetIndexReader());
      Console.WriteLine("There are " + GetCardinality(genreBitArray) + " document with the genre " + genre);

      // Next perform a regular search and get its BitArray result
      Query searchQuery = MultiFieldQueryParser.Parse(term, new[] {"title", "description"}, new[] {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD}, new StandardAnalyzer());
      var searchQueryFilter = new QueryFilter(searchQuery);
      BitArray searchBitArray = searchQueryFilter.Bits(searcher.GetIndexReader());
      Console.WriteLine("There are " + GetCardinality(searchBitArray) + " document containing the term " + term);

    The only problem is that I am using a newer version of Lucene.NET (2.9), and Filter.Bits is obsolete. We are told to use DocIdSet instead (rather than BitArray). I cannot figure out how to do the bitArray.And(bitArray) equivalent with a DocIdSet. I looked in Reflector and found OpenBitSet, which has And operations. Not sure if that is the route to go; I'm just speculating. Thanks in advance!
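
    Not taken from the article, but a rough sketch of the direction Lucene.NET 2.9 seems to point to: build an OpenBitSet from each filter's DocIdSet via OpenBitSetDISI, then AND and count the bitsets much like BitArray. Member names are from memory of the 2.9 port and may need adjusting.

      using Lucene.Net.Index;
      using Lucene.Net.Search;
      using Lucene.Net.Util;

      public static class FacetBitSets
      {
          // Builds an AND-able bitset for any query, replacing the obsolete Filter.Bits.
          public static OpenBitSet ForQuery(Query query, IndexReader reader)
          {
              var filter = new QueryWrapperFilter(query);       // successor of QueryFilter
              DocIdSet docIdSet = filter.GetDocIdSet(reader);   // successor of Filter.Bits
              return new OpenBitSetDISI(docIdSet.Iterator(), reader.MaxDoc());
          }
      }

      // Usage sketch:
      //   OpenBitSet genreBits  = FacetBitSets.ForQuery(genreQuery, searcher.GetIndexReader());
      //   OpenBitSet searchBits = FacetBitSets.ForQuery(searchQuery, searcher.GetIndexReader());
      //   searchBits.And(genreBits);                   // in-place AND, like BitArray.And
      //   long facetCount = searchBits.Cardinality();  // documents matching both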

    Read the article

  • CSS background-images and Z-Index problem

    - by dscher
    Hope someone has an easy answer on this. I have a header image which is just a 75px-high gradient with a fade at the bottom. I have it set as the background image on my header, and I want to throw in a left sidebar on my page. There is a transparency on the header image, and when I add my sidebar I can't get it to sit behind the header. You can see in this screenshot: link text The green sidebar won't "sit" behind the header. I have the header z-index set to 99 and the sidebar to 1. I tried the reverse to make sure I didn't mix up my numbers, but that didn't work. Both are absolutely positioned. I'm attaching their CSS selectors in the hope someone has an easy answer; I'm sure I'm missing something basic:

      div.header {
          z-index: 99;
          background: transparent;
          background-image: url(images/header_bg.png);
          position: absolute;
          height: 85px;
          width: 100%;
          font-family: Helvetica, Verdana, Arial, sans-serif;
      }

      div#leftsidebar {
          height: 400px;
          border-right-style: dashed;
          border-right-width: 1px;
          z-index: -1;
          margin-top: 75px;
          width: 200px;
          position: absolute;
          background-color: #66ff66;
      }

    Thanks.

    Read the article

  • Why does Windows authentication / impersonation fail on an ASP.NET application with IIS 7.5 / Windows 7?

    - by velvet sheen
    Hi there; I'm troubleshooting why I cannot get past the login dialog on an ASP.NET site configured for Windows authentication and impersonation. Help me before I switch to OS X development and Objective-C. I have an ASP.NET 2.0 application and I'm trying to deploy it on Windows 7 with IIS 7.5. I've created a new site and bound it to localhost and a fully qualified domain name. The FQDN is in my hosts file and is redirected to 127.0.0.1. The site is also running in an app pool I created, with integrated pipeline mode, and the process model identity is set to ApplicationPoolIdentity. web.config includes the following:

      <trust level="High" />
      <authentication mode="Windows" />
      <authorization>
        <deny users="?"/>
      </authorization>
      <identity impersonate="true"/>

    The ACL on the physical directory for the site is, in desperation, set to Everyone / Full Control; the application pool virtual account (a Windows 7 thing) is also given Full Control on that directory. IIS authentication has ASP.NET impersonation enabled and Windows authentication enabled. When I connect to the site as localhost, it lets me past the login prompt and the application loads without incident. When I connect to the site via the FQDN set in the host-header bindings for this site/IP/port, I cannot get past the login prompt; clicking Cancel throws an HTTP 401.1 error page. Why? Thanks very much in advance.

    Read the article

  • Perf: Viewing thousands of images in Silverlight 3 on a 3D Wall

    - by Bob Holland
    I currently work on a very cool Silverlight app that displays photos on a 3D wall, like the Wall3D demo that ships with Blend 3. The problem I am currently facing is performance. The app works like this: as you scroll right or left, the 3D photo wall rotates; as each movement is made, the next column of photos is downloaded, decoded into a BitmapImage, and placed into a 3D wall node. As you can imagine, users (if you let them) will want to flip through the photos really quickly, but the problem is that I cannot display the photos quickly enough. In most cases it's a beautiful app that works really well, but when an album contains over 300 photos, you can imagine the sort of memory taken up by all the BitmapImage instances, and moving the slider can jump from photo 20 to photo 120 in a second. Of course we have algorithms in place to avoid downloading every photo in between, but I still can't work out a fast way to get the photos displayed. It may be the case that we need to throw away the 'great for show' 3D wall and go to a flat DeepZoom-style wall like the Playboy archive one that Vertigo did. Still not sure; let me know your thoughts. P.S. We are using Kit3D for all the 3D work; it uses PerspectiveCamera, Model3DGroup, ModelVisual3D, RotateTransform3D and TranslateTransform3D. Cheers, Bob.
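
    Not from the original post, but a minimal sketch of one mitigation for the memory pressure described: keep decoded BitmapImage objects only for a sliding window of columns around the one in view and drop the rest. The column indices and the loader delegate are hypothetical.

      using System;
      using System.Collections.Generic;
      using System.Windows.Media.Imaging;

      // Keeps decoded images only for columns near the one currently in view.
      class PhotoWindowCache
      {
          private readonly Dictionary<int, BitmapImage> _cache = new Dictionary<int, BitmapImage>();
          private readonly Func<int, BitmapImage> _load;   // hypothetical loader: column index -> decoded image
          private readonly int _radius;

          public PhotoWindowCache(Func<int, BitmapImage> load, int radius)
          {
              _load = load;
              _radius = radius;
          }

          public BitmapImage Get(int column)
          {
              BitmapImage image;
              if (!_cache.TryGetValue(column, out image))
              {
                  image = _load(column);
                  _cache[column] = image;
              }
              return image;
          }

          // Call after each scroll step; evicts everything outside the window.
          public void Trim(int currentColumn)
          {
              var stale = new List<int>();
              foreach (int key in _cache.Keys)
                  if (Math.Abs(key - currentColumn) > _radius)
                      stale.Add(key);
              foreach (int key in stale)
                  _cache.Remove(key);   // lets the GC reclaim the decoded bitmaps
          }
      }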

    Read the article

  • Complex Query on cassandra

    - by Sadiqur Rahman
    I heard about the Cassandra database engine a few days ago and have been searching for good documentation on it. From what I have studied, Cassandra is more scalable than other data engines. I also read about Amazon SimpleDB, but since SimpleDB has a 10GB/table limitation and Google Datastore is slower than Amazon SimpleDB, I prefer not to use them (Google Datastore, Amazon SimpleDB). So to make our site scale, especially for high write rates with massive data, I would like to use Cassandra as our data engine. But before starting with Cassandra I am confused about how to handle complex data using it. I am giving you the MySQL database structure below; please read it and give me a good suggestion.

      Users table: ID (primary), email (unique), FirstName, LastName
      Category table: ID (primary), Parent, Category
      Posts table: ID (primary), UID (index, foreign key to Users.ID), CID (index, foreign key to Category.ID), Title, Post (index), PubDate
      Comments table: ID (primary), UID (index, foreign key to Users.ID), PID (index, foreign key to Posts.ID), Comment
      User Group table: ID (primary), Name
      UserToGroup table (for the many-to-many relation only): UID (foreign key to Users.ID), GID (foreign key to Group.ID)

    Finally, for your information, I would like to use the SimpleCassie PHP class (http://code.google.com/p/simpletools-php/), so it will be very helpful if you can give me an example using SimpleCassie.

    Read the article

  • Why is OracleDataAdapter.Fill() Very Slow?

    - by John Gietzen
    I am using a pretty complex query to retrieve some data out of one of our billing databases. I'm running into an issue where the query seems to complete fairly quickly when executed in SQL Developer, but never seems to finish when using the OracleDataAdapter.Fill() method. I'm only trying to read about 1000 rows, and the query completes in SQL Developer in about 20 seconds. What could be causing such drastic differences in performance? I have tons of other queries that run quickly using the same function. Here is the code I'm using to execute the query:

      using Oracle.DataAccess.Client;
      ...
      public DataTable ExecuteExternalQuery(string connectionString, string providerName, string queryText)
      {
          DbConnection connection = null;
          DbCommand selectCommand = null;
          DbDataAdapter adapter = null;
          switch (providerName)
          {
              case "System.Data.OracleClient":
              case "Oracle.DataAccess.Client":
                  connection = new OracleConnection(connectionString);
                  selectCommand = connection.CreateCommand();
                  adapter = new OracleDataAdapter((OracleCommand)selectCommand);
                  break;
              ...
          }
          DataTable table = null;
          try
          {
              connection.Open();
              selectCommand.CommandText = queryText;
              selectCommand.CommandTimeout = 300000;
              selectCommand.CommandType = CommandType.Text;
              table = new DataTable("result");
              table.Locale = CultureInfo.CurrentCulture;
              adapter.Fill(table);
          }
          finally
          {
              adapter.Dispose();
              if (connection.State != ConnectionState.Closed)
              {
                  connection.Close();
              }
          }
          return table;
      }

    And here is the general outline of the SQL I'm using:

      with trouble_calls as
      (
          select work_order_number, account_number, date_entered
          from work_orders
          where date_entered >= sysdate - (15 + 31) -- Use the index to limit the number of rows scanned
            and wo_status not in ('Cancelled')
            and wo_type = 'Trouble Call'
      )
      select account_number, work_order_number, date_entered
      from trouble_calls wo
      where wo.date_entered >= sysdate - 15
        and (
              select count(*)
              from trouble_calls repeat
              where wo.account_number = repeat.account_number
                and wo.work_order_number <> repeat.work_order_number
                and wo.date_entered - repeat.date_entered between 0 and 30
            ) >= 1
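
    Not an answer from the article, but one setting frequently mentioned when ODP.NET's Fill() lags far behind SQL Developer is the command's FetchSize, i.e. how many bytes are pulled per round trip. A hedged sketch, assuming Oracle.DataAccess.Client exposes FetchSize on OracleCommand as I recall:

      using System.Data;
      using Oracle.DataAccess.Client;

      public static class OracleFetch
      {
          public static DataTable FillWithLargerFetch(string connectionString, string queryText)
          {
              using (var connection = new OracleConnection(connectionString))
              using (var command = new OracleCommand(queryText, connection))
              using (var adapter = new OracleDataAdapter(command))
              {
                  // The default fetch buffer is small; a larger one means far fewer
                  // round trips when the rows are wide. Value is in bytes, not rows.
                  command.FetchSize = 4 * 1024 * 1024;

                  var table = new DataTable("result");
                  adapter.Fill(table);   // Fill opens and closes the connection itself
                  return table;
              }
          }
      }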

    Read the article

  • Should I use Entity Framework instead of raw ADO.NET?

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system; they must coexist against the same database for years to come. At first I thought EF was the way to go, since it is the latest and greatest. After making a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects, as my DB has 800+ tables), I am seriously questioning the use of EF. In the current system I often need to do fine-grained performance tuning of the queries, which I can do because I have 100% control of the generated SQL. But it seems that in EF so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to Entities, but since my criteria are passed between client and server as a criteria object, it seems I could build queries just as easily without LINQ. I understand that WCF RIA Services has query projection (or something like that), where client-side LINQ is moved to the server before being translated into actual SQL, so in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?
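
    Not part of the original post, but a small sketch of what "building the query from a criteria object without LINQ" might look like with raw ADO.NET; the criteria type, table and column names are made up for illustration:

      using System.Collections.Generic;
      using System.Data;
      using System.Data.SqlClient;
      using System.Text;

      // Hypothetical criteria object passed from the CSLA client to the data portal.
      class CustomerCriteria
      {
          public string NameStartsWith { get; set; }
          public int? Region { get; set; }
      }

      static class CustomerDal
      {
          // Builds SQL by hand, so every predicate stays under our control.
          public static DataTable Fetch(SqlConnection connection, CustomerCriteria criteria)
          {
              var sql = new StringBuilder("SELECT Id, Name, Region FROM dbo.Customer WHERE 1 = 1");
              var parameters = new List<SqlParameter>();

              if (!string.IsNullOrEmpty(criteria.NameStartsWith))
              {
                  sql.Append(" AND Name LIKE @name + '%'");
                  parameters.Add(new SqlParameter("@name", criteria.NameStartsWith));
              }
              if (criteria.Region.HasValue)
              {
                  sql.Append(" AND Region = @region");
                  parameters.Add(new SqlParameter("@region", criteria.Region.Value));
              }

              using (var command = new SqlCommand(sql.ToString(), connection))
              using (var adapter = new SqlDataAdapter(command))
              {
                  command.Parameters.AddRange(parameters.ToArray());
                  var table = new DataTable("Customer");
                  adapter.Fill(table);
                  return table;
              }
          }
      }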

    Read the article

  • UIView animation VS core animation

    - by Tom Irving
    I'm trying to animate a view sliding into view and bouncing once it hits the side of the screen. A basic example of the slide I'm doing is as follows:

      // The view is added with a rect placing it off screen.
      [UIView beginAnimations:nil context:NULL];
      [UIView setAnimationBeginsFromCurrentState:YES];
      [UIView setAnimationDuration:0.07];
      [UIView setAnimationCurve:UIViewAnimationCurveLinear];
      [UIView setAnimationDelegate:self];
      [UIView setAnimationDidStopSelector:@selector(animationDidStop:finished:context:)];
      [theView setFrame:CGRectMake(-5, 0, theView.frame.size.width, theView.frame.size.height)];
      [UIView commitAnimations];

    More animations are then run from the did-stop selector to create the bounce effect. The problem is that when more than one view is being animated, the bounce becomes jerky and, well, doesn't bounce anymore. Before I start reading up on how to do this in Core Animation (I understand it's a little more difficult), I'd like to know if there is actually an advantage to using Core Animation rather than UIView animations. If not, is there something I can do to improve performance?

    Read the article

  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using an UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU-bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays. We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay with the amount of data being returned in UpdatePanel responses. Within the UpdatePanel control there is a table that contains a number of ListBox controls. The real problem is that EVERY OTHER TIME, the response (from making ListBox selections) alternates between 30K and 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 goes CPU-bound while it is presumably processing this data. So, irrespective of whether or not we should be using an UpdatePanel at all, we'd like to figure out why every other async postback response is more than 10 times larger than the previous one. When the response is in the 30K range, IE updates the display within a second; on the alternate requests, the response time is well over 10 times longer. Any idea why this alternating behavior happens with an UpdatePanel?
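
    Not from the original post, but one thing that often inflates alternating async responses is ViewState accumulating on list controls inside the panel. A hedged code-behind sketch of keeping the lists out of ViewState and re-binding them on every request so the payload stays a constant size; control and method names are hypothetical:

      using System;
      using System.Web.UI;
      using System.Web.UI.WebControls;

      public partial class ReportPage : Page
      {
          protected ListBox genreListBox;   // would normally come from the .aspx designer file

          protected void Page_Load(object sender, EventArgs e)
          {
              // Keep the list out of ViewState and rebuild it on every request instead.
              genreListBox.EnableViewState = false;
              genreListBox.DataSource = LoadGenres();
              genreListBox.DataTextField = "Name";
              genreListBox.DataValueField = "Id";
              genreListBox.DataBind();
          }

          private object LoadGenres()
          {
              // Placeholder for the real data access.
              return new[] { new { Id = 1, Name = "Rock" }, new { Id = 2, Name = "Jazz" } };
          }
      }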

    Read the article

  • Chart controls for ASP.NET

    - by tolism7
    I am looking for people's opinions and experience using chart controls within an ASP.NET application (Web Forms or MVC) primarily, but also in any kind of project. I am currently doing my research and I have a pretty big list of controls to evaluate. My list includes (in no particular order):

      ASP.NET controls:
      DevExpress XtraCharts (http://demos.devexpress.com/XtraChartsDemos/)
      Dundas Chart for .NET (http://www.dundas.com/)
      Telerik RadChart for ASP.NET AJAX (http://www.telerik.com/)
      ComponentArt Charting & Visualization for ASP.NET (http://www.componentart.com/)
      Infragistics WebChart (http://www.infragistics.com/dotnet/netadvantage/aspnet.aspx#Overview)
      .net Charting (http://www.dotnetcharting.com/)
      Chart Control for .Net Framework (Microsoft's) (http://weblogs.asp.net/scottgu/archive/2008/11/24/new-asp-net-charting-control-lt-asp-chart-runat-quot-server-quot-gt.aspx)

      Flash controls:
      FusionCharts v3 (http://www.fusioncharts.com/)
      XML/SWF Charts (http://www.maani.us/xml%5Fcharts/index.php)
      amCharts (http://www.amcharts.com/)
      AnyChart (http://www.anychart.com/home/)

      Javascript:
      Flot (http://code.google.com/p/flot/)
      Flotr (http://solutoire.com/flotr/)
      jqPlot (http://www.jqplot.com/index.php)

    (If I missed any that are worth comparing against the above, please let me know.) What I am looking for is opinions on using any of the above, so I can form my own and help others do the same based on what I read here. I do not care which one is better; what I care about is why someone likes one of the above and what distinct advantage these controls offer. I am interested in developers' opinions, and I would like to find out which things are difficult to do with any of the above controls and which things are easy to achieve. AJAX compatibility (built into the controls but also manual), ASP.NET compatibility, input capabilities, data-binding options, performance, and how much code one needs to write in order to create a chart are some of the things I would want to read about. I have already done my research on StackOverflow for relevant questions, but there is nothing at the level of detail I would want in order to make a responsible decision.

    Read the article

  • Using the Salesforce PHP API to generate a User Profile Report

    - by Phill Pafford
    Hi all, I'm looking to do a security audit of all user permissions. I think I can use the Salesforce PHP Toolkit 11 API to generate the report, but I'm new to Salesforce and a little confused about where to start. In Salesforce Setup, under Administration Setup -> Manage Users -> Profiles -> Profile Names, if you click on each profile name you can see the permission set and the actions the user is allowed to perform. I want a way to generate an Excel report for all users with all the permissions for each user. Example:

      User Name | Can view Case | Can edit Case | Can delete Case | etc...
      phill     | yes           | no            | no              | x...

    and so on. I see that in Salesforce I can run a high-level report on the Profile, but I need to drill down for each user. Has anyone done this type of reporting before? Any help on this would be great. Thanks in advance, --Phill

    Read the article

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based, with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers providing EJB services. Some of the portlets use our own home-grown MVC pattern and some are written in JSF. Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJBs on the back-end servers. It was written on the NetBeans Platform and uses the WebSphere application client library to establish communication with the EJBs. The really painful bit was getting the client to use secure JAAS/SSL communications. But, after that was resolved, we've found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to:

      Enormous performance advantage (CORBA vs. HTTP, cut out the Portal Server middle man)
      Development is simplified and faster due to use of NetBeans' visual designer and Swing's generally robust architecture
      The debug cycle is shortened by not having to deploy your client application to a test server
      No mishmash of technologies as with web-based development (Struts, JSF, jQuery, HTML, JSTL, etc.)

    After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, you'd be crazy not to consider NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

    Read the article

  • Force float left with no line break no matter what

    - by Tesserex
    I'm guessing this isn't possible, but here goes. I have two tables, and I'm trying to get them to sit side-by-side so that they look like one table. The reason for this, instead of using one larger table, is that the data in the second table needs to be handled on a column basis, not row basis, for performance reasons like caching and AJAX-fetching data. So rather than have to reload the whole table for a single column, I decided to break the column out into a separate table, but have it visually seem like a single table. I can't find a way to forcibly put the second table next to the first. I can float them, but when the first table is too wide, the second one breaks to the next line. Here's the kicker: the width of the first table is dynamic. So I can't just set a huge width to their container. Well, I could set a huge width, like 1000%, but then I have a huge ugly horizontal scroll bar. So is there any way to tell the second table "Stay on that same line, no matter what! And line up right next to the previous element please!"

    Read the article

  • SQL Server 2008, join or no join?

    - by Patrick
    Just a small question regarding joins. I have a table with around 30 fields, and I was thinking about making a second table to store 10 of those fields; I would then just join them in with the main data. The 10 fields that I was planning to store in the second table do not get queried directly; they are just settings for the data in the first table. Something like:

      Table 1: Id, Data1, Data2, Data3, etc.
      Table 2: Id (same id as table one), Settings1, Settings2, Settings3

    Is this a bad solution? Should I just use one table? How much performance impact does it have? All entries in Table 1 would also have an entry in Table 2. A small update is in order: most of the Data fields are of type varchar and two of them are of type text. How is indexing treated? My plan is to index two data fields, email (varchar 50) and author (varchar 20). And yes, all records in Table 1 will have a record in Table 2. Most of the settings fields are of the bit type, around 80%; the rest is a mix between int and varchar. The varchars can be null.

    Read the article

  • Threading 101: What is a Dispatcher?

    - by Water Cooler v2
    Once upon a time I knew this stuff by heart. Over time my understanding has diluted, and I mean to refresh it. As I recall, any so-called single-threaded application has two threads: (a) the primary thread, which has a pointer to the main or DllMain entry point; and (b) for applications that have some UI, a UI thread, a.k.a. the secondary thread, on which the WndProc runs, i.e. the thread that executes the WndProc that receives the messages Windows posts to it. In short, the thread that executes the Windows message loop. For UI apps, the primary thread is in a blocking state waiting for messages from Windows. When it receives them, it queues them up and dispatches them to the message loop (WndProc), and the UI thread gets kick-started. As per my understanding, the primary thread, which is in a blocking state, is this:

      C++:

        while (GetMessage(&msg, NULL, 0, 0))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

      C# or VB.NET WinForms apps:

        Application.Run(new System.Windows.Forms.Form());

    Is this what they call the Dispatcher? My questions are: (a) Is my above understanding correct? (b) What in the name of hell is the Dispatcher? (c) Point me to a resource where I can get a better understanding of threads from a Windows/Win32 perspective and then tie it up with high-level languages like C#. Petzold is sparing in his discussion of the subject in his epic work. Although I believe I have it somewhat right, a confirmation will be relieving.
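
    Not from the original post, but a small WinForms sketch of the idea the question is circling: the UI thread runs the message loop via Application.Run, and other threads hand work to it (much the way WPF's Dispatcher.BeginInvoke does) via Control.BeginInvoke:

      using System;
      using System.Threading;
      using System.Windows.Forms;

      static class DispatcherSketch
      {
          [STAThread]
          static void Main()
          {
              var form = new Form { Text = "Message loop demo" };
              var label = new Label { Dock = DockStyle.Fill };
              form.Controls.Add(label);

              // A worker thread cannot touch the label directly; it posts a delegate
              // to the UI thread's message queue instead, and the message loop runs it.
              new Thread(() =>
              {
                  Thread.Sleep(1000);
                  form.BeginInvoke(new Action(() => label.Text = "Updated from worker thread"));
              }) { IsBackground = true }.Start();

              Application.Run(form);   // this call *is* the message loop of the UI thread
          }
      }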

    Read the article

  • Best way of storing an "array of records" at design-time

    - by smartins
    I have a set of data that I need to store at design time to construct the contents of a group of components at run time. Something like this:

      type
        TVulnerabilityData = record
          Vulnerability: TVulnerability;
          Name: string;
          Description: string;
          ErrorMessage: string;
        end;

    What's the best way of storing this data at design time for later retrieval at run time? I'll have about 20 records for which I know all the contents of each "record", but I'm stuck on what's the best way of storing the data. The only semi-elegant idea I've come up with is to "construct" each record in the unit's initialization section, like this:

      var
        VulnerabilityData: array[Low(TVulnerability)..High(TVulnerability)] of TVulnerabilityData;

      ....

      initialization
        VulnerabilityData[0].Vulnerability := vVulnerability1;
        VulnerabilityData[0].Name := 'Name of Vulnerability1';
        VulnerabilityData[0].Description := 'Description of Vulnerability1';
        VulnerabilityData[0].ErrorMessage := 'Error Message of Vulnerability1';
        VulnerabilityData[1]......
        .....
        VulnerabilityData[20]......

    Is there a better and/or more elegant solution than this? Thanks for reading.

    Read the article

  • Code review: is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with:

      Enforce FxCop rules (we are a Microsoft shop)
      Check for performance (any tools?) and security (thinking about using OWASP Code Crawler) and thread safety
      Adhere to naming conventions
      The code should cover edge cases and boundary conditions
      Should handle exceptions correctly (do not swallow exceptions)
      Check if the functionality is duplicated elsewhere
      Method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects / avoid temporal coupling)
      Do not pass/return nulls in methods
      Avoid dead code
      Document public and protected methods/properties/variables

    What other things do you generally look for? I am trying to see if we can quantify the review process (so that it would produce identical output when reviewed by different persons). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would differ from one reviewer to another)? The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, +2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max on a review (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, etc., but you get the general idea: the reviewer should not spend days reviewing someone's code). What other objective/quantifiable systems do you follow to identify good/bad code written by developers? Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin.

    Read the article

  • SQL Injection Protection for dynamic queries

    - by jbugeja
    The typical controls against SQL injection flaws are to use bind variables (the cfqueryparam tag), validation of string data, and turning to stored procedures for the actual SQL layer. This is all fine and I agree; however, what if the site is a legacy one that features a lot of dynamic queries? Then rewriting all the queries is a herculean task, and it requires an extensive period of regression and performance testing. I was thinking of using a dynamic SQL filter and calling it prior to calling cfquery for the actual execution. I found one filter on CFLib.org (http://www.cflib.org/udf/sqlSafe):

      <cfscript>
      /**
       * Cleans string of potential sql injection.
       *
       * @param string String to modify. (Required)
       * @return Returns a string.
       * @author Bryan Murphy ([email protected])
       * @version 1, May 26, 2005
       */
      function metaguardSQLSafe(string) {
          var sqlList = "-- ,'";
          var replacementList = "#chr(38)##chr(35)##chr(52)##chr(53)##chr(59)##chr(38)##chr(35)##chr(52)##chr(53)##chr(59)# , #chr(38)##chr(35)##chr(51)##chr(57)##chr(59)#";
          return trim(replaceList( string , sqlList , replacementList ));
      }
      </cfscript>

    This seems to be quite a simple filter, and I would like to know if there are ways to improve it or to come up with a better solution.

    Read the article

  • Solve a classic map-reduce problem with OpenCL?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) with OpenCL, namely the AMD implementation, but the result bothers me. Let me describe the problem first. There are two types of data that flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (map) and then sum up the overall score for every feature (reduce). There are around 10k features and 30k samples. I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation consists of random memory access (picking some of the 9000+ dimensions and doing plus/minus calculations). Since I cannot coalesce memory access, it is costly. Then I tried to decompose the problem by samples. The problem is that, to sum up the overall score, all threads are competing for a few score variables. They keep overwriting the score, which turns out to be incorrect. (I cannot accumulate individual scores first and sum them up later, because that would require 10k * 30k * 4 bytes.) The first method I tried gives me the same performance as an i7 860 CPU with 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to ray tracing (where you carry out calculations of millions of rays against millions of triangles). Any ideas?
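
    This is outside the original post, but the contention described (all threads fighting over a few score variables) is usually solved with per-thread or per-work-group partial sums that are merged at the end; each private accumulator is only featureCount wide, not featureCount * sampleCount. A small C# sketch of the same structure on the CPU, with made-up score logic, just to illustrate the pattern:

      using System;
      using System.Threading.Tasks;

      static class PartialSumSketch
      {
          static void Main()
          {
              const int featureCount = 10000;
              const int sampleCount  = 30000;
              double[] featureScores = new double[featureCount];

              // Placeholder scoring function standing in for the real feature/sample math.
              Func<int, int, double> score = (feature, sample) => (feature ^ sample) % 7;

              Parallel.For(0, sampleCount,
                  // localInit: each worker gets its own private accumulator array.
                  () => new double[featureCount],
                  (sample, state, local) =>
                  {
                      for (int f = 0; f < featureCount; f++)
                          local[f] += score(f, sample);       // no contention here
                      return local;
                  },
                  // localFinally: merge each private array into the shared result once.
                  local =>
                  {
                      lock (featureScores)
                          for (int f = 0; f < featureCount; f++)
                              featureScores[f] += local[f];
                  });

              Console.WriteLine("Score of feature 0: " + featureScores[0]);
          }
      }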

    Read the article

  • Tips on creating user interfaces and optimizing the user experience

    - by Saif Bechan
    I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side, as people can buy certain items and services. In my opinion, a good blend of user interface, speed and security is essential for these types of websites. It is fairly easy to use Ajax and JavaScript nowadays to do almost everything, as there are a lot of libraries available, such as jQuery and others, but this can bring performance and incompatibility issues, and those can lead to users just going to the next website. The overall look of the website is important too: where to place certain buttons; where to place certain types of articles such as FAQ and support; where and how to display error messages so that the user sees them without being bothered by them; and an overall color scheme. The basic question is: how do you create an interface that encourages a user to buy/use your services? I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important; when the colors on a website are irritating, you just want to click away. I have not found any articles that explain these concepts. Does anyone have any tips and/or resources where I can get some articles that guide you in making the correct choices for your website?

    Read the article

  • Understanding the Silverlight Dispatcher

    - by Matt
    I had an invalid cross-thread access issue, but with a little research I managed to fix it by using the Dispatcher. Now in my app I have objects with lazy loading: I make an async call using WCF, and as usual I use the Dispatcher to update my object's DataContext. However, that didn't work for this scenario. I did, however, find a solution here. Here's what I don't understand. In my UserControl I have code that calls a Toggle method on my object. The call to this method is within a Dispatcher, like so:

      Dispatcher.BeginInvoke( () => _CurrentPin.ToggleInfoPanel() );

    As I mentioned before, this was not enough to satisfy Silverlight. I had to make another Dispatcher call within my object. My object is NOT a UIElement, but a simple class that handles all its own loading/saving. So the problem was fixed by calling

      Deployment.Current.Dispatcher.BeginInvoke( () => dataContext.Detail = detail );

    within my class. Why did I have to call the Dispatcher twice to achieve this? Shouldn't a high-level call be enough? Is there a difference between Deployment.Current.Dispatcher and the Dispatcher in a UIElement?
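
    Not part of the original question, but a common pattern that sidesteps the "which Dispatcher do I have here" problem is a small helper that checks whether marshalling is needed at all. A sketch assuming Silverlight's Deployment.Current.Dispatcher with the CheckAccess/BeginInvoke members as I recall them:

      using System;
      using System.Windows;

      // Small helper so non-UI classes never have to care which Dispatcher they were handed.
      public static class UiThread
      {
          public static void Run(Action action)
          {
              var dispatcher = Deployment.Current.Dispatcher;
              if (dispatcher.CheckAccess())
                  action();                       // already on the UI thread: run inline
              else
                  dispatcher.BeginInvoke(action); // otherwise queue it to the UI thread
          }
      }

      // Usage from a WCF completion callback in a plain (non-UIElement) class:
      //   UiThread.Run(() => dataContext.Detail = detail);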

    Read the article

  • Why use short-circuit code?

    - by Tim Lytle
    Related questions: Benefits of using short-circuit evaluation; Why would a language NOT use short-circuit evaluation?; Can someone explain this line of code please? (Logic & Assignment operators). There are questions about the benefits of a language using short-circuit code, but I'm wondering what the benefits are for a programmer. Is it just that it can make code a little more concise? Or are there performance reasons? I'm not asking about situations where two entities need to be evaluated anyway, for example:

      if($user->auth() AND $model->valid()){
          $model->save();
      }

    To me the reasoning there is clear: since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose:

      if(is_string($userid) AND strlen($userid) > 10){
          //do something
      };

    because it wouldn't be wise to call strlen() with a non-string value. What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page:

      defined('APPLICATION_PATH')
          || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    This could have been:

      if(!defined('APPLICATION_PATH')){
          define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
      }

    Or even a single statement:

      if(!defined('APPLICATION_PATH'))
          define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?

    Read the article
