Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.

Page 457/530

  • Elmah for non-HTTP protocol applications OR Elmah without HttpContext

    - by Josh
    We are working on a 3-tier application, and we've been allowed to use the latest and greatest (MVC2, IIS7.5, WCF, SQL2k8, etc). The application tier is exposed to the various web applications by WCF services. Since we control both the service and client side, we've decided to use net.tcp bindings for their performance advantage over HTTP. We would like to use ELMAH for the error logging, both on the web apps and services. Here's my question. There's lots of information about using ELMAH with WCF, but it is all for HTTP bindings. Does anyone know if/how you can use ELMAH with WCF services exposing non-HTTP endpoints? My guess is no, because ELMAH wants the HttpContext, which requires the AspNetCompatibilityEnabled flag to be true in the web.config. From MSDN: IIS 7.0 and WAS allows WCF services to communicate over protocols other than HTTP. However, WCF services running in applications that have enabled ASP.NET compatibility mode are not permitted to expose non-HTTP endpoints. Such a configuration generates an activation exception when the service receives its first message. If it is true that you cannot use ELMAH with WCF services having non-HTTP endpoints, then the follow-up question is: Can we use ELMAH in such a way that doesn't need HttpContext? Or more generally (so as not to commit the thin metal ruler error), is there ANY way to use ELMAH with WCF services having non-HTTP endpoints?
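    One direction worth exploring, if ELMAH itself stays in the picture: ELMAH's ErrorSignal helper does need an HttpContext, but the underlying ErrorLog can be resolved and written to directly, and a WCF IErrorHandler gives a service-side hook that works regardless of binding. The sketch below assumes Elmah.ErrorLog.GetDefault(null) resolves the log configured in the elmah config section outside of ASP.NET, which is worth verifying against your ELMAH version; the class name is otherwise hypothetical.

    ```csharp
    // Sketch: route WCF service-side exceptions into ELMAH without an HttpContext.
    // Assumes ErrorLog.GetDefault(null) falls back to the configured error log.
    using System;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;
    using Elmah;

    public class ElmahErrorHandler : IErrorHandler
    {
        public bool HandleError(Exception error)
        {
            // No HttpContext involved: talk to the configured log directly.
            ErrorLog.GetDefault(null).Log(new Error(error));
            return false; // let WCF's normal fault processing continue
        }

        public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
        {
            // logging only; no custom fault translation
        }
    }
    ```

    The handler still has to be attached to the dispatch runtime through a custom IServiceBehavior (or a behavior extension in config), and none of ELMAH's HTTP modules (mail, filtering, elmah.axd) run in this path.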

    Read the article

  • Where are the factory_girl records?

    - by gmile
    I'm trying to perform an integration test via Watir and RSpec. So, I created a test file within /integration and wrote a test which adds a test user into the database via factory_girl. The problem is: I can't actually perform a login with my test user. The test I wrote looks as follows: ... before(:each) do @user = Factory(:user) @browser = FireWatir::Firefox.new end it "should login" do @browser.text_field(:id, "username").set(@user.username) @browser.text_field(:id, "password").set(@user.password) @browser.button(:id, "get_in").click end ... As I run the test and watch it play out in the browser, it always fires a "Username is not valid" error. I started investigating and did a small trick. First of all, I began to doubt whether the factory actually creates the user in the DB, so right after the factory call I put in some puts User.find output, only to discover that the user is actually in the DB. OK, but since the user still couldn't log in, I decided to check with my own eyes that he's present in the DB. I added a sleep right after the factory call and went to see what was in the DB at that moment. I was crushed to see that the user was actually missing there! How come? Still, when I output the user from within the code, he is actually fetched from somewhere. So where do the records made by factory_girl at runtime live? Is it the test or the dev DB? I don't get it. I've checked ten times that I'm running Mongrel in test mode (does it matter? I think it does, as I'm trying to run an integration test) and that my database.yml holds the correct connection-specific data. I'm using Authlogic, if that gives any clue (no, putting activate_authlogic doesn't work here).

    Read the article

  • cookieless sessions with ajax

    - by thezver
    OK, I know you're sick of this subject. Me too :( I've been developing a fairly big application with PHP and the Kohana framework for the past 2 years, somewhat successfully using my framework's authentication mechanism. But over this time, as the app has grown, many state-preservation concerns have arisen. The main problems are that cookie-driven sessions: can't be used for web-service access (at least it's really not nice to do so); are in many cases problematic with mobile access; don't allow multiple simultaneous apps in the same browser (can be resolved by hard trickery, but still); require a lot of configuration and mess to work 100% right, and that's before the browser issues (disabled cookies, old browser bugs and vulnerabilities etc); plus many other session flaws stated in this old thread: http://lists.nyphp.org/pipermail/talk/2006-December/020358.html After really long research, and without any good library or on-hand solution to fit my needs, I came up with a custom solution to the majority of those problems. Basically, it's about emulating sessions with ajax calls, with additional security/performance measures: state is preserved by interchanging a SID (+hash) with the client on ajax calls; state data is saved in memcache (or equivalent), indexed by SID; security is achieved by appending an unpredictable hash to the SID, regenerating the hash on each request and validating it, and validating a fingerprint of the client on each request (referrer, OS, browser etc). (*) Condition: ajax calls are not simultaneous, to prevent a race condition on the session token (hopefully Ext-Direct solves that for me). At first glance this should be no less secure than the equivalent cookie-driven implementation, and at the same time it's simple, maintainable, and resolves all the cookie flaws. But I'm really concerned, because I often hear the rule "don't try to implement custom security solutions". I would really appreciate any serious feedback about my method, and any alternatives. Also, any tip about how to preserve state on page refresh without cookies would be great :) but that's a small technical problem. Sorry if I overlooked some similar post; there are billions of them about sessions. Big thanks in advance (and for reading until here!).

    Read the article

  • Contacts-app-like scrolling list on Android

    - by Alxandr
    I'm writing my first android app (I'm a noob at android, but decent at java). The first screen of the app consists of a huge list (about 1.5K items) of Manga-objects. The code I use is as follows: main.xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout android:id="@+id/LinearLayout01" android:layout_width="fill_parent" android:layout_height="fill_parent" xmlns:android="http://schemas.android.com/apk/res/android"> <ListView android:layout_height="fill_parent" android:layout_width="fill_parent" android:id="@+id/android:list"></ListView> <TextView android:layout_height="fill_parent" android:layout_width="fill_parent" android:text="@string/list_no_items" android:id="@+id/android:empty"></TextView> </LinearLayout> row.xml: <LinearLayout android:id="@+id/LinearLayout01" android:layout_width="fill_parent" android:layout_height="?android:attr/listPreferredItemHeight" android:padding="6dip" xmlns:android="http://schemas.android.com/apk/res/android"> <ImageView android:id="@+id/icon" android:layout_width="wrap_content" android:layout_height="fill_parent" android:layout_marginRight="6dip" android:src="@drawable/icon" /> <LinearLayout android:orientation="vertical" android:layout_width="0dip" android:layout_weight="1" android:layout_height="fill_parent"> <TextView android:id="@+id/toptext" android:layout_width="fill_parent" android:layout_height="0dip" android:layout_weight="1" android:gravity="center_vertical" /> <TextView android:layout_width="fill_parent" android:layout_height="0dip" android:layout_weight="1" android:id="@+id/bottomtext" android:singleLine="true" android:ellipsize="marquee" /> </LinearLayout> </LinearLayout> Then I have an adapter which basically takes a Manga-object and puts its name in the row's toptext and its latest update date in the bottomtext. However (though this might be caused by running in the emulator), the result is really slow. Scrolling the list takes forever and you can only scroll small pieces at a time. How can I make the list work like the one in the contacts app? So that when you start scrolling, a handle pops out at the right side of the screen, and when you drag it, letters show up indicating how far you've scrolled (the list is sorted alphabetically). And also, is there a way I could improve the performance of the list?

    Read the article

  • Simple iPhone drawing app with Quartz 2D

    - by Mr guy 4
    I am making a simple iPhone drawing program as a personal side project. I capture touch events in a subclassed UIView and render the actual drawing to a separate CGLayer. After each render, I call [self setNeedsLayout] and in the drawRect: method I draw the CGLayer into the screen context. This all works great and performs decently for drawing rectangles. However, I just want a simple "freehand" mode like a lot of other iPhone applications have. The way I thought to do this was to create a CGMutablePath and simply: CGMutablePathRef path; -(void)touchBegan { path = CGPathCreateMutable(); } -(void)touchMoved { CGPathMoveToPoint(path,NULL,x,y); CGPathAddLineToPoint(path,NULL,x,y); } -(void)drawRect:(CGContextRef)context { CGContextBeginPath(context); CGContextAddPath(context,path); CGContextStrokePath(context); } However, after drawing for more than 1 second, performance degrades miserably. I would just draw each line into the off-screen CGLayer, if it were not for variable opacity! The less-than-100% opacity causes dots to be left on the screen connecting the lines. I have looked at CGContextSetBlendMode() but alas I cannot find an answer. Can anyone point me in the right direction? Other iPhone apps are able to do this with very good efficiency.

    Read the article

  • MS SQL 2008, join or no join?

    - by Patrick
    Just a small question regarding joins. I have a table with around 30 fields and I was thinking about making a second table to store 10 of those fields. Then I would just join them in with the main data. The 10 fields that I was planning to store in a second table do not get queried directly; they're just settings for the data in the first table. Something like: Table 1: Id Data1 Data2 Data3 etc ... Table 2: Id (same id as table one) Settings1 Settings2 Settings3 Is this a bad solution? Should I just use 1 table? How much performance impact does it have? All entries in table 1 would then also have an entry in table 2. A small update is in order: most of the Data fields are of type varchar and 2 of them are of type text. How is indexing treated? My plan is to index 2 data fields, email (varchar 50) and author (varchar 20). And yes, all records in Table 1 will have a record in Table 2. Most of the settings fields are of the bit type, around 80%. The rest is a mix between int and varchar. The varchars can be null.

    Read the article

  • Lucene.NET 2.9 and BitArray/DocIdSet

    - by Paul Knopf
    I found a great example of grabbing facet counts on a base query. It stores the BitArray of the base query to improve performance each time a facet gets counted. var genreQuery = new TermQuery(new Term("genre", genre)); var genreQueryFilter = new QueryFilter(genreQuery); BitArray genreBitArray = genreQueryFilter.Bits(searcher.GetIndexReader()); Console.WriteLine("There are " + GetCardinality(genreBitArray) + " document with the genre " + genre); // Next perform a regular search and get its BitArray result Query searchQuery = MultiFieldQueryParser.Parse(term, new[] {"title", "description"}, new[] {BooleanClause.Occur.SHOULD, BooleanClause.Occur.SHOULD}, new StandardAnalyzer()); var searchQueryFilter = new QueryFilter(searchQuery); BitArray searchBitArray = searchQueryFilter.Bits(searcher.GetIndexReader()); Console.WriteLine("There are " + GetCardinality(searchBitArray) + " document containing the term " + term); The only problem is that I am using a newer version of Lucene.NET (2.9) and Filter.Bits is obsolete. We are told to use DocIdSet instead (rather than BitArray). I cannot figure out how to do the bitArray.And(bitArray) operation with a DocIdSet. I looked in Reflector and found OpenBitSet, which has And operations. I'm not sure whether OpenBitSet is the route to go; I'm just noting it. Thanks in advance!
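    For what it's worth, a sketch of the direction that seems intended in 2.9, assuming the Lucene.NET port mirrors Java Lucene's OpenBitSet / OpenBitSetDISI (Lucene.Net.Util) and that Filter.GetDocIdSet(reader) is the replacement for Filter.Bits; worth double-checking against the 2.9 binaries:

    ```csharp
    // Sketch only: materialize each filter's DocIdSet into an OpenBitSet so it can be
    // ANDed and counted, roughly replacing the old BitArray-based approach.
    using Lucene.Net.Index;
    using Lucene.Net.Search;
    using Lucene.Net.Util;

    static OpenBitSet GetBits(Filter filter, IndexReader reader)
    {
        DocIdSetIterator it = filter.GetDocIdSet(reader).Iterator();
        return new OpenBitSetDISI(it, reader.MaxDoc());
    }

    // Usage, with genreQuery, searchQuery and searcher from the snippet above:
    // OpenBitSet genreBits  = GetBits(new QueryWrapperFilter(genreQuery),  searcher.GetIndexReader());
    // OpenBitSet searchBits = GetBits(new QueryWrapperFilter(searchQuery), searcher.GetIndexReader());
    // genreBits.And(searchBits);                   // intersect in place
    // long facetCount = genreBits.Cardinality();   // documents matching both filters
    ```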

    Read the article

  • Perf: Viewing thousands of images in Silverlight 3 on a 3D Wall

    - by Bob Holland
    I currently work on a very cool Silverlight app that displays photos in a 3D wall space, like the Wall3D demo that ships with Blend 3. The problem I am currently facing is performance. The app works like this: as you scroll right or left, the 3D photo wall rotates; as each movement is made, the next column of photos is downloaded, decoded into a BitmapImage and thrown into a 3D wall node. As you can imagine, users (if you let them) will want to flip through the photos really quickly, but the problem is that I cannot display the photos quickly enough. In most cases it's a beautiful app that works really well, but when an album contains over 300 photos, you can imagine the sort of memory taken up by all the BitmapImage instances and how moving the slider can jump from photo 20 to photo 120 in a second. Of course we have algorithms in place to not download every photo in between, but I still can't work out a fast way to get the photos displayed. It may be the case that we need to throw away the 'great for show' 3D wall and go to a flat DeepZoom-like wall, like the Playboy archive one that Vertigo did. Still not sure; let me know your thoughts. P.S. We are using Kit3D for all the 3D work; it's using PerspectiveCamera, Model3DGroup, ModelVisual3D, RotateTransform3D & TranslateTransform3D. Cheers, Bob.

    Read the article

  • Why is OracleDataAdapter.Fill() Very Slow?

    - by John Gietzen
    I am using a pretty complex query to retrieve some data out of one of our billing databases. I'm running into an issue where the query seems to complete fairly quickly when executed with SQL Developer, but does not seem to ever finish when using the OracleDataAdapter.Fill() method. I'm only trying to read about 1000 rows, and the query completes in SQL Developer in about 20 seconds. What could be causing such drastic differences in performance? I have tons of other queries that run quickly using the same function. Here is the code I'm using to execute the query: using Oracle.DataAccess.Client; ... public DataTable ExecuteExternalQuery(string connectionString, string providerName, string queryText) { DbConnection connection = null; DbCommand selectCommand = null; DbDataAdapter adapter = null; switch (providerName) { case "System.Data.OracleClient": case "Oracle.DataAccess.Client": connection = new OracleConnection(connectionString); selectCommand = connection.CreateCommand(); adapter = new OracleDataAdapter((OracleCommand)selectCommand); break; ... } DataTable table = null; try { connection.Open(); selectCommand.CommandText = queryText; selectCommand.CommandTimeout = 300000; selectCommand.CommandType = CommandType.Text; table = new DataTable("result"); table.Locale = CultureInfo.CurrentCulture; adapter.Fill(table); } finally { adapter.Dispose(); if (connection.State != ConnectionState.Closed) { connection.Close(); } } return table; } And here is the general outline of the SQL I'm using: with trouble_calls as ( select work_order_number, account_number, date_entered from work_orders where date_entered >= sysdate - (15 + 31) -- Use the index to limit the number of rows scanned and wo_status not in ('Cancelled') and wo_type = 'Trouble Call' ) select account_number, work_order_number, date_entered from trouble_calls wo where wo.date_entered >= sysdate - 15 and ( select count(*) from trouble_calls repeat where wo.account_number = repeat.account_number and wo.work_order_number <> repeat.work_order_number and wo.date_entered - repeat.date_entered between 0 and 30 ) >= 1
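    Two things that might explain part of the gap, offered as guesses rather than a diagnosis: SQL Developer only fetches the first batch of rows (50 by default), so its 20 seconds is not measuring a full retrieval, and ODP.NET's default fetch buffer is small (64 KB per round trip, if I recall correctly), which hurts on wide rows. Below is a sketch of sizing the fetch buffer explicitly, assuming the ODP.NET OracleDataReader.FetchSize and OracleCommand.RowSize properties; it bypasses adapter.Fill, since the adapter does not expose FetchSize directly.

    ```csharp
    // Sketch: load via a reader so FetchSize can be tuned; RowSize is only populated
    // after the command has executed. Applies to the Oracle.DataAccess.Client branch.
    using (var reader = selectCommand.ExecuteReader())
    {
        var oraReader = reader as Oracle.DataAccess.Client.OracleDataReader;
        var oraCommand = selectCommand as Oracle.DataAccess.Client.OracleCommand;
        if (oraReader != null && oraCommand != null)
        {
            // fetch roughly 1000 rows per round trip instead of the 64 KB default
            oraReader.FetchSize = oraCommand.RowSize * 1000;
        }
        table = new DataTable("result");
        table.Locale = CultureInfo.CurrentCulture;
        table.Load(reader);
    }
    ```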

    Read the article

  • should I use Entity Framework instead of raw ADO.NET

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and uses ODBC to get to SQL Server. The new system will not immediately replace the old system; they must coexist against the same database for years to come. At first I thought EF was the way to go since it is the latest and greatest. After making a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects, as my DB has 800+ tables) I am seriously questioning the use of EF. In the current system I often need to do fine-grained performance tuning of the queries, which I can do because I have 100% control of the generated SQL. But it seems that in EF so much happens behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to EF, but since my criteria are passed between client and server as a criteria object, it seems like I could build queries just as easily without LINQ. I understand that in WCF RIA there is query projection (or something like that) where I can write client-side LINQ that moves to the server before translation into actual SQL, so in that case I can see the benefit of EF, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?
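    For what it's worth, the "100% control of generated SQL" point is easy to keep with raw ADO.NET behind a criteria object; the sketch below is purely illustrative (CustomerCriteria and the column names are made up) and is not specific to CSLA or EF:

    ```csharp
    // Illustrative only: a criteria object translated into hand-written, parameterized
    // SQL that can be tuned statement by statement. All names here are hypothetical.
    using System.Data;
    using System.Data.SqlClient;

    public class CustomerCriteria
    {
        public string Region { get; set; }
    }

    public static class CustomerData
    {
        public static DataTable Fetch(string connectionString, CustomerCriteria criteria)
        {
            var table = new DataTable("Customer");
            using (var cn = new SqlConnection(connectionString))
            using (var cmd = cn.CreateCommand())
            {
                // hand-written SQL: indexes, hints and rewrites stay under your control
                cmd.CommandText =
                    "SELECT Id, Name, Region FROM dbo.Customer WHERE Region = @region";
                cmd.Parameters.Add("@region", SqlDbType.NVarChar, 10).Value = criteria.Region;
                cn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    table.Load(reader);
                }
            }
            return table;
        }
    }
    ```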

    Read the article

  • UIView animation vs Core Animation

    - by Tom Irving
    I'm trying to animate a view sliding into view and bouncing once it hits the side of the screen. A basic example of the slide I'm doing is as follows: // The view is added with a rect making it off screen. [UIView beginAnimations:nil context:NULL]; [UIView setAnimationBeginsFromCurrentState:YES]; [UIView setAnimationDuration:0.07]; [UIView setAnimationCurve:UIViewAnimationCurveLinear]; [UIView setAnimationDelegate:self]; [UIView setAnimationDidStopSelector:@selector(animationDidStop:finished:context:)]; [theView setFrame:CGRectMake(-5, 0, theView.frame.size.width, theView.frame.size.height)]; [UIView commitAnimations]; More animations are then called in the didStop selector to create the bounce effect. The problem is that when more than one view is being animated, the bounce becomes jerky and, well, doesn't bounce anymore. Before I start reading up on how to do this in Core Animation (I understand it's a little more difficult), I'd like to know whether there is actually an advantage to using Core Animation rather than UIView animations. If not, is there something I can do to improve performance?

    Read the article

  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using an UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU-bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays. We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay with the amount of data being returned in UpdatePanel responses. Within the UpdatePanel there is a table that contains a number of ListBox controls. The real problem is that, every other time, the response (from making ListBox selections) alternates between 30K and 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 becomes CPU-bound while it is presumably processing this data. So irrespective of whether we should be using an UpdatePanel at all, we'd like to figure out why every other async postback response is more than 10 times larger than the previous one. When the response is in the 30K range, IE updates the display within a second. On the alternate requests, the response time is well over 10 times longer. Any idea why this alternating behavior happens with an UpdatePanel?
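    One hypothesis worth ruling out (a guess, not a diagnosis): the partial-postback payload carries the page's ViewState, so if ViewState is roughly growing tenfold on alternate postbacks it would produce exactly this kind of 30K/430K swing. A quick diagnostic sketch for the page code-behind, assuming a standard Web Forms page with a ScriptManager on it:

    ```csharp
    // Diagnostic sketch: log the incoming ViewState size on each async postback and
    // compare it against the alternating response sizes observed in Fiddler.
    protected void Page_Load(object sender, EventArgs e)
    {
        var sm = System.Web.UI.ScriptManager.GetCurrent(this);
        if (sm != null && sm.IsInAsyncPostBack)
        {
            string viewState = Request.Form["__VIEWSTATE"] ?? string.Empty;
            System.Diagnostics.Trace.WriteLine(
                "Async postback, __VIEWSTATE length: " + viewState.Length);
        }
    }
    ```

    If the ViewState length tracks the swing, the next step would be trimming ViewState on the heavy controls inside the panel.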

    Read the article

  • Chart controls for ASP.NET

    - by tolism7
    I am looking for people's opinions and experience on using chart controls within an ASP.NET application (Web Forms or MVC) primarily, but also in any kind of project. I am currently doing my research and I have a pretty big list of controls to evaluate. My list includes (in no particular order): ASP.NET controls: DevExpress XtraCharts (http://demos.devexpress.com/XtraChartsDemos/) Dundas Chart for .NET (http://www.dundas.com/) Telerik RadChart for ASP.NET AJAX (http://www.telerik.com/) ComponentArt Charting & Visualization for ASP.NET (http://www.componentart.com/) Infragistics WebChart (http://www.infragistics.com/dotnet/netadvantage/aspnet.aspx#Overview) .net Charting (http://www.dotnetcharting.com/) Chart Control for .Net Framework (Microsoft's) (http://weblogs.asp.net/scottgu/archive/2008/11/24/new-asp-net-charting-control-lt-asp-chart-runat-quot-server-quot-gt.aspx) Flash controls: FusionCharts v3 (http://www.fusioncharts.com/) XML/SWF Charts (http://www.maani.us/xml%5Fcharts/index.php) amCharts (http://www.amcharts.com/) AnyChart (http://www.anychart.com/home/) Javascript: Flot (http://code.google.com/p/flot/) Flotr (http://solutoire.com/flotr/) jqPlot (http://www.jqplot.com/index.php) (If I missed any that are worth comparing against the above, please let me know.) What I am looking for is opinions on using any of the above, so I can form my own and help others do the same based on what I read here. I do not care which one is better. What I care about is why someone likes one of the above and what these controls offer as a distinct advantage. I am interested in developers' opinions, and I would like to find out which things are difficult to do with any of the above controls and which things are easy to achieve. AJAX compatibility (built in to the controls but also manual), ASP.NET compatibility, input capabilities, data binding options, performance, and how much code one needs to write in order to create a chart are some of the things that I would want to read about. I have already done my research on StackOverflow for relevant questions, but there is nothing at the level of detail that I would want in order to make a responsible decision.

    Read the article

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers providing EJB services. Some of the portlets use our own home-grown MVC-pattern and some are written in JSF. Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJB's on the back-end servers. It was written in NetBeans Platform and uses the WebSphere application client library to establish communication with the EJB's. The really painful bit was getting the client to use secure JAAS/SSL communications. But, after that was resolved, we've found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to: Enormous performance advantage (CORBA vs. HTTP, cut out the Portal Server middle man) Development is simplified and faster due to use of NetBeans' visual designer and Swing's generally robust architecture The debug cycle is shortened by not having to deploy your client application to a test server No mishmash of technologies as with web-based development (Struts, JSF, JQuery, HTML, JSTL etc., etc.) After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: Rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, then you'd be crazy not to consider NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

    Read the article

  • Force float left with no line break no matter what

    - by Tesserex
    I'm guessing this isn't possible, but here goes. I have two tables, and I'm trying to get them to sit side-by-side so that they look like one table. The reason for this, instead of using one larger table, is that the data in the second table needs to be handled on a column basis, not row basis, for performance reasons like caching and AJAX-fetching data. So rather than have to reload the whole table for a single column, I decided to break the column out into a separate table, but have it visually seem like a single table. I can't find a way to forcibly put the second table next to the first. I can float them, but when the first table is too wide, the second one breaks to the next line. Here's the kicker: the width of the first table is dynamic. So I can't just set a huge width to their container. Well, I could set a huge width, like 1000%, but then I have a huge ugly horizontal scroll bar. So is there any way to tell the second table "Stay on that same line, no matter what! And line up right next to the previous element please!"

    Read the article

  • code review: Is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with: enforce FxCop rules (we are a Microsoft shop); check for performance (any tools?), security (thinking about using OWASP Code Crawler) and thread safety; adhere to naming conventions; the code should cover edge cases and boundary conditions; it should handle exceptions correctly (do not swallow exceptions); check whether the functionality is duplicated elsewhere; a method body should be small (20-30 lines), and methods should do one thing and one thing only (no side effects; avoid temporal coupling); do not pass or return nulls in methods; avoid dead code; document public and protected methods/properties/variables. What other things do you generally look for? I am trying to see if we can quantify the review process (so it would produce identical output when reviewed by different persons). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would differ from one reviewer to another)? The objective is to have a marking system (say -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max to do a review (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture etc., but you get the general idea: the reviewer should not spend days reviewing someone's code). What other objective/quantifiable systems do you follow to identify good/bad code written by developers? Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin.

    Read the article

  • SQL Server 2008, join or no join?

    - by Patrick
    Just a small question regarding joins. I have a table with around 30 fields and I was thinking about making a second table to store 10 of those fields. Then I would just join them in with the main data. The 10 fields that I was planning to store in a second table do not get queried directly; they're just settings for the data in the first table. Something like: Table 1: Id Data1 Data2 Data3 etc ... Table 2: Id (same id as table one) Settings1 Settings2 Settings3 Is this a bad solution? Should I just use 1 table? How much performance impact does it have? All entries in table 1 would then also have an entry in table 2. A small update is in order: most of the Data fields are of type varchar and 2 of them are of type text. How is indexing treated? My plan is to index 2 data fields, email (varchar 50) and author (varchar 20). And yes, all records in Table 1 will have a record in Table 2. Most of the settings fields are of the bit type, around 80%. The rest is a mix between int and varchar. The varchars can be null.

    Read the article

  • solve a classic map-reduce problem with opencl?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) with OpenCL, namely, the AMD implementation. But the result bothers me. Let me describe the problem briefly first. There are two types of data that flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (Map) and then sum up the overall score for every feature (Reduce). There are around 10k features and 30k samples. I tried different ways to solve the problem. First, I tried to decompose the problem by feature. The problem is that the score calculation consists of random memory access (picking some of the 9000+ dimensions and doing plus/minus calculations). Since I cannot coalesce memory access, it is costly. Then, I tried to decompose the problem by sample. The problem is that to sum up the overall score, all threads compete for a few score variables, and they keep overwriting the scores, which then turn out to be incorrect. (I cannot store each individual score first and sum them up later because that requires 10k * 30k * 4 bytes.) The first method I tried gives me the same performance as an i7 860 CPU with 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to the ray tracing problem (where you carry out calculations for millions of rays against millions of triangles). Any ideas?
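    On the second decomposition (by sample), the race on the shared score variables is the classic reduction problem: the usual fix is to give each worker (in OpenCL, each work-group) a private buffer of partial sums in local memory and combine the buffers in a second pass, so nothing gets overwritten and the full 10k x 30k score matrix never has to exist. Purely as an illustration of that pattern, here is a CPU-side sketch in C#; Score(), featureCount and sampleCount are hypothetical stand-ins for the real kernel and problem sizes.

    ```csharp
    // CPU-side sketch of the "private partial sums, combine once" pattern that removes
    // the race on the shared score array. The OpenCL analogue is a per-work-group
    // reduction in local memory followed by a second reduce pass over the partials.
    using System.Threading.Tasks;

    static class ScoreReduce
    {
        static double Score(int feature, int sample) { return 0.0; } // hypothetical stand-in

        public static double[] Run(int featureCount, int sampleCount)
        {
            var featureScores = new double[featureCount];
            var gate = new object();

            Parallel.For(0, sampleCount,
                () => new double[featureCount],             // private buffer per worker
                (sample, state, local) =>
                {
                    for (int f = 0; f < featureCount; f++)
                        local[f] += Score(f, sample);        // map step, no shared writes
                    return local;
                },
                local =>
                {
                    lock (gate)                              // one combine per worker
                        for (int f = 0; f < featureCount; f++)
                            featureScores[f] += local[f];
                });

            return featureScores;
        }
    }
    ```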

    Read the article

  • SQL Injection Protection for dynamic queries

    - by jbugeja
    The typical controls against SQL injection flaws are to use bind variables (the cfqueryparam tag), validate string data, and move to stored procedures for the actual SQL layer. This is all fine and I agree; however, what if the site is a legacy one and features a lot of dynamic queries? Then rewriting all the queries is a herculean task and requires an extensive period of regression and performance testing. I was thinking of using a dynamic SQL filter and calling it prior to calling cfquery for the actual execution. I found one filter on CFLib.org (http://www.cflib.org/udf/sqlSafe): <cfscript> /** * Cleans string of potential sql injection. * * @param string String to modify. (Required) * @return Returns a string. * @author Bryan Murphy ([email protected]) * @version 1, May 26, 2005 */ function metaguardSQLSafe(string) { var sqlList = "-- ,'"; var replacementList = "#chr(38)##chr(35)##chr(52)##chr(53)##chr(59)##chr(38)##chr(35)##chr(52)##chr(53)##chr(59)# , #chr(38)##chr(35)##chr(51)##chr(57)##chr(59)#"; return trim(replaceList( string , sqlList , replacementList )); } </cfscript> This seems to be quite a simple filter, and I would like to know whether there are ways to improve it or a better solution altogether.

    Read the article

  • Tips on creating user interfaces and optimizing the user experience

    - by Saif Bechan
    I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side, as people can buy certain items and services. In my opinion a good blend of user interface, speed and security is essential for these types of websites. It is fairly easy to use ajax and JavaScript nowadays to do almost everything, as there are a lot of libraries available such as jQuery and others, but this can bring performance and incompatibility issues, which can lead to users just going to the next website. The overall look of the website is important too: where to place certain buttons, where to place certain types of articles such as FAQ and support, where and how to display error messages so that the user sees them but is not bothered by them, and an overall color scheme. The basic question is: how do you create an interface that encourages a user to buy/use your services? I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important: when the colors on a website are irritating, you just want to click away. I have not found any articles that explain these concepts. Does anyone have any tips and/or resources where I can get some articles that guide you in making the correct choices for your website?

    Read the article

  • Async.Parallel or Array.Parallel.Map ?

    - by gurteen2
    Hello. I'm trying to implement a pattern I read on Don Syme's blog (http://blogs.msdn.com/dsyme/archive/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-i-o-computations.aspx) which suggests that there are opportunities for massive performance improvements from leveraging asynchronous I/O. I am currently trying to take a piece of code that "works" one way, using Array.Parallel.map, and see if I can somehow achieve the same result using Async.Parallel, but I really don't understand Async.Parallel and cannot get anything to work. I have a piece of code (simplified below to illustrate the point) that successfully retrieves an array of data for one cusip (a price series, for example): let getStockData cusip = let D = DataProvider() let arr = D.GetPriceSeries(cusip) arr let data = Array.Parallel.map (fun x -> getStockData x) stockCusips So this approach constructs an array of arrays, by making a connection over the internet to my data vendor for each stock (which could be as many as 3000), and returns me an array of arrays (1 per stock, with a price series for each one). I admittedly don't understand what goes on underneath Array.Parallel.map, but am wondering if this is a scenario where there are resources wasted under the hood and it actually could be faster using asynchronous I/O. So to test this out, I attempted to write the function using asyncs, and I think the function below follows the pattern in Don Syme's article using the URLs, but it won't compile with "let!": let getStockDataAsync cusip = async { let D = DataProvider() let! arr = D.GetData(cusip) return arr } The error I get is: This expression was expected to have type Async<'a> but here has type obj It compiles fine with "let" instead of "let!", but I had thought the whole point was that you need the exclamation point in order for the command to run without blocking a thread. So the first question really is: what's wrong with my syntax above in getStockDataAsync? And at a higher level, can anyone offer some additional insight about asynchronous I/O and whether the scenario I have presented would benefit from it, making it potentially much, much faster than Array.Parallel.map? Thanks so much.

    Read the article

  • Why use short-circuit code?

    - by Tim Lytle
    Related Questions: Benefits of using short-circuit evaluation, Why would a language NOT use short-circuit evaluation?, Can someone explain this line of code please? (Logic & Assignment operators) There are questions about the benefits of a language using short-circuit code, but I'm wondering what the benefits are for a programmer. Is it just that it can make code a little more concise? Or are there performance reasons? I'm not asking about situations where two entities need to be evaluated anyway, for example: if($user->auth() AND $model->valid()){ $model->save(); } To me the reasoning there is clear: since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose: if(is_string($userid) AND strlen($userid) > 10){ //do something }; Because it wouldn't be wise to call strlen() with a non-string value. What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page: defined('APPLICATION_PATH') || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application')); This could have been: if(!defined('APPLICATION_PATH')){ define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application')); } Or even as a single statement: if(!defined('APPLICATION_PATH')) define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application')); So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?

    Read the article

  • Manual drag-drop operations in Flex

    - by Yarin
    This is a two-part problem: A) I'm implementing several irregular drag-drop operations in Flex (e.g. a DataGrid ItemRenderer into a Tree). My preference was modifying DragManager operations to meet my needs, and in fact using DragManager allows me to do everything I need, but I'm having serious issues with performance. For example, dragging anything over a many-columned DataGrid, whether the drag was initiated with DragManager.doDrag or just using native ListBase drag-drop functionality, slows the drag movement to a crawl. Even if the DataGrid is disabled and not listening for any move/drag events, this happens. On the other hand, if the drag is initiated by calling .startDrag() on the Sprite, the drag is smooth and performs great over DataGrids and everything else. So part A would be: is there a reason why .startDrag() operations work so well, while drags initiated through DragManager.doDrag suffer so badly when over certain components? B) If indeed the solution is to handle drag-drops using .startDrag(), how would I go about determining what component the mouse is over when the drag is released? In my example, my dragged object is brought up to the top level of the display list, and so is being moved around in stage coordinates. mouseMove and mouseOver events don't fire on the components I'm dragging over because the mouse is constantly over the dragged component, so I would need some sort of stage-coordinate to visible-component-at-that-coordinate conversion. Any thoughts on this? Thanks a lot! -- Yarin

    Read the article

  • Ruby on Rails export to csv - maintain mysql select statement order

    - by zekial
    Exporting some data from mysql to a csv file using FasterCSV. I'd like the columns in the outputted CSV to be in the same order as the select statement in my query. Example: rows = Data.find( :all, :select=>'name, age, height, weight' ) headers = rows[0].attributes.keys FasterCSV.generate do |csv| csv << headers rows.each do |r| csv << r.attributes.values end end CSV Output: height,weight,name,age 74,212,bob,23 70,201,fred,24 . . . I want the CSV columns in the same order as my select statement. Obviously the attributes method is not going to work. Any ideas on the best way to ensure that the columns in my csv file will be in the same order as the select statement? Got a lot of data and performance is an issue. The select statement is not static. I realize I could loop through column names within the rows.each loop but it seems kinda dirty.

    Read the article
