Search Results

Search found 13249 results on 530 pages for 'performance tuning'.


  • Chicken and egg problem (restore database) when trying to write unit tests against SQL Server 2008

    - by Hamish Grubijan
    Ok, they are not unit tests but end-to-end tests. The setup is somewhat involved. The tests will use C# and an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we would need to do a full database restore. I do not think I can do it over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx

        Msg 6104, Level 16, State 1, Line 1
        Cannot use KILL to kill your own process.

    However, I would like to, so that 199 tests do not go amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection", such as COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that? Also, will the clients be able to re-connect automatically after a restore, or would I have to dismantle everything once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!

    Read the article

  • How to do "map chunks", like terraria or minecraft maps?

    - by O'poil
    Due to performance issues, I have to cut my maps into chunks. I manage the map in this way:

        listMap[x][y] = new Tile(x, y);

    I tried in vain to cut this list into several "chunks" to avoid loading the whole map, because the FPS is not very high with a large map. And yet, when I update or draw I only use a small tile range. Here is how I proceed:

        foreach (List<Tile> list in listMap)
        {
            foreach (Tile leTile in list)
            {
                if ((leTile.Position.X < screenWidth + hero.Pos.X) &&
                    (leTile.Position.X > hero.Pos.X - tileSize) &&
                    (leTile.Position.Y < screenHeight + hero.Pos.Y) &&
                    (leTile.Position.Y > hero.Pos.Y - tileSize))
                {
                    leTile.Draw(spriteBatch, gameTime);
                }
            }
        }

    (and the same thing for the Update method). So I am trying to learn from games like Minecraft or Terraria, and both manage maps much larger than mine without the slightest drop in FPS. And apparently, they load "chunks". What I would like to understand is how to cut my list into chunks, and how to display them depending on the position of my character. I have tried many things without success. Thank you in advance for putting me on the right track! Ps: Again, sorry for my English :'( Pps: I'm not an experienced developer ;)
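
    Below is a rough sketch of the chunking idea (written in Java rather than C#/XNA, and purely an assumption about how such a map could be organised; names like Chunk, CHUNK_SIZE and Renderer are hypothetical): tiles are grouped into fixed-size blocks, and each frame only the blocks overlapping the camera rectangle are drawn, so the per-frame cost no longer depends on the total map size.

    ```java
    // Hypothetical sketch: tiles are grouped into CHUNK_SIZE x CHUNK_SIZE blocks,
    // and drawing touches only the blocks that overlap the visible rectangle.
    public class ChunkedMap {
        static final int CHUNK_SIZE = 32;                    // tiles per chunk side (assumed)
        private final Chunk[][] chunks;

        public ChunkedMap(int widthInChunks, int heightInChunks) {
            chunks = new Chunk[widthInChunks][heightInChunks];
        }

        public Tile getTile(int tileX, int tileY) {
            Chunk c = chunks[tileX / CHUNK_SIZE][tileY / CHUNK_SIZE];
            return c == null ? null : c.tiles[tileX % CHUNK_SIZE][tileY % CHUNK_SIZE];
        }

        // Draw only the chunks that overlap the camera rectangle (all units in tiles).
        public void draw(int camX, int camY, int viewW, int viewH, Renderer r) {
            int firstCx = Math.max(0, camX / CHUNK_SIZE);
            int firstCy = Math.max(0, camY / CHUNK_SIZE);
            int lastCx  = Math.min(chunks.length - 1, (camX + viewW) / CHUNK_SIZE);
            int lastCy  = Math.min(chunks[0].length - 1, (camY + viewH) / CHUNK_SIZE);
            for (int cx = firstCx; cx <= lastCx; cx++) {
                for (int cy = firstCy; cy <= lastCy; cy++) {
                    Chunk c = chunks[cx][cy];
                    if (c != null) {
                        c.draw(r);               // a chunk could also cache its own batched geometry
                    }
                }
            }
        }

        static class Chunk {
            final Tile[][] tiles = new Tile[CHUNK_SIZE][CHUNK_SIZE];
            void draw(Renderer r) { /* draw this chunk's tiles */ }
        }

        interface Renderer { }                   // stand-in for spriteBatch
        static class Tile { }                    // stand-in for the poster's Tile
    }
    ```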

    Read the article

  • Best web database solution for Scala for a high-traffic site?

    - by egervari
    I am in charge of rebuilding a website that gets about 250,000 visitors a day. We'd like to use Scala, but it does not work very well with Spring (in some minor cases) and Hibernate (there is a major and very annoying mismatch here if you want to use Scala collections, which we do). The application itself is going to have about 40-50 tables. Other than Hibernate, is there an ORM that works great with Scala and is as performant and reliable as Hibernate? Does it also have the same capabilities, or are we going to run into leaky abstractions if we don't use Hibernate? It would be a big risk for us to go with a framework that is newer and doesn't seem to have a lot of industry backing... and at the same time, Hibernate is a real pain to program against when using Scala:
    1. The Java collection <-> Scala collection conversion is absolutely painful. There is a lot more boilerplate and crap to write.
    2. The IDE doesn't import JavaConversions and Java interfaces automatically, so this needs to be done manually. Optimizing imports in IDEA is going to destroy all the manual work.
    3. There is also a performance cost to converting back and forth all the time in your domain objects and your DAO classes.
    4. Not to mention there needs to be a lot of casting, which produces code ugly as sin.
    I actually would love to write my own ORM that is 100% tailored to Scala, but obviously this is really outside the scope of our project for now. So what is the best approach?

    Read the article

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round down to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes.

        WITH test_data AS (
          SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
        )
        -- #end of test-data
        SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
        FROM test_data

    And here is the result:

        01.01.2010 10:00:00    01.01.2010 10:00:00
        01.01.2010 10:05:00    01.01.2010 10:00:00
        01.01.2010 10:09:59    01.01.2010 10:00:00
        01.01.2010 10:10:00    01.01.2010 10:10:00
        01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way? EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

        DECLARE
          t TIMESTAMP := SYSTIMESTAMP;
        BEGIN
          FOR i IN (
            WITH test_data AS (
              SELECT SYSDATE + ROWNUM / 5000 d FROM dual CONNECT BY ROWNUM <= 500000
            )
            SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60) FROM test_data
          ) LOOP
            NULL;
          END LOOP;
          dbms_output.put_line( SYSTIMESTAMP - t );
        END;

    This approach took 03.24 s.
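
    For what it's worth, the same round-down arithmetic is easy to sanity-check outside the database; this small Java sketch (an illustration, not part of the question) mirrors the truncate-to-minute-then-subtract-minutes-mod-10 approach:

    ```java
    import java.time.LocalDateTime;
    import java.time.temporal.ChronoUnit;

    // Round a timestamp down to the next-lower 10-minute boundary:
    // truncate to the minute, then subtract (minute mod 10) minutes.
    public class RoundDown {
        static LocalDateTime toTenMinutes(LocalDateTime t) {
            LocalDateTime truncated = t.truncatedTo(ChronoUnit.MINUTES);
            return truncated.minusMinutes(truncated.getMinute() % 10);
        }

        public static void main(String[] args) {
            System.out.println(toTenMinutes(LocalDateTime.parse("2010-01-01T10:09:59")));
            // -> 2010-01-01T10:00
        }
    }
    ```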

    Read the article

  • Move to PHP on Windows? Concerns, hints, "please don't do it!"?

    - by Daniel
    I am considering moving from Microsoft languages to PHP (just for web dev), which has quite an interesting syntax, a Perl-ish look (but a wider programmer base), and it allows me to reuse the web without reinventing it. I have some concerns too. I would be more than happy to gather some wisdom from the Stack Overflow community (challenges to my opinions warmly welcome). Here are my doubts:
    1. Efficiency. CGI is slow; what am I supposed to use? FastCGI? Or what else?
    2. Efficiency + stability. Is PHP on Windows really stable and a good choice in terms of performance?
    3. Database. I use MSSQL very often (I regret, I like it). Could I widely and efficiently interface PHP with MSSQL (smartly using stored procedures, for example)?
    4. XSLT + XML performance. I work quite a lot with XML and XSLT and I really find the MS XML parser a great software component. Are the parsers used in PHP fast, reliable and efficient (I am interested mainly in DOM, not SAX)?
    5. Objects. Is the PHP object programming model valid and efficient?
    6. Regex. How efficient is PHP at processing regexes?
    Many thanks for your advice.

    Read the article

  • Ruby integer to string key

    - by Gene
    A system I'm building needs to convert non-negative Ruby integers into shortest-possible UTF-8 string values. The only requirement on the strings is that their lexicographic order be identical to the natural order on integers. What's the best Ruby way to do this? We can assume the integers are 32 bits and the sign bit is 0. This is successful:

        (i >> 24).chr + ((i >> 16) & 0xff).chr + ((i >> 8) & 0xff).chr + (i & 0xff).chr

    But it appears to be 1) garbage-intense and 2) ugly. I've also looked at pack solutions, but these don't seem portable due to byte order. FWIW, the application is Redis hash field names. Building keys may be a performance bottleneck, but probably not. This question is mostly about the "Ruby way".

    Read the article

  • Is locking on the requested object a bad idea?

    - by Quick Joe Smith
    Most advice on thread safety involves some variation of the following pattern:

        public class Thing
        {
            private static readonly object padlock = new object();
            private string stuff, andNonsense;

            public string Stuff
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.stuff == null)
                            this.stuff = "Threadsafe!";
                    }
                    return this.stuff;
                }
            }

            public string AndNonsense
            {
                get
                {
                    lock (Thing.padlock)
                    {
                        if (this.andNonsense == null)
                            this.andNonsense = "Also threadsafe!";
                    }
                    return this.andNonsense;
                }
            }

            // Rest of class...
        }

    In cases where the get operations are expensive and unrelated, a single locking object is unsuitable because a call to Stuff would block all calls to AndNonsense, degrading performance. And rather than create a lock object for each call, wouldn't it be better to acquire the lock on the member itself (assuming it is not something that implements SyncRoot or somesuch for that purpose)? For example:

        public string Stuff
        {
            get
            {
                lock (this.stuff)
                {
                    // Pretend that this is a very expensive operation.
                    if (this.stuff == null)
                        this.stuff = "Still threadsafe and good?";
                }
                return this.stuff;
            }
        }

    Strangely, I have never seen this approach recommended or warned against. Am I missing something obvious?
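
    For comparison, the usual recommendation is a dedicated private lock object per lazily-initialised field, which keeps unrelated getters from blocking each other and avoids locking on a field that is still null or gets reassigned. A minimal sketch (in Java rather than C#, and only an assumption, not code from the post):

    ```java
    // Hypothetical sketch: one dedicated lock per lazily-initialized field, so
    // unrelated getters never block each other and we never lock on a field that
    // is still null or is replaced after initialization.
    public class Thing {
        private final Object stuffLock = new Object();
        private final Object nonsenseLock = new Object();
        private String stuff;
        private String andNonsense;

        public String getStuff() {
            synchronized (stuffLock) {
                if (stuff == null) {
                    stuff = "Threadsafe!";        // pretend this is expensive
                }
                return stuff;
            }
        }

        public String getAndNonsense() {
            synchronized (nonsenseLock) {
                if (andNonsense == null) {
                    andNonsense = "Also threadsafe!";
                }
                return andNonsense;
            }
        }
    }
    ```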

    Read the article

  • How to model in J2EE / JEE?

    - by Harry
    Let's say, I have decided to go with J(2)EE stack for my enterprise application. Now, for domain modelling (or: for designing the M of MVC), which APIs can I safely assume and use, and which I should stay away from... say, via a layer of abstraction? For example, Should I go ahead and litter my Model with calls to Hibernate/JPA API? Or, should I build an abstraction... a persistence layer of my own to avoid hard-coding against these two specific persistence APIs? Why I ask this: Few years ago, there was this Kodo API which got superseded by Hibernate. If one had designed a persistence layer and coded the rest of the model against this layer (instead of littering the Model with calls to specific vendor API), it would have allowed one to (relatively) easily switch from Kodo to Hibernate to xyz. Is it recommended to make aggressive use of the *QL provided by your persistence vendor in your domain model? I'm not aware of any real-world issues (like performance, scalability, portability, etc) arising out of a heavy use of an HQL-like language. Why I ask this: I would like to avoid, as much as possible, writing custom code when the same could be accomplished via query language that is more portable than SQL. Sorry, but I'm a complete newbie to this area. Where could I find more info on this topic? Many thanks in advance. /HS
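
    One common shape for such a layer is a plain repository interface that the domain model depends on, with the JPA/Hibernate-specific code confined to a single implementation class. A sketch under assumed names (TradeOrder and the repository types are hypothetical, not from the post):

    ```java
    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;

    // A sketch (assumption, not from the post): the entity and the repository
    // interface know nothing about the persistence vendor; only the JPA-backed
    // implementation does, so swapping providers (Kodo -> Hibernate -> anything)
    // touches one class.
    @Entity
    class TradeOrder {                     // hypothetical example entity
        @Id Long id;
        String customerId;
    }

    interface TradeOrderRepository {       // what the domain model codes against
        TradeOrder findById(long id);
        List<TradeOrder> findByCustomer(String customerId);
        void save(TradeOrder order);
    }

    class JpaTradeOrderRepository implements TradeOrderRepository {
        private final EntityManager em;

        JpaTradeOrderRepository(EntityManager em) { this.em = em; }

        @Override
        public TradeOrder findById(long id) {
            return em.find(TradeOrder.class, id);
        }

        @Override
        public List<TradeOrder> findByCustomer(String customerId) {
            return em.createQuery(
                    "select o from TradeOrder o where o.customerId = :cid", TradeOrder.class)
                .setParameter("cid", customerId)
                .getResultList();
        }

        @Override
        public void save(TradeOrder order) {
            em.persist(order);
        }
    }
    ```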

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
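
    One parallel-friendly construction is a tree (Merkle-style) hash: hash fixed-size chunks independently, then hash the concatenated chunk digests. The sketch below (Java, an assumption rather than a drop-in answer) illustrates the idea; note the result is a different value than a plain single-pass hash, so producer and verifier must agree on the construction.

    ```java
    import java.io.RandomAccessFile;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // A sketch (assumption, not from the post) of a tree-style hash: hash fixed-size
    // chunks on a thread pool, then hash the concatenation of the chunk digests.
    // A real version would also bound how many chunk buffers are in flight at once.
    public class ChunkedHasher {
        private static final int CHUNK = 4 * 1024 * 1024;   // 4 MiB per chunk (assumed)

        public static byte[] hashFile(String path) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
                long length = file.length();
                List<Future<byte[]>> chunkDigests = new ArrayList<>();
                for (long offset = 0; offset < length; offset += CHUNK) {
                    byte[] buf = new byte[(int) Math.min(CHUNK, length - offset)];
                    file.seek(offset);
                    file.readFully(buf);                      // sequential I/O stays on one thread
                    chunkDigests.add(pool.submit((Callable<byte[]>) () ->
                            MessageDigest.getInstance("SHA-256").digest(buf)));
                }
                MessageDigest top = MessageDigest.getInstance("SHA-256");
                for (Future<byte[]> digest : chunkDigests) {
                    top.update(digest.get());                 // combine chunk digests in file order
                }
                return top.digest();
            } finally {
                pool.shutdown();
            }
        }
    }
    ```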

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
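
    The cache-first flow described above could look roughly like this (a Java sketch under assumed types; LogStore, DeviceLink and LogEntry are hypothetical stand-ins, not an existing API):

    ```java
    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    // A sketch (assumption, not from the post): serve what the local store already
    // holds, download only the missing tail over the slow UART link, and persist it
    // for the next request. Simplified: assumes the cache covers a contiguous range
    // starting at 'from'; the boundary log may be re-fetched, so dedupe on insert.
    public class CachingLogReader {
        private final LogStore store;      // e.g. backed by SQL Server CE
        private final DeviceLink device;   // ~30 logs/second over UART-USB

        public CachingLogReader(LogStore store, DeviceLink device) {
            this.store = store;
            this.device = device;
        }

        public List<LogEntry> getLogs(Instant from, Instant to) {
            List<LogEntry> result = new ArrayList<>(store.query(from, to));
            Instant resumeFrom = result.isEmpty()
                    ? from
                    : result.get(result.size() - 1).timestamp;
            if (resumeFrom.isBefore(to)) {
                List<LogEntry> fresh = device.download(resumeFrom, to);  // slow path
                store.insert(fresh);                                     // cache for next time
                result.addAll(fresh);
            }
            return result;
        }
    }

    interface LogStore {
        List<LogEntry> query(Instant from, Instant to);
        void insert(List<LogEntry> logs);
    }

    interface DeviceLink {
        List<LogEntry> download(Instant from, Instant to);
    }

    class LogEntry {
        final Instant timestamp;
        final double value;
        LogEntry(Instant timestamp, double value) { this.timestamp = timestamp; this.value = value; }
    }
    ```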

    Read the article

  • Hibernate: fetching multiple bags efficiently

    - by Jens Jansson
    Hi! I'm developing a multilingual application. For this reason many objects have, in their name and description fields, collections of something I call LocalizedStrings instead of plain strings. Every LocalizedString is basically a pair of a locale and a string localized to that locale. Let's take as an example an entity, say a Book object:

        public class Book {
            @OneToMany
            private List<LocalizedString> names;
            @OneToMany
            private List<LocalizedString> description;
            // and so on...
        }

    When a user asks for a list of books, the application queries all the books, fetches the name and description of every book in the locale the user has selected to run the app in, and displays them back to the user. This works, but it is a major performance issue. At the moment Hibernate makes one query to fetch all the books, and after that it goes through every single object and asks Hibernate for the localized strings for that specific object, resulting in the "n+1 selects problem". Fetching a list of 50 entities produces about 6000 rows of SQL commands in my server log. I tried making the collections eager, but that led me to the "cannot simultaneously fetch multiple bags" issue. Then I tried setting the fetch strategy on the collections to subselect, hoping that it would do one query for all books, and after that one query fetching all LocalizedStrings for all the books. Subselect didn't work in this case how I would have hoped and basically did exactly the same as my first case. I'm starting to run out of ideas on how to optimize this. So in short: what fetching-strategy alternatives are there when you are fetching a collection and every element in that collection has one or more collections of its own, which have to be fetched simultaneously?
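
    Two mitigations commonly suggested for this situation are mapping the collections as Sets (Hibernate only refuses to fetch two eager bags, i.e. unindexed Lists) and enabling batch fetching so the n+1 selects collapse into a few IN-list queries. A sketch of what that mapping might look like (an assumption, not the poster's code):

    ```java
    import java.util.HashSet;
    import java.util.Set;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.OneToMany;
    import org.hibernate.annotations.BatchSize;

    // A sketch (assumption, not the poster's mapping): Sets avoid the
    // "cannot simultaneously fetch multiple bags" restriction, and @BatchSize
    // loads the collections for many books at once instead of one query per book.
    @Entity
    public class Book {
        @Id
        private Long id;

        @OneToMany
        @BatchSize(size = 50)
        private Set<LocalizedString> names = new HashSet<>();

        @OneToMany
        @BatchSize(size = 50)
        private Set<LocalizedString> descriptions = new HashSet<>();
    }

    @Entity
    class LocalizedString {          // minimal stand-in for the poster's entity
        @Id
        private Long id;
        private String locale;
        private String value;
    }
    ```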

    Read the article

  • Keep the images downloaded in the ListView during scrolling

    - by Paveliko
    I'm developing my first app and have been reading a LOT here. I've been trying to find a solution for the following issue for over a week with no luck. I have an Adapter that extends ArrayAdapter to show an image and 3 lines of text in each row. Inside getView I assign the relevant information to the TextViews and use an ImageLoader class to download the image and assign it to the ImageView. Everything works great! I have 4.5 rows visible on my screen (out of a total of 20). When I scroll down for the first time the images continue to download and are assigned in the right order to the list. BUT when I scroll back, the list loses all the images and starts redrawing them again (0.5-1 sec per image) in the correct order. From what I've been reading, this is the standard list behaviour, but I want to change it. I want that, once an image has been downloaded, it stays in the list for the whole session of the current window, just like in the Contacts list or in the Market. It is only 20 images (6-9 KB each). Hope I managed to explain myself.
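
    A common way to get the Contacts/Market behaviour is an in-memory cache keyed by image URL, checked in getView before asking the loader to download. A sketch (an assumption, not the poster's adapter; ImageLoader below refers to the poster's own class):

    ```java
    import android.graphics.Bitmap;
    import android.util.LruCache;

    // Hypothetical sketch: keep decoded bitmaps in an in-memory LruCache keyed by
    // URL, so rows that scroll back into view are filled from memory instead of
    // being downloaded and decoded again.
    public class BitmapMemoryCache {
        private final LruCache<String, Bitmap> cache;

        public BitmapMemoryCache() {
            // Use a fraction of the app's available memory (in KB) for the cache.
            int maxKb = (int) (Runtime.getRuntime().maxMemory() / 1024) / 8;
            cache = new LruCache<String, Bitmap>(maxKb) {
                @Override
                protected int sizeOf(String url, Bitmap bitmap) {
                    return bitmap.getByteCount() / 1024;
                }
            };
        }

        public Bitmap get(String url) { return cache.get(url); }

        public void put(String url, Bitmap bitmap) {
            if (url != null && bitmap != null && cache.get(url) == null) {
                cache.put(url, bitmap);
            }
        }
    }

    // Usage in getView(): Bitmap b = memoryCache.get(url);
    // if (b != null) imageView.setImageBitmap(b);
    // else hand the URL to ImageLoader and put() the bitmap when it arrives.
    ```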

    Read the article

  • Grouping Categorized Data In WPF.

    - by VoidDweller
    Here is what I am trying to do.

    Dynamic Category:
      - Columns can be 0 or more.
      - Must contain 1 or more Type Columns.
      - Will only be displayed if any row contains Type Column data associated with it.

    Data Rows:
      - Will be added asynchronously.
      - Will be grouped by a Common Category column.
      - Will add a Dynamic Category if it does not yet exist.
      - Will add a Type Column if it does not yet exist within its appropriate Dynamic Category.

    Platform Info: WPF, .NET 3.5 SP1, C#, MVVM

    I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? Envision this nicely styled. :-)

        --------------------------------------------------------------------------
        |[ Common Category ]   |[ Dynamic Category 0 ]|[ Dynamic Category N ]    |
        --------------------------------------------------------------------------
        |[Header 1]|[Header 2] |[ Type 0 ]|[ Type N ] |[ Type 0 ]|[ Type N ]     |
        --------------------------------------------------------------------------
        |[Data 2 Group]                                                          |
        --------------------------------------------------------------------------
        | Data A   | Data 2   || Null     | Data 1   || Data 0   | Data 1       ||
        | Data B   | Data 2   || Data 0   | Null     || Data 0   | Data 1       ||
        --------------------------------------------------------------------------
        |[Data 1 Group]                                                          |
        --------------------------------------------------------------------------
        | Data C   | Data 1   || Null     | Data 1   || Data 0   | Data 1       ||
        | Data D   | Data 1   || Null     | Null     || Data 0   | Null         ||
        --------------------------------------------------------------------------

    Edit: Sorting and paging are not necessary. I have looked at nested ListViews and DataGrids, and at dynamically building a Grid. Dynamically building a Grid and leveraging the SharedSizeGroup property seems the most promising strategy, but I am concerned about performance. Would a better approach be to consider this a dynamic report? If so, what should I be looking at? Thanks for your help.

    Read the article

  • Best way of showing more results with JavaScript/CSS

    - by Ricardo Neves
    I'm developing a website and I'm having trouble showing the search results to the user the way I want. Basically, after the user searches, the page makes a couple of AJAX requests and as soon as a response arrives it appends the info to a specific element on my page. Each result is shown as a line... The problem is that in most cases there are going to be more than 1000 results and this would give the page a very long scroll. My idea was to show only the first 15 results and, when the user clicks "show more", the element would expand and show the next 15 results and so on... This would be easier to do if the website wasn't responsive, but because it is I can't find the proper way of implementing what I want without lowering the website's performance. I have "2 ideas": The first is by using something like #element .div:nth-child(-n+15) in my CSS and figuring out a way of changing the "15" to however many results I want to show... I don't know if this can be done. Is it possible to call CSS rules with parameters? Maybe with LESS? The second option is probably a bad one if I don't want to lower the website's performance: using JavaScript I would check if there is a specific CSS class (like .show-15, .show-30, .show-45) and add that class to my element, and if it doesn't exist, create it somehow. Any help would be appreciated.

    Read the article

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book, amend, edit and delete trades. The system is regulated by a central deal-capture service, which informs all the users of any updates that occur. The problem comes when we have crashes: as the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However, these don't tell me what the user has been doing. I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything that I add has to go into the live environment, so it can't impact performance too much. Ideas-wise, I was thinking of a MACRO at the top of each function which acted like a stack trace (only I could supply additional user information, like trade IDs, user dialog choices, etc.). The system would record stack traces (on a per-thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then on a crash, I could dump this history stack. I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework? Thanks, Rich
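
    The cyclic-buffer idea can be sketched independently of C++; the Java outline below (an assumption, not an existing framework) keeps a bounded per-thread history of annotated steps and dumps it from the crash/exception handler. In C++ the record call would typically sit behind the MACRO mentioned above, for example as an RAII enter/leave helper.

    ```java
    import java.time.Instant;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical sketch: each thread keeps a small bounded history of annotated
    // steps; on a crash the handler dumps the history instead of relying only on
    // the final stack trace.
    public final class ActionTrail {
        private static final int CAPACITY = 256;
        private static final ThreadLocal<Deque<String>> TRAIL =
                ThreadLocal.withInitial(ArrayDeque::new);

        public static void record(String what) {
            Deque<String> trail = TRAIL.get();
            if (trail.size() == CAPACITY) {
                trail.removeFirst();                 // cyclic buffer: drop the oldest entry
            }
            trail.addLast(Instant.now() + " " + what);
        }

        public static String dump() {
            StringBuilder sb = new StringBuilder("User action history:\n");
            for (String line : TRAIL.get()) {
                sb.append("  ").append(line).append('\n');
            }
            return sb.toString();
        }
    }

    // Usage: ActionTrail.record("amend trade 4711: status -> CONFIRMED");
    // and in the crash/exception handler: log.error(ActionTrail.dump(), ex);
    ```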

    Read the article

  • How to efficiently implement a strategy pattern with Spring?

    - by Anth0
    I have a web application developed in J2EE 1.5 with the Spring framework. The application contains "dashboards", which are simple pages where a bunch of information is regrouped and where the user can modify some statuses. Managers want me to add a logging system in the database for three of these dashboards. Each dashboard has different information, but the log should be traced by date and user's login. What I'd like to do is implement the Strategy pattern, kind of like this:

        interface DashboardLog {
            void createLog(String login, Date now);
        }

        // Implementation for one dashboard
        class PrintDashboardLog implements DashboardLog {
            Integer docId;
            String status;

            void createLog(String login, Date now) {
                // Some code
            }
        }

        class DashboardsManager {
            DashboardLog logger;
            String login;
            Date now;

            void createLog() {
                logger.createLog(login, now);
            }
        }

        class UpdateDocAction {
            DashboardsManager dbManager;

            void updateSomeField() {
                // Some action
                // Now it's time to log
                dbManager.setLogger(new PrintDashboardLog(docId, status));
                dbManager.createLog();
            }
        }

    Is it "correct" (good practice, performance, ...) to do it this way? Is there a better way? Note: I did not write basic stuff like constructors and getters/setters.
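
    With Spring specifically, one way to keep the strategies out of the action classes is to let the container collect every DashboardLog bean into a Map keyed by bean name and pick one per dashboard. A sketch (an assumption, not the poster's code; DashboardLog is the interface from the snippet above, and the bean names are made up):

    ```java
    import java.util.Date;
    import java.util.Map;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;

    // Hypothetical sketch: Spring injects all DashboardLog beans, keyed by their
    // bean names (e.g. "printDashboardLog"), and the service picks the strategy
    // per dashboard instead of the action new-ing it.
    @Service
    public class DashboardLogService {
        private final Map<String, DashboardLog> strategies;

        @Autowired
        public DashboardLogService(Map<String, DashboardLog> strategies) {
            this.strategies = strategies;
        }

        public void log(String dashboardName, String login) {
            DashboardLog strategy = strategies.get(dashboardName);
            if (strategy == null) {
                throw new IllegalArgumentException("No log strategy for " + dashboardName);
            }
            strategy.createLog(login, new Date());
        }
    }
    ```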

    Read the article

  • Problem with incomplete input when using Attoparsec

    - by Dan Dyer
    I am converting some functioning Haskell code that uses Parsec to instead use Attoparsec, in the hope of getting better performance. I have made the changes and everything compiles, but my parser does not work correctly. I am parsing a file that consists of various record types, one per line. Each of my individual functions for parsing a record or comment works correctly, but when I try to write a function to compile a sequence of records, the parser always returns a partial result because it is expecting more input. These are the two main variations that I've tried. Both have the same problem.

        items :: Parser [Item]
        items = sepBy (comment <|> recordType1 <|> recordType2) endOfLine

    For this second one I changed the record/comment parsers to consume the end-of-line characters.

        items :: Parser [Item]
        items = manyTill (comment <|> recordType1 <|> recordType2) endOfInput

    Is there anything wrong with my approach? Is there some other way to achieve what I am attempting?

    Read the article

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically, I want to know if the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide between a higher-clocked dual core and a lower-clocked quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (IDE and background compiler). If the most important section (background compiler and other IDE tasks) runs on one core, then that core will hit its ceiling sooner on a quad core, especially if the background compiler is the heaviest task; I would imagine this would be difficult to separate into more than one process. So even if VS uses multiple cores, you might still be better off going for a higher-clocked CPU if the majority of the processing is still bound to occur on one core (i.e. the most significant part of the VS environment). I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... Anyone have any ideas? Thanks, Erx

    Read the article

  • Slow T-SQL query, convert to LINQ to Objects

    - by yimbot
    I have a T-SQL query which populates a DataSet from an MSSQL database:

        string qry = "SELECT * FROM EvnLog AS E WHERE TimeDate = (SELECT MAX(TimeDate) From EvnLog WHERE Code = E.Code) AND (Event = 8) AND (TimeDate BETWEEN '" + Start + "' AND '" + Finish + "')";

    The database is quite large and, this being the kind of nested query it is, the DataAdapter can take a number of minutes to fill the DataSet. I have extended the DataAdapter's timeout value to 480 seconds to combat it, but the client still complains about slow performance and occasional timeouts. To combat this, I was considering executing a simpler query (i.e. just taking the date range) and then populating a generic List which I could then run a LINQ query against. The simple query executes very quickly, which is great. However, I cannot seem to build a LINQ query which generates the same results as the T-SQL query above. Is this the best solution to this problem? Can anyone provide tips on rewriting the above T-SQL into LINQ? I have also considered using a DataView, but cannot seem to get the results from that either.

    Read the article

  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value?

    1. A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.)
    2. A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open source equivalent to Coverity.)
    3. Something else I haven't thought of.

    Or put another way, when you're using C family languages, is the top of your wish list more expressiveness, better error checking or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)

    Read the article

  • C++ BigInt multiplication conceptual problem

    - by Kapo
    I'm building a small BigInt library in C++ for use in my programming language. The structure is like the following:

        short digits[ 1000 ];
        int len;

    I have a function that converts a string into a bigint by splitting it up into single chars and putting them into digits. The numbers in digits are all reversed, so the number 123 would look like the following:

        digits[0]=3  digits[1]=2  digits[2]=1

    I have already managed to code the adding function, which works perfectly. It works somewhat like this:

        overflow = 0
        for i++ until length of both numbers exceeded:
            add numberA[ i ] to numberB[ i ]
            add overflow to the result
            set overflow to 0
            if the result is 10 or bigger:
                subtract 10 from the result
                overflow = 1
            put the result into numberReturn[ i ]

    (Overflow is in this case what happens when I add 1 to 9: subtract 10 from 10, add 1 to overflow, overflow gets added to the next digit.) So think of how two numbers are stored, like these:

            0 | 1 | 2
            ---------
        A   2   -   -
        B   0   0   1

    The above represents the digits of the bigints 2 (A) and 100 (B). "-" means uninitialized digits; they aren't accessed. So adding the above numbers works fine: start at 0, add 2 + 0, go to 1, add 0, go to 2, add 1. But: when I want to do multiplication with the above structure, my program ends up doing the following: start at 0, multiply 2 with 0 (eek), go to 1, ... So it is obvious that, for multiplication, I would have to get an order like this:

            0 | 1 | 2
            ---------
        A   -   -   2
        B   0   0   1

    Then everything would be clear: start at 0, multiply 0 with 0, go to 1, multiply 0 with 0, go to 2, multiply 1 with 2. How can I manage to get digits into the correct form for multiplication? I don't want to do any array moving/flipping - I need performance!
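
    For what it's worth, schoolbook multiplication can work directly on the reversed layout with no flipping at all, because the digit at index i simply has weight 10^i, so a[i] * b[j] contributes to result[i + j]. A sketch (written in Java for brevity, not the poster's C++; the indexing logic carries over unchanged):

    ```java
    // Hypothetical sketch of schoolbook multiplication over little-endian
    // ("reversed") digit arrays: each partial product lands at index i + j,
    // and carries propagate upward within each row.
    public final class BigIntMul {
        public static short[] multiply(short[] a, int lenA, short[] b, int lenB) {
            short[] result = new short[lenA + lenB];    // product has at most lenA + lenB digits
            for (int i = 0; i < lenA; i++) {
                int carry = 0;
                for (int j = 0; j < lenB; j++) {
                    int cur = result[i + j] + a[i] * b[j] + carry;
                    result[i + j] = (short) (cur % 10);
                    carry = cur / 10;
                }
                result[i + lenB] += carry;              // push the row's final carry upward
            }
            return result;                              // high digits left at 0 mean a shorter number
        }
    }
    ```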

    Read the article

  • What's the most trivial function that would benefit from being computed on a GPU?

    - by hanDerPeder
    Hi. I'm just starting out learning OpenCL. I'm trying to get a feel for what performance gains to expect when moving functions/algorithms to the GPU. The most basic kernel given in most tutorials is one that takes two arrays of numbers, sums the values at the corresponding indexes and puts the results in a third array, like so:

        __kernel void add(__global float *a, __global float *b, __global float *answer) {
            int gid = get_global_id(0);
            answer[gid] = a[gid] + b[gid];
        }

        __kernel void sub(__global float* n, __global float* answer) {
            int gid = get_global_id(0);
            answer[gid] = n[gid] - 2;
        }

        __kernel void ranksort(__global const float *a, __global float *answer) {
            int gid = get_global_id(0);
            int gSize = get_global_size(0);
            int x = 0;
            for(int i = 0; i < gSize; i++){
                if(a[gid] > a[i]) x++;
            }
            answer[x] = a[gid];
        }

    I am assuming that you could never justify computing this on the GPU; the memory transfer would outweigh the time it would take to compute this on the CPU by orders of magnitude (I might be wrong about this, hence this question). What I am wondering is: what would be the most trivial example where you would expect a significant speedup when using an OpenCL kernel instead of the CPU?

    Read the article

  • Obfuscator for .NET assembly (Maybe just a C++ obfuscator?)

    - by Pirate for Profit
    The software company I work for is using a ton of open source LGPL/BSD/MIT C++ code that we have written wrappers around to port "helper classes" into a .NET assembly, via C++/CLI. These libraries have wrapped old cryptic APIs into easy-to-use ones based on common sense, and will be very helpful for a lot of different tasks will be included in many future client's applications, and we might even license it to other software companies in the same field. So naturally we are tasked with looking into solutions for securing the code from prying eyes. What we're trying to do is stop the casual observer from seeing what's going on. Now I have hacked some crazy shit in EverQuest and other video games in my day so I know with enough tireless effort anything can be done. But we don't want to make it easy for whomever. To the point, besides the Visual Studio compiler's optimizations, is there's a C++ obfuscator or .NET assembly obfuscator (after it's been built o.O) or something that would scramble everything up, re-arrange data structures, string constants, etc. idk? And if such a thing exists, we'd be curious to know how that would impact performance, as some sections of code are time critical (funny saying that using a managed M$ framework).

    Read the article

  • Freetype2 (error-)return value documentation

    - by Awaki
    In short, I'm looking for documentation that would limit the error situations to check for after a Freetype library function failed, much like the OpenGL and Win32 APIs document the error codes generated by their respective functions. I can't seem to find such documentation though, so I was wondering how to best handle translation of Freetype errors to typed exceptions. Background: I am currently in the process of implementing font-rendering capability (using Freetype) for my GUI framework, which makes strong use of typed exceptions to indicate error situations. However, the Freetype docs seem to completely omit what errors can be expected from what functions. That, if such documentation does indeed not exist, would basically leave me with two options: either guessing which errors make sense for a certain Freetype function (obviously prone to mistakes on my part), or considering every error code for translation into appropriate exceptions (less verbose since I would have to write the translation only once). Performance isn't really critical in the code that calls the Freetype library, so even the latter option would probably be acceptable, but surely there must be some kind of documentation on which library calls may return what Freetype error? Is there any such documentation which I just somehow managed to not find? Should I go the route of generically expecting every error code for translation? Or are there other ways to approach this problem? By the way, I wanted to avoid introducing some kind of generic FreetypeException (containing a description of the Freetype error) since I intended to completely hide what libraries I'm using (not from a legal point-of-view, mind you), but I guess I can be convinced to do this anyway if the consensus is that it would be the best option. I don't think it matters for this question, but I'm writing in C++.

    Read the article

  • Is it acceptable to wrap PHP library functions solely to change the names?

    - by Carson Myers
    I'm going to be starting a fairly large PHP application this summer, on which I'll be the sole developer (so I don't have any coding conventions to conform to aside from my own). PHP 5.3 is a decent language IMO, despite the stupid namespace token. But one thing that has always bothered me about it is the standard library and its lack of a naming convention. So I'm curious: would it be seriously bad practice to wrap some of the most common standard library functions in my own functions/classes to make the names a little better? I suppose it could also add or modify some functionality in some cases, although at the moment I don't have any examples (I figure I will find ways to make them OO or make them work a little differently while I am working). If you saw a PHP developer do this, would you think "Man, this is one shoddy developer"? Additionally, I don't know much (or anything) about if/how PHP is optimized, and I know that usually PHP performance doesn't matter. But would doing something like this have a noticeable impact on the performance of my application?

    Read the article
