Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

Page 491/545

  • C++ BigInt multiplication conceptual problem

    - by Kapo
    I'm building a small BigInt library in C++ for use in my programming language. The structure is like the following: short digits[ 1000 ]; int len; I have a function that converts a string into a bigint by splitting it up into single chars and putting them into digits. The numbers in digits are all reversed, so the number 123 would look like the following: digits[0]=3 digits[1]=2 digits[2]=1 I have already managed to code the adding function, which works perfectly. It works somewhat like this: overflow = 0 for i ++ until length of both numbers exceeded: add numberA[ i ] to numberB[ i ] add overflow to the result set overflow to 0 if the result is 10 or bigger: subtract 10 from the result overflow = 1 put the result into numberReturn[ i ] (Overflow is in this case what happens when I add 1 to 9: Subtract 10 from 10, add 1 to overflow, overflow gets added to the next digit) So think of how two numbers are stored, like these: 0 | 1 | 2 --------- A 2 - - B 0 0 1 The above represents the digits of the bigints 2 (A) and 100 (B). - means uninitialized digits, they aren't accessed. So adding the above numbers works fine: start at 0, add 2 + 0, go to 1, add 0, go to 2, add 1 But: When I want to do multiplication with the above structure, my program ends up doing the following: Start at 0, multiply 2 with 0 (eek), go to 1, ... So it is obvious that, for multiplication, I have to get an order like this: 0 | 1 | 2 --------- A - - 2 B 0 0 1 Then, everything would be clear: Start at 0, multiply 0 with 0, go to 1, multiply 0 with 0, go to 2, multiply 1 with 2 How can I manage to get digits into the correct form for multiplication? I don't want to do any array moving/flipping - I need performance!
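
    A minimal sketch of how schoolbook multiplication can work directly on the reversed (least-significant-digit-first) layout described above, so no flipping is needed: digit i of A times digit j of B simply contributes to digit i+j of the product. The vector-based signature and names below are illustrative, not taken from the original post.

        #include <vector>

        // Multiply two numbers stored least-significant digit first (digits 0-9 each).
        std::vector<short> multiply(const std::vector<short>& a, const std::vector<short>& b)
        {
            std::vector<short> result(a.size() + b.size(), 0);
            for (std::size_t i = 0; i < a.size(); ++i) {
                int carry = 0;
                for (std::size_t j = 0; j < b.size(); ++j) {
                    int cur = result[i + j] + a[i] * b[j] + carry;   // contributes to digit i+j
                    result[i + j] = static_cast<short>(cur % 10);
                    carry = cur / 10;
                }
                result[i + b.size()] = static_cast<short>(result[i + b.size()] + carry);
            }
            while (result.size() > 1 && result.back() == 0)
                result.pop_back();                                   // trim leading zeros
            return result;
        }

    The same idea applies unchanged to the fixed short digits[1000] layout; only the bookkeeping of the result length differs.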

    Read the article

  • Updating UI objects in windows forms

    - by P a u l
    Pre-.NET I was using MFC, ON_UPDATE_COMMAND_UI, and the CCmdUI class to update the state of my Windows UI. From the older MFC/Win32 reference: Typically, menu items and toolbar buttons have more than one state. For example, a menu item is grayed (dimmed) if it is unavailable in the present context. Menu items can also be checked or unchecked. A toolbar button can also be disabled if unavailable, or it can be checked. Who updates the state of these items as program conditions change? Logically, if a menu item generates a command that is handled by, say, a document, it makes sense to have the document update the menu item. The document probably contains the information on which the update is based. If a command has multiple user-interface objects (perhaps a menu item and a toolbar button), both are routed to the same handler function. This encapsulates your user-interface update code for all of the equivalent user-interface objects in a single place. The framework provides a convenient interface for automatically updating user-interface objects. You can choose to do the updating in some other way, but the interface provided is efficient and easy to use. What is the guidance for .NET Windows Forms? I am using an Application.Idle handler in the main form but am not sure this is the best way to do this. About the time I put all my UI updates in the Idle event handler, my app started to show some performance problems, and I don't have the metrics to track this down yet. Not sure if it's related.

    Read the article

  • What's the most trivial function that would benefit from being computed on a GPU?

    - by hanDerPeder
    Hi. I'm just starting out learning OpenCL. I'm trying to get a feel for what performance gains to expect when moving functions/algorithms to the GPU. The most basic kernel given in most tutorials is a kernel that takes two arrays of numbers, sums the values at the corresponding indexes and writes them to a third array, like so: __kernel void add(__global float *a, __global float *b, __global float *answer) { int gid = get_global_id(0); answer[gid] = a[gid] + b[gid]; } __kernel void sub(__global float* n, __global float* answer) { int gid = get_global_id(0); answer[gid] = n[gid] - 2; } __kernel void ranksort(__global const float *a, __global float *answer) { int gid = get_global_id(0); int gSize = get_global_size(0); int x = 0; for(int i = 0; i < gSize; i++){ if(a[gid] > a[i]) x++; } answer[x] = a[gid]; } I am assuming that you could never justify computing this on the GPU; the memory transfer would outweigh the time it would take to compute this on the CPU by orders of magnitude (I might be wrong about this, hence this question). What I am wondering is what would be the most trivial example where you would expect a significant speedup when using an OpenCL kernel instead of the CPU?

    Read the article

  • return IQueryable<T> or List<T> in a Repository<T>

    - by Danny Chen
    Currently I'm building a Windows application using SQLite. In the database there is a table, say User, and in my code there is a Repository<User> and a UserManager. I think it's a very common design. In the repository there is a List method: //Repository<User> class public List<User> List(where, orderby, topN parameters and etc) { //query and return } This brings a problem: if I want to do something complex in UserManager.cs: //UserManager.cs public List<User> ListUsersWithBankAccounts() { var userRep = new UserRepository(); var bankRep = new BankAccountRepository(); var result = //do something complex, say "I want the users who live in NY //and have at least two bank accounts in the system" } You can see, returning List<User> brings a performance issue, because the query is executed earlier than expected. Now I need to change it to something like an IQueryable<T>: //Repository<User> class public TableQuery<User> List(where, orderby, topN parameters and etc) { //query and return } TableQuery<T> is part of the SQLite driver, and is roughly equivalent to IQueryable<T> in EF: it represents a query and won't execute it immediately. But now the problem is: UserManager.cs doesn't know what a TableQuery<T> is; I need to add a new reference and import namespaces like using SQLite.Query in the business layer project. It really feels like a code smell. Why should my business layer know the details of the database? Why should the business layer know what SQLite is? What's the correct design then?

    Read the article

  • Guidance required: First time working with a real high-end database (size = 50GB).

    - by claws
    I got a project designing a database. This is going to be my first big-scale project. The good thing about it is that the information is mostly organized & currently stored in text files. The size of this information is 50GB. There are going to be a few million records in each table. It's going to have around 50 tables. I need to provide a web interface for searching & browsing. I'm going to use the MySQL DBMS. I've never worked with a database of more than 200MB before. So, speed & performance were never a concern, but I followed things like normalization & indexes. I never used any kind of testing/benchmarking/query optimization/whatever because I never had to care about them. But here the purpose of creating a database is to make it quickly searchable. So, I need to consider all possible aspects in the design. I was browsing the archives & found: http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?

    Read the article

  • Android CursorAdapters, ListViews and background threads

    - by MattC
    This application I've been working on has databases with multiple megabytes of data to sift through. A lot of the activities are just ListViews descending through various levels of data within the databases until we reach "documents", which is just HTML to be pulled from the DB(s) and displayed on the phone. The issue I am having is that some of these activities need to have the ability to search through the databases by capturing keystrokes and re-running the query with a "like %blah%" in it. This works reasonably quickly except when the user is first loading the data and when the user first enters a keystroke. I am using a ResourceCursorAdapter and I am generating the cursor in a background thread, but in order to do a listAdapter.changeCursor(), I have to use a Handler to post it to the main UI thread. This particular call is then freezing the UI thread just long enough to bring up the dreaded ANR dialog. I'm curious how I can offload this to a background thread totally so the user interface remains responsive and we don't have ANR dialogs popping up. Just for full disclosure, I was originally returning an ArrayList of custom model objects and using an ArrayAdapter, but (understandably) the customer pointed out it was bad memory management and I wasn't happy with the performance anyway. I'd really like to avoid a solution where I'm generating huge lists of objects and then doing a listAdapter.notifyDataSetChanged/Invalidated() Here is the code in question: private Runnable filterDrugListRunnable = new Runnable() { public void run() { if (filterLock.tryLock() == false) return; cur = ActivityUtils.getIndexItemCursor(DrugListActivity.this); if (cur == null || forceRefresh == true) { cur = docDb.getItemCursor(selectedIndex.getIndexId(), filter); ActivityUtils.setIndexItemCursor(DrugListActivity.this, cur); forceRefresh = false; } updateHandler.post(new Runnable() { public void run() { listAdapter.changeCursor(cur); } }); filterLock.unlock(); updateHandler.post(hideProgressRunnable); updateHandler.post(updateListRunnable); } };

    Read the article

  • Grouping Categorized Data In WPF.

    - by VoidDweller
    Here is what I am trying to do. Dynamic Category: Columns can be 0 or more. Must contain 1 or more Type Columns. Will only be displayed if any row contains Type Column data associated with it. Data Rows: Will be added Asynchronously. Will be grouped by a Common Category column. Will add a Dynamic Category if it does not yet exist. Will add a Type Column if it does not yet exist within its appropriate Dynamic Category. Platform Info: WPF .Net 3.5 sp1 C# MVVM I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? Envision this nicely styled. :-) -------------------------------------------------------------------------- |[ Common Category ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]| -------------------------------------------------------------------------- |[Header 1]|[Header 2]|[ Type 0 ]|[ Type N ]|[ Type 0 ]|[ Type N ]| -------------------------------------------------------------------------- |[Data 2 Group] | -------------------------------------------------------------------------- | Data A | Data 2 || Null | Data 1 || Data 0 | Data 1 || | Data B | Data 2 || Data 0 | Null || Data 0 | Data 1 || -------------------------------------------------------------------------- |[Data 1 Group] | -------------------------------------------------------------------------- | Data C | Data 1 || Null | Data 1 || Data 0 | Data 1 || | Data D | Data 1 || Null | Null || Data 0 | Null || -------------------------------------------------------------------------- Edit: Sorting and Paging are not necessary. I have looked at nested ListViews and DataGrids, dynamically building a Grid. Dynamically building a Grid and leveraging the SharedSizeGroup property seems the most promising strategy, but I am concerned about performance. Would a better approach be to consider this a dynamic report? If so, what should I be looking at? Thanks for your help.

    Read the article

  • Implement threading to prevent a UI block on a bug in an async function

    - by Marcx
    I think I ran up against a bug in an async function... specifically the getDirectoryListingAsync() method of the File class... This method is supposed to return an object containing the list of files in a specified folder. I found that, calling this method on a directory with a lot of files (in my tests more than 20k files), after a few seconds the UI blocks until the process is completed... I think that this method is split into two main blocks: 1) get the list of files 2) create the array with the details of the files Point 1 seems to be async (for a few seconds the UI is responsive), then when the process passes from point 1 to point 2 the UI blocks until the complete event is dispatched... Here's some (simple) code: private function checkFiles(dir:File):void { if (dir.exists) { dir.addEventListener( FileListEvent.DIRECTORY_LISTING, listaImmaginiLocale); dir.getDirectoryListingAsync(); // after this point, for the first seconds the UI responds well (point 1), // a few seconds later (point 2) the UI is frozen } } private function listaImmaginiLocale( event:FileListEvent ):void { // from this point on the UI is responsive again... } Actually in my projects there are some functions that perform heavy CPU usage, and to prevent the UI block I implemented a simple function that after some iterations will wait, giving the UI time to be refreshed. private var maxIteration:int = 150000; private function sampleFunct(offset:int = 0) :void { if (offset < maxIteration) { // do something // call the recursive function using a timeout.. // if the offset is a multiple of 1000 the function will wait 15 millisec, // otherwise it will be called immediately // 1000 is a random number for the purpose of this example, but I usually change the // value based on how heavy the function itself is... setTimeout(function():void{sampleFunct(++offset);}, (offset%1000 ? 0 : 15)); } } Using this method I got a good responsive UI without affecting performance... I'd like to implement it inside the getDirectoryListingAsync method, but I don't know if it's possible, how I can do it, or where the file to edit or extend is... Any suggestions???

    Read the article

  • ASP.NET: Page HTML head rendering

    - by Fabian
    I've been trying to figure out the best way to custom render the <head> element of a page to get rid of the extra line breaks caused by <head runat="server">, so it's properly formatted. So far the only thing I've found which works is the following: protected override void Render(HtmlTextWriter writer) { StringWriter stringWriter = new StringWriter(); HtmlTextWriter htmlTextWriter = new HtmlTextWriter(stringWriter); base.Render(htmlTextWriter); htmlTextWriter.Close(); string html = stringWriter.ToString(); string newHTML = html.Replace("\r\n\r\n<!DOCTYPE", "<!DOCTYPE") .Replace("\r\n<html", "<html") .Replace("<title>\r\n\t", "<title>") .Replace("\r\n</title>", "</title>") .Replace("</head>", "\n</head>"); writer.Write(newHTML); } I define my head tag like <head runat="server">. Now I have 2 questions: How does the above code affect performance (i.e. is this viable in a production environment)? Is there a better way to do this, for example a method which I can override to just custom render the <head>? Oh yeah, ASP.NET MVC is not an option.

    Read the article

  • C++: Copy constructor: Use Getters or access member vars directly?

    - by cbrulak
    Have a simple container class: class Container { public: Container() {} Container(const Container& cont) //option 1 { SetMyString(cont.GetMyString()); } //OR Container(const Container& cont) //option 2 { m_str1 = cont.m_str1; } string GetMyString() { return m_str1; } void SetMyString(string str) { m_str1 = str; } private: string m_str1; }; So, would you recommend this method or accessing the member variables directly? In the example, all code is inline, but in our real code there is no inline code. Update (29 Sept 09): Some of these answers are well written; however, they seem to be missing the point of this question: this is a simple contrived example to discuss using getters/setters vs variables. Initializer lists or private validator functions are not really part of this question. I'm wondering if either design will make the code easier to maintain and expand. Some people are focusing on the string in this example; however, it is just an example, imagine it is a different object instead. I'm not concerned about performance. We're not programming on the PDP-11.

    Read the article

  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that dependent on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table CREATE TABLE `PlayerGameRcd` ( `User` SMALLINT UNSIGNED NOT NULL, `Game` MEDIUMINT UNSIGNED NOT NULL, `GameResult` ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin', 'Kicked by System', 'Finished 5th', 'Finished 4th', 'Finished 3rd', 'Finished 2nd', 'Finished 1st', 'Game Aborted', 'Playing', 'Hide' ) NOT NULL DEFAULT 'Playing', `Inherited` TINYINT NOT NULL, `GameCounts` TINYINT NOT NULL, `Colour` TINYINT UNSIGNED NOT NULL, `Score` SMALLINT UNSIGNED NOT NULL DEFAULT 0, `NumLongTurns` TINYINT UNSIGNED NOT NULL DEFAULT 0, `Notes` MEDIUMTEXT, `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0, PRIMARY KEY (`Game`, `User`), UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`), INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`), INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`), CONSTRAINT `Constr_PlayerGameRcd_User_fk` FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `Constr_PlayerGameRcd_Game_fk` FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci The only column that is nullable is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?

    Read the article

  • C++ design question, container of instances and pointers

    - by Tom
    Hi all, I'm wondering something. I have a class Polygon, which composes a vector of Line (another class here) class Polygon { std::vector<Line> lines; public: const_iterator begin() const; const_iterator end() const; } On the other hand, I have a function that calculates a vector of pointers to lines and, based on those lines, should return a pointer to a Polygon. Polygon* foo(Polygon& p){ std::vector<Line> lines = bar (p.begin(),p.end()); return new Polygon(lines); } Here's the question: I can always add a Polygon (vector Is there a better way than dereferencing each element of the vector and assigning it to the existing vector container? //for line in vector<Line*> v //vcopy is an instance of vector<Line> vcopy.push_back(*(v.at(i))); I think not, but I don't really like that approach. Hopefully, I will be able to convince the author of the class to change it, but I can't base my coding on that fact right now (and I'm scared of a performance hit). Thanks in advance.
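
    If Polygon can only be built from a vector<Line>, a per-element copy out of the vector<Line*> is unavoidable; the usual way to keep it cheap is a single reserve() followed by the dereferencing loop, so the target vector never reallocates. A minimal sketch (the Line fields and the helper name are illustrative, not from the original post):

        #include <vector>

        // Hypothetical Line type standing in for the real class.
        struct Line { double x1, y1, x2, y2; };

        std::vector<Line> copyLines(const std::vector<Line*>& v)
        {
            std::vector<Line> vcopy;
            vcopy.reserve(v.size());              // one allocation instead of many
            for (std::vector<Line*>::size_type i = 0; i < v.size(); ++i)
                vcopy.push_back(*v[i]);           // copies each pointed-to Line exactly once
            return vcopy;
        }

    If even that one pass of copies is too expensive, the remaining option is the change mentioned above: persuade the class author to add a constructor (or overload) that accepts iterators or the pointer vector directly.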

    Read the article

  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value? A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.) A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open source equivalent to Coverity.) Something else I haven't thought of. Or put another way, when you're using C family languages, is the top of your wish list more expressiveness, better error checking or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)

    Read the article

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book, amend, edit, and delete trades. The system is regulated by a central deal capture service. The deal capture service informs all the users of any updates that occur. The problem comes when we have crashes: as the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However this doesn't tell me what the user has been doing. I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything that I add has to go into the live environment so it can't impact performance too much. Ideas-wise I was thinking of a MACRO at the top of each function which acted like a stack trace (only I could supply additional user information, like trade IDs, user dialog choices, etc.). The system would record stack traces (on a per-thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then on crash, I could dump this history stack. I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework? Thanks Rich
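
    One way to flesh out the macro-plus-cyclic-buffer idea: a fixed-size ring buffer records one short entry per traced call, and the crash handler dumps it oldest-first. The sketch below is single-threaded and uses illustrative names only; a real version would keep one buffer per thread (e.g. via thread-local storage) and hook dump() into the existing crash-dump path.

        #include <cstdio>
        #include <cstring>

        const int HISTORY = 256;                        // how many recent calls to keep

        struct TraceBuffer {
            char entries[HISTORY][128];
            int  next;
            TraceBuffer() : next(0) { std::memset(entries, 0, sizeof entries); }

            void record(const char* func, const char* info) {
                std::sprintf(entries[next], "%.60s: %.60s", func, info);
                next = (next + 1) % HISTORY;            // overwrite the oldest entry
            }

            void dump() const {                         // call this from the crash handler
                for (int i = 0; i < HISTORY; ++i) {
                    const char* e = entries[(next + i) % HISTORY];
                    if (e[0]) std::fprintf(stderr, "%s\n", e);
                }
            }
        };

        TraceBuffer g_trace;                            // per-thread in a real system

        // __FUNCTION__ is a common compiler extension (MSVC, GCC).
        #define TRACE_CALL(info) g_trace.record(__FUNCTION__, (info))

        void bookTrade(int tradeId) {                   // hypothetical traced function
            char buf[32];
            std::sprintf(buf, "trade %d", tradeId);
            TRACE_CALL(buf);
            // ... real booking work ...
        }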

    Read the article

  • What's so bad about building XML with string concatenation?

    - by wsanville
    In the thread What’s your favorite “programmer ignorance” pet peeve?, the following answer appears, with a large number of upvotes: Programmers who build XML using string concatenation. My question is, why is building XML via string concatenation (such as a StringBuilder in C#) bad? I've done this several times in the past, as it's sometimes the quickest way for me to get from point A to point B when it comes to the data structures/objects I'm working with. So far, I have come up with a few reasons why this isn't the greatest approach, but is there something I'm overlooking? Why should this be avoided? Probably the biggest reason I can think of is you need to escape your strings manually, and most programmers will forget this. It will work great for them when they test it, but then "randomly" their apps will fail when someone throws an & symbol in their input somewhere. Ok, I'll buy this, but it's really easy to prevent the problem (SecurityElement.Escape to name one). When I do this, I usually omit the XML declaration (i.e. <?xml version="1.0"?>). Is this harmful? Performance penalties? If you stick with proper string concatenation (i.e. StringBuilder), is this anything to be concerned about? Presumably, a class like XmlWriter will also need to do a bit of string manipulation... There are more elegant ways of generating XML, such as using XmlSerializer to automatically serialize/deserialize your classes. Ok sure, I agree. C# has a ton of useful classes for this, but sometimes I don't want to make a class for something really quick, like writing out a log file or something. Is this just me being lazy? If I am doing something "real" this is my preferred approach for dealing w/ XML.

    Read the article

  • The correct usage of nested #pragma omp for directives

    - by GoldenLee
    The following code ran like a charm before OpenMP parallelization was applied. With the parallelization, though, the code ends up in an endless loop! I'm sure that results from my incorrect use of the OpenMP directives. Would you please show me the correct way? Thank you very much. #pragma omp parallel for for (int nY = nYTop; nY <= nYBottom; nY++) { for (int nX = nXLeft; nX <= nXRight; nX++) { // Use look-up table for performance dLon = theApp.m_LonLatLUT.LonGrid()[nY][nX] + m_FavoriteSVISSRParams.m_dNadirLon; dLat = theApp.m_LonLatLUT.LatGrid()[nY][nX]; // If you don't want to use longitude/latitude look-up table, uncomment the following line //NOMGeoLocate.XYToGEO(dLon, dLat, nX, nY); if (dLon > 180 || dLat > 180) { continue; } if (Navigation.GeoToXY(dX, dY, dLon, dLat, 0) > 0) { continue; } // Skip void data scanline dY = dY - nScanlineOffset; // Compute coefficients as well as its four neighboring points' values nX1 = int(dX); nX2 = nX1 + 1; nY1 = int(dY); nY2 = nY1 + 1; dCx = dX - nX1; dCy = dY - nY1; dP1 = pIRChannelData->operator [](nY1)[nX1]; dP2 = pIRChannelData->operator [](nY1)[nX2]; dP3 = pIRChannelData->operator [](nY2)[nX1]; dP4 = pIRChannelData->operator [](nY2)[nX2]; // Bilinear interpolation usNomDataBlock[nY][nX] = (unsigned short)BilinearInterpolation(dCx, dCy, dP1, dP2, dP3, dP4); } }
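
    A common cause of exactly this symptom is that the work variables (dLon, dLat, dX, dY, nX1, nY1, dCx, ...) are declared before the parallel region, so OpenMP treats them as shared and every thread races on them. A hedged sketch of the fix pattern, with the temporaries made per-iteration locals; the grid and interpolation details below are stand-ins, not the original code:

        #include <omp.h>
        #include <vector>

        // Simplified stand-in for the nested loop: every temporary lives inside the
        // loop body, so each thread gets its own copy, and only out[nY][nX] is written,
        // which is race-free because each (nY, nX) pair belongs to exactly one iteration.
        void fillBlock(std::vector<std::vector<double> >& out,
                       const std::vector<std::vector<double> >& lonGrid,
                       const std::vector<std::vector<double> >& latGrid)
        {
            const int rows = static_cast<int>(out.size());
            const int cols = static_cast<int>(out[0].size());

            #pragma omp parallel for
            for (int nY = 0; nY < rows; nY++) {
                for (int nX = 0; nX < cols; nX++) {
                    double dLon = lonGrid[nY][nX];      // per-iteration locals, not shared
                    double dLat = latGrid[nY][nX];
                    if (dLon > 180 || dLat > 180) continue;
                    out[nY][nX] = 0.5 * (dLon + dLat);  // placeholder for the interpolation
                }
            }
        }

    The alternative is to keep the declarations where they are and list every temporary in a private(...) clause on the #pragma, but declaring them at first use inside the loop is less error-prone.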

    Read the article

  • C++ Array vs vector

    - by blue_river
    When using a C++ vector, the time spent is 718 milliseconds, while when I use an array, the time is almost 0 milliseconds. Why is there such a big performance difference? int _tmain(int argc, _TCHAR* argv[]) { const int size = 10000; clock_t start, end; start = clock(); vector<int> v(size*size); for(int i = 0; i < size; i++) { for(int j = 0; j < size; j++) { v[i*size+j] = 1; } } end = clock(); cout<< (end - start) <<" milliseconds."<<endl; // 718 milliseconds int f = 0; start = clock(); int arr[size*size]; for(int i = 0; i < size; i++) { for(int j = 0; j < size; j++) { arr[i*size+j] = 1; } } end = clock(); cout<< ( end - start) <<" milliseconds."<<endl; // 0 milliseconds return 0; }
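
    Most of that 718 ms gap is an artefact of the benchmark rather than of std::vector itself: in an unoptimized build operator[] is a real function call with debug checks, the 10000*10000 int array is roughly 400 MB on the stack (which normally cannot even be allocated there), and since arr is never read afterwards an optimizer is free to drop that loop entirely. A sketch of a fairer comparison, assuming an optimized build and enough RAM for two 400 MB buffers:

        #include <vector>
        #include <ctime>
        #include <iostream>

        int main()
        {
            const long long n = 10000LL * 10000LL;

            std::clock_t start = std::clock();
            std::vector<int> v(static_cast<std::size_t>(n));
            for (long long i = 0; i < n; ++i) v[static_cast<std::size_t>(i)] = 1;
            std::clock_t mid = std::clock();

            int* arr = new int[n];                    // heap, not a 400 MB stack array
            for (long long i = 0; i < n; ++i) arr[i] = 1;
            std::clock_t end = std::clock();

            // Print something that depends on the data so neither loop can be removed.
            std::cout << "vector: " << (mid - start) << " clocks, "
                      << "array: "  << (end - mid)   << " clocks, "
                      << "check: "  << (v[0] + arr[n - 1]) << std::endl;
            delete[] arr;
            return 0;
        }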

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: What is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs, I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)?
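
    The tradeoff itself is language-neutral: per-read overhead shrinks as the buffer grows, while progress events (and UI updates) get rarer. Buffers in the region of 64-128 KiB are a common sweet spot for local disks; for slower network streams something smaller keeps progress reporting lively. A sketch of the chunked-copy-with-progress shape (plain C++ here purely for illustration; the 64 KiB size and the callback signature are assumptions, not values from the original library):

        #include <cstdio>
        #include <vector>

        typedef void (*ProgressFn)(long long copied, long long total);

        bool copyWithProgress(const char* src, const char* dst, ProgressFn onProgress)
        {
            std::FILE* in  = std::fopen(src, "rb");
            std::FILE* out = std::fopen(dst, "wb");
            if (!in || !out) { if (in) std::fclose(in); if (out) std::fclose(out); return false; }

            std::fseek(in, 0, SEEK_END);
            long long total = std::ftell(in);
            std::fseek(in, 0, SEEK_SET);

            std::vector<char> buffer(64 * 1024);          // 64 KiB chunks (assumed)
            long long copied = 0;
            std::size_t n;
            while ((n = std::fread(&buffer[0], 1, buffer.size(), in)) > 0) {
                std::fwrite(&buffer[0], 1, n, out);       // encryption etc. would go here
                copied += static_cast<long long>(n);
                if (onProgress) onProgress(copied, total);// one progress event per chunk
            }
            std::fclose(in);
            std::fclose(out);
            return true;
        }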

    Read the article

  • Freetype2 (error-)return value documentation

    - by Awaki
    In short, I'm looking for documentation that would limit the error situations to check for after a Freetype library function failed, much like the OpenGL and Win32 APIs document the error codes generated by their respective functions. I can't seem to find such documentation though, so I was wondering how to best handle translation of Freetype errors to typed exceptions. Background: I am currently in the process of implementing font-rendering capability (using Freetype) for my GUI framework, which makes strong use of typed exceptions to indicate error situations. However, the Freetype docs seem to completely omit what errors can be expected from what functions. That, if such documentation does indeed not exist, would basically leave me with two options: either guessing which errors make sense for a certain Freetype function (obviously prone to mistakes on my part), or considering every error code for translation into appropriate exceptions (less verbose since I would have to write the translation only once). Performance isn't really critical in the code that calls the Freetype library, so even the latter option would probably be acceptable, but surely there must be some kind of documentation on which library calls may return what Freetype error? Is there any such documentation which I just somehow managed to not find? Should I go the route of generically expecting every error code for translation? Or are there other ways to approach this problem? By the way, I wanted to avoid introducing some kind of generic FreetypeException (containing a description of the Freetype error) since I intended to completely hide what libraries I'm using (not from a legal point-of-view, mind you), but I guess I can be convinced to do this anyway if the consensus is that it would be the best option. I don't think it matters for this question, but I'm writing in C++.
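
    In the absence of per-function error documentation, one pragmatic pattern is to funnel every FT_Error through a single translation point, so each call site stays one line and new error codes only have to be classified once. A minimal sketch (the exception type and the mapping policy are assumptions of this sketch, not something the FreeType docs prescribe):

        #include <ft2build.h>
        #include FT_FREETYPE_H

        #include <sstream>
        #include <stdexcept>

        // Hypothetical library-neutral exception type used by the GUI framework.
        struct FontError : std::runtime_error {
            explicit FontError(const std::string& what) : std::runtime_error(what) {}
        };

        // Single choke point: every FreeType call is wrapped by this check.
        inline void ftCheck(FT_Error err, const char* context)
        {
            if (err == 0) return;                       // FT_Err_Ok
            std::ostringstream msg;
            msg << context << " failed (FreeType error " << err << ")";
            throw FontError(msg.str());                 // classify specific codes here if needed
        }

        // Usage sketch:
        //   FT_Library lib;
        //   ftCheck(FT_Init_FreeType(&lib), "FT_Init_FreeType");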

    Read the article

  • I'm annoyed with ASP.NET MVC action links. Is there something better in MVC3?

    - by Jonathon Kresner
    After almost 3 years with MVC I'm scratching my head. Is it just me, or does the way we specify links in ASP.NET MVC suck? @Html.ActionLink("Log Off", "LogOff", "Account") In the previews for MVC 1 we had the funky generic action links which gave us IntelliSense and compile checking, which I LOVED. I know they removed them because of performance issues and because you could not actually guarantee that the route would resolve all the time... However the default way of doing it just doesn't make me feel safe enough in a big application. I've also used T4MVC with MVC 2; to be honest, I didn't really like it. It's not part of the MVC framework and is frustrating to develop with, especially with source control in big teams and continuous integration builds. I guess I could also import MVC Futures and keep using the generic types (it's probably what I'll do). I'm just about to start a very big project and was wondering what other people are thinking. Is anyone else annoyed with the options, or does anyone have a new solution? It seems like ActionLinks are the most basic & most frequently used feature. Shouldn't there be a good out-of-the-box solution? We're just about to hit revision 3 of this framework.

    Read the article

  • How To perform a SQL Query to DataTable Operation That Can Be Cancelled

    - by David W
    I tried to make the title as specific as possible. Basically what I have running inside a BackgroundWorker thread now is some code that looks like: SqlConnection conn = new SqlConnection(connstring); SqlCommand cmd = new SqlCommand(query, conn); conn.Open(); SqlDataAdapter sda = new SqlDataAdapter(cmd); sda.Fill(Results); conn.Close(); sda.Dispose(); Where query is a string representing a large, time-consuming query, and conn is the connection object. My problem now is I need a stop button. I've come to realize killing the BackgroundWorker would be worthless because I still want to keep whatever results are left over after the query is canceled. Plus it wouldn't be able to check the canceled state until after the query. What I've come up with so far: I've been trying to conceptualize how to handle this efficiently without taking too big of a performance hit. My idea was to use a SqlDataReader to read the data from the query a piece at a time so that I had a "loop" in which to check a flag I could set from the GUI via a button. The problem is, as far as I know, I can't use the Load() method of a DataTable and still be able to cancel the SqlCommand. If I'm wrong please let me know because that would make cancelling slightly easier. In light of what I discovered I came to the realization I may only be able to cancel the SqlCommand mid-query if I did something like the below (pseudo-code): while(reader.Read()) { //check flag status //if it is set to 'kill' fire off the kill thread //otherwise populate the datatable with what was read } However, it would seem to me this would be highly inefficient and possibly costly. Is this the only way to kill a SqlCommand in progress when the results absolutely need to end up in a DataTable? Any help would be appreciated!

    Read the article

  • Move to PHP on Windows? Concerns, hints, "please don't do it!"?

    - by Daniel
    I am considering moving from Microsoft languages to PHP (just for web dev), which has quite an interesting syntax, a Perl-ish look (but a wider programmer base), and allows me to reuse the web without reinventing it. I have some concerns too. I would be more than happy to gather some wisdom from the stackoverflow community (challenges to my opinions warmly welcome). Here are my doubts. Efficiency. CGI is slow; what am I supposed to use? FastCGI? Or what else? Efficiency + stability. Is PHP on Windows really stable and a good choice in terms of performance? Database. I use MSSQL very often (I regret to say, I like it). Could I widely and efficiently interface PHP with MSSQL (using stored procedures smartly, for example)? XSLT + XML performance. I work quite a lot with XML and XSLT and I really find the MS XML parser a great software component. Are the parsers used in PHP fast, reliable and efficient (I am interested mainly in DOM, not SAX)? Objects. Is the PHP object programming model valid and efficient? Regex. How efficient is PHP at processing regexps? Many thanks for your advice.

    Read the article

  • How to do "map chunks", like Terraria or Minecraft maps?

    - by O'poil
    Due to performance issues, I have to cut my maps into chunks. I manage the maps in this way: listMap[x][y] = new Tile (x,y); I tried in vain to cut this list into several "chunks" to avoid loading the whole map, because the fps are not very high with a large map. And yet, when I Update or Draw I only do it over a small tile range. Here is how I proceed: foreach (List<Tile> list in listMap) { foreach (Tile leTile in list) { if ((leTile.Position.X < screenWidth + hero.Pos.X) && (leTile.Position.X > hero.Pos.X - tileSize) && (leTile.Position.Y < screenHeight + hero.Pos.Y) && (leTile.Position.Y > hero.Pos.Y - tileSize) ) { leTile.Draw(spriteBatch, gameTime); } } } (and the same thing for the Update method). So I am trying to learn from games like Minecraft or Terraria, and both manage maps much larger than mine without the slightest drop in fps. And apparently, they load "chunks". What I would like to understand is how to cut my list into chunks, and how to display them depending on the position of my character. I have tried many things without success. Thank you in advance for putting me on the right track! Ps: Again, sorry for my English :'( Pps: I'm not an experienced developer ;)
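
    The core idea is arithmetic, and it is the same whatever the engine: group tiles into fixed-size chunks, turn the camera rectangle into a small range of chunk coordinates with an integer division, and only touch those chunks when updating or drawing. A minimal sketch of the indexing (C++ here purely for illustration; the original project is C#/XNA, and the chunk size of 32 tiles is an assumption):

        #include <vector>

        struct Tile { int textureId; };                   // stand-in for the real Tile

        const int CHUNK = 32;                             // tiles per chunk side (assumed)

        struct Chunk { Tile tiles[CHUNK][CHUNK]; };

        struct Map {
            int chunksX, chunksY;
            std::vector<Chunk> chunks;                    // chunksX * chunksY entries

            Chunk& at(int cx, int cy) { return chunks[cy * chunksX + cx]; }

            // Visit only the chunks intersecting the camera rectangle (in tile units).
            template <class Visitor>
            void forVisible(int left, int top, int right, int bottom, Visitor visit)
            {
                for (int cy = top / CHUNK; cy <= bottom / CHUNK; ++cy)
                    for (int cx = left / CHUNK; cx <= right / CHUNK; ++cx)
                        if (cx >= 0 && cy >= 0 && cx < chunksX && cy < chunksY)
                            visit(at(cx, cy), cx, cy);    // draw/update just this chunk
            }
        };

    Loading works the same way: keep only the chunks in (or near) that visible range in memory, and stream the rest from disk as the camera moves.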

    Read the article
