Search Results

Search found 11972 results on 479 pages for 'writing'.

Page 401/479 | < Previous Page | 397 398 399 400 401 402 403 404 405 406 407 408  | Next Page >

  • Inheritance of TCollectionItem

    - by JamesB
    I'm planning to have a collection of items stored in a TCollection. Each item will derive from TBaseItem, which in turn derives from TCollectionItem, so the collection will return a TBaseItem when an item is requested.

    Each TBaseItem will have a Calculate function. In TBaseItem itself this just returns an internal variable, but in each of the derivations of TBaseItem the Calculate function requires a different set of parameters. The collection will have a CalculateAll function which iterates through the collection items and calls each Calculate function, so it obviously needs to pass the correct parameters to each one. I can think of three ways of doing this:

    1. Create a virtual/abstract method for each calculate function in the base class and override it in the derived class. This would mean no type casting is required when using the object, but it would also mean creating lots of virtual methods and having a large if...else statement detecting the type and calling the correct "calculate" method. It also makes calling the calculate method error-prone, since when writing the code you would have to know which one to call for which type, with the correct parameters, to avoid an Error/EAbstractError.

    2. Create a record structure with all the possible parameters in it and use this as the parameter for the "calculate" function. This has the added benefit that it can also be passed to the "calculate all" function, since it contains all the required parameters and avoids a potentially very long parameter list.

    3. Just type cast the TBaseItem to access the correct calculate method. This would tidy up TBaseItem quite a lot compared to the first approach.

    What would be the best way to handle this collection?
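
    The record idea in option 2 can be combined with the virtual method in option 1 so that CalculateAll never needs casts or type checks. The sketch below is only a rough C# analogue (the question itself concerns Delphi), and every name in it is made up for illustration:

        // One record carrying every parameter any item type might need.
        public class CalcParams
        {
            public double Rate;
            public int Quantity;
        }

        public abstract class BaseItem
        {
            protected double CachedValue;

            // The base implementation just returns the stored value.
            public virtual double Calculate(CalcParams p)
            {
                return CachedValue;
            }
        }

        public class TaxedItem : BaseItem
        {
            // Each derivation picks only the fields it needs from the record.
            public override double Calculate(CalcParams p)
            {
                return CachedValue * p.Rate;
            }
        }

        // CalculateAll then becomes a single loop with no type detection:
        //     foreach (BaseItem item in items) { total += item.Calculate(sharedParams); }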

    Read the article

  • How can I perform this query between related tables without using UNION?

    - by jeremy
    Suppose I have two separate tables that I wish to query. Both of these tables have a relation with a third table. How can I query both tables with a single, non-UNION-based query? I want the result of the search to rank the results by comparing a field on each table.

    Here's a theoretical example. I have a User table. That user can have both CDs and books. I want to find all of that user's books and CDs with a single query matching a string ("awesome" in this example). A UNION-based query might look like this:

        SELECT "book" AS model, name, ranking FROM book WHERE name LIKE 'Awesome%'
        UNION
        SELECT "cd" AS model, name, ranking FROM cd WHERE name LIKE 'Awesome%'
        ORDER BY ranking DESC

    How can I perform a query like this without the UNION? If I do a simple left join from User to Books and CDs, we end up with a total number of results equal to the number of matching CDs times the number of matching books. Is there a GROUP BY or some other way of writing the query to fix this?

    Read the article

  • How can I use TDD to solve a puzzle with an unknown answer?

    - by matthewsteele
    Recently I wrote a Ruby program to determine solutions to a "Scramble Squares" tile puzzle. I used TDD to implement most of it, leading to tests that looked like this:

        it "has top, bottom, left, right" do
          c = Cards.new
          card = c.cards[0]
          card.top.should == :CT
          card.bottom.should == :WB
          card.left.should == :MT
          card.right.should == :BT
        end

    This worked well for the lower-level "helper" methods: identifying the "sides" of a tile, determining if a tile can be validly placed in the grid, etc. But I ran into a problem when coding the actual algorithm to solve the puzzle. Since I didn't know valid possible solutions to the problem, I didn't know how to write a test first. I ended up writing a pretty ugly, untested algorithm to solve it:

        def play_game
          working_states = []
          after_1 = step_1
          i = 0
          after_1.each do |state_1|
            step_2(state_1).each do |state_2|
              step_3(state_2).each do |state_3|
                step_4(state_3).each do |state_4|
                  step_5(state_4).each do |state_5|
                    step_6(state_5).each do |state_6|
                      step_7(state_6).each do |state_7|
                        step_8(state_7).each do |state_8|
                          step_9(state_8).each do |state_9|
                            working_states << state_9[0]
                          end
                        end
                      end
                    end
                  end
                end
              end
            end
          end
        end

    So my question is: how do you use TDD to write a method when you don't already know the valid outputs?

    If you're interested, the code's on GitHub:
    Tests: https://github.com/mattdsteele/scramblesquares-solver/blob/master/golf-creator-spec.rb
    Production code: https://github.com/mattdsteele/scramblesquares-solver/blob/master/game.rb
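
    One way to test a solver without known answers is to test properties of whatever it returns rather than exact values: every reported solution must use all nine tiles exactly once and pass the already-tested edge-matching check. The question is about Ruby/RSpec, but here is a rough NUnit-style C# sketch of that idea; Solver, Cards and Board are hypothetical names, not from the original code:

        using System.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class SolverProperties
        {
            [Test]
            public void Every_solution_satisfies_the_placement_rules()
            {
                var solver = new Solver(Cards.Load());

                foreach (var solution in solver.Solve())
                {
                    // Structural invariants that hold for any valid answer,
                    // even though the exact answer is unknown in advance.
                    Assert.That(solution.Tiles.Count, Is.EqualTo(9));
                    Assert.That(solution.Tiles.Distinct().Count(), Is.EqualTo(9));

                    // Reuses the lower-level, already-tested validity check.
                    Assert.That(Board.AllEdgesMatch(solution), Is.True);
                }
            }
        }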

    Read the article

  • Could this be considered a well-written PHP5 class?

    - by Ben Dauphinee
    I have been learning OOP principles on my own for a while, and have taken a few cracks at writing classes. What I really need to know now is whether I am actually using what I have learned correctly, or if I could improve as far as OOP is concerned. I have chopped a massive portion of code out of a class that I have been working on for a while now, and pasted it here. To all you skilled and knowledgeable programmers here I ask: am I doing it wrong?

        class acl extends genericAPI{

            // -- Copied from genericAPI class
            protected final function sanityCheck($what, $check, $vars){
                switch($check){
                    case 'set':
                        if(isset($vars[$what])){return(1);}else{return(0);}
                        break;
                }
            }
            // ---------------------------------

            protected $db = null;
            protected $dataQuery = null;

            public function __construct(Zend_Db_Adapter_Abstract $db, $config = array()){
                $this->db = $db;
                if(!empty($config)){$this->config = $config;}
            }

            protected function _buildQuery($selectType = null, $vars = array()){
                // Removed switches for simplicity's sake
                $this->dataQuery = $this->db->select(
                )->from(
                    $this->config['table_users'],
                    array('tf' => '(CASE WHEN count(*) > 0 THEN 1 ELSE 0 END)')
                )->where(
                    $this->config['uidcol'] . ' = ?', $vars['uid']
                );
            }

            protected function _sanityRun_acl($sanitycheck, &$vars){
                switch($sanitycheck){
                    case 'uid_set':
                        if(!$this->sanityCheck('uid', 'set', $vars)){
                            throw new Exception(ERR_ACL_NOUID);
                        }
                        $vars['uid'] = settype($vars['uid'], 'integer');
                        break;
                }
            }

            private function user($action = null, $vars = array()){
                switch($action){
                    case 'exists':
                        $this->_sanityRun_acl('uid_set', $vars);
                        $this->_buildQuery('user_exists_idcheck', $vars);
                        return($this->db->fetchOne($this->dataQuery->__toString()));
                        break;
                }
            }

            public function user_exists($uid){
                return($this->user('exists', array('uid' => $uid)));
            }
        }

        $return = $acl_test->user_exists(1);

    Read the article

  • Cost of logic in a query

    - by FrustratedWithFormsDesigner
    I have a query that looks something like this:

        select xmlelement("rootNode",
            (case when XH.ID is not null
                then xmlelement("xhID", XH.ID)
                else xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID)
            end),
            (case when XH.SER_NUM is not null
                then xmlelement("serialNumber", XH.SER_NUM)
                else xmlelement("serialNumber", xmlattributes('true' AS "xsi:nil"), XH.SER_NUM)
            end),
            /* repeat this pattern for many more columns from the same table... */
        FROM XH
        WHERE XH.ID = 'SOMETHINGOROTHER'

    It's ugly and I don't like it, and it is also the slowest-executing query (there are others of similar form, but much smaller, and they aren't causing any major problems - yet). Maintenance is relatively easy as this is mostly a generated query, but my concern now is performance. I am wondering how much of an overhead there is for all of these case expressions. To see if there was any difference, I wrote another version of this query as:

        select xmlelement("rootNode",
            xmlforest(XH.ID, XH.SER_NUM, ...

    (I know that this query does not produce exactly the same thing; my plan was to move the logic to PL/SQL or XSL.) I tried to get execution plans for both versions, but they are the same. I'm guessing that the logic does not get factored into the execution plan. My gut tells me the second version should execute faster, but I'd like some way to prove that (other than writing a PL/SQL test function with timing statements before and after the query and running that code over and over again to get a test sample). Is it possible to get a good idea of how much the case-when will cost? Also, I could write the case-when using the decode function instead. Would that perform better than case statements?

    Read the article

  • How to set buffer size in client-server app using sockets?

    - by nelly
    First of all, I am new to networking, so I may say dumb things here. Consider a client-server application using sockets (.NET with C#, if that matters).

    The client sends some data, the server processes it and sends back a string. The client sends some other data, the server processes it, queries the db and sends back several hundreds of items from the database. The client sends some other type of data and the server notifies some other clients.

    My question is how to set the buffer size correctly for read/write operations. Should I do something like this: byte[] buff = new byte[client.ReceiveBufferSize]? I am thinking of something like this (the client sends data to the server, and the server will follow the same pattern):

        byte[] bytesToSend = new byte[2048]   // 2048 to be standard for any command sent by the client
                                              //   bytes 0..1    -> command type
                                              //   bytes 1..2047 -> command parameters
        byte[] bytesToReceive = new byte[8] / byte[64] / byte[8192]   // switch(command type)

    But... what happens when a client is notified by the server without sending data? What is the correct way to accomplish what I am trying to do? Thanks for reading.
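
    Rather than fixing every message at 2048 bytes, a common pattern for this kind of protocol is to prefix each message with its length, then read in a loop until that many bytes have arrived; the same framing lets the server push notifications whenever it wants, since a notification is just another framed message. This is only a sketch of that idea, not code from the question:

        using System;
        using System.IO;
        using System.Net.Sockets;

        static class Framing
        {
            // Send: 4-byte length prefix followed by the payload.
            public static void WriteMessage(NetworkStream stream, byte[] payload)
            {
                byte[] prefix = BitConverter.GetBytes(payload.Length);
                stream.Write(prefix, 0, prefix.Length);
                stream.Write(payload, 0, payload.Length);
            }

            // Receive: read the prefix, then keep reading until the whole payload is in.
            public static byte[] ReadMessage(NetworkStream stream)
            {
                int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
                return ReadExactly(stream, length);
            }

            static byte[] ReadExactly(NetworkStream stream, int count)
            {
                byte[] buffer = new byte[count];
                int read = 0;
                while (read < count)
                {
                    // Read may return fewer bytes than requested, so loop.
                    int n = stream.Read(buffer, read, count - read);
                    if (n == 0) throw new IOException("Connection closed before the full message arrived.");
                    read += n;
                }
                return buffer;
            }
        }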

    Read the article

  • How to rename a private file of my application?

    - by Ungureanu Liviu
    Hi! I want to rename a context-private file created with openFileOutput(), but I don't know how... I tried this:

        File file = getFileStreamPath(optionsMenuView.getPlaylistName()); // this file already exists
        try {
            // create a new file with the new name
            FileOutputStream outStream = openFileOutput(newPlaylistName, Context.MODE_WORLD_READABLE);
            outStream.close();
        } catch (FileNotFoundException e) {
            Log.e(TAG, "file not found!");
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(TAG, "IO exception");
            e.printStackTrace();
        }
        Log.e(TAG, "rename status: " + file.renameTo(getFileStreamPath(newPlaylistName))); // it returns true

    This code throws FileNotFoundException, but the documentation says "Open a private file associated with this Context's application package for writing. Creates the file if it doesn't already exist.", so the new file should be created on disk. The problem: when I try to read from the new, renamed file I get FileNotFoundException! Thank you!

    Read the article

  • Why is the compiler not compiling a line in C++ Builder?

    - by MLB
    Hi boys: I was programming an application in C++ Builder 6, and I encountered this strange problem:

        void RotateDice()
        {
            Graphics::TBitmap *MYbitmap = new Graphics::TBitmap();
            Randomize();
            int rn = random(6) + 1;
            switch (rn)
            {
                case 1:
                {
                    //...
                    break;
                }
                //... Some cases...
            }
            ShowDice();       // it's a function to show the dice
            delete MYbitmap;  // the compiler don't get it!!!!
        }

    At the line "ShowDice()", the compiler jumps to the end of the RotateDice() method; it doesn't "see" the line "delete MYbitmap". When I compile the program, every compiled line shows a little blue point on its left side, but that line doesn't show the blue point... it's like the compiler doesn't "see" that line of code. What's happening here?

    Note: Some days ago, I was writing a program in Delphi and I was advised of that problematic issue. Something like that happened to me in Delphi 7... So, what is the problem? I am so sorry about my English. I am from Cuba.

    Read the article

  • Casting pointer to object to void * in C++

    - by JB
    I've been reading Stack Overflow too much and have started doubting all the code I've ever written; I keep thinking "Is that undefined behaviour?" even in code that has been working for ages.

    So my question: is it safe and well-defined behaviour to cast a pointer to an object (in this case abstract interface classes) to a void*, and then later cast it back to the original class and call methods through it?

    I'm fully aware that the code that does this is probably awful. I wouldn't even consider writing it like this now (this is old code which I don't really want to change), so I'm not looking for a discussion of better ways to do this. I already know how to write it better if I ever did this again. But if it's actually broken to rely on this in C++ then I'll have to look at changing the code; if it's merely awful code then changing it won't be a priority.

    I would have had no doubts about something this simple a year or two ago, but as my understanding of C++ increases I actually find I have more and more worries about code being safe under the standard even if it works perfectly well. Perhaps reading too much Stack Overflow is a bad thing for productivity sometimes :P

    Read the article

  • Compressing a database to a single file?

    - by Assimilater
    Hi all. In my contact manager program I have been storing information by reading and writing comma-delimited files for each individual contact, and storing notes in a file for each note, and I'm wondering how I could go about shrinking them all into one file effectively. I have attempted to use the data entry tools in the Visual Studio toolbox and template classes, though I have never quite figured out how to use them. What would be especially convenient is if I could store data as my IOwner type (a class I created) as opposed to strings. I'd also need to figure out how to tell the program what to do when a file is opened (I've noticed in the properties how to associate a file type with the program, though I'm not sure how to tell it what to do when such a file is opened).

    Edit: How about rephrasing the question? I have a class IContact with various properties, some of them being lists of other class objects. I have a public list of IContact. Can I write Contacts as List(Of IContact) to a file, as opposed to a bunch of strings?

    Second part of the question: I have associated .cms files with my program. But if a user opens such a file, what code should the program run through in an attempt to deal with it? The file is going to contain data that the program needs to read; how do I tell the program to read a file when it is launched indirectly because that file was opened? Does this make the question clearer?
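
    On the first part (writing the whole list as one file instead of many strings), one common .NET approach is to serialize the list with XmlSerializer. It needs concrete classes rather than interfaces, so the sketch below uses a made-up Contact class standing in for IContact; treat all names here as illustrative assumptions, not the asker's actual types:

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class Contact
        {
            public string Name { get; set; }
            public List<string> Notes { get; set; }
        }

        public static class ContactStore
        {
            // Writes every contact (including nested lists) into a single XML file.
            public static void Save(List<Contact> contacts, string path)
            {
                var serializer = new XmlSerializer(typeof(List<Contact>));
                using (FileStream stream = File.Create(path))
                {
                    serializer.Serialize(stream, contacts);
                }
            }

            // Reads the whole list back in one call, e.g. when a .cms file is opened.
            public static List<Contact> Load(string path)
            {
                var serializer = new XmlSerializer(typeof(List<Contact>));
                using (FileStream stream = File.OpenRead(path))
                {
                    return (List<Contact>)serializer.Deserialize(stream);
                }
            }
        }

    For the second part, a file type associated with the program is normally passed to it as a command-line argument on launch, so one way to handle the "opened by double-click" case is to check the application's startup arguments for a .cms path and load it from there.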

    Read the article

  • Image creation performance / image caching

    - by Kilnr
    Hello, I'm writing an application that has a scrollable image (used to display a map). The map background consists of several tiles (premade from a big JPG file) that I draw on a Graphics object. I also use a cache (Hashtable) to avoid having to create every image when I need it. I don't keep everything in memory, because that would be too much.

    The problem is that when I'm scrolling through the map and I need an image that wasn't cached, it takes about 60-80 ms to create it. Depending on screen resolution, tile size and scroll direction, this can occur multiple times in one scroll operation (for different tiles). In my case, it often happens that this needs to be done 4 times, which introduces a delay of more than 300 ms, which is extremely noticeable.

    The easiest thing for me would be some way to speed up the creation of Images, but I guess that's just wishful thinking... Besides that, I suppose the most obvious thing to do is to load the tiles predictively (e.g. when scrolling to the right, precache the tiles to the right), but then I'm faced with the rather difficult task of thinking up a halfway decent algorithm for this.

    My actual question then is: how can I best do this predictive loading? Maybe I could offload the creation of images to a separate thread? Other things to consider? Thanks in advance.
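
    The predictive part can stay fairly simple: when a scroll starts, work out which tile keys lie just beyond the visible edge in the scroll direction and hand them to a background worker that decodes them into the cache, so the UI thread only ever blits ready images. The question reads as Java, so the C# sketch below is a language-agnostic illustration of the idea only; TileKey, LoadTile and the tile size are hypothetical:

        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Drawing;
        using System.Threading.Tasks;

        public struct TileKey
        {
            public int Column;
            public int Row;
        }

        public class TileCache
        {
            private readonly ConcurrentDictionary<TileKey, Bitmap> cache =
                new ConcurrentDictionary<TileKey, Bitmap>();

            // Called when a scroll begins or changes direction, with the tiles
            // predicted to come into view next (e.g. the column to the right).
            public void Prefetch(IEnumerable<TileKey> predictedTiles)
            {
                foreach (TileKey key in predictedTiles)
                {
                    if (!cache.ContainsKey(key))
                    {
                        TileKey captured = key;
                        // Decode off the UI thread; TryAdd quietly ignores duplicates.
                        Task.Run(() => cache.TryAdd(captured, LoadTile(captured)));
                    }
                }
            }

            // Synchronous fallback for the rare tile that was not predicted in time.
            public Bitmap Get(TileKey key)
            {
                return cache.GetOrAdd(key, LoadTile);
            }

            private Bitmap LoadTile(TileKey key)
            {
                // Placeholder: slice/decode the real tile from the big JPG here.
                return new Bitmap(256, 256);
            }
        }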

    Read the article

  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on:

    1. Copying of the sdf file to bin
    My sdf file is located inside an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf was copied to debug\bin. This is where all future reads/writes operate. At some point when this is deployed, the file will go into a data folder using ClickOnce, but during development where should I be putting this sdf? Is having it in the bin typical, or are there any other recommendations?

    2. Updating the sdf
    It appears that writing to the sdf file does not immediately update the database. I am using LINQ to SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this: file locking, buffering, something else?

    Update
    3. Unit tests
    I have an MSTest project, and the sdf file is not being copied to the correct output directory. I have the settings:
    Build Action: Content
    Copy to Output Directory: Copy Always
    The message is: System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database.

    I appreciate any guidance on these questions, thanks. If there is a tutorial other than what is on MSDN that you know about, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.

    Read the article

  • select2: "text is undefined" when getting json using ajax

    - by user3046715
    I'm having an issue when getting JSON results back into select2. My JSON does not return a result that has a "text" field, so I need to format the result so that select2 accepts "Name". This code works if the text field in the JSON is set to "text", but in this case I cannot change the formatting of the JSON result (that code is outside my control).

        $("#e1").select2({
            formatNoMatches: function(term) { return term + " does not match any items."; },
            ajax: { // instead of writing the function to execute the request we use Select2's convenient helper
                url: "localhost:1111/Items.json",
                dataType: 'jsonp',
                cache: true,
                quietMillis: 200,
                data: function (term, page) {
                    return {
                        q: term, // search term
                        p: page,
                        s: 15
                    };
                },
                results: function (data, page) {
                    // parse the results into the format expected by Select2.
                    var numPages = Math.ceil(data.total / 15);
                    return { results: data.Data, numPages: numPages };
                }
            }
        });

    I have looked into the documentation and found some statements you can put into the results, such as text: 'Name', but I am still getting "text is undefined". Thanks for any help.

    Read the article

  • When using Direct3D, how much math is being done on the CPU?

    - by zirgen
    Context: I'm just starting out. I'm not even touching the Direct3D 11 API, and am instead looking at understanding the pipeline, etc.

    From looking at documentation and information floating around the web, it seems like some calculations are being handled by the application. That is, instead of simply presenting matrices to multiply to the GPU, the calculations are being done by a math library that operates on the CPU. I don't have any particular resources to point to, although I guess I can point to the XNA Math Library or the samples shipped in the February DX SDK. When you see code like mViewProj = mView * mProj;, that projection is being calculated on the CPU. Or am I wrong?

    If you were writing a program where you can have 10 cubes on the screen, and you can move or rotate cubes as well as the viewpoint, what calculations would you do on the CPU? I think I would store the geometry for a single cube, and then transform matrices representing the actual instances. And then it seems I would use the XNA math library, or another of my choosing, to transform each cube in model space, then get the coordinates in world space, then push the information to the GPU. That's quite a bit of calculation on the CPU. Am I wrong?

    Am I reaching conclusions based on too little information and understanding? What terms should I Google for, if the answer is STFW? Or, if I am right, why aren't these calculations being pushed to the GPU as well?
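
    To make the split concrete, here is a rough XNA-style C# sketch of the per-frame CPU work for the ten-cube scenario described above: the CPU composes one small world matrix per cube and hands the matrices to the effect, while the per-vertex transform itself runs on the GPU in the vertex shader. The cube fields and buffer setup are assumptions for illustration, not code from the question:

        // Inside Game.Draw, with basicEffect, cubes, cameraPosition, cameraTarget already set up.

        // View and projection are computed once per frame on the CPU.
        Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio, 0.1f, 100f);

        foreach (Cube cube in cubes)   // ten cubes -> ten cheap matrix multiplies on the CPU
        {
            // One world matrix per instance; the shared cube geometry never changes.
            Matrix world = Matrix.CreateRotationY(cube.RotationY) *
                           Matrix.CreateTranslation(cube.Position);

            basicEffect.World = world;
            basicEffect.View = view;
            basicEffect.Projection = projection;

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                // Draw the shared cube vertex/index buffers here;
                // the per-vertex World*View*Projection multiply happens on the GPU.
            }
        }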

    Read the article

  • C++ overloading delete, retrieve size

    - by user300713
    Hi, I am currently writing a small custom memory allocator in C++, and want to use it together with operator overloading of new/delete.

    My memory allocator basically checks if the requested memory is over a certain threshold and, if so, uses malloc to allocate the requested memory chunk. Otherwise the memory is provided by some fixed-pool allocators. That generally works, but my deallocation function looks like this:

        void MemoryManager::deallocate(void * _ptr, size_t _size){
            if(_size > heapThreshold)
                deallocHeap(_ptr);
            else
                deallocFixedPool(_ptr, _size);
        }

    So I need to provide the size of the chunk pointed to, in order to deallocate from the right place. The problem is that the delete keyword does not provide any hint about the size of the deleted chunk, so I would need something like this:

        void operator delete(void * _ptr, size_t _size){
            MemoryManager::deallocate(_ptr, _size);
        }

    But as far as I can see, there is no way to determine the size inside the delete operator. If I want to keep things the way they are right now, would I have to save the size of the memory chunks myself? Any ideas on how to solve this are welcome! Thanks!

    Read the article

  • Implicit conversion between Scala collection types

    - by ebruchez
    I would like to implicitly convert between the Scala XML Elem object and another representation of an XML element, in my case dom4j Element. I wrote the following implicit conversions:

        implicit def elemToElement(e: Elem): Element = ... do conversion here ...
        implicit def elementToElem(e: Element): Elem = ... do conversion here ...

    So far so good, this works. Now I also need collections of said elements to convert both ways. First, do I absolutely need to write additional conversion methods? Things didn't seem to work if I didn't. I tried to write the following:

        implicit def elemTToElementT(t: Traversable[Elem]) = t map (elemToElement(_))
        implicit def elementTToElemT(t: Traversable[Element]) = t map (elementToElem(_))

    This doesn't look too ideal, because if the conversion method takes a Traversable, then it also returns a Traversable. If I pass a List, I also get a Traversable out. So I assume the conversion should be parametrized somehow. So what's the standard way of writing these conversions in order to be as generic as possible?

    Read the article

  • subscript requires array or pointer ERROR

    - by Kristian
    Hi, I know what my mistake is; I just can't figure out how to solve it. I'm writing a WinAPI program that counts how many 'a' characters are found in a given file. I'm still getting the error "subscript requires array or pointer" (please find the comment in the code):

        #include "stdafx.h"
        #include <windows.h>

        int _tmain(int argc, _TCHAR* argv[])
        {
            WCHAR str = L'a';
            HANDLE A;
            TCHAR *fn;
            fn = L"d:\\test.txt";
            A = CreateFile(fn, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if(A == INVALID_HANDLE_VALUE)
            {
                _tprintf(L"cannot open file \n");
            }
            else
            {
                DWORD really;
                int countletter;
                int stringsize;
                do
                {
                    BYTE x[1024];
                    ReadFile(A, x, 1024, &really, NULL);
                    stringsize = sizeof(really);
                    for(int i = 0; i < stringsize; i++)
                    {
                        if(really[i] == str) // here I'm getting the error
                            countletter++;
                    }
                } while(really == 1024);
                CloseHandle(A);
                _tprintf(L"Numbers of A's found is %d \n", countletter);
            }
            return 0;
        }

    Now, I know I can't make a comparison between an array and a WCHAR, but how do I fix it?

    Read the article

  • Cross-Application User Authentication

    - by Chris Lieb
    We have a webapp written in .NET that uses NTLM for SSO. We are writing a new webapp in Java that will tightly integrate with the original application. Unfortunately, Java has no support for performing the server portion of NTLM authentication, and the only library that I can find requires too much setup to be allowed by IT. To work around this, I came up with a remote authentication scheme to work across applications and would like your opinions on it. It does not need to be extremely secure, but at the same time it should not be easily broken.

    1. The user is authenticated into the .NET application using NTLM.
    2. The user clicks a link that leaves the .NET application.
    3. The .NET application generates a random number and stores it in the user table along with the user's full username (domain\username).
    4. An insecure token is formed as random number:username.
    5. The insecure token is run through a secure cipher (likely AES-256) using a pre-shared key stored within the application to produce a secure token.
    6. The secure token is passed as part of the query string to the Java application.
    7. The Java application decrypts the secure token using the same pre-shared key stored within its own code to get the insecure token.
    8. The random number and username are split apart.
    9. The username is used to retrieve the user's information from the user table, and the stored random number is checked against the one pulled from the insecure token.
    10. If the numbers match, the username is put into the session for the user and they are now authenticated.
    11. If the numbers do not match, the user is redirected to the .NET application's home page.
    12. The random number is removed from the database.
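
    For step 5, a minimal C# sketch of producing the secure token on the .NET side might look like the following; the pre-shared key, random number and username variables are assumptions standing in for whatever the application already has, and the IV is prepended so the Java side can decrypt with the same key (and the same mode/padding):

        using System;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;
        using System.Web;

        public static class TokenBuilder
        {
            // presharedKey must be 32 bytes for AES-256 and identical in both applications.
            public static string BuildSecureToken(long randomNumber, string username, byte[] presharedKey)
            {
                byte[] insecureToken = Encoding.UTF8.GetBytes(randomNumber + ":" + username);

                using (Aes aes = Aes.Create())
                {
                    aes.Key = presharedKey;
                    aes.GenerateIV();   // a fresh IV per token, sent along with the ciphertext

                    using (ICryptoTransform encryptor = aes.CreateEncryptor())
                    {
                        byte[] cipher = encryptor.TransformFinalBlock(insecureToken, 0, insecureToken.Length);
                        byte[] ivPlusCipher = aes.IV.Concat(cipher).ToArray();

                        // Base64 then URL-encode so the token survives the query string.
                        return HttpUtility.UrlEncode(Convert.ToBase64String(ivPlusCipher));
                    }
                }
            }
        }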

    Read the article

  • Checking if a file is a directory or just a file.

    - by Jookia
    I'm writing a program to check if something is a file or is a directory. Is there a better way to do it than this?

        #include <stdio.h>
        #include <sys/types.h>
        #include <dirent.h>
        #include <errno.h>

        int isFile(const char* name)
        {
            DIR* directory = opendir(name);

            if(directory != NULL)
            {
                closedir(directory);
                return 0;
            }

            if(errno == ENOTDIR)
            {
                return 1;
            }

            return -1;
        }

        int main(void)
        {
            const char* file = "./testFile";
            const char* directory = "./";

            printf("Is %s a file? %s.\n", file, ((isFile(file) == 1) ? "Yes" : "No"));
            printf("Is %s a directory? %s.\n", directory, ((isFile(directory) == 0) ? "Yes" : "No"));

            return 0;
        }

    Read the article

  • What is the fastest (possibly unsafe) way to read a byte[]?

    - by Aidiakapi
    I'm working on a server project in C#, and after a TCP message is received it is parsed and stored in a byte[] of exact size. (Not a buffer of fixed length, but a byte[] of an absolute length in which all data is stored.) Now, for reading this byte[] I'll be creating some wrapper functions (also for compatibility); these are the signatures of all the functions I need:

        public byte ReadByte();
        public sbyte ReadSByte();
        public short ReadShort();
        public ushort ReadUShort();
        public int ReadInt();
        public uint ReadUInt();
        public float ReadFloat();
        public double ReadDouble();
        public string ReadChars(int length);
        public string ReadString();

    The string is a \0-terminated string, and is probably encoded in ASCII or UTF-8, but I cannot tell that for sure, since I'm not writing the client. The data consists of:

        byte[] _data;
        int _offset;

    Now I can write all those functions manually, like this:

        public byte ReadByte()
        {
            return _data[_offset++];
        }

        public sbyte ReadSByte()
        {
            byte r = _data[_offset++];
            if (r >= 128) return (sbyte)(r - 256);
            else return (sbyte)r;
        }

        public short ReadShort()
        {
            byte b1 = _data[_offset++];
            byte b2 = _data[_offset++];
            if (b1 >= 128) return (short)(b1 * 256 + b2 - 65536);
            else return (short)(b1 * 256 + b2);
        }

        public short ReadUShort()
        {
            byte b1 = _data[_offset++];
            return (short)(b1 * 256 + _data[_offset++]);
        }

    But I wonder if there's a faster way, not excluding the use of unsafe code, since this seems like a little bit too much work for simple processing.
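
    Since the sample readers compose bytes as b1 * 256 + b2, the protocol appears to be big-endian, which rules out a plain BitConverter call on little-endian machines. A minimal sketch of a tighter version, assuming that byte order, is shown below: bit shifts plus an unchecked cast give the same results as the branching arithmetic without any conditionals, and ReadSByte becomes a single cast. This is only an illustration of the technique, not the project's actual code:

        public sbyte ReadSByte()
        {
            // Reinterpreting the bit pattern replaces the r >= 128 branch.
            return unchecked((sbyte)_data[_offset++]);
        }

        public short ReadShort()
        {
            short value = unchecked((short)((_data[_offset] << 8) | _data[_offset + 1]));
            _offset += 2;
            return value;
        }

        public ushort ReadUShort()
        {
            ushort value = (ushort)((_data[_offset] << 8) | _data[_offset + 1]);
            _offset += 2;
            return value;
        }

        public int ReadInt()
        {
            int value = (_data[_offset] << 24) | (_data[_offset + 1] << 16) |
                        (_data[_offset + 2] << 8) | _data[_offset + 3];
            _offset += 4;
            return value;
        }

    If the protocol turned out to be little-endian instead, BitConverter.ToInt16(_data, _offset) and its siblings would do the same job in one call each.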

    Read the article

  • How to model in J2EE / JEE?

    - by Harry
    Let's say I have decided to go with the J(2)EE stack for my enterprise application. Now, for domain modelling (or: for designing the M of MVC), which APIs can I safely assume and use, and which should I stay away from... say, via a layer of abstraction?

    For example, should I go ahead and litter my Model with calls to the Hibernate/JPA API? Or should I build an abstraction (a persistence layer of my own) to avoid hard-coding against these two specific persistence APIs?

    Why I ask this: A few years ago, there was this Kodo API which got superseded by Hibernate. If one had designed a persistence layer and coded the rest of the model against this layer (instead of littering the Model with calls to a specific vendor API), it would have allowed one to (relatively) easily switch from Kodo to Hibernate to xyz.

    Also, is it recommended to make aggressive use of the *QL provided by your persistence vendor in your domain model? I'm not aware of any real-world issues (like performance, scalability, portability, etc.) arising out of a heavy use of an HQL-like language.

    Why I ask this: I would like to avoid, as much as possible, writing custom code when the same could be accomplished via a query language that is more portable than SQL.

    Sorry, but I'm a complete newbie to this area. Where could I find more info on this topic? Many thanks in advance. /HS

    Read the article

  • how can i make sure only a single record is inserted when multiple apache threads are trying to acce

    - by Ed Gl
    I have a web service (an XML-RPC service, to be exact) that handles, among other things, writing data into the database. Here's the scenario: I often receive requests to either update or insert a record. What I do is this:

    1. If the record already exists, append to the record.
    2. If not, create a new record.

    The issue is that there are certain times when I get a 'burst' of requests, which spawns several Apache threads to handle them. These 'bursts' come within less than a millisecond of each other. I now have several threads performing #1 and #2. Often two threads will 'pass' #1 and actually create two duplicate records (except for the primary key).

    I'd like to use some locking mechanism to prevent other threads from accessing the table while the first thread finishes its work. I'm just afraid of using it because if something happens I don't want to leave the table locked. Is there a solid way of handling this? I'm open to using locks if I can do it properly. Thanks,

    Read the article

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is SLOW when trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I wrote to look up which colors should be placed where, I get 1 FPS.

    I think it is because of my object-oriented code. Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm calling at least 360K getters per frame, which I think is a lot of overhead. Each class contains either/and-or 1D or 2D arrays containing custom enumerations, int, Color, Vector2D, or bytes.

    What if I combined all of the classes into just one, and accessed the contents of each array directly? The code would look a mess and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will be done in blocks. E.g., all writing to arrays will take place before any data is accessed from them.

    As for casting, I stated that I'm using custom enumerations, int, Color, Vector2D, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, C#? I think that constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to use constant multiplication, addition, subtraction, division per frame. :)
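
    As a point of comparison, a common structure for this kind of emulator-style renderer is to keep the whole frame as one flat Color[] (plus a byte[] of palette indices), fill it with plain array indexing, and upload it once per frame with Texture2D.SetData; no per-pixel property calls, casts, or allocations. This is only a sketch under those assumptions (the screen size, palette and indexBuffer names are illustrative, not the poster's actual fields):

        // Fields set up once (e.g. in LoadContent):
        //   texture     = new Texture2D(GraphicsDevice, 256, 240);
        //   frame       = new Color[256 * 240];    // reused every frame
        //   palette     = new Color[64];            // NES-style palette
        //   indexBuffer = new byte[256 * 240];      // which palette entry each pixel uses

        protected override void Draw(GameTime gameTime)
        {
            // Tight loop: array reads and writes only, no getters/setters or casts.
            for (int i = 0; i < frame.Length; i++)
            {
                frame[i] = palette[indexBuffer[i]];
            }

            GraphicsDevice.Textures[0] = null;   // ensure the texture isn't still bound before writing to it
            texture.SetData(frame);              // one upload per frame

            spriteBatch.Begin();
            spriteBatch.Draw(texture, GraphicsDevice.Viewport.Bounds, Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }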

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a third-party client executable. The Qt-based server uses the QTcpServer and QTcpSocket facilities (non-blocking). Going through articles on TCP, I understand the following: the original TCP specification defined the negotiable window size as a 16-bit value, thus a maximum of 65535 bytes, but implementations often use the RFC window-scale extension, which allows the sliding window size to be scaled by bit-shifting to yield a maximum of 1 gigabyte. This is implementation-defined, which could result in very different window sizes on the receiver and sender ends; the server uses Qt facilities without hardcoding any window size limit.

    The client first asks for all the information it can, based on previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for several MBs of data. The server processes these and puts the results into the send buffer. The client, however, is unable to handle the messages at the same pace, and it seems that the client's receive buffer is far smaller (65535 bytes, maybe) than the sender's transmit window. The messages thus accumulate at the sender's end until the sender's buffer is full too, after which TCP writes on the sender would block. This however does not happen, as the sender buffer is much larger, so it manifests as increasing memory consumption on the sender's end.

    To prevent this from happening, I used the Qt socket's waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. As far as I can see from the behaviour, this blocks the thread writing TCP data until the data has actually been taken up by the receiver's window (which happens when earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible.

    Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understandings are incorrect.

    Read the article

  • C# Convert string to nullable type (int, double, etc...)

    - by Nathan Koop
    I am attempting to do some data conversion. Unfortunately, much of the data is in strings, where it should be ints or doubles, etc... So what I've got is something like this:

        double? amount = Convert.ToDouble(strAmount);

    The problem with this approach is that if strAmount is empty, I want amount to be null, so that when I add it into the database the column will be null. So I ended up writing this:

        double? amount = null;
        if(strAmount.Trim().Length > 0)
        {
            amount = Convert.ToDouble(strAmount);
        }

    Now this works fine, but I now have five lines of code instead of one. This makes things a little more difficult to read, especially when I have a large number of columns to convert. I thought I'd use an extension to the string class and generics to pass in the type, because it could be a double, or an int, or a long. So I tried this:

        public static class GenericExtension
        {
            public static Nullable<T> ConvertToNullable<T>(this string s, T type) where T : struct
            {
                if (s.Trim().Length > 0)
                {
                    return (Nullable<T>)s;
                }
                return null;
            }
        }

    But I get the error: Cannot convert type 'string' to 'T?'. Is there a way around this? I am not very familiar with creating methods using generics.
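
    The cast fails because a string can never be converted directly to an arbitrary T?; the conversion has to go through something that knows how to parse, such as Convert.ChangeType. A minimal sketch of that approach is below; it drops the unused type parameter (T can be given explicitly at the call site) and assumes the target types implement IConvertible, which int, double and long all do:

        using System;
        using System.Globalization;

        public static class GenericExtension
        {
            public static T? ConvertToNullable<T>(this string s) where T : struct
            {
                if (s == null || s.Trim().Length == 0)
                {
                    return null;
                }

                // Convert.ChangeType parses the string into the requested type at runtime;
                // the cast then unboxes the result as T.
                return (T)Convert.ChangeType(s, typeof(T), CultureInfo.InvariantCulture);
            }
        }

        // Usage:
        //     double? amount = strAmount.ConvertToNullable<double>();
        //     int?    count  = strCount.ConvertToNullable<int>();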

    Read the article
