Search Results

Search found 6634 results on 266 pages for 'fast fashion'.


  • Stack and Hash joint

    - by Alexandru
    I'm trying to write a data structure which is a combination of Stack and HashSet with fast push/pop/membership (I'm looking for constant-time operations). Think of Python's OrderedDict. I tried a few things and I came up with the following code: HashInt and SetInt. I need to add some documentation to the source, but basically I use a hash with linear probing to store indices into a vector of the keys. Since linear probing always puts the last element at the end of a continuous range of already-filled cells, pop() can be implemented very easily without a sophisticated remove operation. I have the following problems: the data structure consumes a lot of memory (some improvement is obvious: stackKeys is larger than needed), and some operations are slower than if I had used fastutil (e.g. pop(), and even push() in some scenarios). I tried rewriting the classes using fastutil and trove4j, but the overall speed of my application halved. What performance improvements would you suggest for my code? What open-source library/code do you know that I can try?
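
    A minimal sketch of the combination in Python (the poster's code is Java, but the structure is language-neutral): a plain list keeps LIFO order while a dict of occurrence counts gives O(1) average membership. The class and method names here are illustrative, not taken from the HashInt/SetInt sources.

        class HashStack:
            """Stack with O(1) push/pop and O(1) average membership tests."""

            def __init__(self):
                self._items = []    # LIFO order
                self._counts = {}   # key -> number of occurrences on the stack

            def push(self, key):
                self._items.append(key)
                self._counts[key] = self._counts.get(key, 0) + 1

            def pop(self):
                key = self._items.pop()
                if self._counts[key] == 1:
                    del self._counts[key]
                else:
                    self._counts[key] -= 1
                return key

            def __contains__(self, key):
                return key in self._counts

    The memory question from the post survives translation: two structures hold every key, which is exactly the stackKeys overhead being asked about.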

    Read the article

  • Two strange efficiency problems in Mathematica

    - by Jess Riedel
    FIRST PROBLEM
    I have timed how long it takes to compute the following statements (where V[x] is a time-intensive function call):

        Alice = Table[V[i],{i,1,300},{1000}];
        Bob = Table[Table[V[i],{i,1,300}],{1000}]^tr;
        Chris_pre = Table[V[i],{i,1,300}];
        Chris = Table[Chris_pre,{1000}]^tr;

    Alice, Bob, and Chris are identical matrices computed 3 slightly different ways. I find that Chris is computed 1000 times faster than Alice and Bob. It is not surprising that Alice is computed 1000 times slower because, naively, the function V must be called 1000 times more than when Chris is computed. But it is very surprising that Bob is so slow, since he is computed identically to Chris except that Chris stores the intermediate step Chris_pre. Why does Bob evaluate so slowly?

    SECOND PROBLEM
    Suppose I want to compile a function in Mathematica of the form f(x)=x+y where "y" is a constant fixed at compile time (but which I prefer not to replace directly in the code with its numerical value, because I want to be able to easily change it). If y's actual value is y=7.3, and I define

        f1=Compile[{x},x+y]
        f2=Compile[{x},x+7.3]

    then f1 runs 50% slower than f2. How do I make Mathematica replace "y" with "7.3" when f1 is compiled, so that f1 runs as fast as f2? Many thanks!

    Read the article

  • Perform case-insensitive lookup on an Array in MongoDB?

    - by Hal
    So, I've decided to get my feet wet with MongoDB and love it so far. It seems very fast and flexible, which is great. But I'm still going through the initial learning curve, and as such I'm spending hours digging for info on the most basic things. I've searched through the MongoDB online documentation and have spent hours Googling through pages without any mention of this. I know Mongo is still quite new (v1.x), so it explains why there isn't much information yet. I've even tried looking for books on Mongo without much luck. So yes, I've tried to RTFM with no luck, so now I turn to you. I have an array of various hashtags nested in each document (i.e. #apples, #oranges, #Apples, #APPLES) and I would like to perform a case-insensitive find() to access all the documents containing apples in any case. It seems that find() does support some regex with /i, but I can't seem to get this working either. Anyway, I hope this is a quick answer for someone. Here's my existing call in PHP, which is case-sensitive:

        $cursor = $collection->find(array("hashtags" => array("#".$keyword)))
                             ->sort(array('$natural' => -1))
                             ->limit(10);

    Help?
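
    For reference, here is what the anchored, case-insensitive regex query looks like with the Python driver (a sketch; the database and collection names are made up). The PHP driver accepts the same thing as MongoRegex('/^#apples$/i'). One caveat worth knowing: a case-insensitive regex cannot use the index on hashtags, so it scans the collection.

        import re
        from pymongo import MongoClient, DESCENDING

        client = MongoClient()
        coll = client.mydb.posts  # hypothetical database/collection

        # Mongo applies the regex to each element of the "hashtags" array,
        # so this matches #apples, #Apples, #APPLES, ...
        pattern = re.compile(r"^#apples$", re.IGNORECASE)
        cursor = coll.find({"hashtags": pattern}).sort("$natural", DESCENDING).limit(10)

    A common schema-side fix is to store a lowercased copy of each tag next to the display form, so exact (indexed) lookups stay possible.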

    Read the article

  • Reading Xml with XmlReader in C#

    - by Gloria Huang
    I'm trying to read the following XML document as fast as I can and let additional classes manage the reading of each sub-block:

        <ApplicationPool>
          <Accounts>
            <Account>
              <NameOfKin></NameOfKin>
              <StatementsAvailable>
                <Statement></Statement>
              </StatementsAvailable>
            </Account>
          </Accounts>
        </ApplicationPool>

    I'm trying to use the XmlReader object to read each Account and subsequently the StatementsAvailable. Do you suggest using XmlReader.Read and checking each element and handling it? I've thought of separating my classes to handle each node properly. So there's an AccountBase class that accepts an XmlReader instance and reads the NameOfKin and several other properties about the account. Then I was wanting to iterate through the Statements and let another class fill itself out about the Statement (and subsequently add it to an IList). Thus far I have the "per class" part done by doing XmlReader.ReadElementString(), but I can't work out how to tell the pointer to move to the StatementsAvailable element and let me iterate through them and let another class read each of those properties. Sounds easy!
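
    In C#, XmlReader.ReadSubtree() is the usual tool for this: it hands each class a bounded reader that covers just its own element. The same dispatch pattern, sketched with Python's stdlib streaming parser (tag names taken from the document above, class names invented):

        import xml.etree.ElementTree as ET

        class Statement:
            def __init__(self, elem):
                self.body = elem.text  # parse the statement's fields here

        def read_accounts(path):
            accounts = []
            # iterparse streams the file; we act on each element as it closes.
            for event, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == "Account":
                    name = elem.findtext("NameOfKin")
                    stmts = [Statement(s) for s in elem.iter("Statement")]
                    accounts.append((name, stmts))
                    elem.clear()  # discard the subtree to keep memory flat
            return accounts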

    Read the article

  • Take screenshot with Selenium: WaitForPageToLoad does not wait long enough

    - by OregonGhost
    I'm trying to get screenshots from a web page with multiple browsers. Just experimenting with Selenium RC, I wrote code like this:

        var sel = new DefaultSelenium(server, 4444, target, url);
        sel.Start();
        sel.Open(url);
        sel.WaitForPageToLoad("30000");
        var imageString = sel.CaptureScreenshotToString();

    This basically works, but in most cases the screenshot is of a blank browser window, because the page is not yet ready for display. It kind of works if I add a sleep just after the WaitForPageToLoad, but that slows down the fast browsers and/or may be too short for the slower browsers (or under load). A typical solution for this seems to be to wait for the presence of a certain element. However, this is meant as a simple generic solution to get a screenshot of a local web page with as many browsers as possible (to test the layout), and I don't want to have to enter certain element names or whatever. It's a simple tool where you just enter the Selenium Server URL and the URL you want to test, and get the screenshots back. Any advice?
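
    One element-agnostic option is to poll the browser's document.readyState after WaitForPageToLoad returns, which waits for rendering without naming any page-specific element. A hedged sketch with the Selenium RC Python client (the C# client's GetEval call should work the same way; the timeout and poll interval are arbitrary):

        import time

        def wait_until_rendered(sel, timeout=30.0, poll=0.25):
            """Block until document.readyState is "complete" or timeout expires."""
            deadline = time.time() + timeout
            while time.time() < deadline:
                if sel.get_eval("window.document.readyState") == "complete":
                    return True
                time.sleep(poll)
            return False

    Even then, readyState does not cover late-loading images or scripted rendering, so a short grace sleep after it reports complete may still be needed.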

    Read the article

  • PHP Frameworks: Codeigniter vs. Yii vs. Custom?

    - by Industrial
    Hi everybody, I have used CodeIgniter for some years now. Why did I choose to work with CodeIgniter back then? Pretty much for the extensive documentation that was available and the big user community. As a total newbie to the MVC pattern, it let me get a site up and running really fast. What I prioritize is that the framework doesn't affect performance too much, which CodeIgniter seems to be pretty good at (when compared to other frameworks out there), and Yii looks like an even better option. In the time since I started out with CodeIgniter, project sizes have also increased, and with them the demands on the framework and its footprint on the code. I have thought a few times about writing a whole new MVC framework to do only the things I want it to do, but that feels like reinventing the wheel and I cannot yet justify it. I am not sure whether or not it's a good idea to build a site that has the potential to become really big on either Yii or CodeIgniter. I have tried to find as much documentation as possible about this comparison before posting here, but have found very few real-life arguments and stories from people who have shifted between the two PHP frameworks or have been in the same situation as me. So - what are your thoughts about CodeIgniter vs. Yii vs. going custom?
    References:
    http://daniel.carrera.bz/2009/01/comparison-of-php-frameworks-part-i/
    http://www.beyondcoding.com/2009/03/02/choosing-a-php-framework-round-2-yii-vs-kohana-vs-codeigniter/

    Read the article

  • How do you remove a site from SharePoint Designer?

    - by xpda
    I would like to use SharePoint Designer 2007 as an HTML editor. I have a web site with a lot of files in a folder on my hard drive. I do not want SharePoint Designer to make a web site out of this. I just want to use SharePoint Designer to edit the HTML files locally. If I ever make a mistake and click on a tool for Sites, such as Summary or Report, SharePoint Designer will decide that my folder is now a web site. From that point on, SharePoint Designer is painfully slow whenever I open a file contained in the folder that SharePoint decided is my web site, instead of being instantaneous like it was before. I can resolve this situation by renaming the folder containing my web site -- everything gets fast again. I can also fix it by uninstalling and reinstalling SharePoint Designer. Neither of these is a good solution. Is there a place in SharePoint Designer, or in application data or the registry, where I can kill off the SharePoint Designer web site that's associated with a folder on my hard drive?

    Read the article

  • Read large file into sqlite table in Objective-C on iPhone

    - by James Testa
    I have a 2 MB file, not too large, that I'd like to put into an SQLite database so that I can search it. There are about 30K entries that are in CSV format, with six fields per line. My understanding is that SQLite on the iPhone can handle a database of this size. I have taken a few approaches, but they have all been slow, over 30 s. I've tried: 1) Using C code to read the file and parse the fields into arrays. 2) Using the following Objective-C code to parse the file and put it directly into the SQLite database:

        NSString *file_text = [NSString stringWithContentsOfFile:filePath usedEncoding:NULL error:NULL];
        NSArray *lineArray = [file_text componentsSeparatedByString:@"\n"];
        for (int k = 0; k < [lineArray count]; k++) {
            NSArray *parts = [[lineArray objectAtIndex:k] componentsSeparatedByString:@","];
            NSString *field0 = [parts objectAtIndex:0];
            NSString *field2 = [parts objectAtIndex:2];
            NSString *field3 = [parts objectAtIndex:3];
            NSString *loadSQLi = [[NSString alloc] initWithFormat:
                @"INSERT INTO TABLE (FIELD0, FIELD2, FIELD3) VALUES ('%@', '%@', '%@');",
                field0, field2, field3];
            if (sqlite3_exec(db_table, [loadSQLi UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
                sqlite3_close(db_table);
                NSAssert1(0, @"Error loading table: %s", errorMsg);
            }
        }

    Am I missing something? Does anyone know of a fast way to get the file into a database? Or is it possible to translate the file into an SQLite format that can be read directly into SQLite? Or should I turn the file into a plist and load it into a Dictionary? Unfortunately I need to search on two of the fields, and I think a Dictionary can only have one key? Jim
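
    The usual culprit here is that each sqlite3_exec INSERT runs as its own transaction, which forces a disk sync per row; wrapping the whole load in BEGIN/COMMIT and reusing one statement prepared with sqlite3_prepare_v2 typically turns a 30-second import into a second or two. The same idea in Python's sqlite3 module, as a minimal sketch with made-up file and column names:

        import csv
        import sqlite3

        conn = sqlite3.connect("data.db")  # hypothetical database file
        conn.execute("CREATE TABLE IF NOT EXISTS entries (f0 TEXT, f2 TEXT, f3 TEXT)")

        with open("data.csv") as fh:
            rows = ((r[0], r[2], r[3]) for r in csv.reader(fh))
            # One transaction and one prepared statement for all 30K rows,
            # instead of 30K implicit single-row transactions.
            with conn:
                conn.executemany("INSERT INTO entries VALUES (?, ?, ?)", rows)

    Bound parameters (the ? placeholders) also avoid the quoting problems of building the SQL string with string formatting.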

    Read the article

  • How should I write a jQuery Mobile app for browsers with and without JavaScript support?

    - by Adrian Grigore
    Hi, I'm trying to wrap my head around jQuery Mobile. My aim is to build a very fast application with a look and feel as close as possible to a native app (at least for modern devices). I understand there are two ways of navigating between pages:
    1. Loading each page as a separate page and linking to other pages with regular HTML anchors.
    2. Putting all (or many) pages on one single web page and navigating between them by means of JavaScript (the $.mobile.changePage method and similar API functions).
    The first approach should work on all browsers, but performs quite poorly since there is a delay between each page transition. The second looks like it should be much faster, so I would definitely prefer this approach. But how would that work for mobile device browsers without JavaScript support? It certainly seems to violate jQuery Mobile's aim to provide a gracefully degraded experience for C-grade browsers. It looks to me like I need to implement my app twice, once optimized for browsers with JavaScript support, once for browsers without? Using <noscript> may be another option, but that looks even more messy. What's the recommended way to approach this dilemma? Is there anything I have not noticed? Thanks, Adrian

    Read the article

  • Java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc. I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination while possibly transforming some of these messages. I am looking at these products to solve the following problems:
    * Safe repository of the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    * This repository should be distributed: in case one of these data stores crashes, one or two more should be up, and I should be able to switch to them seamlessly.
    * This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs to have high throughput and low latency. By sending my data outside the process I am necessarily slowing performance, but I am trying to balance absolute raw speed and absolute protection of data.
    * This repository should be safe. Similar to the point about several on-line backups, this system needs to write data to disk (potentially more than one disk).
    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks

    Read the article

  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MBs). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:
    1. .NET 2.0 support, preferably with a FOSS implementation
    2. Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
    3. Ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results)
    4. Space and time efficient (XML has been excluded as an option given this requirement)
    Options considered so far:
    * Protocol Buffers: was turned down by verdict of the documentation about Large Data Sets - since this comment suggested adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself.
    * HDF5, EXI: do not seem to have .NET implementations
    * SQLite / SQL Server Compact Edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use
    * BSON: does not appear to support requirement 3.
    * Fast Infoset: only seems to have paid .NET implementations.
    Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.
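
    Requirements 2 and 3 are what tag-length-value layouts are for (the idea underneath Protocol Buffers, and chunked formats like PNG and RIFF): every field is self-describing, so old readers skip unknown tags, and appending new chunks extends a file in place. A minimal hand-rolled sketch (tag numbers and layout invented for illustration):

        import struct

        def write_field(out, tag, payload):
            # [2-byte tag][4-byte length][payload]: a reader that doesn't
            # recognise `tag` can skip `length` bytes, so adding fields in
            # a later version keeps old files and old readers compatible.
            out.write(struct.pack("<HI", tag, len(payload)))
            out.write(payload)

        def read_fields(stream):
            header = stream.read(6)
            while len(header) == 6:
                tag, length = struct.unpack("<HI", header)
                yield tag, stream.read(length)
                header = stream.read(6)

    Random access (requirement 3) then comes down to remembering chunk offsets, e.g. in an index chunk written last.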

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.NET fat client with an IIS-hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious? For example, today we serialize an array of ints as follows: [4-bytes (array length)][4-bytes][4-bytes]... Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data (which we are), it adds up very fast. Note: We want to avoid general compression algorithms because of the latency associated with them. Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
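
    For reference, the 8-bools-per-byte packing the post already does is a few lines of bit twiddling; a sketch in Python (the C# version is the same shifts and masks). For int arrays, the analogous "fancy" tricks are variable-length (varint) encoding when most values are small, and delta encoding when values are sorted or clustered.

        def pack_bools(flags):
            """Pack booleans into bytes, 8 flags per byte, MSB first."""
            out = bytearray((len(flags) + 7) // 8)
            for i, flag in enumerate(flags):
                if flag:
                    out[i // 8] |= 0x80 >> (i % 8)
            return bytes(out)

        def unpack_bools(data, count):
            """Inverse of pack_bools; count is the original number of flags."""
            return [bool(data[i // 8] & (0x80 >> (i % 8))) for i in range(count)]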

    Read the article

  • Difference in performance between StAX and DOM parsing

    - by Fazal
    I have been using DOM for a long time, and as such DOM parsing performance-wise has been pretty good. Even when dealing with XML of about 4-7 MB the parsing has been fast. The issue we face with DOM is the memory footprint, which becomes huge as soon as we start dealing with large XMLs. Lately I tried moving to StAX (streaming parsers for XML), which are supposed to be second-generation parsers (reading about StAX, it said it's the fastest parser now). When I tried a StAX parser on a large XML of about 4 MB, the memory footprint definitely reduced drastically, but the time taken to parse the entire XML and create Java objects out of it increased almost 5 times over DOM. I used the sjsxp.jar implementation of StAX. I can deduce to some extent, logically, that performance may not be extremely good due to the streaming nature of the parser, but a slowdown of 5 times (e.g. DOM takes about 8 seconds to build objects for this XML, whereas StAX parsing took about 40 seconds on average) is definitely not going to be acceptable. Am I missing some point here completely, as I am not able to come to terms with these performance numbers?

    Read the article

  • Can anyone tell me about a jQuery modal dialog box library that doesn't suck

    - by Ritesh M Nayak
    jQuery-based modal dialog boxes are great as long as you do exactly what the example tells you to. I need a jQuery-based modal dialog box library that has the following characteristics: It should be fast, something like the add-a-link dialog on StackOverflow. Most libraries take an eternity to load the dialog with their fancy effects and stuff. I want to call it using a script, showing a hidden div or a span element inline. Most of the libraries talk about filling an anchor with rel, class and href=#hiddenDiv sorts of things. I need to be able to do what I want without adding unnecessary attributes to my anchor. Something like this:

        function showDialog(values) {
            processToChangeDom(values);
            changeDivTobeDisplayed();
            modalDialog.show();
        }

    It should reflect changes I make to the DOM in the hidden div. I used Facebox and found out that it makes a copy of the hidden div, so changes to the DOM aren't reflected in the modal window. I need to be able to close the modal div using JavaScript and also attach beforeOpen and afterClose handlers to the action. Does anyone have any suggestions? I have already tried Facebox, SimpleModal and a whole range of libraries; most of them don't support one or another of the functions I described above.

    Read the article

  • Looking for a .NET ORM

    - by SLaks
    I'm looking for a .NET 3.5 ORM framework with a rather unusual set of requirements:
    * I need to create and alter tables at runtime, with schemas defined by my end-users. (Obviously, that wouldn't be strongly typed; I'm looking for something like a DataTable there.)
    * I also want regular strongly-typed partial classes for rows in non-dynamic tables, with custom validation and other logic. (Like normal ORMs.)
    * I want to load the entire database (or some entire tables) once, and keep it in memory throughout the life of the (WinForms) GUI. (I have a shared SQL Server with a relatively slow connection.)
    * I also want regular LINQ support (like LINQ-to-SQL) for ASP.NET on the shared server (which has a fast connection to SQL Server).
    * In addition to SQL Server, I also want to be able to use a single-file database that would support XCopy deployment (without installing SQL CE on the end-user's machine). (Probably Access or SQLite.)
    * Finally, it has to be free (unless it's OpenAccess).
    I'll probably have to write it myself, as I don't think there is an existing ORM that meets these requirements. However, I don't want to reinvent the wheel if there is one, hence this question. I'm using VS2010, but I don't know when my webhost (LFC) will upgrade to .NET 4.0.

    Read the article

  • How can I build something like Amazon S3 in Perl?

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):
    * Easy API implementation for client-side apps (maybe REST?)
    * Centralized database server for the USERDB (maybe PostgreSQL?)
    * Logging of all connections, bandwidth used - well, pretty much everything - to a centralized server (maybe PostgreSQL again?)
    * Easy server-side configuration (config file(s) stored on the servers)
    * Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
    * Fast
    * High uptime
    * Low memory usage
    * Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
    * Maybe a cache of some sort (memcached or Perlbal or something else?)
    Thanks in advance

    Read the article

  • Change NSTimer interval for repeating timer.

    - by user300713
    Hi, I am running a main loop in Cocoa using an NSTimer set up like this:

        mainLoopTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/fps
                                                         target:self
                                                       selector:@selector(mainloop)
                                                       userInfo:nil
                                                        repeats:YES];
        [[NSRunLoop currentRunLoop] addTimer:mainLoopTimer forMode:NSEventTrackingRunLoopMode];

    At program startup I set the time interval to 0.0 so that the main loop runs as fast as possible. Anyway, I would like to provide a function to set the framerate (and thus the time interval of the timer) to a specific value at runtime. Unfortunately, as far as I know, that means I have to reinitialize the timer, since Cocoa does not provide a function like "setTimerInterval". This is what I tried:

        - (void)setFrameRate:(float)aFps
        {
            NSLog(@"setFrameRate");
            [mainLoopTimer invalidate];
            mainLoopTimer = nil;
            mainLoopTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/aFps
                                                             target:self
                                                           selector:@selector(mainloop)
                                                           userInfo:nil
                                                            repeats:YES];
            [[NSRunLoop currentRunLoop] addTimer:mainLoopTimer forMode:NSEventTrackingRunLoopMode];
        }

    but this throws the following error and stops the main loop:

        2010-06-09 11:14:15.868 myTarget[7313:a0f] setFrameRate
        2010-06-09 11:14:15.868 myTarget[7313:a0f] *** __NSAutoreleaseNoPool(): Object 0x40cd80 of class __NSCFDate autoreleased with no pool in place - just leaking
        2010-06-09 11:14:15.869 myTarget[7313:a0f] *** __NSAutoreleaseNoPool(): Object 0x40e700 of class NSCFTimer autoreleased with no pool in place - just leaking
        0.614628

    I also tried to recreate the timer using the "retain" keyword, but that didn't change anything. Any ideas about how to dynamically change the interval of an NSTimer at runtime? Thanks!

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command line utility with a SqlDataReader. The old code uses:

        System.Diagnostics.ProcessStartInfo procStartInfo =
            new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd);

    where sqlCmd is something like

        "sqlcmd -S " + serverName + " -y 0 -h-1 -Q \"USE [" + database + "]; " + txtQuery.Text + "\"";

    The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practices, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at least the same speed and quite possibly slower. I believe I'm doing everything correctly with SqlDataReader. The code is:

        using (SqlConnection connection = new SqlConnection())
        {
            try
            {
                SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString);
                connection.ConnectionString = builder.ToString();
                SqlCommand command = new SqlCommand(queryString, connection);
                connection.Open();
                SqlDataReader reader = command.ExecuteReader();
                // do stuff w/ reader
                reader.Close();
            }
            catch (Exception ex)
            {
                outputMessage += (ex.Message);
            }
        }

    I've used System.Diagnostics.Stopwatch to time both approaches, and the command line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again, it's lightning fast, but for this application we don't anticipate that. I have already done some research on this problem. I note that the command line utility sqlcmd uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command line utility approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave

    Read the article

  • Glitchy, stuttery iPhone game loop

    - by Adam
    This is a problem I've been trying to solve for a few days now, and I've looked at the various solutions on Stack Overflow, and nothing has really seemed to work for me. I'm making an iPhone game with OpenGL ES graphics and accelerometer input; at this point it's very simple, but the rendering is already pretty bad... it stutters and seems to jump back or forward in time. It doesn't happen a lot, but it happens enough to be a problem. I mean, who wants to play a game where a bullet gets magically transported into the player, and then it's game over? No one. I've tried using NSTimer for the game loop; I've tried using a separate thread (both with a frame rate and running continuously); I've tried using different frame rates, from 30 FPS to 60 FPS (it seems to have a max frame rate around 45 FPS, but no problems at 30 FPS); I've tried using timeIntervalSince1970 and CFGetAbsoluteTime to measure loop time, with no noticeable difference. Anyone have any ideas on what is the best way to get this looking better? One of the posts I've read suggested running the simulation at a fixed frame rate and then just rendering as fast as possible - does that seem like a good idea?
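
    That last suggestion is the classic fixed-timestep loop: advance the simulation in constant increments so physics can never skip or rewind, and render whenever there is time left over. A language-neutral sketch in Python (the game object and its update/render methods are placeholders):

        import time

        FIXED_DT = 1.0 / 60.0  # simulation step, in seconds

        def run(game):
            previous = time.monotonic()
            accumulator = 0.0
            while game.running:
                now = time.monotonic()
                accumulator += now - previous
                previous = now
                # Catch up in fixed steps; a fast frame runs zero steps,
                # a slow frame runs several, but each step is identical.
                while accumulator >= FIXED_DT:
                    game.update(FIXED_DT)
                    accumulator -= FIXED_DT
                # Render as often as possible; accumulator / FIXED_DT says
                # how far we are between states, usable for interpolation.
                game.render(accumulator / FIXED_DT)

    Rendering with interpolation between the last two simulation states is what removes the visual stutter when frame times vary.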

    Read the article

  • Search for string allowing for one mismatch in any location of the string, Python

    - by Vincent
    I am working with DNA sequences of length 25 (see examples below). I have a list of 230,000 and need to look for each sequence in the entire genome (of the Toxoplasma gondii parasite). I am not sure how large the genome is, but it is much more than 230,000 sequences. I need to look for each of my sequences of 25 characters, for example AGCCTCCCATGATTGAACAGATCAT. The genome is formatted as a continuous string, i.e. CATGGGAGGCTTGCGGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTTGCGGAGTGCGGAGCCTGAGCCTGAGGGCGGAGCCTGAGGTGGGAGGCTT......... I don't care where or how many times it is found, just yes or no. This is simple, I think: str.find(AGCCTCCCATGATTGAACAGATCAT). But I also want to find a close match, defined as wrong (mismatched) at any location, but only 1 location, and to record the location in the sequence. I am not sure how to do this. The only thing I can think of is using a wildcard and performing the search with a wildcard in each position, i.e. search 25 times. For example,

        AGCCTCCCATGATTGAACAGATCAT
        AGCCTCCCATGATAGAACAGATCAT

    is a close match with a mismatch at position 13. Speed is not a big issue; I am only doing it 3 times, I hope, but it would be nice if it was fast. There are programs that do this - find matches and partial matches - but I am looking for a type of partial match that is not available with these applications. Here is a similar post for Perl, but they are only comparing sequences, not searching a continuous string: Related post
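
    A direct way to express "at most one mismatch, and report its position" in Python, as a sketch (naive scanning; if 230,000 probes against a multi-megabase genome turns out too slow after all, indexing the genome's 25-mers in a set is the usual next step):

        def find_with_one_mismatch(genome, probe):
            """Return (start, mismatch_position) for the first place `probe`
            occurs in `genome` with at most one mismatch; mismatch_position
            is None for an exact hit. Returns None when there is no match."""
            n, m = len(genome), len(probe)
            for start in range(n - m + 1):
                mismatch = None
                for j in range(m):
                    if genome[start + j] != probe[j]:
                        if mismatch is not None:  # second mismatch: move on
                            break
                        mismatch = j
                else:
                    return start, mismatch
            return None

        # find_with_one_mismatch(genome, "AGCCTCCCATGATTGAACAGATCAT")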

    Read the article

  • Cross-platform game development: ease of development vs security

    - by alcuadrado
    Hi, I'm a member and contributor of the Argentum Online (AO) community, the first MMORPG from Argentina, which is Free Software; although it's not 3D, it's really addictive and has some tens of thousands of users. Really unluckily, AO was developed in Visual Basic (yes, you can laugh) by the former community, so, as you can imagine, the code not only sucks, it has zero portability. I'm planning, with some friends, to rewrite the client, and as a GNU/Linux fanatic, I want to do it cross-platform. Some other people are doing the same with the server in Java. So my biggest problem is that we would like to use a rapid development language (like Java, Ruby or Python), but the client would be pretty insecure. A Ruby/Python version would have all its code available, and the Java one would be easily decompilable (yes, we have some crackers in the community). We have considered the option of implementing the security module in C/C++ as a dynamic library, but it could be replaced with a custom one, so it's not really secure. We are also considering the option of doing the core application in C++ and the GUI in Ruby/Python, but we haven't analysed all its implications yet. We really don't want to code the entire game in C/C++, as it doesn't need that much performance (the game is played at 18 fps on average) and we want to develop it as fast as possible. So what would you choose in my case? Thank you!

    Read the article

  • Rails: Ajax: Changes Onload

    - by Jay Godse
    Hi. I have a web page layout that looks something like this:

        <html>
          <head>
          </head>
          <body>
            <div id="posts">
              <div id="post1" class="post">
                <!--stuff 1-->
              </div>
              <div id="post2" class="post">
                <!--stuff 1-->
              </div>
              <!-- 96 more posts -->
              <div id="post99" class="post">
                <!--stuff 1-->
              </div>
            </div>
          </body>
        </html>

    I would like to be able to load and render the page with a blank posts div, and then have a function called when the page is loaded which goes in, loads up the posts, and updates the page dynamically. In Rails, I tried using link_to_remote to update the posts div with all of the elements. I also tweaked the posts controller to render the collection of posts if it was an Ajax request (request.xhr?). It worked fine and very fast. However, the update of the blank div is triggered by clicking the link. I would like the same action to happen when the page is loaded, so that I don't have to put in a link. Is there a Rails Ajax helper or RJS function or something in Rails that I could use to trigger the loading of the "posts" after the page has loaded and rendered (without the posts)? (If push comes to shove, I will just copy the generated JS code from the link_to_remote call and have it called from the onload handler on the body.)

    Read the article

  • MySQL: need to merge fields and get unique rows

    - by jiudev
    I have a database with over 1 million rows and the structure looks like:

        CREATE TABLE IF NOT EXISTS `Performance` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `CIDs` varchar(100) DEFAULT NULL,
          `COLOR` varchar(100) DEFAULT NULL,
          `Name` varchar(255) DEFAULT NULL,
          `XT` bigint(16) DEFAULT NULL,
          `MP` varchar(100) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `CIDs` (`CIDs`),
          KEY `COLOR` (`COLOR`),
          KEY `Name` (`Name`),
          KEY `XT` (`XT`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=0;

        INSERT INTO `Performance` (`id`, `CIDs`, `COLOR`, `Name`, `XT`, `MP`) VALUES
        (1, '1253374160', 'test test test test test', 'Load1', '89421331221', ''),
        (2, '1271672029', NULL, 'Load1', '19421331221', NULL),
        (3, '1188959688', NULL, 'Load2', '39421331221', NULL),
        (4, '1271672029', NULL, 'Load3', '49421341221', 'Description'),
        (5, '1271888888', NULL, 'Load4', '59421331221', 'Description');

    The output should look like:

        +----+------------+--------------------------+-------------+-------------+-------+------+---------+
        | id | CIDs       | COLOR                    | XT          | MP          | Name  | PIDs | unqName |
        +----+------------+--------------------------+-------------+-------------+-------+------+---------+
        |  1 | 1253374160 | test test test test test | 89421331221 |             | Load1 | 1,2  | Load1   |
        |  3 | 1188959688 | NULL                     | 39421331221 | NULL        | Load2 | 3    | Load2   |
        |  4 | 1271672029 | NULL                     | 49421341221 | Description | Load3 | 4,5  | Load3   |
        +----+------------+--------------------------+-------------+-------------+-------+------+---------+

    Any ideas how I could do this as fast as possible? I have tried with some GROUP BYs, but it takes some minutes :/ Thanks in advance.

    //edit: for the solution with the GROUP BY, I needed 4 subqueries :/

    //edit2: as requested:

        select id, CIDs, COLOR, XT, MP, Name,
               concat(PIDs, ",", GROUP_CONCAT(DISTINCT id)) as PIDs,
               IFNULL(Name, id) as unqName
        from (
            select id, CIDs, COLOR, XT, MP, Name,
                   concat(PIDs, ",", GROUP_CONCAT(DISTINCT id)) as PIDs,
                   IFNULL(MP, id) as unqMP
            from (
                select id, CIDs, COLOR, XT, MP, Name,
                       concat(PIDs, ",", GROUP_CONCAT(DISTINCT id)) as PIDs,
                       IFNULL(XT, id) as unqXT
                from (
                    select id, CIDs, COLOR, XT, MP, Name,
                           GROUP_CONCAT(DISTINCT id) as PIDs,
                           IFNULL(COLOR, id) as unqCOLOR
                    from Performance
                    group by unqCOLOR
                ) m
                group by unqXT
            ) x
            group by unqMP
        ) y
        group by unqName

    Read the article

  • CQRS - The query side

    - by mattcodes
    A lot of the blogosphere articles related to CQRS (Command Query Responsibility Segregation) seem to imply that all screens/viewmodels are flat, e.g. Name, Age, Location Of Birth, etc., and thus the suggestion that, implementation-wise, we stick them into a fast read store (single table per view in MySQL, etc.) and pull them out with something primitive like SqlDataReader, kicking out that nasty NHibernate ORM. However, whilst I agree that domain models don't map well to most screens, many of the screens that I work with are more dimensional, and I'm sure this is pretty common in LOB apps. So my question is: how are people handling screens where, for example, the screen displays a summary of customer details and then a list of their orders with a [more detail] link, etc.? I thought about keeping the straightforward SQL query to the query database but breaking off the outer join so I can build a suitable ViewModel for the View, but it seems like overkill. Alternatively (this is starting to feel yuck), the CustomerSummaryView table could have a text/big (whatever the type is in your DB) column called Orders, where the columns of the order-summary grid are separated by "," and rows by "|". Even with an XML datatype it still feels dirty. Any thoughts on an optimal practice?

    Read the article

  • Semi-dynamic CDN

    - by dwi kristianto
    I'm developing a couple of websites using PHP (a directory script, etc.) and WordPress as a CMS. I need to improve their performance by using a CDN for static files (CSS, JS, images). The problem is, the CSS and JavaScript files are generated on the fly. I did that due to Yahoo's and some other experts' advice to combine the files into one file, and also to change the basic colors of the CSS files. For the time being I use a couple of small VPSes, but it's still not fast enough. I already contacted MaxCDN and the support guy said that they don't have that kind of service. What I need is a CDN that will serve the request from the user/visitor even when there's no file on the local disk: the CDN should redirect to or fetch it from another domain/server. On a VPS this could be done easily using a combination of .htaccess and PHP, but NOT on the CDN. Most CDNs only support purely static files. Is there any such CDN that will serve semi-dynamic files?

    Read the article
