Search Results

Search found 10417 results on 417 pages for 'large'.


  • expat parser: memory consumption

    - by sameer karjatkar
    Hi, I am using the expat parser to parse an XML file of around 15 GB. The problem is that it throws an "Out of Memory" error and the program aborts. I want to know whether anybody has faced a similar issue with the expat parser, or whether this is a known bug that has been fixed in later versions.
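
    expat is a streaming (SAX-style) parser, so the whole 15 GB file should never need to fit in memory at once; an out-of-memory abort usually means the file is being read in one go or the handlers are accumulating the parsed data. A minimal sketch of the chunked-feed pattern, shown here in Python via pyexpat (which wraps the same expat library; the original code may well be in C, and the element counting is just a stand-in for real handler work):

        import xml.parsers.expat

        def count_elements(path, chunk_size=1024 * 1024):
            """Stream-parse a huge XML file with expat, feeding it 1 MB at a time."""
            counts = {}

            def start_element(name, attrs):
                # Keep only a small running tally; never store the document itself.
                counts[name] = counts.get(name, 0) + 1

            parser = xml.parsers.expat.ParserCreate()
            parser.StartElementHandler = start_element

            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        parser.Parse(b"", True)   # signal end of document
                        break
                    parser.Parse(chunk, False)
            return counts

        # print(count_elements("huge.xml"))   # file name is a placeholder

    If memory still climbs with a pattern like this, the culprit is usually whatever the handlers append to, not expat itself.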

  • search & replace on 3000 row, 25 column spreadsheet

    - by Deca
    I'm attempting to clean up data in this (old) spreadsheet and need to remove things like single and double quotes, HTML tags and so on. Trouble is, it's a 3000 row file with 25 columns and every spreadsheet app I've tried (NeoOffice, MS Excel, Apple Numbers) chokes on it. Hard. Any ideas on how else I can clean this thing up for import to MySQL? Clearly I could go through each record manually, row by row, but would like to avoid that if at all possible. Likewise, I could write a PHP script to handle it on import, but don't want to put the server into a death spiral either.
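
    If every GUI spreadsheet app chokes, a small script that streams the rows one at a time sidesteps the problem entirely. A minimal sketch in Python, assuming the sheet has been exported to CSV first (the file names are placeholders, and stripping every quote character outright may be too aggressive for some data):

        import csv
        import re

        TAG_RE = re.compile(r"<[^>]+>")   # matches simple HTML tags

        def clean_cell(value):
            value = TAG_RE.sub("", value)                     # remove HTML tags
            value = value.replace('"', "").replace("'", "")   # remove quotes
            return value.strip()

        with open("export.csv", newline="", encoding="utf-8") as src, \
             open("cleaned.csv", "w", newline="", encoding="utf-8") as dst:
            writer = csv.writer(dst)
            for row in csv.reader(src):
                writer.writerow(clean_cell(cell) for cell in row)

    Because it never holds more than one row in memory, 3000 rows by 25 columns is trivial, and the cleaned file can then be loaded with a normal MySQL import.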

  • How do I quickly search through a .csv file in Python?

    - by Baldur
    I'm reading a 6 million entry .csv file with Python, and I want to be able to search through this file for a particular entry. Are there any tricks to search the entire file? Should you read the whole thing into a dictionary, or should you perform a search every time? I tried loading it into a dictionary, but that took ages, so I'm currently searching through the whole file every time, which seems wasteful. Could I possibly exploit the fact that the list is alphabetically ordered? (e.g. if the search word starts with "b", I only search from the line that includes the first word beginning with "b" to the line that includes the last word beginning with "b") I'm using import csv. (A side question: is it possible to make csv go to a specific line in the file? I want to make the program start at a random line.) Edit: I already have a copy of the list as an .sql file as well; how could I implement that in Python?
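
    One way to trade a single slow load for many fast lookups is to push the file into SQLite once and index the lookup column; later runs can reuse the database file already on disk. A minimal sketch, assuming the search key is the first CSV column (file, table and column names are placeholders):

        import csv
        import sqlite3

        def build_index(csv_path, db_path="entries.db"):
            """One-time load of the CSV into SQLite with an index on the lookup column."""
            con = sqlite3.connect(db_path)
            con.execute("CREATE TABLE IF NOT EXISTS entries (word TEXT, rest TEXT)")
            with open(csv_path, newline="", encoding="utf-8") as f:
                rows = ((row[0], ",".join(row[1:])) for row in csv.reader(f))
                con.executemany("INSERT INTO entries VALUES (?, ?)", rows)
            con.execute("CREATE INDEX IF NOT EXISTS idx_word ON entries(word)")
            con.commit()
            return con

        def lookup(con, word):
            cur = con.execute("SELECT rest FROM entries WHERE word = ?", (word,))
            return cur.fetchone()

        # con = build_index("entries.csv")
        # print(lookup(con, "banana"))

    The existing .sql dump amounts to the same approach: load it into an actual database once and query with an index instead of rescanning the file.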

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a rundown of what I have, how I'm querying it, and how long it's taking. I'm running SQL Server 2008 Standard, so partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table:

        CREATE TABLE [dbo].[LogInvSearches_Daily](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [Inv_ID] [int] NOT NULL,
            [Site_ID] [int] NOT NULL,
            [LogCount] [int] NOT NULL,
            [LogDay] [smalldatetime] NOT NULL,
            CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED ( [ID] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
        ) ON [PRIMARY]

    This table has 132,000,000 records and is over 4 gigs. A sample of 10 rows from the table:

        ID   Inv_ID   Site_ID   LogCount   LogDay
        ---  -------  --------  ---------  -------------------
        1    486752   48        14         2009-07-21 00:00:00
        2    119314   51        16         2009-07-21 00:00:00
        3    313678   48        25         2009-07-21 00:00:00
        4    298863   0         1          2009-07-21 00:00:00
        5    119996   0         2          2009-07-21 00:00:00
        6    463777   534       7          2009-07-21 00:00:00
        7    339976   503       2          2009-07-21 00:00:00
        8    333501   570       4          2009-07-21 00:00:00
        9    453955   0         12         2009-07-21 00:00:00
        10   443291   0         4          2009-07-21 00:00:00

        (10 row(s) affected)

    I have the following index on LogInvSearches_Daily:

        /****** Object: Index [IX_LogInvSearches_Daily_LogDay] Script Date: 05/12/2010 11:08:22 ******/
        CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily]
        (
            [LogDay] ASC
        )
        INCLUDE ( [Inv_ID], [LogCount])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
              DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    I need to pull inventory only from the Inventory table for a specific account id. I have an index on the Inventory table as well. I'm using the following query to aggregate the data and give me the top 5 records.
    This query is currently taking 24 seconds to return the 5 rows:

        StmtText
        --------
        SELECT TOP 5
               Sum(LogCount) AS Views
             , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
             , Inv_ID
        FROM LogInvSearches_Daily D (NOLOCK)
        WHERE LogDay > DateAdd(d, -30, getdate())
          AND EXISTS( SELECT NULL
                      FROM propertyControlCenter.dbo.Inventory (NOLOCK)
                      WHERE Acct_ID = 18731
                        AND Inv_ID = D.Inv_ID )
        GROUP BY Inv_ID

        (1 row(s) affected)

        StmtText
        --------
        |--Top(TOP EXPRESSION:((5)))
        |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
        |--Segment
        |--Segment
        |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
        |--Sort(ORDER BY:([D].[Inv_ID] ASC))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
        |   |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
        |   |   |--Constant Scan
        |   |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
        |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA

        (13 row(s) affected)

    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster, and gives me essentially the same execution plan.
    (1 row(s) affected)

        StmtText
        --------
        --SET SHOWPLAN_TEXT ON;
        WITH getSearches AS (
            SELECT LogCount
            -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
                 , D.Inv_ID
            FROM LogInvSearches_Daily D (NOLOCK)
            INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK)
                    ON Acct_ID = 18731
                   AND I.Inv_ID = D.Inv_ID
            WHERE LogDay > DateAdd(d, -30, getdate())
            -- GROUP BY Inv_ID
        )
        SELECT Sum(LogCount) AS Views, Inv_ID
        FROM getSearches
        GROUP BY Inv_ID

        (1 row(s) affected)

        StmtText
        --------
        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
        |--Sort(ORDER BY:([D].[Inv_ID] ASC))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
        |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
        |   |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
        |   |   |--Constant Scan
        |   |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
        |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)

        (8 row(s) affected)

        (1 row(s) affected)

    So given that I'm getting good Index Seeks in my execution plan, what can I do to get this running faster? Thanks, Dan

  • Random access gzip stream

    - by jkff
    I'd like to be able to do random access into a gzipped file. I can afford to do some preprocessing on it (say, build some kind of index), provided that the result of the preprocessing is much smaller than the file itself. Any advice? My thoughts were:

    1. Hack on an existing gzip implementation and serialize its decompressor state every, say, 1 megabyte of compressed data. Then, to do random access, deserialize the decompressor state and read from the megabyte boundary. This seems hard, especially since I'm working with Java and I couldn't find a pure-Java gzip implementation :(

    2. Re-compress the file in chunks of 1 MB and do the same as above. This has the disadvantage of doubling the required disk space.

    3. Write a simple parser of the gzip format that doesn't do any decompressing and only detects and indexes block boundaries (if there even are any blocks: I haven't yet read the gzip format description).
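
    The second idea is straightforward to prototype: decompress the original .gz as a stream, re-compress it as independent fixed-size blocks, remember each block's offsets, and later decompress only the block that covers the requested position. A rough sketch in Python (the asker is on Java, but the structure is the same); the names and the 1 MB block size are assumptions, and for brevity the reader only handles reads that fall inside a single block:

        import bisect
        import gzip
        import zlib

        CHUNK = 1024 * 1024  # 1 MB of uncompressed data per independently compressed block

        def recompress_with_index(gz_path, block_path):
            """Re-compress a .gz file as independent zlib blocks; return a seek index.

            index[i] = (uncompressed_offset, compressed_offset) of block i,
            plus a final sentinel entry marking the end of both streams.
            """
            index = []
            u_off = c_off = 0
            with gzip.open(gz_path, "rb") as src, open(block_path, "wb") as dst:
                while True:
                    chunk = src.read(CHUNK)
                    if not chunk:
                        break
                    block = zlib.compress(chunk)
                    index.append((u_off, c_off))
                    dst.write(block)
                    u_off += len(chunk)
                    c_off += len(block)
            index.append((u_off, c_off))
            return index

        def read_at(block_path, index, offset, size):
            """Read `size` bytes starting at uncompressed `offset` (single block only)."""
            i = bisect.bisect_right([u for u, _ in index], offset) - 1
            u_start, c_start = index[i]
            _, c_end = index[i + 1]
            with open(block_path, "rb") as f:
                f.seek(c_start)
                data = zlib.decompress(f.read(c_end - c_start))
            return data[offset - u_start : offset - u_start + size]

    The index itself is tiny (two integers per megabyte of data), which satisfies the "much smaller than the file" requirement; the disk-space doubling the asker mentions still applies.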

  • What method should be used for searching this mysql dataset?

    - by GeoffreyF67
    I've got a mysql dataset that contains 86 million rows. I need to have a relatively fast search through this data. The data I'll be searching through is all strings. I also need to do partial matches. Now, if I have 'foobar' and search for '%oob%' I know it'll be really slow - it has to look at every row to see if there is a match. What methods can be used to speed queries like this up? G-Man
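
    For infix patterns like '%oob%' one standard technique is an n-gram (for example trigram) index: store, for every 3-character substring, the ids of the rows containing it, and then a search only needs to verify the rows that contain all trigrams of the pattern instead of scanning all 86 million rows. Stock MySQL of that era won't build this for you; the toy in-memory Python sketch below only illustrates the idea (names are made up, and candidates still need a real substring check):

        from collections import defaultdict

        def trigrams(s):
            s = s.lower()
            return {s[i:i + 3] for i in range(len(s) - 2)}

        def build_index(rows):
            """rows: iterable of (row_id, text). Returns trigram -> set of row ids."""
            index = defaultdict(set)
            for row_id, text in rows:
                for g in trigrams(text):
                    index[g].add(row_id)
            return index

        def candidates(index, pattern):
            """Row ids containing every trigram of the pattern (still need verification)."""
            grams = trigrams(pattern)
            if not grams:                      # pattern shorter than 3 chars: no pruning possible
                return None
            sets = [index.get(g, set()) for g in grams]
            return set.intersection(*sets)

        # idx = build_index([(1, "foobar"), (2, "bazqux")])
        # print(candidates(idx, "oob"))   # -> {1}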

  • How can I scroll through a big file with vim?

    - by Luc M
    Hello, I have a big file with thousands of lines of thousands of characters. I move the cursor to the 3000th character. If I use PageDown or <CTRL>-D, the file will scroll but the cursor will come back to the first non-blank character. Is there an option I can set to keep the cursor in the same column after such a scroll? I see this behavior with gvim on Windows, and with vim on OpenVMS and Cygwin. Regards

  • Database over 2GB in MongoDB

    - by configurator
    We've got a file-based program we want to convert to use a document database, specifically MongoDB. Problem is, MongoDB is limited to 2GB on 32-bit machines (according to http://www.mongodb.org/display/DOCS/FAQ#FAQ-Whatarethe32bitlimitations%3F), and a lot of our users will have over 2GB of data. Is there a way to have MongoDB use more than one file somehow? I thought perhaps I could implement sharding on a single machine, meaning I'd run more than one mongod on the same machine and they'd somehow communicate. Could that work?

  • Process xml-like log file queue

    - by Zsolt Botykai
    Hi all, first of all: I'm not a programmer and never was, although I have learned a lot during my professional career as a support consultant. Now my task is to process - and create some statistics about - a constantly written and rapidly growing XML-like log file. It's not valid XML, because it does not have a proper <root> element; e.g. the log looks like this:

        <log itemdate="somedate">
          <field id="0" />
          ...
        </log>
        <log itemdate="somedate+1">
          <field id="0" />
          ...
        </log>
        <log itemdate="somedate+n">
          <field id="0" />
          ...
        </log>

    For example, I have to count all the items with field id=0. But most of the solutions I have found (e.g. using XPath) report an error about the garbage after the first closing </log>. Most probably I can use Python (2.6, although I can compile 3.x as well), some really old Perl version (5.6.x), or a recently compiled xmlstarlet, which really looks promising - I was able to create the statistics for a certain period after copying the file and prepending/appending the opening and closing root element. But this is a huge file and copying takes time as well. Isn't there a better solution? Thanks in advance!
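
    Wrapping the fragments in a root element does not require copying the file: a pull parser can be fed a synthetic "<root>" first and then the log line by line, counting as it goes. A minimal sketch with Python's standard library (it needs the compiled Python 3.x, since XMLPullParser appeared in 3.4; the file name is a placeholder):

        import xml.etree.ElementTree as ET

        def count_field_zero(path):
            """Count <field id="0"/> entries in a root-less, append-only log."""
            parser = ET.XMLPullParser(events=("end",))
            parser.feed("<root>")   # synthetic root so the fragments parse as one document
            count = 0
            with open(path, "r", encoding="utf-8") as f:
                for line in f:
                    parser.feed(line)
                    for _, elem in parser.read_events():
                        if elem.tag == "field" and elem.get("id") == "0":
                            count += 1
                        elif elem.tag == "log":
                            elem.clear()   # drop the finished entry's children to keep memory low
            return count

        # print(count_field_zero("application.log"))

    Because the log is only appended to, the same loop can also be pointed at the live file for a given period without stopping the writer.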

  • Why does FastCGI not work well with Ruby on Rails?

    - by Jian Lin
    It is said that FastCGI doesn't work well with Ruby on Rails deployment. Why is that? In previous experience, something either works quite well or it might be fundamentally wrong. So if FastCGI is a viable solution, why is it not reliable with RoR? Does FastCGI work well with most any language / frameworks?

  • How to map an Entity Data Model conceptual model property to a storage model column using the "Serial

    - by codekaizen
    I have a conceptual model in EDM where one of the entities has a property which is essentially a big value object whose properties aren't really useful as columns in the datamodel. I'd like to apply the Serialized LOB pattern to it so that I can fit it into a 192 byte binary column. How do I map this in the EDM v4? Is it even possible at this time? Actually, is it possible in any ORM?

  • Mysql: create index on 1.4 billion records

    - by SiLent SoNG
    I have a table with 1.4 billion records. The table structure is as follows:

        CREATE TABLE text_page (
            text VARCHAR(255),
            page_id INT UNSIGNED
        ) ENGINE=MYISAM DEFAULT CHARSET=ascii

    The requirement is to create an index over the column text. The table size is about 34G. I have tried to create the index with the following statement:

        ALTER TABLE text_page ADD KEY ix_text (text)

    After 10 hours of waiting I finally gave up on this approach. Is there any workable solution to this problem? UPDATE: the table is unlikely to see updates, inserts or deletes. The reason for creating an index on the text column is that this kind of SQL query will be executed frequently:

        SELECT page_id FROM text_page WHERE text = ?

  • Cloud HUGE data storage options?

    - by ToughPal
    Hi, does anyone have a good suggestion on how to do video recording? We have a camera that can record and then stream live video to a server, which means we can have thousands of cameras sending data 24x7 for recording. We will store data for 7 / 14 / 30 days depending on the package. If a camera is sending data to the server, it stores about 1.5GB per day, so the traffic is 1.5GB / day / camera, or 45GB / month / camera in total (data plus bandwidth for one camera). Please let me know the most cost-effective way to get this data stored. Thanks!

  • what changes when your input is giga/terabyte sized?

    - by Wang
    I took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, NumPy, and PyTables, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python specific.)
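
    The address-space point is exactly why the usual pattern is to stream the data in blocks and keep only small running aggregates in memory. A hedged sketch of that pattern in Python/NumPy, assuming purely for illustration that the data has been dumped as a flat row-major binary matrix of float32 values; real files would more likely be HDF5, where h5py or PyTables slices play the same role:

        import numpy as np

        def column_means(path, n_cols, dtype=np.float32, rows_per_block=100000):
            """Column means of a huge on-disk matrix, reading one block of rows at a time."""
            block_items = rows_per_block * n_cols
            totals = np.zeros(n_cols, dtype=np.float64)
            n_rows = 0
            with open(path, "rb") as f:
                while True:
                    block = np.fromfile(f, dtype=dtype, count=block_items)
                    if block.size == 0:
                        break
                    block = block.reshape(-1, n_cols)   # the last block may hold fewer rows
                    totals += block.sum(axis=0)
                    n_rows += block.shape[0]
            return totals / n_rows

        # means = column_means("haplotypes.f32", n_cols=48000)   # file name and layout are assumed

    Memory use stays at one block regardless of how large the file grows, which is the habit that replaces "read it all in, then work on it".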

  • what's the best (most efficient) way in ASP.NET to return a whole page into tabbed content?

    - by ijjo
    What I want to do is: every time I click on a tab, the content area is replaced by pretty much a whole new page. I don't want a full page load, so I want to do it in AJAX, but I'm used to sending back small JSON data via page methods. I'm not sure how I would construct a whole new page and return that via AJAX, and I would like to simply assign the whole returned content to a div and be done with it. What's the best way to do this with the least amount of overhead (I know there are some inefficient ways the ScriptManager does AJAX)? Or is it better to load the tabbed content in an iframe? FYI, I'm already using jQuery to call lightweight page methods on my ASP.NET page and that works great.

  • Load big XML files to mySQL database (PHP)

    - by Kees
    Hello there, for a new project I need to load big XML files (200MB+) into a MySQL database. There are +- 20 feeds I need to match with that (not all fields are the same). Now when I want to process the XML I get this error:

        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 171296569 bytes) in E:\UsbWebserver\Root\****\application\libraries\MY_xml.php on line 21

    Is there an easy solution for this? It's not possible to get the feed in parts of a few MB each. Thank you very much! P.S. Does anybody have an idea for matching XML feeds easily?

  • How can I better implement A star algorithm with a very large set of nodes?

    - by Stephen
    I'm making a game with nodejs in which many enemies must converge on the player as the player moves around a relatively open space (right now it is an open field with few obstacles, but eventually there may be some small buildings in the field with 1 or 2 rooms). It's a multiplayer game using websockets, so the server needs to keep track of enemies and players. I found this javascript A* library which I've modified to be used on the server as a nodejs module. The library utilizes a Binary Heap to track the nodes for the algorithm, so it should be pretty fast (and indeed, with a small grid, say 100x100 it is lightning fast). The problem is that my game is not really tile-based. As the player moves around the map, he is moving on a more or less 1-to-1 per-pixel coordinate system (the player can move in 8 directions, 1 or 2 pixels at a time). In preliminary tests, on an 800x600 field, the path-finding can take anywhere from 400 to 1000 ms. Multiply that by 10 enemies and the game starts to get pretty choppy. I have already set it up so that each enemy will only do a path-finding call once per second or even as slow as once every 2 seconds (they have to keep updating their path because the players can move freely). But even with this long interval, there are noticeable lag spikes or chops every couple of seconds as the enemies update their paths. I'm willing to approach the problem of path-finding differently, if there's another option. I'm assuming that the real problem is the enormous grid (800x600). It also occurs to me that maybe the large arrays are to blame, as I've read that V8 has trouble with large arrays.
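
    Since movement is per-pixel but paths rarely need pixel precision, one common fix is to run A* on a much coarser grid (say one node per 16x16-pixel cell, turning 800x600 pixels into roughly a 50x38 node graph) and have each enemy steer toward the next coarse waypoint between path updates. The game is in Node.js; the sketch below is just the coordinate mapping, written in Python for brevity, and astar() is a hypothetical stand-in for the library call:

        CELL = 16  # pixels per pathfinding cell; 800x600 px becomes roughly a 50x38 node grid

        def to_cell(x_px, y_px):
            """Map pixel coordinates to coarse-grid coordinates for the A* search."""
            return x_px // CELL, y_px // CELL

        def to_pixel(cx, cy):
            """Map a coarse cell back to a pixel waypoint (its centre)."""
            return cx * CELL + CELL // 2, cy * CELL + CELL // 2

        # path = astar(grid, to_cell(*enemy_pos), to_cell(*player_pos))   # hypothetical A* call
        # next_waypoint = to_pixel(*path[1]) if len(path) > 1 else player_pos

    Shrinking the node count by a factor of ~250 typically matters far more than micro-optimising the heap or the arrays.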

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat:

    Split by schema
    We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward.

    Split by intent
    Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (violates DRY I think), or are those in a separate model that is used by every project?

    Don't split at all - one giant model
    This is obviously simple from a development perspective but from my research and my intuition this seems like it could perform terribly, both at design time, compile time, and possibly run time.

    What is the best practice for using EF against such a large database? Specifically what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so have they come up with any better solutions than EF?

  • How to use Pixel Bender (pbj) in ActionScript3 on large Vectors to make fast calculations?

    - by Arthur Wulf White
    Remember my old question: 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing? I figured I could use a Pixel Bender shader to do the computation for any large group of elements in a game to save on processing time. At least it's a theory worth checking. I also read this question: Pass large array to pixel shader, which I'm guessing is about accomplishing the same thing in a different language. I read this tutorial: http://unitzeroone.com/blog/2009/03/18/flash-10-massive-amounts-of-3d-particles-with-alchemy-source-included/ I am attempting to do some tests. Here is some of the code:

        private const SIZE : int = Math.pow(10, 5);
        private var testVectorNum : Vector.<Number>;

        private function testShader():void
        {
            shader.data.ab.value = [1.0, 8.0];
            shader.data.src.input = testVectorNum;
            shader.data.src.width = SIZE/400;
            shader.data.src.height = 100;
            shaderJob = new ShaderJob(shader, testVectorNum, SIZE / 4, 1);
            var time : int = getTimer(), i : int = 0;
            shaderJob.start(true);
            trace("TEST1 : ", getTimer() - time);
        }

    The problem is that I keep getting an error saying:

        [Fault] exception, information=Error: Error #1000: The system is out of memory.

    Update: I managed to partially work around the problem by converting the vector into BitmapData (using this technique I still get a speed boost of 3x using Pixel Bender):

        private function testShader():void
        {
            shader.data.ab.value = [1.0, 8.0];
            var time : int = getTimer(), i : int = 0;
            testBitmapData.setVector(testBitmapData.rect, testVectorInt);
            shader.data.src.input = testBitmapData;
            shaderJob = new ShaderJob(shader, testBitmapData);
            shaderJob.start(true);
            testVectorInt = testBitmapData.getVector(testBitmapData.rect);
            trace("TEST1 : ", getTimer() - time);
        }

  • Does a large number of internal broken links affect SEO?

    - by TheBigK
    We have a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs that linked to non-existent locations on our website. For example:

        Correct URL:     domain.com/correct-URL/
        Disqus created:  domain.com/correct-URL/344322/ - throws 404
                         domain.com/correct-URL/433466/ - throws 404

    So essentially, Google found a LARGE number of broken links that pointed to unknown locations on our own domain. As the count of those (404) errors rose, our site suffered a massive drop in traffic and the crawl rate dropped to 10% of what it was earlier. I wish to know: can a large number of internal broken links (we have over 99k of them) cause rankings to drop? I've fixed the issue in one go by creating 301 redirects from each bad URL to the correct URL and removing Disqus. Google, however, drops the count by ~1000 daily as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up? Should I set a custom crawl rate of 'Fast' in GWT to make Google crawl our website faster? I'd appreciate your inputs and experience sharing.

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering if it is necessary to have a framework or if it is a must-have if I plan to make a large website. Large website could mean a lot of things: in other words, multiple dynamic web pages (40-50 dynamic pages, mysql content) and a lot of visitors (+- a million hits per month). The site will be hosted in a dedicated server environment. I know that it could simplify coding for a developer team, that it includes libraries and a lot of advantages. But I just feel that I don't need that. I think that learning how it works, managing it and installing it would take more time and I could use that time to code. I write PHP the simplest way I could (with performance in mind) and I try to reuse my code/functions/classes most of the time and I make sure that if another developer joins the team, that he won't be lost in the code. I am also planning to use MemCached or another Cache for PHP. As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company. I am pretty sure the control panel for me to control the basic stuff will be Cpanel. For a developer like me that only knows PHP, Javascript, HTML, CSS, MYSQL and really basic server management, I feel that it seems to complicated to have a framework. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

  • What is a good way to refactor a large, terribly written code base by myself? [closed]

    - by AgentKC
    Possible Duplicate: Techniques to re-factor garbage and maintain sanity? I have a fairly large PHP code base that I have been writing for the past 3 years. The problem is, I wrote this code when I was a terrible programmer and now it's tens of thousands of lines of conditionals and random MySQL queries everywhere. As you can imagine, there are a ton of bugs and they are extremely hard to find and fix. So I would like a good method to refactor this code so that it is much more manageable. The source code is quite bad; I did not even use classes or functions when I originally wrote it. At this point, I am considering rewriting the whole thing. I am the only developer and my time is pretty limited. I would like to get this done as quickly as possible, so I can get back to writing new features. Since rewriting the code would take a long time, I am looking for some methods that I can use to clean up the code as quickly as possible without leaving more bad architecture that will come back to haunt me later. So this is the basic question: What is a good way for a single developer to take a fairly large code base that has no architecture and refactor it into something with reasonable architecture that is not a nightmare to maintain and expand?

  • HTML5 - Does it have the power to handle a large 2D game with a huge world?

    - by user15858
    I have been using XNA Game Studio, but due to private reasons (as well as the ability to publish anywhere & my heavy interest in the Isogenic engine), I would like to switch to HTML5. However, I have very high 2D graphic demands for my game. The game itself will have an HDD size of anywhere between 6GB (min) and 12GB (max), which would be a full game deployed offline. The individual images aren't significantly large, so streaming would be entirely possible if only those assets required were streamed as needed. The game has a massive file size because of the sheer amount of content. Some images or spritesheets would be quite massive (ex. a very large Dragon, which if animated in a spritesheet would be split into two 4096x4096 sheets or one 8192x8192 sheet). Most assets would be very small, and about 7MB for a full character with 15 animations in every direction (all animations not required immediately), so in the size of a few hundred KB to download before the game loads. My question, however, is whether the graphical power of HTML5 is enough to animate several characters on screen at once, when it flips through frames quite rapidly. All my sprites have about 25 frames per animation, 5 directions (a spritesheet for each direction & animation), and run at 30fps. Upon changing direction, animation, or a new character entering, spritesheets would change and be constantly loading/unloading. If I pack all directions in a single sheet, it would be about 2048x2048 per sheet. Most frameworks have no problem with this, but I am afraid from what I read that HTML5's graphical capabilities will limit me. Since it takes significant time simply to animate characters in any language, I'd like a quick answer.
