Search Results

Search found 602 results on 25 pages for 'chunks'.

Page 8/25

  • How to divide hex grid evenly among n players?

    - by manabreak
    I'm making a simple hex-based game, and I want the map to be divided evenly among the players. The map is created randomly, and I want each player to have a roughly equal number of cells, held in relatively small areas. For example, with four players and 80 cells on the map, each player would have about 20 cells (it doesn't have to be spot-on accurate). Also, no contiguous area owned by one player should exceed four cells; that is, when the map is generated, the biggest "chunks" cannot be more than four cells each. I know this is not always possible for two or three players (as this resembles the map-coloring problem), and I'm OK with other solutions for those (like generating maps that avoid the problem instead). But for four to eight players, how could I approach this problem? As always, any and all help is appreciated. :)
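    One possible angle, sketched below in Java (Cell, neighbors() and the owner field are placeholders for the real map types, with owner == -1 meaning unclaimed): deal the board out in rounds. Each round, every player grows one chunk of up to four cells by BFS from a seed that does not touch that player's existing cells, so two of their chunks can never merge into a region larger than four, and round-robin turns keep per-player totals within one chunk of each other.

        import java.util.*;

        static final int MAX_CHUNK = 4;

        void divide(List<Cell> cells, int players, Random rng) {
            Set<Cell> free = new HashSet<>(cells);
            boolean progress = true;
            while (!free.isEmpty() && progress) {
                progress = false;
                for (int p = 0; p < players && !free.isEmpty(); p++) {
                    Cell seed = pickSeed(free, p, rng);
                    if (seed == null) continue;          // no legal seed for p this round
                    Set<Cell> chunk = new HashSet<>();
                    Deque<Cell> queue = new ArrayDeque<>();
                    queue.add(seed);
                    while (!queue.isEmpty() && chunk.size() < MAX_CHUNK) {
                        Cell c = queue.poll();
                        if (!free.contains(c) || touchesOutside(c, p, chunk)) continue;
                        chunk.add(c);                    // claim the cell for player p
                        free.remove(c);
                        c.owner = p;
                        progress = true;
                        queue.addAll(neighbors(c));      // grow the BFS frontier
                    }
                }
            }
            // Any cells still free when no player can legally grow can be
            // handed out greedily afterwards, relaxing the adjacency rule.
        }

        // True if claiming c for p would connect to a cell p owns outside this chunk.
        boolean touchesOutside(Cell c, int p, Set<Cell> chunk) {
            for (Cell n : neighbors(c))
                if (n.owner == p && !chunk.contains(n)) return true;
            return false;
        }

        Cell pickSeed(Set<Cell> free, int p, Random rng) {
            List<Cell> legal = new ArrayList<>();
            for (Cell c : free)
                if (!touchesOutside(c, p, Collections.emptySet())) legal.add(c);
            return legal.isEmpty() ? null : legal.get(rng.nextInt(legal.size()));
        }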

  • Java chunk negative number problem

    - by user1990950
    I've got a tile-based map which is divided into chunks. I have a method which puts tiles into this map; with positive numbers it's working, but when using negative numbers it won't work. This is my setTile method:

        public static void setTile(int x, int y, Tile tile) {
            int chunkX = x / Chunk.CHUNK_SIZE, chunkY = y / Chunk.CHUNK_SIZE;
            IntPair intPair = new IntPair(chunkX, chunkY);
            world.put(intPair, new Chunk(chunkX, chunkY));
            world.get(intPair).setTile(x - chunkX * Chunk.CHUNK_SIZE, y - chunkY * Chunk.CHUNK_SIZE, tile);
        }

    This is the setTile method in the chunk class (CHUNK_SIZE is a constant with the value 64):

        public void setTile(int x, int y, Tile t) {
            if (x >= 0 && x < CHUNK_SIZE && y >= 0 && y < CHUNK_SIZE)
                tiles[x][y] = t;
        }

    What's wrong with my code?
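    The coordinate math is the likely culprit: Java's / and % truncate toward zero, so for x = -1 the code computes chunkX = 0 and a local index of -1, which the bounds check in the chunk's setTile silently rejects. Math.floorDiv and Math.floorMod (Java 8+) round toward negative infinity instead. A sketch of the fix, assuming world is a java.util.Map (note that the original put() also replaces any chunk already stored at that key, wiping its tiles):

        public static void setTile(int x, int y, Tile tile) {
            // floorDiv(-1, 64) == -1 and floorMod(-1, 64) == 63: negative world
            // coordinates now map to chunk -1 and a valid local index.
            int chunkX = Math.floorDiv(x, Chunk.CHUNK_SIZE);
            int chunkY = Math.floorDiv(y, Chunk.CHUNK_SIZE);
            IntPair intPair = new IntPair(chunkX, chunkY);
            world.putIfAbsent(intPair, new Chunk(chunkX, chunkY)); // keep existing chunks
            world.get(intPair).setTile(Math.floorMod(x, Chunk.CHUNK_SIZE),
                                       Math.floorMod(y, Chunk.CHUNK_SIZE), tile);
        }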

  • Android Loading Screen: How do I use a stack to load elements and increment the size counter?

    - by tom_mai78101
    I have some problems figuring out what value I should put in the function: int value_needed_to_figure_out = X; ProgressBar.incrementProgressBy(value_needed_to_figure_out); I've been researching loading screens and how to use them. Some examples I've seen implement Thread.sleep() inside a Handler.post(new Runnable()) call. I get most of that concept (using the Handler to update the ProgressBar while pretending to do some heavy crunching work), so I kept looking. I have read this thread: How do I load chunks of data from an asset manager during a loading screen? It said I could push everything that needs loading onto a stack and keep a size counter as I add elements to the stack. What does that mean? This is the part where I'm totally stumped. If anyone could provide some hints, I'd gladly appreciate it. Thanks in advance.
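    A minimal sketch of that idea (Asset, assetsToLoad and the load() call are placeholders for whatever the game actually loads): the stack's size before loading starts becomes the ProgressBar maximum, and each popped element advances progress by one.

        // Push all pending work onto a stack; its size is the "size counter".
        void loadAllAssets(List<Asset> assetsToLoad, ProgressBar bar, Handler handler) {
            Deque<Asset> pending = new ArrayDeque<>(assetsToLoad);
            bar.setMax(pending.size());  // total number of elements to load
            bar.setProgress(0);
            new Thread(() -> {
                while (!pending.isEmpty()) {
                    Asset next = pending.pop();
                    next.load();                                    // real work, no Thread.sleep()
                    handler.post(() -> bar.incrementProgressBy(1)); // one tick per element
                }
            }).start();
        }

    So value_needed_to_figure_out is simply 1, provided the maximum was set to the number of stacked elements.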

  • How to create reproducible probability in map generation?

    - by nickbadal
    So for my game, I'm using Perlin noise to generate regions of my map (water/land, forest/grass), but I'd also like some probability-based generation too. For instance: if(nextInt(10) > 2 && tile.adjacentTo(Type.WATER)) tile.setType(Type.SAND); This works fine, and is even reproducible (based on a common seed) if the nextInt() calls always happen in the same order. The issue is that in my game the world is generated on demand, based on the player's location. This means that if I explore the map differently, and the chunks of the map are generated in a different order, the randomness is no longer consistent. How can I make this sort of randomness consistent, independent of call order? Thanks in advance :)
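    One common fix, sketched below: derive a fresh RNG seed from the world seed combined with the chunk's coordinates, so each chunk gets its own random stream regardless of the order chunks are generated in. The mixing constants here are arbitrary large odd numbers, not anything canonical:

        // Order-independent randomness: one Random per chunk, seeded by position.
        Random chunkRandom(long worldSeed, int chunkX, int chunkY) {
            long seed = worldSeed * 6364136223846793005L
                      + chunkX * 341873128712L
                      + chunkY * 132897987541L;
            return new Random(seed);
        }

        // Inside generation, every visit order now yields identical tiles:
        // Random r = chunkRandom(worldSeed, cx, cy);
        // if (r.nextInt(10) > 2 && tile.adjacentTo(Type.WATER)) tile.setType(Type.SAND);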

  • Using gerrit (or similar tool) on a team where multiple devs work on a single feature

    - by Bacon
    We have a team of roughly eight devs who regularly work on the same feature over the course of a three-week sprint. It isn't quite pair programming, but in our current workflow devs regularly push up incomplete code for a colleague to complete. This worked fine before we introduced Gerrit, but now our commits need to represent chunks of test-passing, complete, logical functionality, and so the model breaks. My only idea is to have everybody push to a separate, untracked branch until the functionality is ready for review, then squash everything into commits that make sense and push those up. Is there another Gerrit-ized workflow that could work? I know this is a widely discussed topic on Google Groups, and that there has recently been some discussion of Gerrit topic reviews, but I wanted to see if anybody out there is using Gerrit in this way, and what the suggested workflow would be.

  • Can all code be represented as a series of Map / Filter / Reduce operations?

    - by Mongus Pong
    I have recently been refactoring large chunks of code and replacing them with Linq queries. Removing the language bias: Linq is essentially a set of Map / Filter and Reduce operations that operate on a sequence of data. This got me thinking: how far could I theoretically take this? Would I be able to rewrite a whole code base as a series (or even a single chain) of Map / Filter and Reduce operations? Unfortunately I get paid to do useful stuff, so I haven't been able to experiment much further, but I can't think of any code structure that couldn't be restructured as such. Side-effecting code can be dealt with via monads. Even output is essentially mapping memory addresses to screen addresses. Is there anything that couldn't be (theoretically) rewritten as a Linq query?
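    As a small illustration of the shape in question (Java streams rather than Linq, but the same three operations), a sketch of an imperative loop and its Filter / Map / Reduce equivalent:

        List<String> words = List.of("map", "filter", "reduce", "monad");

        // Imperative: accumulate the total length of words longer than 3 chars.
        int total = 0;
        for (String s : words) {
            if (s.length() > 3) total += s.length();
        }

        // The same computation as Filter -> Map -> Reduce:
        int total2 = words.stream()
                .filter(s -> s.length() > 3)  // Filter
                .mapToInt(String::length)     // Map
                .sum();                       // Reduce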

  • Split large file, have arbitrary start index number

    - by nEJC
    I do a lot of file manipulation on my system, and in one particular batch job I end up with a roughly 16 GB file. I need to prepare this data in smaller chunks for another process, so I split it into 10k lines per file with a numeric index padded to 5 digits: split -a 5 -d -l 10000 large_input_file /out_path/out. This way I end up with files named out.00000, out.00001, and so on. The problem is that the indexing always starts at 0. Is there a way to set it to an arbitrary starting index? man reveals nothing ...
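    For what it's worth, recent GNU coreutils versions of split accept a start value for the numeric suffix via the long form of -d, along the lines of split -a 5 -l 10000 --numeric-suffixes=10000 large_input_file /out_path/out. (this assumes a sufficiently new coreutils; with an older split, renaming the output files in a shell loop is the usual workaround).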

  • How much time do you need in between large projects?

    - by Mattio
    You've launched a large project at work, something that's been in progress and taken up large chunks of your life for more than 6 months. The post-launch triage is over. Tech support isn't calling you every hour because they don't know how to troubleshoot an issue. Your hours drop from 60+/wk to whatever is normal in your organization (which is hopefully less than 60+!). How much time do you (or your team) need before the next large project begins? I was asked this question at work and I think the ideal minimum is two weeks -- one week to clear your desk and inbox + one week to clear your head and remember what it's like to have a life outside of work. I'd frankly acknowledge that just being asked this question is a huge boon to work/life balance. But I do think it's possible to go too long in between.

  • How do I close the seams between my terrain chunks?

    - by gnomgrol
    I'm using C++ and D3D11, and I'm trying to create a (pretty) large terrain, let's say 4096x4096, maybe larger. I've got the basics of terrain creation down and have already split it up into chunks. But when I'm rendering them (every chunk has its own vertex and index buffer, as well as its own heightmap), there are still little pieces missing between them. I read a lot about LOD (Level Of Detail) and GMM (Geometry Mipmaps), but I can't really implement the theory I read. At the moment it looks like this (screenshot not reproduced in this listing). I could really use some help; everything is welcome. If you have good tutorials on any of this, please share them.
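    One guess at the cause, since it is a very common one: each chunk samples its own heightmap, so the vertices along a shared border get slightly different heights or positions in neighbouring chunks. A fix is to sample every chunk from one global height source and give each chunk size + 1 vertices per side, so adjacent chunks generate bit-identical border vertices. A sketch (written in Java for brevity; globalHeight() stands in for whatever noise or heightmap lookup the terrain really uses, and the idea ports directly to C++):

        // CHUNK cells per side => CHUNK + 1 vertices per side, so chunk (cx, cy)'s
        // last row/column coincides exactly with its neighbour's first.
        float[][] buildChunkHeights(int cx, int cy, int chunk) {
            float[][] h = new float[chunk + 1][chunk + 1];
            for (int x = 0; x <= chunk; x++) {
                for (int y = 0; y <= chunk; y++) {
                    int worldX = cx * chunk + x; // global coordinates, shared by neighbours
                    int worldY = cy * chunk + y;
                    h[x][y] = globalHeight(worldX, worldY); // single height source
                }
            }
            return h;
        }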

  • What is the best way to work with large databases in Java depending on context?

    - by user19000
    We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI (Business Intelligence), i.e., analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the originals. We are currently using JDBC and just performing queries using a ResultSet. As more and more data is created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs: we need to support 'chunk' manipulation rather than an entire DB at once (e.g. LIMIT in JDBC gives very poor performance); we do not need to be constantly connected, since we are just pulling results and creating new tables of our own. We want to understand JDBC alternatives, with respect to advantages and disadvantages. Whether you think JDBC is the way to go or not, what are the best practices to follow depending on context (e.g. for large DBs queried in chunks)?
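    One pattern worth naming here, sketched under assumptions (a table big_table with a monotonically increasing id column, a MySQL/PostgreSQL-style LIMIT clause, and a hypothetical process() doing the per-row work): keyset pagination. Each query resumes where the previous chunk ended, so the database never rescans skipped rows the way growing LIMIT/OFFSET values force it to, and setFetchSize() hints the driver to stream rather than buffer the whole result:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        // Keyset pagination: resume from the last seen id instead of using OFFSET.
        void processInChunks(Connection conn, int chunkSize) throws SQLException {
            String sql = "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id LIMIT ?";
            long lastId = 0;
            boolean more = true;
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setFetchSize(chunkSize); // hint: stream rows, don't buffer everything
                while (more) {
                    ps.setLong(1, lastId);
                    ps.setInt(2, chunkSize);
                    more = false;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            more = true;
                            lastId = rs.getLong("id");
                            process(rs.getString("payload")); // per-row work (placeholder)
                        }
                    }
                }
            }
        }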

  • I need to get past my permissions to recover data

    - by adsmz
    Due to some mishaps, I am unable to boot into Kubuntu at all. However, my data is still on the hard drive. I managed to get one of the other two computers to which I have access to read the disk by booting into a live-CD session of Kubuntu. The only storage medium I have is a 30 GB data stick. Here's where the trouble starts: in music alone, I have about 60 GB to back up. Obviously this is going to have to be split into chunks and moved over to the second spare PC until I can reinstall Kubuntu on my laptop. All of the data that needs backing up is behind a permissions wall, so while I can view it, I can't interact with it directly. I know copying and moving through the terminal can get around this with sudo cp or sudo mv, but is there a way to first compress multiple folders into a single archive, then move it? (While we're on the subject, what compression method would be best for large volumes of music in MP3, WAV, and OGG formats?)
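    A hedged suggestion (the paths and sizes here are made up, and this assumes GNU tar plus a FAT-formatted stick, which caps individual files at 4 GB): archive as root and split on the fly with something like sudo tar -cf - /path/to/Music | split -b 3900m - music.tar.part-, then rebuild on the target machine with cat music.tar.part-* | tar -xf -. On compression: MP3 and OGG are already compressed, so gzip gains almost nothing on them; WAV is uncompressed, and a lossless audio codec such as FLAC will shrink it far more than any general-purpose compressor.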

  • Slow sync of files from iPad to webserver?

    - by MikeN
    I'm building a recording iPad application that will take some moderately large recordings on the iPad (5-10 minutes of full audio, roughly 5-10 megabytes in size). How can I sync such large files to my web server? I want the syncing to occur asynchronously in the background. Is there an existing library/utility to sync files in the megabyte range from an iPhone/iPad to a server in small chunks?

  • "Chunked" MemoryStream

    - by Karol Kolenda
    I'm looking for an implementation of MemoryStream which does not allocate memory as one big block, but rather as a collection of chunks. I want to store a few GB of data in memory (64-bit) and avoid the limitations imposed by memory fragmentation.
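    The question is about .NET, but the shape of such a stream is worth sketching (here in Java, with an arbitrary 1 MB block size): keep a list of fixed-size blocks and translate each logical position into a (block index, offset) pair, so no single contiguous allocation is ever required.

        import java.util.ArrayList;
        import java.util.List;

        // Growable in-memory byte sink backed by fixed-size blocks rather than
        // one contiguous array: large payloads never need large contiguous heap.
        class ChunkedBuffer {
            private static final int BLOCK = 1 << 20; // 1 MB per block (arbitrary)
            private final List<byte[]> blocks = new ArrayList<>();
            private long length = 0;

            void write(byte[] src, int off, int len) {
                while (len > 0) {
                    int block = (int) (length / BLOCK);
                    int inBlock = (int) (length % BLOCK);
                    if (block == blocks.size()) blocks.add(new byte[BLOCK]); // grow
                    int n = Math.min(len, BLOCK - inBlock);
                    System.arraycopy(src, off, blocks.get(block), inBlock, n);
                    off += n; len -= n; length += n;
                }
            }

            byte readAt(long pos) { // random access via (block, offset)
                return blocks.get((int) (pos / BLOCK))[(int) (pos % BLOCK)];
            }
        }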

  • HTML tidy/cleaning in Ruby 1.9

    - by Christian
    I'm currently using the RubyTidy Ruby bindings for HTML tidy to make sure HTML I receive is well-formed. Currently this library is the only thing holding me back from getting a Rails application on Ruby 1.9. Are there any alternative libraries out there that will tidy up chunks of HTML on Ruby 1.9?

  • PHP String Split

    - by deniz
    I need to split a string into chunks of 2, 2, 3, 3 characters and was able to do so in Perl by using unpack: unpack("A2A2A3A3", 'thisisloremipsum'); However, the same format does not work in PHP; it gives this output: Array ( [A2A3A3] => th ) How can I do this using unpack? I don't want to write a function for it; it should be possible with unpack, but how? Thanks in advance,
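    A hedged explanation, based on how PHP documents unpack(): each format unit is a code, a repeater, and then a name running up to the next /, so in "A2A2A3A3" everything after the first A2 is being parsed as the name (hence the single A2A3A3 key). Naming each field separates them: unpack("A2a/A2b/A3c/A3d", 'thisisloremipsum') should yield keys a, b, c, d holding th, is, isl, ore.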

  • How IEEE-754 floating point numbers work

    - by hatorade
    Let's say I have this: float i = 1.5; In binary, this float is represented as: 0 01111111 10000000000000000000000 I broke up the binary to show the 'sign', 'exponent' and 'fraction' chunks. What I don't understand is how this represents 1.5. The exponent is 0 once you subtract the bias (127 - 127), and the fraction part with the implicit leading one is 1.1. How does 1.1 scaled by nothing = 1.5???
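    The missing step is that 1.1 here is a binary numeral, not a decimal one; the digit after the binary point is worth 2^-1:

        1.1 (binary) = 1 * 2^0 + 1 * 2^-1 = 1 + 0.5 = 1.5 (decimal)

    so scaling by 2^0 = 1 leaves exactly 1.5.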

  • Suggestion for a Data Structure!

    - by Jay
    I have the following requirements for a data structure: direct access to an element via a key (the key is an integer, and its range is the full integer range); no memory allocation in many small chunks (contiguous memory should be allocated for the data structure, including the data); and the ability to grow the data structure dynamically. Which data structure would you suggest? Any pointers in the right direction would also help.
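    Those three constraints together sound like an open-addressing hash table kept in plain parallel arrays: one contiguous allocation, O(1) access by integer key, and growth by rehashing into larger arrays. A sketch with linear probing (the Integer.MIN_VALUE sentinel for empty slots is an assumption of this sketch; a separate occupancy bitmap would avoid reserving a key value):

        import java.util.Arrays;

        // Int-keyed open-addressing table in parallel arrays: contiguous memory,
        // direct access by key, growth by rehashing into larger arrays.
        class IntIntTable {
            private static final int EMPTY = Integer.MIN_VALUE; // reserved sentinel key
            private int[] keys = newKeys(16);
            private int[] vals = new int[16];
            private int size = 0;

            void put(int key, int value) {
                if (size * 2 >= keys.length) grow();   // keep load factor under 1/2
                int i = slot(key, keys);
                if (keys[i] == EMPTY) size++;
                keys[i] = key;
                vals[i] = value;
            }

            Integer get(int key) {
                int i = slot(key, keys);
                return keys[i] == EMPTY ? null : vals[i];
            }

            private static int slot(int key, int[] ks) { // linear probing
                int mask = ks.length - 1;
                int i = (key * 0x9E3779B1) & mask;       // cheap multiplicative mix
                while (ks[i] != EMPTY && ks[i] != key) i = (i + 1) & mask;
                return i;
            }

            private void grow() {
                int[] oldK = keys, oldV = vals;
                keys = newKeys(oldK.length * 2);
                vals = new int[oldV.length * 2];
                for (int j = 0; j < oldK.length; j++)
                    if (oldK[j] != EMPTY) {
                        int i = slot(oldK[j], keys);
                        keys[i] = oldK[j];
                        vals[i] = oldV[j];
                    }
            }

            private static int[] newKeys(int n) {
                int[] ks = new int[n];
                Arrays.fill(ks, EMPTY);
                return ks;
            }
        }

    A plain sorted array with binary search would also satisfy the contiguity and growth requirements, at the cost of O(log n) lookups.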

  • Python slicing a string using space characters and a maximum length

    - by chrism
    I'd like to slice a string up in a similar way to .split() (so resulting in a list) but in a more intelligent way: I'd like it to split into chunks that are up to 15 characters, but not split mid-word, so: string = 'A string with words' [splitting process takes place] list = ('A string with','words') The string in this example is split between 'with' and 'words' because that's the last place it can be split with the first part still being 15 characters or fewer.
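    For what it's worth, this matches what Python's standard textwrap module does: textwrap.wrap('A string with words', 15) returns ['A string with', 'words'], greedily packing whole words into lines of at most 15 characters.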

  • realloc(): invalid next size

    - by Kewley
    I'm having a problem with the realloc function. I'm using C only (so no vector) with libcurl. The problem is that I'm getting the error realloc(): invalid next size on the 12th iteration of my write_data function (the callback I pass to curl; it is called each time libcurl has some data to pass back, which arrives in chunks). Trace: Source: http://pastebin.com/WBWhV5fr Thanks in advance,
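    A hedged observation rather than a diagnosis (the pastebin contents aren't reproduced here): glibc raises "realloc(): invalid next size" when the heap's bookkeeping was already corrupted before the failing call, almost always by writing past the end of an earlier allocation. In libcurl write callbacks, the classic culprits are sizing the buffer by size or nmemb alone instead of size * nmemb, or appending a terminating NUL without having reserved the extra byte for it.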

  • Caching in mmap

    - by myahya
    I am using the mmap call to read from a very big file, using simple pointer arithmetic in C++. The problem is that when I read small chunks of data (on the order of KBs) multiple times, each read takes the same amount of time as the previous one. How can I know whether the disk is being accessed to fulfill my request, or whether the request is fulfilled from main memory (the page cache) in calls after the first one?
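    If the question is literally how to tell whether the pages are resident, mincore(2) answers it: given an address range in the mapping, it fills a vector with one byte per page, whose low bit is set when that page is currently in the page cache. As a cruder experimental tool, writing to /proc/sys/vm/drop_caches (as root) between runs evicts the page cache, making cold-cache timings reproducible.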

  • Max file size for File.ReadAllLines

    - by user283897
    Hi, I need to read and process a text file. My processing would be easier if I could use the File.ReadAllLines method, but I'm not sure what the maximum file size is that can be read with this method without reading in chunks. I understand that it depends on the computer's memory, but are there still any recommendations for an average machine? I would greatly appreciate your fast response. Thanks, Lev
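    To the best of my knowledge there is no documented limit specific to File.ReadAllLines; the practical ceiling is process memory, since the whole file materializes as a string array (with .NET's 2 GB per-object limit bounding any single array or string). For files that may be large, File.ReadLines (available since .NET 4) streams lines lazily and sidesteps the question entirely.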

  • Testing SocketChannel NIO

    - by hotzen
    Hello, I just wrote some NIO code and wonder how to stress-test my implementation with regard to: SocketChannel.write(ByteBuffer) not being able to write the whole byte buffer, and SocketChannel.read(ByteBuffer) reading the data in chunks into the ByteBuffer. Are there some simple Linux utilities, like telnet, that can talk to a ServerSocket with some buffering options?
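    Two hedged ideas. On the utility side, netcat combined with pv's rate limiter can trickle bytes into a server, e.g. cat payload | pv -L 1k | nc localhost 8080, which tends to make read() deliver small chunks (TCP buffering makes this probabilistic rather than exact). For the write() side, a deterministic option is a decorator channel that caps how many bytes each call accepts, forcing the short-write handling path; a sketch, assuming the code under test works against any WritableByteChannel:

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.WritableByteChannel;

        // Never writes more than maxPerCall bytes per call, deterministically
        // simulating SocketChannel's partial writes for stress tests.
        class ShortWriteChannel implements WritableByteChannel {
            private final WritableByteChannel inner;
            private final int maxPerCall;

            ShortWriteChannel(WritableByteChannel inner, int maxPerCall) {
                this.inner = inner;
                this.maxPerCall = maxPerCall;
            }

            @Override public int write(ByteBuffer src) throws IOException {
                int cap = Math.min(src.remaining(), maxPerCall);
                ByteBuffer slice = src.duplicate();     // leave src's limit untouched
                slice.limit(slice.position() + cap);
                int written = inner.write(slice);
                src.position(src.position() + written); // advance the real buffer
                return written;
            }

            @Override public boolean isOpen() { return inner.isOpen(); }
            @Override public void close() throws IOException { inner.close(); }
        }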
