Search Results

Search found 43110 results on 1725 pages for 'noob question'.


  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, AIX, Solaris, and Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher or bad things will happen. This is particularly the case with Solaris and now Darwin. This is very strange to me because I am a believer in zero required environment fiddling to make a product work. So I am wondering if there are times where this sort of requirement is a necessary evil, or if we are doing something wrong.

    Edit: Great comments that describe the problem and a little background. However, I do not believe I worded the question well enough. Currently we require customers, and hence us the testers, to set these limits before running our code. We do not do this programmatically. And this is not a situation where they MIGHT run out; under normal load our programs WILL run out and segfault. So, rewording the question: is requiring the customer to change these ulimit values in order to run our software to be expected on some platforms (i.e. Solaris, AIX), or are we as a company making it too difficult for these users to get going?

    Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things might be a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.
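    On the "can you set these programmatically" point, POSIX systems do expose the soft limits through getrlimit/setrlimit, so a process can raise them for itself at startup (never above the hard limit the administrator has set). A minimal sketch, with illustrative limit values rather than anything from the question:

      #include <sys/resource.h>
      #include <cstdio>

      // Raise a soft limit to the requested value, capped at the hard limit.
      static bool raise_limit(int resource, rlim_t wanted) {
          struct rlimit rl;
          if (getrlimit(resource, &rl) != 0)
              return false;
          if (rl.rlim_cur >= wanted)
              return true;                                  // already high enough
          rl.rlim_cur = (wanted < rl.rlim_max) ? wanted : rl.rlim_max;
          return setrlimit(resource, &rl) == 0;
      }

      int main() {
          if (!raise_limit(RLIMIT_NOFILE, 4096))
              std::fprintf(stderr, "could not raise the file descriptor limit\n");
          if (!raise_limit(RLIMIT_STACK, 16 * 1024 * 1024))
              std::fprintf(stderr, "could not raise the stack size limit\n");
          // ... start the rest of the program ...
      }

    Note that on some platforms a raised RLIMIT_STACK only takes effect for threads or child processes created after the call, so this may reduce, rather than remove, the need for ulimit settings in a launch script.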

    Read the article

  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this Question.

    The project and problem: The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used to allow inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database, because I don't think it is properly normalized and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways that the database structure can be improved to facilitate efficient queries and help me develop this application and future applications faster.

    Business logic: The bulletin board will be used in the following way.

    Posting bulletins and responses to bulletins: Employees or 'users' in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised - I'll call these "bulletins". Users will be able to post any number of replies to any one bulletin, and users will be able to reply to their own bulletin - I'll call these 'replies'.

    Rating bulletins and replies: Users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply.

    Viewing the bulletin board and responses: Bulletins can be displayed chronologically. Users can sort bulletins chronologically, or chronologically by the latest reply to that bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically.

    @PerformanceDBA - edited 10:34 EST 28/12/10: I have begun implementing the data model. I assume that the 6th data model is the physical model because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow!

    Implementation of Physical model

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

      CREATE TABLE Contact (
          ID varchar(15),
          NAME varchar(30)
      );

      CREATE TABLE Address (
          ID varchar(15),
          CONTACT_ID varchar(15),
          NAME varchar(50)
      );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since all is kept in the Identity Map in memory.

    Question: What is a good approach to address this (avoiding unnecessary db round trips)? Some ideas: Retrieve the last ID. I can do a call to the database to retrieve the last ID, like:

      SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, I would then need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs) - all in the current transaction context.

    Read the article

  • Problems migrating databinding in VB.NET from Winforms to ASP.NET 2.0

    - by David
    And this was supposed to be so easy... I have existing business and data access layers that handle the retrieval and update of the data in question. These work great with the existing Winforms application (.NET v2.0). Now, in trying to write a new web-based UI, I'm running into all sorts of problems (the last time I wrote ASP.NET code was in 1.1). Specifically, I can't data bind a text box to a business object. Oh, sure, there's the ObjectDataSource, but that wants to know how to do CRUD operations on the data. What I'm looking for is something that acts like the 'classic' binding objects so that, in my code, it's as simple as retrieving the object and doing a refresh. The data components like FormView and DetailsView are so generic-looking that it's ridiculous. The existing application would have tabbed dialogs, text boxes grouped by panels, etc.

    On top of that, I have a directive to use master pages, and unless one control causes it, I can't seem to get the content section to expand. I can't just put a text box 'below' the bottom of "Content1" and have it resize the content section - which gives me the same results as an earlier question I posted when the footer wasn't being 'pushed down' - relative positioning solved that, but doesn't seem to solve it when placing small text boxes in the area.

    What I want is fairly simple. Something like:

      bindingobject.datasource = businessdataobject
      bindingobject.refresh

    ...and have the text boxes refresh with the new values, and likewise have the 'businessdataobject' properties updated as the user enters new data. I was able to do this with the GridView (grdRequests.DataSource = lstRequests) by making a list of asp:BoundField tags inside the collection of the GridView. Am I tilting at windmills here?

    Read the article

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

      (defn could-be-lazy-juxt [& funs]
        (fn [& args]
          (map #(apply %1 %2) funs (repeat args))))

      => ((juxt inc dec str) 1)
      [2 0 "1"]
      => ((could-be-lazy-juxt inc dec str) 1)
      (2 0 "1")
      => ((juxt * / -) 6 2)
      [12 3 4]
      => ((could-be-lazy-juxt * / -) 6 2)
      (12 3 4)

    As posted in the original question, I have little clue about the laziness or performance of it, but timing in the REPL does suggest something lazy-ish is going on:

      => (time (apply (juxt + -) (range 1 100)))
      "Elapsed time: 0.097198 msecs"
      [4950 -4948]
      => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
      "Elapsed time: 0.074558 msecs"
      (4950 -4948)
      => (time (apply (juxt + -) (range 10000000)))
      "Elapsed time: 1019.317913 msecs"
      [49999995000000 -49999995000000]
      => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
      "Elapsed time: 0.070332 msecs"
      (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (the print of the outcome 'feels' about as long in both). Doing a 'take x' on the function only limits the number of functions evaluated, which is probably limited in its applicability, and limiting the other parameters by 'take' should be just as lazy in normal juxt. Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a compositing step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector?

    Edit: Somehow things can always be done simpler in Clojure:

      (defn could-be-lazy-juxt [& funs]
        (fn [& args]
          (map #(apply % args) funs)))

    Read the article

  • Design: How to declare a specialized memory handler class

    - by Michael Dorgan
    On an embedded-type system, I have created a Small Object Allocator that piggybacks on top of a standard memory allocation system. This allocator is a boost::simple_segregated_storage<...> class, and it does exactly what I need - O(1) alloc/dealloc time on small objects at the cost of a touch of internal fragmentation. My question is how best to declare it. Right now it's declared static at file scope in our mem code module, which is probably fine, but it feels a bit exposed there and is also now linked to that module forever.

    Normally I would declare it as a monostate or a singleton, but those use the dynamic memory allocator (which is where this is located). Furthermore, our dynamic memory allocator is initialized and used before static object initialization occurs on our system (as, again, the memory manager is pretty much the most fundamental component of an engine). To get around this catch-22, I added an extra 'does the small object allocator exist yet' check, and that check must now be run on every small object allocation. In the scheme of things this is nearly negligible, but it still bothers me.

    So the question is: is there a better way to declare this portion of the memory manager that helps decouple it from the memory module and perhaps avoids the cost of that extra isinitialized() if statement? If this method uses dynamic memory, please explain how to get around the lack of initialization of the small object portion of the manager.
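    One common shape for this - offered only as a hedged sketch, with a stand-in allocator type rather than the real segregated-storage class - is a 'construct on first use' accessor: a function-local static lives in static storage, so it never touches the dynamic allocator, it is created the first time any module asks for it, and the 'does it exist yet' branch disappears from the caller's code (the compiler still guards the one-time construction internally, and pre-C++11 that guard is not thread-safe):

      #include <cstddef>
      #include <new>

      // Stand-in for the real small object allocator; the bodies here just
      // forward to the global allocator so the sketch compiles on its own.
      class SmallObjectAllocator {
      public:
          void* allocate(std::size_t size)              { return ::operator new(size); }
          void  deallocate(void* p, std::size_t /*sz*/) { ::operator delete(p); }
      };

      // Accessor with a function-local static: constructed on first use,
      // placed in static storage, not tied to any one module's file scope.
      SmallObjectAllocator& smallObjectAllocator() {
          static SmallObjectAllocator instance;   // no dynamic allocation here
          return instance;
      }

      // Example caller:
      // void* p = smallObjectAllocator().allocate(24);
      // smallObjectAllocator().deallocate(p, 24);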

    Read the article

  • Performance difference in for loop condition?

    - by CSharperWithJava
    Hello all, I have a simple question that I am posing mostly out of curiosity. What are the differences between these two lines of code (in C++)?

      for(int i = 0; i < N, N > 0; i++)
      for(int i = 0; i < N && N > 0; i++)

    The selection of the conditions is completely arbitrary; I'm just interested in the differences between , and &&. I'm not a beginner to coding by any means, but I've never bothered with the comma operator. Are there performance/behavior differences, or is it purely aesthetic? One last note: I know there are bigger performance fish to fry than a conditional operator, but I'm just curious. Indulge me.

    Edit: Thanks for your answers. It turns out the code that prompted this question had misused the comma operator in the way I've described. I wondered what the difference was and why it wasn't a && operator, but it was just written incorrectly. I didn't think anything was wrong with it because it worked just fine. Thanks for straightening me out.
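    For reference, the difference is easy to see in a small program: the built-in comma operator evaluates its left operand, discards the result, and yields the right operand, so the condition "i < N, N > 0" only ever tests "N > 0". A quick sketch:

      #include <iostream>

      int main() {
          const int N = 5;

          // With the comma operator, "i < N" is evaluated and thrown away;
          // only "N > 0" controls the loop, so this version never stops on i.
          // (Left commented out, since it would loop forever.)
          // for (int i = 0; i < N, N > 0; ++i) { /* ... */ }

          // With &&, both conditions must hold, so the loop runs N times.
          int iterations = 0;
          for (int i = 0; i < N && N > 0; ++i)
              ++iterations;
          std::cout << "&& version ran " << iterations << " times\n";

          // The comma expression itself takes the value of its right operand:
          int i = 10;
          bool keepGoing = (i < N, N > 0);   // true, even though i >= N
          std::cout << std::boolalpha << keepGoing << "\n";
      }

    So the two loops are not equivalent; with the comma form the "i < N" test contributes nothing except, at most, a compiler warning about an expression with no effect.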

    Read the article

  • How do I implement / build / create an 'in memory database' for my unit tests

    - by Michel
    Hi all, I started unit testing a while ago, and as it turned out I did more regression testing than unit testing, because I also included my database layer and thus went to the database every time. So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it?

    The main question is: I think I have to fake the database layer, but doesn't that mean I have to create a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests :) At the end of this question I'll give an explanation of the situation I got into on the project I just started on, and I was wondering if this was the way to go.

    Michel

    The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using the Entity Framework, and this works very simply. And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist etc. the data. It sounds weird that there is so much work in the fake layer and so little in the real layer. I hope this all makes sense :)

    Read the article

  • SQL Server 2005 - Enabling both Named Pipes & TCP/IP protocols?

    - by Clinemi
    We have a SQL Server 2005 database, and currently all our users are connecting to the database via the TCP/IP protocol. The SQL Server Configuration Manager allows you to "enable" both Named Pipes and TCP/IP connections at the same time. Is this a good idea? My question is not whether we should use Named Pipes instead of TCP/IP, but whether there are problems associated with enabling both. One of our client's IT guys says that enabling database communication with both protocols will limit the bandwidth that either protocol can use - to like 50% of the total. I would think that the bandwidth that TCP/IP could use would be directly tied (inversely) to the amount of traffic that Named Pipes (or any of the other types of traffic) was occupying on the network at that moment. However, this IT person is indicating that the fact that we have enabled two protocols on the server artificially limits the bandwidth that TCP/IP can use. Is this correct? I did Google searches but could not come up with an answer to this question. Any help would be appreciated.

    Read the article

  • Most "thorough" distribution of points around a circle

    - by hippietrail
    This question is intended to both abstract and focus one approach to my problem expressed at "Find the most colourful image in a collection of images". Imagine we have a set of circles, each with a number of points around its circumference. We want to find a metric that gives a higher rating to a circle with points distributed evenly around it. Circles with some points scattered through the full 360° are better, but circles with far greater numbers of points in one area compared to a smaller number in another area are less good. The number of points is not limited. Two or more points may coincide, and coincident points are still relevant. A circle with one point at 0° and one point at 180° is better than a circle with 100 points at 0° and 1000 points at 180°. A circle with one point every degree around the circle is very good. A circle with a point every half degree around the circle is better.

    In my other (colour-based) question it was suggested that standard deviation would be useful, but with a caveat. Is this a good suggestion, and does it cope with the closeness of 359° to 1°?
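    One wraparound-aware option - offered as a sketch of a candidate metric, not as the accepted answer to the question - comes from circular statistics: treat each point as a unit vector, average the vectors, and use 1 minus the length of that mean vector as the spread. Points at 359° and 1° then count as close together, and a lopsided 100-versus-1000 split scores worse than one point at each end:

      #include <cmath>
      #include <cstdio>
      #include <vector>

      // Circular spread of a set of angles (in radians): 1 - R, where R is the
      // length of the mean resultant vector. 0 means all points coincide;
      // values near 1 mean the points pull in no single direction.
      double circularSpread(const std::vector<double>& angles) {
          double sumCos = 0.0, sumSin = 0.0;
          for (double a : angles) {
              sumCos += std::cos(a);
              sumSin += std::sin(a);
          }
          const double n = static_cast<double>(angles.size());
          const double R = std::sqrt(sumCos * sumCos + sumSin * sumSin) / n;
          return 1.0 - R;
      }

      int main() {
          const double deg = 3.14159265358979323846 / 180.0;
          std::vector<double> clustered = { 359.0 * deg, 0.0 * deg, 1.0 * deg };
          std::vector<double> opposite  = { 0.0 * deg, 180.0 * deg };
          std::printf("clustered: %.3f\n", circularSpread(clustered)); // near 0
          std::printf("opposite:  %.3f\n", circularSpread(opposite));  // 1.0
      }

    Whether this is the most "thorough" measure for the image problem is a separate judgement; it only addresses the 359°-versus-1° concern raised at the end.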

    Read the article

  • Centralized Exception handling for Eclipse plug-in

    - by Svilen
    Hello. At first I thought this would be a question often asked, but trying (and failing) to look up info on it proved me wrong. Is there a mechanism in the Eclipse platform for centralized handling of exceptions?

    For example... You have a plug-in project which connects to a DB and issues queries, the results of which are used to populate e.g. some views. This is like the most common example ever. :) Queries are executed for almost any user action, from every UI control the plug-in provides. Most likely the DB query API will declare some DB-specific SomeDBGeneralException as being thrown by it. That's OK; you can handle those according to whatever your software design is. But what about unchecked exceptions which are likely to occur, e.g. when communication with the DB suddenly breaks for some network-related reason? What if, in such a case, one would like to catch those exceptions in a central place and, for example, provide a user-friendly message to the user (rather than the low-level communication protocol API messages) and even some possible actions the user could execute in order to deal with the specific problem?

    Thinking in the Eclipse platform context, the question may be rephrased as: "Is there an extension point like 'org.eclipse.ExceptionHandler' which allows one to declare exception handlers for specific exceptions (with some kind of filtering support), giving a lot of flexibility in the actual handling?"

    Read the article

  • Add xml-stylesheet and get standalone = yes.

    - by tumba25
    The code at the bottom is what I have (I removed the creation of all tags). At the top of the XML file I get:

      <?xml version="1.0" encoding="UTF-8" standalone="no"?>

    Note that standalone is "no", even though I have it set to "yes". The first question: how do I get standalone = yes? I would also like to add

      <?xml-stylesheet type="text/xsl" href="my.stylesheet.xsl"?>

    at line two of the XML file. Second question: how do I do that? Some useful links? Anything?

      DocumentBuilderFactory dbfac = DocumentBuilderFactory.newInstance();
      DocumentBuilder docBuilder = dbfac.newDocumentBuilder();
      Document doc = docBuilder.newDocument();
      <cut>
      TransformerFactory transfac = TransformerFactory.newInstance();
      transfac.setAttribute("indent-number", new Integer(2));
      Transformer trans = transfac.newTransformer();
      trans.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "no");
      trans.setOutputProperty(OutputKeys.STANDALONE, "yes");
      trans.setOutputProperty(OutputKeys.INDENT, "yes");
      trans.setOutputProperty(OutputKeys.CDATA_SECTION_ELEMENTS, "name");
      FileOutputStream fout = new FileOutputStream(filepath);
      BufferedOutputStream bout = new BufferedOutputStream(fout);
      trans.transform(new DOMSource(doc),
                      new StreamResult(new OutputStreamWriter(bout, "utf-8")));

    Read the article

  • How does mercurial's bisect work when the range includes branching?

    - by Joshua Goldberg
    If the bisect range includes multiple branches, how does hg bisect's search work? Does it effectively bisect each sub-branch (I would think that would be inefficient)? For instance, borrowing, with gratitude, a diagram from an answer to this related question, what if the bisect got to changeset 7 on the "good" right-side branch first?

      @ 12:8ae1fff407c8:bad6
      |
      o 11:27edd4ba0a78:bad5
      |
      o 10:312ba3d6eb29:bad4
      |\
      | o 9:68ae20ea0c02:good33
      | |
      | o 8:916e977fa594:good32
      | |
      | o 7:b9d00094223f:good31
      | |
      o | 6:a7cab1800465:bad3
      | |
      o | 5:a84e45045a29:bad2
      | |
      o | 4:d0a381a67072:bad1
      | |
      o | 3:54349a6276cc:good4
      |/
      o 2:4588e394e325:good3
      |
      o 1:de79725cb39a:good2
      |
      o 0:2641cc78ce7a:good1

    Will it then look only between 7 and 12, missing the real first-bad that we care about (thus using "dumb" numerical order)? Or is it smart enough to use the full topography and to know that the first bad could be below 7 on the right-side branch, or could still be anywhere on the left-side branch?

    The purpose of my question is both (a) just to understand the algorithm better, and (b) to understand whether I can liberally extend my initial bisect range without thinking hard about what branch I go to. I've been in high-branching bisect situations where it kept asking me after every test to extend beyond the next merge, so that the whole procedure was essentially O(n). I'm wondering if I can just throw the first "good" marker way back past some nest of merges without thinking about it much, and whether that would save time and give correct results.

    Read the article

  • MySQL SELECT combining 3 SELECTs INTO 1

    - by Martin Tóth
    Consider the following tables in a MySQL database:

      entries:  creator_id INT, entry TEXT, is_expired BOOL
      other:    creator_id INT, entry TEXT
      userdata: creator_id INT, name VARCHAR, etc...

    In entries and other there can be multiple entries by one creator. The userdata table is read-only for me (placed in another database). I'd like to achieve the following SELECT result:

      +------------+---------+---------+-------+
      | creator_id | entries | expired | other |
      +------------+---------+---------+-------+
      |      10951 |      59 |      55 |    39 |
      |      70887 |      41 |      34 |   108 |
      |      88309 |      38 |      20 |   102 |
      |      94732 |       0 |       0 |    86 |
      ...

    where entries is equal to SELECT COUNT(entry) FROM entries GROUP BY creator_id, expired is equal to SELECT COUNT(entry) FROM entries WHERE is_expired = 0 GROUP BY creator_id, and other is equal to SELECT COUNT(entry) FROM other GROUP BY creator_id. I need this structure because after doing this SELECT I need to look for user data in the userdata table, which I planned to do with an INNER JOIN, selecting the desired columns. I solved this problem by selecting NULL into each column that does not apply to a given SELECT:

      SELECT creator_id,
             COUNT(any_entry)     AS entries,
             COUNT(expired_entry) AS expired,
             COUNT(other_entry)   AS other
      FROM (
          SELECT creator_id, entry AS any_entry, NULL AS expired_entry, NULL AS other_entry
          FROM entries
          UNION
          SELECT creator_id, NULL AS any_entry, entry AS expired_entry, NULL AS other_entry
          FROM entries WHERE is_expired = 1
          UNION
          SELECT creator_id, NULL AS any_entry, NULL AS expired_entry, entry AS other_entry
          FROM other
      ) AS tTemp
      GROUP BY creator_id
      ORDER BY entries DESC, expired DESC, other DESC;

    I've left out the INNER JOIN and the selection of other columns from the userdata table on purpose (my question being about combining 3 SELECTs into 1). Is my idea valid? That is, am I trying to use the right "construction" for this? Are these kinds of SELECTs possible without creating an "empty" column (some kind of JOIN)? Or should I do it "outside the DB": make 3 SELECTs, put them in some order (let's say Python lists/dicts) and then do the additional SELECTs for userdata? The solution for a similar question does not return rows where entries and expired are 0. Thank you for your time.

    Read the article

  • Problem registering a COM server written for Excel registered on client machine (can't set full path

    - by toytrains
    In this previous question, "How to get COM Server for Excel written in VB.NET installed and registered in Automation Servers list?", there is an example of how to create the full path to a registry key using VS 2008. Everything in the previous answer works correctly, except that the full path I am setting (using the registry editor in VS) for mscoree.dll is not working (meaning it seems to do nothing). The full registry path is:

      HKEY_CLASSES_ROOT\CLSID\{my_GUID}\InprocServer32 (default)

    and the value I am setting is: [SystemFolder]mscoree.dll. I can put anything there (including hardcoding the full path), but the setting does not seem to matter and the registry always ends up containing mscoree.dll without any path. I have tried adding another value to the registry path via VS and that works correctly, including having the full path as specified by [SystemFolder]. The reason I need the full path (as explained in the previous question) is that without the path Excel generates an error when the automation server is selected, as it cannot find mscoree.dll (interestingly, even though I receive an error, the registration works OK). I am doing the install via a setup project which otherwise works fine. I am installing on a Vista x64 system but have gotten the same error on other OSs. Does anyone know what I am doing wrong?

    Read the article

  • Java-Eclipse-Spring 3.1 - the fastest way to get familiar with this set

    - by Leron
    I know almost all of you at some point in your life as a programmer get to the point where you know (more or less) different technologies/languages/IDEs, and a time comes when you want to pull them together and start using them at once - first, more efficiently, and second, more closely to the real-life situation where just knowing Java, or having some experience with Eclipse, doesn't mean much on its own, and what makes you a programmer worth something is the ability to work with a combination of two or more of them. Having this in mind, here is my question: what do you think is the optimal way of getting into the Java + Eclipse + Spring 3.1 world?

    I've read, and I've read a lot. I started writing real code, but almost every step means reinventing the wheel again and again, wondering how to do things you know are somewhat trivial, because you've missed that one article where the topic was discussed, and so on. I don't mind paying for a good tutorial. For example, after a bit of research I decided that instead of losing a lot of time getting the different parts together I'd rather pay for the videos at http://knpuniversity.com/screencast/starting-in-symfony2-tutorial and save myself a lot of time (I hope), getting as fast as possible to writing real code instead of wondering what does what, and so on. But I find it much more difficult to find such sources of info when I want something more specific, as I do now, and that's the reason for asking this question. I know a lot of you went through the hard way, and I won't give up if I have to do the same, but to be honest I really hope to get posts with good tutorials on the subject (paid or not), because in my situation time is literally money.

    Thanks
    Leron

    Read the article

  • Boost Asio UDP retrieve last packet in socket buffer

    - by Alberto Toglia
    I have been messing around with Boost Asio for some days now, but I got stuck with this weird behavior. Please let me explain. Computer A is sending continuous UDP packets every 500 ms to computer B. Computer B wants to read A's packets at its own pace, but only wants A's last packet, obviously the most up-to-date one. It has come to my attention that when I do a

      mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);

    I can get OLD packets that were not processed (almost every time). Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets, and therefore if I read with a lower frequency than the sender this could happen!? So the first question is: how is it possible to receive the last packet and discard the ones I missed?

    Later I tried using the async example from the Boost documentation but found it did not do what I wanted: http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html From what I could tell, async_receive_from should call the method "handle_receive" when a packet arrives, and that works for the first packet after the service was "run". If I wanted to keep listening on the port I should call async_receive_from again in the handler code, right? BUT what I found is that I start an infinite loop: it doesn't wait till the next packet, it just enters "handle_receive" again and again. I'm not doing a server application, and a lot of things are going on (it's a game), so my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive? Thanks for your attention.
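    On the first question, the friend is right: the OS queues datagrams per socket, so a slow reader sees them in arrival order. One common way to get "only the newest" - sketched here with a plain synchronous socket, and assuming it is acceptable to simply throw older datagrams away - is to drain everything currently queued and keep the last one:

      #include <cstddef>
      #include <boost/asio.hpp>

      using boost::asio::ip::udp;

      // Read every datagram currently queued on the socket and keep only the
      // newest one. Returns its size, or 0 if nothing was waiting.
      std::size_t readLatest(udp::socket& socket, char (&buffer)[1024], udp::endpoint& sender)
      {
          std::size_t lastSize = 0;
          while (socket.available() > 0) {
              boost::system::error_code ec;
              std::size_t n = socket.receive_from(boost::asio::buffer(buffer), sender, 0, ec);
              if (ec)
                  break;          // stop on error; the caller decides what to do next
              lastSize = n;       // each iteration overwrites the previous datagram
          }
          return lastSize;        // buffer holds the most recent datagram, if any
      }

    Called once per game tick, this never blocks (it only reads what available() reports) and naturally discards stale packets. The async infinite-loop symptom is a separate issue; it is usually worth checking the error_code passed to the handler before re-arming the receive.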

    Read the article

  • An online php debugger/code editor

    - by Zirak
    It's a simple deal: I'm sometimes in places where I don't have my laptop, and I find myself with spare time and an idea for a project. But unfortunately, I can't do anything about it. I've tried a variety of solutions, which include running IDEs (like PhpStorm or Aptana) from a disk-on-key or CD (very slow and unappealing) and trying several online solutions (like http://phpanywhere.net), and found that all of them are either buggy, overloaded or underloaded with features, just difficult to use, require FTP, etc. All that is required here is syntax highlighting and debugging alerts; no actual running of code. So the question is split into two: 1) Do you know of a good online PHP editor that you've used and enjoyed? 2) If not, then how would you go about making one? The second one seems a bit general, so I'll try to expand... It might be a good idea: if you can't find one, make one. The question is about the concept of making a syntax highlighter (which shouldn't be too difficult), and the difficult part of catching PHP errors WITHOUT executing any PHP code. Thank you in advance.

    Read the article

  • Why can't I display a unicode character in the Python Interpreter on Mac OS X Terminal.app?

    - by apphacker
    If I try to paste a unicode character such as the middle dot (·) into my Python interpreter, it does nothing. I'm using Terminal.app on Mac OS X, and when I'm simply in bash I have no trouble:

      :~$ ·

    But in the interpreter:

      :~$ python
      Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
      [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
      Type "help", "copyright", "credits" or "license" for more information.
      >>>

    I get nothing; it just ignores that I pasted the character. If I use the escaped \xNN\xNN representation of the middle dot, '\xc2\xb7', and try to convert it to unicode, trying to show the dot causes the interpreter to throw an error:

      >>> unicode('\xc2\xb7')
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)

    I have set up 'utf-8' as my default encoding in sitecustomize.py, so:

      >>> sys.getdefaultencoding()
      'utf-8'

    What gives? It's not the Terminal. It's not Python. What am I doing wrong?! This question is not related to this question, as that individual is able to paste unicode into his Terminal.

    Read the article

  • How can I generate an "unlimited" world?

    - by snowlord
    I would like to create a game with an endless (in reality an extremely large) world in which the player can move about. Whether or not I will ever get around to implementing the game is another matter, but I find the idea interesting and would like some input on how to do it. The point is to have a world where all data is generated randomly on demand, but in a deterministic way.

    Currently I am focusing on a large 2D map from which it should be possible to display any part without knowledge of the surrounding parts. I have implemented a prototype by writing a function that gives a random-looking, but deterministic, integer given the x and y of a pixel on the map (see my recent question about this function). Using this function I populate the map with "random" values, and then I smooth the map using a simple filter based on the surrounding pixels. This makes the map dependent on a few pixels outside its edge, but that's not a big problem. The final result is something that at least looks like a map (especially with a good altitude colour map). Given this, one could maybe first generate a coarser map which is used to produce bigger differences in altitude, to create mountain ranges and seas.

    Anyway, that was my idea, but I am sure that ways to do this already exist, and I also believe that, given the specification, many of you can come up with better ideas. EDIT: Forgot the link to my question.
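    For what it's worth, the usual shape of this is exactly the prototype described: a pure hash(x, y, seed) for lattice points, plus interpolation between them (value noise or Perlin-style noise), with a coarse layer and a fine layer summed so continents and local detail come from the same deterministic source. A small sketch of that idea - the integer mixing constants below are arbitrary, not a published noise function, and not taken from the question:

      #include <cmath>
      #include <cstdint>
      #include <cstdio>

      // Deterministic "random-looking" value for an integer lattice point.
      // The same (x, y, seed) always yields the same result, with no stored
      // state, so any region of the map can be generated on demand.
      static double latticeValue(int32_t x, int32_t y, uint32_t seed) {
          uint32_t h = static_cast<uint32_t>(x) * 374761393u
                     + static_cast<uint32_t>(y) * 668265263u
                     + seed * 2246822519u;
          h = (h ^ (h >> 13)) * 1274126177u;
          h ^= h >> 16;
          return h / 4294967295.0;                       // map to [0, 1]
      }

      // Smooth value noise: bilinear interpolation between the four lattice
      // points surrounding (x, y), at a chosen feature scale.
      static double valueNoise(double x, double y, double scale, uint32_t seed) {
          const double fx = x / scale, fy = y / scale;
          const int32_t x0 = static_cast<int32_t>(std::floor(fx));
          const int32_t y0 = static_cast<int32_t>(std::floor(fy));
          const double tx = fx - x0, ty = fy - y0;
          const double v00 = latticeValue(x0,     y0,     seed);
          const double v10 = latticeValue(x0 + 1, y0,     seed);
          const double v01 = latticeValue(x0,     y0 + 1, seed);
          const double v11 = latticeValue(x0 + 1, y0 + 1, seed);
          const double top    = v00 + (v10 - v00) * tx;
          const double bottom = v01 + (v11 - v01) * tx;
          return top + (bottom - top) * ty;
      }

      int main() {
          // Altitude at an arbitrary world coordinate, computed with no
          // knowledge of neighbouring chunks: a coarse layer for continents
          // plus a fine layer for local detail.
          const double altitude = 0.7 * valueNoise(123456.0, -98765.0, 256.0, 42u)
                                + 0.3 * valueNoise(123456.0, -98765.0, 16.0, 42u);
          std::printf("altitude = %.3f\n", altitude);
      }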

    Read the article

  • Boost's "cstdint" Usage

    - by patt0h
    Boost's C99 stdint implementation is awfully handy. One thing bugs me, though: they dump all of their typedefs into the boost namespace. This leaves me with three choices when using this facility:

      1. Use "using namespace boost"
      2. Use "using boost::[u]<type><width>_t"
      3. Explicitly refer to the target type with the boost:: prefix; e.g., boost::uint32_t foo = 0;

    Option 1 kind of defeats the point of namespaces. Even if used within local scope (e.g., within a function), things like function arguments still have to be prefixed as in option 3. Option 2 is better, but there are a bunch of these types, so it can get noisy. Option 3 adds an extreme level of noise; the boost:: prefix is often equal in length to the type in question. My question is: what would be the most elegant way to bring all of these types into the global namespace? Should I just write a wrapper around boost/cstdint.hpp that utilizes option 2 and be done with it?

    Also, wrapping the header like so didn't work on VC++ 10 (problems with standard library headers):

      namespace Foo {
          #include <boost/cstdint.hpp>
          using namespace boost;
      }
      using namespace Foo;

    Even if it did work, I guess it would cause ambiguity problems with the ::boost namespace.
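    For what it's worth, "option 2 written out once in a wrapper header" is the least surprising of the three; a hedged sketch (the header name is made up, and on toolchains whose <stdint.h> already puts these names in the global namespace the using-declarations simply re-nominate the same typedefs):

      // my_stdint.hpp -- include this instead of <boost/cstdint.hpp> elsewhere.
      #ifndef MY_STDINT_HPP
      #define MY_STDINT_HPP

      #include <boost/cstdint.hpp>

      using boost::int8_t;    using boost::uint8_t;
      using boost::int16_t;   using boost::uint16_t;
      using boost::int32_t;   using boost::uint32_t;
      using boost::int64_t;   using boost::uint64_t;

      #endif // MY_STDINT_HPP

    A real wrapper would also mirror Boost's own guard around the 64-bit typedefs, since they are only defined when the target actually has a 64-bit integer type.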

    Read the article

  • Learning libraries without books or tutorials

    - by Kawili-wili
    While many ask questions about where to find good books or tutorials, I'd like to take the opposite tack. I consider myself an entry-level programmer ready to move up to mid-level. I have written code in C, C++, C#, Perl, Python, Clojure, VB and Java, so I'm not completely clueless. Where I see a problem in moving to the next level is learning to make better use of the literally hundreds upon hundreds of libraries available out there. I seem paralyzed unless there is a specific example in a book or tutorial to hand-hold me, yet I often read in various forums where another programmer attempts to assist with a question: he or she will look through the docs or scan the available classes/methods in their favorite IDE and seem to grok what's going on in a relatively short period of time, even with no previous experience with that specific library or function.

    I yearn to break the umbilical cord of constantly spending hour upon hour searching and reading, searching and reading, searching and reading. Many times there is no book or tutorial; or if there is, the discussion glosses over my specific needs, or the examples shown are too far off the path for the usage I had in mind, or the information is outdated and makes use of deprecated components, or the library itself has fallen out of the mainstream yet is still perfectly usable (but with no docs, books, or tutorials to hand-hold).

    My question is: in the absence of books or tutorials, what is the best way to grok new or unfamiliar libraries? I yearn to slicken the grok path so I can get down to the business of doing what I love most -- coding.

    Read the article

  • minimum L sum in a mxn matrix - 2

    - by hilal
    Here is my first question about maximum L sum, and here is a different and harder version of it.

    Problem: Given an mxn *positive* integer matrix, find the minimum L sum from the 0th row to the m'th row. An L (4 items) moves like a chess knight.

    Example: M = 3x3

      0 1 2
      1 3 2
      4 2 1

    Possible L moves are: (0 1 2 2), (0 1 3 2), (0 1 4 2). We should go from the 0th row to the 3rd row with minimum sum.

    I solved this with dynamic programming, and here is my algorithm:

      1. Take another mxn "Minimum L Moves Sum" array and copy the first row of the main matrix into it. I call it MLMS.
      2. Start from the first cell, look at the upward L moves, and calculate each sum.
      3. Insert the sum into MLMS if it is less than the existing value.
      4. Repeat step 2 until the m'th row.
      5. Choose the minimum sum in the m'th row.

    Let me explain on my example step by step:

      M[ 0 ][ 0 ]: sum(L1 = (0, 1, 2, 2)) = 5; sum(L2 = (0,1,3,2)) = 6; so MLMS[ 0 ][ 1 ] = 6
                   sum(L3 = (0, 1, 3, 2)) = 6; sum(L4 = (0,1,4,2)) = 7; so MLMS[ 2 ][ 1 ] = 6
      M[ 0 ][ 1 ]: sum(L5 = (1, 0, 1, 4)) = 6; sum(L6 = (1,3,2,4)) = 10; so MLMS[ 2 ][ 2 ] = 6
      ...

    The last MLMS is:

      0 1 2
      4 3 6
      6 6 6

    which means 6 is the minimum L sum that can be reached from 0 to m. I think it is O(8*(m-1)*n) = O(m*n). Is there any optimal solution or dynamic-programming algorithm that fits this problem? Thanks, and sorry for the long question.
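    To illustrate the "fill MLMS row by row" idea in code, here is a sketch of that DP shape in C++. The predecessor offsets below are deliberate placeholders (simple knight-like steps), and each step only adds the destination cell rather than all four cells of an L, so the exact move generation and cost from the question would need to be substituted in:

      #include <algorithm>
      #include <climits>
      #include <cstdio>
      #include <vector>

      // best[r][c] = cheapest cost of reaching cell (r, c) starting from row 0.
      int minPathSum(const std::vector<std::vector<int>>& m) {
          const int rows = static_cast<int>(m.size());
          const int cols = static_cast<int>(m[0].size());
          const int offsets[][2] = { {-1, -2}, {-1, 2}, {-2, -1}, {-2, 1} }; // placeholder moves

          std::vector<std::vector<int>> best(rows, std::vector<int>(cols, INT_MAX));
          for (int c = 0; c < cols; ++c)
              best[0][c] = m[0][c];                      // step 1: copy the first row

          for (int r = 1; r < rows; ++r)                 // steps 2-4: fill row by row
              for (int c = 0; c < cols; ++c)
                  for (const auto& d : offsets) {
                      const int pr = r + d[0], pc = c + d[1];
                      if (pr >= 0 && pc >= 0 && pc < cols && best[pr][pc] != INT_MAX)
                          best[r][c] = std::min(best[r][c], best[pr][pc] + m[r][c]);
                  }

          int answer = INT_MAX;                          // step 5: best value in the last row
          for (int c = 0; c < cols; ++c)
              answer = std::min(answer, best[rows - 1][c]);
          return answer;                                 // INT_MAX means the last row is unreachable
      }

      int main() {
          const std::vector<std::vector<int>> m = { {0, 1, 2}, {1, 3, 2}, {4, 2, 1} };
          std::printf("minimum sum: %d\n", minPathSum(m));
      }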

    Read the article

  • Using Dispose on a Singleton to Cleanup Resources

    - by ImperialLion
    The question I have might be more to do with semantics than with the actual use of IDisposable. I am working on implementing a singleton class that is in charge of managing a database instance created during the execution of the application. When the application closes, this database should be deleted. Right now this deletion is handled by a Cleanup() method on the singleton, which the application calls when it is closing. As I was writing the documentation for Cleanup(), it struck me that I was describing what a Dispose() method should be used for, i.e. cleaning up resources.

    I had originally not implemented IDisposable because it seemed out of place on my singleton: I didn't want anything to dispose the singleton itself. There isn't currently, but in the future there might be, a reason for Cleanup() to be called while the singleton still needs to exist. I think I can include GC.SuppressFinalize(this) in the Dispose method to make this feasible. My question therefore is multi-parted:

    1) Is implementing IDisposable on a singleton fundamentally a bad idea?
    2) Am I just mixing semantics here by having a Cleanup() instead of a Dispose(), and since I'm disposing resources I really should use a Dispose()?
    3) Will implementing Dispose() with GC.SuppressFinalize(this) make it so my singleton is not actually destroyed in the case where I want it to live on after a call to clean up the database?

    Read the article

  • What are the arguments against the inclusion of server side scripting in JavaScript code blocks?

    - by James Wiseman
    I've been arguing for some time against embedding server-side tags in JavaScript code, but was put on the spot today by a developer who seemed unconvinced. The code in question was a legacy ASP application, although this is largely unimportant as it could equally apply to ASP.NET or PHP (for example). The example in question revolved around the use of a constant that they had defined in server-side code:

      'VB
      Const MY_CONST: MY_CONST = 1
      If sMyVbVar = MY_CONST Then
          'Do Something
      End If

      //JavaScript
      if (sMyJsVar === "<%= MY_CONST%>"){
          //DoSomething
      }

    My standard arguments against this are:

    1. Script injection: the server-side tag could include code that breaks the JavaScript.
    2. Unit testing: it is harder to isolate units of code for testing.
    3. Code separation: we should keep web page technologies apart as much as possible.

    The reason for doing this was so that the developer did not have to define the constant in two places. They reasoned that, as it was a value they controlled, it wasn't subject to script injection. This reduced my justification for (1) to "We're trying to keep the standards simple, and defining exception cases would confuse people". The unit testing and code separation arguments did not hold water either, as the page itself was a horrible amalgam of HTML, JavaScript, ASP.NET, CSS, XML... you name it, it was there. No code that was ever going to be included in this page could possibly be unit tested. So I found myself feeling like a bit of a pedant insisting that the code was changed, given the circumstances. Are there any further arguments that might support my reasoning, or am I, in fact, being a bit pedantic in this insistence?

    Read the article
