Search Results

Search found 1060 results on 43 pages for 'pseudo lru'.

Page 18/43

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question) - I was just wondering if I was doing it right, or if there is a better way. I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front) so that the tiles overlapped correctly to create the proper effect. After some reading, I quickly realised that this is the "old hat" 2D approach. It became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space), so it seemed logical to take advantage of the depth buffer. That meant I didn't have to worry about rendering things in the correct order. However, I hit a problem: if you use GL_DEPTH_TEST and GL_BLEND together, objects are blended with the background before they are "sorted" by z order, meaning you get a weird kind of overlap where the transparency should be. Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android).

        create() {
            // ... some other code here ...
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // ... bind texture and create vertices ...
        }

    So the question is: how do I solve the transparency overlap problem?
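
    A minimal sketch of one common fix, assuming the tiles only need binary transparency (pixels either fully opaque or fully see-through): enable alpha testing alongside the depth test so that transparent fragments are discarded before they ever write to the depth buffer, and draw order stops mattering. The calls below are the fixed-function OpenGL ES 1.x ones exposed through libgdx's Gdx.gl10; on GL ES 2.0 you would discard in the fragment shader instead. The class name and the 0.5f cutoff are arbitrary example values.

        import com.badlogic.gdx.ApplicationListener;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.GL10;

        public class IsoTileTest implements ApplicationListener {
            public void create() {
                Gdx.gl10.glEnable(GL10.GL_DEPTH_TEST);
                Gdx.gl10.glEnable(GL10.GL_BLEND);
                // Alpha test: fragments with alpha <= 0.5 are discarded before the depth
                // and blend stages, so they never pollute the depth buffer.
                Gdx.gl10.glEnable(GL10.GL_ALPHA_TEST);
                Gdx.gl10.glAlphaFunc(GL10.GL_GREATER, 0.5f);
            }
            public void render() {
                Gdx.gl10.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                Gdx.gl10.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
                // ... bind texture and draw tiles in any order ...
            }
            public void resize(int width, int height) {}
            public void pause() {}
            public void resume() {}
            public void dispose() {}
        }

    This only works for hard-edged tiles; geometry with partial (soft) alpha still needs to be sorted back to front before blending.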

    Read the article

  • Non-Profit Technology for Non-Profits?

    - by TomJ
    I've been looking around for a way to give back to the community, but I haven't found the right fit yet, so an idea came to mind: a non-profit technology "company" that targets non-profits. Do these exist? I've been doing some Google searches and can only find software targeted at non-profits that is created by for-profit companies or that charges what I believe to be an outrageous amount, conferences directed towards non-profits and the technology they may use -- or articles complaining about the digital divide and how non-profits view technology as key but don't have the funds or the knowledge to employ it. Pseudo "Business Model": an open source 501(c)(3) organization that directly targets non-profits to fill the "digital divide." Most services would be free, and consulting fees would be charged for customization. Donations would be accepted and government grants would be sought. This would enable non-profits to keep pace with the for-profits in the technology sector, but at little to no cost. Perhaps the first "industries" to be targeted would be those that fill key social needs, like unemployment services or food banks.

    Read the article

  • Can anyone explain step-by-step how the as3isolib depth-sorts isometric objects?

    - by Rob Evans
    The library manages to depth-sort correctly, even when using items of non-1x1 sizes. I took a look through the code but it's a big project to go through line by line! There are some questions about the process, such as: How are the x, y, z values of each object defined? Are they the center points of the objects or something else? I noticed that the IBounds defines the bounds of the object. If you were to visualise a cuboid of 40, 40, 90 in size, where would each of the IBounds metrics be? I would like to know how as3isolib achieves this although I would also be happy with a generalised pseudo-code version. At present I have a system that works 90% of the time but in cases of objects that are along the same horizontal line, the depth is calculated as the same value. The depth calculation currently works like this:

        x = object horizontal center point
        y = object vertical center point
        originX and originY = the origin point relative to the object, so if you want the origin to be the
            center, the value would be originX = 0.5, originY = 0.5. If you wanted the origin to be vertical
            center, horizontal far right of the object, it would be originX = 1.0, originY = 0.5. The origin
            adjusts the position that the object is transformed from.
        AABB_width = the bounding box width
        AABB_height = the bounding box height

        depth = x + (AABB_width * originX) + y + (AABB_height * originY) - z;

    This generates the same depth for all objects along the same horizontal x.

    Read the article

  • Transform coordinates from 3D to 2D without matrices or built-in methods

    - by Thomas
    Not too long ago I started to create a small 3D engine in JavaScript to combine with an HTML5 canvas. One of the issues I've run into is how to transform 3D coords to 2D coords. Since I cannot use matrices or built-in transformation methods, I need another way. I've tried implementing the explanation + pseudo code here: http://freespace.virgin.net/hugo.elias/routines/3d_to_2d.htm Unfortunately, no luck there. I've replaced all the input variables with data from my own camera and object classes. I have the following data: an object with a rotation, a position vector and an array of four 3D coords (it's just a plane); a camera with a position and rotation vector; the viewport - a square 600 x 600 surface. The example uses a zoom factor, which I've set to 1. Most hits on Google use either matrix calculations or don't implement camera rotation. The basic transformation should be like this: screen.x = x / z * zoom, screen.y = y / z * zoom. Can anyone point me in the right direction or explain to me how to achieve this? Edit: Thanks for all your posts, I haven't been able to apply all this to my project yet but I hope to do so soon.
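
    A sketch of the usual matrix-free pipeline: translate the point into camera space, undo the camera's rotation one axis at a time with plain sin/cos, then do the perspective divide and center on the viewport. It's sketched in Java; the math ports line for line to JavaScript. The 600x600 viewport and zoom of 1 come from the question; the class and field names are made up for illustration, and the point is assumed to lie in front of the camera (z > 0 after rotation).

        public class Project3DTo2D {
            // Rotate a point around the X, then Y, then Z axis (angles in radians).
            static double[] rotate(double x, double y, double z, double ax, double ay, double az) {
                double cx = Math.cos(ax), sx = Math.sin(ax);
                double cy = Math.cos(ay), sy = Math.sin(ay);
                double cz = Math.cos(az), sz = Math.sin(az);
                double y1 = y * cx - z * sx,  z1 = y * sx + z * cx;   // around X
                double x2 = x * cy + z1 * sy, z2 = -x * sy + z1 * cy; // around Y
                double x3 = x2 * cz - y1 * sz, y3 = x2 * sz + y1 * cz; // around Z
                return new double[] { x3, y3, z2 };
            }

            // World-space point -> screen pixel for a camera at camPos with rotation camRot.
            static double[] project(double[] p, double[] camPos, double[] camRot,
                                    double zoom, double viewW, double viewH) {
                // 1. Move the world so the camera sits at the origin.
                double x = p[0] - camPos[0], y = p[1] - camPos[1], z = p[2] - camPos[2];
                // 2. Undo the camera's rotation (note the negated angles).
                double[] r = rotate(x, y, z, -camRot[0], -camRot[1], -camRot[2]);
                // 3. Perspective divide, then scale and center on the viewport.
                double sx = r[0] / r[2] * zoom * (viewW / 2) + viewW / 2;
                double sy = r[1] / r[2] * zoom * (viewH / 2) + viewH / 2;
                return new double[] { sx, sy };
            }

            public static void main(String[] args) {
                double[] corner = { 10, 5, 40 };                    // one corner of the plane
                double[] camPos = { 0, 0, 0 }, camRot = { 0, 0, 0 };
                double[] screen = project(corner, camPos, camRot, 1.0, 600, 600);
                System.out.printf("screen = (%.1f, %.1f)%n", screen[0], screen[1]);
            }
        }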

    Read the article

  • How to design a scriptable communication emulator?

    - by Hawk
    Requirement: We need a tool that simulates a hardware device that communicates via RS232 or TCP/IP, to allow us to test our main application, which will communicate with the device. Current flow:

        1. User loads script
        2. Parse script into commands
        3. User runs script
        4. Execute commands

    Script / commands (simplified for discussion):

        Connect RS232  = RS232ConnectCommand
        Connect TCP/IP = TcpIpConnectCommand
        Send data      = SendCommand
        Receive data   = ReceiveCommand
        Disconnect     = DisconnectCommand

    All commands implement the ICommand interface. The command runner simply executes a sequence of ICommand implementations sequentially, thus ICommand must have an Execute exposure; pseudo code: void Execute(ICommunicator context). The Execute method takes a context argument which allows the command implementations to execute what they need to do. For instance, SendCommand will call context.Send, etc. The problem: RS232ConnectCommand and TcpIpConnectCommand need to instantiate the context to be used by subsequent commands. How do you handle this elegantly? Solution 1: Change the ICommand Execute method to: ICommunicator Execute(ICommunicator context). While it will work, it seems like a code smell: all commands now need to return the context, which for all commands except the connection ones will be the same context that is passed in. Solution 2: Create an ICommunicatorWrapper (ICommunicationBroker?) which follows the decorator pattern and decorates ICommunicator. It introduces a new exposure: void SetCommunicator(ICommunicator communicator). And ICommand is changed to use the wrapper: void Execute(ICommunicatorWrapper context). Seems like a cleaner solution. Question: Is this a good design? Am I on the right track?
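
    A minimal sketch of Solution 2 in Java (the shape is identical in C#), assuming a broker/wrapper that only the connect commands populate and that every other command reads from. ICommand, ICommunicator and SetCommunicator follow the question; CommunicationBroker, Rs232Communicator and the method bodies are made-up placeholders.

        interface ICommunicator {
            void send(byte[] data);
            byte[] receive();
            void disconnect();
        }

        // Stub standing in for real serial-port code.
        class Rs232Communicator implements ICommunicator {
            public void send(byte[] data) { /* write to serial port */ }
            public byte[] receive() { return new byte[0]; }
            public void disconnect() { }
        }

        // The wrapper every command receives; only connect commands ever call setCommunicator.
        class CommunicationBroker {
            private ICommunicator communicator;

            void setCommunicator(ICommunicator communicator) { this.communicator = communicator; }

            void send(byte[] data) { require().send(data); }
            byte[] receive()       { return require().receive(); }
            void disconnect()      { require().disconnect(); }

            private ICommunicator require() {
                if (communicator == null)
                    throw new IllegalStateException("No connect command has run yet");
                return communicator;
            }
        }

        interface ICommand {
            void execute(CommunicationBroker context);
        }

        class Rs232ConnectCommand implements ICommand {
            public void execute(CommunicationBroker context) {
                context.setCommunicator(new Rs232Communicator(/* port settings */));
            }
        }

        class SendCommand implements ICommand {
            private final byte[] payload;
            SendCommand(byte[] payload) { this.payload = payload; }
            public void execute(CommunicationBroker context) { context.send(payload); }
        }

    The command runner then iterates the ICommand list with one shared broker instance, so Execute can keep returning void.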

    Read the article

  • Distributing an Android game with plugins via the market

    - by Peter Serwylo
    I'm new to Android development, and was wondering how the following could be achieved within the confines of the Android market as a distribution channel: one main application, which handles the main menu, networking, high scores, etc., and several games which can be launched from the main menu and all work within the same ecosystem. The main application is not just a pseudo launcher for other games; these different games will share high scores and other achievements/preferences. In a traditional package management system such as apt, pacman or yum, this could be handled quite happily through dependencies. That does not appear to be possible via the Android market. The closest I've seen is when apps scan to check if the required app is installed and, if not, launch the market and ask the user to download it. This sounds like a very messy solution. It also raises the question: would they download the game (plugin) first, which then downloads the main shell application? Or would they download the main shell application, and when they navigate to a menu item which says "Play game", it scans for any installed games and, if none exist, redirects to the market? Also, I'm not even sure if it is possible to dig up the package from another application on the device and start invoking classes from within it (e.g. when you want to launch the game (plugin)). A final option is just to have a 3rd component which is a .jar that each game includes, which effectively contains the entire shell application. Then each game would appear to have the same menu, but it would become a nightmare as soon as you want to update the menu component and have to re-release each game. It would be especially bad if other people released games (plugins) based on the same framework and didn't update them. Are there any other options which I haven't thought of? Has anyone else solved this or seen a solution in any apps they've installed (doesn't have to be games)? Cheers.
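
    One piece of this is at least straightforward: a shell app can check whether a plugin package is installed and either launch it or send the user to its market listing. A small sketch using the stock PackageManager/Intent APIs; the package names are made-up examples.

        import android.content.Context;
        import android.content.Intent;
        import android.content.pm.PackageManager;
        import android.net.Uri;

        public final class PluginLauncher {
            // Launch the plugin if installed, otherwise open its market page.
            public static void launchOrInstall(Context ctx, String pluginPackage) {
                PackageManager pm = ctx.getPackageManager();
                Intent launch = pm.getLaunchIntentForPackage(pluginPackage);
                if (launch != null) {
                    ctx.startActivity(launch);
                } else {
                    Intent market = new Intent(Intent.ACTION_VIEW,
                            Uri.parse("market://details?id=" + pluginPackage));
                    ctx.startActivity(market);
                }
            }
        }

        // Usage from the main menu activity (package name is hypothetical):
        // PluginLauncher.launchOrInstall(this, "com.example.mygame.plugin.puzzle");

    Sharing scores and preferences between separately installed packages would then typically go through a ContentProvider or broadcasts rather than invoking the other app's classes directly.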

    Read the article

  • Designing Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an
                // in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    // Upload using some wrapper for an ORM:
                    someInterface.Upload(meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    I'm sure you all know of games like Dwarf Fortress - massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this at a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo-infinite, I think, would be the best term. The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's a kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading type stuff, and have one bit of noise generate a region. But this would result in just huge amounts of islands. On the other extreme, I don't think I can really generate a supermassive sheet of Perlin noise - and it would just be one big island, I think. I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map is really nice looking, and you could replace the ASCII with tiles and get something very nice looking. Anyone have any ideas? Thanks. :D
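
    The usual trick is that you never generate one giant sheet or one independent patch per region: you keep a single deterministic noise function over world coordinates and evaluate it lazily for whichever region is currently loaded. Because neighbouring regions sample the same function at adjacent coordinates, they line up seamlessly and you get no per-region islands. A sketch of that idea, assuming a real Perlin/simplex noise2 implementation would be supplied; here it is stubbed with a hash-based placeholder so the example runs.

        public class ChunkGenerator {
            static final int CHUNK_SIZE = 64;
            static final long SEED = 1234567L;   // arbitrary example seed

            // Placeholder standing in for real Perlin/simplex noise: deterministic
            // pseudo-random value in [-1, 1] for integer lattice coordinates.
            static double noise2(long x, long y) {
                long h = SEED ^ (x * 341873128712L) ^ (y * 132897987541L);
                h = (h ^ (h >>> 13)) * 1274126177L;
                return ((h ^ (h >>> 16)) & 0xFFFFFF) / (double) 0x7FFFFF - 1.0;
            }

            // Fractal noise: several octaves of the base function at increasing frequency.
            static double fractal(double x, double y, int octaves) {
                double sum = 0, amp = 1, freq = 1.0 / 64.0, norm = 0;
                for (int i = 0; i < octaves; i++) {
                    sum += amp * noise2((long) Math.floor(x * freq), (long) Math.floor(y * freq));
                    norm += amp;
                    amp *= 0.5;
                    freq *= 2;
                }
                return sum / norm;
            }

            // Heights for one chunk, addressed in *world* coordinates so chunks tile seamlessly.
            static double[][] generateChunk(int chunkX, int chunkY) {
                double[][] height = new double[CHUNK_SIZE][CHUNK_SIZE];
                for (int lx = 0; lx < CHUNK_SIZE; lx++)
                    for (int ly = 0; ly < CHUNK_SIZE; ly++)
                        height[lx][ly] = fractal(chunkX * CHUNK_SIZE + lx,
                                                 chunkY * CHUNK_SIZE + ly, 4);
                return height;
            }

            public static void main(String[] args) {
                // The right edge of chunk (0,0) and the left edge of chunk (1,0) sample
                // adjacent world coordinates, so the terrain continues across the border.
                double[][] a = generateChunk(0, 0), b = generateChunk(1, 0);
                System.out.println(a[CHUNK_SIZE - 1][0] + " next to " + b[0][0]);
            }
        }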

    Read the article

  • Getting the relational table data into XML recursively

    - by Tom
    I have levels of tables (Level1, Level2, Level3, ...). For simplicity, we'll say I have 3 levels. The rows in the higher level tables are parents of lower level table rows. The relationship does not skip levels, however. E.g. Row1Level1 is the parent of Row3Level2, and Row2Level2 is the parent of Row4Level3. A row in Level(n) always has its parent in Level(n-1). Given these tables with data, I need to come up with a recursive function that generates an XML file to represent the relationship and the data. E.g.

        <data>
            <level levelid="1" rowid="1">
                <level levelid="2" rowid="3" />
            </level>
            <level levelid="2" rowid="2">
                <level levelid="3" rowid="4" />
            </level>
        </data>

    I would like help with coming up with pseudo-code for this setup. This is what I have so far:

        XElement GetXMLData(Table table, string identifier, XElement data)
        {
            XElement xmlData = data;
            if (table != null)
            {
                foreach (row in the table)
                {
                    // Get subordinate table.
                    Table subordinateTable = GetSubordinateTable(table);
                    // Get the XML data for the children of the current row.
                    xmlData += GetXMLData(subordinateTable, row.Identifier, xmlData);
                }
            }
            return xmlData;
        }
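
    A sketch of one way to shape the recursion so that each call appends child elements to the element for the current row, rather than accumulating everything into one shared element. It's sketched in Java with an invented Row type and a childrenOf lookup standing in for the real data access; the same recursion shape works with XElement in C# (create the element for the row, Add() the recursive results, return it).

        import java.util.*;

        public class LevelsToXml {
            record Row(int levelId, int rowId, Integer parentRowId) {}

            // Toy data matching the question: Row1Level1 -> Row3Level2, Row2Level2 -> Row4Level3.
            static final List<Row> ROWS = List.of(
                new Row(1, 1, null), new Row(2, 3, 1),
                new Row(2, 2, null), new Row(3, 4, 2));

            static List<Row> childrenOf(Row parent) {
                List<Row> kids = new ArrayList<>();
                for (Row r : ROWS)
                    if (r.parentRowId() != null && r.levelId() == parent.levelId() + 1
                            && r.parentRowId() == parent.rowId())
                        kids.add(r);
                return kids;
            }

            // Emit a <level> element for this row, then recurse into its children.
            static void appendLevel(StringBuilder xml, Row row, String indent) {
                List<Row> kids = childrenOf(row);
                xml.append(indent).append("<level levelid=\"").append(row.levelId())
                   .append("\" rowid=\"").append(row.rowId()).append("\"");
                if (kids.isEmpty()) { xml.append(" />\n"); return; }
                xml.append(">\n");
                for (Row kid : kids) appendLevel(xml, kid, indent + "  ");
                xml.append(indent).append("</level>\n");
            }

            public static void main(String[] args) {
                StringBuilder xml = new StringBuilder("<data>\n");
                for (Row top : ROWS)
                    if (top.parentRowId() == null) appendLevel(xml, top, "  ");
                xml.append("</data>");
                System.out.println(xml);
            }
        }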

    Read the article

  • Bomberman clone, how to do bombs?

    - by hustlerinc
    I'm playing around with a Bomberman clone to learn game development. So far I've done tiles, movement, collision detection, and item pickup. I also have pseudo bomb placing (just graphics and collision, no real functionality). I've made a jsFiddle of the game with the functionality I currently have. The code in the fiddle is very ugly though. Scroll past the map and you'll find how I place bombs. Anyway, what I would like to do is an object that has the general information about bombs, like:

        function Bomb() {
            this.radius = player.bombRadius;
            this.placeBomb = function () {
                if (player.bombs != 0) {
                    // place bomb
                }
            };
            this.explosion = function () {
                // Explosion
            };
        }

    I don't really know how to fit it into the code though. Every time I place a bomb, do I do var bomb = new Bomb(); or do I need to constantly have that in the script to be able to access it? How does the bomb do damage? Is it as simple as stepping through X,Y in all directions until the radius runs out or an object stops it? Can I use something like setTimeout(bomb.explosion, 3000) as a timer? Any help is appreciated, be it a simple explanation of the theory or code examples based on the fiddle. When I tried the object way it broke the code.
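
    A sketch of the usual approach: each placed bomb is its own object with a fuse timer, and the explosion walks outward in the four directions until the blast radius runs out or a solid tile stops it. It's sketched in Java; the same structure maps onto a JS constructor plus setTimeout. The tile codes and the tiny grid are invented for the example.

        import java.util.*;

        public class BombDemo {
            static final int FLOOR = 0, WALL = 1, CRATE = 2;
            static int[][] grid = {              // tiny example map
                { 0, 0, 0, 1, 0 },
                { 0, 1, 2, 1, 0 },
                { 0, 0, 0, 0, 0 },
            };

            static class Bomb {
                final int x, y, radius;
                double fuse = 3.0;               // seconds until it goes off
                Bomb(int x, int y, int radius) { this.x = x; this.y = y; this.radius = radius; }

                // Call once per frame; returns true when the bomb has exploded.
                boolean update(double dt) {
                    fuse -= dt;
                    if (fuse > 0) return false;
                    explode();
                    return true;
                }

                void explode() {
                    blast(x, y);
                    int[][] dirs = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
                    for (int[] d : dirs) {
                        for (int step = 1; step <= radius; step++) {
                            int tx = x + d[0] * step, ty = y + d[1] * step;
                            if (ty < 0 || ty >= grid.length || tx < 0 || tx >= grid[0].length) break;
                            if (grid[ty][tx] == WALL) break;     // solid wall stops the blast
                            blast(tx, ty);
                            if (grid[ty][tx] == CRATE) {         // crates absorb the blast
                                grid[ty][tx] = FLOOR;
                                break;
                            }
                        }
                    }
                }

                void blast(int tx, int ty) {
                    // here: damage anything standing on (tx, ty), spawn a flame sprite, ...
                    System.out.println("blast at " + tx + "," + ty);
                }
            }

            public static void main(String[] args) {
                List<Bomb> bombs = new ArrayList<>(List.of(new Bomb(2, 2, 2)));
                for (int frame = 0; frame < 200; frame++)        // stand-in for the game loop
                    bombs.removeIf(b -> b.update(1.0 / 60.0));
            }
        }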

    Read the article

  • Regulating how much to draw based on how much was drawn last frame.

    - by Mike Howard
    I have a 3D game world on an iPhone (limited graphics speed), and I'm already regulating whether I draw each shape on the screen based on its size and distance from the camera. Something like... if (how_big_it_looks_from_the_camera > constant) then draw. What I want to do now is also take into account how many shapes are being drawn, so that in busier areas of the game world I can draw less than I otherwise would. I tried to do this by dividing how_big_it_looks by the number of shapes that were drawn last frame (well, the square root of this, but I'm simplifying - the problem is the same): if (how_big_it_looks / shapes_drawn > constant2) then draw. But the check happens at the level of objects which represent many drawn shapes, and if an object containing many shapes is switched on, it increases shapes_drawn a lot and switches itself back off the next frame. It flickers on and off. I tried keeping a kind of weighted average of previous values, by doing something like shapes_drawn_recently = 0.9 * shapes_drawn_recently + 0.1 * shapes_just_drawn each frame, but of course it only slows the flickering down because of the nature of the feedback loop. Is there a good way of solving this? My project is in Objective-C, but a general algorithm or pseudo-code is good too. Thanks.
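
    One standard way to stop this oscillation is hysteresis: use two thresholds instead of one, so an object has to look noticeably better to switch on than it needs to switch off, and the decision depends on the object's current state. A small language-agnostic sketch (in Java); the threshold values are arbitrary examples.

        class DrawRegulator {
            // Hysteresis band: turn on only above HIGH, turn off only below LOW.
            static final double LOW = 0.8, HIGH = 1.2;

            double shapesDrawnRecently = 1;     // smoothed load from previous frames

            // wasDrawnLastFrame is the object's current state; the score is
            // how_big_it_looks / sqrt(shapes_drawn_recently) from the question.
            boolean shouldDraw(boolean wasDrawnLastFrame, double howBigItLooks) {
                double score = howBigItLooks / Math.sqrt(shapesDrawnRecently);
                if (wasDrawnLastFrame) return score > LOW;   // keep drawing until clearly too costly
                else                   return score > HIGH;  // start drawing only when clearly worth it
            }

            // Call once per frame after drawing.
            void endFrame(int shapesJustDrawn) {
                shapesDrawnRecently = 0.9 * shapesDrawnRecently + 0.1 * shapesJustDrawn;
            }
        }

    As long as the gap between LOW and HIGH is wider than the jump a single object causes in shapes_drawn_recently, that object can no longer flip its own decision every frame.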

    Read the article

  • Rendering a big game universe - bitmaps or vector graphics?

    - by user1641923
    I am new to Android development, though I have a lot of experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3D Studio Max, Flash, etc). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any 3rd-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random amount of single asteroids/comets which the player can interact with, and a non-interactive background. I am looking for the least complicated approach to creating such a universe. My current ideas are: Simply create bitmaps with space scenery backgrounds that can be tiled seamlessly, construct my 2D universe from these tiles, then place interactive objects (planets, other spaceships) on it. Or: use vector graphics - I would have a solid color background, some random background objects, and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated in Android. Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory during the whole game session? I am interested in technical details regarding each of the ideas, and a suggestion as to which I should go with.

    Read the article

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I am having a slight issue, and need help with the decision-making process when it comes to setting up my cluster configuration, consisting of a line of Ubuntu Servers (12.04). We currently have a primary node, which resides in a US datacenter; we are going to be using this for all serious bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster, using Virtualmin's cluster modules. Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:

        Intel P4 2.4 GHz, 1 GB RAM, 110 GB SATA, Ubuntu 12.04*
        AMD 1.3 GHz, 512 MB RAM, 20 GB IDE
        P3 Xeon 800 MHz (dual physical processors), 1 GB RAM, 3 x 25 GB RAID configuration (one disk in use for the host operating system)

    The first machine is currently IN USE and is serving virtual hosts off a sub-domain. My question is this: how can I integrate the secondary node (which will be the primary node, so to speak, in this smaller configuration), which is currently in use, into a cluster configuration with the other two servers for sharing resources, redundancy (HA?), and NFS with the two RAID disks - without having to FORMAT the secondary node, start fresh moving all my services onto a DRBD network drive or something similar, and then restore all active Virtualmin virtual hosts? The idea is that I want minimal downtime for people currently being served from server2.mywebsite.com, and from what I understand, all services need to be on an NFS share so that they can be mounted on demand and accessed from the other machine taking over (i.e. a Heartbeat + DRBD config). But my issue is that I already have all these services installed in their default directory structure: how can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime, without breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks!

    Read the article

  • Is it possible to make CSS-added text searchable by a browser?

    - by Andrew Stacey
    I run a website that uses CSS pseudo-elements to insert text here and there. One of them inserts the value of a CSS counter (and it would require considerable re-engineering of the system to do this without CSS text injection). The specific CSS rule is:

        .num_defn .theorem_label:after {
            content: " " counter(definition, decimal);
            counter-increment: definition;
        }

    and this converts "Definition" to "Definition 1" (say). However, the injected text is not searchable by the browser. It doesn't see the 1: if I search for "Definition 1" then it doesn't find it, and if I search for "Definition. Whatever the definition text was" then the browser happily highlights the line except for the inserted 1. So if you imagine the bold text as the highlighting, it would look like: Definition 1 . Whatever the definition text was. This is not ideal! People like to refer to definitions by their number and to say "Look at Definition 1 on page XYZ" (and in contexts where hyperlinks are not available - strange, I know, but it does happen). Thus: Is there any way that I, on the server end, can designate the injected text as "searchable"? If not, is there a simple way at the browser end that this can be enabled?

    Read the article

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service whose responsibility is to generate Excel reports:

        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);
                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);
                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);
                return reportPath;
            }
        }

    This is pseudo code for the service I want to design, where: fileFormatFilePath: path to a configuration file where I'll keep the format of my Excel file (headers, column widths, number of columns, etc.). data: the actual records returned from the database; this data can't be used directly because I might need to make further calculations on it before inserting it into the Excel file. ReportFormat: value object to hold the report format, with methods like getHeaders(), getColumnWidth(), etc. ExcelDataFormatterService: a service to hold any logic that needs to be applied to the data returned from the database before inserting it into the file. FormattedData: value object that represents the formatted data to be inserted. ExcelFileService: a wrapper on top of the 3rd-party library that generates the Excel file. Now, how do you determine whether a service is an infrastructure or a domain service? I have the following 3 services here: ExcelReportService, ExcelDataFormatterService and ExcelFileService.

    Read the article

  • Which is a better practice - helper methods as instance or static?

    - by Ilian Pinzon
    This question is subjective, but I was just curious how most programmers approach this. The sample below is in pseudo-C#, but this should apply to Java, C++, and other OOP languages as well. Anyway, when writing helper methods in my classes, I tend to declare them as static and just pass the fields in if the helper method needs them. For example, given the code below, I prefer to use Method Call #2.

        class Foo
        {
            Bar _bar;

            public void DoSomethingWithBar()
            {
                // Method Call #1.
                DoSomethingWithBarImpl();
                // Method Call #2.
                DoSomethingWithBarImpl(_bar);
            }

            private void DoSomethingWithBarImpl()
            {
                _bar.DoSomething();
            }

            private static void DoSomethingWithBarImpl(Bar bar)
            {
                bar.DoSomething();
            }
        }

    My reason for doing this is that it makes it clear (to my eyes at least) which objects the helper method can possibly have side-effects on - even without reading its implementation. I find that I can quickly grok methods that use this practice, which helps me in debugging. Which do you prefer to do in your own code, and what are your reasons for doing so?

    Read the article

  • Alternatives to Pessimistic Locking in Cluster Applications

    - by amphibient
    I am researching alternatives to database-level pessimistic locking to achieve transaction isolation in a cluster of Java applications going against the same database. Synchronizing concurrent access in the application tier is clearly not a solution in the present configuration because the same database transaction can be invoked from multiple JVMs concurrently. Currently, we are subject to occasional race conditions which, due to the optimistic locking we have in place via Hibernate, cause a StaleObjectStateException and data loss. I have a moderately large transaction within the scope of my refactoring project. Let's describe it as updating one top-level table row and then making various related inserts and/or updates to several of its child entities. I would like to ensure exclusive access to the top-level table row and all of the children to be affected, but I would like to stay away from pessimistic locking at the database level, mostly for performance reasons. We use Hibernate for ORM. Does it make sense to stand up a single (perhaps synchronous) message queue application into which this method could be moved, to ensure synchronized access, as opposed to each cluster node using its own - which is a clear race condition hazard? I am mentioning this approach even though I am not confident in it, because both the top-level table row and its children could also be updated from other system calls, not just the mentioned transaction. So I am seeking to design a solution where the top-level table row and its children will all somehow be pseudo-locked (exclusive transaction isolation), but at the application and not the database level. I am open to ideas and suggestions; I understand this is not a very cut-and-dried challenge.

    Read the article

  • Procedural content (settlement) generation

    - by instancedName
    I have, let's say, something like a homework assignment to do. Roughly speaking, I need to write an algorithm (pseudo code is not necessary, an in-depth description is enough) for a procedure that would generate settlements, an environment and people to populate them with, as part of some larger world generation procedure. The genre of game is not specified; it could be any genre (RPG, strategy, colony simulation, etc.) where interacting with a large and extensive world is central to the game. The procedure should be called once per settlement. At the time of calling, the world generation procedure makes geography, culture and history input available. Output should be a map of the village and its immediate area, and various potential additional information like myths, history, demographic facts, etc. A bonus would be quests and similar stuff, but that's not really my focus at the moment. I will leave the quality of the output for later, when I actually dig a little deeper into this topic. I am free to change parameters as long as I have a strong explanation for doing so. The setting of the game is undetermined, so I am free to use anything I like. Ok, so my actual question is: can anyone who has some experience in this field of game design recommend some good literature, or point me in the direction where I should look/read/study? I'm a somewhat experienced game programmer, but I've never been into game design till now, so any help will be great. I want to do this assignment as well as I can. As for the deadline, it's not strictly set, but let's say I don't want it to take longer than a few weeks, one month at worst.

    Read the article

  • Is an event loop just a for/while loop with optimized polling?

    - by Alan
    I'm trying to understand what an event loop is. Often the explanation is that in the event loop, you do something until you're notified that an event has occurred. You then handle the event and continue doing what you were doing before. To map the above definition to an example: I have a server which 'listens' in an event loop, and when a socket connection is detected, the data from it gets read and displayed, after which the server goes back to the listening it did before. However, this event happening and us getting notified 'just like that' are too much for me to handle. You can say: "It's not 'just like that', you have to register an event listener". But what's an event listener but a function which for some reason isn't returning? Is it in its own loop, waiting to be notified when an event happens? Should the event listener also register an event listener? Where does it end? Events are a nice abstraction to work with, but just an abstraction. I believe that in the end, polling is unavoidable. Perhaps we are not doing it in our code, but the lower levels (the programming language implementation or the OS) are doing it for us. It basically comes down to the following pseudo code, which is running somewhere low enough that it doesn't result in busy waiting:

        while (true):
            do stuff
            check if event has happened (poll)
            do other stuff

    This is my understanding of the whole idea, and I would like to hear if this is correct. I am open to accepting that the whole idea is fundamentally wrong, in which case I would like the correct explanation. Best regards
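
    For contrast, here is a minimal event loop sketch in Java where there is a loop but no polling: the call to take() parks the thread until another thread delivers an event, and the kernel keeps the thread off the run queue until then, so nothing spins. The Event type and the handler are invented for the example.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class MiniEventLoop {
            record Event(String type, String payload) {}

            private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();

            // Called by producers (I/O threads, timers, ...).
            public void post(Event e) { queue.offer(e); }

            public void run() throws InterruptedException {
                while (true) {
                    Event e = queue.take();   // blocks; no CPU burned while the queue is empty
                    if (e.type().equals("quit")) return;
                    System.out.println("handling " + e.type() + ": " + e.payload());
                }
            }

            public static void main(String[] args) throws Exception {
                MiniEventLoop loop = new MiniEventLoop();
                new Thread(() -> {                      // simulated event source
                    loop.post(new Event("socket", "hello"));
                    loop.post(new Event("quit", ""));
                }).start();
                loop.run();
            }
        }

    select()/epoll-style network loops have the same shape: the "check if event has happened" step is a blocking system call that the OS wakes up on an interrupt, not a poll-and-spin.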

    Read the article

  • Help parsing long (3.5mil lines) text file, line by line and storing data, need a strategy

    - by Jarrod
    This is a question about solving a particular problem I am struggling with. I am parsing a long list of text data, line by line, for a business app in PHP (a cron script on the CLI). The file follows the format:

        HD: Some text here {text here too}
        DC: A description here
        DC: the description continues here
        DC: and it ends here.
        DT: 2012-08-01
        HD: Next header here {supplemental text}
        ... this repeats over and over for a few hundred megs

    I have to read each line, parse out the HD: line and grab the text on this line. I then compare this text against data stored in a database. When a match is found, I want to record the DC: lines that follow the matched HD:. Pseudo code:

        while (the_file_pointer_isnt_end_of_file) {
            line = getCurrentLineFromFile
            title = parseTitleFrom(line)
            matched = searchForMatchInDB(line)
            if (matched) {
                recordTheDCLines // <- Best way to do this?
            }
        }

    My problem is that because I am reading line by line, what is the best way to trigger the script to start saving DC lines, and then, when they are finished, save them to the database? I have a vague idea, but have yet to properly implement it. I would love to hear the community's ideas/suggestions! Thank you.
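
    One common shape for this is a tiny state machine: remember whether the last HD: matched, buffer DC: lines while it did, and flush the buffer whenever the next HD: (or the end of the file) is reached. A sketch of that structure in Java (the same flag-and-buffer idea transfers directly to a PHP while/fgets loop); the file name is a placeholder, and matchesDatabase/saveDescription are stand-ins for the real DB calls.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.ArrayList;
        import java.util.List;

        public class HdDcParser {
            public static void main(String[] args) throws Exception {
                try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
                    String currentTitle = null;           // the HD: we are collecting for, if any
                    List<String> buffer = new ArrayList<>();
                    String line;
                    while ((line = in.readLine()) != null) {
                        if (line.startsWith("HD:")) {
                            flush(currentTitle, buffer);            // finish the previous block
                            String title = line.substring(3).trim();
                            currentTitle = matchesDatabase(title) ? title : null;
                        } else if (line.startsWith("DC:") && currentTitle != null) {
                            buffer.add(line.substring(3).trim());   // collect only inside a match
                        }
                        // DT: and anything else is ignored here
                    }
                    flush(currentTitle, buffer);                    // don't forget the last block
                }
            }

            static void flush(String title, List<String> buffer) {
                if (title != null && !buffer.isEmpty())
                    saveDescription(title, String.join("\n", buffer));
                buffer.clear();
            }

            // Stand-ins for the real database lookup and insert.
            static boolean matchesDatabase(String title) { return true; }
            static void saveDescription(String title, String description) {
                System.out.println(title + " -> " + description);
            }
        }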

    Read the article

  • Relation between developers and clients

    - by guiman
    Hi everyone, I've been facing a situation at work and I would like to share it with you and ask: did you have to do this too? Should a developer be in direct contact with the client? Or should there be an "adapter" guy who translates client needs into pseudo-formal requirements understandable to us? I'm currently working in a small company that is taking care of implementing lots of systems, most of them for government institutions, which generally means taking software developed 20 years ago and refurbishing it to fit up-to-date needs. The clients are generally very used to the old systems and tend to discourage change (they are in their 50s or 60s, give or take, so not technology-friendly in general). As you can imagine, the dev team in most cases ends up taking care of the relationship with clients, generating the documentation needed in these cases (usually use cases), and attending weekly meetings with clients to review progress. As for experience, this is a gold mine for me, because it gives a nice perspective on all the aspects of software development, but some problems also arise because, if developers come from Mars, then clients are from Venus. So there is a real gap in vocabulary/experience/capability-to-interpret-needs that generates noise in the communication, sometimes affecting the final product.

    Read the article

  • Trouble with SAT style vector projection in C#/XNA

    - by ssb
    Simply put, I'm having a hard time working out how to work with XNA's Vector2 types while maintaining spatial considerations. I'm working with the separating axis theorem and trying to project vectors onto an arbitrary axis to check if those projections overlap, but between the severe lack of XNA-specific help online and pseudo code everywhere that omits key parts of the algorithm, googling has been of little help. I'm aware of HOW to project a vector, but the way that I know of doing it involves the two vectors starting from the same point. Particularly here: http://www.metanetsoftware.com/technique/tutorialA.html So let's say I have a simple rectangle, and I store each of its corners in a list of Vector2s. How would I go about projecting that onto an arbitrary axis? The crux of my problem is that taking the dot product of, say, a Vector2 of (1, 0) and a Vector2 of (50, 50) won't get me the dot product I'm looking for... or will it? Because that (50, 50) won't be the vector of the polygon's vertex, but whatever XNA calculates it from. It's getting the calculation from the right starting point that's throwing me off. I'm sorry if this is unclear, but my brain is fried from trying to think about this. I need a better understanding of how XNA treats Vector2s as actual vectors and not just as random points.
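
    The detail that usually unsticks this: a Vector2 used as a position is implicitly "position minus the world origin", so dotting the vertex itself with the axis already gives a usable scalar - choosing a different reference point only shifts every projection by the same constant, which cancels out of an interval-overlap test. A small sketch of projecting a polygon's corners onto a normalized axis and testing overlap; it uses a tiny home-made Vec2 so it runs standalone, and each call has a direct XNA Vector2 equivalent (Vector2.Dot, Vector2.Normalize). The rectangles and the single test axis are example data.

        public class SatProjection {
            record Vec2(double x, double y) {
                static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
                Vec2 normalized() {
                    double len = Math.sqrt(x * x + y * y);
                    return new Vec2(x / len, y / len);
                }
            }

            record Interval(double min, double max) {
                boolean overlaps(Interval o) { return max >= o.min && o.max >= min; }
            }

            // Project every corner onto the axis and keep the min/max scalar.
            static Interval project(Vec2[] corners, Vec2 axis) {
                double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
                for (Vec2 c : corners) {
                    double d = Vec2.dot(c, axis);   // positions work directly; no shared start point needed
                    min = Math.min(min, d);
                    max = Math.max(max, d);
                }
                return new Interval(min, max);
            }

            public static void main(String[] args) {
                Vec2[] rectA = { new Vec2(0, 0), new Vec2(50, 0), new Vec2(50, 50), new Vec2(0, 50) };
                Vec2[] rectB = { new Vec2(40, 10), new Vec2(90, 10), new Vec2(90, 60), new Vec2(40, 60) };
                Vec2 axis = new Vec2(1, 0).normalized();   // in full SAT you loop over every edge normal
                Interval a = project(rectA, axis), b = project(rectB, axis);
                System.out.println("overlap on this axis: " + a.overlaps(b));  // true here
            }
        }

    For the full test, repeat the projection for the normal of every edge of both convex polygons; the shapes intersect only if the intervals overlap on every axis.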

    Read the article

  • How do you keep SOA DRY?

    - by TaylorOtwell
    In our organization, we've shifted to a more "service oriented architecture". To give an example, let's assume we need to retrieve a "Quote" object. This quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information. In other words, a Quote object is made up of many other objects. So, it seems like it would make sense to make a "Quote Retrieval Service". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code): Function GetQuote(String ID) Returns Quote. So far, so good. Now, when this service is consumed, to keep things "de-coupled", we are creating essentially a duplicate of the Quote object and mapping from the QuoteService version of the Quote into the consumer's version of the Quote. In many cases, these classes will have the exact same properties. So, if the Quote service is consumed by 5 other applications, we would have 6 definitions of what a "Quote" is. One for each consumer, and one for the service. This feels wrong. I thought code was supposed to be DRY, but it seems like our method of SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a "necessary evil" of SOA?

    Read the article

  • Creating a 2D Line Branch

    - by Danran
    I'm looking into creating a 2D line branch, something for a "lightning effect". I did ask a question before on creating a "lightning effect" (mainly referring to the process of the glow & after-effects the lightning has, and whether it was a good method to use or not): Methods of Creating a "Lightning" effect in 2D. However, I never did get around to getting it working. So I've been trying today to get a second attempt going, but I'm getting nowhere :/. So, to be clear on what I'm trying to do: in this article, http://drilian.com/2009/02/25/lightning-bolts/ , I'm trying to create the line segments seen in the images on the site. I'm confused mainly by this line in the pseudo code:

        // Offset the midpoint by a random amount along the normal.
        midPoint += Perpendicular(Normalize(endPoint-startPoint))*RandomFloat(-offsetAmount,offsetAmount);

    If someone could explain this to me I would be really grateful :).
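
    Reading that line piece by piece: endPoint-startPoint is the direction of the segment, Normalize makes it unit length, Perpendicular rotates it 90 degrees (for a 2D vector (x, y) that is (-y, x)), and multiplying by a random value in [-offsetAmount, offsetAmount] pushes the midpoint a random distance to one side or the other of the original line. Applied recursively, with the offset shrinking each level, this produces the jagged bolt. A sketch of that midpoint-displacement step in Java, with small vector helpers mirroring the article's Perpendicular/Normalize/RandomFloat; the start point, end point, offset and depth are example values, and the article's own code differs in detail.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        public class LightningBolt {
            record Vec2(double x, double y) {
                Vec2 add(Vec2 o)     { return new Vec2(x + o.x, y + o.y); }
                Vec2 sub(Vec2 o)     { return new Vec2(x - o.x, y - o.y); }
                Vec2 scale(double s) { return new Vec2(x * s, y * s); }
                Vec2 normalized()    { double l = Math.sqrt(x * x + y * y); return new Vec2(x / l, y / l); }
                Vec2 perpendicular() { return new Vec2(-y, x); }   // 90-degree rotation
            }

            static final Random RNG = new Random();

            // Recursively split [start, end], nudging each midpoint sideways along the normal.
            static void buildBolt(Vec2 start, Vec2 end, double offset, int depth, List<Vec2> out) {
                if (depth == 0) { out.add(end); return; }
                Vec2 mid = start.add(end).scale(0.5);
                Vec2 normal = end.sub(start).normalized().perpendicular();
                double shift = (RNG.nextDouble() * 2 - 1) * offset;   // RandomFloat(-offset, offset)
                mid = mid.add(normal.scale(shift));
                buildBolt(start, mid, offset / 2, depth - 1, out);    // halve the jitter each level
                buildBolt(mid, end, offset / 2, depth - 1, out);
            }

            public static void main(String[] args) {
                List<Vec2> points = new ArrayList<>();
                Vec2 start = new Vec2(0, 0), end = new Vec2(200, 0);
                points.add(start);
                buildBolt(start, end, 40, 5, points);
                points.forEach(p -> System.out.printf("(%.1f, %.1f)%n", p.x(), p.y()));
            }
        }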

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code: var x = DoSomethingWith(y); How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it only dependent on y? Will it change the global or local state? How does closure affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API, and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: do there exist any languages today that make symbolic distinctions between these types of scenarios and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional, and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)

    Read the article
