Search Results

Search found 12001 results on 481 pages for 'naked objects'.


  • What to Return with Async CRUD methods C#

    - by RualStorge
    While there is a similar question focused on Java, I've been debating how we should use Task objects here. What's the best way to handle return values on CRUD methods (and similar)? Common returns we've seen over the years are:

    - Void (no return unless there is an exception)
    - Boolean (true on success, false on failure, exception on unhandled failure)
    - Int or GUID (the newly created object's Id; 0 or null on failure, exception on unhandled failure)
    - The updated object (exception on failure)
    - Result object (an object that holds the manipulated object's Id, a Boolean or status field indicating success or failure, exception information if there was one, etc.)

    The concern comes into play as we've started moving over to C# 5's async functionality, which raised the question of how we should handle CRUD returns at large scale. In our systems we have a little of everything in regards to what we return, and we want to standardize these returns... Now the question is: what is the recommended standard? Is there even a recommended standard yet? (I realize we need to decide our own standard, but typically we do so by looking at best practices, seeing whether they make sense for us, and going from there; here we're not finding much to work with.)
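
    For illustration, here is a minimal sketch of the "result object" option combined with Task<T>. All names (Widget, CrudResult, SaveToDatabaseAsync) are hypothetical, and this is one possible shape rather than a recommended standard:

        using System;
        using System.Threading.Tasks;

        // Hypothetical types for illustration only; not a recommended standard.
        public class Widget { public string Name { get; set; } }

        public class CrudResult<TKey>
        {
            public bool Succeeded { get; set; }
            public TKey Id { get; set; }          // Id of the created row, default on failure
            public Exception Error { get; set; }  // populated for handled failures
        }

        public class WidgetRepository
        {
            // Async create that returns a result object rather than void/bool/int.
            public async Task<CrudResult<Guid>> CreateAsync(Widget widget)
            {
                try
                {
                    Guid newId = await SaveToDatabaseAsync(widget); // stand-in for real data access
                    return new CrudResult<Guid> { Succeeded = true, Id = newId };
                }
                catch (Exception ex) // handled failure surfaces in the result, not as a throw
                {
                    return new CrudResult<Guid> { Succeeded = false, Error = ex };
                }
            }

            private Task<Guid> SaveToDatabaseAsync(Widget widget)
            {
                // Placeholder so the sketch compiles; a real repository would hit the database here.
                return Task.FromResult(Guid.NewGuid());
            }
        }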

    Read the article

  • Triggering Data Changes in N-Tier

    - by Ryan Kinal
    I've been studying n-tier architectures as of late, particularly in VB.NET with Entity Framework and/or LINQ to SQL. I understand the basic concepts, but have been wondering about best practices in regard to triggering CRUD-type operations from user input/action. So, the architecture looks something like the following: [presentation layer] - [business layer] - [data layer] - (database). Getting information from the database into the presentation layer is simple and abstracted: it's just a matter of instantiating a new object from the business layer, which in turn uses the data layer to get at the correct information. However, saving (updating and inserting) and deleting seem to require particular APIs on the relevant business objects. I have to assume this is standard practice, unless a business object is supposed to save itself on various operations (which seems inefficient) or on disposal (which seems like it just wouldn't work, or would be unwieldy and unreliable). Should my "savable" business objects all implement a particular "ISavable" or "IDatabaseObject" interface? Is this a recognized (anti-)pattern? Are there other, better patterns I should be using that I'm just unaware of? The TL;DR question, I suppose, is: how does the presentation layer trigger database changes?
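
    For reference, the explicit save API described above often looks something like the following minimal C# sketch (the question is VB.NET, but the shape carries over); ISavable, Customer and CustomerDataAccess are illustrative names, not a prescribed design:

        using System;

        // Hypothetical interface of the kind the question speculates about.
        public interface ISavable
        {
            void Save();
        }

        public class Customer : ISavable   // business-layer object
        {
            public int Id { get; private set; }
            public string Name { get; set; }

            // The presentation layer calls this after the user clicks "Save";
            // the business object forwards the work to the data layer.
            public void Save()
            {
                CustomerDataAccess.Upsert(this);
            }
        }

        public static class CustomerDataAccess   // data layer (EF / LINQ to SQL in practice)
        {
            public static void Upsert(Customer customer)
            {
                // Placeholder: a real implementation would attach the entity to a
                // DbContext / DataContext and call SaveChanges() / SubmitChanges().
                Console.WriteLine($"Persisting customer '{customer.Name}'");
            }
        }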

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources - specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, any texture data supplied has been copied elsewhere, likely to VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but that reduces the number of resources available to the shader and can complicate the code somewhat more than would otherwise be needed.
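
    Independent of what the D3D11 API itself allows, one way to avoid keeping A.dds and B.dds resident twice is to cache loaded textures by file name and have each material reference the shared entries. A language-agnostic sketch of the cache in C# follows; ITexture, Material and the loader delegate are hypothetical, and nothing here claims that two ID3D11Texture2D array objects can alias the same memory:

        using System;
        using System.Collections.Generic;

        // Hypothetical abstraction over a GPU texture; the D3D11 specifics are out of scope here.
        public interface ITexture : IDisposable { }

        public class TextureCache
        {
            private readonly Dictionary<string, ITexture> _byPath =
                new Dictionary<string, ITexture>(StringComparer.OrdinalIgnoreCase);
            private readonly Func<string, ITexture> _loadFromFile;

            public TextureCache(Func<string, ITexture> loadFromFile) => _loadFromFile = loadFromFile;

            // A.dds requested by two materials returns the same ITexture instance,
            // so the pixel data is only resident once.
            public ITexture Get(string path)
            {
                if (!_byPath.TryGetValue(path, out var texture))
                {
                    texture = _loadFromFile(path);
                    _byPath[path] = texture;
                }
                return texture;
            }
        }

        public class Material
        {
            public ITexture[] Layers;   // e.g. { A, B, C } or { A, B, D }, sharing A and B
        }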

    Read the article

  • Are there design patterns or generalised approaches for particle simulations?

    - by romeovs
    I'm working on a project (for college) in C++. The goal is to write a program that can more or less simulate a beam of particles flying through the LHC synchrotron. Not wanting to rush into things, my team and I are thinking about how to implement this, and I was wondering if there are general design patterns that are used to solve this kind of problem. The general approach we came up with so far is the following:

    - there is a World that holds all objects
    - you can add objects to this world, such as Particle, Dipole and Quadrupole
    - time is cut up into discrete steps, and at each point in time, for each Particle, the magnetic and electric forces that each object in the World generates are calculated and summed up (luckily electromagnetism is linear)
    - each Particle moves accordingly (using a simple estimation approach to solve the differential equations of motion)
    - save the Particle positions
    - repeat

    This seems a good approach but, for instance, it is hard to take into account symmetries that might be present (such as the magnetic field of each Quadrupole), and it is thus suboptimal. To take into account symmetries such as that of the Quadrupole field, it would be much easier to (also) make space discrete and store some form of the Quadrupole field somewhere. (Since 2532 or so Quadrupoles are stored, this should lead to a massive gain in performance, not having to recalculate each Quadrupole field.) So, are there any design patterns? Is the World approach feasible, or is it old-fashioned, bad programming? What about symmetry, and how is that generally taken into account?
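
    As a concreteness check on the World approach, here is a minimal sketch of the discrete-time loop in C# (the project itself is C++, but the structure is the same). IFieldSource and ForceOn are made-up names, and the integration is plain explicit Euler:

        using System.Collections.Generic;
        using System.Numerics;

        // Minimal sketch of the World approach; force models are placeholders, not real beam physics.
        public interface IFieldSource
        {
            Vector3 ForceOn(Particle p);   // electromagnetic force exerted on a particle
        }

        public class Particle
        {
            public Vector3 Position;
            public Vector3 Velocity;
            public float Mass = 1f;
        }

        public class World
        {
            public List<Particle> Particles = new List<Particle>();
            public List<IFieldSource> Elements = new List<IFieldSource>(); // dipoles, quadrupoles, ...

            // One discrete time step: sum forces (EM is linear), then do a simple
            // explicit Euler integration of the equations of motion.
            public void Step(float dt)
            {
                foreach (var p in Particles)
                {
                    var totalForce = Vector3.Zero;
                    foreach (var element in Elements)
                        totalForce += element.ForceOn(p);

                    p.Velocity += (totalForce / p.Mass) * dt;
                    p.Position += p.Velocity * dt;
                }
            }
        }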

    Read the article

  • How do I simplify a 2D game grid for level management while keeping its by-pixel features?

    - by Eric Thoma
    (I cross-posted this from Stack Overflow, as this seems to be a more appropriate forum. I've looked around a little here and I did not find an answer, so I hope this is not a recurring question.) This is a question dealing with 2D world design. I am playing around with creating a 2D bird's-eye-view shooter game, and I am looking to make the game sleek and advanced. I hope to be able to write physics so projectiles have momentum and knock-down properties. I am immediately running into the problem of world design. I need a way to have level files that store everything there is about a game. This is easiest by just having a grid of objects. But there are thin walls and other objects that don't seem to fit into a traditional cell of a grid. I want to be able to fit all these together so I can streamline level design, so I don't have to put in the exact pixel-specific start and end of a wall. There doesn't seem to be an obvious translation from level file to game without forcing myself into a Pac-Man-like scenario, meaning a scenario where the game feels boxy and discrete. There is a contrast between the smoothly (relatively) moving characters and finite jumps in a grid.

    I would appreciate an answer that describes implementation options or points me to resources that do. I would also appreciate references to sites that teach game design. The language I am using is Java (although I would love to use C or C++, but I can never find convenient resources in those languages). Thank you for any answers. Please leave any questions in the space below; I will be able to answer them later tonight (28th Nov).
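
    One common way to keep level files simple without making the game feel boxy, sketched below in C# for brevity (the project is Java, and all names are illustrative), is to store a coarse tile grid for the bulk of the level plus a separate list of pixel-positioned entities such as thin walls:

        using System.Collections.Generic;

        public enum Tile { Empty, Floor, SolidWall }

        public class WallSegment          // a thin wall that doesn't fit a single cell
        {
            public float X1, Y1, X2, Y2;  // pixel-space endpoints
        }

        public class Level
        {
            public const int TileSize = 32;                            // pixels per grid cell
            public Tile[,] Tiles;                                      // coarse, cell-aligned data
            public List<WallSegment> Walls = new List<WallSegment>();  // pixel-precise data

            // Movement/physics query in pixel space: the grid gives a fast broad test,
            // the wall list supplies the exact geometry for thin obstacles.
            public bool IsSolidAt(float px, float py)
            {
                int cx = (int)(px / TileSize);
                int cy = (int)(py / TileSize);
                if (cx < 0 || cy < 0 || cx >= Tiles.GetLength(0) || cy >= Tiles.GetLength(1))
                    return true;
                return Tiles[cx, cy] == Tile.SolidWall;
            }
        }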

    Read the article

  • What patterns book for iOS development contains this specific information? [closed]

    - by Brett Ryan
    I've read several books on iOS development and Objective-C; however, a lot of them teach how to work with interfaces and all keep the model inside the view controller, i.e. a UITableViewController-based view will simply have an NSArray as its model. I'm interested in what the best practices are for designing the structure of an application. Specifically, I'm interested in best practices for the following:

    - How to separate a model from the view controller. I think I know how to do this by simply replacing the NSArray-style example with a specific model object; however, what I do not know how to do is alert the view when the model changes. For example, in .NET I would solve this by conforming to INotifyPropertyChanged and data binding, and similarly in Java I would use PropertyChangeListener.
    - How to create a service model for my domain objects. For example, I want to learn the best way to create a service for a hypothetical Widget object to manage an internal DB, and also services for communicating with remote endpoints. I need to learn the best ways to do this so that interface components can subscribe to events such as widgetUpdated. These services should be singleton classes and somehow dependency-injected into model/controller objects.

    Books I've read so far are:

    - Programming in Objective-C (4th Edition)
    - Beginning iOS 5 Development: Exploring the iOS SDK
    - The iOS 5 Developer's Cookbook: Expanded Electronic Edition: Essentials and Advanced Recipes for iOS Programmers
    - Learn Objective-C on the Mac: For OS X and iOS

    I've also purchased the following updated books but not yet read them:

    - The Core iOS 6 Developer's Cookbook (4th Edition)
    - Programming in Objective-C (5th Edition)

    I come from a Java and C# background with 15 years' experience, and I understand that many of the ways I would do things in those languages may not fit the Objective-C way of developing applications. Could someone point me to a book on this topic that covers this specific subject matter?

    Read the article

  • BoundingSpheres move when they should not

    - by NDraskovic
    I have an XNA 4.0 project in which I load a file that contains the type and coordinates of items I need to draw to the screen. I also need to check whether one particular type (the only movable one) is passing in front of or through other items. This is the code I use to load the configuration:

        if (ks.IsKeyDown(Microsoft.Xna.Framework.Input.Keys.L))
        {
            this.GraphicsDevice.Clear(Color.CornflowerBlue);
            Otvaranje.ShowDialog();
            try
            {
                using (StreamReader sr = new StreamReader(Otvaranje.FileName))
                {
                    String linija;
                    while ((linija = sr.ReadLine()) != null)
                    {
                        red = linija.Split(',');
                        model = red[0];
                        x = red[1];
                        y = red[2];
                        z = red[3];
                        elementi.Add(Convert.ToInt32(model));
                        podatci.Add(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)));
                        sfere.Add(new BoundingSphere(new Vector3(Convert.ToSingle(x), Convert.ToSingle(y), Convert.ToSingle(z)), 1f));
                    }
                }
            }
            catch (Exception ex)
            {
                Window.Title = ex.ToString();
            }
        }

    "Otvaranje" is an OpenFileDialog object, "elementi" is a List<int> (it determines the type of item that will be drawn), "podatci" is a List<Vector3> (it determines the location where the items will be drawn) and "sfere" is a List<BoundingSphere>. Now, I have solved the picking algorithm (checking for ray and bounding sphere intersection) and it works fine, but the collision detection does not. I noticed, while using picking, that the BoundingSpheres move even though the objects they correspond to do not. The movable object is drawn with the world1 matrix, and the static objects are drawn with the world2 matrix (world1 and world2 have the same values; I just separated them so that the static elements would not move when the movable one does). The problem is that when I move the item I want, all BoundingSpheres move accordingly. How can I move only the BoundingSphere that corresponds to that particular item, and leave the rest where they are?
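
    A hedged sketch of one way to fix this, reusing the question's lists (sfere, podatci): treat the sphere built from the file as the "rest" value and transform only the movable item's sphere by its own world matrix each frame. Here movableIndex and world1 stand in for whatever identifies and moves that item:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        static class SphereSync
        {
            // Rebuild the sphere list so that only the movable item's sphere follows
            // its world transform; static items keep the positions read from the file.
            public static void Update(List<BoundingSphere> sfere, List<Vector3> podatci,
                                      int movableIndex, Matrix world1)
            {
                for (int i = 0; i < sfere.Count; i++)
                {
                    var restSphere = new BoundingSphere(podatci[i], 1f);
                    sfere[i] = (i == movableIndex) ? restSphere.Transform(world1) : restSphere;
                }
            }
        }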

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.

    Read the article

  • Turn Based Event Algorithm

    - by GamersIncoming
    I'm currently working on a small roguelike in XNA, which sees the player in a randomly generated series of dungeons fending off creeps, as you might expect. As with most roguelikes, the player makes a move, and then each of the creeps currently active on screen makes a move in turn, until all creeps have updated and it returns to the player's go. On paper, the simple algorithm is straightforward:

    1. Player takes turn
    2. Turn number increments
    3. For each active creep, update position
    4. Once all active creeps have updated, allow player to take next turn

    However, when it comes to actually writing this in more detail, the concept becomes a bit more tricky for me. So my question is this: what is the best way to handle events taking turns to trigger, where the completion of each event triggers the next, when dealing with a large number of creeps (probably stored as an array of an enemy object)? And is there an easier way to create some kind of engine that just takes all objects that need updating and chains them together so their updates follow one another? I'm not asking for code, just algorithms and theory in the direction of objects triggering updates one after the other, in a turn-based manner. Thanks in advance.

    Edited: Here's the code I currently have that is horrible :/

        if (player.getTurnOver() && updateWait == 0)
        {
            if (creep[creepToUpdate].getActive())
            {
                creep[creepToUpdate].moveObject(player, map1);
                updateWait = 10;
            }
            if (creepToUpdate < creep.Length - 1)
            {
                creepToUpdate++;
            }
            else
            {
                creepToUpdate = 0;
                player.setTurnOver(false);
            }
        }
        if (updateWait > 0)
        {
            updateWait--;
        }
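
    On the theory side, one common shape is a queue of actors: the scheduler pops the next actor, lets it act, and re-enqueues it, so the chaining falls out of the data structure instead of index bookkeeping. A hedged C# sketch with made-up IActor/TakeTurn names:

        using System.Collections.Generic;

        public interface IActor
        {
            bool IsActive { get; }
            void TakeTurn();          // player input or creep AI goes here
        }

        public class TurnScheduler
        {
            private readonly Queue<IActor> _queue = new Queue<IActor>();

            public void Add(IActor actor) => _queue.Enqueue(actor);

            // Advance exactly one actor per call (call this from Update so animations
            // or input can interleave between turns).
            public void Step()
            {
                if (_queue.Count == 0) return;
                IActor current = _queue.Dequeue();
                if (current.IsActive)
                {
                    current.TakeTurn();
                    _queue.Enqueue(current);   // back of the line for its next turn
                }
                // inactive (dead) actors are simply dropped from the cycle
            }
        }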

    Read the article

  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a Node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front end via JSON. Likewise, the app also spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front end into SQL rows. Hence I have used a fat-model design for the app -- most of the real code resides in the models, where the data translation happens. What's the best approach to test such models with unit tests? I mean in particular the methods that create JavaScript objects from the SQL rows and serve them to the front end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data so that my tests can anticipate the kind of data the app should be returning when they run.

    Read the article

  • Translating an object along its heading

    - by Kuros
    I am working on a simulation that requires me to have several objects moving around in 3D space (text output of their current position on the grid and heading is fine; I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent their position, and I am also using Euler angles for the heading. I can translate one of my entities with a matrix along whatever axis, and I can alter their heading. For example, if I have an entity at (order is XYZ) 1, 1, 1 with a heading of 0, I can apply a translation matrix to get it to walk to 1, 1, 2 fine. However, if I change its heading to 270, it still walks to 1, 1, 3 instead of 2, 1, 2 as I desire. I have a feeling that my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this?

    Addition: I am using 3D vectors to represent the current position and the heading (using the three Euler angles). For now, all I want to do is have an entity walk in a square, reporting its current position at each step. So, assuming it starts at 10, 10, 10, I want it to walk as follows:

    - 10, 10, 10 -> 10, 10, 15
    - 10, 10, 15 -> 5, 10, 15
    - 5, 10, 15 -> 5, 10, 10
    - 5, 10, 10 -> 10, 10, 10

    My 1-Z-unit translation matrix is as follows:

        [1 0 0 0]
        [0 1 0 0]
        [0 0 1 1]
        [0 0 0 1]

    My rotation matrix is as follows:

        [ 0 0 1 0]
        [ 0 1 0 0]
        [-1 0 0 0]
        [ 0 0 0 1]
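
    The usual fix is to express the step in the entity's own (object) space and rotate it by the heading before adding it to the world position, rather than translating along a fixed world axis. A hedged C# sketch using System.Numerics follows; the names, the +Z-forward convention and the handedness are assumptions that may need flipping to match this simulation:

        using System;
        using System.Numerics;

        public class Entity
        {
            public Vector3 Position;
            public float YawDegrees;   // heading around the Y axis

            // One step of 'distance' along the current heading.
            public void StepForward(float distance)
            {
                float yaw = YawDegrees * (float)(Math.PI / 180.0);
                var rotation = Matrix4x4.CreateRotationY(yaw);
                var localStep = new Vector3(0, 0, distance);            // "forward" in object space
                var worldStep = Vector3.Transform(localStep, rotation); // rotate the step into world space
                Position += worldStep;
            }
        }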

    Read the article

  • How to calculate intersection time and place of multiple moving arcs

    - by user20733
    I have rocks orbiting moons, moons orbiting planets, planets orbiting suns, and suns orbiting black holes, and the current system could have many, many layers of orbits. The position of any object is a function of time and is relative to the object it orbits. (So far so good.) Now, given two objects (A, B), a start time and a speed, I want to work out when and where to go. I can work out where A and B are at a given time, so I just need:

    1. The direction to travel in from A to B (remember B is moving, and not in a straight line)
    2. The time to get to B

    Travel must be in a straight line with the shortest possible distance. As an extension to this question, how will I know whether it's better to wait? E.g. is it faster to stay on object A and wait for an hour, when the objects may be closer, than to set off from A to B at the start? Cheers; it hurt my brain.
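
    One practical approach (a hedged sketch, assuming a position-as-a-function-of-time routine per object as described above, resolved to absolute coordinates): iterate "time to reach where B will be at time t" until it settles, then aim at B's predicted position. For the "is it better to wait" extension, the same solver can be evaluated for a range of candidate departure times and the smallest arrival time picked.

        using System;
        using System.Numerics;

        // Find t such that |B(startTime + t) - start| = speed * t, by fixed-point iteration.
        // positionOfB stands in for the orbital position function; convergence is not
        // guaranteed for every orbit, but it is cheap and usually adequate when the
        // traveller is faster than the target.
        public static class Intercept
        {
            public static float SolveTime(Func<float, Vector3> positionOfB,
                                          Vector3 start, float startTime, float speed,
                                          int iterations = 50)
            {
                float t = 0f;
                for (int i = 0; i < iterations; i++)
                {
                    Vector3 target = positionOfB(startTime + t);
                    t = Vector3.Distance(start, target) / speed;  // time to reach B's predicted spot
                }
                return t;  // heading: normalize(positionOfB(startTime + t) - start)
            }
        }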

    Read the article

  • Why is specular light not working?

    - by nkint
    Hi, I'm on JOGL and this is my method for lighting:

        private void lights(GL gl) {
            float[] LightPos = {0.0f, 0.0f, -1.0f, 1.0f};
            float[] LightAmb = {0.2f, 0.2f, 0.2f, 1.0f};
            float[] LightDif = {0.6f, 0.6f, 0.6f, 1.0f};
            float[] LightSpc = {0.9f, 0.9f, 0.9f, 1.0f};
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_POSITION, LightPos, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_AMBIENT, LightAmb, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_DIFFUSE, LightDif, 0);
            gl.glLightfv(GL.GL_LIGHT1, GL.GL_SPECULAR, LightSpc, 0);
            gl.glLightfv(GL.GL_LIGHT0, GL.GL_SPECULAR, LightSpc, 0);
            gl.glEnable(GL.GL_LIGHT0);
            gl.glEnable(GL.GL_LIGHT1);
            gl.glShadeModel(GL.GL_SMOOTH);
            gl.glEnable(GL.GL_LIGHTING);
        }

    And I see my objects flat, with no specular light. Any ideas?

    P.S. This is how I render my objects:

        gl.glColor3f(1f, 0f, 0f);
        gl.glBegin(GL.GL_TRIANGLES);
        for (Triangle t : tubeModel.getTriangles()) {
            gl.glVertex3f(t.v1.x, t.v1.y, t.v1.z);
            gl.glVertex3f(t.v2.x, t.v2.y, t.v2.z);
            gl.glVertex3f(t.v3.x, t.v3.y, t.v3.z);
        }
        gl.glEnd();

    Read the article

  • JiglibX addition to existing project questions

    - by SomeXnaChump
    Got a very simple existing project that basically contains a lot of cubes. Now I want to add a physics system to it, and JiglibX seemed like the simplest one with some tutorials out there. My main problem is that the physics don't seem to be working how I imagined: I expected my tower of cubes to come crashing down, but they don't seem to do anything. I think my problem is that my cubes do not inherit DrawableGameComponent; they are managed by a world object that updates and renders them, so they are at no point put into the game's component list. I am not sure if this means that JiglibX will not be able to interact with them, as in all the tutorials there are no explicit calls to add the Body objects to the physics system, so I can only presume that they are using a static/singleton under the hood which automatically hooks everything in, or that they use the game's component list somehow. I also noticed that a lot of the tutorials use the following when setting up the physics system:

        float timeStep = (float)gameTime.ElapsedGameTime.Ticks / TimeSpan.TicksPerSecond;
        PhysicsSystem.CurrentPhysicsSystem.Integrate(timeStep);

    Would it not be better to keep a local instance of the created PhysicsSystem object and just call myPhysicsSystem.Integrate(timeStep)?

    Read the article

  • Making a class pseudo-immutable by setting a flag

    - by scott_fakename
    I have a Java project that involves building some pretty complex objects. There are quite a lot (dozens) of different ones, and some of them have a HUGE number of parameters. They also need to be immutable. So I was thinking the builder pattern would work, but it ends up requiring a lot of boilerplate. Another potential solution I thought of was to make a mutable class but give it a "frozen" flag, a la Ruby. Here is a simple example:

        public class EqualRule extends Rule {
            private boolean frozen;
            private int target;

            public EqualRule() {
                frozen = false;
            }

            public void setTarget(int i) {
                if (frozen)
                    throw new IllegalStateException("Can't change frozen rule.");
                target = i;
            }

            public int getTarget() {
                return target;
            }

            public void freeze() {
                frozen = true;
            }

            @Override
            public boolean checkRule(int i) {
                return (target == i);
            }
        }

    "Rule" is just an abstract class that has an abstract "checkRule" method. This cuts way down on the number of objects I need to write, while also giving me an object that becomes immutable for all intents and purposes. It kind of acts as if the object were its own builder... but not quite. I'm not too excited, however, about having an immutable object disguised as a bean. So I have two questions:

    1. Before I go too far down this path, are there any huge problems that anyone sees right off the bat? For what it's worth, it is planned that this behavior will be well documented...
    2. If so, is there a better solution?

    Thanks
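
    One incremental hardening of the freeze approach, sketched in C# for consistency with the other examples on this page (the same move works in Java), is to hoist the frozen check into the base class so each setter reduces to a one-line guard; FreezableRule and ThrowIfFrozen are illustrative names:

        using System;

        public abstract class FreezableRule
        {
            public bool IsFrozen { get; private set; }

            public void Freeze() => IsFrozen = true;

            protected void ThrowIfFrozen()
            {
                if (IsFrozen)
                    throw new InvalidOperationException("Can't change frozen rule.");
            }

            public abstract bool CheckRule(int i);
        }

        public class EqualRule : FreezableRule
        {
            private int _target;

            public int Target
            {
                get => _target;
                set { ThrowIfFrozen(); _target = value; }   // the only per-field boilerplate
            }

            public override bool CheckRule(int i) => _target == i;
        }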

    Read the article

  • How to enable a Web portal-based rich enterprise platform on different domains and hosts using JS without customization/server configuration

    - by S.Jalali
    Our company, Coscend, has built a Web portal-based communications and cloud collaboration platform using JavaScript (JS), embedded in HTML5 and formatted with CSS3. Other technologies used in the core code include Flash, Flex, PostgreSQL and MySQL. Our team would like to host this platform in five different Windows and Linux environments that run different types of Web servers, such as IIS and Apache.

    Technical challenge: each of these Windows and Linux servers has a different host name and domain name (and IP address), but we would like to keep our enterprise platform independent of the host server configuration.

    Possible approach to a solution: we think an API (an interface module with a GUI) is needed to accomplish this level of modularity and flexibility while deploying at our enterprise customers.

    Seeking your insights: in this context, our team would appreciate your guidance on the following. Is there an algorithmic method to implement this Web portal-based platform in these Windows and Linux environments while separating it from server configuration, i.e., customizing the host name, domain name and IP address for each individual instance? For example, would it be suitable to create some JS variables/objects for the host name and domain name and call them in the different implementations? If a reference to the host/domain names occurs in hundreds of portal modules, these variables or JS objects would replace it. If so, what is the best way to make these object modules, written in JS, portable and reusable across different environments and instances for enterprise customers? Here is an example of the implemented code for the said platform: the following Web site (www.CoscendCommunications.com) was built using this enterprise collaboration platform and has base code examples of the platform. This Web site is domain-specific. We would like to make the underlying platform domain- and host-independent, which will allow it to be deployed in multiple instances for our enterprise customers.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more information it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but also things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and – very important – what are the security requirements? From the answers to these types of questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database there might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • Client-server application design issue

    - by user2547823
    I have a collection of clients on the server side, and there are some objects that need to work with that collection - adding and removing clients, sending messages to them, updating connection settings and so on. They may perform these actions simultaneously, so a mutex or another synchronization primitive is required. I want to share one instance of the collection between these objects, but all of them require access to the collection's private fields. I hope this code sample makes it clearer [C++]:

        class Collection {
            std::vector< Client* > clients;
            Mutex mLock;
            ...
        };

        class ClientNotifier {
            void sendMessage() {
                mLock.lock();
                // loop over clients and send a message to each of them
            }
        };

        class ConnectionSettingsUpdater {
            void changeSettings( const std::string& name ) {
                mLock.lock();
                // if a client with this name is inside the collection, change its settings
            }
        };

    As you can see, all these classes require direct access to the Collection's private fields. Can you give me advice on how to implement such behaviour correctly, i.e. keeping the Collection's interface simple, without it having to know about its users?
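
    One standard answer is to invert the dependency: keep the vector and the mutex private to the collection and expose the operations themselves, so ClientNotifier and ConnectionSettingsUpdater never touch its internals. A hedged sketch in C# (the question is C++, where the same shape works with std::mutex and std::lock_guard); all names are illustrative:

        using System.Collections.Generic;

        public class Client
        {
            public string Name { get; set; }
            public void Send(string message) { /* socket write elided */ }
            public void ApplySettings(string settings) { /* elided */ }
        }

        // The collection owns both the list and the lock; callers get operations, not fields.
        public class ClientCollection
        {
            private readonly List<Client> _clients = new List<Client>();
            private readonly object _lock = new object();

            public void Add(Client client)    { lock (_lock) _clients.Add(client); }
            public void Remove(Client client) { lock (_lock) _clients.Remove(client); }

            public void Broadcast(string message)
            {
                lock (_lock)
                    foreach (var client in _clients)
                        client.Send(message);
            }

            public void UpdateSettings(string name, string settings)
            {
                lock (_lock)
                    foreach (var client in _clients)
                        if (client.Name == name)
                            client.ApplySettings(settings);
            }
        }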

    Read the article

  • Good practice on Visual Studio Solutions

    - by JonWillis
    Hopefully a relatively simple question. I'm starting work on a new internal project to provide traceability for repaired devices within the buildings. The database is stored remotely on a web server and will be accessed via a Web API (JSON output) and protected with OAuth. The front-end GUI is being done in WPF, and the business code in C#. From this, I see the different layers Presentation/Application/Datastore. There will be code for managing all the authenticated calls to the API, classes to represent entities (business objects), classes to construct the entities (business objects), parts for the WPF GUI, parts of the WPF viewmodels, and so on. Is it best to create this in a single project, or split them into individual projects? In my heart I say it should be multiple projects. I have done it both ways previously, and found testing to be easier with a single-project solution; however, with multiple projects, circular dependencies can crop up. Especially when classes have interfaces to make them easier to test, I've found things can become awkward.

    Read the article

  • Movement on the X and Z axes is combined?

    - by Magicaxis
    This is probably a stupid question, but I'm trying to simply move a 3D object up, down, left, and right (not forward or backward). The Y axis works fine, but when I increment the object's X position, the object moves BOTH right and backwards! When I decrement X, it moves left and forwards!

        setPosition(getPosition().X + 2 /*times deltatime*/, getPosition().Y, getPosition().Z);

    I was astonished that XNA doesn't have its own setPosition function, so I made a parent class for all objects with a SetPosition and Draw function. SetPosition simply edits a variable "mPosition" and passes it to the common Draw function:

        // Copy any parent transforms.
        Matrix[] transforms = new Matrix[block.Bones.Count];
        block.CopyAbsoluteBoneTransformsTo(transforms);

        // Draw the model. A model can have multiple meshes, so loop.
        foreach (ModelMesh mesh in block.Meshes)
        {
            // This is where the mesh orientation is set, as well
            // as our camera and projection.
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = transforms[mesh.ParentBone.Index]
                    * Matrix.CreateRotationY(MathHelper.ToRadians(mOrientation.Y))
                    * Matrix.CreateTranslation(mPosition);
                effect.View = game1.getView();
                effect.Projection = game1.getProjection();
            }
            // Draw the mesh, using the effects set above.
            mesh.Draw();
        }

    I tried to work it out by attempting to increment and decrement the Z axis, but nothing happens?! So using the X axis changes the object's X and Z axes, but changing the Z does nothing. Great. So how do I separate the X and Z axis movement?

    Read the article

  • What makes you look like a bad developer (i.e. a hacker)? [on hold]

    - by user134583
    This comes from a lot of people about me, so I have to look at myself. I wonder what makes someone a bad developer (i.e. a hacker). These are a few things about me:

    - I use the IDE intensively - all features, you name it: auto-completion, refactoring, quick fixes, open type, view hierarchy, API documentation, etc.
    - When I write code for a project in a domain I am not used to (something new, where I can't have fluency), I only have very rough high-level ideas.
    - I don't use the standard modeling diagrams for early detailed planning, only unorthodox diagrams that I invented for when I need to draw the design in detail. I don't use UML or similar; I find them not enough. I divide the diagrams I draw into three types: very high-level diagrams which can probably be understood by almost anybody; data entity diagrams used for modeling data objects only (like ER diagrams, and trees for inheritance and composition); and action diagrams for agents/classes and their interactions on the data objects they contain.
    - I constantly change the interface (public methods) between interacting agents/classes if the need arises. I am more restrained once the interface and the module have matured.
    - I write initial concept code in a quick, hackish way, just so that the module works in the general cases and I can play around with it. The module gets refactored intensively after playing around, so I can see more corner cases that I couldn't (or wouldn't want to) anticipate before writing code.
    - I use JUnit for integration-like tests, using the TestSuite class and ordering unit test classes in the suite.
    - I use the debugger almost any time there is a problem, instead of reading the code.
    - I constantly search the internet for how to do things with libraries that I haven't used a lot.

    So, your judgment: am I a bad developer? A hacker? Put another way, to make sure this is not considered off-topic: is it bad practice to make your code too agile during the incubating/prototyping phase of software development? And is it bad practice to use JUnit for integration testing? (I know there are other frameworks for integration testing, but those frameworks are for specific products, not general use.)

    Read the article

  • Using the Coherence ConcurrentMap Interface (Locking API)

    - by jpurdy
    For many developers using Coherence, the first place they look for concurrency control is the com.tangosol.util.ConcurrentMap interface (part of the NamedCache interface). The ConcurrentMap interface includes methods for explicitly locking data. Despite the obvious appeal of a lock-based API, these methods should generally be avoided for a variety of reasons:

    - They are very "chatty" in that they can't be bundled with other operations (such as get and put), and there are no collection-based versions of them.
    - Locks do not directly impact mutating calls (including puts and entry processors), so all code must make explicit lock requests before modifying (or in some cases reading) cache entries.
    - They require coordination of all code that may mutate the objects, including the need to lock at the same level of granularity (there is no built-in lock hierarchy and thus no concept of lock escalation).
    - Even if all code is properly coordinated (or there's only one piece of code), a failure during updates may leave a collection of changes to a set of objects in a partially committed state.
    - There is no concept of a read-only lock.

    In general, use of locking is highly discouraged for most applications. Instead, the use of entry processors provides a far more efficient approach, at the cost of some additional complexity.

    Read the article

  • CCMoveBy values on update()

    - by Jose M Pan
    Hope you can help me. This is my problem: I have a scheduled update in which I track the movements of my objects (sprites). I move them with CCMoveBy, and I need to constantly update their zOrder. For setting the zOrder I've made a setZOrder(), which takes the actual position of the sprite. And here is the problem: I get all the X and Y values AFTER the object is at the target. I know I get the values after the object is in the new position because I've logged them with CCLog; I can read the values from the sprite only when it's in the new position, so everything is well sorted only when the objects are not moving. How can I get the CCMoveBy values on every tick of the update? (Or how can I get the CCMoveBy values in "real time"?) Thanks a lot in advance. Here is an idea of my code:

        this->schedule(schedule_selector(Game::update));

        void Game::update(float dt)
        {
            setZOrder();
            moveObjects();
        }

        void Game::setZOrder()
        {
            // This function takes the X and Y position and the row and column where the
            // sprite is. It works well, but I'm getting the "move" action values only
            // after the object is in place.
        }

        void Game::moveObjects()
        {
            for (i = 0; i < numChildren; i++)
            {
                CCActionInterval* move = CCMoveBy::create(targetPoint, time);
                object[i]->runAction(move);
            }
        }

    Read the article

  • How to handle class dependency with interfaces and implementations

    - by lealand
    I'm using ObjectAid with Eclipse to generate UML class diagrams for my latest Java project, and I currently have a handful of situations like this, where I have a dependency between two interfaces, as well as on one of the implementations of one of the interfaces. Here, foo is the graphics library I'm using. In the previous example, FooCanvas draws ITexture objects to the screen, and both FooCanvas and its interface, ICanvas, take ITexture objects as arguments to their methods. The method in the canvas classes which causes this dependency is the following:

        void drawTexture(ITexture texture, float x, float y);

    Additionally, I tried a variation on the method signature using Java's generics:

        <T extends ITexture> void drawTexture(T texture, float x, float y);

    The result of this was a class diagram where the only dependencies were between the interfaces and the implementing classes, and no dependency of a canvas object on a texture. I'm not sure whether this is more ideal or not. Is the dependency of both the interface and the implementation on another interface an expected pattern, or is it typical and/or possible to keep the implementation 'isolated' from its interface's dependencies? Or is the generic method the ideal solution?

    Read the article

  • Representing a world in memory

    - by user9993
    I'm attempting to write a chunk-based map system for a game, where chunks are loaded/unloaded as the player moves around, so that the program doesn't run out of memory by having the whole map in memory. I have this part mostly working; however, I've hit a wall regarding how to represent the contents of each chunk in memory, because of my so-far limited understanding of OOP languages. The design I currently have has a ChunkManager class that uses a .NET List to store instances of Chunk classes. The chunks consist of "blocks"; it's similar to a Minecraft-style game. Inside the Chunk classes, I have some information such as the chunk's X/Y coordinates etc., and I also have a three-dimensional array of Block objects (three-dimensional because I need XYZ values). Here's the problem: the Block class has some basic properties, and I had planned on making different types of blocks inherit from this "base" class. So, for example, I would have "StoneBlock", "WaterBlock" etc. Because I have blocks of many different types, I don't know how I would create an array with different object types in each cell. This is how I currently have the three-dimensional array declared in my Chunk class:

        private Block[][][] ArrayOfBlocks;

    But obviously this will only accept Block objects, not any of the other classes that inherit from Block. How would I go about creating this? Or am I doing it completely wrong and there's a better way?
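
    For reference, a minimal sketch of two options: a base-typed array, which does accept subclasses such as StoneBlock through polymorphism, and a compact id-plus-palette (flyweight) layout that avoids allocating one object per cell. Names and sizes are illustrative:

        using System.Collections.Generic;

        public abstract class Block
        {
            public abstract bool IsSolid { get; }
        }

        public class StoneBlock : Block { public override bool IsSolid => true; }
        public class WaterBlock : Block { public override bool IsSolid => false; }

        public class Chunk
        {
            public const int Size = 16;

            // Option 1: base-typed array; any Block subclass fits in a cell.
            private readonly Block[,,] _blocks = new Block[Size, Size, Size];
            public void Set(int x, int y, int z, Block b) => _blocks[x, y, z] = b;

            // Option 2 (flyweight): one byte per cell plus a shared palette of block types.
            private readonly byte[,,] _ids = new byte[Size, Size, Size];
            private static readonly List<Block> Palette =
                new List<Block> { null, new StoneBlock(), new WaterBlock() };
            public Block Get(int x, int y, int z) => Palette[_ids[x, y, z]];
        }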

    Read the article
