Search Results

Search found 2417 results on 97 pages for 'states'.


  • Starting to Program C++ and Java

    - by user0321
    So as the title states, I'm trying to start programming in C++ and Java. I took C++ and Java courses in high school and I'm trying to get back into it. Of course, all I want to get working for now is a simple "Hello World" program. A couple of things: I want to use an IDE, and I've decided on Eclipse; I'm just confused about how to go about downloading and using it. For Java: I get stuck right on the download page. They show Eclipse Classic, Eclipse IDE for Java Developers, and Eclipse IDE for Java EE Developers. I've only ever programmed in Notepad and compiled at the command prompt. Question 1: Which version of Eclipse should I download? Question 2: Do I need to install the Java JDK, or does it come built into Eclipse? For C++: I guess I download the separate Eclipse IDE for C/C++ Developers? I'm not too sure; I remember using Microsoft Visual Studio for C++ and finding it awkward. Anyway: Question 3: Which version of Eclipse should I download? Question 4: Does C++ have a development kit, or does it come built into Eclipse?
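    For reference, the kind of program the poster wants running first; a minimal sketch that works from any IDE or, with a JDK installed, from the command prompt via javac HelloWorld.java followed by java HelloWorld:

        public class HelloWorld {
            // Entry point: the JVM looks for exactly this signature.
            public static void main(String[] args) {
                System.out.println("Hello World");
            }
        }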

    Read the article

  • JavaOne 2012 Conference Preview

    - by Janice J. Heiss
    A new article by noted freelancer Steve Meloan, now up on otn/java and titled “JavaOne 2012 Conference Preview,” looks ahead to the fast-approaching JavaOne 2012 Conference, scheduled for September 30-October 4 in San Francisco. The Conference will celebrate and highlight one of the world’s leading technologies. As Meloan states, “With 9 million Java developers worldwide, 5 billion Java cards in use, 3 billion mobile phones running Java, 1 billion Java downloads each year, and 100 percent of Blu-ray disk players and 97 percent of enterprise desktops running Java, Java is a technology that literally permeates our world.”
    The 2012 JavaOne is organized under seven technical tracks:
    * Core Java Platform
    * Development Tools and Techniques
    * Emerging Languages on the JVM
    * Enterprise Service Architectures and the Cloud
    * Java EE Web Profile and Platform Technologies
    * Java ME, Java Card, Embedded, and Devices
    * JavaFX and Rich User Experiences
    Conference keynotes will lay out the Java roadmap. For the Sunday keynote, such Oracle luminaries as Cameron Purdy, Vice President of Development; Nandini Ramani, Vice President of Engineering, Java Client and Mobile Platforms; Richard Bair, Chief Architect, Client Java Platform; and Mark Reinhold, Chief Architect, Java Platform, will be presenting. For the Thursday IBM keynote, Jason McGee, Distinguished Engineer and Chief Architect for IBM PureApplication System, and John Duimovich, Java CTO and IBM Distinguished Engineer, will explore Java and IBM's cloud-based initiatives. All in all, the JavaOne 2012 Conference should be as exciting as ever. Link to the article here. Originally published on blogs.oracle.com/javaone.

    Read the article

  • (Abstract) Game engine design

    - by lukeluke
    I am writing a simple 2D game (for mobile platforms) for the first time. From an abstract point of view, I have the main player controlled by the human, the enemies, elements that will interact with the main player, and other living elements controlled by a simple AI (both enemies and non-enemies). The main player is controlled entirely by the human; all the other actors are controlled by AI. So I have a class CActor and a class CActorLogic to start with. I would define a CActor subclass CHero (the main player, controlled with some input device). This class will probably implement some kind of listener, in order to capture input events. The other players, controlled by the AI, will probably be specific subclasses of CActor (one subclass per type, obviously). This seems reasonable. The CActor class should have a reference to a method of CActorLogic, which we might call CActorLogic::Advance() or similar. Actors should also have a visual representation, so I would introduce a CActorRepresentation class with a method like Render() that draws the actor (that is, the right frame of the right animation). Where to change the animation? Well, the actor logic method Advance() should take care of checking collisions and other things. I would like to discuss the design of a game engine (actors, entities, objects, messages, input handling, visualization of object states (that is, rendering, sound output and so on)), not from a low-level point of view but from a high-level point of view, as described above. My question is: is there any book or online resource that will help me organize things (using an object-oriented approach)? Thanks
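    To make the shape of that design concrete, here is a minimal sketch of the actor/logic/representation split described above, written in Java rather than C++ and with every name hypothetical:

        // Hypothetical sketch of the CActor / CActorLogic / CActorRepresentation split.
        interface ActorLogic {
            void advance(Actor actor, float dt); // per-frame update: AI, collisions, animation switching
        }

        interface ActorRepresentation {
            void render(Actor actor); // draw the right frame of the right animation
        }

        abstract class Actor {
            float x, y;
            private final ActorLogic logic;
            private final ActorRepresentation representation;

            Actor(ActorLogic logic, ActorRepresentation representation) {
                this.logic = logic;
                this.representation = representation;
            }

            void update(float dt) { logic.advance(this, dt); }
            void render() { representation.render(this); }
        }

        // The hero is driven by input events; AI-driven actors would be other subclasses
        // whose ActorLogic implements their behavior.
        class Hero extends Actor {
            Hero(ActorLogic logic, ActorRepresentation representation) {
                super(logic, representation);
            }

            void onInputEvent(String event) {
                // translate the device event into movement or actions here
            }
        }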

    Read the article

  • Cannot access personal website from home IP. More details inside.

    - by GX67
    This is a recent problem I've been having. My site can be accessed from almost everywhere except from my home IP, where I do most of my editing and updating. I've tested from my school's network, from a friend's connection out of state (multiple states), and through a tethered connection on my friend's Android; in all those cases viewing the site, accessing the cPanel, and using FTP all work. Here's what happens when I try to view it from my home IP: The page times out in Firefox, IE, and Chrome. From cmd, I ran tracert and ping, and both failed. Log here. downforeveryoneorjustme.com says my site is up, and so do the other site checkers. I can't access my cPanel or FTP accounts. I can't access the host's site. (I use perfectz.info for hosting, and I can't access their site either.) System settings: No firewall enabled. Ports are seemingly properly forwarded (e.g. the ports are open in the router settings, and are open everywhere else). I have an email forwarder set up from the cPanel that works just fine (i.e. I can receive emails sent to that address). If any other information is needed, I'll do my best to provide it. UPDATE @ilhan: I use two things: 1) the site cPanel from in-browser, and 2) Dreamweaver CS5 FTP. @Matthias: I tested both, and it passes the dual stack with a 10/10. What should I do then?

    Read the article

  • Rails: Law of Demeter Confusion

    - by user2158382
    I am reading a book called Rails AntiPatterns, and they talk about using delegation to avoid breaking the Law of Demeter. Here is their prime example. They believe that calling something like this in the controller is bad (and I agree):

        @street = @invoice.customer.address.street

    Their proposed solution is to do the following:

        class Customer
          has_one :address
          belongs_to :invoice

          def street
            address.street
          end
        end

        class Invoice
          has_one :customer

          def customer_street
            customer.street
          end
        end

        @street = @invoice.customer_street

    They state that since you only use one dot, you are not breaking the Law of Demeter here. I think this is incorrect, because you are still going through customer to go through address to get the invoice's street. I primarily got this idea from a blog post I read: http://www.dan-manges.com/blog/37 In the blog post the prime example is:

        class Wallet
          attr_accessor :cash
        end

        class Customer
          has_one :wallet

          # attribute delegation
          def cash
            @wallet.cash
          end
        end

        class Paperboy
          def collect_money(customer, due_amount)
            if customer.cash < due_amount
              raise InsufficientFundsError
            else
              customer.cash -= due_amount
              @collected_amount += due_amount
            end
          end
        end

    The blog post states that although there is only one dot (customer.cash instead of customer.wallet.cash), this code still violates the Law of Demeter: "Now in the Paperboy collect_money method, we don't have two dots, we just have one in 'customer.cash'. Has this delegation solved our problem? Not at all. If we look at the behavior, a paperboy is still reaching directly into a customer's wallet to get cash out." EDIT: I completely understand and agree that this is still a violation, that I need to create a method in Wallet called withdraw that handles the payment for me, and that I should call that method inside the Customer class. What I don't get is that, by the same reasoning, my first example still violates the Law of Demeter, because Invoice is still reaching directly into Customer to get the street. Can somebody help me clear up this confusion? I have been searching for the past two days trying to let this topic sink in, but it is still confusing.
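    For what the EDIT describes, a minimal sketch (in Java rather than Ruby, with every name hypothetical) of the wallet example once the behavior lives where the data lives:

        // Hypothetical sketch: Wallet owns the withdrawal rule, Customer delegates,
        // and Paperboy only ever talks to the Customer.
        class InsufficientFundsException extends RuntimeException {}

        class Wallet {
            private int cash;

            Wallet(int cash) { this.cash = cash; }

            int withdraw(int amount) {
                if (amount > cash) throw new InsufficientFundsException();
                cash -= amount;
                return amount;
            }
        }

        class Customer {
            private final Wallet wallet = new Wallet(100);

            // one dot for the caller; the wallet's internals stay hidden
            int pay(int amount) { return wallet.withdraw(amount); }
        }

        class Paperboy {
            private int collected;

            void collectMoney(Customer customer, int dueAmount) {
                collected += customer.pay(dueAmount); // no reaching into the wallet
            }
        }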

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance
    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.
    Data Continuity Checklist:
    * Disaster Recovery Plan/Policy
    * Backups
    * Redundancy
    * Trained Staff
    Business Continuity Policies
    In order to protect data in any emergency, a company needs to put in place a disaster recovery plan, and policies that can be executed by IT staff, to ensure the continuity of the existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or part of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policy that IT must maintain redundant hardware in case of hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy, to ensure that all parties involved are knowledgeable enough to execute the recovery plan.
    Business Continuity Procedures
    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow.
    Standard Business Continuity Procedures:
    * Back up data, and test the backups to ensure that they work
    * Hire knowledgeable and trainable staff
    * Offer training on new and existing systems
    * Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    * Maintain redundancy regarding all data and critical business functionality

    Read the article

  • CodeStock 2012 Review: Eric Landes (@ericlandes) - Automated Tests into Automated Builds! How to put the right type of automated tests into the right automated builds.

    Speaker: Eric Landes
    Twitter: @ericlandes
    Blog: http://ericlandes.com/
    This was one of the first sessions I attended during CodeStock 2012. Eric's talk focused mostly on unit testing, and on the idea that the lack of proper unit testing can be compared to stealing from an employer. His point was that if you're not doing proper unit testing, then all of the time wasted on fixing issues that could have been detected with unit tests is like stealing money from the employer. He assumes that the time spent fixing those issues could have been better spent developing new features that drive the business. To a point I can agree with Eric's argument regarding unit testing and stealing, from a company's perspective. I can see how he relates resources being shifted from new development to bug fixes as stealing, based on the fact that the resources used to fix bugs are taken directly from other projects. He also states that boring/redundant and build/test tasks should be automated, because automation reduces the chances of errors and frees up developers to do what they do best: DEVELOP! When he refers to testing, he breaks it down into four distinct types:
    * Unit Tests
    * Acceptance Tests (this also includes Integration Tests)
    * Performance Tests
    * UI Tests
    With this he also recommends that developers not go buck wild striving for 100% code coverage, because some tests may not provide a great return on investment. In his experience, 70% test coverage was a very acceptable rate.

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I have seen a report that was a simple tabular or matrix format, and this report continued that trend. I found that the processes for developing complex SSRS reports aren’t as commonly described as I would have thought. Below I will lay out the process that I went through to create a solution.
    I started with a List control, which will contain the layout of the master (parent) information. This allows for a main repeating report part. The dataset for this report should include the data elements needed to be passed to the subreport as parameters. As you can see, the layout is simply text boxes that are bound to the dataset.
    The next step is to set a row group on the List row. When the dialog appears, select the field that you wish to group your report by. A good example in this case would be the employee name or ID.
    Create a second report, which becomes the subreport. The example below has a matrix control. Create the report as you would any parameter-driven document, by parameterizing the dataset.
    Add the subreport to the main report inside the row of the List control. This can be accomplished either by dragging the report from the solution explorer or by inserting a Subreport control and then setting the report name property.
    The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. Some of the documentation on this states that the dialog will automatically detect the child parameters, but this has not been my experience. You must make sure that the names match exactly. Tie the name of each parameter to either a field in the dataset or a parameter of the parent report.

    Read the article

  • jQuery 2.0 drops support for legacy IE (IE6, IE7, IE8)

    - by Renso
    Originally posted on: http://geekswithblogs.net/renso/archive/2013/10/31/jquery-2.0-drops-support-for-legacy-ie-ie6-ie7-ie8.aspx
    jQuery upgrades may not be as backward compatible as you may think. Starting from version 2 of jQuery, IE6, IE7 and yes, also IE8 will no longer be supported; these are now considered legacy browsers. You will need to stop any upgrades to jQuery and remain on jQuery 1.9 until your SLA states that IE8 no longer has to be supported.
    Some of the reasons for not supporting IE8 and earlier:
    - It removes all the clutter in the jQuery library of code that has to deal with browser compatibility issues between IE6, 7 and 8 and the newer IE versions, the latter being more compliant.
    - IE6 and 7 may have fallen to below 2% of usage generally, but that does not mean it is true for your client base. In the oil and gas industry some clients are years behind, and you may have 50% or more of your clients remain on IE8 or older for the foreseeable future.
    - The differences between browser engines have become almost negligible, as they should be. So one of the greatest goals of jQuery, to abstract that away for developers, is no longer needed (for the most part, anyway).
    - CSS3 features like animations basically replace the need for jQuery 2.0's animations and effects.
    If the need is still there to support IE8 or earlier, but you also want to upgrade, then use conditional comments:

        <!--[if lt IE 9]>
            <script src="jQuery-1.9.0.js"></script>
        <![endif]-->
        <!--[if gte IE 9]><!-->
            <script src="jQuery-2.0.0.js"></script>
        <!--<![endif]-->

    Read the article

  • Do you store mysql exports in your version control tool for reverting to in the event of error?

    - by Rob
    We run an internal web server with in-house software to run a manufacturing line. When new product features are to be added, either or both of the following occur:
    1. Changes to the in-house server software may be required to support them. These are for significant, code-driven changes in functionality.
    2. Changes to the MySQL database: new entries for part numbers, and smaller changes such as configurations and changes to already-existing values and parameters. Such changes don't require code changes, and ideally we'd want our changes to be here rather than in item 1.
    Item 1 is version controlled in Subversion, so previous revisions can be referred to for rolling back in the event of problems introduced in the latest revision. But what about changes to the MySQL database? We have quality processes to ensure that such changes are error-free, but there is always a chance that errors can pass through, e.g. a mistake in data entry, or faults in the code that uses MySQL corrupting the database. We have an automated backup every 6 hours, but what if we want more manually-defined checkpoints in between these intervals? We could use the same backup system, but I wondered if folks here used other methods to store previous states of databases, e.g. exporting the database as a plain-text SQL dump -- at least with this method it would be possible to see diffs, e.g. in Beyond Compare, for troubleshooting. Thoughts?
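    One hedged sketch of that idea (all paths, credentials and names below are placeholders): have a small checkpoint job run mysqldump with --skip-extended-insert, which writes one INSERT per row and so keeps line-based diffs readable, and write the dump into the working copy that Subversion already versions:

        import java.io.File;
        import java.io.IOException;

        public class DbSnapshot {
            public static void main(String[] args) throws IOException, InterruptedException {
                // --skip-extended-insert: one INSERT per row, diff-friendly output
                ProcessBuilder pb = new ProcessBuilder(
                        "mysqldump", "--skip-extended-insert",
                        "-u", "backup_user", "-psecret", "factory_db");
                pb.redirectOutput(new File("db/factory_db.sql")); // inside the SVN working copy
                int exit = pb.start().waitFor();
                if (exit != 0) {
                    throw new IOException("mysqldump failed with exit code " + exit);
                }
            }
        }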

    Read the article

  • System testing - making sure the system conforms to specification. Validation?

    - by user970696
    After weeks of research I have nearly completed my thesis, yet I am unable to clear up the confusion running through all my previous threads here (and through many books). During system testing, we check the system's functionality against the system analysis (the functional system design), but that fits the definition of verification according to many books. Yet I follow ISO 12207, which considers all testing to be validation (making sure the work product meets the requirements for its intended use). How can I justify that unit testing or system testing is validation, even though I check against a specification, which fulfils the definition of verification? When testing that e.g. a "Save button" works, is it validation? This picture shows my understanding of V&V, so different from many other sources, including ISTQB etc. The essential problem I have is that a book using the same picture also states in another place that: test activities in the area of validation are usability, alpha and beta testing; for verification, testable system requirements are defined whose correct implementation can be tested through system tests. Isn't that the opposite of what the picture says? Most books present the following picture, where validation is just making sure that customer needs are satisfied. Mind you that according to ISO, the validation activity is testing.

    Read the article

  • PHP Battle System for RPG game

    - by Jay
    I posted this a while ago on Stack Overflow; they thought it would be better placed here, and I agree. Essentially I know what I want to accomplish, and I have something to the effect of what I want, but I am not satisfied with it. Here's the problem. Each user has some stats: STR (how hard they hit), DEF (dodging/blocking attacks), SPD (when they can strike), and STAMINA (basically their endurance in the game; if this runs out they can no longer fight and lose). What I need is something like this:
    UserA stats: STR: 1,000 DEF: 2,500 SPD: 2,000 (HP: 1000/1000)
    UserB stats: STR: 1,500 DEF: 500 SPD: 4,000 (HP: 1000/1000)
    Because the second user has double the speed, he lands twice the number of hits on the first user before he gets hit himself. And because his strength is less than the first user's defence, he will do no, or little, damage. This is how the battle would theoretically go:
    UserB strikes UserA for 0 damage
    UserB strikes UserA for 0 damage
    UserA strikes UserB for 500 damage
    UserB strikes UserA for 0 damage
    UserB strikes UserA for 0 damage
    UserA strikes UserB for 500 damage, and sends him to the hospital!
    I was using this code, which is buggy and not efficient; I just need a better way to do this: http://pastebin.com/15LiQQuJ Oh, and if anyone has good ideas on how to improve the concept, that would be cool too! It's not that elaborate, so I'll be thinking of all sorts of things to make it more dynamic. Thanks.
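    One way to get exactly that hit ordering, sketched in Java rather than PHP (the threshold and all names are made-up illustration, not a drop-in replacement for the pastebin code): each tick, both fighters accumulate their SPD, and whoever crosses a fixed threshold strikes, so double the speed yields double the hits; damage is simply STR minus DEF, floored at zero:

        class Fighter {
            final String name;
            final int str, def, spd;
            int hp = 1000;
            int meter = 0; // accumulated speed

            Fighter(String name, int str, int def, int spd) {
                this.name = name; this.str = str; this.def = def; this.spd = spd;
            }
        }

        public class Battle {
            static final int THRESHOLD = 4000; // arbitrary; tune to taste

            public static void main(String[] args) {
                Fighter a = new Fighter("UserA", 1000, 2500, 2000);
                Fighter b = new Fighter("UserB", 1500, 500, 4000);
                while (a.hp > 0 && b.hp > 0) {
                    a.meter += a.spd;
                    b.meter += b.spd;
                    // the faster fighter is checked first here, for simplicity
                    if (b.meter >= THRESHOLD) { b.meter -= THRESHOLD; strike(b, a); }
                    if (a.hp > 0 && a.meter >= THRESHOLD) { a.meter -= THRESHOLD; strike(a, b); }
                }
                System.out.println((a.hp > 0 ? a.name : b.name) + " wins");
            }

            static void strike(Fighter attacker, Fighter defender) {
                int damage = Math.max(0, attacker.str - defender.def);
                defender.hp -= damage;
                System.out.println(attacker.name + " strikes " + defender.name
                        + " for " + damage + " damage");
            }
        }

    With the numbers from the question this prints the B, B, A, B, B, A sequence described above, with UserB taking 500 damage per UserA strike. STAMINA could be folded in the same way, drained per strike.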

    Read the article

  • Philosophy behind the memento pattern

    - by TheSilverBullet
    I have been reading up on the memento pattern from various sources on the internet. Differing information from different sources has left me confused about why this pattern is actually needed. The dofactory implementation says that the primary intention of the pattern is to restore the state of the system. Wikipedia says that the primary intention is to be able to restore changes to the system. This gives a different impression: it implies that a system can have a memento implementation with no need to ever restore, and that the ability to restore is just one feature of the pattern. OODesign says that it is sometimes necessary to capture the internal state of an object at some point and have the ability to restore the object to that state later in time, which is useful in case of error or failure. So, my question is: why exactly do we use this pattern? Is it to save previous states, or to enforce encapsulation between the Caretaker and the Memento? Why is this type of encapsulation so important? Edit: For those visiting, check out this Implementation!
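    Both intentions show up in a minimal sketch of the pattern (hypothetical names; the nested class with the private constructor is what gives the Caretaker/Memento encapsulation the question asks about): the originator hands out opaque snapshots, and the caretaker stores them without being able to look inside.

        import java.util.ArrayDeque;
        import java.util.Deque;

        class Editor { // the Originator
            private String text = "";

            void type(String s) { text += s; }
            String getText() { return text; }

            Memento save() { return new Memento(text); }
            void restore(Memento m) { text = m.state; }

            // Only Editor can construct a Memento or read the captured state.
            static final class Memento {
                private final String state;
                private Memento(String state) { this.state = state; }
            }
        }

        class History { // the Caretaker: stores mementos, cannot inspect them
            private final Deque<Editor.Memento> stack = new ArrayDeque<>();
            void push(Editor.Memento m) { stack.push(m); }
            Editor.Memento pop() { return stack.pop(); }
        }

        public class Demo {
            public static void main(String[] args) {
                Editor editor = new Editor();
                History history = new History();
                editor.type("Hello");
                history.push(editor.save());
                editor.type(", world");
                editor.restore(history.pop());
                System.out.println(editor.getText()); // prints "Hello"
            }
        }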

    Read the article

  • DHCP-server doesn't start at boot because of wrong startup order

    - by stolsvik
    Apparently isc-dhcp-server is started too early in the boot sequence: it states that it has nothing to do. If I log in directly as root and start it using the init.d script, it starts normally. My setup is basically an utterly standard router, with eth0 on the internet side and eth1 on the LAN side. However, I've defined a bridge instead of using eth1 directly for the LAN side, so the LAN part of the network isn't up until the bridge is up. I currently believe that the DHCP server is brought up before the bridge, probably because the bridge is brought up with the 'networking' task, while the eths are brought up with the 'network-interface' tasks, which run earlier. (Also, the bridge takes a small age to come up compared to the eths.) If I take away the bridge config and instead use eth1 directly for the LAN side, things work. (However, judging by syslog, things are still tight.) Any ideas on how to get DHCP to start later? (The reason for the bridge is to be able to use KVM with bridged networking.)

    Read the article

  • XNA shield effect with a primitive sphere problem

    - by Sparky41
    I'm having an issue with a shield effect I'm trying to develop. I want a shield effect that surrounds part of a model, like this: http://i.imgur.com/jPvrf.png I currently have this: http://i.imgur.com/Jdin7.png (The red lines are a simple texture, a black background with a red cross in it, for testing purposes: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point.) This sphere is drawn via a primitive (DrawIndexedPrimitives). This is how I calculate the pieces of the sphere, using a class I've called Sphere (this class is based off the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d):

        public class Sphere
        {
            // During the process of constructing a primitive model, vertex
            // and index data is stored on the CPU in these managed lists.
            List<VertexPositionNormal> vertices = new List<VertexPositionNormal>();
            List<ushort> indices = new List<ushort>();

            // Once all the geometry has been specified, the InitializePrimitive
            // method copies the vertex and index data into these buffers, which
            // store it on the GPU ready for efficient rendering.
            VertexBuffer vertexBuffer;
            IndexBuffer indexBuffer;
            BasicEffect basicEffect;

            public Vector3 position = Vector3.Zero;
            public Matrix RotationMatrix = Matrix.Identity;
            public Texture2D texture;

            /// <summary>
            /// Constructs a new sphere primitive,
            /// with the specified size and tessellation level.
            /// </summary>
            public Sphere(float diameter, int tessellation, Texture2D text,
                          float up, float down, float portstar, float frontback)
            {
                texture = text;

                if (tessellation < 3)
                    throw new ArgumentOutOfRangeException("tessellation");

                int verticalSegments = tessellation;
                int horizontalSegments = tessellation * 2;
                float radius = diameter / 2;

                // Start with a single vertex at the bottom of the sphere.
                AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero); // bottom position 5

                // Create rings of vertices at progressively higher latitudes.
                for (int i = 0; i < verticalSegments - 1; i++)
                {
                    float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2;

                    float dy = (float)Math.Sin(latitude / up); // (up) 5
                    float dxz = (float)Math.Cos(latitude);

                    // Create a single ring of vertices at this latitude.
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        float longitude = j * MathHelper.TwoPi / horizontalSegments;

                        float dx = (float)(Math.Cos(longitude) * dxz) / portstar; // port and starboard (right) 2
                        float dz = (float)(Math.Sin(longitude) * dxz) * frontback; // front and back 1.4

                        Vector3 normal = new Vector3(dx, dy, dz);

                        AddVertex(normal * radius, normal, new Vector2(j, i));
                    }
                }

                // Finish with a single vertex at the top of the sphere.
                AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One); // top position 5

                // Create a fan connecting the bottom vertex to the bottom latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(0);
                    AddIndex(1 + (i + 1) % horizontalSegments);
                    AddIndex(1 + i);
                }

                // Fill the sphere body with triangles joining each pair of latitude rings.
                for (int i = 0; i < verticalSegments - 2; i++)
                {
                    for (int j = 0; j < horizontalSegments; j++)
                    {
                        int nextI = i + 1;
                        int nextJ = (j + 1) % horizontalSegments;

                        AddIndex(1 + i * horizontalSegments + j);
                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);

                        AddIndex(1 + i * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + nextJ);
                        AddIndex(1 + nextI * horizontalSegments + j);
                    }
                }

                // Create a fan connecting the top vertex to the top latitude ring.
                for (int i = 0; i < horizontalSegments; i++)
                {
                    AddIndex(CurrentVertex - 1);
                    AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments);
                    AddIndex(CurrentVertex - 2 - i);
                }

                //InitializePrimitive(graphicsDevice);
            }

            /// <summary>
            /// Adds a new vertex to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate)
            {
                vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate));
            }

            /// <summary>
            /// Adds a new index to the primitive model. This should only be called
            /// during the initialization process, before InitializePrimitive.
            /// </summary>
            protected void AddIndex(int index)
            {
                if (index > ushort.MaxValue)
                    throw new ArgumentOutOfRangeException("index");

                indices.Add((ushort)index);
            }

            /// <summary>
            /// Queries the index of the current vertex. This starts at
            /// zero, and increments every time AddVertex is called.
            /// </summary>
            protected int CurrentVertex
            {
                get { return vertices.Count; }
            }

            public void InitializePrimitive(GraphicsDevice graphicsDevice)
            {
                // Create a vertex declaration, describing the format of our vertex data.
                // Create a vertex buffer, and copy our vertex data into it.
                vertexBuffer = new VertexBuffer(graphicsDevice, typeof(VertexPositionNormal),
                                                vertices.Count, BufferUsage.None);
                vertexBuffer.SetData(vertices.ToArray());

                // Create an index buffer, and copy our index data into it.
                indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort),
                                              indices.Count, BufferUsage.None);
                indexBuffer.SetData(indices.ToArray());

                // Create a BasicEffect, which will be used to render the primitive.
                basicEffect = new BasicEffect(graphicsDevice);
                //basicEffect.EnableDefaultLighting();
            }

            /// <summary>
            /// Draws the primitive model, using the specified effect. Unlike the other
            /// Draw overload where you just specify the world/view/projection matrices
            /// and color, this method does not set any renderstates, so you must make
            /// sure all states are set to sensible values before you call it.
            /// </summary>
            public void Draw(Effect effect)
            {
                GraphicsDevice graphicsDevice = effect.GraphicsDevice;

                // Set our vertex declaration, vertex buffer, and index buffer.
                graphicsDevice.SetVertexBuffer(vertexBuffer);
                graphicsDevice.Indices = indexBuffer;
                graphicsDevice.BlendState = BlendState.Additive;

                foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
                {
                    effectPass.Apply();

                    int primitiveCount = indices.Count / 3;
                    graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                                         vertices.Count, 0, primitiveCount);
                }

                graphicsDevice.BlendState = BlendState.Opaque;
            }

            /// <summary>
            /// Draws the primitive model, using a BasicEffect shader with default
            /// lighting. Unlike the other Draw overload where you specify a custom
            /// effect, this method sets important renderstates to sensible values
            /// for 3D model rendering, so you do not need to set these states before
            /// you call it.
            /// </summary>
            public void Draw(Camera camera, Color color)
            {
                // Set BasicEffect parameters.
                basicEffect.World = GetWorld();
                basicEffect.View = camera.view;
                basicEffect.Projection = camera.projection;
                basicEffect.DiffuseColor = color.ToVector3();
                basicEffect.TextureEnabled = true;
                basicEffect.Texture = texture;

                GraphicsDevice device = basicEffect.GraphicsDevice;
                device.DepthStencilState = DepthStencilState.Default;

                if (color.A < 255)
                {
                    // Set renderstates for alpha blended rendering.
                    device.BlendState = BlendState.AlphaBlend;
                }
                else
                {
                    // Set renderstates for opaque rendering.
                    device.BlendState = BlendState.Opaque;
                }

                // Draw the model, using BasicEffect.
                Draw(basicEffect);
            }

            public virtual Matrix GetWorld()
            {
                return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position);
            }
        }

        public struct VertexPositionNormal : IVertexType
        {
            public Vector3 Position;
            public Vector3 Normal;
            public Vector2 TextureCoordinate;

            /// <summary>
            /// Constructor.
            /// </summary>
            public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor)
            {
                Position = position;
                Normal = normal;
                TextureCoordinate = textCoor;
            }

            /// <summary>
            /// A VertexDeclaration object, which contains information about the vertex
            /// elements contained within this struct.
            /// </summary>
            public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
                new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
            );

            VertexDeclaration IVertexType.VertexDeclaration
            {
                get { return VertexPositionNormal.VertexDeclaration; }
            }
        }

    A simple call to the class initialises it. The Draw method is called from the master draw method in the GameComponent. My current thoughts on this are:
    * The direction of the weapon hitting the ship is used to get the middle position for the texture
    * Wrap a texture around the drawn sphere based on this point of contact
    Problem is, I'm not sure how to do this. Can anyone help? Or if you have a better idea, please tell me; I'm open to opinions. :-) Thanks.

    Read the article

  • Ubuntu 13.04 client cannot connect to Raspbian samba share

    - by envoyweb
    I have a client Ubuntu 13.04 machine trying to connect to a server running Raspbian, with samba and samba-common-bin installed on the server. I can see my share, and when I try to log in I get this error: "Unable to access location: Failed to write windows share: Cannot allocate memory." I have installed ntfs-3g for the USB hard drive, which already auto-mounts on the server, so I never had to create a directory or edit fstab. testparm on the server states the following:

        [global]
        workgroup = ENVOYWEB
        server string = %h server
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb

        [homes]
        comment = Home Directories
        valid users = %S
        create mask = 0700
        directory mask = 0700
        browseable = No

        [printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers

        [BigDude]
        comment = Sharing BigDude's Files
        path = /media/BigDude/
        valid users = @users
        read only = No
        create mask = 0755

    testparm on the client, which is running Ubuntu, is as follows:

        [global]
        workgroup = ENVOYWEB
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb

        [printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No

        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers

    Read the article

  • Floating point undesireable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high-speed train. This thinking seems to invoke possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high-speed train? The concern about floating-point errors seems not to be well-founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language. The actual problem in the text states: During the inspection of the code for the emergency braking system of a new high-speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?
    * The code contains three recursive functions (well, that one is obvious).
    * The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
    * The code contains one linked list that uses dynamic memory allocation (the second obvious problem).
    * All inputs are checked to determine that they are within expected bounds before they are used.
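    For what "accuracy cannot be guaranteed" means in practice, here is a small demonstration of ordinary binary floating-point rounding; whether errors of this magnitude matter for a braking computation is exactly the question:

        public class FpDemo {
            public static void main(String[] args) {
                double sum = 0.1 + 0.2;
                System.out.println(sum);        // 0.30000000000000004
                System.out.println(sum == 0.3); // false

                float speed = 0.0f;
                for (int i = 0; i < 1000000; i++) {
                    speed += 0.001f; // accumulate small increments, as a sensor loop might
                }
                System.out.println(speed);      // not exactly 1000.0: accumulated roundoff
            }
        }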

    Read the article

  • Architecture of an action multiplayer game from scratch

    - by lcf
    Not sure whether this is a good place to ask (do point me to a better one if it's not), but since what we're developing is a game, here it goes. This is a "real-time" action multiplayer game. I have familiarized myself with concepts like lag compensation, view interpolation and input prediction, pretty much everything that I need for this, and I have prepared a set of prototypes to confirm that I understood everything correctly. My question is about the situation when the game engine must be rewound to the past to find out whether there was a "hit" (sometimes this may involve a whole recomputation of the world from that moment in the past up to the present moment). I already have a piece of code that does it, but it's not as neat as I need it to be. The domain logic of the app (the physics of the game) must be separated from the presentation (render) and from infrastructure tools (e.g. the specifics of remote server interaction). How do I organize all this? :) Is there any worthy implementation with open sources I can take a look at? What I'm thinking is something like this: Render / User Input -> Game Engine (this is the so-called service layer) -> Processing User Commands & Remote Server -> Domain (Physics). How would you add to this scheme the concept of "ticks" or "interactions", with the possibility to rewind and recalculate the game? Remember, I cannot change the Domain/Physics, only the Game Engine. Should I store an array of "world states"? Should they be just some representations of the world, optimized for this purpose somehow (how?), or should they be actual instances of the world (i.e. including behavior and all that)? Has anybody had similar experience? (I have never worked on a game before, if that matters.)
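    A minimal sketch of the "array of world states" route, assuming the domain can be exposed as an immutable state with a pure step function (everything here is hypothetical scaffolding, not from any particular engine): keep a ring buffer of snapshots plus the commands applied at each tick; a late event amends the command list for its tick and replays forward to the present.

        import java.util.ArrayList;
        import java.util.List;

        interface Command {}

        interface WorldState {
            WorldState step(List<Command> commands); // pure physics tick, lives in the domain layer
        }

        class ReplayEngine {
            private static final int HISTORY = 128; // how many ticks we can rewind
            private final WorldState[] snapshots = new WorldState[HISTORY];
            @SuppressWarnings("unchecked")
            private final List<Command>[] inputs = new List[HISTORY];
            private long currentTick = 0;

            ReplayEngine(WorldState initial) {
                snapshots[0] = initial;
            }

            void advance(List<Command> commandsThisTick) {
                int slot = (int) (currentTick % HISTORY);
                inputs[slot] = new ArrayList<>(commandsThisTick);
                WorldState next = snapshots[slot].step(inputs[slot]);
                currentTick++;
                snapshots[(int) (currentTick % HISTORY)] = next; // overwrites the oldest slot
            }

            // A late event (e.g. a hit reported by the server) arrives for pastTick:
            // amend that tick's commands, then recompute the world up to the present.
            void rewriteHistory(long pastTick, Command lateCommand) {
                if (currentTick - pastTick >= HISTORY) return; // too old to rewind; drop it
                inputs[(int) (pastTick % HISTORY)].add(lateCommand);
                WorldState state = snapshots[(int) (pastTick % HISTORY)];
                for (long t = pastTick; t < currentTick; t++) {
                    state = state.step(inputs[(int) (t % HISTORY)]);
                    snapshots[(int) ((t + 1) % HISTORY)] = state;
                }
            }
        }

    The design choice this forces is exactly the one in the question: snapshots here are actual (immutable) instances of the world, which keeps the replay trivially correct at the cost of memory; compact representations would trade memory for reconstruction work.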

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:
    * Support for Managing Co-administrators: set up account co-administrators to allow others to share service management duties for each Azure subscription.
    * Import/Export support for SQL Databases: export existing SQL Azure databases to blob storage using SQL Server 2012’s BACPAC format, or create a new SQL Azure database from an existing BACPAC stored in blob storage.
    * Storage Container Management and Access Control: create blob storage containers directly within the portal, edit their public/private access settings, and drill into storage containers to see the blobs contained within them.
    * Improved Cloud Service Status Notifications: detailed health status information about cloud services and roles as they transition between states.
    * Virtual Machine Experience Enhancements: option to automatically delete corresponding VHD files from blob storage when deleting VM disks.
    * Service Bus Management and Monitoring: ability to create and manage service bus Namespaces, Queues, Topics, Relays and Subscriptions; rich monitoring of Topics, Queues, and Subscriptions with detailed and customizable dashboard metrics; entity status (Topic, Queue, or Subscription) can be changed interactively via the dashboard; direct links to the Access Control Services (ACS) namespaces when working with service bus access keys.
    * Media Services Monitoring Support: monitor encoding jobs that are queued for processing, as well as active, failed and queued tasks for encoding jobs.
    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted
    Reference ID: P7VVJCM38V8R

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it, and I use it in conjunction with SRP to create classes that do only one thing. I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class:

        class PaycheckCalculator
        {
            // ...
            protected decimal GetOvertimeFactor() { return 2.0M; }
        }

    Now say, for example, that the overtime factor changes to 1.5. Since the above class was designed to be extended, I can easily subclass it and return a different overtime factor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question rather than subclassing, overriding the method, and re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently? Update: based on the answers, it looks like this contrived example is a poor one, for a number of different reasons. Its main intent was to demonstrate that the class was designed to be extended, by providing methods that, when overridden, would alter the behavior of public methods without the need to change internal or private code. Still, I definitely misunderstood OCP.
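    For concreteness, here is the extension route the poster describes, transliterated to Java as a hypothetical sketch (not code from the question):

        // The original class stays closed for modification...
        class PaycheckCalculator {
            protected double getOvertimeFactor() { return 2.0; }

            double overtimePay(double hourlyRate, double overtimeHours) {
                return hourlyRate * getOvertimeFactor() * overtimeHours;
            }
        }

        // ...and the new policy arrives purely by extension. The remaining step the
        // poster mentions is re-binding PaycheckCalculator to this subclass in the
        // IoC container, so no existing code is edited.
        class ReducedOvertimePaycheckCalculator extends PaycheckCalculator {
            @Override
            protected double getOvertimeFactor() { return 1.5; }
        }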

    Read the article

  • Application Scope v's Static - Not Quite the same

    - by Duncan Mills
    An interesting question came up today which, innocent as it sounded, needed a second or two to consider. What's the difference between storing, say, a Map of reference information as a static, as opposed to storing the same Map as an application-scoped variable in JSF? From the perspective of the web application itself there seems to be no functional difference: in both cases the information is confined to the current JVM and potentially visible to your app code (note that application scope is not magically propagated across a cluster; you would need a separate instance on each VM). To my mind the primary consideration here is a matter of leakage. A static will be (potentially) visible to everything running within the same VM (OK, this depends on which class loader was used, but let's keep this simple), and this includes your model code and indeed other web applications running in the same container. An application-scoped object, in JSF terms, is much more ring-fenced: it is visible only to the web app itself, not to other web apps running on the same server, and not directly to the business model layer if that is running in the same VM. So, given that I'm a big fan of coding applications to say what I mean, application scope appeals because it explicitly states how I expect the data to be used, and it provides a more explicit statement about visibility and indeed dependency, as I'd generally inject it explicitly where it is needed. Alternative viewpoints and thoughts are, as ever, welcomed...
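    A minimal sketch of the two choices (hypothetical names; JSF 2 managed-bean annotations assumed):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.faces.bean.ApplicationScoped;
        import javax.faces.bean.ManagedBean;

        // Option 1: a static, reachable from any code the same class loader serves,
        // including the model layer and (depending on packaging) other web apps.
        class ReferenceData {
            static final Map<String, String> COUNTRY_CODES = new ConcurrentHashMap<>();
        }

        // Option 2: application scope, ring-fenced to this web application and
        // injected explicitly where it is needed.
        @ManagedBean
        @ApplicationScoped
        public class ReferenceDataBean {
            private final Map<String, String> countryCodes = new ConcurrentHashMap<>();

            public Map<String, String> getCountryCodes() { return countryCodes; }
        }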

    Read the article

  • Microeconomical simulation: coordination/planning between self-interested trading agents

    - by Milton Manfried
    In a typical perfect-information strategy game like chess, an agent can calculate its best move by searching the state tree for the best possible move, while assuming that the opponent will also make the best possible move (i.e. minimax). I would like to use this approach in a "game" modeling economic activity, where the possible "moves" would be to buy or sell for a given price, and the goal, rather than a specific class of states (e.g. checkmate), would be to maximize some function F of the agent's state (e.g. F(money, widgets) = 10*money + widgets). How do I handle buy/sell actions that require coordination between both parties, at the very least agreement upon a price? The cheap way out would be to set the price beforehand, maybe based upon the current supply, but the idea of this simulation is to examine how prices emerge when freely determined by "perfectly rational" agents. A great example of what I do not want is the trading algorithm in SugarScape. Paraphrasing from Growing Artificial Societies, p101-102: when a pair of agents interact to trade, they each compute their internal valuations of the goods, then a bargaining process is conducted and a price is agreed to; if this price makes both agents better off, they complete the transaction. The protocol itself is beautiful, but what it cannot capture (as far as I can tell) is the ability of an agent to pay more for a good than it otherwise might, because it knows it can sell it for even more at a later date -- what appears to be called "strategic thinking" in this paper at Google Books: Multi-Agent-Based Simulation III: 4th International Workshop, MABS 2003. To get realistic behavior like that, it seems one would either (1) have to build an outrageously complex internal valuation system, which could at best only cover situations planned for at compile time, or otherwise (2) have some mechanism to search the state tree... which would require some way of planning future trades. Note: the chess analogy only works as far as the state-space search goes; the simulation isn't intended to be "zero sum", so a literal minimax search wouldn't be appropriate. And ideally, it should work with more than two agents.

    Read the article

  • Data Center Modernization: Harness the power of Oracle Exalogic and Exadata with PeopleSoft

    - by Michelle Kimihira
    Author: Latha Krishnaswamy, Senior Manager, Exalogic Product Management
    Allegis Group, a Hanover, MD-based global staffing company, is the largest privately held staffing company in the United States, with more than 10,000 internal employees and 90,000 contract employees. Allegis Group is a $6+ billion company offering a full range of specialized staffing and recruiting solutions to clients in a wide range of industries. The company processes about 133,000 paychecks per week, every week of the year. With 300 offices around the world and the hefty task of managing HR and payroll, the PeopleSoft system at Allegis is a mission-critical application. The firm is in the midst of a data center modernization initiative, and part of that project meant moving the company's PeopleSoft applications (the Financials and HR modules as well as a custom Time & Expense module) to a converged infrastructure. The company ran a proof of concept with four different converged architectures before deciding upon Exadata and Exalogic as the platform of choice. Performance, combined with high availability for running mission-critical payroll processes, drove this decision. During the testing on Exadata and Exalogic, Allegis applied a particular tax update (11-F) in the production environment: a job that ran for roughly six hours completed in less than 1.5 hours, and with additional tuning the second run of tax update 11-F dropped to 33 minutes, a 90% improvement! Not only that, the move will help the company save money on middleware by consolidating its use of Oracle licensing on a single platform.
    Summary
    With a modern data center powered by Exalogic and Exadata to run mission-critical PeopleSoft HR and Financial applications, Allegis is positioned to manage business growth and improve employee productivity. PeopleSoft applications run on an engineered systems platform, minimizing hardware and software integration risks.
    Additional Information
    Product Information on Oracle.com: Oracle Fusion Middleware
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world status would need to be updated about once per minute. I am looking for an answer that persuades me either to perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:
    * Reading 5 private entity variables (fetched from the datastore)
    * Fetching as many as 20 static variables (from the datastore, or persisted in server memory)
    * Writing 5 entity variables
    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute; this would update all of the entities and save the results to the datastore, which would be more CPU-intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore; this would be more bandwidth-intensive for the datastore.

    Read the article
