Search Results

Search found 16351 results on 655 pages for 'metro screen'.

  • How to Add a Taskbar to the Desktop in Ubuntu 14.04

    - by Lori Kaufman
    If you’ve switched to Ubuntu from Windows, it may take some time to get used to the new and different interface. However, you can easily incorporate a familiar Windows feature, the Taskbar, into Ubuntu to make the transition easier. A tool called Tint2 provides a bar at the bottom of the Ubuntu Desktop that resembles the Windows Taskbar. We will show you how to install it and make it start every time you log into Ubuntu.

    NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise.

    Press Ctrl + Alt + T to open a Terminal window. To install Tint2, type the following line at the prompt and press Enter:

      sudo apt-get install tint2

    Type your password at the prompt and press Enter. The progress of the installation displays, and then a message displays saying how much disk space will be used. When asked if you want to continue, type a “y” and press Enter. When the installation has finished, close the Terminal window by typing “exit” at the prompt and pressing Enter.

    Click the Search button at the top of the Unity bar and start typing “startup applications” in the Search box. Items that match what you type start displaying below the Search box. When the Startup Applications tool displays, click the icon to open it.

    On the Startup Applications Preferences window, click Add. On the Add Startup Program dialog box, enter a name for the startup application. This name displays in the list on the Startup Applications Preferences window. Type “tint2” in the Command edit box, enter a description in the Comment edit box if desired, and click Add. Tint2 is added as a startup program and will start every time you log into Ubuntu. Click Close to close the Startup Applications Preferences window.

    Log out and log back in to make the Taskbar available on the desktop. You do not need to reboot the computer for this change to take effect. Now, when you minimize a program, an icon for it displays on the Taskbar at the bottom of the screen, just like the Taskbar in Windows.

    If you decide that you don’t want the Taskbar to display every time you log into Ubuntu, you can uncheck the Tint2 startup program on the Startup Applications Preferences window. You don’t need to delete it from the list.
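
    On standard setups, the Startup Applications tool simply writes a desktop entry into your autostart folder, so the same result can be achieved by hand. A minimal sketch, assuming the standard XDG autostart location (the file name is arbitrary):

      # saved as ~/.config/autostart/tint2.desktop
      [Desktop Entry]
      Type=Application
      Name=Tint2
      Comment=Windows-style taskbar
      Exec=tint2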

  • ssao implementation

    - by Irbis
    I am trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for shading calculations. When saving the G-buffer, the vertex shader output looks like this:

      worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
      normal = normalize(normalModelMatrix * inNormal);
      gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculations I render the scene as a full-screen quad and save an occlusion parameter in a texture. (Vertex positions in world space: link. Normals in world space: link.) SSAO implementation:

      subroutine (RenderPassType)
      void ssao() {
          vec2 texCoord = CalcTexCoord();
          vec3 worldPos = texture(texture0, texCoord).xyz;
          vec3 normal = normalize(texture(texture1, texCoord).xyz);
          vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
          vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
          vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
          vec3 bitangent = cross(normal, tangent);
          mat3 tbn = mat3(tangent, bitangent, normal);
          float occlusion = 0.0;
          float radius = 4.0;
          for (int i = 0; i < kernelSize; ++i) {
              vec3 pix = tbn * kernel[i];
              pix = pix * radius + worldPos;
              vec4 offset = vec4(pix, 1.0);
              offset = ProjectionMatrix * ViewMatrix * offset;
              offset.xy /= offset.w;
              offset.xy = offset.xy * 0.5 + 0.5;
              float sample_depth = texture(texture0, offset.xy).z;
              float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
              occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
          }
          outputColor = vec4(occlusion, occlusion, occlusion, 1);
      }

    That code gives the following results: camera looking towards -z in world space: link; camera looking towards +z in world space: link.

    I wonder if it is possible to use world coordinates in the above code. When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas where there is occlusion, so the ground should be white only near the object.
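
    For reference, a minimal sketch of the usual remedy (offered as an assumption, not a verified fix for this scene): do the kernel comparison in view space, where depth along z is camera-relative, reconstructing view-space values from the same G-buffer. Names follow the question's code; the view-space normal assumes no non-uniform scaling:

      vec3 viewPos    = (ViewMatrix * vec4(worldPos, 1.0)).xyz;
      vec3 viewNormal = normalize(mat3(ViewMatrix) * normal);
      // ... build tbn from viewNormal as before ...
      for (int i = 0; i < kernelSize; ++i) {
          vec3 samplePos = viewPos + (tbn * kernel[i]) * radius;
          vec4 offset = ProjectionMatrix * vec4(samplePos, 1.0);
          offset.xy = (offset.xy / offset.w) * 0.5 + 0.5;
          // convert the stored world-space sample to view space before comparing
          float sampleDepth = (ViewMatrix * vec4(texture(texture0, offset.xy).xyz, 1.0)).z;
          float rangeCheck = abs(viewPos.z - sampleDepth) < radius ? 1.0 : 0.0;
          occlusion += (sampleDepth >= samplePos.z ? 1.0 : 0.0) * rangeCheck;
      }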

  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with the sudo apt-get upgrade command on my Ubuntu 10.04 box, and I have Ubuntu 12.04 (3.2.0-29-generic-pae) now. I have two monitors and the following GPU:

      01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2)

    After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across two monitors. When Ubuntu starts, only one monitor is on, and I can see this message on the active monitor: "Not optimum mode. Recommended mode: 1680x1050 60Hz".

    I used Nvidia proprietary drivers on 10.04, but now jockey-text --list shows:

      xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use)
      xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use)

    When I run sudo nvidia-settings it says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server." I typed nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same with nvidia-current: Enabled but Not in use. I also tried nvidia-173, but I ended up in a tty immediately at startup, so I removed it. I used to have some problems with Nvidia proprietary drivers on 10.04 (I had to put paths to EDID files in /etc/X11/xorg.conf explicitly), but the resolution was as recommended and both monitors were working.

    If I understand correctly, the nouveau drivers are used by default now, because the resolution is still quite high, definitely not 800x600. xrandr showed:

      xrandr: Failed to get size of gamma for output default
      Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200
      default connected 1600x1200+0+0 0mm x 0mm
         1600x1200      66.0*
         1280x1024      76.0
         1024x768       76.0
         800x600        73.0
         640x480        73.0
         640x400         0.0
         320x400         0.0
         1680x1050_60.00 (0x4f)  146.2MHz
            h: width  1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz
            v: height 1050 start 1053 end 1059 total 1089           clock 60.0Hz

    However, colors seem a bit faded and blurry with the nouveau drivers, the mouse cursor is invisible if it's placed inside a Firefox window, and only one monitor is working. I like open source, and if it's possible I'd prefer to use the nouveau drivers, but a few things would need fixing. Still, I'm curious why the nvidia-current drivers from the repository don't work now. I read it has something to do with the new X11 server in Ubuntu 12.04; is that true? How can I get it back to work?
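
    Two quick checks that usually narrow this kind of problem down (standard commands, offered as a diagnostic sketch, not a guaranteed fix):

      # which kernel driver is actually bound to the card
      lspci -k | grep -A 3 'VGA'
      # is nouveau still loaded? it must not be, for the NVIDIA driver to bind
      lsmod | grep -E 'nouveau|nvidia'

    If nouveau is still bound, the proprietary packages normally blacklist it, but a leftover /etc/modprobe.d entry or a stale initramfs carried over from the 10.04 install (refresh with sudo update-initramfs -u) can keep it loaded.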

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune.  That’s enough data to start to look at the results in aggregate.  The first thing I want to look at is logical reads.  This is the easiest metric to identify and fix.

    When you upload a trace, I rank each statement based on the total number of logical reads.  I also calculate each statement’s percentage of the total logical reads.  I do the same thing for CPU, duration and logical writes.  When you view a statement you can see all those details.  The statement shown here consumed 61.4% of the total logical reads on the system while we were tracing it.  I also wanted to see the distribution of reads across statements.  On average, the highest ranked statement consumed just under 50% of the reads on the system.

    When I tune a system, I’m usually starting in one of two modes: this “piece” is slow, or the whole system is slow.  If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune them.  You can make that individual piece faster but you may not affect the whole system.  When you’re trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate.  Fixing those will make them faster and it will leave more disk throughput for the rest of the queries.

    Here are some of the things I’ve learned querying this data:

    - The highest ranked query averages just under 50% of the total reads on the system.
    - The top 3 ranked queries average 73% of the total reads on the system.
    - The top 10 ranked queries average 91% of the total reads on the system.

    Remember these are averages across all the traces that have been uploaded.  And I’m guessing that people mainly upload traces where there are performance problems, so your mileage may vary.

    I also learned that slow queries aren’t the problem.  Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler.  That picked out individual queries, but those rarely ran often enough to put a large load on the system.  If you look at the execution count by rank you’d see that the highest ranked queries also have the highest execution counts; that graph would look very similar to the one above but flatter.  These queries don’t look that bad individually but run so often that they hog the disk capacity.

    The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster.  Tuning individually slow queries will help those specific queries but won’t have much impact on the system as a whole.
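
    The same ranking is easy to reproduce against your own trace data.  A sketch in T-SQL, with an illustrative table and columns (not TraceTune's actual schema):

      SELECT TOP (10)
          TextData,
          COUNT(*)                                     AS Executions,
          SUM(Reads)                                   AS TotalReads,
          100.0 * SUM(Reads) / SUM(SUM(Reads)) OVER () AS PctOfTotalReads
      FROM dbo.TraceEvents
      GROUP BY TextData
      ORDER BY TotalReads DESC;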

  • Contract / Project / Line-Item hierarchy design considerations

    - by Ryan
    We currently have an application that allows users to create a Contract. A contract can have 1 or more Projects. A project can have 0 or more sub-projects (which can have their own sub-projects, and so on) as well as 1 or more Lines. Lines can have any number of sub-lines (which can have their own sub-lines, and so on). Currently, our design contains circular references, and I'd like to get away from that. It looks a bit like this:

      public class Contract
      {
          public List<Project> Projects { get; set; }
      }

      public class Project
      {
          public Contract OwningContract { get; set; }
          public Project ParentProject { get; set; }
          public List<Project> SubProjects { get; set; }
          public List<Line> Lines { get; set; }
      }

      public class Line
      {
          public Project OwningProject { get; set; }
          public List ParentLine { get; set; }
          public List<Line> SubLines { get; set; }
      }

    We're using the M-V-VM "pattern" and use these Models (and their associated view models) to populate a large "edit" screen where users can modify their contracts and the properties on all of the objects. Where things start to get confusing for me is when we add, for example, a Cost property to the Line. The issue is reflecting at the highest level (the contract) changes made to the lowest level.

    I'm looking for some thoughts as to how to change this design to remove the circular references. One thought I had was that the contract would have a Dictionary<Guid, Project> which would contain ALL projects (regardless of their level in the hierarchy). The Project would then have a Guid property called "Parent" which could be used to search the contract's dictionary for the parent object. The same logic could be applied at the Line level. Thanks! Any help is appreciated.
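
    A minimal sketch of that dictionary idea (the names are invented for illustration), which removes the object-to-parent references while keeping upward walks cheap:

      public class Contract
      {
          // Flat registry of every project, whatever its depth in the hierarchy.
          public Dictionary<Guid, Project> Projects { get; } = new Dictionary<Guid, Project>();
      }

      public class Project
      {
          public Guid Id { get; set; }
          public Guid? ParentId { get; set; }          // null => top-level project
          public List<Guid> SubProjectIds { get; } = new List<Guid>();
          public List<Line> Lines { get; } = new List<Line>();
      }

      public static class ContractExtensions
      {
          // Walk upward without a circular reference; returns null at the root.
          public static Project ParentOf(this Contract contract, Project project) =>
              project.ParentId.HasValue ? contract.Projects[project.ParentId.Value] : null;
      }

    Rolling a Cost change up to the contract then becomes a loop over ParentOf rather than a chase through back-references.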

  • Why do "Joke" programming languages exist? [closed]

    - by ThePlan
    First of all, please be aware this post contains some abusive language, but I hope it will not bother anyone; I apologize for the bad language, but that's what the name is.

    As I've been doing documentation on existing programming languages, attempting to make a complete list of them, I stumbled across terrible programming languages which were clearly not made for actual use and implementation, due to their insane difficulty. Languages such as Brainfu*k, LOLCODE or Whitespace are, frankly, joke languages, because they have no real use. For example, here is a "Hello world" program written in Brainfu*k, taken from Wikipedia. The following program prints "Hello World!" and a newline to the screen:

      +++++ +++++             initialize counter (cell #0) to 10
      [                       use loop to set the next four cells to 70/100/30/10
          > +++++ ++              add  7 to cell #1
          > +++++ +++++           add 10 to cell #2
          > +++                   add  3 to cell #3
          > +                     add  1 to cell #4
          <<<< -                  decrement counter (cell #0)
      ]
      > ++ .                  print 'H'
      > + .                   print 'e'
      +++++ ++ .              print 'l'
      .                       print 'l'
      +++ .                   print 'o'
      > ++ .                  print ' '
      << +++++ +++++ +++++ .  print 'W'
      > .                     print 'o'
      +++ .                   print 'r'
      ----- - .               print 'l'
      ----- --- .             print 'd'
      > + .                   print '!'
      > .                     print '\n'

    Or another example, taken from the LOLCODE language:

      HAI
      CAN HAS STDIO?
      PLZ OPEN FILE "LOLCATS.TXT"?
          AWSUM THX
              VISIBLE FILE
          O NOES
              INVISIBLE "ERROR!"
      KTHXBYE

    These languages are very difficult to learn/read/work with. My question is: why do they exist? What is their purpose? Also, is there an official "name" for these types of languages?

  • I need help installing Ubuntu 11.10 to multi-drive system

    - by CookyMonzta
    I have a machine with 3 hard drives: the primary, which is 750GB (drive 0), and 2 others, each of which is 640GB (drives 1 and 2). On the last screen before the actual installation begins, this is how my hard drive configuration looks:

      /dev/sda [DISK0, 750GB]
        /dev/sda1   ntfs   104MB      [Win7 System Reserved]
        /dev/sda2   ntfs   499,997MB  [Windows 7 Pro]
        free space         250,052MB  [This space intended for Windows 8]
      /dev/sdb [DISK1, 640GB]
        /dev/sdb1   ntfs   400,085MB  [Windows XP Pro]
        free space         240,049MB  [This space intended for Ubuntu]
      /dev/sdc [DISK2, 640GB] [This drive intended for various backups]
        free space         160,033MB
        /dev/sdc5   ntfs   480,101MB  [Acronis Secure Zone]

    As you can see, I have 3 drives, all SATA. I have Win7 on my first drive (0), WinXP on my second drive (1) and a secure zone for daily backups on my third drive (2). I want to put Ubuntu 11.10 Oneiric Ocelot on the drive that also has XP. I've already used 400GB for XP and I have 240GB remaining, for which my intention was to create a 4GB swap partition and use the rest for Ubuntu itself. This is what my second hard drive looked like, for my intended setup before installation:

      /dev/sdb
        /dev/sdb5   swap   4,095MB    [Linux swap]
        /dev/sdb6   ext4   235,951MB  [Ubuntu 11.10]

    Needless to say, this is only the second time I have attempted to install Linux; I managed to get 7.10 Gutsy Gibbon working on an old machine. I have two problems with this installation.

    Ubuntu asks for a location to install the boot loader (i.e., "Device for Boot Loader Installation"). I already have a boot loader, namely Acronis OS Selector (from Acronis Disk Director 11), so I decided to put the Ubuntu boot loader in /dev/sdb6 (where I intend to install Ubuntu), to keep it from interfering with my Acronis OS Selector. Once I hit "Install now", I ended up with the following error: "No root file system is defined. Please correct this from the partitioning menu."

    What am I missing? Did I attempt to put the boot loader in the wrong place? I assume I did, because as I am writing this entry, I am looking at LinuxIdentity.com's Ubuntu 11.04 Natty Narwhal magazine, and I see a screenshot (Figure 7 on Page 13) that implies that the boot loader can be installed anywhere, including the first hard drive (in the MBR, which would obviously force me to reinstall the Acronis OS Selector) or even on a floppy. But why do I get an undefined root file system error? I thought /dev/sdb6 was the root file system.

    Obviously I'm missing something in the installation procedure. Should I try installing it in Windows using the WUBI Installer? I assume that, if I attempt to install Ubuntu from WinXP (on the second drive), it will automatically install Ubuntu on the empty partition alongside XP. But will I have the option of creating a swap partition? And what if the WUBI Installer searches all of my drives and decides to install Ubuntu on my first drive's empty partition (which I have left empty for Win8 upon its release)?

  • How to enable and connect to RDP on a Windows Azure Web Role Instance?

    - by Enrique Lima
    We all know there have been some updates to Windows Azure, and one of the biggest, I would say, is the capability to remote into the “OS level” of the image running a role.  And I am not talking about a VM Role; I am talking about a Web Role, for example. As developers we use Visual Studio, and when we are getting ready to deploy a project, we have the option of enabling this. Here is how:

    1. We publish our project.

    2. On the Deployment dialog, provide all the details for your account, and before clicking OK, click on Configure Remote Desktop connections.

    3. Enable connections and the rest of the configuration.  Now, here is where there is an extra set of steps.  The first thing to know: the certificate used here is different from the other certs you have in place.  I created a new one, then went into certmgr.msc, then to Personal, and selected the cert I had just created.  Did a right-click, then All Tasks > Export.  Because what is needed is a pfx package, make sure when exporting you select to export the private key.

    4. Click OK on the Remote Desktop Configuration screen. Now, before you click OK on the Deployment dialog, you will need to visit the Azure Portal and perform the following: go to your hosted services; then, with the service available, select the Certificates folder location; then select Add Certificate from the toolbar (more like the Azure Portal Ribbon) and provide the details to upload the recently created pfx file.  That will create the certificate.  Click OK on the deployment dialog; this kicks off the deployment process.

    5. Now we need to go to the Windows Azure Portal.  Here we will select the Web Role deployed and Configure RDP.

    6. Time to test.  Click on the Instance (not the role); this will make the Remote Access Connect button available.  A file will start the process to be downloaded too.

    7. You will then be prompted for the credentials you configured.

    8. Validate connectivity …

    9. Open IIS Manager …

    From here on, this is a way to manage and work with your instance.
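
    Behind the Visual Studio checkbox, the tooling wires the remote-access plugin modules into the service model.  A sketch of what ends up in ServiceDefinition.csdef (the role name is illustrative; treat the exact shape as an assumption and diff your own file after enabling):

      <WebRole name="MyWebRole">
        <Imports>
          <Import moduleName="RemoteAccess" />
          <!-- exactly one role in the deployment also hosts the forwarder -->
          <Import moduleName="RemoteForwarder" />
        </Imports>
      </WebRole>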

  • How should I structure my turn based engine to allow flexibility for players/AI and observation?

    - by Reefpirate
    I've just started making a Turn Based Strategy engine in GameMaker's GML language... and I was cruising along nicely until it came time to handle the turn cycle, determine who is controlling which player, and also handle the camera and what is displayed on screen. Here's an outline of the main switch happening in my main game loop at the moment:

      switch (GameState) {
          case BEGIN_TURN:
              // Start of turn operations/routines
              break;
          case MID_TURN:
              switch (PControlledBy[Turn]) {
                  case HUMAN:
                      switch (MidTurnState) {
                          case MT_SELECT:
                              // No units selected, 'idle' UI state
                              break;
                          case MT_MOVE:
                              // Unit selected and attempting to move
                              break;
                          case MT_ATTACK:
                              break;
                      }
                      break;
                  case COMPUTER:
                      // AI ROUTINES GO HERE
                      break;
                  case OBSERVER:
                      // OBSERVER ROUTINES GO HERE
                      break;
              }
              break;
          case END_TURN:
              // End of turn routines/operations, and move Turn to next player
              break;
      }

    Now, I can see a couple of problems with this setup already... but I don't have any idea how to go about making it 'right'. Turn is a global variable that stores which player's turn it is, and the BEGIN_TURN and END_TURN states make perfect sense to me... but the MID_TURN state is baffling me because of the things I want to happen there:

    - If there are players controlled by humans, I want the AI to do its thing on its turn here, but I want to be able to have the camera follow the AI as it makes moves in the human player's vision.
    - If there are no human-controlled players, I'd like to be able to watch two or more AIs battle it out on the map with god-like 'observer' vision.

    So basically I'm wondering if there are any resources for how to structure a turn-based strategy engine. I've found lots of writing about pathfinding and AI, and those are all great... but when it comes to handling the turn structure and the game states, I am having trouble finding any resources at all. How should the states be divided to allow flexibility between the players and the controllers (HUMAN, COMPUTER, OBSERVER)? Also, maybe if I'm on the right track I just need some reassurance before I lay down another few hundred lines of code...
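
    One way to flatten the MID_TURN branching (a sketch, not established GML practice; the scr_* scripts are invented) is to give every controller type the same entry point and dispatch through a lookup, so the main loop never asks who is playing:

      // one-time setup, e.g. in a controller object's create event
      controller[HUMAN]    = scr_human_turn;
      controller[COMPUTER] = scr_ai_turn;
      controller[OBSERVER] = scr_observer_turn;

      // in the main loop
      case MID_TURN:
          script_execute(controller[PControlledBy[Turn]]);
          break;

    Camera-follow then becomes the responsibility of whichever script is active, which covers both the "watch the AI move" case and the all-AI observer case.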

  • Is my work on a developer test being taken advantage of?

    - by CodeWarrior
    I am looking for a job and have applied to a number of positions. One of them responded. I had a pretty lengthy phone interview (perhaps an hour+), and they then set me up with a developer test. I was told that this test was estimated to take between 6 and 8 hours and that, provided it met with their approval, I could be paid for my work on it. That gave me some pause, but I endeavored.

    The developer test took place on a VM accessed via RDP. The task was to implement a search page in a web project that requests data from the server, displays it on the screen in a table, and has a pretty complicated search filtering scheme (there are about 15 statuses, and when sending the search to the server you can search by these statuses) in addition to the string/field search. They want some SVG icons to change color on certain data values, they want some data to be represented differently than how it is in the database, etc.

    Loooong story short, this took one heck of a lot longer than 6-8 hours. Much of it was due to the very poor VM that I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the 3 GB ginormous solution). After completing, I was told to commit my changes to source control... Hmm, OK. I get an email back that they thought that the SVGs could have their color changed differently, they found a bug in this edge case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now, and I have to do bug fixes. I do them, and they come back with some more. This is all apparently going into a production application. I noticed some anomalies in the code that was already in there, where it looked like other people had coded all of one functionality and nothing else that I could find.

    Am I just being used for cheap labor? Even if they pay me the promised 50 dollars an hour for 6 hours, I have committed like 18 hours to this thing now. If I bug-fix all of the stuff they keep coming up with, I will have worked at least 16 hours for free.

    I have taken a number of developer tests. I have never taken one where I worked on code that was destined for production. I have never taken one where I implemented a feature that was in the pipeline for development (it was planned for, and I implemented it through the course of the test). And I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression that they are using their developer test to field some of the functionality that they don't have time for in their normal team, on the cheap. Also, I wouldn't mind a 'devtest' tag.

  • problems programmatically creating UIView on iPad App

    - by user3871
    I have been struggling with this problem for a few days. My iPad app is designed to be a portrait game. To satisfy Apple's expectations, I also support landscape mode. When it goes into landscape mode, the game goes into a letterbox format with black borders on the sides.

    My problem is that I am creating the UIWindow and UIView programmatically. For some unknown reason, the touch controls are "locked" in to think I'm always in landscape mode, and even though visually everything looks correct in portrait mode, the top and bottom of the screen do not respond to touch. To summarize how I am setting this up, let me provide the skeletal framework of what I'm doing.

    In main.cpp:

      int retVal = UIApplicationMain(argc, argv, nil, @"derbyPoker_ipadAppDelegate");

    In the delegate, I am doing this:

      - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
          CGRect screenBounds = [[UIScreen mainScreen] bounds];
          CGFloat scale = [[UIScreen mainScreen] scale];
          m_device_width = screenBounds.size.width;
          m_device_height = screenBounds.size.height;
          m_device_scale = scale;
          // Everything is built assuming 640x960
          window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
          viewController = [glView new];
          [self doStateChange:[blitz class]];
          return YES;
      }

    The last bit of code sets up the UIView:

      - (void)doStateChange:(Class)state {
          viewController.view = [[state alloc] initWithFrame:CGRectMake(0, 0, m_device_width, m_device_height) andManager:self];
          viewController.view.contentMode = UIViewContentModeScaleAspectFit;
          viewController.view.autoresizesSubviews = YES;
          [window addSubview:viewController.view];
          [window makeKeyAndVisible];
      }

    The problem seems to be related to the line

      viewController.view.contentMode = UIViewContentModeScaleAspectFit;

    If I remove that line, touch works correctly in portrait mode. But the downside is that when I'm in landscape mode, the game stretches incorrectly, so that's not an option. The frustrating thing is, when I originally had this set up with a NIB file, it worked fine. I have read through the docs about UIWindow, UIViewController and UIView and have tried about everything, to no avail. Any help would be greatly appreciated.
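
    One thing worth checking, offered as an assumption rather than a confirmed diagnosis: when the window is built in code, UIKit expects a root view controller to be assigned, and adding the view as a bare subview (which the NIB path had effectively set up for you) can leave rotation and touch geometry out of sync in exactly this way. A minimal sketch against the names in the code above:

      // In doStateChange:, instead of [window addSubview:viewController.view];
      window.rootViewController = viewController;  // lets UIKit manage rotation and hit-testing
      [window makeKeyAndVisible];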

  • Ray Picking Problems

    - by A Name I Haven't Decided On
    I've read so many answers on here about how to do ray picking that I thought I had the idea of it down. But when I try to implement it in my game, I get garbage. I'm working with LWJGL. Here's the code:

      public static Ray getPick(int mouseX, int mouseY) {
          glPushMatrix();

          // Setting up the mouse clip
          Vector4f mouseClip = new Vector4f((float) mouseX * 2 / 960f - 1, 1 - (float) mouseY * 2 / 640f, 0, 1);

          // Loading matrices
          FloatBuffer modMatrix = BufferUtils.createFloatBuffer(16);
          FloatBuffer projMatrix = BufferUtils.createFloatBuffer(16);
          glGetFloat(GL_MODELVIEW_MATRIX, modMatrix);
          glGetFloat(GL_PROJECTION_MATRIX, projMatrix);

          // Assigning matrices
          Matrix4f proj = new Matrix4f();
          Matrix4f model = new Matrix4f();
          model.load(modMatrix);
          proj.load(projMatrix);

          // Multiplying the projection matrix by the model-view matrix
          Matrix4f tempView = new Matrix4f();
          Matrix4f.mul(proj, model, tempView);
          tempView.invert();

          // Getting the camera position in world space: the 4th column of the model-view matrix
          model.invert();
          Point cameraPos = new Point(model.m30, model.m31, model.m32);

          // Theoretically getting the vector the picking ray goes
          Vector4f rayVector = new Vector4f();
          Matrix4f.transform(tempView, mouseClip, rayVector);
          rayVector.translate((float) -cameraPos.getX(), (float) -cameraPos.getY(), (float) -cameraPos.getZ(), 0f);
          rayVector.normalise();

          glPopMatrix();

          // This basically spits out a value that changes as the camera moves.
          // When the mouse moves, the values change around 0.001 points from screen edge to edge.
          System.out.format("Vector: %f %f %f%n", rayVector.x, rayVector.y, rayVector.z);

          // return new Ray(cameraPos, rayVector);
          return null;
      }

    I don't really know why this isn't working. I was hoping some more experienced eyes might be able to help me out. I can get the camera position like a champ; it's the vector the ray's going in that I can't seem to get right. Thanks.
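
    For comparison, a common way to build the ray (a sketch using the question's LWJGL classes; invViewProj is assumed to be the already-inverted projection * model-view, and ndcX/ndcY the clip coordinates computed above) is to unproject the mouse at the near and far planes and subtract, which sidesteps the translate step entirely:

      Vector4f near = new Vector4f(ndcX, ndcY, -1f, 1f);
      Vector4f far  = new Vector4f(ndcX, ndcY,  1f, 1f);
      Matrix4f.transform(invViewProj, near, near);
      Matrix4f.transform(invViewProj, far, far);
      near.scale(1f / near.w);  // perspective divide
      far.scale(1f / far.w);
      Vector3f dir = new Vector3f(far.x - near.x, far.y - near.y, far.z - near.z);
      dir.normalise();          // ray origin is (near.x, near.y, near.z)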

  • 2D Collision in Canvas - Balls Overlapping When Velocity is High

    - by kushsolitary
    I am doing a simple experiment in canvas using JavaScript in which some balls are thrown on the screen with some initial velocity and then bounce on colliding with each other or with the walls. I managed to do the collision with walls perfectly, but now the problem is with the collision between balls. I am using the following code for it:

      //Check collision between two bodies
      function collides(b1, b2) {
          //Find the distance between their mid-points
          var dx = b1.x - b2.x,
              dy = b1.y - b2.y,
              dist = Math.round(Math.sqrt(dx*dx + dy*dy));

          //Check if it is a collision
          if(dist <= (b1.r + b2.r)) {
              //Calculate the angles
              var angle = Math.atan2(dy, dx),
                  sin = Math.sin(angle),
                  cos = Math.cos(angle);

              //Calculate the old velocity components
              var v1x = b1.vx * cos,
                  v2x = b2.vx * cos,
                  v1y = b1.vy * sin,
                  v2y = b2.vy * sin;

              //Calculate the new velocity components
              var vel1x = ((b1.m - b2.m) / (b1.m + b2.m)) * v1x + (2 * b2.m / (b1.m + b2.m)) * v2x,
                  vel2x = (2 * b1.m / (b1.m + b2.m)) * v1x + ((b2.m - b1.m) / (b2.m + b1.m)) * v2x,
                  vel1y = v1y,
                  vel2y = v2y;

              //Set the new velocities
              b1.vx = vel1x;
              b2.vx = vel2x;
              b1.vy = vel1y;
              b2.vy = vel2y;
          }
      }

    You can see the experiment here. The problem is, some balls overlap each other and stick together while some of them rebound perfectly. I don't know what is causing this issue. Here's my ball object, if that matters:

      function Ball() {
          //Random positions
          this.x = 50 + Math.random() * W;
          this.y = 50 + Math.random() * H;

          //Random radii
          this.r = 15 + Math.random() * 30;
          this.m = this.r;

          //Random velocity components
          this.vx = 1 + Math.random() * 4;
          this.vy = 1 + Math.random() * 4;

          //Random shade of grey color
          this.c = Math.round(Math.random() * 200);

          this.draw = function() {
              ctx.beginPath();
              ctx.fillStyle = "rgb(" + this.c + ", " + this.c + ", " + this.c + ")";
              ctx.arc(this.x, this.y, this.r, 0, Math.PI*2, false);
              ctx.fill();
              ctx.closePath();
          }
      }
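
    A common cause of exactly this symptom (offered as a sketch, not a verified diagnosis of this code): at high velocities the balls interpenetrate before the test runs, the velocities flip, and on the next frame the pair still overlaps, so the velocities flip straight back and the balls stick. Separating the bodies whenever a hit is detected usually cures it; names follow the question's code, and this would be called inside the if in collides:

      // Push the balls apart along the collision normal so they no longer overlap.
      // Assumes dist > 0 (concentric centers need a fallback normal).
      function separate(b1, b2, dx, dy, dist) {
          var overlap = (b1.r + b2.r) - dist,
              nx = dx / dist,   // unit normal from b2 toward b1
              ny = dy / dist;
          b1.x += nx * overlap / 2;
          b1.y += ny * overlap / 2;
          b2.x -= nx * overlap / 2;
          b2.y -= ny * overlap / 2;
      }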

  • Turn-Based RPG Battle Instance Layout For Larger Groups

    - by SoulBeaver
    What a title, eh? I'm currently designing a videogame: a turn-based RPG like Final Fantasy (because everybody knows Final Fantasy). It's a 2D sprite game. These are my ideas for combat:

    - The player has a group of 15 members (main character included).
    - During battle, five of the group are designated as active, and appear in the battle.
    - These five may be switched out at leisure, or when one of the five dies.
    - At any time, the waiting members can cast buffs, be healed by the active members, or perform special attacks.
    - Battles should contain 10+ monsters at least. I'm aiming for 20, but I'm not sure if that's possible yet.
    - Battles should feel larger than normal due to the interaction of waiting members, active members and the increased number of monsters per battle.
    - The player has two rows in which to put the active members: front and back.
    - Depending on the implementation, I might allow comboing of player attacks and skills.

    These are just design ideas, so beware! I have not been able to test this out yet, and I have no idea whether any of these ideas bunched together will make for a compelling game. What sounds good on paper doesn't necessarily have to be good in practice!

    What I'm asking now is how to create the layout for this. My starting point is the battles in Final Fantasy VI, with up to 5-6 monsters on the left and the characters on the right (monsters on both sides if it's a pincer attack). However, this view would not be feasible with my goal of 20 monsters and 5 characters. All the monsters on the left would appear cluttered unless I scale them far, far back. If I create a pincer-like map, then there would be no real pincer attack possible. If I space the monsters out, I force the player to scroll the screen, a game mechanic I've come across and not enjoyed, imho.

    My question is: does anybody have any layouts or guides for designing battle maps in turn-based RPGs, especially with a larger number of enemies taken into consideration? How should it look? I am not asking for specific combat mechanics, just the layout for the moment.

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .NET Framework.

    My goal is to develop a 2D, RTS-style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in Second Life. If their attempt was a game, the gameplay would be notably horrible, to say the least!

    The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning it to objects that relate 1-1 with their real-world counterparts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself.

    I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games; to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like Theme Hospital or the first instalment of The Sims. All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system.

    Given my background and the nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years and who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

  • 5 Step Procedure for Android Deployment with NetBeans IDE

    - by Geertjan
    I'm finding that it's so simple to deploy apps to Android that I don't need to use the Android emulator at all; I haven't been able to figure out how it works anyway (a big blinky screen pops up that I don't know what to do with). I just deploy the app straight to Android, try it out there, and then uninstall it if needed. The whole process takes a few seconds: only steps 4 and 5 below need to be done for each deployment iteration, after you've done steps 1, 2, and 3 once to set up the deployment environment. Here's what I do:

    1. On Android, go to Settings | Applications. Check "Unknown sources". In "Development", check "USB debugging".

    2. Connect Android to your computer via a USB cable.

    3. Start up NetBeans IDE, with NBAndroid installed, as described yesterday, and create your "Hello World" app.

    4. Right-click the project in the IDE and choose "Export Signed Android Package". Create a new keystore, or choose an existing one, via the wizard that appears. At the end of the wizard (it would be nice if NBAndroid would let you set up a keystore once and then reuse it for all your projects, without needing to work through the whole wizard step by step each time), you'll have a new release APK file (Android deployment archive) in the project's 'bin' folder, which you can see in the Files window.

    5. Go to the command line (it would be nice if NBAndroid were to support adb; it would mean I wouldn't need the command line at all), browse to the location of the APK file above, and type "adb install helloworld-release.apk", or whatever the APK file is called. You should see a "Success" message in the command line.

    Now the application is installed. On your Android, go to "Applications", and there you'll see your brand new app. Try it out there and delete it if you're not happy with it. After you've made a change in your app, simply repeat steps 4 and 5, i.e., create a new APK and install it via adb. Steps 4 and 5 take a couple of seconds. And, given that it's all so simple, I don't see the value of the Android emulator at all.
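
    The adb loop in steps 4 and 5 is worth knowing on its own. A short sketch (the package name is illustrative):

      adb devices                             # confirm the phone is visible over USB
      adb install helloworld-release.apk      # first install
      adb install -r helloworld-release.apk   # -r reinstalls in place on later iterations
      adb uninstall com.example.helloworld    # remove it when you're done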

  • Is OOP hard because it is not natural?

    - by zvrba
    One can often hear that OOP naturally corresponds to the way people think about the world. But I would strongly disagree with this statement: we (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies.

    Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. Examples of such relationships are: "my screen is on top of the table"; "I (a human being) am sitting on a chair"; "a car is on the road"; "I am typing on the keyboard"; "the coffee machine boils water"; "the text is shown in the terminal window."

    We think in terms of bivalent (sometimes trivalent, as, for example, in "I gave you flowers") verbs, where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on the action, and the two (or three) [grammatical] objects have equal importance. Contrast that with OOP, where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns; it is as if everything is being said in passive or reflexive voice, e.g., "the text is being shown by the terminal window". Or maybe "the text draws itself on the terminal window".

    Not only is the focus shifted to nouns, but one of the nouns (let's call it the grammatical subject) is given higher "importance" than the other (the grammatical object). Thus one must decide whether one will say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? [The consequences are operationally insignificant; in both cases the text is shown on the terminal window. But they can be very serious in the design of class hierarchies, and a "wrong" choice can lead to convoluted and hard-to-maintain code.]

    I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, this is not a widespread approach.

    Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it?
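
    For readers unfamiliar with CLOS, the generic-function style the poster prefers looks roughly like this (a sketch in Common Lisp; the classes and the draw-string helper are invented for illustration):

      ;; The verb lives outside any class, and both arguments participate in dispatch.
      (defgeneric show (surface thing))

      (defmethod show ((w terminal-window) (s string))
        (draw-string (window-buffer w) s))

      ;; Call site: neither argument "owns" the action.
      (show my-window "hello")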

  • (SOLVED) Problems Rendering Text in OpenGL Using FreeType

    - by Sean M.
    I've been following both the FreeType2 tutorial and the WikiBooks tutorial, trying to combine things from both in order to load and render fonts using the FreeType library. I used the font-loading code from the FreeType2 tutorial and tried to implement the rendering code from the WikiBooks tutorial ("tried" being the keyword, as I'm still trying to learn modern OpenGL; I'm using 3.2). Everything loads correctly and I have the shader program for rendering the text working, but I can't get the text to render. I'm 99% sure that it has something to do with how I am passing data to the shader, or how I set up the screen. These are the code segments that handle OpenGL initialization, as well as font initialization and rendering:

      //Init glfw
      if (!glfwInit()) {
          fprintf(stderr, "GLFW Initialization has failed!\n");
          exit(EXIT_FAILURE);
      }
      printf("GLFW Initialized.\n");

      //Process the command line arguments
      processCmdArgs(argc, argv);

      //Create the window
      glfwWindowHint(GLFW_SAMPLES, g_aaSamples);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
      g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard",
                                      g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr);
      if (!g_mainWindow) {
          fprintf(stderr, "Could not create GLFW window!\n");
          closeOGL();
          exit(EXIT_FAILURE);
      }
      glfwMakeContextCurrent(g_mainWindow);
      printf("Window and OpenGL rendering context created.\n");

      glClearColor(0.2f, 0.2f, 0.2f, 1.0f);

      //Are these necessary for Modern OpenGL (3.0+)?
      glViewport(0, 0, g_screenWidth, g_screenHeight);
      glOrtho(0, g_screenWidth, g_screenHeight, 0, -1, 1);

      //Init glew
      int err = glewInit();
      if (err != GLEW_OK) {
          fprintf(stderr, "GLEW initialization failed!\n");
          fprintf(stderr, "%s\n", glewGetErrorString(err));
          closeOGL();
          exit(EXIT_FAILURE);
      }
      printf("GLEW initialized.\n");

    Here is the font file (it's slightly too big to post): CFont.h/CFont.cpp. Here is the solution zipped up: solution (https://dl.dropboxusercontent.com/u/36062916/VoxelShipyard.zip), if anyone feels they need the entire solution. If anyone could take a look at the code, it would be greatly appreciated. Also, if someone has a tutorial that is a little more user friendly, that would also be appreciated. Thanks.
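
    On the in-code question above: in a 3.2 core profile the fixed-function matrix stack is gone, so glOrtho does nothing useful there (glViewport is still needed); the orthographic projection has to reach the text shader as a uniform instead. A sketch using GLM (the uniform and program names are invented):

      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>
      #include <glm/gtc/type_ptr.hpp>

      // Same mapping the glOrtho call intended: top-left origin, pixel units.
      glm::mat4 proj = glm::ortho(0.0f, (float)g_screenWidth,
                                  (float)g_screenHeight, 0.0f);
      glUseProgram(textProgram);
      glUniformMatrix4fv(glGetUniformLocation(textProgram, "u_projection"),
                         1, GL_FALSE, glm::value_ptr(proj));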

  • How to display image in second layer in Cocos2d

    - by PeterK
    I am very new at Cocos2d and am testing displaying an image over the "Hello World" text on a second layer, and I need help to get it to work. I guess it is some basic stuff, and I appreciate any tips etc. I know that it works if I put the display code (myLayer1) in the init, or if I call [self goHere] from the init in myLayer1, but I want to call goHere directly. I have the following code:

    HelloWorld.m:

      #import "HelloWorldLayer.h"
      #import "myLayer1.h"

      // HelloWorldLayer implementation
      @implementation HelloWorldLayer

      +(CCScene *) scene
      {
          // 'scene' is an autorelease object.
          CCScene *scene = [CCScene node];

          // 'layer' is an autorelease object.
          HelloWorldLayer *layer = [HelloWorldLayer node];
          myLayer1 *layer1 = [myLayer1 node];

          // add layer as a child to scene
          [scene addChild: layer];
          [scene addChild: layer1];

          // return the scene
          return scene;
      }

      // on "init" you need to initialize your instance
      -(id) init
      {
          // always call "super" init
          // Apple recommends to re-assign "self" with the "super" return value
          if( (self=[super init])) {
              // create and initialize a Label
              CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64];

              // ask director for the window size
              CGSize size = [[CCDirector sharedDirector] winSize];

              // position the label on the center of the screen
              label.position = ccp( size.width /2 , size.height/2 );

              // add the label as a child to this Layer
              [self addChild: label];

              myLayer1 *a1 = [myLayer1 new];
              [a1 goHere];
              [myLayer1 release];
          }
          return self;
      }

    myLayer1.m:

      #import "myLayer1.h"

      @implementation myLayer1

      -(void)goHere
      {
          NSLog(@">>>>goHere<<<<");
          CGSize size = [[CCDirector sharedDirector] winSize];
          CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"];
          vv.position = ccp( size.width /2 , size.height/2 );
          [self addChild:vv z:3];
      }

      -(id) init
      {
          // always call "super" init
          // Apple recommends to re-assign "self" with the "super" return value
          if( (self=[super init])) {
          }
          return self;
      }

      @end
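
    A guess at what is going wrong, offered as an assumption: the scene method already creates a myLayer1 (layer1) and adds it to the scene, but init then builds a second instance with [myLayer1 new] that is never added to any parent, so the sprite created in goHere lands on an off-screen layer ([myLayer1 release] also sends release to the class itself rather than to that instance). Calling goHere on the instance that actually lives in the scene avoids both problems:

      // in +scene, after both layers exist:
      myLayer1 *layer1 = [myLayer1 node];
      [scene addChild: layer1];
      [layer1 goHere];   // the sprite now lands on a layer that is on screen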

  • Nvidia driver overscan issue second monitor via dvi-d cable

    - by benmichael
    Ok, I know that I have a bit of a bizarre setup, but here goes. I have an old laptop, an HP Pavilion 6000. The graphics card in there is a GeForce 7150M. The monitor connection is an old 18-pin. The external monitor I use is a Samsung SyncMaster 2333. Don't ask me why, but this monitor only has a DVI-D connection (yes, I have searched it). So I have the monitor plugged into the laptop.

    If I use any of the Nvidia proprietary drivers and try to set the resolution up to 1920x1080 (the monitor's native resolution), I get a massive overscan issue. Over the years I have tried to get this to work, tinkering with my xorg.conf to death. I have also tried this on every Ubuntu since 10.04, on all the corresponding Lubuntus, and on all the Linux Mints since Lisa. Exact same issue. I have even tried it in WinDoze, and it works perfectly there (although I did get the error once, but was unable to reproduce it).

    Using the open-source drivers it works perfectly, if and only if I switch off the laptop screen (this makes no difference with the Nvidia drivers). I would have happily gone on using the open-source drivers, except that since upgrading to Lubuntu 12.10, the open-source drivers make my monitor completely hazy and show the same overscan issue until I (through the haze, only because I know where things are) go to the monitor settings, activate the laptop's screen, then deactivate it, and suddenly it comes right. I have to do this every time.

    So I have to find a way to fix one of them, and I may as well tackle the proprietary drivers, hence this overlong question. Amidst other things, I have tried nvidia-settings, but because the monitor is connected via the old 18-pin connector, it detects the monitor as a VGA monitor and does not give me overscan-correction options. I have tried custom modelines (although there are always more of those to try), I have tried using xrandr, and I have tried all the FlatPanelOptions. What I have not tried is a Gentoo build, as I no longer have the time for that installation, but up to about three years ago, when I ran Gentoo exclusively, I did not have this issue.

    Below is a link to an image with a red highlight around the portion of the screen visible to me; the numbers around it are the numbers of pixels which are cut off. This does seem to drift a few pixels every now and then. Thanks in advance.

    Nvidia driver issue image

  • Adding JavaScript to your code dependent upon conditions

    - by DavidMadden
    You might be in an environment where your code is source-controlled and where you have build options for different environments.  I recently encountered this where the same code, built with different configurations, would have the website at a different URL.  If you are working with ASP.NET, as I am, you will have to do something a bit crazy but worthwhile.  If someone has a more efficient solution, please share.

    Here is what I came up with to make sure the client-side script for Google Analytics was placed into the HEAD element.  GA wants to be last in the HEAD element, so if you are adding others in Page_Load you should do theirs last.  The settings object below is an instance of a class that holds information I collect.  You could read from different sources depending on where you store your unique ID for Google Analytics.

    *** This has been formatted to fit this screen. ***

      if (!IsPostBack)
      {
          if (!string.IsNullOrEmpty(settings.GoogleAnalyticsID))
          {
              string str = @"//<!CDATA[
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', '" + settings.GoogleAnalyticsID + @"']);
      _gaq.push(['_trackPageview']);
      (function () {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
      })();";

              System.Web.UI.HtmlControls.HtmlGenericControl si =
                  new System.Web.UI.HtmlControls.HtmlGenericControl();
              si.TagName = "script";
              si.Attributes.Add("type", @"text/javascript");
              si.InnerHtml = str;
              this.Page.Header.Controls.Add(si);
          }
      }

    The code above will prevent the script from being emitted if the request is a PostBack, or if the ID could not be read because something caused the settings to be lost by accident.  If you have a larger function to declare, you can use a StringBuilder to separate the lines.

  • After startup, display reverts to one desktop duplicated on both screens

    - by Skilldrick
    I have a Samsung R530 laptop. I'm actually running Linux Mint, which is based on Ubuntu (which is why I thought it would be OK to post this here); I'm now getting exactly the same problem on Ubuntu 10.10. I've got it set up with a secondary LCD monitor, and I've changed the display settings so the laptop screen and monitor are at optimal resolution, with the desktop shared between them. After startup, though (typically around 10-30 seconds in), the display reverts to 1024 x 768 on both screens, and the display is duplicated (i.e. both screens show the same thing). I think this is something to do with my video card, as I had the same problem when I was running Arch Linux. Any ideas?

    Edit: lspci output:

      00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 09)
      00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
      00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
      00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
      00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
      00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
      00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
      00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
      00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
      00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03)
      00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
      00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
      00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
      00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
      00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03)
      00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03)
      00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
      02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
      04:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller

    It seems it's Intel, not Nvidia. I have a current workaround, which is to put a launcher on my desktop pointing to a script containing

      xrandr --output LVDS1 --mode 1366x768 --output HDMI1 --mode 1280x1024 --left-of LVDS1

    and to run it after the displays revert, but it's not ideal.

  • Trying to use stencils in 2D while retaining layer depth

    - by Steve
    This is a screen of what's going on, just so you can get a frame of reference: http://i935.photobucket.com/albums/ad199/fobwashed/tilefloors.png

    The problem I'm running into is that my game is slowing down due to the amount of texture swapping I'm doing during my draw call. Since walls, characters, floors, and all other objects live on their respective sprite sheets containing those types, drawing tile by tile swaps the loaded texture no less than 3 to 5+ times per tile as it cycles and draws the sprites in order to layer properly.

    Now, I've tried throwing all common objects together into their respective lists and then drawing them using layerDepth, which makes things a lot better, but the new problem I'm running into has to do with the way my doors/windows are drawn on walls. Namely, I was using stencils to clear out a block on the walls in the shape of the door/window, so that when the wall drew, it would have a door/window-sized hole in it. This was the way my draw was set up for walls when I was going tile by tile rather than by grouped-up common objects:

    1. First check whether a door/window is on this wall. If not, skip all the following steps and just draw normally. Otherwise:
    2. End the current spriteBatch.
    3. Clear the buffers with a transparent color to preserve what was already drawn.
    4. Start a new spriteBatch with stencil settings.
    5. Draw the door area.
    6. End the spriteBatch.
    7. Start a new spriteBatch that takes into account the previously set stencil.
    8. Draw the wall, which will now be drawn with a hole in it.
    9. End that spriteBatch.
    10. Start a new spriteBatch with the normal settings to continue drawing tiles.

    In the tile-by-tile draw, clearing the depth/stencil buffers didn't matter, since I wasn't using any layerDepth to organize what draws on top of what. Now that I'm drawing from lists of common objects rather than tile by tile, it has sped up my draw call considerably, but I can't seem to figure out a way to keep the stencil system that masks out the area where a door or window will be drawn into a wall.

    The root of the problem is that when I end a spriteBatch to change the DepthStencilState, it flattens the current RenderTarget and there is no longer any depth sorting for anything drawn further down the line. This means walls always get drawn on top of everything, regardless of depth or positioning in the game world, and even on top of each other, as the stencil has to happen once for every wall that has a door or window.

    Does anyone know of a way to get around this? To boil it down, I need a way to draw with things sorted by layer depth while also being able to stencil/mask out portions of specific sprites.
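
    One direction worth sketching (an assumption, not a tested solution): keep SpriteSortMode.FrontToBack throughout and express the masking purely as stencil state, so no batch break ever clears the depth buffer. In XNA 4 terms that might look like:

      // Pass 1: write 1s into the stencil wherever door/window masks are drawn.
      static readonly DepthStencilState WriteMask = new DepthStencilState
      {
          StencilEnable = true,
          StencilFunction = CompareFunction.Always,
          StencilPass = StencilOperation.Replace,
          ReferenceStencil = 1,
          DepthBufferEnable = false,      // masks should not disturb depth
      };

      // Pass 2: draw walls only where the stencil is still 0, with depth sorting intact.
      static readonly DepthStencilState DrawThroughMask = new DepthStencilState
      {
          StencilEnable = true,
          StencilFunction = CompareFunction.NotEqual,
          ReferenceStencil = 1,
      };

      // e.g. spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend,
      //                        null, DrawThroughMask, null);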

  • Platformer Collision Error [closed]

    - by Connor
    I am currently working on a relatively simple platform game that has an odd bug. You start the game by falling onto the ground (you spawn a few blocks above the ground), but when you land, your feet get stuck INSIDE the world and you can't move until you jump. Here's what I mean: the player's feet end up a few pixels below the ground level. However, this problem only occurs in 3 places throughout the map, and only in those 3 select places. I'm assuming that the problem lies within my collision detection code, but I'm not entirely sure, as I don't get an error when it happens.

      public boolean isCollidingWithBlock(Point pt1, Point pt2) {
          //Checks x
          for(int x = (int) (this.x / Tile.tileSize); x < (int) (this.x / Tile.tileSize + 4); x++) {
              //Checks y
              for(int y = (int) (this.y / Tile.tileSize); y < (int) (this.y / Tile.tileSize + 4); y++) {
                  if(x >= 0 && y >= 0 && x < Component.dungeon.block.length && y < Component.dungeon.block[0].length) {
                      //If the block is not air
                      if(Component.dungeon.block[x][y].id != Tile.air) {
                          //If the player is in contact with point one or two on the block
                          if(Component.dungeon.block[x][y].contains(pt1) || Component.dungeon.block[x][y].contains(pt2)) {
                              //Checks for specific blocks
                              if(Component.dungeon.block[x][y].id == Tile.portalBlock) {
                                  Component.isLevelDone = true;
                              }
                              if(Component.dungeon.block[x][y].id == Tile.spike) {
                                  Health.health -= 1;
                                  Component.isJumping = true;
                                  if(Health.health == 0) {
                                      Component.isDead = true;
                                  }
                              }
                              return true;
                          }
                      }
                  }
              }
          }
          return false;
      }

    What I'm asking is how I would fix the problem. I've looked over my code for quite a while and I'm not sure what's wrong with it. Also, if there's a more efficient way to do my collision checking, then please let me know! I hope that is enough information; if it's not, just tell me what you need and I'll be sure to add it. Thank you!

    [EDIT] Jump code:

      if(!isJumping && !isCollidingWithBlock(new Point((int) x + 2, (int) (y + height)),
                                             new Point((int) (x + width + 2), (int) (y + height)))) {
          y += fallSpeed;
          //sY is the screen's Y. The game is a side-scroller
          Component.sY += fallSpeed;
      } else {
          if(Component.isJumping) {
              isJumping = true;
          }
      }

      if(isJumping) {
          if(!isCollidingWithBlock(new Point((int) x + 2, (int) y),
                                   new Point((int) (x + width + 2), (int) y))) {
              if(jumpCount >= jumpHeight) {
                  isJumping = false;
                  jumpCount = 0;
              } else {
                  y -= jumpSpeed;
                  Component.sY -= jumpSpeed;
                  jumpCount += 1;
              }
          } else {
              isJumping = false;
              jumpCount = 0;
          }
      }
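
    A common remedy for feet sinking into tiles, offered as a sketch rather than a verified fix for this particular bug: when the downward test hits a block, snap the player's feet to the top of that tile instead of merely stopping, so a fall of more than one pixel per frame can never leave the player embedded. Names follow the question's code:

      // after the fall test reports a collision below:
      int tileRow = (int) ((y + height) / Tile.tileSize);  // row the feet ended up in
      y = tileRow * Tile.tileSize - height;                // align feet with the tile's top edge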

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? This needs refactoring... so does this. Refactor, refactorrr."

    This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it captures exactly the attitude many game devs seem to bring to writing production game code:

      Mistake #4: Building a system, not a game
      ... if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and be able to handle every situation. We find that an itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end.
      Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing.
      ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument "if we take some extra time and do this the right way, we can reuse it in the game". EVER.

    Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily in the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother?

    I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation; there would be more important performance issues to tackle before you worry about MVC overheads (e.g. render pipeline, AI algorithms, data structure traversal, etc.).

    Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business logic) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. operating within physics simulations).

    Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?
