Search Results

Search found 17148 results on 686 pages for 'screen coordinates'.


  • Why does creating dynamic bodies in JBox2D freeze my app?

    - by Amplify91
    My game hangs/freezes when I create dynamic bullet objects with Box2D, and I don't know why. I am making a game where the main character shoots bullets when the user taps the screen. Each touch event spawns a new FireProjectileEvent that is handled properly by an event queue, so I know my problem is not an attempt to create a new body while the Box2D world is locked. My bullets are created and managed by an object pool class like this:

        public Projectile getProjectile() {
            for (int i = 0; i < mProjectiles.size(); i++) {
                if (!mProjectiles.get(i).isActive) {
                    return mProjectiles.get(i);
                }
            }
            return mSpriteFactory.createProjectile();
        }

    mSpriteFactory.createProjectile() leads to the physics component of the Projectile class creating its Box2D body. I have narrowed the issue down to that method, which looks like this:

        public void create(World world, float x, float y, Vec2 vertices[], boolean dynamic) {
            BodyDef bodyDef = new BodyDef();
            if (dynamic) {
                bodyDef.type = BodyType.DYNAMIC;
            } else {
                bodyDef.type = BodyType.STATIC;
            }
            bodyDef.position.set(x, y);
            mBody = world.createBody(bodyDef);
            PolygonShape dynamicBox = new PolygonShape();
            dynamicBox.set(vertices, vertices.length);
            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = dynamicBox;
            fixtureDef.density = 1.0f;
            fixtureDef.friction = 0.0f;
            mBody.createFixture(fixtureDef);
            mBody.setFixedRotation(true);
        }

    If the dynamic parameter is set to true, my game freezes before crashing; if it is false, it creates a projectile exactly how I want, it just doesn't function properly (because a projectile is not a static object). Why does my program fail when I try to create a dynamic object at runtime but not when I create a static one? I have other dynamic objects (like my main character) that work fine. Any help would be greatly appreciated. This is a screenshot of a method profile I did [screenshot not included in the result]; especially notable is number 8. I'm still unsure what I'm doing wrong. Other notes: I am using JBox2D 2.1.2.2 (upgraded from 2.1.2.1 to try to fix this problem). When the application freezes, if I hit the back button, it appears to move my game backwards by one update tick. Very strange.
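    One pattern worth ruling out: even with an event queue, JBox2D will hang or assert if createBody() runs while world.step() is executing (e.g., from another thread or from inside a contact callback). A minimal sketch of deferring creation until after the step - World, BodyDef and isLocked() are real JBox2D API, while the class and field names here are hypothetical:

        import org.jbox2d.dynamics.Body;
        import org.jbox2d.dynamics.BodyDef;
        import org.jbox2d.dynamics.World;

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Queue body definitions from touch/event handlers, then create them
        // once per frame, after world.step(...) has returned and the world is
        // guaranteed to be unlocked.
        public class DeferredBodyFactory {
            private final World world;
            private final Queue<BodyDef> pending = new ArrayDeque<BodyDef>();

            public DeferredBodyFactory(World world) {
                this.world = world;
            }

            // Safe to call at any time, even mid-step.
            public void queueBody(BodyDef def) {
                pending.add(def);
            }

            // Call exactly once per frame, after world.step(...).
            public void createQueuedBodies() {
                while (!pending.isEmpty() && !world.isLocked()) {
                    Body body = world.createBody(pending.poll());
                    // attach fixtures here, as in create(...) above
                }
            }
        }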

    Read the article

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client and server. The situation:

    - Multiple clients are connected to a server, which is doing hit detection/physics.
    - Clients have different latency to the server, ranging from 50ms to 500ms.
    - They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds).
    - It is important that they get to shoot these weapons close to the cool-down time; if some clients manage to shoot sooner than others (either because they are "early" or the others are "late"), they gain a significant advantage.
    - I need to show the time remaining for reload on the player's screen.
    - Clients can have clocks which are flat-out wrong (bad timezones, etc.).

    What I'm currently doing to deal with latency:

    - The client collects server-side state in a history, tagged with server timestamps.
    - The client assesses its time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2
    - The client renders all state received from the server 200ms behind its current time, adjusted by what it believes its time difference with server time is (whether due to wrong clocks or lag). If it has server states on both sides of that calculated time, it (mostly LERP) interpolates between them; if not, it (LERP) extrapolates.
    - No other client-side prediction of movement (e.g., to make a vehicle seem more responsive) is done so far, but it may be added later.

    So how do I properly add weapon reload timers? My first idea: with each world-state update, the server sends each player the time when his reload will be done; the client adjusts it for the clock difference and can thus estimate when the reload will finish in client time (perhaps also accounting for the latency of the shoot message from client to server?). If the user mashes the "shoot" button after (or perhaps even slightly before?) that time, the client sends the shoot event. The server gets the shoot event and treats the time of the shot as the server time when it was received. It then discards the shot if it is nowhere near reload time, executes it immediately if it is past reload time, and holds it for a few physics cycles until reload is done if it was received a bit early. A sketch of that server-side check follows below.

    It does all seem a bit convoluted, and I'm wondering whether it will work (e.g., whether players with lower ping will get better reload rates) and whether there are more elegant solutions to this problem.
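    For what it's worth, the server-side half of that idea can stay very small. A minimal sketch, with hypothetical names, of a cool-down check driven purely by the server's own clock (so client clock skew never affects fairness), with a slack window for shots that arrive slightly early:

        public final class WeaponCooldown {
            private static final long COOLDOWN_NS = 10_000_000_000L; // exactly 10 seconds
            private long readyAtNs = 0L;

            public enum Verdict { FIRE_NOW, HOLD_UNTIL_READY, REJECT }

            // Called when a shoot event arrives; serverNowNs comes from the
            // server's clock, never the client's.
            public Verdict onShootRequest(long serverNowNs, long slackNs) {
                if (serverNowNs >= readyAtNs) {
                    readyAtNs = serverNowNs + COOLDOWN_NS;   // fire and restart the timer
                    return Verdict.FIRE_NOW;
                }
                if (readyAtNs - serverNowNs <= slackNs) {
                    // Early by less than the slack: hold the shot and execute
                    // it at readyAtNs (then restart the timer from that tick).
                    return Verdict.HOLD_UNTIL_READY;
                }
                return Verdict.REJECT;                       // nowhere near ready
            }
        }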

    Read the article

  • Disk drive for / not ready on boot after upgrade from 10.04 to 12.04

    - by Mathieu M-Gosselin
    After upgrading (using the Upgrade button in the update manager) from 10.04.4 to 12.04.1, I cannot boot anymore. Upon booting, I am greeted with the Ubuntu logo and the error "The disk drive for / is not ready yet or not present". I have the option to wait, to skip, or to access a basic shell. Waiting overnight did nothing; skipping just gives me the same error for /tmp and /home, then for a UUID, and finally it goes to a black screen with a white "_" in the top-left corner. My setup is a dual-boot one with XP on a single hard drive; I use separate partitions for / and /home. Back in the day I installed 8.04 directly from the CD while leaving a partition for XP, which I installed after. This setup had never caused any such issues, even when upgrading from 8.04 to 10.04. I have done plenty of research on this issue, as many others seem to have had similar problems after the same upgrade. For most of them, the fix (after remounting / read-write) was running:

        apt-get -f install

    but it didn't work for me; I got dependency errors (see here), which I also investigated. I found https://bugs.launchpad.net/ubuntu/+source/python-defaults/+bug/990740 where most people say the solution that worked (prior to running the above command) is:

        apt-get install -o APT::Immediate-Configure=false -f apt python-minimal

    but that also gave me a lot of dependency errors as output (see here), similar to #34 in the above thread. I also read that running:

        dpkg --configure -a

    could help. At first it wouldn't run, because it had trouble parsing /var/lib/dpkg/status since there was an extra blank line in a package description (see https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/916799), but I removed it using vim and then reran the command. It still gives me output that looks like an error, though. Here it is: http://paste.ubuntu.com/1338074/. I also tried re-running the above apt-get commands after that, to no avail. I'm running out of things to try in the hope of getting this fixed; your help would be very much appreciated. Thank you in advance.

    Read the article

  • Please Help Me Install Firestorm?

    - by Elle Caszatt
    I have been trying for hours just to install one program. In that time I've tried my best to follow directions and not screw everything up, but I have. I'm new to Linux. I tried to install Firestorm and this is what happened:

        parent@ubuntu:~$ sudo '/home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/install.sh'
        [sudo] password for parent:
        Enter the desired installation directory [/opt/firestorm-install]: /home/parent/downloads
         - Installing to /home/parent/downloads
        /home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/install.sh: line 80: /home/parent/downloads/etc/refresh_desktop_app_entry.sh: Permission denied
        parent@ubuntu:~$ sudo opt/firestorm-install
        sudo: opt/firestorm-install: command not found
        parent@ubuntu:~$ ./etc/refresh_desktop_app_entry.sh
        bash: ./etc/refresh_desktop_app_entry.sh: No such file or directory
        parent@ubuntu:~$ sudo '/home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/install.sh'
        Enter the desired installation directory [/opt/firestorm-install]: /home/parent
         - Backing up previous installation to /home/parent.backup-2012-08-27
         - Installing to /home/parent
        cp: cannot stat `/home/parent/Downloads/Phoenix_Firestorm-Release_i686_4.2.1.29803/*': No such file or directory
        Failed
        parent@ubuntu:~$

    Now whenever I go into my files it says it can't find anything, like "Cannot find home/parent/Downloads" - and I KNOW there are downloads. I don't know why it's doing this all of a sudden. I'm so frustrated that I'm ready to just go back to Windows. I've already had to uninstall/reinstall Ubuntu once today, and it's looking like I'm going to have to do it again. How can I fix the file problem I'm now having, and can someone please, please tell me how to install Firestorm? I mean, they don't even have their repository listed. It's ridiculous to have to go through this over one program. Spotify wasn't hard at all to install, so why is this? Someone please help, and I'm sorry if I sound like a total idiot. I'm pretty tech savvy, but I'm honestly pretty upset after struggling with this for hours.

    Edit: Okay, I see the problem with the directory files (showing the error I mentioned above when I try to click on them). I can only access my downloads, desktop, etc. through the backup that was created when I tried to install Firestorm. It's like that's the real home now. How can I get it back to the way it was?

    Edit: Ubuntu has stopped working for me on reboot now. It doesn't go past the login screen. This is exactly what happened when I had to uninstall it before, after trying to install Firestorm. Maybe I'm giving up too easily, but I think I'm just going to go back to Windows. If this is what's going to happen every time I innocently try to install a program, then it's just not worth it. I installed it specifically to run Firestorm because Windows sucks up a lot of CPU and causes lag. I still appreciate any input, but this is just too much hassle for something that shouldn't be hard.

    Read the article

  • Managing Regulated Content in WebCenter: USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM.

    Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper). Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best-practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 it certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost-effectively validate this world-class solution.

    With the Part 11 certification, Oracle's WebCenter now gives regulated life science organizations the ability to manage regulatory content in WebCenter, as well as the ability to take advantage of all of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screenshot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting.

    Gone are the days when life science companies had to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of its ability to develop cost-effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for its customers. Adding a world-class ECM solution to this product portfolio gives life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion technology stack, with their other leading enterprise applications.

    USDM provides:
    - Expertise in Life Science ECM Business Processes
    - Prebuilt Life Science Configuration in WebCenter
    - Validation Accelerator Packs for WebCenter

    USDM is very proud to support Oracle's expanding commitment to Life Sciences. For more information please contact: [email protected]

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Suggestions for Summer Intern Application Assignments

    - by orangepips
    As part of our application process, we want prospective college interns to complete an assignment on their own - either programming or analytical - to give us something tangible to evaluate, such as code or a flowchart. I have two ideas for these assignments, one programming and one analytical, and I am interested in gathering feedback about them.

    Programming assignment: Generate a month's calendar for a given date. The first row should indicate the days of the week (e.g. Sunday - Saturday). Each subsequent row should contain a week's days. The supplied date should be highlighted (e.g. bolded). We will probably prescribe the output format even more strictly - probably down to what the HTML source should look like, including CSS classes. The thinking is that this forces answerers to actually do some work if they merely copy a solution from the internet. (A sketch of one possible solution appears below.)

    Analytical assignment: Diagram or describe in prose a system for managing a set of traffic lights at a four-way intersection. Each direction (i.e. North, South, East and West) has two lanes (i.e. right and left). The left lane is turn-only and has a green arrow light to indicate right of way. The system is able to detect whether lanes have cars in them and change the lights accordingly. I would expect a flowchart or some prose describing a finite state machine that deals with each contingency. This would hopefully give some indication of the applicant's ability to reason through a logic problem and articulate an approach to solving it.

    Areas seeking feedback:
    - Is it unreasonable to ask this of applicants? If not, is it better to request it before or after a phone screen?
    - Are these questions too hard or too easy for a collegiate audience?
    - Any suggestions for alternate questions?
    - Do these seem like good tools for evaluating people who would be part of a software development life cycle?
    - Programming language suggestions - I'm thinking Java, Python and/or C# (we're actually a ColdFusion shop).
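    For reference, a solution to the programming assignment is only a screenful of code. A minimal sketch in Java (plain-text output with brackets standing in for the required highlighting, rather than the HTML/CSS format the post says would be prescribed):

        import java.time.LocalDate;
        import java.time.YearMonth;

        public class MonthCalendar {
            // Prints the month containing 'highlight', one week per row,
            // marking the supplied date with brackets.
            public static void print(LocalDate highlight) {
                System.out.println(" Su  Mo  Tu  We  Th  Fr  Sa");
                YearMonth month = YearMonth.from(highlight);
                int offset = month.atDay(1).getDayOfWeek().getValue() % 7; // Sunday -> 0
                StringBuilder row = new StringBuilder();
                for (int i = 0; i < offset; i++) row.append("    ");
                for (int day = 1; day <= month.lengthOfMonth(); day++) {
                    row.append(day == highlight.getDayOfMonth()
                            ? String.format("[%2d]", day)
                            : String.format("%3d ", day));
                    if ((offset + day) % 7 == 0) {      // week complete
                        System.out.println(row);
                        row.setLength(0);
                    }
                }
                if (row.length() > 0) System.out.println(row);
            }

            public static void main(String[] args) {
                print(LocalDate.of(2014, 5, 15));
            }
        }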

    Read the article

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.

    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
    Step 5: From the Table or View dropdown, choose the name of the Document Library that you are going to back up, and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
    Step 8: This will open a new Visual Studio instance with a project in it. In this project, add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below is the sample code:

        String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
        String downloadDir = "C:\\Documents\\";
        SharePointConnection conn = new SharePointConnection(connString);
        SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
        comm.CommandType = CommandType.StoredProcedure;
        comm.Parameters.Clear();
        String file = downloadDir+Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@File", file));
        String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
        comm.Parameters.Add(new SharePointParameter("@Library", list));
        String remoteFile = Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));
        comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

    SSIS Sample Project: To help you get started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection; you can download a free trial here. Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.

    Read the article

  • How to Add a Taskbar to the Desktop in Ubuntu 14.04

    - by Lori Kaufman
    If you’ve switched to Ubuntu from Windows, it may take some time to get used to the new and different interface. However, you can easily incorporate a familiar Windows feature, the Taskbar, into Ubuntu to make the transition easier. A tool called Tint2 provides a bar at the bottom of the Ubuntu Desktop that resembles the Windows Taskbar. We will show you how to install it and make it start every time you log into Ubuntu. NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise.

    Press Ctrl + Alt + T to open a Terminal window. To install Tint2, type the following line at the prompt and press Enter:

        sudo apt-get install tint2

    Type your password at the prompt and press Enter. The progress of the installation displays, and then a message displays saying how much disk space will be used. When asked if you want to continue, type a "y" and press Enter. When the installation has finished, close the Terminal window by typing "exit" at the prompt and pressing Enter.

    Click the Search button at the top of the Unity bar and start typing "startup applications" in the Search box. Items that match what you type start displaying below the Search box. When the Startup Applications tool displays, click the icon to open it. On the Startup Applications Preferences window, click Add. On the Add Startup Program dialog box, enter a name for the startup application; this name displays in the list on the Startup Applications Preferences window. Type "tint2" in the Command edit box, enter a description in the Comment edit box if desired, and click Add. Tint2 is added as a startup program and will start every time you log into Ubuntu. Click Close to close the Startup Applications Preferences window.

    Log out and log back in to make the Taskbar available on the desktop; you do not need to reboot the computer for this change to take effect. Now, when you minimize a program, an icon for it displays on the Taskbar at the bottom of the screen, just like the Taskbar in Windows. If you decide that you don't want the Taskbar to display every time you log into Ubuntu, you can uncheck the Tint2 startup program on the Startup Applications Preferences window; you don't need to delete it from the list.

    Read the article

  • Opengl-es picking object

    - by lacas
    I have seen a lot of OpenGL-ES picking code, but nothing has worked. Can someone tell me what I am missing? My code (from tutorials/forums):

        Vec3 far = Camera.getPosition();
        Vec3 near = Shared.opengl().getPickingRay(ev.getX(), ev.getY(), 0);
        Vec3 direction = far.sub(near);
        direction.normalize();
        Log.e("direction", direction.x+" "+direction.y+" "+direction.z);
        Ray mouseRay = new Ray(near, direction);
        for (int n=0; n<ObjectFactory.objects.size(); n++) {
            if (ObjectFactory.objects.get(n)!=null) {
                IObject obj = ObjectFactory.objects.get(n);
                float discriminant, b;
                float radius=0.1f;
                b = -mouseRay.getOrigin().dot(mouseRay.getDirection());
                discriminant = b * b - mouseRay.getOrigin().dot(mouseRay.getOrigin()) + radius*radius;
                discriminant = FloatMath.sqrt(discriminant);
                double x1 = b - discriminant;
                double x2 = b + discriminant;
                Log.e("asd", obj.getName() + " "+discriminant+" "+x1+" "+x2);
            }
        }

    My camera vectors:

        //cam
        Vec3 position = new Vec3(-obj.getPosX()+x, obj.getPosZ()-0.3f, obj.getPosY()+z);
        Vec3 direction = new Vec3(-obj.getPosX(), obj.getPosZ(), obj.getPosY());
        Vec3 up = new Vec3(0.0f, -1.0f, 0.0f);
        Camera.set(position, direction, up);

    And my picking code:

        public Vec3 getPickingRay(float mouseX, float mouseY, float mouseZ) {
            int[] viewport = getViewport();
            float[] modelview = getModelView();
            float[] projection = getProjection();
            float winX, winY;
            float[] position = new float[4];
            winX = (float)mouseX;
            winY = (float)Shared.screen.width - (float)mouseY;
            GLU.gluUnProject(winX, winY, mouseZ, modelview, 0, projection, 0, viewport, 0, position, 0);
            return new Vec3(position[0], position[1], position[2]);
        }

    My camera moves all the time in 3D space, and my actors/models move too. The camera follows one actor/model, and the user can move the camera on a circle around this model. How can I change the above code to make it work?
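    One thing worth noting about the test above: that discriminant formula only holds for a sphere centred at the world origin. A sketch of the general ray/sphere test, written against the question's own Vec3 type (its sub/dot methods are assumed to exist):

        // origin/dir describe the ray (dir must be normalised);
        // center/radius describe the bounding sphere of the object.
        public static boolean intersectsSphere(Vec3 origin, Vec3 dir,
                                               Vec3 center, float radius) {
            Vec3 oc = origin.sub(center);        // ray origin relative to the sphere
            float b = oc.dot(dir);
            float c = oc.dot(oc) - radius * radius;
            float discriminant = b * b - c;
            if (discriminant < 0f) {
                return false;                    // ray misses the sphere
            }
            float sqrtD = (float) Math.sqrt(discriminant);
            // Nearest hit distance along the ray; negative means behind the origin.
            float t = -b - sqrtD;
            return t >= 0f || (-b + sqrtD) >= 0f;
        }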

    Read the article

  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with sudo apt-get upgrade on my Ubuntu 10.04 box, and I now have Ubuntu 12.04 (3.2.0-29-generic-pae). I have two monitors and the following GPU:

        01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2)

    After upgrading to 12.04, I somehow lost my previous setup of one common workspace stretched across two monitors. When Ubuntu starts, only one monitor is on, and I can see this message on the active monitor: "Not optimum mode. Recommended mode: 1680x1050 60Hz". I used the Nvidia proprietary drivers on 10.04, but now jockey-text --list shows:

        xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use)
        xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use)

    When I run sudo nvidia-settings it says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server." I ran nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same goes for nvidia-current - Enabled but Not in use. I also tried nvidia-173, but I ended up in a tty immediately at startup, so I removed it. I used to have some problems with the Nvidia proprietary drivers on 10.04 - I had to put paths to EDID files in /etc/X11/xorg.conf explicitly - but the resolution was as recommended and both monitors were working. If I understand correctly, the nouveau drivers are used by default now, because the resolution is still quite high, definitely not 800x600. xrandr shows:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200
        default connected 1600x1200+0+0 0mm x 0mm
           1600x1200      66.0*
           1280x1024      76.0
           1024x768       76.0
           800x600        73.0
           640x480        73.0
           640x400         0.0
           320x400         0.0
           1680x1050_60.00 (0x4f)  146.2MHz
             h: width  1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz
             v: height 1050 start 1053 end 1059 total 1089 clock 60.0Hz

    However, colors seem a bit faded and blurry with the nouveau drivers, the mouse cursor is invisible when it's inside a Firefox window, and only one monitor is working. I like open source, and if possible I'd prefer to use the nouveau drivers, but a few things would need to be fixed first. I'm also curious why the nvidia-current drivers from the repository don't work now. I read that it has something to do with the new X server in Ubuntu 12.04 - is that true? And how can I get it back to work?

    Read the article

  • Contract / Project / Line-Item hierarchy design considerations

    - by Ryan
    We currently have an application that allows users to create a Contract. A contract can have one or more Projects. A project can have zero or more sub-projects (which can have their own sub-projects, and so on) as well as one or more Lines. Lines can have any number of sub-lines (which can have their own sub-lines, and so on). Currently, our design contains circular references, and I'd like to get away from that. It looks a bit like this:

        public class Contract
        {
            public List<Project> Projects { get; set; }
        }

        public class Project
        {
            public Contract OwningContract { get; set; }
            public Project ParentProject { get; set; }
            public List<Project> SubProjects { get; set; }
            public List<Line> Lines { get; set; }
        }

        public class Line
        {
            public Project OwningProject { get; set; }
            public Line ParentLine { get; set; }
            public List<Line> SubLines { get; set; }
        }

    We're using the M-V-VM pattern and use these models (and their associated view models) to populate a large "edit" screen where users can modify their contracts and the properties on all of the objects. Where things start to get confusing for me is when we add, for example, a Cost property to the Line. The issue is reflecting, at the highest level (the contract), changes made at the lowest level. I'm looking for some thoughts on how to change this design to remove the circular references. One thought I had was that the contract would have a Dictionary<Guid, Project> containing ALL projects (regardless of their level in the hierarchy). The Project would then have a Guid property called "Parent" which could be used to search the contract's dictionary for the parent object. The same logic could be applied at the Line level. (A sketch of this idea appears below.) Thanks! Any help is appreciated.
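    That flat-dictionary idea is workable. A minimal sketch of it, transliterated into Java for illustration (the original models are C#; all names here are hypothetical):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.UUID;

        // The contract owns one flat registry of projects; each project only
        // stores its parent's id, so no object-graph cycles exist.
        class Contract {
            final Map<UUID, Project> projects = new HashMap<UUID, Project>();

            Project parentOf(Project p) {
                return p.parentId == null ? null : projects.get(p.parentId);
            }

            // Changes at the lowest level surface at the contract level by
            // rolling values up on demand instead of via back-references.
            double totalCost() {
                double sum = 0;
                for (Project p : projects.values()) sum += p.cost;
                return sum;
            }
        }

        class Project {
            final UUID id = UUID.randomUUID();
            UUID parentId;   // null for top-level projects
            double cost;     // the new property being rolled up
        }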

    Read the article

  • Why do "Joke" programming languages exist? [closed]

    - by ThePlan
    First of all, please be aware that this post contains some abusive language, but I hope it will not bother anyone; I apologize for the bad language, but that's what the name is. As I've been doing documentation on existing programming languages, attempting to make a complete list of them, I stumbled across terrible programming languages - clearly not made for actual use and implementation, due to their insane difficulty. Languages such as Brainfu*k, LOLCODE or Whitespace are joke languages because they have no real use. For example, here is a "Hello world" program written in Brainfu*k, taken from Wikipedia. The following program prints "Hello World!" and a newline to the screen:

        +++++ +++++             initialize counter (cell #0) to 10
        [                       use loop to set the next four cells to 70/100/30/10
            > +++++ ++          add 7 to cell #1
            > +++++ +++++       add 10 to cell #2
            > +++               add 3 to cell #3
            > +                 add 1 to cell #4
            <<<< -              decrement counter (cell #0)
        ]
        > ++ .                  print 'H'
        > + .                   print 'e'
        +++++ ++ .              print 'l'
        .                       print 'l'
        +++ .                   print 'o'
        > ++ .                  print ' '
        << +++++ +++++ +++++ .  print 'W'
        > .                     print 'o'
        +++ .                   print 'r'
        ----- - .               print 'l'
        ----- --- .             print 'd'
        > + .                   print '!'
        > .                     print '\n'

    Or another example, taken from the LOLCODE language:

        HAI
        CAN HAS STDIO?
        PLZ OPEN FILE "LOLCATS.TXT"?
            AWSUM THX
                VISIBLE FILE
            O NOES
                INVISIBLE "ERROR!"
        KTHXBYE

    These languages are very difficult to learn/read/work with. My question is: why do they exist? What is their purpose? Also, is there an official "name" for these types of languages?

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune. That's enough data to start looking at the results in aggregate. The first thing I want to look at is logical reads; this is the easiest metric to identify and fix. When you upload a trace, I rank each statement based on the total number of logical reads. I also calculate each statement's percentage of the total logical reads, and I do the same thing for CPU, duration and logical writes. When you view a statement you can see all the details. [Screenshot not included here.] That single statement consumed 61.4% of the total logical reads on the system while we were tracing it. I also wanted to see the distribution of reads across statements. [Graph not included here.] On average, the highest-ranked statement consumed just under 50% of the reads on the system.

    When I tune a system, I'm usually starting in one of two modes: this "piece" is slow, or the whole system is slow. If a given piece (screen, report, query, etc.) is slow, you can usually find the specific statements behind it and tune them. You can make that individual piece faster, but you may not affect the whole system. When you're trying to speed up an entire server, you need to identify the queries that are using the most disk resources in aggregate. Fixing those will make them faster and will leave more disk throughput for the rest of the queries.

    Here are some of the things I've learned from querying this data:
    - The highest-ranked query averages just under 50% of the total reads on the system.
    - The top 3 ranked queries average 73% of the total reads on the system.
    - The top 10 ranked queries average 91% of the total reads on the system.

    Remember, these are averages across all the traces that have been uploaded, and I'm guessing that people mainly upload traces where there are performance problems, so your mileage may vary. I also learned that slow queries aren't the problem. Before I wrote ClearTrace, I used to identify queries by filtering on high logical reads using Profiler. That picked out individual queries, but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank, you'd see that the highest-ranked queries also have the highest execution counts; the graph would look very similar to the one above, but flatter. These queries don't look that bad individually but run so often that they hog the disk capacity. The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster. Tuning individually slow queries will help those specific queries but won't have much impact on the system as a whole.

    Read the article

  • How to enable and connect to RDP on a Windows Azure Web Role Instance?

    - by Enrique Lima
    We all know there have been some updates to Windows Azure, and one of the biggest, I would say, is the capability of being able to remote into the "OS level" of the image running a role. And I am not talking about the VM Role; I am talking about a Web Role, for example. As developers we use Visual Studio, and when we are getting ready to deploy a project, we have the option of enabling this. Here is how:

    1. We publish our project.
    2. On the Deployment dialog, provide all the details for your account and, before clicking OK, click Configure Remote Desktop connections.
    3. Enable connections and complete the rest of the configuration. Now, here is where there is an extra set of steps. The first thing to know: the certificate used here is different from the other certs you have in place. I created a new one, then went into certmgr.msc, then to Personal, and selected the cert I had just created. I did a right-click, then All Tasks > Export. Because what is needed is a .pfx package, make sure when exporting that you select to export the private key.
    4. Click OK on the Remote Desktop Configuration screen. Now, before you click OK on the Deployment dialog, you will need to visit the Azure Portal and perform the following: go to your hosted services; with the service available, select the Certificates folder location; then select Add Certificate from the toolbar (more like an Azure Portal ribbon) and provide the details to upload the recently created .pfx file. That will create the certificate. Click OK on the deployment dialog; this kicks off the deployment process.
    5. Now we need to go to the Windows Azure Portal. Here we will select the deployed Web Role and configure RDP.
    6. Time to test. Click on the Instance (not the role); this will make the Remote Access Connect button available. A file will also start downloading to begin the process.
    7. You will then be prompted for the credentials you configured.
    8. Validate connectivity.
    9. Open IIS Manager. From here on, this is a way to manage and work with your Instance.

    Read the article

  • I need help installing Ubuntu 11.10 to multi-drive system

    - by CookyMonzta
    I have a machine with three hard drives: the primary, which is 750GB (drive 0), and two others, each of which is 640GB (drives 1 and 2). On the last screen before the actual installation begins, this is how my hard drive configuration looks:

        /dev/sda [DISK0, 750GB]
          /dev/sda1   ntfs  104MB      [Win7 System Reserved]
          /dev/sda2   ntfs  499,997MB  [Windows 7 Pro]
          free space        250,052MB  [This space intended for Windows 8]
        /dev/sdb [DISK1, 640GB]
          /dev/sdb1   ntfs  400,085MB  [Windows XP Pro]
          free space        240,049MB  [This space intended for Ubuntu]
        /dev/sdc [DISK2, 640GB] [This drive intended for various backups]
          free space        160,033MB
          /dev/sdc5   ntfs  480,101MB  [Acronis Secure Zone]

    As you can see, I have three drives, all SATA. I have Win7 on my first drive (0), WinXP on my second drive (1) and a secure zone for daily backups on my third drive (2). I want to put Ubuntu 11.10 Oneiric Ocelot on the drive that also has XP. I've already used 400GB for XP, and of the remaining 240GB my intention was to create a 4GB swap partition and use the rest for Ubuntu itself. This is what my second hard drive looked like, with my intended setup, before installation:

        /dev/sdb
          /dev/sdb5   swap  4,095MB    [Linux swap]
          /dev/sdb6   ext4  235,951MB  [Ubuntu 11.10]

    Needless to say, this is only the second time I have attempted to install Linux; I managed to get 7.10 Gutsy Gibbon working on an old machine. I have two problems with this installation:

    1. Ubuntu asks for a location to install the boot loader (i.e., "Device for Boot Loader Installation"). I already have a boot loader - namely, Acronis OS Selector (from Acronis Disk Director 11) - so I decided to put the Ubuntu boot loader in /dev/sdb6 (where I intend to install Ubuntu) to keep it from interfering with my Acronis OS Selector.
    2. Once I hit "Install now", I ended up with the following error: "No root file system is defined. Please correct this from the partitioning menu."

    What am I missing? Did I attempt to put the boot loader in the wrong place? I assume I did, because as I am writing this entry I am looking at LinuxIdentity.com's Ubuntu 11.04 Natty Narwhal magazine, and I see a screenshot (Figure 7 on page 13) that implies the boot loader can be installed anywhere, including the first hard drive (in the MBR, which would obviously force me to reinstall the Acronis OS Selector) or even on a floppy. But why do I get an undefined root file system error? I thought /dev/sdb6 was the root file system. Obviously I'm missing something in the installation procedure. Should I try installing it from Windows using the WUBI installer? I assume that if I attempt to install Ubuntu from WinXP (on the second drive), it will automatically install Ubuntu on the empty partition alongside XP. But will I have the option of creating a swap partition? And what if the WUBI installer searches all of my drives and decides to install Ubuntu on my first drive's empty partition (which I have left empty for Win8 upon its release)?

    Read the article

  • How should I structure my turn based engine to allow flexibility for players/AI and observation?

    - by Reefpirate
    I've just started making a turn-based strategy engine in GameMaker's GML language... and I was cruising along nicely until it came time to handle the turn cycle, determine who is controlling which player, and also handle the camera and what is displayed on screen. Here's an outline of the main switch happening in my main game loop at the moment:

        switch (GameState) {
            case BEGIN_TURN:
                // Start of turn operations/routines
                break;
            case MID_TURN:
                switch (PControlledBy[Turn]) {
                    case HUMAN:
                        switch (MidTurnState) {
                            case MT_SELECT:
                                // No units selected, 'idle' UI state
                                break;
                            case MT_MOVE:
                                // Unit selected and attempting to move
                                break;
                            case MT_ATTACK:
                                break;
                        }
                        break;
                    case COMPUTER:
                        // AI ROUTINES GO HERE
                        break;
                    case OBSERVER:
                        // OBSERVER ROUTINES GO HERE
                        break;
                }
                break;
            case END_TURN:
                // End of turn routines/operations, and move Turn to next player
                break;
        }

    Now, I can see a couple of problems with this setup already, but I don't have any idea how to go about making it 'right'. Turn is a global variable that stores which player's turn it is. The BEGIN_TURN and END_TURN states make perfect sense to me, but the MID_TURN state is baffling me because of the things I want to happen there:

    - If there are human-controlled players, I want the AI to do its thing on its turn here, but I want to be able to have the camera follow the AI as it makes moves within the human player's vision.
    - If there are no human-controlled players, I'd like to be able to watch two or more AIs battle it out on the map with god-like 'observer' vision.

    So basically I'm wondering if there are any resources on how to structure a turn-based strategy engine. I've found lots of writing about pathfinding and AI, and those are all great, but when it comes to handling the turn structure and the game states I am having trouble finding any resources at all. How should the states be divided to allow flexibility between the players and the controllers (HUMAN, COMPUTER, OBSERVER)? Also, maybe if I'm on the right track, I just need some reassurance before I lay down another few hundred lines of code...
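    One way to see the shape of a possible answer (sketched here in Java rather than GML, with hypothetical names): keep the BEGIN/MID/END cycle, but hide who is driving behind a controller interface, so HUMAN, COMPUTER and OBSERVER stop being special cases inside MID_TURN:

        // Each player slot is driven by a Controller; the turn loop and the
        // camera no longer care whether it is a human, an AI, or nobody.
        interface Controller {
            void takeTurn(int playerId);       // issue this player's commands
            boolean cameraShouldFollow();      // let spectators watch AI moves
        }

        class TurnLoop {
            private final Controller[] controllers;   // one per player
            private int turn = 0;

            TurnLoop(Controller[] controllers) {
                this.controllers = controllers;
            }

            // Replaces the MID_TURN branch of the switch above.
            void midTurn() {
                Controller current = controllers[turn];
                if (current.cameraShouldFollow()) {
                    // pan the camera to the acting unit here
                }
                current.takeTurn(turn);
            }

            // Replaces END_TURN: advance to the next player.
            void endTurn() {
                turn = (turn + 1) % controllers.length;
            }
        }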

    Read the article

  • Google and Bing Map APIs Compared

    - by SGWellens
    At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly, and it is permissible to hit golf balls there - you bring and shag your own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time. One of the guys brings Ginger, the amazing, incredible wonder dog. Ginger is a Portuguese Pointer; she chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave.

    Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google Maps and then did the same application with Bing Maps; it is a good way to become familiar with the APIs. [Images of the final two maps, Google and Bing, are not included here.]

    To start with online mapping services, you need to visit the respective websites and get a developer key. I pared the code down to the minimum to make it easier to compare the APIs. Google Maps required this CSS (or it wouldn't work):

        <style type="text/css">
            html
            {
                height: 100%;
            }

            body
            {
                height: 100%;
                margin: 0;
                padding: 0;
            }

    Here is how the map scripts are included. Google requires the developer key when loading the JavaScript; Bing requires it when the map object is created.

    Google:

        <script type="text/javascript"
            src="https://maps.googleapis.com/maps/api/js?key=XXXXXXX&libraries=geometry&sensor=false">
        </script>

    Bing:

        <script type="text/javascript"
            src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0">
        </script>

    Note: I use jQuery to manipulate the DOM elements, which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery.

    Here is how the maps are created.

    Common code (the same for both Google and Bing maps):

        <script type="text/javascript">
            var gTheMap;
            var gMarker1;
            var gMarker2;

            $(document).ready(DocLoaded);

            function DocLoaded()
            {
                // golf course coordinates
                var StartLat = 44.924254;
                var StartLng = -93.366859;

                // what element to display the map in
                var mapdiv = $("#map_div")[0];

    Google:

            // where on earth the map should display
            var StartPoint = new google.maps.LatLng(StartLat, StartLng);

            // create the map
            gTheMap = new google.maps.Map(mapdiv,
                {
                    center: StartPoint,
                    zoom: 18,
                    mapTypeId: google.maps.MapTypeId.SATELLITE
                });

            // place two markers
            marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));
            marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));

            DragEnd(null);
        }

    Bing:

            // where on earth the map should display
            var StartPoint = new Microsoft.Maps.Location(StartLat, StartLng);

            // create the map
            gTheMap = new Microsoft.Maps.Map(mapdiv,
                {
                    credentials: 'Asbsa_hzfHl69XF3wxBd_WbW0dLNTRUH3ZHQG9qcV5EFRLuWEaOP1hjWdZ0A0P17',
                    center: StartPoint,
                    zoom: 18,
                    mapTypeId: Microsoft.Maps.MapTypeId.aerial
                });

            // place two markers
            marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));
            marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));

            DragEnd(null);
        }

    Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it.
    Note: When creating the Bing map, use the developer key for the credentials property.

    I immediately place two markers/pins on the map, which is simpler than creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable, and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging.

    Here is the code to place a marker.

    Google:

        // ---- PlaceMarker ------------------------------------

        function PlaceMarker(location)
        {
            var marker = new google.maps.Marker(
                {
                    position: location,
                    map: gTheMap,
                    draggable: true
                });
            marker.addListener('dragend', DragEnd);
            return marker;
        }

    Bing:

        // ---- PlaceMarker ------------------------------------

        function PlaceMarker(location)
        {
            var marker = new Microsoft.Maps.Pushpin(location,
            {
                draggable : true
            });
            Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);
            gTheMap.entities.push(marker);
            return marker;
        }

    Here is the code that runs when the user stops dragging a marker.

    Google:

        // ---- DragEnd -------------------------------------------

        var gLine = null;

        function DragEnd(Event)
        {
            var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);
            var yards = meters * 1.0936133;
            $("#message").text(yards.toFixed(1) + ' yards');

            // draw a line connecting the points
            var Endpoints = [marker1.position, marker2.position];

            if (gLine == null)
            {
                gLine = new google.maps.Polyline({
                    path: Endpoints,
                    strokeColor: "#FFFF00",
                    strokeOpacity: 1.0,
                    strokeWeight: 2,
                    map: gTheMap
                });
            }
            else
                gLine.setPath(Endpoints);
        }

    Bing:

        // ---- DragEnd -------------------------------------------

        var gLine = null;

        function DragEnd(Args)
        {
            var Distance = CalculateDistance(marker1._location, marker2._location);

            $("#message").text(Distance.toFixed(1) + ' yards');

            // draw a line connecting the points
            var Endpoints = [marker1._location, marker2._location];

            if (gLine == null)
            {
                gLine = new Microsoft.Maps.Polyline(Endpoints,
                    {
                        strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0),  // aRGB
                        strokeThickness : 2
                    });

                gTheMap.entities.push(gLine);
            }
            else
                gLine.setLocations(Endpoints);
        }

    Note: I couldn't find a function in the Bing API to calculate the distance between points, so I wrote my own (CalculateDistance). If you want to see its source, you can pick it off the web page.
    Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green, and then by zooming in I was able to see the 150 markers on the fairway and put the other pin/marker on one of them.

    Final notes: All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code. In one aerial view there are leaves on the trees, in the other the trees are bare; I don't know which service has the newer data. Here are links to working pages: Bing Map Demo, Google Map Demo. I hope someone finds this useful. Steve Wellens. CodeProject

    Read the article

  • problems programmatically creating UIView on iPad App

    - by user3871
    I have been struggling with this problem for a few days. My iPad app is designed to be a portrait game; to satisfy Apple's expectations, I also support landscape mode. When the app goes into landscape mode, the game switches to a letterbox format with black borders on the sides. My problem is that I am creating the UIWindow and UIView programmatically, and for some unknown reason the touch controls are "locked in" to think I'm always in landscape mode. Even though visually everything looks correct in portrait mode, the top and bottom of the screen do not respond to touch. To summarize how I am setting this up, here is the skeletal framework of what I'm doing. In main.cpp:

        int retVal = UIApplicationMain(argc, argv, nil, @"derbyPoker_ipadAppDelegate");

    In the delegate, I am doing this:

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
            CGRect screenBounds = [[UIScreen mainScreen] bounds];
            CGFloat scale = [[ UIScreen mainScreen] scale ];
            m_device_width = screenBounds.size.width;
            m_device_height = screenBounds.size.height;
            m_device_scale = scale;
            // Everything is built assuming 640x960
            window = [[ UIWindow alloc ] initWithFrame:[[UIScreen mainScreen] bounds]];
            viewController = [ glView new ];
            [self doStateChange:[blitz class]];
            return YES;
        }

    The last bit of code sets up the UIView:

        - (void) doStateChange: (Class) state{
            viewController.view = [[state alloc] initWithFrame:CGRectMake(0, 0, m_device_width, m_device_height) andManager:self];
            viewController.view.contentMode = UIViewContentModeScaleAspectFit;
            viewController.view.autoresizesSubviews = YES;
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
        }

    The problem seems to be related to the line:

        viewController.view.contentMode = UIViewContentModeScaleAspectFit;

    If I remove that line, touch works correctly in portrait mode, but then in landscape mode the game stretches incorrectly, so that's not an option. The frustrating thing is that when I originally had this set up with a NIB file, it worked fine. I have read through the docs on UIWindow, UIViewController and UIView and have tried about everything, to no avail. Any help would be greatly appreciated.

    Read the article

  • Is my work on a developer test being taken advantage of?

    - by CodeWarrior
    I am looking for a job and have applied to a number of positions. One of them responded: I had a pretty lengthy phone interview (perhaps an hour+), and they then set me up with a developer test. I was told that this test was estimated to take between 6 and 8 hours and that, provided it met with their approval, I could be paid for my work on it. That gave me some pause, but I endeavored. The developer test took place on a VM accessed via RDP. The task was to implement a search page in a web project that requests data from the server, displays it on the screen in a table, and has a pretty complicated search filtering scheme (there are about 15 statuses, and when sending the search to the server you can filter by these statuses) in addition to the string/field search. They want some SVG icons to change color on certain data values, they want some data to be represented differently than how it is in the database, etc. Loooong story short, this took one heck of a lot longer than 6-8 hours, much of it due to the very poor VM I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the 3 GB ginormous solution). After completing it, I was told to commit my changes to source control... Hmm, OK. I got an email back saying they thought the SVGs could have their color changed differently, they found a bug in this edge case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now, and I have to do bug fixes. I do them, and they come back with some more. This is all apparently going into a production application. I noticed some anomalies in the code that was already in there, where it looked like other people had coded all of one feature and nothing else that I could find. Am I just being used for cheap labor? Even if they pay me the promised 50 dollars an hour for 6 hours, I have committed about 18 hours to this thing now. If I bug-fix all of the stuff they keep coming up with, I will have worked at least 16 hours for free. I have taken a number of developer tests. I have never taken one where I worked on code that was destined for production. I have never taken one where I implemented a feature that was in the pipeline for development (it was planned for, and I implemented it through the course of the test). And I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression that they are using their developer test to field, on the cheap, some of the functionality they don't have time for in their normal team. Also, I wouldn't mind a 'devtest' tag.

    Read the article

  • Ray Picking Problems

    - by A Name I Haven't Decided On
    I've read so many answers on here about how to do ray picking that I thought I had the idea of it down, but when I try to implement it in my game, I get garbage. I'm working with LWJGL. Here's the code:

        public static Ray getPick(int mouseX, int mouseY){
            glPushMatrix();

            // Setting up the mouse clip
            Vector4f mouseClip = new Vector4f((float)mouseX * 2 / 960f - 1, 1 - (float)mouseY * 2 / 640f, 0, 1);

            // Loading matrices
            FloatBuffer modMatrix = BufferUtils.createFloatBuffer(16);
            FloatBuffer projMatrix = BufferUtils.createFloatBuffer(16);
            glGetFloat(GL_MODELVIEW_MATRIX, modMatrix);
            glGetFloat(GL_PROJECTION_MATRIX, projMatrix);

            // Assigning matrices
            Matrix4f proj = new Matrix4f();
            Matrix4f model = new Matrix4f();
            model.load(modMatrix);
            proj.load(projMatrix);

            // Multiplying the projection matrix by the model view matrix
            Matrix4f tempView = new Matrix4f();
            Matrix4f.mul(proj, model, tempView);
            tempView.invert();

            // Getting the camera position in world space: the 4th column of the model view matrix
            model.invert();
            Point cameraPos = new Point(model.m30, model.m31, model.m32);

            // Theoretically getting the vector the picking ray goes along
            Vector4f rayVector = new Vector4f();
            Matrix4f.transform(tempView, mouseClip, rayVector);
            rayVector.translate((float)-cameraPos.getX(), (float)-cameraPos.getY(), (float)-cameraPos.getZ(), 0f);
            rayVector.normalise();

            glPopMatrix();

            // This basically spits out a value that changes as the camera moves.
            // When the mouse moves, the values change by around 0.001 from screen edge to edge.
            System.out.format("Vector: %f %f %f%n", rayVector.x, rayVector.y, rayVector.z);

            //return new Ray(cameraPos, rayVector);
            return null;
        }

    I don't really know why this isn't working. I was hoping some more experienced eyes might be able to help me out. I can get the camera position like a champ; it's the vector the ray is going in that I can't seem to get right. Thanks.
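    One pattern that usually cures this kind of garbage output (a sketch, using the same LWJGL Vector4f/Matrix4f classes as the code above): unproject the mouse twice, once on the near plane and once on the far plane, and take the difference, instead of mixing the camera position with a single unprojected point. tempView is the inverted projection*modelview from the code above:

        // NDC coordinates, as computed for mouseClip above.
        float ndcX = (float) mouseX * 2 / 960f - 1;
        float ndcY = 1 - (float) mouseY * 2 / 640f;

        // Unproject a point on the near plane (z = -1) and far plane (z = +1).
        Vector4f nearWorld = Matrix4f.transform(tempView, new Vector4f(ndcX, ndcY, -1f, 1f), null);
        Vector4f farWorld  = Matrix4f.transform(tempView, new Vector4f(ndcX, ndcY,  1f, 1f), null);

        // Perspective divide: homogeneous clip space -> world space.
        nearWorld.scale(1f / nearWorld.w);
        farWorld.scale(1f / farWorld.w);

        // Ray origin is the near point; direction is far minus near.
        Vector4f rayDir = new Vector4f(farWorld.x - nearWorld.x,
                                       farWorld.y - nearWorld.y,
                                       farWorld.z - nearWorld.z, 0f);
        rayDir.normalise();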

    Read the article

  • 2D Collision in Canvas - Balls Overlapping When Velocity is High

    - by kushsolitary
    I am doing a simple experiment in canvas using JavaScript in which some balls are thrown on the screen with some initial velocity and then bounce when colliding with each other or with the walls. I managed to do the collision with walls perfectly, but now the problem is with the collision between balls. I am using the following code for it:

        // Check collision between two bodies
        function collides(b1, b2) {
            // Find the distance between their mid-points
            var dx = b1.x - b2.x,
                dy = b1.y - b2.y,
                dist = Math.round(Math.sqrt(dx*dx + dy*dy));

            // Check if it is a collision
            if(dist <= (b1.r + b2.r)) {
                // Calculate the angles
                var angle = Math.atan2(dy, dx),
                    sin = Math.sin(angle),
                    cos = Math.cos(angle);

                // Calculate the old velocity components
                var v1x = b1.vx * cos,
                    v2x = b2.vx * cos,
                    v1y = b1.vy * sin,
                    v2y = b2.vy * sin;

                // Calculate the new velocity components
                var vel1x = ((b1.m - b2.m) / (b1.m + b2.m)) * v1x + (2 * b2.m / (b1.m + b2.m)) * v2x,
                    vel2x = (2 * b1.m / (b1.m + b2.m)) * v1x + ((b2.m - b1.m) / (b2.m + b1.m)) * v2x,
                    vel1y = v1y,
                    vel2y = v2y;

                // Set the new velocities
                b1.vx = vel1x;
                b2.vx = vel2x;
                b1.vy = vel1y;
                b2.vy = vel2y;
            }
        }

    You can see the experiment here. The problem is that some balls overlap each other and stick together, while some of them rebound perfectly. I don't know what is causing this issue. Here's my ball object, if that matters:

        function Ball() {
            // Random positions
            this.x = 50 + Math.random() * W;
            this.y = 50 + Math.random() * H;

            // Random radii
            this.r = 15 + Math.random() * 30;
            this.m = this.r;

            // Random velocity components
            this.vx = 1 + Math.random() * 4;
            this.vy = 1 + Math.random() * 4;

            // Random shade of grey color
            this.c = Math.round(Math.random() * 200);

            this.draw = function() {
                ctx.beginPath();
                ctx.fillStyle = "rgb(" + this.c + ", " + this.c + ", " + this.c + ")";
                ctx.arc(this.x, this.y, this.r, 0, Math.PI*2, false);
                ctx.fill();
                ctx.closePath();
            }
        }
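    A likely culprit, sketched here in Java rather than the experiment's JavaScript (the Ball class below mirrors the question's fields): once two balls interpenetrate, the velocity swap runs again on the next frame and reverses them back, so they jitter and stick. The usual fix is to skip the response unless the balls are approaching each other, and to push them apart so no overlap survives the frame:

        static final class Ball { double x, y, r, vx, vy; }

        static void resolveOverlap(Ball b1, Ball b2) {
            double dx = b2.x - b1.x, dy = b2.y - b1.y;
            double dist = Math.sqrt(dx * dx + dy * dy);
            double overlap = b1.r + b2.r - dist;
            if (overlap <= 0 || dist == 0) return;       // not touching
            double nx = dx / dist, ny = dy / dist;       // collision normal
            // Relative velocity along the normal; positive means separating.
            double relVel = (b2.vx - b1.vx) * nx + (b2.vy - b1.vy) * ny;
            if (relVel > 0) return;                      // already separating: no bounce
            // Positional correction: split the overlap between both balls.
            b1.x -= nx * overlap / 2;  b1.y -= ny * overlap / 2;
            b2.x += nx * overlap / 2;  b2.y += ny * overlap / 2;
            // ...then exchange the normal velocity components as above.
        }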

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems; I have worked primarily on the Microsoft stack using C# and the .NET Framework. My goal is to develop a 2D, RTS-style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood, the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in Second Life. If their attempt had been a game, the game-play would have been notably horrible, to say the least! The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning it to objects that relate one-to-one with their real-world counterparts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself. I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games; to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system. Given my background and the nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years and who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • Turn-Based RPG Battle Instance Layout For Larger Groups

    - by SoulBeaver
    What a title, eh? I'm currently designing a videogame; a turn-based RPG like Final Fantasy (because everybody knows Final Fantasy). It's a 2D sprite game. These are my ideas for combat:

    - The player has a group of 15 members (main character included).
    - During battle, five of the group are designated as active, and appear in the battle.
    - These five may be switched out at leisure, or when one of the five dies.
    - At any time, the Waiting members can cast buffs, be healed by the active members, or perform special attacks.
    - Battles should contain at least 10 monsters. I'm aiming for 20, but I'm not sure if that's possible yet.
    - Battles should feel larger than normal due to the interaction of Waiting members, active members and the increased number of monsters per battle.
    - The player has two rows in which to put the active members: front and back.
    - Depending on the implementation, I might allow comboing of player attacks and skills.

    These are just design ideas, so beware! I have not been able to test this out yet; I have no idea if any of these ideas bunched together will make for a compelling game. What sounds good on paper doesn't necessarily have to be good in practice!

    What I'm asking now is how to create the layout for this. My starting point is the battles in Final Fantasy VI, with up to 5-6 monsters on the left and the characters on the right, or monsters on both sides if it's a pincer attack. However, this view would not be feasible with my goal of 20 monsters and 5 characters. All the monsters on the left would appear cluttered unless I scale them far, far back. If I create a pincer-like map, then no real pincer attack would be possible. If I space the monsters out, I force the player to scroll the screen, a game mechanic I've come across and not enjoyed, imho.

    My question is: does anybody have any layouts or guides for designing battle maps in turn-based RPGs, especially with a larger number of enemies taken into consideration? How should it look? I am not asking for specific combat mechanics, just the layout for the moment.
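    For the layout itself, one way to fit 20 enemies on a single screen, sketched below with invented numbers purely for illustration, is to stagger them into a few ranks on the enemy side and scale the back ranks down to suggest depth:

        //Illustrative sketch: spread `count` enemy sprites over `ranks`
        //staggered columns on the left side of a screenW x screenH view
        function layoutEnemies(count, ranks, screenW, screenH) {
            var positions = [],
                perRank = Math.ceil(count / ranks);
            for (var i = 0; i < count; i++) {
                var rank = Math.floor(i / perRank), //0 = front rank
                    slot = i % perRank;
                positions.push({
                    x: screenW * (0.30 - rank * 0.08), //back ranks recede left
                    y: screenH * (0.20 + (slot + (rank % 2) * 0.5) * 0.6 / perRank), //staggered rows
                    scale: 1 - rank * 0.15 //smaller = farther away
                });
            }
            return positions;
        }

    With count = 20 and ranks = 3 this yields seven sprites per rank at 100%, 85% and 70% scale, which keeps the enemy side readable without scrolling and leaves the other side free for the five active members in their two rows.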

    Read the article

  • 5 Step Procedure for Android Deployment with NetBeans IDE

    - by Geertjan
    I'm finding that it's so simple to deploy apps to Android that I don't need to use the Android emulator at all; I haven't been able to figure out how it works anyway (a big blinky screen pops up that I don't know what to do with). I simply deploy the app straight to Android, try it out there, and then uninstall it if needed. The whole process takes a few seconds; only steps 4 and 5 below need to be repeated for each deployment iteration, after you've done steps 1, 2, and 3 once to set up the deployment environment. Here's what I do:

    1. On Android, go to Settings | Applications. Check "Unknown sources". In "Development", check "USB debugging".
    2. Connect Android to your computer via a USB cable.
    3. Start up NetBeans IDE, with NBAndroid installed, as described yesterday, and create your "Hello World" app.
    4. Right-click the project in the IDE and choose "Export Signed Android Package". Create a new keystore, or choose an existing one, via the wizard that appears. At the end of the wizard (it would be nice if NBAndroid let you set up a keystore once and then reuse it for all your projects, without needing to work through the whole wizard step by step each time), you'll have a new release APK file (Android deployment archive) in the project's 'bin' folder, which you can see in the Files window.
    5. Go to the command line (it would be nice if NBAndroid supported adb, which would mean I wouldn't need the command line at all) and browse to the location of the APK file above. Type "adb install helloworld-release.apk", or whatever the APK file is called. You should see a "Success" message in the command line.

    Now the application is installed. On your Android, go to "Applications", and there you'll see your brand new app. Try it out there and delete it if you're not happy with it. After you've made a change in your app, simply repeat steps 4 and 5, i.e., create a new APK and install it via adb. Steps 4 and 5 take a couple of seconds. And, given that it's all so simple, I don't see the value of the Android emulator at all.

    Read the article

  • Is OOP hard because it is not natural?

    - by zvrba
    One can often hear that OOP naturally corresponds to the way people think about the world. But I would strongly disagree with this statement: we (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies.

    Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. Examples of such relationships are: "my screen is on top of the table"; "I (a human being) am sitting on a chair"; "a car is on the road"; "I am typing on the keyboard"; "the coffee machine boils water"; "the text is shown in the terminal window." We think in terms of bivalent (sometimes trivalent, as, for example, in "I gave you flowers") verbs where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on the action, and the two (or three) [grammatical] objects have equal importance.

    Contrast that with OOP, where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns -- it is as if everything were said in the passive or reflexive voice, e.g., "the text is being shown by the terminal window", or maybe "the text draws itself on the terminal window". Not only is the focus shifted to nouns, but one of the nouns (let's call it the grammatical subject) is given higher "importance" than the other (the grammatical object). Thus one must decide whether to say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? [The consequences are operationally insignificant -- in both cases the text is shown on the terminal window -- but can be very serious in the design of class hierarchies, where a "wrong" choice can lead to convoluted and hard-to-maintain code.]

    I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, that is not a widespread approach.

    Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it?
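    To make the contrast concrete (with stand-in objects invented purely for illustration), the difference is between forcing one noun into the subject position and letting the verb lead:

        //Stand-ins, invented for illustration only
        var someText = { toString: function() { return "hello"; } };
        var terminalWindow = {
            write: function(s) { console.log(s); },
            //single-dispatch style: the window is the grammatical subject
            show: function(text) { this.write(text.toString()); }
        };

        //Class-based OOP: one object must own the verb
        terminalWindow.show(someText);

        //Generic-function style (closer to CLOS): the verb leads and
        //both arguments have equal standing
        function show(window, text) {
            window.write(text.toString());
        }
        show(terminalWindow, someText);

    Both calls print the same thing; the only difference is which noun the syntax privileges.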

    Read the article

< Previous Page | 569 570 571 572 573 574 575 576 577 578 579 580  | Next Page >