Search Results

Search found 386 results on 16 pages for 'sqrt'.

Page 11 of 16

  • Best way to implement a simple bullet trajectory

    - by AirieFenix
    I searched and searched and although it's a fairly simple question, I can't find a proper answer, only general ideas (which I already have). I have a top-down game and I want to implement a gun which shoots bullets that follow a simple path (no physics nor change of trajectory, just go from A to B thing). a: vector of the position of the gun/player. b: vector of the mouse position (cross-hair). w: the vector of the bullet's trajectory. So, w=b-a. And the position of the bullet = [x=x0+speed*time*normalized w.x , y=y0+speed*time * normalized w.y]. I have the constructor: public Shot(int shipX, int shipY, int mouseX, int mouseY) { //I get mouse with Gdx.input.getX()/getY() ... this.shotTime = TimeUtils.millis(); this.posX = shipX; this.posY = shipY; //I used aVector = aVector.nor() here before but for some reason didn't work float tmp = (float) (Math.pow(mouseX-shipX, 2) + Math.pow(mouseY-shipY, 2)); tmp = (float) Math.sqrt(Math.abs(tmp)); this.vecX = (mouseX-shipX)/tmp; this.vecY = (mouseY-shipY)/tmp; } And here I update the position and draw the shot: public void drawShot(SpriteBatch batch) { this.lifeTime = TimeUtils.millis() - this.shotTime; //position = positionBefore + v*t this.posX = this.posX + this.vecX*this.lifeTime*speed*Gdx.graphics.getDeltaTime(); this.posY = this.posY + this.vecY*this.lifeTime*speed*Gdx.graphics.getDeltaTime(); ... } Now, the behavior of the bullet seems very awkward, not going exactly where my mouse is (it's like the mouse is 30px off) and with a random speed. I know I probably need to open the old algebra book from college but I'd like somebody to say whether I'm going in the right direction (or points me to it); if it's a calculation problem, a code problem or both. Also, is it possible that Gdx.input.getX() gives me an imprecise position? Because when I draw the cross-hair it also draws off the cursor position. Sorry for the long post and sorry if it's a very basic question. Thanks!
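
    A likely cause of the "random speed" is that the update multiplies the velocity by both lifeTime and deltaTime, so the displacement grows quadratically instead of linearly; the constant 30px offset is the usual symptom of mixing screen coordinates (libGDX reports the mouse Y from the top of the window) with world coordinates, which is normally handled by unprojecting the mouse position through the camera. Below is a minimal sketch of the linear update in plain Java; the class and field names are hypothetical, not taken from the post.

    // Store a normalized direction once at fire time, then advance by speed * deltaTime per frame.
    class BulletSketch {
        float posX, posY;        // current position in world units
        final float dirX, dirY;  // normalized direction, fixed when the shot is fired
        final float speed;       // world units per second

        BulletSketch(float startX, float startY, float targetX, float targetY, float speed) {
            this.posX = startX;
            this.posY = startY;
            float dx = targetX - startX;
            float dy = targetY - startY;
            float len = (float) Math.sqrt(dx * dx + dy * dy);
            this.dirX = (len == 0f) ? 0f : dx / len;  // guard against a zero-length vector
            this.dirY = (len == 0f) ? 0f : dy / len;
            this.speed = speed;
        }

        void update(float deltaTime) {
            // Displacement is linear in time: exactly one factor of deltaTime, no lifeTime factor.
            posX += dirX * speed * deltaTime;
            posY += dirY * speed * deltaTime;
        }

        public static void main(String[] args) {
            BulletSketch b = new BulletSketch(0f, 0f, 3f, 4f, 100f);
            b.update(1f / 60f);
            System.out.println(b.posX + ", " + b.posY); // moves 100/60 units along (0.6, 0.8)
        }
    }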

    Read the article

  • Why is my arrow texture being drawn in odd places?

    - by tyjkenn
    This is a script I wrote that places an arrow on the screen, pointing to an enemy off-screen, or, if the enemy is on-screen, it places an arrow hovering above the enemy. Everything seems to work, except for some odd reason, I see random arrows floating around, often skewed and resized (which I really don't understand, because I only rotate and place in this script). Even when I only have one enemy in the scene, I still see these random arrows. It should only be drawing one per enemy. Note: when all enemies are removed, no arrows appear. var arrow : Texture; var cam : Camera; var dim : int = 30; function OnGUI() { var objects = GameObject.FindGameObjectsWithTag("Enemy"); for(var ob : GameObject in objects) { var pos = cam.WorldToViewportPoint(ob.transform.position); if(gameObject.GetComponent(FollowCamera).target != null){ var tar = gameObject.GetComponent(FollowCamera).target.parent; } if(pos.z>1 && ob.transform != tar){ var xDiff = (pos.x*cam.pixelWidth)-(cam.pixelWidth/2); var yDiff = (pos.y*cam.pixelHeight)-(cam.pixelHeight/2); var angle = Mathf.Rad2Deg*Mathf.Atan(yDiff/xDiff)+180; if(xDiff>0) angle += 180; var dist = Mathf.Sqrt(xDiff*xDiff + yDiff*yDiff); var slope = yDiff/xDiff; var camSlope = cam.pixelHeight/cam.pixelWidth; var theX = -1000.0; var theY = -1000.0; var mult = 0; var temp; if(Mathf.Abs(xDiff)>(cam.pixelWidth/2)||Mathf.Abs(yDiff)>(cam.pixelHeight/2)){ //touching right if(slope<camSlope && slope>-camSlope) { if(xDiff>(cam.pixelWidth/2)) { theX = cam.pixelWidth - (dim/2); mult = -1; }else if(xDiff<-(cam.pixelWidth/2)) { theX = (dim/2); mult = 1; } temp = ((cam.pixelWidth/2)*yDiff)/xDiff; theY =(cam.pixelHeight/2)+(mult*temp); } else{ if(yDiff>(cam.pixelHeight/2)) { theY = (dim/2); mult = 1; }else if(yDiff<-(cam.pixelHeight/2)) { theY = cam.pixelHeight - (dim/2); mult = -1; } temp = ((cam.pixelHeight/2)*xDiff)/yDiff; theX =(cam.pixelWidth/2)+(mult*temp); } } else { angle = -90; theX = (cam.pixelWidth/2)+xDiff; theY = (cam.pixelHeight/2)-yDiff-dim; } GUIUtility.RotateAroundPivot(-angle, Vector2(theX, theY)); Graphics.DrawTexture(Rect(theX-(dim/2),theY-(dim/2),dim,dim),arrow,null); GUIUtility.RotateAroundPivot(angle, Vector2(theX, theY)); } } }
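
    One small simplification that removes a source of error here: computing the angle with atan2 instead of atan plus manual quadrant corrections (it also avoids the division by zero when xDiff is 0). A short sketch in Java; the helper name and the screen-space parameters are assumptions, not part of the original script.

    // Hypothetical helper: angle in degrees from the screen centre towards a target point.
    // Math.atan2 resolves the quadrant itself, so no "+180" special cases are needed.
    final class ArrowMath {
        private ArrowMath() {}

        static double angleToTargetDegrees(double targetX, double targetY,
                                           double screenWidth, double screenHeight) {
            double xDiff = targetX - screenWidth / 2.0;
            double yDiff = targetY - screenHeight / 2.0;
            return Math.toDegrees(Math.atan2(yDiff, xDiff)); // range (-180, 180]
        }

        public static void main(String[] args) {
            // Target in the upper-left quadrant of an 800x600 screen.
            System.out.println(angleToTargetDegrees(100, 500, 800, 600)); // roughly 146.3
        }
    }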

    Read the article

  • 2D Collision in Canvas - Balls Overlapping When Velocity is High

    - by kushsolitary
    I am doing a simple experiment in canvas using Javascript in which some balls will be thrown on the screen with some initial velocity and then they will bounce on colliding with each other or with the walls. I managed to do the collision with walls perfectly but now the problem is with the collision with other balls. I am using the following code for it: //Check collision between two bodies function collides(b1, b2) { //Find the distance between their mid-points var dx = b1.x - b2.x, dy = b1.y - b2.y, dist = Math.round(Math.sqrt(dx*dx + dy*dy)); //Check if it is a collision if(dist <= (b1.r + b2.r)) { //Calculate the angles var angle = Math.atan2(dy, dx), sin = Math.sin(angle), cos = Math.cos(angle); //Calculate the old velocity components var v1x = b1.vx * cos, v2x = b2.vx * cos, v1y = b1.vy * sin, v2y = b2.vy * sin; //Calculate the new velocity components var vel1x = ((b1.m - b2.m) / (b1.m + b2.m)) * v1x + (2 * b2.m / (b1.m + b2.m)) * v2x, vel2x = (2 * b1.m / (b1.m + b2.m)) * v1x + ((b2.m - b1.m) / (b2.m + b1.m)) * v2x, vel1y = v1y, vel2y = v2y; //Set the new velocities b1.vx = vel1x; b2.vx = vel2x; b1.vy = vel1y; b2.vy = vel2y; } } You can see the experiment here. The problem is, some balls overlap each other and stick together while some of them rebound perfectly. I don't know what is causing this issue. Here's my balls object if that matters: function Ball() { //Random Positions this.x = 50 + Math.random() * W; this.y = 50 + Math.random() * H; //Random radii this.r = 15 + Math.random() * 30; this.m = this.r; //Random velocity components this.vx = 1 + Math.random() * 4; this.vy = 1 + Math.random() * 4; //Random shade of grey color this.c = Math.round(Math.random() * 200); this.draw = function() { ctx.beginPath(); ctx.fillStyle = "rgb(" + this.c + ", " + this.c + ", " + this.c + ")"; ctx.arc(this.x, this.y, this.r, 0, Math.PI*2, false); ctx.fill(); ctx.closePath(); } }
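
    A common reason for balls sticking together is that a fast-moving pair interpenetrates in one step and the velocity swap then fires again on the next frame while they still overlap, reversing the separation. Two usual remedies are to respond only while the balls are approaching each other, and to push them apart along the collision normal after responding. A sketch in Java; the Ball fields mirror the ones in the post, but everything else is a hypothetical illustration rather than a drop-in fix.

    class CollisionSketch {
        static class Ball { double x, y, r, m, vx, vy; }

        static void separate(Ball b1, Ball b2) {
            double dx = b2.x - b1.x;
            double dy = b2.y - b1.y;
            double dist = Math.sqrt(dx * dx + dy * dy);
            double overlap = (b1.r + b2.r) - dist;
            if (overlap <= 0 || dist == 0) {
                return; // not touching, or exactly coincident (skip to avoid dividing by zero)
            }
            // Unit collision normal pointing from b1 to b2.
            double nx = dx / dist;
            double ny = dy / dist;
            // Push the balls apart in proportion to the other ball's mass so they no longer overlap.
            double total = b1.m + b2.m;
            b1.x -= nx * overlap * (b2.m / total);
            b1.y -= ny * overlap * (b2.m / total);
            b2.x += nx * overlap * (b1.m / total);
            b2.y += ny * overlap * (b1.m / total);
            // Relative velocity along the normal; negative means the balls are closing in.
            double relVel = (b2.vx - b1.vx) * nx + (b2.vy - b1.vy) * ny;
            if (relVel < 0) {
                // ...the velocity response from the post belongs here, so it only
                // fires while the balls are still approaching each other.
            }
        }

        public static void main(String[] args) {
            Ball a = new Ball(); a.x = 0;  a.r = 10; a.m = 10;
            Ball b = new Ball(); b.x = 15; b.r = 10; b.m = 10;
            separate(a, b);
            System.out.println(a.x + " " + b.x); // -2.5 17.5: pushed apart until just touching
        }
    }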

    Read the article

  • GLSL per pixel lighting with custom light type

    - by Justin
    Ok, I am having a big problem here. I just got into GLSL yesterday, so the code will be terrible, I'm sure. Basically, I am attempting to make a light that can be passed into the fragment shader (for learning purposes). I have four input values: one for the position of the light, one for the color, one for the distance it can travel, and one for the intensity. I want to find the distance between the light and the fragment, then calculate the color from there. The code I have gives me a simply gorgeous ring of light that get's twisted and widened as the matrix is modified. I love the results, but it is not even close to what I am after. I want the light to be moved with all of the vertices, so it is always in the same place in relation to the objects. I can easily take it from there, but getting that to work seems to be impossible with my current structure. Can somebody give me a few pointers (pun not intended)? Vertex shader: attribute vec4 position; attribute vec4 color; attribute vec2 textureCoordinates; varying vec4 colorVarying; varying vec2 texturePosition; varying vec4 fposition; varying vec4 lightPosition; varying float lightDistance; varying float lightIntensity; varying vec4 lightColor; void main() { vec4 ECposition = gl_ModelViewMatrix * gl_Vertex; vec3 tnorm = normalize(vec3 (gl_NormalMatrix * gl_Normal)); fposition = ftransform(); gl_Position = fposition; gl_TexCoord[0] = gl_MultiTexCoord0; fposition = ECposition; lightPosition = vec4(0.0, 0.0, 5.0, 0.0) * gl_ModelViewMatrix * gl_Vertex; lightDistance = 5.0; lightIntensity = 1.0; lightColor = vec4(0.2, 0.2, 0.2, 1.0); } Fragment shader: varying vec4 colorVarying; varying vec2 texturePosition; varying vec4 fposition; varying vec4 lightPosition; varying float lightDistance; varying float lightIntensity; varying vec4 lightColor; uniform sampler2D texture; void main() { float l_distance = sqrt((gl_FragCoord.x * lightPosition.x) + (gl_FragCoord.y * lightPosition.y) + (gl_FragCoord.z * lightPosition.z)); float l_value = lightIntensity / (l_distance / lightDistance); vec4 l_color = vec4(l_value * lightColor.r, l_value * lightColor.g, l_value * lightColor.b, l_value * lightColor.a); vec4 color; color = texture2D(texture, gl_TexCoord[0].st); gl_FragColor = l_color * color; //gl_FragColor = fposition; }
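
    One purely mathematical issue stands out: the fragment shader takes the square root of a sum of products of coordinates, which is a dot-product-like quantity, not the distance between the fragment and the light; the distance is the length of the difference vector, and both points have to live in the same space (the eye-space fposition varying is a more natural partner for an eye-space light position than gl_FragCoord, which is in window space). The expression vec4(0.0, 0.0, 5.0, 0.0) * gl_ModelViewMatrix * gl_Vertex also ends up doing a component-wise multiply with gl_Vertex, which is probably not the intended transform. A small Java sketch of the intended distance/attenuation maths, with hypothetical names and a simple linear falloff chosen as an assumption:

    final class LightMath {
        private LightMath() {}

        /** Euclidean distance between a fragment position and a light position given in the same space. */
        static double distance(double[] fragPos, double[] lightPos) {
            double dx = fragPos[0] - lightPos[0];
            double dy = fragPos[1] - lightPos[1];
            double dz = fragPos[2] - lightPos[2];
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }

        /** Simple linear falloff: full intensity at the light, zero at lightDistance and beyond. */
        static double attenuation(double dist, double lightDistance, double intensity) {
            return intensity * Math.max(0.0, 1.0 - dist / lightDistance);
        }

        public static void main(String[] args) {
            double d = distance(new double[]{1, 2, 3}, new double[]{1, 2, 8});
            System.out.println(attenuation(d, 10.0, 1.0)); // 0.5: five units from a light with 10-unit reach
        }
    }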

    Read the article

  • Replacing latex with unicode symbols

    - by Elazar Leibovich
    Often, during a conversation or an email, or at a forum, I would like to type some math, but I don't need full equation support. Unicode symbols should suffice. What I need is an easy way to type math-related unicode symbols. Since I already know latex, it makes sense to use the latex symbol mnemonics to type the math symbols. What I currently did is to write an AutoHotKey script which automatically replaces \latexSymbol with the corresponding unicode symbol, using the "hotstrings" AutoHotKey features. However, the AutoHotKey hotstrings proved unstable for many strings. Having a couple of tens of lines would cause AHK to fail to recognize the strings from time to time. Any other solution? (No, Alt+unicode number isn't convenient enough) Attached is my AHK script. The PutUni function is taken from here. ::\infty:: PutUni("e2889e") return ::\sum:: PutUni("e28891") return ::\int:: PutUni("e288ab") return ::\pm:: PutUni("c2b1") return ::\alpha:: PutUni("c991") return ::\beta:: PutUni("c992") return ::\phi:: PutUni("c9b8") return ::\delta:: PutUni("ceb4") return ::\pi:: PutUni("cf80") return ::\omega:: PutUni("cf89") return ::\in:: PutUni("e28888") return ::\notin:: PutUni("e28889") return ::\iff:: PutUni("e28794") return ::\leq:: PutUni("e289a4") return ::\geq:: PutUni("e289a5") return ::\sqrt:: PutUni("e2889a") return ::\neq:: PutUni("e289a0") return ::\subset:: PutUni("e28a82") return ::\nsubset:: PutUni("e28a84") return ::\nsubseteq:: PutUni("e28a88") return ::\subseteq:: PutUni("e28a86") return ::\prod:: PutUni("e2888f") return ::\N:: PutUni("e28495") return
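
    If the hotstrings stay flaky, the same mnemonic-to-symbol table is easy to drive from a small program instead. Below is a minimal Java sketch of the substitution; the class name is made up, the table is only a subset, and it uses the standard Greek codepoints rather than the exact byte sequences from the script above.

    import java.util.LinkedHashMap;
    import java.util.Map;

    final class TexToUnicode {
        // Longer mnemonics must be inserted before any shorter prefix (e.g. \infty before \in)
        // because the naive replace loop below applies entries in insertion order.
        private static final Map<String, String> SYMBOLS = new LinkedHashMap<>();
        static {
            SYMBOLS.put("\\infty", "\u221E");
            SYMBOLS.put("\\sum",   "\u2211");
            SYMBOLS.put("\\int",   "\u222B");
            SYMBOLS.put("\\pm",    "\u00B1");
            SYMBOLS.put("\\alpha", "\u03B1");
            SYMBOLS.put("\\beta",  "\u03B2");
            SYMBOLS.put("\\pi",    "\u03C0");
            SYMBOLS.put("\\leq",   "\u2264");
            SYMBOLS.put("\\geq",   "\u2265");
            SYMBOLS.put("\\sqrt",  "\u221A");
            SYMBOLS.put("\\neq",   "\u2260");
        }

        /** Replace every known \mnemonic in the input with its Unicode symbol. */
        static String replace(String text) {
            for (Map.Entry<String, String> e : SYMBOLS.entrySet()) {
                text = text.replace(e.getKey(), e.getValue());
            }
            return text;
        }

        public static void main(String[] args) {
            System.out.println(replace("x \\geq 0 and \\sqrt 2 \\neq \\pi"));
        }
    }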

    Read the article

  • Binary Cosine Coefficient

    - by hairyyak
    I was given the following forumulae for calculating this sim=|QnD| / v|Q|v|D| I went ahed and implemented a class to compare strings consisting of a series of words #pragma once #include <vector> #include <string> #include <iostream> #include <vector> using namespace std; class StringSet { public: StringSet(void); StringSet( const string the_strings[], const int no_of_strings); ~StringSet(void); StringSet( const vector<string> the_strings); void add_string( const string the_string); bool remove_string( const string the_string); void clear_set(void); int no_of_strings(void) const; friend ostream& operator <<(ostream& outs, StringSet& the_strings); friend StringSet operator *(const StringSet& first, const StringSet& second); friend StringSet operator +(const StringSet& first, const StringSet& second); double binary_coefficient( const StringSet& the_second_set); private: vector<string> set; }; #include "StdAfx.h" #include "StringSet.h" #include <iterator> #include <algorithm> #include <stdexcept> #include <iostream> #include <cmath> StringSet::StringSet(void) { } StringSet::~StringSet(void) { } StringSet::StringSet( const vector<string> the_strings) { set = the_strings; } StringSet::StringSet( const string the_strings[], const int no_of_strings) { copy( the_strings, &the_strings[no_of_strings], back_inserter(set)); } void StringSet::add_string( const string the_string) { try { if( find( set.begin(), set.end(), the_string) == set.end()) { set.push_back(the_string); } else { //String is already in the set. throw domain_error("String is already in the set"); } } catch( domain_error e) { cout << e.what(); exit(1); } } bool StringSet::remove_string( const string the_string) { //Found the occurrence of the string. return it an iterator pointing to it. vector<string>::iterator iter; if( ( iter = find( set.begin(), set.end(), the_string) ) != set.end()) { set.erase(iter); return true; } return false; } void StringSet::clear_set(void) { set.clear(); } int StringSet::no_of_strings(void) const { return set.size(); } ostream& operator <<(ostream& outs, StringSet& the_strings) { vector<string>::const_iterator const_iter = the_strings.set.begin(); for( ; const_iter != the_strings.set.end(); const_iter++) { cout << *const_iter << " "; } cout << endl; return outs; } //This function returns the union of the two string sets. StringSet operator *(const StringSet& first, const StringSet& second) { vector<string> new_string_set; new_string_set = first.set; for( unsigned int i = 0; i < second.set.size(); i++) { vector<string>::const_iterator const_iter = find(new_string_set.begin(), new_string_set.end(), second.set[i]); //String is new - include it. if( const_iter == new_string_set.end() ) { new_string_set.push_back(second.set[i]); } } StringSet the_set(new_string_set); return the_set; } //This method returns the intersection of the two string sets. StringSet operator +(const StringSet& first, const StringSet& second) { //For each string in the first string look though the second and see if //there is a matching pair, in which case include the string in the set. vector<string> new_string_set; vector<string>::const_iterator const_iter = first.set.begin(); for ( ; const_iter != first.set.end(); ++const_iter) { //Then search through the entire second string to see if //there is a duplicate. 
vector<string>::const_iterator const_iter2 = second.set.begin(); for( ; const_iter2 != second.set.end(); const_iter2++) { if( *const_iter == *const_iter2 ) { new_string_set.push_back(*const_iter); } } } StringSet new_set(new_string_set); return new_set; } double StringSet::binary_coefficient( const StringSet& the_second_set) { double coefficient; StringSet intersection = the_second_set + set; coefficient = intersection.no_of_strings() / sqrt((double) no_of_strings()) * sqrt((double)the_second_set.no_of_strings()); return coefficient; } However when I try and calculate the coefficient using the following main function: // Exercise13.cpp : main project file. #include "stdafx.h" #include <boost/regex.hpp> #include "StringSet.h" using namespace System; using namespace System::Runtime::InteropServices; using namespace boost; //This function takes as input a string, which //is then broken down into a series of words //where the punctuaction is ignored. StringSet break_string( const string the_string) { regex re; cmatch matches; StringSet words; string search_pattern = "\\b(\\w)+\\b"; try { // Assign the regular expression for parsing. re = search_pattern; } catch( regex_error& e) { cout << search_pattern << " is not a valid regular expression: \"" << e.what() << "\"" << endl; exit(1); } sregex_token_iterator p(the_string.begin(), the_string.end(), re, 0); sregex_token_iterator end; for( ; p != end; ++p) { string new_string(p->first, p->second); String^ copy_han = gcnew String(new_string.c_str()); String^ copy_han2 = copy_han->ToLower(); char* str2 = (char*)(void*)Marshal::StringToHGlobalAnsi(copy_han2); string new_string2(str2); words.add_string(new_string2); } return words; } int main(array<System::String ^> ^args) { StringSet words = break_string("Here is a string, with some; words"); StringSet words2 = break_string("There is another string,"); cout << words.binary_coefficient(words2); return 0; } I get an index which is 1.5116 rather than a value from 0 to 1. Does anybody have a clue why this is the case? Any help would be appreciated.
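
    Reading the garbled formula back: the binary cosine coefficient is sim = |Q ∩ D| / (sqrt(|Q|) * sqrt(|D|)). In the posted binary_coefficient, the division binds before the second multiplication, so it actually computes (|Q ∩ D| / sqrt(|Q|)) * sqrt(|D|), which can exceed 1 (2 / sqrt(7) * sqrt(4) ≈ 1.51, essentially the value reported); wrapping the two square roots in parentheses should fix it. For comparison, here is a hedged Java sketch of the same computation over plain string sets (not a translation of the C++ class):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    final class BinaryCosine {
        private BinaryCosine() {}

        /** sim = |Q ∩ D| / (sqrt(|Q|) * sqrt(|D|)); defined as 0 when either set is empty. */
        static double coefficient(Set<String> q, Set<String> d) {
            if (q.isEmpty() || d.isEmpty()) {
                return 0.0;
            }
            Set<String> intersection = new HashSet<>(q);
            intersection.retainAll(d);
            return intersection.size() / (Math.sqrt(q.size()) * Math.sqrt(d.size()));
        }

        public static void main(String[] args) {
            Set<String> q = new HashSet<>(Arrays.asList("here", "is", "a", "string", "with", "some", "words"));
            Set<String> d = new HashSet<>(Arrays.asList("there", "is", "another", "string"));
            System.out.println(coefficient(q, d)); // always lands in [0, 1]
        }
    }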

    Read the article

  • How do I implement AABB ray cast hit checking for opengl es on the iPhone

    - by Big Fizzy
    Basically, I draw a 3D cube, I can spin it around but I want to be able to touch it and know where on my cube's surface the user touched. I'm using for setting up, generating and spinning. Its based on the Molecules code and NeHe tutorial #5. Any help, links, tutorials and code would be greatly appreciated. I have lots of development experience but nothing much in the way of openGL and 3d. // // GLViewController.h // NeHe Lesson 05 // // Created by Jeff LaMarche on 12/12/08. // Copyright Jeff LaMarche Consulting 2008. All rights reserved. // #import "GLViewController.h" #import "GLView.h" @implementation GLViewController - (void)drawBox { static const GLfloat cubeVertices[] = { -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,-1.0f, 1.0f, -1.0f,-1.0f, 1.0f, -1.0f, 1.0f,-1.0f, 1.0f, 1.0f,-1.0f, 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f }; static const GLubyte cubeNumberOfIndices = 36; const GLubyte cubeVertexFaces[] = { 0, 1, 5, // Half of top face 0, 5, 4, // Other half of top face 4, 6, 5, // Half of front face 4, 6, 7, // Other half of front face 0, 1, 2, // Half of back face 0, 3, 2, // Other half of back face 1, 2, 5, // Half of right face 2, 5, 6, // Other half of right face 0, 3, 4, // Half of left face 7, 4, 3, // Other half of left face 3, 6, 2, // Half of bottom face 6, 7, 3, // Other half of bottom face }; const GLubyte cubeFaceColors[] = { 0, 255, 0, 255, 255, 125, 0, 255, 255, 0, 0, 255, 255, 255, 0, 255, 0, 0, 255, 255, 255, 0, 255, 255 }; glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, 0, cubeVertices); int colorIndex = 0; for(int i = 0; i < cubeNumberOfIndices; i += 3) { glColor4ub(cubeFaceColors[colorIndex], cubeFaceColors[colorIndex+1], cubeFaceColors[colorIndex+2], cubeFaceColors[colorIndex+3]); int face = (i / 3.0); if (face%2 != 0.0) colorIndex+=4; glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, &cubeVertexFaces[i]); } glDisableClientState(GL_VERTEX_ARRAY); } //move this to a data model later! 
- (GLfixed)floatToFixed:(GLfloat)aValue; { return (GLfixed) (aValue * 65536.0f); } - (void)drawViewByRotatingAroundX:(float)xRotation rotatingAroundY:(float)yRotation scaling:(float)scaleFactor translationInX:(float)xTranslation translationInY:(float)yTranslation view:(GLView*)view; { glMatrixMode(GL_MODELVIEW); GLfixed currentModelViewMatrix[16] = { 45146, 47441, 2485, 0, -25149, 26775,-54274, 0, -40303, 36435, 36650, 0, 0, 0, 0, 65536 }; /* GLfixed currentModelViewMatrix[16] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 65536 }; */ //glLoadIdentity(); //glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 4.0f); // Reset rotation system if (isFirstDrawing) { //glLoadIdentity(); glMultMatrixx(currentModelViewMatrix); [self configureLighting]; isFirstDrawing = NO; } // Scale the view to fit current multitouch scaling GLfixed fixedPointScaleFactor = [self floatToFixed:scaleFactor]; glScalex(fixedPointScaleFactor, fixedPointScaleFactor, fixedPointScaleFactor); // Perform incremental rotation based on current angles in X and Y glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix); GLfloat totalRotation = sqrt(xRotation*xRotation + yRotation*yRotation); glRotatex([self floatToFixed:totalRotation], (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[1] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[0]), (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[5] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[4]), (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[9] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[8]) ); // Translate the model by the accumulated amount glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix); float currentScaleFactor = sqrt(pow((GLfloat)currentModelViewMatrix[0] / 65536.0f, 2.0f) + pow((GLfloat)currentModelViewMatrix[1] / 65536.0f, 2.0f) + pow((GLfloat)currentModelViewMatrix[2] / 65536.0f, 2.0f)); xTranslation = xTranslation / (currentScaleFactor * currentScaleFactor); yTranslation = yTranslation / (currentScaleFactor * currentScaleFactor); // Grab the current model matrix, and use the (0,4,8) components to figure the eye's X axis in the model coordinate system, translate along that glTranslatef(xTranslation * (GLfloat)currentModelViewMatrix[0] / 65536.0f, xTranslation * (GLfloat)currentModelViewMatrix[4] / 65536.0f, xTranslation * (GLfloat)currentModelViewMatrix[8] / 65536.0f); // Grab the current model matrix, and use the (1,5,9) components to figure the eye's Y axis in the model coordinate system, translate along that glTranslatef(yTranslation * (GLfloat)currentModelViewMatrix[1] / 65536.0f, yTranslation * (GLfloat)currentModelViewMatrix[5] / 65536.0f, yTranslation * (GLfloat)currentModelViewMatrix[9] / 65536.0f); // Black background, with depth buffer enabled glClearColor(0.0f, 0.0f, 0.0f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); [self drawBox]; } - (void)configureLighting; { const GLfixed lightAmbient[] = {13107, 13107, 13107, 65535}; const GLfixed lightDiffuse[] = {65535, 65535, 65535, 65535}; const GLfixed matAmbient[] = {65535, 65535, 65535, 65535}; const GLfixed matDiffuse[] = {65535, 65535, 65535, 65535}; const GLfixed lightPosition[] = {30535, -30535, 0, 0}; const GLfixed lightShininess = 20; glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glEnable(GL_COLOR_MATERIAL); glMaterialxv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient); glMaterialxv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse); glMaterialx(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess); 
glLightxv(GL_LIGHT0, GL_AMBIENT, lightAmbient); glLightxv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse); glLightxv(GL_LIGHT0, GL_POSITION, lightPosition); glEnable(GL_DEPTH_TEST); glShadeModel(GL_SMOOTH); glEnable(GL_NORMALIZE); } -(void)setupView:(GLView*)view { const GLfloat zNear = 0.1, zFar = 1000.0, fieldOfView = 60.0; GLfloat size; glMatrixMode(GL_PROJECTION); glEnable(GL_DEPTH_TEST); size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0); CGRect rect = view.bounds; glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar); glViewport(0, 0, rect.size.width, rect.size.height); glScissor(0, 0, rect.size.width, rect.size.height); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glClearColor(0.0f, 0.0f, 0.0f, 1.0f); glTranslatef(0.0f, 0.0f, -6.0f); isFirstDrawing = YES; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; } - (void)dealloc { [super dealloc]; } @end
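
    The usual structure for this is: convert the touch point into a ray in world (or model) space by unprojecting it at the near and far planes, then test that ray against the cube's axis-aligned bounding box with the slab method; for a rotated cube the ray is transformed into the cube's local space first. The unprojection part is platform specific, but the slab test itself is short. A sketch in Java with hypothetical names, shown here only to illustrate the algorithm:

    final class RayAabb {
        private RayAabb() {}

        /** Returns the distance t along the ray to the entry point, or -1 if the ray misses the box. */
        static double intersect(double[] origin, double[] dir, double[] boxMin, double[] boxMax) {
            double tMin = Double.NEGATIVE_INFINITY;
            double tMax = Double.POSITIVE_INFINITY;
            for (int axis = 0; axis < 3; axis++) {
                if (dir[axis] != 0.0) {
                    double t1 = (boxMin[axis] - origin[axis]) / dir[axis];
                    double t2 = (boxMax[axis] - origin[axis]) / dir[axis];
                    tMin = Math.max(tMin, Math.min(t1, t2));
                    tMax = Math.min(tMax, Math.max(t1, t2));
                } else if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis]) {
                    return -1.0; // ray is parallel to this slab and starts outside it
                }
            }
            if (tMax < Math.max(tMin, 0.0)) {
                return -1.0; // the slabs do not overlap, or the box is entirely behind the ray
            }
            return Math.max(tMin, 0.0);
        }

        public static void main(String[] args) {
            double t = intersect(new double[]{0, 0, -5}, new double[]{0, 0, 1},
                                 new double[]{-1, -1, -1}, new double[]{1, 1, 1});
            System.out.println(t); // 4.0: the ray hits the unit cube four units ahead of its origin
        }
    }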

    Read the article

  • Segmentation fault in my C program

    - by user233542
    I don't understand why this would give me a seg fault. Any ideas? This is the function that returns the signal to stop the program (plus the other function that is called within this): double bisect(double A0,double A1,double Sol[N],double tol,double c) { double Amid,shot; while (A1-A0 > tol) { Amid = 0.5*(A0+A1); shot = shoot(Sol, Amid, c); if (shot==2.*Pi) { return Amid; } if (shot > 2.*Pi){ A1 = Amid; } else if (shot < 2.*Pi){ A0 = Amid; } } return 0.5*(A1+A0); } double shoot(double Sol[N],double A,double c) { int i,j; /*Initial Conditions*/ for (i=0;i<buff;i++) { Sol[i] = 0.; } for (i=buff+l;i<N;i++) { Sol[i] = 2.*Pi; } Sol[buff]= 0; Sol[buff+1]= A*exp(sqrt(1+3*c)*dx); for (i=buff+2;i<buff+l;i++) { Sol[i] = (dx*dx)*( sin(Sol[i-1]) + c*sin(3.*(Sol[i-1])) ) - Sol[i-2] + 2.*Sol[i-1]; } return Sol[i-1]; } The values buff, l, N are defined using a #define statement. l = 401, buff = 50, N = 2000 Here is the full code: #include <stdio.h> #include <stdlib.h> #include <math.h> #define w 10 /*characteristic width of a soliton*/ #define dx 0.05 /*distance between lattice sites*/ #define s (2*w)/dx /*size of soliton shape*/ #define l (int)(s+1) /*array length for soliton*/ #define N (int)2000 /*length of field array--lattice sites*/ #define Pi (double)4*atan(1) #define buff (int)50 double shoot(double Sol[N],double A,double c); double bisect(double A0,double A1,double Sol[N],double tol,double c); void super_pos(double antiSol[N],double Sol[N],double phi[][N]); void vel_ver(double phi[][N],double v,double c,int tsteps,double dt); int main(int argc, char **argv) { double c,Sol[N],antiSol[N],A,A0,A1,tol,v,dt; int tsteps,i; FILE *fp1,*fp2,*fp3; fp1 = fopen("soliton.dat","w"); fp2 = fopen("final-phi.dat","w"); fp3 = fopen("energy.dat","w"); printf("Please input the number of time steps:"); scanf("%d",&tsteps); printf("Also, enter the time step size:"); scanf("%lf",&dt); do{ printf("Please input the parameter c in the interval [-1/3,1]:"); scanf("%lf",&c);} while(c < (-1./3.) 
|| c > 1.); printf("Please input the inital speed of eiter soliton:"); scanf("%lf",&v); double phi[tsteps+1][N]; tol = 0.0000001; A0 = 0.; A1 = 2.*Pi; A = bisect(A0,A1,Sol,tol,c); shoot(Sol,A,c); for (i=0;i<N;i++) { fprintf(fp1,"%d\t",i); fprintf(fp1,"%lf\n",Sol[i]); } fclose(fp1); super_pos(antiSol,Sol,phi); /*vel_ver(phi,v,c,tsteps,dt); for (i=0;i<N;i++){ fprintf(fp2,"%d\t",i); fprintf(fp2,"%lf\n",phi[tsteps][i]); }*/ } double shoot(double Sol[N],double A,double c) { int i,j; /*Initial Conditions*/ for (i=0;i<buff;i++) { Sol[i] = 0.; } for (i=buff+l;i<N;i++) { Sol[i] = 2.*Pi; } Sol[buff]= 0; Sol[buff+1]= A*exp(sqrt(1+3*c)*dx); for (i=buff+2;i<buff+l;i++) { Sol[i] = (dx*dx)*( sin(Sol[i-1]) + c*sin(3.*(Sol[i-1])) ) - Sol[i-2] + 2.*Sol[i-1]; } return Sol[i-1]; } double bisect(double A0,double A1,double Sol[N],double tol,double c) { double Amid,shot; while (A1-A0 > tol) { Amid = 0.5*(A0+A1); shot = shoot(Sol, Amid, c); if (shot==2.*Pi) { return Amid; } if (shot > 2.*Pi){ A1 = Amid; } else if (shot < 2.*Pi){ A0 = Amid; } } return 0.5*(A1+A0); } void super_pos(double antiSol[N],double Sol[N],double phi[][N]) { int i; /*for (i=0;i<N;i++) { phi[i]=0; } for (i=buffer+s;i<1950-s;i++) { phi[i]=2*Pi; }*/ for (i=0;i<N;i++) { antiSol[i] = Sol[N-i]; } /*for (i=0;i<s+1;i++) { phi[buffer+j] = Sol[j]; phi[1549+j] = antiSol[j]; }*/ for (i=0;i<N;i++) { phi[0][i] = antiSol[i] + Sol[i] - 2.*Pi; } } /* This funciton will set the 2nd input array to the derivative at the time t, for all points x in the lattice */ void deriv2(double phi[][N],double DphiDx2[][N],int t) { //double SolDer2[s+1]; int x; for (x=0;x<N;x++) { DphiDx2[t][x] = (phi[buff+x+1][t] + phi[buff+x-1][t] - 2.*phi[x][t])/(dx*dx); } /*for (i=0;i<N;i++) { ptr[i] = &SolDer2[i]; }*/ //return DphiDx2[x]; } void vel_ver(double phi[][N],double v,double c,int tsteps,double dt) { int t,x; double d1,d2,dp,DphiDx1[tsteps+1][N],DphiDx2[tsteps+1][N],dpdt[tsteps+1][N],p[tsteps+1][N]; for (t=0;t<tsteps;t++){ if (t==0){ for (x=0;x<N;x++){//inital conditions deriv2(phi,DphiDx2,t); dpdt[t][x] = DphiDx2[t][x] - sin(phi[t][x]) - sin(3.*phi[t][x]); DphiDx1[t][x] = (phi[t][x+1] - phi[t][x])/dx; p[t][x] = -v*DphiDx1[t][x]; } } for (x=0;x<N;x++){//velocity-verlet phi[t+1][x] = phi[t][x] + dt*p[t][x] + (dt*dt/2)*dpdt[t][x]; p[t+1][x] = p[t][x] + (dt/2)*dpdt[t][x]; deriv2(phi,DphiDx2,t+1); dpdt[t][x] = DphiDx2[t][x] - sin(phi[t+1][x]) - sin(3.*phi[t+1][x]); p[t+1][x] += (dt/2)*dpdt[t+1][x]; } } } So, this really isn't due to my overwriting the end of the Sol array. I've commented out both functions that I suspected of causing the problem (bisect or shoot) and inserted a print function. Two things happen. When I have code like below: double A,Pi,B,c; c=0; Pi = 4.*atan(1.); A = Pi; B = 1./4.; printf("%lf",B); B = shoot(Sol,A,c); printf("%lf",B); I get a segfault from the function, shoot. However, if I take away the shoot function so that I have: double A,Pi,B,c; c=0; Pi = 4.*atan(1.); A = Pi; B = 1./4.; printf("%lf",B); it gives me a segfault at the printf... Why!?
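
    Two things in this code are likely culprits. First, antiSol[i] = Sol[N-i] reads one element past the end of Sol when i is 0 (valid indices run 0..N-1). Second, the variable-length 2D arrays declared on the stack, such as double phi[tsteps+1][N] in main and the four similar arrays inside vel_ver, grow with tsteps and can overflow the stack, which often surfaces as a crash at an apparently unrelated line such as a printf. Both points are guesses from reading the code, not verified fixes. For the off-by-one, here is a tiny Java sketch of the corrected mirror index (in Java the bad read would throw immediately, whereas in C it is silent undefined behaviour):

    final class ReversalSketch {
        static final int N = 2000;

        public static void main(String[] args) {
            double[] sol = new double[N];
            double[] antiSol = new double[N];
            for (int i = 0; i < N; i++) {
                // sol[N - i] would be out of bounds at i == 0; the last valid index is N - 1,
                // so the mirrored element is sol[N - 1 - i].
                antiSol[i] = sol[N - 1 - i];
            }
            System.out.println(antiSol.length);
        }
    }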

    Read the article

  • F# - Facebook Hacker Cup - Double Squares

    - by Jacob
    I'm working on strengthening my F#-fu and decided to tackle the Facebook Hacker Cup Double Squares problem. I'm having some problems with the run-time and was wondering if anyone could help me figure out why it is so much slower than my C# equivalent. There's a good description from another post; Source: Facebook Hacker Cup Qualification Round 2011 A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3^2 + 1^2. Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3^2 + 1^2 (we don't count 1^2 + 3^2 as being different). On the other hand, 25 can be written as 5^2 + 0^2 or as 4^2 + 3^2. You need to solve this problem for 0 = X = 2,147,483,647. Examples: 10 = 1 25 = 2 3 = 0 0 = 1 1 = 1 My basic strategy (which I'm open to critique on) is to; Create a dictionary (for memoize) of the input numbers initialzed to 0 Get the largest number (LN) and pass it to count/memo function Get the LN square root as int Calculate squares for all numbers 0 to LN and store in dict Sum squares for non repeat combinations of numbers from 0 to LN If sum is in memo dict, add 1 to memo Finally, output the counts of the original numbers. Here is the F# code (See code changes at bottom) I've written that I believe corresponds to this strategy (Runtime: ~8:10); open System open System.Collections.Generic open System.IO /// Get a sequence of values let rec range min max = seq { for num in [min .. max] do yield num } /// Get a sequence starting from 0 and going to max let rec zeroRange max = range 0 max /// Find the maximum number in a list with a starting accumulator (acc) let rec maxNum acc = function | [] -> acc | p::tail when p > acc -> maxNum p tail | p::tail -> maxNum acc tail /// A helper for finding max that sets the accumulator to 0 let rec findMax nums = maxNum 0 nums /// Build a collection of combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) let rec combos range = seq { let count = ref 0 for inner in range do for outer in Seq.skip !count range do yield (inner, outer) count := !count + 1 } let rec squares nums = let dict = new Dictionary<int, int>() for s in nums do dict.[s] <- (s * s) dict /// Counts the number of possible double squares for a given number and keeps track of other counts that are provided in the memo dict. let rec countDoubleSquares (num: int) (memo: Dictionary<int, int>) = // The highest relevent square is the square root because it squared plus 0 squared is the top most possibility let maxSquare = System.Math.Sqrt((float)num) // Our relevant squares are 0 to the highest possible square; note the cast to int which shouldn't hurt. let relSquares = range 0 ((int)maxSquare) // calculate the squares up front; let calcSquares = squares relSquares // Build up our square combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) for (sq1, sq2) in combos relSquares do let v = calcSquares.[sq1] + calcSquares.[sq2] // Memoize our relevant results if memo.ContainsKey(v) then memo.[v] <- memo.[v] + 1 // return our count for the num passed in memo.[num] // Read our numbers from file. 
//let lines = File.ReadAllLines("test2.txt") //let nums = [ for line in Seq.skip 1 lines -> Int32.Parse(line) ] // Optionally, read them from straight array let nums = [1740798996; 1257431873; 2147483643; 602519112; 858320077; 1048039120; 415485223; 874566596; 1022907856; 65; 421330820; 1041493518; 5; 1328649093; 1941554117; 4225; 2082925; 0; 1; 3] // Initialize our memoize dictionary let memo = new Dictionary<int, int>() for num in nums do memo.[num] <- 0 // Get the largest number in our set, all other numbers will be memoized along the way let maxN = findMax nums // Do the memoize let maxCount = countDoubleSquares maxN memo // Output our results. for num in nums do printfn "%i" memo.[num] // Have a little pause for when we debug let line = Console.Read() And here is my version in C# (Runtime: ~1:40: using System; using System.Collections.Generic; using System.Diagnostics; using System.IO; using System.Linq; using System.Text; namespace FBHack_DoubleSquares { public class TestInput { public int NumCases { get; set; } public List<int> Nums { get; set; } public TestInput() { Nums = new List<int>(); } public int MaxNum() { return Nums.Max(); } } class Program { static void Main(string[] args) { // Read input from file. //TestInput input = ReadTestInput("live.txt"); // As example, load straight. TestInput input = new TestInput { NumCases = 20, Nums = new List<int> { 1740798996, 1257431873, 2147483643, 602519112, 858320077, 1048039120, 415485223, 874566596, 1022907856, 65, 421330820, 1041493518, 5, 1328649093, 1941554117, 4225, 2082925, 0, 1, 3, } }; var maxNum = input.MaxNum(); Dictionary<int, int> memo = new Dictionary<int, int>(); foreach (var num in input.Nums) { if (!memo.ContainsKey(num)) memo.Add(num, 0); } DoMemoize(maxNum, memo); StringBuilder sb = new StringBuilder(); foreach (var num in input.Nums) { //Console.WriteLine(memo[num]); sb.AppendLine(memo[num].ToString()); } Console.Write(sb.ToString()); var blah = Console.Read(); //File.WriteAllText("out.txt", sb.ToString()); } private static int DoMemoize(int num, Dictionary<int, int> memo) { var highSquare = (int)Math.Floor(Math.Sqrt(num)); var squares = CreateSquareLookup(highSquare); var relSquares = squares.Keys.ToList(); Debug.WriteLine("Starting - " + num.ToString()); Debug.WriteLine("RelSquares.Count = {0}", relSquares.Count); int sum = 0; var index = 0; foreach (var square in relSquares) { foreach (var inner in relSquares.Skip(index)) { sum = squares[square] + squares[inner]; if (memo.ContainsKey(sum)) memo[sum]++; } index++; } if (memo.ContainsKey(num)) return memo[num]; return 0; } private static TestInput ReadTestInput(string fileName) { var lines = File.ReadAllLines(fileName); var input = new TestInput(); input.NumCases = int.Parse(lines[0]); foreach (var lin in lines.Skip(1)) { input.Nums.Add(int.Parse(lin)); } return input; } public static Dictionary<int, int> CreateSquareLookup(int maxNum) { var dict = new Dictionary<int, int>(); int square; foreach (var num in Enumerable.Range(0, maxNum)) { square = num * num; dict[num] = square; } return dict; } } } Thanks for taking a look. UPDATE Changing the combos function slightly will result in a pretty big performance boost (from 8 min to 3:45): /// Old and Busted... let rec combosOld range = seq { let rangeCache = Seq.cache range let count = ref 0 for inner in rangeCache do for outer in Seq.skip !count rangeCache do yield (inner, outer) count := !count + 1 } /// The New Hotness... let rec combos maxNum = seq { for i in 0..maxNum do for j in i..maxNum do yield i,j }
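
    Separate from the F#-versus-C# comparison, the per-number work can be reduced from enumerating all pairs of squares to a single loop: for each a from 0 up to sqrt(X/2), test whether X - a*a is itself a perfect square; counting with a <= b avoids double counting. A hedged Java sketch of that routine (not a port of either program above):

    final class DoubleSquares {
        private DoubleSquares() {}

        /** Number of ways x can be written as a^2 + b^2 with 0 <= a <= b. */
        static int count(long x) {
            int ways = 0;
            for (long a = 0; a * a * 2 <= x; a++) {
                long rest = x - a * a;
                long b = (long) Math.sqrt((double) rest);
                // Math.sqrt on large values can be off by one, so nudge b onto the exact floor.
                while (b * b > rest) b--;
                while ((b + 1) * (b + 1) <= rest) b++;
                if (b * b == rest) ways++;
            }
            return ways;
        }

        public static void main(String[] args) {
            long[] tests = {10, 25, 3, 0, 1};
            for (long t : tests) {
                System.out.println(t + " -> " + count(t)); // expected: 1, 2, 0, 1, 1
            }
        }
    }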

    Read the article

  • Atmospheric scattering OpenGL 3.3

    - by user1419305
    Im currently trying to convert a shader by Sean O'Neil to version 330 so i can try it out in a application im writing. Im having some issues with deprecated functions, so i replaced them, but im almost completely new to glsl, so i probably did a mistake somewhere. Original shaders can be found here: http://www.gamedev.net/topic/592043-solved-trying-to-use-atmospheric-scattering-oneill-2004-but-get-black-sphere/ My horrible attempt at converting them: Vertex Shader: #version 330 core layout(location = 0) in vec3 vertexPosition_modelspace; //layout(location = 1) in vec2 vertexUV; layout(location = 2) in vec3 vertexNormal_modelspace; uniform vec3 v3CameraPos; uniform vec3 v3LightPos; uniform vec3 v3InvWavelength; uniform float fCameraHeight; uniform float fCameraHeight2; uniform float fOuterRadius; uniform float fOuterRadius2; uniform float fInnerRadius; uniform float fInnerRadius2; uniform float fKrESun; uniform float fKmESun; uniform float fKr4PI; uniform float fKm4PI; uniform float fScale; uniform float fScaleDepth; uniform float fScaleOverScaleDepth; // passing in matrixes for transformations uniform mat4 MVP; uniform mat4 V; uniform mat4 M; const int nSamples = 4; const float fSamples = 4.0; out vec3 v3Direction; out vec4 gg_FrontColor; out vec4 gg_FrontSecondaryColor; float scale(float fCos) { float x = 1.0 - fCos; return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25)))); } void main(void) { vec3 v3Pos = vertexPosition_modelspace; vec3 v3Ray = v3Pos - v3CameraPos; float fFar = length(v3Ray); v3Ray /= fFar; vec3 v3Start = v3CameraPos; float fHeight = length(v3Start); float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight)); float fStartAngle = dot(v3Ray, v3Start) / fHeight; float fStartOffset = fDepth*scale(fStartAngle); float fSampleLength = fFar / fSamples; float fScaledLength = fSampleLength * fScale; vec3 v3SampleRay = v3Ray * fSampleLength; vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5; vec3 v3FrontColor = vec3(0.0, 0.0, 0.0); for(int i=0; i<nSamples; i++) { float fHeight = length(v3SamplePoint); float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight)); float fLightAngle = dot(v3LightPos, v3SamplePoint) / fHeight; float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight; float fScatter = (fStartOffset + fDepth*(scale(fLightAngle) - scale(fCameraAngle))); vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI)); v3FrontColor += v3Attenuate * (fDepth * fScaledLength); v3SamplePoint += v3SampleRay; } gg_FrontSecondaryColor.rgb = v3FrontColor * fKmESun; gg_FrontColor.rgb = v3FrontColor * (v3InvWavelength * fKrESun); gl_Position = MVP * vec4(vertexPosition_modelspace,1); v3Direction = v3CameraPos - v3Pos; } Fragment Shader: #version 330 core uniform vec3 v3LightPos; uniform float g; uniform float g2; in vec3 v3Direction; out vec4 FragColor; in vec4 gg_FrontColor; in vec4 gg_FrontSecondaryColor; void main (void) { float fCos = dot(v3LightPos, v3Direction) / length(v3Direction); float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5); FragColor = gg_FrontColor + fMiePhase * gg_FrontSecondaryColor; FragColor.a = FragColor.b; } I wrote a function to render a sphere, and im trying to render this shader onto a inverted version of it, the sphere works completely fine, with normals and all. My problem is that the sphere gets rendered all black, so the shader is not working. This is how i'm trying to render the atmosphere inside my main rendering loop. 
glUseProgram(programAtmosphere); glBindTexture(GL_TEXTURE_2D, 0); //###################### glUniform3f(v3CameraPos, getPlayerPos().x, getPlayerPos().y, getPlayerPos().z); glUniform3f(v3LightPos, lightPos.x / sqrt(lightPos.x * lightPos.x + lightPos.y * lightPos.y), lightPos.y / sqrt(lightPos.x * lightPos.x + lightPos.y * lightPos.y), 0); glUniform3f(v3InvWavelength, 1.0 / pow(0.650, 4.0), 1.0 / pow(0.570, 4.0), 1.0 / pow(0.475, 4.0)); glUniform1fARB(fCameraHeight, 1); glUniform1fARB(fCameraHeight2, 1); glUniform1fARB(fInnerRadius, 6350); glUniform1fARB(fInnerRadius2, 6350 * 6350); glUniform1fARB(fOuterRadius, 6450); glUniform1fARB(fOuterRadius2, 6450 * 6450); glUniform1fARB(fKrESun, 0.0025 * 20.0); glUniform1fARB(fKmESun, 0.0015 * 20.0); glUniform1fARB(fKr4PI, 0.0025 * 4.0 * 3.141592653); glUniform1fARB(fKm4PI, 0.0015 * 4.0 * 3.141592653); glUniform1fARB(fScale, 1.0 / (6450 - 6350)); glUniform1fARB(fScaleDepth, 0.25); glUniform1fARB(fScaleOverScaleDepth, 4.0 / (6450 - 6350)); glUniform1fARB(g, -0.85); glUniform1f(g2, -0.85 * -0.85); // vertices glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer[1]); glVertexAttribPointer( 0, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? 0, // stride (void*)0 // array buffer offset ); // normals glEnableVertexAttribArray(2); glBindBuffer(GL_ARRAY_BUFFER, normalbuffer[1]); glVertexAttribPointer( 2, // attribute 3, // size GL_FLOAT, // type GL_FALSE, // normalized? 0, // stride (void*)0 // array buffer offset ); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer[1]); glUniformMatrix4fv(ModelMatrixAT, 1, GL_FALSE, &ModelMatrix[0][0]); glUniformMatrix4fv(ViewMatrixAT, 1, GL_FALSE, &ViewMatrix[0][0]); glUniformMatrix4fv(ModelViewPAT, 1, GL_FALSE, &MVP[0][0]); // Draw the triangles glDrawElements( GL_TRIANGLES, // mode cubeIndices[1], // count GL_UNSIGNED_SHORT, // type (void*)0 // element array buffer offset ); Any ideas?
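
    One cheap way to debug an all-black result with this shader family is to evaluate the vertex-shader maths on the CPU with the exact uniform values being uploaded. With fInnerRadius = 6350, fScaleOverScaleDepth = 4/(6450-6350) and fCameraHeight = 1, as in the glUniform calls above, the depth term exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight)) overflows a 32-bit float, which is one plausible route to a black sphere; O'Neil's shaders expect the camera height in the same units as the radii. (gg_FrontColor.a is also never written in the vertex shader, which is worth initialising.) A small Java sketch of the check; it only mirrors the uniform values, none of it is OpenGL code:

    final class ScatterCheck {
        public static void main(String[] args) {
            // Values copied from the glUniform calls in the post.
            float fInnerRadius = 6350f;
            float fOuterRadius = 6450f;
            float fCameraHeight = 1f;
            float fScaleOverScaleDepth = 4.0f / (fOuterRadius - fInnerRadius);

            // Same expression as the vertex shader's fDepth, evaluated in 32-bit floats.
            float fDepth = (float) Math.exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
            System.out.println("fDepth = " + fDepth); // prints Infinity, so later terms become Inf/NaN

            // With the camera just outside the atmosphere the same term is well behaved.
            float cameraOutside = fOuterRadius + 1f;
            float saneDepth = (float) Math.exp(fScaleOverScaleDepth * (fInnerRadius - cameraOutside));
            System.out.println("fDepth (camera outside) = " + saneDepth); // about 0.018
        }
    }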

    Read the article

  • value types in the vm

    - by john.rose
    Or, enduring values for a changing world. Introduction A value type is a data type which, generally speaking, is designed for being passed by value in and out of methods, and stored by value in data structures. The only value types which the Java language directly supports are the eight primitive types. Java indirectly and approximately supports value types, if they are implemented in terms of classes. For example, both Integer and String may be viewed as value types, especially if their usage is restricted to avoid operations appropriate to Object. In this note, we propose a definition of value types in terms of a design pattern for Java classes, accompanied by a set of usage restrictions. We also sketch the relation of such value types to tuple types (which are a JVM-level notion), and point out JVM optimizations that can apply to value types. This note is a thought experiment to extend the JVM’s performance model in support of value types. The demonstration has two phases.  Initially the extension can simply use design patterns, within the current bytecode architecture, and in today’s Java language. But if the performance model is to be realized in practice, it will probably require new JVM bytecode features, changes to the Java language, or both.  We will look at a few possibilities for these new features. An Axiom of Value In the context of the JVM, a value type is a data type equipped with construction, assignment, and equality operations, and a set of typed components, such that, whenever two variables of the value type produce equal corresponding values for their components, the values of the two variables cannot be distinguished by any JVM operation. Here are some corollaries: A value type is immutable, since otherwise a copy could be constructed and the original could be modified in one of its components, allowing the copies to be distinguished. Changing the component of a value type requires construction of a new value. The equals and hashCode operations are strictly component-wise. If a value type is represented by a JVM reference, that reference cannot be successfully synchronized on, and cannot be usefully compared for reference equality. A value type can be viewed in terms of what it doesn’t do. We can say that a value type omits all value-unsafe operations, which could violate the constraints on value types.
These operations, which are ordinarily allowed for Java object types, are pointer equality comparison (the acmp instruction), synchronization (the monitor instructions), all the wait and notify methods of class Object, and non-trivial finalize methods. The clone method is also value-unsafe, although for value types it could be treated as the identity function. Finally, and most importantly, any side effect on an object (however visible) also counts as an value-unsafe operation. A value type may have methods, but such methods must not change the components of the value. It is reasonable and useful to define methods like toString, equals, and hashCode on value types, and also methods which are specifically valuable to users of the value type. Representations of Value Value types have two natural representations in the JVM, unboxed and boxed. An unboxed value consists of the components, as simple variables. For example, the complex number x=(1+2i), in rectangular coordinate form, may be represented in unboxed form by the following pair of variables: /*Complex x = Complex.valueOf(1.0, 2.0):*/ double x_re = 1.0, x_im = 2.0; These variables might be locals, parameters, or fields. Their association as components of a single value is not defined to the JVM. Here is a sample computation which computes the norm of the difference between two complex numbers: double distance(/*Complex x:*/ double x_re, double x_im,         /*Complex y:*/ double y_re, double y_im) {     /*Complex z = x.minus(y):*/     double z_re = x_re - y_re, z_im = x_im - y_im;     /*return z.abs():*/     return Math.sqrt(z_re*z_re + z_im*z_im); } A boxed representation groups component values under a single object reference. The reference is to a ‘wrapper class’ that carries the component values in its fields. (A primitive type can naturally be equated with a trivial value type with just one component of that type. In that view, the wrapper class Integer can serve as a boxed representation of value type int.) The unboxed representation of complex numbers is practical for many uses, but it fails to cover several major use cases: return values, array elements, and generic APIs. The two components of a complex number cannot be directly returned from a Java function, since Java does not support multiple return values. The same story applies to array elements: Java has no ’array of structs’ feature. (Double-length arrays are a possible workaround for complex numbers, but not for value types with heterogeneous components.) By generic APIs I mean both those which use generic types, like Arrays.asList and those which have special case support for primitive types, like String.valueOf and PrintStream.println. Those APIs do not support unboxed values, and offer some problems to boxed values. Any ’real’ JVM type should have a story for returns, arrays, and API interoperability. The basic problem here is that value types fall between primitive types and object types. Value types are clearly more complex than primitive types, and object types are slightly too complicated. Objects are a little bit dangerous to use as value carriers, since object references can be compared for pointer equality, and can be synchronized on. Also, as many Java programmers have observed, there is often a performance cost to using wrapper objects, even on modern JVMs. Even so, wrapper classes are a good starting point for talking about value types. 
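
    For contrast with the unboxed distance function just shown, here is a bare-bones sketch of the boxed form in Java: the two double components are grouped under one object reference, and the same computation is written against that wrapper. This is only a stripped-down illustration; the article's fuller Complex value type, with its factory methods and value-safety conventions, follows below.

    // Minimal boxed carrier for the two components (illustration only).
    final class ComplexBox {
        final double re, im;
        ComplexBox(double re, double im) { this.re = re; this.im = im; }
    }

    final class BoxedDistance {
        // The same norm-of-difference computation, now taking one reference per complex number.
        static double distance(ComplexBox x, ComplexBox y) {
            double zRe = x.re - y.re, zIm = x.im - y.im;
            return Math.sqrt(zRe * zRe + zIm * zIm);
        }

        public static void main(String[] args) {
            System.out.println(distance(new ComplexBox(1, 2), new ComplexBox(4, 6))); // 5.0
        }
    }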
If there were a set of structural rules and restrictions which would prevent value-unsafe operations on value types, wrapper classes would provide a good notation for defining value types. This note attempts to define such rules and restrictions. Let’s Start Coding Now it is time to look at some real code. Here is a definition, written in Java, of a complex number value type. @ValueSafe public final class Complex implements java.io.Serializable {     // immutable component structure:     public final double re, im;     private Complex(double re, double im) {         this.re = re; this.im = im;     }     // interoperability methods:     public String toString() { return "Complex("+re+","+im+")"; }     public List<Double> asList() { return Arrays.asList(re, im); }     public boolean equals(Complex c) {         return re == c.re && im == c.im;     }     public boolean equals(@ValueSafe Object x) {         return x instanceof Complex && equals((Complex) x);     }     public int hashCode() {         return 31*Double.valueOf(re).hashCode()                 + Double.valueOf(im).hashCode();     }     // factory methods:     public static Complex valueOf(double re, double im) {         return new Complex(re, im);     }     public Complex changeRe(double re2) { return valueOf(re2, im); }     public Complex changeIm(double im2) { return valueOf(re, im2); }     public static Complex cast(@ValueSafe Object x) {         return x == null ? ZERO : (Complex) x;     }     // utility methods and constants:     public Complex plus(Complex c)  { return new Complex(re+c.re, im+c.im); }     public Complex minus(Complex c) { return new Complex(re-c.re, im-c.im); }     public double abs() { return Math.sqrt(re*re + im*im); }     public static final Complex PI = valueOf(Math.PI, 0.0);     public static final Complex ZERO = valueOf(0.0, 0.0); } This is not a minimal definition, because it includes some utility methods and other optional parts.  The essential elements are as follows: The class is marked as a value type with an annotation. The class is final, because it does not make sense to create subclasses of value types. The fields of the class are all non-private and final.  (I.e., the type is immutable and structurally transparent.) From the supertype Object, all public non-final methods are overridden. The constructor is private. Beyond these bare essentials, we can observe the following features in this example, which are likely to be typical of all value types: One or more factory methods are responsible for value creation, including a component-wise valueOf method. There are utility methods for complex arithmetic and instance creation, such as plus and changeIm. There are static utility constants, such as PI. The type is serializable, using the default mechanisms. There are methods for converting to and from dynamically typed references, such as asList and cast. The Rules In order to use value types properly, the programmer must avoid value-unsafe operations.  A helpful Java compiler should issue errors (or at least warnings) for code which provably applies value-unsafe operations, and should issue warnings for code which might be correct but does not provably avoid value-unsafe operations.  No such compilers exist today, but to simplify our account here, we will pretend that they do exist. A value-safe type is any class, interface, or type parameter marked with the @ValueSafe annotation, or any subtype of a value-safe type.  If a value-safe class is marked final, it is in fact a value type.  
All other value-safe classes must be abstract.  The non-static fields of a value class must be non-public and final, and all its constructors must be private. Under the above rules, a standard interface could be helpful to define value types like Complex.  Here is an example: @ValueSafe public interface ValueType extends java.io.Serializable {     // All methods listed here must get redefined.     // Definitions must be value-safe, which means     // they may depend on component values only.     List<? extends Object> asList();     int hashCode();     boolean equals(@ValueSafe Object c);     String toString(); } //@ValueSafe inherited from supertype: public final class Complex implements ValueType { … The main advantage of such a conventional interface is that (unlike an annotation) it is reified in the runtime type system.  It could appear as an element type or parameter bound, for facilities which are designed to work on value types only.  More broadly, it might assist the JVM to perform dynamic enforcement of the rules for value types. Besides types, the annotation @ValueSafe can mark fields, parameters, local variables, and methods.  (This is redundant when the type is also value-safe, but may be useful when the type is Object or another supertype of a value type.)  Working forward from these annotations, an expression E is defined as value-safe if it satisfies one or more of the following: The type of E is a value-safe type. E names a field, parameter, or local variable whose declaration is marked @ValueSafe. E is a call to a method whose declaration is marked @ValueSafe. E is an assignment to a value-safe variable, field reference, or array reference. E is a cast to a value-safe type from a value-safe expression. E is a conditional expression E0 ? E1 : E2, and both E1 and E2 are value-safe. Assignments to value-safe expressions and initializations of value-safe names must take their values from value-safe expressions. A value-safe expression may not be the subject of a value-unsafe operation.  In particular, it cannot be synchronized on, nor can it be compared with the “==” operator, not even with a null or with another value-safe type. In a program where all of these rules are followed, no value-type value will be subject to a value-unsafe operation.  Thus, the prime axiom of value types will be satisfied, that no two value type will be distinguishable as long as their component values are equal. 
More Code To illustrate these rules, here are some usage examples for Complex: Complex pi = Complex.valueOf(Math.PI, 0); Complex zero = pi.changeRe(0);  //zero = pi; zero.re = 0; ValueType vtype = pi; @SuppressWarnings("value-unsafe")   Object obj = pi; @ValueSafe Object obj2 = pi; obj2 = new Object();  // ok List<Complex> clist = new ArrayList<Complex>(); clist.add(pi);  // (ok assuming List.add param is @ValueSafe) List<ValueType> vlist = new ArrayList<ValueType>(); vlist.add(pi);  // (ok) List<Object> olist = new ArrayList<Object>(); olist.add(pi);  // warning: "value-unsafe" boolean z = pi.equals(zero); boolean z1 = (pi == zero);  // error: reference comparison on value type boolean z2 = (pi == null);  // error: reference comparison on value type boolean z3 = (pi == obj2);  // error: reference comparison on value type synchronized (pi) { }  // error: synch of value, unpredictable result synchronized (obj2) { }  // unpredictable result Complex qq = pi; qq = null;  // possible NPE; warning: “null-unsafe" qq = (Complex) obj;  // warning: “null-unsafe" qq = Complex.cast(obj);  // OK @SuppressWarnings("null-unsafe")   Complex empty = null;  // possible NPE qq = empty;  // possible NPE (null pollution) The Payoffs It follows from this that either the JVM or the java compiler can replace boxed value-type values with unboxed ones, without affecting normal computations.  Fields and variables of value types can be split into their unboxed components.  Non-static methods on value types can be transformed into static methods which take the components as value parameters. Some common questions arise around this point in any discussion of value types. Why burden the programmer with all these extra rules?  Why not detect programs automagically and perform unboxing transparently?  The answer is that it is easy to break the rules accidently unless they are agreed to by the programmer and enforced.  Automatic unboxing optimizations are tantalizing but (so far) unreachable ideal.  In the current state of the art, it is possible exhibit benchmarks in which automatic unboxing provides the desired effects, but it is not possible to provide a JVM with a performance model that assures the programmer when unboxing will occur.  This is why I’m writing this note, to enlist help from, and provide assurances to, the programmer.  Basically, I’m shooting for a good set of user-supplied “pragmas” to frame the desired optimization. Again, the important thing is that the unboxing must be done reliably, or else programmers will have no reason to work with the extra complexity of the value-safety rules.  There must be a reasonably stable performance model, wherein using a value type has approximately the same performance characteristics as writing the unboxed components as separate Java variables. There are some rough corners to the present scheme.  Since Java fields and array elements are initialized to null, value-type computations which incorporate uninitialized variables can produce null pointer exceptions.  One workaround for this is to require such variables to be null-tested, and the result replaced with a suitable all-zero value of the value type.  That is what the “cast” method does above. Generically typed APIs like List<T> will continue to manipulate boxed values always, at least until we figure out how to do reification of generic type instances.  Use of such APIs will elicit warnings until their type parameters (and/or relevant members) are annotated or typed as value-safe.  
Retrofitting List<T> is likely to expose flaws in the present scheme, which we will need to engineer around.  Here are a couple of first approaches: public interface java.util.List<@ValueSafe T> extends Collection<T> { … public interface java.util.List<T extends Object|ValueType> extends Collection<T> { … (The second approach would require disjunctive types, in which value-safety is “contagious” from the constituent types.) With more transformations, the return value types of methods can also be unboxed.  This may require significant bytecode-level transformations, and would work best in the presence of a bytecode representation for multiple value groups, which I have proposed elsewhere under the title “Tuples in the VM”. But for starters, the JVM can apply this transformation under the covers, to internally compiled methods.  This would give a way to express multiple return values and structured return values, which is a significant pain-point for Java programmers, especially those who work with low-level structure types favored by modern vector and graphics processors.  The lack of multiple return values has a strong distorting effect on many Java APIs. Even if the JVM fails to unbox a value, there is still potential benefit to the value type.  Clustered computing systems sometimes have copy operations (serialization or something similar) which apply implicitly to command operands.  When copying JVM objects, it is extremely helpful to know when an object’s identity is important or not.  If an object reference is a copied operand, the system may have to create a proxy handle which points back to the original object, so that side effects are visible.  Proxies must be managed carefully, and this can be expensive.  On the other hand, value types are exactly those types which a JVM can “copy and forget” with no downside. Array types are crucial to bulk data interfaces.  (As data sizes and rates increase, bulk data becomes more important than scalar data, so arrays are definitely accompanying us into the future of computing.)  Value types are very helpful for adding structure to bulk data, so a successful value type mechanism will make it easier for us to express richer forms of bulk data. Unboxing arrays (i.e., arrays containing unboxed values) will provide better cache and memory density, and more direct data movement within clustered or heterogeneous computing systems.  They require the deepest transformations, relative to today’s JVM.  There is an impedance mismatch between value-type arrays and Java’s covariant array typing, so compromises will need to be struck with existing Java semantics.  It is probably worth the effort, since arrays of unboxed value types are inherently more memory-efficient than standard Java arrays, which rely on dependent pointer chains. It may be sufficient to extend the “value-safe” concept to array declarations, and allow low-level transformations to change value-safe array declarations from the standard boxed form into an unboxed tuple-based form.  Such value-safe arrays would not be convertible to Object[] arrays.  Certain connection points, such as Arrays.copyOf and System.arraycopy, might need additional input/output combinations, to allow smooth conversion between arrays with boxed and unboxed elements. Alternatively, the correct solution may have to wait until we have enough reification of generic types, and enough operator overloading, to enable an overhaul of Java arrays. 
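To make the array discussion concrete, here is a small hand-written illustration (my own sketch, not code from the proposal) of the "fake" unboxed layout mentioned later in the note: an array of Complex values stored as a single double array of interleaved components. An unboxed value-type array would give this memory density without giving up the Complex element type.

```java
// Illustrative only: "unboxed" complex array as one double[] of (re, im) pairs.
public final class ComplexArray {
    private final double[] data;   // [re0, im0, re1, im1, ...]

    public ComplexArray(int length) {
        this.data = new double[2 * length];
    }

    public int length()     { return data.length / 2; }
    public double re(int i) { return data[2 * i]; }
    public double im(int i) { return data[2 * i + 1]; }

    public void set(int i, double re, double im) {
        data[2 * i]     = re;
        data[2 * i + 1] = im;
    }

    // Example of an operation that never allocates a boxed Complex.
    public void scale(double factor) {
        for (int k = 0; k < data.length; k++) data[k] *= factor;
    }
}
```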
Implicit Method Definitions The example of class Complex above may be unattractively complex.  I believe most or all of the elements of the example class are required by the logic of value types. If this is true, a programmer who writes a value type will have to write lots of error-prone boilerplate code.  On the other hand, I think nearly all of the code (except for the domain-specific parts like plus and minus) can be implicitly generated. Java has a rule for implicitly defining a class’s constructor, if it defines no constructors explicitly.  Likewise, there are rules for providing default access modifiers for interface members.  Because of the highly regular structure of value types, it might be reasonable to perform similar implicit transformations on value types.  Here’s an example of a “highly implicit” definition of a complex number type: public class Complex implements ValueType {  // implicitly final     public double re, im;  // implicitly public final     //implicit methods are defined elementwise from the fields:     //  toString, asList, equals(2), hashCode, valueOf, cast     //optionally, explicit methods (plus, abs, etc.) would go here } In other words, with the right defaults, a simple value type definition can be a one-liner.  The observant reader will have noticed the similarities (and suitable differences) between the explicit methods above and the corresponding methods for List<T>. Another way to abbreviate such a class would be to make an annotation the primary trigger of the functionality, and to add the interface(s) implicitly: public @ValueType class Complex { … // implicitly final, implements ValueType (But to me it seems better to communicate the “magic” via an interface, even if it is rooted in an annotation.) Implicitly Defined Value Types So far we have been working with nominal value types, which is to say that the sequence of typed components is associated with a name and additional methods that convey the intention of the programmer.  A simple ordered pair of floating point numbers can be variously interpreted as (to name a few possibilities) a rectangular or polar complex number or Cartesian point.  The name and the methods convey the intended meaning. But what if we need a truly simple ordered pair of floating point numbers, without any further conceptual baggage?  Perhaps we are writing a method (like “divideAndRemainder”) which naturally returns a pair of numbers instead of a single number.  Wrapping the pair of numbers in a nominal type (like “QuotientAndRemainder”) makes as little sense as wrapping a single return value in a nominal type (like “Quotient”).  What we need here are structural value types, commonly known as tuples. For the present discussion, let us assign a conventional, JVM-friendly name to tuples, roughly as follows: public class java.lang.tuple.$DD extends java.lang.tuple.Tuple {      double $1, $2; } Here the component names are fixed and all the required methods are defined implicitly.  The supertype is an abstract class which has suitable shared declarations.  The name itself mentions a JVM-style method parameter descriptor, which may be “cracked” to determine the number and types of the component fields. The odd thing about such a tuple type (and structural types in general) is that it must be instantiated lazily, in response to linkage requests from one or more classes that need it.  
The JVM and/or its class loaders must be prepared to spin a tuple type on demand, given a simple name reference, $xyz, where the xyz is cracked into a series of component types.  (Specifics of naming and name mangling need some tasteful engineering.) Tuples also seem to demand, even more than nominal types, some support from the language.  (This is probably because notations for non-nominal types work best as combinations of punctuation and type names, rather than named constructors like Function3 or Tuple2.)  At a minimum, languages with tuples usually (I think) have some sort of simple bracket notation for creating tuples, and a corresponding pattern-matching syntax (or “destructuring bind”) for taking tuples apart, at least when they are parameter lists.  Designing such a syntax is no simple thing, because it ought to play well with nominal value types, and also with pre-existing Java features, such as method parameter lists, implicit conversions, generic types, and reflection.  That is a task for another day. Other Use Cases Besides complex numbers and simple tuples there are many use cases for value types.  Many tuple-like types have natural value-type representations. These include rational numbers, point locations and pixel colors, and various kinds of dates and addresses. Other types have a variable-length ‘tail’ of internal values. The most common example of this is String, which is (mathematically) a sequence of UTF-16 character values. Similarly, bit vectors, multiple-precision numbers, and polynomials are composed of sequences of values. Such types include, in their representation, a reference to a variable-sized data structure (often an array) which (somehow) represents the sequence of values. The value type may also include ’header’ information. Variable-sized values often have a length distribution which favors short lengths. In that case, the design of the value type can make the first few values in the sequence be direct ’header’ fields of the value type. In the common case where the header is enough to represent the whole value, the tail can be a shared null value, or even just a null reference. Note that the tail need not be an immutable object, as long as the header type encapsulates it well enough. This is the case with String, where the tail is a mutable (but never mutated) character array. Field types and their order must be a globally visible part of the API.  The structure of the value type must be transparent enough to have a globally consistent unboxed representation, so that all callers and callees agree about the type and order of components  that appear as parameters, return types, and array elements.  This is a trade-off between efficiency and encapsulation, which is forced on us when we remove an indirection enjoyed by boxed representations.  A JVM-only transformation would not care about such visibility, but a bytecode transformation would need to take care that (say) the components of complex numbers would not get swapped after a redefinition of Complex and a partial recompile.  Perhaps constant pool references to value types need to declare the field order as assumed by each API user. This brings up the delicate status of private fields in a value type.  It must always be possible to load, store, and copy value types as coordinated groups, and the JVM performs those movements by moving individual scalar values between locals and stack.  
If a component field is not public, what is to prevent hostile code from plucking it out of the tuple using a rogue aload or astore instruction?  Nothing but the verifier, so we may need to give it more smarts, so that it treats value types as inseparable groups of stack slots or locals (something like long or double). My initial thought was to make the fields always public, which would make the security problem moot.  But public is not always the right answer; consider the case of String, where the underlying mutable character array must be encapsulated to prevent security holes.  I believe we can win back both sides of the tradeoff, by training the verifier never to split up the components in an unboxed value.  Just as the verifier encapsulates the two halves of a 64-bit primitive, it can encapsulate the header and body of an unboxed String, so that no code other than that of class String itself can take apart the values. Similar to String, we could build an efficient multi-precision decimal type along these lines: public final class DecimalValue implements ValueType {     protected final long header;     protected final BigInteger digits;     public static DecimalValue valueOf(int value, int scale) {         assert(scale >= 0);         return new DecimalValue(((long)value << 32) + scale, null);     }     public static DecimalValue valueOf(long value, int scale) {         if (value == (int) value)             return valueOf((int)value, scale);         return new DecimalValue(-scale, BigInteger.valueOf(value));     } } Values of this type would be passed between methods as two machine words. Small values (those with a significand which fits into 32 bits) would be represented without any heap data at all, unless the DecimalValue itself were boxed. (Note the tension between encapsulation and unboxing in this case.  It would be better if the header and digits fields were private, but depending on where the unboxing information must “leak”, it is probably safer to make a public revelation of the internal structure.) Note that, although an array of Complex can be faked with a double-length array of double, there is no easy way to fake an array of unboxed DecimalValues.  (Either an array of boxed values or a transposed pair of homogeneous arrays would be reasonable fallbacks, in a current JVM.)  Getting the full benefit of unboxing and arrays will require some new JVM magic. Although the JVM emphasizes portability, system dependent code will benefit from using machine-level types larger than 64 bits.  For example, the back end of a linear algebra package might benefit from value types like Float4 which map to stock vector types.  This is probably only worthwhile if the unboxing arrays can be packed with such values. More Daydreams A more finely-divided design for dynamic enforcement of value safety could feature separate marker interfaces for each invariant.  An empty marker interface Unsynchronizable could cause suitable exceptions for monitor instructions on objects in marked classes.  More radically, an Interchangeable marker interface could cause JVM primitives that are sensitive to object identity to raise exceptions; the strangest result would be that the acmp instruction would have to be specified as raising an exception. 
@ValueSafe public interface ValueType extends java.io.Serializable,         Unsynchronizable, Interchangeable { … public class Complex implements ValueType {     // inherits Serializable, Unsynchronizable, Interchangeable, @ValueSafe     … It seems possible that Integer and the other wrapper types could be retro-fitted as value-safe types.  This is a major change, since wrapper objects would be unsynchronizable and their references interchangeable.  It is likely that code which violates value-safety for wrapper types exists but is uncommon.  It is less plausible to retro-fit String, since the prominent operation String.intern is often used with value-unsafe code. We should also reconsider the distinction between boxed and unboxed values in code.  The design presented above obscures that distinction.  As another thought experiment, we could imagine making a first class distinction in the type system between boxed and unboxed representations.  Since only primitive types are named with a lower-case initial letter, we could define that the capitalized version of a value type name always refers to the boxed representation, while the initial lower-case variant always refers to the unboxed one.  For example: complex pi = complex.valueOf(Math.PI, 0); Complex boxPi = pi;  // convert to boxed myList.add(boxPi); complex z = myList.get(0);  // unbox Such a convention could perhaps absorb the current difference between int and Integer, double and Double. It might also allow the programmer to express a helpful distinction among array types. As said above, array types are crucial to bulk data interfaces, but are limited in the JVM.  Extending arrays beyond the present limitations is worth thinking about; for example, the Maxine JVM implementation has a hybrid object/array type.  Something like this which can also accommodate value type components seems worthwhile.  On the other hand, does it make sense for value types to contain short arrays?  And why should random-access arrays be the end of our design process, when bulk data is often sequentially accessed, and it might make sense to have heterogeneous streams of data as the natural “jumbo” data structure?  These considerations must wait for another day and another note. More Work It seems to me that a good sequence for introducing such value types would be as follows: Add the value-safety restrictions to an experimental version of javac. Code some sample applications with value types, including Complex and DecimalValue. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. A staggered roll-out like this would decouple language changes from bytecode changes, which is always a convenient thing. A similar investigation should be applied (concurrently) to array types.  In this case, it seems to me that the starting point is in the JVM: Add an experimental unboxing array data structure to a production JVM, perhaps along the lines of Maxine hybrids.  No bytecode or language support is required at first; everything can be done with encapsulated unsafe operations and/or method handles. Create an experimental JVM which internally unboxes value types but does not require new bytecodes to do so.  
Ensure the feasibility of the performance model for the sample applications. Add tuple-like bytecodes (with or without generic type reification) to a major revision of the JVM, and teach the Java compiler to switch in the new bytecodes without code changes. That’s enough musing from me for now.  Back to work!
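As a postscript to the "Implicit Method Definitions" section above, here is a rough hand-written sketch (my own illustration, not code from the note) of the boilerplate that section imagines being generated mechanically from the fields; the exact method shapes, and the use of plain Serializable instead of the ValueType interface, are my guesses.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the element-wise boilerplate a value-type definition implies:
// toString, asList, equals, hashCode, valueOf, cast -- all derived from (re, im).
public final class Complex implements java.io.Serializable {
    public final double re, im;

    private Complex(double re, double im) { this.re = re; this.im = im; }

    public static Complex valueOf(double re, double im) { return new Complex(re, im); }

    // "cast": null-tolerant conversion to a suitable all-zero value, per the text.
    public static Complex cast(Object x) {
        return (x == null) ? valueOf(0.0, 0.0) : (Complex) x;
    }

    public List<Double> asList()       { return Arrays.asList(re, im); }
    @Override public int hashCode()    { return asList().hashCode(); }
    @Override public String toString() { return "Complex" + asList(); }

    @Override public boolean equals(Object o) {   // component-wise, never identity
        return o instanceof Complex
            && ((Complex) o).re == re && ((Complex) o).im == im;
    }
}
```

The point of the sketch is only that every member is derived element-wise from the two fields, which is what makes a one-liner value-type definition plausible.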

    Read the article

  • Tile engine Texture not updating when numbers in array change

    - by Corey
    I draw my map from a txt file. I am using java with slick2d library. When I print the array the number changes in the array, but the texture doesn't change. public class Tiles { public Image[] tiles = new Image[5]; public int[][] map = new int[64][64]; public Image grass, dirt, fence, mound; private SpriteSheet tileSheet; public int tileWidth = 32; public int tileHeight = 32; public void init() throws IOException, SlickException { tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight); grass = tileSheet.getSprite(0, 0); dirt = tileSheet.getSprite(7, 7); fence = tileSheet.getSprite(2, 0); mound = tileSheet.getSprite(2, 6); tiles[0] = grass; tiles[1] = dirt; tiles[2] = fence; tiles[3] = mound; int x=0, y=0; BufferedReader in = new BufferedReader(new FileReader("assets/map.dat")); String line; while ((line = in.readLine()) != null) { String[] values = line.split(","); x = 0; for (String str : values) { int str_int = Integer.parseInt(str); map[x][y]=str_int; //System.out.print(map[x][y] + " "); x++; } //System.out.println(""); y++; } in.close(); } public void update(GameContainer gc) { } public void render(GameContainer gc) { for(int y = 0; y < map.length; y++) { for(int x = 0; x < map[0].length; x ++) { int textureIndex = map[x][y]; Image texture = tiles[textureIndex]; texture.draw(x*tileWidth,y*tileHeight); } } } } Mouse Picking Where I change the number in the array Input input = gc.getInput(); gc.getInput().setOffset(cameraX-400, cameraY-300); float mouseX = input.getMouseX(); float mouseY = input.getMouseY(); double mousetileX = Math.floor((double)mouseX/tiles.tileWidth); double mousetileY = Math.floor((double)mouseY/tiles.tileHeight); double playertileX = Math.floor(playerX/tiles.tileWidth); double playertileY = Math.floor(playerY/tiles.tileHeight); double lengthX = Math.abs((float)playertileX - mousetileX); double lengthY = Math.abs((float)playertileY - mousetileY); double distance = Math.sqrt((lengthX*lengthX)+(lengthY*lengthY)); if(input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) { System.out.println("Clicked"); if(tiles.map[(int)mousetileX][(int)mousetileY] == 1) { tiles.map[(int)mousetileX][(int)mousetileY] = 0; } } I never ask a question until I have tried to figure it out myself. I have been stuck with this problem for two weeks. It's not like this site is made for asking questions or anything. So if you actually try to help me instead of telling me to use a debugger thank you. You either get told you have too much or too little code. Nothing is never enough for the people on here it's as bad as something like reddit. Idk what is wrong all my textures work when I render them it just doesn't update when the number in the array changes. I am obviously debugging when I say that I was printing the array and the number is changing like it should, so it's not a problem with my mouse picking code. It is a problem with my textures, but I don't know what because they all render correctly. That is why I need help.
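A framework-free sketch of the underlying idea may help narrow this down (the names below are illustrative, not taken from the question): if render() looks the tile index up in the array on every frame, a change to that array shows up on the very next draw, so when the on-screen texture does not change, the usual suspects are mutating a different Tiles instance (or a copy of the map) than the one being rendered.

```java
// Minimal, Slick2D-free sketch of array-driven tile "rendering".
public class TileMapSketch {
    private final char[] tiles = {'.', '#'};     // stand-ins for grass/dirt images
    private final int[][] map = {{1, 1, 1}, {1, 0, 1}, {1, 1, 1}};

    void render() {
        for (int y = 0; y < map.length; y++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < map[y].length; x++) {
                row.append(tiles[map[y][x]]);    // the lookup happens every frame
            }
            System.out.println(row);
        }
    }

    public static void main(String[] args) {
        TileMapSketch world = new TileMapSketch();
        world.render();          // centre cell drawn with tiles[0] ('.')
        world.map[1][1] = 1;     // "click": change the tile id in the array
        world.render();          // the very next draw shows tiles[1] ('#')
    }
}
```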

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest route finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's, both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand it's premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick euclidean calculation from the node to the goal. In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump. Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer 'hCount += 1 'If hCount Mod 0 = 0 Then 'Return hCache 'End If 'Initialize variables to be filled Dim x1, x2, y1, y2, z1, z2 As Integer 'LINQ queries for solar system data Dim systemFromData = From result In jumpDataDB.coordinateDatas Where result.systemId = startSystem Select result.x, result.y, result.z Dim systemToData = From result In jumpDataDB.coordinateDatas Where result.systemId = endSystem Select result.x, result.y, result.z 'LINQ execute 'Fill variables with solar system data for from and to system For Each solarSystem In systemFromData x1 = (solarSystem.x) y1 = (solarSystem.y) z1 = (solarSystem.z) Next For Each solarSystem In systemToData x2 = (solarSystem.x) y2 = (solarSystem.y) z2 = (solarSystem.z) Next Dim x3 = Math.Abs(x1 - x2) Dim y3 = Math.Abs(y1 - y2) Dim z3 = Math.Abs(z1 - z2) 'Calculate distance and round 'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2))) Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3))) 'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2) 'hCache = distance Return distance End Function And the main loop, the other 30% 'Begin search While openList.Count() != 0 'Set current system and move node to closed currentNode = lowestF() move(currentNode.id) For Each neighborNode In neighborNodes If Not onList(neighborNode.toSystem, 0) Then If Not onList(neighborNode.toSystem, 1) Then Dim newNode As New nodeData() newNode.id = neighborNode.toSystem newNode.parent = currentNode.id newNode.g = currentNode.g + neighborNode.distance newNode.h = findDistance(newNode.id, endSystem) newNode.f = newNode.g + newNode.h newNode.security = neighborNode.security openList.Add(newNode) shortOpenList(OLindex) = newNode.id OLindex += 1 Else Dim proposedG As Integer 
= currentNode.g + neighborNode.distance If proposedG < gValue(neighborNode.toSystem) Then changeParent(neighborNode.toSystem, currentNode.id, proposedG) End If End If End If Next 'Check to see if done If currentNode.id = endSystem Then Exit While End If End While If clarification is needed on my spaghetti code, I'll try to explain.
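Since the profiler points at the coordinate lookups rather than the search itself, one common fix is to load every system's coordinates into memory once and make the heuristic pure arithmetic. A minimal sketch of that idea (written in Java with made-up names, not a drop-in replacement for the VB.NET routine):

```java
import java.util.HashMap;
import java.util.Map;

// Cache coordinates up front so the A* heuristic is two hash lookups plus arithmetic.
public class HeuristicCache {
    /** Immutable coordinates for one solar system. */
    static final class Coord {
        final double x, y, z;
        Coord(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    private final Map<Integer, Coord> coords = new HashMap<>();

    /** Call once at start-up for each row of the coordinate table. */
    public void put(int systemId, double x, double y, double z) {
        coords.put(systemId, new Coord(x, y, z));
    }

    /** Euclidean heuristic; admissible as long as edge costs use the same metric. */
    public double estimate(int fromId, int toId) {
        Coord a = coords.get(fromId), b = coords.get(toId);  // assumes both ids were loaded
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```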

    Read the article

  • Any reliable polygon normal calculation code?

    - by Jenko
    Do you have any reliable face normal calculation code? I'm using this but it fails when faces are 90 degrees upright or similar. // the normal point var x:Number = 0; var y:Number = 0; var z:Number = 0; // if is a triangle with 3 points if (points.length == 3) { // read vertices of triangle var Ax:Number, Bx:Number, Cx:Number; var Ay:Number, By:Number, Cy:Number; var Az:Number, Bz:Number, Cz:Number; Ax = points[0].x; Bx = points[1].x; Cx = points[2].x; Ay = points[0].y; By = points[1].y; Cy = points[2].y; Az = points[0].z; Bz = points[1].z; Cz = points[2].z; // calculate normal of a triangle x = (By - Ay) * (Cz - Az) - (Bz - Az) * (Cy - Ay); y = (Bz - Az) * (Cx - Ax) - (Bx - Ax) * (Cz - Az); z = (Bx - Ax) * (Cy - Ay) - (By - Ay) * (Cx - Ax); // if is a polygon with 4+ points }else if (points.length > 3){ // calculate normal of a polygon using all points var n:int = points.length; x = 0; y = 0; z = 0 // ensure all points above 0 var minx:Number = 0, miny:Number = 0, minz:Number = 0; for (var p:int = 0, pl:int = points.length; p < pl; p++) { var po:_Point3D = points[p] = points[p].clone(); if (po.x < minx) { minx = po.x; } if (po.y < miny) { miny = po.y; } if (po.z < minz) { minz = po.z; } } if (minx > 0 || miny > 0 || minz > 0){ for (p = 0; p < pl; p++) { po = points[p]; po.x -= minx; po.y -= miny; po.z -= minz; } } var cur:int = 1, prev:int = 0, next:int = 2; for (var i:int = 1; i <= n; i++) { // using Newell method x += points[cur].y * (points[next].z - points[prev].z); y += points[cur].z * (points[next].x - points[prev].x); z += points[cur].x * (points[next].y - points[prev].y); cur = (cur+1) % n; next = (next+1) % n; prev = (prev+1) % n; } } // length of the normal var length:Number = Math.sqrt(x * x + y * y + z * z); // if area is 0 if (length == 0) { return null; }else{ // turn large values into a unit vector x = x / length; y = y / length; z = z / length; }
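For reference, here is a compact sketch of the Newell summation on its own (in Java, with a hypothetical Vec3 class rather than the asker's _Point3D). The same summation works for triangles as well as larger polygons, so one option is to use it for every face instead of special-casing the three-point cross product.

```java
// Newell's method: accumulate over every edge, then normalize.
public final class PolygonNormal {
    /** Minimal 3-D vector stand-in for the asker's _Point3D type. */
    public static final class Vec3 {
        public final double x, y, z;
        public Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    /** Unit normal by Newell's method, or null for a degenerate polygon. */
    public static Vec3 newellNormal(Vec3[] pts) {
        double nx = 0, ny = 0, nz = 0;
        for (int i = 0; i < pts.length; i++) {
            Vec3 cur = pts[i];
            Vec3 next = pts[(i + 1) % pts.length];
            nx += (cur.y - next.y) * (cur.z + next.z);
            ny += (cur.z - next.z) * (cur.x + next.x);
            nz += (cur.x - next.x) * (cur.y + next.y);
        }
        double len = Math.sqrt(nx * nx + ny * ny + nz * nz);
        if (len == 0) return null;                  // zero-area polygon
        return new Vec3(nx / len, ny / len, nz / len);
    }
}
```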

    Read the article

  • Java - Tile engine changing number in array not changing texture

    - by Corey
    I draw my map from a txt file. Would I have to write to the text file to notice the changes I made? Right now it changes the number in the array but the tile texture doesn't change. Do I have to do more than just change the number in the array? public class Tiles { public Image[] tiles = new Image[5]; public int[][] map = new int[64][64]; private Image grass, dirt, fence, mound; private SpriteSheet tileSheet; public int tileWidth = 32; public int tileHeight = 32; Player player = new Player(); public void init() throws IOException, SlickException { tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight); grass = tileSheet.getSprite(0, 0); dirt = tileSheet.getSprite(7, 7); fence = tileSheet.getSprite(2, 0); mound = tileSheet.getSprite(2, 6); tiles[0] = grass; tiles[1] = dirt; tiles[2] = fence; tiles[3] = mound; int x=0, y=0; BufferedReader in = new BufferedReader(new FileReader("assets/map.dat")); String line; while ((line = in.readLine()) != null) { String[] values = line.split(","); for (String str : values) { int str_int = Integer.parseInt(str); map[x][y]=str_int; //System.out.print(map[x][y] + " "); y=y+1; } //System.out.println(""); x=x+1; y = 0; } in.close(); } public void update(GameContainer gc) { } public void render(GameContainer gc) { for(int x = 0; x < map.length; x++) { for(int y = 0; y < map.length; y ++) { int textureIndex = map[y][x]; Image texture = tiles[textureIndex]; texture.draw(x*tileWidth,y*tileHeight); } } } Mouse picking public void checkDistance(GameContainer gc) { Input input = gc.getInput(); float mouseX = input.getMouseX(); float mouseY = input.getMouseY(); double mousetileX = Math.floor((double)mouseX/tiles.tileWidth); double mousetileY = Math.floor((double)mouseY/tiles.tileHeight); double playertileX = Math.floor(playerX/tiles.tileWidth); double playertileY = Math.floor(playerY/tiles.tileHeight); double lengthX = Math.abs((float)playertileX - mousetileX); double lengthY = Math.abs((float)playertileY - mousetileY); double distance = Math.sqrt((lengthX*lengthX)+(lengthY*lengthY)); if(input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) { System.out.println("Clicked"); if(tiles.map[(int)mousetileX][(int)mousetileY] == 1) { tiles.map[(int)mousetileX][(int)mousetileY] = 0; } } System.out.println(tiles.map[(int)mousetileX][(int)mousetileY]); }

    Read the article

  • Is there an easy way to type in common math symbols?

    - by srcspider
Disclaimer: I'm sure someone is going to moan about ease of use; for the purpose of this question consider readability to be the only factor that matters. So I found this site that converts to easting/northing; it's not really important what that even means, but here's how the piece of JavaScript looks. /** * Convert Ordnance Survey grid reference easting/northing coordinate to (OSGB36) latitude/longitude * * @param {OsGridRef} gridref - easting/northing to be converted to latitude/longitude * @returns {LatLonE} latitude/longitude (in OSGB36) of supplied grid reference */ OsGridRef.osGridToLatLong = function(gridref) { var E = gridref.easting; var N = gridref.northing; var a = 6377563.396, b = 6356256.909; // Airy 1830 major & minor semi-axes var F0 = 0.9996012717; // NatGrid scale factor on central meridian var f0 = 49*Math.PI/180, λ0 = -2*Math.PI/180; // NatGrid true origin var N0 = -100000, E0 = 400000; // northing & easting of true origin, metres var e2 = 1 - (b*b)/(a*a); // eccentricity squared var n = (a-b)/(a+b), n2 = n*n, n3 = n*n*n; // n, n², n³ var f=f0, M=0; do { f = (N-N0-M)/(a*F0) + f; var Ma = (1 + n + (5/4)*n2 + (5/4)*n3) * (f-f0); var Mb = (3*n + 3*n*n + (21/8)*n3) * Math.sin(f-f0) * Math.cos(f+f0); var Mc = ((15/8)*n2 + (15/8)*n3) * Math.sin(2*(f-f0)) * Math.cos(2*(f+f0)); var Md = (35/24)*n3 * Math.sin(3*(f-f0)) * Math.cos(3*(f+f0)); M = b * F0 * (Ma - Mb + Mc - Md); // meridional arc } while (N-N0-M >= 0.00001); // ie until < 0.01mm var cosf = Math.cos(f), sinf = Math.sin(f); var ν = a*F0/Math.sqrt(1-e2*sinf*sinf); // nu = transverse radius of curvature var ρ = a*F0*(1-e2)/Math.pow(1-e2*sinf*sinf, 1.5); // rho = meridional radius of curvature var η2 = ν/ρ-1; // eta squared var tanf = Math.tan(f); var tan2f = tanf*tanf, tan4f = tan2f*tan2f, tan6f = tan4f*tan2f; var secf = 1/cosf; var ν3 = ν*ν*ν, ν5 = ν3*ν*ν, ν7 = ν5*ν*ν; var VII = tanf/(2*ρ*ν); var VIII = tanf/(24*ρ*ν3)*(5+3*tan2f+η2-9*tan2f*η2); var IX = tanf/(720*ρ*ν5)*(61+90*tan2f+45*tan4f); var X = secf/ν; var XI = secf/(6*ν3)*(ν/ρ+2*tan2f); var XII = secf/(120*ν5)*(5+28*tan2f+24*tan4f); var XIIA = secf/(5040*ν7)*(61+662*tan2f+1320*tan4f+720*tan6f); var dE = (E-E0), dE2 = dE*dE, dE3 = dE2*dE, dE4 = dE2*dE2, dE5 = dE3*dE2, dE6 = dE4*dE2, dE7 = dE5*dE2; f = f - VII*dE2 + VIII*dE4 - IX*dE6; var λ = λ0 + X*dE - XI*dE3 + XII*dE5 - XIIA*dE7; return new LatLonE(f.toDegrees(), λ.toDegrees(), GeoParams.datum.OSGB36); } I found that to be a really nice way of writing an algorithm, at least as far as readability is concerned. Is there any way to easily write the special symbols? And by easily write I mean NOT copy/paste them.
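One narrow but relevant point: like the JavaScript above, Java accepts Unicode letters in identifiers, so once your editor or OS input method can produce the characters (how you type them varies by platform, and the source file should be saved as UTF-8 or use \u escapes), code written this way compiles without tricks. A tiny self-contained check, with purely illustrative numbers:

```java
// Greek letters are legal Java identifiers; this compiles and runs as-is.
public class GreekIdentifiers {
    public static void main(String[] args) {
        double φ = Math.toRadians(49);   // latitude of the NatGrid true origin
        double λ = Math.toRadians(-2);   // longitude of the NatGrid true origin
        double ν = 6377563.396 / Math.sqrt(1 - 0.00667054 * Math.sin(φ) * Math.sin(φ));
        System.out.println("φ=" + φ + " λ=" + λ + " ν=" + ν);
    }
}
```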

    Read the article

  • My frustum culling is culling from the wrong point

    - by Xbetas
    I'm having problems with my frustum being in the wrong origin. It follows the rotation of my camera but not the position. In my camera class I'm generating a view-matrix: void Camera::Update() { UpdateViewMatrix(); glMatrixMode(GL_MODELVIEW); //glLoadIdentity(); glLoadMatrixf(GetViewMatrix().m); } Then extracting the planes using the projection matrix and modelview matrix: void UpdateFrustum() { Matrix4x4 projection, model, clip; glGetFloatv(GL_PROJECTION_MATRIX, projection.m); glGetFloatv(GL_MODELVIEW_MATRIX, model.m); clip = model * projection; m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0]; m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4]; m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8]; m_Planes[RIGHT][3] = clip.m[15] - clip.m[12]; NormalizePlane(RIGHT); m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0]; m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4]; m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8]; m_Planes[LEFT][3] = clip.m[15] + clip.m[12]; NormalizePlane(LEFT); m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1]; m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5]; m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9]; m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13]; NormalizePlane(BOTTOM); m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1]; m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5]; m_Planes[TOP][2] = clip.m[11] - clip.m[ 9]; m_Planes[TOP][3] = clip.m[15] - clip.m[13]; NormalizePlane(TOP); m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2]; m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6]; m_Planes[NEAR][2] = clip.m[11] + clip.m[10]; m_Planes[NEAR][3] = clip.m[15] + clip.m[14]; NormalizePlane(NEAR); m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2]; m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6]; m_Planes[FAR][2] = clip.m[11] - clip.m[10]; m_Planes[FAR][3] = clip.m[15] - clip.m[14]; NormalizePlane(FAR); } void NormalizePlane(int side) { float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] + m_Planes[side][1] * m_Planes[side][1] + m_Planes[side][2] * m_Planes[side][2]); m_Planes[side][0] /= length; m_Planes[side][1] /= length; m_Planes[side][2] /= length; m_Planes[side][3] /= length; } And check against it with: bool PointInFrustum(float x, float y, float z) { for(int i = 0; i < 6; i++) { if( m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0 ) return false; } return true; } Then i render using: camera->Update(); UpdateFrustum(); int numCulled = 0; for(int i = 0; i < (int)meshes.size(); i++) { if(!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z)) { meshes[i]->SetDraw(false); numCulled++; } else meshes[i]->SetDraw(true); } What am i doing wrong?
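Independently of the origin problem, one detail worth double-checking in plane-extraction code like this is the normalization step: a plane (a, b, c, d) is normalized by dividing all four components by the length of its normal, so if the code first computes the reciprocal 1.0/sqrt(...) it has to multiply rather than divide. A minimal sketch of the intended operation (in Java, with an illustrative class name, not the asker's types):

```java
// One frustum plane stored as (a, b, c, d); normalize before distance tests.
public final class FrustumPlane {
    final float[] p = new float[4];

    void normalize() {
        float len = (float) Math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        for (int i = 0; i < 4; i++) p[i] /= len;   // divide by the length itself
    }

    /** Signed distance from the plane; >= 0 means the point is on the inside. */
    float distance(float x, float y, float z) {
        return p[0] * x + p[1] * y + p[2] * z + p[3];
    }
}
```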

    Read the article

  • My frustum culling is culling from the wrong point [SOLVED]

    - by Xbetas
    I'm having problems with my frustum being in the wrong origin. It follows the rotation of my camera but not the position. In my camera class I'm generating a view-matrix: void Camera::Update() { UpdateViewMatrix(); glMatrixMode(GL_MODELVIEW); //glLoadIdentity(); glLoadMatrixf(GetViewMatrix().m); } Then extracting the planes using the projection matrix and modelview matrix: void UpdateFrustum() { Matrix4x4 projection, model, clip; glGetFloatv(GL_PROJECTION_MATRIX, projection.m); glGetFloatv(GL_MODELVIEW_MATRIX, model.m); clip = model * projection; m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0]; m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4]; m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8]; m_Planes[RIGHT][3] = clip.m[15] - clip.m[12]; NormalizePlane(RIGHT); m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0]; m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4]; m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8]; m_Planes[LEFT][3] = clip.m[15] + clip.m[12]; NormalizePlane(LEFT); m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1]; m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5]; m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9]; m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13]; NormalizePlane(BOTTOM); m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1]; m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5]; m_Planes[TOP][2] = clip.m[11] - clip.m[ 9]; m_Planes[TOP][3] = clip.m[15] - clip.m[13]; NormalizePlane(TOP); m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2]; m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6]; m_Planes[NEAR][2] = clip.m[11] + clip.m[10]; m_Planes[NEAR][3] = clip.m[15] + clip.m[14]; NormalizePlane(NEAR); m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2]; m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6]; m_Planes[FAR][2] = clip.m[11] - clip.m[10]; m_Planes[FAR][3] = clip.m[15] - clip.m[14]; NormalizePlane(FAR); } void NormalizePlane(int side) { float length = 1.0/(float)sqrt(m_Planes[side][0] * m_Planes[side][0] + m_Planes[side][1] * m_Planes[side][1] + m_Planes[side][2] * m_Planes[side][2]); m_Planes[side][0] *= length; m_Planes[side][1] *= length; m_Planes[side][2] *= length; m_Planes[side][3] *= length; } And check against it with: bool PointInFrustum(float x, float y, float z) { for(int i = 0; i < 6; i++) { if( m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0 ) return false; } return true; } Then i render using: camera->Update(); UpdateFrustum(); int numCulled = 0; for(int i = 0; i < (int)meshes.size(); i++) { if(!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z)) { meshes[i]->SetDraw(false); numCulled++; } else meshes[i]->SetDraw(true); } Matrices look like (Camera is at (5, 0, 0)): ModelView [0,0,0.99,0] [0,1,0,0] [-0.99,0,0,0] [0,0,-5,1] Projection [0.814,0,0,0] [0,1.303,0,0] [0,0,-1,0] [0,0,-0.02,0] Clip [0,0,-1,-0.999] [0,1.30,0,0] [-0.814,0,0,0] [0,0,4.98,4.99] What am i doing wrong?

    Read the article

  • Circle-Line Collision Detection Problem

    - by jazzdawg
    I am currently developing a breakout clone and I have hit a roadblock in getting collision detection between a ball (circle) and a brick (convex polygon) working correctly. I am using a Circle-Line collision detection test where each line represents and edge on the convex polygon brick. For the majority of the time the Circle-Line test works properly and the points of collision are resolved correctly. Collision detection working correctly. However, occasionally my collision detection code returns false due to a negative discriminant when the ball is actually intersecting the brick. Collision detection failing. I am aware of the inefficiency with this method and I am using axis aligned bounding boxes to cut down on the number of bricks tested. My main concern is if there are any mathematical bugs in my code below. /* * from and to are points at the start and end of the convex polygons edge. * This function is called for every edge in the convex polygon until a * collision is detected. */ bool circleLineCollision(Vec2f from, Vec2f to) { Vec2f lFrom, lTo, lLine; Vec2f line, normal; Vec2f intersectPt1, intersectPt2; float a, b, c, disc, sqrt_disc, u, v, nn, vn; bool one = false, two = false; // set line vectors lFrom = from - ball.circle.centre; // localised lTo = to - ball.circle.centre; // localised lLine = lFrom - lTo; // localised line = from - to; // calculate a, b & c values a = lLine.dot(lLine); b = 2 * (lLine.dot(lFrom)); c = (lFrom.dot(lFrom)) - (ball.circle.radius * ball.circle.radius); // discriminant disc = (b * b) - (4 * a * c); if (disc < 0.0f) { // no intersections return false; } else if (disc == 0.0f) { // one intersection u = -b / (2 * a); intersectPt1 = from + (lLine.scale(u)); one = pointOnLine(intersectPt1, from, to); if (!one) return false; return true; } else { // two intersections sqrt_disc = sqrt(disc); u = (-b + sqrt_disc) / (2 * a); v = (-b - sqrt_disc) / (2 * a); intersectPt1 = from + (lLine.scale(u)); intersectPt2 = from + (lLine.scale(v)); one = pointOnLine(intersectPt1, from, to); two = pointOnLine(intersectPt2, from, to); if (!one && !two) return false; return true; } } bool pointOnLine(Vec2f p, Vec2f from, Vec2f to) { if (p.x >= min(from.x, to.x) && p.x <= max(from.x, to.x) && p.y >= min(from.y, to.y) && p.y <= max(from.y, to.y)) return true; return false; }
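If the quadratic keeps misbehaving in edge cases, a commonly used alternative is to clamp the projection of the circle centre onto the segment and compare the closest point against the radius; it needs no discriminant at all. A small sketch of that test (in Java, with illustrative names; this is a standard substitute, not the asker's method):

```java
// Circle vs. line segment via the closest point on the segment.
public final class CircleSegment {
    public static boolean intersects(double cx, double cy, double r,
                                     double ax, double ay, double bx, double by) {
        double abx = bx - ax, aby = by - ay;
        double acx = cx - ax, acy = cy - ay;
        double ab2 = abx * abx + aby * aby;
        // Projection parameter of C onto AB, clamped to [0, 1] (handles degenerate AB).
        double t = (ab2 == 0) ? 0 : Math.max(0, Math.min(1, (acx * abx + acy * aby) / ab2));
        double px = ax + t * abx, py = ay + t * aby;   // closest point on the segment
        double dx = cx - px, dy = cy - py;
        return dx * dx + dy * dy <= r * r;             // compare squared distances
    }
}
```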

    Read the article

  • Multidimensional multiple-choice knapsack problem: find a feasible solution

    - by Onheiron
My assignment is to use local search heuristics to solve the Multidimensional multiple-choice knapsack problem, but to do so I first need to find a feasible solution to start with. Here is an example problem with what I tried so far. Problem R1 R2 R3 RESOURCES : 8 8 8 GROUPS: G1: 11.0 3 2 2 12.0 1 1 3 G2: 20.0 1 1 3 5.0 2 3 2 G3: 10.0 2 2 3 30.0 1 1 3 Sorting strategies To find a starting feasible solution for my local search I decided to ignore maximization of gains and just try to fit the resource requirements. I decided to sort the choices (strategies) in each group by comparing their "distance" from the multidimensional space origin, thus calculating SQRT(R1^2 + R2^2 + ... + RN^2). I felt like this was a keen solution as it somehow privileged those choices with resource usages closer to each other (e.g. R1:2 R2:2 R3:2 < R1:1 R2:2 R3:3) even if the total sum is the same. Doing so and selecting the best choice from each group proved sufficient to find a feasible solution for many (30) different benchmark problems, but of course I knew it was just luck. So I came up with the problem presented above which sorts like this: R1 R2 R3 RESOURCES : 8 8 8 GROUPS: G1: 12.0 1 1 3 < select this 11.0 3 2 2 G2: 20.0 1 1 3 < select this 5.0 2 3 2 G3: 30.0 1 1 3 < select this 10.0 2 2 3 And it is not feasible because the resource consumption is R1:3, R2:3, R3:9. The easy solution is to pick one of the second best choices in group 1 or 2, so I'll need some kind of iteration (local search[?]) to find the starting feasible solution for my local search solution. Here are the options I came up with: Option 1: iterate choices I tried to find a way to iterate all the choices with a specific order, something like G1 G2 G3 1 1 1 2 1 1 1 2 1 1 1 2 2 2 1 ... believing that feasible solutions won't be that far away from the unfeasible one I start with and thus the number of iterations will stay quite low. Does this make any sense? If yes, how can I iterate the choices (grouped combinations) of each group keeping "as near as possible" to the previous iteration? Option 2: Change the comparison term I tried to think how to find a better variable to sort the choices on. I thought of a measure of how "precious" a resource is based on supply and demand, so that a higher demand of a more precious resource will push you down the list, but this didn't help at all. Also I thought there probably isn't gonna be such a comparison variable which assures me a feasible solution at first strike. Is there such a variable? If not, is there a better sorting criterion anyway? Option 3: implement any known sub-optimal fast solving algorithm Unfortunately I could not find any such algorithms online. Any suggestion?
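One simple way to frame "Option 1" is as a repair phase: start from any selection (for example the per-group picks above) and repeatedly apply the single-group swap that most reduces the total capacity violation, stopping when the violation reaches zero or no swap helps. A rough sketch of that idea (in Java, with made-up array conventions; it is a heuristic that can get stuck, not a complete method):

```java
// groups[g][i][r] = use of resource r by choice i of group g; pick[g] = chosen choice.
public final class MmkpRepair {
    public static int[] repair(double[][][] groups, double[] cap, int[] pick) {
        while (true) {
            double best = violation(groups, cap, pick);
            if (best == 0) return pick;                    // feasible
            int bestG = -1, bestI = -1;
            for (int g = 0; g < groups.length; g++) {      // try every single-group swap
                int old = pick[g];
                for (int i = 0; i < groups[g].length; i++) {
                    pick[g] = i;
                    double v = violation(groups, cap, pick);
                    if (v < best) { best = v; bestG = g; bestI = i; }
                }
                pick[g] = old;
            }
            if (bestG < 0) return null;                    // stuck at a local minimum
            pick[bestG] = bestI;                           // apply the best swap, repeat
        }
    }

    /** Total amount by which the current selection exceeds the capacities. */
    static double violation(double[][][] groups, double[] cap, int[] pick) {
        double v = 0;
        for (int r = 0; r < cap.length; r++) {
            double used = 0;
            for (int g = 0; g < groups.length; g++) used += groups[g][pick[g]][r];
            v += Math.max(0, used - cap[r]);
        }
        return v;
    }
}
```

On the example above it terminates after one swap: replacing the pick in G1 (or G2) with the group's other choice brings R3 usage down to 8, which is feasible.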

    Read the article

  • Slide 2d Vector to destination over a period of time

    - by SchautDollar
    I am making a library of GUI controls for games I make with XNA. I am currently developing the library as I make a game so I can test the features and find errors/bugs and hopefully smash them right away. My current issue is on a slide feature I want to implement for my base class that all controls inherit. My goal is to get the control to slide to a specified point over a specified amount of time. Here is the #region containing the code #region Slide private bool sliding; private Vector2 endPoint; private float slideTimeLeft; private float speed; private bool wasEnabled; private Vector2 slideDirection; private float slideDistance; public void Slide(Vector2 startPoint, Vector2 endPoint, float slideTime) { this.location = startPoint; Slide(endPoint,slideTime); } public void Slide(Vector2 endPoint, float slideTime) { this.wasEnabled = this.enabled; this.enabled = false; this.sliding = true; Vector2 tempLength = endPoint - this.location; this.slideDistance = tempLength.Length(); //Was this.slideDistance = (float)Math.Sqrt(tempLength.LengthSquared()); this.speed = slideTime / this.slideDistance; this.endPoint = endPoint; this.slideTimeLeft = slideTime; } private void UpdateSlide(GameTime gameTime) { if (this.sliding) { this.slideTimeLeft -= gameTime.ElapsedGameTime.Milliseconds; if (this.slideTimeLeft >= 0 ) { if ((this.endPoint-this.location).Length() != 0){//Was if (this.endPoint.LengthSquared() > 0 || this.location.LengthSquared() > 0) { this.slideDirection = Vector2.Normalize(this.endPoint - this.location); } this.location += this.slideDirection * speed * gameTime.ElapsedGameTime.Milliseconds;//This is where I believe the issue is, but I'm not sure. It seems right to me... (Even though it doesn't work) } else { this.enabled = this.wasEnabled; this.location = this.endPoint;//After time, the controls position will get set to be the endpoint. this.sliding = false; } } } #endregion this.location is the location of the control elsewhere defined in the class. I have looked at this blog as a huge reference and have googled around quite and have looked on many forums but can't find anything that shows how to implement it. Please and Thanks for your time! EDIT: I have switched this line "this.location += this.slideDirection * speed * gameTime.ElapsedGameTime.Milliseconds;" several times to see what it does. My issue is getting the control to smoothly move to the end location. It moves after the time has expired, but It doesn't move other then that except flash in my face. EDIT2: I have used the first slide method with 3 parameters and it works except it doesn't do it in a period of time and once it gets to its destination, it starts moving randomly towards the previous location and the end location.
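A pattern that sidesteps the speed/elapsed-time bookkeeping entirely is to keep the start and end points fixed and derive the position from the fraction of time elapsed. A small sketch of that interpolation (plain Java rather than XNA types; the names are illustrative):

```java
// Time-based linear interpolation: position = start + (end - start) * t, t in [0, 1].
public final class Slide {
    private final float startX, startY, endX, endY;
    private final float durationMs;
    private float elapsedMs = 0;

    public Slide(float sx, float sy, float ex, float ey, float durationMs) {
        this.startX = sx; this.startY = sy; this.endX = ex; this.endY = ey;
        this.durationMs = durationMs;
    }

    /** Advance by the frame's delta time and return the new {x, y}. */
    public float[] update(float deltaMs) {
        elapsedMs = Math.min(elapsedMs + deltaMs, durationMs);
        float t = elapsedMs / durationMs;                  // fraction of the slide done
        return new float[] { startX + (endX - startX) * t,
                             startY + (endY - startY) * t };
    }

    public boolean finished() { return elapsedMs >= durationMs; }
}
```

With this form no per-frame speed is integrated, so the control is guaranteed to sit exactly on the end point when the time runs out.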

    Read the article

  • How to draw an Arc in OpenGL

    - by rpgFANATIC
    While making a little Pong game in C++ OpenGL, I decided it'd be fun to create arcs (semi-circles) when stuff bounces. I decided to skip Bezier curves for the moment and just go with straight algebra, but I didn't get far. My algebra follows a simple quadratic function (y = +- sqrt(mx+c)). This little excerpt is just an example I've yet to fully parameterize, I just wanted to see how it would look. When I draw this, however, it gives me a straight vertical line where the line's tangent line approaches -1.0 / 1.0. Is this a limitation of the GL_LINE_STRIP style or is there an easier way to draw semi-circles / arcs? Or did I just completely miss something obvious? void Ball::drawBounce() { float piecesToDraw = 100.0f; float arcWidth = 10.0f; float arcAngle = 4.0f; glBegin(GL_LINE_STRIP); for (float i = 0.0f; i < piecesToDraw; i += 1.0f) // Positive Half { float currentX = (i / piecesToDraw) * arcWidth; glVertex2f(currentX, sqrtf((-currentX * arcAngle)+ arcWidth)); } for (float j = piecesToDraw; j > 0.0f; j -= 1.0f) // Negative half (go backwards in X direction now) { float currentX = (j / piecesToDraw) * arcWidth; glVertex2f(currentX, -sqrtf((-currentX * arcAngle) + arcWidth)); } glEnd(); } Thanks in advance.
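Artifacts like that vertical line are typical when stepping x and solving y = ±sqrt(...), because points bunch up wherever the tangent is steep; parameterizing by angle instead gives evenly spaced points everywhere on the arc. A sketch of the vertex generation (plain Java, no GL calls; the resulting points can be fed to a GL_LINE_STRIP):

```java
// Generate arc vertices by sweeping the angle: x = cx + r*cos(θ), y = cy + r*sin(θ).
public final class ArcPoints {
    /** Returns {x0, y0, x1, y1, ...} for an arc from startDeg to endDeg. */
    public static float[] arc(float cx, float cy, float radius,
                              float startDeg, float endDeg, int segments) {
        float[] pts = new float[2 * (segments + 1)];
        for (int i = 0; i <= segments; i++) {
            double theta = Math.toRadians(startDeg + (endDeg - startDeg) * i / (double) segments);
            pts[2 * i]     = cx + radius * (float) Math.cos(theta);
            pts[2 * i + 1] = cy + radius * (float) Math.sin(theta);
        }
        return pts;
    }
}
```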

    Read the article

  • Prevent full table scan for query with multiple where clauses

    - by Dave Jarvis
    A while ago I posted a message about optimizing a query in MySQL. I have since ported the data and query to PostgreSQL, but now PostgreSQL has the same problem. The solution in MySQL was to force the optimizer to not optimize using STRAIGHT_JOIN. PostgreSQL offers no such option. Here is the explain: Here is the query: SELECT avg(d.amount) AS amount, y.year FROM station s, station_district sd, year_ref y, month_ref m, daily d LEFT JOIN city c ON c.id = 10663 WHERE -- Find all the stations within a specific unit radius ... -- 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 AND -- Ignore stations outside the given elevations -- s.elevation BETWEEN 0 AND 2000 AND sd.id = s.station_district_id AND -- Gather all known years for that station ... -- y.station_district_id = sd.id AND -- The data before 1900 is shaky; insufficient after 2009. -- y.year BETWEEN 1980 AND 2000 AND -- Filtered by all known months ... -- m.year_ref_id = y.id AND m.month = 12 AND -- Whittled down by category ... -- m.category_id = '001' AND -- Into the valid daily climate data. -- m.id = d.month_ref_id AND d.daily_flag_id <> 'M' GROUP BY y.year It appears as though PostgreSQL is looking at the DAILY table first, which is simply not the right way to go about this query as there are nearly 300 million rows. How do I force PostgreSQL to start at the CITY table? Thank you!

    Read the article

  • simple Stata program

    - by Cyrus S
I am trying to write a simple program to combine coefficient and standard error estimates from a set of regression fits. I run, say, 5 regressions, and store the coefficient(s) and standard error(s) of interest into vectors (Stata matrix objects, actually). Then, I need to do the following: Find the mean value of the coefficient estimates. Combine the standard error estimates according to the formula suggested for combining results from "multiple imputation". The formula is the square root of the formula for "T" on page 6 of the following document: http://bit.ly/b05WX3 I have written Stata code that does this once, but I want to write this as a function (or "program", in Stata speak) that takes as arguments the vector (or matrix, if possible, to combine multiple estimates at once) of regression coefficient estimates and the vector (or matrix) of corresponding standard error estimates, and then generates 1 and 2 above. Here is the code that I wrote: (breg is a 1x5 vector of the regression coefficient estimates, and sereg is a 1x5 vector of the associated standard error estimates) mat ones = (1,1,1,1,1) mat bregmean = (1/5)*(ones*breg') scalar bregmean_s = bregmean[1,1] mat seregmean = (1/5)*(ones*sereg') mat seregbtv = (1/4)*(breg - bregmean#ones)* (breg - bregmean#ones)' mat varregmi = (1/5)*(sereg*sereg') + (1+(1/5))* seregbtv scalar varregmi_s = varregmi[1,1] scalar seregmi = sqrt(varregmi_s) disp bregmean_s disp seregmi This gives the right answer for a single instance. Any pointers would be great! UPDATE: I completed the code for combining estimates in a kXm matrix of coefficients/parameters (k is the number of parameters, m the number of imputations). Code can be found here: http://bit.ly/cXJRw1 Thanks to Tristan and Gabi for the pointers.
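For reference, the combining rule being coded here is the standard multiple-imputation (Rubin) formula; with m imputed estimates Q̂_i and standard errors SE_i it reads:

```latex
\bar{Q} = \frac{1}{m}\sum_{i=1}^{m}\hat{Q}_i, \qquad
\bar{W} = \frac{1}{m}\sum_{i=1}^{m} SE_i^{2}, \qquad
B = \frac{1}{m-1}\sum_{i=1}^{m}\bigl(\hat{Q}_i-\bar{Q}\bigr)^{2},

T = \bar{W} + \Bigl(1+\frac{1}{m}\Bigr)B, \qquad
SE_{\text{combined}} = \sqrt{T}
```

With m = 5 this matches the code above: (1/5)*(sereg*sereg') is the within-imputation piece W̄, the (1/4)*(...)(...)'' term is the between-imputation variance B, and seregmi is sqrt(T).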

    Read the article
