Search Results

Search found 654 results on 27 pages for 'divide and conquer'.

Page 8/27 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Failed to fetch *.deb Size mismatch, then packages with unmet dependencies [solved]

    - by user113907
    I recently bought the wonderful-looking and well-reviewed Amnesia: The Dark Descent and I'm trying to install it. The first time I tried to download it, I had to stop in the middle of the download (which may have broken something). The second time I tried to download, at the end of the download it gave me the following error: Failed to fetch https://private-ppa.launchpad.net/commercial-ppa-uploaders/amnesia/ubuntu/pool/main/a/amnesia/amnesia_1.2.1-0ubuntu2_i386.deb Size mismatch Now, whenever I try to download it, it gives me this error: The following packages have unmet dependencies: amnesia: Depends: libalut0 (>= 1.0.1) but it is not going to be installed Depends: libc6 (>= 2.4) but 2.15-0ubuntu10.3 is to be installed Depends: libfontconfig1 (>= 2.8.0) but 2.8.0-3ubuntu9.1 is to be installed Depends: libfreetype6 (>= 2.2.1) but 2.4.8-1ubuntu2 is to be installed Depends: libgcc1 (>= 1:4.1.1) but 1:4.6.3-1ubuntu5 is to be installed Depends: libopenal1 (>= 1:1.13) but 1:1.13-4ubuntu3 is to be installed Depends: libsdl1.2debian (>= 1.2.10-1) but 1.2.14-6.4ubuntu3 is to be installed Depends: libstdc++6 (>= 4.1.1) but 4.6.3-1ubuntu5 is to be installed Depends: libxft2 (> 2.1.1) but 2.2.0-3ubuntu2 is to be installed Depends: zlib1g (>= 1:1.1.4) but 1:1.2.3.4.dfsg-3ubuntu4 is to be installed I already searched the net and ran a few command-line commands, e.g. sudo dpkg --configure -a and sudo apt-get install -f, and also tried configuring the software sources to download from Main instead of the local UK server. But I'm really not finding a solution. I have a fresh install of the latest LTS (12.04). The only non-standard thing so far is that I installed gnome-shell (?) because I really can't stand Unity. Help would be much appreciated. I am currently more than entertained enough with World of Goo and Command & Conquer, but I will want to play Amnesia in the near future.

    Read the article

  • What is the aim of this email? Is this a ping/sping? [closed]

    - by mplungjan
    Hi, I received this spam in my catch-all. As a webmaster of the domain it was sent to, I am really curious what the reason for this mail is. It was sent to a non-existent user "tania" on my domain - here I used mydomain.zzz - what does the sender want to achieve? Since many mail servers have stopped backscattering, not getting a bounce would not mean anything, would it? And if this is off topic, where in the Stack Exchange network WOULD it be on topic? Delivered-To: [email protected] Received: (qmail 8015 invoked from network); 27 Jan 2011 02:32:47 -0000 Received: from unknown (HELO p3pismtp01-021.prod.phx3.secureserver.net) ([10.6.12.26]) (envelope-sender <[email protected]>) by smtp35.prod.mesa1.secureserver.net (qmail-1.03) with SMTP for <[email protected]>; 27 Jan 2011 02:32:47 -0000 X-IronPort-Anti-Spam-Result: At4FAAlnQE1GVjtCVGdsb2JhbACWXo4gCwEWCA0YJLwyhU8EhRc Received: from mx.dt3ls.com ([70.86.59.66]) by p3pismtp01-021.prod.phx3.secureserver.net with ESMTP; 26 Jan 2011 19:32:47 -0700 Received: from 70.86.59.66 by mx.dt3ls.com (Merak 8.9.1) with ASMTP id JXF39710 for <[email protected]>; Wed, 26 Jan 2011 17:31:10 -0500 Return-Path: [email protected] Status: Message-ID: <20110126173109.4d9d6c3f2b@1c3c> From: "Tech Support" <[email protected]> To: <[email protected]> Subject: Information, as instructed. Date: Wed, 26 Jan 2011 17:31:09 -0500 X-Priority: 3 X-Mailer: General-Mailer v.3 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Quote: I give it to you not that you may remember time, but that you might forget it now and then for a moment and not spend all your breath trying to conquer it. Because no battle is ever won he said. They are not even fought. The field reveals to a man his own folly and despair, and victory is an illusion of philosophers and fools. William Faulkner The Sound and the Fury

    Read the article

  • How to stay creative when going through tough emotional times (divorce, family death, etc)? [closed]

    - by gaearon
    Hi everyone. I believe this is not a duplicate of the motivation question because I want to especially emphasize the emotional breakdown. You may conquer lack of motivation by working harder and getting through the dip, however this was not the case when I was separating from my girlfriend. I actually liked the project, it was (and it still is!) my first programming job at an amazing workplace and I wasn't being pressured in any way, but I found myself absolutely unable to code, blankly staring at the screen, my thoughts disorganized, the feeling of emptiness all in my chest. I could perform some straightforward coding, but anything that involved creative thinking, designing abstractions, solving new problems and, worst of all, fixing bugs in legacy code completely wiped out my brain, to the point where I started avoiding work, which I had never done before. Coffee only made it worse. Eventually I got over that, and I remember the happy day I solved a problem elegantly and thought—hell, first time in a month! Thankfully the project wasn't top priority and I had the time to catch up. I wonder now, was there any other way to boost my productivity back then? I bet people would say I should've taken a break—and I think I really should have—but what if I needed the money? Didn't want to lose my job? Are there any ways to trick your brain into being creative despite emotional losses? From your experience, would it be worth talking to my boss or colleagues?

    Read the article

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific. Users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about specific items or things) and we are using Python NLTK to do named-entity extraction to find interesting, likely query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness. My problem is I am at a loss for how to go about scoring these results. The text being searched is split up into tokens in a list, so my initial approach would be to normalize a float score between 0.0-1.0 generated off of how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more; greater frequency, greater score; etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to solve a similar search-relevance grading problem across comparable metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
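    One way to picture the weighting question is a small scoring function that folds first-mention position, frequency with diminishing returns, and a linear recency decay into a single 0.0-1.0 value. The sketch below is in Java only because that matches the other code on this page; the weights, field names and decay choices are illustrative assumptions, not anything from the post, and would need tuning against real relevance judgements.

      import java.util.List;

      public class RelevanceScorer {
          // Weights are arbitrary assumptions; tune them against real judgements.
          private static final double W_POSITION = 0.4;
          private static final double W_FREQUENCY = 0.3;
          private static final double W_RECENCY = 0.3;

          /**
           * tokens       - the tokenized post text
           * term         - the query term being scored
           * postAgeHours - hours since the post was created
           * maxAgeHours  - oldest age still considered relevant
           */
          public static double score(List<String> tokens, String term,
                                     double postAgeHours, double maxAgeHours) {
              int firstIndex = -1;
              int count = 0;
              for (int i = 0; i < tokens.size(); i++) {
                  if (tokens.get(i).equalsIgnoreCase(term)) {
                      if (firstIndex < 0) firstIndex = i;
                      count++;
                  }
              }
              if (count == 0) return 0.0;

              // Earlier first mention -> closer to 1.0
              double positionScore = 1.0 - ((double) firstIndex / tokens.size());
              // Diminishing returns on repeated mentions
              double frequencyScore = 1.0 - 1.0 / (1.0 + count);
              // Linear decay with age, clamped to [0, 1]
              double recencyScore = Math.max(0.0, 1.0 - postAgeHours / maxAgeHours);

              return W_POSITION * positionScore
                   + W_FREQUENCY * frequencyScore
                   + W_RECENCY * recencyScore;
          }
      }

    Because each component is normalized to [0, 1] and the weights sum to 1, the combined score stays in [0, 1], which keeps it comparable with (or blendable into) whatever Sphinx returns as the fallback.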

    Read the article

  • Help with Collision Resolution?

    - by Milo
    I'm trying to learn about physics by trying to make a simplified GTA 2 clone. My only problem is collision resolution. Everything else works great. I have a rigid body class and from there cars and a wheel class: class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private OBB2D predictionRect = new OBB2D(new Vector2D(), 1.0f, 1.0f, 0.0f); private float mass; private Vector2D deltaVec = new Vector2D(); private Vector2D v = new Vector2D(); //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; private Matrix mat = new Matrix(); private float[] Vector2Ds = new float[2]; private Vector2D tangent = new Vector2D(); private static Vector2D worldRelVec = new Vector2D(); private static Vector2D relWorldVec = new Vector2D(); private static Vector2D pointVelVec = new Vector2D(); public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; setLayer(LAYER_OBJECTS); } protected void rectChanged() { if(getWorld() != null) { getWorld().updateDynamic(this); } } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); predictionRect.set(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position, getWidth(), getHeight(), angle); rectChanged(); } public void setPredictionLocation(Vector2D position, float angle) { getPredictionRect().set(position, getWidth(), getHeight(), angle); } public void setPredictionCenter(Vector2D center) { getPredictionRect().moveTo(center); } public void setPredictionAngle(float angle) { predictionRect.setAngle(angle); } public Vector2D getPosition() { return getRect().getCenter(); } public OBB2D getPredictionRect() { return predictionRect; } @Override public void update(float timeStep) { doUpdate(false,timeStep); } public void doUpdate(boolean prediction, float timeStep) { //integrate physics //linear Vector2D acceleration = Vector2D.scalarDivide(forces, mass); if(prediction) { Vector2D velocity = Vector2D.add(this.velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); c = Vector2D.add(getRect().getCenter(), Vector2D.scalarMultiply(velocity , timeStep)); setPredictionCenter(c); //forces = new Vector2D(0,0); //clear forces } else { velocity.x += (acceleration.x * timeStep); velocity.y += (acceleration.y * timeStep); //velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); v.x = getRect().getCenter().getX() + (velocity.x * timeStep); v.y = getRect().getCenter().getY() + (velocity.y * timeStep); deltaVec.x = v.x - c.x; deltaVec.y = v.y - c.y; deltaVec.normalize(); setCenter(v.x, v.y); forces.x = 0; //clear forces forces.y = 0; } //angular float angAcc = torque / inertia; if(prediction) { float angularVelocity = this.angularVelocity + angAcc * timeStep; setPredictionAngle(getAngle() + angularVelocity * timeStep); //torque = 0; //clear torque } else { 
angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } } public void updatePrediction(float timeStep) { doUpdate(true, timeStep); } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { mat.reset(); Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); relWorldVec.x = Vector2Ds[0]; relWorldVec.y = Vector2Ds[1]; return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { mat.reset(); Vector2Ds[0] = world.x; Vector2Ds[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vector2Ds); return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { tangent.x = -worldOffset.y; tangent.y = worldOffset.x; return Vector2D.add( Vector2D.scalarMultiply(tangent, angularVelocity) , velocity); } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces.x += worldForce.x; forces.y += worldForce.y; //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } public Vector2D getVelocity() { return velocity; } public void setVelocity(Vector2D velocity) { this.velocity = velocity; } public Vector2D getDeltaVec() { return deltaVec; } } Vehicle public class Wheel { private Vector2D forwardVec; private Vector2D sideVec; private float wheelTorque; private float wheelSpeed; private float wheelInertia; private float wheelRadius; private Vector2D position = new Vector2D(); public Wheel(Vector2D position, float radius) { this.position = position; setSteeringAngle(0); wheelSpeed = 0; wheelRadius = radius; wheelInertia = (radius * radius) * 1.1f; } public void setSteeringAngle(float newAngle) { Matrix mat = new Matrix(); float []vecArray = new float[4]; //forward Vector vecArray[0] = 0; vecArray[1] = 1; //side Vector vecArray[2] = -1; vecArray[3] = 0; mat.postRotate(newAngle / (float)Math.PI * 180.0f); mat.mapVectors(vecArray); forwardVec = new Vector2D(vecArray[0], vecArray[1]); sideVec = new Vector2D(vecArray[2], vecArray[3]); } public void addTransmissionTorque(float newValue) { wheelTorque += newValue; } public float getWheelSpeed() { return wheelSpeed; } public Vector2D getAnchorPoint() { return position; } public Vector2D calculateForce(Vector2D relativeGroundSpeed, float timeStep, boolean prediction) { //calculate speed of tire patch at ground Vector2D patchSpeed = Vector2D.scalarMultiply(Vector2D.scalarMultiply( Vector2D.negative(forwardVec), wheelSpeed), wheelRadius); //get velocity difference between ground and patch Vector2D velDifference = Vector2D.add(relativeGroundSpeed , patchSpeed); //project ground speed onto side axis Float forwardMag = new Float(0.0f); Vector2D sideVel = velDifference.project(sideVec); Vector2D forwardVel = velDifference.project(forwardVec, forwardMag); //calculate super fake friction forces //calculate response force Vector2D responseForce = Vector2D.scalarMultiply(Vector2D.negative(sideVel), 2.0f); responseForce = Vector2D.subtract(responseForce, forwardVel); float topSpeed = 500.0f; //calculate torque on wheel wheelTorque += forwardMag * wheelRadius; //integrate total torque into wheel wheelSpeed += 
wheelTorque / wheelInertia * timeStep; //top speed limit (kind of a hack) if(wheelSpeed > topSpeed) { wheelSpeed = topSpeed; } //clear our transmission torque accumulator wheelTorque = 0; //return force acting on body return responseForce; } public void setTransmissionTorque(float newValue) { wheelTorque = newValue; } public float getTransmissionTourque() { return wheelTorque; } public void setWheelSpeed(float speed) { wheelSpeed = speed; } } //our vehicle object public class Vehicle extends RigidBody { private Wheel [] wheels = new Wheel[4]; private boolean throttled = false; public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //front wheels wheels[0] = new Wheel(new Vector2D(halfSize.x, halfSize.y), 0.45f); wheels[1] = new Wheel(new Vector2D(-halfSize.x, halfSize.y), 0.45f); //rear wheels wheels[2] = new Wheel(new Vector2D(halfSize.x, -halfSize.y), 0.75f); wheels[3] = new Wheel(new Vector2D(-halfSize.x, -halfSize.y), 0.75f); super.initialize(halfSize, mass, bitmap); } public void setSteering(float steering) { float steeringLock = 0.13f; //apply steering angle to front wheels wheels[0].setSteeringAngle(steering * steeringLock); wheels[1].setSteeringAngle(steering * steeringLock); } public void setThrottle(float throttle, boolean allWheel) { float torque = 85.0f; throttled = true; //apply transmission torque to back wheels if (allWheel) { wheels[0].addTransmissionTorque(throttle * torque); wheels[1].addTransmissionTorque(throttle * torque); } wheels[2].addTransmissionTorque(throttle * torque); wheels[3].addTransmissionTorque(throttle * torque); } public void setBrakes(float brakes) { float brakeTorque = 15.0f; //apply brake torque opposing wheel vel for (Wheel wheel : wheels) { float wheelVel = wheel.getWheelSpeed(); wheel.addTransmissionTorque(-wheelVel * brakeTorque * brakes); } } public void doUpdate(float timeStep, boolean prediction) { for (Wheel wheel : wheels) { float wheelVel = wheel.getWheelSpeed(); //apply negative force to naturally slow down car if(!throttled && !prediction) wheel.addTransmissionTorque(-wheelVel * 0.11f); Vector2D worldWheelOffset = relativeToWorld(wheel.getAnchorPoint()); Vector2D worldGroundVel = pointVelocity(worldWheelOffset); Vector2D relativeGroundSpeed = worldToRelative(worldGroundVel); Vector2D relativeResponseForce = wheel.calculateForce(relativeGroundSpeed, timeStep,prediction); Vector2D worldResponseForce = relativeToWorld(relativeResponseForce); applyForce(worldResponseForce, worldWheelOffset); } //no throttling yet this frame throttled = false; if(prediction) { super.updatePrediction(timeStep); } else { super.update(timeStep); } } @Override public void update(float timeStep) { doUpdate(timeStep,false); } public void updatePrediction(float timeStep) { doUpdate(timeStep,true); } public void inverseThrottle() { float scalar = 0.2f; for(Wheel wheel : wheels) { wheel.setTransmissionTorque(-wheel.getTransmissionTourque() * scalar); wheel.setWheelSpeed(-wheel.getWheelSpeed() * 0.1f); } } } And my big hack collision resolution: private void update() { camera.setPosition((vehicle.getPosition().x * camera.getScale()) - ((getWidth() ) / 2.0f), (vehicle.getPosition().y * camera.getScale()) - ((getHeight() ) / 2.0f)); //camera.move(input.getAnalogStick().getStickValueX() * 15.0f, input.getAnalogStick().getStickValueY() * 15.0f); if(input.isPressed(ControlButton.BUTTON_GAS)) { vehicle.setThrottle(1.0f, false); } if(input.isPressed(ControlButton.BUTTON_STEAL_CAR)) { vehicle.setThrottle(-1.0f, false); } 
if(input.isPressed(ControlButton.BUTTON_BRAKE)) { vehicle.setBrakes(1.0f); } vehicle.setSteering(input.getAnalogStick().getStickValueX()); //vehicle.update(16.6666666f / 1000.0f); boolean colided = false; vehicle.updatePrediction(16.66666f / 1000.0f); List<Entity> buildings = world.queryStaticSolid(vehicle,vehicle.getPredictionRect()); if(buildings.size() > 0) { colided = true; } if(!colided) { vehicle.update(16.66f / 1000.0f); } else { Vector2D delta = vehicle.getDeltaVec(); vehicle.setVelocity(Vector2D.negative(vehicle.getVelocity().multiply(0.2f)). add(delta.multiply(-1.0f))); vehicle.inverseThrottle(); } } Here is OBB public class OBB2D { // Corners of the box, where 0 is the lower left. private Vector2D corner[] = new Vector2D[4]; private Vector2D center = new Vector2D(); private Vector2D extents = new Vector2D(); private RectF boundingRect = new RectF(); private float angle; //Two edges of the box extended away from corner[0]. private Vector2D axis[] = new Vector2D[2]; private double origin[] = new double[2]; public OBB2D(Vector2D center, float w, float h, float angle) { set(center,w,h,angle); } public OBB2D(float left, float top, float width, float height) { set(new Vector2D(left + (width / 2), top + (height / 2)),width,height,0.0f); } public void set(Vector2D center,float w, float h,float angle) { Vector2D X = new Vector2D( (float)Math.cos(angle), (float)Math.sin(angle)); Vector2D Y = new Vector2D((float)-Math.sin(angle), (float)Math.cos(angle)); X = X.multiply( w / 2); Y = Y.multiply( h / 2); corner[0] = center.subtract(X).subtract(Y); corner[1] = center.add(X).subtract(Y); corner[2] = center.add(X).add(Y); corner[3] = center.subtract(X).add(Y); computeAxes(); extents.x = w / 2; extents.y = h / 2; computeDimensions(center,angle); } private void computeDimensions(Vector2D center,float angle) { this.center.x = center.x; this.center.y = center.y; this.angle = angle; boundingRect.left = Math.min(Math.min(corner[0].x, corner[3].x), Math.min(corner[1].x, corner[2].x)); boundingRect.top = Math.min(Math.min(corner[0].y, corner[1].y),Math.min(corner[2].y, corner[3].y)); boundingRect.right = Math.max(Math.max(corner[1].x, corner[2].x), Math.max(corner[0].x, corner[3].x)); boundingRect.bottom = Math.max(Math.max(corner[2].y, corner[3].y),Math.max(corner[0].y, corner[1].y)); } public void set(RectF rect) { set(new Vector2D(rect.centerX(),rect.centerY()),rect.width(),rect.height(),0.0f); } // Returns true if other overlaps one dimension of this. private boolean overlaps1Way(OBB2D other) { for (int a = 0; a < axis.length; ++a) { double t = other.corner[0].dot(axis[a]); // Find the extent of box 2 on axis a double tMin = t; double tMax = t; for (int c = 1; c < corner.length; ++c) { t = other.corner[c].dot(axis[a]); if (t < tMin) { tMin = t; } else if (t > tMax) { tMax = t; } } // We have to subtract off the origin // See if [tMin, tMax] intersects [0, 1] if ((tMin > 1 + origin[a]) || (tMax < origin[a])) { // There was no intersection along this dimension; // the boxes cannot possibly overlap. return false; } } // There was no dimension along which there is no intersection. // Therefore the boxes overlap. return true; } //Updates the axes after the corners move. Assumes the //corners actually form a rectangle. private void computeAxes() { axis[0] = corner[1].subtract(corner[0]); axis[1] = corner[3].subtract(corner[0]); // Make the length of each axis 1/edge length so we know any // dot product must be less than 1 to fall within the edge. 
for (int a = 0; a < axis.length; ++a) { axis[a] = axis[a].divide((axis[a].length() * axis[a].length())); origin[a] = corner[0].dot(axis[a]); } } public void moveTo(Vector2D center) { Vector2D centroid = (corner[0].add(corner[1]).add(corner[2]).add(corner[3])).divide(4.0f); Vector2D translation = center.subtract(centroid); for (int c = 0; c < 4; ++c) { corner[c] = corner[c].add(translation); } computeAxes(); computeDimensions(center,angle); } // Returns true if the intersection of the boxes is non-empty. public boolean overlaps(OBB2D other) { if(right() < other.left()) { return false; } if(bottom() < other.top()) { return false; } if(left() > other.right()) { return false; } if(top() > other.bottom()) { return false; } if(other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; } return overlaps1Way(other) && other.overlaps1Way(this); } public Vector2D getCenter() { return center; } public float getWidth() { return extents.x * 2; } public float getHeight() { return extents.y * 2; } public void setAngle(float angle) { set(center,getWidth(),getHeight(),angle); } public float getAngle() { return angle; } public void setSize(float w,float h) { set(center,w,h,angle); } public float left() { return boundingRect.left; } public float right() { return boundingRect.right; } public float bottom() { return boundingRect.bottom; } public float top() { return boundingRect.top; } public RectF getBoundingRect() { return boundingRect; } public boolean overlaps(float left, float top, float right, float bottom) { if(right() < left) { return false; } if(bottom() < top) { return false; } if(left() > right) { return false; } if(top() > bottom) { return false; } return true; } }; What I do is when I predict a hit on the car, I force it back. It does not work that well and seems like a bad idea. What could I do to have more proper collision resolution. Such that if I hit a wall I will never get stuck in it and if I hit the side of a wall I can steer my way out of it. Thanks I found this nice ppt. It talks about pulling objects apart and calculating new velocities. How could I calc new velocities in my case? http://www.google.ca/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CC8QFjAB&url=http%3A%2F%2Fcoitweb.uncc.edu%2F~tbarnes2%2FGameDesignFall05%2FSlides%2FCh4.2-CollDet.ppt&ei=x4ucULy5M6-N0QGRy4D4Cg&usg=AFQjCNG7FVDXWRdLv8_-T5qnFyYld53cTQ&cad=rja
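    For reference, the "pull the objects apart, then fix the velocities" idea in those slides usually reduces to something like the sketch below. It reuses the posted RigidBody accessors (getPosition, getVelocity, setVelocity, and the same setCenter that doUpdate calls); the contact normal, penetration depth and restitution value are assumptions, since the posted OBB2D only reports a boolean overlap and would need to be extended to produce them.

      // Sketch of impulse-style resolution against a static wall. It assumes you can obtain a
      // contact normal (unit vector pointing out of the wall) and a penetration depth from the
      // OBB overlap test - neither exists in the posted OBB2D yet, so both are hypothetical.
      public void resolveStaticCollision(RigidBody body, Vector2D normal, float penetration) {
          // 1. Positional correction: push the body back out along the normal so it
          //    can never sink into (and get stuck inside) the wall.
          Vector2D c = body.getPosition();
          body.setCenter(c.x + normal.x * penetration, c.y + normal.y * penetration);

          // 2. Velocity correction: cancel the velocity component pointing into the wall.
          float restitution = 0.2f; // 0 = dead stop, 1 = perfect bounce (assumed value)
          Vector2D v = body.getVelocity();
          float velAlongNormal = v.x * normal.x + v.y * normal.y;
          if (velAlongNormal < 0.0f) { // only react while still moving into the wall
              float j = -(1.0f + restitution) * velAlongNormal;
              v.x += j * normal.x;
              v.y += j * normal.y;
              body.setVelocity(v);
          }
          // The tangential component is left untouched, so the car can still steer along the wall.
      }

    Resolving position first and only killing the normal component of the velocity is what lets you slide along a wall instead of bouncing backwards or getting stuck, which is the behaviour the prediction-and-reverse hack above struggles with.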

    Read the article

  • Slot Machine Pay Out

    - by Kris.Mitchell
    I have done a lot of research into random number generators for slot machines, reel stop calculations and how to physically give the user a good chance of winning. What I can't figure out is how to properly ensure that the machine is going to have a payout rating of (let's say) 95%. So, I have a reel set up with 22 spaces on it, filled with 16 different symbols. When I get my random number, I mod it by 64, take the remainder and hop over to a lookup table to see how the virtual stop relates to the reel position. Now that I know how the reels are going to stop, how do I make sure the payout ratio is correct? For every dollar they put in, how do I make sure the machine will pay out 95 cents? Thanks for the ideas. I am working in ActionScript, if that helps with the language issues, but in general I am just looking for theory.
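    In theory the payout rating is just the expected value of one spin: enumerate every combination of virtual stops (they are equally likely), sum the payouts, and divide by the number of combinations; then scale the paytable, or re-map the virtual reels, until that expectation is about 0.95. A rough sketch, written in Java rather than ActionScript and with a toy three-reel, three-of-a-kind-only paytable assumed purely for illustration:

      // Sketch: theoretical return-to-player (RTP) by brute-force enumeration of all virtual stops.
      // Only the 64-virtual-stop idea comes from the post; the 3-reel layout and the
      // three-of-a-kind-only paytable are toy assumptions.
      public class PayoutCalculator {
          // virtualToSymbol[reel][stop] maps each of the 64 virtual stops to a symbol id (0..15)
          // threeOfAKindPay[symbol] is the payout, in coins, for lining up three of that symbol
          public static double expectedPayout(int[][] virtualToSymbol, double[] threeOfAKindPay) {
              final int STOPS = 64;
              double totalPaid = 0.0;
              long combos = 0;
              for (int a = 0; a < STOPS; a++)
                  for (int b = 0; b < STOPS; b++)
                      for (int c = 0; c < STOPS; c++) {
                          int s0 = virtualToSymbol[0][a];
                          int s1 = virtualToSymbol[1][b];
                          int s2 = virtualToSymbol[2][c];
                          if (s0 == s1 && s1 == s2) {
                              totalPaid += threeOfAKindPay[s0];
                          }
                          combos++;
                      }
              // Every combination of virtual stops is equally likely, so RTP is the plain average.
              return totalPaid / combos;
          }
      }

    If this comes out at, say, 1.07 coins returned per coin wagered, you scale every paytable entry by 0.95/1.07 (or thin out the high-paying symbols on the virtual reels) and re-run until the expectation lands near 0.95.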

    Read the article

  • Creating Wizard in ASP.NET MVC (Part 1)

    - by bipinjoshi
    At times you want to accept user input in your web applications by presenting them with a wizard-driven user interface. A wizard-driven user interface allows you to logically divide and group pieces of information so that users can fill them in easily in a step-by-step manner. While creating a wizard is easy in ASP.NET Web Forms applications, you need to implement it yourself in ASP.NET MVC applications. There is more than one approach to creating a wizard in ASP.NET MVC, and this article shows one of them. In Part 1 of this article you will develop a wizard that stores its data in ASP.NET Session and works on traditional form submission. http://www.binaryintellect.net/articles/9a5fe277-6e7e-43e5-8408-a28ff5be7801.aspx

    Read the article

  • Answers to “What source control system do you use?” (and some winners)

    - by jamiet
    About a month ago I posed a question here on my blog SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff) in which I asked SQL Server developers to answer the following questions: Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? I had some really great responses (I highly recommend going and reading them). I promised that the five best, most thought-provoking, responses (as determined by me) would win one of five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect; here are the five that I chose (note that if you responded but did not leave a means of getting in touch then you weren’t considered for one of the prizes – sorry): In general, I don't think the management overhead and licensing cost associated with TFS is worthwhile if all you're doing is using source control. To get value from TFS, at a minimum you need to be using team build, and possibly other stuff as well, such as the sharepoint integration. If that's all you need, then svn with Tortoise would be my first choice. If you want to add build automation later, you can do this with cruisecontrol (is it still called that?), JetBrains, etc. For a long time I thought that Redgate's claims about "bridging the SSMS-VS divide" were a load of hot air, since in my experience anyone who knew what they were doing was using Visual Studio, in particular SSDT and its predecessors. However, on a recent client I was putting in source control for the first time, and I discovered that the "divide" really does exist. That client has ended up using svn with Redgate SQL Source Control, with no build automation, but with scope to add it in the future. Gavin Campbell I think putting the DB under source control is a great idea.  I have issues with the earlier versions of SQL Source Control in that it provides little help in versioning the DB. I think the latest version merges SQL Compare and SQL Source Control together.  Which is how it should have been all along. Sure I have the DB scripts in SVN, but I can't automate DB builds and changes without more tools.  Frankly I'm surprised databases don't have some sort of versioning built into them. Nick Portelli Source control has been immensely useful and saved me from a lot of rework on more than one occasion.  I have learned that you have to be extremely careful checking in data.  Our system is internal only so during the system production run once a week, if there is a problem that I can fix easily(for example, a control table points to a file in the wrong environment), I'll do it directly in production so the run can continue as soon as possible since we have a specified time window.  We do full test runs to minimize this but it has come up once or twice.  We use Red-Gate source control to "push" from the test environment to the production environment.  There have been a couple of occasions where the test environment with the wrong setting was pushed back over the production environment because the change was made only in production.  Gotta keep an eye on that. Alan Dykes Goodness is it manual.  
And can be extremely painful at times.  Not only are we running thin, we are constrained on the tools we can get ($$ must mean free).  Certainly no excuse, and a great opportunity to improve my skills by learning new things.  But...  Getting buy in a on a proven process or methodology is hard, takes time, and diverts us from development.  If SQL Source Control is easy to use and proven oh boy could you get some serious fans around here!  Seriously though, as the "accidental dba" of this shop any new ideas / easy to implement tools can make a world of difference in productivity and most importantly accuracy.  Manual = bad. :) John Hennesey (who left his email address) The one thing I would love to know more about is the unique challenges of working with databases as source code - you can store scripts, but are they written as deployment scripts with all the logic about how to apply them to an existing DB? Where is that baseline DB? Where's the data? How does a team share the data and the code? It's a real challenge. Merrill Aldrich Congratulations to the five of you. Red Gate will be in touch with you soon about your free licenses. Thank you to all those that responded. And again, go and check out all the responses – those above are only small proportion from what is a very interesting comment thread. @Jamiet

    Read the article

  • Why Does 64-Bit Windows Need a Separate “Program Files (x86)” Folder?

    - by Jason Fitzpatrick
    If you’re currently using any 64-bit version of Windows you may have noticed there are two “Program Files” folders, one for 64-bit and one for 32-bit apps. Why does Windows need to sub-divide them? Read on to see why. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Why Does 64-Bit Windows Need a Separate “Program Files (x86)” Folder? Why Your Android Phone Isn’t Getting Operating System Updates and What You Can Do About It How To Delete, Move, or Rename Locked Files in Windows

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
    This is the first part of the series Incremental Statistics. Here is the index of the complete series. What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1 Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2 DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3 Statistics are considered one of the most important aspects of SQL Server performance tuning. You might have often heard the phrase related to performance tuning: “Update statistics before you take any other steps to tune performance”. Honestly, I have said the above statement many times, and many times I have personally updated statistics before starting any performance tuning exercise. You may agree or disagree with the point, but there is no denying that statistics play an extremely vital role in performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find it very interesting. After spending some time with this feature, I decided to write about this subject over here. New in SQL Server 2014 – Incremental Statistics Well, it seems like lots of people want to start using SQL Server 2014's new feature of Incremental Statistics. However, let us understand what this feature actually does and how it can help. I will try to simplify this feature first before I start working on the demo code. Code for all versions of SQL Server Here is the code which you can execute on all versions of SQL Server and it will update the statistics of your table. The keywords you should pay attention to are WITH FULLSCAN. It will scan the entire table and build brand new statistics for you, which your SQL Server performance tuning engine can use for better estimation of your execution plan. UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN Who should learn about this? Why? If you are using partitions in your database, you should consider implementing this feature. Otherwise, this feature is pretty much not applicable to you. Well, if you are using a single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table. Each partition will have data which belongs to itself. Now it is very common that each partition is populated separately in SQL Server. Real World Example For example, if your table contains data which is related to sales, you will have plenty of entries in your table. It will be a good idea to divide the table into multiple partitions; for example, you can divide this table into 3 semesters or 4 quarters or even 12 months. Let us assume that we have divided our table into 12 different partitions. Now for the month of January, our first partition will be populated, and for the month of February our second partition will be populated. Now assume that you have plenty of data in your first and second partitions. Now the month of March has just started and your third partition has started to populate. If, for some reason, you want to update your statistics, what will you do? In SQL Server 2012 and earlier versions You will just use the code of WITH FULLSCAN and update the entire table.
That means even though only your third partition has new data, you will still update the entire table. This will be a VERY resource-intensive process, as you will be updating the statistics of partitions 1 and 2 where the data has not changed at all. In SQL Server 2014 You will just update Partition 3. There is a special syntax where you can now specify which partition you want to update. The impact of this is that it smartly merges the new data with the old statistics and updates the entire statistics without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the capability to update a single partition. However, there is one feature which is indeed attractive. Previously, when 20% of the table data changed, a statistics update was triggered. However, now the same threshold is applicable to a single partition. That means if a partition sees a 20% data change, it will also trigger a partition-level statistics update which, when merged into your final statistics, will give you better performance. In summary If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you. Tomorrow: We will see working code of SQL Server 2014 Incremental Statistics. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Statistics, Statistics

    Read the article

  • How can I generate a 2d navigation mesh in a dynamic environment at runtime?

    - by Stephen
    So I've grasped how to use A* for path-finding, and I am able to use it on a grid. However, my game world is huge and I have many enemies moving toward the player, which is a moving target, so a grid system is too slow for path-finding. I need to simplify my node graph by using a navigational mesh. I grasp the concept of "how" a mesh works (finding a path through nodes on the vertices and/or the centers of the edges of polygons). My game uses dynamic obstacles that are procedurally generated at run-time. I can't quite wrap my head around how to take a plane that has multiple obstacles in it and programmatically divide the walkable area up into polygons for the navigation mesh, like the following image. Where do I start? How do I know when a segment of walkable area is already defined, or worse, when I realize I need to subdivide a previously defined walkable area as the algorithm "walks" through the map? I'm using JavaScript in Node.js, if it matters.

    Read the article

  • web server response code 500

    - by Bryan Kemp
    I realize that this may spur a religious discussion, but I discussed this with friends and got great, but conflicting, answers, and the actual documentation is of little help. What do the 500-series response codes from the web server mean? Internal Server Error, but that is vague. My assumption is that it means something bad happened to the server (file system corruption, no connection to the database, network issue, etc.) but not specifically a data-driven error (divide by zero, record missing, bad parameter, etc.). Something to note: there are some web client implementations (the default Android and BlackBerry HTTP clients) that do not allow access to the HTML body if the server response is 500, so there is no way to determine what caused the issue from the client. What I have been implementing recently is a web service that returns a JSON payload wrapped in a response object that contains more specific error information if it is data related, but the server response will be 200 since it finished the actual processing. Thoughts?
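    A minimal sketch of the envelope idea described above, with hypothetical field names (nothing here is a standard; it is just one shape such a wrapper can take):

      // Hypothetical response wrapper: the transport status stays 200 whenever the application
      // ran to completion, and data-level problems are reported in the body instead.
      public class ApiResponse<T> {
          public boolean success;     // false for data-driven errors (bad parameter, record missing, ...)
          public String errorCode;    // e.g. "RECORD_NOT_FOUND" - application-defined, not an HTTP code
          public String errorMessage; // human-readable detail safe to show the caller
          public T payload;           // the actual result when success == true

          public static <T> ApiResponse<T> ok(T payload) {
              ApiResponse<T> r = new ApiResponse<>();
              r.success = true;
              r.payload = payload;
              return r;
          }

          public static <T> ApiResponse<T> error(String code, String message) {
              ApiResponse<T> r = new ApiResponse<>();
              r.success = false;
              r.errorCode = code;
              r.errorMessage = message;
              return r;
          }
      }

    Under this scheme 5xx stays reserved for genuine server failures (lost database connection, crash, file system corruption), which is exactly the distinction the question is drawing.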

    Read the article

  • My Latest Hare-Brained Scheme

    - by Liam McLennan
    I have not had a significant side project for a while but I have been working on a product idea. Its an analytics application that analyses twitter data and reports on market sentiment. The target market is companies who want to track trends in consumer sentiment. My idea is to teach the application to divide relevant tweets into ‘positive’ and ‘negative’ categories. If the input was the set of tweets featuring the word ‘telstra’ the application would find the following tweet:   and put it in the ‘negative’ category. Collecting data in this fashion facilitates the creation of graphs such as: which can then be correlated against events, such as a share offer or new product release. I may go ahead and build this, just because I am a programmer and it amuses me to do so. My concerns are: There  is no market for this tool There is a market, but I don’t understand it and have no way to reach it.

    Read the article

  • Which kind of public sitemap should I build for a search based navigation site

    - by Noam
    I have a search based navigation web-site. Each query has filters as well as sort-by. The search results point to end-pages inside the site. Each of those pages has many outlinks to other end-pages. Currently I have a XML sitemap which directs crawlers to all the end pages. I'm trying to add a silo sitemap directory to improve SEO. Assuming this is the right direction I have a couple of options: end pages sorted alphabetically. Pages by major search filters, and then divide alphabetically. Pages for every filter and cross option between them and the sort-by. Which would you recommend and why? NOTE: I'm not referring to a XML sitemap.

    Read the article

  • Omni-directional shadow mapping

    - by gridzbi
    What is a good/the best way to fill a cube map with depth values that are going to give me the least amount of trouble with floating-point imprecision? To get up and running I'm just writing the raw depth to the buffer; as you can imagine it's pretty terrible, so I need to improve it, but I'm not sure how. A few tutorials on directional lights divide the depth by W and store the Z/W value in the cube map - how would I perform the depth comparison in my shadow mapping step? The NVIDIA article here http://http.developer.nvidia.com/GPUGems/gpugems_ch12.html appears to do something completely different and uses the dot product with the light vector, presumably to counter the depth precision worsening over distance? He also scales the geometry so that it fits into the range -.5 +.5 - the article looks a bit dated, though - is this technique still reasonable? Shader code http://pastebin.com/kNBzX4xU Screenshot http://imgur.com/54wFI

    Read the article

  • Doing a P2V in OVM 3.0.3

    - by Steen Schmidt
    The other day I was talking to a customer about how you can do a P2V in OVM. I had already written about this topic earlier in my blog and there was also some good documentation on how you do this. But what about seeing the whole process from start to end? So I have included a link to a demo on the topic. Here is a demo that has been divided into three steps: Step 1. Target System, Step 2. Import into OVM, and Step 3. Use the new Template.

    Read the article

  • Parallelism implies concurrency but not the other way round right?

    - by Cedric Martin
    I often read that parallelism and concurrency are different things. Very often the answerers/commenters go as far as writing that they're two entirely different things. Yet in my view they're related, but I'd like some clarification on that. For example, if I'm on a multi-core CPU and manage to divide the computation into x smaller computations (say using fork/join), each running in its own thread, I'll have a program that is both doing parallel computation (because supposedly at any point in time several threads are going to run on several cores) and being concurrent, right? While if I'm simply using, say, Java and dealing with UI events and repaints on the Event Dispatch Thread plus running the only thread I created myself, I'll have a program that is concurrent (EDT + GC thread + my main thread etc.) but not parallel. I'd like to know if I'm getting this right and if parallelism (on a "single but multi-core" system) always implies concurrency or not. Also, are multi-threaded programs running on multi-core CPUs, but where the different threads are doing totally different computations, considered to be using "parallelism"?
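    A small fork/join sketch of the first scenario may make the distinction concrete: the program below is concurrent by construction (several tasks are in flight at once), and it is parallel only if the pool actually has more than one core to schedule them on, which is exactly what the question is probing. The array size and threshold are arbitrary.

      import java.util.concurrent.ForkJoinPool;
      import java.util.concurrent.RecursiveTask;

      // Concurrent by construction (several tasks in flight at once); parallel only if
      // the pool has more than one core to run them on.
      class SumTask extends RecursiveTask<Long> {
          private static final int THRESHOLD = 10_000;
          private final long[] data;
          private final int from, to;

          SumTask(long[] data, int from, int to) {
              this.data = data; this.from = from; this.to = to;
          }

          @Override
          protected Long compute() {
              if (to - from <= THRESHOLD) {
                  long sum = 0;
                  for (int i = from; i < to; i++) sum += data[i];
                  return sum;
              }
              int mid = (from + to) >>> 1;
              SumTask left = new SumTask(data, from, mid);
              SumTask right = new SumTask(data, mid, to);
              left.fork();                       // run the left half concurrently
              return right.compute() + left.join();
          }

          public static void main(String[] args) {
              long[] data = new long[1_000_000];
              for (int i = 0; i < data.length; i++) data[i] = i;
              long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
              System.out.println(sum); // 499999500000
          }
      }

    On a single-core machine the same code still runs correctly, just interleaved rather than simultaneously, which is the usual shorthand for "concurrent but not parallel".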

    Read the article

  • How to explain to non-technical person why the task will take much longer then they think?

    - by Mag20
    Almost every developer has to answer questions from the business side like: Why is it going to take 2 days to add this simple contact form? When a developer estimates this task, they may divide it into steps: make some changes to the database, optimize the DB changes for speed, add front-end HTML, write server-side code, add validation, add client-side JavaScript, write unit tests, make sure SEO is set up and working, implement email confirmation, refactor and optimize the code for speed... These may be hard to explain to a non-technical person, who basically sees the whole task as just putting together some HTML and creating a table to store the data. To them it could be 2 hours MAX. So is there a better way to explain to a non-developer why the estimate is high?

    Read the article

  • Which kind of sitemap directory should I build for a search based navigation site

    - by Noam
    I have a search based navigation web-site. Each query has filters as well as sort-by. The search results point to end-pages inside the site. Each of those pages has many outlinks to other end-pages. Currently I have a XML sitemap which directs crawlers to all the end pages. I'm trying to add a silo sitemap directory to improve SEO. Assuming this is the right direction I have a couple of options: end pages sorted alphabetically. Pages by major search filters, and then divide alphabetically. Pages for every filter and cross option between them and the sort-by. Which would you recommend and why?

    Read the article

  • 3D BSP rendering for maps made in 2d platform style

    - by Dev Joy
    I wish to render a 3D map which is always seen from the top; the camera is in the sky and always looking at the earth. Sample of a floor layout: I don't think I need complex structures like BSP trees to render it. I mean I can divide the map into a grid and render it the way it's done in 2D platform games. I just want to know if this is a good idea and what may go wrong if I don't choose BSP tree rendering here. Please also mention if any better-known rendering techniques are available for such situations.

    Read the article

  • I need help with algorithms, how do I improve?

    - by David Burr
    I usually do well at figuring out solutions to programming assignments, but for some reason I'm really struggling in my Algorithms class. I'm not failing but I know I can do better. When I'm confronted with problems like "Divide the array into 2 subarrays so that the sum of each subarray is equal to the sum of the other," I feel like my brain won't cooperate and think, and I end up not being able to solve it. Some of the things I'm doing right now to help myself: reading CLR (1st ed.) -- it takes a lot of time for stuff to sink in and I can't understand most of it; solving some problems -- no matter how much I try, most of the time I end up googling for the solution before I understand how to solve it. I know that good algorithmic skills are very important because lots of good companies ask these sorts of questions in their interview process, so I'm a bit worried right now. What else can I do to improve my algorithmic/problem solving skills? Any advice on how to deal with this?
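    As an aside, the quoted exercise is the classic partition problem, and the textbook way in is subset-sum dynamic programming. The sketch below is the standard solution, not anything from the post, and it assumes "subarrays" means arbitrary subsets; if contiguous splits are meant, a simple prefix-sum scan is enough.

      // Classic subset-sum DP: can some subset of arr reach total/2?
      // If yes, the array can be split into two parts with equal sums.
      public class Partition {
          public static boolean canSplitEqually(int[] arr) {
              int total = 0;
              for (int v : arr) total += v;
              if (total % 2 != 0) return false;   // an odd total can never split evenly

              int target = total / 2;
              boolean[] reachable = new boolean[target + 1];
              reachable[0] = true;                // the empty subset sums to 0
              for (int v : arr) {
                  // iterate downwards so each element is used at most once
                  for (int s = target; s >= v; s--) {
                      if (reachable[s - v]) reachable[s] = true;
                  }
              }
              return reachable[target];
          }

          public static void main(String[] args) {
              System.out.println(canSplitEqually(new int[]{3, 1, 4, 2, 2})); // true: {3,1,2} vs {4,2}
              System.out.println(canSplitEqually(new int[]{1, 2, 4}));       // false
          }
      }

    Working through why the inner loop must run downwards (so each element is counted at most once) is exactly the kind of reasoning these assignments are trying to train.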

    Read the article

  • Multiple Vertex Buffers per Mesh

    - by Daniel
    I've run into the situation where the size of my mesh, with all its vertices and indices, is larger than the (optimal) vertex buffer object upper limit (~8MB). I was wondering if I can sub-divide the mesh across multiple vertex buffers and somehow retain validity of the indices, i.e. a triangle with an index at the first vertex and an index at the last (i.e. in separate VBOs), all the while maintaining this within Vertex Array Objects. My thought is to save myself the hassle and, for meshes (messes :P) such as this, just use the necessary size ( 8MB), which is what I do at the moment. But ideally my buffer manager (WIP) would keep using optimal sizes; I may just have to make a special case then... Any ideas? If necessary, a simple C++ code example is appreciated. Note: I have also cross-posted this on Stack Overflow, as I was not sure which would be more suitable (it's partly a design question).

    Read the article

  • Social Engagement: One Size Doesn't Fit Anyone

    - by Mike Stiles
    The key to achieving meaningful social engagement is to know who you’re talking to, know what they like, and consistently deliver that kind of material to them. Every magazine for women knows this. When you read the article titles promoted on their covers, there’s no mistaking for whom that magazine is intended. And yet, confusion still reigns at many brands as to exactly whom they want to talk to, what those people want to hear, and what kind of content they should be creating for them. In most instances, the root problem is brands want to be all things to all people. Their target audience…the world! Good luck with that. It’s 2012, the age of aggregation and custom content delivery. To cope with the modern day barrage of information, people have constructed technological filters so that content they regard as being “for them” is mostly what gets through. Even if your brand is for men and women, young and old, you may want to consider social properties that divide men from women, and young from old. Yes, a man might find something in a women’s magazine that interests him. But that doesn’t mean he’s going to subscribe to it, or buy even one issue. In fact he’ll probably never see the article he’d otherwise be interested in, because in his mind, “This isn’t for me.” It wasn’t packaged for him. News Flash: men and women are different. So it’s a tall order to craft your Facebook Page or Twitter handle to simultaneously exude the motivators for both. The Harris Interactive study “2012 Connecting and Communicating Online: State of Social Media” sheds light on the differing social behaviors and drivers. -65% of women (vs. 59% of men) stay glued to social because they don’t want to miss anything. -25% of women check social when they wake up, before they check email. Only 18% of men check social before e-mail. -95% of women surveyed belong to Facebook vs. 86% of men. -67% of women log in to Facebook once a day or more vs. 54% of men. -Conventional wisdom is Pinterest is mostly a woman-thing, right? That may be true for viewing, but not true for sharing. Men are actually more likely to share on Pinterest than women, 23% to 10%. -The sharing divide extends to YouTube. 68% of women use it mainly for consumption, as opposed to 52% of men. -Women are as likely to have a Twitter account as men, but they’re much less likely to check it often. 54% of women check it once a week compared to 2/3 of men. Obviously, there are some takeaways from this depending on your target. Women don’t want to miss out on anything, so serialized content might be a good idea, right? Promotional posts that lead to a big payoff could keep them hooked. Posts for women might be better served first thing in the morning. If sharing is your goal, maybe male-targeted content is more likely to get those desired shares. And maybe Twitter is a better place to aim your male-targeted content than Facebook. Some grocery stores started experimenting with male-only aisles. The results have been impressive. Why? Because while it’s true men were finding those same items in the store just fine before, now something has been created just for them. They have a place in the store where they belong. Each brand’s strategy and targets are going to differ. The point is…know who you’re talking to, know how they behave, know what they like, and deliver content using any number of social relationship management targeting tools that meets their expectations. 
If, however, you’re committed to a one-size-fits-all, “our content is for everybody” strategy (or even worse, a “this is what we want to put out and we expect everybody to love it” strategy), your content will miss the mark far more often than it hits. @mikestiles Photo via stock.xchng

    Read the article

  • Solving a probabilistic problem

    - by ????????????
    So I am interested in Computational Investing and came across this problem on a wiki page: Write a program to discover the answer to this puzzle: "Let's say men and women are paid equally (from the same uniform distribution). If women date randomly and marry the first man with a higher salary, what fraction of the population will get married?" I don't have much knowledge of probability theory, so I'm not really sure how to implement this in code. My thinking: Populate two arrays (female, male) with random salary values from a uniform distribution. Randomly pair one female and one male array element and see if the higher-salary condition is met. If it is, increment a counter. Divide the counter by the population and get the percentage. Is this the correct logic? Do women continually date until there are no men left with higher salaries than theirs?
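    One way to sanity-check the logic is a Monte Carlo run. The sketch below follows one reading of the puzzle: each woman dates the remaining single men in random order and marries the first whose salary beats hers, after which he leaves the pool. That interpretation, the population size and the trial count are all assumptions; the single-random-pairing version described in the post would just replace the inner loop with one comparison.

      import java.util.ArrayList;
      import java.util.Collections;
      import java.util.List;
      import java.util.Random;

      // Monte Carlo sketch: every woman dates the remaining single men in random order
      // and marries the first one whose salary is higher than hers.
      public class MarriageSim {
          public static void main(String[] args) {
              int n = 1000, trials = 100;
              Random rng = new Random();
              double marriedFraction = 0.0;

              for (int t = 0; t < trials; t++) {
                  double[] women = new double[n];
                  List<Double> singleMen = new ArrayList<>();
                  for (int i = 0; i < n; i++) {
                      women[i] = rng.nextDouble();      // uniform salaries
                      singleMen.add(rng.nextDouble());
                  }
                  int marriages = 0;
                  for (double herSalary : women) {
                      Collections.shuffle(singleMen, rng);   // she dates in random order
                      for (int j = 0; j < singleMen.size(); j++) {
                          if (singleMen.get(j) > herSalary) {
                              singleMen.remove(j);           // he is off the market
                              marriages++;
                              break;
                          }
                      }
                  }
                  // each marriage removes one woman and one man from the single pool
                  marriedFraction += (2.0 * marriages) / (2.0 * n);
              }
              System.out.println("Fraction married: " + marriedFraction / trials);
          }
      }

    Running it under different interpretations (one date per woman vs. dating until success) is a quick way to see how much the answer depends on how the puzzle's wording is read.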

    Read the article

  • Getting isometric grid coordinates from standard X,Y coordinates

    - by RoryHarvey
    I'm currently trying to add sprites to an isometric Tiled TMX map using Objects in cocos2d. The problem is that the X and Y metadata from the TMX object are in standard 2D format (pixels x, pixels y) instead of isometric grid X and Y format. Usually you would just divide them by the tile size, but isometric needs some sort of transform. For example, on a 64x32 isometric tilemap of size 40 tiles by 40 tiles, an object at (20,21) comes out with coordinates (640,584). So the question really is: what formula gets (20,21) from (640,584)?

    Read the article
