Search Results

Search found 24734 results on 990 pages for 'floating point conversion'.


  • flash core engine by Dinesh [closed]

    - by hdinesh
    This post was a dump of the following code (without the highlights). No question, just a dump. Please update this question with a real question to have it reopened. You (the asker) risk being flagged as a spammer (if not already) and earning a bad reputation. This is a Q&A site, not a site for promoting your own code libraries. package facers { import flash.display.*; import flash.events.*; import flash.geom.ColorTransform; import flash.utils.Dictionary; import org.papervision3d.cameras.*; import org.papervision3d.scenes.*; import org.papervision3d.objects.*; import org.papervision3d.objects.special.*; import org.papervision3d.objects.primitives.*; import org.papervision3d.materials.*; import org.papervision3d.events.FileLoadEvent; import org.papervision3d.materials.special.*; import org.papervision3d.materials.shaders.*; import org.papervision3d.materials.utils.*; import org.papervision3d.lights.*; import org.papervision3d.render.*; import org.papervision3d.view.*; import org.papervision3d.events.InteractiveScene3DEvent; import org.papervision3d.events.*; import org.papervision3d.core.utils.*; import org.papervision3d.core.geom.renderables.Vertex3D; import caurina.transitions.*; public class Main extends Sprite { public var viewport :BasicView; public var displayObject :DisplayObject3D; private var light :PointLight3D; private var shadowPlane :Plane; private var dataArray :Array; private var material :BitmapFileMaterial; private var planeByContainer :Dictionary = new Dictionary(); private var paperSize :Number = 0.5; private var cloudSize :Number = 1500; private var rotSize :Number = 360; private var maxAlbums :Number = 50; private var num :Number = 0; public function Main():void { trace("START APPLICATION"); viewport = new BasicView(1024, 690, true, true, CameraType.FREE); viewport.camera.zoom = 50; viewport.camera.extra = { goPosition: new DisplayObject3D(),goTarget: new DisplayObject3D() }; addChild(viewport); displayObject = new DisplayObject3D(); viewport.scene.addChild(displayObject); createAlbum(); addEventListener(Event.ENTER_FRAME, onRenderEvent); } private function createAlbum() { dataArray = new Array("images/thums/pic1.jpg", "images/thums/pic2.jpg", "images/thums/pic3.jpg", "images/thums/pic4.jpg", "images/thums/pic5.jpg", "images/thums/pic6.jpg", "images/thums/pic7.jpg", "images/thums/pic8.jpg", "images/thums/pic9.jpg", "images/thums/pic10.jpg", "images/thums/pic1.jpg", "images/thums/pic2.jpg", "images/thums/pic3.jpg", "images/thums/pic4.jpg", "images/thums/pic5.jpg", "images/thums/pic6.jpg", "images/thums/pic7.jpg", "images/thums/pic8.jpg", "images/thums/pic9.jpg", "images/thums/pic10.jpg"); for (var i:int = 0; i < dataArray.length; i++) { material = new BitmapFileMaterial(dataArray[i]); material.doubleSided = true; material.addEventListener(FileLoadEvent.LOAD_COMPLETE, loadMaterial); } } public function loadMaterial(event:Event) { var plane:Plane = new Plane(material, 300, 180); displayObject.addChild(plane); var _x:int = Math.random() * cloudSize - cloudSize/2; var _y:int = Math.random() * cloudSize - cloudSize/2; var _z:int = Math.random() * cloudSize - cloudSize/2; var _rotationX:int = Math.random() * rotSize; var _rotationY:int = Math.random() * rotSize; var _rotationZ:int = Math.random() * rotSize; Tweener.addTween(plane, { x:_x, y:_y, z:_z, rotationX:_rotationX, rotationY:_rotationY, rotationZ:_rotationZ, time:5, transition:"easeIn" } ); } protected function onRenderEvent(event:Event):void { var rotY: Number = (mouseY-(stage.stageHeight/2))/(900/2)*(1200); var rotX: Number =
(mouseX-(stage.stageWidth/2))/(600/2)*(-1200); displayObject.rotationY = viewport.camera.x + (rotX - viewport.camera.x) / 50; displayObject.rotationX = viewport.camera.y + (rotY - viewport.camera.y) / 30; viewport.singleRender(); } } } package designLab.events { import flash.display.BlendMode; import flash.display.Sprite; import flash.events.Event; import flash.filters.BlurFilter; // Import designLab import designLab.layer.IntroLayer; import designLab.shadow.ShadowCaster; import designLab.utils.LayerConstant; // Import Papervision3D import org.papervision3d.cameras.*; import org.papervision3d.scenes.*; import org.papervision3d.objects.*; import org.papervision3d.objects.special.*; import org.papervision3d.objects.primitives.*; import org.papervision3d.materials.*; import org.papervision3d.materials.special.*; import org.papervision3d.materials.shaders.*; import org.papervision3d.materials.utils.*; import org.papervision3d.lights.*; import org.papervision3d.render.*; import org.papervision3d.view.*; import org.papervision3d.events.InteractiveScene3DEvent; import org.papervision3d.events.*; import org.papervision3d.core.utils.*; import org.papervision3d.core.geom.renderables.Vertex3D; public class CoreEnging extends Sprite { public var viewport :BasicView; // Create BasicView public var displayObject :DisplayObject3D; // Create DisplayObject public var shadowCaster :ShadowCaster; // Create ShadowCaster private var light :PointLight3D; // Create PointLight private var shadowPlane :Plane; // Create Plane private var layer :LayerConstant; // Create constant resource layer private static var instance :CoreEnging; // Create CoreEnging class static instance // CoreEnging class static instance mathod function public static function getinstance() { if (instance != null) return instance; else { instance = new CoreEnging(); return instance; } } // CoreEnging constrictor public function CoreEnging () { trace("INFO: Design Lab Application : Core Enging v0.1"); layer = new LayerConstant(); viewport = new BasicView(900, 600, true, true, CameraType.FREE); // pass the width, height, scaleToStage, interactive, cameraType to BasicView viewport.camera.zoom = 100; // Define the zoom level of camera addChild(viewport); createFloor(); // Create the floor displayObject = new DisplayObject3D(); // Create new instance of DisplayObject viewport.scene.addChild(displayObject); // Add the DisplayObject to the BasicView light = new PointLight3D(); // Create new instance of PointLight light.z = -50; // Position the Z of create instance light.x = 0; //Position the X of create instance light.rotationZ = 45; //Position the rotation angel of the Z of create instance light.y = 500; //Position the Y of create instance shadowCaster = new ShadowCaster("shadow", 0x000000, BlendMode.MULTIPLY, .1, [new BlurFilter(20, 20, 1)]); // pass shadowcaster name, color, blend mode, alpha and filters shadowCaster.setType(ShadowCaster.SPOTLIGHT); // Define the shadow type addEventListener(Event.ENTER_FRAME, onRenderEvent); // Add frame render event } // Start create floor public function createFloor() { var spr:Sprite = new Sprite(); // Create Sprite spr.graphics.beginFill(0xFFFFFF); // Define the fill color for sprite spr.graphics.drawRect(0, 0, 600, 600); // Define the X, Y, width, height of the sprite var sprMaterial:MovieMaterial = new MovieMaterial(spr, true, true, true); //Create a texture from an existing sprite instance shadowPlane = new Plane(sprMaterial, 2000, 2000, 1, 1); // create new instance of the Plane and pass the texture 
material, width, height, segmentsW and segmentsH shadowPlane.rotationX = 80; //Position the rotation angel of the X of Plane shadowPlane.y = -200; //Position the Y of Plane viewport.scene.addChild(shadowPlane); // Add the Plane to the BasicView } // switch method function of the page layer control public function addLayer(type:String) { switch (type) { case layer.INTRO: var intro:IntroLayer = new IntroLayer(); break; } } // Create get mathod function for DisplayObject public function getDisplayObject():DisplayObject3D { return displayObject; } // Create get mathod function for BasicView public function getViewport():BasicView { return viewport; } // Rendering function protected function onRenderEvent(event:Event):void { var rotY: Number = (mouseY-(stage.stageHeight/2))/(900/2)*(1200); var rotX: Number = (mouseX-(stage.stageWidth/2))/(600/2)*(-1200); displayObject.rotationY = viewport.camera.x + (rotX - viewport.camera.x) / 50; displayObject.rotationX = viewport.camera.y + (rotY - viewport.camera.y) / 30; // Remove the shadow shadowCaster.invalidate(); // create new shadow on DisplayObject move shadowCaster.castModel(displayObject, light, shadowPlane); viewport.singleRender(); } } } package designLab.layer { import flash.display.Sprite; import flash.events.Event; // Import designLab import designLab.materials.iBusinessCard; import designLab.events.CoreEnging; // Import Papervision3D import org.papervision3d.objects.primitives.Cube; import org.papervision3d.materials.ColorMaterial; import org.papervision3d.materials.MovieMaterial; public class IntroLayer { // IntroLayer constrictor public function IntroLayer() { trace("INFO: Load Intro layer"); var indexDP:DP_index = new DP_index(); //Create the library MovieClip var blackMaterial:MovieMaterial = new MovieMaterial(indexDP, true); //Create a texture from an existing library MovieClip instance blackMaterial.smooth = true; blackMaterial.doubleSided = false; var mycolor:ColorMaterial = new ColorMaterial(0x000000); //Create solid color material var mycard:iBusinessCard = new iBusinessCard(blackMaterial, blackMaterial, mycolor, 372, 10, 207); // Create custom 3D cube object to pass the Front, Back, All, CubeWidth, CubeDepth and CubeHeight CoreEnging.getinstance().getDisplayObject().addChild(mycard.create3DCube()); // Add the custom 3D cube to the DisplayObject } } } package designLab.materials { import flash.display.*; import flash.events.*; // Import Papervision3D import org.papervision3d.materials.*; import org.papervision3d.materials.utils.MaterialsList; import org.papervision3d.objects.primitives.Cube; public class iBusinessCard extends Sprite { private var materialsList :MaterialsList; private var cube :Cube; private var Front :MovieMaterial = new MovieMaterial(); private var Back :MovieMaterial = new MovieMaterial(); private var All :ColorMaterial = new ColorMaterial(); private var CubeWidth :Number; private var CubeDepth :Number; private var CubeHeight :Number; public function iBusinessCard(Front:MovieMaterial, Back:MovieMaterial, All:ColorMaterial, CubeWidth:Number, CubeDepth:Number, CubeHeight:Number) { setFront(Front); setBack(Back); setAll(All); setCubeWidth(CubeWidth); setCubeDepth(CubeDepth); setCubeHeight(CubeHeight); } public function create3DCube():Cube { materialsList = new MaterialsList(); materialsList.addMaterial(Front, "front"); materialsList.addMaterial(Back, "back"); materialsList.addMaterial(All, "left"); materialsList.addMaterial(All, "right"); materialsList.addMaterial(All, "top"); materialsList.addMaterial(All, 
"bottom"); cube = new Cube(materialsList, CubeWidth, CubeDepth, CubeHeight); cube.x = 0; cube.y = 0; cube.z = 0; cube.rotationY = 180; return cube; } public function setFront(Front:MovieMaterial) { this.Front = Front; } public function getFront():MovieMaterial { return Front; } public function setBack(Back:MovieMaterial) { this.Back = Back; } public function getBack():MovieMaterial { return Back; } public function setAll(All:ColorMaterial) { this.All = All; } public function getAll():ColorMaterial { return All; } public function setCubeWidth(CubeWidth:Number) { this.CubeWidth = CubeWidth; } public function getCubeWidth():Number { return CubeWidth; } public function setCubeDepth(CubeDepth:Number) { this.CubeDepth = CubeDepth; } public function getCubeDepth():Number { return CubeDepth; } public function setCubeHeight(CubeHeight:Number) { this.CubeHeight = CubeHeight; } public function getCubeHeight():Number { return CubeHeight; } } } package designLab.shadow { import flash.display.Sprite; import flash.filters.BlurFilter; import flash.geom.Point; import flash.geom.Rectangle; import flash.utils.Dictionary; import org.papervision3d.core.geom.TriangleMesh3D; import org.papervision3d.core.geom.renderables.Triangle3D; import org.papervision3d.core.geom.renderables.Vertex3D; import org.papervision3d.core.math.BoundingSphere; import org.papervision3d.core.math.Matrix3D; import org.papervision3d.core.math.Number3D; import org.papervision3d.core.math.Plane3D; import org.papervision3d.lights.PointLight3D; import org.papervision3d.materials.MovieMaterial; import org.papervision3d.objects.DisplayObject3D; import org.papervision3d.objects.primitives.Plane; public class ShadowCaster { private var vertexRefs:Dictionary; private var numberRefs:Dictionary; private var lightRay:Number3D = new Number3D() private var p3d:Plane3D = new Plane3D(); public var color:uint = 0; public var alpha:Number = 0; public var blend:String = ""; public var filters:Array; public var uid:String; private var _type:String = "point"; private var dir:Number3D; private var planeBounds:Dictionary; private var targetBounds:Dictionary; private var models:Dictionary; public static var DIRECTIONAL:String = "dir"; public static var SPOTLIGHT:String = "spot"; public function ShadowCaster(uid:String, color:uint = 0, blend:String = "multiply", alpha:Number = 1, filters:Array=null) { this.uid = uid; this.color = color; this.alpha = alpha; this.blend = blend; this.filters = filters ? 
filters : [new BlurFilter()]; numberRefs = new Dictionary(true); targetBounds = new Dictionary(true); planeBounds = new Dictionary(true); models = new Dictionary(true); } public function castModel(model:DisplayObject3D, light:PointLight3D, plane:Plane, faces:Boolean = true, cull:Boolean = false):void{ var ar:Array; if(models[model]) { ar = models[model]; }else{ ar = new Array(); getChildMesh(model, ar); models[model] = ar; } var reset:Boolean = true; for each(var t:TriangleMesh3D in ar){ if(faces) castFaces(light, t, plane, cull, reset); else castBoundingSphere(light, t, plane, 0.75, reset); reset = false; } } private function getChildMesh(do3d:DisplayObject3D, ar):void{ if(do3d is TriangleMesh3D) ar.push(do3d); for each(var d:DisplayObject3D in do3d.children) getChildMesh(d, ar); } public function setType(type:String="point"):void{ _type = type; } public function getType():String{ return _type; } public function castBoundingSphere(light:PointLight3D, target:TriangleMesh3D, plane:Plane, scaleRadius:Number=0.8, clear:Boolean = true):void{ var planeVertices:Array = plane.geometry.vertices; //convert to target space? var world:Matrix3D = plane.world; var inv:Matrix3D = Matrix3D.inverse(plane.transform); var lp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(inv, lp); p3d.setNormalAndPoint(plane.geometry.faces[0].faceNormal, new Number3D()); var b:BoundingSphere = target.geometry.boundingSphere; var bounds:Object = planeBounds[plane]; if(!bounds){ bounds = plane.boundingBox(); planeBounds[plane] = bounds; } var tbounds:Object = targetBounds[target]; if(!tbounds){ tbounds = target.boundingBox(); targetBounds[target] = tbounds; } var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); var castClip:Sprite = getCastClip(plane); castClip.blendMode = this.blend; castClip.filters = this.filters; castClip.alpha = this.alpha; if(clear) castClip.graphics.clear(); vertexRefs = new Dictionary(true); var tlp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(Matrix3D.inverse(target.world), tlp); var center:Number3D = new Number3D(tbounds.min.x+tbounds.size.x*0.5, tbounds.min.y+tbounds.size.y*0.5, tbounds.min.z+tbounds.size.z*0.5); var dif:Number3D = Number3D.sub(lp, center); dif.normalize(); var other:Number3D = new Number3D(); other.x = -dif.y; other.y = dif.x; other.z = 0; other.normalize(); var cross:Number3D = Number3D.cross(new Number3D(plane.transform.n12, plane.transform.n22, plane.transform.n32), p3d.normal); cross.normalize(); //cross = new Number3D(-dif.y, dif.x, 0); //cross.normalize(); cross.multiplyEq(b.radius*scaleRadius); if(_type == DIRECTIONAL){ var oPos:Number3D = new Number3D(target.x, target.y, target.z); Matrix3D.multiplyVector(target.world, oPos); Matrix3D.multiplyVector(inv, oPos); dir = new Number3D(oPos.x-lp.x, oPos.y-lp.y, oPos.z-lp.z); } //numberRefs = new Dictionary(true); var pos:Number3D; var c2d:Point; var r2d:Point; //_type = SPOTLIGHT; pos = projectVertex(new Vertex3D(center.x, center.y, center.z), lp, inv, target.world); c2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); pos = projectVertex(new Vertex3D(center.x+cross.x, center.y+cross.y, center.z+cross.z), lp, inv, target.world); r2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); var dx:Number = r2d.x-c2d.x; var dy:Number = r2d.y-c2d.y; var rad:Number = Math.sqrt(dx*dx+dy*dy); castClip.graphics.beginFill(color); castClip.graphics.moveTo(c2d.x, c2d.y); 
castClip.graphics.drawCircle(c2d.x, c2d.y, rad); castClip.graphics.endFill(); } public function getCastClip(plane:Plane):Sprite{ var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); var castClip:Sprite;// = new Sprite(); if(planeMovie.getChildByName("castClip"+uid)) return Sprite(planeMovie.getChildByName("castClip"+uid)); else{ castClip = new Sprite(); castClip.name = "castClip"+uid; castClip.scrollRect = new Rectangle(0, 0, movieSize.x, movieSize.y); //castClip.alpha = 0.4; planeMovie.addChild(castClip); return castClip; } } public function castFaces(light:PointLight3D, target:TriangleMesh3D, plane:Plane, cull:Boolean=false, clear:Boolean = true):void{ var planeVertices:Array = plane.geometry.vertices; //convert to target space? var world:Matrix3D = plane.world; var inv:Matrix3D = Matrix3D.inverse(plane.transform); var lp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(inv, lp); var tlp:Number3D; if(cull){ tlp = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(Matrix3D.inverse(target.world), tlp); } //Matrix3D.multiplyVector(Matrix3D.inverse(target.transform), tlp); //p3d.setThreePoints(planeVertices[0].getPosition(), planeVertices[1].getPosition(), planeVertices[2].getPosition()); p3d.setNormalAndPoint(plane.geometry.faces[0].faceNormal, new Number3D()); if(_type == DIRECTIONAL){ var oPos:Number3D = new Number3D(target.x, target.y, target.z); Matrix3D.multiplyVector(target.world, oPos); Matrix3D.multiplyVector(inv, oPos); dir = new Number3D(oPos.x-lp.x, oPos.y-lp.y, oPos.z-lp.z); } var bounds:Object = planeBounds[plane]; if(!bounds){ bounds = plane.boundingBox(); planeBounds[plane] = bounds; } var castClip:Sprite = getCastClip(plane); castClip.blendMode = this.blend; castClip.filters = this.filters; castClip.alpha = this.alpha; var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); if(clear) castClip.graphics.clear(); vertexRefs = new Dictionary(true); //numberRefs = new Dictionary(true); var pos:Number3D; var p2d:Point; var s2d:Point; var hitVert:Number3D = new Number3D(); for each(var t:Triangle3D in target.geometry.faces){ if( cull){ hitVert.x = t.v0.x; hitVert.y = t.v0.y; hitVert.z = t.v0.z; if(Number3D.dot(t.faceNormal, Number3D.sub(tlp, hitVert)) <= 0) continue; } castClip.graphics.beginFill(color); pos = projectVertex(t.v0, lp, inv, target.world); s2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.moveTo(s2d.x, s2d.y); pos = projectVertex(t.v1, lp, inv, target.world); p2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.lineTo(p2d.x, p2d.y); pos = projectVertex(t.v2, lp, inv, target.world); p2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.lineTo(p2d.x, p2d.y); castClip.graphics.lineTo(s2d.x, s2d.y); castClip.graphics.endFill(); } } public function invalidate():void{ invalidateModels(); invalidatePlanes(); } public function invalidatePlanes():void{ planeBounds = new Dictionary(true); } public function invalidateTargets():void{ numberRefs = new Dictionary(true); targetBounds = new Dictionary(true); } public function invalidateModels():void{ models = new Dictionary(true); invalidateTargets(); } private function get2dPoint(pos3D:Number3D, min3D:Number3D, size3D:Number3D, movieSize:Point):Point{ return new Point((pos3D.x-min3D.x)/size3D.x*movieSize.x, ((-pos3D.y-min3D.y)/size3D.y*movieSize.y)); } 
    private function projectVertex(v:Vertex3D, light:Number3D, invMat:Matrix3D, world:Matrix3D):Number3D{ var pos:Number3D = vertexRefs[v]; if(pos) return pos; var n:Number3D = numberRefs[v]; if(!n){ n = new Number3D(v.x, v.y, v.z); Matrix3D.multiplyVector(world, n); Matrix3D.multiplyVector(invMat, n); numberRefs[v] = n; } if(_type == SPOTLIGHT){ lightRay.x = light.x; lightRay.y = light.y; lightRay.z = light.z; }else{ lightRay.x = n.x-dir.x; lightRay.y = n.y-dir.y; lightRay.z = n.z-dir.z; } pos = p3d.getIntersectionLineNumbers(lightRay, n); vertexRefs[v] = pos; return pos; } } } package designLab.utils { public class LayerConstant { public const INTRO:String = "INTRO"; // Intro layer string constant } }

    Read the article

  • Triangle Line-Segment Intersection - detecting near misses

    - by Will
    A ray is a very poor approximation of a player! I think approximating a player with a sphere traveling a straight line each game tick will solve my problems of the player intersecting edges of scenery because their line segment missed it yet their own model is not infinitely thin... I have a 3D triangle and a line segment. I have the normal triangle-line-segment intersection code, which I admit I have only a woolly grasp of. To model movement and compute collisions of the player I have to determine if a line passes within sphere-radius of a triangle. But I can find no convenient line near-miss intersection code! Here's the classic triangle intersection ### commented ### code with my starting assumptions: function triangle_ray_intersection(a,b,c,ray_origin,ray_dir,ray_radius) { // http://softsurfer.com/Archive/algorithm_0105/algorithm_0105.htm#intersect_RayTriangle%28%29 // get triangle edge vectors and plane normal var u = vec3_sub(b,a); var v = vec3_sub(c,a); var n = vec3_cross(u,v); if(n[0]==0 && n[1]==0 && n[2]==0) return null; // triangle is degenerate var w0 = vec3_sub(ray_origin,a); var j = vec3_dot(n,ray_dir); if(Math.abs(j) < 0.00000001) { //### if parallel, might still pass within ray_radius of it return null; // parallel, disjoint or on plane } var i = -vec3_dot(n,w0); // get intersect point of ray with triangle plane var k = i / j; if(k < 0.0) return null; // ray goes away from triangle //### as it's a line segment, k > 1+ray_radius means no intersect var hit = vec3_add(ray_origin,vec3_scale(ray_dir,k)); // intersect point of ray and plane // is I inside T? //### here I'm a bit lost; this is presumably computing barycentric coordinates? var uu = vec3_dot(u,u); var uv = vec3_dot(u,v); var vv = vec3_dot(v,v); var w = vec3_sub(hit,a); var wu = vec3_dot(w,u); var wv = vec3_dot(w,v); var D = uv * uv - uu * vv; var s = (uv * wv - vv * wu) / D; //### therefore, compute if it's within ray_radius scaled to the 0..1 of barycentric coordinates? if(s<0.0 || s>1.0) return null; // I is outside T var t = (uv * wu - uu * wv) / D; if(t<0.0 || (s+t)>1.0) return null; // I is outside T //### finally, if it passes a barycentric test it might still be too far //### to a point; must check that its distance from a corner is within ray_radius too if more than one barycentric coord is >1 //### so we have rounded corners... return [hit,n]; // I is in T } Given the distance between the point of plane intersection and each corner, I ought to be able to determine, at world scale, how far beyond the edge - beyond 1.0 in barycentric coordinates for each axis - that point is... At this point my head explodes! Is this the right track? What's the actual code? UPDATE: you can earn 100 pts on SO if you answer this question there...! How can you determine if a line segment passes within some distance of a triangle?
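
    A common way to get these "rounded corners" without patching the barycentric test case by case is to compute the closest point on the triangle to the sphere's centre and compare that distance with ray_radius. Below is a C++ sketch of that approach, following the Voronoi-region formulation from Ericson's Real-Time Collision Detection; the Vec3 type and helper functions are illustrative stand-ins for the question's vec3_* helpers, not code from the original post.

    ```cpp
    struct Vec3 { double x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)     { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Closest point on triangle (a,b,c) to point p, by classifying p against
    // the triangle's vertex, edge, and face Voronoi regions.
    Vec3 closest_point_on_triangle(Vec3 p, Vec3 a, Vec3 b, Vec3 c) {
        Vec3 ab = sub(b, a), ac = sub(c, a), ap = sub(p, a);
        double d1 = dot(ab, ap), d2 = dot(ac, ap);
        if (d1 <= 0 && d2 <= 0) return a;                        // vertex region a

        Vec3 bp = sub(p, b);
        double d3 = dot(ab, bp), d4 = dot(ac, bp);
        if (d3 >= 0 && d4 <= d3) return b;                       // vertex region b

        double vc = d1 * d4 - d3 * d2;
        if (vc <= 0 && d1 >= 0 && d3 <= 0)                       // edge region ab
            return add(a, scale(ab, d1 / (d1 - d3)));

        Vec3 cp = sub(p, c);
        double d5 = dot(ab, cp), d6 = dot(ac, cp);
        if (d6 >= 0 && d5 <= d6) return c;                       // vertex region c

        double vb = d5 * d2 - d1 * d6;
        if (vb <= 0 && d2 >= 0 && d6 <= 0)                       // edge region ac
            return add(a, scale(ac, d2 / (d2 - d6)));

        double va = d3 * d6 - d5 * d4;
        if (va <= 0 && (d4 - d3) >= 0 && (d5 - d6) >= 0)         // edge region bc
            return add(b, scale(sub(c, b), (d4 - d3) / ((d4 - d3) + (d5 - d6))));

        double denom = 1.0 / (va + vb + vc);                     // inside the face
        return add(a, add(scale(ab, vb * denom), scale(ac, vc * denom)));
    }

    // Rounded corners for free: the sphere touches the triangle iff the
    // closest triangle point lies within ray_radius of the sphere centre.
    bool sphere_near_triangle(Vec3 centre, double radius, Vec3 a, Vec3 b, Vec3 c) {
        Vec3 d = sub(centre, closest_point_on_triangle(centre, a, b, c));
        return dot(d, d) <= radius * radius;
    }
    ```

    For a moving sphere, this test can be applied at the segment's plane-intersection point as in the question's code, or, more robustly, run against the closest point between the whole segment and the triangle.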

    Read the article

  • apcupsd on Linux does not report on APC BackUPS Pro 900

    - by lserni
    From what documentation I could find, the UPS should be (is!) supported by Linux and ought to work with apcupsd. I looked for specific problems such as the infamous Microlink protocol, and found none. I have found feedback from a guy in the UK who reports using this very model on a not-too-different OS version (his OpenSuSE 12.1, mine 12.3 x86_64). The USB port is detected, lsusb reports Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply and lsusb -v -s002:003 confirms and expands: Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x051d American Power Conversion idProduct 0x0002 Uninterruptible Power Supply bcdDevice 0.90 iManufacturer 1 American Power Conversion iProduct 2 Back-UPS RS 900G FW:879.L4 .I USB FW:L4 bNumConfigurations 1 Configuration Descriptor: [...] Interface Descriptor: [...] bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.00 bCountryCode 33 US bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 1134 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 100 Device Status: 0x0000 (Bus Powered) The kernel recognizes this and duly sets up crw------- 1 root root 180, 96 Nov 4 16:11 /dev/usb/hiddev0 As far as I know, everything is as it should be. I have put the standard configuration in /etc/apcupsd/apcupsd.conf (which is Unix-terminated, ASCII-only, no BOM (just in case)) UPSCABLE usb UPSTYPE usb DEVICE (I have also tried commenting out DEVICE; setting a device of /dev/puppa results in an access attempt to /dev/puppa, not some /var/lib/dev/puppa or /dev/puppa\r\n). Yet, what apcaccess tells me is VERSION : 3.14.10 (13 September 2011) suse CABLE : USB Cable DRIVER : USB UPS Driver UPSMODE : Stand Alone STARTTIME: 2013-11-04 16:24:22 +0100 MODEL : STATUS : NOBATT LINEV : 000.0 Volts LOADPCT : 0.0 Percent Load Capacity BCHARGE : 000.0 Percent TIMELEFT : 0.0 Minutes MBATTCHG : 5 Percent MINTIMEL : 3 Minutes MAXTIME : 0 Seconds SENSE : Low LOTRANS : 000.0 Volts HITRANS : 000.0 Volts It doesn't recognize the model, and reports no battery (and no voltage). This confirms that it's not the Microlink problem, or it would report the battery status, if precious little else. If I disconnect the USB cable, I get an apcupsd message to the effect that communications have been lost; and I get the "communication restored" broadcast too, if I reconnect the cable. apcupsd is monitoring. So everything tells me that it should work -- only it doesn't. Does anyone spot what I'm missing?

    Read the article

  • Issues with glVertexAttribPointer's last 2 parameters?

    - by NoobScratcher
    Introduction: Hello, I will start out by explaining my setup, showing samples as I go along. I'm using these tools: OpenGL 3.3, GLSL 330, C++. Problem: When I render the Wavefront .obj 3D model, I get a very weird visual glitch. The model was supposed to be a square, but instead it's a triangulated mess, with many of the vertices stretched toward the bottom-left side of the frustum... Explanation: I'm using std::vectors to store my Wavefront .obj model data, using sscanf to read the floating-point values into the structure members x, y, z and store them into the Point structure variable p; int index = IndexAssigner(1, 1); ifstream file (list[index].c_str() ); points.push_back(Point()); Point p; int face[4]; while (!file.eof() ) { char modelbuffer[10000]; file.getline(modelbuffer, 10000); switch(modelbuffer[0]) { case 'v' : sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z); points.push_back(p); break; case 'f': sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 ); faces.push_back(face[0]); faces.push_back(face[1]); faces.push_back(face[2]); faces.push_back(face[3]); } //Turn on FileReader aka "RENDER CODE" FileReader = true; } Then I render the points vector, using the .data() member of std::vector, to the frustum. Other declarations: int numfloats = 4; float* point=reinterpret_cast<float*>(&points[0]); int num_bytes=numfloats*sizeof(float); Vector declarations: struct Point {float x, y , z; }; std::vector<int>faces; std::vector<Point>points; Render code: glGenBuffers(1, &vertexbuffer); glGenTextures(1, &ModelTexture); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBindTexture(GL_TEXTURE_3D, ModelTexture); glTexImage2D(GL_TEXTURE_2D, 0,GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW); glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); glEnableVertexAttribArray(3); //Translation Process GLfloat TranslationMatrix[] = { 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0 }; //Send Translation Matrix up to the vertex shader glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix); glDrawElements( GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data()); I tried looking at what was causing this, went through every function and every parameter, etc., and looked at the man pages. Then I found out that it could be my glVertexAttribPointer. Here is the man page for glVertexAttribPointer: http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml The last 2 parameters are my problem. How do I write those last 2 parameters - do I try putting the data from points into them? glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); How does it work with vectors? Is it fast? If you cannot be bothered to look at the man pages, here is the relevant text from them directly. Stride: Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0. Pointer: Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0. If you want my full source: http://ideone.com/fPfkg Thanks again if you do read this.
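
    For what those last two parameters mean in this setup: their interpretation changes once a buffer object is bound to GL_ARRAY_BUFFER, which the render code above does. A hedged sketch of how the call would typically be written in that case follows; the attribute index 3 and Point layout are carried over from the question, everything else is illustrative rather than a definitive fix for the whole program.

    ```cpp
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);

    // Bug in the original: sizeof(points) is the size of the std::vector
    // object itself, not of its contents. The byte count must be element
    // count times element size.
    glBufferData(GL_ARRAY_BUFFER,
                 points.size() * sizeof(Point),
                 points.data(),
                 GL_STATIC_DRAW);

    // With a buffer bound to GL_ARRAY_BUFFER, the last parameter is a byte
    // offset into that buffer, not a client-memory pointer such as
    // points.data(). Stride is the byte distance from one vertex to the
    // next: sizeof(Point) here (12 bytes), or 0 for tightly packed data.
    glVertexAttribPointer(3,                 // attribute location in the shader
                          3,                 // components per vertex: x, y, z
                          GL_FLOAT,
                          GL_FALSE,
                          sizeof(Point),     // stride in bytes
                          (const void*)0);   // offset into the bound VBO
    glEnableVertexAttribArray(3);
    ```

    Note also that GL_QUADS is not available in an OpenGL 3.3 core profile, so the glDrawElements call may need to be fed triangles regardless of how the attribute pointer is fixed.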

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have this projector view and projection matrix: Matrix projection = Matrix.CreateOrthographicOffCenter(-halfWidth * Scale, halfWidth * Scale, -halfHeight * Scale, halfHeight * Scale, 1, 100000); Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up); where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position, and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader: float4x4 InvViewProjection; texture2D DepthTexture; texture2D NormalTexture; texture2D ProjectorTexture; float4x4 ProjectorViewProjection; sampler2D depthSampler = sampler_state { texture = <DepthTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D normalSampler = sampler_state { texture = <NormalTexture>; minfilter = point; magfilter = point; mipfilter = point; }; sampler2D projectorSampler = sampler_state { texture = <ProjectorTexture>; AddressU = Clamp; AddressV = Clamp; }; float viewportWidth; float viewportHeight; // Calculate the 2D screen position of a 3D position float2 postProjToScreen(float4 position) { float2 screenPos = position.xy / position.w; return 0.5f * (float2(screenPos.x, -screenPos.y) + 1); } // Calculate the size of one half of a pixel, to convert // between texels and pixels float2 halfPixel() { return 0.5f / float2(viewportWidth, viewportHeight); } struct VertexShaderInput { float4 Position : POSITION0; }; struct VertexShaderOutput { float4 Position :POSITION0; float4 PositionCopy : TEXCOORD1; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = input.Position; output.PositionCopy=output.Position; return output; } float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0 { float2 texCoord =postProjToScreen(input.PositionCopy) + halfPixel(); // Extract the depth for this pixel from the depth map float4 depth = tex2D(depthSampler, texCoord); //return float4(depth.r,0,0,1); // Recreate the position with the UV coordinates and depth value float4 position; position.x = texCoord.x * 2 - 1; position.y = (1 - texCoord.y) * 2 - 1; position.z = depth.r; position.w = 1.0f; // Transform position from screen space to world space position = mul(position, InvViewProjection); position.xyz /= position.w; //compute projection float3 projection=tex2D(projectorSampler,postProjToScreen(mul(position,ProjectorViewProjection)) + halfPixel()); return float4(projection,1); } In the first part of the pixel shader, the position is recovered from the G-buffer (I use this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: The green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for any advice, and sorry for my English. EDIT: The shader above is working, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • WPF / C#: Transforming coordinates from an image control to the image source

    - by Gabriel
    I'm trying to learn WPF, so here's a simple question, I hope: I have a window that contains an Image element bound to a separate data object with user-configurable Stretch property <Image Name="imageCtrl" Source="{Binding MyImage}" Stretch="{Binding ImageStretch}" /> When the user moves the mouse over the image, I would like to determine the coordinates of the mouse with respect to the original image (before stretching/cropping that occurs when it is displayed in the control), and then do something with those coordinates (update the image). I know I can add an event-handler to the MouseMove event over the Image control, but I'm not sure how best to transform the coordinates: void imageCtrl_MouseMove(object sender, MouseEventArgs e) { Point locationInControl = e.GetPosition(imageCtrl); Point locationInImage = ??? updateImage(locationInImage); } Now I know I could compare the size of Source to the ActualSize of the control, and then switch on imageCtrl.Stretch to compute the scalars and offsets on X and Y, and do the transform myself. But WPF has all the information already, and this seems like functionality that might be built-in to the WPF libraries somewhere. So I'm wondering: is there a short and sweet solution? Or do I need to write this myself? EDIT I'm appending my current, not-so-short-and-sweet solution. It's not that bad, but I'd be somewhat surprised if WPF didn't provide this functionality automatically: Point ImgControlCoordsToPixelCoords(Point locInCtrl, double imgCtrlActualWidth, double imgCtrlActualHeight) { if (ImageStretch == Stretch.None) return locInCtrl; Size renderSize = new Size(imgCtrlActualWidth, imgCtrlActualHeight); Size sourceSize = bitmap.Size; double xZoom = renderSize.Width / sourceSize.Width; double yZoom = renderSize.Height / sourceSize.Height; if (ImageStretch == Stretch.Fill) return new Point(locInCtrl.X / xZoom, locInCtrl.Y / yZoom); double zoom; if (ImageStretch == Stretch.Uniform) zoom = Math.Min(xZoom, yZoom); else // (imageCtrl.Stretch == Stretch.UniformToFill) zoom = Math.Max(xZoom, yZoom); return new Point(locInCtrl.X / zoom, locInCtrl.Y / zoom); }

    Read the article

  • Can't create more than one overlay in Seadragon

    - by XGreen
    Hi everyone, I am trying to add overlays to a Seadragon map I am making, but for some reason that I cannot figure out, Seadragon ignores all my overlays except the first one. Any help with this is much appreciated. var viewer = null; function init() { Seadragon.Config.autoHideControls = false; viewer = new Seadragon.Viewer("container"); viewer.addEventListener("open", addOverlays); viewer.addControl(makeControl(), Seadragon.ControlAnchor.TOP_RIGHT); $(viewer.getNavControl()).parent().parent().css({ 'top': 10, 'right': 10 }); viewer.openDzi("_assets/Mapdata/dzc_output.xml"); } function makeControl() { var control = document.createElement("a"); var controlText = document.createTextNode(""); control.href = "#"; // so browser shows it as link control.className = "control"; control.appendChild(controlText); Seadragon.Utils.addEvent(control, "click", onControlClick); return control; } function onControlClick(event) { Seadragon.Utils.cancelEvent(event); // don't process link if (!viewer.isOpen()) { return; } // These are the coordinates of europe on this map var x = 0.5398693914203284; var y = 0.21155952391206562; var z = 5; viewer.viewport.panTo(new Seadragon.Point(x, y)); viewer.viewport.zoomTo(z); viewer.viewport.ensureVisible(); } function addOverlays(viewer) { drawer = viewer.drawer; var img = document.createElement("img"); img.src = "_assets/Images/pushpin.png"; $(img).addClass('pushPin'); var overlays = [ { elmt: img, point: new Seadragon.Point(0.51, 0.22) }, { elmt: img, point: new Seadragon.Point(0.20, 0.13) } ]; for (var i = 0; i < overlays.length; i++) { drawer.addOverlay(overlays[i].elmt, overlays[i].point); } } Seadragon.Utils.addEvent(window, "load", init);

    Read the article

  • Echo styles into the mashup of a google map and wordpress custom fields

    - by zac
    Using WordPress, I am pulling in custom fields from specific posts to fill in the content for a Google-generated map. I am using this code var point = new GLatLng(48.5139,-123.150531); var marker = createMarker(point,"Lime Kiln State Park", '<?php $post_id = 182; $my_post = get_post($post_id); $title = $my_post->post_title; $snip = get_post_meta($post_id, 'mapExcerpt', true); echo $title; echo $snip; ?>') map.addOverlay(marker); I am trying to echo CSS style blocks, but this causes a javascript error var point = new GLatLng(48.5139,-123.150531); var marker = createMarker(point,"Lime Kiln State Park", '<?php $post_id = 182; $my_post = get_post($post_id); $title = $my_post->post_title; $snip = get_post_meta($post_id, 'mapExcerpt', true); echo "<div class='theTitle'>"; echo $title; echo "</div>"; echo $snip; ?>') map.addOverlay(marker); I get the error missing ) after argument list and the output is var point = new GLatLng(48.5139,-123.150531); var marker = createMarker(point,"Lime Kiln State Park", '<div class='theTitle'>Site Title</div>Site excerpt') map.addOverlay(marker); Can someone please show me a more elegant (working) solution for this?

    Read the article

  • Dependency Property on ValueConverter

    - by spoon16
    I'm trying to initialize a converter in the Resources section of my UserControl with a reference to one of the objects in my control. When I try to run the application I get an XAML parse exception. XAML: <UserControl.Resources> <converter:PointConverter x:Key="pointConverter" Map="{Binding ElementName=ThingMap}" /> </UserControl.Resources> <Grid> <m:Map x:Name="ThingMap" /> </Grid> Point Converter Class: public class PointConverter : DependencyObject, IValueConverter { public Microsoft.Maps.MapControl.Map Map { get { return (Microsoft.Maps.MapControl.Map)GetValue(MapProperty); } set { SetValue(MapProperty, value); } } // Using a DependencyProperty as the backing store for Map. This enables animation, styling, binding, etc... public static readonly DependencyProperty MapProperty = DependencyProperty.Register("Map", typeof(Microsoft.Maps.MapControl.Map), typeof(PointConverter), null); public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { string param = (string)parameter; Microsoft.Maps.MapControl.Location location = value as Microsoft.Maps.MapControl.Location; if (location != null) { Point point = Map.LocationToViewportPoint(location); if (string.Compare(param.ToUpper(), "X") == 0) return point.X; else if (string.Compare(param.ToUpper(), "Y") == 0) return point.Y; return point; } return null; } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } }

    Read the article

  • JList - deselect when clicking an already selected item

    - by Peter
    If a selected index on a JList is clicked, I want it to de-select. In other words, clicking on the indices actually toggles their selection. Didn't look like this was supported, so I tried list.addMouseListener(new MouseAdapter() { public void mousePressed(MouseEvent evt) { java.awt.Point point = evt.getPoint(); int index = list.locationToIndex(point); if (list.isSelectedIndex(index)) list.removeSelectionInterval(index, index); } }); The problem here is that this is being invoked after JList has already acted on the mouse event, so it deselects everything. So then I tried removing all of JList's MouseListeners, adding my own, and then adding all of the default listeners back. That didn't work, since JList would reselect the index after I had deselected it. Anyway, what I eventually came up with is MouseListener[] mls = list.getMouseListeners(); for (MouseListener ml : mls) list.removeMouseListener(ml); list.addMouseListener(new MouseAdapter() { public void mousePressed(MouseEvent evt) { java.awt.Point point = evt.getPoint(); final int index = list.locationToIndex(point); if (list.isSelectedIndex(index)) SwingUtilities.invokeLater(new Runnable() { public void run() { list.removeSelectionInterval(index, index); } }); } }); for (MouseListener ml : mls) list.addMouseListener(ml); ... and that works. But I don't like it. Is there a better way?

    Read the article

  • correct fisheye distortion

    - by Will
    I have some points that describe positions in a picture taken with a fisheye lens. I've found this description of how to generate a fisheye effect, but not how to reverse it. How do you calculate the radial distance from the centre to go from fisheye to rectilinear? My function stub looks like this: Point correct_fisheye(const Point& p,const Size& img) { // to polar const Point centre = {img.width/2,img.height/2}; const Point rel = {p.x-centre.x,p.y-centre.y}; const double theta = atan2(rel.y,rel.x); double R = sqrt((rel.x*rel.x)+(rel.y*rel.y)); // fisheye undistortion in here please //... change R ... // back to rectangular const Point ret = Point(centre.x+R*cos(theta),centre.y+R*sin(theta)); fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",p.x,p.y,img.width,img.height,theta,R,ret.x,ret.y); return ret; }
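
    The "change R" step depends on which projection model the lens follows, which the question leaves open. As an illustrative sketch only: under the common equidistant model, the fisheye radius is R = f·θ for focal length f in pixels, and the rectilinear radius of the same ray is f·tan(θ). The focal length f is an assumed calibration input; it does not appear in the original question.

    ```cpp
    #include <cmath>

    // Equidistant-model sketch: recover the ray angle theta from the
    // fisheye radius, then re-project it rectilinearly.
    double undistort_radius(double R, double f) {
        double theta = R / f;        // angle of the incoming ray from the optical axis
        return f * std::tan(theta);  // radius of the same ray in a rectilinear image
    }
    ```

    Other common models invert the same way: recover θ from R using the model's own formula (equisolid R = 2f·sin(θ/2), stereographic R = 2f·tan(θ/2)), then map back through f·tan(θ).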

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using JSONpickle, using the following method, to serialize a dictionary describing all the data points to file: def json_serialize(obj, filename, use_jsonpickle=True): f = open(filename, 'w') if use_jsonpickle: import jsonpickle json_obj = jsonpickle.encode(obj) f.write(json_obj) else: simplejson.dump(obj, f, indent=1) f.close() The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The json file is ~20 Megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size, and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speed ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files). thanks.

    Read the article

  • Setting up DrJava to work through Friedman / Felleisen "A Little Java"

    - by JDelage
    All, I'm going through the Friedman & Felleisen book "A Little Java, A Few Patterns". I'm trying to type the examples in DrJava, but I'm getting some errors. I'm a beginner, so I might be making rookie mistakes. Here is what I have set up: public class ALittleJava { //ABSTRACT CLASS POINT abstract class Point { abstract int distanceToO(); } class CartesianPt extends Point { int x; int y; int distanceToO(){ return((int)Math.sqrt(x*x+y*y)); } CartesianPt(int _x, int _y) { x=_x; y=_y; } } class ManhattanPt extends Point { int x; int y; int distanceToO(){ return(x+y); } ManhattanPt(int _x, int _y){ x=_x; y=_y; } } } And on the main's side: public class Main{ public static void main (String [] args){ Point y = new ManhattanPt(2,8); System.out.println(y.distanceToO()); } } The compiler cannot find the symbols Point and ManhattanPt in the program. If I precede each by ALittleJava., I get another error in the main, i.e., an enclosing instance that contains ALittleJava.ManhattanPt is required. I've tried to find resources on the 'net, but the book must have a pretty confidential following and I couldn't find much. Thank you all. JDelage

    Read the article

  • How can I specify the character encoding to be used by OLEDB when querying a DBF?

    - by Manga Lee
    Is it possible to specify which character encoding should be used by OLEDB when querying a DBF file? A possible work-around would be to encode the query string to the DBF file's character encoding before the OLEDB call, and then encode all the results when they are returned. This will work, but it would be nice if OLEDB or possibly ADO.NET could do this for me. UPDATE The suggestion by Viktor Jevdokimov does not seem to work automatically. But it made me investigate manual conversion of the strings. It is possible to use the TextInfo property of CultureInfo to find out the OemCodePage and the WindowsCodePage, and use those to get the corresponding Encoding instances to perform manual conversion. But I cannot get ADO.NET to use these encodings to perform the conversion for me.

    Read the article

  • Make winform run away from the mouse.

    - by JACK IN THE CRACK
    Okay so I'm trying to make a little gag program that will "run away" from the mouse. So, to get the mouse coordinates for the whole screen and not just the form control I had to create a little helper: static class MouseHelper { [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] internal static extern bool GetCursorPos(ref Point pt); public static Point GetPosition() { Point w32Mouse = new Point(); GetCursorPos(ref w32Mouse); return w32Mouse; } } Now I thought I was going to use the MouseMove event... but that doesn't work for outside the form control either so I have an auto-enabled timer on a 10ms loop called timerMouseMove. public partial class Form1 : Form { public Form1() { InitializeComponent(); } private bool CollisionCheck() { Point win32Mouse = MouseHelper.GetPosition(); if (win32Mouse.X <= Location.X || win32Mouse.X >= (Location.X + Width)) return false; if (win32Mouse.Y <= Location.Y || win32Mouse.Y >= (Location.Y + Height)) return false; return true; } private void timerMouseMove_Tick(object sender, EventArgs e) { if (CollisionCheck()) Location = new Point(Location.X + 1, Location.Y + 1); } } So this works out nicely, at least I have the collision checking working and whatnot. But now, how should I go about figuring which side of the form the mouse has collided with, so that I can update its location to move in the opposite direction the mouse collides with it? And such halp

    Read the article

  • correcting fisheye distortion programmatically

    - by Will
    I have some points that describe positions in a picture taken with a fisheye lens. I've found this description of how to generate a fisheye effect, but not how to reverse it. How do you calculate the radial distance from the centre to go from fisheye to rectilinear? My function stub looks like this: Point correct_fisheye(const Point& p,const Size& img) { // to polar const Point centre = {img.width/2,img.height/2}; const Point rel = {p.x-centre.x,p.y-centre.y}; const double theta = atan2(rel.y,rel.x); double R = sqrt((rel.x*rel.x)+(rel.y*rel.y)); // fisheye undistortion in here please //... change R ... // back to rectangular const Point ret = Point(centre.x+R*cos(theta),centre.y+R*sin(theta)); fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",p.x,p.y,img.width,img.height,theta,R,ret.x,ret.y); return ret; } Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
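
    On the OpenCV half of the question, the usual pattern in the C++ API is to calibrate once, precompute the undistortion lookup maps, and then remap each frame; the per-frame remap is cheap enough for a live video feed. A hedged sketch follows, where the camera matrix K and distortion coefficients D are assumed to come from a prior cv::calibrateCamera run rather than from anything in the post.

    ```cpp
    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    // Build the undistortion maps once for a given camera and frame size.
    void make_undistort_maps(const cv::Mat& K, const cv::Mat& D,
                             cv::Size frameSize, cv::Mat& map1, cv::Mat& map2) {
        cv::initUndistortRectifyMap(K, D, cv::Mat(), K, frameSize,
                                    CV_16SC2, map1, map2);
    }

    // Then, per frame:
    // cv::remap(fisheyeFrame, rectilinearFrame, map1, map2, cv::INTER_LINEAR);
    ```

    The same maps can also be applied to individual point coordinates rather than whole images, which matches the "transform my points instead of the picture" framing of the question.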

    Read the article

  • Google Website Optimizer - Multi Variant Testing - Make a specific page a test page for two experiments

    - by wawawowo
    I'm having a little issue with setting up multivariate tests in Google Website Optimizer. I wish to have two tests. The first is on a header banner which appears on every page; the conversion, for example, would be the visitor landing on the contact-us page. This was very easy to set up. However, when I intend to add another test - again on an element which appears on every page, with the conversion page being the visitor landing on the checkout page - I am now having problems installing the control script. I get the error: Expected to find: }(function(){var k='0651116117',d=docum Found on line 7: (function(){var k='2666211118',d=docum I'm assuming I have this error because I now have two control scripts in the header - one for each experiment. However, I cannot combine the two variations into just one experiment, because each one is different and has a different conversion page. Please advise, thanks.

    Read the article

  • How do I utilize REST to post GPS data from an Android device into a Ruby on Rails application?

    - by joecan
    I am a student in the process of building an Android app that can post a GPS track into a Rails application. I would like to do things the "Rails" way and take advantage of REST. My Rails application basically has 3 models at this point: users, tracks, and points. A user has_many tracks and a track has_many points. A track also has a total distance. Points have a latitude and longitude. I have successfully been able to create an empty track with: curl -i -X POST -H 'Content-Type: application/xml' -d '<track><distance>100</distance></track>' http://localhost:3000/users/1/tracks Whoo hoo! That is pretty cool. I am really impressed that Rails does this. Just to see what would happen, I tried the following: curl -i -X POST -H 'Content-Type: application/xml -d '<track><distance>100</distance><points><point><lat>3</lat><lng>2</lng></point></points></track>' http://localhost:3000/users/1/tracks Fail! The server spits back: Processing TracksController#create (for 127.0.0.1 at 2010-04-14 00:03:25) [POST] Parameters: {"track"={"points"={"point"={"lng"="2", "lat"="3"}}, "distance"="100"}, "user_id"="1"} User Load (0.6ms) SELECT * FROM "users" WHERE ("users"."id" = 1) ActiveRecord::AssociationTypeMismatch (Point(#-620976268) expected, got Array(#-607740138)): app/controllers/tracks_controller.rb:47:in `create' It seems my tracks_controller doesn't like or understand what it's getting from the params object in my tracks_controller.rb: def create @track = @user.tracks.build(params[:track]) My XML might be wrong, but at least Rails seems to be expecting a Point from it. Is there any way I can fix TracksController.create so that it will be able to parse XML of a track with nested multiple points? Or is there another way I should be doing this entirely?

    Read the article

  • Label on the Chart using Microsoft Chart controls

    - by azamsharp
    I am creating a 3D chart using the Microsoft Chart controls. Here is the image: I want to show the point value on top of each bar. For example, the top of each bar should show that exam's points (e.g., 2 for an exam worth 2 points). Here is the code: private void BindData() { var exams = new List<Exam>() { new Exam() { Name = "Exam 1", Point = 10 }, new Exam() { Name = "Exam 2", Point = 12 }, new Exam() { Name = "Exam 3", Point = 15 }, new Exam() { Name = "Exam 4", Point = 2 } }; var series = ExamsChart.Series["ExamSeries"]; series.YValueMembers = "Point"; series.XValueMember = "Name"; //series.MarkerStyle = System.Web.UI.DataVisualization.Charting.MarkerStyle.Circle; //series.MarkerSize = 20; //series.LegendText = "hellow"; //series.Label = "something"; var chartAreas = ExamsChart.ChartAreas["ChartArea1"]; ExamsChart.DataSource = exams; ExamsChart.DataBind(); } and here is the HTML code: <asp:Chart ID="ExamsChart" Width="600" Height="320" runat="server"> <Titles> <asp:Title Text="Exam Report" /> </Titles> <Series> <asp:Series Name="ExamSeries" ChartType="Column"> </asp:Series> </Series> <ChartAreas> <asp:ChartArea Name="ChartArea1"> <Area3DStyle Enable3D="true" WallWidth="10" /> </asp:ChartArea> </ChartAreas> </asp:Chart>


  • Pros and cons of ways of storing an unsigned int without an unsigned int data type

    - by fields
    I have values that are 64-bit unsigned ints, and I need to store them in MongoDB, which has no unsigned int type. I see three main possibilities for storing them in other field types, converting on the way in and out:

    1. A signed int is probably easiest and most space efficient, but the values are not human readable, and if someone forgets to do the conversion, some of them will still work, which may obscure errors.
    2. Raw binary is probably the most difficult for inexperienced programmers to deal with, and it also suffers from non-human-readability.
    3. A string representation is the least space efficient (~40 bytes in Unicode vs. 8 bytes per field), but at least all of the possible values map properly, and querying only requires a conversion to string rather than something more complicated.

    I need these values to be available from different platforms, so a single driver-specific solution isn't an option. Any major pros and cons I've missed? Which one would you use?
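    To make the signed-int option concrete, here is a minimal sketch (in Python, since the question is cross-platform; any language that can reinterpret 64-bit values will do) of the round-trip conversion for storage and retrieval:

        import struct

        def u64_to_i64(value):
            """Reinterpret an unsigned 64-bit int as signed, for storage."""
            return struct.unpack('<q', struct.pack('<Q', value))[0]

        def i64_to_u64(value):
            """Reinterpret a stored signed 64-bit int back as unsigned."""
            return struct.unpack('<Q', struct.pack('<q', value))[0]

        assert i64_to_u64(u64_to_i64(2**64 - 1)) == 2**64 - 1
        assert u64_to_i64(2**63) == -2**63  # values >= 2**63 store as negatives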


  • What are the precise rules/PHP function for encoding strings into POST arrays?

    - by AlexeyMK
    Greetings! I'm just getting into PHP web development. I have an HTML form where a user checks some series of dynamically generated checkboxes and submits via POST. On the PHP side, I want to check which of the checkboxes were clicked. I have an array $full_list, and am doing something like:

        $selected_checkboxes = array_filter($full_list, function($item) {
            return array_key_exists($item, $_POST);
        });

    I run into problems when a list item is named, for example, "Peanut Butter", since in the $_POST array it appears as "Peanut_Butter". I could certainly just str_replace " " with "_" before checking array_key_exists, but I imagine there is a more fundamental encoding issue here; specifically, I'm not sure exactly what layer transforms normal strings in HTML forms (value="Peanut Butter") into "Peanut_Butter". So: what layer is responsible for this conversion? Is it the browser? What are the exact conversion rules, and is there a PHP function out there that will replicate that exact conversion? Thanks!
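    For context: as far as I know it is PHP, not the browser, that performs this rewrite. When PHP parses the request into $_POST, it converts spaces and dots in incoming variable names to underscores (a holdover from the days when such names had to be valid PHP variable names). A minimal sketch of replicating the rule:

        <?php
        // Sketch: mimic PHP's mangling of incoming parameter names,
        // which turns spaces and dots in $_POST/$_GET keys into underscores.
        function post_key($form_name) {
            return strtr($form_name, array(' ' => '_', '.' => '_'));
        }

        echo post_key('Peanut Butter'); // Peanut_Butter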


  • postgres - ERROR: operator does not exist

    - by cino21122
    Again, I have a function that works fine locally, but moving it online yields a big fat error... Taking a cue from a response in which someone pointed out that the number of arguments I was passing wasn't accurate, I double-checked in this situation to be certain that I am passing 5 arguments to the function itself...

        Query failed: ERROR: operator does not exist: point <@> point
        HINT: No operator matches the given name and argument type(s).
              You may need to add explicit type casts.

    The query is this:

        BEGIN;
        SELECT zip_proximity_sum('zc',
          (SELECT g.lat FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id
            WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
          (SELECT g.lon FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id
            WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
          (SELECT m.zip FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id
            WHERE m.zip = '10050' ORDER BY m.id LIMIT 1),
          10);

    The PG function is this:

        CREATE OR REPLACE FUNCTION zip_proximity_sum(refcursor, numeric, numeric, character, numeric)
          RETURNS refcursor AS
        $BODY$
        BEGIN
          OPEN $1 FOR
            SELECT r.zip, point($2, $3) <@> point(g.lat, g.lon) AS distance
            FROM geocoded g
            LEFT JOIN masterfile r ON g.recordid = r.id
            WHERE geo_distance(point($2, $3), point(g.lat, g.lon)) < $5
            ORDER BY r.zip, distance;
          RETURN $1;
        END;
        $BODY$
        LANGUAGE 'plpgsql' VOLATILE COST 100;
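    Worth noting: the <@> operator on points is not built into PostgreSQL; it comes from the earthdistance contrib module (which also supplies the geo_distance function used in the WHERE clause). So the likely difference between the local and online databases is that the module is installed on one and not the other. A sketch of checking and fixing that (CREATE EXTENSION assumes PostgreSQL 9.1 or later; on older servers the contrib SQL scripts have to be loaded into the database by hand):

        -- Does this database know the operator? (zero rows => module missing)
        SELECT oprname FROM pg_operator WHERE oprname = '<@>';

        -- PostgreSQL 9.1+: earthdistance depends on cube
        CREATE EXTENSION IF NOT EXISTS cube;
        CREATE EXTENSION IF NOT EXISTS earthdistance;

        -- Sanity check: great-circle distance between two points, in statute miles
        SELECT point(-87.6, 41.8) <@> point(-73.9, 40.7);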


  • Float addition promoted to double?

    - by Andreas Brinck
    I had a small WTF moment this morning. The WTF can be summarized with this:

        float x = 0.2f;
        float y = 0.1f;
        float z = x + y;
        assert(z == x + y); // This assert is triggered! (At least with Visual Studio 2008.)

    The reason seems to be that the expression x + y is promoted to double and compared with the truncated version in z. (If I change z to double, the assert isn't triggered.) I can see that for precision reasons it would make sense to perform all floating point arithmetic in double precision before converting the result to single precision. I found the following paragraph in the standard (which I guess I sort of already knew, but not in this context):

        4.6.1: "An rvalue of type float can be converted to an rvalue of type double. The value is unchanged."

    My question is: is x + y guaranteed to be promoted to double, or is that at the compiler's discretion? UPDATE: Since many people have claimed that one shouldn't use == for floating point, I just want to state that in the specific case I'm working with, an exact comparison is justified. Floating point comparison is tricky; here's an interesting link on the subject which I think hasn't been mentioned.
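    One way to pin the comparison down (a sketch; whether the original assert fires depends on the compiler and code generation, e.g. x87 versus SSE, since implementations may evaluate intermediates at excess precision): a cast back to float is required to discard any excess precision, so both sides of the comparison end up as the same single-precision value.

        #include <cassert>

        int main() {
            float x = 0.2f;
            float y = 0.1f;
            float z = x + y;

            // May fire: x + y is permitted to be evaluated at higher
            // precision than float before the comparison.
            // assert(z == x + y);

            // Should not fire: the cast forces truncation to float,
            // matching the truncation that happened when z was stored.
            assert(z == static_cast<float>(x + y));
            return 0;
        }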


  • TSQL - make a literal float value

    - by David B
    I understand the host of issues in comparing floats, and I lament their use in this case, but I'm not the table author and have only a small hurdle to climb... Someone has decided to use floats as you'd expect GUIDs to be used, and I need to retrieve all the records with a specific float value.

        sp_help MyTable
        -- Column_name     Type   Computed  Length  Prec
        -- RandomGrouping  float  no        8       53

    Here's my naive attempt:

        -- yields no results
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping = 0.867153569942739

    And here's an approximately working attempt:

        -- yields 2 records
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping BETWEEN 0.867153569942739 - 0.00000001
                                 AND 0.867153569942739 + 0.00000001
        -- 0.867153569942739
        -- 0.867153569942739

    In my naive attempt, is that literal a floating point literal, or is it really a decimal literal that gets converted later? If it is not a floating point literal, what is the syntax for making one? EDIT: Another possibility has occurred to me... it may be that a more precise number than is displayed is stored in this column, making it impossible to write a literal that represents the value exactly. I will accept answers that demonstrate that this is the case. EDIT, in response to DVK: TSQL is MS SQL Server's dialect of SQL. This script works, so equality comparison between float types can be performed deterministically:

        DECLARE @X float
        SELECT TOP 1 @X = RandomGrouping FROM MyTable
        WHERE RandomGrouping BETWEEN 0.839110948199148 - 0.000000000001
                                 AND 0.839110948199148 + 0.000000000001

        -- yields two records
        SELECT * FROM MyTable WHERE RandomGrouping = @X

    I said "approximately" because that method tests a range, and with it I could get values that are not equal to my intended value. The linked article doesn't apply, because I'm not (intentionally) trying to straddle the boundary between decimal and float; I'm trying to work only with floats. This isn't about the non-convertibility of decimals to floats.
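    For reference, in T-SQL a numeric literal containing a decimal point is typed as decimal, so the naive attempt compares the float column against a converted decimal; a literal written in scientific notation is typed as float. A sketch of both ways to get a genuine float literal (table and value taken from the question; whether either matches still depends on the exact bits stored in the column):

        -- E-notation makes the literal a float rather than a decimal
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping = 0.867153569942739E0

        -- Equivalent: explicitly cast the decimal literal to float
        SELECT RandomGrouping FROM MyTable
        WHERE RandomGrouping = CAST(0.867153569942739 AS float)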
