Search Results

Search found 5213 results on 209 pages for 'integrated camera'.


  • Portal View/Projection Matrix near plane

    - by melak47
    For RenderToTexture/Camera-based portal rendering, the basics seem simple enough. However, with a free camera, most of the time it is going to be looking at such portals at an angle. A regular near clipping plane will not always work here: it will either intersect the wall the portal is sitting on, or possibly objects in front of the wall. The desired near clipping plane would be aligned with the portal, producing a view volume more like this, or this in 3D. So here is my question: how does one construct or "truncate" a view/projection matrix to achieve such an off-camera-normal (near) clipping plane?
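    One common approach is Eric Lengyel's oblique near-plane clipping trick: keep the regular perspective projection but rewrite its third row so that the near plane becomes an arbitrary camera-space plane (here, the portal plane). Below is a minimal C++ sketch, assuming a column-major OpenGL-style projection matrix and a clip plane given in camera space with its normal pointing into the view volume; function and variable names are illustrative. The far plane ends up skewed as a side effect, which is usually acceptable for portal rendering.

    static inline float sgn(float a) { return (a > 0.0f) ? 1.0f : ((a < 0.0f) ? -1.0f : 0.0f); }

    // proj: column-major 4x4 perspective projection matrix (OpenGL layout), modified in place.
    // plane: camera-space clip plane (a, b, c, d) with the normal pointing into the frustum.
    void applyObliqueNearPlane(float proj[16], const float plane[4])
    {
        // Clip-space corner point opposite the clip plane, mapped back into camera space.
        float qx = (sgn(plane[0]) + proj[8])  / proj[0];
        float qy = (sgn(plane[1]) + proj[9])  / proj[5];
        float qz = -1.0f;
        float qw = (1.0f + proj[10]) / proj[14];

        // Scale the plane so the transformed near plane coincides with it.
        float d = plane[0] * qx + plane[1] * qy + plane[2] * qz + plane[3] * qw;
        float s = 2.0f / d;

        // Replace the third row (the z output) of the projection matrix.
        proj[2]  = plane[0] * s;
        proj[6]  = plane[1] * s;
        proj[10] = plane[2] * s + 1.0f;
        proj[14] = plane[3] * s;
    }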

    Read the article

  • Avoiding the Black Hole of Leads

    - by Charles Knapp
    Sales says, "Marketing doesn’t deliver enough qualified leads. So, we generate 90% of our own leads." Meanwhile, Marketing says, "We generate most of the leads. But, Sales doesn’t contact them quickly enough, while the lead is still interested." According to Sirius Decisions:
    - Up to 90% of leads never make it to closure
    - Sales works on only 11% of the leads supplied by Marketing
    - Only 18% of the leads Sales accepts convert to opportunities
    - Yet, 45% of prospects typically buy a product from someone within 12 months
    The root cause of these commonplace complaints is a disconnect between the funnels of marketing and sales. Unfortunately, we often see companies with an assortment of poorly integrated marketing tools. It takes too long and too many people to move the data around, scrub it, upload it from one system to another, and get it routed to the right sales teams. As a result, leads fall through the cracks, contextual information is lost, and by the time sales actually contacts a customer it may be too late. Sales automation alone is not enough. Marketing automation (including social) is not enough. Sales and Marketing must work together. It’s time to connect the silos of marketing and sales pipelines and analytics. It’s time for integrated Sales and Marketing automation.
    Integrated pipelines improve lead quality and timeliness. Marketing systems can track a rich set of contextual information about a prospect: self-disclosed information about interests, content viewed, and so on. This insight can equip the sales rep with rich information to make a face-to-face conversation more relevant and more likely to convert to the next stage in the sales process. Integrated lead to revenue (LTR) management provides end-to-end visibility, enabling the company to measure what is working. Marketing can measure its impact on revenue and other business outcomes, and sales can harness and redirect marketing investments to areas where they most help achieve sales objectives.
    It’s a win-win play. Marketing delivers more leads that are qualified, cuts cost per lead, and demonstrates a strong Return on Marketing Investment (ROMI). Sales spends more time with warm leads and less time on cold calls, achieves higher close rates, and delivers more revenue. Learn more by attending our Integrated Sales and Marketing session at the upcoming CloudWorld conferences. Or, visit our Sales and Marketing Cloud Service site for videos and other learning resources.

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model. ZFS has a rich, Windows and NFSv4 compatible ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs, and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups. We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.
    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions. All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.
    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit. Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server. Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.
    With a single, integrated sharing and access control model:
    If you connect from Windows or another SMB/CIFS client:
    - The system creates a credential containing both your Windows identity and your UNIX/NIS identity. The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
    - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
    - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner. Identity mapping also supports access checking if you are being assessed for access via the ACL.
    If you connect via NFS (typically from a UNIX client):
    - The system creates a credential containing your UNIX/NIS identity (including groups).
    - Files you create will be owned by your UNIX/NIS identity.
    - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner. Again, mapping is fully supported during ACL processing.
    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system. This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article

  • Implementing features in an Entity System

    - by Bane
    After asking two questions on Entity Systems (1, 2), and reading some articles on them, I think that I understand them much better than before. But I still have some uncertainties, mainly about building a Particle Emitter, an Input system, and a Camera. I obviously still have some problems understanding Entity Systems, and they might apply to a whole other range of objects, but I chose these three because they are very different concepts and should cover pretty big ground, and help me understand Entity Systems and how to handle problems like these myself as they come along. I am building an engine in Javascript, and I've implemented most of the core features, which include: input handling, a flexible animation system, a particle emitter, math classes and functions, scene handling, a camera and a renderer, and a whole bunch of other things that engines usually support. Then I read Byte56's answer, which got me interested in making the engine into an Entity System one. It would still remain an HTML5 game engine with the basic Scene philosophy, but it should support dynamic creation of entities from components.
    These are some of the definitions from the previous questions, updated:
    - An Entity is an identifier. It doesn't have any data and it's not an object; it's a simple id that represents an index in the Scene's list of all entities (which I actually plan to implement as a component matrix).
    - A Component is a data holder, but with methods that can operate on that data. The best example is a Vector2D, or a "Position" component. It has data (x and y), but also some methods that make operating on the data a bit easier: add(), normalize(), and so on.
    - A System is something that can operate on a set of entities that meet certain requirements; usually they (the entities) need to have a set of components, specified by the system itself, to be operated upon. The system is the "logic" part, the "algorithm" part; all the functionality supplied by components is purely for easier data management.
    The problem that I have now is fitting my old engine concept into this new programming paradigm. Let's start with the simplest one, a Camera. The camera has a position property (Vector2D), a rotation property and some methods for centering it around a point. Each frame, it is fed to a renderer, along with a scene, and all the objects are translated according to its position. Then the scene is rendered. How could I represent this kind of an object in an Entity System? Would the camera be an entity or simply a component? A combination (see my answer)?
    Another issue that is bothering me is implementing a Particle Emitter. For what exactly I mean by that, you can check out my video of it: http://youtu.be/BObargIMQsE. The problem I have with this is, again, what should be what. I'm pretty sure that particles themselves shouldn't be entities, as I want to support 10k+ of them, and creating that many entities would be a heavy blow on my performance, I believe. Or maybe not? It depends on the implementation, but anyone with experience: please do answer.
    The last bit I want to talk about, which is also bugging me the most, is how input should be handled. In my current version of the engine, there is a class called Input. It's a handler that subscribes to the browser's events, such as keypresses and mouse position changes, and also maintains an internal state. Then, the player class has a react() method, which accepts an input object as an argument. The advantage of this is that the input object can be serialized into JSON and then shared over the network, allowing for smooth multiplayer simulations. But how does this translate into an Entity System?
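    One way to make the camera question concrete is to treat the camera as an ordinary entity that carries a Position component plus a small camera component, and let the render system look it up each frame. The fragment below is only a rough sketch of that shape, written in C++ for concreteness rather than in the engine's JavaScript; all type and function names (Position, CameraTag, renderSystem) are invented for illustration.

    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    using Entity = std::uint32_t;                        // an entity is just an id

    struct Position  { float x, y; };                    // plain data component
    struct CameraTag { float zoom; };                    // marks "this entity is the camera"

    // Minimal component storage: one map per component type.
    std::unordered_map<Entity, Position>  positions;
    std::unordered_map<Entity, CameraTag> cameras;

    // A "system" is just logic over entities that have the right components.
    void renderSystem(const std::vector<Entity>& entities)
    {
        for (Entity cam : entities) {
            if (cameras.find(cam) == cameras.end()) continue;   // find the camera entity
            const Position camPos = positions[cam];

            // Translate every other positioned entity relative to the camera.
            for (Entity e : entities) {
                auto p = positions.find(e);
                if (p == positions.end() || e == cam) continue;
                std::printf("draw entity %u at screen (%.1f, %.1f)\n",
                            (unsigned)e, p->second.x - camPos.x, p->second.y - camPos.y);
            }
            break;
        }
    }

    int main()
    {
        std::vector<Entity> entities = {1, 2};
        positions[1] = {10, 5};  cameras[1] = {1.0f};    // entity 1 is the camera
        positions[2] = {42, 17};                         // entity 2 is a sprite
        renderSystem(entities);
    }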

    Read the article

  • Detect multitouch (two fingers touch) on a sprite to apply pinch zoom behaviour

    - by Tahreem
    I am using AndEngine and want to move, zoom and rotate multiple sprites individually on a scene. I have achieved "move", but for pinch zoom I am unable to get the touch event for two fingers. Below is the code:

    public class Main extends SimpleBaseGameActivity {
        private Camera camera;
        private BitmapTextureAtlas mBitmapTextureAtlas;
        private ITextureRegion mFaceTextureRegion;
        private ITextureRegion mFaceTextureRegion2;
        Sprite face2;
        private static final int CAMERA_WIDTH = 800;
        private static final int CAMERA_HEIGHT = 480;

        @Override
        public EngineOptions onCreateEngineOptions() {
            camera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
            EngineOptions engineOptions = new EngineOptions(true, ScreenOrientation.LANDSCAPE_FIXED,
                    new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), camera);
            return engineOptions;
        }

        @Override
        protected void onCreateResources() {
            BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/");
            this.mBitmapTextureAtlas = new BitmapTextureAtlas(this.getTextureManager(), 1024, 1600, TextureOptions.NEAREST);
            BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(this.mBitmapTextureAtlas, //
                    this, "ui_ball_1.png", 0, 0, 1, 1), //
                    this.getVertexBufferObjectManager());
            this.mFaceTextureRegion = BitmapTextureAtlasTextureRegionFactory
                    .createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0);
            this.mFaceTextureRegion2 = BitmapTextureAtlasTextureRegionFactory
                    .createFromAsset(this.mBitmapTextureAtlas, this, "ui_ball_1.png", 0, 0);
            this.mBitmapTextureAtlas.load();
            this.mEngine.getTextureManager().loadTexture(this.mBitmapTextureAtlas);
        }

        @Override
        protected Scene onCreateScene() {
            this.mEngine.registerUpdateHandler(new FPSLogger());
            final Scene scene = new Scene();
            scene.setBackground(new Background(0.09804f, 0.6274f, 0.8784f));
            final float centerX = (CAMERA_WIDTH - this.mFaceTextureRegion.getWidth()) / 2;
            final float centerY = (CAMERA_HEIGHT - this.mFaceTextureRegion.getHeight()) / 2;

            final Sprite face = new Sprite(centerX, centerY, this.mFaceTextureRegion,
                    this.getVertexBufferObjectManager()) {
                @Override
                public boolean onAreaTouched(final TouchEvent pSceneTouchEvent,
                        final float pTouchAreaLocalX, final float pTouchAreaLocalY) {
                    this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2,
                            pSceneTouchEvent.getY() - this.getHeight() / 2);
                    return true;
                }
            };
            face.setScale(2);
            scene.attachChild(face);
            scene.registerTouchArea(face);

            face2 = new Sprite(200, 200, this.mFaceTextureRegion2,
                    this.getVertexBufferObjectManager()) {
                @Override
                public boolean onAreaTouched(final TouchEvent pSceneTouchEvent,
                        final float pTouchAreaLocalX, final float pTouchAreaLocalY) {
                    switch (pSceneTouchEvent.getAction()) {
                        case TouchEvent.ACTION_DOWN:
                            int count = pSceneTouchEvent.getMotionEvent().getPointerCount();
                            for (int i = 0; i < count; i++) {
                                int id = pSceneTouchEvent.getMotionEvent().getPointerId(i);
                            }
                            break;
                        case TouchEvent.ACTION_MOVE:
                            this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2,
                                    pSceneTouchEvent.getY() - this.getHeight() / 2);
                            break;
                        case TouchEvent.ACTION_UP:
                            break;
                    }
                    return true;
                }
            };
            face2.setScale(2);
            scene.attachChild(face2);
            scene.registerTouchArea(face2);
            scene.setTouchAreaBindingOnActionDownEnabled(true);
            return scene;
        }
    }

    The line

    int count = pSceneTouchEvent.getMotionEvent().getPointerCount();

    should set the count variable to 2 if I touch the sprite with two fingers; then I can apply the zooming functionality (the setScale method) to the sprite by getting the distance between the coordinates of the two fingers. Can anyone help me? Why does it not detect two fingers, and without this, how can I zoom the sprite on a two-finger pinch? I am very new to game development, any help would be appreciated. Thanks in advance.
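    Leaving aside why getPointerCount() reports only one pointer here, the distance-to-scale step described above is plain geometry: remember the distance between the two pointers when the pinch starts, and scale by the ratio of the current distance to that starting distance. A tiny sketch of that arithmetic, written in C++ rather than Java purely for illustration (names are made up); the returned factor would be multiplied into the sprite's scale:

    #include <cmath>

    struct Point { float x, y; };

    // Scale factor for a pinch gesture: ratio of the current finger distance
    // to the finger distance captured when the gesture started.
    float pinchScale(Point a0, Point b0, Point a1, Point b1)
    {
        float startDist   = std::hypot(b0.x - a0.x, b0.y - a0.y);
        float currentDist = std::hypot(b1.x - a1.x, b1.y - a1.y);
        return (startDist > 0.0f) ? currentDist / startDist : 1.0f;
    }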

    Read the article

  • My frustum culling is culling from the wrong point [SOLVED]

    - by Xbetas
    I'm having problems with my frustum being at the wrong origin: it follows the rotation of my camera but not the position. In my camera class I'm generating a view matrix:

    void Camera::Update() {
        UpdateViewMatrix();
        glMatrixMode(GL_MODELVIEW);
        //glLoadIdentity();
        glLoadMatrixf(GetViewMatrix().m);
    }

    Then extracting the planes using the projection matrix and modelview matrix:

    void UpdateFrustum() {
        Matrix4x4 projection, model, clip;
        glGetFloatv(GL_PROJECTION_MATRIX, projection.m);
        glGetFloatv(GL_MODELVIEW_MATRIX, model.m);
        clip = model * projection;

        m_Planes[RIGHT][0] = clip.m[ 3] - clip.m[ 0];
        m_Planes[RIGHT][1] = clip.m[ 7] - clip.m[ 4];
        m_Planes[RIGHT][2] = clip.m[11] - clip.m[ 8];
        m_Planes[RIGHT][3] = clip.m[15] - clip.m[12];
        NormalizePlane(RIGHT);

        m_Planes[LEFT][0] = clip.m[ 3] + clip.m[ 0];
        m_Planes[LEFT][1] = clip.m[ 7] + clip.m[ 4];
        m_Planes[LEFT][2] = clip.m[11] + clip.m[ 8];
        m_Planes[LEFT][3] = clip.m[15] + clip.m[12];
        NormalizePlane(LEFT);

        m_Planes[BOTTOM][0] = clip.m[ 3] + clip.m[ 1];
        m_Planes[BOTTOM][1] = clip.m[ 7] + clip.m[ 5];
        m_Planes[BOTTOM][2] = clip.m[11] + clip.m[ 9];
        m_Planes[BOTTOM][3] = clip.m[15] + clip.m[13];
        NormalizePlane(BOTTOM);

        m_Planes[TOP][0] = clip.m[ 3] - clip.m[ 1];
        m_Planes[TOP][1] = clip.m[ 7] - clip.m[ 5];
        m_Planes[TOP][2] = clip.m[11] - clip.m[ 9];
        m_Planes[TOP][3] = clip.m[15] - clip.m[13];
        NormalizePlane(TOP);

        m_Planes[NEAR][0] = clip.m[ 3] + clip.m[ 2];
        m_Planes[NEAR][1] = clip.m[ 7] + clip.m[ 6];
        m_Planes[NEAR][2] = clip.m[11] + clip.m[10];
        m_Planes[NEAR][3] = clip.m[15] + clip.m[14];
        NormalizePlane(NEAR);

        m_Planes[FAR][0] = clip.m[ 3] - clip.m[ 2];
        m_Planes[FAR][1] = clip.m[ 7] - clip.m[ 6];
        m_Planes[FAR][2] = clip.m[11] - clip.m[10];
        m_Planes[FAR][3] = clip.m[15] - clip.m[14];
        NormalizePlane(FAR);
    }

    void NormalizePlane(int side) {
        float length = 1.0 / (float)sqrt(m_Planes[side][0] * m_Planes[side][0] +
                                         m_Planes[side][1] * m_Planes[side][1] +
                                         m_Planes[side][2] * m_Planes[side][2]);
        m_Planes[side][0] *= length;
        m_Planes[side][1] *= length;
        m_Planes[side][2] *= length;
        m_Planes[side][3] *= length;
    }

    And check against it with:

    bool PointInFrustum(float x, float y, float z) {
        for (int i = 0; i < 6; i++) {
            if (m_Planes[i][0] * x + m_Planes[i][1] * y + m_Planes[i][2] * z + m_Planes[i][3] <= 0)
                return false;
        }
        return true;
    }

    Then I render using:

    camera->Update();
    UpdateFrustum();
    int numCulled = 0;
    for (int i = 0; i < (int)meshes.size(); i++) {
        if (!PointInFrustum(meshCenter.x, meshCenter.y, meshCenter.z)) {
            meshes[i]->SetDraw(false);
            numCulled++;
        } else
            meshes[i]->SetDraw(true);
    }

    The matrices look like this (camera is at (5, 0, 0)):

    ModelView:  [0,0,0.99,0] [0,1,0,0] [-0.99,0,0,0] [0,0,-5,1]
    Projection: [0.814,0,0,0] [0,1.303,0,0] [0,0,-1,0] [0,0,-0.02,0]
    Clip:       [0,0,-1,-0.999] [0,1.30,0,0] [-0.814,0,0,0] [0,0,4.98,4.99]

    What am I doing wrong?

    Read the article

  • Help understand GLSL directional light on iOS (left handed coord system)

    - by Robse
    I have now changed from GLKBaseEffect to my own shader implementation. I have a shader manager, which compiles and applies a shader at the right time and does some shader setup like lights. Please have a look at my vertex shader code. Now, the light direction should be provided in eye space, but I think there is something I don't get right. After I set up my view with the camera, I save a lightMatrix to transform the light from global space to eye space. My modelview and projection setup:

    - (void)setupViewWithWidth:(int)width height:(int)height camera:(N3DCamera *)aCamera {
        aCamera.aspect = (float)width / (float)height;
        float aspect = aCamera.aspect;
        float far = aCamera.far;
        float near = aCamera.near;
        float vFOV = aCamera.fieldOfView;
        float top = near * tanf(M_PI * vFOV / 360.0f);
        float bottom = -top;
        float right = aspect * top;
        float left = -right;
        // projection
        GLKMatrixStackLoadMatrix4(projectionStack, GLKMatrix4MakeFrustum(left, right, bottom, top, near, far));
        // identity modelview
        GLKMatrixStackLoadMatrix4(modelviewStack, GLKMatrix4Identity);
        // switch to left handed coord system (forward = z+)
        GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeScale(1, 1, -1));
        // transform camera
        GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeWithMatrix3(GLKMatrix3Transpose(aCamera.orientation)));
        GLKMatrixStackTranslate(modelviewStack, -aCamera.position.x, -aCamera.position.y, -aCamera.position.z);
    }

    - (GLKMatrix4)modelviewMatrix {
        return GLKMatrixStackGetMatrix4(modelviewStack);
    }

    - (GLKMatrix4)projectionMatrix {
        return GLKMatrixStackGetMatr4(projectionStack);
    }

    - (GLKMatrix4)modelviewProjectionMatrix {
        return GLKMatrix4Multiply([self projectionMatrix], [self modelviewMatrix]);
    }

    - (GLKMatrix3)normalMatrix {
        return GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3([self modelviewProjectionMatrix]), NULL);
    }

    After that, I save the lightMatrix like this:

    [self.renderer setupViewWithWidth:view.drawableWidth height:view.drawableHeight camera:self.camera];
    self.lightMatrix = [self.renderer modelviewProjectionMatrix];

    And just before I render a 3D entity of the scene graph, I set up the light config for its shader with the lightMatrix like this:

    - (N3DLight)transformedLight:(N3DLight)light transformation:(GLKMatrix4)matrix {
        N3DLight transformedLight = N3DLightMakeDisabled();
        if (N3DLightIsDirectional(light)) {
            GLKVector3 direction = GLKVector3MakeWithArray(GLKMatrix4MultiplyVector4(matrix, light.position).v);
            direction = GLKVector3Negate(direction); // HACK -> TODO: get lightMatrix right!
            transformedLight = N3DLightMakeDirectional(direction, light.diffuse, light.specular);
        } else {
            ...
        }
        return transformedLight;
    }

    You see the line where I negate the direction? I can't explain why I need to do that, but if I do, the lights are correct as far as I can tell. Please help me get rid of the hack. I'm afraid this has something to do with my switch to a left-handed coordinate system. My vertex shader looks like this:

    attribute highp vec4 inPosition;
    attribute lowp vec4 inNormal;
    ...
    uniform highp mat4 MVP;
    uniform highp mat4 MV;
    uniform lowp mat3 N;
    uniform lowp vec4 constantColor;
    uniform lowp vec4 ambient;
    uniform lowp vec4 light0Position;
    uniform lowp vec4 light0Diffuse;
    uniform lowp vec4 light0Specular;
    varying lowp vec4 vColor;
    varying lowp vec3 vTexCoord0;

    vec4 calcDirectional(vec3 dir, vec4 diffuse, vec4 specular, vec3 normal) {
        float NdotL = max(dot(normal, dir), 0.0);
        return NdotL * diffuse;
    }
    ...

    vec4 calcLight(vec4 pos, vec4 diffuse, vec4 specular, vec3 normal) {
        if (pos.w == 0.0) {
            // Directional Light
            return calcDirectional(normalize(pos.xyz), diffuse, specular, normal);
        } else {
            ...
        }
    }

    void main(void) {
        // position
        highp vec4 position = MVP * inPosition;
        gl_Position = position;
        // normal
        lowp vec3 normal = inNormal.xyz / inNormal.w;
        normal = N * normal;
        normal = normalize(normal);
        // colors
        vColor = constantColor * ambient;
        // add lights
        vColor += calcLight(light0Position, light0Diffuse, light0Specular, normal);
        ...
    }

    Read the article

  • Where is android.os.SystemProperties

    - by Travis
    I'm looking at the Android Camera code, and when I try importing android.os.SystemProperties it cannot be found. Here is the file I'm looking at: http://android.git.kernel.org/?p=platform/packages/apps/Camera.git;a=blob;f=src/com/android/camera/VideoCamera.java;h=8effd3c7e2841eb1ccf0cfce52ca71085642d113;hb=refs/heads/eclair I created a new 2.1 project and tried importing this namespace again, but it still cannot be found. I checked developer.android.com and SystemProperties was not listed. Did I miss something?

    Read the article

  • Interpolation and Morphing of an image in labview and/or openCV

    - by Marc
    I am working on an image manipulation problem. I have an overhead projector that projects onto a screen, and I have a camera that takes pictures of that. I can establish a 1:1 correspondence between a subset of projector coordinates and a subset of camera pixels by projecting dots on the screen and finding the centers of mass of the resulting regions on the camera. I thus have a map

    proj_x, proj_y <-- cam_x, cam_y for scattered point pairs

    My original plan was to regularize this map using the Mathscript function griddata. This would work fine in MATLAB, as follows:

    [pgridx, pgridy] = meshgrid(allprojxpts, allprojypts)
    fitcx = griddata (proj_x, proj_y, cam_x, pgridx, pgridy);
    fitcy = griddata (proj_x, proj_y, cam_y, pgridx, pgridy);

    and the reverse for the camera-to-projector mapping. Unfortunately, this code causes LabVIEW to run out of memory on the meshgrid step (the camera is 5 megapixels, which apparently is too much for LabVIEW to handle). I then started looking through OpenCV and found the cvRemap function. Unfortunately, this function takes as its starting point a regularized pixel-to-pixel map like the one I was trying to generate above. However, it made me hope that functions for creating such a map might be available in OpenCV. I couldn't find one in the OpenCV 1.0 API (I am stuck with 1.0 for legacy reasons), but I was hoping it's there or that someone has an easy trick. So my question is one of the following:
    1) How can I interpolate from scattered points to a grid in OpenCV; i.e., given z = f(x,y) for scattered values of x and y, how do I fill an image with f(im_x, im_y)?
    2) How can I perform an image transform that maps image 1 to image 2, given that I know a scattered mapping of points in coordinate system 1 to coordinate system 2? This could be implemented either in LabVIEW or OpenCV.
    Note: I am tagging this post delaunay, because that's one method of doing a scattered interpolation, but the better tag would be "scattered interpolation".
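    As a sketch of question 1), independent of the OpenCV 1.0 API: a simple way to go from scattered samples to a dense grid is inverse-distance weighting. The brute-force version below is illustrative only (names are made up, and for a 5-megapixel grid you would want a spatial index or a Delaunay triangulation rather than the O(grid x samples) loop shown):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Sample { float x, y, value; };   // scattered data: value = f(x, y)

    // Fill a width x height grid with inverse-distance-weighted estimates of f.
    std::vector<float> interpolateToGrid(const std::vector<Sample>& samples,
                                         int width, int height, float power = 2.0f)
    {
        std::vector<float> grid(static_cast<std::size_t>(width) * height, 0.0f);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                double num = 0.0, den = 0.0;
                for (const Sample& s : samples) {
                    float dx = x - s.x, dy = y - s.y;
                    float d2 = dx * dx + dy * dy;
                    if (d2 < 1e-12f) { num = s.value; den = 1.0; break; }   // exact hit
                    double w = 1.0 / std::pow(std::sqrt(d2), power);
                    num += w * s.value;
                    den += w;
                }
                grid[static_cast<std::size_t>(y) * width + x] = static_cast<float>(num / den);
            }
        }
        return grid;
    }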

    Read the article

  • Flex, Papervision3d: basic scenes, cameras, planes issue

    - by user157880
    // Import Papervision3D
    import org.papervision3d.Papervision3D;
    import org.papervision3d.scenes.*;
    import org.papervision3d.cameras.*;
    import org.papervision3d.objects.*;
    import org.papervision3d.materials.*;

    public var scene:Scene3D;
    public var camera:Camera3D;
    public var target:DisplayObject3D;
    public var screenshotArray:Array; // array to store pictures
    public var radius:Number;
    public var image:Image;

    private function initPapervision():void {
        target = new DisplayObject3D();
        camera = new Camera3D(target);
    1067: Implicit coercion of a value of type org.papervision3d.objects:DisplayObject3D to an unrelated type Number.
        scene = new Scene3D(thisContainer);
    1137: Incorrect number of arguments. Expected no more than 0.
        scene.addChild(new DisplayObject3D(), "center");
    }

    public function handleCreationComplete():void {
        image = new Image();
        image.source = "one_t.jpg";
        image.maintainAspectRatio = true;
        image.scaleContent = false;
        image.autoLoad = false;
        image.addEventListener(Event.COMPLETE, handleImageLoad);
        image.load();
        initPapervision();
    }

    public function handleImageLoad(event:Event):void {
        init3D();
        // onEnterFrame
        addEventListener(Event.ENTER_FRAME, loop3D);
        addEventListener(MouseEvent.MOUSE_MOVE, handleMouseMove);
    }

    public function init3D():void {
        addScreenshots(20);
    }

    public function handleMouseMove(event:MouseEvent):void {
        camera.y = Math.max(thisContainer.mouseY * 5, -450);
    }

    public function addScreenshots(planeCount:Number):void {
        var obj:DisplayObject3D = scene.getChildByName("center");
        screenshotArray = new Array(planeCount);
        var texture:BitmapData = new BitmapData(image.content.loaderInfo.width, image.content.loaderInfo.height, false, 0);
        texture.draw(image);
        // Create texture with a bitmap from the library
        var materialSpace:MaterialObject3D = new BitmapMaterial(texture);
        materialSpace.doubleSided = true;
        materialSpace.smooth = false;
        radius = 500;
        camera.z = radius + 50;
        camera.y = 50;
        for (var i:Number = 0; i < planeCount; i++) {
            screenshotArray[i] = new Object();
            screenshotArray[i].plane = new Plane(materialSpace, 100, 100, 2, 2);
    1180: Call to a possibly undefined method Plane.
            // Position plane
            var rotation:Number = (360 / planeCount) * i;
            var rotationRadians:Number = (rotation - 90) * (Math.PI / 180);
            screenshotArray[i].rotation = rotation;
            screenshotArray[i].plane.z = (radius * Math.cos(rotationRadians)) * -1;
            screenshotArray[i].plane.x = radius * Math.sin(rotationRadians) * -1;
            screenshotArray[i].plane.y = 100;
            screenshotArray[i].plane.lookAt(obj);
            // Add to scene
            scene.addChild(screenshotArray[i].plane);
        }
        scene.renderCamera(camera);
    1061: Call to a possibly undefined method renderCamera through a reference with static type org.papervision3d.scenes:Scene3D.
    }

    public function loop3D(event:Event):void {
        var obj:DisplayObject3D = scene.getChildByName("center");
        for (var i:Number = 0; i < screenshotArray.length; i++) {
            var rotation:Number = screenshotArray[i].rotation;
            rotation += (thisContainer.mouseX / 50) / (screenshotArray.length / 10);
            var rotationRadians:Number = (rotation - 90) * (Math.PI / 180);
            screenshotArray[i].rotation = rotation;
            screenshotArray[i].plane.z = (radius * Math.cos(rotationRadians)) * -1;
            screenshotArray[i].plane.x = radius * Math.sin(rotationRadians) * -1;
            screenshotArray[i].plane.lookAt(obj);
        }
        // now lets render the scene
        scene.renderCamera(camera);
    1061: Call to a possibly undefined method renderCamera through a reference with static type org.papervision3d.scenes:Scene3D.
    }
    ]]

    Read the article

  • Where can I find an iPhone OpenGL ES Example that responds to touch?

    - by Jamey McElveen
    I would like to find an iPhone OpenGL ES example that responds to touch. Ideally it would meet these requirements:
    - Displays a 3D object, like a cube, in the center of the screen
    - Maps a texture to the cube surfaces
    - Moves the camera around the cube as you drag your finger
    - Zooms the camera in and out on the cube by pinching
    - Optionally has a background behind the cube that wraps around the back of the camera (for example, this could create the effect of the cube being in space)
    Has anyone seen one or more examples that can do these, or at least render the cube with the texture?
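    No pointer to a ready-made sample here, but the camera behaviour in that list is mostly spherical-coordinate bookkeeping: accumulate yaw and pitch from drag deltas and a distance from the pinch scale, then rebuild the eye position each frame. A rough, library-free C++ sketch of that math (names, sensitivities and axis conventions are illustrative; feeding the eye position into a view matrix is platform-specific):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    struct OrbitCamera {
        float yaw = 0.0f, pitch = 0.0f;   // radians, driven by drag deltas
        float distance = 5.0f;            // driven by pinch

        void drag(float dxPixels, float dyPixels, float sensitivity = 0.01f) {
            yaw   += dxPixels * sensitivity;
            pitch += dyPixels * sensitivity;
            const float limit = 1.55f;                  // keep away from the poles
            if (pitch >  limit) pitch =  limit;
            if (pitch < -limit) pitch = -limit;
        }

        void pinch(float scale, float minDist = 1.0f, float maxDist = 50.0f) {
            distance /= scale;                          // pinching out (scale > 1) moves closer
            if (distance < minDist) distance = minDist;
            if (distance > maxDist) distance = maxDist;
        }

        // Eye position on a sphere around the target (the cube at the origin).
        Vec3 eye() const {
            return { distance * std::cos(pitch) * std::sin(yaw),
                     distance * std::sin(pitch),
                     distance * std::cos(pitch) * std::cos(yaw) };
        }
    };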

    Read the article

  • How to directly rotate CVImageBuffer image in IOS 4 without converting to UIImage?

    - by Ian Charnas
    I am using OpenCV 2.2 on the iPhone to detect faces. I'm using iOS 4's AVCaptureSession to get access to the camera stream, as seen in the code that follows. My challenge is that the video frames come in as CVBufferRef (pointers to CVImageBuffer) objects, and they come in oriented as landscape, 480px wide by 300px high. This is fine if you are holding the phone sideways, but when the phone is held in the upright position I want to rotate these frames 90 degrees clockwise so that OpenCV can find the faces correctly. I could convert the CVBufferRef to a CGImage, then to a UIImage, and then rotate, as this person is doing: Rotate CGImage taken from video frame. However, that wastes a lot of CPU. I'm looking for a faster way to rotate the images coming in, ideally using the GPU to do this processing if possible. Any ideas? Ian
    Code sample:

    -(void) startCameraCapture {
        // Start up the face detector
        faceDetector = [[FaceDetector alloc] initWithCascade:@"haarcascade_frontalface_alt2" withFileExtension:@"xml"];
        // Create the AVCapture Session
        session = [[AVCaptureSession alloc] init];
        // create a preview layer to show the output from the camera
        AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
        previewLayer.frame = previewView.frame;
        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
        [previewView.layer addSublayer:previewLayer];
        // Get the default camera device
        AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        // Create a AVCaptureInput with the camera device
        NSError *error = nil;
        AVCaptureInput* cameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
        if (cameraInput == nil) {
            NSLog(@"Error to create camera capture:%@", error);
        }
        // Set the output
        AVCaptureVideoDataOutput* videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        videoOutput.alwaysDiscardsLateVideoFrames = YES;
        // create a queue besides the main thread queue to run the capture on
        dispatch_queue_t captureQueue = dispatch_queue_create("catpureQueue", NULL);
        // setup our delegate
        [videoOutput setSampleBufferDelegate:self queue:captureQueue];
        // release the queue. I still don't entirely understand why we're releasing it here,
        // but the code examples I've found indicate this is the right thing. Hmm...
        dispatch_release(captureQueue);
        // configure the pixel format
        videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
        // and the size of the frames we want
        // try AVCaptureSessionPresetLow if this is too slow...
        [session setSessionPreset:AVCaptureSessionPresetMedium];
        // If you wish to cap the frame rate to a known value, such as 10 fps, set
        // minFrameDuration.
        videoOutput.minFrameDuration = CMTimeMake(1, 10);
        // Add the input and output
        [session addInput:cameraInput];
        [session addOutput:videoOutput];
        // Start the session
        [session startRunning];
    }

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
        // only run if we're not already processing an image
        if (!faceDetector.imageNeedsProcessing) {
            // Get CVImage from sample buffer
            CVImageBufferRef cvImage = CMSampleBufferGetImageBuffer(sampleBuffer);
            // Send the CVImage to the FaceDetector for later processing
            [faceDetector setImageFromCVPixelBufferRef:cvImage];
            // Trigger the image processing on the main thread
            [self performSelectorOnMainThread:@selector(processImage) withObject:nil waitUntilDone:NO];
        }
    }
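    This is not the GPU path being asked for, but for reference, the index math behind a 90-degree clockwise rotation of a tightly packed 32-bit-per-pixel (e.g. BGRA) buffer looks like the sketch below; a GPU implementation would perform the same remap in a shader or with rotated texture coordinates. Real CVPixelBuffers also carry row padding (bytes per row), which this simplified version ignores.

    #include <cstdint>

    // Rotate a tightly packed 32-bit-per-pixel image 90 degrees clockwise.
    // src is width x height pixels; dst must hold height x width pixels.
    void rotate90CW(const std::uint32_t* src, std::uint32_t* dst, int width, int height)
    {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                // Source pixel (x, y) lands at (height - 1 - y, x) in the rotated image,
                // whose row stride is `height` pixels.
                dst[x * height + (height - 1 - y)] = src[y * width + x];
            }
        }
    }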

    Read the article

  • MinGW and "declaration does not declare anything"

    - by Bob Somers
    I'm working on converting a Linux project of mine to compile on Windows using MinGW. It compiles and runs just fine on Linux, but when I attempt to compile it with MinGW it bombs out with the following error message:

    camera.h:11: error: declaration does not declare anything
    camera.h:12: error: declaration does not declare anything

    I'm kind of baffled why this is happening, because I'm using the same version of g++ (4.4) on both Linux and Windows (via MinGW). The contents of camera.h is absurdly simple. Here's the code. It's choking on lines 11 and 12, where float near; and float far; are defined.

    #include "Vector.h"

    #ifndef _CAMERA_H_
    #define _CAMERA_H_

    class Camera{
    public:
        Vector eye;
        Vector lookAt;
        float fov;
        float near;
        float far;
    };

    #endif

    Thanks for your help.
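    For what it's worth, this particular error is almost certainly a Windows-headers collision rather than a g++ difference: MinGW's <windef.h> (pulled in wherever <windows.h> is included before camera.h) defines near and far as empty macros for old 16-bit compatibility, so after preprocessing the two members become bare "float;" declarations. A sketch of the usual workarounds (renaming is the robust one; the #undef variant assumes nothing later relies on those macros):

    // camera.h, with the members renamed so they cannot collide with the
    // near/far macros from <windef.h>:
    #include "Vector.h"

    #ifndef _CAMERA_H_
    #define _CAMERA_H_

    class Camera {
    public:
        Vector eye;
        Vector lookAt;
        float fov;
        float nearPlane;   // was: float near;
        float farPlane;    // was: float far;
    };

    #endif

    // Alternatively, keep the names and strip the macros after including the Windows headers:
    // #include <windows.h>
    // #undef near
    // #undef far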

    Read the article

  • Video Capture API remove dialog when capturing from two cameras

    - by sqlmaster
    Hi, I am trying to capture images from two cameras using avicap32.dll messages. I am able to capture images with the first camera, but when the second camera starts to capture it shows me the "Select a video device" dialog. My application doesn't have interaction with the user, so I need to select the second camera programmatically. Can anyone help me?

    Read the article

  • how to write MPEG4 video files from stream in vc++ directshow

    - by maxy
    Hi all, I am writing a simple application that captures video from a camera and saves it to .WMV files using VC++ DirectShow. I have done this task, but now I need to write the file as an MPEG-4 file type instead. Can anyone help me? My graph is CAMERA ----> SAMPLEGRABBER ----> getting streams from the sample grabber; I get the stream from the camera like this. Kindly help me, thanks.

    Read the article

  • MinGW and "delcaration does not declare anything"

    - by Bob Somers
    I'm working on converting a Linux project of mine to compile on Windows using MinGW. It compiles and runs just fine on Linux, but when I attempt to compile it with MinGW it bombs out with the following error message: camera.h:11: error: declaration does not declare anything camera.h:12: error: declaration does not declare anything I'm kind of baffled why this is happening, because I'm using the same version of g++ (4.4) on both Linux and Windows (via MinGW). The contents of camera.h is absurdly simple. Here's the code. It's choking on lines 11 and 12 where float near; and float far; are defined. #include "Vector.h" #ifndef _CAMERA_H_ #define _CAMERA_H_ class Camera{ public: Vector eye; Vector lookAt; float fov; float near; float far; }; #endif Thanks for your help.

    Read the article

  • Using setSourceType with image picker up hides status bar

    - by Aaron
    I am pretty sure this is a bug but I thought I would check. I used the camera overlay to add a button so that on the iPhone a user can switch from the camera view to the photo library. When the button is tapped the source type switches fine, but the status bar is missing from the photo library. Throughout the rest of that session the status bar remains missing from all views even though isStatusBarHidden reports NO.
    This is how the camera overlay view is created:

    if (cameraOverlayView == nil) {
        [[NSBundle mainBundle] loadNibNamed:@"CameraOverlayView" owner:self options:nil];
    }

    If the camera is available, this is when I set the source type and add the overlay:

    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        [imagePicker setSourceType:UIImagePickerControllerSourceTypeCamera];
        [imagePicker setCameraOverlayView:cameraOverlayView];
    }

    Here is the action statement to change the source type:

    (IBAction)selectImage;
    {
        [imagePicker setSourceType:UIImagePickerControllerSourceTypePhotoLibrary];
    }

    If I don't tap the button on the overlay there is no problem with the status bar. PS: I did submit a bug report on this.

    Read the article

  • Recreating "presentModalViewController"?

    - by Benjohn Barnes
    I'm using a UIImagePickerController set up as a camera with an overlay view. I want to present a modal view controller on top of this. When I do so, though, the camera view "closes". This would be okay, but when I dismissModalViewControllerAnimated, I see the closed camera, and there is a long and annoying delay before it reopens. I would like to avoid this. Unless someone has a better approach, I am planning to simply perform the transition that presentModalViewController would perform myself. However, if I take my modal view from its controller and add it as a subview of the camera overlay view, like this:

    [[_imagePickerController cameraOverlayView] addSubview:[viewController view]];

    then the modal view doesn't show up at all, and the application crashes with an EXC_BAD_ACCESS in my "modal" view's layoutSubviews. Whereas, if I present it with presentModalViewController, everything works fine. Clearly, presentModalViewController is also doing some other stuff. Does anyone know what this is so that I can recreate it?

    Read the article

  • Android: Capturing the return of an activity.

    - by Chrispix
    I have a question regarding launching new activities. It boils down to this: I have 3 tabs on a view: A) contains a gMap activity, B) a camera activity, C) some random text fields. The requirement is that the application runs in portrait mode. All 3 tabs work as expected with the exception of the camera preview surface (B): it is rotated 90 degrees. The only way to make it correct is to set the app to landscape, which throws all my tabs around and is pretty much unworkable. My solution is this: replace my camera activity with a regular activity that is empty with the exception of

    Intent i = new Intent(this, CameraActivity.class);
    startActivity(i);

    This launches my CameraActivity, and that works fine. I had to do a linear layout and include 3 images that look like real tabs, so I can try to mimic the operation of the tabs while rotating the screen to landscape and keeping the visuals in portrait. The user can click one of the images (buttons) to display the next tab. This is my issue: it should exit my 'camera activity', returning to the 'blank activity' in a tab, where it should be interpreted as a click on the desired tab from my image. The main thing is, when it returns, it returns to a blank (black) page under a tab (because it is 'empty'). How can I capture the return event back to the page that called the activity, and then see what action they performed? I can set an onClickListener where I can respond to the fake tabs (images) being clicked to exit out of the camera activity. On exit, the tab should update so that is where you return. Any suggestions? Thanks,

    Read the article

  • Quaternion Cameras and projectile vectors

    - by Tom J Nowell
    In our software we have a camera based on mouse movement, with a quaternion at its heart. We want to fire projectiles from this position, which we can do; however, we want to use the camera to aim. The projectile takes a vector which it will add to its position each game frame. How do we acquire such a vector from a given camera/quaternion?
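    One common approach, sketched below with no particular engine in mind: rotate the camera's local forward axis by its orientation quaternion (v' = q v q*) and scale the result by the projectile speed. The forward axis here is assumed to be (0, 0, -1), OpenGL-style; swap in whatever your convention uses, and all names are illustrative.

    struct Vec3 { float x, y, z; };
    struct Quat { float w, x, y, z; };   // assumed unit length

    // Rotate vector v by unit quaternion q (computes q * v * conjugate(q)).
    Vec3 rotate(const Quat& q, const Vec3& v)
    {
        // t = 2 * cross(q.xyz, v)
        Vec3 t = { 2.0f * (q.y * v.z - q.z * v.y),
                   2.0f * (q.z * v.x - q.x * v.z),
                   2.0f * (q.x * v.y - q.y * v.x) };
        // v' = v + w * t + cross(q.xyz, t)
        return { v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                 v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                 v.z + q.w * t.z + (q.x * t.y - q.y * t.x) };
    }

    // Per-frame displacement for a projectile fired along the camera's view direction.
    Vec3 projectileStep(const Quat& cameraOrientation, float speed, float dt)
    {
        const Vec3 forward = { 0.0f, 0.0f, -1.0f };   // camera-space forward (assumption)
        Vec3 dir = rotate(cameraOrientation, forward);
        return { dir.x * speed * dt, dir.y * speed * dt, dir.z * speed * dt };
    }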

    Read the article

  • Connection String Incorrectly Formatted [migrated]

    - by Randy E
    I'm running into some issues. Every time I launch into debug mode and hit the "Create User" button, I get an exception that is due to the connection string either being in the wrong syntax or just wrong. I'm using Visual Studio 2010, and the project is .NET 3.5 with SQL 2008 Express. This is just a personal project that I'm testing some other things with; I know this generally isn't the recommended format, but for a small personal project I don't see the point in doing it any other way. The things I'm testing actually work :)

    Data Source=.\SQLEXPRESS;AttachDbFilename=|Data Directory|ASPAppDatabase.mdf;Integrated Security=True;User Instance=True

    That doesn't work.

    Data Source=.\SQLEXPRESS;AttachDbFilename="|Data Directory|ASPAppDatabase.mdf";Integrated Security=True;User Instance=True

    Neither does that.

    Data Source=.\SQLEXPRESS;AttachDbFilename='|Data Directory|ASPAppDatabase.mdf';Integrated Security=True;User Instance=True

    And again, neither does that :/

    However, if I use the full path to the local DB file rather than "|Data Directory|", it works just fine: no exception is thrown, and I can read and write as I need. And just to cover all my bases, here is the button click event that creates the user:

    protected void btnAddUser_Click(object sender, EventArgs e)
    {
        Membership.CreateUser(txtUserName.Text, txtPassword.Text);
        btnLogin_Click(sender, e);
    }

    So, what am I missing in regards to |Data Directory|? Here is an example of the above not working correctly, taken directly from web.config:

    <add name="ASPAppDatabaseConnection" connectionString='Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|ASPAppDatabase.mdf;Integrated Security=True;User Instance=True'/>

    Read the article

  • Desktop goes blurry / granulated

    - by Bonfire
    After a few minutes of use, the desktop and the fonts in the Unity menu go blurry (see screenshot). In all applications (browser etc.) graphics seem to be all right. After rebooting, the problem disappears for a few minutes. Changing the wallpaper also makes the desktop clear (for a while), but fonts are still blurry. I have Ubuntu 12.04 and Windows XP dual-booted.

    lspci:
    00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02)
    00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02)
    00:02.1 Display controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02)

    lshw -c video:
    *-display:0
        description: VGA compatible controller
        product: 82945G/GZ Integrated Graphics Controller
        vendor: Intel Corporation
        physical id: 2
        bus info: pci@0000:00:02.0
        version: 02
        width: 32 bits
        clock: 33MHz
        capabilities: vga_controller bus_master cap_list
        configuration: driver=i915 latency=0
        resources: irq:16 memory:d0100000-d017ffff ioport:30c0(size=8) memory:c0000000-cfffffff memory:d0180000-d01bffff
    *-display:1 UNCLAIMED
        description: Display controller
        product: 82945G/GZ Integrated Graphics Controller
        vendor: Intel Corporation
        physical id: 2.1
        bus info: pci@0000:00:02.1
        version: 02
        width: 32 bits
        clock: 33MHz
        capabilities: cap_list
        configuration: latency=0
        resources: memory:40000000-4007ffff

    Do you have any ideas how to get this solved? Thank you very much! Bonfire

    Read the article
