Search Results

Search found 2495 results on 100 pages for 'camera hacks'.

Page 87/100 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • Which techniques to study?

    - by Djentleman
    Just to give you some background info: I'm studying a programming major at a tertiary level and am in my third year, so I'm not a newbie off the street. However, I am still quite new to game programming as a subset of programming. One of my personal projects for next semester is to design and create a 2D platformer game with an emphasis on procedural generation and "neato" effects (think Metroidvania). I've written up a list of some techniques to help me improve my personal skills (using XNA for the time being). The list is as follows:

    QuadTrees: Build a basic program in XNA that moves basic 2D sprites (circles and squares) around a set path and speed and changes their colour when they collide. Add functionality to add and delete objects of different sizes (select a direction and speed when adding, and just drag and drop them in).

    Particles: Build a basic program in XNA in which you can select different colours and create particle effects of those colours on screen by clicking and dragging the mouse around (simple particles emerging from where the mouse is clicked). Add functionality where you can change the amount of particles to be drawn, the speed at which they travel, and when they expire. Possibly implement gravity and wind after part 3 is complete.

    Physics: Build a basic program in XNA where you have a ball in a set 2D environment, a wind slider, and a gravity slider (which can go negative for reverse gravity). You can click to drag the ball around and release to throw it, and, depending on what you do, the ball interacts with the environment. Implement other shapes afterwards.

    Random 2D terrain generation: Build a basic program in XNA that randomly generates terrain (including hills, caves, etc.) created from 2D tiles. Add functionality that draws the tiles from a tileset and places different tiles depending on where they lie on the y-axis (dirt on top, then rock, then lava, etc.).

    Randomised objects: Build a basic program in XNA that, when a button is clicked, displays a randomised item sprite based on parameters (type, colour, etc.) with the images pulled from tilesets. Add the ability to save the item as an object, which stores it in a side pane where it can be selected for viewing.

    Movement: Build a basic program in XNA where you can move an object around in a (tile-based) environment with a camera that pans with it. No gravity. Then implement gravity and wind, and allow the character to jump and fall with some basic platforms.

    So my question is this: Are there any other commonly used techniques that I should research, and can I get some suggestions as to the effectiveness of the techniques I've chosen to work on (e.g., don't do QuadTree stuff because [insert reason here], or do [insert technique here] before you start working on particles because [insert reason here])? I hope this is clear enough; please let me know if I can further clarify anything!
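
    Since the question leads with quadtrees: a minimal sketch of the core idea in Java (the post is XNA/C#, but the structure translates directly; every class and method name here is illustrative, not from any engine):

        import java.util.ArrayList;
        import java.util.List;

        // Axis-aligned rectangle used for both entities and node bounds.
        class Rect {
            float x, y, w, h;
            Rect(float x, float y, float w, float h) { this.x = x; this.y = y; this.w = w; this.h = h; }
            boolean intersects(Rect o) {
                return x < o.x + o.w && x + w > o.x && y < o.y + o.h && y + h > o.y;
            }
        }

        class QuadTree {
            static final int CAPACITY = 4;        // max objects per node before splitting
            final Rect bounds;
            final List<Rect> objects = new ArrayList<>();
            QuadTree[] children;                  // null until the node splits

            QuadTree(Rect bounds) { this.bounds = bounds; }

            void insert(Rect r) {
                if (children != null) {           // pass down to every child that overlaps
                    for (QuadTree c : children) if (c.bounds.intersects(r)) c.insert(r);
                    return;
                }
                objects.add(r);
                if (objects.size() > CAPACITY) split();
            }

            private void split() {
                float hw = bounds.w / 2, hh = bounds.h / 2;
                children = new QuadTree[] {
                    new QuadTree(new Rect(bounds.x,      bounds.y,      hw, hh)),
                    new QuadTree(new Rect(bounds.x + hw, bounds.y,      hw, hh)),
                    new QuadTree(new Rect(bounds.x,      bounds.y + hh, hw, hh)),
                    new QuadTree(new Rect(bounds.x + hw, bounds.y + hh, hw, hh))
                };
                for (Rect r : objects) insert(r); // redistribute into children
                objects.clear();
            }

            // Collect candidates for narrow-phase checks; an object spanning
            // several nodes may appear more than once, so dedupe if needed.
            void query(Rect r, List<Rect> out) {
                if (children != null) {
                    for (QuadTree c : children) if (c.bounds.intersects(r)) c.query(r, out);
                } else {
                    out.addAll(objects);
                }
            }
        }

    The payoff for the collision exercise is that query() only returns objects sharing a node with the query rectangle, so the pairwise colour-change check stops being O(n^2).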

    Read the article

  • How to Quickly Add Multiple IP Addresses to Windows Servers

    - by Sysadmin Geek
    If you have ever added multiple IP addresses to a single Windows server, you know that going through the graphical interface is an incredible pain: each IP must be added manually, each in a new dialog box. Needless to say, this is incredibly monotonous and time consuming if you are adding more than a few addresses. Thankfully, there is a much easier way which allows you to add an entire subnet (or more) in seconds.

    Adding an IP Address from the Command Line

    Windows includes the "netsh" command, which allows you to configure just about any aspect of your network connections. If you view the accepted parameters using "netsh /?", you will be presented with a list of commands, each of which has its own list of subcommands (and so on). For the purpose of adding IP addresses, we are interested in this string of parameters:

        netsh interface ipv4 add address

    Note: For Windows Server 2003/XP and earlier, "ipv4" should be replaced with just "ip" in the netsh command. If you view the help information, you can see the full list of accepted parameters, but for the most part what you will be interested in is something like this:

        netsh interface ipv4 add address "Local Area Connection" 192.168.1.2 255.255.255.0

    The above command adds the IP address 192.168.1.2 (with subnet mask 255.255.255.0) to the connection titled "Local Area Connection".

    Adding Multiple IP Addresses at Once

    When we combine a netsh command with the FOR /L loop, we can quickly add multiple IP addresses. The syntax for the FOR /L loop looks like this:

        FOR /L %variable IN (start,step,end) DO command

    So we could easily add every IP address from an entire subnet using this command:

        FOR /L %A IN (0,1,255) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0

    This command takes about 20 seconds to run, where adding the same number of IP addresses manually would take significantly longer.

    A Quick Demonstration

    Here is the initial configuration on our network adapter:

        ipconfig /all

    Now run netsh from within a FOR /L loop to add IPs 192.168.1.10-20 to this adapter:

        FOR /L %A IN (10,1,20) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0

    After the above command is run, viewing the IP configuration of the adapter shows the new addresses.

    Read the article

  • Transparent Technology from Amazon

    - by David Dorf
    Amazon has been making some interesting moves again, this time in the augmented humanity area. Augmented humanity is about helping humans overcome their shortcomings using technology. Putting a powerful smartphone in your pocket helps you in many ways, like navigating streets, communicating with far-off friends, and accessing information. But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use.

    When Apple helped us drop the stylus, we took a giant leap forward in simplicity. Using touchscreens with intuitive gestures was part of the iPhone's original appeal. People don't want to know that technology is there -- they just want the benefits. So what's the next leap beyond the touchscreen to make smartphones even easier to use?

    Two natural ways we interact with the world around us are sight and voice. Google and Apple have been using both in their mobile platforms for limited use cases. Nobody actually wants to type a text message, so why not just speak it? And if you want more information about a book, why not just snap a picture of the cover? That's much more accurate than trying to key in the title and/or author.

    So what's Amazon been doing? First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context. Yes, it's an augmented reality app that uses the phone's camera to view products and overlays data about the products on the screen. For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can also recognize certain logos as well. Download the app and try it out, but don't expect perfection. It's good enough to demonstrate the concept, but it's far from accurate. (MobileBeat did a pretty good review.) Extrapolate to the future and we might just have a heads-up display in our eyeglasses.

    The second interesting area is voice response, for which Siri is getting lots of attention. Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed. But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup.

    I believe over the next 3-5 years the way in which we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives. This will, of course, impact the way we shop, making information more readily accessible than it already is. Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • The View-Matrix and Alternative Calculations

    - by P. Avery
    I'm working on a radiosity processor in DirectX 9. The process requires that the camera be placed at the center of a mesh face and a 'screenshot' be taken facing five different directions: forward, up, down, left, and right. The problem is that when the mesh face is facing up (look vector: 0, 1, 0), a view matrix cannot be determined using the standard look-at construction:

        Matrix4 LookAt( Vector3 eye, Vector3 target, Vector3 up )
        {
            // The "look-at" vector.
            Vector3 zaxis = normal(target - eye);
            // The "right" vector.
            Vector3 xaxis = normal(cross(up, zaxis));
            // The "up" vector.
            Vector3 yaxis = cross(zaxis, xaxis);

            // Create a 4x4 orientation matrix from the right, up, and at vectors
            Matrix4 orientation = {
                xaxis.x, yaxis.x, zaxis.x, 0,
                xaxis.y, yaxis.y, zaxis.y, 0,
                xaxis.z, yaxis.z, zaxis.z, 0,
                0,       0,       0,       1
            };

            // Create a 4x4 translation matrix by negating the eye position.
            Matrix4 translation = {
                1,      0,      0,      0,
                0,      1,      0,      0,
                0,      0,      1,      0,
                -eye.x, -eye.y, -eye.z, 1
            };

            // Combine the orientation and translation to compute the view matrix
            return ( translation * orientation );
        }

    The above function comes from http://3dgep.com/?p=1700. Is there a mathematical approach to this problem?

    Edit: A problem occurs when setting the view matrix to the up or down directions. Here is an example of the problem when facing down:

        D3DXVECTOR4 vPos( 3, 3, 3, 1 ), vEye( 1.5, 3, 3, 1 ), vLook( 0, -1, 0, 1 ),
                    vRight( 1, 0, 0, 1 ), vUp( 0, 0, 1, 1 );
        D3DXMATRIX mV, mP;

        D3DXMatrixPerspectiveFovLH( &mP, D3DX_PI / 2, 1, 0.5f, 2000.0f );
        D3DXMatrixIdentity( &mV );

        memcpy( ( void* )&mV._11, ( void* )&vRight,  sizeof( D3DXVECTOR3 ) );
        memcpy( ( void* )&mV._21, ( void* )&vUp,     sizeof( D3DXVECTOR3 ) );
        memcpy( ( void* )&mV._31, ( void* )&vLook,   sizeof( D3DXVECTOR3 ) );
        memcpy( ( void* )&mV._41, ( void* )&(-vEye), sizeof( D3DXVECTOR3 ) );

        D3DXVec4Transform( &vPos, &vPos, &( mV * mP ) );

    Results: vPos = D3DXVECTOR4( 1.5, -6, -0.5, 0 ). This vertex is not properly processed by the shader: since the homogeneous w value is 0, it cannot be normalized to a position within device space.
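
    One common fix (a sketch of one approach, not the only mathematical answer): detect when the forward vector is nearly parallel to the world up vector and substitute an alternate up before building the basis, since cross(up, z) degenerates to a zero vector in that case. A minimal Java sketch with plain arrays; the names are illustrative, not DirectX API calls:

        // Build a look-at basis, falling back to an alternate up vector when the
        // view direction is (nearly) parallel to world up.
        class LookAtSketch {
            static float[][] lookAtBasis(float[] eye, float[] target) {
                float[] z = normalize(sub(target, eye));   // forward
                float[] up = { 0, 1, 0 };
                if (Math.abs(dot(z, up)) > 0.999f) {       // looking straight up or down
                    up = new float[] { 0, 0, 1 };          // any vector not parallel to z works
                }
                float[] x = normalize(cross(up, z));       // right
                float[] y = cross(z, x);                   // true up
                return new float[][] { x, y, z };
            }

            static float[] sub(float[] a, float[] b) {
                return new float[] { a[0]-b[0], a[1]-b[1], a[2]-b[2] };
            }
            static float dot(float[] a, float[] b) {
                return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
            }
            static float[] cross(float[] a, float[] b) {
                return new float[] { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
            }
            static float[] normalize(float[] v) {
                float len = (float) Math.sqrt(dot(v, v));
                return new float[] { v[0]/len, v[1]/len, v[2]/len };
            }
        }

    For the radiosity case, since the five capture directions are fixed, another option is to simply hard-code the five (right, up, look) triples instead of deriving them at runtime.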

    Read the article

  • Push back rectangle where collision happens

    - by Tifa
    I have tile collision in a game I am creating, but the problem is that once a collision happens, for example on the right side, my sprite can't move up or down. That's because I set the speed to 0, which I think is wrong. Here is my code:

        int startX, startY, endX, endY;
        float pushx = 0, pushy = 0;

        // move player
        if (Gdx.input.isKeyPressed(Input.Keys.LEFT))  { dx = -1; currentWalk = leftWalk; }
        if (Gdx.input.isKeyPressed(Input.Keys.RIGHT)) { dx = 1;  currentWalk = rightWalk; }
        if (Gdx.input.isKeyPressed(Input.Keys.DOWN))  { dy = -1; currentWalk = downWalk; }
        if (Gdx.input.isKeyPressed(Input.Keys.UP))    { dy = 1;  currentWalk = upWalk; }

        sr.setProjectionMatrix(camera.combined);
        sr.begin(ShapeRenderer.ShapeType.Line);
        Rectangle koalaRect = rectPool.obtain();
        koalaRect.set(player.getX(), player.getY(), pw, ph / 2);

        float oldX = player.getX(), oldY = player.getY(); // THIS LINE WAS ADDED
        player.setXY(player.getX() + dx * Gdx.graphics.getDeltaTime() * 4f,
                     player.getY() + dy * Gdx.graphics.getDeltaTime() * 4f); // THIS LINE WAS MOVED HERE FROM DOWN BELOW

        if (dx > 0) { startX = endX = (int)(player.getX() + pw); }
        else        { startX = endX = (int)(player.getX()); }
        startY = (int)(player.getY());
        endY   = (int)(player.getY() + ph);
        getTiles(startX, startY, endX, endY, tiles);
        for (Rectangle tile : tiles) {
            sr.rect(tile.x, tile.y, tile.getWidth(), tile.getHeight());
            if (koalaRect.overlaps(tile)) {
                //dx = 0;
                player.setX(oldX); // THIS LINE CHANGED
                Gdx.app.log("x", "hit " + player.getX() + " " + oldX);
                break;
            }
        }

        if (dy > 0) { startY = endY = (int)(player.getY() + ph); }
        else        { startY = endY = (int)(player.getY()); }
        startX = (int)(player.getX());
        endX   = (int)(player.getX() + pw);
        getTiles(startX, startY, endX, endY, tiles);
        for (Rectangle tile : tiles) {
            if (koalaRect.overlaps(tile)) {
                //dy = 0;
                player.setY(oldY); // THIS LINE CHANGED
                //Gdx.app.log("y", "hit" + player.getY() + " " + oldY);
                break;
            }
        }

        sr.rect(koalaRect.x, koalaRect.y, koalaRect.getWidth(), koalaRect.getHeight() / 2);
        sr.setColor(Color.GREEN);
        sr.end();

    I want to push the sprite back when a collision happens, but I have no idea how. Please help!
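
    One way to get the push-back behaviour (a sketch, assuming the same per-axis checks above run every frame): instead of snapping the player back to the whole old position, move the player flush against the tile on the colliding axis only, leaving the other axis free so up/down movement keeps working during a side contact. In Java/libGDX, reusing the player, koalaRect, tile, pw, and dx names from the code above:

        // Resolve an x-axis collision by pushing the player out by the overlap,
        // instead of zeroing the speed for the whole frame.
        if (koalaRect.overlaps(tile)) {
            if (dx > 0) {
                // moving right: rest against the tile's left edge
                player.setX(tile.x - pw);
            } else if (dx < 0) {
                // moving left: rest against the tile's right edge
                player.setX(tile.x + tile.getWidth());
            }
            // dy is untouched, so vertical movement still works during the contact
        }

    The y-axis loop gets the mirror-image version using ph and tile.getHeight(). Because the player ends up exactly flush with the tile, there is no leftover overlap to re-trigger the collision on the next frame.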

    Read the article

  • The Technology Adoption Curve

    - by Fernando Kimura-Oracle
    Every day we are in contact with a range of technologies, many of them complementary or performing very similar tasks, as in the case of tablets vs. smartphones. There is no denying how much these technologies have become part of daily habits everywhere, changing the way we consume information and even the way we use, or used to use, the computer.

    Broadly speaking, there are two types of innovation:

    1 - Incremental - innovation that happens through the improvement, adjustment, reinterpretation, and evolution of a product. We can see this type of innovation in automobiles, which still follow the same principle, yet when we compare a current car with one built 20 years ago, we can see the incremental innovations that changed the product.

    2 - Disruptive - this type of innovation usually creates a new moment, even a change in how products are habitually used. That was the case with the industrial revolution, which automated production processes, or with the digital camera, which changed the way photos were usually taken and developed.

    Within this process there is a technology adoption curve, created by the American Everett M. Rogers, PhD in sociology and statistics. In his book "Diffusion of Innovations" (1962), Rogers presents the technology adoption curve after extensive analysis and study; Rogers also coined the term "early adopters", which is widely used today.

    The curve breaks down as follows:

    2.5% of the population are the Innovators - they have access to any innovation before everyone else, through social standing, influence, or knowledge. These are the people who have access to an innovation before it is even available on the market.

    13.5% are the Early Adopters, people and companies that, as a matter of behavior, seek out innovations as soon as they are launched. This carries a series of advantages and disadvantages. Being ahead of the market often means using things the market does not use yet, so this behavior can put many companies ahead of their more traditional competitors. There is also the risk that the innovation is not 100% accepted, or goes through some adjustment process, but early adopters are certainly better placed to articulate a vision of the future.

    34% are the Early Majority; at this stage of adoption many people and companies are influenced by the early adopters, and a "natural" cycle of seeking out innovation begins.

    34% are the Late Majority, that is, companies and people who wait until everyone else is using the technology and adopt almost in the last wave.

    Finally, 16% are the Laggards - companies and people who only adopt innovations because they have no other way out given the changes around them, and need to survive those changes somehow.

    Given this scenario, where do you fit? Where does your company fit? It is worth reflecting on the benefits of being an Early Adopter or part of the Early Majority.

    Take the opportunity to download the FREE e-book - Simplify your ENTERPRISE MOBILITY - and learn about the transformational power of mobility in your business: http://bit.ly/e-bookmobilidade

    Read the article

  • Desktop Fun: Runic Style Fonts

    - by Asian Angel
    Most of the time regular fonts are just what you need for documents, invitations, or adding text to images. But what if you are in the mood for something unusual or unique to add that perfect touch? If you like older runic style writing, then enjoy finding some new favorites for your collection with our Runic Style Fonts collection. Temple photo by ShinyShiny. Note: To manage the fonts on your Windows 7, Vista, and XP systems see our article here.

    The Runic Style Fonts

    Sable - Download
    Worn Manuscript - Download
    JSL Ancient - Download
    Antropos - Download
    Cave Gyrl - Download
    The Roman Runes Alliance - Download
    Ancient Geek - Download
    Troll - Download
    Runish Quill MK (includes two font types) - Download
    DS RUNEnglish 2 - Download
    Runes Written (includes two font types) - Download
    Wolves And Ravens - Download
    Art Greco - Download
    Dalek - Download
    Glagolitic AOE - Download
    Linear B - Download
    Cartouche - Download
    Greywolf Glyphs (includes 62 individual characters: A-Z in capital letters, A-Z in lower case letters, and the numbers 0-9) - Download
    Africain (includes 62 individual characters: A-Z in capital letters, A-Z in lower case letters, and the numbers 0-9) - Download
    Cave Writings (includes 52 individual characters: A-Z in capital and lower case letters) - Download

    For more great ways to customize your computer be certain to look through our Desktop Fun section.

    Read the article

  • Acer Allionone Z5810 Touchscreen Issues 12.04

    - by Johannes
    I have an Acer Allionone Z5810, and I can't get the touchscreen to work after I install 12.04. Here is the lsusb output:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 0596:0508 MicroTouch Systems, Inc.
        Bus 001 Device 004: ID 04b8:0005 Seiko Epson Corp. Printer
        Bus 001 Device 005: ID 07ca:1336 AVerMedia Technologies, Inc.
        Bus 002 Device 003: ID 04ca:0058 Lite-On Technology Corp.
        Bus 002 Device 004: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
        Bus 002 Device 005: ID 04f2:b23f Chicony Electronics Co., Ltd

    Here is the xinput --list output:

        Virtual core pointer                          id=2   [master pointer  (3)]
            Virtual core XTEST pointer                id=4   [slave  pointer  (2)]
            Lite-On Technology Corp. Wireless Device  id=9   [slave  pointer  (2)]
            Lite-On Technology Corp. Wireless Device  id=10  [slave  pointer  (2)]
        Virtual core keyboard                         id=3   [master keyboard (2)]
            Virtual core XTEST keyboard               id=5   [slave  keyboard (3)]
            Power Button                              id=6   [slave  keyboard (3)]
            Power Button                              id=7   [slave  keyboard (3)]
            Lite-On Technology Corp. Wireless Device  id=8   [slave  keyboard (3)]
            USB 2.0 camera                            id=11  [slave  keyboard (3)]
            AT Translated Set 2 keyboard              id=12  [slave  keyboard (3)]

    My xorg.conf contains:

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 304.48 (buildmeister@swio-display x86-rhel47-04.nvidia.com) Sun Sep 9 21:31:39 PDT 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            InputDevice    "TouchScreen"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            Identifier  "TouchScreen"
            Driver      "microtouch"
            Option      "Type" "finger"
            Option      "Device" "/dev/ttyS3"
            Option      "ScreenNo" "0"
            Option      "MinX" "0"
            Option      "MaxX" "16383"
            Option      "MinY" "0"
            Option      "MaxY" "16383"
            Option      "SendCoreEvents" "yes"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/psaux"
            Option      "Emulate3Buttons" "no"
            Option      "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "Monitor"
            Identifier   "Monitor0"
            VendorName   "Unknown"
            ModelName    "Unknown"
            HorizSync    28.0 - 33.0
            VertRefresh  43.0 - 72.0
            Option       "DPMS"
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Device0"
            Monitor    "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth   24
            EndSubSection
        EndSection

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix:

        GLfloat shadow[] = {
            -1, 0,  0,  0,
             1, 0, -1,  1,
             0, 0, -1,  0,
             0, 0,  0, -1 };

    It should cast object shadows onto the y = 0 plane from a point light at (1,1,-1). I create a rectangle in the x = 0.5 plane:

        glBegin( GL_QUADS );
        glVertex3f( 0.5, 0.2, -0.5 );
        glVertex3f( 0.5, 0.2, -1.5 );
        glVertex3f( 0.5, 0.5, -1.5 );
        glVertex3f( 0.5, 0.5, -0.5 );
        glEnd();

    Now if I manually multiply these vertices with the matrix, I get:

        glBegin( GL_QUADS );
        glVertex3f( 0.375, 0, -0.375 );
        glVertex3f( 0.375, 0, -1.625 );
        glVertex3f( 0, 0, -2 );
        glVertex3f( 0, 0, 0 );
        glEnd();

    which produces a reasonable display (camera at (0,5,0) looking down the y axis). So rather than do the calculation manually, I should be able to use the OpenGL model transformation. I write this code:

        glMatrixMode( GL_MODELVIEW );
        GLfloat shadow[] = {
            -1, 0,  0,  0,
             1, 0, -1,  1,
             0, 0, -1,  0,
             0, 0,  0, -1 };
        glLoadMatrixf( shadow );
        glBegin( GL_QUADS );
        glVertex3f( 0.5, 0.2, -0.5 );
        glVertex3f( 0.5, 0.2, -1.5 );
        glVertex3f( 0.5, 0.5, -1.5 );
        glVertex3f( 0.5, 0.5, -0.5 );
        glEnd();

    But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up?

    Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result (of course!).
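
    There is no built-in OpenGL mode that prints transformed vertices, but the fixed-function math is easy to replicate on the CPU. Below is a small Java sketch (any language works for checking the arithmetic) that applies a 4x4 matrix to a vertex the way glLoadMatrixf interprets the array -- column-major -- and performs the perspective divide. Run against the quad above, it reproduces the hand-computed positions, and it also shows the w component coming out negative, which is one plausible reason the pipeline clips the quad to a blank screen:

        // Apply a 4x4 matrix stored column-major (as glLoadMatrixf reads it) to
        // the point (x, y, z, 1) and print the result after the perspective divide.
        class ShadowDebug {
            static void transform(float[] m, float x, float y, float z) {
                float[] v = { x, y, z, 1f };
                float[] out = new float[4];
                for (int row = 0; row < 4; row++) {
                    for (int col = 0; col < 4; col++) {
                        out[row] += m[col * 4 + row] * v[col]; // column-major indexing
                    }
                }
                System.out.printf("(%f, %f, %f)  w = %f%n",
                        out[0] / out[3], out[1] / out[3], out[2] / out[3], out[3]);
            }

            public static void main(String[] args) {
                float[] shadow = { -1, 0, 0, 0,  1, 0, -1, 1,  0, 0, -1, 0,  0, 0, 0, -1 };
                transform(shadow, 0.5f, 0.2f, -0.5f); // prints (0.375, 0, -0.375), w = -0.8
                transform(shadow, 0.5f, 0.2f, -1.5f); // prints (0.375, 0, -1.625), w = -0.8
                transform(shadow, 0.5f, 0.5f, -1.5f); // prints (0, 0, -2),        w = -0.5
                transform(shadow, 0.5f, 0.5f, -0.5f); // prints (0, 0, 0),         w = -0.5
            }
        }

    Vertices with w <= 0 are discarded by clipping, so negating the whole matrix (which represents the same projective transform but keeps w positive) is one thing worth trying.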

    Read the article

  • How to Make Objects Fall Faster in a Physics Simulation

    - by David Dimalanta
    I used collision physics (i.e. Box2D and the Physics Body Editor) and implemented it in my Java code. I'm trying to vary the fall speed following these examples: a light object (e.g. a feather) falls slower; a heavier object (e.g. a pebble, rock, or car) falls faster. I decided to double the falling speed for more excitement. I tried adding mass, but the falling speed stays constant instead of increasing. Check my code, which I put in the input processor's touchUp() method, in the same class that implements InputProcessor and Screen:

        @Override
        public boolean touchUp(int screenX, int screenY, int pointer, int button) {
            // TODO Touch Up Event
            if (is_Next_Fruit_Touched) {
                BodyEditorLoader Fruit_Loader = new BodyEditorLoader(
                        Gdx.files.internal("Shape_Physics/Fruity Physics.json"));
                Fruit_BD.type = BodyType.DynamicBody;
                Fruit_BD.position.set(x, y);

                FixtureDef Fruit_FD = new FixtureDef(); // --> Allows you to make the object's physics.
                Fruit_FD.density = 1.0f;
                Fruit_FD.friction = 0.7f;
                Fruit_FD.restitution = 0.2f;

                MassData mass = new MassData();
                mass.mass = 5f;

                Fruit_Body[n] = world.createBody(Fruit_BD);
                Fruit_Body[n].setActive(true); // --> Let your dragon fall.
                Fruit_Body[n].setMassData(mass);
                Fruit_Body[n].setGravityScale(1.0f);
                System.out.println("Eggs... " + n);

                Fruit_Loader.attachFixture(Fruit_Body[n], Body, Fruit_FD, Fruit_IMG.getWidth());
                Fruit_Origin = Fruit_Loader.getOrigin(Body, Fruit_IMG.getWidth()).cpy();
                is_Next_Fruit_Touched = false;

                up = y;
                Gdx.app.log("Initial Y-coordinate", "Y at " + up);

                // Once it's touched, the next fruit will be set to drag.
                if (n < 50) {
                    n++;
                } else {
                    System.exit(0);
                }
            }
            return true;
        }

    And take note, in the show() method the view size from the camera is 720x1280:

        camera_1 = new OrthographicCamera();
        camera_1.viewportHeight = 1280;
        camera_1.viewportWidth = 720;
        camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
        camera_1.update();

    I thought adding weight would make the falling object fall faster once I release my finger (after picking the object from the upper right of the screen), but the speed remains either constant or slow. How can I solve this? Can you help?
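
    A note on the physics: in Box2D, gravity is an acceleration, so adding mass does not change how fast a body falls -- heavier bodies only resist applied forces more. The usual knobs are the body's gravity scale or an initial velocity. Both calls below are real libGDX Body methods; where exactly to hook them in is an assumption, but they could directly replace or follow the setGravityScale(1.0f) line above:

        // Mass doesn't affect free-fall speed in Box2D; gravityScale does.
        // A scale of 2 makes this body accelerate downward twice as fast.
        Fruit_Body[n].setGravityScale(2.0f);

        // Alternatively (or additionally), launch the body with an initial
        // downward velocity so the fall starts fast instead of building from zero.
        Fruit_Body[n].setLinearVelocity(0f, -10f); // world units (meters) per second

    Raising the world gravity itself (e.g. new World(new Vector2(0, -20f), true)) has the same effect on every body at once, which is often simpler when all objects should feel snappier.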

    Read the article

  • Android Swipe In Unity 3D World with AR

    - by Christian
    I am working on an AR application using Unity3D and the Vuforia SDK for Android. The way the application works is: when the designated image (a frame marker in our case) is recognized by the camera, a 3D island is rendered at that spot. Currently I am able to detect when/which objects are touched on the model by raycasting. I am also able to successfully detect a swipe using this code:

        if (Input.touchCount > 0) {
            Touch touch = Input.touches[0];
            switch (touch.phase) {
                case TouchPhase.Began:
                    couldBeSwipe = true;
                    startPos = touch.position;
                    startTime = Time.time;
                    break;
                case TouchPhase.Moved:
                    if (Mathf.Abs(touch.position.y - startPos.y) > comfortZoneY) {
                        couldBeSwipe = false;
                    }
                    // track points here for raycast if it is a swipe
                    break;
                case TouchPhase.Stationary:
                    couldBeSwipe = false;
                    break;
                case TouchPhase.Ended:
                    float swipeTime = Time.time - startTime;
                    float swipeDist = (touch.position - startPos).magnitude;
                    if (couldBeSwipe && (swipeTime < maxSwipeTime) && (swipeDist > minSwipeDist)) {
                        // It's a swiiiiiiiiiiiipe!
                        float swipeDirection = Mathf.Sign(touch.position.y - startPos.y);
                        // Do something here in reaction to the swipe.
                        swipeCounter.IncrementCounter();
                    }
                    break;
            }
            touchInfo.SetTouchInfo(Time.time - startTime,
                                   (touch.position - startPos).magnitude,
                                   Mathf.Abs(touch.position.y - startPos.y));
        }

    Thanks to andeeeee for the logic. But I want to have some interaction in the 3D world based on the swipe on the screen, i.e. if the user swipes over unoccluded enemies, they die. My first thought was to track all the points during the Moved touch phase, and then, if it turns out to be a swipe, raycast into all those points and kill any enemy that is hit. Is there a better way to do this? What is the best approach? Thanks for the help!

    Read the article

  • 3D terrain map with Hexagon Grids (XNA)

    - by Rob
    I'm working on a hobby project (I'm a web/backend developer by day) and I want to create a 3D tile (terrain) engine. I'm using XNA, but I can use MonoGame, OpenGL, or straight DirectX, so the answer does not have to be XNA specific. I'm looking for some high-level advice on how to approach this problem. I know about creating height maps and such; there are thousands of references out there on the net for that. This is a bit more specific: I'm concerned with the approach for getting a 3D hexagon tile grid out of my terrain (since the terrain, and all 3D objects, are basically triangles).

    The first approach I thought about is to basically draw the triangles on the screen in the following order (blue numbers) to give me the triangles for terrain (black triangles) and then make hexes out of the triangles (red hex): http://screencast.com/t/ebrH2g5V. This approach seems complicated to me since I'm basically having to draw four different types of triangles.

    The next approach I thought of was to use the existing triangles, as I did for a square grid, and get my hexes from six triangles as follows: http://screencast.com/t/w9b7qKzVJtb8. This seems like the easier approach to me since there are only two types of triangles (I would have to play with the heights and widths to get a "perfect" hexagon, but the idea is the same).

    So I'm looking for:

    1) Any suggestions on which approach I should take, and why.

    2) How would I translate mouse position to a hexagon grid position (especially when moving the camera around)? For example, in the second image, if the mouse pointer were the green circle, how would I determine to highlight that hexagon and then translate that into grid coordinates (assuming it is 0,0)?

    3) Any references, articles, books, etc. to get me going in the right direction.

    Note: I've done hex grids and mouse-to-grid coordinate conversion before in 2D; I'm looking for some pointers on how to do the same in 3D. The result I would like to achieve is something similar to the following: http://www.youtube.com/watch?v=Ri92YkyC3fw (sorry about the youtube link, but it will only let me post 2 links in this post... same rep problem I mention below).

    Thanks for any help! P.S. Sorry for not posting the images inline, I apparently don't have enough rep on this stack exchange site.
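
    For question 2, one common approach (independent of which triangle layout you pick): intersect the mouse ray with the ground plane, then convert the world-space hit point into fractional axial hex coordinates and round to the nearest hex. A Java sketch for pointy-top hexes; it assumes the hit point (px, pz) has already been found by unprojecting the mouse, and the class and field names are illustrative:

        class HexPicker {
            final float size; // hex circumradius in world units
            HexPicker(float size) { this.size = size; }

            // Fractional axial coordinates for a world-space point (pointy-top layout).
            int[] pick(float px, float pz) {
                float q = (float) ((Math.sqrt(3) / 3 * px - 1.0 / 3 * pz) / size);
                float r = (float) ((2.0 / 3 * pz) / size);
                return cubeRound(q, r);
            }

            // Round fractional axial coords via cube coordinates so the nearest hex wins.
            private int[] cubeRound(float q, float r) {
                float x = q, z = r, y = -x - z;
                int rx = Math.round(x), ry = Math.round(y), rz = Math.round(z);
                float dx = Math.abs(rx - x), dy = Math.abs(ry - y), dz = Math.abs(rz - z);
                if (dx > dy && dx > dz)  rx = -ry - rz;  // fix the coordinate with the
                else if (dy > dz)        ry = -rx - rz;  // largest rounding error so
                else                     rz = -rx - ry;  // x + y + z stays 0
                return new int[] { rx, rz };             // axial (q, r)
            }
        }

    Because the conversion works on the world-space hit point, camera movement comes for free: pick the point by unprojecting the mouse against the terrain (or the y = 0 plane as an approximation), then feed it to pick(). Red Blob Games' hexagonal grid guide covers this math in depth and is a good reference for question 3.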

    Read the article

  • Precise Touch Screen Dragging Issue: Trouble Aligning with the Finger due to Different Screen Resolution

    - by David Dimalanta
    Please, I need your help. I'm trying to make a game where you drag and drop a sprite/image, with the finger following the image precisely, without any offset. When I try it on the 900x1280 (X = 900, Y = 1280) screen of a Google Nexus 7 tablet, it follows precisely. However, on a phone with a resolution smaller than 900x1280, my finger and the image are not aligned properly, although dragging still works. This is the code I used to make the sprite follow my finger, in touchDragged():

        x = ((screenX + Gdx.input.getX()) / 2) - (fruit.width / 2);
        y = ((camera_2.viewportHeight * multiplier) - ((screenY + Gdx.input.getY()) / 2) - (fruit.width / 2));

    The code above keeps the finger and the image/sprite together while dragging, but only at 900x1280. You may wonder why there's camera_2.viewportHeight in my code. It's there for two reasons: to prevent inverted dragging (e.g. when you swipe your finger downwards, the sprite moves upwards instead), and as a baseline for reading coordinates... I think.

    I then added another orthographic camera, named camera_1, with different settings; I use it for scaling the falling object by meters per pixel. It also seems effective on its own for smartphones with smaller resolutions, and this is what I used:

    show():

        camera_1 = new OrthographicCamera();
        camera_1.viewportHeight = 280; // --> A smaller viewport height so the object falls faster, decreasing the chance of drag force.
        camera_1.viewportWidth = 196;  // --> Kept proportional to the original screen view size as much as possible.
        camera_1.position.set(camera_1.viewportWidth * 0.5f, camera_1.viewportHeight * 0.5f, 0f);
        camera_1.update();

    touchDragged():

        x = ((screenX + (camera_1.viewportWidth / Gdx.input.getX())) / 2) - (fruit.width / 2);
        y = ((camera_1.viewportHeight * multiplier) - ((screenY + (camera_1.viewportHeight / Gdx.input.getY())) / 2) - (fruit.width / 2));

    But with this, instead of the sprite following my finger closely, there is still a gap between the sprite/image and the finger, apparently dependent on the screen resolution. I'm trying to drag the blueberry sprite with my finger. My expectation was not met: I want my finger and the sprite/image (the blueberry) to stay close together while dragging, until I release it. Here's what it looks like: [screenshot]. I need the sprite to follow the finger closely while dragging, on all screen sizes, not just one.
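
    For what it's worth, the usual way to make touch handling resolution-independent in libGDX is to let the camera convert screen coordinates to world coordinates with unproject(), instead of averaging raw screen values. A sketch of touchDragged() under that approach, reusing the camera_1 and fruit names from above (fruit.height is assumed to exist alongside fruit.width; Vector3 is libGDX's com.badlogic.gdx.math.Vector3):

        @Override
        public boolean touchDragged(int screenX, int screenY, int pointer) {
            // Convert the raw touch into world coordinates for whatever viewport
            // the camera uses; this works identically on every screen resolution
            // and also handles the inverted y axis for free.
            Vector3 touch = new Vector3(screenX, screenY, 0);
            camera_1.unproject(touch);

            // Center the sprite on the finger in world space.
            x = touch.x - fruit.width / 2f;
            y = touch.y - fruit.height / 2f;
            return true;
        }

    Because unproject() maps through the camera's own viewport (196x280 here), the sprite position comes out in the same units the world is drawn in, so no per-device multiplier is needed.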

    Read the article

  • OpenGL position from depth is wrong

    - by CoffeeandCode
    My engine is currently implemented using a deferred rendering technique, and today I decided to change it up a bit. First I was storing 5 textures as so:

        DEPTH24_STENCIL8 - Depth and stencil
        RGBA32F          - Position
        RGBA10_A2        - Normals
        RGBA8 x 2        - Specular & Diffuse

    I decided to minimize it and reconstruct positions from the depth buffer. Trying to figure out what is wrong with my method currently has not been fun :/ Currently I get a result that changes whenever I move the camera... weird.

    Vertex shader (really simple):

        #version 150

        layout(location = 0) in vec3 position;
        layout(location = 1) in vec2 uv;

        out vec2 uv_f;

        void main(){
            uv_f = uv;
            gl_Position = vec4(position, 1.0);
        }

    Fragment shader (where the fun (and not so fun) stuff happens):

        #version 150

        uniform sampler2D depth_tex;
        uniform sampler2D normal_tex;
        uniform sampler2D diffuse_tex;
        uniform sampler2D specular_tex;

        uniform mat4 inv_proj_mat;
        uniform vec2 nearz_farz;

        in vec2 uv_f;

        ... other uniforms and such ...

        layout(location = 3) out vec4 PostProcess;

        vec3 reconstruct_pos(){
            float z = texture(depth_tex, uv_f).x;
            vec4 sPos = vec4(uv_f * 2.0 - 1.0, z, 1.0);
            sPos = inv_proj_mat * sPos;
            return (sPos.xyz / sPos.w);
        }

        void main(){
            vec3 pos = reconstruct_pos();
            vec3 normal = texture(normal_tex, uv_f).rgb;
            vec3 diffuse = texture(diffuse_tex, uv_f).rgb;
            vec4 specular = texture(specular_tex, uv_f);

            ... do lighting ...

            PostProcess = vec4(pos, 1.0); // Just for testing
        }

    Rendering code (probably nothing wrong here, seeing as though it always worked before):

        this->gbuffer->bind();
        gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
        gl::Enable(gl::DEPTH_TEST);
        gl::Enable(gl::CULL_FACE);

        ... bind geometry shader and draw models and shiz ...

        gl::Disable(gl::DEPTH_TEST);
        gl::Disable(gl::CULL_FACE);
        gl::Enable(gl::BLEND);

        ... bind textures and lighting shaders shown above then draw each light ...

        gl::BindFramebuffer(gl::FRAMEBUFFER, 0);
        gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
        gl::Disable(gl::BLEND);

        ... bind screen shaders and draw quad with PostProcess texture ...

        Rinse_and_repeat(); // not actually a function ;)

    Why are my positions being output like they are?
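
    Two things worth checking, hedged since they depend on how the engine writes its buffers. First, a depth texture sampled in [0,1] has to be remapped to NDC exactly like the xy coordinates (z = depth * 2.0 - 1.0) before the inverse projection is applied; reconstruct_pos() above remaps uv_f but not z. Second, inv_proj_mat alone yields view-space positions, which legitimately change as the camera moves; world-space positions need the inverse of the full view-projection matrix. A CPU-side Java sketch for sanity-checking a single pixel (column-major matrix, as OpenGL uniforms are uploaded; values are illustrative):

        // Sanity-check position reconstruction for one pixel on the CPU.
        class ReconstructCheck {
            static float[] reconstruct(float[] invProj, float u, float v, float depth) {
                // Map u, v AND depth from [0,1] to NDC [-1,1]; skipping the depth
                // remap skews every reconstructed position.
                float[] ndc = { u * 2f - 1f, v * 2f - 1f, depth * 2f - 1f, 1f };
                float[] p = new float[4];
                for (int row = 0; row < 4; row++)
                    for (int col = 0; col < 4; col++)
                        p[row] += invProj[col * 4 + row] * ndc[col]; // column-major
                return new float[] { p[0] / p[3], p[1] / p[3], p[2] / p[3] };
            }
        }

    Comparing this output for a known pixel against the old RGBA32F position buffer (which still holds ground truth) quickly shows whether the remap or the choice of matrix is the culprit.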

    Read the article

  • 2D water with dynamic waves

    - by user1103457
    New Super Mario Bros has really cool 2D water that I'd like to learn how to create. Here's a video showing it. When something hits the water, it creates a wave. There are also constant "background" waves. You can get a good look at the constant waves just after 00:50, when the camera isn't moving.

    I assume the splashes in NSMB work as in the first part of this tutorial. But in NSMB the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, a splash first creates a deep "hole" in the water at its origin; in New Super Mario Bros this hole is absent, or much smaller. I am referring to the splashes the player creates when jumping in and out of the water. How do they create the constant waves and the splashes? I am especially interested in the splashes, and how they work together with the constant waves. I am programming in XNA. I've tried this myself, but couldn't really get it all to work well together.

    Bonus questions: How do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I've tried to create water like this.

    EDIT: I assume the constant waves are created using a sine function, and the splashes are probably created the way they are in the tutorial (but they are not the same, so I am still interested in how to make this kind of splash). My real trouble is combining the two: I know I can use the sine function to set the height of a specific water column, but the splashes use the speed to determine the new height, and I can't figure out how to combine those. Note that I am not asking exactly how the developers of New Super Mario Bros did this; I am just interested in ways to recreate a similar effect. This week is an exam week, so I don't have time to work on the code; after this week I will spend a lot of time on it. But I am constantly thinking about it, so I will be checking comments etc. I just won't be looking at the code, since that might be too time-consuming.
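
    One technique that produces both effects (a sketch of the common spring-grid approach, not a claim about how NSMB actually implements it): store a height and a speed per water column, pull each column toward its rest height like a damped spring, and let neighboring columns exchange motion so splashes travel outward. The constant background wave is a sine term added only at render time, so it never feeds back into the splash simulation -- which sidesteps the "how do I combine them" problem in the edit above. A Java sketch (XNA/C# ports line-for-line); all constants are tuning values, not canonical:

        // Spring-based 2D water surface: each column eases toward rest height and
        // exchanges motion with its neighbors, producing traveling splash waves.
        class WaterSurface {
            final float[] height, speed;
            final float tension = 0.025f, damping = 0.025f, spread = 0.25f;
            final float restHeight;

            WaterSurface(int columns, float restHeight) {
                this.height = new float[columns];
                this.speed = new float[columns];
                this.restHeight = restHeight;
                java.util.Arrays.fill(height, restHeight);
            }

            // A shallow splash: nudge the column's speed instead of digging a hole,
            // which avoids the deep crater the tutorial's version produces.
            void splash(int column, float velocity) { speed[column] += velocity; }

            void update() {
                for (int i = 0; i < height.length; i++) {
                    float accel = tension * (restHeight - height[i]) - damping * speed[i];
                    speed[i] += accel;
                    height[i] += speed[i];
                }
                // Spread motion to neighbors; a few passes make waves travel smoothly.
                for (int pass = 0; pass < 4; pass++) {
                    for (int i = 0; i < height.length; i++) {
                        if (i > 0)                 speed[i - 1] += spread * (height[i] - height[i - 1]);
                        if (i < height.length - 1) speed[i + 1] += spread * (height[i] - height[i + 1]);
                    }
                }
            }

            // Background waves are added only when drawing, so they never disturb
            // the spring simulation that drives the splashes.
            float renderHeight(int i, float time) {
                return height[i] + 1.5f * (float) Math.sin(time * 2f + i * 0.3f);
            }
        }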

    Read the article

  • Isometric layer moving inside map

    - by gronzzz
    I've created an isometric map and now I'm trying to limit layer movement. The main idea is that I have left-bottom, right-bottom, left-top, and right-top points that the camera can not move outside of, so the player will not see the map out of bounds. But I can not work out an algorithm to do that. This is my layer scale/move code:

        - (void)touchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            _isTouchBegin = YES;
        }

        - (void)touchMoved:(UITouch *)touch withEvent:(UIEvent *)event {
            NSArray *allTouches = [[event allTouches] allObjects];
            UITouch *touchOne = [allTouches objectAtIndex:0];
            CGPoint touchLocationOne = [touchOne locationInView:[touchOne view]];
            CGPoint previousLocationOne = [touchOne previousLocationInView:[touchOne view]];

            // Scaling
            if ([allTouches count] == 2) {
                _isDragging = NO;
                UITouch *touchTwo = [allTouches objectAtIndex:1];
                CGPoint touchLocationTwo = [touchTwo locationInView:[touchTwo view]];
                CGPoint previousLocationTwo = [touchTwo previousLocationInView:[touchTwo view]];

                CGFloat currentDistance = sqrt(
                    pow(touchLocationOne.x - touchLocationTwo.x, 2.0f) +
                    pow(touchLocationOne.y - touchLocationTwo.y, 2.0f));
                CGFloat previousDistance = sqrt(
                    pow(previousLocationOne.x - previousLocationTwo.x, 2.0f) +
                    pow(previousLocationOne.y - previousLocationTwo.y, 2.0f));
                CGFloat distanceDelta = currentDistance - previousDistance;
                CGPoint pinchCenter = ccpMidpoint(touchLocationOne, touchLocationTwo);
                pinchCenter = [self convertToNodeSpace:pinchCenter];

                CGFloat predictionScale = self.scale + (distanceDelta * PINCH_ZOOM_MULTIPLIER);
                if ([self predictionScaleInBounds:predictionScale]) {
                    [self scale:predictionScale scaleCenter:pinchCenter];
                }
            } else {
                // Dragging
                _isDragging = YES;
                CGPoint previous = [[CCDirector sharedDirector] convertToGL:previousLocationOne];
                CGPoint current = [[CCDirector sharedDirector] convertToGL:touchLocationOne];
                CGPoint delta = ccpSub(current, previous);
                self.position = ccpAdd(self.position, delta);
            }
        }

        - (void)touchEnded:(UITouch *)touch withEvent:(UIEvent *)event {
            _isDragging = NO;
            _isTouchBegin = NO;
            // Check if i need to bounce
            _touchLoc = [touch locationInNode:self];
        }

        #pragma mark - Update

        - (void)update:(CCTime)delta {
            CGPoint position = self.position;
            float scale = self.scale;
            static float friction = 0.92f; //0.96f;

            if (_isDragging && !_isScaleBounce) {
                _velocity = ccp((position.x - _lastPos.x) / 2, (position.y - _lastPos.y) / 2);
                _lastPos = position;
            } else {
                _velocity = ccp(_velocity.x * friction, _velocity.y * friction);
                position = ccpAdd(position, _velocity);
                self.position = position;
            }

            if (_isScaleBounce && !_isTouchBegin) {
                float min = fabsf(self.scale - MIN_SCALE);
                float max = fabsf(self.scale - MAX_SCALE);
                int dif = max > min ? 1 : -1;
                if ((scale > MAX_SCALE - SCALE_BOUNCE_AREA) ||
                    (scale < MIN_SCALE + SCALE_BOUNCE_AREA)) {
                    CGFloat newSscale = scale + dif * (delta * friction);
                    [self scale:newSscale scaleCenter:_touchLoc];
                } else {
                    _isScaleBounce = NO;
                }
            }
        }
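
    A starting point for the clamping (a sketch of the math only; mapWidth, viewWidth, and friends are placeholders for whatever your scene exposes): after every pan or zoom, clamp the layer's position so the screen-sized window cannot leave the map rectangle. For a diamond-shaped isometric map you would clamp against the four corner points instead of a plain rectangle, but the structure is the same. In Java for brevity; it ports line-for-line into the touchMoved: and update: methods above:

        // Clamp one axis of the layer position so the view never leaves the map.
        // With the map's origin at the layer's origin, a scaled layer may move
        // between -(mapSize * scale - viewSize) and 0.
        class CameraClamp {
            static float clampAxis(float pos, float mapSize, float viewSize, float scale) {
                float min = viewSize - mapSize * scale; // most negative allowed offset
                float max = 0f;                         // map origin pinned to screen origin
                return Math.max(min, Math.min(max, pos));
            }
        }

        // usage after applying the drag delta or the new scale:
        // layerX = CameraClamp.clampAxis(layerX, mapWidth,  viewWidth,  scale);
        // layerY = CameraClamp.clampAxis(layerY, mapHeight, viewHeight, scale);

    Applying the clamp inside update: as well keeps the inertia/bounce phase from coasting past the edges after the finger lifts.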

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2D game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action RPG using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into SpriteBatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axes are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into SpriteBatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal, the perspective of the physics engine didn't match the simplistic way I was converting the character's aim direction to a screen rotation.

    Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix that I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is: is there a way to get not just rotation from a series of matrix transformations, but also a Vector2 scale which would give the aimer the appearance of being a 3D object, warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south, because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left.

    I'd like to avoid actually drawing the aimer texture in 3D because I'm still using SpriteBatch's layerDepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3D object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on Stack Exchange. Thanks a lot for reading!

    Note: (I think) I realize it can't be a technically correct 3D perspective, because SpriteBatch's Vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is: is there a good way to fake the effect, or should I just drop it and not scale at all?

    Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane), which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
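
    One way to fake it (a sketch, with the caveat the note above already raises: a rotation plus a Vector2 scale cannot reproduce the true skew): project both the aim direction and its ground-plane perpendicular through the same 0.707 foreshortening used for positions, then take the rotation from the projected forward vector and the two scale factors from the projected lengths. With a normalized aim direction this gives length 1.0 sideways and 0.707 facing north/south, and width 0.707 sideways and 1.0 north/south, matching the behaviour described above. In Java, with SpriteBatch-style conventions assumed:

        // Approximate rotation and (length, width) scale for a ground-plane arrow
        // under a 45-degree orthographic tilt (y/z foreshortened by cos 45 ~= .707).
        class AimerMath {
            // dirX, dirZ: normalized aim direction on the ground (XZ) plane.
            static float[] aimerTransform(float dirX, float dirZ) {
                final float f = 0.707f;

                // Forward direction projected to screen space.
                float fx = dirX, fy = dirZ * f;
                float rotation = (float) Math.atan2(fy, fx);          // radians, for the draw call
                float lengthScale = (float) Math.sqrt(fx * fx + fy * fy);

                // Perpendicular (width) direction, projected the same way.
                float px = -dirZ, py = dirX * f;
                float widthScale = (float) Math.sqrt(px * px + py * py);

                return new float[] { rotation, lengthScale, widthScale };
            }
        }

    It won't skew the arrowhead correctly on diagonals, but since both the rotation and the scales come from the same projection the arrow stays visually glued to the projectile's actual path.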

    Read the article

  • Why is my arrow texture being drawn in odd places?

    - by tyjkenn
    This is a script I wrote that places an arrow on the screen pointing to an enemy off-screen, or, if the enemy is on-screen, places an arrow hovering above the enemy. Everything seems to work, except that for some odd reason I see random arrows floating around, often skewed and resized (which I really don't understand, because I only rotate and place in this script). Even when I only have one enemy in the scene, I still see these random arrows. It should only be drawing one per enemy. Note: when all enemies are removed, no arrows appear.

        var arrow : Texture;
        var cam : Camera;
        var dim : int = 30;

        function OnGUI() {
            var objects = GameObject.FindGameObjectsWithTag("Enemy");
            for (var ob : GameObject in objects) {
                var pos = cam.WorldToViewportPoint(ob.transform.position);
                if (gameObject.GetComponent(FollowCamera).target != null) {
                    var tar = gameObject.GetComponent(FollowCamera).target.parent;
                }
                if (pos.z > 1 && ob.transform != tar) {
                    var xDiff = (pos.x * cam.pixelWidth) - (cam.pixelWidth / 2);
                    var yDiff = (pos.y * cam.pixelHeight) - (cam.pixelHeight / 2);
                    var angle = Mathf.Rad2Deg * Mathf.Atan(yDiff / xDiff) + 180;
                    if (xDiff > 0) angle += 180;
                    var dist = Mathf.Sqrt(xDiff * xDiff + yDiff * yDiff);
                    var slope = yDiff / xDiff;
                    var camSlope = cam.pixelHeight / cam.pixelWidth;
                    var theX = -1000.0;
                    var theY = -1000.0;
                    var mult = 0;
                    var temp;
                    if (Mathf.Abs(xDiff) > (cam.pixelWidth / 2) || Mathf.Abs(yDiff) > (cam.pixelHeight / 2)) {
                        // touching right
                        if (slope < camSlope && slope > -camSlope) {
                            if (xDiff > (cam.pixelWidth / 2)) {
                                theX = cam.pixelWidth - (dim / 2);
                                mult = -1;
                            } else if (xDiff < -(cam.pixelWidth / 2)) {
                                theX = (dim / 2);
                                mult = 1;
                            }
                            temp = ((cam.pixelWidth / 2) * yDiff) / xDiff;
                            theY = (cam.pixelHeight / 2) + (mult * temp);
                        } else {
                            if (yDiff > (cam.pixelHeight / 2)) {
                                theY = (dim / 2);
                                mult = 1;
                            } else if (yDiff < -(cam.pixelHeight / 2)) {
                                theY = cam.pixelHeight - (dim / 2);
                                mult = -1;
                            }
                            temp = ((cam.pixelHeight / 2) * xDiff) / yDiff;
                            theX = (cam.pixelWidth / 2) + (mult * temp);
                        }
                    } else {
                        angle = -90;
                        theX = (cam.pixelWidth / 2) + xDiff;
                        theY = (cam.pixelHeight / 2) - yDiff - dim;
                    }
                    GUIUtility.RotateAroundPivot(-angle, Vector2(theX, theY));
                    Graphics.DrawTexture(Rect(theX - (dim / 2), theY - (dim / 2), dim, dim), arrow, null);
                    GUIUtility.RotateAroundPivot(angle, Vector2(theX, theY));
                }
            }
        }

    Read the article

  • Lenovo W520 back usb port not working

    - by jaudette
    The USB port in the back of my laptop (on the right side when viewed from the user's perspective) is not working. Does anybody know if we can get this port working, and what the port number is? Here is my lsusb output, if it can help:

        % lsusb
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 003 Device 002: ID 046d:c01e Logitech, Inc. MX518 Optical Mouse
        Bus 003 Device 003: ID 046d:c318 Logitech, Inc. Illuminated Keyboard
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 0765:5001 X-Rite, Inc. Huey PRO Colorimeter
        Bus 001 Device 004: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
        Bus 001 Device 005: ID 0a5c:217f Broadcom Corp. Bluetooth Controller
        Bus 001 Device 006: ID 04f2:b217 Chicony Electronics Co., Ltd Lenovo Integrated Camera (0.3MP)
        Bus 002 Device 003: ID 17ef:1003 Lenovo Integrated Smart Card Reader

    I am running 12.10, upgraded from 12.04, but it did not work in 12.04 either. The two USB ports on the left work just fine.

    EDIT: I just updated my BIOS from 1.32 to 1.39; no change in behaviour. The port does not even power up my devices.

    EDIT 2: I booted up Windows, and the port is working. I went into the device manager and looked at the USB settings. I found my USB drive on Port 2, Hub 3; I just don't know how that relates to the Bus and Device numbers in Linux. In Windows, the smart card reader was on the USB hub located at port 1, hub 2, and the fingerprint reader and Bluetooth were on the USB hub located at port 1, hub 1.

    EDIT 3: Went and looked at this SO post. Tried to look at my kern.log file with tail -f /var/log/kern.log. Got some activity when plugging/removing devices in other ports, but nothing happens when connecting a device into that port. It really looks disabled. Looked at my /sys/bus/usb/devices/usb1-4/power/control files and they are all set to auto. As expected, usb4 has version 3.00; the others (usb1-usb3) are 2.00.

    Read the article

  • Facial Recognition for Retail

    - by David Dorf
    My son decided to do his science project on how the brain recognizes faces. Faces are so complicated and important that the brain has a dedicated area for just that purpose. During our research, we came across some emerging uses for facial recognition in the retail industry.

    If you believe the movies, recognizing faces as they walk by a camera is easy for computers, but that's not the reality. Huge investments are being made by the U.S. government in this area, with a focus on airport security. Now, companies like Eye See are leveraging that research for marketing purposes. They do things like track eyes while people view newspaper ads to see which ads get more "eye time." This can help marketers make better placement and color decisions.

    But what caught my eye (that was too easy) was their new mannequins that watch shoppers. These mannequins, being tested at European retailers like Benetton, watch shoppers who walk by and identify their gender, race, and age. This helps the retailer better understand the types of customers being attracted to the outfit on the mannequin. Of course, to be most accurate, the software has pictures of the employees so they can be filtered out. Since the mannequins are closer to the shoppers and at eye level, they are more accurate than traditional in-ceiling LP cameras.

    Marketing agency RedPepper is offering retailers the ability to recognize loyalty shoppers at their doors using Facedeal. For customers that have opted into the program, when they enter the store their face is recognized and they are checked in. Then, as a reward, they are sent an offer on their smartphone.

    It won't be long before retailers begin to listen to shoppers as they walk the aisles; then keywords can be collected and aggregated to give the retailer an idea of what people are saying about their stores and products. Sentiment analysis based on what's said, or even on facial expressions, can't be far off.

    Clearly retailers need to be cautious and respect customer privacy. That's why these technologies are emerging slowly. But since the next generation of shoppers is less concerned about privacy, I expect these technologies to appear sporadically in the next five years, then go mainstream. Time will tell.

    Read the article

  • Oracle Brings Java to iOS Devices (and Android too)

    - by Shay Shmeltzer
    Java developer, did you ever wish that you could take your Java skills and apply them to building applications for iOS mobile devices? Well, now you can! With the new Oracle ADF Mobile solution, Oracle has created a unique technology that allows developers to use the Java language and develop applications that install and run on both iOS and Android mobile devices.

    The solution is based on a thin native container that installs as part of your application. The container is able to run the same application you develop, unchanged, on both Android and iOS devices. One part of the container is a headless, lightweight JVM based on the Java ME CDC technology. This allows the execution of Java code on your mobile device. Java is used for building business logic, accessing a local SQLite encrypted database, and invoking and interacting with remote services.

    Java concepts on the UI too: To further help transition Java developers to mobile developers, ADF Mobile borrows familiar concepts from the world of JSF to make the UI development experience simpler. The user interface layer of Oracle ADF Mobile is rendered with HTML5, which delivers a native user experience on the devices, including animations and gesture support. Using a set of rich components, developers can create mobile pages without needing to write low-level HTML5 and JavaScript code. The components cover everything from simple controls such as text fields, date pickers, buttons, and links, to advanced data visualization components such as graphs, gauges, and maps, including unique mobile UI patterns such as lists and toggle selectors. Want to see the components in action? Access this demo instance from your mobile device. Need to further customize the look and feel? You can use CSS3 to achieve this.

    A controller layer - similar in functionality to the JSF controller - allows developers to simplify the way they build navigation between pages. The logic behind the pages is written in managed beans with various scopes - again, similar to the JSF approach.

    Need to interact with device features like the camera, SMS, contacts, etc.? Oracle conveniently packaged access to these features in a set of services that you can just drag and drop into your pages as buttons and links, or invoke from your managed beans with Java calls. Underneath the covers this layer is implemented using the open source PhoneGap solution.

    With the new Oracle ADF Mobile solution, transferring your Java skills into the mobile world has become much easier. Check out this development experience demo. And then go and download JDeveloper and the ADF Mobile extension and try it out on your own. For more on ADF Mobile, see the ADF Mobile OTN page.

    Read the article

  • How do I create weapon attachments?

    - by Tron86
    My question is this: I am developing a game for XNA and I am trying to create a weapon attachment for my player model. My player model loads the .md3 format and reads tags for attachment points. I am able to get the tag of my model's hand, and I am also able to get the tag of my weapon's handle. From each tag I can get the rotation and position, and this is how I am calculating the model's world matrix:

        Model.worldMatrix = Matrix.CreateScale(Model.scale) *
                            Matrix.CreateRotationX(-MathHelper.PiOver2) *
                            Matrix.CreateRotationY(MathHelper.PiOver2);

    Pretty simple: the player model has a scale and an orientation (it loads on its side, so I just use a 90 degree X axis rotation, and a Y axis rotation to face away from the camera). I then calculate the torso tag on the lower body, which gives me a local coordinate at the waist. Then I take that matrix and calculate the tag_weapon in the upper body. This gives me the hand position in local space. I also get the rotation matrix from that tag, which I store for later use. All this seems to work fine.

    Now I move on to my weapon:

        Matrix weaponWorld = Matrix.CreateScale(CurrentWeapon.scale) *
                             Matrix.CreateRotationX(-MathHelper.PiOver2) *
                             TagRotationMatrix *
                             Matrix.CreateTranslation(HandTag.Position) *
                             Matrix.CreateRotationY(PlayerRotation) *
                             Matrix.CreateTranslation(CollisionBody.Position);

    You may notice the weapon matrix gets rotated by 90 degrees on the X axis as well. This is because the weapons load in on their sides. Once again this seems pretty simple and follows the SRT order I keep reading about. My TagRotationMatrix is the hand's rotation. HandTag.Position is its position in local space. CreateRotationY(PlayerRotation) is the player's rotation in world space, and CollisionBody.Position is the player's world location.

    Everything seems to be in order, and it almost works in game. However, when the gun spawns and follows the player's hand, it seems to be flipped on an axis every couple of frames, almost like the X or Y axis is being inverted and then put right back. It's hard to explain, and I am totally stumped. Even removing all my X axis fixes does nothing to solve the problem. Hopefully I explained everything well enough, as I am a bit new to this! Thanks!
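
    A hedged guess based on the symptom: a flip every couple of frames is the classic signature of interpolating between two keyframe orientations that sit on opposite quaternion hemispheres (q and -q encode the same rotation, but blending across them inverts the basis mid-way). If the tag rotations are converted to quaternions and blended between animation frames anywhere in the pipeline, it may be worth forcing both onto the same hemisphere first. A Java sketch with plain float[4] quaternions in (x, y, z, w) order; the method name is illustrative:

        // Before blending two keyframe orientations, force them onto the same
        // quaternion hemisphere, then normalized-lerp between them.
        class QuatBlend {
            static float[] alignAndLerp(float[] qPrev, float[] qCur, float t) {
                float dot = qPrev[0]*qCur[0] + qPrev[1]*qCur[1]
                          + qPrev[2]*qCur[2] + qPrev[3]*qCur[3];
                float sign = (dot < 0f) ? -1f : 1f;  // negate one side if hemispheres differ
                float[] out = new float[4];
                float len = 0f;
                for (int i = 0; i < 4; i++) {
                    out[i] = qPrev[i] * (1 - t) + sign * qCur[i] * t;
                    len += out[i] * out[i];
                }
                len = (float) Math.sqrt(len);
                for (int i = 0; i < 4; i++) out[i] /= len; // renormalize after the lerp
                return out;
            }
        }

    In XNA terms the same guard is a Quaternion.Dot check followed by negating one quaternion before Quaternion.Slerp; if TagRotationMatrix is built directly from raw interpolated matrix tags instead, converting to quaternions for the blend avoids the flip entirely.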

    Read the article

  • ADF Mobile Released!!

    - by Denis T
    We are pleased to announce the general availability of the newest version of Oracle's ADF Mobile framework. This new framework provides the much anticipated on-device capabilities that the latest mobile applications require.

    Feature Highlights

    Java - Oracle brings a Java VM embedded with each application, so you can develop all your business logic in the platform-neutral language you know and love! (Yes, even on iOS!)

    JDBC - Since we give you Java, we also provide JDBC along with a SQLite driver and engine that supports encryption out of the box.

    Multi-Platform - Truly develop your application only once and deploy to multiple platforms. iOS and Android platforms are supported for both phone and tablet.

    Flexible - You can decide how to implement the UI: (a) use an existing server-based UI framework like JSF; (b) use your own favorite HTML5 framework like jQuery; (c) use our declarative HTML5 component set provided with the framework. ADF Mobile XML, or AMX for short, provides all the normal input and layout controls you expect, and we also add charts, maps, and gauges to provide a very comprehensive set of UI controls. You can also mix and match any of the three for ultimate flexibility!

    Device Feature Access - You can get access to device features from either Java or JavaScript to invoke features like the camera, GPS, email, SMS, contacts, etc.

    Secure - ADF Mobile provides integrated security that works with your server back-end as well. Whether you're using remote URLs, local HTML, or AMX, you can secure any or all of your features with a single consistent login page. Since we also give you SQLite encryption, you can be assured that your data is safe.

    Rapid - Using the same development techniques that ADF developers are already used to, you can quickly create mobile applications without ever learning another language!

    Architecture

    ADF Mobile is a "hybrid" architecture that employs a natively built "container" on each platform that hosts a number of browser windows used to display the application content. We add the Java VM as a natively built library to the container for business logic.

    How To Get Started

    ADF Mobile is an extension to the recently released JDeveloper version 11.1.2.3.0. Simply get the latest JDeveloper from the Oracle Technology Network and use the Check for Updates feature to get the ADF Mobile extension. Note: ADF Mobile does not require developers to learn any other languages or frameworks, but to build/deploy to iOS you must be on an Apple Macintosh™ and have Xcode installed. To build/deploy to Android™ you must have the Android SDK installed.

    Read the article

  • Windows Phone 7, taking the mobile out of mobile

    - by markritchie
    I’ll start off this blog entry with a little background, which is necessary for you to understand my situation. I moved from the UK to Germany about three years ago, and while I can converse in German, English is still very much my first language and the language I want my content delivered in. I happily purchased an HTC HD7 when WP7 was released here in Germany, foolishly thinking that Microsoft really would get internationalization and localization, as it's at the heart of their business and developer products. Overall I'm very happy with my WP7; the camera sucks, but that's more HTC's fault than Microsoft's. Other than that, it's a very promising platform.

    My problems started when I purchased my first app. Initially everything appeared to be fine, and things were as smooth as they had been with my previous free applications. However, about a month later I received an email from Zune informing me that the credit card registered against my account had expired. No problem, I foolishly thought; I'll simply add a new one. I don't want to bore you with the details of what I've been through, other than to give the low-lights: trying numerous websites, posting on Microsoft Answers, and a tweet to Microsoft Support, all to no avail.

    Today somebody suggested that I call the Xbox support line in the UK, as they had solved their billing problems this way. So I called up the Xbox support line, since I hadn't managed to resolve the problem with my WP7 through the Zune portal (anybody else think Microsoft might need some consolidation here?). After being on the phone with a very friendly representative in Ireland, I was informed that because my British credit card has a billing address in Germany, they are unable to accept it. Because my Live ID and Zune Tag are registered in the UK, they cannot take my card. Their solution is that I have to create a new Zune Tag with its locale in Germany and then associate it with my phone.

    Now, the moment I go to register for a Zune Tag with a German locale, it very kindly switches into German for me; there's nothing quite like reading a EULA in a language you're not fluent in. I've no idea how this will affect the apps that are available to me in the app store, and I'm pretty sure it means that all my Xbox Live achievements will become history. And what if I were to move to, say, France at some point -- do I have to go through all this again?

    At the end of the day, I'm trying to set something up so I can give them my money, and they're making it VERY difficult for me. Could you imagine walking into a book store while on holiday and being told by the clerk that they couldn't take your credit card because you came from another country?

    Read the article

  • Want to make RTS strategy game, good starting point...?

    - by NOLANDI
    Hello everybody! I found this community while doing research about game development topics, so I decided to register and ask for your opinion. I am not a beginner in computers, and I know the logic of programming languages. I have a little knowledge of C and C++ and have started to learn C#, but I have always been oriented toward other areas of IT. An amateur at game making, but not at IT in general!

    I am interested in making an RTS strategy video game with an isometric camera view (like the PC game Commandos, for example), so I want to know a good starting point for making this game genre. I want to use C# for this project, not C++, since I have always had trouble with C++ because it is too complicated. I see a good opportunity to learn C#, since it is more object-oriented, with much less code and more readable code. Besides that, I know that C++ is still the more powerful language today. So I want to use DirectX and C# for my project, though it will surely require more time to learn.

    I have trouble finding books on C# and DirectX (I found just one), and in every other book the programming is in C++! I have read many web sites that mention SlimDX, and in general the articles I read say that it is possible to make a good game with a combination of C# and DirectX, but I don't have enough information about all this, so I became confused! It is hard to motivate myself to learn C++ from scratch, since I have started C# and really like it. I haven't tried OpenGL, but I am already familiar with DirectX (some beginner material), and I think it is good and interesting.

    So I decided to choose between Unity3D and XNA first. XNA does not sound like a bad solution, but I have heard that it is already an outdated technology, and I think it is better to hold off on XNA until I try to make something with DirectX and C# first. But I don't know whether it is even possible to make a strategy game in XNA?! Maybe I am wrong. I also found the Unity3D game engine, where I could program in C#, but most of the examples are written in JavaScript (the Digital Tutors videos). I generally want to learn it too, because it allows exporting games to mobile phones, there is a lot of documentation, it is free, etc.

    I want to find the best way to learn C# and DirectX and make this game, but I don't know if that combination is practical. Maybe it is better to first try Unity and XNA, because I am a beginner?! Thank you all! I am willing to hear about your experience.

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >