Search Results

Search found 11004 results on 441 pages for 'core animation'.


  • Deploying an EAR to JBOSS times out (org.rhq.core.pc.inventory.TimeoutException:)

    - by rangalo
    Hi, I am trying to deploy an EAR file to JBoss AS (default server). The application is the mavenised version of the examples from the Seam in Action book. When I copy the file to $JBOSS_HOME/server/default/deploy, I don't get any exception, but the application doesn't respond, and after some time trying to access the application from the browser produces the log output below. While deploying with the admin-console (http://localhost:8080/admin-console) I get the following error message. PS: After this JBoss gets into an unusable state; I cannot even access the admin-console. I just have to kill it.

    Error message in admin-console:

    Failed to create Resource Open18.ear - cause: org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ApplicationServerComponent.createResource()] with args [[CreateResourceReport: ResourceType=[ResourceType[id=0, category=Service, name=Enterprise Application (EAR), plugin=JBossAS5]], ResourceKey=[null]]] timed out. Invocation thread will be interrupted
        at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invokeInNewThreadWithLock(ResourceContainer.java:437)
        at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invoke(ResourceContainer.java:406)
        at $Proxy266.createResource(Unknown Source)
        at org.rhq.core.pc.inventory.CreateResourceRunner.call(CreateResourceRunner.java:113)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

    Error logs:

    14:08:58,555 INFO [TableMetadata] foreign keys: [fkaf42e01ba13c3380, fk_course_ref_facility]
    14:08:58,555 INFO [TableMetadata] indexes: [course_pkey]
    14:08:58,645 INFO [TableMetadata] table found: public.facility
    14:08:58,645 INFO [TableMetadata] columns: [zip, phone, state, type, uri, city, country, id, price_range, address, county, description, name]
    14:08:58,645 INFO [TableMetadata] foreign keys: []
    14:08:58,645 INFO [TableMetadata] indexes: [facility_pkey]
    14:08:58,705 INFO [TableMetadata] table found: public.hole
    14:08:58,705 INFO [TableMetadata] columns: [id, m_par, l_handicap, name, l_par, number, course_id, m_handicap]
    14:08:58,705 INFO [TableMetadata] foreign keys: [fk_hole_ref_course, fk30f4c09c3f1200]
    14:08:58,705 INFO [TableMetadata] indexes: [hole_pkey, uniq_hole_number]
    14:08:58,764 INFO [TableMetadata] table found: public.tee
    14:08:58,764 INFO [TableMetadata] columns: [hole_id, distance, tee_set_id]
    14:08:58,764 INFO [TableMetadata] foreign keys: [fk1c014f8de7677, fk_tee_ref_hole, fk1c014c69de560, fk_tee_ref_tee_set]
    14:08:58,764 INFO [TableMetadata] indexes: [tee_pkey]
    14:08:58,826 INFO [TableMetadata] table found: public.tee_set
    14:08:58,826 INFO [TableMetadata] columns: [id, color, m_slope_rating, l_slope_rating, name, course_id, m_course_rating, l_course_rating, pos]
    14:08:58,826 INFO [TableMetadata] foreign keys: [fk_tee_set_ref_course, fkaa6881b79c3f1200]
    14:08:58,826 INFO [TableMetadata] indexes: [tee_set_pkey, uniq_tee_set_pos, uniq_tee_set_color]
    14:08:58,827 INFO [SchemaUpdate] schema update complete
    14:08:58,829 INFO [NamingHelper] JNDI InitialContext properties:{java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory, java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces}
    14:08:58,850 INFO [TomcatDeployment] deploy, ctxPath=/Open18
    14:15:53,969 WARN [DiscoveryComponentProxyFactory] The discovery component for resource type [ResourceType[id=0, category=Service, name=Connector, plugin=JBossAS5]] has been blacklisted
    14:15:53,970 WARN [InventoryManager] Failure during discovery for [Connector] Resources - failed after 300002 ms.
    org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ConnectorDiscoveryComponent.discoverResources()] with args [[org.rhq.core.pluginapi.inventory.ResourceDiscoveryContext@96db1]] timed out. Invocation thread will be interrupted
        at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invokeInNewThread(DiscoveryComponentProxyFactory.java:208)
        at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invoke(DiscoveryComponentProxyFactory.java:181)
        at $Proxy249.discoverResources(Unknown Source)
        at org.rhq.core.pc.inventory.InventoryManager.invokeDiscoveryComponent(InventoryManager.java:272)
        at org.rhq.core.pc.inventory.InventoryManager.executeComponentDiscovery(InventoryManager.java:1697)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:218)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:234)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.runtimeDiscover(RuntimeDiscoveryExecutor.java:134)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:94)
        at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:51)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
    14:15:53,981 WARN [NavigationContent] Unable to find node for deleted resource [Resource[id=-5, type=Connector, key=ajp://127.0.0.1:8009, name=ajp://127.0.0.1:8009, parent=JBoss Web]].

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I'm writing a cross-platform OpenGL engine in C++. I figured out that Windows forces developers to access OpenGL features above version 1.1 through extensions. On Linux, I know I can call functions directly if the implementation's version supports them, via glext.h and the reported OpenGL version. The problem: if the core version on Linux doesn't support a feature, is it possible that an extension provides the same functionality, in my case vertex buffer objects? I'm doing something like this. On Windows: #define glFunction functionpointer_to_the_extension. On Linux: since glext.h already declares glFunction, client code can call glFunction and compile on both Windows AND Linux without changing a single line of the client code using the engine (my goal). Now, I saw a tutorial that used only the extension on Linux, without checking the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (VBOs, for example)? Or is an extension something you never know is available? I want to write an engine that uses everything the hardware offers, so I need to check (on Linux) for extensions as well as the core version for possible functionality.
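    For the VBO case specifically, the feature was promoted to core in OpenGL 1.5 from GL_ARB_vertex_buffer_object, so checking the version first and falling back to the ARB entry points covers both paths. A minimal sketch of that startup check (an illustration, not from the question; it assumes a context is already current, and the getProc wrapper name is invented):

    #include <cstdio>
    #include <cstring>
    #if defined(_WIN32)
      #include <windows.h>
    #endif
    #include <GL/gl.h>
    #include <GL/glext.h>
    #if !defined(_WIN32)
      #include <GL/glx.h>
    #endif

    typedef void (*GLProc)();

    static GLProc getProc(const char* name)
    {
    #if defined(_WIN32)
        return (GLProc)wglGetProcAddress(name);                   // Windows extension loader
    #else
        return (GLProc)glXGetProcAddress((const GLubyte*)name);   // X11/Linux loader
    #endif
    }

    static PFNGLGENBUFFERSPROC pglGenBuffers = 0;  // ARB variant has the same signature

    // Returns true if VBOs are usable, whether from core (GL >= 1.5) or the ARB extension.
    static bool initVBO()
    {
        int major = 0, minor = 0;
        sscanf((const char*)glGetString(GL_VERSION), "%d.%d", &major, &minor);
        if (major > 1 || (major == 1 && minor >= 5)) {
            pglGenBuffers = (PFNGLGENBUFFERSPROC)getProc("glGenBuffers");        // core name
        } else {
            const char* ext = (const char*)glGetString(GL_EXTENSIONS);
            if (ext && strstr(ext, "GL_ARB_vertex_buffer_object"))
                pglGenBuffers = (PFNGLGENBUFFERSPROC)getProc("glGenBuffersARB"); // extension name
        }
        return pglGenBuffers != 0;
    }

    The same pattern applies to the other VBO entry points (glBindBuffer, glBufferData, and so on). For features promoted to core from an ARB extension the semantics are essentially identical, but not every core feature has an extension equivalent, so checking both, as above, is the safe route for an engine that wants everything the hardware offers.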

    Read the article

  • C++ FBX Animation Importer Using the FBX SDK

    - by Mike Sawayda
    Does anyone have any experience using the FBX SDK to load animations? I have the meshes loading correctly with all of their verts, indices, UVs, and normals. I am now trying to get the animations working. I have looked at the FBX SDK documentation with little help. If someone could help me get started or point me in the right direction I would greatly appreciate it. I added some code so you can get an idea of what I am doing; I should be able to place it anywhere in the LoadFBX function and have it work.

    // GETTING ANIMATION DATA
    for(int i = 0; i < scene->GetSrcObjectCount<FbxAnimStack>(); ++i)
    {
        FbxAnimStack* lAnimStack = scene->GetSrcObject<FbxAnimStack>(i);
        FbxString stackName = "Animation Stack Name: ";
        stackName += lAnimStack->GetName();
        string sStackName = stackName;
        int numLayers = lAnimStack->GetMemberCount<FbxAnimLayer>();
        for(int j = 0; j < numLayers; ++j)
        {
            FbxAnimLayer* lAnimLayer = lAnimStack->GetMember<FbxAnimLayer>(j);
            FbxString layerName = "Animation Stack Name: ";
            layerName += lAnimLayer->GetName();
            string sLayerName = layerName;

            queue<FbxNode*> nodes;
            FbxNode* tempNode = scene->GetRootNode();
            while(tempNode != NULL)
            {
                FbxAnimCurve* lAnimCurve = tempNode->LclTranslation.GetCurve(lAnimLayer, FBXSDK_CURVENODE_COMPONENT_X);
                if(lAnimCurve != NULL)
                {
                    // I know something needs to be done here but I don't know what.
                }
                for(int i = 0; i < tempNode->GetChildCount(false); ++i)
                {
                    nodes.push(tempNode->GetChild(i));
                }
                if(nodes.size() > 0)
                {
                    tempNode = nodes.front();
                    nodes.pop();
                }
                else
                {
                    tempNode = NULL;
                }
            }
        }
    }

    Here is the full function:

    bool FBXLoader::LoadFBX(ParentMeshObject* _parentMesh, char* _filePath, bool _hasTexture)
    {
        FbxManager* fbxManager = FbxManager::Create();
        if(!fbxManager)
        {
            printf("ERROR %s : %d failed creating FBX Manager!\n", __FILE__, __LINE__);
        }

        FbxIOSettings* ioSettings = FbxIOSettings::Create(fbxManager, IOSROOT);
        fbxManager->SetIOSettings(ioSettings);

        FbxString filePath = FbxGetApplicationDirectory();
        fbxManager->LoadPluginsDirectory(filePath.Buffer());

        FbxScene* scene = FbxScene::Create(fbxManager, "");

        int fileMajor, fileMinor, fileRevision;
        int sdkMajor, sdkMinor, sdkRevision;
        int fileFormat;

        FbxManager::GetFileFormatVersion(sdkMajor, sdkMinor, sdkRevision);
        FbxImporter* importer = FbxImporter::Create(fbxManager, "");

        if(!fbxManager->GetIOPluginRegistry()->DetectReaderFileFormat(_filePath, fileFormat))
        {
            // Unrecognizable file format. Try to fall back on FbxImporter::eFBX_BINARY
            fileFormat = fbxManager->GetIOPluginRegistry()->FindReaderIDByDescription("FBX binary (*.fbx)");
        }

        bool importStatus = importer->Initialize(_filePath, fileFormat, fbxManager->GetIOSettings());
        importer->GetFileVersion(fileMajor, fileMinor, fileRevision);

        if(!importStatus)
        {
            printf("ERROR %s : %d FbxImporter Initialize failed!\n", __FILE__, __LINE__);
            return false;
        }

        importStatus = importer->Import(scene);
        if(!importStatus)
        {
            printf("ERROR %s : %d FbxImporter failed to import the file to the scene!\n", __FILE__, __LINE__);
            return false;
        }

        FbxAxisSystem sceneAxisSystem = scene->GetGlobalSettings().GetAxisSystem();
        FbxAxisSystem axisSystem(FbxAxisSystem::eYAxis, FbxAxisSystem::eParityOdd, FbxAxisSystem::eLeftHanded);
        if(sceneAxisSystem != axisSystem)
        {
            axisSystem.ConvertScene(scene);
        }

        TriangulateRecursive(scene->GetRootNode());

        FbxArray<FbxMesh*> meshes;
        FillMeshArray(scene, meshes);

        unsigned short vertexCount = 0;
        unsigned short triangleCount = 0;
        unsigned short faceCount = 0;
        unsigned short materialCount = 0;

        int numberOfVertices = 0;
        for(int i = 0; i < meshes.GetCount(); ++i)
        {
            numberOfVertices += meshes[i]->GetPolygonVertexCount();
        }

        Face face;
        vector<Face> faces;
        int indicesCount = 0;
        int ptrMove = 0;
        float wValue = 0.0f;
        if(!_hasTexture)
        {
            wValue = 1.0f;
        }

        for(int i = 0; i < meshes.GetCount(); ++i)
        {
            int vertexCount = 0;
            vertexCount = meshes[i]->GetControlPointsCount();
            if(vertexCount == 0) continue;

            VertexType* vertices;
            vertices = new VertexType[vertexCount];
            int triangleCount = meshes[i]->GetPolygonVertexCount() / 3;
            indicesCount = meshes[i]->GetPolygonVertexCount();

            FbxVector4* fbxVerts = new FbxVector4[vertexCount];
            int arrayIndex = 0;
            memcpy(fbxVerts, meshes[i]->GetControlPoints(), vertexCount * sizeof(FbxVector4));

            for(int j = 0; j < triangleCount; ++j)
            {
                int index = 0;
                FbxVector4 fbxNorm(0, 0, 0, 0);
                FbxVector2 fbxUV(0, 0);
                bool texCoordFound = false;

                face.indices[0] = index = meshes[i]->GetPolygonVertex(j, 0);
                vertices[index].position.x = (float)fbxVerts[index][0];
                vertices[index].position.y = (float)fbxVerts[index][1];
                vertices[index].position.z = (float)fbxVerts[index][2];
                vertices[index].position.w = wValue;
                meshes[i]->GetPolygonVertexNormal(j, 0, fbxNorm);
                vertices[index].normal.x = (float)fbxNorm[0];
                vertices[index].normal.y = (float)fbxNorm[1];
                vertices[index].normal.z = (float)fbxNorm[2];
                texCoordFound = meshes[i]->GetPolygonVertexUV(j, 0, "map1", fbxUV);
                vertices[index].texture.x = (float)fbxUV[0];
                vertices[index].texture.y = (float)fbxUV[1];

                face.indices[1] = index = meshes[i]->GetPolygonVertex(j, 1);
                vertices[index].position.x = (float)fbxVerts[index][0];
                vertices[index].position.y = (float)fbxVerts[index][1];
                vertices[index].position.z = (float)fbxVerts[index][2];
                vertices[index].position.w = wValue;
                meshes[i]->GetPolygonVertexNormal(j, 1, fbxNorm);
                vertices[index].normal.x = (float)fbxNorm[0];
                vertices[index].normal.y = (float)fbxNorm[1];
                vertices[index].normal.z = (float)fbxNorm[2];
                texCoordFound = meshes[i]->GetPolygonVertexUV(j, 1, "map1", fbxUV);
                vertices[index].texture.x = (float)fbxUV[0];
                vertices[index].texture.y = (float)fbxUV[1];

                face.indices[2] = index = meshes[i]->GetPolygonVertex(j, 2);
                vertices[index].position.x = (float)fbxVerts[index][0];
                vertices[index].position.y = (float)fbxVerts[index][1];
                vertices[index].position.z = (float)fbxVerts[index][2];
                vertices[index].position.w = wValue;
                meshes[i]->GetPolygonVertexNormal(j, 2, fbxNorm);
                vertices[index].normal.x = (float)fbxNorm[0];
                vertices[index].normal.y = (float)fbxNorm[1];
                vertices[index].normal.z = (float)fbxNorm[2];
                texCoordFound = meshes[i]->GetPolygonVertexUV(j, 2, "map1", fbxUV);
                vertices[index].texture.x = (float)fbxUV[0];
                vertices[index].texture.y = (float)fbxUV[1];

                faces.push_back(face);
            }

            meshes[i]->Destroy();
            meshes[i] = NULL;

            int indexCount = faces.size() * 3;
            unsigned long* indices = new unsigned long[faces.size() * 3];
            int indicie = 0;
            for(unsigned int i = 0; i < faces.size(); ++i)
            {
                indices[indicie++] = faces[i].indices[0];
                indices[indicie++] = faces[i].indices[1];
                indices[indicie++] = faces[i].indices[2];
            }
            faces.clear();

            _parentMesh->AddChild(vertices, indices, vertexCount, indexCount);
        }

        return true;
    }
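    For the gap marked in the inner loop, one common approach (a sketch under assumptions, not the only way the SDK supports) is to read the key times off the curve and sample the node's global transform at each key; the commented storeKey helper is hypothetical, the FBX calls are real SDK API:

    if (lAnimCurve != NULL)
    {
        int keyCount = lAnimCurve->KeyGetCount();
        for (int k = 0; k < keyCount; ++k)
        {
            FbxTime keyTime = lAnimCurve->KeyGetTime(k);    // time of key k
            float keyValue = lAnimCurve->KeyGetValue(k);    // raw curve value (X translation here)
            // The node's full global transform at that time, ready to use as a bone matrix:
            FbxAMatrix globalXform = tempNode->EvaluateGlobalTransform(keyTime);
            // storeKey(tempNode, keyTime.GetSecondDouble(), globalXform);  // hypothetical
        }
    }

    Sampling EvaluateGlobalTransform rather than stitching the raw X/Y/Z curves together yourself sidesteps rotation order and pivot handling, at the cost of evaluating the whole node hierarchy per key.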

    Read the article

  • Sprite Animation using cocos2dx 2.0.2

    - by Lalit Chattar
    I am new to game development and learning the cocos2d-x framework. I am trying to implement a sprite animation. I tried many demos and they are all the same, but when I run mine I get an access violation error in my code:

    CCSpriteFrameCache::sharedSpriteFrameCache()->addSpriteFramesWithFile("AnimBear.plist");
    CCSpriteBatchNode *spreetsheet = CCSpriteBatchNode::create("AnimBear.png");
    this->addChild(spreetsheet);

    CCArray *bearArray = new CCArray();
    for(int i = 1; i <= 8; i++)
    {
        char name[32] = {0};
        sprintf(name, "bear%d.png", i);
        bearArray->addObject(CCSpriteFrameCache::sharedSpriteFrameCache()->spriteFrameByName(name));
    }

    CCAnimation *walkAnim = CCAnimation::animationWithSpriteFrames(bearArray, 0.1f);
    CCSize size = CCDirector::sharedDirector()->getWinSize();
    CCSprite *bear = CCSprite::spriteWithSpriteFrameName("bear1.png");
    bear->setPosition(ccp(size.width/2, size.height/2));
    CCAction *walkAction = CCRepeatForever::actionWithAction(CCAnimate::actionWithAnimation(walkAnim));
    bear->runAction(walkAction);
    spreetsheet->addChild(bear);

    The error occurs on the first line, while passing the plist reference. Please help me. I am using Visual Studio 2010 and put both files (png and plist) in the Resource folder.
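    An access violation on that first line usually means the .plist (or the .png it references) was not found or parsed at runtime, so a null pointer gets dereferenced later; that diagnosis is an assumption on my part, but a defensive version of the frame loop that fails loudly instead of crashing looks like this:

    // Sketch: verify each frame exists before adding it, so a missing or
    // mispackaged AnimBear.plist surfaces as a log message, not a crash.
    CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
    cache->addSpriteFramesWithFile("AnimBear.plist");

    CCArray* bearArray = new CCArray();
    for (int i = 1; i <= 8; i++)
    {
        char name[32] = {0};
        sprintf(name, "bear%d.png", i);
        CCSpriteFrame* frame = cache->spriteFrameByName(name);
        if (frame == NULL)
        {
            CCLOG("Missing sprite frame %s: check AnimBear.plist/.png are copied next to the executable", name);
            return;  // assumption: this runs inside a void init helper
        }
        bearArray->addObject(frame);
    }

    On Windows builds the resources must end up in the working directory of the compiled executable, not just the project's Resource folder, so checking where the debugger actually runs the binary from is a good first step.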

    Read the article

  • Technique to have screen independent grid based puzzle with sprite animation

    - by Yan Cheng CHEOK
    Hello all. Let's say I have a fixed-size grid puzzle game (8 x 10). I will be using sprite animation when the "pieces" in the puzzle move from one grid cell to another. I was wondering what technique I should use so that the game is screen-resolution independent. Here is what I plan to do:

    1) The data structure coordinates will be represented using doubles, with 1.0 as the max value (see the mapping sketch after this question).

    // Puzzle grid of 8 x 10
    Environment {
        double width = 0.8;
        double height = 1.0;
    }

    // Location of Sprite at coordinate (1, 1)
    Sprite {
        double posX = 0.1;
        double posY = 0.1;
        double width = 0.1;
        double height = 0.1;
    }

    // scale = PHYSICAL_SCREEN_SIZE
    drawBitmap(
        sprite_image,
        sprite_image_rect,
        new Rect(sprite.posX * Scale, sprite.posY * Scale,
                 (sprite.posX + sprite.width) * Scale, (sprite.posY + sprite.Height) * Scale),
        paint
    );

    2) A large sprite image (128x128) will be used, since a sprite image will look fine scaled down from a large size to a small one, but not vice versa.

    Besides the above technique, is there any other consideration I have missed?
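    As a sanity check on approach (1), here is a minimal sketch of the normalized-to-pixel mapping (written in C++ here; the type and function names are illustrative, not from the question). The key property is that only the scale factor touches the physical resolution, so all game logic stays in [0..1] coordinates:

    struct NormRect { double x, y, w, h; };            // normalized layout rect
    struct PixRect  { int left, top, right, bottom; }; // device pixels

    PixRect toPixels(const NormRect& r, int screenSizePx)
    {
        const double s = screenSizePx;  // PHYSICAL_SCREEN_SIZE in the question
        PixRect out;
        out.left   = (int)(r.x * s);
        out.top    = (int)(r.y * s);
        out.right  = (int)((r.x + r.w) * s);
        out.bottom = (int)((r.y + r.h) * s);
        return out;
    }
    // e.g. the sprite at grid cell (1, 1): toPixels(NormRect{0.1, 0.1, 0.1, 0.1}, screenHeight)

    One consideration the plan does not cover: computing right/bottom from the summed normalized values, as above, avoids the one-pixel gaps you get if width and position are rounded to pixels independently.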

    Read the article

  • Picking Core Language For Large Scale Web Platform

    - by ryanzec
    I have worked with PHP and ASP.NET quite a bit and have also played around with a few other languages for web development. I am now at the point where I need to start building a backend platform that will have the ability to support a large set of applications, and I am trying to figure out which language I want to choose as my core language. By core language I mean the language that the majority of the backend code is going to be in. This is not to say that other languages won't be used, because my guess is that they will, but I want a large majority of the code (90%-98%) to be in one language. While I see the benefit of using the language that is best for each job, having 15% in PHP, 15% in ASP.NET, 5% in Perl, 10% in Python, 15% in Ruby, etc. seems like a very bad idea to me (not to mention that integrating everything seamlessly would probably add a bit of overhead). If you were going to build a large-scale web platform that needs to support multiple applications from scratch, what would you choose as your core language, and why?

    Read the article

  • iPhone SDK 3.2 UIGestureRecognizer interfering with UIView animations?

    - by Brian Cooley
    Are there known issues with gesture recognizers and the UIView class methods for animation? I am having problems with a sequence of animations on a UIImageView started from a UIGestureRecognizer callback. If the sequence of animations is started from a standard callback like TouchUpInside, the animation works fine. If it is started via the UILongPressGestureRecognizer, then the first animation jumps to the end and the second animation immediately begins. Here's a sample that illustrates my problem. In the .xib for the project, I have a UIImageView that is connected to the viewToMove IBOutlet. I also have a UIButton connected to the startButton IBOutlet, and I have connected its TouchUpInside action to the startButtonClicked IBAction. The TouchUpInside action works as I want it to, but the long press gesture recognizer skips to the end of the first animation after about half a second. When I NSLog the second animation (animateTo200), I can see that it is called twice when a long press starts the animation, but only once when the button's TouchUpInside action starts the animation.

    - (void)viewDidLoad {
        [super viewDidLoad];
        UILongPressGestureRecognizer *longPressRecognizer = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(startButtonClicked)];
        NSArray *recognizerArray = [[NSArray alloc] initWithObjects:longPressRecognizer, nil];
        [startButton setGestureRecognizers:recognizerArray];
        [longPressRecognizer release];
        [recognizerArray release];
    }

    - (IBAction)startButtonClicked {
        if (viewToMove.center.x < 150) {
            [self animateTo200:@"Right to left" finished:nil context:nil];
        } else {
            [self animateTo100:@"Right to left" finished:nil context:nil];
        }
    }

    - (void)animateTo100:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context {
        [UIView beginAnimations:@"Right to left" context:nil];
        [UIView setAnimationDuration:4];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationDidStopSelector:@selector(animateTo200:finished:context:)];
        viewToMove.center = CGPointMake(100.0, 100.0);
        [UIView commitAnimations];
    }

    - (void)animateTo200:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context {
        [UIView beginAnimations:@"Left to right" context:nil];
        [UIView setAnimationDuration:4];
        viewToMove.center = CGPointMake(200.0, 200.0);
        [UIView commitAnimations];
    }

    Read the article

  • How do I get an imageview to rotate while translating in Android?

    - by Ravedave
    I am trying to make an ImageView that rotates while sliding across the screen. I set up a rotate animation for 180 degrees, and it works by itself. I set up a translate animation and it works by itself. When I combine them I get an ImageView that makes a big spiral. I would like the ImageView to rotate around its own center while being translated.

    AnimationSet animSet = new AnimationSet(true);

    // Translate upwards and to the right.
    TranslateAnimation anim = new TranslateAnimation(
        Animation.ABSOLUTE, 0.0f, Animation.ABSOLUTE, +80.0f,
        Animation.ABSOLUTE, 0.0f, Animation.ABSOLUTE, -100.0f);
    anim.setInterpolator(new DecelerateInterpolator());
    anim.setDuration(400);
    animSet.addAnimation(anim);

    // Rotate around center of ImageView
    RotateAnimation ranim = new RotateAnimation(0f, 180f,
        Animation.RELATIVE_TO_SELF, 0.5f, Animation.RELATIVE_TO_SELF, 0.5f);
        // , 200, 200); // canvas.getWidth() / 2, canvas.getHeight() / 2);
    ranim.setDuration(400);
    ranim.setInterpolator(new DecelerateInterpolator());
    animSet.addAnimation(ranim);

    imageBottom.startAnimation(animSet);

    Read the article

  • Animating translation and scaling of view in Android

    - by hgpc
    I have to animate a view from state A to B with changes to its scale, position and scrolling. I know everything about state A (widthA, heightA, topA, leftA, scrollXA, scrollYA) and state B (widthB, heightB, topB, leftB, scrollXB, scrollYB). So far I wrote the following code:

    AnimationSet animation = new AnimationSet(true);

    int toXDelta; // What goes here?
    int toYDelta; // What goes here?
    TranslateAnimation translateAnimation = new TranslateAnimation(1, toXDelta, 1, toYDelta);
    translateAnimation.setDuration(duration);
    animation.addAnimation(translateAnimation);

    float scale = (float) widthB / (float) widthA;
    ScaleAnimation scaleAnimation = new ScaleAnimation(1, scale, 1, scale);
    scaleAnimation.setDuration(duration);
    animation.addAnimation(scaleAnimation);

    animation.setAnimationListener(new AnimationListener() {
        @Override
        public void onAnimationEnd(Animation arg0) {
            view.clearAnimation();
            // Change view to state B
        }

        @Override
        public void onAnimationRepeat(Animation arg0) {}

        @Override
        public void onAnimationStart(Animation arg0) {}
    });

    view.startAnimation(animation);

    Is this the right way to do this? If so, how should I calculate the values of toXDelta and toYDelta? I'm having trouble finding the exact formula. Thanks!
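    A hedged starting point for the formula (my reasoning, not a verified answer): with no pivot arguments, ScaleAnimation scales about the view's top-left corner, so the scale does not move that corner and the translation alone has to carry it from A to B:

    toXDelta = leftB - leftA
    toYDelta = topB - topA

    If the scroll positions differ between the two states, the content under that corner shifts as well; roughly, that adds a (scrollXA - scrollXB) term horizontally and (scrollYA - scrollYB) vertically, but the exact sign depends on how the scroll composes with the scale, so it is worth verifying on screen. Note also that the sample passes 1 rather than 0 as the from-deltas in TranslateAnimation(1, toXDelta, 1, toYDelta), which starts the animation offset by one pixel.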

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

    The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
    background Midnight_Blue

    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>

    define wooden texture {
        noise surface {
            ambient 0.2
            diffuse 0.7
            specular white, 0.5
            microfacet Reitz 10
            position_fn position_cylindrical
            position_scale 1
            lookup_fn lookup_sawtooth
            octaves 1
            turbulence 1
            color_map(
                [0.0, 0.2, light_wood, light_wood]
                [0.2, 0.3, light_wood, median_wood]
                [0.3, 0.4, median_wood, light_wood]
                [0.4, 0.7, light_wood, light_wood]
                [0.7, 0.8, light_wood, median_wood]
                [0.8, 0.9, median_wood, light_wood]
                [0.9, 1.0, light_wood, dark_wood])
        }
    }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint {
        from <4.000, -1.000, 1.000>
        at <0.000, 0.000, 0.000>
        up <0, 1, 0>
        angle 60
        resolution 640, 480
        aspect 1.6
        image_format 0
    }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
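    The console application itself is not shown in the article; purely as an illustration (sketched here in C++, with all counts, spacings and radii invented rather than taken from the article), the nested loops could emit one sphere-and-cylinder pin per grid point for two offset matrixes:

    #include <cstdio>

    int main()
    {
        // Illustrative values only: two interleaved grids give roughly the
        // 11,481 pins mentioned above (76*76 + 75*75 = 11,401 here).
        const int n = 76;
        const double spacing = 0.14;
        std::FILE* f = std::fopen("pins.pi", "w");
        for (int pass = 0; pass < 2; ++pass)
        {
            const double off = pass * spacing * 0.5;  // second grid offset by half a cell
            for (int r = 0; r < n - pass; ++r)
                for (int c = 0; c < n - pass; ++c)
                {
                    const double x = -5.2 + off + c * spacing;
                    const double y = -1.2 + off + r * spacing;
                    // One pin = sphere head + cylinder shaft, as in the single-pin scene above.
                    std::fprintf(f, "object { sphere <%.3f, %.3f, 0.000>, 0.06 chrome }\n", x, y);
                    std::fprintf(f, "object { cylinder <%.3f, %.3f, 0.000>, <%.3f, %.3f, -0.400>, 0.03 chrome }\n", x, y, x, y);
                }
        }
        std::fclose(f);
        return 0;
    }

    Sliding a pin out, as described next, then just means varying the two Z values per pin based on the depth bitmap.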
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

    The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

    The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers    Cost
    1                    $500
    16                   $8,000
    256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

    The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles    Render Time    Cost
    1                         256 hours      $30.72
    16                        16 hours       $30.72
    256                       1 hour         $30.72

    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    - On-Premise
        - Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
        - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
        - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
        - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
    - Windows Azure
        - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration highlighted, is shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay">
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's.

    Each worker role will use the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments = string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments = string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of the use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in balancing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I've uploaded the completed animation to YouTube; a low resolution preview is shown below: Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

    The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    - If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • Android AnimationDrawable and knowing when animation ends

    - by LostDroid
    I want to do an animation with several image files, and for this the AnimationDrawable works very well. However, I need to know when the animation starts and when it ends (i.e. add a listener like Animation.AnimationListener). After having searched for answers, I have a bad feeling that AnimationDrawable does not support listeners. Does anyone know how to do frame-by-frame image animation on Android?

    Read the article

  • iPhone - UIView Animation on UIButton - button unclickable for portion of duration.

    - by Robert
    I am trying to have a button move around the screen and still be clickable. I have it moving around the screen correctly, but the odd thing is that I can't click the button until the final second of the animation. The button is still moving and yet after a certain threshold I can click it. Any idea what is happening? Any idea for some other way I can do what I want? Thanks for any help.

    Read the article

  • Animation does not start immediately when the target view is out of the window

    - by funnything
    Hi. When I apply an animation to a view that is outside the window, the animation does not start immediately. Then, when I scroll the screen to show the animation's target view, the animation starts. I want the animation to start immediately when it is applied. Any ideas? Below is sample code. Thank you.

    public class AnimationValidationActivity extends Activity {

        private ViewSwitcher _viewSwitcher;
        private Button _button;

        /**
         * utility method for animation
         */
        private Animation buildTranslateAnimation(float fromXDelta, float toXDelta, float fromYDelta, float toYDelta) {
            Animation ret = new TranslateAnimation(fromXDelta, toXDelta, fromYDelta, toYDelta);
            ret.setDuration(1000);
            return ret;
        }

        /**
         * build view in place of layout.xml
         */
        private View buildView() {
            ScrollView ret = new ScrollView(this);
            ret.setLayoutParams(new LinearLayout.LayoutParams(
                    LinearLayout.LayoutParams.FILL_PARENT,
                    LinearLayout.LayoutParams.WRAP_CONTENT));

            LinearLayout parent = new LinearLayout(this);
            parent.setLayoutParams(new LinearLayout.LayoutParams(
                    LinearLayout.LayoutParams.FILL_PARENT,
                    LinearLayout.LayoutParams.WRAP_CONTENT));
            parent.setOrientation(LinearLayout.VERTICAL);
            ret.addView(parent);

            _viewSwitcher = new ViewSwitcher(this);
            _viewSwitcher.setLayoutParams(new LinearLayout.LayoutParams(
                    LinearLayout.LayoutParams.FILL_PARENT, 100));
            parent.addView(_viewSwitcher);

            View spacer = new View(this);
            spacer.setLayoutParams(new LinearLayout.LayoutParams(
                    LinearLayout.LayoutParams.FILL_PARENT,
                    getWindow().getWindowManager().getDefaultDisplay().getHeight()));
            parent.addView(spacer);

            _button = new Button(this);
            _button.setText("button");
            parent.addView(_button);

            return ret;
        }

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(buildView());

            _viewSwitcher.setFactory(new ViewSwitcher.ViewFactory() {
                @Override
                public View makeView() {
                    TextView view = new TextView(AnimationValidationActivity.this);
                    view.setLayoutParams(new ViewSwitcher.LayoutParams(
                            ViewSwitcher.LayoutParams.FILL_PARENT,
                            ViewSwitcher.LayoutParams.FILL_PARENT));
                    view.setBackgroundColor(0xffffffff);
                    view.setText("foobar");
                    return view;
                }
            });

            _button.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    _viewSwitcher.setInAnimation(buildTranslateAnimation(_viewSwitcher.getWidth(), 0, 0, 0));
                    _viewSwitcher.setOutAnimation(buildTranslateAnimation(0, -_viewSwitcher.getWidth(), 0, 0));
                    int color = new Random().nextInt();
                    _viewSwitcher.getNextView().setBackgroundColor(0xff000000 | color & 0xffffff);
                    _viewSwitcher.showNext();
                }
            });
        }
    }

    Read the article

  • How to safely remove a USB device from 2008 Core server?

    - by Qwerty
    I have a Hyper-V Core Server 2008 that I administer via command line and remote tools. We now have a new backup system in place, and it involves me connecting an external USB drive (G:) to back up system state files. My question is: how should I safely remove the drive for its weekly offsite swap? I've tried using the devcon tool, however it just says 'removal failed with no devices removed', with no other explanation. I have noticed that there isn't a readily available x64 version of devcon, and that might be the cause of the problem. (I have read of people downloading an amd64 version, but I have not located it myself; if someone knows where it is, please let me know.) The devcon command worked on my old 2003 x86 server with the command: devcon remove *3200AVJ_EXTERNAL* I have also looked at using fsutil volume dismount g: but it doesn't seem to work, as G: is still listed as a connected volume. I have checked that the volume is not in use via remote tools and the net file command; both show no open files in the G:\ volume. This could be a decent substitute, as it might be used to flush any remaining IO to the volume. Can anyone clarify? Thanks in advance.

    Read the article

  • How do I calculate clock speed in multi-core processors?

    - by NReilingh
    Is it correct to say, for example, that a processor with four cores each running at 3GHz is in fact a processor running at 12GHz? I once got into a "Mac vs. PC" argument (which, by the way, is NOT the focus of this topic... that was back in middle school) with an acquaintance who insisted that Macs were only being advertised as 1GHz machines because they were dual-processor G4s each running at 500MHz. At the time I knew this to be hogwash for reasons I think are apparent to most people, but I just saw a comment on this website to the effect of "6 cores x 0.2GHz = 1.2GHz", and that got me thinking again about whether there's a real answer to this. So, this is a more-or-less philosophical/deep technical question about the semantics of clock speed calculation. I see two possibilities: 1. Each core is in fact doing x calculations per second, thus the total number of calculations is x(cores). 2. Clock speed is rather a count of the number of cycles the processor goes through in the space of a second, so as long as all cores are running at the same speed, the speed of each clock cycle stays the same no matter how many cores exist. In other words, Hz = (core1Hz + core2Hz + ...) / cores.

    Read the article

  • 2D Skeletal Animation Transformations

    - by Brad Zeis
    I have been trying to build a 2D skeletal animation system for a while, and I believe that I'm fairly close to finishing. Currently, I have the following data structures:

    struct Bone {
        Bone *parent;
        int child_count;
        Bone **children;
        double x, y;
    };

    struct Vertex {
        double x, y;
        int bone_count;
        Bone **bones;
        double *weights;
    };

    struct Mesh {
        int vertex_count;
        Vertex **vertices;
        Vertex **tex_coords;
    };

    Bone->x and Bone->y are the coordinates of the end point of the Bone. The starting point is given by (bone->parent->x, bone->parent->y) or (0, 0). Each entity in the game has a Mesh, and Mesh->vertices is used as the bounding area for the entity. Mesh->tex_coords are texture coordinates. In the entity's update function, the position of the Bone is used to change the coordinates of the Vertices that are bound to it. Currently what I have is:

    void Mesh_update(Mesh *mesh) {
        int i, j;
        double sx, sy;

        for (i = 0; i < mesh->vertex_count; i++) {
            if (mesh->vertices[i]->bone_count == 0) {
                continue;
            }
            sx = sy = 0;
            for (j = 0; j < mesh->vertices[i]->bone_count; j++) {
                sx += (/* ??? */) * mesh->vertices[i]->weights[j];
                sy += (/* ??? */) * mesh->vertices[i]->weights[j];
            }
            mesh->vertices[i]->x = sx;
            mesh->vertices[i]->y = sy;
        }
    }

    I think I have everything I need, I just don't know how to apply the transformations to the final mesh coordinates. What transformations do I need here? Or is my approach just completely wrong?
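    For what it's worth, the standard technique that fills that gap is linear blend skinning; the sketch below is an illustration, not taken from the question, and it assumes some hypothetical additions to the structs above: a 3x3 affine Mat type with mat_mul/mat_transform helpers, per-bone world and inverse-bind-pose matrices, and each vertex's original bind-pose coordinates (bind_x, bind_y):

    /* Hedged sketch of the inner loop: weight each bone's "motion since the
       bind pose" applied to the vertex's bind-pose position. */
    for (j = 0; j < mesh->vertices[i]->bone_count; j++) {
        Bone *b = mesh->vertices[i]->bones[j];
        /* how much has this bone moved relative to the pose the mesh was skinned in? */
        Mat skin = mat_mul(b->world, b->inv_bind_world);
        double tx, ty;
        mat_transform(skin, mesh->vertices[i]->bind_x, mesh->vertices[i]->bind_y, &tx, &ty);
        sx += tx * mesh->vertices[i]->weights[j];
        sy += ty * mesh->vertices[i]->weights[j];
    }

    Here each bone's world matrix would be composed by walking down the hierarchy (world = parent_world * local, with translation built from the bone endpoints), and inv_bind_world is that same product captured once in the bind pose and inverted. Without the inverse bind-pose term, the vertices get double-transformed, which is the most common failure mode in systems like this.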

    Read the article

  • How to Generate Spritesheet from a 'problematic' animated Symbol in Flash Pro CS6?

    - by Arthur Wulf White
    In the new Flash Pro CS6 there is an option to generate a sprite sheet from a symbol. I used these tutorials: http://www.adobe.com/devnet/flash/articles/using-sprite-sheet-generator.html and http://tv.adobe.com/watch/cs6-creative-cloud-feature-tour-for-web/generating-sprite-sheets-using-flash-professional-cs6/ and it works really well! An artist I'm working with created a bunch of assets for a game. One of them is a walking person as seen from a top-down view. You can find the .fla here: https://docs.google.com/folder/d/0B3L2bumwc4onRGhLcGNId1p2Szg/edit (if this does not work let me know; it is the first time I have used Google Drive to share files). 1. When I press Ctrl+Enter I can see it is moving. When I look for the animation, I do not seem to find it. When I select to create a sprite sheet, Flash suggests creating a sprite sheet with one frame in the base pose and no other (animation) frames. What is causing this, and how do I correct it? 2. I want to convert it to a sprite sheet for 32 angles of movement. Is there any magical easy way to get this done? Is there a workaround without using Flash CS6 to do the same thing?

    Read the article

  • File format for animated scene

    - by stephelton
    I've got a custom OpenGL based rendering engine and I'd like to add support for cinema-type scene animation. The artist that is helping me uses primarily 3DSMax. I'd like a file format for exporting and importing this data. I'm also in need of a file format for skeletal animation data, which may have an impact here. I've been looking at MAXScript to manually export this stuff, which would buy me the most flexibility, but I have virtually no experience with 3DSMax itself, so I get a little lost when it comes to terminology. So I'd like to know what file formats exist for animated scene data, and whether they are appropriate for my use (my fear is that they will be way too broad for my fairly simple needs.) The way I view animated scene data is basically a bunch of references to [animated] models with keyframe-based matrices describing their orientation over time. And probably some special camera stuff to handle perspective. I might also want some event type stuff for adding/removing objects. Is this a sane concept?

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (a shooter). The problem is that the update process for the animated models takes too long.

    This is what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time that it takes to build the animation isn't that bad (0.2ms for 30 bones - 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it is necessary to convert it.

    Code for matrix conversion: (~0.005ms)

    btTransform CLEAR_PHYSICS_API Mat_to_btTransform(Mat mat)
    {
        btMatrix3x3 bulletRotation;
        btVector3 bulletPosition;
        XMFLOAT4X4 matData = mat.GetStorage();

        // copy rotation matrix
        for (int row = 0; row < 3; ++row)
            for (int column = 0; column < 3; ++column)
                bulletRotation[row][column] = matData.m[column][row];

        for (int column = 0; column < 3; ++column)
            bulletPosition[column] = matData.m[3][column];

        return btTransform(bulletRotation, bulletPosition);
    }

    The function for updating the transform (physics):

    void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove(Mat mat, ActorId aid)
    {
        if (btRigidBody * const body = FindActorBody(aid))
        {
            btTransform tmp = Mat_to_btTransform(mat);
            body->setWorldTransform(tmp);
        }
    }

    The real problem is the function FindActorBody(id):

    ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find(id);
    if (found != m_actorBodies.end())
        return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes too long. But I have no idea how I could avoid this.

    Friendly greetings, Mathias
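    One direction worth trying (an assumption on my part, not something from the question): resolve the map lookup once when a model is bound to physics, and keep the btRigidBody pointer next to the bone, so the per-tick path never touches the map. A minimal sketch, with BonePhysicsBinding a hypothetical struct:

    // Sketch: cache the rigid body pointer at bind time so the kinematic move
    // becomes a direct pointer dereference instead of a map lookup per bone.
    struct BonePhysicsBinding
    {
        btRigidBody* body;  // resolved once via FindActorBody(aid) at setup time
    };

    void BulletPhysics::VKinematicMoveCached(const Mat& mat, BonePhysicsBinding& binding)
    {
        if (binding.body)
            binding.body->setWorldTransform(Mat_to_btTransform(mat));
    }

    A smaller change in the same spirit would be switching m_actorBodies to a hash map (std::unordered_map keyed on ActorId); batching the ~30 bone events per model into a single update call would also remove the per-event overhead on top of the lookup cost.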

    Read the article
