Search Results

Search found 841 results on 34 pages for 'angle'.


  • conflicting info about the running kernel version in FreeBSD

    - by John
    I asked a related question about uname before; now I want to ask from another angle, because the following simple yet obviously conflicting outputs may mean there is something many people (me included) did not think of. I'm running FreeBSD 9 RELEASE. Please see the following commands:

        # sysctl kern.bootfile
        kern.bootfile: /boot/kernel/kernel
        # strings /boot/kernel/kernel | grep RELEASE | grep 9
        @(#)FreeBSD 9.2-RELEASE-p7 #0: Tue Jun 3 11:05:13 UTC 2014
        FreeBSD 9.2-RELEASE-p7 #0: Tue Jun 3 11:05:13 UTC 2014
        9.2-RELEASE-p7

    The above kernel file suggests the running kernel is 9.2-RELEASE-p7. But...

        # dmesg
        Copyright (c) 1992-2012 The FreeBSD Project.
        Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
        FreeBSD is a registered trademark of The FreeBSD Foundation.
        FreeBSD 9.1-RELEASE #0 r243825: Tue Dec 4 09:23:10 UTC 2012
        ...
        # uname -a
        FreeBSD localhost.localdomain 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec 4 09:23:10 UTC 2012 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64

    So dmesg and uname say it's 9.1-RELEASE. I also did an extensive find / -type f -exec grep -l "9.1-RELEASE" {} \; but found no possible kernel file that contains 9.1-RELEASE. What could lead to this conflict, and which kernel am I actually running? Please note that I run RELEASE and used freebsd-update to do binary updates, so no compiled kernel is involved. I have rebooted multiple times after freebsd-update. The system is not in a jail or anything like that; it is the only system on that computer.

    Read the article

  • Is there any way to tweak / rotate mouse orientation? Any applications? Registry edits?

    - by calbar
    I've got a very frustrating issue with my new Logitech Marathon Mouse M705. It is absolutely perfect for what I need, with the exception that it tracks at an angle for some reason. What I mean is that when you slide the cursor to the left, it trends upward; when you slide to the right, it trends downward. Moving the cursor along a flat horizontal line is no longer a natural motion; you need to fight what I suspect is a mechanical error of some kind. Unfortunately, I've already exchanged this mouse once and tested both on different Windows 7 and Mac OS X machines; the problem continues to occur. Is there a software solution for me? I'm incredibly surprised there is no simple way to adjust the orientation... how can every mouse manufacturer possibly tune their hardware to track to everyone's tastes? What about those who need to flip orientation a full 90 or 180 degrees? I only need to adjust mine a few degrees, but I'm sure that need has arisen as well. Anyway, I'm running the latest SetPoint drivers (6.00) on Windows 7, and there are no orientation options available. I've checked out uberOptions, and the M705 isn't supported yet (with the last version update over 6 months ago). MAF Mouse requires a very strange series of mouse clicks to activate; this app also seems a little overkill and costs $$ (which I'm willing to pay as a last resort). Is there no universal registry value for mouse orientation? How about for SetPoint drivers specifically? I've done a simple search in regedit without any luck. An XML file somewhere? Anything?

    Read the article

  • Standing and sitting while using a computer at work

    - by Adam Batkin
    I would like to be able to comfortably switch between sitting and standing while at work (I'm a software developer, so I spend most of my day in front of a computer). For the past couple of months I have been using a large elevated stand that sits on my desk (designed expressly for this purpose) containing my keyboard and mouse, and my monitors have been raised as high as possible and aimed upwards. So I can stand all day and I'm pretty comfortable (my right wrist may be at too much of an angle when it's on my mouse, but that's a separate issue). The only problem is that sometimes I want to be able to sit. I can easily place my keyboard and mouse back down under the elevated stand, but I have to look up pretty steeply and that is uncomfortable and makes it difficult to see the screens since they are tilted upwards. My monitor mounts are difficult to adjust quickly/easily, so I can't just re-aim them. I would of course love one of those hydraulic standing/sitting desks (cost isn't the problem). But I'm in a row of "trader-style" desks where it's basically a very long surface with people sitting at 6-foot intervals. What type of equipment do you recommend? I suppose the best thing would be some sort of monitor stand (it must be able to hold 2-3 LCDs) that can easily be lowered and raised. But any other suggestions are also welcome.

    Read the article

  • How can I disable 'natural breaks' in Workrave?

    - by Pixelastic
    I've just discovered Workrave, and was trying to use it alongside the Pomodoro technique (a 5mn break every 25mn). But Workrave's concept of 'natural breaks' seems to interfere with what I'm trying to achieve. Workrave tries to guess that I'm taking a natural break if I stop using my mouse and keyboard for longer than 5s. It then stops the work timer and starts counting time as if I were on my break. Here is a typical example: I've configured a 5mn rest break every 25mn, and I start working. 10mn later, I receive a phone call, or start talking with a colleague, or do any work-related action that needs neither keyboard nor mouse. Workrave then stops counting my time as work time, and starts its rest timer. If my phone call is shorter than 5mn, Workrave will resume its timer where it stopped it, meaning that my time on the phone is not counted as work time, and so my break time is pushed a few minutes later than it should be. Even worse, if my phone call is longer than 5mn, Workrave counts it as a complete rest break, and when I resume working, it restarts its timer completely. I'm looking for either a way to disable the natural breaks, or a way to increase the 'inactivity time' from 5s to maybe ~1mn. Or maybe another angle on natural breaks that might work with the Pomodoro technique (forced 5mn breaks every 25mn). I'm using Ubuntu 11.10.

    Read the article

  • Workaround for Dell "Power Supply Not Recognised" issue

    - by Haedrian
    So, I have a Dell Inspiron and the power supply port appears to be damaged. Basically, when I plug it in I get a nice popup telling me that it couldn't detect that it's a Dell power supply, so it won't charge the battery and underclocks the system. It still works for other purposes (that is, giving power). I thought it was the actual power supply cable, so I bought a new one; that worked for a while, provided I inserted it at JUST THE RIGHT angle. But now that's not working anymore, so I assume it's the part which connects to the computer. The battery charging I can live without; the underclocking I can't. I'd like a way around this issue. Things I've tried:

        - Updating the BIOS
        - Replacing the power supply cable
        - Inserting it at different angles
        - Turning it off and on again
        - Swearing at it
        - Twisting it while inserting it

    So, is there a workaround somehow? I'd like to avoid taking out my soldering kit and risking permanently damaging expensive equipment, if that's all right. I'm hoping for a software solution. Added: the exact model is a Dell Inspiron N5010.

    Read the article

  • Flash AS3 Mysterious Blinking MovieClip

    - by Ben
    This is the strangest problem I've faced in Flash so far. I have no idea what's causing it. I can provide a .swf if someone wants to actually see it, but I'll describe it as best I can. I'm creating bullets for a tank object to shoot. The tank is a child of the document class. The way I am creating the bullet is:

        var bullet:Bullet = new Bullet();
        (parent as MovieClip).addChild(bullet);

    The bullet itself simply moves itself in a direction using code like this.x += 5; The problem is that the bullets trace their creation and destruction at the correct times; however, a bullet is sometimes not visible until halfway across the screen, sometimes not at all, and sometimes for the whole traversal. Oddly, removing the timer I have on bullet creation seems to solve this. The timer is implemented as such:

        if(shot_timer == 0) {
            shoot(); // This contains the aforementioned bullet creation method
            shot_timer = 10;
        }

    My enter frame handler for the tank object controls the timer and decrements it every frame if it is greater than zero. Can anyone suggest why this could be happening? EDIT: As requested, full code:

    Bullet.as:

        package {
            import flash.display.MovieClip;
            import flash.events.Event;

            public class Bullet extends MovieClip {
                public var facing:int;
                private var speed:int;

                public function Bullet():void {
                    trace("created");
                    speed = 10;
                    addEventListener(Event.ADDED_TO_STAGE, addedHandler);
                }

                private function addedHandler(e:Event):void {
                    addEventListener(Event.ENTER_FRAME, enterFrameHandler);
                    removeEventListener(Event.ADDED_TO_STAGE, addedHandler);
                }

                private function enterFrameHandler(e:Event):void {
                    //0 - up, 1 - left, 2 - down, 3 - right
                    if(this.x > 720 || this.x < 0 || this.y < 0 || this.y > 480) {
                        removeEventListener(Event.ENTER_FRAME, enterFrameHandler);
                        trace("destroyed");
                        (parent as MovieClip).removeChild(this);
                        return;
                    }
                    switch(facing) {
                        case 0: this.y -= speed; break;
                        case 1: this.x -= speed; break;
                        case 2: this.y += speed; break;
                        case 3: this.x += speed; break;
                    }
                }
            }
        }

    Tank.as:

        package {
            import flash.display.MovieClip;
            import flash.events.KeyboardEvent;
            import flash.events.Event;
            import flash.ui.Keyboard;

            public class Tank extends MovieClip {
                private var right:Boolean = false;
                private var left:Boolean = false;
                private var up:Boolean = false;
                private var down:Boolean = false;
                private var facing:int = 0; //0 - up, 1 - left, 2 - down, 3 - right
                private var horAllowed:Boolean = true;
                private var vertAllowed:Boolean = true;
                private const GRID_SIZE:int = 100;
                private var shooting:Boolean = false;
                private var shot_timer:int = 0;
                private var speed:int = 2;

                public function Tank():void {
                    addEventListener(Event.ADDED_TO_STAGE, stageAddHandler);
                    addEventListener(Event.ENTER_FRAME, enterFrameHandler);
                }

                private function stageAddHandler(e:Event):void {
                    stage.addEventListener(KeyboardEvent.KEY_DOWN, checkKeys);
                    stage.addEventListener(KeyboardEvent.KEY_UP, keyUps);
                    removeEventListener(Event.ADDED_TO_STAGE, stageAddHandler);
                }

                public function checkKeys(event:KeyboardEvent):void {
                    if(event.keyCode == 32) {
                        //trace("Spacebar is down");
                        shooting = true;
                    }
                    if(event.keyCode == 39) {
                        //trace("Right key is down");
                        right = true;
                    }
                    if(event.keyCode == 38) {
                        //trace("Up key is down"); // lol
                        up = true;
                    }
                    if(event.keyCode == 37) {
                        //trace("Left key is down");
                        left = true;
                    }
                    if(event.keyCode == 40) {
                        //trace("Down key is down");
                        down = true;
                    }
                }

                public function keyUps(event:KeyboardEvent):void {
                    if(event.keyCode == 32) {
                        event.keyCode = 0;
                        shooting = false;
                        //trace("Spacebar is not down");
                    }
                    if(event.keyCode == 39) {
                        event.keyCode = 0;
                        right = false;
                        //trace("Right key is not down");
                    }
                    if(event.keyCode == 38) {
                        event.keyCode = 0;
                        up = false;
                        //trace("Up key is not down");
                    }
                    if(event.keyCode == 37) {
                        event.keyCode = 0;
                        left = false;
                        //trace("Left key is not down");
                    }
                    if(event.keyCode == 40) {
                        event.keyCode = 0;
                        down = false;
                        //trace("Down key is not down") // O.o
                    }
                }

                public function checkDirectionPermissions():void {
                    if(this.y % GRID_SIZE < 5 || GRID_SIZE - this.y % GRID_SIZE < 5) {
                        horAllowed = true;
                    } else {
                        horAllowed = false;
                    }
                    if(this.x % GRID_SIZE < 5 || GRID_SIZE - this.x % GRID_SIZE < 5) {
                        vertAllowed = true;
                    } else {
                        vertAllowed = false;
                    }
                    if(!horAllowed && !vertAllowed) {
                        realign();
                    }
                }

                public function realign():void {
                    if(!horAllowed) {
                        if(this.x % GRID_SIZE < GRID_SIZE / 2) {
                            this.x -= this.x % GRID_SIZE;
                        } else {
                            this.x += (GRID_SIZE - this.x % GRID_SIZE);
                        }
                    }
                    if(!vertAllowed) {
                        if(this.y % GRID_SIZE < GRID_SIZE / 2) {
                            this.y -= this.y % GRID_SIZE;
                        } else {
                            this.y += (GRID_SIZE - this.y % GRID_SIZE);
                        }
                    }
                }

                public function enterFrameHandler(Event):void {
                    //trace(shot_timer);
                    if(shot_timer > 0) {
                        shot_timer--;
                    }
                    movement();
                    firing();
                }

                public function firing():void {
                    if(shooting) {
                        if(shot_timer == 0) {
                            shoot();
                            shot_timer = 10;
                        }
                    }
                }

                public function shoot():void {
                    var bullet = new Bullet();
                    bullet.facing = facing;
                    //0 - up, 1 - left, 2 - down, 3 - right
                    switch(facing) {
                        case 0:
                            bullet.x = this.x;
                            bullet.y = this.y - this.height / 2;
                            break;
                        case 1:
                            bullet.x = this.x - this.width / 2;
                            bullet.y = this.y;
                            break;
                        case 2:
                            bullet.x = this.x;
                            bullet.y = this.y + this.height / 2;
                            break;
                        case 3:
                            bullet.x = this.x + this.width / 2;
                            bullet.y = this.y;
                            break;
                    }
                    (parent as MovieClip).addChild(bullet);
                }

                public function movement():void {
                    //0 - up, 1 - left, 2 - down, 3 - right
                    checkDirectionPermissions();
                    if(horAllowed) {
                        if(right) {
                            orient(3);
                            realign();
                            this.x += speed;
                        }
                        if(left) {
                            orient(1);
                            realign();
                            this.x -= speed;
                        }
                    }
                    if(vertAllowed) {
                        if(up) {
                            orient(0);
                            realign();
                            this.y -= speed;
                        }
                        if(down) {
                            orient(2);
                            realign();
                            this.y += speed;
                        }
                    }
                }

                public function orient(dest:int):void {
                    //trace("facing: " + facing);
                    //trace("dest: " + dest);
                    var angle = facing - dest;
                    this.rotation += (90 * angle);
                    facing = dest;
                }
            }
        }

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”: the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article runs through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the viewpoint through each pixel must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
        background Midnight_Blue

        static define matte surface { ambient 0.1 diffuse 0.7 }
        define matte_white texture { matte { color white } }
        define matte_black texture { matte { color dark_slate_gray } }
        define position_cylindrical 3
        define lookup_sawtooth 1
        define light_wood <0.6, 0.24, 0.1>
        define median_wood <0.3, 0.12, 0.03>
        define dark_wood <0.05, 0.01, 0.005>

        define wooden texture {
            noise surface {
                ambient 0.2 diffuse 0.7 specular white, 0.5
                microfacet Reitz 10
                position_fn position_cylindrical position_scale 1
                lookup_fn lookup_sawtooth octaves 1 turbulence 1
                color_map(
                    [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood]
                    [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood]
                    [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood]
                    [0.9, 1.0, light_wood, dark_wood])
            }
        }
        define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
        define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
        define steely_blue texture { shiny { color black } }
        define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

        viewpoint {
            from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60
            resolution 640, 480 aspect 1.6 image_format 0
        }

        light <-10, 30, 20>
        light <-10, 30, -20>

        object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
        object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
        object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

        object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
        object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
        object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
        object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
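    The article doesn't include the source of that console application, so the following is only a rough Python sketch of the idea (the function names, pin dimensions and grid spacing are my assumptions, not the author's code); it writes the two intersecting pin matrices as PolyRay object statements:

        def pin(x, y, z):
            # One pin is a sphere (the head) plus a cylinder (the shaft),
            # matching the chrome pin objects in the scene file above.
            return (f"object {{ sphere <{x:.3f}, {y:.3f}, {z:.3f}>, 0.10 chrome }}\n"
                    f"object {{ cylinder <{x:.3f}, {y:.3f}, {z:.3f}>, "
                    f"<{x:.3f}, {y:.3f}, {z - 0.8:.3f}>, 0.03 chrome }}")

        def pin_board(cols, rows, spacing=0.1, z=0.0):
            # Two intersecting matrices: a square grid plus a second grid offset
            # by half the spacing, giving the staggered pin-board layout.
            lines = []
            for j in range(rows):
                for i in range(cols):
                    lines.append(pin(i * spacing, j * spacing, z))
                    lines.append(pin((i + 0.5) * spacing, (j + 0.5) * spacing, z))
            return "\n".join(lines)

        with open("frame0001.pi", "w") as f:
            f.write(pin_board(75, 75))

    For the animation frames, the constant z would instead vary per pin, driven by the brightness of the matching pixel in a captured depth image, which is exactly where the article goes next.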
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used: the position of each pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range, the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below.

    The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

        Number of Servers    Cost
        1                    $500
        16                   $8,000
        256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

        Number of Worker Roles    Render Time    Cost
        1                         256 hours      $30.72
        16                        16 hours       $30.72
        256                       1 hour         $30.72

    Using worker roles in Windows Azure gives the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    On-Premise:
        - Windows Kinect - used combined with the Kinect Explorer to create a stream of depth images.
        - Animation Creator - this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
        - Process Monitor - this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
        - Image Downloader - this application polls the image queue and downloads the rendered animation files once they are complete.

    Windows Azure:
        - Azure Storage - queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration highlighted, is shown below.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="CloudRay" >
          <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
            <Imports>
            </Imports>
            <ConfigurationSettings>
              <Setting name="DataConnectionString" />
            </ConfigurationSettings>
            <LocalResources>
              <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
            </LocalResources>
          </WorkerRole>
        </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.

        1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
        2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
        3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
        4. The JPG file is uploaded from local storage to the images blob container.
        5. A message is placed on the images queue to indicate a new image is available for download.
        6. The job message is deleted from the job queue.
        7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

        public override void Run()
        {
            // Set environment variables
            string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
            string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

            LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
            string localStorageRootPath = rayStorage.RootPath;

            JobQueue jobQueue = new JobQueue("renderjobs");
            JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
            CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
            CloudRayBlob imageBlob = new CloudRayBlob("images");
            RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

            Frames = 0;

            while (true)
            {
                // Get the render job from the queue
                CloudQueueMessage jobMsg = jobQueue.Get();

                if (jobMsg != null)
                {
                    // Get the file details
                    string sceneFile = jobMsg.AsString;
                    string tgaFile = sceneFile.Replace(".pi", ".tga");
                    string jpgFile = sceneFile.Replace(".pi", ".jpg");

                    string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                    string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                    string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                    // Copy the scene file to local storage
                    sceneBlob.DownloadFile(sceneFilePath);

                    // Run the ray tracer.
                    string polyrayArguments =
                        string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                    Process polyRayProcess = new Process();
                    polyRayProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                    polyRayProcess.StartInfo.Arguments = polyrayArguments;
                    polyRayProcess.Start();
                    polyRayProcess.WaitForExit();

                    // Convert the image
                    string dtaArguments =
                        string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                    Process dtaProcess = new Process();
                    dtaProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                    dtaProcess.StartInfo.Arguments = dtaArguments;
                    dtaProcess.Start();
                    dtaProcess.WaitForExit();

                    // Upload the image to blob storage
                    imageBlob.UploadFile(jpgFilePath);

                    // Add a download job.
                    downloadQueue.Add(jpgFile);

                    // Delete the render job message
                    jobQueue.Delete(jobMsg);

                    Frames++;
                }
                else
                {
                    Thread.Sleep(1000);
                }

                // Log the worker role activity.
                roleLifecycleDataSource.Alive
                    ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
            }
        }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

        public class RoleLifecycle : TableServiceEntity
        {
            public string ServerName { get; set; }
            public string Status { get; set; }
            public DateTime StartTime { get; set; }
            public DateTime EndTime { get; set; }
            public long SecondsRunning { get; set; }
            public DateTime LastActiveTime { get; set; }
            public int Frames { get; set; }
            public string Comment { get; set; }

            public RoleLifecycle()
            {
            }

            public RoleLifecycle(string roleName)
            {
                PartitionKey = roleName;
                RowKey = Utils.GetAscendingRowKey();
                Status = "Started";
                StartTime = DateTime.UtcNow;
                LastActiveTime = StartTime;
                EndTime = StartTime;
                SecondsRunning = 0;
                Frames = 0;
            }
        }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
            osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="16" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

    Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

    Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
            osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="256" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

    We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
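    As a quick sanity check on these figures (my own back-of-the-envelope arithmetic, not part of the original article), the reported frame rates and CPU time are consistent with each other:

        # Sanity-check the reported render statistics (values from the article).
        frames = 2000
        wall_clock_min = 52.0                      # total render time, minutes
        cpu_time_min = ((6 * 24 + 7) * 60) + 22    # 6 days, 7 hours, 22 minutes of CPU time

        print(frames / wall_clock_min)             # ~38 frames/minute average, as reported
        print(1000 / (16 + 27 / 60))               # ~60 frames/minute for the last 1,000 frames
        print(cpu_time_min / frames)               # ~4.5 CPU-minutes per frame, in line with
                                                   # the 4 minute 27 second test render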
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

        - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
        - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
        - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
        - No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances: in this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

        - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
        - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
        - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
        - Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
        - If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
        - Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
        - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
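    The queue-centred pattern in these tips is easy to prototype outside Azure. Here is a minimal Python sketch of the same idea (standard library only; the queue contents and the fake render step are my own illustration, not the CloudRay code): a shared job queue load-balances frame jobs across workers, and each worker simply pulls the next job as soon as it finishes the last one.

        import queue
        import threading
        import time

        job_queue = queue.Queue()
        for frame in range(1, 21):          # 20 frame jobs instead of 2,000
            job_queue.put(f"frame{frame:04d}.pi")

        def worker(worker_id: int) -> None:
            # Each worker pulls jobs until the queue is empty, so fast workers
            # simply take more jobs: the same load balancing the article
            # achieves with an Azure Storage queue.
            while True:
                try:
                    scene_file = job_queue.get_nowait()
                except queue.Empty:
                    return
                time.sleep(0.1)             # stand-in for the PolyRay render
                print(f"worker {worker_id} rendered {scene_file}")
                job_queue.task_done()

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
        for t in threads:
            t.start()
        job_queue.join()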

    Read the article

  • scale animation for wpf popup

    - by wpf
    I have a nice little popup; when it shows, I'd like it to grow from 0 to 1x scale in Y, but I can't get it right. When I click multiple times, it looks like I "catch" the animation at various states during the "growth".

        <Window.Triggers>
          <EventTrigger RoutedEvent="FrameworkElement.MouseRightButtonDown">
            <EventTrigger.Actions>
              <BeginStoryboard>
                <Storyboard>
                  <DoubleAnimationUsingKeyFrames BeginTime="00:00:00"
                      Storyboard.TargetName="SimplePopup"
                      Storyboard.TargetProperty="(FrameworkElement.LayoutTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleY)">
                    <SplineDoubleKeyFrame KeyTime="00:00:00" Value="0"/>
                    <SplineDoubleKeyFrame KeyTime="00:00:00.3000000" Value="1"/>
                  </DoubleAnimationUsingKeyFrames>
                </Storyboard>
              </BeginStoryboard>
            </EventTrigger.Actions>
          </EventTrigger>
        </Window.Triggers>

    and the popup:

        <Popup Name="SimplePopup" AllowsTransparency="True" StaysOpen="False">
          <Popup.LayoutTransform>
            <TransformGroup>
              <ScaleTransform ScaleX="1" ScaleY="1" />
              <SkewTransform AngleX="0" AngleY="0" />
              <RotateTransform Angle="0" />
              <TranslateTransform X="0" Y="0" />
            </TransformGroup>
          </Popup.LayoutTransform>
          <Border>
            some Content here
          </Border>
        </Popup>

    Read the article

  • NUnit test with WatiN runs OK from Dev10, but not when starting NUnit from "C:\Program Files (x86)\NUnit 2.5.5\bin\net-2.0\nunit.exe"

    - by judek.mp
    I have the following code in an NUnit test...

        string url = "";
        url = @"http://localhost/ClientPortalDev/Account/LogOn";
        ieStaticInstanceHelper = new IEStaticInstanceHelper();
        ieStaticInstanceHelper.IE = new IE(url);
        ieStaticInstanceHelper.IE.TextField(Find.ById("UserName")).TypeText("abc");
        ieStaticInstanceHelper.IE.TextField(Find.ById("Password")).TypeText("defg");
        ieStaticInstanceHelper.IE.Button(Find.ById("submit")).Click();
        ieStaticInstanceHelper.IE.Close();

    On right-clicking the project in Dev10 and choosing [Test With][NUnit 2.5], this test code runs with no problems (I have TestDriven installed). When opening NUnit from "C:\Program Files (x86)\NUnit 2.5.5\bin\net-2.0\nunit.exe" and then opening my test dll, the following text is reported in NUnit's Errors and Failures...

        Elf.Uat.ClientPortalLoginFeature.LoginAsWellKnownUserShouldSucceed:
        System.Runtime.InteropServices.COMException : Error HRESULT E_FAIL has been returned from a call to a COM component.

    As an aside, right-clicking the source cs file in Dev10 and choosing Run Test works as well. The above test is actually part of a TechTalk.SpecFlow 1.3 step. I have NUnit 2.5.5.10112 installed, I have WatiN 20.20 installed, I have ... & in my NUnit.exe.config, and I have the following App.config for my test dll:

        <configuration>
          <configSections>
            <sectionGroup name="NUnit">
              <section name="TestRunner" type="System.Configuration.NameValueSectionHandler"/>
            </sectionGroup>
          </configSections>
          <NUnit>
            <TestRunner>
              <add key="ApartmentState" value="STA" />
            </TestRunner>
          </NUnit>
          <appSettings>
            <add key="configCheck" value="12345" />
          </appSettings>
        </configuration>

    Anyone hit this before? The NUnit test obviously runs in NUnit 2.5.5 under TestDriven, but not when run outside of Dev10 and TestDriven?

    Read the article

  • R ggplot2: Arrange facet_grid by non-facet column (and labels using non-facet column)

    - by tommy-o-dell
    I have a couple of questions regarding facetting in ggplot2... Let's say I have a query that returns data that looks like this (note that it's ordered by Rank asc, Alarm asc; two alarms have a Rank of 3 because their Totals = 1798 for Week 4, and Rank is set according to Total for Week 4):

        Rank Week Alarm                      Total
        1    1    BELTWEIGHER HIGH HIGH      1000
        1    2    BELTWEIGHER HIGH HIGH      1050
        1    3    BELTWEIGHER HIGH HIGH      900
        1    4    BELTWEIGHER HIGH HIGH      1800
        2    1    MICROWAVE LHS              200
        2    2    MICROWAVE LHS              1200
        2    3    MICROWAVE LHS              400
        2    4    MICROWAVE LHS              1799
        3    1    HI PRESS FILTER 2 CLOG SW  1250
        3    2    HI PRESS FILTER 2 CLOG SW  1640
        3    3    HI PRESS FILTER 2 CLOG SW  1000
        3    4    HI PRESS FILTER 2 CLOG SW  1798
        3    1    LOW PRESS FILTER 2 CLOG SW 800
        3    2    LOW PRESS FILTER 2 CLOG SW 1200
        3    3    LOW PRESS FILTER 2 CLOG SW 800
        3    4    LOW PRESS FILTER 2 CLOG SW 1798

    (reproduction code below)

        Rank = c(rep(1,4), rep(2,4), rep(3,8))
        Week = c(rep(1:4,4))
        Total = c(1000,1050,900,1800,
                  200,1200,400,1799,
                  1250,1640,1000,1798,
                  800,1200,800,1798)
        Alarm = c(rep("BELTWEIGHER HIGH HIGH",4),
                  rep("MICROWAVE LHS",4),
                  rep("HI PRESS FILTER 2 CLOG SW",4),
                  rep("LOW PRESS FILTER 2 CLOG SW",4))
        spark <- data.frame(Rank, Week, Alarm, Total)

    Now when I do this...

        s <- ggplot(spark, aes(Week, Total)) +
          opts(
            panel.background = theme_rect(size = 1, colour = "lightgray"),
            panel.grid.major = theme_blank(),
            panel.grid.minor = theme_blank(),
            axis.line = theme_blank(),
            axis.text.x = theme_blank(),
            axis.text.y = theme_blank(),
            axis.title.x = theme_blank(),
            axis.title.y = theme_blank(),
            axis.ticks = theme_blank(),
            strip.background = theme_blank(),
            strip.text.y = theme_text(size = 7, colour = "red", angle = 0)
          )
        s + facet_grid(Alarm ~ .) + geom_line()

    I get this... Notice that it's facetted according to Alarm and that the facets are arranged alphabetically. Two questions:

        1. How can I keep it facetted by Alarm but displayed in the correct order (Rank asc, Alarm asc)?
        2. How can I keep it facetted by Alarm but show labels from Rank instead of Alarm? Note that I can't just facet on Rank, because ggplot2 would see only 3 facets to plot where there are really 4 different alarms.

    Thanks kindly for the help! Tommy

    Read the article

  • Rotating an Image in Silverlight without cropping

    - by Tim Saunders
    I am currently working on a simple Silverlight app that will allow people to upload an image, crop, resize and rotate it, and then load it via a web service to a CMS. Cropping and resizing are done; however, rotation is causing some problems. The image gets cropped and is off centre after the rotation.

        WriteableBitmap wb = new WriteableBitmap(destWidth, destHeight);
        RotateTransform rt = new RotateTransform();
        rt.Angle = 90;
        rt.CenterX = width / 2;
        rt.CenterY = height / 2;

        // Draw to the Writeable Bitmap
        Image tempImage2 = new Image();
        tempImage2.Width = width;
        tempImage2.Height = height;
        tempImage2.Source = rawImage;
        wb.Render(tempImage2, rt);
        wb.Invalidate();
        rawImage = wb;

        message.Text = "h:" + rawImage.PixelHeight.ToString();
        message.Text += ":w:" + rawImage.PixelWidth.ToString();

        // Finally set the Image back
        MyImage.Source = wb;
        MyImage.Width = destWidth;
        MyImage.Height = destHeight;

    The code above only needs to rotate by 90° at this time, so I'm just setting destWidth and destHeight to the height and width of the original image.
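    For what it's worth, this kind of cropping usually means the target bitmap is not big enough to hold the rotated result, and the off-centre shift comes from rotating about the old centre without translating into the new bitmap. A rough sketch of the geometry involved (plain Python as an illustration, not Silverlight API code):

        import math

        def rotated_bounds(width, height, degrees):
            """Size of the axis-aligned box that fully contains a rotated image."""
            a = math.radians(degrees)
            new_w = abs(width * math.cos(a)) + abs(height * math.sin(a))
            new_h = abs(width * math.sin(a)) + abs(height * math.cos(a))
            return new_w, new_h

        # For a 90-degree rotation the dimensions simply swap, so the target
        # bitmap should be height x width, not width x height:
        print(rotated_bounds(640, 480, 90))   # ~ (480.0, 640.0)

        # The rotation centre must also end up at the centre of the *new* box,
        # i.e. the image needs an extra translation of
        # ((new_w - width) / 2, (new_h - height) / 2) after rotating about its centre.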

    Read the article

  • Using events in an external swf to load a new external swf

    - by wdense51
    Hi, I'm trying to get an external swf to load when the flv content of another external swf finishes playing. I've only been using ActionScript 3 for about a week and I got to this point from tutorials, so my knowledge is limited. This is what I've got so far.

    Code for the external swf (with flv content):

        import fl.video.FLVPlayback;
        import fl.video.VideoEvent;

        motionClip.playPauseButton = player;
        motionClip.seekBar = seeker;
        motionClip.addEventListener(VideoEvent.COMPLETE, goNext);

        function goNext(e:VideoEvent):void {
            nextFrame();
        }

    And this is the code for the main file:

        var Xpos:Number = 110;
        var Ypos:Number = 110;
        var swf_MC:MovieClip = new MovieClip();
        var loader:Loader = new Loader();
        var defaultSWF:URLRequest = new URLRequest("arch_reel.swf");

        addChild(swf_MC);
        swf_MC.x = Xpos;
        swf_MC.y = Ypos;
        loader.load(defaultSWF);
        swf_MC.addChild(loader);

        //Btns Universal Function
        function btnClick(event:MouseEvent):void {
            SoundMixer.stopAll();
            swf_MC.removeChild(loader);
            var newSWFRequest:URLRequest = new URLRequest("motion.swf");
            loader.load(newSWFRequest);
            swf_MC.addChild(loader);
        }

        function returnSWF(event:Event):void {
            swf_MC.removeChild(loader);
            loader.load(defaultSWF);
            swf_MC.addChild(loader);
        }

        //Btn Listeners
        motion.addEventListener(MouseEvent.CLICK, btnClick);
        swf_MC.addEventListener(swf_MC.motionClip.Event.COMPLETE, swf_MC.motionClip.eventClip, returnSWF);

    I'm starting to get an understanding of how all of this works, but it's all too new to me at the moment, so I'm sure I've approached it from the wrong angle. Any help would be fantastic, as I've been trying at this for a few days now. Thanks

    Read the article

  • Reinforcement learning toy project

    - by Betamoo
    My toy project to learn & apply Reinforcement Learning is:

        - An agent tries to reach a goal state "safely" & "quickly"...
        - But there are projectiles and rockets that are launched at the agent along the way.
        - The agent can determine the rockets' positions - with some noise - only if they are "near".
        - The agent must then learn to avoid crashing into these rockets.
        - The agent has fuel - rechargeable with time - which is consumed by agent motion.
        - Continuous actions: accelerating forward - turning with an angle.

    I need some hints and names of RL algorithms that suit this case:

        - I think it is a POMDP, but can I model it as an MDP and just ignore the noise?
        - In case of a POMDP, what is the recommended way of evaluating probability?
        - Which is better to use in this case: value functions or policy iteration?
        - Can I use a NN to model the environment dynamics instead of using explicit equations?
        - If yes, is there a specific type/model of NN to be recommended?
        - I think actions must be discretized, right?

    I know it will take time and effort to learn such a topic, but I am eager to... You may answer some of the questions if you cannot answer all... Thanks
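    As a concrete reference point for the discretized-actions question, a tabular Q-learning skeleton along these lines is often the first thing to try. This is a minimal sketch with made-up action discretizations and a placeholder environment, not a recommendation tuned to this project:

        import random
        from collections import defaultdict

        ACTIONS = [(accel, turn) for accel in (0.0, 1.0)          # discretized "accelerate"
                                 for turn in (-15.0, 0.0, 15.0)]  # discretized turn angle

        alpha, gamma, epsilon = 0.1, 0.99, 0.1
        Q = defaultdict(float)   # maps (state, action_index) -> estimated value

        def choose_action(state):
            # epsilon-greedy over the discretized action set
            if random.random() < epsilon:
                return random.randrange(len(ACTIONS))
            return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

        def update(state, action, reward, next_state):
            # one-step Q-learning backup
            best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

        # env.reset() / env.step() would be placeholders for the rocket-avoidance
        # simulator; the state would itself be a discretization of
        # (position, fuel, nearby rocket observations).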

    Read the article

  • Putting a MovieMaterial behind a DAE model in Papervision3D

    - by didibus
    Hi, I'm doing a project using FLARManager augmented reality and the Papervision3D library. Unfortunately, Papervision is giving me a lot of problems. My scene3D contains a DAE model and a plane. The plane has a MovieMaterial and is playing a video through FLVPlayback. The DAE and the plane are both inside the same DisplayObject3D container. FLARManager transforms the container so that everything appears at the angle of the marker. My DAE model is a TV, and the screen of the TV is transparent. I want to have my plane inside of my DAE model, so that the movie playing on the plane's material appears to be what is playing on the TV. The problem is that, even though the plane has a lower Z index than the TV, it always appears in front of the TV. How do I make my plane and its MovieMaterial appear behind the TV, so that some of its corners are cut off by the TV and the parts of the TV that are transparent let me see the movie? If it's impossible, does anyone have an idea of how I could get the desired effect of having a movie play on the screen of my DAE TV model? Thank you.

    Read the article

  • toggling proximity sensor on iPhone loses an event

    - by slugolicious
    I'm using setProximitySensingEnabled and implemented proximityStateChanged in my UIApplication subclass. It looks like, if sensing is toggled, the first "off" event is lost. My UIApplication class is pretty basic...

        -(void)proximityStateChanged:(BOOL)state {
            NSLog(state ? @"ON" : @"OFF");
        }

    In my application delegate, I have a UISwitch that enables/disables the proximity sensor.

        -(IBAction)toggleProxy:(id)sender {
            [UIApplication sharedApplication].proximitySensingEnabled = prox.on;
        }

    "prox" is my UISwitch. The test works fine when it first starts. I tap the switch to turn it on, then put my hand over the sensor for a second and move it away, and get:

        2009-03-11 12:43:00.465 Proximity[324:20b] ON
        2009-03-11 12:43:02.514 Proximity[324:20b] OFF
        2009-03-11 12:43:04.046 Proximity[324:20b] ON
        2009-03-11 12:43:05.621 Proximity[324:20b] OFF

    I then tap the switch to turn it off, then tap again to turn it on. Now I get:

        2009-03-11 12:43:12.005 Proximity[324:20b] ON
        2009-03-11 12:43:14.789 Proximity[324:20b] ON
        2009-03-11 12:43:16.467 Proximity[324:20b] OFF
        2009-03-11 12:43:17.516 Proximity[324:20b] ON
        2009-03-11 12:43:19.077 Proximity[324:20b] OFF

    Notice I get two ONs before an OFF; the OFF is lost somewhere. I can't replicate this behavior using Google's mobile app, so I'm wondering if they're resetting something in between proximity enabling. They don't have the proximity sensor on all the time, because if you cover the sensor the screen doesn't go blank: you have to tilt the phone up and angle it back (to simulate the position it would be in at your ear), and then covering the sensor works. Anyone else playing with the sensor? In my particular app, I'm recording a voice message, and when you move the phone away from your ear I want to pause the recording (when I get an OFF). The first time I move the phone away from my ear, the recording is not paused. However, if I put it to my ear and move it away again, it is paused.

    Read the article

  • Looking for ideas how to refactor my (complex) algorithm

    - by _simon_
    I am trying to write my own Game of Life, with my own set of rules. The first 'concept' which I would like to apply is socialization (which basically means whether a cell wants to be alone or in a group with other cells). The data structure is a 2-dimensional array (for now). In order to be able to move a cell towards or away from a group of other cells, I need to determine where to move it. The idea is that I evaluate all the cells in the area (neighbours) and get a vector, which tells me where to move the cell. The size of the vector is 0 or 1 (don't move or move) and the angle is an array of directions (up, down, right, left). This is an image with a representation of the forces on a cell, as I imagined it (but the reach could be more than 5). Let's for example take this picture:

        Forces from the lower-left neighbour: down (0), up (2), right (2), left (0)
        Forces from the right neighbour:      down (0), up (0), right (0), left (2)
        Sum:                                  down (0), up (2), right (0), left (0)

    So the cell should go up. I could write an algorithm with a lot of if statements and check all cells in the neighbourhood. Of course this algorithm would be easiest if the 'reach' parameter is set to 1 (first column on picture 1). But what if I change the reach parameter to 10, for example? I would need to write an algorithm for each 'reach' parameter in advance... How can I avoid this (notice that the force grows exponentially: 1, 2, 4, 8, 16, 32, ...)? Can I use a specific design pattern for this problem? Also: the most important thing is not speed, but being able to extend the initial logic. Things to take into consideration: reach should be passed as a parameter; I would like to be able to change the function which calculates the force (e.g. exponential, Fibonacci); a cell can go to a new place only if this new place is not populated; watch for corners (you can't evaluate right and top neighbours in the top-right corner, for example).
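    One way to avoid per-reach special cases is to loop over the neighbourhood once, make the force a function of distance only, and accumulate a per-direction sum; the reach then becomes a plain loop bound instead of hand-written if statements. A minimal sketch of that approach (the force function and the attraction rule are my assumptions, not part of the question):

        def force(distance, reach):
            # Force grows as the cell gets closer: 1, 2, 4, 8, ... at distance
            # reach, reach-1, ..., 1. Swap in any other function (e.g. Fibonacci).
            return 2 ** (reach - distance)

        def force_vector(grid, cx, cy, reach):
            """Sum per-direction forces from all occupied cells within `reach`."""
            forces = {"up": 0, "down": 0, "left": 0, "right": 0}
            rows, cols = len(grid), len(grid[0])
            # Clamping the loop bounds handles corners and edges automatically.
            for y in range(max(0, cy - reach), min(rows, cy + reach + 1)):
                for x in range(max(0, cx - reach), min(cols, cx + reach + 1)):
                    if (x, y) == (cx, cy) or not grid[y][x]:
                        continue
                    dist = max(abs(x - cx), abs(y - cy))   # Chebyshev distance
                    f = force(dist, reach)
                    # A neighbour attracts the cell towards itself.
                    if x > cx: forces["right"] += f
                    if x < cx: forces["left"] += f
                    if y > cy: forces["down"] += f
                    if y < cy: forces["up"] += f
            return forces

        grid = [[0, 0, 0],
                [0, 1, 0],     # the cell we evaluate
                [1, 0, 1]]     # two neighbours below it
        print(force_vector(grid, 1, 1, reach=2))
        # -> {'up': 0, 'down': 4, 'left': 2, 'right': 2}, so the cell should go down

    Because the force function is passed-in logic rather than baked into branches, changing the growth curve or the reach means changing one function or one argument, which matches the extensibility goal stated above.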

    Read the article

  • Accelerating wheel - Psychtoolbox in MATLAB

    - by ariel
    Hi, I am trying to write code that will show an accelerating wheel: as long as the user presses 'a', the wheel should accelerate counterclockwise. The thing is that it turns in the right direction, but it doesn't accelerate. This is the code I am using (PTB-3, Windows XP). img=imread('c:\images.jpg'); [yimg,ximg,z]=size(img); rot_spd = 1; larrow = KbName('a'); % modify this for Windows rarrow = KbName('b'); [w,rect]=Screen('OpenWindow',0,[0 0 0]); sx = 400; % desired x-size of image (pixels) sy = yimg*sx/ximg; % desired y-size--keep proportional t = Screen('MakeTexture',w,img); bdown=0; th = 0; % initial rotation angle (degrees) HideCursor while(~any(bdown)) % exit loop if mouse button is pressed [x,y,bdown]=GetMouse; [keyisdown,secs,keycode] = KbCheck; if(keycode(larrow)) th = th - rot_spd-1; % accelerate counterclockwise th end if(keycode(rarrow)) th = th + rot_spd+1; % accelerate clockwise th end destrect=[x-sx/2,y-sy/2,x+sx/2,y+sy/2]; Screen('DrawTexture',w,t,[],destrect,th); Screen('Flip',w); end Screen('Close',w) ShowCursor If anyone has an idea why it doesn't accelerate, I'd appreciate it very much. Ariel
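
    For what it's worth, the loop above adds a constant increment to th every frame, which produces constant angular velocity rather than acceleration; for the wheel to accelerate, the per-frame increment itself has to grow while the key is held (in the MATLAB code, something like rot_spd = rot_spd + accel inside the keycode(larrow) branch). A minimal, language-neutral simulation of that difference in Python (the frame loop, key test, and tuning constant are stand-ins, not real Psychtoolbox APIs):

        # Simulate 120 frames; 'a' is held for the first 60 of them.
        angle = 0.0   # degrees, like th in the MATLAB code
        speed = 0.0   # degrees per frame (angular velocity)
        ACCEL = 0.05  # degrees per frame^2 -- hypothetical tuning value

        for frame in range(120):
            a_held = frame < 60        # stand-in for keycode(larrow)
            if a_held:
                speed -= ACCEL         # grow the increment itself: this is the
                                       # acceleration the original loop lacks
            angle += speed             # the original adds a *constant* rot_spd here
            if frame % 30 == 29:
                print(f"frame {frame + 1}: angle {angle:9.2f}, speed {speed:+.2f}")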

    Read the article

  • Cocos2D: Problem Rotating a CCMenu

    - by srikanth rongali
    I am trying to run actions on menu items, but the actions are not running as expected. I think the code below should make the menuItem rotate by 90 degrees, but when I run it, the menuItem translates from its position to another coordinate and then returns to its original position. The complete translation takes 3 seconds. What I need is for the menuItem to rotate by 90 degrees in place over a 3-second duration. Please explain where I have gone wrong. CCMenuItemImage *targetE;//Globally declared CCMenu *menu;//Globally declared -(id)init { if( (self = [super init]) ) { isTouchEnabled = YES; CGSize windowSize = [[CCDirector sharedDirector] winSize]; targetE = [CCMenuItemImage itemFromNormalImage:@"grossinis_sister1.png" selectedImage:@"grossinis_sister1.png" target:self selector:@selector(touch:)]; menu = [CCMenu menuWithItems:targetE,nil]; id action4 = [CCRotateBy actionWithDuration:3.0 angle:90]; [menu runAction: [CCSequence actions: action4, nil]]; menu.position = ccp(windowSize.width/2 + 200, windowSize.height/2); [self addChild: menu z:10]; } return self; } @end Thank You.

    Read the article

  • Android ignores scrollbarSize

    - by Maragues
    Hi, I'm trying to modify a ListView scrollbar's width, without success. <ListView android:id="@+id/android:list" android:layout_width="fill_parent" android:layout_height="wrap_content" android:choiceMode="singleChoice" android:scrollbars="vertical" android:scrollbarTrackVertical="@drawable/scrollbar_vertical_track" android:scrollbarThumbVertical="@drawable/scrollbar_vertical_thumb" android:scrollbarSize="4px" android:clickable="true"/> First I tried using a drawable image 4px wide, but the .png was resized. Then I tried using a shape extracted from SamplesApi, without success. <shape xmlns:android="http://schemas.android.com/apk/res/android" android:width="40px"> <gradient android:startColor="#505050" android:endColor="#C0C0C0" android:angle="0"/> <corners android:radius="0dp" /> </shape> I've tried with and without the android:width attribute. There's a question on the same topic (http://stackoverflow.com/questions/2565083/width-of-a-scroll-bar-in-android), but it doesn't try anything different from what I'm already trying. As far as I know, creating my own theme shouldn't change the output. There's an example in SamplesApi (Views/ScrollBars); I tried modifying the scrollbarSize attribute there without result. I know about nine-patch images, but there's an attribute that should do what I want. Any hint? Thanks in advance.

    Read the article

  • UIView coordinate transforms on rotation during keyboard appearance

    - by SG
    iPad app; I'm trying to resize my view when the keyboard appears. It amounts to calling this code at the appropriate times: CGRect adjustedFrame = self.frame; adjustedFrame.size.height -= keyboardFrame.size.height; [self setFrame:adjustedFrame]; Using this technique for a view contained in a UISplitView-based app works in all 4 orientations, but I've since discovered that it does not work in a vanilla UIView-based app. Apparently the UISplitView is smart enough to convert the coordinates of its subviews (their frames) so that the origin is in the viewer's top left regardless of orientation, whereas a plain UIView does not correctly report these coordinates. Though the origin is reported as (0,0) in all orientations, the view's effective origin is always as if the iPad were upright. What is weird is that the view correctly rotates and draws, but it always originates in the literal device top left. How can I get the view to make its origin the top left as seen by the viewer, not the device's fixed top left? What am I missing? Please, for something so trivial, I've spent about 6 hours on this already, trying every brute-force technique and research angle I could think of. This is the original source, which doesn't work in this case: http://stackoverflow.com/questions/1951826/move-up-uitoolbar

    Read the article

  • Parsing a string, Grammar file.

    - by defn
    How would I separate the string below into its parts? What I need is to separate each <Word> token, including the angle brackets, from the rest of the string. So in the case below I would end up with several strings: 1. "I have to break up with you because " 2. "<reason>" 3. " . But let's still " 4. "<disclaimer>" 5. " ." I have to break up with you because <reason> . But let's still <disclaimer> . Below is what I currently have (it's ugly...): boolean complete = false; int begin = 0; int end = 0; while (complete == false) { if (s.charAt(end) == '<'){ stack.add(new Terminal(s.substring(begin, end))); begin = end; } else if (s.charAt(end) == '>') { stack.add(new NonTerminal(s.substring(begin, end))); begin = end; end++; } else if (end == s.length()){ if (isTerminal(getSubstring(s, begin, end))){ stack.add(new Terminal(s.substring(begin, end))); } else { stack.add(new NonTerminal(s.substring(begin, end))); } complete = true; } end++; }
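
    Assuming the non-terminals are always wrapped in matched angle brackets, a regular expression that splits while keeping the delimiters avoids the manual index bookkeeping entirely. A Python sketch of the idea (Java's Pattern/Matcher supports the same capturing-group trick; the function name is illustrative):

        import re

        def tokenize(line):
            # A capturing group in re.split keeps the delimiters themselves
            # in the result, so <...> tokens come back alongside the text.
            parts = re.split(r'(<[^>]+>)', line)
            return [p for p in parts if p]  # drop empty strings at the seams

        line = "I have to break up with you because <reason> . But let's still <disclaimer> ."
        for tok in tokenize(line):
            kind = "NonTerminal" if tok.startswith("<") else "Terminal"
            print(f"{kind}: {tok!r}")

    Run on the example sentence, this yields exactly the five strings listed above, in order.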

    Read the article

  • How can I store HTML in a Doctrine YML fixture

    - by argibson
    I am working with a CMS-type site in Symfony 1.4 (Doctrine 1.2), and one of the things frustrating me is not being able to store HTML pages in YML fixtures. Instead I have to create SQL backups of the data if I want to drop and rebuild, which is a bit of a pest when Symfony/Doctrine has a fantastic mechanism for doing exactly this. I could write a mechanism that reads in a set of HTML files for each page and fills the data in that way (or even write it as a task). But before I go down that road, I am wondering if there is any way for HTML to be stored in a YML fixture so that Doctrine can simply import it into the database. Update: I have tried using symfony doctrine:data-dump and symfony doctrine:data-load; however, despite the dump correctly creating the fixture with the HTML, the load task appears to 'skip' the value of the column with the HTML and enters everything else into the row. In the database the field doesn't show up as 'NULL' but rather empty, so I believe Doctrine is adding the value of the column as ''. Below is a sample of the YML fixture that symfony doctrine:data-dump created. I have tried running symfony doctrine:data-load against various forms of this, including removing all the escaped characters (newlines and quotes, leaving only angle brackets), but it still doesn't work. Product_69: name: 'My Product' Developer: Developer_30 tagline: 'Text that briefly describes the product' version: '2008' first_published: '' price_code: A79 summary: '' box_image: '' description: "<div id=\"featureSlider\">\n <ul class=\"slider\">\n <li class=\"sliderItem\" title=\"Summary\">\n <div class=\"feature\">\n Some text goes in here</div>\n </li>\n </ul>\n </div>\n" is_visible: true
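
    One thing worth checking, independent of Doctrine itself: YAML has literal block scalars (the | indicator) that carry multi-line HTML with no escaping at all, which is far less fragile than the escaped double-quoted string that doctrine:data-dump produces. A sketch of the syntax, using Python's PyYAML as a stand-in parser to show what a spec-compliant YAML loader returns (whether Doctrine 1.2's loader handles it the same way is an assumption to verify):

        import yaml  # PyYAML, standing in for Doctrine's YAML parser

        fixture = """
        Product_69:
          name: 'My Product'
          description: |
            <div id="featureSlider">
              <ul class="slider">
                <li class="sliderItem" title="Summary">
                  <div class="feature">Some text goes in here</div>
                </li>
              </ul>
            </div>
        """

        data = yaml.safe_load(fixture)
        # The literal block (|) preserves the raw HTML, newlines included,
        # with no need for \" or \n escapes in the fixture file.
        print(data["Product_69"]["description"])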

    Read the article

  • Draw an Inset NSShadow and Inset Stroke

    - by Alexsander Akers
    I have an NSBezierPath and I want to draw an inset shadow (similar to Photoshop) inside the path. Is there any way to do this? Also, I know you can -stroke paths, but can you stroke inside a path (similar to Stroke Inside in Photoshop)? Update: This is the code I'm using. The first part draws a white shadow downwards. The second part draws the gray gradient. The third part draws the black inset shadow. Assume path is an NSBezierPath instance and that clr(...) returns an NSColor from a hex string. NSShadow * shadow = [NSShadow new]; [shadow setShadowColor: [NSColor colorWithDeviceWhite: 1.0f alpha: 0.5f]]; [shadow setShadowBlurRadius: 0.0f]; [shadow setShadowOffset: NSMakeSize(0, 1)]; [shadow set]; [shadow release]; NSGradient * gradient = [[NSGradient alloc] initWithColorsAndLocations: clr(@"#262729"), 0.0f, clr(@"#37383a"), 0.43f, clr(@"#37383a"), 1.0f, nil]; [gradient drawInBezierPath: path angle: 90.0f]; [gradient release]; [NSGraphicsContext saveGraphicsState]; [path setClip]; shadow = [NSShadow new]; [shadow setShadowColor: [NSColor redColor]]; [shadow setShadowBlurRadius: 0.0f]; [shadow setShadowOffset: NSMakeSize(0, -1)]; [shadow set]; [shadow release]; [path stroke]; [NSGraphicsContext restoreGraphicsState]; Here you can see a gradient fill, a white drop shadow downwards, and a black inner shadow downwards.

    Read the article

  • Receiving DB update events in .NET from SQLite

    - by Dan Tao
    I've recently discovered the awesomeness of SQLite, specifically the .NET wrapper for SQLite at http://sqlite.phxsoftware.com/. Now, suppose I'm developing software that will be running on multiple machines on the same network. Nothing crazy, probably only 5 or 6 machines. And each of these instances of the software will be accessing an SQLite database stored in a file in a shared directory (is this a bad idea? If so, tell me!). Is there a way for each instance of the app to be notified if one instance updates the database file? One obvious way would be to use the FileSystemWatcher class, read the entire database into a DataSet, and then... you know... enumerate through the entire thing to see what's new... but yeah, that seems pretty idiotic, actually. Is there such a thing as a provider of SQLite updates? Does this even make sense as a question? I'm also pretty much a newbie when it comes to ADO.NET, so I might be approaching the problem from entirely the wrong angle.
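
    On the notification question: SQLite has no built-in cross-process change notification, but newer SQLite builds expose PRAGMA data_version, whose value changes whenever a different connection commits to the same database file, so each app instance can poll it cheaply instead of re-reading everything. A sketch using Python's built-in sqlite3 module (the .NET wrapper can issue the same PRAGMA; the poll interval is an arbitrary choice, and checking that the deployed SQLite is new enough to support the PRAGMA is left as an assumption). On the aside about the shared directory: SQLite's own documentation warns that its file locking is unreliable on many network file systems, so that plan deserves caution.

        import sqlite3
        import time

        def watch(path, interval=1.0):
            # PRAGMA data_version changes when *another* connection commits
            # to the file, but not for this connection's own writes.
            conn = sqlite3.connect(path)
            last = conn.execute("PRAGMA data_version").fetchone()[0]
            while True:
                time.sleep(interval)
                current = conn.execute("PRAGMA data_version").fetchone()[0]
                if current != last:
                    last = current
                    print("database changed by another instance")
                    # Re-query only the tables you care about here, rather
                    # than reloading the whole database into a DataSet.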

    Read the article

  • How to test an application for correct encoding (e.g. UTF-8)

    - by Olaf
    Encoding issues are among the topics that have bitten me most often during development. Every platform insists on its own encoding, and most likely some non-UTF-8 defaults are in play. (I usually work on Linux, which defaults to UTF-8; my colleagues mostly work on German Windows, defaulting to ISO-8859-1 or some similar Windows codepage.) I believe that UTF-8 is a suitable standard for developing an i18n-able application. However, in my experience, encoding bugs are usually discovered late (even though I'm located in Germany and we have some special characters that, along with ISO-8859-1, provide some detectable differences). I believe that developers with a completely non-ASCII character set (or those who know a language that uses such a character set) get a head start in providing test data. But there must be a way to ease this for the rest of us as well. What [technique|tool|incentive] are people here using? How do you get your co-developers to care about these issues? How do you test for compliance? Are those tests conducted manually or automatically? Adding one possible answer upfront: I've recently discovered fliptitle.com (they provide an easy way to get weird characters written "uʍop ǝpᴉsdn" *), and I'm planning on using them to provide easily verifiable UTF-8 character strings (as most of the characters used there sit at unusual binary encoding positions), but there surely must be more systematic tests, patterns, or techniques for ensuring UTF-8 compatibility/usage. Note: even though there's an accepted answer, I'd like to know of more techniques and patterns, if there are some. Please add more answers if you have more ideas. And it has not been easy choosing only one answer to accept. I've chosen the regexp answer for the least expected angle of tackling the problem, although there would be reasons to choose other answers as well. Too bad only one answer can be accepted. Thank you for your input. *) that's "upside down" written upside down, for those who cannot see those characters due to font problems
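
    In the same spirit as the regexp answer, one low-effort technique is an automated round-trip test: push strings that cannot survive a wrong codepage through every boundary of the application and assert they come back intact. A Python sketch (the sample strings are arbitrary choices, and roundtrip is a placeholder for whatever boundary is under test, such as a file, a database column, or an HTTP request):

        SAMPLES = [
            "äöüß ÄÖÜ",      # German umlauts: encoded differently in UTF-8 vs ISO-8859-1
            "uʍop ǝpᴉsdn",   # "upside down" flipped, far outside any Latin codepage
            "日本語テスト",    # CJK: multi-byte in UTF-8, unrepresentable in Latin-1
        ]

        def roundtrip(s):
            # Placeholder: replace with a real write/read through the layer
            # under test (file, DB column, HTTP request, ...).
            return s.encode("utf-8").decode("utf-8")

        def test_utf8_roundtrip():
            for s in SAMPLES:
                out = roundtrip(s)
                assert out == s, f"mojibake: {s!r} came back as {out!r}"

        if __name__ == "__main__":
            test_utf8_roundtrip()
            print("all samples survived the round trip")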

    Read the article
