Search Results

Search found 28898 results on 1156 pages for 'turn based tactics'.

Page 23/1156 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Xubuntu and other Debian-based distros slow

    - by William V
    I have a Compaq Presario SR1950NX desktop computer with the AMD64 3800+ processor and 1GB RAM, and it seems that Ubuntu, Xubuntu and Lubuntu are all laggy. Things such as clicking on menus and opening programs are slow, and the UI renders in pieces. When using the browser the system slows down considerably. I ran the top command and I notice that Xorg hits 30 to 40 percent CPU when running the browsers. I have tried these distros on a spare P4 machine and it is even worse. As long as I don't have several things open at one time I can manage to get around, although sluggishly. I also notice that I can't get Debian-based distros to install in 64-bit (crtc6 failure), only in 32-bit. Can anyone tell me what it is that I might be doing wrong? I have an integrated Nvidia card and have tried several of the recommended drivers, which sometimes result in no boot screen upon reboot. Thanks

    Read the article

  • Advise some Swing-based (open source) project to join

    - by user592704
    I am looking for open source Swing-based projects that want volunteer Java developers to join, and that credit their authors by name. I have looked at many links, but most projects for some reason hide their authors' names (showing only nicknames or the like) and all information about the development process... For example, this one project seems fine, but I still couldn't find any information about its current tasks, its developer group, or its history (tips, milestones, feedback, etc.) :( I googled a lot but found little :S So I am wondering, maybe you know of some? I dearly hope you can give me a piece of advice. Any useful comment is much appreciated.

    Read the article

  • Color-based collision detection

    - by user1486826
    I am making a game where you fly a ship around some randomly generated planets. Since I am using a for loop to draw over 5,000 planets, using the rectangle class or an oval-type class for this is not an option, because creating that many objects would severely affect performance. Bitmasking each planet would likely cause performance issues too, so the only remaining candidate is color-based collision detection, because then I don't need to attach some sort of object to everything I want to check for collisions. Is there any way to check the perimeter around the ship for a certain color?

    Read the article

  • STUN, TURN, ICE library for Java.

    - by Hemeroc
    I need to establish P2P UDP and TCP connections between two users. Both of them are behind a NAT. A little research led me to STUN, TURN and ICE. Is there any Java solution (library) other than jSTUN, which seems to work only over UDP? TURN and ICE are a much better fit for the symmetric NAT problem.

    Read the article

  • Turn off Jquery UI Accordion animation only during page load

    - by Matt H
    I'd like to turn off the accordion animation during page load, and then turn it back on after the page has loaded. Basically I've got multiple forms inside the accordion sections; when one is submitted the page gets reloaded and the relevant section is reopened. During that reload the animation is triggered, which looks a little ugly, but I like the animation when the page is not being loaded. How do I achieve this effect?
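    A minimal sketch of one way to do this, assuming the jQuery UI 1.8-era API where the accordion option is called animated (newer releases rename it to animate); the selector and the active index are placeholders:

        $(function () {
            // Build the accordion with animation disabled and the relevant
            // section (index determined server-side, e.g. 2) opened immediately.
            $("#accordion").accordion({
                animated: false,   // no animation while the page is being restored
                active: 2          // placeholder: index of the section to reopen
            });
        });

        $(window).load(function () {
            // Once the page has finished loading, turn the animation back on.
            $("#accordion").accordion("option", "animated", "slide");
        });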

    Read the article

  • How should an object that uses composition set its composed components?

    - by Casey
    After struggling with various problems, reading up on component-based systems, and reading Bob Nystrom's excellent book "Game Programming Patterns" (in particular the chapter on Components), I determined that this is a horrible idea:

    //Class intended to be inherited by all objects. Engine uses Objects exclusively.
    class Object : public IUpdatable, public IDrawable {
    public:
        Object();
        Object(const Object& other);
        Object& operator=(const Object& rhs);
        virtual ~Object() =0;

        virtual void SetBody(const RigidBodyDef& body);
        virtual const RigidBody* GetBody() const;
        virtual RigidBody* GetBody();

        //Inherited from IUpdatable
        virtual void Update(double deltaTime);

        //Inherited from IDrawable
        virtual void Draw(BITMAP* dest);

    protected:
    private:
    };

    I'm attempting to refactor it into a more manageable system. Mr. Nystrom uses the constructor to set the individual components; CHANGING these components at run-time is impossible. His classes are intended to be used from derived classes or factory methods whose constructors do not change at run-time, i.e. his Bjorn object is just a call to a factory method with a specific call to the GameObject constructor. Is this a good idea? Should the object have a default constructor and setters to facilitate run-time changes, or no default constructor and no setters, using a factory method instead? Given:

    class Object {
    public:
        //...See below for constructor implementation concerns.
        Object(const Object& other);
        Object& operator=(const Object& rhs);
        virtual ~Object() =0;

        //See below for setter concerns
        IUpdatable* GetUpdater();
        IDrawable* GetRenderer();

    protected:
        IUpdatable* _updater;
        IDrawable* _renderer;
    private:
    };

    Should the components be read-only and passed in to the constructor via:

    class Object {
    public:
        //No default constructor.
        Object(IUpdatable* updater, IDrawable* renderer);
        //...remainder is same as above...
    };

    or should a default constructor be provided and the components set at run-time?

    class Object {
    public:
        Object();
        //...
        SetUpdater(IUpdater* updater);
        SetRenderer(IDrawable* renderer);
        //...remainder is same as above...
    };

    or both?

    class Object {
    public:
        Object();
        Object(IUpdater* updater, IDrawable* renderer);
        //...
        SetUpdater(IUpdater* updater);
        SetRenderer(IDrawable* renderer);
        //...remainder is same as above...
    };
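    For reference, the factory-method route mentioned above (the book's Bjorn object being a single call that wires up the components) might look roughly like this; a hedged C++ sketch only, with hypothetical component types standing in for whatever the engine actually provides:

        //Hypothetical concrete components implementing the two interfaces.
        class PlayerUpdater : public IUpdatable { /* ... */ };
        class SpriteRenderer : public IDrawable { /* ... */ };

        //Factory method: the only place that knows which components a "Bjorn"
        //object is composed of; the Object class itself stays generic.
        Object* CreateBjorn()
        {
            return new Object(new PlayerUpdater(), new SpriteRenderer());
        }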

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer’s perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers    Cost
    1                    $500
    16                   $8,000
    256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles    Render Time    Cost
    1                         256 hours      $30.72
    16                        16 hours       $30.72
    256                       1 hour         $30.72

    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    · On-Premise
      o Windows Kinect – used together with the Kinect Explorer to create a stream of depth images.
      o Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
      o Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      o Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
    · Windows Azure
      o Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role with the local storage configuration highlighted is shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay" >
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.
    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.
    The code for this is shown below.

    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive
                ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I’ve uploaded the completed animation to YouTube; a low-resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
    · Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    · The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    · Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    · No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
    · Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    · Windows Azure accounts are typically limited to 20 cores.
    If you need to use more than this, a call to support and a credit card check will be required.
    · Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    · Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    · If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    · Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    · Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • Intel D2500HN Atom D2500 Doesn't turn on

    - by David W
    I recently bought parts from Amazon to build an embedded PC, and have assembled everything. I have: Intel D2500HN Mini-ITX Motherboard Mini-Box Pico-PSU 80 M350 Universal Min-ITX Enclosure 2GB DDR3 Memory Kinamax AD-LCD12 LCD Monitors 12V 6A 72W AC Adapter Power Supply The motherboard gets a light (on the motherboard, not on the Pico-PSU) when I plug it into the power adapter. Furthermore, I see the power switch light come on when I press the power button. However, the display doesn't turn on, and it doesn't seem that the PC is actually turned on. Since I'm seeing these lights, I know that the motherboard is getting power. Furthermore, the display VGA port is embedded into the motherboard, so that's not the issue. I'm just trying to figure out what COULD be the issue aside from a faulty motherboard. I have a diagram of the D2500HN motherboard which labels everything, and have ensured that the power LED as well as the On/Off cables are plugged into the right spots, although to be sure I've tried flipping these two cables around, and also plugging 1 cable into the other cable's spot & vice-versa. Is there anything else you folks think I may be missing, or anything else I can do to try to troubleshoot this issue before sending the motherboard back?

    Read the article

  • Turn off gzip for a location in Nginx

    - by Nyxynyx
    How can gzip be turned off for a particular location and all its sub-directories? My main site is at http://mydomain.com and I want to turn gzip off for both http://mydomain.com/foo and http://mydomain.com/foo/bar. gzip is turned on in nginx.conf. I tried turning off gzip as shown below, but the Response Headers in Chrome's dev tools still show Content-Encoding: gzip. How should gzip/output buffering be disabled properly? Attempt:

    server {
        listen 80;
        server_name www.mydomain.com mydomain.com;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        root /var/www/mydomain/public;
        index index.php index.html;

        location / {
            gzip on;
            try_files $uri $uri/ /index.php?$args ;
        }

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_read_timeout 300;
        }

        location /foo/ {
            gzip off;
            try_files $uri $uri/ /index.php?$args ;
        }
    }
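    One possible explanation, offered as a guess: gzip is applied by whichever location ends up processing the request, and requests for PHP files (including the /index.php fallback from try_files) are handled by the server-wide location ~ \.php$ block, where gzip is still on, so the gzip off in location /foo/ never takes effect for them. A hedged sketch of keeping PHP under /foo/ inside that location by using a ^~ prefix and a nested PHP block:

        location ^~ /foo/ {
            gzip off;
            try_files $uri $uri/ /index.php?$args;

            # PHP requests under /foo/ stay in this subtree, with gzip off,
            # instead of falling through to the site-wide ~ \.php$ block.
            location ~ \.php$ {
                gzip off;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_read_timeout 300;
            }
        }

    Note that the /index.php fallback still points outside /foo/, so requests rewritten to it are still served by the site-wide PHP block.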

    Read the article

  • Turn Excel spreadsheet into a formula

    - by ?????? ??????????
    I have an Excel spreadsheet that has a complex computation that is not trivial to turn into a macro or a single-cell formula. The spreadsheet has about 10 different inputs (values a human enters in different cells of the spreadsheet) and then it outputs 5 independent calculations (in 5 different cells) based on that input. The calculation uses some pre-entered data in the spreadsheet (about 100 different constants) and does some look-ups on them. Now I would like to use this whole spreadsheet as a formula on a different spreadsheet to take a set of input values and produce the corresponding set of output values. Imagine this as creating a different table with 10 columns for the input variables and 5 columns for the outputs, then copying each input into the other spreadsheet and copying the output back into the results table. For instance:
    - A1, A2, A3, ... A10 are cells where someone enters values
    - through a series of calculations B1, B2, B3, B4 and B5 are updated with some formulas
    Can I run the whole series of calculations from A1..A10 into B1..B5 without creating one massive huge formula or a VBA macro? I want to have a set of input values in 100 rows from A100, B100, C100, ... J100 onward. Then do some Excel magic that will:
    1. copy the values from A100...J100 into A1 to A10
    2. wait for the result to appear in B1 to B5
    3. copy the values from B1 to B5 into K100 to O100
    4. repeat steps 1 to 3 for all rows from 100 to 150
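    A minimal VBA sketch of steps 1 to 4, assuming the calculator cells (A1:A10, B1:B5) and the scenario table (rows 100-150, inputs in columns A-J, results in K-O) all live on the active sheet; adjust ranges and sheet references as needed:

        Sub RunScenarios()
            Dim r As Long, i As Long
            Application.Calculation = xlCalculationManual
            With ActiveSheet
                For r = 100 To 150
                    For i = 1 To 10
                        .Cells(i, "A").Value = .Cells(r, i).Value      ' step 1: A..J of row r -> A1:A10
                    Next i
                    .Calculate                                          ' step 2: recalculate so B1:B5 update
                    For i = 1 To 5
                        .Cells(r, 10 + i).Value = .Cells(i, "B").Value  ' step 3: B1:B5 -> K..O of row r
                    Next i
                Next r                                                  ' step 4: next scenario row
            End With
            Application.Calculation = xlCalculationAutomatic
        End Sub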

    Read the article

  • How to turn off Tomcat logging in Eclipse?

    - by kirdie
    I develop a Vaadin project in Eclipse that I start through Tomcat 6, which gets started directly by Eclipse. Tomcat prints an enormous number of log messages on each start, though, which makes it hard to see the output of my own application. I have already replaced all log levels in tomcat6/conf/logging.properties with WARNING (e.g. java.util.logging.ConsoleHandler.level = WARNING) but I still get many INFO messages. How can I turn this off or restrict the log messages to WARNING? An example of the messages:
    Okt 26, 2012 12:16:36 PM org.apache.catalina.core.AprLifecycleListener init INFO: Loaded APR based Apache Tomcat Native library 1.1.24.
    Okt 26, 2012 12:16:36 PM org.apache.catalina.core.AprLifecycleListener init INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
    Okt 26, 2012 12:16:36 PM org.apache.tomcat.util.digester.SetPropertiesRule begin WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.j2ee.server:saim' did not find a matching property.
    Okt 26, 2012 12:16:37 PM org.apache.coyote.http11.Http11AprProtocol init INFO: Initializing Coyote HTTP/1.1 on http-8080
    Okt 26, 2012 12:16:37 PM org.apache.coyote.ajp.AjpAprProtocol init INFO: Initializing Coyote AJP/1.3 on ajp-8009
    Okt 26, 2012 12:16:37 PM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 879 ms
    Okt 26, 2012 12:16:37 PM org.apache.catalina.core.StandardService start INFO: Starting service Catalina
    Okt 26, 2012 12:16:37 PM org.apache.catalina.core.StandardEngine start INFO: Starting Servlet Engine: Apache Tomcat/6.0.32
    Okt 26, 2012 12:16:37 PM org.apache.coyote.http11.Http11AprProtocol start INFO: Starting Coyote HTTP/1.1 on http-8080
    Okt 26, 2012 12:16:37 PM org.apache.coyote.ajp.AjpAprProtocol start INFO: Starting Coyote AJP/1.3 on ajp-8009
    Okt 26, 2012 12:16:37 PM org.apache.catalina.startup.Catalina start INFO: Server startup in 568 ms
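    One thing worth checking, offered as a guess rather than a definite answer: when Tomcat is launched by Eclipse (WTP), the JVM is not necessarily reading the tomcat6/conf/logging.properties you edited. In the server's launch configuration (double-click the server, then "Open launch configuration" > Arguments > VM arguments) you can point Tomcat's JULI logging at that file explicitly; the path below is a placeholder:

        -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
        -Djava.util.logging.config.file=/path/to/tomcat6/conf/logging.properties

    In that file, raising the level of the chatty packages, in addition to the handler level, should then silence the INFO records shown above, for example:

        org.apache.catalina.level = WARNING
        org.apache.coyote.level = WARNING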

    Read the article

  • Excel-based Performance Reviews transformed into Web Application for Performance Management

    - by Webgui
    HR TMS provides enterprise talent management solutions for healthcare, retail and corporate customers, focusing on performance management, compensation management and succession planning. As the competency of nurses and other healthcare workers is critical, the government, via the Joint Commission (JCAHO), tightly monitors their performances. On a regular basis, accredited healthcare organizations are required to review employee performance using a complex set of position dependent job descriptions and competencies. Middlesex Hospital managed their performance reviews for 2500 employees manually with Excel spreadsheets. This was a labor intensive process that proved to be error prone and difficult to manage. Reviews were not always where they belonged and the job descriptions and competencies for healthcare workers were difficult to keep accurate and up to date. As a result, when the Joint Commission visited and requested to see specific review documentation, there was intense stress. Middlesex Hospital needed to automate their review process, pull in the position information from those spreadsheets and be able to deliver reviews online. Users needed to have online access to those reviews from a standard browser. Although the manual system had its issues, it did have the advantage of being very comprehensive and familiar to users. The decision was made to provide a web-based solution that leveraged the look and feel of those spreadsheets in order to insure user acceptance of the system and minimize the training needed. Read the full article here >

    Read the article

  • Trying to make a custom Arch-based Linux distro with larch

    - by strangeronyourtrain
    I'm all set up with a spiffy Arch installation in a virtual machine, and now I want to turn it into a live ISO. When I heard about larch, I thought it would be the perfect tool to turn my existing installation into something I could distribute. However, I can't get larch to install properly. I followed the installation instructions on the website, which said to download and run the larch-setup script. When I run it, though, it installs the larch profiles and libraries but doesn't install the executable programs. Here's a screenshot of the errors I get when larch-setup tries to install the executables. I'd greatly appreciate any clues to what is going wrong here, or suggestions for alternative ways to turn my customized Arch installation into an ISO! Thanks!

    Read the article

  • Enable compiz on intel core i5 (Nvidia GT330M) based laptop

    - by Eshwar
    Hi, I am trying to enable Compiz on my laptop via Desktop Effects but it does not allow it. I modified the xorg.conf file as described on the Compiz wiki, but still no luck. So can someone just tell me how to enable the Compiz desktop on an Intel i5 based system? This is an Arrandale processor, so it's got the graphics bit on the processor itself. My system also has a discrete graphics card (Nvidia GT330M - yup, it's one of those hybrid graphics combos, not Optimus). As far as I know the Nvidia GPU is not being used, since the Intel one is enabled and there is no BIOS route to disable it. The laptop is a Dell Vostro 3700 with BIOS version A10. I did lots of Google searches about Intel and Compiz, etc., but did not find a single conclusive guide on how to enable it, so my guess is it should work out of the box - but it doesn't. glxinfo gives me:
    name of display: :0.0
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Error: couldn't find RGB GLX visual or fbconfig
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    3 GLXFBConfigs:
    visual x bf lv rg d st colorbuffer ax dp st accumbuffer ms cav
    id dep cl sp sz l ci b ro r g b a bf th cl r g b a ns b eat
    ----------------------------------------------------------------------
    Segmentation fault
    lspci gives me:
    00:02.0 VGA compatible controller [0300]: Intel Corporation Core Processor Integrated Graphics Controller [8086:0046] (rev 18)
    01:00.0 VGA compatible controller [0300]: nVidia Corporation GT216 [GeForce GT 330M] [10de:0a29] (rev a2)

    Read the article

  • Ask the Readers: Social Websites – Browser-Based Interface versus Desktop Clients

    - by Asian Angel
    Most people have a favorite social website that they are active on each day, but have different methods for interacting with their friends there. This week we would like to know if you prefer using a browser-based interface or a desktop client to interact with your chosen social services. Photo by Asian Angel. Social services can be a lot of fun unless your method of access comes with more frustrations than perks. Perhaps your favorite social service has changed the layout or the website itself is just too busy or full of “junk” for your tastes. Then there are the times when the website may experience problems and fail to work smoothly.

    Read the article

  • GoodFil.ms Suggests New Movies Based on Friends’ Picks

    - by Jason Fitzpatrick
    Goodfil.ms is a movie suggestion engine that doesn’t suggest movies based on what the critics say or how many anonymous internet points a movie has received, but instead takes into account your personal tastes and the tastes of your friends. From the Goodfil.ms FAQ: Films are social. The best way to find movies is through the people you know. We’ve designed Goodfilms from the ground up to show you what your existing friends are watching and rating, and to focus on showing you what the people around you think about films instead of a random grab bag of “internet voters” or highly specialised critics. Their FAQ file is filled with links to detailed posts about the specifics of the process, so if you’re the curious type we strongly suggest checking it out. In addition to the social-ranking side of Goodfil.ms there’s an excellent “Recent Releases” section for major streaming services like iTunes, Netflix, and Amazon Prime; even if you don’t sign up for the social side of the site you can still keep an eye on the best new releases across the board.

    Read the article

  • Reflections based on distance from plane

    - by Andrea Benedetti
    Consider, for example, a surface like a volleyball court: we can see that the legs and shoes of the players are reflected, with a blur effect, but their bodies and the stadium are not (nor is any object that is not near the court). I've already made a reflection effect, but it works as a specular reflection, and I need to achieve an effect like the photo above. So, I would like to make a reflection that is based on the distance between the object and the plane; in this manner a close object would reflect more than an object that is positioned far away from the plane. What is the best way to achieve this effect? My first idea was to use the depth value (taken from the reflected camera), and use that value to blend between the reflection and the court. But I don't know if that is a correct approach. Edit: as the rendering engine I use Ogre, which already provides a reflection system: reflecting the camera through a plane (obviously I can select the models to draw from the reflected camera). After a render-to-texture pass I can blend the reflected texture with the original plane. So, if possible, I'm looking for a way that best suits my system.

    Read the article

  • 45° Slopes in a Tile-based 2D platformer

    - by xNidhogg
    I want to have simple 45° slopes in my tile-based platformer; however, I just can't seem to get the algorithm down. Please take a look at the code and video, maybe I'm missing the obvious?

    //collisionRectangle is the collision rectangle of the player with
    //origin at the top left and width and height
    //wantedPosition is the new position the player will be set to.
    //this is determined elsewhere by checking the bottom center point of the players rect
    if(_leftSlope || _rightSlope)
    {
        //Test bottom center point
        var calculationPoint = new Vector2(collisionRectangle.Center.X, collisionRectangle.Bottom);

        //Get the collision rectangle of the tile, origin is top-left
        Rectangle cellRect = _tileMap.CellWorldRectangle(
            _tileMap.GetCellByPixel(calculationPoint));

        //Calculate the new Y coordinate depending on if its a left or right slope
        //CellSize = 8
        float newY = _leftSlope
            ? (calculationPoint.X % CellSize) + cellRect.Y
            : (-1 * (calculationPoint.X % CellSize) - CellSize) + cellRect.Y;

        //reset variables so we dont jump in here next frame
        _leftSlope = false;
        _rightSlope = false;

        //now change the players Y according to the difference of our calculation
        wantedPosition.Y += newY - calculationPoint.Y;
    }

    Video of what it looks like: http://youtu.be/EKOWgD2muoc
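    Not a fix for the snippet above, just one common formulation for comparison; a hedged sketch that assumes screen coordinates (Y grows downward), slope tiles that span a full cell, and that the bottom-centre point is already inside the slope tile. Which case corresponds to _leftSlope or _rightSlope depends on how the tile set is defined:

        float xInTile = calculationPoint.X % CellSize;

        // '/' tile (floor rises to the right): at xInTile == 0 the floor is the
        // cell's bottom edge, at xInTile == CellSize it reaches the cell's top edge.
        float floorYRisingRight = cellRect.Bottom - xInTile;

        // '\' tile (floor rises to the left): mirrored.
        float floorYRisingLeft = cellRect.Top + xInTile;

        // Snap the player's feet onto whichever slope surface the tile represents.
        float newY = _rightSlope ? floorYRisingRight : floorYRisingLeft;
        wantedPosition.Y += newY - calculationPoint.Y;

    Comparing this against the two branches in the snippet above, in particular the sign on the CellSize term, may help locate where they diverge.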

    Read the article

  • Procedural Generation of a Tile-Based 2D World

    - by Matthias
    I am writing a 2D game that uses tile-based top-down graphics to build the world (i.e. the ground plane). Built manually, this works fine. Now I want to generate the ground plane procedurally at run time. In other words: I want to place the tiles (their textures) at random, on the fly. Of course I cannot create an endless ground plane, so I need to restrict how far from the player character (on which the camera focuses) I procedurally generate the ground floor. My approach would be like this: I have a 2D grid that stores all tiles of the floor at their correct x/y coordinates within the game world. When the player moves the character, and therefore also the camera, I constantly check whether there are empty locations in my x/y map within a maximum distance from the character, i.e. cells in my virtual grid that have no tile set. In such a case I place a new tile there. That way the player would always see the ground plane without gaps or empty spots. I guess that would work, but I am not sure whether it is the best approach. Is there a better alternative, maybe even a best practice for my case?
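    A minimal C# sketch of that approach, assuming a dictionary-backed grid and a hypothetical tileTextureCount; a real terrain generator would usually replace the plain Random with a noise function so that neighbouring tiles stay visually coherent:

        // Grid of placed tiles, keyed by cell coordinates; a missing key means "not generated yet".
        Dictionary<(int x, int y), int> ground = new Dictionary<(int x, int y), int>();
        Random rng = new Random();

        // Call this whenever the player (and therefore the camera) moves to a new tile.
        void EnsureGroundAround(int playerTileX, int playerTileY, int radiusInTiles, int tileTextureCount)
        {
            for (int x = playerTileX - radiusInTiles; x <= playerTileX + radiusInTiles; x++)
            {
                for (int y = playerTileY - radiusInTiles; y <= playerTileY + radiusInTiles; y++)
                {
                    if (!ground.ContainsKey((x, y)))
                        ground[(x, y)] = rng.Next(tileTextureCount);   // pick a random tile texture id
                }
            }
        }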

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?
    Answer
    -------
    SOX compliance with regard to IT:
    1. Requires auditing of data changes done by who, what, when
       a. Audit trail profiles can be set up for key financial series and viewed in audit trail reports.
       b. One piece of functionality we do not have, which is typically asked for, is user login history. We have only active sessions; history is not available.
    2. Segregation of duties
       a. With respect to TPM, you could have the deduction and financial analyst for settlement be different from the promotion creator, promotion approver or sales team.
       b. The budget approver for funds can be different from the funds consumer.
       c. The promotion creator can be different from the promotion approver.
       d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.
    One additional requirement is transparency of forward commitments entered into with retailers / distributors for trade spending and promotions. Outside of Demantra - Consumer Goods Trade Funds Analytics.

    Read the article

  • Ajax-based data loading using jQuery.load() function in ASP.NET

    - by hajan
    In general, jQuery has made Ajax very easy by providing a low-level interface, shorthand methods and helper functions, which all give us great features for handling Ajax requests in our ASP.NET webs. The simplest way to load data from the server and place the returned HTML in the browser is to use the jQuery.load() function. The very first time I started playing with this function, I didn't believe it would work that easily. What you can do with this method is simply pass a given url as a parameter to the load function and display the content in the selector onto which this function is chained. So, to clear this up, let me give you one very simple example:
    $("#result").load("AjaxPages/Page.html");
    As you can see from the above image, after clicking the ‘Load Content’ button which fires the above code, we are making an Ajax GET and the response is the entire page HTML. So, rather than using (old) iframes, you can now use this method to load other HTML pages inside the page from which the script with the load function is called. This method is equivalent to the jQuery Ajax GET method $.get(url, data, function () { }), only that $.load() is a method rather than a global function and has an implicit callback function. To provide a callback to your load, you can simply add a function as the second parameter, see example:
    $("#result").load("AjaxPages/Page.html", function () { alert("Page.html has been loaded successfully!") });
    Since load is chained to the given jQuery selector where the content should be loaded, the $.load() function won't execute if no such selector is found within the DOM. Another interesting thing to mention, and maybe you've asked yourself already, is how we know whether a GET or a POST is executed. It's simple: if we provide 'data' as the second parameter to the load function, then POST is used, otherwise GET is assumed.
    POST
    $("#result").load("AjaxPages/Page.html", { "name": "hajan" }, function () { ////callback function implementation });
    GET
    $("#result").load("AjaxPages/Page.html", function () { ////callback function implementation });
    Another important feature that $.load() has ($.get() does not) is loading page fragments. Using jQuery's selector capability, you can do this:
    $("#result").load("AjaxPages/Page.html #resultTable");
    In our Page.html, the content now is: So, after the call, only the table with id resultTable will load in our page. As you can see, we have loaded only the table with id resultTable (1) inside the div with id result (2). This is a great feature since we won't need to filter the returned HTML content again in our callback function on the master page from which we have called the $.load() function. Besides the fact that you can simply call static HTML pages, you can also use this function to load dynamic ASPX pages or ASP.NET ASHX handlers. Let's say we have another page (ASPX) in our AjaxPages folder with the name GetProducts.aspx. This page has a repeater control (or anything you want to bind dynamic server-side content to) that displays a set of data. Now, I want to filter the data in the repeater based on the query string parameter provided when calling that page. For example, if I call the page using GetProducts.aspx?category=computers, it will load only computers… so this will filter the products automatically by the given category.
The example ASPX code of GetProducts.aspx page is: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="GetProducts.aspx.cs" Inherits="WebApplication1.AjaxPages.GetProducts" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <table id="tableProducts"> <asp:Repeater ID="rptProducts" runat="server"> <HeaderTemplate> <tr> <th>Product</th> <th>Price</th> <th>Category</th> </tr> </HeaderTemplate> <ItemTemplate> <tr> <td> <%# Eval("ProductName")%> </td> <td> <%# Eval("Price") %> </td> <td> <%# Eval("Category") %> </td> </tr> </ItemTemplate> </asp:Repeater> </ul> </div> </form> </body> </html> The C# code-behind sample code is: public partial class GetProducts : System.Web.UI.Page { public List<Product> products; protected override void OnInit(EventArgs e) { LoadSampleProductsData(); //load sample data base.OnInit(e); } protected void Page_Load(object sender, EventArgs e) { if (Request.QueryString.Count > 0) { if (!string.IsNullOrEmpty(Request.QueryString["category"])) { string category = Request.QueryString["category"]; //get query string into string variable //filter products sample data by category using LINQ //and add the collection as data source to the repeater rptProducts.DataSource = products.Where(x => x.Category == category); rptProducts.DataBind(); //bind repeater } } } //load sample data method public void LoadSampleProductsData() { products = new List<Product>(); products.Add(new Product() { Category = "computers", Price = 200, ProductName = "Dell PC" }); products.Add(new Product() { Category = "shoes", Price = 90, ProductName = "Nike" }); products.Add(new Product() { Category = "shoes", Price = 66, ProductName = "Adidas" }); products.Add(new Product() { Category = "computers", Price = 210, ProductName = "HP PC" }); products.Add(new Product() { Category = "shoes", Price = 85, ProductName = "Puma" }); } } //sample Product class public class Product { public string ProductName { get; set; } public decimal Price { get; set; } public string Category { get; set; } } Mainly, I just have sample data loading function, Product class and depending of the query string, I am filtering the products list using LINQ Where statement. If we run this page without query string, it will show no data. If we call the page with category query string, it will filter automatically. 
    Example: /AjaxPages/GetProducts.aspx?category=shoes The result will be: or if we use category=computers, like this /AjaxPages/GetProducts.aspx?category=computers, the result will be: So, now using the jQuery.load() function, we can call this page with the provided query string parameter and load the appropriate content… The ASPX code in our Default.aspx page, which will call the AjaxPages/GetProducts.aspx page using the jQuery.load() function, is:

    <asp:RadioButtonList ID="rblProductCategory" runat="server">
        <asp:ListItem Text="Shoes" Value="shoes" Selected="True" />
        <asp:ListItem Text="Computers" Value="computers" />
    </asp:RadioButtonList>
    <asp:Button ID="btnLoadProducts" runat="server" Text="Load Products" />
    <!-- Here we will load the products, based on the radio button selection-->
    <div id="products"></div>
    </form>

    The jQuery code:

    $("#<%= btnLoadProducts.ClientID %>").click(function (event) {
        event.preventDefault(); //preventing button's default behavior
        var selectedRadioButton = $("#<%= rblProductCategory.ClientID %> input:checked").val();
        //call GetProducts.aspx with the category query string for the selected category in radio button list
        //filter and get only the #tableProducts content inside #products div
        $("#products").load("AjaxPages/GetProducts.aspx?category=" + selectedRadioButton + " #tableProducts");
    });

    The end result: You can download the code sample from here. You can read more about the jQuery.load() function here. I hope this was a useful blog post for you. Please do let me know your feedback. Best Regards, Hajan
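    One small addition that often proves useful with $.load(): the callback receives the response text, a status string and the XHR object, so a failed request (for example a 404 or 500 from GetProducts.aspx) can be detected and reported instead of leaving the div empty. A brief example:

    $("#products").load("AjaxPages/GetProducts.aspx?category=" + selectedRadioButton + " #tableProducts",
        function (response, status, xhr) {
            if (status === "error") {
                $("#products").html("Sorry, loading the products failed: " + xhr.status + " " + xhr.statusText);
            }
        });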

    Read the article

  • Pygame Tile-Based Character movement speed

    - by Ryan
    Thanks for taking the time to read this. Right now I'm making a really basic tile-based game. The map is a large number of 16x16 tiles, and the character image is 16x16 as well. My character has its own class that is an extension of the sprite class, and the x and y position is saved in terms of the tile position. Note that I am fairly new to pygame. My question is: I am planning to have character movement restricted to one tile at a time, and I'm not sure how to make it so that, even if the player hits the directional keys (WASD or arrow keys) dozens of times quickly, the character will only move from tile to tile at a certain speed. How could I implement this generally with pygame? (Similar to the movement in games like Pokemon or NexusTK.) Edit: I should probably note that I want the player to only be able to end a movement on a tile; he couldn't stop moving halfway between two tiles, for example. Thanks for your time! Ryan
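    A minimal pygame-flavoured sketch of one way to do this, assuming 16x16 tiles and a player class that stores its pixel position; all names are placeholders. The idea is to ignore input while a step is in progress and to keep stepping at a fixed speed until the sprite lands exactly on the next tile:

        import pygame

        TILE = 16
        SPEED = 2  # pixels per frame; should divide TILE evenly so the player ends on a tile

        class Player(pygame.sprite.Sprite):
            def __init__(self, tile_x, tile_y):
                super().__init__()
                self.x, self.y = tile_x * TILE, tile_y * TILE
                self.moving = False
                self.target = (self.x, self.y)

            def try_move(self, dx, dy):
                # Called from the key-handling code; ignored while a step is in progress.
                if not self.moving:
                    self.target = (self.x + dx * TILE, self.y + dy * TILE)
                    self.moving = True

            def update(self):
                # Called once per frame; steps toward the target tile and only
                # unlocks input again once the position is exactly tile-aligned.
                if self.moving:
                    tx, ty = self.target
                    self.x += max(-SPEED, min(SPEED, tx - self.x))
                    self.y += max(-SPEED, min(SPEED, ty - self.y))
                    if (self.x, self.y) == self.target:
                        self.moving = False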

    Read the article
