Search Results

Search found 1575 results on 63 pages for 'pixel'.

Page 37/63 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Use ImageMagick to place an image inside a larger canvas

    - by Mike
    Getting started with ImageMagick and trying to find a way to do this... If an image is less than 50 pixels tall or 50 pixels wide, I'd like to place it (unscaled) in the horizontal/vertical center of a new 50x50 pixel canvas on top of a white background - and save that as the new image. Anyone know if this is possible with ImageMagick? Thanks!
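    One way to do this, assuming ImageMagick's convert tool is on the PATH and using placeholder file names, is to let -background, -gravity and -extent pad the image onto a 50x50 white canvas without scaling it. A minimal C# sketch that shells out to the command:

    using System.Diagnostics;

    class CenterOnCanvas
    {
        static void Main()
        {
            // -background white fills the new area, -gravity center keeps the
            // original image unscaled in the middle, -extent 50x50 sets the canvas size.
            var info = new ProcessStartInfo
            {
                FileName = "convert",
                Arguments = "input.png -background white -gravity center -extent 50x50 output.png",
                UseShellExecute = false
            };
            using (var process = Process.Start(info))
            {
                process.WaitForExit();
            }
        }
    }

    Newer ImageMagick releases expose the same options through the magick command instead of convert.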

    Read the article

  • Confused with web page layout.

    - by novicedeveloper
    I don't know if this is the right forum to ask such a question. I am new to web design and development. I am confused about page layout and pixels. Where can I find good information on page design, page layout, and pixel measurement?

    Read the article

  • crosshair tool, is there one? Visual studio 2008

    - by flavour404
    Hi, I am doing some image sampling. My question is: is there a 'crosshair' control in Visual Studio? I want to have several instances on a single form, be able to move them around and then sample those points, returning the color of the pixel at the center of each crosshair. Is there already a tool that will do this, before I go and write one? Thanks, R.
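    Whatever control ends up drawing the crosshair, the sampling part can be done with System.Drawing. A rough sketch (the method name is just for illustration) that copies the single screen pixel under a point into a 1x1 bitmap and reads its color:

    using System.Drawing;

    static class PixelSampler
    {
        // Copies the single pixel at the given screen coordinates into a 1x1
        // bitmap and returns its color.
        public static Color SampleScreenPixel(int screenX, int screenY)
        {
            using (var bitmap = new Bitmap(1, 1))
            using (var graphics = Graphics.FromImage(bitmap))
            {
                graphics.CopyFromScreen(screenX, screenY, 0, 0, new Size(1, 1));
                return bitmap.GetPixel(0, 0);
            }
        }
    }

    Calling SampleScreenPixel with the screen coordinates of each crosshair's center would return the color under it.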

    Read the article

  • Is canvas security model ignoring access-control-allow-origin headers?

    - by luklatlug
    It seems that even if you set the access-control-allow-origin header to allow access from mydomain.org to an image hosted on example.org, the canvas's origin-clean flag gets set to false, and trying to manipulate that image's pixel data triggers a security exception. Shouldn't the canvas obey the access-control-allow-origin header and allow access to the image's data without throwing an exception?

    Read the article

  • image processing problem

    - by riyana
    I'm working on detecting the shape of an object. I have a binary image where the background is white pixels and the foreground/object is black pixels. Now I need to detect the shape of the area where the black pixels are. How can I do it? The shape may be a man/car/box etc. Please help.
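    Classifying the shape (man/car/box) generally needs contour extraction and shape descriptors, for example OpenCV's findContours plus Hu moments. As a starting point, here is a small sketch (the method name and the brightness threshold are assumptions) that isolates the black region by scanning the binary image and returning its bounding box:

    using System.Drawing;

    static class BlobBounds
    {
        // Scans a binary image and returns the bounding box of the black
        // (foreground) pixels, or null if none are found.
        public static Rectangle? FindForegroundBounds(Bitmap image)
        {
            int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
            for (int y = 0; y < image.Height; y++)
            {
                for (int x = 0; x < image.Width; x++)
                {
                    if (image.GetPixel(x, y).GetBrightness() < 0.5f) // black = object
                    {
                        if (x < minX) minX = x;
                        if (y < minY) minY = y;
                        if (x > maxX) maxX = x;
                        if (y > maxY) maxY = y;
                    }
                }
            }
            if (maxX < 0) return null;
            return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
        }
    }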

    Read the article

  • How do I find what text/HTML is on screen in a UIWebview?

    - by Grant M
    I would like to know what the first piece of text/HTML currently showing on screen is, or more generally, at what pixel location a particular tag or piece of text sits in the UIWebView. I know that I can use window.pageYOffset to get the scroll position of the UIWebView, but how do I find out what text or HTML item is there?

    Read the article

  • Dynamic HTML body width (over 100%)

    - by danixd
    I am creating a horizontal webpage and I am trying to make the body dynamically expand according to the content within it. I am building the website here: http://www.obliquo.co.uk/ As you can see it all works, but I am forced to set a huge body width in pixels. The content on the page will be changing all the time. If I don't set a width in pixels, the divs naturally start bumping vertically.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
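A minimal sketch of the scene-generation idea described above: nested loops that write PolyRay object definitions for a grid of pins. The spacing, radius and file name here are placeholders, and the second, offset grid that gives the real pin board its interleaved layout is left out for brevity.

using System.IO;
using System.Text;

static class PinBoardBuilder
{
    // Writes a PolyRay scene fragment containing a simple grid of pins,
    // each built from a sphere and a cylinder with the "chrome" texture.
    public static void WritePins(string path, int columns, int rows, float spacing)
    {
        var scene = new StringBuilder();
        for (int row = 0; row < rows; row++)
        {
            for (int col = 0; col < columns; col++)
            {
                float x = col * spacing;
                float y = row * spacing;
                scene.AppendLine($"object {{ sphere <{x:0.00}, {y:0.00}, 0.00>, 0.20 chrome }}");
                scene.AppendLine($"object {{ cylinder <{x:0.00}, {y:0.00}, 0.00>, <{x:0.00}, {y:0.00}, -0.80>, 0.10 chrome }}");
            }
        }
        File.WriteAllText(path, scene.ToString());
    }
}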
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used. Windows Kinect The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer’s perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
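As a rough illustration of the pixel-to-pin mapping described above (this is not the actual Animation Creator code; the linear mapping and the names are assumptions), each pixel of the depth image can be read and its brightness scaled into a Z displacement for the corresponding pin:

using System.Drawing;

static class PinDepthMap
{
    // Converts a depth image into per-pin Z displacements.
    // maxTravel is how far a fully white pin slides out; black pins stay at zero.
    public static float[,] BuildPinOffsets(Bitmap depthImage, float maxTravel)
    {
        var offsets = new float[depthImage.Width, depthImage.Height];
        for (int y = 0; y < depthImage.Height; y++)
        {
            for (int x = 0; x < depthImage.Width; x++)
            {
                // GetBrightness() returns 0.0 for black and 1.0 for white.
                offsets[x, y] = depthImage.GetPixel(x, y).GetBrightness() * maxTravel;
            }
        }
        return offsets;
    }
}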
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The figures below show the cost of servers for creating a render farm, assuming a cost of $500 per server:

  • 1 server: $500
  • 16 servers: $8,000
  • 256 servers: $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below:

  • 1 worker role: 256 hours, $30.72
  • 16 worker roles: 16 hours, $30.72
  • 256 worker roles: 1 hour, $30.72

Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

  • On-Premise
    - Windows Kinect – used together with the Kinect Explorer to create a stream of depth images.
    - Animation Creator – uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue.
    - Process Monitor – queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader – polls the image queue and downloads the rendered animation files once they are complete.
  • Windows Azure
    - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in balancing the load across the 256 worker role instances. The first 16 instances that were deployed have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each. Completed Animation I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

  • Windows Azure can provide massive compute power, on demand, in a matter of minutes.
  • The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
  • Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
  • No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios I found a render farm a fairly simple scenario to implement in Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

  • Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
  • Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
  • Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
  • Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
  • If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
  • Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
  • Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
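As a rough sketch of the billing arithmetic behind the tips above (the hourly rate is a placeholder; the $1.92 and $30.72 figures quoted earlier correspond to roughly $0.12 per instance hour), each instance is charged for every full clock hour in which it is deployed, so wall-clock time is rounded up per instance:

static class RenderFarmCost
{
    // Estimates the compute charge: every instance is billed for each full
    // clock hour in which it is deployed, so wall-clock time is rounded up.
    public static decimal Estimate(int instanceCount, double wallClockHours, decimal hourlyRate)
    {
        int billedHoursPerInstance = (int)System.Math.Ceiling(wallClockHours);
        return instanceCount * billedHoursPerInstance * hourlyRate;
    }
}

With 256 instances, one billed hour and a $0.12 rate this reproduces the $30.72 figure from the cost list.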

    Read the article

  • FFSERVER - streaming an ASF video as Webm output

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam ASF live stream to a ffserver to output a webm video format. The server starts successfully but the ffserver commands used to feed the ffserver fails and generates a core dump. Environment Debian 7.5 ffmpeg 2.2 Input stream $ ffprobe http://account:password@webcam/videostream.asf Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s ffserver configuration my ffserver configuration is : Port 8091 RTSPPort 554 BindAddress 192.168.1.62 MaxHTTPConnections 1000 MaxClients 100 MaxBandwidth 1000 CustomLog - <Feed webcam.ffm> File /tmp/webcam.ffm FileMaxSize 500M ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Feed> <Stream webcam.webm> # Output stream URL definition Feed webcam.ffm # Feed from which to receive video Format webm # Audio settings AudioCodec vorbis AudioBitRate 64 # Audio bitrate # Video settings VideoCodec libvpx VideoSize 640x480 # Video resolution VideoFrameRate 25 # Video FPS AVOptionVideo flags +global_header # Parameters passed to encoder # (same as ffmpeg command-line parameters) AVOptionVideo cpu-used 0 AVOptionVideo qmin 10 AVOptionVideo qmax 42 AVOptionVideo quality good AVOptionAudio flags +global_header PreRoll 15 StartSendOnKey # VideoBitRate 32 # Video bitrate </Stream> <Stream status.html> Format status # Only allow local people to get the status ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Stream> ffmpeg feed I run the following command that fails $ ffmpeg -i http://account:password@webcam/videostream.asf http://192.168.1.62:8091/webcam.ffm http://192.168.1.62:8091/webcam.ffm Input #0, asf, from 'http://account:password@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly Segmentation fault I tryed $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://192.168.1.62:8091/webcam.ffm But it raises the same error. Thanks for your help Edit For an easy testing (I thought), I tried to publish the whole ASF stream as is, meaning connecting the ASF webcam output stream to the ffserver that outputs ASF format too. And thus with mirrored encoding so I changed the ffserver configuration to ... <Stream webcam.asf> Feed webcam.ffm Format asf VideoFrameRate 25 VideoSize 640X480 VideoBitRate 256 VideoBufferSize 1000 VideoGopSize 30 AudioBitRate 32 StartSendOnKey </Stream> ... 
And the output is now : Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm': Metadata: creation_time : now encoder : Lavf55.40.100 Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s Metadata: encoder : Lavc55.64.100 wmav2 Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc Metadata: Stream mapping: Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2) Stream #0:0 -> #0:1 (mjpeg -> msmpeg4) Press [q] to stop, [?] for help Segmentation fault I can't even forward the stream.

    Read the article

  • ffserver-2.2 - streaming an ASF video as Webm output with ffserver on Debian 7.5

    - by Emmanuel Brunet
    I'm trying to stream an IP webcam ASF live stream to a ffserver to output a webm video format. The server starts successfully but the ffserver commands used to feed the ffserver fails and generates a core dump. Input stream $ ffprobe http://account:password@webcam/videostream.asf Input #0, asf, from 'http://account:password@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, 1 channels, s16p, 32 kb/s ffserver configuration my ffserver configuration is : Port 8091 RTSPPort 554 BindAddress 192.168.1.62 MaxHTTPConnections 1000 MaxClients 100 MaxBandwidth 1000 CustomLog - <Feed webcam.ffm> File /tmp/webcam.ffm FileMaxSize 500M ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Feed> <Stream webcam.webm> # Output stream URL definition Feed webcam.ffm # Feed from which to receive video Format webm # Audio settings AudioCodec vorbis AudioBitRate 64 # Audio bitrate # Video settings VideoCodec libvpx VideoSize 640x480 # Video resolution VideoFrameRate 25 # Video FPS AVOptionVideo flags +global_header # Parameters passed to encoder # (same as ffmpeg command-line parameters) AVOptionVideo cpu-used 0 AVOptionVideo qmin 10 AVOptionVideo qmax 42 AVOptionVideo quality good AVOptionAudio flags +global_header PreRoll 15 StartSendOnKey # VideoBitRate 32 # Video bitrate </Stream> <Stream status.html> Format status # Only allow local people to get the status ACL allow localhost ACL allow 192.168.0.0 192.168.255.255 </Stream> ffmpeg feed I run the following command that fails $ ffmpeg -i http://account:password@webcam/videostream.asf http://ffserver_ip:port/webcam.ffm http://192.168.1.62:8091/webcam.ffm Input #0, asf, from 'http://account:password@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 25 tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x36a80c0] deprecated pixel format used, make sure you did set range correctly Segmentation fault I tryed $ ffmpeg -i http://account:password@webcam/videostream.asf -pix_fmt yuv420p http://ffserver_ip:port/webcam.ffm But it raises the same error. Thanks for your help Edit For an easy testing (I thought), I tried to publish the whole ASF stream as is, meaning connecting the ASF webcam output stream to the ffserver that outputs ASF format too. And thus with mirrored encoding so I changed the ffserver configuration to ... <Stream webcam.asf> Feed webcam.ffm Format asf VideoFrameRate 25 VideoSize 640X480 VideoBitRate 256 VideoBufferSize 1000 VideoGopSize 30 AudioBitRate 32 StartSendOnKey </Stream> ... 
And the output is now : Input #0, asf, from 'http://admin:alpha1237@webcam/videostream.asf': Duration: N/A, start: 0.000000, bitrate: 32 kb/s Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc), 640x480, 1k tbr, 1k tbn, 1k tbc Stream #0:1: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 8000 Hz, mono, s16p, 32 kb/s [swscaler @ 0x3d620c0] deprecated pixel format used, make sure you did set range correctly Output #0, ffm, to 'http://192.168.1.62:8091/webcam.ffm': Metadata: creation_time : now encoder : Lavf55.40.100 Stream #0:0: Audio: wmav2, 22050 Hz, mono, fltp, 32 kb/s Metadata: encoder : Lavc55.64.100 wmav2 Stream #0:1: Video: msmpeg4v3 (msmpeg4), yuv420p, 640x480, q=2-31, 256 kb/s, 1k fps, 1000k tbn, 1k tbc Metadata: Stream mapping: Stream #0:1 -> #0:0 (adpcm_ima_wav -> wmav2) Stream #0:0 -> #0:1 (mjpeg -> msmpeg4) Press [q] to stop, [?] for help Segmentation fault I can't even forward the stream. Thanks for your help again.

    Read the article

  • Transparency and AlphaBlending

    - by TechTwaddle
    In this post we'll look at the AlphaBlend() API and how it can be used for semi-transparent blitting. AlphaBlend() takes a source device context and a destination device context (DC) and combines the bits in such a way that it gives a transparent effect. Follow the links for the MSDN documentation. So let's take an image and AlphaBlend() it onto our window. The code to do so is below (under the WM_PAINT message of WndProc): HBITMAP hBitmap=NULL, hBitmapOld=NULL; HDC hMemDC=NULL; BLENDFUNCTION bf; hdc = BeginPaint(hWnd, &ps); hMemDC = CreateCompatibleDC(hdc); hBitmap = LoadBitmap(g_hInst, MAKEINTRESOURCE(IDB_BITMAP1)); hBitmapOld = (HBITMAP)SelectObject(hMemDC, hBitmap); bf.BlendOp = AC_SRC_OVER; bf.BlendFlags = 0; bf.SourceConstantAlpha = 80; //transparency value between 0-255 bf.AlphaFormat = 0;    AlphaBlend(hdc, 0, 25, 240, 100, hMemDC, 0, 0, 240, 100, bf); SelectObject(hMemDC, hBitmapOld); DeleteDC(hMemDC); DeleteObject(hBitmap); EndPaint(hWnd, &ps);   The code above creates a memory DC (hMemDC) using CreateCompatibleDC(), loads a bitmap onto the memory DC and AlphaBlends it onto the device DC (hdc), with a transparency value of 80. The result is: Pretty simple till now. Now let's try to do something a little more exciting. Let's get two images involved, each overlapping the other, giving a better demonstration of transparency. I am also going to add a few buttons so that the user can increase or decrease the transparency by clicking on the buttons. Since this is the first time I played around with GDI APIs, I ran into something that everybody runs into at some time or other: flickering. When clicking the buttons the images would flicker a lot; I figured out why and used something called double buffering to avoid it. We will look at both my first implementation and the second implementation just to give the concept a little more depth and perspective. A few pre-conditions before I dive into the code: - hBitmap and hBitmap2 are handles to the two images obtained using LoadBitmap(); these variables are global and are initialized under WM_CREATE - The two buttons in the application are labeled Opaque++ (make more opaque, less transparent) and Opaque-- (make less opaque, more transparent) - DrawPics(HWND hWnd, int step=0); is the function called to draw the images on the screen. This is called from under WM_PAINT and also when the buttons are clicked. When Opaque++ is clicked the 'step' value passed to DrawPics() is +20 and when Opaque-- is clicked the 'step' value is -20.
The default value of 'step' is 0 Now lets take a look at my first implementation: //this funciton causes flicker, cos it draws directly to screen several times void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL;     BLENDFUNCTION bf;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     //create a memory DC     hMemDC = CreateCompatibleDC(hdc);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents of screen     if (step < 0)     {         RECT rect = {0, 0, 240, 200};         FillRect(hdc, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     BitBlt(hdc, 0, 25, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hdc, 0, 75, 240, 100, hMemDC, 0, 0, 240, 100, bf);     DeleteDC(hMemDC);     ReleaseDC(hWnd, hdc); }   In the code above, we first get the window DC using GetDC() and create a memory DC using CreateCompatibleDC(). Then we select hBitmap2 onto the memory DC and Blt it on the window DC (hdc). Next, we select the other image, hBitmap, onto memory DC and AlphaBlend() it over window DC. As I told you before, this implementation causes flickering because it draws directly on the screen (hdc) several times. The video below shows what happens when the buttons were clicked rapidly: Well, the video recording tool I use captures only 15 frames per second and so the flickering is not visible in the video. So you're gonna have to trust me on this, it flickers (; To solve this problem we make sure that the drawing to the screen happens only once and to do that we create an additional memory DC, hTempDC. We perform all our drawing on this memory DC and finally when it is ready we Blt hTempDC on hdc, and the images are displayed in one go. 
Here is the code for our new DrawPics() function: //no flicker void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL, hTempDC=NULL;     BLENDFUNCTION bf;     HBITMAP hBitmapTemp=NULL, hBitmapOld=NULL;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     hMemDC = CreateCompatibleDC(hdc);     hTempDC = CreateCompatibleDC(hdc);     hBitmapTemp = CreateCompatibleBitmap(hdc, 240, 150);     hBitmapOld = (HBITMAP)SelectObject(hTempDC, hBitmapTemp);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents     if (step < 0)     {         RECT rect = {0, 0, 240, 150};         FillRect(hTempDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     //Blt hBitmap2 directly to hTempDC     BitBlt(hTempDC, 0, 0, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hTempDC, 0, 50, 240, 100, hMemDC, 0, 0, 240, 100, bf);     //now hTempDC is ready, blt it directly on hdc     BitBlt(hdc, 0, 25, 240, 150, hTempDC, 0, 0, SRCCOPY);     SelectObject(hTempDC, hBitmapOld);     DeleteObject(hBitmapTemp);     DeleteDC(hMemDC);     DeleteDC(hTempDC);     ReleaseDC(hWnd, hdc); }   This function is very similar to the first version, except for the use of hTempDC. Another point to note is the use of CreateCompatibleBitmap(). When a memory device context is created using CreateCompatibleDC(), the context is exactly one monochrome pixel high and one monochrome pixel wide. So in order for us to draw anything onto hTempDC, we first have to set a bitmap on it. We use CreateCompatibleBitmap() to create a bitmap of required dimension (240x150 above), and then select this bitmap onto hTempDC. Think of it as utilizing an extra canvas, drawing everything on the canvas and finally transferring the contents to the display in one scoop. And with this version the flickering is gone, video follows:   If you want the entire solutions source code then leave a message, I will share the code over SkyDrive.

    Read the article

  • Problem Implementing Texture on Libgdx Mesh of Randomized Terrain

    - by BrotherJack
    I'm having problems understanding how to apply a texture to a non-rectangular object. The following code creates textures such as this: from the debug renderer I think I've got the physical shape of the "earth" correct. However, I don't know how to apply a texture to it. I have a 50x50 pixel image (in the environment constructor as "dirt.png"), that I want to apply to the hills. I have a vague idea that this seems to involve the mesh class and possibly a ShapeRenderer, but the little i'm finding online is just confusing me. Bellow is code from the class that makes and regulates the terrain and the code in a separate file that is supposed to render it (but crashes on the mesh.render() call). Any pointers would be appreciated. public class Environment extends Actor{ Pixmap sky; public Texture groundTexture; Texture skyTexture; double tankypos; //TODO delete, temp public Tank etank; //TODO delete, temp int destructionRes; // how wide is a static pixel private final float viewWidth; private final float viewHeight; private ChainShape terrain; public Texture dirtTexture; private World world; public Mesh terrainMesh; private static final String LOG = Environment.class.getSimpleName(); // Constructor public Environment(Tank tank, FileHandle sfileHandle, float w, float h, int destructionRes) { world = new World(new Vector2(0, -10), true); this.destructionRes = destructionRes; sky = new Pixmap(sfileHandle); viewWidth = w; viewHeight = h; skyTexture = new Texture(sky); terrain = new ChainShape(); genTerrain((int)w, (int)h, 6); Texture tankSprite = new Texture(Gdx.files.internal("TankSpriteBase.png")); Texture turretSprite = new Texture(Gdx.files.internal("TankSpriteTurret.png")); tank = new Tank(0, true, tankSprite, turretSprite); Rectangle tankrect = new Rectangle(300, (int)tankypos, 44, 45); tank.setRect(tankrect); BodyDef terrainDef = new BodyDef(); terrainDef.type = BodyType.StaticBody; terrainDef.position.set(0, 0); Body terrainBody = world.createBody(terrainDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = terrain; terrainBody.createFixture(fixtureDef); BodyDef tankDef = new BodyDef(); Rectangle rect = tank.getRect(); tankDef.type = BodyType.DynamicBody; tankDef.position.set(0,0); tankDef.position.x = rect.x; tankDef.position.y = rect.y; Body tankBody = world.createBody(tankDef); FixtureDef tankFixture = new FixtureDef(); PolygonShape shape = new PolygonShape(); shape.setAsBox(rect.width*WORLD_TO_BOX, rect.height*WORLD_TO_BOX); fixtureDef.shape = shape; dirtTexture = new Texture(Gdx.files.internal("dirt.png")); etank = tank; } private void genTerrain(int w, int h, int hillnessFactor){ int width = w; int height = h; Random rand = new Random(); //min and max bracket the freq's of the sin/cos series //The higher the max the hillier the environment int min = 1; //allocating horizon for screen width Vector2[] horizon = new Vector2[width+2]; horizon[0] = new Vector2(0,0); double[] skyline = new double[width]; //TODO skyline necessary as an array? 
//ratio of amplitude of screen height to landscape variation double r = (int) 2.0/5.0; //number of terms to be used in sine/cosine series int n = 4; int[] f = new int[n*2]; //calculating omegas for sine series for(int i = 0; i < n*2 ; i ++){ f[i] = rand.nextInt(hillnessFactor - min + 1) + min; } //amp is the amplitude of the series int amp = (int) (r*height); double lastPoint = 0.0; for(int i = 0 ; i < width; i ++){ skyline[i] = 0; for(int j = 0; j < n; j++){ skyline[i] += ( Math.sin( (f[j]*Math.PI*i/height) ) + Math.cos(f[j+n]*Math.PI*i/height) ); } skyline[i] *= amp/(n*2); skyline[i] += (height/2); skyline[i] = (int)skyline[i]; //TODO Possible un-necessary float to int to float conversions tankypos = skyline[i]; horizon[i+1] = new Vector2((float)i, (float)skyline[i]); if(i == width) lastPoint = skyline[i]; } horizon[width+1] = new Vector2(800, (float)lastPoint); terrain.createChain(horizon); terrain.createLoop(horizon); //I have no idea if the following does anything useful :( terrainMesh = new Mesh(true, (width+2)*2, (width+2)*2, new VertexAttribute(Usage.Position, (width+2)*2, "a_position")); float[] vertices = new float[(width+2)*2]; short[] indices = new short[(width+2)*2]; for(int i=0; i < (width+2); i+=2){ vertices[i] = horizon[i].x; vertices[i+1] = horizon[i].y; indices[i] = (short)i; indices[i+1] = (short)(i+1); } terrainMesh.setVertices(vertices); terrainMesh.setIndices(indices); } Here is the code that is (supposed to) render the terrain. @Override public void render(float delta) { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); // tell the camera to update its matrices. camera.update(); // tell the SpriteBatch to render in the // coordinate system specified by the camera. backgroundStage.draw(); backgroundStage.act(delta); uistage.draw(); uistage.act(delta); batch.begin(); debugRenderer.render(this.ground.getWorld(), camera.combined); batch.end(); //Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D); ground.dirtTexture.bind(); ground.terrainMesh.render(GL10.GL_TRIANGLE_FAN); //I'm particularly lost on this ground.step(); }

    Read the article

  • help me improve my sse yuv to rgb ssse3 code

    - by David McPaul
    Hello, I am looking to optimise some sse code I wrote for converting yuv to rgb (both planar and packed yuv functions). i am using SSSE3 at the moment but if there are useful functions from later sse versions thats ok. I am mainly interested in how I would work out processor stalls and the like. Anyone know of any tools that do static analysis of sse code? ; ; Copyright (C) 2009-2010 David McPaul ; ; All rights reserved. Distributed under the terms of the MIT License. ; ; A rather unoptimised set of ssse3 yuv to rgb converters ; does 8 pixels per loop ; inputer: ; reads 128 bits of yuv 8 bit data and puts ; the y values converted to 16 bit in xmm0 ; the u values converted to 16 bit and duplicated into xmm1 ; the v values converted to 16 bit and duplicated into xmm2 ; conversion: ; does the yuv to rgb conversion using 16 bit integer and the ; results are placed into the following registers as 8 bit clamped values ; r values in xmm3 ; g values in xmm4 ; b values in xmm5 ; outputer: ; writes out the rgba pixels as 8 bit values with 0 for alpha ; xmm6 used for scratch ; xmm7 used for scratch %macro cglobal 1 global _%1 %define %1 _%1 align 16 %1: %endmacro ; conversion code %macro yuv2rgbsse2 0 ; u = u - 128 ; v = v - 128 ; r = y + v + v >> 2 + v >> 3 + v >> 5 ; g = y - (u >> 2 + u >> 4 + u >> 5) - (v >> 1 + v >> 3 + v >> 4 + v >> 5) ; b = y + u + u >> 1 + u >> 2 + u >> 6 ; subtract 16 from y movdqa xmm7, [Const16] ; loads a constant using data cache (slower on first fetch but then cached) psubsw xmm0,xmm7 ; y = y - 16 ; subtract 128 from u and v movdqa xmm7, [Const128] ; loads a constant using data cache (slower on first fetch but then cached) psubsw xmm1,xmm7 ; u = u - 128 psubsw xmm2,xmm7 ; v = v - 128 ; load r,b with y movdqa xmm3,xmm0 ; r = y pshufd xmm5,xmm0, 0xE4 ; b = y ; r = y + v + v >> 2 + v >> 3 + v >> 5 paddsw xmm3, xmm2 ; add v to r movdqa xmm7, xmm1 ; move u to scratch pshufd xmm6, xmm2, 0xE4 ; move v to scratch psraw xmm6,2 ; divide v by 4 paddsw xmm3, xmm6 ; and add to r psraw xmm6,1 ; divide v by 2 paddsw xmm3, xmm6 ; and add to r psraw xmm6,2 ; divide v by 4 paddsw xmm3, xmm6 ; and add to r ; b = y + u + u >> 1 + u >> 2 + u >> 6 paddsw xmm5, xmm1 ; add u to b psraw xmm7,1 ; divide u by 2 paddsw xmm5, xmm7 ; and add to b psraw xmm7,1 ; divide u by 2 paddsw xmm5, xmm7 ; and add to b psraw xmm7,4 ; divide u by 32 paddsw xmm5, xmm7 ; and add to b ; g = y - u >> 2 - u >> 4 - u >> 5 - v >> 1 - v >> 3 - v >> 4 - v >> 5 movdqa xmm7,xmm2 ; move v to scratch pshufd xmm6,xmm1, 0xE4 ; move u to scratch movdqa xmm4,xmm0 ; g = y psraw xmm6,2 ; divide u by 4 psubsw xmm4,xmm6 ; subtract from g psraw xmm6,2 ; divide u by 4 psubsw xmm4,xmm6 ; subtract from g psraw xmm6,1 ; divide u by 2 psubsw xmm4,xmm6 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,2 ; divide v by 4 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g psraw xmm7,1 ; divide v by 2 psubsw xmm4,xmm7 ; subtract from g %endmacro ; outputer %macro rgba32sse2output 0 ; clamp values pxor xmm7,xmm7 packuswb xmm3,xmm7 ; clamp to 0,255 and pack R to 8 bit per pixel packuswb xmm4,xmm7 ; clamp to 0,255 and pack G to 8 bit per pixel packuswb xmm5,xmm7 ; clamp to 0,255 and pack B to 8 bit per pixel ; convert to bgra32 packed punpcklbw xmm5,xmm4 ; bgbgbgbgbgbgbgbg movdqa xmm0, xmm5 ; save bg values punpcklbw xmm3,xmm7 ; r0r0r0r0r0r0r0r0 punpcklwd xmm5,xmm3 ; lower half bgr0bgr0bgr0bgr0 punpckhwd xmm0,xmm3 ; upper half bgr0bgr0bgr0bgr0 ; write to 
output ptr movntdq [edi], xmm5 ; output first 4 pixels bypassing cache movntdq [edi+16], xmm0 ; output second 4 pixels bypassing cache %endmacro SECTION .data align=16 Const16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 dw 16 Const128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 dw 128 UMask db 0x01 db 0x80 db 0x01 db 0x80 db 0x05 db 0x80 db 0x05 db 0x80 db 0x09 db 0x80 db 0x09 db 0x80 db 0x0d db 0x80 db 0x0d db 0x80 VMask db 0x03 db 0x80 db 0x03 db 0x80 db 0x07 db 0x80 db 0x07 db 0x80 db 0x0b db 0x80 db 0x0b db 0x80 db 0x0f db 0x80 db 0x0f db 0x80 YMask db 0x00 db 0x80 db 0x02 db 0x80 db 0x04 db 0x80 db 0x06 db 0x80 db 0x08 db 0x80 db 0x0a db 0x80 db 0x0c db 0x80 db 0x0e db 0x80 ; void Convert_YUV422_RGBA32_SSSE3(void *fromPtr, void *toPtr, int width) width equ ebp+16 toPtr equ ebp+12 fromPtr equ ebp+8 ; void Convert_YUV420P_RGBA32_SSSE3(void *fromYPtr, void *fromUPtr, void *fromVPtr, void *toPtr, int width) width1 equ ebp+24 toPtr1 equ ebp+20 fromVPtr equ ebp+16 fromUPtr equ ebp+12 fromYPtr equ ebp+8 SECTION .text align=16 cglobal Convert_YUV422_RGBA32_SSSE3 ; reserve variables push ebp mov ebp, esp push edi push esi push ecx mov esi, [fromPtr] mov edi, [toPtr] mov ecx, [width] ; loop width / 8 times shr ecx,3 test ecx,ecx jng ENDLOOP REPEATLOOP: ; loop over width / 8 ; YUV422 packed inputer movdqa xmm0, [esi] ; should have yuyv yuyv yuyv yuyv pshufd xmm1, xmm0, 0xE4 ; copy to xmm1 movdqa xmm2, xmm0 ; copy to xmm2 ; extract both y giving y0y0 pshufb xmm0, [YMask] ; extract u and duplicate so each u in yuyv becomes u0u0 pshufb xmm1, [UMask] ; extract v and duplicate so each v in yuyv becomes v0v0 pshufb xmm2, [VMask] yuv2rgbsse2 rgba32sse2output ; endloop add edi,32 add esi,16 sub ecx, 1 ; apparently sub is better than dec jnz REPEATLOOP ENDLOOP: ; Cleanup pop ecx pop esi pop edi mov esp, ebp pop ebp ret cglobal Convert_YUV420P_RGBA32_SSSE3 ; reserve variables push ebp mov ebp, esp push edi push esi push ecx push eax push ebx mov esi, [fromYPtr] mov eax, [fromUPtr] mov ebx, [fromVPtr] mov edi, [toPtr1] mov ecx, [width1] ; loop width / 8 times shr ecx,3 test ecx,ecx jng ENDLOOP1 REPEATLOOP1: ; loop over width / 8 ; YUV420 Planar inputer movq xmm0, [esi] ; fetch 8 y values (8 bit) yyyyyyyy00000000 movd xmm1, [eax] ; fetch 4 u values (8 bit) uuuu000000000000 movd xmm2, [ebx] ; fetch 4 v values (8 bit) vvvv000000000000 ; extract y pxor xmm7,xmm7 ; 00000000000000000000000000000000 punpcklbw xmm0,xmm7 ; interleave xmm7 into xmm0 y0y0y0y0y0y0y0y0 ; extract u and duplicate so each becomes 0u0u punpcklbw xmm1,xmm7 ; interleave xmm7 into xmm1 u0u0u0u000000000 punpcklwd xmm1,xmm7 ; interleave again u000u000u000u000 pshuflw xmm1,xmm1, 0xA0 ; copy u values pshufhw xmm1,xmm1, 0xA0 ; to get u0u0 ; extract v punpcklbw xmm2,xmm7 ; interleave xmm7 into xmm1 v0v0v0v000000000 punpcklwd xmm2,xmm7 ; interleave again v000v000v000v000 pshuflw xmm2,xmm2, 0xA0 ; copy v values pshufhw xmm2,xmm2, 0xA0 ; to get v0v0 yuv2rgbsse2 rgba32sse2output ; endloop add edi,32 add esi,8 add eax,4 add ebx,4 sub ecx, 1 ; apparently sub is better than dec jnz REPEATLOOP1 ENDLOOP1: ; Cleanup pop ebx pop eax pop ecx pop esi pop edi mov esp, ebp pop ebp ret SECTION .note.GNU-stack noalloc noexec nowrite progbits
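    For reference, a scalar version of the same fixed-point approximation is handy for checking the vectorised output pixel by pixel. This is only an illustration in C# (any language would do) of the shift series the yuv2rgbsse2 macro implements: subtract the offsets, apply the shift sums, then clamp to 0-255:

    static class YuvReference
    {
        // Scalar equivalent of the yuv2rgbsse2 macro: offset, shift series, clamp.
        public static (byte r, byte g, byte b) ToRgb(byte y8, byte u8, byte v8)
        {
            int y = y8 - 16, u = u8 - 128, v = v8 - 128;
            int r = y + v + (v >> 2) + (v >> 3) + (v >> 5);
            int g = y - ((u >> 2) + (u >> 4) + (u >> 5))
                      - ((v >> 1) + (v >> 3) + (v >> 4) + (v >> 5));
            int b = y + u + (u >> 1) + (u >> 2) + (u >> 6);
            return (Clamp(r), Clamp(g), Clamp(b));
        }

        static byte Clamp(int value) =>
            (byte)(value < 0 ? 0 : value > 255 ? 255 : value);
    }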

    Read the article

  • Brightness keeps changing in Windows 8.1 (on Macbook Pro Retina)

    - by gzak
    Before anyone gets too excited, it's not the "Adaptive Brightness" feature of the OS; I've already turned that off. It also seems to have nothing to do with ambient light. It actually seems to depend on the average "color" of the display: if I'm working in dark-themed Visual Studio, the brightness "pops" brighter, and when I switch to the browser, it "pops" darker. So it's a kind of adaptive brightness based on average pixel color (or something like that). What makes it rather annoying is that the brightness pops instead of transitioning gradually. What is this feature, and how do I disable it (or at least make it smoother)?

    Read the article

  • Issue with Ivan Heckman's allSnap

    - by karl
    For the longest time I have used Ivan Heckman's allSnap program to better manage windows on my PC by making them snap together easily instead of overlapping. However, on Windows 8 I cannot seem to get this to work. I suspect it has something to do with how Win8 borders seem to have a transparent pixel around the outside of the window padding border, but overall I would love to get the snapping functionality back if it is at all possible. It's very hard to find information about this online, as all I find are posts about snapping Metro apps to the side of the screen in Desktop Mode.

    Read the article

  • Monitor for HD video editing

    - by Kato
    I have been researching day and night for a good monitor to buy for a Mac Pro with an ATI Radeon 2600 XT (256 MB). It will be used extensively for HD video editing (1080p) and photo editing, and likely also digital/3D animation next year (a lot of FCP + CS4). I am a student, so money is a bit of an issue, but I want something that I'll be able to use semi-professionally after I'm done with school, and I am willing to finance something if it is worth the cost. I'm HOPING for something under $1000 though. The IPS UltraSharps from Dell seem to be getting good reviews from other video editors. Accurate colour correction is a concern for me (hopefully something that covers the Adobe RGB gamut), as well as a decent response time, HD resolutions, and a DVI port. Also something with good gradient/definition in black areas, as this is difficult for editing on most LCDs. 1:1 pixel mapping, good brightness, good DVD playback, etc. Hopefully this is not impossible to find for under $2000!

    Read the article

  • Why does QuickTime lag in Firefox if I don't put my mouse over it?

    - by Jim McKeeth
    This has happened for as long as I can remember: since the first version of Firefox, on multiple computers, and under different versions of Windows. QuickTime plays fine in IE and Chrome (even with Firefox in the background), but in Firefox, if my mouse is not over the QuickTime window, it will start to stutter, then lag, and eventually just stop. To be honest, I do keep quite a few tabs open, but Firefox stays at 1% CPU (even when QuickTime runs) and I have a few gigs of free RAM. It is the same for any resolution of video or audio. If the mouse is even one pixel inside the client area of the QuickTime window, then it usually plays fine. Other video formats typically play fine. Does anyone else notice this behavior? Ultimately I would like a fix besides keeping my mouse over the QuickTime window.

    Read the article

  • Top/left edges of screen, in Virtualbox, act like bottom/right edges

    - by Ken
    I have VirtualBox running on Windows Vista, and Debian running inside VirtualBox. Everything's running great, for the most part, and everything looks correct. But when I'm in full-screen mode, the top edge seems to act (to the mouse) like it's the bottom edge, and the left edge seems to act like the right edge. For example, if I click in the middle of the desktop and drag left, as if to select some icons, when I hit the very leftmost pixel of the screen, the selection (but not the mouse pointer) jumps to the far right edge of the screen. For the left edge, it's not such a big deal, but not having the top edge is kind of annoying: it means I can't select things from the menu in my top panel by slamming the mouse against the top of the screen. Has anyone seen this before? Is there some way to make this work? Thanks!

    Read the article

  • Linux Programs for pulling measurements from graphics

    - by Zack
    As a front-end developer, I'm often given graphics of web sites and told, pretty much, "Make it work." I've recently started working on Linux 100% of the time and was wondering if there are any programs out there that are good for "digesting" graphics. All I do, pretty much, is draw little selection boxes and take notes on their dimensions; I also slice out pieces of the graphic (i.e. copy out just the part of the graphic I need to recreate the same effect in CSS). Before now I've been very happy with Fireworks, but I need something for Linux. Any suggestions? As a note, I mainly deal with pixel-based graphics, so the program being vector-based isn't a necessity.

    Read the article

  • Inconsistent black levels in windows 7 media center

    - by James G
    I've got an HTPC running Windows 7 64-bit, hooked up to a Samsung LCD TV. My problem is that different types of video display different black levels on the TV. When I play a Blu-ray through Arcsoft Total Media Theater I have to set the "HDMI Black Level" to "normal" in the TV picture options menu. When I play recorded TV through WMC I have to set it to "low", otherwise the black colors in the video are washed out and grey. Is there any way to configure the system so all videos are displayed with the same black level? The HDMI black level setting is deep in Samsung's menus, so it's becoming a chore to keep switching it every time I watch a different type of video. I'm using an ATI 4670 graphics card with HDMI output going straight to the TV. In the ATI Catalyst Control Center I've set the pixel format to RGB 4:4:4 (Full RGB), since the TV won't allow me to change the HDMI black level if I choose one of the other settings.

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >