Search Results

Search found 3909 results on 157 pages for 'roger light'.


  • Asus P8P67 Rev. 3.1 Motherboard issues powering on and saving settings

    - by Scott
    Edit: New Information Have some updated information from the old question below: So basically my issue right now is somewhat similar, but I've been able to rule out a couple of things. I don't think this has anything to do with the lights on the motherboard. No matter what lights are on/off on the motherboard when the computer is off, they don't affect this issue. The main power LED on the Mobo is always lit when the power supply is turned on, and that's what matters anyway. Even when the main power LED is on, the PC will NOT boot up the first time I hit the power switch. I have to go reset the power supply (make all lights turn off on the Mobo and back on), and THEN hit the power switch. Then everything boots up. Also, the BIOS settings are reset every time this happens. Asus Tech Support told me to try jumping the power with something metal to rule out whether it's a problem with the connectors getting power, or a problem with the case power switch pins - haven't done that yet though. Any ideas? This is a lot simpler than it was before when I thought it had to do with certain LED indicators for RAM, EPU, etc. Original Question So I built my new desktop just about 3 weeks ago. I've been having a few issues which I think are all related to my motherboard, an Asus P8P67 Revision 3.1, but I'm not 100% sure as this is really the first from-scratch build I've ever done. I've posted these questions on the Asus forums, Asus Tech Support, and the Corsair forums as well, as I thought it might have something to do with my power supply at one point. None of these avenues has completely solved my issue so far, so I thought I'd come here to see what you guys think. Here's what's happening: My computer is off, and I go to power it on. I press the power switch on the case (Antec Nine Hundred), and nothing seems to happen. Upon further inspection, I see that what this actually does is simply turn on the EPU LED on my motherboard, but doesn't actually boot anything up. I then have to go and flip the main power switch on the power supply off and back on. What this does is turn off all lights on the Motherboard after a few seconds, and turn them all back on (including the EPU LED that was off before I hit the power switch the first time). Now, hitting the power switch works. The machine boots up fine, and starts going through the boot up process. As a side note: My Motherboard is set to "Force BIOS", and every single time I change this to the opposite, the next time my computer boots up that change reverts itself. I think this may be due to the fact that I am doing the hard reset on the power supply each time, but I'm not sure. I had thought that the Motherboard would keep its BIOS settings unless you did something to the Mobo itself - so this may be a related issue, or something else completely. That's basically it. Once it's on, it's on. It works fine, recognizes all of my hardware, and runs great. All fans/lights in the case work great, and I'm getting standard readings. The next time I go to shut the computer down, however, I can expect the same exact process getting it up and running, including being forced to go into BIOS and exit again before I can load Windows. Another side note: If I power on my computer using the power switch DIRECTLY after shutting it down, it powers right back on (I think this is because the EPU LED light doesn't have time to turn off). 
    It looks as if, as long as the EPU LED is lit up on the motherboard before I hit the power switch on the case, the thing will boot up fine (although this doesn't explain the "Force BIOS" issue, it's at least something). Any ideas? Thanks guys. P.S. - System Specs: Asus P8P67 Rev. 3.1 Motherboard, Intel Core i7 2600K Processor, 16GB (4x4GB) G-Skill 1600 RAM, NVIDIA EVGA GTX 570 Video Card, Crucial 128GB SSD, Corsair 850W Power Supply, Seagate 2TB HDD

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing
    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys
    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay
    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
    background Midnight_Blue

    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>

    define wooden texture { noise surface { ambient 0.2 diffuse 0.7 specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1 lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint {
        from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60
        resolution 640, 480 aspect 1.6 image_format 0
    }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board
    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrices of pins, which model the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
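    The console application itself is not reproduced in the article, but a minimal sketch of how such a generator could look is shown below. The spacing, grid counts, pin dimensions, and output file name are illustrative assumptions, not the values used to produce the 11,481-pin scene; the idea is simply two nested loops writing one PolyRay sphere/cylinder pair per pin, with the second grid offset by half the spacing so the matrices interleave.

    using System.IO;
    using System.Text;

    class PinMatrixGenerator
    {
        static void Main()
        {
            // Illustrative values only; the article's board uses a denser grid.
            const double spacing = 0.14;
            const int rows = 75;
            const int columns = 75;

            StringBuilder scene = new StringBuilder();

            // First matrix of pins.
            for (int row = 0; row < rows; row++)
                for (int column = 0; column < columns; column++)
                    AppendPin(scene, column * spacing, row * spacing);

            // Second matrix, offset by half the spacing so it interleaves with the first.
            for (int row = 0; row < rows - 1; row++)
                for (int column = 0; column < columns - 1; column++)
                    AppendPin(scene, (column + 0.5) * spacing, (row + 0.5) * spacing);

            File.WriteAllText("pins.pi", scene.ToString());
        }

        static void AppendPin(StringBuilder scene, double x, double y)
        {
            // Each pin is the sphere/cylinder pair from the scene file above, placed at (x, y).
            // The Z coordinate defaults to 0 here; it is later driven by the depth image.
            scene.AppendLine(string.Format(
                "object {{ sphere <{0:0.000}, {1:0.000}, 0.000>, 0.07 chrome }}", x, y));
            scene.AppendLine(string.Format(
                "object {{ cylinder <{0:0.000}, {1:0.000}, 0.000>, <{0:0.000}, {1:0.000}, -0.400>, 0.035 chrome }}", x, y));
        }
    }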
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect
    The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation
    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame
    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles
    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers    Cost
    1                    $500
    16                   $8,000
    256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles    Render Time    Cost
    1                         256 hours      $30.72
    16                        16 hours       $30.72
    256                       1 hour         $30.72

    Using worker roles in Windows Azure results in the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure
    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    On-Premise
    - Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
    - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the scenes blob container, and job messages added to the jobs queue (a minimal sketch of this step is shown after this section).
    - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.

    Windows Azure
    - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
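    Before looking at the worker role itself, it is worth noting that the article only describes the on-premise Animation Creator; its code is not shown. The following is a rough sketch of what its "upload scene file, add job message" step might look like with the Windows Azure StorageClient library of the time. The container name ("scenes") and queue name ("renderjobs") follow the names used in the worker role code later in the article, but the class and method shown here are assumptions, not the author's implementation.

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class JobSubmitter
    {
        // Uploads one PolyRay scene file to the "scenes" container and queues a render job for it.
        static void SubmitJob(CloudStorageAccount account, string sceneFilePath, string sceneFileName)
        {
            CloudBlobClient blobClient = account.CreateCloudBlobClient();
            CloudBlobContainer container = blobClient.GetContainerReference("scenes");
            container.CreateIfNotExist();
            container.GetBlobReference(sceneFileName).UploadFile(sceneFilePath);

            CloudQueueClient queueClient = account.CreateCloudQueueClient();
            CloudQueue jobQueue = queueClient.GetQueueReference("renderjobs");
            jobQueue.CreateIfNotExist();

            // The message body is just the scene file name; the worker role derives the
            // .tga and .jpg file names from it, as in the Run() method shown below.
            jobQueue.AddMessage(new CloudQueueMessage(sceneFileName));
        }
    }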
    The service definition for the worker role with the local storage configuration highlighted is shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay" >
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive
                ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }

    Monitoring Worker Role Instance Lifecycle
    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of the use of resources in the render farm.

    Rendering the Animation
    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="CloudRayWorkerRole">
        <Instances count="16" />
        <ConfigurationSettings>
          <Setting name="DataConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="CloudRayWorkerRole">
        <Instances count="256" />
        <ConfigurationSettings>
          <Setting name="DataConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation
    I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources
    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios
    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios
    I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    - Windows Azure accounts are typically limited to 20 cores.
      If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
    - If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • The Windows Browser Ballot Screen Offers Web Browser Choice to European Users

    - by Matthew Guay
    Since March, our friends across the pond in Europe get to decide which browser they want to install with their Windows OS. Today we thought we would take a look at the ballot choices; some are well known, and others you may not have heard of. Windows users in European countries should start seeing the so-called “Browser Ballot Screen” after installing the Windows Update KB976002 (link below). The browser ballot offers a dozen different browsers, including some you’ve likely never heard of. They each have some unique features, and are all free; here we take a quick look at each of them.

    Internet Explorer 8
    Internet Explorer is the world’s most used web browser, as it’s bundled with Windows. It also includes several unique features, including Accelerators that make it easy to search or find a map of a location, and InPrivate filtering to directly control what sites can get personal information. Additionally, it offers great integration with Windows Touch and the new taskbar in Windows 7. IE 8 runs on Windows XP and newer, and is bundled with Windows 7.

    Mozilla Firefox 3.6
    Firefox is the most popular browser other than Internet Explorer. It is the modern descendant of Netscape, and is loved by web developers for its adherence to web standards, openness, and expandability. It offers thousands of add-ons and themes to let you customize it to fit your preferences. The most recent version has added Personas, which are quick, lightweight themes to let you personalize the look of your browser. It’s open source, and runs on all modern versions of Windows, Mac OS X, and Linux. Of course, thanks to Asian Angel, our resident browser expert, you can check out several articles regarding this popular IE alternative.

    Google Chrome 4
    Google Chrome has gained an impressive amount of market share during its short time in the market. It offers a minimalistic interface and fast speeds with intensive web applications. The address bar is also a search bar, so you can enter a search query or web address and quickly get the information you need. With version 4 you can add a growing number of extensions, personalize it with a variety of stylish themes, and automatically translate foreign websites into your own language.

    Opera 10.50
    Although Opera has been around for over a decade, relatively few users have used it. With the new 10.50 release, Opera packs many unique features into a sleek UI. It integrates well with Aero and the Windows 7 taskbar, and lets you preview the contents of your websites in the tab bar. It also includes Opera Unite, a small personal web server to make file sharing easy, Opera Turbo to speed up your internet when the connection is slow, and Opera Link to keep all your copies of Opera in sync. It’s a popular browser on many mobile devices, and version 10.50 has a lot of enhancements.

    Apple Safari 4
    Safari is the default browser in Mac OS X, and starting with version 3 it has been available for Windows as well. It’s based on WebKit, the popular new rendering engine that provides great speed and standards compatibility. Safari 4 lets you browse your browsing history in a unique Cover Flow interface, and shows your Top Sites in a fancy, 3D interface. It’s also great for viewing mobile websites for the iPhone and other mobile devices through Developer Tools.

    Flock 2.5
    Based on the popular Firefox core, Flock brings a multitude of social features to your browsing experience.
    You can view the latest YouTube videos, Flickr pictures, update your favorite social network, and keep up with your webmail thanks to its integration with a wide variety of services. You can even post to your blog through the integrated blog editor. If your time online is mostly spent in social services, this may be a browser you want to check out.

    Maxthon 2.5
    Maxthon is a unique browser that builds on Internet Explorer to bring more features with IE’s rendering. Formerly known as MyIE2, Maxthon was popular for bringing tabbed browsing with IE rendering during the days of IE 6. Today Maxthon supports a wide range of plugins and skins, so you can customize it however you want. It includes mouse gestures, a web accelerator to speed up pokey internet connections, a content blocker to remove unwanted content from sites, an online account to back up your favorites, and a nice download manager.

    Avant Browser
    Another nice browser based on Internet Explorer, Avant brings a wide variety of features in a nice brushed-metal interface. It includes an integrated AutoFill for forms, mouse gestures, customizable skins, and privacy protection features. It also includes a Flash blocker that will only load Flash in webpages when you select them. You can also integrate Avant with an online account to store your bookmarks, feeds, settings and passwords online.

    Sleipnir
    Sleipnir is a customizable browser meant for advanced users that is quite popular in Japan. It’s built on the Trident engine, and unlike Internet Explorer, virtually every aspect of it is customizable.

    FlashPeak SlimBrowser
    SlimBrowser from FlashPeak incorporates a lot of features like Popup Killer, Auto Login, site filtering and more. It’s based on Internet Explorer but offers a lot more customizable options out of the box.

    K-Meleon
    This basic browser is light on system resources and based on the Gecko engine. It’s been in development for years on SourceForge, and if you like to tweak virtually any aspect of your browser, this might be a good choice for you.

    GreenBrowser
    GreenBrowser is based on Internet Explorer and is available in several languages. It has a large amount of features out of the box and is light on system resources.

    Conclusion
    The European Union asked for more choices in the web browsers users could choose from when installing Windows, and with the Browser Ballot Screen, they certainly get a variety to choose from. If you’ve tried out some of the lesser known browsers, or think some important ones have been left out, leave a comment and tell us about it.
    Learn More About the Browser Ballot Screen and Download Alternatives to IE: Windows Update KB976002

    Read the article

  • Convert a DVD Movie Directly to AVI with FairUse Wizard 2.9

    - by DigitalGeekery
    Are you looking for a way to back up your DVD movie collection to AVI? Today we’ll show you how to rip a DVD movie directly to AVI with FairUse Wizard.

    About FairUse Wizard
    FairUse Wizard 2.9 uses the DivX, Xvid, or h.264 codec to convert a DVD to an AVI file. It comes in both a free version and a commercial version. The free, or “Light”, version can create files up to 700MB, while the commercial version can output a 1400MB file. This will allow you to back up your movies to CD, or even put multiple movies on a single DVD. FairUse Wizard states that it does not work on copy protected discs, but we’ve seen it work on all but some of the most recent copy protection. For this tutorial we’re using the free Light Edition to convert a DVD to AVI. They also offer a commercial version that you can get for $29.99, and it offers even more encoding possibilities for converting video to your portable digital devices.

    Installation and Configuration
    Download and install FairUse Wizard. (Download link below.) Once the install is complete, open FairUse Wizard by going to Start > All Programs > FairUse Wizard 2 > FairUse Wizard 2. FairUse Wizard will open on the new project screen. Select “Create a new project” and type a project name into the text box. This will be used as the file output name. Ex: A project name of Simpsons Movie will give you an output file of Simpsons Movie.avi. Next, browse for a destination folder for the output file and temp files. Note that you will need a minimum of 6 GB of free disk space for the conversion process. Note: Much of that 6 GB will be used for temporary files that we will delete after the conversion process. Click on the Options button at the bottom. Under Preferences, choose your preferred video codec and file output size. XviD and x264 are installed by default. If you prefer to use DivX, you will have to install it separately. Also note the “Two pass” option. Checking the “Two pass” box will encode your video twice for higher quality, but will take more time. Un-checking the box will speed up the conversion process. Under Audio track, note that English subtitles are enabled by default, so to remove the subtitles, you will need to change the dropdown list so it shows only a dash (-). You can also select “Use TV Mode” if your primary playback will be on a 4:3 TV screen. Click “Next.”

    Full Auto Mode vs. Manual Mode
    You should now be back at the initial screen. Next, we’ll need to determine whether or not we can use “Full Auto Mode” to convert the movie. The difference is that “Full Auto Mode” will automatically perform a few steps that you will otherwise have to do manually. If you choose the “Full Auto Mode” option, FairUse Wizard will look for the video on the DVD with the longest duration and assume it is the chain that it should convert to AVI. It’s possible, however, that your disc may contain a few chains of similar size, such as a theatrical cut and a director’s cut, and the longest chain may not be the one you wish to convert. Make sure that “Full auto mode” is not checked yet, and click “Next.” FairUse Wizard will parse the IFO files and display all video chains longer than 60 seconds. In most cases, you will find that the largest chain is the one closely matching the duration of the movie. In these instances, you can use “Full Auto Mode.” If you find more than one chain that is close in duration to the length of the movie, consult the literature on the DVD case, or search online, to find the actual running time of the movie.
    If the proper file chain is not the longest chain, you won’t be able to use “Full Auto Mode.”

    Full Auto Mode
    To use “Full Auto Mode,” simply click the “Back” button to return to the initial screen. Now, place a check in the “Full auto mode” check box. Click “Next.” You will then be prompted to choose your DVD drive, then click “OK.” FairUse Wizard will parse the IFO files and then prompt you to select the drive that contains the DVD one more time before beginning the conversion process. Click “OK.”

    Manual Mode
    If you cannot (or don’t wish to) use Full Auto Mode, choose the appropriate video chain and click “Next.” FairUse Wizard will first go through the process of indexing the video. Note: If you get a runtime error during this portion of the process, it likely means that FairUse Wizard cannot handle the copy protection, and thus cannot convert the DVD. FairUse Wizard will automatically detect a cropping region. If necessary, you can edit the cropping region by adjusting the cropping region settings to the left. Click “Next.” Next, click “Auto Detect” to choose the proper field combination. Click “OK” on the pop up window that displays your Field Mode. Then click “Next.” This next screen is mainly comprised of settings from the Options screen. You can make changes at this point, such as codec or output size. Click “Next” when ready.

    Video Conversion
    Now the video conversion process will begin. This may take a few hours depending on your system’s hardware. Note: There is a check box to “Shutdown computer when done” if you choose to run the conversion overnight or before leaving for work. The first phase will be video encoding, then the audio. If you chose the “Two Pass” option, your video will be encoded again on the second pass. Then you’re finished. Unfortunately, FairUse Wizard doesn’t clean up after itself very well. After the process is complete, you’ll want to browse to your output directory and delete all the temporary files, as they take up a considerable amount of hard drive space. Now you’re ready to enjoy your movie.

    Conclusion
    FairUse Wizard is a nice way to back up your DVD movies to good quality .avi files. You can store them on your hard drive, watch them on a media PC, or burn them to disc. Many DVD players even allow for playback of DivX or XviD encoded video from a CD or DVD. For those of you with children, you can burn that AVI file to CD for your kids, and keep your original DVDs stored safely out of harm’s way.

    Download
    Download FairUse Wizard 2.9 LE

    Read the article

  • CodePlex Daily Summary for Monday, June 14, 2010

    CodePlex Daily Summary for Monday, June 14, 2010New ProjectsBD File Hash: BD File Hash is a convenient file hash and hash compare tool for Windows which currently works with MD5, SHA-1, and SHA-256 algorithms. FileScan: This is an application that searches through a drive or directory structure for files matching a filter. This project was converted from VB to ...genesis9: genesis9HeinanOS: HeinanOS is an operating system developed mainly in C++. HeinanOS is a light OS (1.44 MB image) with a lot of capabilites and many more are being ...MediaBrowserWS - Creates a Web Service for the popular MediaBrowser plugin: Creates a web service in Media Center for accessing your MediaBrowser collection. Allows for external devices (Tablets/phones/laptops) to access a ...MME: New Edition of Managed Menu Extensions for Visual Studio 2010 The Main goal of "MME" is to provide easy access to adding Right Click menus in the ...MVMMapper: Generate the ViewModel and its mapping to the Model when implementing MVVM in .NET. Developed using T4 templates. Current version supports Silver...ProjectArDotNet: Si te agarro te parto! Si te agarro te emperno no me importa que seas menor de edad!Scriptagility for DotNetNuke: Scriptagility is a DotNetNuke module for Javascript developers. This module provides dynamic client scripting infrastructure for developing javascr...simpleLinux Distro: SimpleLinux. is a Linux distributions that is easy to use. Simple Linux website: http://simplelinux.tkTag Cloud Control for asp.net: Tag Cloud Control for asp.net allows the user to display the most important keywords to display in tag cloud. Each Tag has it own navigation url to...thefreeimdb: fsadie qwUppityUp: UppityUp is a simple and light-weight tray application which monitors a remote server and shows a notification when it comes online. This is usefu...Vivid3D 2 - DirectX 10 3D ToolKit: The sequel to my first ever engine wrote several years ago. It is not based on it in anyway. VSIDev: VSI DevXTQXK_WORK: Actionscript 3.0东坡博客: 这是一个ASP。net mvc 2博客。New Releases.NET Extensions - Extension Methods Library: Release 2010.08: Added extension methods for Bitmap manipulation (scaling for now): - Bitmap.ScaleToSize() - Bitmap.ScaleToSizeProportional() - Bitmap.ScaleProport...Black Falcon Software's Database Data-Access-Layers: “SQLHELPER”, “ORAHELPER” - Handling Binary Data: See attached document...BTech Networking Library: BTech Networking Library: Same as pervious just new namespace, extended networking coming soon!!!Community Forums NNTP bridge: Community Forums NNTP Bridge V37: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Generic Entity Model 2: GEM2 build 54383: This is second BETA release of GEM2! Please see source code change sets for updates! Following implementation is not included in this release: My...Hades: Projet Hadès - Official Demo - Version 0.1.0 Beta: ---------------------------------------------------------------------------- - Projet Hadès - Official Demo - Version 0.1.0 Beta ------------------...HeinanOS: HeinanOS M1 Source Code: You can download HeinanOS M1 Source Code and contribute to HeinanOS development! Be aware that you should not use this code for your own systems! ...HeinanOS: Milestone 1: This is the first major release for HeinanOS 1.0 Please note this is a PRE-RELEASE! 
This release includes the following features: -Bootable DOS-...HKGolden Express: HKGoldenExpress (Build 201006131900): New features: (None) Bug fix: Incorrect message submit date of message/ replies. (Note: Showing message submit date is enabled since Build 20100...HKGolden Express: HKGoldenExpress (Build 201006140110): New features: (None) Bug fix: (None) Improvements: (None) Other changes: Set time zone of message date as Hong Kong. Adjusted the format of messa...MediaCoder.NET: MediaCoder.NET v1.0 Beta 1.5: Installer file for MediaCoder.NET v1.0 beta 1.5. Now converts multiple files.MME: First release: Features of this release 1. One installer MME.msi. However you can also install MMEMenuManagerSetup.vsix which installs a project template that e...MSBuild Launch Pad (mPad): 1.1 Beta 1: Platform selection box is added.MVMMapper: MVMMapper Release v 1.0.1: This release has no downloadable documentation. Please use the Documentation section to get started.NginxTray: NginxTray 0.7 RC2: NginxTray 0.7 RC2PowerAuras: PowerAuras-3.0.0K-beta3: New Auras: Item Name Equipment Slot Tracking Changes from beta1 5 new aura textures Fixed Tracking bug Added graphical equipment slot sele...PowerAuras: PowerAuras-3.0.0K-beta4: New Auras: Item Name Equipment Slot Tracking Changes from beta1 5 new aura textures Fixed Tracking bug Added graphical equipment slot sele...Scriptagility for DotNetNuke: Scriptagility 1.0 (Beta): Initial public release please evaluate and feedbackSharpDevelop: SharpDevelop 4.0 Beta 1: Release notes: http://community.sharpdevelop.net/forums/t/11388.aspxsimpleLinux Distro: Project X3: This is an example of download for simpleLinuxSOAPI - StackOverflow API Parser/Wrapper Generator: SOAPI Beta 3: The SOAPI Beta 3 download will be made availabe later today when the initial documentation is complete. The previously available Beta 1 download h...Sofa: Initial release V1.0: This is the first release of Sofa. As it is made of code being previously used, as we tested it is a stable release. But bugs are always possible,...Tag Cloud Control for asp.net: Tag Cloud Control for asp.net: Tag Cloud Control for asp.net allows the user to display the most important keywords to display in tag cloud. Each Tag has it own navigation url to...UppityUp: UppityUp v0.1: First functional version, supports monitoring availability by ping (ICMP) requests. Fit for general use. Consists of one standalone .exe file - no...VCC: Latest build, v2.1.30613.0: Automatic drop of latest buildWindStyle ExifInfo for Windows Live Writer: 1.1.0.0: Add: Multiple Language(English and Simplified Chinese); Add: Insert multiple files; Fix: Error when insert pictures without Exif info; Update: Icon...Work Recorder - Hold on own time!: WorkRecorder 1.2: +Add a whole day chartXsltDb - DotNetNuke Module Builder: 01.01.24: Syntax highlighting delivered!New samples for RadControls. 
On single page you can find RadTreeView, RadRating, RadChart, RadFormDecorator, RadEdito...xUnit.net Contrib: xunitcontrib 0.4 (ReSharper 5.0 RTM + dotCover): xunitcontrib release 0.4 (ReSharper runner) This release provides a test runner plugin for Resharper 5.0, 4.5 and 4.1, targetting all versions of x...Most Popular ProjectsCommunity Forums NNTP bridgeRIA Services EssentialsNeatUploadBxf (Basic XAML Framework)Agile Personal Development Methodology.NET Transactional File ManagerSOLID by exampleASP.NET MVC Time PlannerWEI ShareSiverlight ProjectMost Active ProjectsjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryNB_Store - Free DotNetNuke Ecommerce Catalog ModuleRhyduino - Arduino and Managed CodeCommunity Forums NNTP bridgeCassandraemonBlogEngine.NETLightweight Fluent WorkflowMediaCoder.NETAndrew's XNA Helpers

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. To see an introduction on SQL Azure, check out the post by Pinal here. In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building some solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. The Azure worker role can perform background processes, and to handle processes such as synchronization and backup, it becomes our ideal tool. First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let’s see some scenarios in which synchronization between SQL Azure databases is beneficial:

    - Scaling out access over multiple databases enables us to handle workload efficiently
    - As of now, a SQL Azure database can be hosted in one of six datacenters. By synchronizing databases located in different data centers, one can extend the data by enabling access to geographically distributed data

    Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:

    - To back up a SQL Azure database on local infrastructure
    - Rather than investing in local infrastructure for increased workloads, such workloads could be handled by the cloud
    - Ability to extend data to different datacenters located across the world to enable efficient data access from remote locations

    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. Now, in this article, I aim to provide a bird’s-eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. Now, if you add a new worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf ) Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned and the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one-time step. The code for the same is put in the Setup() function, which is called once when the worker role starts. Now, the second step is continuous (or on-demand) synchronization of SQL Azure databases by propagating changes between databases. This is done on a continuous basis by calling the Sync() function in the while loop. The code logic to synchronize changes between SQL Azure databases should be put in the Sync() function. Discussing the coding part step by step is out of the scope of this article.
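    The actual skeleton lives in the linked PDF and is not reproduced here; the following is only a rough sketch of the structure described above, a worker role that provisions the sync scope once in Setup() and then calls Sync() inside the Run() loop. The scope name, the "Customers" table, the connection strings, and the five-minute sleep are placeholder assumptions, and the Sync Framework 2.1 types shown (SqlSyncScopeProvisioning, SqlSyncProvider, SyncOrchestrator) are the commonly used ones rather than necessarily those in the original snippet.

    using System;
    using System.Data.SqlClient;
    using System.Threading;
    using Microsoft.Synchronization;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        // Placeholder connection strings for the two SQL Azure databases being synchronized.
        private const string Db1ConnectionString = "...";
        private const string Db2ConnectionString = "...";

        public override void Run()
        {
            Setup();   // one-time provisioning of both databases

            while (true)
            {
                Sync();                                  // propagate changes in both directions
                Thread.Sleep(TimeSpan.FromMinutes(5));   // or a schedule, as discussed later
            }
        }

        private void Setup()
        {
            using (SqlConnection conn1 = new SqlConnection(Db1ConnectionString))
            using (SqlConnection conn2 = new SqlConnection(Db2ConnectionString))
            {
                // Describe the scope; a single hypothetical "Customers" table is used here.
                DbSyncScopeDescription scope = new DbSyncScopeDescription("SyncScope");
                scope.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Customers", conn1));

                // Provision each database if it has not been provisioned already.
                SqlSyncScopeProvisioning provision1 = new SqlSyncScopeProvisioning(conn1, scope);
                if (!provision1.ScopeExists("SyncScope")) provision1.Apply();

                SqlSyncScopeProvisioning provision2 = new SqlSyncScopeProvisioning(conn2, scope);
                if (!provision2.ScopeExists("SyncScope")) provision2.Apply();
            }
        }

        private void Sync()
        {
            using (SqlConnection conn1 = new SqlConnection(Db1ConnectionString))
            using (SqlConnection conn2 = new SqlConnection(Db2ConnectionString))
            {
                SyncOrchestrator orchestrator = new SyncOrchestrator
                {
                    LocalProvider = new SqlSyncProvider("SyncScope", conn1),
                    RemoteProvider = new SqlSyncProvider("SyncScope", conn2),
                    Direction = SyncDirectionOrder.UploadAndDownload
                };
                orchestrator.Synchronize();
            }
        }
    }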
For the step-by-step coding details, let me point you to a resource, which is given here. Also note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here), and you will need to reference some libraries before you start coding. Details are available in the article I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter; for pricing information, go here.

Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available in CTP. It allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization without writing a single line of code; however, in some cases the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you need such functionality now, you have the option of developing custom code using the Sync Framework.

This code can easily be extended to synchronize on a schedule. Let us say we want the databases to be synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) A minimal sketch of this scheduling idea is included below. Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server Agent for SQL Server? Think about it: we are scheduling our administrative task by writing custom code – in other words, we have developed a "lightweight SQL Server Agent for SQL Azure!" Since SQL Server Agent is not currently available in the cloud, we have developed a solution that lets us schedule tasks, and thus we have extended SQL Azure with the Azure worker role. If you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables): Windows Azure is a stateless platform, so we need to store the state of the job ourselves, and the choices are SQL Azure or Azure tables. Note that this solution requires custom code and is not UI driven; however, for now, it can act as a temporary solution until SQL Server Agent is made available in the cloud. It does not encompass all the functionality a SQL Server Agent provides, but it does open up an interesting avenue to schedule tasks such as backup and synchronization of SQL Azure databases by writing custom code in the Azure worker role.

Now, let us see one more possibility – running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload locally, consider the data transfer cost. If you upload to blobs residing in the same datacenter, no transfer cost applies, but the cost for blob storage does. So before choosing an option, evaluate your preferences keeping the cost associated with each option in mind.

In this article, I have shown that an Azure worker role solution can be developed to synchronize SQL Azure databases, and that a lightweight SQL Server Agent for SQL Azure can be built. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles.
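To make the Setup()/Sync() placement and the 10:00 pm schedule described above concrete, here is a minimal sketch of the same loop structure. It is written in Python purely for illustration (the real solution is a C# Azure worker role using the Sync Framework), and the function names and timing check below are assumptions, not the author's code.

    import time
    from datetime import datetime

    def setup():
        # One-time provisioning step: in the real solution the Sync Framework
        # creates tracking tables, triggers and metadata here (placeholder only).
        print("provisioning databases for synchronization")

    def sync():
        # Propagate changes between the databases (placeholder only).
        print("synchronizing databases")

    def run():
        setup()                        # called once, when the worker starts
        last_run = None
        while True:                    # the worker role's endless Run() loop
            now = datetime.now()
            # fire once per day at 22:00 (10:00 pm), like the scheduled snippet
            if now.hour == 22 and last_run != now.date():
                sync()
                last_run = now.date()
            time.sleep(60)             # poll once a minute

    run()

The shape is what matters: one-time provisioning at start-up, then an endless loop that sleeps and triggers the synchronization whenever the schedule condition is met, which is exactly the role a SQL Server Agent job would normally play.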
    So at the end of the day, you need to ask: am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or mail me at Paras[at]student-partners[dot]com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Sunday, September 02, 2012

    CodePlex Daily Summary for Sunday, September 02, 2012Popular ReleasesThisismyusername's codeplex page.: HTML5 Multitouch Example - Fruit Ninja in HTML5: This is an example of how you could create a game such as Fruit Ninja using HTML5's multitouch capabilities. This example isn't responsive enough, so I will be working on that, and it doesn't have great graphics, either. If I had my own webpage, I could store some graphics and upload the game there and it might look halfway decent, but here the fruits are just circles. I hope you enjoy reading the source code anyway.GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixRuminate XNA 4.0 GUI: Release 1.1.1: Fixed bugs with Slider and TextBox. Added ComboBox.Confuser: Confuser build 76542: This is a build of changeset 76542.SharePoint Column & View Permission: SharePoint Column and View Permission v1.2: Version 1.2 of this project. If you will find any bugs please let me know at enti@zoznam.sk or post your findings in Issue TrackerMihmojsos OS: Mihmojsos OS 3 (Smart Rabbit): !Mihmojsos OS 3 Smart Rabbit Mihmojsos Smart Rabbit is now availableDotNetNuke Translator: 01.00.00 Beta: First release of the project.YNA: YNA 0.2 alpha: Wath's new since 0.1 alpha ? A lot of changes but there are the most interresting : StateManager is now better and faster Mouse events for all YnObjects (Sprites, Images, texts) A really big improvement for YnGroup Gamepad support And the news : Tiled Map support (need refactoring) Isometric tiled map support (need refactoring) Transition effect like "FadeIn" and "FadeOut" (YnTransition) Timers (YnTimer) Path management (YnPath, need more refactoring) Downloads All downloads...Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.2: fixed several issues with streaming modeUrlPager: UrlPager 1.2: Fixed bug in which url parameters will lost after paging; ????????url???bug;Sofire Suite: Sofire v1.5.0.0: Sofire v1.5.0.0 ?? ???????? ?????: 1、?? 2、????EntLib.com????????: EntLib.com???????? v3.0: EntLib eCommerce Solution ???Microsoft .Net Framework?????????????????????。Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions to one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. 
The light-weight version is denoted by ...Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can be found in the INSTALL folder now, within your module's folder, not the packages folder anymore Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...Home Access Plus+: v8.0: v8.0.0901.1830 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install ...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3406 (September 2012): New features: Extended ReflectionClass libxml error handling, constants DateTime::modify(), DateTime::getOffset() TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode DateTime methods (WordPress posting fix) Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging ...MabiCommerce: MabiCommerce 1.0.1: What's NewSetup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....New ProjectsATSV: this is a student project for making a new silverlight UI Bookmark Collector: This project is a best practice example of how to use content items in DotNetNuke. It allows you to quickly and easily manage a listing of external links.BPVotingmachine: BP Vote SystemClean My Space: Sort your files in a fun and fast! With Clean My Space you can!CutePlatform: CutePlatform is a platform game based around the PlanetCute graphics pack from Daniel cook, make him a visit in www.lostgardem.comDancTeX: This project is targeting the integration of LaTeX into VisusalStudio. Epi Info™ Companion for Android: A mobile companion to the Epi Info™ 7 desktop tool for epidemiologic data collection and analysis.Flucene: Object Document Mapper for Lucene.Netfluentserializer: FluentSerializer is a library for .NET usable to create serialize/deserialize data through its fluent interface. 
The methods it creates are compiled.hongjiapp: hongjiappidealthings educational comprehensive administration system: ?????????????????????????????????????????????.Java Accounting Library: The project aims at providing a Financial Accounting Java Library which may be integrated to any other Java Application independent of its Backend Database.mycnblogs: mycnblogsNETPack: Lightweight and flexible packer for .NETRandom Useful Code: This project is where I will store various useful classes I have built over time. Only the code will be provided, no Library or the like.Suleymaniye Tavimi: Namaz vakitleri hesaplama uygulamasidir. Istenilen yer için hesaplama yapar.

    Read the article

  • My vertex shader doesn't affect texture coords or diffuse info but works for position

    - by tina nyaa
    I am new to 3D and DirectX - in the past I have only used abstractions for 2D drawing. Over the past month I've been studying really hard and I'm trying to modify and adapt some of the shaders as part of my personal 'study project'. Below I have a shader, modified from one of the Microsoft samples. I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? // // Skinned Mesh Effect file // Copyright (c) 2000-2002 Microsoft Corporation. All rights reserved. // float4 lhtDir = {0.0f, 0.0f, -1.0f, 1.0f}; //light Direction float4 lightDiffuse = {0.6f, 0.6f, 0.6f, 1.0f}; // Light Diffuse float4 MaterialAmbient : MATERIALAMBIENT = {0.1f, 0.1f, 0.1f, 1.0f}; float4 MaterialDiffuse : MATERIALDIFFUSE = {0.8f, 0.8f, 0.8f, 1.0f}; // Matrix Pallette static const int MAX_MATRICES = 100; float4x3 mWorldMatrixArray[MAX_MATRICES] : WORLDMATRIXARRAY; float4x4 mViewProj : VIEWPROJECTION; /////////////////////////////////////////////////////// struct VS_INPUT { float4 Pos : POSITION; float4 BlendWeights : BLENDWEIGHT; float4 BlendIndices : BLENDINDICES; float3 Normal : NORMAL; float3 Tex0 : TEXCOORD0; }; struct VS_OUTPUT { float4 Pos : POSITION; float4 Diffuse : COLOR; float2 Tex0 : TEXCOORD0; }; float3 Diffuse(float3 Normal) { float CosTheta; // N.L Clamped CosTheta = max(0.0f, dot(Normal, lhtDir.xyz)); // propogate scalar result to vector return (CosTheta); } VS_OUTPUT VShade(VS_INPUT i, uniform int NumBones) { VS_OUTPUT o; float3 Pos = 0.0f; float3 Normal = 0.0f; float LastWeight = 0.0f; // Compensate for lack of UBYTE4 on Geforce3 int4 IndexVector = D3DCOLORtoUBYTE4(i.BlendIndices); // cast the vectors to arrays for use in the for loop below float BlendWeightsArray[4] = (float[4])i.BlendWeights; int IndexArray[4] = (int[4])IndexVector; // calculate the pos/normal using the "normal" weights // and accumulate the weights to calculate the last weight for (int iBone = 0; iBone < NumBones-1; iBone++) { LastWeight = LastWeight + BlendWeightsArray[iBone]; Pos += mul(i.Pos, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; Normal += mul(i.Normal, mWorldMatrixArray[IndexArray[iBone]]) * BlendWeightsArray[iBone]; } LastWeight = 1.0f - LastWeight; // Now that we have the calculated weight, add in the final influence Pos += (mul(i.Pos, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); Normal += (mul(i.Normal, mWorldMatrixArray[IndexArray[NumBones-1]]) * LastWeight); // transform position from world space into view and then projection space //o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Pos = mul(float4(Pos.xyz, 1.0f), mViewProj); o.Diffuse.x = 0.0f; o.Diffuse.y = 0.0f; o.Diffuse.z = 0.0f; o.Diffuse.w = 0.0f; o.Tex0 = float2(0,0); return o; } technique t0 { pass p0 { VertexShader = compile vs_3_0 VShade(4); } } I am currently using the SlimDX .NET wrapper around DirectX, but the API is extremely similar: public void Draw() { var device = vertexBuffer.Device; device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0); device.SetRenderState(RenderState.Lighting, true); device.SetRenderState(RenderState.DitherEnable, true); device.SetRenderState(RenderState.ZEnable, true); device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise); device.SetRenderState(RenderState.NormalizeNormals, true); device.SetSamplerState(0, SamplerState.MagFilter, TextureFilter.Anisotropic); 
device.SetSamplerState(0, SamplerState.MinFilter, TextureFilter.Anisotropic); device.SetTransform(TransformState.World, Matrix.Identity * Matrix.Translation(0, -50, 0)); device.SetTransform(TransformState.View, Matrix.LookAtLH(new Vector3(-200, 0, 0), Vector3.Zero, Vector3.UnitY)); device.SetTransform(TransformState.Projection, Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); var material = new Material(); material.Ambient = material.Diffuse = material.Emissive = material.Specular = new Color4(Color.White); material.Power = 1f; device.SetStreamSource(0, vertexBuffer, 0, vertexSize); device.VertexDeclaration = vertexDeclaration; device.Indices = indexBuffer; device.Material = material; device.SetTexture(0, texture); var param = effect.GetParameter(null, "mWorldMatrixArray"); var boneWorldTransforms = bones.OrderedBones.OrderBy(x => x.Id).Select(x => x.CombinedTransformation).ToArray(); effect.SetValue(param, boneWorldTransforms); effect.SetValue(effect.GetParameter(null, "mViewProj"), Matrix.Identity);// Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)device.Viewport.Width / device.Viewport.Height, 10, 10000000)); effect.SetValue(effect.GetParameter(null, "MaterialDiffuse"), material.Diffuse); effect.SetValue(effect.GetParameter(null, "MaterialAmbient"), material.Ambient); effect.Technique = effect.GetTechnique(0); var passes = effect.Begin(FX.DoNotSaveState); for (var i = 0; i < passes; i++) { effect.BeginPass(i); device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, skin.Vertices.Length, 0, skin.Indicies.Length / 3); effect.EndPass(); } effect.End(); } Again, I set diffuse and tex0 vertex shader outputs to zero, but my model still shows the full texture and lighting as if I hadn't changed the values from the vertex buffer. Changing the position of the model works, but nothing else. Why is this? Also, whatever I set in the bone transformation matrices doesn't seem to have an effect on my model. If I set every bone transformation to a zero matrix, the model still shows up as if nothing had happened, but changing the Pos field in shader output makes the model disappear. I don't understand why I'm getting this kind of behaviour. Thank you!
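As an aside on the skinning arithmetic itself: the VShade loop is just a weighted sum of per-bone transforms, with the last weight derived so that the weights sum to one, and that part can be prototyped outside HLSL. Below is a rough NumPy sketch of the same idea (row-vector convention like HLSL's mul, 4x4 matrices instead of the shader's float4x3, and entirely illustrative names); it is not tied to the asker's SlimDX code.

    import numpy as np

    def skin_vertex(pos, normal, bone_matrices, indices, weights):
        # pos, normal      : 3-component object-space vectors
        # bone_matrices    : one 4x4 world matrix per bone
        # indices, weights : up to four bone indices and their blend weights
        pos_h = np.append(pos, 1.0)          # homogeneous position
        out_pos = np.zeros(3)
        out_nrm = np.zeros(3)
        last_weight = 0.0
        # accumulate the first N-1 influences, as the shader loop does
        for i, w in zip(indices[:-1], weights[:-1]):
            last_weight += w
            out_pos += (pos_h @ bone_matrices[i])[:3] * w
            out_nrm += (normal @ bone_matrices[i][:3, :3]) * w
        # the final influence uses the leftover weight so the sum is exactly 1
        w_last = 1.0 - last_weight
        i_last = indices[-1]
        out_pos += (pos_h @ bone_matrices[i_last])[:3] * w_last
        out_nrm += (normal @ bone_matrices[i_last][:3, :3]) * w_last
        return out_pos, out_nrm

    # tiny check with two identity "bones" (illustrative values)
    bones = [np.eye(4), np.eye(4)]
    p, n = skin_vertex(np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 0.0]),
                       bones, indices=[0, 1], weights=[0.7, 0.3])

With identity matrices the blended position comes back unchanged, which is a handy sanity check before suspecting the matrix palette itself.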

    Read the article

  • Can grub handle same release (3.6) but new rc (rc5)?

    - by hhoyt
    can grub handle newer kerner rc ? I am running 3.6.0-rc4 ok, grub update definitely recognizes all required files for rc5, but edit of grub.cfg only shows rc4 after grub-update. D/N matter whether I generate kernel 3.6.0-rc5 or whether I install the .deb files. Generating grub.cfg ... using custom appearance settings Found background image: /usr/share/peppermint/wallpapers/Peppermint.jpg Found linux image: /boot/vmlinuz-3.6.0-030600rc5-generic Found linux image: /boot/vmlinuz-3.6.0-030600rc4-generic Found initrd image: /boot/initrd.img-3.6.0-030600rc4-generic Found linux image: /boot/vmlinuz-3.6.0-rc5 Found initrd image: /boot/initrd.img-3.6.0-rc5 Found linux image: /boot/vmlinuz-3.6.0-rc5.old Found initrd image: /boot/initrd.img-3.6.0-rc5 Found linux image: /boot/vmlinuz-3.5.3 Found initrd image: /boot/initrd.img-3.5.3 Found linux image: /boot/vmlinuz-3.5.3.old Found initrd image: /boot/initrd.img-3.5.3 Found linux image: /boot/vmlinuz-3.5.0-13-generic Found initrd image: /boot/initrd.img-3.5.0-13-generic Found Ubuntu 10.04.1 LTS (10.04) on /dev/sda1 Found Ubuntu 10.04.4 LTS (10.04) on /dev/sda10 Found Peppermint Two (2) on /dev/sda15 Found Ubuntu 10.10 (10.10) on /dev/sda16 Found Windows 7 (loader) on /dev/sda3 Found Ubuntu 11.04 (11.04) on /dev/sda5 Found Ubuntu 12.04.1 LTS (12.04) on /dev/sda6 Found Linux Mint 12 LXDE (12) on /dev/sda8 Found MS-DOS 5.x/6.x/Win3.1 on /dev/sdc1 If I press 'e' on boot startup of rc4 and manually change it to rc5 and ctrl-x, it comes up fine. I just cannot get grub.cfg to update such that rc4 is included. Thanks, Howard # DO NOT EDIT THIS FILE # It is automatically generated by grub-mkconfig using templates from /etc/grub.d and settings from /etc/default/grub # BEGIN /etc/grub.d/00_header if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="Windows 7 (loader) (on /dev/sda3)" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd1,msdos1)' search --no-floppy --fs-uuid --set=root 218e9f6f-c21e-4c50-90a5-5a40be639b66 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd1,msdos1)' search --no-floppy --fs-uuid --set=root 218e9f6f-c21e-4c50-90a5-5a40be639b66 set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi END /etc/grub.d/00_header BEGIN /etc/grub.d/05_debian_theme insmod part_msdos insmod ext2 set root='(hd1,msdos1)' search --no-floppy --fs-uuid --set=root 218e9f6f-c21e-4c50-90a5-5a40be639b66 insmod jpeg if background_image /usr/share/peppermint/wallpapers/Peppermint.jpg; then set color_normal=light-gray/black set color_highlight=magenta/black else set menu_color_normal=white/black set menu_color_highlight=black/light-gray fi END /etc/grub.d/05_debian_theme BEGIN /etc/grub.d/10_linux_proxy menuentry "Peppermint, with Linux 3.6.0-030600rc4-generic" --class peppermint --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set 
root='(hd1,msdos1)' search --no-floppy --fs-uuid --set=root 218e9f6f-c21e-4c50-90a5-5a40be639b66 linux /boot/vmlinuz-3.6.0-030600rc4-generic root=UUID=218e9f6f-c21e-4c50-90a5-5a40be639b66 ro initrd /boot/initrd.img-3.6.0-030600rc4-generic } END /etc/grub.d/10_linux_proxy BEGIN /etc/grub.d/30_os-prober_proxy menuentry "Peppermint, with Linux 3.6.0-030600rc4-generic (on /dev/sda15)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos15)' search --no-floppy --fs-uuid --set=root 21a3d91a-ae43-4f51-b8d6-7f3dc80967d7 linux /boot/vmlinuz-3.6.0-030600rc4-generic root=UUID=21a3d91a-ae43-4f51-b8d6-7f3dc80967d7 ro splash quiet splash vt.handoff=7 initrd /boot/initrd.img-3.6.0-030600rc4-generic } menuentry "Ubuntu, with Linux 3.0.0-24-generic (on /dev/sda10)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos10)' search --no-floppy --fs-uuid --set=root 6c9a0149-3045-4335-83fa-a2513ca3a250 linux /boot/vmlinuz-3.0.0-24-generic root=UUID=6c9a0149-3045-4335-83fa-a2513ca3a250 ro crashkernel=384M-2G:64M,2G-:128M splash initrd /boot/initrd.img-3.0.0-24-generic } menuentry "Ubuntu, with Linux 3.5.0-030500rc7-generic (on /dev/sda10)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos10)' search --no-floppy --fs-uuid --set=root 6c9a0149-3045-4335-83fa-a2513ca3a250 linux /boot/vmlinuz-3.5.0-030500rc7-generic root=UUID=6c9a0149-3045-4335-83fa-a2513ca3a250 ro crashkernel=384M-2G:64M,2G-:128M splash initrd /boot/initrd.img-3.5.0-030500rc7-generic } menuentry "Peppermint, with Linux 3.3.0-030300rc2-generic (on /dev/sda15)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos15)' search --no-floppy --fs-uuid --set=root 21a3d91a-ae43-4f51-b8d6-7f3dc80967d7 linux /boot/vmlinuz-3.3.0-030300rc2-generic root=UUID=21a3d91a-ae43-4f51-b8d6-7f3dc80967d7 ro splash quiet splash vt.handoff=7 initrd /boot/initrd.img-3.3.0-030300rc2-generic } menuentry "Ubuntu, with Linux 2.6.39-rc5-candela (on /dev/sda16)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos16)' search --no-floppy --fs-uuid --set=root 48fcb5ec-b51b-4afd-b0e5-a2aace66f6e1 linux /boot/vmlinuz-2.6.39-rc5-candela root=/dev/sda7 ro splash initrd /boot/initrd.img-2.6.39-rc5-candela } menuentry "Windows 7 (loader) (on /dev/sda3)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos3)' search --no-floppy --fs-uuid --set=root EA3EFABB3EFA7FBD chainloader +1 } menuentry "Ubuntu, with Linux 2.6.38-13-generic (on /dev/sda5)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root bcfe855e-a449-429d-b204-c667e129a4bd linux /boot/vmlinuz-2.6.38-13-generic root=UUID=bcfe855e-a449-429d-b204-c667e129a4bd ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.38-13-generic } menuentry "Ubuntu, with Linux 3.2.0-29-generic-pae (on /dev/sda6)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid --set=root 369605ad-1a92-4b7d-abb5-ce75cbdfc9c1 linux /boot/vmlinuz-3.2.0-29-generic-pae root=UUID=369605ad-1a92-4b7d-abb5-ce75cbdfc9c1 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-29-generic-pae } menuentry "Ubuntu, with Linux 3.2.0-23-generic-pae (on /dev/sda6)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos6)' search --no-floppy --fs-uuid 
--set=root 369605ad-1a92-4b7d-abb5-ce75cbdfc9c1 linux /boot/vmlinuz-3.2.0-23-generic-pae root=UUID=369605ad-1a92-4b7d-abb5-ce75cbdfc9c1 ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-23-generic-pae } menuentry "Linux Mint 12 LXDE, 3.0.0-12-generic (/dev/sda8) (on /dev/sda8)" --class gnu-linux --class gnu --class os { insmod part_msdos insmod ext2 set root='(hd0,msdos8)' search --no-floppy --fs-uuid --set=root ccdc67ed-e81c-4f85-9b75-fe0c24c65bb8 linux /boot/vmlinuz-3.0.0-12-generic root=UUID=ccdc67ed-e81c-4f85-9b75-fe0c24c65bb8 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-3.0.0-12-generic } menuentry "MS-DOS 5.x/6.x/Win3.1 (on /dev/sdc1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root A8F0DE02F0DDD6A2 drivemap -s (hd0) ${root} chainloader +1 } END /etc/grub.d/30_os-prober_proxy BEGIN /etc/grub.d/40_custom This file provides an easy way to add custom menu entries. Simply type the menu entries you want to add after this comment. Be careful not to change the 'exec tail' line above. END /etc/grub.d/40_custom BEGIN /etc/grub.d/41_custom if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi END /etc/grub.d/41_custom

    Read the article

  • Master-slave vs. peer-to-peer archictecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don't "trample" on each other's toes, resulting in data corruption. In a nutshell, here's the issue we were trying to address. In everyday life, traffic lights serve the same purpose: they ensure that traffic flows smoothly, and when everyone follows the rules, there are no accidents at intersections.

As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches, and every transaction has to pay this penalty. To use the earlier traffic light analogy: if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there's clearly no danger of a collision, you know what I mean.

The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if there were none, you could get by without holding any lock at all. In many "well-behaved" workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the "normal" behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case, so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness.

So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, there will be some interval of time during which some copies have received the update and others haven't. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they're all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all's good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies.

So what's the problem with this? There are two major issues. First, IT'S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies.
That's pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the outdated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency).

Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies. Here the application program also has to resolve conflicts and then propagate the correct value to the copies that are outdated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don't handle this correctly, and worse, don't even know it. Imagine having hundreds of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially additional messages? All this overhead, when all you were trying to do was read a data item. Wouldn't it be simpler to avoid this problem in the first place?

Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and they are also aware of the "currency" of a replica – in other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many).

So, back to my original point. A well designed and well architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible, in order to deliver a highly efficient and high performance system. If you've ever programmed an Oracle NoSQL Database application, you'll know the difference!
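To make the contrast concrete, here is a deliberately simplified Python sketch of the two read paths. This is not the Oracle NoSQL Database API or any real driver (replicas are plain dictionaries and the 'version' field is a simple counter), but it shows why the peer-to-peer read pays for several round-trips plus reconciliation while the master-routed read pays for one.

    def p2p_read(key, replicas):
        # peer-to-peer style: query every copy, pick the newest, repair the rest
        answers = [r[key] for r in replicas]              # N round-trips
        newest = max(answers, key=lambda v: v["version"])
        for replica, seen in zip(replicas, answers):
            if seen["version"] < newest["version"]:
                replica[key] = newest                      # read repair of a stale copy
        return newest

    def master_read(key, master, replicas, max_staleness_s=0):
        # master-slave style: a single request, routed by freshness needs
        if max_staleness_s == 0:
            return master[key]                             # authoritative copy
        return replicas[0][key]                            # any sufficiently fresh replica

    # illustrative data: one up-to-date replica and one stale replica
    master   = {"user:1": {"version": 7, "name": "Ada"}}
    replicas = [dict(master), {"user:1": {"version": 6, "name": "Ada (old)"}}]
    print(p2p_read("user:1", replicas))             # touches every copy, repairs the stale one
    print(master_read("user:1", master, replicas))  # one lookup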

    Read the article

  • Who could ask for more with LESS CSS? (Part 3 of 3&ndash;Clrizr)

    - by ToString(theory);
    Welcome back! In the first two posts in this series, I covered some of the awesome features in CSS precompilers such as SASS and LESS, as well as how to get an initial project set up and running in ASP.Net MVC 4. In this post, I will cover an actual advanced example of using LESS in a project, and show some of the great productivity features we gain from its usage.

Introduction
In the first post, I mentioned two subjects that I will be using in this example – constants, and color functions. I've always enjoyed using online color scheme utilities such as Adobe Kuler or Color Scheme Designer to come up with a scheme based off of one primary color. Using these tools and requesting a complementary scheme, you can get a couple of shades of your primary color, and a couple of shades of a complementary/accent color to display. Because there is no way in regular CSS to do color operations or store variables, there was no way to accomplish something like defining a primary color and having a site theme cascade off of it. However, with tools such as LESS, that impossibility becomes a reality! So, if you haven't guessed it by now, this post is on the creation of a plugin/module/LESS file to drop into your project: plug in one color, and have your primary theme cascade from it. I only went through the trouble of creating a module for getting Complementary colors. However, it wouldn't be too much trouble to go through other options such as Triad or Monochromatic to get a module that you could use off of that.

Step 1 – Analysis
I decided to mimic Adobe Kuler's Complementary theme algorithm as I liked its simplicity and aesthetics. Color Scheme Designer is great, but I do believe it can give you too many color options, which can lead to chaos and overload. The first thing I had to check was whether the complementary values for the color schemes were actually hues rotated by 180 degrees at all times – they aren't. Apparently Adobe applies some variance to the complementary colors to get colors that are actually more aesthetically appealing to users. So, I opened up Excel and began to plot complementary hues based on rotation in increments of 10 degrees. Long story short, I completed the same calculations for Hue, Saturation, and Lightness. For Hue, I only had to record the complementary hue values; for saturation and lightness, however, I had to record the values for ALL of the shades. Since the functions were too complicated to put into LESS (they aren't constant/linear, but rather interval functions), I instead opted to extrapolate the HSL values using the trendline function for each major interval, onto intervals of spacing 1. For example, the hue extraction split into four intervals – 0-60, 60-140, 140-270, and 270-360 degrees – each with its own fitted trendline function. Saturation and Lightness were much worse, but in the end I finally had functions for all of the intervals, and then went the route of just grabbing each shade's value in intervals of 1.

Step 2 – Mapping
I declared variable names for each of these sections as something that shouldn't ever conflict with a variable someone would define in their own file.
After I had each of the values, I extracted the values and put them into files of their own for hue variables, saturation variables, and lightness variables…  Example: /*HUE CONVERSIONS*/@clrizr-hue-source-0deg: 133.43;@clrizr-hue-source-1deg: 135.601;@clrizr-hue-source-2deg: 137.772;@clrizr-hue-source-3deg: 139.943;@clrizr-hue-source-4deg: 142.114;.../*SATURATION CONVERSIONS*/@clrizr-saturation-s2SV0px: 0;@clrizr-saturation-s2SV1px: 0;@clrizr-saturation-s2SV2px: 0;@clrizr-saturation-s2SV3px: 0;@clrizr-saturation-s2SV4px: 0;.../*LIGHTNESS CONVERSIONS*/@clrizr-lightness-s2LV0px: 30;@clrizr-lightness-s2LV1px: 31;@clrizr-lightness-s2LV2px: 32;@clrizr-lightness-s2LV3px: 33;@clrizr-lightness-s2LV4px: 34;...   In the end, I have 973 lines of mapping/conversion from source HSL to shade HSL for two extra primary shades, and two complementary shades. The last bit of the work was the file to compose each of the shades from these mappings. Step 3 – Clrizr Mapper The final step was the hardest to overcome as I was still trying to understand LESS to its fullest extent.  Imports As mentioned previously, I had separated the HSL mappings into different files, so the first necessary step is to import those for use into the Clrizr plugin: @import url("hue.less");@import url("saturation.less");@import url("lightness.less"); Extract Component Values For Each Shade Next, I extracted the necessary information for each shade HSL before shade composition: @clrizr-input-saturation: 1px+floor(saturation(@clrizr-input))-1;@clrizr-input-lightness: 1px+floor(lightness(@clrizr-input))-1; @clrizr-complementary-hue: formatstring("clrizr-hue-source-{0}", ceil(hue(@clrizr-input))); @clrizr-primary-2-saturation: formatstring("clrizr-saturation-s2SV{0}",@clrizr-input-saturation);@clrizr-primary-1-saturation: formatstring("clrizr-saturation-s1SV{0}",@clrizr-input-saturation);@clrizr-complementary-1-saturation: formatstring("clrizr-saturation-c1SV{0}",@clrizr-input-saturation); @clrizr-primary-2-lightness: formatstring("clrizr-lightness-s2LV{0}",@clrizr-input-lightness);@clrizr-primary-1-lightness: formatstring("clrizr-lightness-s1LV{0}",@clrizr-input-lightness);@clrizr-complementary-1-lightness: formatstring("clrizr-lightness-c1LV{0}",@clrizr-input-lightness); Here, you can see a couple of odd things…  On the first line, I am using operations to add units to the saturation and lightness.  This is due to some limitations in the operations that would give me saturation or lightness in %, which can’t be in a variable name.  So, I use first add 1px to it, which casts the result of the following functions as px instead of %, and then at the end, I remove that pixel.  You can also see here the formatstring method which is exactly what it sounds like – something like String.Format(string str, params object[] obj). Get Primary & Complementary Shades Now that I have components for each of the different shades, I can now compose them into each of their pieces.  
For this, I use the @@ operator which will look for a variable with the name specified in a string, and then call that variable: @clrizr-primary-2: hsl(hue(@clrizr-input), @@clrizr-primary-2-saturation, @@clrizr-primary-2-lightness);@clrizr-primary-1: hsl(hue(@clrizr-input), @@clrizr-primary-1-saturation, @@clrizr-primary-1-lightness);@clrizr-primary: @clrizr-input;@clrizr-complementary-1: hsl(@@clrizr-complementary-hue, @@clrizr-complementary-1-saturation, @@clrizr-complementary-1-lightness);@clrizr-complementary-2: hsl(@@clrizr-complementary-hue, saturation(@clrizr-input), lightness(@clrizr-input)); That’s is it, for the most part.  These variables now hold the theme for the one input color – @clrizr-input.  However, I have one last addition… Perceptive Luminance Well, after I got the colors, I decided I wanted to also get the best font color that would go on top of it.  Black or white depending on light or dark color.  Now I couldn’t just go with checking the lightness, as that is half the story.  You see, the human eye doesn’t see ALL colors equally well but rather has more cells for interpreting green light compared to blue or red.  So, using the ratio, we can calculate the perceptive luminance of each of the shades, and get the font color that best matches it! @clrizr-perceptive-luminance-ps2: round(1 - ( (0.299 * red(@clrizr-primary-2) ) + ( 0.587 * green(@clrizr-primary-2) ) + (0.114 * blue(@clrizr-primary-2)))/255)*255;@clrizr-perceptive-luminance-ps1: round(1 - ( (0.299 * red(@clrizr-primary-1) ) + ( 0.587 * green(@clrizr-primary-1) ) + (0.114 * blue(@clrizr-primary-1)))/255)*255;@clrizr-perceptive-luminance-ps: round(1 - ( (0.299 * red(@clrizr-primary) ) + ( 0.587 * green(@clrizr-primary) ) + (0.114 * blue(@clrizr-primary)))/255)*255;@clrizr-perceptive-luminance-pc1: round(1 - ( (0.299 * red(@clrizr-complementary-1)) + ( 0.587 * green(@clrizr-complementary-1)) + (0.114 * blue(@clrizr-complementary-1)))/255)*255;@clrizr-perceptive-luminance-pc2: round(1 - ( (0.299 * red(@clrizr-complementary-2)) + ( 0.587 * green(@clrizr-complementary-2)) + (0.114 * blue(@clrizr-complementary-2)))/255)*255; @clrizr-col-font-on-primary-2: rgb(@clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2);@clrizr-col-font-on-primary-1: rgb(@clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1);@clrizr-col-font-on-primary: rgb(@clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps);@clrizr-col-font-on-complementary-1: rgb(@clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1);@clrizr-col-font-on-complementary-2: rgb(@clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2); Conclusion That’s it!  I have posted a project on clrizr.codePlex.com for this, and included a testing page for you to test out how it works.  Feel free to use it in your own project, and if you have any questions, comments or suggestions, please feel free to leave them here as a comment, or on the contact page!
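As a quick sanity check of the perceptive-luminance trick described above, the same arithmetic can be written in a few lines of plain Python. This is purely illustrative (Clrizr itself does the calculation entirely in LESS, as shown), but it makes it easy to verify the black-or-white font choice for any background color.

    def font_color_for(r, g, b):
        # perceived brightness: green dominates, blue contributes least
        luminance = (0.299 * r + 0.587 * g + 0.114 * b) / 255
        # bright backgrounds get black text, dark backgrounds get white text
        return (0, 0, 0) if luminance > 0.5 else (255, 255, 255)

    print(font_color_for(250, 250, 210))   # light background -> (0, 0, 0)
    print(font_color_for(40, 40, 90))      # dark background  -> (255, 255, 255)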

    Read the article

  • CodePlex Daily Summary for Monday, September 03, 2012

    CodePlex Daily Summary for Monday, September 03, 2012Popular ReleasesMetodología General Ajustada - MGA: 03.01.03: Cambios Aury: Ajuste del margen del reporte. Visualización de la columna de Supuestos en la parte del módulo de Decisión. Cambios John: Integración de código con cambios enviados por Aury Niño. Generación de instaladores. Soporte técnico por correo electrónico y telefónico.Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...Thisismyusername's codeplex page.: HTML5 Mulititouch Fruit Ninja Proof of Concept: This is an example of how you could create a game such as Fruit Ninja using HTML5's multitouch capabilities. Sorry this example doesn't have great graphics. If I had my own webpage, I could store some graphics and upload the game there and it might look halfway decent, but since I'm only using a Codeplex page and most mobile devices can't open .zip files, the fruits are just circles. I hope you enjoy reading the source code anyway.GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.TSQL Code Smells Finder: POC 1.01: Proof of concept 1.01 TSQLDomTest.ps1 and Errors.Txt are requiredConfuser: Confuser build 76542: This is a build of changeset 76542.Reactive State Machine: ReactiveStateMachine-beta: TouchStateMachine now supports Microsoft Surface 2.0 SDK. The TouchStateMachine is an extension to the Reactive State Machine. Reactive State Machine uses NuGet for dependency managementSharePoint Column & View Permission: SharePoint Column and View Permission v1.2: Version 1.2 of this project. If you will find any bugs please let me know at enti@zoznam.sk or post your findings in Issue TrackerMihmojsos OS: Mihmojsos OS 3 (Smart Rabbit): !Mihmojsos OS 3 Smart Rabbit Mihmojsos Smart Rabbit is now availableDotNetNuke Translator: 01.00.00 Beta: First release of the project.YNA: YNA 0.2 alpha: Wath's new since 0.1 alpha ? A lot of changes but there are the most interresting : StateManager is now better and faster Mouse events for all YnObjects (Sprites, Images, texts) A really big improvement for YnGroup Gamepad support And the news : Tiled Map support (need refactoring) Isometric tiled map support (need refactoring) Transition effect like "FadeIn" and "FadeOut" (YnTransition) Timers (YnTimer) Path management (YnPath, need more refactoring) Downloads All downloads...Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.2: fixed several issues with streaming modeUrlPager: UrlPager 1.2: Fixed bug in which url parameters will lost after paging; ????????url???bug;Sofire Suite: Sofire v1.5.0.0: Sofire v1.5.0.0 ?? ???????? ?????: 1、?? 2、????EntLib.com????????: EntLib.com???????? 
v3.0: EntLib eCommerce Solution ???Microsoft .Net Framework?????????????????????。Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions to one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. The light-weight version is denoted by ...Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can be found in the INSTALL folder now, within your module's folder, not the packages folder anymore Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...New ProjectsBPVote4PPT: BPVote For PowerPointCosmo OS: La semplicità in un OSFinancial Analytic Tools: C#.Net Financial Analytic ToolsGeminiMVC: An Open Source CMS written in ASP.net MVC 4 with speed, extensibility, and ease-of-us in mind.JQuery SharePoint Autocomplete People Picker: This JQUery bundle provides an autocomplete people picker based on SharePoint profiles. It can be hosted on the SharePoint itself or on remote applications.Kerbal Space Program PartModule Library: This project is designed to add various functionalities to custom parts for the space program simulation game Kerbal Space Program.KeyboardRemapper: This tool to remaps keys in the keyboard. If you have more than one keyboard or an additional keypad, you can remap the keys of the each keyboard independentlyKHStudent: ??????Localized DataAnnotations with T4 templates: Simplified DataAnnotations localization using T4 templates.MfcLightToolkit: Supports development for small and simple MFC application. 
Provides asynchronous programming model like .NET, file download, easy control resizing, and so on.Müslüm ÖZTÜRK Code Lib: Test amaçli olusturulan projemdirPolska: Testproject in how a polish grammerprogram can look like.QueueLessApp: Here is the codeRusIS.CMS: aaaSGPS: Projeto de controle de produtos e serviçosStemmersNet: Stemmers pack for .Net FrameworkTrabajo Final de Ingenieria - Javier Vallejos: Tesis Final de la carrera de Ingenieria - Universidad Abierta Interamericana.TSQL Code Smells Finder: TSQL 'smells' findersXNA and Data Driven Design: This project includes links for XNA and Data Driven DesignXNA and System Testing: This project includes code for XNA and System TestingYUGI-AR Project: an open source project for yugioh based augmented reality???????? ? ?????????????: ???? ??????? ??????? ?????????????? ??????????? ?????????? ??? ? ????? ?????? ? ? ??? ??? ????? ? ??? ?????????? ????????????.

    Read the article

  • Why is my producer-consumer blocking?

    - by User007
    My code is here: http://pastebin.com/Fi3h0E0P Here is the output:

0
Should we take order today (y or n): y
Enter order number: 100
More customers (y or n): n
Stop serving customers right now.
Passing orders to cooker: There are total of 1 order(s)
1
Roger, waiter. I am processing order #100

The goal is that the waiter must take orders and then give them to the cook. The waiter has to wait until the cook finishes all the pizzas, deliver the pizzas, and then take new orders. I asked how P-V works in my previous post here. I don't think it has anything to do with \n consumption. I tried all kinds of combinations of wait(), but none work. Where did I make a mistake? The main part is here:

//Producer process
if(pid > 0) {
    while(1) {
        printf("0");
        P(emptyShelf);        // waiter as P finds no items on shelf
        P(mutex);             // has permission to use the shelf
        waiter_as_producer();
        V(mutex);             // cooker now can use the shelf
        V(orderOnShelf);      // cooker now can pick up orders
        wait();
        printf("2");
        P(pizzaOnShelf);
        P(mutex);
        waiter_as_consumer();
        V(mutex);
        V(emptyShelf);
        printf("3 ");
    }
}
if(pid == 0) {
    while(1) {
        printf("1");
        P(orderOnShelf);      // make sure there is an order on shelf
        P(mutex);             // permission to work
        cooker_as_consumer(); // take order and put pizza on shelf
        printf("return from cooker");
        V(mutex);             // release permission
        printf("just released perm");
        V(pizzaOnShelf);      // pizza is now on shelf
        printf("after");
        wait();
        printf("4");
    }
}

So I imagine this is the execution path: enter waiter_as_producer, then go to the child process (cooker), then transfer control back to the parent, finish waiter_as_consumer, then switch back to the child. The two wait() calls switch back to the parent (like I said, I tried all possible wait() combinations...).
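For comparison, here is a small Python threading sketch of the hand-off the question describes: the waiter posts an order, the cook replaces it with a pizza, and the waiter only delivers once the pizza is on the shelf, using semaphores in the same roles as the P/V calls above. It is not the pastebin code (that uses fork() and OS semaphores); the names and the ordering are illustrative of one sequence that does not block.

    import threading

    empty_shelf    = threading.Semaphore(1)   # shelf free for a new order
    order_on_shelf = threading.Semaphore(0)
    pizza_on_shelf = threading.Semaphore(0)
    mutex = threading.Lock()
    shelf = {}

    def waiter(orders):
        for number in orders:
            empty_shelf.acquire()              # P(emptyShelf)
            with mutex:
                shelf["order"] = number        # waiter as producer
            order_on_shelf.release()           # V(orderOnShelf)

            pizza_on_shelf.acquire()           # P(pizzaOnShelf): wait for the cook
            with mutex:
                pizza = shelf.pop("pizza")     # waiter as consumer: deliver
                print("delivering pizza for order #%d" % pizza)
            empty_shelf.release()              # V(emptyShelf)

    def cook(count):
        for _ in range(count):
            order_on_shelf.acquire()           # P(orderOnShelf)
            with mutex:
                number = shelf.pop("order")
                shelf["pizza"] = number        # cook the pizza
                print("Roger, waiter. I am processing order #%d" % number)
            pizza_on_shelf.release()           # V(pizzaOnShelf)

    orders = [100, 101]
    t1 = threading.Thread(target=waiter, args=(orders,))
    t2 = threading.Thread(target=cook, args=(len(orders),))
    t1.start(); t2.start(); t1.join(); t2.join()

Note there is no wait()-style hand-off here at all: the semaphores alone dictate who runs next, which is usually the cleanest way to reason about the ordering.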

    Read the article

  • CSS Ease-in-out to full screen

    - by Aditya Singh
    I have a black-background div of a fixed size which contains an image. <div id="Banner"> <img onclick="expand();" src="hola.jpg"> </div> #Banner { position:relative; height:50px; width:50px; margin:0 auto; background-color:#000000; -webkit-transition: all 0.5s ease-in-out 0.5s; -moz-transition: all 0.5s ease-in-out 0.5s; -o-transition: all 0.5s ease-in-out 0.5s; transition: all 0.5s ease-in-out 0.5s; } <script type="text/javascript"> function expand(){ document.getElementById('Banner').style['height'] = '250'; document.getElementById('Banner').style['width'] = '250'; } </script> So when the user clicks on the image, the div transitions to 250 by 250. My problem is that I want it to transition to full screen. The following JavaScript function does expand to full screen, but the transition effect doesn't happen. I need to do it with plain JavaScript, without jQuery. function expand(){ document.getElementById('Banner').style['position'] = 'absolute'; document.getElementById('Banner').style['height'] = '100%'; document.getElementById('Banner').style['width'] = '100%'; document.getElementById('Banner').style['top'] = '0'; document.getElementById('Banner').style['left'] = '0'; } Please advise. Update : Solution Roger below has provided an alternative solution. It takes care of the case where the document has already been scrolled to another position, and expands the div to the full browser screen.

sz = getSize(); // function returns screen width and height in pixels
currentWidth = 200;
currentHeight = 200;
scalex = sz.W / currentWidth;
scaley = sz.H / currentHeight;
transx = 0 - ((expandingDiv.offsetLeft + (currentWidth/2)) - (sz.W/2)) + document.body.scrollLeft;
transy = 0 - ((expandingDiv.offsetTop + (currentHeight/2)) - (sz.H/2)) + document.body.scrollTop;
transx = transx.toString();
transy = transy.toString();
document.getElementById("Banner").style['-webkit-transform'] = 'translate(' + transx + 'px,' + transy + 'px) scale(' + scalex + ',' + scaley + ')';

    Read the article

  • PyQt4: Scrollbar doesn't show in scrollarea when resizing dockWidget

    - by Whospal
    I created a python test program (Test_InfoPanel.py) that has MainWindow with dockWidget, and within it, a tabWidget with scrollArea widget. However, when I resize the MainWindow, the vertical scrollbar doesn't auto-appear when the Similarly, when I undock the dockWidget & resize, the vertical scrollbar doesn't auto-appear. Pls help! Test Program (Test_InfoPanel.py): #!/usr/bin/env python # Filename: Test_InfoPanel.py # Date: 2012-Sep-18 ''' This program test the scrollarea to show scrollbars for the InfoPanel_UI. ''' import sys from PyQt4 import QtCore, QtGui if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) # Look and feel changed to 'Cleanlooks'. app.setStyle('Cleanlooks') from InfoPanel_UI import Ui_MainWindow_InfoPanel AppWindow = QtGui.QMainWindow() ui = Ui_MainWindow_InfoPanel() ui.setupUi(AppWindow) ui.tabWidget_Info_Panel.setCurrentWidget(ui.scrollArea_Info_Panel) AppWindow.show() sys.exit(app.exec_()) Generated *.ui script (InfoPanel_UI.py): # -*- coding: utf-8 -*- # Form implementation generated from reading ui file 'InfoPanel.ui' # # Created: Wed Sep 19 13:11:06 2012 # by: PyQt4 UI code generator 4.9.4 # # WARNING! All changes made in this file will be lost! from PyQt4 import QtCore, QtGui try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: _fromUtf8 = lambda s: s class Ui_MainWindow_InfoPanel(object): def setupUi(self, MainWindow_InfoPanel): MainWindow_InfoPanel.setObjectName(_fromUtf8("MainWindow_InfoPanel")) MainWindow_InfoPanel.resize(602, 263) MainWindow_InfoPanel.setDocumentMode(False) self.centralwidget = QtGui.QWidget(MainWindow_InfoPanel) self.centralwidget.setObjectName(_fromUtf8("centralwidget")) MainWindow_InfoPanel.setCentralWidget(self.centralwidget) self.statusbar = QtGui.QStatusBar(MainWindow_InfoPanel) self.statusbar.setObjectName(_fromUtf8("statusbar")) MainWindow_InfoPanel.setStatusBar(self.statusbar) self.dockWidget_Info_Panel = QtGui.QDockWidget(MainWindow_InfoPanel) self.dockWidget_Info_Panel.setMinimumSize(QtCore.QSize(300, 140)) font = QtGui.QFont() font.setBold(True) font.setItalic(True) font.setWeight(75) self.dockWidget_Info_Panel.setFont(font) self.dockWidget_Info_Panel.setLayoutDirection(QtCore.Qt.LeftToRight) self.dockWidget_Info_Panel.setAllowedAreas(QtCore.Qt.LeftDockWidgetArea|QtCore.Qt.RightDockWidgetArea) self.dockWidget_Info_Panel.setObjectName(_fromUtf8("dockWidget_Info_Panel")) self.dockWidgetContents_Info_Panel = QtGui.QWidget() self.dockWidgetContents_Info_Panel.setObjectName(_fromUtf8("dockWidgetContents_Info_Panel")) self.tabWidget_Info_Panel = QtGui.QTabWidget(self.dockWidgetContents_Info_Panel) self.tabWidget_Info_Panel.setGeometry(QtCore.QRect(0, 0, 300, 215)) font = QtGui.QFont() font.setBold(False) font.setItalic(False) font.setWeight(50) self.tabWidget_Info_Panel.setFont(font) self.tabWidget_Info_Panel.setObjectName(_fromUtf8("tabWidget_Info_Panel")) self.tab_1 = QtGui.QWidget() self.tab_1.setObjectName(_fromUtf8("tab_1")) self.scrollArea_Info_Panel = QtGui.QScrollArea(self.tab_1) self.scrollArea_Info_Panel.setGeometry(QtCore.QRect(9, 9, 271, 171)) self.scrollArea_Info_Panel.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded) self.scrollArea_Info_Panel.setWidgetResizable(True) self.scrollArea_Info_Panel.setObjectName(_fromUtf8("scrollArea_Info_Panel")) self.scrollAreaWidgetContents = QtGui.QWidget() self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 269, 169)) self.scrollAreaWidgetContents.setObjectName(_fromUtf8("scrollAreaWidgetContents")) self.frame_Info_Panel = 
QtGui.QFrame(self.scrollAreaWidgetContents) self.frame_Info_Panel.setGeometry(QtCore.QRect(0, 0, 261, 161)) self.frame_Info_Panel.setObjectName(_fromUtf8("frame_Info_Panel")) self.label_Eqpt_Model = QtGui.QLabel(self.frame_Info_Panel) self.label_Eqpt_Model.setGeometry(QtCore.QRect(10, 10, 111, 27)) self.label_Eqpt_Model.setObjectName(_fromUtf8("label_Eqpt_Model")) self.lineEdit_Eqpt_Model = QtGui.QLineEdit(self.frame_Info_Panel) self.lineEdit_Eqpt_Model.setEnabled(False) self.lineEdit_Eqpt_Model.setGeometry(QtCore.QRect(120, 10, 111, 27)) palette = QtGui.QPalette() brush = QtGui.QBrush(QtGui.QColor(60, 60, 60)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(60, 60, 60)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Text, brush) self.lineEdit_Eqpt_Model.setPalette(palette) self.lineEdit_Eqpt_Model.setObjectName(_fromUtf8("lineEdit_Eqpt_Model")) self.label_State = QtGui.QLabel(self.frame_Info_Panel) self.label_State.setGeometry(QtCore.QRect(10, 40, 111, 27)) self.label_State.setObjectName(_fromUtf8("label_State")) self.lineEdit_State = QtGui.QLineEdit(self.frame_Info_Panel) self.lineEdit_State.setEnabled(False) self.lineEdit_State.setGeometry(QtCore.QRect(120, 40, 111, 27)) palette = QtGui.QPalette() brush = QtGui.QBrush(QtGui.QColor(60, 60, 60)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(60, 60, 60)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Text, brush) self.lineEdit_State.setPalette(palette) self.lineEdit_State.setObjectName(_fromUtf8("lineEdit_State")) self.groupBox_Current_Position = QtGui.QGroupBox(self.frame_Info_Panel) self.groupBox_Current_Position.setGeometry(QtCore.QRect(10, 70, 241, 91)) palette = QtGui.QPalette() brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.WindowText, brush) brush = QtGui.QBrush(QtGui.QColor(85, 255, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Button, brush) brush = QtGui.QBrush(QtGui.QColor(170, 255, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Light, brush) brush = QtGui.QBrush(QtGui.QColor(127, 255, 63)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Midlight, brush) brush = QtGui.QBrush(QtGui.QColor(42, 127, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Dark, brush) brush = QtGui.QBrush(QtGui.QColor(56, 170, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Mid, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(255, 255, 255)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.BrightText, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) 
brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.ButtonText, brush) brush = QtGui.QBrush(QtGui.QColor(255, 255, 255)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Base, brush) brush = QtGui.QBrush(QtGui.QColor(85, 255, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Window, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Shadow, brush) brush = QtGui.QBrush(QtGui.QColor(170, 255, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.AlternateBase, brush) brush = QtGui.QBrush(QtGui.QColor(255, 255, 220)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.ToolTipBase, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.ToolTipText, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.WindowText, brush) brush = QtGui.QBrush(QtGui.QColor(85, 255, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Button, brush) brush = QtGui.QBrush(QtGui.QColor(170, 255, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Light, brush) brush = QtGui.QBrush(QtGui.QColor(127, 255, 63)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Midlight, brush) brush = QtGui.QBrush(QtGui.QColor(42, 127, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Dark, brush) brush = QtGui.QBrush(QtGui.QColor(56, 170, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Mid, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(255, 255, 255)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.BrightText, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.ButtonText, brush) brush = QtGui.QBrush(QtGui.QColor(255, 255, 255)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Base, brush) brush = QtGui.QBrush(QtGui.QColor(85, 255, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Window, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Shadow, brush) brush = QtGui.QBrush(QtGui.QColor(170, 255, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.AlternateBase, brush) brush = QtGui.QBrush(QtGui.QColor(255, 255, 220)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.ToolTipBase, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.ToolTipText, brush) brush = QtGui.QBrush(QtGui.QColor(42, 127, 0)) brush.setStyle(QtCore.Qt.SolidPattern) 
palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.WindowText, brush) brush = QtGui.QBrush(QtGui.QColor(85, 255, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Button, brush) brush = QtGui.QBrush(QtGui.QColor(170, 255, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Light, brush) brush = QtGui.QBrush(QtGui.QColor(127, 255, 63)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Midlight, brush) brush = QtGui.QBrush(QtGui.QColor(42, 127, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Dark, brush) brush = QtGui.QBrush(QtGui.QColor(56, 170, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Mid, brush) brush = QtGui.QBrush(QtGui.QColor(42, 127, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(255, 255, 255)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.BrightText, brush) brush = QtGui.QBrush(QtGui.QColor(42, 127, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.ButtonText, brush) brush = QtGui.QBrush(QtGui.QColor(85, 255, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Base, brush) brush = QtGui.QBrush(QtGui.QColor(85, 255, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Window, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Shadow, brush) brush = QtGui.QBrush(QtGui.QColor(85, 255, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.AlternateBase, brush) brush = QtGui.QBrush(QtGui.QColor(255, 255, 220)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.ToolTipBase, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 0)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.ToolTipText, brush) self.groupBox_Current_Position.setPalette(palette) self.groupBox_Current_Position.setObjectName(_fromUtf8("groupBox_Current_Position")) self.label_Current_Position_X = QtGui.QLabel(self.groupBox_Current_Position) self.label_Current_Position_X.setGeometry(QtCore.QRect(20, 20, 41, 27)) self.label_Current_Position_X.setObjectName(_fromUtf8("label_Current_Position_X")) self.lineEdit_Current_Position_X = QtGui.QLineEdit(self.groupBox_Current_Position) self.lineEdit_Current_Position_X.setEnabled(False) self.lineEdit_Current_Position_X.setGeometry(QtCore.QRect(60, 20, 161, 27)) palette = QtGui.QPalette() brush = QtGui.QBrush(QtGui.QColor(60, 60, 60)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(60, 60, 60)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Text, brush) self.lineEdit_Current_Position_X.setPalette(palette) self.lineEdit_Current_Position_X.setObjectName(_fromUtf8("lineEdit_Current_Position_X")) self.label_Current_Position_Y = 
QtGui.QLabel(self.groupBox_Current_Position) self.label_Current_Position_Y.setGeometry(QtCore.QRect(20, 50, 41, 27)) self.label_Current_Position_Y.setObjectName(_fromUtf8("label_Current_Position_Y")) self.lineEdit_Current_Position_Y = QtGui.QLineEdit(self.groupBox_Current_Position) self.lineEdit_Current_Position_Y.setEnabled(False) self.lineEdit_Current_Position_Y.setGeometry(QtCore.QRect(60, 50, 161, 27)) palette = QtGui.QPalette() brush = QtGui.QBrush(QtGui.QColor(60, 60, 60)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Active, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(60, 60, 60)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Inactive, QtGui.QPalette.Text, brush) brush = QtGui.QBrush(QtGui.QColor(0, 0, 127)) brush.setStyle(QtCore.Qt.SolidPattern) palette.setBrush(QtGui.QPalette.Disabled, QtGui.QPalette.Text, brush) self.lineEdit_Current_Position_Y.setPalette(palette) self.lineEdit_Current_Position_Y.setObjectName(_fromUtf8("lineEdit_Current_Position_Y")) self.scrollArea_Info_Panel.setWidget(self.scrollAreaWidgetContents) self.tabWidget_Info_Panel.addTab(self.tab_1, _fromUtf8("")) self.tab_2 = QtGui.QWidget() self.tab_2.setObjectName(_fromUtf8("tab_2")) self.tabWidget_Info_Panel.addTab(self.tab_2, _fromUtf8("")) self.dockWidget_Info_Panel.setWidget(self.dockWidgetContents_Info_Panel) MainWindow_InfoPanel.addDockWidget(QtCore.Qt.DockWidgetArea(2), self.dockWidget_Info_Panel) self.retranslateUi(MainWindow_InfoPanel) self.tabWidget_Info_Panel.setCurrentIndex(0) QtCore.QMetaObject.connectSlotsByName(MainWindow_InfoPanel) def retranslateUi(self, MainWindow_InfoPanel): MainWindow_InfoPanel.setWindowTitle(QtGui.QApplication.translate("MainWindow_InfoPanel", "MainWindow Info Panel", None, QtGui.QApplication.UnicodeUTF8)) self.dockWidget_Info_Panel.setWindowTitle(QtGui.QApplication.translate("MainWindow_InfoPanel", "Info Panel", None, QtGui.QApplication.UnicodeUTF8)) self.label_Eqpt_Model.setText(QtGui.QApplication.translate("MainWindow_InfoPanel", "Eqpt Model:", None, QtGui.QApplication.UnicodeUTF8)) self.lineEdit_Eqpt_Model.setText(QtGui.QApplication.translate("MainWindow_InfoPanel", "ABC", None, QtGui.QApplication.UnicodeUTF8)) self.label_State.setText(QtGui.QApplication.translate("MainWindow_InfoPanel", "State:", None, QtGui.QApplication.UnicodeUTF8)) self.lineEdit_State.setText(QtGui.QApplication.translate("MainWindow_InfoPanel", "Working", None, QtGui.QApplication.UnicodeUTF8)) self.groupBox_Current_Position.setTitle(QtGui.QApplication.translate("MainWindow_InfoPanel", "Current Position:", None, QtGui.QApplication.UnicodeUTF8)) self.label_Current_Position_X.setText(QtGui.QApplication.translate("MainWindow_InfoPanel", "X =", None, QtGui.QApplication.UnicodeUTF8)) self.lineEdit_Current_Position_X.setText(QtGui.QApplication.translate("MainWindow_InfoPanel", "1000.00 m", None, QtGui.QApplication.UnicodeUTF8)) self.label_Current_Position_Y.setText(QtGui.QApplication.translate("MainWindow_InfoPanel", "Y =", None, QtGui.QApplication.UnicodeUTF8)) self.lineEdit_Current_Position_Y.setText(QtGui.QApplication.translate("MainWindow_InfoPanel", "1000.00 m", None, QtGui.QApplication.UnicodeUTF8)) self.tabWidget_Info_Panel.setTabText(self.tabWidget_Info_Panel.indexOf(self.tab_1), QtGui.QApplication.translate("MainWindow_InfoPanel", "Info_Pg 1", None, QtGui.QApplication.UnicodeUTF8)) self.tabWidget_Info_Panel.setTabText(self.tabWidget_Info_Panel.indexOf(self.tab_2), QtGui.QApplication.translate("MainWindow_InfoPanel", "Info_Pg 2", 
None, QtGui.QApplication.UnicodeUTF8)) PS: I initially created the mainWindow as a Dialog, but realized that after undock & redock, the dockWidget doesn't dock properly. Somehow there's an offset. This doesn't seem to be a problem if the mainWindow is a QtGui.QMainWindow instead of a QtGui.QDialog.
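    For what it's worth, the generated UI places every widget with a fixed setGeometry() call and uses no layouts, so nothing tracks the dock's size when it is resized or floated, and the scroll area never learns that its contents no longer fit. Below is a minimal sketch of one workaround, applied on top of the same generated InfoPanel_UI module; the 261x161 minimum size is copied from frame_Info_Panel's geometry in the .ui file, and the rest is an illustrative assumption rather than part of the original post.

    # Hypothetical follow-up sketch: give the widgets layouts so the scroll area
    # tracks the dock/tab size and the scrollbars can appear "as needed".
    import sys
    from PyQt4 import QtGui
    from InfoPanel_UI import Ui_MainWindow_InfoPanel

    if __name__ == "__main__":
        app = QtGui.QApplication(sys.argv)
        app.setStyle('Cleanlooks')

        AppWindow = QtGui.QMainWindow()
        ui = Ui_MainWindow_InfoPanel()
        ui.setupUi(AppWindow)

        # Let the tab widget fill the dock contents...
        dock_layout = QtGui.QVBoxLayout(ui.dockWidgetContents_Info_Panel)
        dock_layout.setContentsMargins(0, 0, 0, 0)
        dock_layout.addWidget(ui.tabWidget_Info_Panel)

        # ...and let the scroll area fill the first tab page.
        tab_layout = QtGui.QVBoxLayout(ui.tab_1)
        tab_layout.setContentsMargins(0, 0, 0, 0)
        tab_layout.addWidget(ui.scrollArea_Info_Panel)

        # With setWidgetResizable(True) the viewport is free to shrink the contents
        # widget, so pin a minimum size (taken from frame_Info_Panel's geometry);
        # the vertical scrollbar then appears whenever the dock, docked or floating,
        # is resized below that height.
        ui.scrollAreaWidgetContents.setMinimumSize(261, 161)

        AppWindow.show()
        sys.exit(app.exec_())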

    Read the article

  • Using JQuery tabs in an HTML 5 page

    - by nikolaosk
    In this post I will show you how to create a simple tabbed interface using JQuery, HTML 5 and CSS. Make sure you have downloaded the latest version of JQuery (minified version) from http://jquery.com/download. Please find here all my posts regarding JQuery. Also have a look at my posts regarding HTML 5. In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3School. Another excellent resource is HTML 5 Doctor. Two very nice sites that show you which features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr. In this hands-on example I will be using Expression Web 4.0. This application is not a free application. You can use any HTML editor you like. You can use Visual Studio 2012 Express edition. You can download it here. Let me move on to the actual example. This is the sample HTML 5 page: <!DOCTYPE html><html lang="en"> <head> <title>Liverpool Legends</title> <meta http-equiv="Content-Type" content="text/html;charset=utf-8" > <link rel="stylesheet" type="text/css" href="style.css"> <script type="text/javascript" src="jquery-1.8.2.min.js"> </script> <script type="text/javascript" src="tabs.js"></script> </head> <body> <header> <h1>Liverpool Legends</h1> </header> <section id="tabs"> <ul> <li><a href="#first-tab">Defenders</a></li> <li><a href="#second-tab">Midfielders</a></li> <li><a href="#third-tab">Strikers</a></li> </ul> <div id="first-tab"> <h3>Liverpool Defenders</h3> <p>The best defenders that played for Liverpool are Jamie Carragher, Sami Hyypia, Ron Yeats and Alan Hansen.</p> </div> <div id="second-tab"> <h3>Liverpool Midfielders</h3> <p>The best midfielders that played for Liverpool are Kenny Dalglish, John Barnes, Ian Callaghan, Steven Gerrard and Jan Molby.</p> </div> <div id="third-tab"> <h3>Liverpool Strikers</h3> <p>The best strikers that played for Liverpool are Ian Rush, Roger Hunt, Robbie Fowler and Fernando Torres.<br/></p> </div> </section> <footer> <p>All Rights Reserved</p> </footer> </body> </html> This is very simple HTML markup.
    I have styled this markup using CSS. The contents of the style.css file follow: * { margin: 0; padding: 0; } header {font-family:Tahoma;font-size:1.3em;color:#505050;text-align:center;} #tabs { font-size: 0.9em; margin: 20px 0; } #tabs ul { float: left; background: #777; width: 260px; padding-top: 24px; } #tabs li { margin-left: 8px; list-style: none; } * html #tabs li { display: inline; } #tabs li, #tabs li a { float: left; } #tabs ul li.active { border-top: 2px red solid; background: #15ADFF; } #tabs ul li.active a { color: #333333; } #tabs div { background: #15ADFF; clear: both; padding: 15px; min-height: 200px; } #tabs div h3 { margin-bottom: 12px; } #tabs div p { line-height: 26px; } #tabs ul li a { text-decoration: none; padding: 8px; color: #0b2f20; font-weight: bold; } footer {background-color:#999;width:100%;text-align:center;font-size:1.1em;color:#002233;} There are some CSS rules that style the various elements in the HTML 5 file. These are straightforward rules. The JQuery code lives inside the tabs.js file: $(document).ready(function(){ $('#tabs div').hide(); $('#tabs div:first').show(); $('#tabs ul li:first').addClass('active'); $('#tabs ul li a').click(function(){ $('#tabs ul li').removeClass('active'); $(this).parent().addClass('active'); var currentTab = $(this).attr('href'); $('#tabs div').hide(); $(currentTab).show(); return false; }); }); I am using some of the most commonly used JQuery functions like hide, show, addClass and removeClass: I hide and show the tab panels and move the 'active' class to whichever tab was clicked. When I view the page I get the expected tabbed interface. Hope it helps!!!!!

    Read the article

  • How to OpenSSL decrypt smime.p7m

    - by tntu
    I have received an email that has no content, just a file called smime.p7m attached. I was looking into OpenSSL and its smime module, but I cannot figure out exactly how to use it. I must be doing something wrong. I extracted the certificate chain from the p7m file: # openssl pkcs7 -inform DER -in smime.p7m -out pkcs7.pem # openssl pkcs7 -in pkcs7.pem -print_certs -out certs.pem Then I tried to decrypt: # openssl smime -decrypt -in smime.p7m -signer certs.pem -out smime.eml No recipient certificate or key specified And also with my server's SSL cert: # openssl smime -decrypt -in smime.p7m -recip server.nopass.key.crt.ca.pem -out smime.eml Error reading S/MIME message 140078540371784:error:0D0D40D1:asn1 encoding routines:SMIME_read_ASN1:no content type:asn_mime.c:447: Can anyone shed some light on what steps I need to take to extract the email?
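    One hedged observation that may help: an email whose only visible part is smime.p7m is often an opaque-signed message rather than an encrypted one, and signed content is unwrapped with -verify, not -decrypt; -decrypt only works with the recipient's own certificate and matching private key (-recip plus -inkey), not with the signer chain pulled out of the p7m. A rough sketch of both attempts, wrapping openssl from Python, with placeholder file names:

    # Sketch only: try to get the payload out of smime.p7m, assuming openssl is on PATH.
    import subprocess

    def run(cmd):
        # Run a command, returning (exit_code, combined stdout/stderr).
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        return proc.returncode, out

    # 1) If the message is merely signed, -verify unwraps it; -noverify skips
    #    checking the signer's chain so the content still comes out.
    code, out = run(["openssl", "smime", "-verify", "-noverify", "-inform", "DER",
                     "-in", "smime.p7m", "-out", "smime.eml"])
    if code == 0:
        print("Extracted signed content into smime.eml")
    else:
        # 2) Otherwise assume it is enveloped (encrypted). Decryption needs *our*
        #    certificate AND private key; the file names here are placeholders.
        code, out = run(["openssl", "smime", "-decrypt", "-inform", "DER",
                         "-in", "smime.p7m", "-recip", "my_cert.pem",
                         "-inkey", "my_private_key.pem", "-out", "smime.eml"])
        print("decrypt exit code:", code)
        print(out.decode(errors="replace"))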

    Read the article

  • Proxmox VE cluster not working

    - by JJ56
    I have been using a Proxmox VE 2.0 cluster (2 servers) for months. Today, I had the "bright idea" to install a GUI on one of the servers. I used tasksel, and selected the graphical desktop environment (Gnome). When I rebooted the server, neither of the servers on the cluster could see each other (shows a red light by the server on the sidebar, no statistics). The VMs on each server are working fine, individually. pvecm status on the broken server shows cman_tool: cannot open connection to cman, is it running ?. Running it on the other server outputs a lot of lines (here: http://pastebin.com/HpQfUHTU), but I assume the important bit is expected votes:2 total votes: 1 Trying pvecm delnode (othernode) on either server outputs: cluster not ready - no quorum? Any suggestions as to how I can fix this? Thanks in advance!
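    A hedged sketch of the usual first steps on the node that lost its peer: the cman_tool error simply means cman is not running, so bringing the cluster stack back up and re-checking quorum is worth trying before anything drastic. The service names and pvecm subcommands below are assumed to be the stock Proxmox VE 2.x ones, and this would be run as root on the broken node.

    # Sketch only: restart the PVE 2.x cluster stack and re-check membership.
    import subprocess

    def sh(cmd):
        # Echo and run a shell command, returning its exit code.
        print("+ " + cmd)
        return subprocess.call(cmd, shell=True)

    sh("service cman start")           # "cannot open connection to cman" => cman is down
    sh("service pve-cluster restart")  # pmxcfs syncs over the cman/corosync stack

    sh("pvecm status")                 # both nodes should reappear and quorum return
    sh("pvecm nodes")

    # If installing GNOME pulled in network-manager, it can fight with the manually
    # configured vmbr0/eth0 bridge that the cluster traffic rides on; worth checking:
    sh("dpkg -l network-manager")
    sh("cat /etc/network/interfaces")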

    Read the article

  • Powershell Set-BitsTransfer - access is denied

    - by Rouan van Dalen
    Hi, I have the following PowerShell script: Import-Module BitsTransfer Get-BitsTransfer -AllUsers | Set-BitsTransfer -SetOwnerToCurrentUser which is running on Server 2008 R2. I get the error message: Set-BitsTransfer : Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) I have googled around but cannot seem to find the reason why this happens or how to fix it. I suspect it is a permission issue. I am logged in as the local administrator and use the 'Run as administrator' option when starting PowerShell. I have also tried setting the execution policy, but it makes no difference. I am not aware of any special permission requirements on PowerShell cmdlets. Could someone please shed some light on this? Thanks. Rouan

    Read the article

  • Master/Slave DNS setup vs. rsync'ed DNS servers

    - by Jakobud
    We currently have primary and secondary DNS servers on our corporate network. They are set up in a master/slave configuration, where the slave gets its DNS information from the master. I'm trying to figure out what the real advantage of the master/slave setup is over just setting up an automated rsync between the two to keep the DNS settings matched. Can anyone shed some light on this? Or is it just a matter of preference? If so, it seems like the rsync approach would be much easier to set up, maintain and understand.
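    One concrete, checkable advantage worth naming: master/slave replication is driven by the zone's SOA serial (NOTIFY plus AXFR/IXFR), so the slave reloads automatically only when the serial increases and you can audit whether the two are in sync, whereas an rsync'd zone file still needs a reload on the slave and gives you no built-in consistency signal. A small illustration of that audit, assuming the dnspython package is installed; the server IPs and zone name are placeholders:

    # Sketch: compare the SOA serial reported by the master and the slave.
    import dns.resolver

    MASTER = "192.168.1.10"    # placeholder
    SLAVE = "192.168.1.11"     # placeholder
    ZONE = "corp.example.com"  # placeholder

    def soa_serial(server, zone):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        answer = resolver.query(zone, "SOA")  # use .resolve() on dnspython >= 2.0
        return answer[0].serial

    master_serial = soa_serial(MASTER, ZONE)
    slave_serial = soa_serial(SLAVE, ZONE)
    print("master serial:", master_serial)
    print("slave serial: ", slave_serial)
    if master_serial == slave_serial:
        print("slave is in sync with the master")
    else:
        print("slave is behind; with AXFR it catches up on the next NOTIFY/refresh")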

    Read the article

  • Can't install Windows 7 on Acer Aspire M1100

    - by r0ca
    When I install Windows 7, everything goes smoothly, but as soon as it's done and Windows needs to reboot for the last time before reaching the desktop, the computer gets stuck at Verify DMI Pool Data............. and then, nothing. I changed the CMOS battery and tried many different BIOS settings, even loading the defaults... Nothing worked. The HDD light is not flickering anymore, there is no HDD activity, and CTRL-ALT-DEL doesn't work. It's just impossible to load Windows 7. I tried Windows XP and that works fine. I also tried the Acer (Futureshop) recovery CD and I get a hexadecimal error message stating the install cannot continue. Is there a BIOS flash app somewhere, or a fix I can apply, to get Windows 7 Ultimate installed on my computer? Any takers?

    Read the article

  • Recommendations on laptops - long battery life, slim, Windows 7

    - by molecule
    Hi, I have been looking at a couple of laptops... The most important requirements are long battery life, a slim and light build, and Windows 7. One line that seems to fit the bill is the Sony Vaio Z series. They claim to have long battery life, some really impressive specs (on paper) and a very nice design. They are not on sale where I am yet, but has anyone seen or used them? Second would be the Lenovo (IBM) ThinkPad X301; for business users, the ThinkPad seems to be the way to go. I have also considered the Dell Alienware M11x, but it may not be completely appropriate for business use? What would you recommend with a budget of USD 1500-2000?

    Read the article

  • which performance counters mainly matter for windows server performance?

    - by Karl Cassar
    We have a website which sometimes performs slowly and/or hangs completely. I have temporarily set up the default system performance data collector in Performance Monitor to see if it can shed some light. However, the default Data Collector Set collects a huge number of counters and generates huge log files: just 8 hours of collection resulted in 4 GB of data. Which performance counters matter the most when judging server load? Also, is it a performance concern if one leaves such data collectors running indefinitely? Obviously, I will not know in advance when the server will experience slow performance, so I need the logs there so that I can check them afterwards. Any other specific guidelines on monitoring server performance would be greatly appreciated. The OS is Windows Server 2008 R2 (Web Edition).
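    As a rough rule of thumb, the counters that usually say the most about server load are Processor\% Processor Time, Memory\Available MBytes (plus Memory\Pages/sec), PhysicalDisk\Avg. Disk Queue Length, Network Interface\Bytes Total/sec and, on an IIS/ASP.NET box, ASP.NET\Requests Queued and Requests/sec; logging just those at a coarse interval keeps the files tiny compared to the default set. The sketch below only illustrates that lighter-weight sampling idea, using the psutil package (an assumption) rather than PerfMon itself:

    # Sketch: coarse sampling of a few key resources instead of a full collector set.
    import psutil

    INTERVAL = 5  # seconds between samples; coarse on purpose so logs stay small

    last_disk = psutil.disk_io_counters()
    while True:
        cpu = psutil.cpu_percent(interval=INTERVAL)  # ~ Processor\% Processor Time
        mem = psutil.virtual_memory()                # ~ Memory\Available MBytes
        disk = psutil.disk_io_counters()
        read_mb = (disk.read_bytes - last_disk.read_bytes) / (1024.0 * 1024)
        write_mb = (disk.write_bytes - last_disk.write_bytes) / (1024.0 * 1024)
        last_disk = disk
        print("cpu=%5.1f%%  mem_avail=%6d MB  disk_read=%7.1f MB  disk_write=%7.1f MB"
              % (cpu, mem.available / (1024 * 1024), read_mb, write_mb))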

    Read the article

  • Help me choose a desktop: which of these two should I buy?

    - by Sammy
    I just want the more powerful of the two: Choice 1: http://www.bestbuy.com/site/Gateway+-+Desktop+with+AMD+Phenom%26%23153%3B+II+Quad-Core+Processor/9698936.p?id=1218153428687&skuId=9698936 Choice 2: /site/HP+-+Pavilion+Desktop+with+AMD+Phenom%26%23153%3B+II+Quad-Core+Processor/9694506.p?id=1218150609828&skuId=9694506 I can't post more than one hyperlink since I am a new user, so please add the bestbuy domain name before choice 2. The latter choice is a bit more expensive, but not by much, so I don't care about that. As for what I intend to use my machine for: just regular web surfing, light gaming, web development related work, etc. But that doesn't really matter; of these two I just want to know which is the better, more powerful system, and which you would buy if you were in my position.

    Read the article

  • Troubleshooting PC

    - by srand
    After playing PC games for a few hours I decided to take a break. When I opened up My Computer in Windows 7 I noticed one of my drives had disappeared. Thinking it was just a glitch, I tried to restart. Upon restart, the BIOS took forever to get through (the missing hard drive didn't show up in the list of drives either) and the computer seemed stuck at the "Starting Windows" screen. I did a hard shutdown of everything, opened up the case, used canned air to clear the dust out and made sure all devices were snugly in place. I hooked everything back up and powered on. This time, after a few seconds the computer restarted on its own (nothing showed up on screen either). After that restart, the computer didn't do anything; the hard drive indicator light was on the whole time. What happened? :( PC specs: Windows 7, 3 GB RAM, Core 2 Duo, 3 hard drives

    Read the article

< Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >