Search Results

Search found 6079 results on 244 pages for 'define'.


  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would reach the view point through each pixel must be traced back until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had given it a go, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
    background Midnight_Blue
    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>
    define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }}
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint { from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60 resolution 640, 480 aspect 1.6 image_format 0 }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders. The front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in the pin board toys. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
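    The generator itself is not listed in this article, but the following is a minimal sketch of the kind of console application described above; the pin spacing, grid size, pin dimensions and output file name are assumptions for illustration, not the values used for the real board.

    using System;
    using System.Globalization;
    using System.IO;
    using System.Text;

    class PinMatrixGenerator
    {
        static void Main()
        {
            StringBuilder scene = new StringBuilder();
            double spacing = 0.25;   // assumed distance between pin centres
            int rows = 75;           // assumed grid size
            int columns = 75;

            for (int row = 0; row < rows; row++)
            {
                for (int col = 0; col < columns; col++)
                {
                    // First matrix of pins
                    AppendPin(scene, col * spacing, row * spacing);
                    // Second matrix, offset by half the spacing so the two grids intersect
                    AppendPin(scene, (col + 0.5) * spacing, (row + 0.5) * spacing);
                }
            }

            File.WriteAllText("pins.pi", scene.ToString());
        }

        // Emits the sphere and cylinder that together form one pin at (x, y)
        static void AppendPin(StringBuilder scene, double x, double y)
        {
            scene.AppendLine(string.Format(CultureInfo.InvariantCulture,
                "object {{ sphere <{0:0.000}, {1:0.000}, 0.000>, 0.10 chrome }}", x, y));
            scene.AppendLine(string.Format(CultureInfo.InvariantCulture,
                "object {{ cylinder <{0:0.000}, {1:0.000}, 0.000>, <{0:0.000}, {1:0.000}, -0.400>, 0.05 chrome }}", x, y));
        }
    }

    In the real animation the Z coordinate of each pin is then offset per frame, which is where the depth images described next come in.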
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers | Cost
    1                 | $500
    16                | $8,000
    256               | $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles | Render Time | Cost
    1                      | 256 hours   | $30.72
    16                     | 16 hours    | $30.72
    256                    | 1 hour      | $30.72

    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    • On-Premise
      o Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
      o Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
      o Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      o Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
    • Windows Azure
      o Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role with the local storage configuration highlighted is shown below.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay" >
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with the Copy Always property set. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer.
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job.
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity.
            roleLifecycleDataSource.Alive("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the resources in the render farm are being used.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="CloudRayWorkerRole">
        <Instances count="16" />
        <ConfigurationSettings>
          <Setting name="DataConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

    Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

    With 16 worker roles up and running it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
      <Role name="CloudRayWorkerRole">
        <Instances count="256" />
        <ConfigurationSettings>
          <Setting name="DataConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

    Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

    We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.

    The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below.

    Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

    The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    • Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    • The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    • Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    • No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    • Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    • Windows Azure accounts are typically limited to 20 cores.
      If you need to use more than this, a call to support and a credit card check will be required.
    • Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    • Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
    • If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    • Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    • Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • How should I define a composite foreign key for domain constraints in the presence of surrogate keys

    - by Samuel Danielson
    I am writing a new app with Rails so I have an id column on every table. What is the best practice for enforcing domain constraints using foreign keys? I'll outline my thoughts and frustration. Here's what I would imagine as "The Rails Way". It's what I started with.

    Companies:
      id: integer, serial
      company_code: char, unique, not null
    Invoices:
      id: integer, serial
      company_id: integer, not null
    Products:
      id: integer, serial
      sku: char, unique, not null
      company_id: integer, not null
    LineItems:
      id: integer, serial
      invoice_id: integer, not null, references Invoices (id)
      product_id: integer, not null, references Products (id)

    The problem with this is that a product from one company might appear on an invoice for a different company. I added a (company_id: integer, not null) to LineItems, sort of like I'd do if only using natural keys and serials, then added a composite foreign key.

    LineItems (product_id, company_id) references Products (id, company_id)
    LineItems (invoice_id, company_id) references Invoices (id, company_id)

    This properly constrains LineItems to a single company but it seems over-engineered and wrong. company_id in LineItems is extraneous because the surrogate foreign keys are already unique in the foreign table. Postgres requires that I add a unique index for the referenced attributes, so I am creating a unique index on (id, company_id) in Products and Invoices, even though id is simply unique. The following table with natural keys and a serial invoice number would not have these issues.

    LineItems:
      company_code: char, not null
      sku: char, not null
      invoice_id: integer, not null

    I can ignore the surrogate keys in the LineItems table but this also seems wrong. Why make the database join on char when it has an integer already there to use? Also, doing it exactly like the above would require me to add company_code, a natural foreign key, to Products and Invoices. The compromise...

    LineItems:
      company_id: integer, not null
      sku: char, not null
      invoice_id: integer, not null

    does not require natural foreign keys in other tables but it is still joining on char when there is an integer available. Is there a clean way to enforce domain constraints with foreign keys like God intended, but in the presence of surrogates, without turning the schema and indexes into a complicated mess?
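    For what it's worth, here is a rough Postgres sketch of the composite-key compromise described in the question (table and column names simplified, and this is only one way to cut it), keeping the surrogate ids for joins while still constraining line items to a single company:

    CREATE TABLE companies (
        id           serial PRIMARY KEY,
        company_code char(10) UNIQUE NOT NULL
    );

    CREATE TABLE products (
        id         serial PRIMARY KEY,
        company_id integer NOT NULL REFERENCES companies (id),
        sku        char(20) UNIQUE NOT NULL,
        UNIQUE (id, company_id)   -- the extra unique index Postgres needs for the composite FK
    );

    CREATE TABLE invoices (
        id         serial PRIMARY KEY,
        company_id integer NOT NULL REFERENCES companies (id),
        UNIQUE (id, company_id)
    );

    CREATE TABLE line_items (
        id         serial PRIMARY KEY,
        company_id integer NOT NULL,
        invoice_id integer NOT NULL,
        product_id integer NOT NULL,
        FOREIGN KEY (invoice_id, company_id) REFERENCES invoices (id, company_id),
        FOREIGN KEY (product_id, company_id) REFERENCES products (id, company_id)
    );

    The joins stay on integers, and a line item whose product and invoice belong to different companies is rejected by the pair of composite constraints; the cost is the redundant company_id column and the extra unique indexes, which is exactly the trade-off the question describes.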

    Read the article

  • Unit testing code paths

    - by Michael
    When unit testing using expectations, you define a set of method calls and corresponding results for those calls. These define the path through the method that you want to test. I have read that unit tests should not duplicate the code. But when you define these expectations, isn't that duplicating the code, or at least the process? How do you know when you're duplicating functionality under test?

    Read the article

  • Use of Business Parameters in BPM12c

    - by Abhishek Mittal-Oracle
    With the release of BPM12c, a new feature to use Business Parameters is introduced through which we can define a business parameter which will behave as a global variable which can be used within BPM project. Business Administrator can be the one responsible to modify the business parameters value dynamically at run-time which may bring change in BPM process flow where it is used.This feature was a part of BPM10g product and was extensively used. In BPM11g, this feature is not present currently.Business Parameters can be defined in 2 ways:1. Using Jdev to define business parameters, and 2. Using BPM workspace to define business parameters.It is important to note that business parameters need to be mapped with a valid organisation unit defined in a BPM project. If the same is not handled, exceptions like 'BPM-70702' will be thrown by BPM Engine. This is because business parameters work along with organisation defined in a BPM project.At the same time, we can use same business parameter across different organisation units with different values. Business Parameters in BPM12c has this capability to handle multiple values with different organisation units defined in a single BPM project. This enables business to re-use same business parameters defined in a BPM project across different organisations.Business parameters can be defined using the below data types:1. int2. string 3. boolean4. double While defining an business parameter, it is mandatory to provide a default value. Below are the steps to define a business parameter in Jdev: Step 1:  Open 'Organization' and click on 'Business Parameters' tab.Step 2:  Click on '+' button.Step 3: Add business parameter name, type and provide default value(mandatory).Step 4: Click on 'OK' button.Step 5: Business parameter is defined. Below are the steps to define a business parameter in BPM workspace: Step 1: Login to BPM workspace using admin-username and password.Step 2: Click on 'Administration' on the right top side of workspace.Step 3: Click on 'Business Parameters' in the left navigation panel under 'Organization'. Step 4:  Click on '+' button.Step 5: Add business parameter name, type and provide default value(mandatory).Step 6: Click on 'OK' button.Step 7: Business parameter is defined. Note: As told earlier in the blog, it is necessary to define and map a valid organization ID with predefined variable 'organizationalUnit' under data associations in an BPM process before the business parameter is used. I have created one sample PoC demonstrating the use of Business Parameters in BPM12c and it can be found here.

    Read the article

  • Where do I define dependency properties shared by the detail views in a master-detail MVVM WPF scenario

    - by absence
    I can think of two ways to implement dependency properties that are shared between the detail views:
    1. Store them in the master view model and add data bindings to the detail view models when they are created, and bind to them in the detail view.
    2. Don't store them in the view models at all, and use FindAncestor to bind directly to properties of the master view instead.
    What are the pros and cons of each, and are there other/better options?
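    As a rough illustration of the second option (every name here is invented for the sketch), the detail view can bind straight to a dependency property declared on the master view by walking up the tree with FindAncestor:

    <!-- In the detail view; SharedFilter is assumed to be a dependency property
         registered on the MasterView class. -->
    <TextBox Text="{Binding Path=SharedFilter,
                            RelativeSource={RelativeSource FindAncestor,
                                            AncestorType={x:Type views:MasterView}},
                            Mode=TwoWay}" />

    The first option would instead bind through the ancestor's DataContext (Path=DataContext.SharedFilter), which keeps the shared state in the master view model rather than on the view itself.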

    Read the article

  • Is it incorrect to define a hashcode of an object as the sum, multiplication, whatever, of all class variables' hashcodes?

    - by devoured elysium
    Let's say I have the following class:

    class ABC {
        private int myInt = 1;
        private double myDouble = 2;
        private String myString = "123";
        private SomeRandomClass1 myRandomClass1 = new ...
        private SomeRandomClass2 myRandomClass2 = new ...

        //pseudo code
        public int myHashCode() {
            return 37 * myInt.hashcode() * myDouble.hashCode() * ... * myRandomClass.hashcode()
        }
    }

    Would this be a correct implementation of hashCode? This is not how I usually do it (I tend to follow Effective Java's guidelines) but I always have the temptation to just do something like the above code. Thanks
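    One note on the idea itself: a pure product collapses to 0 as soon as any single field hashes to 0, so many otherwise different objects end up with the same hash. A sketch of the additive style from Effective Java, using the same fields (and assuming the SomeRandomClass types have reasonable hashCode() implementations of their own):

    @Override
    public int hashCode() {
        int result = 17;
        result = 31 * result + myInt;   // primitives go in directly
        long doubleBits = Double.doubleToLongBits(myDouble);
        result = 31 * result + (int) (doubleBits ^ (doubleBits >>> 32));
        result = 31 * result + (myString == null ? 0 : myString.hashCode());
        result = 31 * result + (myRandomClass1 == null ? 0 : myRandomClass1.hashCode());
        result = 31 * result + (myRandomClass2 == null ? 0 : myRandomClass2.hashCode());
        return result;
    }

    Overriding hashCode (rather than adding a separate myHashCode method) also matters if the object is ever used in a HashMap or HashSet, and any hashCode override should be paired with a matching equals.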

    Read the article

  • How to define index by several columns in hibernate entity?

    - by foobar
    Morning. I need to add indexing to a hibernate entity. As far as I know it is possible to use the @Index annotation to specify an index for a separate column, but I need an index over several fields of the entity. I've googled and found the jboss annotation @Table, which allows this (according to the specification). But (I don't know why) this functionality doesn't work. Maybe the jboss version is lower than necessary, or maybe I don't understand how to use this annotation, but... the complex index is not created. Why might the index not be created? jboss version 4.2.3.GA

    Entity example:

    package somepackage;

    import org.hibernate.annotations.Index;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    @org.hibernate.annotations.Table(appliesTo = House.TABLE_NAME,
        indexes = { @Index(name = "IDX_XDN_DFN", columnNames = {House.XDN, House.DFN} ) } )
    public class House {
        public final static String TABLE_NAME = "house";
        public final static String XDN = "xdn";
        public final static String DFN = "dfn";

        @Id
        @GeneratedValue
        private long Id;

        @Column(name = XDN)
        private long xdn;

        @Column(name = DFN)
        private long dfn;

        @Column
        private String address;

        public long getId() { return Id; }
        public void setId(long id) { this.Id = id; }
        public long getXdn() { return xdn; }
        public void setXdn(long xdn) { this.xdn = xdn; }
        public long getDfn() { return dfn; }
        public void setDfn(long dfn) { this.dfn = dfn; }
        public String getAddress() { return address; }
        public void setAddress(String address) { this.address = address; }
    }

    When jboss/hibernate tries to create the table "house" it throws the following exception: Reason: org.hibernate.AnnotationException: @org.hibernate.annotations.Table references an unknown table: house
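    One possible cause, offered only as a guess from the error text: the entity's logical table name defaults to "House" (the entity name), while appliesTo points at "house", so Hibernate cannot match the two. A sketch of making the JPA table name explicit so both annotations agree:

    @Entity
    @javax.persistence.Table(name = House.TABLE_NAME)
    @org.hibernate.annotations.Table(appliesTo = House.TABLE_NAME,
        indexes = { @Index(name = "IDX_XDN_DFN", columnNames = { House.XDN, House.DFN }) })
    public class House {
        // ... fields and accessors as above ...
    }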

    Read the article

  • How do I define a Calculated Measure in MDX based on a Dimension Attribute?

    - by ShaneD
    I would like to create a calculated measure that sums up only a specific subset of records in my fact table based on a dimension attribute. Given: Dimension Date LedgerLineItem {Charge, Payment, Write-Off, Copay, Credit} Measures LedgerAmount Relationships * LedgerLineItem is a degenerate dimension of FactLedger If I break down LedgerAmount by LedgerLineItem.Type I can easily see how much is charged, paid, credit, etc, but when I do not break it down by LedgerLineItem.Type I cannot easily add the credit, paid, credit, etc into a pivot table. I would like to create separate calculated measures that sum only specific type (or multiple types) of ledger facts. An example of the desired output would be: | Year | Charged | Total Paid | Amount - Ledger | | 2008 | $1000 | $600 | -$400 | | 2009 | $2000 | $1500 | -$500 | | Total | $3000 | $2100 | -$900 | I have tried to create the calculated measure a couple of ways and each one works in some circumstances but not in others. Now before anyone says do this in ETL, I have already done it in ETL and it works just fine. What I am trying to do as part of learning to understand MDX better is to figure out how to duplicate what I have done in the ETL in MDX as so far I am unable to do that. Here are two attempts I have made and the problems with them. This works only when ledger type is in the pivot table. It returns the correct amount of the ledger entries (although in this case it is identical to [amount - ledger] but when I try to remove type and just get the sum of all ledger entries it returns unknown. CASE WHEN ([Ledger].[Type].currentMember = [Ledger].[Type].&[Credit]) OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Paid]) OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Held Money: Copay]) THEN [Measures].[Amount - ledger] ELSE 0 END This works only when ledger type is not in the pivot table. It always returns the total payment amount, which is incorrect when I am slicing by type as I would only expect to see the credit portion under credit, the paid portion, under paid, $0 under charge, etc. sum({([Ledger].[Type].&[Credit]), ([Ledger].[Type].&[Paid]), ([Ledger].[Type].&[Held Money: Copay])}, [Measures].[Amount - ledger]) Is there any way to make this return the correct numbers regardless of whether Ledger.Type is included in my pivot table or not?

    Read the article

  • Can I give a different id to define Cakephp model associations?

    - by gacrux
    I have a one to many association in which a Thing can have many Statuses, defined as below:

    Status Model:

    class Status extends AppModel {
        var $name = 'Status';
        var $belongsTo = array(
            'Thing' => array(
                'className' => 'Thing',
                'foreignKey' => 'thing_id',
            ),
        );
    }

    Thing Model:

    class Thing extends AppModel {
        var $name = 'Thing';
        var $belongsTo = array(
            // other associations
        );
        var $hasMany = array(
            'Status' => array(
                'className' => 'Status',
                'foreignKey' => 'thing_id',
                'dependent' => false,
                'order' => 'datetime DESC',
                'limit' => '10',
            ),
            // other associations
        );
    }

    This works OK, but I would like Thing to use a different id to connect to Status. E.g. Thing would use 'id' for all of its other associations but use 'thing_status_id' for its Status association. How can I best do this?

    Read the article

  • Where can I define Conditional compilation constants for Delphi Prism?

    - by Martijn
    I've just ported a Web service from Delphi.NET 2006 to Delphi Prism 2009 (running in the Visual Studio 2008 IDE). But I can't find where I'm supposed to set (or unset) the conditional compilation constants! Am I blind, has this option been left out, or is it just not supported in VS? [edit: thanks to Mohammed Nasman for the link] MSDN tells me to set them using the Project Designer. First, it took me a while to figure out that the Project menu is only visible when the Solution is selected (and not the web service project). Then, there's still no way to set conditional compilation constants in the Project Designer! I just can't find a way to get to the Project Options in an ASP.NET project... Is it really not possible?

    Read the article

  • Can you define <=> in Ruby and then have ==, >, <, >=, and <= defined automatically?

    - by jeremy Ruten
    Here's part of my Note class: class Note attr_accessor :semitones, :letter, :accidental def initialize(semitones, letter, accidental = :n) @semitones, @letter, @accidental = semitones, letter, accidental end def <=>(other) @semitones <=> other.semitones end def ==(other) @semitones == other.semitones end def >(other) @semitones > other.semitones end def <(other) @semitones < other.semitones end end It seems to me like there should be a module that I could include that could give me my equality and comparison operators based on my <=> method. Is there one? I'm guessing a lot of people run into this kind of problem. How do you usually solve it? (How do you make it DRY?)
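    Ruby ships with exactly this module: Comparable derives ==, <, <=, >, >= and between? from <=>. A trimmed sketch of the class above using it:

    class Note
      include Comparable   # ==, <, <=, >, >=, between? all come from <=>

      attr_accessor :semitones, :letter, :accidental

      def initialize(semitones, letter, accidental = :n)
        @semitones, @letter, @accidental = semitones, letter, accidental
      end

      def <=>(other)
        semitones <=> other.semitones
      end
    end

    Note.new(3, 'C') < Note.new(5, 'D')    # => true
    Note.new(3, 'C') == Note.new(3, 'E')   # => true, equality follows <=> returning 0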

    Read the article

  • MS Access 2003 - Is there a way to programmatically define the data for a chart?

    - by Justin
    So I have some VBA for taking charts built with the Form's Chart Wizard, and automatically inserting them into PowerPoint presentation slides. I use those chart-forms as subforms within larger forms that have parameters the user can select to determine what is on the chart. The idea is that the user can determine the parameter, build the chart to his/her liking, and click a button and have it in a ppt slide with the company's background template, blah blah blah..... So it works, though it is very bulky in terms of the number of objects I have to use to accomplish this. I use expressions such as the following: Like Forms!frmMain.Month & "*" to get the input values into the saved queries, which was fine when I first started, but it went over so well and they want so many options that it is driving the number of saved queries/objects up. I need several saved forms with charts because of the number of different types of charts I need to have this be able to handle. SO FINALLY TO MY QUESTION: I would much rather do all this on the fly with some VBA. I know how to insert list boxes and text boxes on a form, and I know how to use SQL in VBA to get the values I want from tables/queries; I just don't know if there is some VBA I can use to set the data values of the charts from a resulting recordset:

    Dim rs As DAO.Recordset
    Dim db As DAO.Database
    Dim sql As String

    sql = "SELECT TOP 5 Count(tblMain.TransactionID) AS Total, tblMain.Location FROM tblMain WHERE (((tblMain.Month) = """ & Me.txtMonth & """ )) ORDER BY Count(tblMain.TransactionID) DESC;"

    Set db = CurrentDb
    Set rs = db.OpenRecordset(sql)
    rs.MoveFirst
    ' some kind of cool code in here to make this recordset the data of chart in frmChart ("Chart01")

    Thanks for your help. Apologies for the length of the explanation.
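    One approach worth trying (a sketch only; the control names are made up and I have not verified every detail against Access 2003): a chart on a form exposes a RowSource property, so the SQL can be built in VBA and assigned directly instead of multiplying saved queries. Note the aggregate query would also need a GROUP BY:

    Dim sql As String
    sql = "SELECT TOP 5 tblMain.Location, Count(tblMain.TransactionID) AS Total " & _
          "FROM tblMain " & _
          "WHERE tblMain.Month = """ & Me.txtMonth & """ " & _
          "GROUP BY tblMain.Location " & _
          "ORDER BY Count(tblMain.TransactionID) DESC;"

    ' Point the chart control on the subform at the ad-hoc SQL.
    Me!subChart.Form!Chart01.RowSource = sql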

    Read the article

  • Define a variable/parameter in CSS and use it in javascript?

    - by Ross
    I want to define a variable/parameter in CSS and access it from JavaScript. What is the best way of doing this? One option I thought of would be to place a custom attribute on the body element, then access that attribute via jQuery. A little more info: I'm defining colour sets in CSS. I've created a function with RaphaelJS to draw a gradient background on the page. I want this function to use a highlight colour (which is not used elsewhere in the DOM) which will be defined in the CSS along with the other colour elements. Thanks
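    A small sketch of two routes (the attribute name and the #colour-defs rule are invented here): read a data attribute off the body element, or park the colour in a CSS rule and read the computed style of a throwaway element that the rule targets, so the value genuinely lives in the stylesheet.

    // Route 1: custom attribute on <body>, e.g. <body data-highlight="#3b9ddd">
    var highlight = $('body').attr('data-highlight');

    // Route 2: keep the colour in CSS, e.g. #colour-defs { color: #3b9ddd; }
    var $probe = $('<div id="colour-defs"></div>').appendTo('body');
    var highlightFromCss = $probe.css('color');   // returned as "rgb(59, 157, 221)"
    $probe.remove();

    // Either value can then be passed to the RaphaelJS gradient function.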

    Read the article

  • Writing Device Drivers for Microcontrollers, where to define IO Port pins?

    - by volting
    I always seem to encounter this dilemma when writing low level code for MCU's. I never know where to declare pin definitions so as to make the code as reusable as possible. In this case I'm writing a driver to interface an 8051 to an MCP4922 12-bit serial DAC. I'm unsure how/where I should declare the pin definitions for the CS (chip select) and LDAC (data latch) pins for the DAC. At the moment they're declared in the header file for the driver. I've done a lot of research trying to figure out the best approach but haven't really found anything. I basically want to know what the best practices are... if there are some books worth reading or online information, examples etc., any recommendations would be welcome. Just a snippet of the driver so you get the idea:

    /**
     @brief This function is used to write a 16bit data word to DAC B - 12 data bits plus 4 configuration bits
     @param dac_data A 12bit word
     @param ip_buf_unbuf_select Input Buffered/unbuffered select bit. Buffered = 1; Unbuffered = 0
     @param gain_select Output Gain Selection bit. 1 = 1x (VOUT = VREF * D/4096). 0 = 2x (VOUT = 2 * VREF * D/4096)
    */
    void MCP4922_DAC_B_TX_word(unsigned short int dac_data, bit ip_buf_unbuf_select, bit gain_select)
    {
        unsigned char low_byte=0, high_byte=0;

        CS = 0;                                     /**Select the chip*/

        high_byte |= ((0x01 << 7) | (0x01 << 4));   /**Set bit to select DAC A and Set SHDN bit high for DAC A active operation*/
        if(ip_buf_unbuf_select)
            high_byte |= (0x01 << 6);
        if(gain_select)
            high_byte |= (0x01 << 5);
        high_byte |= ((dac_data >> 8) & 0x0F);
        low_byte |= dac_data;

        SPI_master_byte(high_byte);
        SPI_master_byte(low_byte);

        CS = 1;
        LDAC = 0;                                   /**Latch the Data*/
        LDAC = 1;
    }
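    For what it's worth, one common convention is to keep the driver itself free of hardware assumptions and move the pin assignments into a small board-specific header owned by the project rather than the driver. A sketch of that arrangement (the file and macro names are invented, and the sbit syntax assumes a Keil C51-style 8051 compiler):

    /* board_mcp4922.h -- wiring for this particular board, included by the project,
       so the driver source never hard-codes port pins. */
    #ifndef BOARD_MCP4922_H
    #define BOARD_MCP4922_H

    #include <reg51.h>

    sbit MCP4922_CS   = P1^4;   /* chip select for the DAC */
    sbit MCP4922_LDAC = P1^5;   /* data latch for the DAC  */

    #endif /* BOARD_MCP4922_H */

    /* mcp4922.c then does:
         #include "board_mcp4922.h"
       and uses MCP4922_CS / MCP4922_LDAC instead of CS and LDAC,
       so re-targeting the driver to another board only means editing the header. */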

    Read the article

  • How to define an n-m relation in doctrine?

    - by murze
    If got a table "Article" and a table "Tags". Articles can have multiple tags and tags can hang to multiple articles. The class BaseArticle looks like this: abstract class BaseArticle extends Doctrine_Record { public function setTableDefinition() { $this->setTableName('article'); $this->hasColumn('article_id', 'integer', 8, array( 'type' => 'integer', 'length' => 8, 'fixed' => false, 'unsigned' => false, 'primary' => true, 'autoincrement' => true, )); $this->hasColumn('title', 'string', null, array( 'type' => 'string', 'fixed' => false, 'unsigned' => false, 'primary' => false, 'notnull' => false, 'autoincrement' => false, )); $this->hasColumn('text', 'string', null, array( 'type' => 'string', 'fixed' => false, 'unsigned' => false, 'primary' => false, 'notnull' => false, 'autoincrement' => false, $this->hasColumn('url', 'string', 255, array( 'type' => 'string', 'length' => 255, 'fixed' => false, 'unsigned' => false, 'primary' => false, 'notnull' => false, 'autoincrement' => false, )); } public function setUp() { parent::setUp(); $this->hasMany('Tag as Tags', array( 'local' => 'article_id', 'foreign'=>'tag_id', 'refClass'=>'Articletag') ); } } The BaseTag-class like this: abstract class BaseTag extends Doctrine_Record { public function setTableDefinition() { $this->setTableName('tag'); $this->hasColumn('tag_id', 'integer', 4, array( 'type' => 'integer', 'length' => 4, 'fixed' => false, 'unsigned' => false, 'primary' => true, 'autoincrement' => true, )); $this->hasColumn('name', 'string', null, array( 'type' => 'string', 'fixed' => false, 'unsigned' => false, 'primary' => false, 'notnull' => false, 'autoincrement' => false, )); } public function setUp() { parent::setUp(); $this->hasMany('Article as Articles', array( 'local' => 'tag_id', 'foreign'=>'article_id', 'refClass'=>'Articletag') ); } } And the relationship class like this: abstract class BaseArticletag extends Doctrine_Record { public function setTableDefinition() { $this->setTableName('articletag'); $this->hasColumn('article_id', 'integer', 8, array( 'type' => 'integer', 'length' => 8, 'fixed' => false, 'unsigned' => false, 'primary' => true, 'autoincrement' => false, )); $this->hasColumn('tag_id', 'integer', 4, array( 'type' => 'integer', 'length' => 4, 'fixed' => false, 'unsigned' => false, 'primary' => true, 'autoincrement' => false, )); } public function setUp() { parent::setUp(); } } When I try to get a property from the article all goes well by using: $article = Doctrine_Query::create()->from('Article a') ->where('id = ?' , 1) ->fetchOne(); echo $article->title; //gives me the title But when I try this: foreach($article->Tags as $tag) { echo($tag->name) } I get an error: Unknown record property / related component "Tags" on "Article"

    Read the article

  • How to define a varying length of the string in XSD pattern?

    - by infant programmer
    The input XML tag must be validated against a pattern like this: type: positive int / decimal, minimum length is 0, max length is 12 (before the decimal point), fraction digits are optional; if they exist the precision must be 2. This means both positive integer and decimal numbers (with 2-digit precision) are allowed, so the acceptable values are, for example: null, 0, 0.00, 1234567890, 123456789012, 123456789012.12. Invalid values are: 0.000, 1234567890123 (13 digits - invalid). The pattern I have designed is: <xs:pattern value="|([0-9]){12}|([0-9]){12}[.][0-9][0-9]"/> The problem with this pattern is that it doesn't allow numbers with fewer than 12 digits; it says "1234567890" is an invalid value, whereas it must be allowed!
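    A sketch of one way to relax the pattern: {1,12} allows one to twelve integer digits, the optional inner group keeps the two fraction digits, and the outer ? keeps the empty value legal.

    <xs:pattern value="([0-9]{1,12}(\.[0-9]{2})?)?"/>

    With this pattern the empty string, 0, 0.00, 1234567890, 123456789012 and 123456789012.12 all validate, while 0.000 and 1234567890123 are rejected.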

    Read the article

  • How to define generic super type for static factory method?

    - by Esko
    If this has already been asked, please link and close this one. I'm currently prototyping a design for a simplified API of a certain another API that's a lot more complex (and potentially dangerous) to use. Considering the related somewhat complex object creation I decided to use static factory methods to simplify the API and I currently have the following which works as expected: public class Glue<T> { private List<Type<T>> types; private Glue() { types = new ArrayList<Type<T>>(); } private static class Type<T> { private T value; /* some other properties, omitted for simplicity */ public Type(T value) { this.value = value; } } public static <T> Glue<T> glueFactory(String name, T first, T second) { Glue<T> g = new Glue<T>(); Type<T> firstType = new Glue.Type<T>(first); Type<T> secondType = new Glue.Type<T>(second); g.types.add(firstType); g.types.add(secondType); /* omitted complex stuff */ return g; } } As said, this works as intended. When the API user (=another developer) types Glue<Horse> strongGlue = Glue.glueFactory("2HP", new Horse(), new Horse()); he gets exactly what he wanted. What I'm missing is that how do I enforce that Horse - or whatever is put into the factory method - always implements both Serializable and Comparable? Simply adding them to factory method's signature using <T extends Comparable<T> & Serializable> doesn't necessarily enforce this rule in all cases, only when this simplified API is used. That's why I'd like to add them to the class' definition and then modify the factory method accordingly. PS: No horses (and definitely no ponies!) were harmed in writing of this question.
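    A sketch of moving the bound onto the class itself (note that the static factory has to re-declare the bound, since static members cannot see the class's type parameter; Comparable<? super T> is used so that types comparable through a supertype still qualify):

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    public class Glue<T extends Comparable<? super T> & Serializable> {

        private final List<Type<T>> types = new ArrayList<Type<T>>();

        private Glue() { }

        private static class Type<T> {
            private T value;
            public Type(T value) { this.value = value; }
        }

        public static <T extends Comparable<? super T> & Serializable>
                Glue<T> glueFactory(String name, T first, T second) {
            Glue<T> g = new Glue<T>();
            g.types.add(new Type<T>(first));
            g.types.add(new Type<T>(second));
            // ... omitted complex stuff, as in the original ...
            return g;
        }
    }

    With this in place, Glue.glueFactory("2HP", new Horse(), new Horse()) only compiles if Horse (or a superclass it is comparable through) implements both Comparable and Serializable.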

    Read the article

  • How to define a multipage environment not interrupted by tables and figures?

    - by Egon Willighagen
    I have defined a new LaTeX environment for excursions in a book chapter I am writing. The environment spans multiple pages and often includes inline images. Moreover, I am using the shaded environment to give the environment a background colour and make it stand out a bit. However, the environment, as shown below, is split up by floating tables and images, which makes its flow visually more difficult to follow. For example, it is now difficult to see whether a floating image or table is part of the excursion (the missing background colour does not help). So, I would like to extend my environment to disallow it being interrupted by floating elements, but I do not know how to do that.

    \newcounter{bioclipse}
    \def\thebioclipse{\thechapter-\arabic{bioclipse}}
    \newenvironment{bioclipse}[2][]{\begin{small}\begin{shaded}\refstepcounter{bioclipse}
    \par\medskip\noindent%
    \textbf{Bioclipse Excursion~\thebioclipse #1: #2 \vspace{0.1cm} \hrule \vspace{0.1cm}}
    \rmfamily}{\medskip \end{shaded}\end{small}}

    Any solution to disallow interruption is fine, even if the background colour is done differently.
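    One approach (a sketch; it assumes the placeins package and only stops floats from drifting across the excursion's boundaries): fence the environment with \FloatBarrier so pending floats are flushed before it starts and nothing can float into or past it.

    % in the preamble
    \usepackage{placeins}

    \newenvironment{bioclipse}[2][]{%
      \FloatBarrier
      \begin{small}\begin{shaded}\refstepcounter{bioclipse}%
      \par\medskip\noindent%
      \textbf{Bioclipse Excursion~\thebioclipse #1: #2 \vspace{0.1cm} \hrule \vspace{0.1cm}}%
      \rmfamily}%
     {\medskip\end{shaded}\end{small}\FloatBarrier}

    Figures and tables that belong inside the excursion can additionally be forced to stay in place with the float package's [H] placement specifier, so they keep the shaded background.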

    Read the article
