Search Results

Search found 14633 results on 586 pages for 'package explorer'.

Page 194/586

  • configuration required for HIVE to be installed on a node

    - by ????? ????????
    I went through the process of manually installing Ambari (not through SSH, because I couldn't get keyless login to work) and everything installed OK, except for HIVE and GANGLIA. I got this message:
      stderr: None
      stdout:
      warning: Unrecognised escape sequence '\;' in file /var/lib/ambari-agent/puppet/modules/hdp-hive/manifests/hive/service_check.pp at line 32
      warning: Dynamic lookup of $configuration is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
      notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
      notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2Smoke.sh]/ensure: defined content as '{md5}7f1d24221266a2330ec55ba620c015a9'
      notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2.sql]/ensure: defined content as '{md5}0c429dc9ae0867b5af74ef85b5530d84'
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/File[/tmp/hcatSmoke.sh]/ensure: defined content as '{md5}bae7742f7083db968cb6b2bd208874cb'
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:11:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException [Error 10001]: Table not found hcatsmokeida8c07401_date102513
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:15 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException o
    When I go to the alerts and health checks I'm getting this:
      Hive Metastore status check CRIT for 42 minutes
      CRITICAL: Error accessing hive-metaserver status [13/06/25 03:44:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect.
    What am I doing wrong? I have already tried to do ambari-server reset on the database, without results.
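    The repeated "hive.metastore.local no longer has any effect" warning suggests the smoke test cannot reach a running metastore. If that is the cause, making sure the Hive Metastore service is actually started and that the clients point at it is usually the first thing to check; a minimal hive-site.xml fragment would look something like the following (host and port are placeholders for your metastore node):
      <property>
        <name>hive.metastore.uris</name>
        <value>thrift://your-metastore-host:9083</value>
      </property>
    This is only a sketch of the likely direction, not a confirmed fix for this particular Ambari failure.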

    Read the article

  • Upgrading ATI Catalyst drivers fails

    - by Iain
    I'm trying to upgrade my ATI HD 5770 graphics card drivers, as they keep crashing, but the upgrade keeps failing half way through. I have tried both the full driver package (10-5_vista64_win7_64_dd_ccc_enu.exe) and the drivers-only package (10-5_vista64_win7_64_dd.exe). I'm on Windows 7 Home Premium 64-bit. How can I upgrade?

    Read the article

  • Puppet and launchd services?

    - by Joel Westberg
    We have a production environment configured with Puppet, and want to be able to set up a similar environment on our development machines: a mix of Red Hats, Ubuntus and OS X. As might be expected, OS X is the odd man out here, and sadly, I'm having a lot of trouble with getting this to work. My first attempt was using MacPorts, using the following declaration:
      package { 'rabbitmq-server':
        ensure   => installed,
        provider => macports,
      }
    but this, sadly, generates the following error:
      Error: /Stage[main]/Rabbitmq/Package[rabbitmq-server]: Could not evaluate: Execution of '/opt/local/bin/port -q installed rabbitmq-server' returned 1: usage: cut -b list [-n] [file ...]
             cut -c list [file ...]
             cut -f list [-s] [-d delim] [file ...]
          while executing
      "exec dscl -q . -read /Users/$env(SUDO_USER) NFSHomeDirectory | cut -d ' ' -f 2"
          (procedure "mportinit" line 95)
          invoked from within
      "mportinit ui_options global_options global_variations"
    Next up, I figured I'd give Homebrew a try. There is no package provider available by default, but puppet-homebrew seemed promising. Here, I got much farther, and actually managed to get the install to work.
      package { 'rabbitmq':
        ensure   => installed,
        provider => brew,
      }
      file { "plist":
        path   => "/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist",
        source => "/usr/local/opt/rabbitmq/homebrew.mxcl.rabbitmq.plist",
        ensure => present,
        owner  => root,
        group  => wheel,
        mode   => 0644,
      }
      service { "homebrew.mxcl.rabbitmq":
        enable   => true,
        ensure   => running,
        provider => "launchd",
        require  => [ File["/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist"] ],
      }
    Here, I don't get any error, but RabbitMQ doesn't start either (as it does if I do a manual load with launchctl):
      [... snip ...]
      Debug: Executing '/bin/launchctl list'
      Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist'
      Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /var/db/launchd.db/com.apple.launchd/overrides.plist'
      Debug: /Schedule[weekly]: Skipping device resources because running on a host
      Debug: /Schedule[puppet]: Skipping device resources because running on a host
      Debug: Finishing transaction 2248294820
      Debug: Storing state
      Debug: Stored state in 0.01 seconds
      Finished catalog run in 25.90 seconds
    What am I doing wrong?

    Read the article

  • Utility for extracting MIME attachments

    - by tripleee
    I am looking for a command-line tool for Unix (ideally, available in a Debian/Ubuntu package) for extracting all MIME parts from a multipart email message (or the body from a singlepart with an interesting content-type, for that matter). I have been using the mimeexplode tool which ships with the Perl MIME::Tools package, but it's not really production quality (the script is included as an example only, and has issues with what it regards as "evil" character sets) and I could certainly roll my own script based on that, but if this particular wheel has already been invented, perhaps I shouldn't.
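    If no packaged tool turns out to fit, the Python standard library's email module is enough to roll a small, dependable extractor; below is a minimal sketch (the output directory handling and file naming are illustrative, not from any existing tool):
      #!/usr/bin/env python3
      """Explode a MIME message into its parts, one file per part."""
      import email
      import email.policy
      import os
      import sys

      def explode(msg_path, out_dir):
          os.makedirs(out_dir, exist_ok=True)
          with open(msg_path, "rb") as fh:
              msg = email.message_from_binary_file(fh, policy=email.policy.default)
          for index, part in enumerate(msg.walk()):
              if part.get_content_maintype() == "multipart":
                  continue  # container parts carry no payload of their own
              name = part.get_filename() or "part-%03d.bin" % index
              with open(os.path.join(out_dir, name), "wb") as out:
                  out.write(part.get_payload(decode=True) or b"")

      if __name__ == "__main__":
          explode(sys.argv[1], sys.argv[2])
    (The munpack utility from the Debian mpack package may also cover the attachment-extraction part, if memory serves.)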

    Read the article

  • Installing software on Linux

    - by Dimen Shaw
    I'm trying to install the GMP package on Red Hat 4, x86_64. The package can only be installed using make, which is not available and should be installed with apt-get/yum, but I don't have either one of them. I tried installing them using rpm, but they each require lots of dependencies themselves, which, although finite in number, seems like a VERY tedious job to do. Any help on how I should go about solving this?

    Read the article

  • Can I mix yum and apt-get safely?

    - by leonbloy
    I understand that yum and apt-get operate on top of rpm, so that the data about installed packages in a Linux system is the responsibility of rpm, and that neither yum nor apt-get keeps its own data about installed packages. Is this true? Is it safe to install some package using yum and install another (perhaps related) package using apt-get (or vice versa)?

    Read the article

  • Lockdown users on Windows Server 2012

    - by el.severo
    I set up an Active Directory on a server machine with Windows Server 2012 and I'd like to create some users with limitations like Windows SteadyState does in Windows XP (locally). I have already seen the Windows SteadyState Handbook (written for Windows Server 2008), but I'd like to know if anyone has tried this before. The limitations are the following:
    1. Prevent locked or roaming user profiles that cannot be found on the computer from logging on
    2. Do not cache copies of locked or roaming user profiles for users who have previously logged on to this computer
    3. Do not allow Windows to compute and store passwords using LAN Manager Hash values
    4. Do not store usernames or passwords used to log on to the Windows Live ID or the domain
    5. Prevent users from creating folders and files on drive C:\
    6. Lock the profile to prevent the user from making permanent changes
    7. Remove the Control Panel, Printer and Network Settings from the Classic Start menu
    8. Remove the Favorites icon
    9. Remove the My Network Places icon
    10. Remove the Frequently Used Program list
    11. Remove the Shared Documents folder from My Computer
    12. Remove the Control Panel icon
    13. Remove the Set Program Access and Defaults icon
    14. Remove the Network Connection (Connect To) icon
    15. Remove the Printers and Faxes icon
    16. Remove the Run icon
    17. Prevent access to Windows Explorer features: Folder Options, Customize Toolbar, and the Notification Area
    18. Prevent access to the taskbar
    19. Prevent access to the command prompt
    20. Prevent access to the registry editor
    21. Prevent access to the Task Manager
    22. Prevent access to Microsoft Management Console utilities
    23. Prevent users from adding or removing printers
    24. Prevent users from locking the computer
    25. Prevent password changes (also requires the Control Panel icon to be removed)
    26. Disable System Tools and other management programs
    27. Prevent users from saving files to the desktop
    28. Hide the A: drive
    29. Hide the B: drive
    30. Hide the C: drive
    31. Prevent changes to Internet Explorer registry settings
    32. Empty the Temporary Internet Files folder when Internet Explorer is closed
    33. Remove Internet Options
    34. Remove the General tab in Internet Options
    35. Remove the Security tab in Internet Options
    36. Remove the Privacy tab in Internet Options
    37. Remove the Content tab in Internet Options
    38. Remove the Connections tab in Internet Options
    39. Remove the Programs tab in Internet Options
    40. Remove the Advanced tab in Internet Options
    41. Set a home page (Internet Explorer)
    42. Restrict the ability to change the desktop image
    43. Restrict the ability to change the wallpaper
    44. Restrict USB flash drives
    Any suggestions for this?
    UPDATE: As @Dan suggested, I'd like to specify that this would be applied to an educational scenario where students log in to a shared computer and I want to add some restrictions to them.

    Read the article

  • Tmux installation problems

    - by RayQuang
    Hi, I am trying to install the terminal multiplexer tmux on my Debian Lenny server so that I can have multiple terminals through SSH. However, I have had a lot of difficulty installing it, both from the Debian package and by compiling it. When I try the package it says something about the wrong version of libc6, and when I compile it I get the following error:
      server.o: In function `server_start':
      server.c:(.text+0x273): undefined reference to `event_reinit'
      collect2: ld returned 1 exit status
      make: *** [tmux] Error 1
    Help would be very much appreciated,
    RayQuang

    Read the article

  • Emacs xkcd installation needs JSON 1.4, not found in ELPA

    - by CodeKingPlusPlus
    I tried to install the xkcd Emacs package (where you can view an xkcd comic in Emacs) and got the following error:
      Need JSON 1.4, but only 1.2 is available
    I tried to get JSON 1.4, but I cannot find it in the ELPA package manager. It also says that I have JSON 1.3 built in and installed. A lot of things seem to not work correctly. How can I get xkcd to work inside of Emacs? I use Ubuntu 12.04 and Emacs 24.3.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.
    The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.
    Ray Tracing
    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.
    Pin-Board Toys
    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.
    PolyRay
    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
      background Midnight_Blue

      static define matte surface { ambient 0.1 diffuse 0.7 }
      define matte_white texture { matte { color white } }
      define matte_black texture { matte { color dark_slate_gray } }
      define position_cylindrical 3
      define lookup_sawtooth 1
      define light_wood <0.6, 0.24, 0.1>
      define median_wood <0.3, 0.12, 0.03>
      define dark_wood <0.05, 0.01, 0.005>

      define wooden texture {
        noise surface {
          ambient 0.2
          diffuse 0.7
          specular white, 0.5
          microfacet Reitz 10
          position_fn position_cylindrical
          position_scale 1
          lookup_fn lookup_sawtooth
          octaves 1
          turbulence 1
          color_map(
            [0.0, 0.2, light_wood, light_wood]
            [0.2, 0.3, light_wood, median_wood]
            [0.3, 0.4, median_wood, light_wood]
            [0.4, 0.7, light_wood, light_wood]
            [0.7, 0.8, light_wood, median_wood]
            [0.8, 0.9, median_wood, light_wood]
            [0.9, 1.0, light_wood, dark_wood])
        }
      }
      define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
      define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
      define steely_blue texture { shiny { color black } }
      define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

      viewpoint {
        from <4.000, -1.000, 1.000>
        at <0.000, 0.000, 0.000>
        up <0, 1, 0>
        angle 60
        resolution 640, 480
        aspect 1.6
        image_format 0
      }

      light <-10, 30, 20>
      light <-10, 30, -20>

      object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }

      object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
      object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }
    After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.
    Modeling the Pin Board
    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.
      object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
      object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
      object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
      object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
      object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
      object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
      object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
      object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
      object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }
    In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used: the position of each pin is set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.
    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.
    Windows Kinect
    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.
    Creating a Depth Field Animation
    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255.
    A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.
    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.
    Rendering a Test Frame
    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.
    The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.
    Windows Azure Worker Roles
    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.
      Number of Servers    Cost
      1                    $500
      16                   $8,000
      256                  $128,000
    As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.
    The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.
      Number of Worker Roles    Render Time    Cost
      1                         256 hours      $30.72
      16                        16 hours       $30.72
      256                       1 hour         $30.72
    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.
    Creating a Render Farm in Windows Azure
    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
    · On-Premise
      o Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
      o Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
      o Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      o Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
    · Windows Azure
      o Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration highlighted, is shown below.
      <?xml version="1.0" encoding="utf-8"?>
      <ServiceDefinition name="CloudRay" >
        <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
          <Imports>
          </Imports>
          <ConfigurationSettings>
            <Setting name="DataConnectionString" />
          </ConfigurationSettings>
          <LocalResources>
            <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
          </LocalResources>
        </WorkerRole>
      </ServiceDefinition>
    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's.
    Each worker role will use the following process to render the animation frames.
    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.
    The code for this is shown below.
      public override void Run()
      {
          // Set environment variables
          string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
          string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

          LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
          string localStorageRootPath = rayStorage.RootPath;

          JobQueue jobQueue = new JobQueue("renderjobs");
          JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
          CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
          CloudRayBlob imageBlob = new CloudRayBlob("images");
          RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

          Frames = 0;

          while (true)
          {
              // Get the render job from the queue
              CloudQueueMessage jobMsg = jobQueue.Get();

              if (jobMsg != null)
              {
                  // Get the file details
                  string sceneFile = jobMsg.AsString;
                  string tgaFile = sceneFile.Replace(".pi", ".tga");
                  string jpgFile = sceneFile.Replace(".pi", ".jpg");

                  string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                  string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                  string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                  // Copy the scene file to local storage
                  sceneBlob.DownloadFile(sceneFilePath);

                  // Run the ray tracer.
                  string polyrayArguments =
                      string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                  Process polyRayProcess = new Process();
                  polyRayProcess.StartInfo.FileName =
                      Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                  polyRayProcess.StartInfo.Arguments = polyrayArguments;
                  polyRayProcess.Start();
                  polyRayProcess.WaitForExit();

                  // Convert the image
                  string dtaArguments =
                      string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                  Process dtaProcess = new Process();
                  dtaProcess.StartInfo.FileName =
                      Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                  dtaProcess.StartInfo.Arguments = dtaArguments;
                  dtaProcess.Start();
                  dtaProcess.WaitForExit();

                  // Upload the image to blob storage
                  imageBlob.UploadFile(jpgFilePath);

                  // Add a download job.
                  downloadQueue.Add(jpgFile);

                  // Delete the render job message
                  jobQueue.Delete(jobMsg);

                  Frames++;
              }
              else
              {
                  Thread.Sleep(1000);
              }

              // Log the worker role activity.
              roleLifecycleDataSource.Alive("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
          }
      }
    Monitoring Worker Role Instance Lifecycle
    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.
      public class RoleLifecycle : TableServiceEntity
      {
          public string ServerName { get; set; }
          public string Status { get; set; }
          public DateTime StartTime { get; set; }
          public DateTime EndTime { get; set; }
          public long SecondsRunning { get; set; }
          public DateTime LastActiveTime { get; set; }
          public int Frames { get; set; }
          public string Comment { get; set; }

          public RoleLifecycle()
          {
          }

          public RoleLifecycle(string roleName)
          {
              PartitionKey = roleName;
              RowKey = Utils.GetAscendingRowKey();
              Status = "Started";
              StartTime = DateTime.UtcNow;
              LastActiveTime = StartTime;
              EndTime = StartTime;
              SecondsRunning = 0;
              Frames = 0;
          }
      }
    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of use of resources in the render farm.
    Rendering the Animation
    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
      <?xml version="1.0" encoding="utf-8"?>
      <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
        <Role name="CloudRayWorkerRole">
          <Instances count="16" />
          <ConfigurationSettings>
            <Setting name="DataConnectionString"
              value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
          </ConfigurationSettings>
        </Role>
      </ServiceConfiguration>
    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.
    Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.
    With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.
    Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.
      <?xml version="1.0" encoding="utf-8"?>
      <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
        <Role name="CloudRayWorkerRole">
          <Instances count="256" />
          <ConfigurationSettings>
            <Setting name="DataConnectionString"
              value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
          </ConfigurationSettings>
        </Role>
      </ServiceConfiguration>
    Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.
    We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.
    The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.
    Completed Animation
    I've uploaded the completed animation to YouTube; a low resolution preview is shown below ("Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles"). The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc
    Effective Use of Resources
    According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.
    The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.
    Grid Computing Scenarios
    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
    · Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    · The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    · Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    · No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)
    Tips for using Windows Azure for Grid Computing Scenarios
    I found a render farm on Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
    · Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    · Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    · Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    · Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    · If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    · Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    · Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • How to ThinApp a Database Application & SQL Server 2005

    - by dandan78
    To make it easier for potential clients to try out our software without having to go to the trouble of installing SQL Server, a test database, setting up an ODBC connection and so on, we want to package the whole thing with VMware ThinApp. Ideally, there'd be a single executable and all the user would have to do is run it. I've been reading up on ThinApp on the VMware website but am still unsure how to go about doing this. Is the correct approach to put SQL Server and the application in separate packages, reverting to a clean version of the virtual environment prior to each packaging? That seems somewhat problematic because the app needs the DBMS to run, as well as an ODBC connection. Like I said, we'd prefer to go with a single package (package = executable?), but two executables are also acceptable if an alternative can't be found. I realize that the same result can probably be achieved by way of a custom SQL Server Express installation, but ThinApp looks like a better and more elegant solution.

    Read the article

  • How do I translate this Matlab bsxfun call to R?

    - by claytontstanley
    I would also (fingers crossed) like the solution to work with R sparse matrices from the Matrix package.
      >> A = [1,2,3,4,5]
      A =
           1     2     3     4     5
      >> B = [1;2;3;4;5]
      B =
           1
           2
           3
           4
           5
      >> bsxfun(@times, A, B)
      ans =
           1     2     3     4     5
           2     4     6     8    10
           3     6     9    12    15
           4     8    12    16    20
           5    10    15    20    25
    EDIT: I would like to do a matrix multiplication of these sparse vectors, and return a sparse array:
      > class(NRowSums)
      [1] "dsparseVector"
      attr(,"package")
      [1] "Matrix"
      > class(NColSums)
      [1] "dsparseVector"
      attr(,"package")
      [1] "Matrix"
      > NRowSums * NColSums
    (I think) without using a non-sparse variable to temporarily store data.
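    For what it's worth, bsxfun(@times, A, B) with a row and a column vector is just an outer product, so in base R outer(B, A) (or B %*% t(A)) reproduces the Matlab result. For the Matrix objects, something along the following lines should keep the result sparse; this is an untested sketch, with the sparse vectors treated as one-column sparse matrices:
      library(Matrix)
      A <- 1:5
      B <- 1:5
      outer(B, A)                      # dense equivalent of bsxfun(@times, A, B)

      a <- Matrix(A, ncol = 1, sparse = TRUE)
      b <- Matrix(B, ncol = 1, sparse = TRUE)
      tcrossprod(b, a)                 # b %*% t(a), returned as a sparse matrix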

    Read the article

  • Conflicting PACKAGE_NAME and other macros when using autotools.

    - by baol
    When using autotools (with a config.h file) for both a library and a piece of software built on that library, the compiler complains about a redefinition of some macros (PACKAGE_NAME, PACKAGE_TARNAME and so on). How can I prevent this? The config.h file is needed in the library to propagate its settings to the software that uses it. Right now I have a wrapper header library_config.h that includes the original config.h and provides defaults when the user is not using autotools, but even undefining the macros in that header I still get the redefinition warning from gcc.
      #ifndef LIB_CONFIG_H
      #define LIB_CONFIG_H

      #ifdef HAVE_CONFIG_H
      # include "config.h"
      # undef PACKAGE
      # undef PACKAGE_BUGREPORT
      # undef PACKAGE_NAME
      # undef PACKAGE_STRING
      # undef PACKAGE_TARNAME
      # undef PACKAGE_VERSION
      # undef VERSION
      #else
      # if defined (WIN32)
      #  define HAVE_UNORDERED_MAP 1
      #  define TR1_MIXED_NAMESPACE 1
      # elif defined (__GXX_EXPERIMENTAL_CXX0X__)
      #  define HAVE_UNORDERED_MAP 1
      # else
      #  define HAVE_TR1_UNORDERED_MAP 1
      # endif
      #endif

      #endif
    I believe the best option would be to have a library without those macros: how can I avoid the definition of PACKAGE, PACKAGE_NAME and so on in the library when using autotools?
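    One way to keep the generic PACKAGE_* symbols out of the library's installed headers altogether is the AX_PREFIX_CONFIG_H macro from the autoconf archive, which generates a second header whose defines carry a per-package prefix; roughly (argument order quoted from memory, so check the autoconf-archive documentation):
      AC_CONFIG_HEADERS([config.h])
      AX_PREFIX_CONFIG_H([mylib-config.h], [MYLIB], [config.h])
    The library then installs and includes mylib-config.h, so client code sees MYLIB_PACKAGE_NAME instead of PACKAGE_NAME, and the plain config.h (which is not meant to be installed) never leaves the build tree.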

    Read the article

  • SSIS DTSX File Repair Tool

    - by Eric Ness
    I'm working with an SSIS 2005 file that crashes Visual Studio 2005 on my workstation. This happens when I open the data flow diagram and Visual Studio attempts to validate the package. I can open it successfully on another computer though. The package itself is fairly simple and only has two control flow tasks and maybe ten tasks in the data flow. I'm wondering if there is a tool that goes through the XML in the dtsx file and repairs any issues or if this is even necessary. The dtsx file is about 171 kB and it seems like there's a lot in it considering what a simple package it is.

    Read the article

  • How to use MSBuild to create ClickOnce files that match those created by Visual Studio

    - by EasyTimer
    I am trying to use MSBuild in a TFS build file to create ClickOnce files for my application. According to MSDN (http://msdn.microsoft.com/en-us/library/ms165431.aspx) it seems that the project file's ClickOnce settings should be used unless you override them with property arguments on the command line. However, even though I have specified (in VS) that some prerequisites should be bootstrapped in with the setup, they don't seem to get installed if I use the MSBuild command line to create the package, even though they do seem to get installed if I use Visual Studio to create the ClickOnce package. Please could someone advise me how to get my prerequisites installed via ClickOnce when the ClickOnce package is built using the MSBuild command line? For information, in Visual Studio, project properties, "Publish" tab, "Prerequisites" button, I have ticked "Create setup program to install prerequisite components", I have ticked my prerequisites and I have specified "Download prerequisites from the component vendor's web site".
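    For comparison, command-line ClickOnce builds normally go through the Publish target rather than a plain build, with the bootstrapper enabled; something along these lines is the usual starting point (property names quoted from memory, so verify them against the Publish-related properties stored in your .csproj):
      msbuild MyApp.csproj /target:Publish /p:Configuration=Release /p:BootstrapperEnabled=true
    If the prerequisites still do not appear, it is worth checking that the bootstrapper packages selected in the Prerequisites dialog are installed on the TFS build agent as well as on the development machine.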

    Read the article

  • Dynamic function docstring

    - by Tom Aldcroft
    I'd like to write a Python function that has a dynamically created docstring. In essence, for a function func() I want func.__doc__ to be a descriptor that calls a custom __get__ function to create the docstring on request. Then help(func) should return the dynamically generated docstring. The context here is to write a Python package wrapping a large number of command line tools in an existing analysis package. Each tool becomes a similarly named module function (created via a function factory and inserted into the module namespace), with the function documentation and interface arguments dynamically generated via the analysis package.
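    If generating the text once, at wrapper-creation time, is acceptable (rather than regenerating it on every help() call), a plain function factory already gives each wrapper its own docstring, since help() reads func.__doc__ directly. A minimal sketch, with the analysis-package lookup stubbed out as a callable you would supply:
      def make_tool(tool_name, describe_tool):
          """Build a wrapper whose docstring is generated for tool_name.

          describe_tool is assumed to be a callable returning the tool's
          documentation and argument description as a string.
          """
          def wrapper(*args, **kwargs):
              # invoke the underlying command line tool here
              raise NotImplementedError

          wrapper.__name__ = tool_name
          wrapper.__doc__ = describe_tool(tool_name)   # help(wrapper) shows this
          return wrapper

      run_foo = make_tool("run_foo", lambda name: "Wrapper for the %s tool." % name)
      help(run_foo)
    Truly lazy generation through a descriptor only works when __doc__ is looked up on a class, so the usual workaround for that is a callable class whose metaclass defines __doc__ as a property, though pydoc's handling of that pattern varies between versions.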

    Read the article

  • Python: win32console import problem

    - by David
    I want to run wexpect (the Windows port of pexpect) on my Windows 7 64-bit machine. I am getting the following error:
      C:\Program Files (x86)\wexpect\build\libwexpect.py
      Traceback (most recent call last):
        File "C:\Program Files (x86)\wexpect\build\lib\wexpect.py", line 97, in
          raise ImportError(str(e) + "This package was intended for Windows like operating systems.")
      ImportError: No module named win32console
      This package requires the win32 python packages. This package was intended for Windows like operating systems.
    In the code it is failing on the following line:
      from win32console import *
    I am using Python 2.6.4. I cannot figure out how to install win32console.
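    win32console is part of the pywin32 distribution, so installing pywin32 for the same Python version and bitness as the interpreter running wexpect should make the import succeed. As a rough check (the pip package name is the current one; releases contemporary with Python 2.6 shipped as a standalone installer instead):
      pip install pywin32
      python -c "import win32console"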

    Read the article

  • Problems with WiX and Visual Studio web deployment project

    - by Valeriu
    Hello, I want to create an .MSI package from a web deployment project in Visual Studio 2008. We want to use continuous integration and we need the .MSI package built in the nightly builds. Until now we used a standard Visual Studio Web Setup project, but this is not compatible with MSBuild, so we decided to use WiX. The problem is that I have not found any good tutorial/documentation about this. Is there a way to build a WiX installer package from a web deployment project? If yes, how? Also, I tried to use heat.exe to create the XML for the WiX project .wxs file, but it seems that heat.exe doesn't recognize the web deployment project format. Thank you for your responses. Regards, V.

    Read the article

  • Error while running jar file

    - by Manu
    Hi, I have created a jar file which includes my .class files, manifest file and dependency jar files, like this:
      jar cfmv custadvicejar.jar mymanifest.txt Gchreportsautomation Bean Utils jxl.jar ojdbc14.jar
    where custadvicejar.jar is my jar file name and mymanifest.txt contains:
      Main-Class: Gchreportsautomation.GCH_Home_Loan_Data_Cust_Advice_DAO
    "Gchreportsautomation" is the package containing "GCH_Home_Loan_Data_Cust_Advice_DAO.class" (this class is the starting point of my application): Gchreportsautomation/GCH_Home_Loan_Data_Cust_Advice_DAO.class. "Bean" is the package containing "GCH_Home_Loan_Data_Cust_Advice_Bean.class": Bean/GCH_Home_Loan_Data_Cust_Advice_Bean.class. "Utils" is the package containing "Utils.class": Utils/Utils.class. jxl.jar and ojdbc14.jar are jar files required for my application, which I kept in the parent directory of the .class files, like this:
      D:\Excalcreation
        /Gchreportsautomation/GCH_Home_Loan_Data_Cust_Advice_DAO.class
        /Bean/GCH_Home_Loan_Data_Cust_Advice_Bean.class
        /Utils/Utils.class
        /jxl.jar
        /ojdbc.jar
    While running the application I got an error like:
      Caused by: java.lang.ClassNotFoundException: jxl.format.CellFormat
    I know this is because of a classpath error. How do I rectify it? If I double-click my jar file, the application has to run. Please provide a solution.
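    Dependency jars are not picked up automatically just because they were packed into, or listed alongside, your jar; the manifest needs a Class-Path entry pointing at them, with paths relative to the directory containing custadvicejar.jar. Assuming jxl.jar and ojdbc14.jar are shipped next to your jar at runtime, a mymanifest.txt along these lines should work:
      Main-Class: Gchreportsautomation.GCH_Home_Loan_Data_Cust_Advice_DAO
      Class-Path: jxl.jar ojdbc14.jar
    (The manifest file must end with a newline, and the dependency jars should sit beside custadvicejar.jar rather than inside it, since the default class loader does not read nested jars.)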

    Read the article

  • Playing around with Eclipse features - Project files are now hidden?

    - by Daddy Warbox
    I don't even remember how, but somehow I managed to make all of my project's source files hidden in Eclipse's Package and Project Explorer panels. Go figure. 'Show Filtered Children (Alt+click)' temporarily reveals the files, and only in Package Explorer can I double-click to reopen them from this view. They go back into hiding after I select another item, though. Plus, now I'm getting other annoyances, such as all of the folded non-hidden trees expanding together when I click on any item, and the entire file folder tree of my project now being shown in these panels (including my .svn subversion folders... which shouldn't be any of Eclipse's business, presently). Long story short, my Package/Project Explorers just blew up on me, and I want to know how to fix this. Thanks in advance. P.S. What's a good guide I can use to learn my way around this silly contraption, anyway?

    Read the article

  • serving js libraries: better performance from google code or using asset packager?

    - by brahn
    I am working on a Rails application that uses big JavaScript libraries (e.g. jQuery UI), and I also have a handful of my own JavaScript files. I'm using asset packager to package up my own JavaScript. I'm considering two ways of serving these files:
    1. Link to the jQuery libraries from Google Code as described at http://code.google.com/apis/ajaxlibs/documentation/#jquery, and separately package up and serve my JavaScript files using asset packager.
    2. Host the jQuery libraries myself, and package them together with my own JavaScript as one big merged JavaScript file.
    My hosting solution is of course not going to beat out Google's content delivery network, so at first I assumed that end users would experience faster page loads via option #1. However, it also occurred to me that if I serve them myself, users would only need to issue one request to get the merged JavaScript (as opposed to one for my merged JavaScript and another for the libraries served by Google). Which approach will provide the best end-user experience (presumably in the form of faster load times)?

    Read the article

  • Splitting string on probable English word boundaries

    - by Sean
    I recently used Adobe Acrobat Pro's OCR feature to process a Japanese kanji dictionary. The overall quality of the output is generally quite a bit better than I'd hoped, but word boundaries in the English portions of the text have often been lost. For example, here's one line from my file:
      softening;weakening(ofthemarket)8 CHANGE [transform] oneselfINTO,takethe form of; disguise oneself
    I could go around and insert the missing word boundaries everywhere, but this would be adding to what is already a substantial task. I'm hoping that there might exist software which can analyze text like this, where some of the words run together, and split the text on probable word boundaries. Is there such a package? I'm using Emacs, so it'd be extra-sweet if the package in question were already an Emacs package or could be readily integrated into Emacs, so that I could simply put my cursor on a line like the above and repeatedly invoke some command that splits the line on word boundaries in decreasing order of probable correctness.
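    I am not aware of an Emacs package that does exactly this, but the underlying technique (dictionary-driven dynamic programming over candidate split points) is small enough to prototype and then wire into Emacs via a shell command. A rough sketch in Python, using whatever word list is at hand (/usr/share/dict/words here is just an assumption):
      def load_words(path="/usr/share/dict/words"):
          with open(path) as fh:
              return {w.strip().lower() for w in fh if w.strip()}

      def segment(text, words, max_len=20):
          """Split run-together lowercase text on probable word boundaries.

          best[i] holds (score, pieces) for the prefix text[:i]; longer
          dictionary words score higher, so 'takethe' becomes 'take the'.
          """
          n = len(text)
          best = [(0, [])] + [None] * n
          for i in range(1, n + 1):
              for j in range(max(0, i - max_len), i):
                  piece = text[j:i]
                  bonus = len(piece) ** 2 if piece in words else -1
                  cand = (best[j][0] + bonus, best[j][1] + [piece])
                  if best[i] is None or cand[0] > best[i][0]:
                      best[i] = cand
          return best[n][1]

      words = load_words()
      print(" ".join(segment("taketheformof", words)))   # -> take the form of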

    Read the article

  • How to schedule daily backup in SQL Server 2008 Web Edition

    - by Xenon
    In SQL Server Management Studio I created a maintenance plan, but it won't work. The error is:
      Message
      Executed as user: LITESPELL-19C34\Administrator. Microsoft (R) SQL Server Execute Package Utility Version 10.0.1600.22 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. The SQL Server Execute Package Utility requires Integration Services to be installed by one of these editions of SQL Server 2008: Standard, Enterprise, Developer, or Evaluation. To install Integration Services, run SQL Server Setup and select Integration Services. The package execution failed. The step failed.
    But on the Microsoft page http://www.microsoft.com/sqlserver/2008/en/us/web.aspx, in the "Automate tasks and policies" section, it is written that backups can be scheduled in this edition. How?
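    Maintenance plans are SSIS packages under the hood, which is what the error is complaining about: Web Edition has no Integration Services. It does, if memory serves, still ship SQL Server Agent, so a scheduled Agent job with a plain T-SQL step avoids SSIS entirely; a minimal step body would look like this (database name and backup path are placeholders):
      BACKUP DATABASE [MyDatabase]
      TO DISK = N'D:\Backups\MyDatabase_full.bak'
      WITH INIT, NAME = N'MyDatabase full backup';
    Failing that, the same script can be run by sqlcmd from a Windows Scheduled Task.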

    Read the article

  • How to override environment variables when running configure?

    - by Sam
    In any major package for Linux, running ./configure --help will output at the end:
      Some influential environment variables:
        CC          C compiler command
        CFLAGS      C compiler flags
        LDFLAGS     linker flags, e.g. -L<lib dir> if you have libraries in a
                    nonstandard directory <lib dir>
        CPPFLAGS    C/C++ preprocessor flags, e.g. -I<include dir> if you have
                    headers in a nonstandard directory <include dir>
        CPP         C preprocessor
      Use these variables to override the choices made by `configure' or to help
      it to find libraries and programs with nonstandard names/locations.
    How do I use these variables to include a directory? I tried running
      ./configure --CFLAGS="-I/home/package/custom/"
    and
      ./configure CFLAGS="-I/home/package/custom/"
    but these do not work. Any suggestions?
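    For the record, the variables are passed as plain VAR=value arguments (or exported in the environment) with no leading dashes, and -I belongs with the preprocessor flags; so something like the following is the intended usage (the include/lib subdirectory names are assumptions about your layout):
      ./configure CPPFLAGS="-I/home/package/custom/include" \
                  LDFLAGS="-L/home/package/custom/lib"
    ./configure --CFLAGS=... is not a recognised option; the second form is valid, but -I is conventionally passed through CPPFLAGS so that both the compile checks and the preprocessor-only checks see it.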

    Read the article

  • An update question!

    - by rantravee
    Hi, here's the problem I'm facing: trying to do my own application update, I download the update APK from a web server onto the SD card, and then I launch the Package Installer with the downloaded file's path (while the old application is running). After the Package Installer starts and the user agrees to install the application, I get the following message: "MyApp could not be installed on this phone", and in the logcat the following message is printed: "No package identifier when getting value for resource number 0x00000000". I could not find a reason for this behaviour, so if there is something that I'm missing, please point it out to me!

    Read the article
