Search Results

Search found 36277 results on 1452 pages for 'vs 10'.


  • MySQL VIEW vs. embedded query, which one is faster?

    - by Vincenzo
    I'm going to optimize a MySQL embedded query by replacing it with a view, but I'm not sure whether it will have any effect:

        SELECT id FROM (SELECT * FROM t);

    I want to convert it to:

        CREATE VIEW v AS SELECT * FROM t;
        SELECT id FROM v;

    I've heard about "indexed views" in SQL Server, but I'm not sure whether MySQL has an equivalent. Any help would be appreciated. Thanks!
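    For what it's worth, a hedged way to see what the optimizer does with each form is EXPLAIN. MySQL has no indexed/materialized views, but a simple view like this is usually merged into the outer query (ALGORITHM=MERGE), whereas the inline FROM-subquery may be materialized as a derived table:

        CREATE ALGORITHM=MERGE VIEW v AS SELECT * FROM t;

        EXPLAIN SELECT id FROM v;                          -- typically the same plan as: SELECT id FROM t
        EXPLAIN SELECT id FROM (SELECT * FROM t) AS sub;   -- may show an extra DERIVED step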

    Read the article

  • C++ Primer vs. Thinking in C++?

    - by Oszkar
    I've worked with C++ in the last few years but never went through a book covering all the basics. I've recently read Effective C++, but I feel it would be very important for me to read a more fundamental book as well. Which one would be more recommended? C++ Primer or Thinking in C++?

    Read the article

  • How to get VS or Xcode warning with something like "x = x++"?

    - by Jim Buck
    In the spirit of the undefined behavior associated with sequence points (as in the question "'x = ++x' — is it really undefined?"), how does one get the compiler to complain about such code? Specifically, I am using Visual Studio 2010 and Xcode 4.3.1, the latter for an OSX app, and neither warned me about this. I even cranked up the warnings on VS2010 to "all", and it happily compiled this. (For the record, VS2010's version added 1 to the variable, where Xcode's version kept the variable unchanged.)
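    For reference, a minimal reproducer plus the flags that do catch it on other compilers (hedged: based on GCC/Clang behavior; I know of no cl.exe switch that flags it, which matches what the question reports):

        // seq.cpp
        // g++ -Wall seq.cpp   -> warning: operation on 'x' may be undefined [-Wsequence-point]
        // clang++ seq.cpp     -> warning: multiple unsequenced modifications to 'x' [-Wunsequenced]
        int main()
        {
            int x = 0;
            x = x++;   // unsequenced side effect + assignment: undefined behavior prior to C++17
            return x;
        }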

    Read the article

  • Java vs. C variant for desktop and tablet development

    - by MirroredFate
    I am going to write a desktop application, but I am conflicted about which language to use. The desktop application will need a good GUI and must be extensible (ideally through modules of some sort). It must be completely cross-platform, including running in various tablet environments. I put this as a requirement while realizing that some modification will no doubt be necessary. The language should also have some form of networking tools available. I have read http://introcs.cs.princeton.edu/java/faq/c2java.html and understand the differences between Java and C very well. I am looking not necessarily at C, but more at a C variant. If it is a complete toss-up, I will use Java, as I know Java much better. However, I do not want to use a language that will be inferior for the task I wish to accomplish. Thank you for all suggestions and explanations. NOTE: If this is not the correct stack for this question, I apologize. It seemed appropriate according to the rules.

    Read the article

  • Calendar add() vs. roll(): when do we use which?

    - by Pentium10
    I know add() adds the specified (signed) amount of time to the given time field, based on the calendar's rules. And roll() adds the specified (signed) single unit of time to the given time field without changing larger fields. I can't think of an everyday use for roll(); I would do everything with add(). Can you help me out with examples of when to use roll() and when to use add()?
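    A minimal sketch of the difference, using the standard java.util.Calendar API (the dates are chosen arbitrarily):

        import java.util.Calendar;
        import java.util.GregorianCalendar;

        public class AddVsRoll {
            public static void main(String[] args) {
                Calendar cal = new GregorianCalendar(2010, Calendar.DECEMBER, 15);
                cal.add(Calendar.MONTH, 1);   // carries into the larger field: 2011-01-15
                System.out.println(cal.getTime());

                cal = new GregorianCalendar(2010, Calendar.DECEMBER, 15);
                cal.roll(Calendar.MONTH, 1);  // larger fields untouched: 2010-01-15
                System.out.println(cal.getTime());
            }
        }

    A spinner-style date picker is one place roll() fits naturally: cycling the month field up from December should not silently change the year.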

    Read the article

  • Announcing ASP.NET MVC 3 (Release Candidate 2)

    - by ScottGu
    Earlier today the ASP.NET team shipped the final release candidate (RC2) for ASP.NET MVC 3. You can download and install it here.

    Almost there…

    Today's RC2 release is the near-final release of ASP.NET MVC 3, and is a true "release candidate" in that we are hoping not to make any more code changes with it. We are publishing it today so that people can do final testing with it, let us know if they find any last-minute "showstoppers", and start updating their apps to use it. We will officially ship the final ASP.NET MVC 3 "RTM" build in January.

    Works with both VS 2010 and VS 2010 SP1 Beta

    Today's ASP.NET MVC 3 RC2 release works with both the shipping version of Visual Studio 2010 / Visual Web Developer 2010 Express, as well as the newly released VS 2010 SP1 Beta. This means that you do not need to install VS 2010 SP1 (or the SP1 beta) in order to use ASP.NET MVC 3. It works just fine with the shipping Visual Studio 2010. I'll do a blog post next week, though, about some of the nice additional feature goodies that come with VS 2010 SP1 (including IIS Express and SQL CE support within VS) which make the dev experience for both ASP.NET Web Forms and ASP.NET MVC even better.

    Bugs and Perf Fixes

    Today's ASP.NET MVC 3 RC2 build contains many bug fixes and performance optimizations. Our latest performance tests indicate that ASP.NET MVC 3 is now faster than ASP.NET MVC 2, and that existing ASP.NET MVC applications will experience a slight performance increase when updated to run using ASP.NET MVC 3.

    Final Tweaks and Fit-N-Finish

    In addition to bug fixes and performance optimizations, today's RC2 build contains a number of last-minute feature tweaks and "fit-n-finish" changes for the new ASP.NET MVC 3 features. The feedback and suggestions we've received during the public previews have been invaluable in guiding these final tweaks, and we really appreciate people's support in sending this feedback our way. Below is a short list of some of the feature changes/tweaks made between last month's ASP.NET MVC 3 RC release and today's ASP.NET MVC 3 RC2 release:

    jQuery updates and addition of jQuery UI

    The default ASP.NET MVC 3 project templates have been updated to include jQuery 1.4.4 and jQuery Validation 1.7. We are also excited to announce today that we are including jQuery UI within our default ASP.NET project templates going forward. jQuery UI provides a powerful set of additional UI widgets and capabilities. It will be added by default to your project's \scripts folder when you create new ASP.NET MVC 3 projects.

    Improved View Scaffolding

    The T4 templates used for scaffolding views with the Add View dialog now generate views that use Html.EditorFor instead of helpers such as Html.TextBoxFor. This change enables you to optionally annotate models with metadata (using data annotation attributes) to better customize the output of your UI at runtime. The Add View scaffolding also supports improved detection and usage of primary key information on models (including support for naming conventions like ID, ProductID, etc.). For example, the Add View dialog box uses this information to ensure that the primary key value is not scaffolded as an editable form field, and that links between views are auto-generated correctly with primary key information. The default Edit and Create templates also now include references to the jQuery scripts needed for client validation. Scaffolded form views now support client-side validation by default (no extra steps required). Client-side validation with ASP.NET MVC 3 is also done using an unobtrusive JavaScript approach – making pages fast and clean.
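    As a hedged illustration of the scaffolding change (the Product properties and annotation here are my own example, not taken from the post), data annotations on a model now flow through the generated Html.EditorFor calls:

        // Model: the [DataType] annotation changes what EditorFor renders
        public class Product
        {
            public int ID { get; set; }               // picked up as the key by naming convention
            public string Name { get; set; }

            [DataType(DataType.MultilineText)]
            public string Description { get; set; }
        }

        @* Scaffolded Razor fragment *@
        <div class="editor-field">
            @Html.EditorFor(model => model.Name)          @* renders an <input type="text"> *@
            @Html.EditorFor(model => model.Description)   @* renders a <textarea> because of the annotation *@
        </div>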
    [ControllerSessionState] –> [SessionState]

    ASP.NET MVC 3 adds support for session-less controllers. With the initial RC you used a [ControllerSessionState] attribute to specify this. We shortened this in RC2 to just be [SessionState]. Note that in addition to turning off session state, you can also set it to be read-only (which is useful for web farm scenarios where you are reading but not updating session state on a particular request).

    [SkipRequestValidation] –> [AllowHtml]

    ASP.NET MVC includes built-in support to protect against HTML and cross-site script injection attacks, and will throw an error by default if someone tries to post HTML content as input. Developers need to explicitly indicate that this is allowed (and that they've hopefully built their app to securely support it) in order to enable it. With ASP.NET MVC 3, we are also now supporting a new attribute that you can apply to properties of models/viewmodels to indicate that HTML input is enabled, which enables much more granular protection in a DRY way. In last month's RC release this attribute was named [SkipRequestValidation]. With RC2 we renamed it to [AllowHtml] to make it more intuitive. Setting the [AllowHtml] attribute on a model/viewmodel will cause ASP.NET MVC 3 to turn off HTML injection protection when model binding just that property.

    Html.Raw() helper method

    The new Razor view engine introduced with ASP.NET MVC 3 automatically HTML encodes output by default. This helps provide an additional level of protection against HTML and script injection attacks. With RC2 we are adding a Html.Raw() helper method that you can use to explicitly indicate that you do not want to HTML encode your output, and instead want to render the content "as-is".
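    A hedged sketch of the three APIs above as they might look in application code (controller, viewmodel, and property names are illustrative, not from the post):

        // Session-less (or read-only) controllers via the renamed attribute
        [SessionState(SessionStateBehavior.ReadOnly)]
        public class StatusController : Controller
        {
            public ActionResult Index() { return View(); }
        }

        // Granular opt-in to HTML input on a single bound property
        public class BlogPostViewModel
        {
            [AllowHtml]    // was [SkipRequestValidation] in the RC
            public string Body { get; set; }
        }

        @* Razor encodes by default; Html.Raw() opts out for content you trust *@
        @Html.Raw(Model.Body)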
    ViewModel/View –> ViewBag

    ASP.NET MVC has (since V1) supported a ViewData[] dictionary within Controllers and Views that enables developers to pass information from a Controller to a View in a late-bound way. This approach can be used instead of, or in combination with, a strongly-typed model class. A common use case is a strongly typed Product model passed to the view in addition to two late-bound variables via the ViewData[] dictionary. With ASP.NET MVC 3 we are introducing a new API that takes advantage of the dynamic type support within .NET 4 to set/retrieve these values. It allows you to use standard "dot" notation to specify any number of additional variables to be passed, and does not require that you create a strongly-typed class to do so. With earlier previews of ASP.NET MVC 3 we exposed this API using a dynamic property called "ViewModel" on the Controller base class, and a dynamic property called "View" within view templates. A lot of people found the fact that there were two different names confusing, and several also said that using the name ViewModel was confusing in this context – since you often create strongly-typed ViewModel classes in ASP.NET MVC, and they do not use this API. With RC2 we are exposing a dynamic property that has the same name – ViewBag – within both Controllers and Views. It is a dynamic collection that allows you to pass additional bits of data from your controller to your view template to help generate a response. For example, a controller could use it to pass a time-stamp message as well as a list of all categories to its view template, and the view template (strongly-typed to expect a Product class as its model) could use the list of categories passed in the dynamic ViewBag collection to generate a drop-down list of friendly category names that helps set the CategoryID property of the Product object – as in the sketch below.
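    A hedged reconstruction of the Controller/View pair the post's screenshots showed (the repository and property names are my guesses; only the Product/categories/time-stamp shape comes from the post):

        // Controller: strongly-typed model plus two late-bound ViewBag values
        public ActionResult Details(int id)
        {
            Product product = _repository.GetProduct(id);      // _repository is illustrative

            ViewBag.Message = "Rendered at " + DateTime.Now;
            ViewBag.Categories = _repository.GetAllCategories();

            return View(product);
        }

        @* View: typed to Product; the extra data arrives via ViewBag *@
        @model Product
        <h2>@Model.Name</h2>
        <p>@ViewBag.Message</p>
        @Html.DropDownListFor(m => m.CategoryID,
            new SelectList((System.Collections.IEnumerable)ViewBag.Categories,
                           "ID", "Name", Model.CategoryID))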
    Output Caching Improvements

    ASP.NET MVC 3's output caching system no longer requires you to specify a VaryByParam property when declaring an [OutputCache] attribute on a Controller action method. MVC 3 now automatically varies the output cached entries when you have explicit parameters on your action method – allowing you to cleanly enable output caching on actions with a plain [OutputCache] attribute. In addition to supporting full page output caching, ASP.NET MVC 3 also supports partial-page caching – which allows you to cache a region of output and re-use it across multiple requests or controllers. The [OutputCache] behavior for partial-page caching was updated with RC2 so that sub-content cached entries are varied based on input parameters as opposed to the URL structure of the top-level request – which makes caching scenarios both easier and more powerful than the behavior in the previous RC.

    @model declaration does not add whitespace

    In earlier previews, the strongly-typed @model declaration at the top of a Razor view added a blank line to the rendered HTML output. This has been fixed so that the declaration does not introduce whitespace.

    Changed "Html.ValidationMessage" Method to Display the First Useful Error Message

    The behavior of the Html.ValidationMessage() helper was updated to show the first useful error message instead of simply displaying the first error. During model binding, the ModelState dictionary can be populated from multiple sources with error messages about a property, including from the model itself (if it implements IValidatableObject), from validation attributes applied to the property, and from exceptions thrown while the property is being accessed. When the Html.ValidationMessage() method displays a validation message, it now skips model-state entries that include an exception, because these are generally not intended for the end user. Instead, the method looks for the first validation message that is not associated with an exception and displays that message. If no such message is found, it defaults to a generic error message that is associated with the first exception.

    RemoteAttribute "Fields" –> "AdditionalFields"

    ASP.NET MVC 3 includes built-in remote validation support with its validation infrastructure. This means that the client-side validation script library used by ASP.NET MVC 3 can automatically call back to controllers you expose on the server to determine whether an input element is indeed valid as the user is editing the form (allowing you to provide real-time validation updates). You accomplish this by decorating a model/viewmodel property with a [Remote] attribute that specifies the controller/action that should be invoked to remotely validate it. With the RC this attribute had a "Fields" property that could be used to specify additional input elements that should be sent from the client to the server to help with the validation logic. To improve the clarity of what this property does, we have renamed it to "AdditionalFields" with today's RC2 release.

    ViewResult.Model and ViewResult.ViewBag Properties

    The ViewResult class now exposes both a "Model" and a "ViewBag" property off of it. This makes it easier to unit test Controllers that return views, and avoids you having to access the Model via the ViewResult.ViewData.Model property.

    Installation Notes

    You can download and install the ASP.NET MVC 3 RC2 build here. It can be installed on top of the previous ASP.NET MVC 3 RC release (it should just replace the bits as part of its setup). The one component that will not be updated by the above setup (if you already have it installed) is the NuGet Package Manager. If you already have NuGet installed, please go to the Visual Studio Extensions Manager (via the Tools –> Extensions menu option) and click on the "Updates" tab. You should see NuGet listed there – please click the "Update" button next to it to have VS update the extension to today's release. If you do not have NuGet installed (and did not install the ASP.NET MVC RC build), then NuGet will be installed as part of your ASP.NET MVC 3 setup, and you do not need to take any additional steps to make it work.

    Summary

    We are really close to the final ASP.NET MVC 3 release, and will deliver the final "RTM" build of it next month. It has been only a little over 7 months since ASP.NET MVC 2 shipped, and I'm pretty amazed by the huge number of new features, improvements, and refinements that the team has been able to add with this release (Razor, Unobtrusive JavaScript, NuGet, Dependency Injection, Output Caching, and a lot, lot more). I'll be doing a number of blog posts over the next few weeks talking about many of them in more depth.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500-frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo": the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, Windows Azure can work out very cost effective.

    The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature-length computer animations, and also with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
        background Midnight_Blue

        static define matte surface { ambient 0.1 diffuse 0.7 }
        define matte_white texture { matte { color white } }
        define matte_black texture { matte { color dark_slate_gray } }
        define position_cylindrical 3
        define lookup_sawtooth 1
        define light_wood <0.6, 0.24, 0.1>
        define median_wood <0.3, 0.12, 0.03>
        define dark_wood <0.05, 0.01, 0.005>

        define wooden texture {
            noise surface {
                ambient 0.2
                diffuse 0.7
                specular white, 0.5
                microfacet Reitz 10
                position_fn position_cylindrical
                position_scale 1
                lookup_fn lookup_sawtooth
                octaves 1
                turbulence 1
                color_map(
                    [0.0, 0.2, light_wood, light_wood]
                    [0.2, 0.3, light_wood, median_wood]
                    [0.3, 0.4, median_wood, light_wood]
                    [0.4, 0.7, light_wood, light_wood]
                    [0.7, 0.8, light_wood, median_wood]
                    [0.8, 0.9, median_wood, light_wood]
                    [0.9, 1.0, light_wood, dark_wood])
            }
        }

        define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
        define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
        define steely_blue texture { shiny { color black } }
        define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

        viewpoint {
            from <4.000, -1.000, 1.000>
            at <0.000, 0.000, 0.000>
            up <0, 1, 0>
            angle 60
            resolution 640, 480
            aspect 1.6
            image_format 0
        }

        light <-10, 30, 20>
        light <-10, 30, -20>

        object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
        object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
        object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin-board toy in the final animation. When the image is rendered, the pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

        object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
        object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
        object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
        object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used: the position of each pin is set based on the color of the pixel at the corresponding position in the image. Using the Windows Azure logo to set the Z coordinates of the pins produced a nice static test image. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor is used. Any part of the object in front of the depth range results in a white pixel; anything behind the depth range is black. Within the depth range the pixels in the image are set to RGB values from 0,0,0 to 255,255,255. The Kinect Explorer sample application was modified to include slider controls that set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons start and stop the image recording.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

        Number of Servers    Cost
        1                    $500
        16                   $8,000
        256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

        Number of Worker Roles    Render Time    Cost
        1                         256 hours      $30.72
        16                        16 hours       $30.72
        256                       1 hour         $30.72

    Using worker roles in Windows Azure results in the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The render farm is a hybrid application with the following components:

    - On-Premise
      - Windows Kinect – used together with the Kinect Explorer to create a stream of depth images.
      - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue.
      - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
    - Windows Azure
      - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    Each worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration, is shown below.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="CloudRay">
          <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
            <Imports>
            </Imports>
            <ConfigurationSettings>
              <Setting name="DataConnectionString" />
            </ConfigurationSettings>
            <LocalResources>
              <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
            </LocalResources>
          </WorkerRole>
        </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with the Copy Always property set. PolyRay takes the scene description file and renders it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's. Each worker role uses the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

        public override void Run()
        {
            // Set up paths to the two executables included in the role package
            string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
            string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

            LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
            string localStorageRootPath = rayStorage.RootPath;

            JobQueue jobQueue = new JobQueue("renderjobs");
            JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
            CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
            CloudRayBlob imageBlob = new CloudRayBlob("images");
            RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

            Frames = 0;

            while (true)
            {
                // Get the render job from the queue
                CloudQueueMessage jobMsg = jobQueue.Get();

                if (jobMsg != null)
                {
                    // Get the file details
                    string sceneFile = jobMsg.AsString;
                    string tgaFile = sceneFile.Replace(".pi", ".tga");
                    string jpgFile = sceneFile.Replace(".pi", ".jpg");

                    string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                    string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                    string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                    // Copy the scene file to local storage
                    sceneBlob.DownloadFile(sceneFilePath);

                    // Run the ray tracer
                    string polyrayArguments =
                        string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                    Process polyRayProcess = new Process();
                    polyRayProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                    polyRayProcess.StartInfo.Arguments = polyrayArguments;
                    polyRayProcess.Start();
                    polyRayProcess.WaitForExit();

                    // Convert the image
                    string dtaArguments =
                        string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                    Process dtaProcess = new Process();
                    dtaProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                    dtaProcess.StartInfo.Arguments = dtaArguments;
                    dtaProcess.Start();
                    dtaProcess.WaitForExit();

                    // Upload the image to blob storage
                    imageBlob.UploadFile(jpgFilePath);

                    // Add a download job
                    downloadQueue.Add(jpgFile);

                    // Delete the render job message
                    jobQueue.Delete(jobMsg);

                    Frames++;
                }
                else
                {
                    Thread.Sleep(1000);
                }

                // Log the worker role activity
                roleLifecycleDataSource.Alive
                    ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
            }
        }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

        public class RoleLifecycle : TableServiceEntity
        {
            public string ServerName { get; set; }
            public string Status { get; set; }
            public DateTime StartTime { get; set; }
            public DateTime EndTime { get; set; }
            public long SecondsRunning { get; set; }
            public DateTime LastActiveTime { get; set; }
            public int Frames { get; set; }
            public string Comment { get; set; }

            public RoleLifecycle()
            {
            }

            public RoleLifecycle(string roleName)
            {
                PartitionKey = roleName;
                RowKey = Utils.GetAscendingRowKey();
                Status = "Started";
                StartTime = DateTime.UtcNow;
                LastActiveTime = StartTime;
                EndTime = StartTime;
                SecondsRunning = 0;
                Frames = 0;
            }
        }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively resources are being used in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment and its performance determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
            osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="16" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

    Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, one hour and 45 minutes of CPU time had been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power would be required.

    Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running are not affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
            osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="256" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    Six minutes after the new configuration was applied, 75 new worker roles had activated and were processing their first frames. Five minutes later the full configuration of 256 worker roles was up and running. The average rate of frame rendering increased from 3 to 12 frames per minute, and over 17 hours of CPU time was utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

    Halfway through the rendering, with 1,000 frames complete, just under three days of CPU time had been utilized in a little over 35 minutes. The animation completed with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles was 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled rendered between 6 and 9 frames each.

    Completed Animation

    I've uploaded the completed animation to YouTube: "Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles". The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances was 93.27%, with over 99% used when all the instances were up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found the render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    - If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third-party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • heartbeat: Bad nodename in /etc/ha.d//haresources [node1]

    - by Richard
    I'm trying to start heartbeat on Ubuntu 10.04 with service heartbeat start, but I get the following errors:

        heartbeat[24829]: 2011/11/22_19:31:07 ERROR: Bad nodename in /etc/ha.d//haresources [node1]
        heartbeat[24829]: 2011/11/22_19:31:07 ERROR: Configuration error, heartbeat not started.

    On one server uname -n produces loadb1; on the second server uname -n produces loadb2. The two servers can ping each other okay with those names. This is /etc/ha.d/ha.cf on both servers:

        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        logfacility local0
        keepalive 2
        deadtime 10
        udpport 694
        bcast eth1
        ucast eth0 my.external.ip
        ucast eth0 my.external.ip
        ucast eth1 10.0.0.5
        ucast eth1 10.0.0.6
        #udp eth0
        node loadb1
        node loadb2
        auto_failback off

    And this is /etc/ha.d/haresources on both servers:

        node1 IPaddr::46.20.121.113 httpd smb dhcpd

    Authkeys is also set up. What am I doing wrong? The part where I'm least clear is the ucast/bcast lines.
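    A hedged reading of the error itself: heartbeat requires the first token of each haresources line to be a node name declared in ha.cf (i.e. a uname -n value), and "node1" is neither loadb1 nor loadb2. Under that assumption the fix would be:

        # /etc/ha.d/haresources -- the first field must match a "node" entry / uname -n
        loadb1 IPaddr::46.20.121.113 httpd smb dhcpd

    (using loadb1 if that machine should normally hold the resources).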

    Read the article

  • setting up bind to work with nsupdate (SERVFAIL)

    - by funny_ha_ha
    I'm trying to update my DNS server dynamically using nsupdate.

    Prerequisites

    I'm using Debian 6 on my DNS server and Debian 4 on my client. I created a public/private key pair using:

        dnssec-keygen -C -a HMAC-MD5 -b 512 -n USER sub.example.com.

    I then edited my named.conf.local to contain my public key and the new zone I wish to update. It now looks like this (note: I also tried allow-update { any; }; without success):

        zone "example.com" {
            type master;
            file "/etc/bind/primary/example.com";
            notify yes;
            allow-update { none; };
            allow-query { any; };
        };

        zone "sub.example.com" {
            type master;
            file "/etc/bind/primary/sub.example.com";
            notify yes;
            allow-update { key "sub.example.com."; };
            allow-query { any; };
        };

        key sub.example.com. {
            algorithm HMAC-MD5;
            secret "xxxx xxxx";
        };

    Next, I copied the private key file (key.private) to another server I want to update the zone from. I also created a text file (update) on this server which contains the update information (note: I tried toying around with this stuff too; no success):

        server example.com
        zone sub.example.com
        update add sub.example.com. 86400 A 10.10.10.1
        show
        send

    Now I'm trying to update the zone using:

        nsupdate -k key.private -v update

    The Problem

    Said command gives me the following output:

        Outgoing update query:
        ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
        ;; flags: ; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
        ;; ZONE SECTION:
        ;sub.example.com. IN SOA

        ;; UPDATE SECTION:
        sub.example.com. 86400 IN A 10.10.10.1

        update failed: SERVFAIL

    named debug level 3 gives me the following information when I issue the nsupdate command on the remote server (note: I obfuscated the client IP):

        06-Aug-2012 14:51:33.977 client X.X.X.X#33182: new TCP connection
        06-Aug-2012 14:51:33.977 client X.X.X.X#33182: replace
        06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: createclients
        06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: recycle
        06-Aug-2012 14:51:33.978 client @0x2ada475f1120: accept
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: read
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: TCP request
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: request has valid signature
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: recursion not available
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: update
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: send
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: sendto
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: senddone
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: next
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: endrequest
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: read
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: next
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: request failed: end of file
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: endrequest
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: closetcp

    But it doesn't do anything. The zone isn't updated, nor does my nsupdate change anything. I'm not sure if the file /etc/bind/primary/sub.example.com should exist prior to the first update or not. I tried it without the file, with an empty file, and with a pre-configured zone file. Without success. The sparse information I found on the net pointed me towards file and folder permissions regarding the bind working directory, so I changed the permissions of both /etc/bind and /var/cache/bind (which is the home dir of my "bind" user). I'm not 100% sure if the permissions are correct, but they look good to me:

        ls -lah /var/cache/bind/
        total 224K
        drwxrwxr-x  2 bind bind 4.0K Aug  6 03:13 .
        drwxr-xr-x 12 root root 4.0K Jul 21 11:27 ..
        -rw-r--r--  1 bind bind 211K Aug  6 03:21 named.run

        ls -lah /etc/bind/
        total 72K
        drwxr-sr-x  3 bind bind 4.0K Aug  6 14:41 .
        drwxr-xr-x 87 root root 4.0K Jul 30 01:24 ..
        -rw-------  1 bind bind  125 Aug  6 02:54 key.public
        -rw-------  1 bind bind  156 Aug  6 02:54 key.private
        -rw-r--r--  1 bind bind 2.5K Aug  6 03:07 bind.keys
        -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.0
        -rw-r--r--  1 bind bind  271 Aug  6 03:07 db.127
        -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.255
        -rw-r--r--  1 bind bind  353 Aug  6 03:07 db.empty
        -rw-r--r--  1 bind bind  270 Aug  6 03:07 db.local
        -rw-r--r--  1 bind bind 3.0K Aug  6 03:07 db.root
        -rw-r--r--  1 bind bind  493 Aug  6 03:32 named.conf
        -rw-r--r--  1 bind bind  490 Aug  6 03:07 named.conf.default-zones
        -rw-r--r--  1 bind bind 1.2K Aug  6 14:18 named.conf.local
        -rw-r--r--  1 bind bind  666 Jul 29 22:51 named.conf.options
        drwxr-sr-x  2 bind bind 4.0K Aug  6 03:57 primary/
        -rw-r-----  1 root bind   77 Mar 19 02:57 rndc.key
        -rw-r--r--  1 bind bind 1.3K Aug  6 03:07 zones.rfc1918

        ls -lah /etc/bind/primary/
        total 20K
        drwxr-sr-x 2 bind bind 4.0K Aug  6 03:57 .
        drwxr-sr-x 3 bind bind 4.0K Aug  6 14:41 ..
        -rw-r--r-- 1 bind bind  356 Jul 30 00:45 example.com
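    One hedged observation on the last point: a dynamically updated master zone still has to load successfully before named will accept updates for it, so /etc/bind/primary/sub.example.com does need to exist with at least an SOA and NS record (an empty or missing file makes the zone fail to load, and updates then fail with SERVFAIL). A minimal skeleton, with names assumed from the question:

        $TTL 86400
        @   IN  SOA ns1.example.com. hostmaster.example.com. (
                2012080601 ; serial
                3600       ; refresh
                900        ; retry
                604800     ; expire
                86400 )    ; minimum
            IN  NS  ns1.example.com.

    Note also that named writes a journal file (sub.example.com.jnl) next to the zone file on dynamic updates, so the directory must stay writable by the bind user; the listing above suggests it is.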

    Read the article

  • /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID

    - by user1495181
    I'm using Ubuntu with Net-SNMP. SNMP works, but in syslog I see a lot of errors about snmpd.conf.

    snmpd.conf:

        rwcommunity community 10.0.0.1
        rwcommunity community 10.0.0.2
        agentAddress udp:10.0.0.1:161
        view systemonly included .1.3.6.1.2.1.1
        view systemonly included .1.3.6.1.2.1.25.1
        # Default access to basic system info
        rocommunity public default -V systemonly
        rouser authOnlyUser
        sysLocation Sitting on the Dock of the Bay
        sysContact Me <[email protected]>
        sysServices 72
        proc mountd
        proc ntalkd 4
        proc sendmail 10 1
        disk / 10000
        disk /var 5%
        includeAllDisks 10%
        load 12 10 5
        trapsink localhost public
        iquerySecName internalUser
        rouser internalUser
        defaultMonitors yes
        linkUpDownNotifications yes
        master agentx

    Errors:

        Sep 12 16:35:00 test snmpd[5485]: payload OID: prNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: prNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: prErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: prErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: prErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: memErrorName
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: memErrorName
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: memSwapErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: memSwapErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: memSwapError
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: extNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: extNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: extOutput
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: extOutput
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: extResult
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: dskPath
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: dskPath
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: dskErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: dskErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: dskErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: laNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: laNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: laErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: laErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: laErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: fileName
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: fileName
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: fileErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: fileErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: fileErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: snmperrErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: snmperrErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: snmperrErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: Turning on AgentX master support.
        Sep 12 16:35:00 test snmpd[5485]: net-snmp: 33 error(s) in config file(s)
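    One hedged interpretation of these errors: the proc/disk/load and defaultMonitors directives make snmpd build DISMAN-EVENT-MIB monitor entries from symbolic names like prNames and dskErrorFlag, and "unknown payload OID" usually means the agent could not load the UCD-SNMP-MIB/DISMAN MIB files needed to translate those names – a common symptom of a source build into /usr/local. A way to check, with paths assumed from a default source install:

        # Can the net-snmp tools resolve the name the config uses?
        snmptranslate -IR -On prNames      # should print .1.3.6.1.4.1.2021.2.1.2

        # If not, point the tools (and snmpd's environment) at the MIB directory:
        export MIBDIRS=/usr/local/share/snmp/mibs
        export MIBS=+ALL

    If the monitors aren't needed, removing "defaultMonitors yes" (and the related iquerySecName/internalUser lines) would silence the errors instead.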

    Read the article

  • Why isn't ICMP being NATed with iptables?

    - by Scott Forsyth - MVP
    I'm using iptables on an Ubuntu server to route a public IP to a private IP. I want to NAT all traffic, including 80, 443 and ICMP. However, it appears that ICMP isn't being NATed. I have a steady ping going to the public IP and it never stops, even with NAT pointing to a bogus IP. Here are the rules that I'm using:

        iptables -t nat -I PREROUTING -d 206.72.119.76 -j DNAT --to-destination 10.240.5.5
        iptables -t nat -I POSTROUTING -s 10.240.5.5 -j SNAT --to-source 206.72.119.76

    I tried rules for ICMP specifically, but no such luck:

        iptables -t nat -I PREROUTING -d 206.72.119.76 -p icmp --icmp-type echo-request -j DNAT --to-destination 10.240.5.5

    Any ideas?
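    One hedged explanation for the "never stops" symptom: rules in the nat table are only consulted for the first packet of a connection, and an already-running ping has a conntrack entry, so it keeps its old (pre-rule) NAT mapping. Under that assumption, restarting the ping after the rule change, or flushing the tracked flow, should show the DNAT taking effect (conntrack-tools assumed installed):

        # delete the tracked ICMP flow so the new DNAT rule is evaluated
        conntrack -D -p icmp -d 206.72.119.76

        # confirm the rule is matching (packet counters should increase)
        iptables -t nat -vnL PREROUTING

    Also make sure forwarding is enabled: sysctl -w net.ipv4.ip_forward=1.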

    Read the article

  • Plesk update fails

    - by Caballero
    I have a dedicated server running CentOS 5.3. I'm trying to update Plesk 9.5.2 to version 10.0.0, but the update fails. I run it with yum:

    yum update psa* --skip-broken -t

    and get these errors at the end:

    Skip-broken could not solve problems
    Error: plesk-mail-pc-driver conflicts with psa-qmail
    Error: plesk-mail-pc-driver conflicts with psa-qmail-rblsmtpd
    Error: plesk-mail-qc-driver conflicts with plesk-mail-pc-driver
    Error: plesk-core conflicts with plesk-billing
    Error: Missing Dependency: sw-engine = 2.0 is needed by package plesk-billing-6.0.4-20090625.11.noarch (installed)
    Error: Missing Dependency: pp-sitebuilder >= 10.3.0 is needed by package psa-10.3.0-cos5.build1012110629.18.x86_64 (plesk)
    Error: plesk-mail-pc-driver conflicts with plesk-mail-qc-driver

    Is there a way to solve this?
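    The conflicts (qmail vs. postfix mail drivers, plesk-billing pinned to sw-engine 2.0) are the kind of cross-package swaps that plain yum can't orchestrate; Plesk major-version upgrades are normally driven by Parallels' autoinstaller, which replaces those components in the right order. A hedged sketch; the path below is the stock location in Plesk 9/10 builds but may differ on yours:

      /usr/local/psa/admin/sbin/autoinstaller   # interactive menu: pick the 10.x release and let it resolve the components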

    Read the article

  • HP D2D 4312 Bacula configuration

    - by krisdigitx
    I have configured 5 libraries on the HP D2D system, but discovery on the Bacula server shows only the last library and not all of them. Why?

    [root@server bacula]# iscsiadm --mode discovery --type sendtargets --portal 10.66.59.114
    10.66.59.114:3260,1 iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dca5e.library12.drive1
    10.66.59.114:3260,1 iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dcaf2.library12.robotics

    I can query it fine using...

    [root@server bacula]# mtx -f /dev/sg2 inquiry
    Product Type: Tape Drive
    Vendor ID: 'HP '
    Product ID: 'Ultrium 5-SCSI '
    Revision: 'ED51'
    Attached Changer API: No

    [root@bray bacula]# mtx -f /dev/sg3 inquiry
    Product Type: Medium Changer
    Vendor ID: 'HP '
    Product ID: 'MSL G3 Series '
    Revision: 'EL41'
    Attached Changer API: No

    [root@server bacula]# mtx -f /dev/sg3 status
    Storage Changer /dev/sg3:1 Drives, 97 Slots ( 1 Import/Export )
    Data Transfer Element 0:Empty
    Storage Element 1:Full :VolumeTag=50507F82
    Storage Element 2:Full :VolumeTag=50507F83
    Storage Element 3:Full :VolumeTag=50507F84
    Storage Element 4:Full :VolumeTag=50507F85
    Storage Element 5:Full :VolumeTag=50507F86
    Storage Element 6:Full :VolumeTag=50507F87
    Storage Element 7:Full :VolumeTag=50507F88

    Does anyone have any good documentation for implementing Bacula with an HP D2D tape drive for server backups, and how to allocate libraries?
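    Each D2D virtual library is presented as its own pair of iSCSI targets (drive plus robotics), so a sendtargets discovery listing only library12 usually means the other four libraries aren't mapped to this initiator on the D2D side; checking the host/initiator mapping in the D2D web GUI would be the first step. On the Bacula side, each library then needs its own Autochanger/Device pair in bacula-sd.conf. A minimal sketch, where resource names, /dev paths and the mtx-changer location are assumptions (verify yours with lsscsi):

      # bacula-sd.conf fragment (hypothetical names/paths)
      Autochanger {
        Name = "D2D-Library12"
        Device = "D2D-Library12-Drive1"
        Changer Device = /dev/sg3
        Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
      }
      Device {
        Name = "D2D-Library12-Drive1"
        Media Type = LTO-5
        Archive Device = /dev/nst0
        Autochanger = yes
        AutomaticMount = yes
      }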

    Read the article

  • Finder.app preview pane and QuickLook stretch some of my photos

    - by mcandre
    The Finder column-view preview pane and QuickLook stretch many of my photos, but when I open the same photos in Preview.app they look normal. (Screenshot omitted.) For example, download this image (reaver.jpg) and view it with Finder's column view, then with QuickLook. It renders correctly in every other application, so something is going wrong in how QuickLook/Finder get the image dimensions. This problem started in either Mac OS X 10.8.1 or 10.8.2. Specs: Finder 10.8, QuickLook v4.0 (555.0), Mac OS X 10.8.2, MacBook Pro (2009). Also posted in Apple Discussions.
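    A hedged way to narrow this down from Terminal is to drive QuickLook directly with qlmanage, which bypasses Finder: if qlmanage renders the file correctly, the problem is more likely a stale thumbnail/preview cache than the generator itself (the "-r cache" form assumes your qlmanage build supports it):

      qlmanage -p reaver.jpg   # render the file with QuickLook outside Finder
      qlmanage -r              # restart the QuickLook server
      qlmanage -r cache        # clear the thumbnail cache, then retest in Finder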

    Read the article

  • Port forwarding to IIS in Linux

    - by Simon
    Hi, I am trying to set up port forwarding on a Linux box to an IIS webserver on my internal network. The web server runs on Windows Server 2003. My Linux box has:

    eth0 - internet connection
    eth1 - internal subnet (10.10.10.x)
    eth2 - 2nd internal subnet (192.168.0.x), DHCP interface

    My webserver is on the eth2 interface (192.168.0.6). I am forwarding port 80 to no avail, although the same set of rules works when pointed at a different webserver. The web application is reachable from the internal network but not for external users.

    iptables -t nat -A PREROUTING -p tcp -i eth0 -d $PUBLIC_IP --dport 80 -j DNAT --to 192.168.0.6:80
    iptables -A FORWARD -p tcp -i eth0 -o eth2 -d 192.168.0.6 --dport 80 -m state --state NEW -j ACCEPT
    iptables -A FORWARD -t filter -o eth0 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -t filter -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Any ideas?
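    A hedged guess at the failure mode: the POSTROUTING rule only masquerades traffic leaving eth0, so the DNAT'd packets reach 192.168.0.6 with the external client's source address, and the IIS box sends its replies to its own default gateway; if that gateway is not this Linux box, the replies never come back through the NAT. A sketch to confirm it (forwarding must also be enabled):

      sysctl -w net.ipv4.ip_forward=1
      # masquerade the forwarded web traffic on eth2 so replies return via this box
      iptables -t nat -A POSTROUTING -o eth2 -d 192.168.0.6 -p tcp --dport 80 -j MASQUERADE

    The cleaner alternative is to set the Windows 2003 server's default gateway to this box's eth2 address.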

    Read the article

  • Single MSI File - Many RemoteApps

    - by Mikeon
    How can I create a single .msi installer file for many RemoteApps in Remote Desktop Services? Suppose I have 10 applications exposed via RDS. To make life easier I created MSI installer packages so users can "install" those apps. Currently I have 10 different .msi files, which forces users to run 10 installs. Is it possible to combine all/some apps into a single .msi file? (I don't control the user machines, so installing via GPO or other magic is out of the question.)
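    As far as I know, RemoteApp Manager only generates one package per published app, so combining them means wrapping the generated packages; a repackaging tool such as WiX could merge them into one true MSI if a single file is a hard requirement. A hypothetical wrapper script, assuming the generated MSIs tolerate silent installation (file names are placeholders):

      :: install-remoteapps.cmd
      @echo off
      for %%m in ("App1.msi" "App2.msi" "App10.msi") do msiexec /i %%m /qn /norestart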

    Read the article

  • IPcop Multiple WAN Subnets

    - by obsidian
    We have an IPcop firewall and have had no issues with it. We had a block of 10 IP addresses from our colocation provider and have been able to port-forward from those to internal servers as needed. We recently needed additional IPs and the colocation provider issued an additional block of 10. The problem: the 10 new IP addresses are in a different subnet with a different gateway. The question: how do I add the new gateway into IPcop, and how do I make outbound traffic in response to inbound traffic on a new IP go back out through the new gateway? I attempted to add a static route via the console using the following command:

    route add -net x.x.x.x gw x.x.x.x netmask 255.255.255.192

    I also added the new IPs as aliases and set up port forwarding as I've done with the existing IP block. However, when I attempt to access a web server from an external workstation, it just times out. Thanks in advance for your assistance.
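    A static route only covers traffic sent *to* the new subnet; replies sourced from the new addresses still leave via the old default gateway, which the provider's router will typically drop. The standard fix is source-based policy routing with iproute2. A hedged sketch with placeholder addresses (203.0.113.64/28 for the new block, 203.0.113.65 for its gateway); note that IPcop won't persist this by itself, so it would need to go in a startup script:

      echo "200 wan2" >> /etc/iproute2/rt_tables
      ip rule add from 203.0.113.64/28 table wan2        # replies sourced from the new block...
      ip route add default via 203.0.113.65 table wan2   # ...go out via the new gateway
      ip route flush cache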

    Read the article

  • Poor upload/download speed on 2 x ADSL lines into a Cisco 2621XM

    - by 2020mobile
    Hi, sorry, I've never been on this site before, so I apologise if this isn't the right section or even forum. Users on site are complaining of very slow internet connectivity. I have checked with our ISP, who say the line is testing at 8Mb. We have 2 x BT lines carrying our ISP's broadband. Both lines go into a Cisco 2600-series router with a PIX firewall behind it. Connectivity still works, but it has become really slow and downloads fail. Config is below:

    version 12.3
    no service pad
    service tcp-keepalives-in
    service tcp-keepalives-out
    service timestamps debug datetime msec
    service timestamps log datetime msec
    service password-encryption
    !
    hostname ROUTER-ADSL-INTERNET
    !
    logging buffered 16384 informational
    enable secret xxx
    enable password xxx
    !
    username xxx
    username xxx
    clock summer-time UK recurring last Sun Mar 1:00 last Sun Oct 1:00
    aaa new-model
    !
    aaa authentication login default local
    aaa authorization exec default local
    aaa session-id common
    ip subnet-zero
    no ip source-route
    !
    ip audit notify log
    ip audit po max-events 100
    no ip bootp server
    ip name-server 213.208.106.212
    no mpls ldp logging neighbor-changes
    no ftp-server write-enable
    !
    no voice hpi capture buffer
    no voice hpi capture destination
    !
    interface ATM0/0
     description 01270 111111
     no ip address
     no atm ilmi-keepalive
     pvc 0/38
      encapsulation aal5mux ppp dialer
      dialer pool-member 1
     !
     dsl operating-mode auto
    !
    interface FastEthernet0/0
     ip address 82.133.32.9 255.255.255.248
     shutdown
     speed 100
     full-duplex
     no cdp enable
    !
    interface ATM0/1
     description 01270 222222
     no ip address
     no atm ilmi-keepalive
     pvc 0/38
      encapsulation aal5mux ppp dialer
      dialer pool-member 1
     !
     dsl operating-mode auto
    !
    interface FastEthernet0/1
     ip address 217.146.115.49 255.255.255.240
     duplex auto
     speed auto
     no cdp enable
    !
    interface Dialer0
     ip address 217.146.115.250 255.255.255.248
     encapsulation ppp
     dialer pool 1
     dialer-group 1
     ppp authentication chap callin
     ppp chap hostname [email protected]
     ppp chap password 7 xxxxx
     ppp multilink
    !
    ip classless
    ip route 0.0.0.0 0.0.0.0 Dialer0
    !
    no ip http server
    no ip http secure-server
    !
    no logging trap
    access-list 10 permit 217.146.115.50
    access-list 10 permit 82.133.32.10
    access-list 10 deny any
    access-list 22 permit 217.146.115.50
    access-list 22 permit 217.206.239.86
    access-list 22 permit 82.133.32.10
    access-list 22 deny any
    dialer-list 1 protocol ip permit
    no cdp run
    !
    snmp-server community xxxxxx RO 10
    snmp-server enable traps tty
    radius-server authorization permit missing Service-Type
    !
    line con 0
     exec-timeout 5 0
     password 7 xxxxxx
    line aux 0
     no exec
    line vty 0 4
     access-class 22 in
     exec-timeout 5 0
     password 7 xxxxxx
     transport input telnet ssh
     transport output none
    line vty 5 15
     password 7 xxxxxx
     transport input telnet ssh
    !
    ntp clock-period 17180095
    ntp server 130.88.200.98
    !
    end

    Now my knowledge is very limited, but the ISP have said that while the lines are bonded, each now needs a separate login: when they changed their L2TP router, it started enforcing one login per line (we were given two logins when the lines were first configured). So my question is: what changes do I need to make to the config to get this working? It was fine before their change, and I do have the second login:

    01270 111111 - [email protected]
    01270 222222 - [email protected]

    Apologies for the long email and thanks for taking the time to read it. Any more info I can provide, please let me know. Thanks,
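    Given the ISP's statement, the likely change is to stop pooling both ATM interfaces into one dialer and give each line its own dialer with its own CHAP credentials. A hedged IOS sketch (untested, and the second password is a placeholder); note that two separate PPP sessions give per-flow load sharing over two default routes rather than true multilink bonding:

      interface ATM0/1
       pvc 0/38
        no dialer pool-member 1
        dialer pool-member 2
      !
      interface Dialer1
       ip address negotiated
       encapsulation ppp
       dialer pool 2
       dialer-group 1
       ppp authentication chap callin
       ppp chap hostname [email protected]
       ppp chap password <second-line-password>
      !
      ip route 0.0.0.0 0.0.0.0 Dialer1   ! alongside the existing Dialer0 route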

    Read the article

  • IPFW not locking people out

    - by Cole
    I've had some brute-forcing of my SSH connection recently, so I installed fail2ban, hoping to prevent that. I set it up and started testing by entering wrong passwords from my own computer (I have physical access to the server if I need to unblock myself). However, it never stops me from entering passwords. I see in /var/log/fail2ban.log that fail2ban kicked in and banned me, and there's an ipfw entry for my IP, but I'm not locked out. I've changed the configuration around and then tried using the ipfw command myself, but nothing seems to lock me out. I've tried the following blocks:

    65300 deny tcp from 10.0.1.30 to any in
    65400 deny ip from 10.0.1.30 to any
    65500 deny tcp from 10.0.1.30 to any

    My firewall setup has an "allow ip from any to any" rule after these, though; maybe that's the problem? I'm using Mac OS 10.6 (stock ipfw; it doesn't seem to have a --version flag). Thanks in advance.
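    ipfw evaluates rules in ascending number order and stops at the first match, so deny rules at 65300+ sit after almost everything else; the usual culprit is a low-numbered check-state or "allow tcp from any to any established" rule (present in the OS X default set) that passes the already-established SSH flow first. A hedged check:

      sudo ipfw -a list                                        # the hit counters show which rule is matching
      sudo ipfw add 100 deny tcp from 10.0.1.30 to any 22 in   # place the ban ahead of the permissive rules

    If fail2ban's ipfw action supports setting a rule number (check action.d/ipfw.conf), pointing it at a low number should make the bans effective; otherwise edit the action so bans are added with a low number.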

    Read the article

< Previous Page | 367 368 369 370 371 372 373 374 375 376 377 378  | Next Page >