Search Results

Search found 14815 results on 593 pages for 'cost management'.


  • How do I use BCP or Sql Server Management Studio to get BLOB data out of Sql Server?

    - by Eric
    I'm sorry if this question has been asked already, but I couldn't find it anywhere. I have a table that stores files as BLOBs. The column that holds the file is an image datatype. I would like to be able to extract the binary data out of the column and turn it into an actual file. I would ideally like to be able to do this with BCP or Management Studio if possible. I have tried BCP, but for some reason when I try to pull out an Office document, Word thinks it's corrupt. Here's what I've tried so far (obviously the values have been changed to protect the innocent :): bcp "select document_binary_data from database where id = 12345" queryout "c:\filename.doc" -n -S server -U username -P password This isn't working, though. Any thoughts?
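    One likely culprit is that bcp's native mode (-n) writes length-prefix bytes in front of the binary value, so the output file is not the raw document. If BCP keeps misbehaving, the column can also be dumped with a short script. The following is a minimal sketch only, not the asker's setup: the driver, table name, and paths are assumptions, and it requires the pyodbc package.

    import pyodbc

    # Minimal sketch: dump an image/BLOB column to a file with pyodbc.
    # The driver name, table, column, and id value are placeholders, not the
    # original poster's environment.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=server;DATABASE=database;UID=username;PWD=password"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT document_binary_data FROM documents WHERE id = ?", 12345)
    row = cursor.fetchone()
    if row is not None and row[0] is not None:
        # The image column comes back as bytes; write it out unchanged.
        with open(r"c:\filename.doc", "wb") as f:
            f.write(row[0])
    conn.close()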

    Read the article

  • Ninject caching an injected DataContext? Lifecycle Management?

    - by awrigley
    I had a series of very bizarre errors being thrown in my repositories. Row not found or changed, 1 of 2 updates failed... Nothing made sense. It was as if my DataContext instance was being cached... Nothing made sense and I was considering a career move. I then noticed that the DataContext instance was passed in using dependency injection, using Ninject (this is the first time I have used DI...). I ripped out the Dependency Injection, and all went back to normal. Instantly. So dependency injection was the issue, but I still don't know why. I am speculating that Ninject was caching the injected DataContext. Is this correct? Is there a way of configuring the lifecycle management of injected parameters? If so, what would be the best configuration to use to have the DataContext behave like a normal DataContext, ie, no caching across requests?

    Read the article

  • Are there cheat sheets for misc source code management tools?

    - by Alex_coder
    I'm looking for something similar to Pacman Rosetta, which explains how to achieve similar tasks using different source code management tools. Sometimes docs for a certain SCM contain examples comparing that particular SCM to a couple of others, but I'm looking for a central place that contains the maximum available information. Example: one uses bzr and knows that 'bzr pull' syncs a local repo by fetching new content from a remote repo. One wants to know how to do that with git. Once one finds the git command, one knows the keyword and can proceed straight to the git docs, knowing what to read about instead of wasting time searching the git docs. I understand this might not be the only way people learn a new SCM tool. If you use other approaches, please do tell.

    Read the article

  • Why is textbox.focus throwing the lostfocus event?

    - by cost
    I've seen a few similar questions on SO but nothing that seems to actually address the issue. Here's a simplified version of the function. Private Sub Check_Quantity(sender As System.Object, e As System.Windows.RoutedEventArgs) _ Handles textbox_quantity.LostFocus Dim worked As Boolean = Integer.TryParse(textbox_quantity.Text, quantity) If Not worked Then MsgBox("Enter a valid number for the quantity") textbox_quantity.Focus() textbox_quantity.SelectAll() quantity = 0 End If End Sub It's important to note that this is WPF. What I want to do is very simple. When someone finishes with the textbox the program checks that what they entered is a number. If it is, it sticks it in an integer. If not, it tells them to fix it and keeps the focus on the textbox. The issue comes down to this: the function runs in an infinite loop. This same function works fine in WinForms, but not in WPF. On some other questions people have said that the messagebox appearing causes focus to be lost, but in testing this isn't true. It still loops regardless of whether the messagebox is called or not. The problem is the call to textbox_quantity.Focus(). Without that it works fine. Regardless of whether it's there or not though, focus is not set to the textbox, though textbox_quantity.Focus() still returns a value of true. Any thoughts on what's going on and how I could fix it?

    Read the article

  • What's the best way to do Ruby gemspec creation and dependency management?

    - by John Feminella
    Over the last few months, there have been a number of rapid developments in the state of Ruby dependency management and gem creation, to the point where I've been having trouble keeping up with everything. If I'm writing a new gem, what's the best tool for me to use to create my gemspec? Are there disadvantages of using this tool over competitors? I've used Bundler a few times on applications and for me it's been a lifesaver. Is the consensus that it is suitable for use with production apps? Are there quirks or idiosyncracies people should be aware of? Links to resources you've used and have found helpful would also be much appreciated.

    Read the article

  • SQLce create table "index field" explanation (SQL Management Studio) ?

    - by bretddog
    Hi, I'm new to databases, and am now creating an SQLCE database in Management Studio. There is a value in brackets: [UQ_Users_0000000000000028], which seems a bit random to me, so I would just like to ask if someone could explain this field. Is it just simply required to be a unique field? Is there any reason why I would want to change it to something other than what SSMS scripts? cheers! CREATE TABLE [Users] ( [UserID] int NOT NULL ..... CREATE UNIQUE INDEX [UQ__Users__0000000000000028] ON [Users] ([UserID] ASC);

    Read the article

  • UPK Pre-Built Content Update

    - by Karen Rihs
    UPK pre-built content development efforts are always underway and growing. Over the last few months, the following new, upgraded, and revised modules became available:
    NEW CONTENT RELEASES
    E-Business Suite 12.1: Install Base; Process Manufacturing, Process Quality; Fundamentals for EBS
    Fusion 11g Release 1: Receivables; Assets; Purchasing; Distributed Order Orchestration; Payables; Functional Setup Manager; Project Portfolio Management; Self Service Procurement
    JDE E1 9.0: Accounts Payable 9.0 with 9.1 Tools; Fundamentals 9.0 with 9.1 Tools; General Ledger 9.0 with 9.1 Tools; Accounts Receivable 9.0 with 9.1 Tools; Procurement and Subcontract Management 9.0 with 9.1 Tools
    Oracle Utilities Customer Care and Billing 2.3.1: Administrative Setup; User Tasks
    Primavera: Primavera Contract Management 14; Primavera P6 Enterprise Project Portfolio Management 8.2
    UPK CONTENT UPGRADES
    Agile CNM 1.2: Customer Needs Management
    E-Business Suite 12.1: Project Foundation
    JDE E1 9.1: Fixed Assets Accounting; General Ledger; Fundamentals; Inventory Management; Sales Order Management
    PeopleSoft 9.1: Reporting Tools for PeopleTools 8.5.2
    UPK CONTENT REVISIONS
    Oracle Utilities for Meter Data Management 2.0.1: Administrative Setup; User Tasks; VEE and Usage Rules; Working with Measurement Data
    PeopleSoft 9.0 and 9.1: Enterprise Learning Management; Reporting Tools for HCM (previously Reporting Tools for HRMS)
    PeopleSoft 9.1: Expenses; General Ledger; Inventory; Contracts; Grants; Strategic Sourcing
    For a list of modules currently available for each product line, visit the UPK Resource Library on Oracle.com.
    For more information on how your organization can take advantage of UPK pre-built content, see our previous blog, The Value of UPK Pre-Built Content.
    - Karen Rihs, UPK Outbound Product Management

    Read the article

  • Solving the NP-complete problem in XKCD

    - by Adam Tuttle
    The problem/comic in question: http://xkcd.com/287/ I'm not sure this is the best way to do it, but here's what I've come up with so far. I'm using CFML, but it should be readable by anyone. <cffunction name="testCombo" returntype="boolean"> <cfargument name="currentCombo" type="string" required="true" /> <cfargument name="currentTotal" type="numeric" required="true" /> <cfargument name="apps" type="array" required="true" /> <cfset var a = 0 /> <cfset var found = false /> <cfloop from="1" to="#arrayLen(arguments.apps)#" index="a"> <cfset arguments.currentCombo = listAppend(arguments.currentCombo, arguments.apps[a].name) /> <cfset arguments.currentTotal = arguments.currentTotal + arguments.apps[a].cost /> <cfif arguments.currentTotal eq 15.05> <!--- print current combo ---> <cfoutput><strong>#arguments.currentCombo# = 15.05</strong></cfoutput><br /> <cfreturn true /> <cfelseif arguments.currentTotal gt 15.05> <cfoutput>#arguments.currentCombo# > 15.05 (aborting)</cfoutput><br /> <cfreturn false /> <cfelse> <!--- less than 15.05 ---> <cfoutput>#arguments.currentCombo# < 15.05 (traversing)</cfoutput><br /> <cfset found = testCombo(arguments.currentCombo, arguments.currentTotal, arguments.apps) /> </cfif> </cfloop> </cffunction> <cfset mf = {name="Mixed Fruit", cost=2.15} /> <cfset ff = {name="French Fries", cost=2.75} /> <cfset ss = {name="side salad", cost=3.35} /> <cfset hw = {name="hot wings", cost=3.55} /> <cfset ms = {name="moz sticks", cost=4.20} /> <cfset sp = {name="sampler plate", cost=5.80} /> <cfset apps = [ mf, ff, ss, hw, ms, sp ] /> <cfloop from="1" to="6" index="b"> <cfoutput>#testCombo(apps[b].name, apps[b].cost, apps)#</cfoutput> </cfloop> The above code tells me that the only combination that adds up to $15.05 is 7 orders of Mixed Fruit, and it takes 232 executions of my testCombo function to complete. Is there a better algorithm to come to the correct solution? Did I come to the correct solution?
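    For comparison, here is a minimal sketch of the same brute-force idea in Python rather than CFML (the item names and prices are taken from the comic). Working in integer cents sidesteps floating-point comparison issues, and enumerating combinations with repetition up to a bounded count also surfaces the second, often-quoted combination (mixed fruit, two hot wings, and a sampler plate).

    from itertools import combinations_with_replacement

    # Prices in integer cents to avoid floating-point comparison problems.
    menu = {
        "mixed fruit": 215,
        "french fries": 275,
        "side salad": 335,
        "hot wings": 355,
        "mozzarella sticks": 420,
        "sampler plate": 580,
    }
    target = 1505  # $15.05

    # The cheapest item is $2.15, so no order can contain more than 7 items.
    max_items = target // min(menu.values())

    for count in range(1, max_items + 1):
        for combo in combinations_with_replacement(menu, count):
            if sum(menu[item] for item in combo) == target:
                print(combo)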

    Read the article

  • Django: Summing values

    - by Anry
    I have two models - Project and Cost. class Project(models.Model): title = models.CharField(max_length=150) url = models.URLField() manager = models.ForeignKey(User) class Cost(models.Model): project = models.ForeignKey(Project) cost = models.FloatField() date = models.DateField() I must return the sum of costs for each project. view.py: from mypm.costs.models import Project, Cost from django.shortcuts import render_to_response from django.db.models import Avg, Sum def index(request): #... return render_to_response('index.html',... How?
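    One way to do this is with Django's aggregation API; a minimal sketch is below. The context key is an assumption, and the reverse lookup name 'cost' assumes no related_name is set on the ForeignKey.

    from django.db.models import Sum
    from django.shortcuts import render_to_response

    from mypm.costs.models import Project


    def index(request):
        # Annotate each Project with the sum of its related Cost.cost values.
        projects = Project.objects.annotate(total_cost=Sum('cost__cost'))
        # Each project now carries a .total_cost attribute (None if it has no costs).
        return render_to_response('index.html', {'projects': projects})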

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.
    Ray Tracing
    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.
    Pin-Board Toys
    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.
    PolyRay
    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.
    Windows Kinect
    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.
    Creating a Depth Field Animation
    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.
    Rendering a Test Frame
    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.
    Windows Azure Worker Roles
    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.
    Number of Servers | Cost
    1                 | $500
    16                | $8,000
    256               | $128,000
    As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.
    Number of Worker Roles | Render Time | Cost
    1                      | 256 hours   | $30.72
    16                     | 16 hours    | $30.72
    256                    | 1 hour      | $30.72
    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.
    Creating a Render Farm in Windows Azure
    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
    On-Premise
    - Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
    - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
    - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
    Windows Azure
    - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The first 16 instances that were deployed have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.
    Completed Animation
    I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc
    Effective Use of Resources
    According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.
    Grid Computing Scenarios
    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)
    Tips for using Windows Azure for Grid Computing Scenarios
    I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
    - If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio

    - by adamjford
    In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.) This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, it truncates them at 43679 characters. I've read in various locations on the Internet that you can set your Maximum Characters Retrieved for XML Data in Tools > Options > Query Results > SQL Server > Results To Grid to Unlimited, and then perform a query such as this: select Convert(xml, Details) from Events where EventID = 13920 (Note that the data in this column is not XML at all. CONVERTing the column to XML is merely a workaround I found from Googling that someone else has used to get around the limit SSMS has when retrieving data from a text or varchar(MAX) column.) However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error: Unable to show XML. The following error happened: Unexpected end of file has occurred. Line 5, position 220160. One solution is to increase the number of characters retrieved from the server for XML data. To change this setting, on the Tools menu, click Options. So, any idea on how to access this data? Would converting the column to varchar(MAX) fix my woes?

    Read the article

  • Looking for an issue tracker / project management software that automatically manages start/completion dates based on priority/relationships

    - by user361910
    So, a little background. We are a small company with a half-dozen developers. We have been evaluating many project management / issue tracking software packages (TRAC, Redmine, FogBugz, etc) and trying to create a decent process/workflow for managing projects, adding features, fixing bugs, etc. I'd like to think our requirements are similar to most other companies our size. Essentially, what this comes down to is 1) An easy way for the PM and developers to track projects, issues, bugs, etc 2) An easy way for the PM and admin/executives to get a birds-eye view of progress and easily manage timelines, schedules, and priorities. After trying TRAC, we moved to Redmine. We found Redmine to be easier than track to administer and the ability to have sub-projects and sub-tickets is great. However, the big problem we ran into is the fact that it is very difficult to manage schedules and timelines. It seems like it would be incredibly time-intensive to manage because you have to manually enter a start date, estimated time, and end date for each ticket, project, etc. So if you setup a month's schedule based on priorities, what are you supposed to do when a particular ticket/issue/subproject takes up more time than was estimated. Right now, it appears I would have to go back in and MANUALLY change the start/end date of every single item. What would be ideal is to be able to set priorities/dependencies and estimated time on tickets/milestones, and have the software automatically manage the start/end dates. Does anyone know how to get Redmine to do this, or recommend a different software package that can do something like this!

    Read the article

  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory A

    - by Richard T
    Hello All, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are: (1) I/O size: the database engine's native I/O size and the disk's native size should either match, or the database's native I/O size should be a multiple of the disk's native I/O size. (2) Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it. (3) When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it. I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.
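    As a starting point for the first item, the filesystem's preferred I/O block size can be read programmatically on Linux; a minimal sketch follows (the path is a placeholder for the database's data directory, not part of the original question).

    import os

    # Report the filesystem's block sizes for a given mount point or directory.
    st = os.statvfs("/var/lib/dbdata")  # placeholder path
    print("preferred I/O block size:", st.f_bsize, "bytes")
    print("fundamental block size:", st.f_frsize, "bytes")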

    Read the article

  • What applications do people use for windows folder management? How do you switch between folders in

    - by 118184489189799176898
    I'm always switching between client's folders in different applications like photoshop, sql manager, explorer etc. It's so slow to go between them, navigate to the folder and it's still too slow to copy and paste the directory etc. It's so annoying to do. Someone must have a good solution. I was thinking if there was a "recently accessed" folders list available within every folder explorer window... so in any application, if i go "file open" it will have something, somewhere that lists the recently accessed folders - that would be really helpful. I am aware of the recent places folder in win7, but this sucks because it is not sorted by date accessed. Perhaps if there was a way to change this then this would become a decent feature? Is there some application that already does this? i'm sure someone has already solved this issue in a more elegant solution than I can think off. I'm keen to know what programs people use or how people addresss this issue? Thanks...

    Read the article

  • Microsoft Forefront Management Gateway 2010 - Which topology to choose for monitoring only server?

    - by MadBoy
    Hello, I've installed Forefront and wanted to use it as monitoring traffic solution until we decide to put it as a router. I've 2 nic's assigned to this virtual machine. One NIC has connected port which is "mirror port" of our WAN redirected on switch so it sees all the network traffic flying by. The other NIC is internet access. This server is located inside our lan network. What topology should i choose and which options I should look at to be able to see which traffic is used (SMTP, WWW etc) and who does what? We had cases of infected machines with spam and we want to be able to see that some machine is sending large amounts of mails. Is that possible ?

    Read the article

  • Hanging of host network connections when starting KVM guest on bridge

    - by Chris Phillips
    Hi, I've a KVM system upon which I'm running a network bridge directly between all VM's and a bond0 (eth0, eth1) on the host OS. As such, all machines are presented on the same subnet, available outside of the box. The bond is doing mode 1 active / passive, with an arp_ip_target set to the default gateway, which has caused some issues in itself, but I can't see the bond configs mattering here myself. I'm seeing odd things most times when I stop and start a guest on the platform, in that on the host I lose network connectivity (icmp, ssh) for about 30 seconds. I don't lose connectivity on the other already running VM's though... they can always ping the default GW, but the host can't. I say "about 30 seconds" but from some tests it actually seems to be 28 seconds usually (or at least, I lose 28 pings...) and I'm wondering if this somehow relates to the bridge config. I'm not running STP on the bridge at all, and the forwarding delay is set to 1 second, path cost on the bond0 lowered to 10 and port priority of bond0 also lowered to 1. As such I don't think that the bridge should ever be able to think that bond0 is not connected just fine (as continued guest connectivity implies) yet the IP of the host, which is on the bridge device (... could that matter?? ) becomes unreachable. I'm fairly sure it's about the bridged networking, but at the same time as this happens when a VM is started there are clearly loads of other things also happening so maybe I'm way off the mark. Lack of connectivity: # ping 10.20.11.254 PING 10.20.11.254 (10.20.11.254) 56(84) bytes of data. 64 bytes from 10.20.11.254: icmp_seq=1 ttl=255 time=0.921 ms 64 bytes from 10.20.11.254: icmp_seq=2 ttl=255 time=0.541 ms type=1700 audit(1293462808.589:325): dev=vnet6 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 type=1700 audit(1293462808.604:326): dev=vnet7 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 type=1700 audit(1293462808.618:327): dev=vnet8 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x130079 kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xffdd694a kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x530079 64 bytes from 10.20.11.254: icmp_seq=30 ttl=255 time=0.514 ms 64 bytes from 10.20.11.254: icmp_seq=31 ttl=255 time=0.551 ms 64 bytes from 10.20.11.254: icmp_seq=32 ttl=255 time=0.437 ms 64 bytes from 10.20.11.254: icmp_seq=33 ttl=255 time=0.392 ms brctl output of relevant bridge: # brctl showstp brdev brdev bridge id 8000.b2e1378d1396 designated root 8000.b2e1378d1396 root port 0 path cost 0 max age 19.99 bridge max age 19.99 hello time 1.99 bridge hello time 1.99 forward delay 0.99 bridge forward delay 0.99 ageing time 299.95 hello timer 0.50 tcn timer 0.00 topology change timer 0.00 gc timer 0.04 flags vnet5 (3) port id 8003 state forwarding designated root 8000.b2e1378d1396 path cost 100 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 8003 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags vnet0 (2) port id 8002 state forwarding designated root 8000.b2e1378d1396 path cost 100 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 8002 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags bond0 (1) port id 0001 state forwarding designated root 8000.b2e1378d1396 path cost 10 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 0001 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags I do see the new port listed as 
learning, but in line with the forward delay, only for 1 or 2 seconds when polling the brctl output on a loop. All pointers, tips or stabs in the dark appreciated.

    Read the article

  • Cisco 877 as PPPoA/PPPoE bridge (no routing) - how to make it listen to IP for management?

    - by Ingmar Hupp
    I have a Cisco 877 configured to bridge ADSL with PPPoA to PPPoE on Vlan1. This works fine, but in this mode the only way I can configure the Cisco is via the serial console. I'd like to have the Cisco also listen on an IP address so I can telnet/ssh into it. I think the right way to go about this would be via bridge irb, but I'm not sure exactly how (or if that's even the right direction). IOS is 12.4T and my current config (cut down to essentials) is: no ip routing no ip cef ! ! interface ATM0 no ip address no ip route-cache no atm ilmi-keepalive pvc 0/38 encapsulation aal5snap ! dsl operating-mode auto bridge-group 1 ! ! interface Vlan1 no ip address no ip route-cache bridge-group 1 Just setting an IP address on Vlan1 didn't have the desired effect, but surely this must be possible somehow (the Draytek Vigor 120 even does it by default).

    Read the article

  • How can I flush my ssh keys on power management activity?

    - by Sam Halicke
    Hi all, Using ssh-agent and private keys per the usual. Everything's working as normal. My question regards best practices on flushing keys from ssh-add on activity like sleep, suspend, hibernate, etc. I thought about writing a simple wrapper around those commands, but then wondered if are they even called? Or does the kernel initiate this activity directly? Are the PM utilities strictly userland? I would like this additional layer of security beyond locking my screen, etc. and was wondering if anyone else had solved this elegantly or has best practices to recommend. Thanks.
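    One direction, following the wrapper idea from the question: a minimal Python sketch that flushes the agent before handing off to the normal suspend command. It assumes systemd's systemctl and that SSH_AUTH_SOCK is set in the calling environment; a hook under /etc/pm/sleep.d or systemd's system-sleep directory would also cover suspends triggered outside the wrapper.

    import subprocess
    import sys

    # Flush all identities from the running ssh-agent, then suspend.
    # ssh-add -D needs SSH_AUTH_SOCK to point at the user's agent socket.
    subprocess.run(["ssh-add", "-D"], check=False)
    sys.exit(subprocess.run(["systemctl", "suspend"]).returncode)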

    Read the article

  • Cobbler 2.2.2 problems

    - by Peter
    I have setup a dedicated LAN for Cobbler tests. My setup is: Cobbler server: openSUSE 12.3, cobbler 2.2.2 (from openSUSE repos) Imported distros: Centos 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1 Target Machine: VMs in a Windows 7 Virtualbox Systems provisioning works OK, but I have some problems. The first one is that cobbler does not honor the "pxe_just_once: 1" setting. When the setup of the target OS is finished, after the reboot the target systems continues to PXE boot! The second problem is that the target server is not correctly configured! See my setup: cobbler system report --name=test Name : test TFTP Boot Files : {} Comment : Fetchable Files : {} Gateway : 192.168.0.1 Hostname : testcob1.example.com Image : IPv6 Autoconfiguration : False IPv6 Default Device : Kernel Options : {} Kernel Options (Post Install) : {} Kickstart : <<inherit>> Kickstart Metadata : {} LDAP Enabled : False LDAP Management Type : authconfig Management Classes : [] Management Parameters : <<inherit>> Monit Enabled : False Name Servers : ['192.168.0.1', '8.8.8.8'] Name Servers Search Path : [] Netboot Enabled : False Owners : ['admin'] Power Management Address : Power ID : Power Password : Power Management Type : ipmitool Power Username : Profile : RHEL-6.5-x86_64 Proxy : <<inherit>> Red Hat Management Key : <<inherit>> Red Hat Management Server : <<inherit>> Repos Enabled : False Server Override : <<inherit>> Status : testing Template Files : {} Virt Auto Boot : <<inherit>> Virt CPUs : <<inherit>> Virt Disk Driver Type : <<inherit>> Virt File Size(GB) : <<inherit>> Virt Path : <<inherit>> Virt RAM (MB) : <<inherit>> Virt Type : <<inherit>> Interface ===== : eth0 Bonding Opts : Bridge Opts : DHCP Tag : DNS Name : Master Interface : Interface Type : IP Address : 192.168.0.200 IPv6 Address : IPv6 Default Gateway : IPv6 MTU : IPv6 Secondaries : [] IPv6 Static Routes : [] MAC Address : Management Interface : True MTU : Subnet Mask : 255.255.255.0 Static : True Static Routes : [] Virt Bridge : So, although I have setup the hostname and the network interface of the target system, after the setup, the hostname is set to localhost.localdomain and eth0 is configured as a DHCP not static! How can I find the problem and fix it? Note that I have synced and restarted cobbler a couple of times, but the problems persists.

    Read the article

  • Is it possible to open an Active Directory or Exchange Management Console user dialog directly from PowerShell?

    - by Myrddin Emrys
    I'd like to be able to launch either the AD user dialog, or the EMC mailbox dialog directly from a Powershell script to open a specific user. The workflow goes something to the effect of "Does everything look correct on this user? Y/N" to continuing on, or to bringing up the account to edit. There's no reason to completely duplicate the functionality of these dialogs. I don't mind requiring that EMC or ADU&C already be open before the script is run, if necessary.

    Read the article

  • Simple, centralized user management on a small LAN - NIS or LDAP?

    - by einpoklum
    I'm setting up a small LAN for my team. It will, for all intents and purposes, not be connected to any external networks. I would like it to have centralized control of user accounts (at least, I think I'd like that; I'm also considering using puppet, so theoretically I could just push /etc/passwd changes, or something). The number of machines is fixed, but not very small. Mostly they're 'attached' to a single user, but sometimes people work remotely on someone else's box; and there are a couple of servers. I've read this question, but my scenario is much simpler (even simpler than in this question) and I'd like to do something (relatively) quick, with not much hassle, but not a dirty totally-insecure hack. Is NIS relevant for my scenario? If not, what's the most hassle-free way to set up LDAP (or LDAP+Kerberos) to achieve the same? Notes: I have no experience with setting up either NIS or LDAP. We use Debian-flavored Linux distributions, mainly Kubuntu 12.04 (not my choice, but that's the way it is).

    Read the article

  • Suggestion for live video stream aggregation/switching/forwarding/management software?

    - by deceze
    I'm looking for a software or system that can receive video streams from a number of cameras via a network (RTMP or similar protocol), present a visual overview of all video streams and allow me to forward/send a selected stream to another service (e.g. to a Flash Media Server, or anywhere via RTMP). Basically the digital internet equivalent of a TV studio control panel, which allows a director to put together a live show. Is there any such software at an affordable price? A GUI-less server which can be scripted to switch streams would be good too. I'm not even quite sure what kind of product category this falls into or what search terms to plug into Google. Most results I have come up with have little more than an executive summary description which doesn't tell me anything. Any suggestion welcome.

    Read the article

  • What is the correct approach I should use for an application that requires Amazon S3 uploads and SimpleDB data management?

    - by Luis Oscar
    I am developing an application for iOS and that is going smoothly; the problem is that I am very new at server-side things. I am totally confused about how to correctly use Amazon Web Services for this purpose. What I want to do is very simple. I want my application to be able to query a servlet hosted in EC2 to retrieve pictures and data based on some criteria from S3 and SimpleDB respectively. Also the application should be able to upload pictures into an S3 bucket and register the information in SimpleDB. My main concerns are security and costs. So far I was using the Amazon Token Vending Machine, but I haven't been successful when trying to customize it, and while researching I discovered that in the long run it is very expensive. The ultimate goal is to handle a "social" picture service for my iOS application: being able to register new users, authenticate these users, and see what permissions they have to which pictures in the bucket. And all this without having to worry about third parties accessing the private pictures of my users. Sorry for this question but I am really clueless about how to handle this... I have tried reading many articles but all this server stuff looks very scary.

    Read the article

  • Where do I find a free (open source preferably) VNC management tool?

    - by thenior
    Hello, I am trying to get a remote internal setup for our business. Basically, I just want to remote desktop into any computer on the network. I don't want to use LogMeIn, because I only want it to be internal for security. Basically, I am looking for a way to just install VNC on all the machines, and on my machine have a centralized manager for all the machines that are connected to it. Doesn't have to be VNC - just needs to work and be free. All systems are running Win 7 64bit.

    Read the article
