Search Results

Search found 941 results on 38 pages for 'mental health'.


  • Hundreds of custom UserControls create thousands of USER Objects

    - by Andy Blackman
    I'm creating a dashboard application that shows hundreds of "items" on a FlowLayoutPanel. Each "item" is a UserControl that is made up of 12 or so labels. My app queries a database and then creates an "item" instance for each record, populating the labels and textboxes with data before adding it to the FlowLayoutPanel. After adding about 560 items to the panel, I noticed that the USER Objects count in my Task Manager had gone up to about 7300, which was much, much larger than any other app on my machine. I did a quick spot of mental arithmetic (OK, I might have used calc.exe) and figured that 560 * 13 (12 labels plus the UserControl itself) is 7280. So that suddenly gave away where all the objects were coming from... Knowing that there is a 10,000 USER object limit before Windows throws in the towel, I'm trying to figure out better ways of drawing these items onto the FlowLayoutPanel. My ideas so far are as follows: 1) User-draw the "item", using Graphics.DrawString and DrawImage in place of many of the labels. I'm hoping that this will mean 1 item = 1 USER Object, not 13. 2) Have 1 instance of the "item", then for each record, populate the instance and use the Control.DrawToBitmap() method to grab an image and then use that in the FlowLayoutPanel (or similar). So... does anyone have any other suggestions? P.S. It's a zoomable interface, so I have already ruled out "Paging" as there is a requirement to see all items at once :( Thanks everyone.
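    For illustration, here is a minimal sketch of idea 1 in WinForms, with a hypothetical DashboardItem class standing in for one database record (the class, layout and sizes are assumptions, not the original code): a single user-drawn control paints its own text in OnPaint instead of hosting a dozen child Labels, so each item should cost roughly one USER object instead of thirteen.

    using System.Drawing;
    using System.Windows.Forms;

    // Hypothetical holder for one database record (names are illustrative only).
    public class DashboardItem
    {
        public string Title { get; set; }
        public string[] Values { get; set; }
    }

    // One control = one window handle; the 12 "labels" become DrawString calls.
    public class DashboardItemControl : Control
    {
        private readonly DashboardItem _item;

        public DashboardItemControl(DashboardItem item)
        {
            _item = item;
            Size = new Size(200, 160);
            // Paint everything ourselves and double-buffer to avoid flicker.
            SetStyle(ControlStyles.AllPaintingInWmPaint |
                     ControlStyles.OptimizedDoubleBuffer |
                     ControlStyles.UserPaint, true);
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            e.Graphics.DrawString(_item.Title, Font, Brushes.Black, 4f, 4f);
            for (int i = 0; i < _item.Values.Length; i++)
            {
                // Each line replaces what used to be a separate Label control.
                e.Graphics.DrawString(_item.Values[i] ?? string.Empty, Font,
                                      Brushes.DimGray, 4f, 24f + i * 12f);
            }
        }
    }

    Each record would then be added with something like flowLayoutPanel.Controls.Add(new DashboardItemControl(record)), bringing the USER object count down to roughly one per item.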

    Read the article

  • Access VBA question: Change the query being referenced by a function, depending on context

    - by Tara Amatista
    I have a custom function in Access 2007 that hinges on grabbing data out of a specific query. It opens Outlook, creates a new email and populates the fields with specific addresses and data taken from the query ("DecisionEmail"). Now I want to make a different query ("RequestEmail") and have it populate the email with that data. So all I have to do is change this line: Set MailList = db.OpenRecordset("DecisionEmail") and that's where I get stumped. This is my desired result: If the user is on Form_Decision and clicks the button "Send email", "DecisionEmail" will get plugged into the function and that data will appear in the email. If the user is on Form_SendRequest and clicks the button "Send email", "RequestEmail" will instead get plugged in. The reason these are different queries is that they contain very different information that is smudged about in different ways. However, since it's just one little thing that needs to change in the function code, I don't think a brand new function is a good idea. My last resort would be to make a brand new function and use the Conditions field in the Macro interface to choose between them, but I have a feeling there's a more elegant solution possible. I have a vague notion of setting the query names as variables and using an If statement, but I just don't have the mental vocabulary for thinking through this.

    Read the article

  • jQuery: Trying to find a specific instance of a class by number

    - by mattelliottIT
    I guess I have just hit a mental block with this one; maybe some fresh eyes will help. Basically I have a few instances of the class "menu-item" which, when clicked, call the click function via jQuery and bring up a video. Instead of giving each one an id as well as a class, I am trying to find which instance of the class was clicked (1, 2, 3, etc). Just can't seem to get it though. //click listener for menu-items $('.menu-item').click(function(event) { var o = $('.menu-item'); var count = o.length(); // switch(count) { case 0 : filename == 'letters'; break; case 1 : filename == 'the-gift'; break; } var videoPlayer = '<video controls width="618px">'; videoPlayer += '<source src="_video/' + filename + '.mp4" />'; videoPlayer += '</video>'; //place video $('#videoCont').html(videoPlayer); }); I'm trying to create an array there where each instance of the 'menu-item' is one array item. (btw, for now I am just proofing this with an mp4 filetype before I add in the ogv and webm formats). Thanks for any and all help!

    Read the article

  • Asynchronous Controller is blocking requests in ASP.NET MVC through jQuery

    - by Jason
    I have just started using the AsyncController in my project to take care of some long-running reports. Seemed ideal at the time since I could kick off the report and then perform a few other actions while waiting for it to come back and populate elements on the screen. My controller looks a bit like this. I tried to use a thread to perform the long task which I'd hoped would free up the controller to take more requests: public class ReportsController : AsyncController { public void LongRunningActionAsync() { AsyncManager.OutstandingOperations.Increment(); var newThread = new Thread(LongTask); newThread.Start(); } private void LongTask() { // Do something that takes a really long time //....... AsyncManager.OutstandingOperations.Decrement(); } public ActionResult LongRunningActionCompleted(string message) { // Set some data up on the view or something... return View(); } public JsonResult AnotherControllerAction() { // Do a quick task... return Json("..."); } } But what I am finding is that when I call LongRunningAction using the jQuery ajax request, any further requests I make after that back up behind it and are not processed until LongRunningAction completes. For example, call LongRunningAction which takes 10 seconds and then call AnotherControllerAction which is less than a second. AnotherControllerAction simply waits until LongRunningAction completes before returning a result. I've also checked the jQuery code, but this still happens if I specifically set "async: true": $.ajax({ async: true, type: "POST", url: "/Reports.aspx/LongRunningAction", dataType: "html", success: function(data, textStatus, XMLHttpRequest) { // ... }, error: function(XMLHttpRequest, textStatus, errorThrown) { // ... } }); At the moment I just have to assume that I'm using it incorrectly, but I'm hoping one of you guys can clear my mental block!
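    For reference, the documented AsyncController pattern hands results back to the Completed action through AsyncManager.Parameters; a rough sketch of the worker under that pattern (the "message" value and the use of the thread pool here are illustrative, not the original code) looks something like this.

    public void LongRunningActionAsync()
    {
        AsyncManager.OutstandingOperations.Increment();
        System.Threading.ThreadPool.QueueUserWorkItem(state =>
        {
            try
            {
                // ... the actual long-running report generation goes here ...

                // The key must match the parameter name of the Completed action.
                AsyncManager.Parameters["message"] = "Report finished";
            }
            finally
            {
                AsyncManager.OutstandingOperations.Decrement();
            }
        });
    }

    public ActionResult LongRunningActionCompleted(string message)
    {
        ViewData["Message"] = message;
        return View();
    }

    As an aside, if the blocking persists even with this shape, it may be worth checking whether session state is involved, since ASP.NET serializes concurrent requests that share the same session.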

    Read the article

  • How do I pass a value to a method in Objective C

    - by user268124
    I'm new to Obj-C and I have run into a very, very basic mental block that's making me feel a little stupid. I want to pass a variable from one method to another in a purely Objective-C way. So far I'm stuck and have resorted to writing it in C. In C it goes as follows: //detect the change in the segmented switch so we can; change text - (IBAction)switchChanged:(id)sender { NSLog(@"Switch change detected"); //grab the info from the sender UISegmentedControl *selector = sender; //grab the selected segment info from selector. This returns a 0 or a 1. int selectorIndex = [selector selectedSegmentIndex]; changeText (selectorIndex); } int changeText (int selectorPosition) { NSLog(@"changeText received a value. Selector position is %d", selectorPosition); //Idea is to receive a value from the segmented controller and change the screen text accordingly return 0; } This works perfectly well, but I want to learn how to do this with objects and messages. How would I rewrite these two methods to do this? Cheers Rich

    Read the article

  • Confused about std::runtime_error vs. std::logic_error

    - by David Gladfelter
    I recently saw that the boost program_options library throws a logic_error if the command-line input was un-parsable. That challenged my assumptions about logic_error vs. runtime_error. I assumed that logic errors (logic_error and its derived classes) were problems that resulted from internal failures to adhere to program invariants, often in the form of illegal arguments to internal API's. In that sense they are largely equivalent to ASSERT's, but meant to be used in released code (unlike ASSERT's which are not usually compiled into released code.) They are useful in situations where it is infeasible to integrate separate software components in debug/test builds or the consequences of a failure are such that it is important to give runtime feedback about the invalid invariant condition to the user. Similarly, I thought that runtime_errors resulted exclusively from runtime conditions outside of the control of the programmer: I/O errors, invalid user input, etc. However, program_options is obviously heavily (primarily?) used as a means of parsing end-user input, so under my mental model it certainly should throw a runtime_error in the case of bad input. Where am I going wrong? Do you agree with the boost model of exception typing?

    Read the article

  • CodePlex Daily Summary for Thursday, January 06, 2011

    CodePlex Daily Summary for Thursday, January 06, 2011Popular ReleasesStyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.SSH.NET Library: 2011.1.6: Fixes CommandTimeout default value is fixed to infinite. Port Forwarding feature improvements Memory leaks fixes New Features Add ErrorOccurred event to handle errors that occurred on different thread New and improve SFTP features SftpFile now has more attributes and some operations Most standard operations now available Allow specify encoding for command execution KeyboardInteractiveConnectionInfo class added for "keyboard-interactive" authentication. Add ability to specify bo...UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO: Voici une version attendu avec impatience pour beaucoup : - La version PL3 KAKAROTO intégre ses dernières modification et intégre maintenant le firmware 2.43 !!! Conclusion : - ultimateJB DEFAULT => Pas de spoof mais disponible pour les PS3 suivantes : 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43.NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lot's of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion Int32.InRange String.RemoveAllSpecialCharacters String.IsEmptyOrWhiteSpace String.IsNotEmptyOrWhiteSpace String.IfEmptyOrWhiteSpace String.ToUpperFirstLetter String.GetBytes String.ToTitleCase String.ToPlural DateTime.GetDaysInYear DateTime.GetPeriodOfDay IEnumberable.RemoveAll IEnumberable.Distinct ICollection.RemoveAll IList.Join IList.Match IList.Cast Array.IsNullOrEmpty Array.W...VidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (Thanks, Statick). Allowing higher bitrates for 6-ch....NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.BloodSim: BloodSim - 1.3.2.0: - Simulation Log is now automatically disabled and hidden when running 10 or more iterations - Hit and Expertise are now entered by Rating, and include option for a Racial Expertise bonus - Added option for boss to use a periodic magic ability (Dragon Breath) - Added option for boss to periodically Enrage, gaining a Damage/Attack Speed buffASP.NET MVC CMS ( Using CommonLibrary.NET ): CommonLibrary.NET CMS 0.9.5 Alpha: CommonLibrary CMSA simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. 
ActiveRecord based components for Blogs, Widgets, Pages, Parts, Events, Feedback, BlogRolls, Links Includes several widgets ( tag cloud, archives, recent, user cloud, links twitter, blog roll and more ) Built using the http://commonlibrarynet.codeplex.com framework. ( Uses TDD, DDD, Models/Entities, Code Generation ) Can run w/ In-Memory Repositories or Sql Server Database See Documentation tab for Ins...EnhSim: EnhSim 2.2.9 BETA: 2.2.9 BETAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the Gobl...xUnit.net - Unit Testing for .NET: xUnit.net 1.7 Beta: xUnit.net release 1.7 betaBuild #1533 Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3 Added Assert.Equal(double expected, double actual, int precision)...Json.NET: Json.NET 4.0 Release 1: New feature - Added Windows Phone 7 project New feature - Added dynamic support to LINQ to JSON New feature - Added dynamic support to serializer New feature - Added INotifyCollectionChanged to JContainer in .NET 4 build New feature - Added ReadAsDateTimeOffset to JsonReader New feature - Added ReadAsDecimal to JsonReader New feature - Added covariance to IJEnumerable type parameter New feature - Added XmlSerializer style Specified property support New feature - Added ...DbDocument: DbDoc Initial Version: DbDoc Initial versionASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.2: Atomic CMS 2.1.2 release notes Atomic CMS installation guide N2 CMS: 2.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) All-round improvements and bugfixes File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open the proj...Wii Backup Fusion: Wii Backup Fusion 1.0: - Norwegian translation - French translation - German translation - WBFS dump for analysis - Scalable full HQ cover - Support for log file - Load game images improved - Support for image splitting - Diff for images after transfer - Support for scrubbing modes - Search functionality for log - Recurse depth for Files/Load - Show progress while downloading game cover - Supports more databases for cover download - Game cover loading routines improvedAutoLoL: AutoLoL v1.5.1: Fix: Fixed a bug where pressing Save As would not select the Mastery Directory by default Unexpected errors are now always reported to the user before closing AutoLoL down.* Extracted champion data to Data directory** Added disclaimer to notify users this application has nothing to do with Riot Games Inc. 
Updated Codeplex image * An error report will be shown to the user which can help the developers to find out what caused the error, this should improve support ** We are working on ...TortoiseHg: TortoiseHg 1.1.8: TortoiseHg 1.1.8 is a minor bug fix release, with minor improvementsBlogEngine.NET: BlogEngine.NET 2.0: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info 3 Months FREE – BlogEngine.NET Hosting – Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. To get started, be sure to check out our installatio...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.6 Released: Hi, Today we are releasing final version of Visifire, v3.6.6 with the following new feature: * TextDecorations property is implemented in Title for Chart. * TitleTextDecorations property is implemented in Axis. * MinPointHeight property is now applicable for Column and Bar Charts. Also this release includes few bug fixes: * ToolTipText property of DataSeries was not getting applied from Style. * Chart threw exception if IndicatorEnabled property was set to true and Too...New Projects.NET Framework Extensions Packages: Lightweight NuGet packages with reusable source code extending core .NET functionality, typically in self-contained source files added to your projects as internal classes that can be easily kept up-to-date with NuGet..NET Random Mock Extensions: .NET Random Mock Extensions allow to generate by 1 line of code object implementing any interface or class and fill its properties with random values. This can be usefull for generating test data objects for View or unit testing while you have no real domain object model.ancc: anccASP.NET Social Controls: ASP.NET Social Controls is a small collection of server controls designed to make integrating social sharing utilities such as ShareThis, AddThis and AddToAny easier, more manageable, and X/HTML-compliant, with configuration files and per-instance settings.Autofac for WindowsPhone7: This project hosts the releases for Autofac built for WindowsPhone7AutoSensitivity: AutoSensitivity allows you to define different mouse sensitivities (speeds) for your tocuhpad and mouse and automatically switch between them (based on mouse connect / disconnect).BaseCode: basecodeCaliburn Micro Silverlight Navigation: Caliburn Micro Silverlight Navigation adds navigation to Caliburn Micro UI Framework by applying the ViewModel-First principle. Debian 5 Agent for System Center Operations Manager 2007 R2: Debian 5 System Center Operations Manager 2007 R2 Agent. Debian 5 Management Pack For System Center Operations Manager 2007 R2: Debian 5 Management Pack for SCOM 2007 R2. It will be useless without the Agent (in another project).Eventbrite Helper for WebMatrix: The Eventbrite Helper for WebMatrix makes it simple to promote your Eventbrite events in your WebMatrix site. With a few lines of code you will be able to display your events on your web site with integration with Windows Live Calendar and Google Calendar.Eye Check: EyeCheck is an eye health testing project. It contains a set of tests to examine eye health. 
It's developed in C# using the Silverlight technology.Hooly Search: This ASP.NET project lets you browse through and search text within holy booksIssueVision.ST: A Silverlight LOB sample using Self-tracking Entities, WCF Services, WIF, MVVM Light toolkit, MEF, and T4 Templates.Lawyer Officer: Projeto desenvolvido como meu trabalho de conclusão de curso para formação em bacharelado em sistemas da informação da FATEF-São VicenteLINQtoROOT: Translates LINQ queries from the .NET world in to CERN's ROOT language (C++) and then runs them (locally or on a PROOF server).OA: ??????????Open Manuscript System: Open Manuscript Systems (OMS) is a research journal management and publishing system with manuscript tracking that has been developed in this project to expand and improve access to research.ProjectCNPM_Vinhlt_Teacher: Ðây là b?n CNPM demo c?a nhóm 6,K52a3 HUS VN. b?n demo này cung là project dâu ti?n tri?n khai phát tri?n th? nghi?m trên mô hình m?ng - Nhi?u member cùng phát tri?n cùng lúc QuanLyNhanKhau: WPF test.RazorPad: RazorPad is a quick and simple stand-alone editing environment that allows anyone (even non-developers) to author Razor templates. It is developed in WPF using C# and relies on the System.WebPages.Razor libraries (included in the project download). Rovio Tour Guide: More details to follow soon....long story short building a robotic tour guide using the Rovio roving webcam platform for proof of concept.ScrumPilot: ScrumPilot is a viewer of events coming from Team Foundation Server The main goal of this project is to help team to follow in real time the Checkins and WorkItems changing. Team can do comments to each event and they can preview some TFS artifacts.S-DMS: S-DMS?????????(Document Manage System)Sharepoint Documentation Generator: New MOSS feature to automatically generate documentation/tables for fields, content types, lists, users, etc...ShengjieGao's projects: ?????Stylish DOS Box: Since the introduction of Windows 3.11 I am trying to avoid the DOS box and use any applet provided with GUI in Windows system. Yet, I realize that there is no week passed by without me opening the DOS box! This project will give the DOS Box a new look.Table2DTO: Auto generate code to build objects (DTOs, Models, etc) from a data table.Techweb: Alon's and Simon's 236607 homework assignments.TLC5940 Driver for Netduino: An Netduino Library for the TI TLC5940 16-Channel PWM Chip. Tratando Exceptions da Send Port na Orchestration: Quando a Send Port é do tipo Request-Response manipular o exception é intuitivo, já que basta colocar um escopo e adicionar um exception do tipo System.Exception. Mas quando a porta é one-way a coisa complica um pouco.UAC Runner: UAC Runner is a small application which allows the running of applications as an administrator from the command line using Windows UAC.Ubuntu 10 Agent for System Center Operations Manager 2007 R2: Ubuntu 10 System Center Operations Manager 2007 R2 Agent.Ubuntu 10 Management Pack For System Center Operations Manager 2007 R2: Ubuntu 10 Management Pack for SCOM 2007 R2. It will be useless without the Agent (in another project). It is based on Red Hat 5 Management Pack. See the Download section to download the MPs and the source files (XML) Whe Online Storage: Whe Online Storage, is an 3. party online storage system and tools for free source. C#, .NET 4.0, SilverlightWindows Phone MVP: An MVP implementation for Windows Phone.

    Read the article

  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn’t even looked at it. My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then, I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I’m ready to try out a new process. Deployment Manager is designed to massively simplify the deployment processes from what could be lots of manual copying of files, managing of configuration files, and database upgrades down to a few clicks. It’s a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along with the instructions that follow, you’ll first need to download Deployment Manager from Red Gate. It has a free ‘Starter Edition’ which allows you to create up to 5 projects and agents (machines you deploy to), so it’s really easy to get up and running with a fully-featured version. The Initial Set Up After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments. 1. Setting up Environments The dashboard informed me that I needed to add new ‘Environments’, which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: ‘staging’ and ‘live’.   2. Add Target Machines Once I had created the environments, I was ready to add ‘target machine’s to them, which are the actual machines that the deployment will occur on.   To enable me to deploy to a new machine, I needed to download and install an Agent on it. The ‘Add target machine’ form on the ‘Environments’ page helpfully provides a link for downloading an Agent.   
Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.   3. Run Health Check If, after adding your new target machine, the ‘Status’ flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service.     You can ‘Check Health’, which will give you more information on any issues. It is probably worth running this regardless of what status the ‘Environments’ dashboard is claiming, just to be on the safe side.     4. Add Projects Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project.   I clicked the ‘project’ link to get started, gave my new project a name and clicked ‘Create’. I was then redirected to the ‘Steps’ page for the project under the Projects tab.   5. Package Steps The ‘Steps’ page was fairly empty when it first loaded.   Adding a ‘step’ allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don’t worry; there is a nice guide included on how to go about doing all of this on the ‘Package Feeds’ page in ‘Settings’, if you need any help with setting these bits up.    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My ‘package step’, therefore, is set up to look for this package on our feed. The final part of package step was simply specifying which machines from what environments I wanted to be able to deploy the project to.     Format SQL Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!   Create a release Next I clicked on the ‘Create release’ button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release.   As you can see underneath ‘Version number’, it keeps track of what version the previous release was given. Clicking ‘Create’ created the release and redirected me to a summary of it where I could check the details before deploying.   I clicked ‘Deploy this release’ and chose the environment I wanted to deploy to and…that’s it. Deployment Manager went off and deployed it for me.   Once I clicked ‘Deploy release’, Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That’s it, I’m done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem. 
The screenshot below is not from my Format SQL deployment, but I thought I’d post one to demonstrate the logging output available. Features One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the ‘live’ environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines lead to some odd behavior.     Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything. Machine Specific Deployments ‘What about custom configuration files?’ I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given I had environments ‘staging’ and ‘live’, and that staging used the project’s web.config file, while production (‘live’) required the config file to undergo some transformations, I simply added a web.live.config file in the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest. Another option is to set up ‘variables’ for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You’ll find Variables as a full left-hand submenu within the ‘Projects’ tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn’t get a chance to play around with like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let’s not forget that my use case in this article is a very simple one – deploying a single package. I don’t believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.   Conclusion In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I’ve tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that the releases are immutable, which from a development point of view makes sense. However, this causes confusion where I have to create a new release to deploy to a newly set up environment – I cannot simply deploy an old release onto a new environment, the whole release needs to be recreated. 
I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time. Especially when you compare it to how long it takes me to manually deploy the other site, as I described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to ‘create environments’ or ‘add some deployment steps’ before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increase, it might become awkward to try and search them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you’d like to take a look at Deployment Manager for yourself, then you can download it here.

    Read the article

  • How do I set an event off when the player is on a certain tile?

    - by Tom Burman
    Here is the code I use to create and print my map to the canvas: var board = []; function loadMap(map) { if (map == 1) { return [ [2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,3,0,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,3,3,3,3,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,3,0,0,2], [2,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,3,3,0,0,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,3,3,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,3,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,3,0,3,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,3,3,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,3,3,3,3,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,3,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,3,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,3,3,3,3,3,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,3,3,0,0,3,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,3,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,3,3,3,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,3,0,0,0,0,0,0,0,0,3,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], 
[2,0,0,0,0,3,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,3,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,3,3,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,3,3,3,3,3,3,3,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,3,3,3,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2], [2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2] ]; } } board = loadMap(1); enterfor (y = 0; y <= viewHeight; y++) { for (x = 0; x <= viewWidth; x++) { var theX = x * 32; var theY = y * 32; context.drawImage(mapTiles[board[y+viewY][x+viewX]], theX, theY, 32, 32); } } And here is the code I use for player movement: canvas.addEventListener('keydown', function(e) { console.log(e); var key = null; switch (e.which) { case 37: // Left if (playerX > 0) playerX--; break; case 38: // Up if (playerY > 0) playerY--; break; case 39: // Right if (playerX < worldWidth) playerX++; break; case 40: // Down if (playerY < worldHeight) playerY++; break; } viewX = playerX - Math.floor(0.5 * viewWidth); if (viewX < 0) viewX = 0; if (viewX+viewWidth > worldWidth) viewX = worldWidth - viewWidth; viewY = playerY - Math.floor(0.5 * viewHeight); if (viewY < 0) viewY = 0; if (viewY+viewHeight > worldHeight) viewY = worldHeight - viewHeight; }, false); What I am looking for is a method for when the player lands on tile 3 he loses health. I have tried to use this in the player movement but it doesnt seem to work e.g the left movement: case 37: // Left if (playerX > 0) playerX--; if(board[x2 - 1] == 3) { health--; playerX--; }

    Read the article

  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present with massive spikes of up to 20k ms a-wait and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare: iostat (Oct 6, 2013) tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 16.00 0.00 156.00 9.75 21.89 288.12 36.00 57.60 5.50 0.00 44.00 8.00 48.79 2194.18 181.82 100.00 2.00 0.00 16.00 8.00 46.49 3397.00 500.00 100.00 4.50 0.00 40.00 8.89 43.73 5581.78 222.22 100.00 14.50 0.00 148.00 10.21 13.76 5909.24 68.97 100.00 1.50 0.00 12.00 8.00 8.57 7150.67 666.67 100.00 0.50 0.00 4.00 8.00 6.31 10168.00 2000.00 100.00 2.00 0.00 16.00 8.00 5.27 11001.00 500.00 100.00 0.50 0.00 4.00 8.00 2.96 17080.00 2000.00 100.00 34.00 0.00 1324.00 9.88 1.32 137.84 4.45 59.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 22.00 44.00 204.00 11.27 0.01 0.27 0.27 0.60 Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS where uname -a reports the following: Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux The machine is a dedicated server that hosts an online game without any databases or I/O heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitive towards I/O latency and thus our players experience massive ingame lag, which we would like to address as soon as possible. 
iostat: avg-cpu: %user %nice %system %iowait %steal %idle 1.77 0.01 1.05 1.59 0.00 95.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sdb 13.16 25.42 135.12 504701011 2682640656 sda 1.52 0.74 20.63 14644533 409684488 Uptime is: 19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32 Harddisk controller: 01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04) Harddisks: Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS Partition information from df: Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdb1 480191156 30715200 425083668 7% /home /dev/sda2 7692908 437436 6864692 6% / /dev/sda5 15377820 1398916 13197748 10% /usr /dev/sda6 39159724 19158340 18012140 52% /var Some more data samples generated with iostat -dx sdb 1 (Oct 11, 2013) Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sdb 0.00 15.00 0.00 70.00 0.00 656.00 9.37 4.50 1.83 4.80 33.60 sdb 0.00 0.00 0.00 2.00 0.00 16.00 8.00 12.00 836.00 500.00 100.00 sdb 0.00 0.00 0.00 3.00 0.00 32.00 10.67 9.96 1990.67 333.33 100.00 sdb 0.00 0.00 0.00 4.00 0.00 40.00 10.00 6.96 3075.00 250.00 100.00 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 4.00 0.00 0.00 100.00 sdb 0.00 0.00 0.00 2.00 0.00 16.00 8.00 2.62 4648.00 500.00 100.00 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00 0.00 100.00 sdb 0.00 0.00 0.00 1.00 0.00 16.00 16.00 1.69 7024.00 1000.00 100.00 sdb 0.00 74.00 0.00 124.00 0.00 1584.00 12.77 1.09 67.94 6.94 86.00 Characteristic charts generated with rrdtool can be found here: iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/ iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/ As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test if the I/O wait spikes would perhaps be caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers: echo 3 > /proc/sys/vm/drop_caches and directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags in short, unpredictable intervals. Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the raid controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris. Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem? 
@Andy Shinn requested smartctl data, here it is: smartctl -a -d megaraid,2 /dev/sdb yields: smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: SEAGATE ST3500620SS Version: MS05 Serial number: Device type: disk Transport protocol: SAS Local Time is: Mon Oct 14 20:37:13 2013 CEST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Current Drive Temperature: 20 C Drive Trip Temperature: 68 C Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 1236631092 Blocks received from initiator = 1097862364 Blocks read from cache and sent to initiator = 1383620256 Number of read and write commands whose size <= segment size = 531295338 Number of read and write commands whose size > segment size = 51986460 Vendor (Seagate/Hitachi) factory information number of hours powered up = 36556.93 number of minutes until next internal SMART test = 32 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 509271032 47 0 509271079 509271079 20981.423 0 write: 0 0 0 0 0 5022.039 0 verify: 1870931090 196 0 1870931286 1870931286 100558.708 0 Non-medium error count: 0 SMART Self-test log Num Test Status segment LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 16 36538 - [- - -] # 2 Background short Completed 16 36514 - [- - -] # 3 Background short Completed 16 36490 - [- - -] # 4 Background short Completed 16 36466 - [- - -] # 5 Background short Completed 16 36442 - [- - -] # 6 Background long Completed 16 36420 - [- - -] # 7 Background short Completed 16 36394 - [- - -] # 8 Background short Completed 16 36370 - [- - -] # 9 Background long Completed 16 36364 - [- - -] #10 Background short Completed 16 36361 - [- - -] #11 Background long Completed 16 2 - [- - -] #12 Background short Completed 16 0 - [- - -] Long (extended) Self Test duration: 6798 seconds [113.3 minutes] smartctl -a -d megaraid,3 /dev/sdb yields: smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: SEAGATE ST3500620SS Version: MS05 Serial number: Device type: disk Transport protocol: SAS Local Time is: Mon Oct 14 20:37:26 2013 CEST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Current Drive Temperature: 19 C Drive Trip Temperature: 68 C Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 288745640 Blocks received from initiator = 1097848399 Blocks read from cache and sent to initiator = 1304149705 Number of read and write commands whose size <= segment size = 527414694 Number of read and write commands whose size > segment size = 51986460 Vendor (Seagate/Hitachi) factory information number of hours powered up = 36596.83 number of minutes until next internal SMART test = 28 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 610862490 44 0 610862534 610862534 20470.133 0 write: 0 0 0 0 0 5022.480 0 verify: 2861227413 203 0 2861227616 2861227616 100872.443 0 Non-medium error count: 1 SMART Self-test log Num Test Status segment 
LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 16 36580 - [- - -] # 2 Background short Completed 16 36556 - [- - -] # 3 Background short Completed 16 36532 - [- - -] # 4 Background short Completed 16 36508 - [- - -] # 5 Background short Completed 16 36484 - [- - -] # 6 Background long Completed 16 36462 - [- - -] # 7 Background short Completed 16 36436 - [- - -] # 8 Background short Completed 16 36412 - [- - -] # 9 Background long Completed 16 36404 - [- - -] #10 Background short Completed 16 36401 - [- - -] #11 Background long Completed 16 2 - [- - -] #12 Background short Completed 16 0 - [- - -] Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    Read the article

  • Problem making system calls with PHP scripts

    - by mazin k.
    I have the following PHP script: <?php $fortune = `fortune`; echo $fortune; ?> but the output is simply blank (no visible errors thrown). However, if I run php -a, it works: php > echo `fortune`; Be careful of reading health books, you might die of a misprint. -- Mark Twain php > Am I missing a config directive or something that would cause this? Edit: So, I tried running my script using $ php-cgi fortunetest.php and it worked as expected. Maybe the issue is with Apache2?

    Read the article

  • Regex: replace inner string

    - by AllenG
    I'm working with X12 EDI files (Specifically 835s, for those of you in Health Care), and I have a particular vendor who's using a non-HIPAA compliant version (3090, I think). The problem is that in a particular segment (PLB - again, for those who care) they're sending a code which is no longer supported by the HIPAA Standard. I need to locate the specific code, and update it with a corrected code. I think a Regex would be best for this, but I'm still very new to Regex, and I'm not sure where to begin. My current methodology is to turn the file into an array of strings, find the entry that starts with "PLB", break that into an array of strings, find the code, and change it. As you can guess, that's very verbose code for something which should be (I'd think) fairly simple. Here's a sample of what I'm looking for: ~PLB|1902841224|20100228|49KC15X078001104|.08~ And here's what I want to change it to: ~PLB|1902841224|20100228|CSKC15X078001104|.08~ Any suggestions?
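    For illustration, a minimal .NET sketch, assuming the element separator is '|' and the segment terminator is '~' exactly as in the sample above (real 835 files often use '*' and other terminators), that swaps the leading "49" of that PLB element for "CS":

    using System.IO;
    using System.Text.RegularExpressions;

    class PlbCodeFixer
    {
        // Captures everything up to the PLB element that starts with the old
        // code "49", then swaps that prefix for "CS". Separators ('|', '~')
        // come from the sample above and may differ in real files.
        static readonly Regex PlbCode = new Regex(
            @"(~PLB\|[^|~]*\|[^|~]*\|)49([^|~]*)",
            RegexOptions.Compiled);

        static void Main(string[] args)
        {
            string edi = File.ReadAllText(args[0]);
            File.WriteAllText(args[0], PlbCode.Replace(edi, "${1}CS${2}"));
        }
    }

    Run against the sample segment, ~PLB|1902841224|20100228|49KC15X078001104|.08~ becomes ~PLB|1902841224|20100228|CSKC15X078001104|.08~.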

    Read the article

  • Good embedded database solution (like SQLite) for .Net

    - by vfilby
    I am looking for file-based storage solutions that I can use with a .NET project. They need to have a SQL-like interface for storing and retrieving data. They need to have relatively little overhead and must not require any additional components installed by the end user. I am hoping for a .dll that I can reference and use. Cool points awarded if it is closely tied to an ORM. My current favourite is SQLite; are there any better ones out there that I should know about? I have a (health?) bias against Access because I feel it is overcomplicated for what I need, but I am open to being convinced otherwise. PS: "No, there is nothing better than SQLite" is a perfectly good answer.
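    For what it's worth, a minimal sketch of what SQLite usage looks like from C# via the System.Data.SQLite ADO.NET provider (class names and the connection string are as I recall them; treat them as assumptions and check the provider's documentation):

    using System;
    using System.Data.SQLite;

    class SQLiteDemo
    {
        static void Main()
        {
            // The database is just a file next to the executable; nothing to install.
            using (var conn = new SQLiteConnection("Data Source=app.db"))
            {
                conn.Open();

                using (var create = new SQLiteCommand(
                    "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)", conn))
                {
                    create.ExecuteNonQuery();
                }

                using (var insert = new SQLiteCommand(
                    "INSERT INTO notes (body) VALUES (@body)", conn))
                {
                    insert.Parameters.AddWithValue("@body", "hello");
                    insert.ExecuteNonQuery();
                }

                using (var select = new SQLiteCommand("SELECT id, body FROM notes", conn))
                using (var reader = select.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetInt64(0) + ": " + reader.GetString(1));
                }
            }
        }
    }

    Being a standard ADO.NET provider, it also plugs into the usual .NET ORMs, which may cover the "closely tied to an ORM" wish.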

    Read the article

  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system which is going to be booting and hosting its rootfs on an SSD disk. We are currently looking at using Intel X-18M SSDs. The file system structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point trying to optimise the choice of file system that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?

    Read the article

  • Finding the time to program in your spare time?

    - by Omar Kooheji
    I've got about a dozen programming projects bouncing about my head, and I'd love to contribute to some open source projects. The problem I have is that having spent the entire day staring at Visual Studio and/or Eclipse (sometimes both at the same time...), the last thing I feel like doing when I go home is program. How do you build up the motivation/time to work on your own projects after work? I'm not saying that I don't enjoy programming; it's just that I enjoy other things too, and it can be hard to do even something you enjoy if you've already spent all day doing it. I think that if I worked at a chocolate factory, the last thing I'd want to see when I got home would be a Wonka bar.... Related: How do you keep a balance between working, training, health and family?

    Read the article

  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from using the Management Data Warehouse functionality built into SQL Server 2008, through other commercial products such as Confio Ignite 8 and also of course rolling my own solution using perfmon, performance counters and collecting of various information from the dynamic management views and functions. What I am finding is that whilst each of these approaches has its own associated strengths, they all have associated weaknesses too. I feel that to actually get people within the organisation to take the monitoring of SQL Server performance seriously whatever solution we roll out has to be very simple and quick to use, must provide some form of a dashboard, and the act of monitoring must have minimal impact on the production databases (and perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear what others are using for this task? Any recommendations?
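    As a sketch of the "roll your own" end of that spectrum, polling one of the dynamic management views from C# can be as small as this (the connection string and the choice of sys.dm_os_wait_stats are placeholders for whichever counters actually matter to you):

    using System;
    using System.Data.SqlClient;

    class WaitStatsSampler
    {
        static void Main()
        {
            // Placeholder connection string; point it at the server being monitored.
            const string connectionString =
                "Server=MYSERVER;Database=master;Integrated Security=SSPI";

            const string query =
                "SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count " +
                "FROM sys.dm_os_wait_stats " +
                "ORDER BY wait_time_ms DESC";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(query, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // A real dashboard would store deltas between samples
                        // rather than printing cumulative values.
                        Console.WriteLine("{0,-45} {1,12} ms  {2,8} tasks",
                            reader.GetString(0),
                            reader.GetInt64(1),
                            reader.GetInt64(2));
                    }
                }
            }
        }
    }

    The trade-off, as the question notes, is that a home-grown collector like this needs its own dashboard and scheduling before non-DBAs will take it seriously.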

    Read the article

  • Program seems to get stuck?

    - by Frank
    I have a Java program. I made a change, and the strange thing is that no matter what I do now, it outputs the same result as before my change. I looked into my system (Windows Vista) and clicked on "Generate a system health report"; the result says: Symptom: A service is reported as having an unexpected error code. Cause: One or more services has failed. The service did not stop gracefully, suggesting the service may have crashed or one of its components stopped in an unsupported way. Details: Service exited with code not equal to 0 or 1077. Resolution: Restart the service. It doesn't say which service, and I defragmented my drives a few times; it still gave me the same report. I also checked my memory; it says there is no problem with the RAM. Everything else is OK. I restarted Windows a few times and got more free space on my C: drive, yet the Java program is still not responding to my new changes. If I intentionally make a mistake, it gives me a compile error, but if it compiles successfully, it still gives me the results from before the change. What can I do to fix this? Frank

    Read the article

  • Real Time Monitoring System using .net [closed]

    - by sameer
    I need to know the right way of accomplishing this task. Task description: we need to develop an application that displays a dashboard where data from various SQL databases is fetched from different servers and displayed. This needs to happen in real time; we can have a refresh time of, say, 5 minutes. Please suggest an approach that will help me take the right path. Here are some details, just for your reference. 1) How many servers? This application may need to support up to around 100 servers. 2) This application will be used by an administrator to check the health of the SQL Server databases. 3) This will be built by 2 developers in approximately 3 months. 4) The refresh rate will be configurable as an administrator option. Regards, Sameer
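    As one possible shape for the polling side (the server list, the health query and the interval below are all placeholder assumptions), the refresh could be a timer that fans a lightweight query out to each configured server and pushes the results at the dashboard:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Threading;

    class HealthPoller
    {
        // Placeholder list; in the real application this would come from configuration.
        static readonly List<string> Servers = new List<string>
        {
            "Server=SQL01;Database=master;Integrated Security=SSPI",
            "Server=SQL02;Database=master;Integrated Security=SSPI"
        };

        static void Main()
        {
            // Administrator-configurable refresh rate, e.g. 5 minutes.
            var interval = TimeSpan.FromMinutes(5);

            using (var timer = new Timer(state => PollAll(), null, TimeSpan.Zero, interval))
            {
                Console.ReadLine(); // keep the sample alive
            }
        }

        static void PollAll()
        {
            foreach (var connectionString in Servers)
            {
                try
                {
                    using (var conn = new SqlConnection(connectionString))
                    using (var cmd = new SqlCommand(
                        "SELECT COUNT(*) FROM sys.dm_exec_requests", conn))
                    {
                        conn.Open();
                        var activeRequests = (int)cmd.ExecuteScalar();
                        // In the dashboard this would update a grid or tile instead.
                        Console.WriteLine("{0:T}  {1}  active requests: {2}",
                            DateTime.Now, conn.DataSource, activeRequests);
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("{0:T}  poll failed: {1}", DateTime.Now, ex.Message);
                }
            }
        }
    }

    A real dashboard would replace the Console.WriteLine calls with whatever grid or tile control shows per-server status, and would probably poll the servers in parallel so one slow server does not delay the rest.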

    Read the article

  • Program crashes at cin input | C++

    - by TimothyTech
    Hello. I made a DOS (console) program, but my game always crashes the second time it reaches a cin statement.

    #include <iostream>
    #include <string>
    #include <ctime>
    #include <cstdlib>

    using namespace std;

    //call functions
    int create_enemyHP (int a);
    int create_enemyAtk (int a);
    int find_Enemy(int a);
    int create_enemyDef (int a);

    // user information
    int userHP = 100;
    int userAtk = 10;
    int userDef = 5;
    string userName;

    //enemy Information
    int enemyHP;
    int enemyAtk;
    int enemyDef;
    string enemies[] = {"Raider", "Bandit", "Mugger"};
    int sizeOfEnemies = sizeof(enemies) / sizeof(int);
    string currentEnemy;
    int chooseEnemy;

    // ACTIONS
    int journey;
    int test;

    int main()
    {
        // main menu
        cout << "welcome brave knight, what is your name? ";
        cin >> userName;
        cout << "welcome " << userName << " to Darland" << endl;

        //TRAVELING MENU:
        cout << "where would you like to travel? " << endl;
        cout << endl << " 1.> Theives Pass " << endl;
        cout << " 2.> Humble Town " << endl;
        cout << " 3.> Mission HQ " << endl;
        cin >> journey;

        if (journey == 1)
        {
            // action variable
            string c_action;
            cout << "beware your journey grows dangerous " << endl;

            //begins battle
            // Creating the enemy, HP ATK DEF AND TYPE.
            srand(time(0));
            enemyHP = create_enemyHP(userHP);
            enemyAtk = create_enemyAtk(userAtk);
            enemyDef = create_enemyDef(userDef);
            chooseEnemy = find_Enemy(sizeOfEnemies);
            currentEnemy = enemies[chooseEnemy];

            cout << " Here comes a " << currentEnemy << endl;
            cout << "stats: " << endl;
            cout << "HP :" << enemyHP << endl;
            cout << "Attack : " << enemyAtk << endl;
            cout << "Defense : " << enemyDef << endl;

            ACTIONS:
            cout << "Attack <A> | Defend <D> | Items <I>";
            cin >> c_action;

            //if ATTACK/DEFEND/ITEMS choice
            if (c_action == "A" || c_action == "a"){
                enemyHP = enemyHP - userAtk;
                cout << " you attack the enemy reducing his health to " << enemyHP << endl;
                userHP = userHP - enemyAtk;
                cout << "however he lashes back causing you to have " << userHP << "health left " << endl;
                //end of ATTACK ACTION
            }

    The last line, cin >> c_action, is where it crashes. I use two other files; they just define the functions. Is it a compiler issue? Also, why does the console window always shut down right after it runs the app? Is there a way to stop it?
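
    As a guess from the snippet above (not a verified fix): sizeof(enemies) / sizeof(int) does not give the number of strings in the array, because sizeof(std::string) is normally larger than sizeof(int); the inflated count can let find_Enemy return an index past the end of enemies, and indexing out of range is undefined behaviour that often surfaces as a crash near the next I/O call. A minimal, self-contained sketch of the element-count idiom, with only the array contents taken from the post:

    #include <iostream>
    #include <string>

    int main() {
        std::string enemies[] = {"Raider", "Bandit", "Mugger"};

        // divide by the size of one element, not by sizeof(int)
        int sizeOfEnemies = static_cast<int>(sizeof(enemies) / sizeof(enemies[0]));  // == 3

        std::cout << "enemy count: " << sizeOfEnemies << '\n';
        return 0;
    }

    (The console window closing as soon as the program ends is usually a separate issue of running outside the debugger or without a final pause, not related to the crash.)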

    Read the article

  • SQL Server missing tables and stored procedures

    - by Robo
    I have an application on a client's site that processes data each night. Last night SQL Server 2005 gave the error "Could not find stored procedure 'xxxx'". The stored procedure does exist in the database and has the right permissions as far as I can tell, and the application runs fine on other nights. On previous occasions, SQL Server has also given an error saying 'database object not found', referring to a table that does exist. So, on rare occasions, the server thinks certain stored procedures and tables do not exist in the database. The objects it refers to are often ones that are frequently used. Is the database somehow corrupted? Is there some sort of repair/health check I can run?

    Read the article

  • EMV credit/debit card chip application

    - by rohitamitpathak
    I work in the smart card industry and am familiar with ISO 7816 and Java Card. Until now I have worked on ID card, health card and SIM applications using the GSM standard. Now I have a chance to work with EMV, and I have a few points of confusion: Will an EMV credit/debit card be a Java Card containing an applet? Which standards do I need to go through to develop an EMV debit/credit card application? Is there a good tutorial that helps in understanding EMV debit/credit card application development? Is there a good simulator for developing and testing EMV card applications?

    Read the article

  • Help understanding this stack trace

    - by user80632
    Hi, I have health monitoring turned on, and I have the following error I'm trying to understand:

    Exception information:
    Exception type: System.InvalidCastException
    Exception message: Specified cast is not valid.

    Thread information:
    Thread ID: 5
    Thread account name: NT AUTHORITY\NETWORK SERVICE
    Is impersonating: False

    Stack trace:
    at _Default.Repeater1_ItemDataBound(Object sender, RepeaterItemEventArgs e)
    at System.Web.UI.WebControls.Repeater.CreateControlHierarchy(Boolean useDataSource)
    at System.Web.UI.WebControls.Repeater.OnDataBinding(EventArgs e)
    at _Default.up1_Load()
    at _Default.Timer1_Tick(Object sender, EventArgs e)
    at System.Web.UI.Timer.OnTick(EventArgs e)
    at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument)
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    I'm just trying to figure out exactly where the problem is happening and what it is. Is it happening in the Repeater1_ItemDataBound subroutine, or in the Timer1_Tick subroutine? Is the last thing that happened before the error occurred at the top or bottom of the trace? Any help much appreciated. Thanks.

    Read the article

  • How do game trainers change an address in memory that's dynamic?

    - by GameTrainersWTF
    Let's assume I am a game and I have a global int* that contains my health. A game trainer's job is to modify this value to whatever it takes to achieve god mode. I've looked up tutorials on game trainers to understand how they work, and the general idea is to use a memory scanner to try to find the address of a certain value, then modify what is at that address by injecting a DLL or whatever. But I made a simple program with a global int* and its address changes every time I run the app, so I don't get how game trainers can hard-code these addresses. Or is my example wrong? What am I missing?
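
    For what it's worth, here is a minimal sketch (assuming a Windows build, since trainers typically inject into a Windows game process) of why the raw address moves between runs while a "module base + offset" form does not. The variable name health is just a stand-in; GetModuleHandle(nullptr) returns the base address of the running executable. Heap-allocated values are a different story and are usually reached through pointer chains or signature scans rather than a single hard-coded address.

    #include <windows.h>
    #include <cstdint>
    #include <iostream>

    int health = 100;  // stand-in for the game's global health variable

    int main() {
        // ASLR relocates the whole module, so the absolute address can change per run...
        std::uintptr_t base = reinterpret_cast<std::uintptr_t>(GetModuleHandle(nullptr));
        std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(&health);

        std::cout << std::hex
                  << "module base:       0x" << base << '\n'
                  << "address of health: 0x" << addr << '\n'
                  // ...but the offset of the global inside the module stays constant,
                  // which is what a trainer can hard-code together with the module name.
                  << "offset:            0x" << (addr - base) << '\n';
        return 0;
    }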

    Read the article

  • How to use the watchdog timer in an RTOS?

    - by user946230
    Assume I have a cooperative scheduler in an embedded environment with many processes running. I want to use the watchdog timer so that I can detect when a process has stopped responding for any reason and reset the processor. In simpler applications with no RTOS I would always touch the watchdog from the main loop, and that was always adequate. Here, however, there are many processes that could potentially hang. What is a clean way to touch the watchdog timer periodically while ensuring that each process is in good health? I was thinking I could provide a callback function to each process so that it could let another function, which oversees all of them, know it is still alive. The callback would pass the task's unique id as a parameter so the overseer could determine who was calling back.
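
    A rough sketch of that overseer idea (the task count, the names and wdt_kick() are placeholders for whatever the platform actually provides): each task sets its own bit once per loop iteration, and the supervisor feeds the hardware watchdog only when every task has checked in, so a single hung task eventually lets the watchdog reset the processor.

    #include <cstdint>

    constexpr unsigned kNumTasks = 4;                              // assumed task count
    constexpr std::uint32_t kAllAlive = (1u << kNumTasks) - 1u;

    static volatile std::uint32_t alive_mask = 0;

    // Placeholder for the real, platform-specific watchdog refresh
    // (e.g. writing the refresh key to the WDT register).
    static void wdt_kick(void) { }

    // Called by each task once per cycle instead of kicking the watchdog directly.
    void task_checkin(unsigned task_id) {
        alive_mask |= (1u << task_id);
    }

    // Scheduled like any other cooperative task.
    void supervisor_task(void) {
        if ((alive_mask & kAllAlive) == kAllAlive) {
            wdt_kick();        // everyone reported in, so feed the dog
            alive_mask = 0;    // and make every task prove itself again
        }
        // Otherwise do nothing; a silent task eventually lets the watchdog fire.
    }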

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >