Search Results

Search found 47246 results on 1890 pages for 'system recovery'.

  • MySQL Open Source Backup and Recovery Alternative: Xtrabackup

    MySQL database administrators are always looking for a solid backup and recovery tool that will suit all their needs. Xtrabackup, created by Percona, is the open-source alternative to the commercial InnoDB Hot Backup tool. This article explains a good methodology for testing and verifying Xtrabackup's capabilities and precision.

    Read the article

  • Planning for Recovery

    Uncertainty sets the tone of business planning these days, and past precedents, 'rules of thumb' and trading history provide little comfort when assessing future prospects. After 18 years of constant growth in GDP, planning is no longer about extrapolating past performance and adjusting for growth. It is now about constantly testing the temperature of the water, formulating scenarios, assessing risk and assigning probabilities. So how does one plan for recovery and improve forecast accuracy in such a volatile environment?

    Read the article

  • Disaster Recovery For Small Business

    You may not be a Boy Scout, but the best way to protect your small business is to be prepared. We look at disaster recovery options to help you stay one step ahead of trouble and to sleep better at night.

    Read the article

  • Retrieving SQL Server Fixed Database Roles for Disaster Recovery

    We ran into a case recently where we had the logins and users scripted out on our SQL Server instances, but we didn't have the fixed database roles for a critical database. As a result, our recovery efforts were only partially successful. We ended up trying to figure out what the database role memberships were for the database we recovered, but we'd like not to be in that situation again. Is there an easy way to do this?
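    One common way to avoid this in future (a sketch, not the article's own method) is to script the memberships out of the catalog views sys.database_role_members and sys.database_principals ahead of time and keep the output with your DR runbooks. The C# console sketch below assumes an illustrative connection string and database name; the embedded query can just as easily be run directly in SSMS.

        // Hedged sketch: dump fixed database role memberships as re-runnable
        // EXEC sp_addrolemember statements. The connection string is an assumption.
        using System;
        using System.Data.SqlClient;

        class ScriptFixedDatabaseRoles
        {
            static void Main()
            {
                const string connectionString =
                    "Server=.;Database=CriticalDb;Integrated Security=true"; // hypothetical
                const string query = @"
                    SELECT r.name AS role_name, m.name AS member_name
                    FROM sys.database_role_members AS drm
                    JOIN sys.database_principals AS r ON drm.role_principal_id = r.principal_id
                    JOIN sys.database_principals AS m ON drm.member_principal_id = m.principal_id
                    WHERE r.is_fixed_role = 1;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Each line can be replayed against the restored database.
                            Console.WriteLine("EXEC sp_addrolemember N'{0}', N'{1}';",
                                reader.GetString(0), reader.GetString(1));
                        }
                    }
                }
            }
        }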

    Read the article

  • Disaster Recovery Planning for Data: The Cribsheet

    Planning for disaster recovery and business continuity isn't among the most exciting IT activities. It is, however, essential and relevant to any Database Administrator who is responsible for the safety and integrity of the company's data, since data is a key part of business continuity.

    Read the article

  • Oracle Database Core Tech Seminar: Oracle Data Guard, Oracle Recovery Manager (RMAN), Flashback

    - by user788995
    Seminar materials (in Japanese), dated 2012/05/14, from the Oracle Database Core Tech Seminar series. The session covers Oracle's availability and recovery features: Oracle Data Guard (including Active Data Guard), Oracle Recovery Manager (RMAN), and Oracle Flashback Technology. http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/D3-22.wmv http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/D3-22.mp4 http://www.oracle.com/technetwork/jp/ondemand/database/db-new/d3-22-dl-1626591-ja.pdf

    Read the article

  • Why is the System process listening on Port 80?

    - by Seth Spearman
    I am running Windows 7 RC1. I have multiple issues getting IIS to work on my system, and today when I installed a new application and tried to load it using http://localhost/MyApplication I get absolutely no errors and no page load. Just a pretty, white blank page. I did some digging and found something about some other process listening on port 80, so I did a scan using netstat -aon | findstr 0.0:80 and discovered that PID 4 was listening on that port. PID 4 does not show in Task Manager, so I fired up Process Explorer and it showed me that PID 4 is the System process. (Multiple Google searches seem to indicate that System always uses PID 4.) Since then I am basically stuck. I have no idea why System needs port 80 and what to do about it. If you Google the following strings you will find two helpful Experts-Exchange articles at the top of the search results and you can read them for some helpful information. (If I gave the direct URL to the pages then Experts-Exchange would ask you to pay... but when you click on the results from a Google search you can scroll all the way to the bottom to read the exchanges.) Here are the Google searches: "System Process is listening on port 80 (Vista)" "SYSTEM Process is listening on Port 80 and Preventing IIS Default Website from Running" The last entry from the first result showed how to do a trace of http.sys at the following URL: http://blogs.msdn.com/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx The trace showed nothing useful. Any thoughts?

    Read the article

  • Did chkdsk make it harder to restore files?

    - by neyl
    My friend asked me to try and fix his loaded Sansa Clip+ which wasn't playing. After opening it in MSC mode I discovered that the Music directory was empty and the total of all files was only a few MB. However, Disk Properties showed me that it was 7 GB full. I then ran Tools - Error Checking and Windows dutifully informed me that the disk was corrupt and that I should run again, allowing Windows to fix errors. I did that and it told me everything was fixed and that all files were placed in the FOUND.000 directory. FOUND.000 was about 7.5 GB, containing FILE0000.CHK through FILE1546.CHK. (I am aware of methods like ChkBack to scan and convert to MP3 etc., BUT the original filenames and structure are needed!) Now I started getting worried that I had made things worse! I have plenty of experience with data recovery programs - Recuva, Restore My Files etc. - and I was anyhow planning to use them to scan the drive. But NOW, after CHKDSK "fixed" the drive, maybe it modified critical FAT information vital for data recovery. So I ran these programs and got nothing! No trace of the files! I tried a ton of recovery programs with the same results, TILL EaseUS Data Recovery Wizard found all the files and I purchased the program for $55! My question: In your opinion, did running CHKDSK with automatic fixing of errors make matters worse (i.e. many data recovery programs didn't find a trace, and they would have done if not for chkdsk), or was the filesystem too corrupt anyhow for regular file recovery programs? If I were a professional, would I be held responsible for running CHKDSK with automatic fixing? Do you know of a better data recovery program than EaseUS Data Recovery Wizard? In my experience I haven't found one! Thanks

    Read the article

  • How to remove NTFS system files from a previous Vista installation

    - by Boldewyn
    I'm trying to shrink my system partition under Windows Vista. It's all fine, except that in front of the last 300 MB of the volume sits a single file that cannot be moved by defrag or other means from its position. It's called C:\$Extend\$UsnJrnl:$J, and my assumption is that it is left over from a previous installation of Vista, when I re-set up the system. Now, googling for this kind of file brings interesting results, but no solution to my problem: Files left on the disk can become ownerless in a new setup of Windows and inaccessible (even for administrators). To be able to access them again, I found the tip to use takeown to re-assign them to the Admin group (or anyone else). Works like a charm for normal files, but not for the C:\$Extend stuff. The C:\$Extend folder is a system folder of the NTFS file system, where the journal is stored (especially in a file called $UsnJrnl:$Data, whose name is surprisingly close to mine). You can delete the journal with fsutil usn /delete C:; however, this doesn't work from within the booted system (as I found out trying). Also, I'm not quite sure of the side effects. You can't move NTFS's own files with standard defrag tools. The same holds, by the way, for inaccessible files. Every bit of knowledge out there is targeted at either inaccessible files or the $Extend NTFS stuff, but no one addresses my problem involving both: an inaccessible system file. Question: How can I remove this file, or at least, how can I move it on the disk?

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup: I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code: Entity { id; map<id_type, Attribute> attributes; } System { update(); vector<Entity> entities; } A system that just moves all entities along at a constant rate might be MovementSystem extends System { update() { for each entity in entities position = entity.attributes["position"]; position += vec3(1,1,1); } } Essentially, I'm trying to parallelize update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.
    Problem: In reality, these systems sometimes require that entities interact (read/write data from/to each other), sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phases in which data (components/attributes) from entities is read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize parts of the update process that depend on other parts. This seems ugly. To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater share of the time instead of waiting). A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages: practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data; finding and assembling other systems to run in parallel can be extremely time-consuming to design well enough to optimize performance; and sometimes it might not even be possible at all, because a system just depends on data that is touched by all other systems.
    Solution? In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, attributes of entities that needed to be modified are finally written back to the entities.
    The Question: How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC architecture to accommodate this solution?
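    A minimal C# sketch of that deferred-write idea, with illustrative type names (the question's own pseudo-code is language-neutral): during the parallel read/compute phase each worker only records the writes it wants to make, and a short serial phase applies them afterwards, so no entity data is mutated while other threads may still be reading it.

        // Hedged sketch: parallel read/compute, deferred serial write-back.
        // Entity and MovementSystem are illustrative, not an existing API.
        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Numerics;
        using System.Threading.Tasks;

        sealed class Entity
        {
            public int Id;
            public Vector3 Position;
        }

        sealed class MovementSystem
        {
            private readonly List<Entity> entities = new List<Entity>();
            // A single shared queue keeps the sketch short; per-thread buffers merged
            // at the end of the phase would avoid contention on this queue.
            private readonly ConcurrentQueue<Action> pendingWrites = new ConcurrentQueue<Action>();

            public void Register(Entity entity) => entities.Add(entity);

            // Read/compute phase: runs in parallel, reads entity state, records writes.
            public void Update(float dt)
            {
                Parallel.ForEach(entities, entity =>
                {
                    Vector3 newPosition = entity.Position + new Vector3(1, 1, 1) * dt;
                    pendingWrites.Enqueue(() => entity.Position = newPosition);
                });
            }

            // Write phase: cheap and serial, so there are no races with readers.
            public void ApplyWrites()
            {
                Action write;
                while (pendingWrites.TryDequeue(out write))
                    write();
            }
        }

    The game loop would then call Update() on every system (potentially concurrently) and only afterwards call ApplyWrites() on each system in turn, which is exactly the expensive-read/cheap-write split described above.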

    Read the article

  • Mixing Silverlight-Specific System.Xml.Linq dll with Non-Silverlight System.Xml.Linq dll

    - by programatique
    I have a Logic layer that references Silverlight's System.Xml.Linq dll and a GUI that is in WPF (hence using the non-Silverlight System.Xml.Linq dll). When I attempt to pass an XElement from the GUI project to a method in the Logic project, I am getting (basically) "XElement is not of type XElement" errors. To complicate matters, I am unable to edit the Logic layer project. The non-Silverlight DLL is at: C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll The Silverlight DLL is at: C:\Program Files (x86)\Microsoft SDKs\Silverlight\v3.0\Libraries\Client\System.Xml.Linq.dll I am new to C#, but I'm fairly sure my issue is that I am referencing different DLLs to access the System.Xml.Linq namespace. I attempted to replace my non-Silverlight System.Xml.Linq.dll with Silverlight's System.Xml.Linq.dll, but received assembly errors. Is there any way to resolve this short of scrapping my WPF GUI project and creating a Silverlight project?
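    One way to sidestep the type-identity mismatch, assuming there is (or can be wrapped in) an entry point that accepts raw XML rather than an XElement, is to hand the XML across the boundary as a string and let each side parse it with its own System.Xml.Linq. The sketch below is illustrative; ProcessXml stands in for whatever the logic layer actually exposes.

        // Hedged sketch: the WPF XElement and the Silverlight XElement are unrelated
        // CLR types from different assemblies, so pass text across the boundary and
        // re-parse on the other side instead of passing the object itself.
        using System.Xml.Linq;

        static class XmlBridge
        {
            // Called from the WPF GUI project (desktop System.Xml.Linq).
            public static string ToXmlText(XElement element)
            {
                return element.ToString(SaveOptions.DisableFormatting);
            }

            // On the logic-layer side (Silverlight System.Xml.Linq), a string-accepting
            // method (or a thin wrapper around the existing one) would parse it back:
            //
            //     public void ProcessXml(string xml)
            //     {
            //         XElement element = XElement.Parse(xml);
            //         // ... existing logic ...
            //     }
        }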

    Read the article

  • postgres - sharing the same pg_hba.conf between an IPv4 system and an IPv6 system

    - by StackTrace
    I am trying to package postgres from one machine to another. The source is Windows 7 with IPv6 and the target is Windows XP with IPv4. Starting postgres on Windows XP gives the error:
    2010-11-01 12:01:07 IST LOG: invalid IP address "::1": Unknown host
    2010-11-01 12:01:07 IST CONTEXT: line 76 of configuration file "C:/postgres/data/pg_hba.conf"
    2010-11-01 12:01:07 IST FATAL: could not load pg_hba.conf
    Here is what my pg_hba.conf looks like:
    # TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
    # IPv4 local connections:
    host    all       all   127.0.0.1/32   trust
    # IPv6 local connections:
    host    all       all   ::1/128        trust

    Read the article

  • File recovery from Mac results in random files and extensions – how do I get my data back?

    - by Robsta
    This Mac hard drive was dying. Someone I knew did a file recovery and got as many files as he could. The program (not sure how it was done, or what program it was) dished out a bunch of folders with names such as: DIR56.TOC DIR55.CUR DIR54.GPZ DIR53.GZI … and so forth, all the way down to DIR0.LZH. Some of the file extensions I do understand — like .JPEG, or .MOV — but most of them are ones I've never heard of. I've googled some of them, like .TOC, which stands for "table of contents", but I don't understand how to transfer that data back to the Mac. Currently, they are on a Windows machine. They are being transferred onto an external hard drive that the Mac can read. It can also see all the files. However, the few that I tested to see if the Mac recognizes them (like .TOC and .CUR) cannot be opened. Anyone have any idea as to what I should do? There are some important assignments on there I need to get. EDIT: Data transfer was most likely done by: Easy Recover 6 professional (95% sure, no guarantee)

    Read the article

  • Data recovery: nearly 1 TB of movies on a WD 3.5 TB personal cloud drive disappears with scant traces

    - by Effector Dhanushanth
    I have a great collection of movies that I had stored in a logical mesh of folders on my 3.5 TB WD personal cloud drive. I woke up one morning and found that everything was fine with my data on this drive, except for my movie collection. There were two big folders, one "2sort" and the other "segregated". Out of all the segregated subfolders, only letters C, D and 2 or 3 others remain, and the 2sort folder, which has umpteen subfolders amounting to more than 0.5 TB, is just gone. This is a great loss. Now, this is a personal cloud drive and unfortunately has no USB port to hardwire it and recover files. I'm sure there is software out there that can help me recover my beloved movies from such a hard-to-reach device; what might that software be? These movies were all hand-picked over the course of ten years, and I just never catalogued my collection. If I could just get the list of my lost collection, that would be enough; recovering them would be a bonus, though they are bound to be damaged if I were to somehow recover them, you know? Still, I'm certain they're all intact. I guess the file index just got corrupted; there is surely a veil of some sort that needs to be pushed aside to reveal my movies. What software can do that? Thanks immensely!

    Read the article

  • Recovering data from /

    - by Abhijit Gavas
    I accidentally installed Ubuntu to one of my data drives from Windows. The drive was a NTFS drive and contained about 80 GB of important data. The size of the drive is 110 GB. Its new file system is ext4. In an attempt to recover the data, I downloaded foremost and tried the following commands: foremost -i / -o /media/281C8DB01C8D7998/Recovery/ -T -v foremost -i /dev/sda7 -o /media/281C8DB01C8D7998/Recovery/ -T -v (sda7 is the drive in question.) It appears that with either command, foremost gets stuck reading some file. Here is the console output: abhi@abi-PC:/dev$ foremost -i /dev/sda7 -o /media/281C8DB01C8D7998/Recovery/ -T -v Foremost version 1.5.7 by Jesse Kornblum, Kris Kendall, and Nick Mikus Audit File Foremost started at Fri Sep 28 20:58:00 2012 Invocation: foremost -i /dev/sda7 -o /media/281C8DB01C8D7998/Recovery/ -T -v Output directory: /media/281C8DB01C8D7998/Recovery_Fri_Sep_28_20_58_00_2012 Configuration file: /etc/foremost.conf Processing: stdin |------------------------------------------------------------------ File: stdin Start: Fri Sep 28 20:58:00 2012 Length: Unknown Num Name (bs=512) Size File Offset Comment Killed As you can see I have to kill it from system monitor. This approach does not seem to be working. What else could I try to recover the files? Please help. The files are very important and I will be devastated if I cannot recover them.

    Read the article

  • Partial Trust in WPF 4

    - by Hadi Eskandari
    I've started a new project in WPF 4 (.NET 4) and trying to see if I can run it in xbap mode. I need to run the application in Full Trust with the new mode made available in .NET 4 which asks the end user if the full trust application should be run. I've set the "Security" mode to "Full Trust" application, and it builds just fine. When I run it, an exception is thrown and IE error message shows the following error. Any ways around it?? Startup URI: T:\projects\Hightech Sources\PayRoll\PayRoll.Web\publish\PayRoll.Web.xbap Application Identity: file:///T:/projects/Hightech%20Sources/PayRoll/PayRoll.Web/publish/PayRoll.Web.xbap#PayRoll.Web.xbap, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1d910f49755d2c97, processorArchitecture=msil/PayRoll.Web.exe, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1d910f49755d2c97, processorArchitecture=msil, type=win32 System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessSecurityEngine.Check(CodeAccessPermission cap, StackCrawlMark& stackMark) at System.Security.CodeAccessPermission.Demand() at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection, Boolean suppressSecurityChecks) at System.Reflection.RuntimeAssembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, Boolean suppressSecurityChecks, StackCrawlMark& stackMark) at System.Reflection.Assembly.LoadFrom(String assemblyFile) at PayRoll.Web.App.SelectAssemblies() at Caliburn.PresentationFramework.ApplicationModel.CaliburnApplication..ctor() at PayRoll.Web.App..ctor() at PayRoll.Web.App.Main() at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.AppDomain.nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.Runtime.Hosting.ManifestRunner.Run(Boolean checkAptModel) at System.Runtime.Hosting.ManifestRunner.ExecuteAsAssembly() at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext, String[] activationCustomData) at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext) at System.Windows.Interop.PresentationApplicationActivator.CreateInstance(ActivationContext actCtx) at System.Activator.CreateInstance(ActivationContext activationContext) at System.AppDomain.Setup(Object arg) at System.AppDomain.nCreateInstance(String friendlyName, AppDomainSetup setup, Evidence providedSecurityInfo, Evidence creatorsSecurityInfo, IntPtr parentSecurityDescriptor) at System.Runtime.Hosting.ApplicationActivator.CreateInstanceHelper(AppDomainSetup adSetup) at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext, String[] activationCustomData) at System.Windows.Interop.PresentationApplicationActivator.CreateInstance(ActivationContext actCtx) at System.Activator.CreateInstance(ActivationContext activationContext) at System.Deployment.Application.DeploymentManager.ExecuteNewDomain() at System.Deployment.Application.InPlaceHostingManager.Execute() at MS.Internal.AppModel.XappLauncherApp.ExecuteDownloadedApplication() at 
System.Windows.Interop.DocObjHost.RunApplication(ApplicationRunner runner) at MS.Internal.AppModel.XappLauncherApp.XappLauncherApp_Exit(Object sender, ExitEventArgs e) at System.Windows.Application.OnExit(ExitEventArgs e) at System.Windows.Application.DoShutdown() at System.Windows.Application.ShutdownImpl() at System.Windows.Application.ShutdownCallback(Object arg) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.DispatcherOperation.InvokeImpl() at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Threading.DispatcherOperation.Invoke() at System.Windows.Threading.Dispatcher.ProcessQueue() at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.Run() at System.Windows.Application.RunDispatcher(Object ignore) at System.Windows.Application.StartDispatcherInBrowser(Object unused) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.DispatcherOperation.InvokeImpl() at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext 
executionContext, ContextCallback callback, Object state) at System.Windows.Threading.DispatcherOperation.Invoke() at System.Windows.Threading.Dispatcher.ProcessQueue() at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.FileIOPermission

    Read the article

  • ObjectContext.SaveChanges() fails with SQL CE

    - by David Veeneman
    I am creating a model-first Entity Framework 4 app that uses SQL CE as its data store. All is well until I call ObjectContext.SaveChanges() to save changes to the entities in the model. At that point, SaveChanges() throws a System.Data.UpdateException, with an inner exception message that reads as follows: Server-generated keys and server-generated values are not supported by SQL Server Compact. I am completely puzzled by this message. Any idea what is going on and how to fix it? Thanks. Here is the Exception dump: System.Data.UpdateException was unhandled Message=An error occurred while updating the entries. See the inner exception for details. Source=System.Data.Entity StackTrace: at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) at System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache) at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options) at System.Data.Objects.ObjectContext.SaveChanges() at FsDocumentationBuilder.ViewModel.Commands.SaveFileCommand.Execute(Object parameter) in D:\Users\dcveeneman\Documents\Visual Studio 2010\Projects\FsDocumentationBuilder\FsDocumentationBuilder\ViewModel\Commands\SaveFileCommand.cs:line 68 at MS.Internal.Commands.CommandHelpers.CriticalExecuteCommandSource(ICommandSource commandSource, Boolean userInitiated) at System.Windows.Controls.Primitives.ButtonBase.OnClick() at System.Windows.Controls.Button.OnClick() at System.Windows.Controls.Primitives.ButtonBase.OnMouseLeftButtonUp(MouseButtonEventArgs e) at System.Windows.UIElement.OnMouseLeftButtonUpThunk(Object sender, MouseButtonEventArgs e) at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(Delegate genericHandler, Object genericTarget) at System.Windows.RoutedEventArgs.InvokeHandler(Delegate handler, Object target) at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs) at System.Windows.EventRoute.InvokeHandlersImpl(Object source, RoutedEventArgs args, Boolean reRaised) at System.Windows.UIElement.ReRaiseEventAs(DependencyObject sender, RoutedEventArgs args, RoutedEvent newEvent) at System.Windows.UIElement.OnMouseUpThunk(Object sender, MouseButtonEventArgs e) at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(Delegate genericHandler, Object genericTarget) at System.Windows.RoutedEventArgs.InvokeHandler(Delegate handler, Object target) at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs) at System.Windows.EventRoute.InvokeHandlersImpl(Object source, RoutedEventArgs args, Boolean reRaised) at System.Windows.UIElement.RaiseEventImpl(DependencyObject sender, RoutedEventArgs args) at System.Windows.UIElement.RaiseTrustedEvent(RoutedEventArgs args) at System.Windows.UIElement.RaiseEvent(RoutedEventArgs args, Boolean trusted) at System.Windows.Input.InputManager.ProcessStagingArea() at System.Windows.Input.InputManager.ProcessInput(InputEventArgs input) at System.Windows.Input.InputProviderSite.ReportInput(InputReport inputReport) at System.Windows.Interop.HwndMouseInputProvider.ReportInput(IntPtr hwnd, InputMode mode, Int32 timestamp, RawMouseActions actions, Int32 x, Int32 y, Int32 wheel) at System.Windows.Interop.HwndMouseInputProvider.FilterMessage(IntPtr hwnd, WindowMessage msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at System.Windows.Interop.HwndSource.InputFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at 
MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.Run() at System.Windows.Application.RunDispatcher(Object ignore) at System.Windows.Application.RunInternal(Window window) at System.Windows.Application.Run(Window window) at System.Windows.Application.Run() at FsDocumentationBuilder.App.Main() in D:\Users\dcveeneman\Documents\Visual Studio 2010\Projects\FsDocumentationBuilder\FsDocumentationBuilder\obj\x86\Debug\App.g.cs:line 50 at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: System.Data.EntityCommandCompilationException Message=An error occurred while preparing the command definition. See the inner exception for details. Source=System.Data.Entity StackTrace: at System.Data.Mapping.Update.Internal.UpdateTranslator.CreateCommand(DbModificationCommandTree commandTree) at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.CreateCommand(UpdateTranslator translator, Dictionary`2 identifierValues) at System.Data.Mapping.Update.Internal.DynamicUpdateCommand.Execute(UpdateTranslator translator, EntityConnection connection, Dictionary`2 identifierValues, List`1 generatedValues) at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter) InnerException: System.NotSupportedException Message=Server-generated keys and server-generated values are not supported by SQL Server Compact. 
Source=System.Data.SqlServerCe.Entity StackTrace: at System.Data.SqlServerCe.SqlGen.DmlSqlGenerator.GenerateReturningSql(StringBuilder commandText, DbModificationCommandTree tree, ExpressionTranslator translator, DbExpression returning) at System.Data.SqlServerCe.SqlGen.DmlSqlGenerator.GenerateInsertSql(DbInsertCommandTree tree, List`1& parameters, Boolean isLocalProvider) at System.Data.SqlServerCe.SqlGen.SqlGenerator.GenerateSql(DbCommandTree tree, List`1& parameters, CommandType& commandType, Boolean isLocalProvider) at System.Data.SqlServerCe.SqlCeProviderServices.CreateCommand(DbProviderManifest providerManifest, DbCommandTree commandTree) at System.Data.SqlServerCe.SqlCeProviderServices.CreateDbCommandDefinition(DbProviderManifest providerManifest, DbCommandTree commandTree) at System.Data.Common.DbProviderServices.CreateCommandDefinition(DbCommandTree commandTree) at System.Data.Common.DbProviderServices.CreateCommand(DbCommandTree commandTree) at System.Data.Mapping.Update.Internal.UpdateTranslator.CreateCommand(DbModificationCommandTree commandTree) InnerException:
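    The usual way around this limitation (a sketch under the assumption that you can regenerate or edit the model) is to stop asking SQL Server Compact for server-generated keys: set the key column's StoreGeneratedPattern to None in the storage model and assign a client-generated key, such as a Guid, before calling SaveChanges(). The entity and entity-set names below are illustrative, not from the question.

        // Hedged sketch: client-generated keys, so SQL CE never has to hand a
        // server-generated value back to Entity Framework. "Document" and the
        // "Documents" entity set are hypothetical names.
        using System;
        using System.Data.Objects;   // EF4 ObjectContext API

        class Document
        {
            public Guid Id { get; set; }       // mapped with StoreGeneratedPattern="None"
            public string Title { get; set; }
        }

        static class SqlCeSaveExample
        {
            static void Save(ObjectContext context, string title)
            {
                var document = new Document
                {
                    Id = Guid.NewGuid(),       // key created on the client
                    Title = title
                };
                context.AddObject("Documents", document);
                context.SaveChanges();         // no server-generated value required
            }
        }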

    Read the article

  • nginx bad gateway 502 with mono fastcgi

    - by Bradley Lederholz Leatherwood
    Hello so I have been trying to get my website to run on mono (on ubuntu server) and I have followed these tutorials almost to the letter: However when my directory is not blank fastcgi logs reveal this: Notice Beginning to receive records on connection. Error Failed to process connection. Reason: Exception has been thrown by the target of an invocation. I am not really sure what this means, and depending on what I do I can get another error that tells me the resource cannot be found: The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: /Default.aspx/ Version information: Mono Runtime Version: 2.10.8 (tarball Thu Aug 16 23:46:03 UTC 2012) ASP.NET Version: 4.0.30319.1 If I should provide some more information please let me know. Edit: I am now getting a nginx gateway error. My nginx configuration file looks like this: server { listen 2194; server_name localhost; access_log $HOME/WWW/nginx.log; location / { root $HOME/WWW/dev/; index index.html index.html default.aspx Default.aspx Index.cshtml; fastcgi_index Views/Home/; fastcgi_pass 127.0.0.1:8000; include /etc/nginx/fastcgi_params; } } Running the entire thing with xsp4 I have discovered what the "Exception has been thrown by the target of an invocation." Handling exception type TargetInvocationException Message is Exception has been thrown by the target of an invocation. IsTerminating is set to True System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. Server stack trace: at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in :0 at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in :0 at System.Runtime.Serialization.ObjectRecord.LoadData (System.Runtime.Serialization.ObjectManager manager, ISurrogateSelector selector, StreamingContext context) [0x00000] in :0 at System.Runtime.Serialization.ObjectManager.DoFixups () [0x00000] in :0 at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000] in :0 at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (BinaryElement elem, System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000] in :0 at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000] in :0 at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000] in :0 at System.Runtime.Remoting.RemotingServices.DeserializeCallData (System.Byte[] array) [0x00000] in :0 at (wrapper xdomain-dispatch) System.AppDomain:DoCallBack (object,byte[]&,byte[]&) Exception rethrown at [0]: --- System.ArgumentException: Couldn't bind to method 'SetHostingEnvironment'. 
at System.Delegate.GetCandidateMethod (System.Type type, System.Type target, System.String method, BindingFlags bflags, Boolean ignoreCase, Boolean throwOnBindFailure) [0x00000] in :0 at System.Delegate.CreateDelegate (System.Type type, System.Type target, System.String method, Boolean ignoreCase, Boolean throwOnBindFailure) [0x00000] in :0 at System.Delegate.CreateDelegate (System.Type type, System.Type target, System.String method) [0x00000] in :0 at System.DelegateSerializationHolder+DelegateEntry.DeserializeDelegate (System.Runtime.Serialization.SerializationInfo info) [0x00000] in :0 at System.DelegateSerializationHolder..ctor (System.Runtime.Serialization.SerializationInfo info, StreamingContext ctx) [0x00000] in :0 at (wrapper managed-to-native) System.Reflection.MonoCMethod:InternalInvoke (System.Reflection.MonoCMethod,object,object[],System.Exception&) at System.Reflection.MonoCMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in :0 --- End of inner exception stack trace --- at (wrapper xdomain-invoke) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate) at (wrapper remoting-invoke-with-check) System.AppDomain:DoCallBack (System.CrossAppDomainDelegate) at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] in :0 at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] in :0 at Mono.WebServer.XSP.Server.RealMain (System.String[] args, Boolean root, IApplicationHost ext_apphost, Boolean quiet) [0x00000] in :0 at Mono.WebServer.XSP.Server.Main (System.String[] args) [0x00000] in :0

    Read the article

  • How can I get System.Type from "System.Drawing.Color" string

    - by jonny
    I have an XML-stored property of some control: <Prop Name="ForeColor" Type="System.Drawing.Color" Value="-16777216" /> I want to convert it back as I do the others: System.Type type = System.Type.GetType(propertyTypeString); object propertyObj = TypeDescriptor.GetConverter(type).ConvertFromString(propertyValueString); System.Type.GetType("System.Drawing.Color") returns null. The question is how one can correctly get the Color type from the string (it would be better not to have a special case just for Color properties). Update: from time to time this XML will be edited by hand.
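    Type.GetType only searches mscorlib and the calling assembly when it is handed a bare type name, which is why "System.Drawing.Color" comes back null. Giving it an assembly-qualified name (or storing Type.AssemblyQualifiedName in the XML when it is written) resolves the type, and the existing TypeConverter call then works unchanged. A small sketch follows, with the version and key shown for the .NET 2.0-3.5 System.Drawing; typeof(Color).AssemblyQualifiedName prints the exact string for whichever runtime you target.

        // Hedged sketch: resolve the type with an assembly-qualified name, then reuse
        // the same TypeConverter-based conversion the question already uses.
        using System;
        using System.ComponentModel;

        static class ColorPropertyExample
        {
            static void Main()
            {
                // Bare name: returns null, because System.Drawing is not searched.
                Type bare = Type.GetType("System.Drawing.Color");

                // Assembly-qualified name: resolves.
                Type colorType = Type.GetType(
                    "System.Drawing.Color, System.Drawing, Version=2.0.0.0, " +
                    "Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a");

                object value = TypeDescriptor.GetConverter(colorType)
                                             .ConvertFromString("-16777216");

                Console.WriteLine("bare == null: {0}, converted: {1}", bare == null, value);
            }
        }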

    Read the article

  • mod_mono 'Service Temporarily Unavailable' issue

    - by Charlie Somerville
    I've deployed an ASP.NET web application on a Linux (Debian) server running Apache 2.2 and mod_mono 1.9 It's working well, however Mono occasionally segfaults and uses the entire CPU which causes the website to stop working and display 'Service Temporarily Unavailable' Killing mono fixes it, but obviously this isn't a good solution. I tailed the system log after this happened and I saw the following error messages from the kernel: Apr 20 01:49:37 charliesomerville kernel: [1596436.204158] mono[17909]: segfault at b645f671 ip b645f671 sp b4ffb604 error 4<6>mono[19047]: segfault at b645f66e ip b645f66e sp b4bf7604 error 4<6>mono[18017]: segfault at b645f66e ip b645f66e sp b52fe604 error 4<6>mono[19668]: segfault at b645f5e6 ip b645f5e6 sp b48f4604 error 4<6>mono[22565]: segfault at b645f674 ip b645f674 sp b45f1604 error 4<6>mono[17700]: segfault at b645f661 ip b645f661 sp b51fd604 error 4<6>mono[19596]: segfault at b645f5e6 ip b645f5e6 sp b49f5604 error 4 Apr 20 01:49:37 charliesomerville kernel: [1596436.208172] mono[23219]: segfault at b645f66e ip b645f66e sp b44f0604 error 4 At the end of Apache's error.log are the following errors: [Tue Apr 20 03:10:23 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 [Tue Apr 20 03:10:23 2010] [error] Command stream corrupted, last command was 1 System.ArgumentNullException: null key Parameter name: key at System.Collections.Hashtable.get_Item (System.Object key) [0x00000] at System.Runtime.Serialization.SerializationCallbacks.GetSerializationCallbacks (System.Type t) [0x00000] at System.Runtime.Serialization.ObjectManager.RaiseOnDeserializingEvent (System.Object obj) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectContent (System.IO.BinaryReader reader, System.Runtime.Serialization.Formatters.Binary.TypeMetadata metadata, Int64 objectId, System.Object& objectInstance, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectInstance (System.IO.BinaryReader reader, Boolean isRuntimeObject, Boolean hasTypeInfo, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObject (BinaryElement element, System.IO.BinaryReader reader, System.Int64& objectId, System.Object& value, System.Runtime.Serialization.SerializationInfo& info) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadNextObject (System.IO.BinaryReader reader) [0x00000] at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ReadObjectGraph (System.IO.BinaryReader reader, Boolean readHeaders, System.Object& result, System.Runtime.Remoting.Messaging.Header[]& headers) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.NoCheckDeserialize (System.IO.Stream serializationStream, System.Runtime.Remoting.Messaging.HeaderHandler handler) [0x00000] at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize (System.IO.Stream serializationStream) [0x00000] at System.Runtime.Remoting.Channels.CADSerializer.DeserializeObject (System.IO.MemoryStream mem) [0x00000] at System.Runtime.Remoting.RemotingServices.GetDomainProxy (System.AppDomain domain) [0x00000] at System.AppDomain.CreateDomain (System.String friendlyName, 
System.Security.Policy.Evidence securityInfo, System.AppDomainSetup info) [0x00000] at System.Web.Hosting.ApplicationHost.CreateApplicationHost (System.Type hostType, System.String virtualDir, System.String physicalDir) [0x00000] at Mono.WebServer.VPathToHost.CreateHost (Mono.WebServer.ApplicationServer server, Mono.WebServer.WebSource webSource) [0x00000] at Mono.WebServer.ApplicationServer.GetApplicationForPath (System.String vhost, Int32 port, System.String path, Boolean defaultToRoot) [0x00000] at (wrapper remoting-invoke-with-check) Mono.WebServer.ApplicationServer:GetApplicationForPath (string,int,string,bool) at Mono.WebServer.ModMonoWorker.GetOrCreateApplication (System.String vhost, Int32 port, System.String filepath, System.String virt) [0x00000] at Mono.WebServer.ModMonoWorker.InnerRun (System.Object state) [0x00000] at Mono.WebServer.ModMonoWorker.Run (System.Object state) [0x00000] [Tue Apr 20 03:10:26 2010] [error] (70014)End of file found: read_data failed [Tue Apr 20 03:10:26 2010] [error] Command stream corrupted, last command was -1 Along with the above errors, Apache's error.log is packed with hundreds (if not thousands) of the following error: Maximum number (20) of concurrent mod_mono requests to /tmp/mod_mono_dashboard_default_2.lock reached. Droping request. At the moment, I'm thinking there might be something wrong with configuration here (it's basically running on out-of-the-box config)

    Read the article

  • Keeping track of File System Utilization in Ops Center 12c

    - by S Stelting
    Enterprise Manager Ops Center 12c provides significant monitoring capabilities combined with very flexible incident management. These capabilities even extend to monitoring the file systems associated with Solaris or Linux assets. Depending on your needs you can monitor and manage incidents, or you can fine-tune alert monitoring rules for specific file systems. This article will show you how to use Ops Center 12c to:
    - Track file system utilization
    - Adjust file system monitoring rules
    - Disable file system rules
    - Create custom monitoring rules
    If you're interested in this topic, please join us for a WebEx presentation on Thursday, November 8, 2012 at 11:00 am Eastern Standard Time (New York, GMT-05:00). Meeting number: 598 796 842, meeting password: oracle123. To join the online meeting, go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&RT=MiMxMQ%3D%3D and enter the meeting password if requested.
    Monitoring File Systems for OS Assets
    The Libraries tab provides basic, device-level information about the storage associated with an OS instance. This tab shows you the local file system associated with the instance and any shared storage libraries mounted by Ops Center. More detailed information about file system storage is available under the Analytics tab, in the sub-tab named Charts. Here you can select and display the individual mount points of an OS, and export the utilization data if desired. In this example, the OS instance has a basic root file partition and several NFS directories. Each file system mount point can be independently chosen for display in the Ops Center chart.
    File Systems and Incident Reporting
    Every asset managed by Ops Center has a "monitoring policy", which determines what represents a reportable issue with the asset. The policy is made up of a set of monitoring rules, where each rule describes an attribute to monitor, the conditions which represent an issue, and the level or levels of severity for the issue. When the conditions are met, Ops Center sends a notification and creates an incident. By default, OS instances have three monitoring rules associated with file systems:
    - File System Reachability: triggers an incident if a file system is not reachable
    - NAS Library Status: triggers an incident for a value of "WARNING" or "DEGRADED" for a NAS-based file system
    - File System Used Space Percentage: triggers an incident when file system utilization grows beyond defined thresholds
    You can view these rules in the Monitoring tab for an OS. The catch with the default monitoring rules is that they apply to every file system associated with an OS instance. As a result, any issue with NAS accessibility or disk utilization will trigger an incident. This can cause incidents for file systems to be reported multiple times if the same shared storage is used by many assets, as shown in the article's screenshot. Depending on the level of control you'd like, there are a number of ways to fine-tune incident reporting. Note that any changes to an asset's monitoring policy will detach it from the default, creating a new monitoring policy for the asset. If you'd like, you can extract a monitoring policy from an asset, which allows you to save it and apply the customized monitoring profile to other OS assets.
    Solution #1: Modify the Reporting Thresholds
    In some cases, you may want to modify the basic conditions for incident reporting on your file systems. The changes you make to a default monitoring rule will apply to all of the file systems associated with your operating system. Selecting the File System Used Space Percentage entry and clicking the "Edit Alert Monitoring Rule Parameters" button opens a pop-up dialog which allows you to modify the rule. The first screen lets you decide when you will check for file system usage, and how long you will wait before opening an incident in Ops Center. By default, Ops Center monitors continuously and reports disk utilization issues which exist for more than 15 minutes. The second screen lets you define the actual threshold values. By default, Ops Center opens a Warning-level incident if utilization rises above 80%, and a Critical-level incident for utilization above 95%.
    Solution #2: Disable Incident Reporting for File Systems
    If you'd rather not report file system incidents, you can disable the monitoring rules altogether. In this case, you can select the monitoring rules and click the "Disable Alert Monitoring Rule(s)" button to open the pop-up confirmation dialog. Like the first solution, this option affects all file system monitoring. It allows you to completely disable incident reporting for NAS library status or file system space consumption.
    Solution #3: Create New Monitoring Rules for Specific File Systems
    If you'd like to have the greatest flexibility when monitoring file systems, you can create entirely new rules. Clicking the "Add Alert Monitoring Rule" button (the icon with the green plus sign) opens a wizard which allows you to define a new rule. This rule will be based on a threshold, and will be used to monitor operating system assets. We'd like to add a rule to track disk utilization for a specific file system, the /nfs-guest directory. To do this, we specify the following attribute: FileSystemUsages.name=/nfs-guest.usedSpacePercentage. The value of name in the attribute allows us to define a specific NFS shared directory or file system; in the case of this OS, we could have chosen any of the values shown in the File Systems Utilization chart at the beginning of this article. usedSpacePercentage lets us define a threshold based on the percentage of total disk space used. There are a number of other values that we could use for threshold-based monitoring of FileSystemUsages, including freeSpace, freeSpacePercentage, totalSpace, usedSpace and usedSpacePercentage. The final sections of the screen allow us to determine when to monitor for disk usage, and how long to wait after utilization reaches a threshold before creating an incident. The next screen lets us define the threshold values and severity levels for the monitoring rule; if historical data is available, Ops Center will display it in the screen. Clicking the Apply button will create the new monitoring rule and activate it in your monitoring policy. If you combine this with one of the previous solutions, you can precisely define which file systems will generate incidents and notifications. For example, this monitoring policy has the default "File System Used Space Percentage" rule disabled, but the new rule reports ONLY on utilization for the /nfs-guest directory.

    Read the article
