Search Results

Search found 51926 results on 2078 pages for 'application monitoring'.


  • Is there a monitoring software suite that will alert me if it has received no activity in a time period?

    - by matt b
    This might be a very basic question, but I am not very familiar with the exact features of Nagios versus Munin versus other monitoring tools. Let's say we have a process that needs to run daily for some very important infrastructure reasons. We've had cases where the process did not run or was otherwise down for a number of days before anyone noticed. I'd like to set up a system that will enable me to easily know when the daily run did not take place for some reason. I can set up this process to send an email on every successful run (or every failed run), but I do not trust that the people receiving this email would notice an absence of an "I'm OK" message. What I am envisioning is some type of "tripwire" service which this V.I.P. (very-important-process) can send a status message to each time it runs, whether successfully or not; and if the "tripwire" service has not received any word from the VIP within a configurable amount of time, it can then send an alert to someone. (The difference between what I envision and the first approach I outlined is a service that sends a message only in abnormal conditions, rather than a service that sends messages each day that the status is normal/OK). Can Nagios be set up to send an alert like this, if it has not heard from a certain service/device/process in N days? Is there another tool out there which does have this feature?
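    For the record, Nagios supports exactly this pattern out of the box: define the check as passive and enable freshness checking (check_freshness with a freshness_threshold), and when no result arrives within the threshold Nagios runs the service's active check_command, which you point at a command that raises the alert. Rolling your own tripwire is also straightforward; below is a minimal C# sketch under assumed conventions (the heartbeat path, SMTP host, and addresses are all made up), meant to be run periodically by a scheduler:

        // Hypothetical tripwire: the VIP touches this file on every run (success or failure);
        // this watcher alerts when the heartbeat is older than the allowed window.
        using System;
        using System.IO;
        using System.Net.Mail;

        class TripwireWatcher
        {
            static void Main()
            {
                string heartbeat = @"C:\monitoring\vip.heartbeat";  // assumed path
                TimeSpan maxAge = TimeSpan.FromHours(26);           // one day plus some slack

                // A missing file reports a 1601-era timestamp, so it also trips the alert.
                TimeSpan age = DateTime.UtcNow - File.GetLastWriteTimeUtc(heartbeat);
                if (age > maxAge)
                {
                    new SmtpClient("mail.example.com").Send(
                        "[email protected]", "[email protected]",
                        "VIP heartbeat missing",
                        string.Format("No heartbeat for {0:F0} hours.", age.TotalHours));
                }
            }
        }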

    Read the article

  • VNC application/terminal server

    - by sebastian nielsen
    Which software should I use if I want to set up a Linux VNC terminal server that works in this way: The VNC server should accept up to X simultaneous connections on the same port 5900. The VNC server should use 640x480 at 8- or 16-bit color. When the VNC server receives a connection, it should start a new "session" for a user and auto-launch a specific Linux application for that user. If the application is killed, crashes, or exits in any way, the user should be disconnected (kicked) from the server. If the user disconnects, the application should be killed gracefully, in a way that allows it to clean up. (There should be no way to "pick up" an old session.) Any ideas?
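    One common pattern, sketched here with made-up paths and a TightVNC-style Xvnc in mind: let xinetd spawn one Xvnc per incoming connection on port 5900, so every connection gets its own short-lived session. Pairing it with a display manager doing XDMCP on localhost (the -query option) lets a per-user session script launch just the one application; when that script exits the session ends, and -once makes the spawned server go away with it.

        service vncapp
        {
            type        = UNLISTED
            port        = 5900
            socket_type = stream
            protocol    = tcp
            wait        = no
            user        = nobody
            server      = /usr/bin/Xvnc
            server_args = -inetd -once -geometry 640x480 -depth 16 -query localhost
        }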

    Read the article

  • Deployed Web Application Requests for User Name and Password

    - by user43175
    I recently deployed a .NET web application to the server. Authentication mode is set to Windows (since the application is accessible only to intranet users). Testing on some machines, the application loads up properly. On other machines, a logon dialog appears asking for a user name and password, the same kind of dialog you normally see when logging into a Windows domain. Any idea why this happens on some machines seemingly at random? Thanks.
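    For reference, a minimal sketch of the configuration this entry describes (standard ASP.NET, nothing site-specific):

        <!-- web.config: Windows authentication, anonymous users denied -->
        <system.web>
          <authentication mode="Windows" />
          <authorization>
            <deny users="?" />
          </authorization>
        </system.web>

    As for the "random" prompts: Internet Explorer only sends Windows credentials silently for sites it places in the Local Intranet zone, so machines that classify the URL differently (for example, because the site is addressed by fully qualified name or IP) will pop up the logon dialog instead.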

    Read the article

  • Learning AngularJS by Example – The Customer Manager Application

    - by dwahlin
    I'm always tinkering around with different ideas, and toward the beginning of 2013 I decided to build a sample application using AngularJS that I call Customer Manager. It's not exactly the most creative name or concept, but I wanted to build something that highlighted a lot of the different features offered by AngularJS and how they could be used together to build a full-featured app. One of the goals of the application was to ensure that it was approachable by people new to Angular, since I've never found overly complex applications great for learning new concepts. The application initially started out small and was used in my AngularJS in 60-ish Minutes video on YouTube, but has gradually had more and more features added to it and will continue to be enhanced over time. It'll be used in a new "end-to-end" training course my company is working on for AngularJS as well as in some video courses that will be coming out. Here's a quick look at what the application home page looks like (screenshot in the original post).

    In this post I'm going to provide an overview of how the application is organized, the back-end options that are available, and some of the features it demonstrates. I've already written about some of the features, so if you're interested check out the following posts: Building an AngularJS Modal Service, Building a Custom AngularJS Unique Value Directive, and Using an AngularJS Factory to Interact with a RESTful Service.

    Application Structure

    The application is structured as follows (the original post includes a screenshot of the folder tree). The homepage is index.html and is located at the root of the application folder. It defines where application views will be loaded using the ng-view directive, and includes script references to AngularJS, the AngularJS routing and animation scripts, and a few others located in the Scripts folder, as well as to custom application scripts located in the app folder. The app folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files, but after experimenting with several of them I decided that I prefer things in folders such as controllers, views, services, etc. Doing that helps me find things a lot faster and allows me to categorize files (such as controllers) by functionality. My recommendation is to go with whatever works best for you. Anyone who says, "You're doing it wrong!" should be ignored. Contrary to what some people think, there is no "one right way" to organize scripts and other files. As long as the scripts make it down to the client properly (you'll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls), the way you organize them is completely up to you. Here's what I ended up doing for this application:

    - Animation code for some custom animations is located in the animations folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd-party script called GreenSock.
    - Controllers are located in the controllers folder. Some of the controllers are placed in subfolders based upon their functionality while others are placed at the root of the controllers folder since they're more generic.
    - The directives folder contains the custom directives created for the application.
    - The filters folder contains the custom filters created for the application that filter city/state and product information.
    - The partials folder contains partial views. This includes things like modal dialogs used in the application.
    - The services folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data functionality.
    - The views folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality.

    Back-End Services

    The Customer Manager application (grab it from GitHub) provides two different options on the back-end: ASP.NET Web API and Node.js. The ASP.NET Web API back-end uses Entity Framework for data access and stores data in SQL Server (LocalDb). The other option on the back-end is Node.js, Express, and MongoDB.

    Using the ASP.NET Web API back-end: open the .sln file at the root of the project in Visual Studio 2012 or higher (the free Express 2013 for Web version is fine). Press F5 and a browser will automatically launch and display the application.

    Using the Node.js back-end, follow these steps:

    1. In the CustomerManager directory execute 'npm install' to install Express, MongoDB and Mongoose (see package.json).
    2. Load sample data into MongoDB: execute 'mongod' to start the MongoDB daemon, navigate to the CustomerManager directory (the one that has initMongoCustData.js in it), then execute 'mongo' to start the MongoDB shell. Enter the following in the mongo shell to load the seed files that handle seeding the database with initial data:

           use custmgr
           load("initMongoCustData.js")
           load("initMongoSettingsData.js")
           load("initMongoStateData.js")

    3. Start the Node/Express server by navigating to the CustomerManager/server directory and executing 'node app.js'.
    4. View the application at http://localhost:3000 in your browser.

    Key Features

    The Customer Manager application certainly doesn't cover every feature provided by AngularJS (as mentioned, the intent was to keep it as simple as possible), but it does provide insight into several key areas:

    - Using factories and services as reusable data services (see the app/services folder)
    - Creating custom directives (see the app/directives folder)
    - Custom paging (see app/views/customers/customers.html and app/controllers/customers/customersController.js)
    - Custom filters (see app/filters)
    - Showing custom modal dialogs with a reusable service (see app/services/modalService.js)
    - Making Ajax calls using a factory (see app/services/customersService.js)
    - Using Breeze to retrieve and work with data (see app/services/customersBreezeService.js). Switch the application to use the Breeze factory by opening app/services.config.js and changing the useBreeze property to true.
    - Intercepting HTTP requests to display a custom overlay during Ajax calls (see app/directives/wcOverlay.js)
    - Custom animations using the GreenSock library (see app/animations/listAnimations.js)
    - Creating custom AngularJS animations using CSS (see Content/animations.css)
    - JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder)
    - Card View and List View display of data (see app/views/customers/customers.html and app/controllers/customers/customersController.js)
    - Using AngularJS validation functionality (see app/views/customerEdit.html, app/controllers/customerEditController.js, and app/directives/wcUnique.js)
    - More…

    Conclusion

    I'll be enhancing the application even more over time and welcome contributions as well. Tony Quinn contributed the initial Node.js/MongoDB code, which is very cool to have as a back-end option. Access the standard application here and a version that has custom routing in it here. Additional information about the custom routing can be found in this post.

    Read the article

  • How does the PPA fit into the scenario of publishing an application to the Ubuntu Software Center?

    - by Mridang Agarwalla
    I've been going through docs for the past couple of hours but I still haven't understood what a PPA is. I have a cross-platform Java application that I'd like to publish to the Ubuntu Software Center. My application is open source and I'm using GitHub. Apparently, publishing applications to the store isn't as simple as uploading a deb package - am I right? I need to create an account on Launchpad and put all my code there. I don't intend to move from Git to Bzr merely for the sake of publishing to the app store, but luckily one is able to set up source-code mirroring from GitHub to Launchpad. Since my application is still very immature, it'll have updates fairly often. When I build my application on my machine, do I simply go to my Ubuntu App Developer page and upload the new DEB package, or do they build my application from source? What exactly is the PPA for? I don't think I'll need too many of the Launchpad features, so I'd like to stick to GitHub if possible. (Publishing for Ubuntu really isn't trivial. I can see why there are so many developers out there who haven't published their applications to the Ubuntu Software Center. Publishing an Android application has been the easiest so far.)
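    For context, a PPA (Personal Package Archive) is an apt repository hosted on Launchpad: you upload signed source packages and Launchpad builds, signs, and hosts the resulting .debs, so you never upload locally built binaries. A rough sketch of the upload flow, with made-up package and PPA names:

        debuild -S -sa                                                   # build and sign a source package
        dput ppa:your-launchpad-id/myapp ../myapp_0.1-1_source.changes   # upload; Launchpad builds the .debs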

    Read the article

  • Is it possible for an application (written in Mono C#) to run a console command?

    - by Razick
    I am wondering if a Mono C# application can somehow run a terminal command. For example, could the user give the program his or her password and then have the application run:

        sudo apt-get install application-name
        (console requests password)
        password
        (console requests confirmation)
        y

    Preferably, this would be done without actually opening a terminal visible to the user, so that the application could provide the necessary feedback and manage the whole operation cleanly with as little user interaction as possible. Is there a way to do that? Let me know if clarification is needed. Thank you!
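    A minimal sketch of the usual approach with System.Diagnostics.Process (the package name is a placeholder): redirecting the standard streams means no terminal window is shown, and sudo's -S flag makes it read the password from stdin, while apt-get's -y flag answers the confirmation prompt in place of the interactive "y".

        using System.Diagnostics;

        static class PackageInstaller
        {
            public static void Install(string password, string packageName)
            {
                var psi = new ProcessStartInfo("sudo", "-S apt-get install -y " + packageName)
                {
                    UseShellExecute = false,        // required for redirection; no terminal appears
                    RedirectStandardInput = true,
                    RedirectStandardOutput = true
                };
                using (var proc = Process.Start(psi))
                {
                    proc.StandardInput.WriteLine(password);  // consumed by sudo -S
                    proc.StandardInput.Close();
                    string output = proc.StandardOutput.ReadToEnd();
                    proc.WaitForExit();
                    // parse "output" here to drive the application's own feedback
                }
            }
        }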

    Read the article

  • What is the best strategy for licensing a desktop application using a web service, when all I need to know is when people use the product?

    - by user1667022
    Our company's main application is a desktop program that is used at warehouses, written in C# and Windows Presentation Foundation (WPF). The next thing we want to be able to do is track when customers open the application and when it is being used. The reason is so we can charge them per month, based on whether or not they are using the application. My boss has me researching different ways to "license" the product under these requirements. I have no experience doing this, but a few things come to mind. I could create a web application that runs on a server, and every time the desktop application is opened and the user logs in, the application connects to the server and records the date and time in a database. Or is there licensing software that I can use to accomplish this? Just looking for tips/advice from people who have experience with this type of thing.
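    A minimal sketch of the first option (hedged; the endpoint and field names are made up): at startup the desktop app POSTs a usage record, and the server just stores the timestamps for the monthly billing query.

        using System;
        using System.Collections.Specialized;
        using System.Net;

        static class UsageReporter
        {
            // Called once at startup (and/or login) so the server can record who
            // opened the application and when, for per-month billing queries later.
            public static void Report(string customerId)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        client.UploadValues("https://licensing.example.com/api/usage",
                            new NameValueCollection
                            {
                                { "customer", customerId },
                                { "openedAtUtc", DateTime.UtcNow.ToString("o") }
                            });
                    }
                }
                catch (WebException)
                {
                    // Policy decision: queue and retry later, or allow offline use.
                }
            }
        }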

    Read the article

  • How to use Application Verifier to find memory leaks

    - by Patrick
    I want to find memory leaks in my application using standard utilities. Previously I used my own memory allocator, but other people (yes, you Neil) suggested using Microsoft's Application Verifier, and I can't seem to get it to report my leaks. I have the following simple application:

        #include <iostream>
        #include <conio.h>

        class X
        {
        public:
            X() : m_value(123) {}
        private:
            int m_value;
        };

        int main()
        {
            X *p1 = new X();
            X *p2 = new X();
            X *p3 = new X();
            delete p1;
            delete p3;
        }

    This test clearly contains a memory leak: p2 is new'd but not deleted. I build the executable using the following command lines:

        cl /c /EHsc /Zi /Od /MDd test.cpp
        link /debug test.obj

    I downloaded Application Verifier (4.0.0665) and enabled all checks. If I now run my test application I can see a log of it in Application Verifier, but I don't see the memory leak. Questions:

    - Why doesn't Application Verifier report the leak? Or isn't Application Verifier really intended to find leaks?
    - If it isn't, which other tools are available that clearly report leaks at the end of the application (i.e. not by taking regular snapshots and comparing them, since this is not possible in an application taking 1 GB or more), including the call stack of the place of allocation (so not the simple leak reporting at the end of the CRT)?
    - If I don't find a decent utility, I will have to keep relying on my own memory manager (which does it perfectly).
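    Two pointers that stay within "standard utilities". First, the debug CRT report the question dismisses can at least dump leaks at exit with no snapshotting; a hedged MSVC-specific sketch:

        // Debug-CRT leak dump at exit (requires /MDd or /MTd).
        // _CRTDBG_MAP_ALLOC adds file/line info for malloc-family allocations.
        #define _CRTDBG_MAP_ALLOC
        #include <crtdbg.h>

        int main()
        {
            _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF); // dump leaks at exit
            _CrtSetReportMode(_CRT_WARN, _CRTDBG_MODE_FILE);              // send the report
            _CrtSetReportFile(_CRT_WARN, _CRTDBG_FILE_STDERR);            // to stderr

            int *leaked = new int(42);   // deliberately leaked; reported when the process exits
            return 0;
        }

    Second, for end-of-run reports that include the allocation call stack, Visual Leak Detector is the usual free answer: include vld.h in one source file of a debug build and it prints leaked blocks with full stacks when the program exits.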

    Read the article

  • Low Power Speed Monitoring

    - by user555584
    I am aware of speed detection via GPS, but as a background app I am concerned about high power drain. I am looking to detect speed, say over 5 mph, but it does not have to be speedometer-accurate. Is there a low-power way to detect whether the phone is in motion, say by triangulation, or by tracking tower strength and newly seen/recently lost towers? I have an app design that depends on running in the background from launch and knowing whether the phone is in a car or not. Thanks!

    Read the article

  • Solution to route/proxy SNMP Traps (or Netflow, generic UDP, etc) for network monitoring?

    - by Christopher Cashell
    I'm implementing a network monitoring solution for a very large network (approximately 5000 network devices). We'd like to have all devices on our network send SNMP traps to a single box (technically this will probably be an HA pair of boxes) and then have that box pass the SNMP traps on to the real processing boxes. This will allow us to have multiple back-end boxes handling traps, and to distribute load among those back-end boxes. One key feature that we need is the ability to forward the traps to a specific box depending on the source address of the trap. Any suggestions for the best way to handle this? Among the things we've considered are:

    - Using snmptrapd to accept the traps, and have it pass them off to a custom-written Perl handler script to rewrite the trap and send it to the proper processing box
    - Using some sort of load-balancing software running on a Linux box to handle this (we're having some difficulty finding many load-balancing programs that will handle UDP)
    - Using a load-balancing appliance (F5, etc.)
    - Using iptables on a Linux box to route the SNMP traps with NAT

    We've currently implemented and are testing the last solution, with a Linux box with iptables configured to receive the traps and then, depending on the source address of the trap, rewrite it with a destination NAT (DNAT) so the packet gets sent to the proper server. For example:

        # Range: 10.0.0.0/19 Site: abc01 Destination: foo01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j DNAT --to-destination 10.1.2.3
        # Range: 10.0.33.0/21 Site: abc01 Destination: foo01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.33.0/21 -j DNAT --to-destination 10.1.2.3
        # Range: 10.1.0.0/16 Site: xyz01 Destination: bar01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 -j DNAT --to-destination 10.3.2.1

    This should work with excellent efficiency for basic trap routing, but it leaves us completely limited to what we can match and filter on with iptables, so we're concerned about flexibility for the future. Another feature that we'd really like, but isn't quite a "must have", is the ability to duplicate or mirror the UDP packets. Being able to take one incoming trap and route it to multiple destinations would be very useful. Has anyone tried any of the possible solutions above for SNMP trap (or NetFlow, general UDP, etc.) load balancing? Or can anyone think of any other alternatives to solve this?
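    On the duplicate/mirror wish: iptables can copy packets with the TEE target (xt_TEE, in the mangle table), kernel and iptables versions permitting; a hedged sketch with a made-up mirror address:

        # Send a copy of matching traps to a second processing box as well
        iptables -t mangle -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j TEE --gateway 10.9.9.9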

    Read the article

  • Traditional ASP.NET application in subdirectory of an MVC application

    - by David
    Windows Server 2003, IIS6. We're trying to deploy a non-MVC ASP.NET web application as a subdirectory of an MVC application. However the ASP.NET application in the subdirectory is failing with the message "Could not load file or assembly 'System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified." which is bizarre because it's not an MVC application.
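    The usual culprit is web.config inheritance: the child application inherits the parent MVC application's configuration, including its System.Web.Mvc assembly and httpModules entries, and then fails to resolve them. A hedged sketch of the standard fix, applied in the parent's web.config:

        <!-- Parent (MVC) web.config: keep these settings from flowing into child applications -->
        <location path="." inheritInChildApplications="false">
          <system.web>
            <!-- move the parent's existing <system.web> content in here -->
          </system.web>
        </location>

    Alternatively, the child's web.config can <remove> or <clear> the inherited assembly and module entries it does not want.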

    Read the article

  • How to create a stand alone command line application with Node.js

    - by Fab
    I'm trying to find a way to use a command-line Node.js application that I created on a computer without Node.js installed. In other words, how do I package my application with Node.js inside, so users don't need to have Node.js already installed? The typical use case is: I run the application and it works using the Node core that is provided with the application (or the application checks whether Node.js is installed, and if not it downloads and installs it automatically). Do you have any ideas?
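    One low-tech way to "package Node inside", sketched under the assumption of a Linux/Mac target: ship the node binary alongside your script and start everything through a small launcher, so nothing needs to be installed system-wide.

        #!/bin/sh
        # Launcher shipped next to a bundled "node" binary and the app sources.
        DIR="$(dirname "$0")"
        exec "$DIR/node" "$DIR/app.js" "$@"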

    Read the article

  • Slow web application: server problem or application problem?

    - by Ashian
    Hi, I have a web application written in ASP.NET (C#) with SQL Server 2005. We host it on two dedicated servers (IIS and SQL Server). Starting some months ago, on some days of the week we get many reports about speed issues. We have some other applications on the IIS server using the same database. When we have the speed problem, all applications on these servers have it, but applications on other servers in the same data center work correctly. RAM and CPU usage are OK. How can I check whether the problem is related to the internet connection or to my application design? Which parameters should be checked? Some other information: in the applications, users can upload several files to the server, each file up to 3 MB. We also run a standard SQL web admin application on the same server; it has the same problem here but works perfectly on other servers. Thanks

    Read the article

  • June 22-24, 2010 in London City: Level 400 SQL Server Performance Monitoring & Tuning Workshop

    - by sqlworkshops
    We are organizing the "3 Day Level 400 SQL Server Performance Monitoring & Tuning Workshop" for the first time in London City during June 22-24, 2010. The agenda is located @ www.sqlworkshops.com/workshops and you can register @ www.sqlworkshops.com/ruk. Charges: £1800 (5% discount for those who register before 21st May: £1710).

    In this 3 Day Level 400 hands-on workshop, unlike short SQLBits sessions, we go deeper on the tuning topics. Not sure if this will be a good use of your time and money? Watch our webcasts @ www.sqlworkshops.com/webcasts. We are trying to balance these commercial offerings with our free community contributions; financially, these workshops are essential for us to stay in business!

    Feedback from the Finland workshop, posted by Jukka, Wärtsilä Oyj on February 23, 2010 to the LinkedIn SQL Server User Group Finland (more feedback @ www.sqlworkshops.com/feedbacks): "Just want to start this thread and give some feedback on the Workshop that I attended last week at Microsoft. Three days in a row, deep dive into the query optimization and performance monitoring :-) I must say, that the SQL guru Ramesh has all the tricks up in his sleeves. The workshop was very helpful and what's most important: no slide show marathon: samples after samples explained very clearly and with our own class room SQL servers we can try the same stuff while Ramesh typed his own samples. If the workshop will be rearranged, I can most willingly recommend it to anyone who wants to know what's 'under the hood' of SQL Server 2008. Once again, thank you Microsoft and Ramesh to make this happen. May the force be with us all :-)"

    Hope to see you @ the Workshop. Feel free to pass on this information to your SQL Server colleagues. -ramesh- www.sqlbits.com/speakers/r_meyyappan/default.aspx

    Read the article

  • IoT end-to-end demo – Remote Monitoring and Service, by Harish Doddala

    - by JuergenKress
    Historically, data was generated from predictable sources, stored in storage systems, and accessed for further processing. This data was correlated, filtered, and analyzed to derive insights and/or drive well-constructed processes. There was little ambiguity in the kinds of data, the sources it would originate from, and the routes it would follow. The Internet of Things (IoT) creates many opportunities to extract value from data that result in significant improvements across industries such as Automotive, Industrial Manufacturing, Smart Utilities, Oil and Gas, High Tech, and Professional Services. This demo showcases how the health of remotely deployed machinery can be monitored, illustrating how data coming from devices can be analyzed in real time, integrated with back-end systems, and visualized to initiate action as necessary.

    Use case: Remote Service and Maintenance. Critical machinery, once deployed in the field, is expected to work with minimal failures while delivering high performance and reliability. In typical remote monitoring and industrial automation scenarios, although many physical objects from machinery to equipment may already be "smart and connected," they are typically operated in a standalone fashion and not integrated into existing business processes. IoT adds an interesting dynamic to remote monitoring in industrial automation solutions in that it allows equipment to be monitored, upgraded, maintained, and serviced in ways not possible before. Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Partner Webcast – Introducing Oracle Business Activity Monitoring - 18 October 2012

    - by Thanos
    Oracle Business Activity Monitoring (Oracle BAM), a component of both SOA Suite and BPM Suite, is a complete solution for building interactive, real-time dashboards and proactive alerts for monitoring business processes and services. Oracle BAM gives both business executives and operational managers timely information to make better business decisions. It is a real-time business visibility solution that allows you to monitor business services and processes in the enterprise, to correlate KPIs down to the actual business processes themselves, and, most important, to change business processes quickly or take corrective action if the business environment changes. Let us show you how BAM provides powerful insight, through real-time dashboards, that can be a competitive edge for all your customers.

    Agenda:
    - Oracle BAM Overview
    - Business Problems
    - New Approach with Oracle BAM 11g
    - Demonstration
    - Summary & Q&A

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now. Send your questions and migration/upgrade requests to [email protected]. Visit our ISV Migration Center blog regularly or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts, and events. All content is made available through our YouTube, SlideShare, and Oracle Mix channels.

    Read the article

  • Loading the Cache from the Business Application Server

    - by ACShorten
    By default, the Web Application Server will connect directly to the database to load its cache at startup time. Customers who install the product in distributed mode, where the Web Application Server and Business Application Server are deployed separately, may wish to prevent the Web Application Server from connecting to the database directly. (Installation of the product in distributed mode was introduced in Oracle Utilities Application Framework V2.2.)

    In the Advanced Web Application Server configuration, it is possible to set Create Simple Web Application Context (WEBAPPCONTEXT) to true to force the Web Application Server to load its cache via the Business Application Server rather than loading it directly. The value false retains the default behavior of allowing the Web Application Server to connect directly to the database at startup time to load the cache. The value true loads the cache data via direct calls to the Business Application Server, which can cause a slight delay in the startup process compared to the direct load. (The original post illustrates the impact of the two settings with a figure.)

    When setting this value to true, the following properties files should be manually removed prior to executing the product:

        $SPLEBASE/etc/conf/root/WEB-INF/classes/hibernate.properties
        $SPLEBASE/splapp/applications/root/WEB-INF/classes/hibernate.properties

    Note: For customers using a local installation, where the Web Application Server and Business Application Server are combined in the deployed server, it is recommended to leave this parameter at false, the default, unless otherwise required. This facility is available for Oracle Utilities Application Framework V4.1 in Group Fix 3 (via Patch 11900153) and Patch 13538242, available from My Oracle Support.

    Read the article

  • Simple monitoring of a Raspberry Pi powered screen - Part 2

    - by Chris Houston
    If you have read my previous blog post, Raspberry Pi entrance sign backed by Umbraco - Part 1, which describes how we used a Raspberry Pi to drive an entrance sign for QV Offices, you will have seen I mentioned a follow-up post about monitoring the sign.

    As the sign is mounted in the entrance of the building on the ground floor and the reception is on the 1st floor, if there was a fault of any kind showing on the screen, the first person to see it was inevitably one of QV Offices' clients as they walked into the building. Although the QV Offices team were able to check the Umbraco website address that the sign uses, this did not always mean that everything was working as expected. We noticed a couple of times that the sign had WiFi issues (it is now hard wired), which caused the Chromium browser to render a 404 error when it tried to refresh the screen.

    The simple monitoring solution

    We added the following line to our refresh script, so that after the sign had been refreshed a screenshot of the Raspberry Pi would be taken:

        import -display :0 -window root ~/screenshot.jpg

    Finally, we wrote a small crontab task on a QV Offices Mac that grabs this screenshot and saves it on the desktop. Admittedly we have used a package that is not mega secure, but in reality this is an internal system that only runs an office sign, so we are not too concerned about it being hacked.

        */5 * * * * /usr/local/bin/sshpass -p 'password' /usr/bin/scp [email protected]:screenshot.jpg Desktop/QVScreenShot.jpg

    As the file icon updates if the image changes, this gives a quick visual indication of the status of the sign. If for some reason the icon does not look correct, the QV Offices administrator can just click on the file to see the exact image currently displayed on the sign. Sometimes a quick and easy solution is better than a more complex and expensive one.

    Read the article

  • Good practice or service for monitoring unhandled application errors for a small organization

    - by palto
    I work with multiple pieces of software with varying ways of monitoring for errors. When I make software, I usually send an email with the stack trace to the admins (usually me). Some customer software is monitored by a team who check that a particular batch run was successful. Other software might not have any monitoring at all (someone will call when things go horribly wrong). Sending emails is good, except that when things start going wrong my mail fills up fast. Also, I don't want to solve the same problem in code for every piece of software. Is there some relatively cheap and low-maintenance software or practice to handle this? I want it to be cheap/low maintenance because I usually work alone or in teams of 5 or smaller. For example, it would be great if errors were aggregated so I don't get 10,000 emails when something unexpected happens. For clarification: by unhandled errors I mean exceptions that were unhandled by application code and were propagated to Tomcat or JBoss. I don't need help with how to catch those errors; I need help with what to do with them. Is there any cloud application that I could send my errors to? Or some simple server to install? Or some library that can handle errors using configuration files? I use Java, if that is any help.

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it.

    The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials.

    DMS is essentially always running and collecting this information in the runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75 MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.

    Read the article

  • C#: Parallel forms, multithreading and "applications in application"

    - by Harry
    First, what I need: n WebBrowsers, each in its own window doing its own job. The user should be able to see them all, or just one of them (or none), and to execute commands on each one. There is a main form, without a browser; this one contains the control panel for my application. The key feature is that each browser logs on to a secured web page and needs to stay logged in as long as possible. Well, I've done it, but I'm afraid something is wrong with my approach. The question is: is the code below valid, or rather a nasty hack that can cause problems?

        internal class SessionList : List<Session>
        {
            public SessionList(Server main)
            {
                MyRecords.ForEach(record =>
                {
                    var st = new System.Threading.Thread((data) =>
                    {
                        var s = new Session(main, data as MyRecord);
                        this.Add(s);
                        Application.Run(s);
                        Application.ExitThread();
                    });
                    st.SetApartmentState(System.Threading.ApartmentState.STA);
                    st.Start(record);
                });
            }

            // some other uninteresting methods here...
        }

    What's going on here? Session inherits from Form, so it creates a form, puts a WebBrowser into it, and has methods to operate on websites. WebBrowser requires being run in an STA thread, so we provide one for each browser. The most interesting part is Application.Run(s). It makes the newly created forms alive and interactive. The following Application.ExitThread() is called after the browser window is closed and its controls disposed. The main application stays alive to perform the rest of the cleanup job. When the user selects the "Exit" or "Shutdown" option, the browser threads are ended first, so Application.ExitThread() is called. It all works, but everywhere I read about the "main GUI thread", and here I've created many GUI threads. I handle communication between the main form and my new forms (sessions) with thread-safe methods using Invoke(). It all works, so is it right or is it wrong? Is everything right with using Application.Run() more than once in one application? :) An ugly hack or a normal practice? This code dies if I start a WebBrowser from the session form thread. It beats me why. It works, however, if I start the WebBrowser (by changing its Url property) from any other thread. I'd like to know more about what is really happening in such an application. But most of all, I'd like to know if my idea of "applications in application" is OK. I'm not sure what exactly Application.Run() does. Without it, forms created in new threads were dead and unresponsive. How is it possible that I can call Application.Run() many times? It seems to do exactly what it should, but it feels like a somewhat undocumented feature to me. I'm almost sure the crashes are caused by the WebBrowser component itself (since it's not completely "managed" but wraps a native control). But maybe it's something else.

    Read the article

  • Need advice on a monitoring, reminder and warning application.

    - by cbmeeks
    I am a developer who also has to monitor several things on different servers, such as:

    1) Did all of the MS SQL databases back up last night?
    2) Did all of the MySQL databases back up last night?
    3) Were the database dumps actually copied to the right folder?
    4) How much free space is left on each server's hard drives?
    5) How big are folders "abc", "def", "etc" getting?
    6) Send emails/alerts when thresholds are reached

    Etc. Basically, something to help me NOT forget such important things. I thought about writing something myself but didn't want to waste the effort if something is already out there. I would also prefer one application instead of many if I could. Thanks.

    EDIT: Forgot to mention the operating system. These run on Windows Server 2003 and/or 2008. In fact, what would be cool is a program that supports multiple servers from one machine; something that can log into those servers.

    Read the article

  • Calling a WPF application and modifying exposed properties?

    - by Justin
    I have a WPF keyboard application. It is developed in such a way that another application could call it and modify its properties to adapt the keyboard to its needs. Right now I have a file, *.Keys.Set, which tells the application (on open) to style itself according to that new style. I know this file could be passed as a command-line argument into the application; that would not be a problem. My concern is: is there a way, via a managed environment, to change the properties of the executable as long as they are exposed properly? An example of the kind of API I have in mind:

        'Creates a new instance of the Keyboard Application
        Dim e_key As New WpfApplication("C:\egt\components\keyboard.exe")

        'Sets the style path
        e_key.SetStylePath("c:\users\joe\apps\me\default.keys.set")
        e_key.Refresh() 'Applies the style

        e_key.HideMenu() 'Hides the menu

        'Shows the custom "deck" of keyboard keys the developer
        'created in the style application
        e_key.ShowDeck("PIN")

        ''work with events and responses

        'Clear the instance from memory
        e_key.Close()
        e_key.Dispose()
        e_key = Nothing

    This would allow my application to become easily accessible to other touch-screen application developers, allowing them to use my keyboard and keep the functionality they need. It seems like it might be possible because (name of executable).application shows all the exposed functions, properties, and values. I just have never done this before. Any help would be appreciated; thank you in advance.

    Read the article
