Search Results

Search found 11115 results on 445 pages for 'disk monitoring'.

Page 85 of 445

  • Monitoring all events in a class and sub-classes

    - by Basiclife
    Hi, I wonder if someone can help me. I've got a console app which I use to debug various components as I develop them. I'd like to be able to log to the console every time an event is fired, either in the object I've instantiated or in anything it has instantiated, ad infinitum. I wouldn't normally see some of these events because they are consumed further down the chain. Ideally I would be able to log all public and private events, but if only public events are possible, I can live with that. I've googled and all I can find is how to monitor a directory, so I'm not sure whether this is impossible or simply has a name that I don't know. The sort of information I'm after is similar to what's found in an exception: target site, source, stack trace, etc. Could I perhaps do this through reflection somehow? If someone could tell me whether this is even possible and perhaps point me at some good resources, I'd be very grateful. Many thanks, Basic.

    To give you an idea of the console app:

        Sub Main()
            Container = ContainerGenerate.GenerateContainer()
            Dim TemplateID As New Guid("5959b961-b347-46bc-b1b6-cba311304f43")
            Dim Templater = Container.Resolve(Of Interfaces.Mail.IMailGenerator)()
            Dim MyMessage = Templater.GenerateMail(TemplateID, Nothing, Nothing)
            Dim MySMTPClient = Container.Resolve(Of SmtpClient)()
            MySMTPClient.Send(MyMessage)
            Finish()
        End Sub
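
    One avenue worth exploring, sketched below in C# (EventSpy and EventLogger are illustrative names of mine, not library types): reflection can enumerate an object's public events and attach a shared logging handler to every event that follows the standard (object sender, EventArgs e) pattern. Private events would need the NonPublic binding flags plus access to the backing delegate fields, and hooking everything the object has instantiated would mean walking its fields recursively, so treat this only as a starting point.

        using System;
        using System.Reflection;

        // Carries the event name so the log line can say which event fired.
        class EventLogger
        {
            private readonly string _eventName;
            public EventLogger(string eventName) { _eventName = eventName; }

            public void Handle(object sender, EventArgs e)
            {
                Console.WriteLine("[{0:HH:mm:ss}] {1}.{2} fired ({3})",
                    DateTime.Now,
                    sender == null ? "?" : sender.GetType().Name,
                    _eventName,
                    e == null ? "no args" : e.GetType().Name);
            }
        }

        static class EventSpy
        {
            // Attach a logging handler to every public instance event on target.
            public static void HookAllEvents(object target)
            {
                foreach (EventInfo evt in target.GetType().GetEvents(
                             BindingFlags.Public | BindingFlags.Instance))
                {
                    var logger = new EventLogger(evt.Name);
                    try
                    {
                        // CreateDelegate allows contravariant parameters, so a
                        // (object, EventArgs) method also fits EventHandler<T>.
                        Delegate handler = Delegate.CreateDelegate(
                            evt.EventHandlerType, logger,
                            typeof(EventLogger).GetMethod("Handle"));
                        evt.AddEventHandler(target, handler);
                    }
                    catch (ArgumentException)
                    {
                        // Non-standard event signature; skip it.
                    }
                }
            }
        }

    Calling EventSpy.HookAllEvents(...) on the resolved objects right after they are created would then echo each standard-pattern event they raise.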

    Read the article

  • Ten latest files on disk

    - by Artic
    I need an effective algorithm to keep only the ten latest files on disk in a particular folder, to support a kind of publishing process. Only 10 files should be present in this folder at any point in time. Please give your advice on what should be used here.
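
    A minimal C# sketch of the usual approach (assuming "latest" means most recently written and that nothing else touches the folder while the cleanup runs; FolderTrimmer is an illustrative name): after each publish, sort the files by last-write time and delete everything beyond the tenth.

        using System;
        using System.IO;
        using System.Linq;

        static class FolderTrimmer
        {
            // Keep only the 'keep' most recently written files in 'folder'.
            public static void TrimToLatest(string folder, int keep = 10)
            {
                var stale = new DirectoryInfo(folder)
                    .GetFiles()
                    .OrderByDescending(f => f.LastWriteTimeUtc)
                    .Skip(keep);

                foreach (FileInfo file in stale)
                {
                    file.Delete();   // may throw if a reader still holds the file open
                }
            }
        }

    Calling TrimToLatest(path) right after each new file is published keeps the ten-file invariant without any background polling.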

    Read the article

  • CVS tools for repo monitoring on Windows?

    - by Bjorn J
    I sometimes use the very simple but effective CommitMonitor for SVN (http://tools.tortoisesvn.net/CommitMonitor) to monitor activity. It's easy to see in the system tray on a Windows box and I've become used to it by now. So, is there a similar or identical tool for CVS? I did some googling and, to my surprise, I couldn't find one. Any tips?

    Read the article

  • Monitoring DOM Changes in jQuery

    - by user363866
    Is there a way to detect when the disabled attribute of an input changes in jQuery? I want to toggle the style based on the value. I can copy and paste the same enable/disable code for each change event (as I did below), but I was looking for a more generic approach. Can I create a custom event that will monitor the disabled attribute of specified inputs? Example:

        <style type="text/css">
            .disabled { background-color: #dcdcdc; }
        </style>

        <fieldset>
            <legend>Option 1</legend>
            <input type="radio" name="Group1" id="Radio1" value="Yes" />Yes
            <input type="radio" name="Group1" id="Radio2" value="No" checked="checked" />No
            <div id="Group1Fields" style="margin-left: 20px;">
                Percentage 1: <input type="text" id="Percentage1" disabled="disabled" /><br />
                Percentage 2: <input type="text" id="Percentage2" disabled="disabled" /><br />
            </div>
        </fieldset>

        <fieldset>
            <legend>Option 2</legend>
            <input type="radio" name="Group2" id="Radio3" value="Yes" checked="checked" />Yes
            <input type="radio" name="Group2" id="Radio4" value="No" />No
            <div id="Group2Fields" style="margin-left: 20px;">
                Percentage 1: <input type="text" id="Text1" /><br />
                Percentage 2: <input type="text" id="Text2" /><br />
            </div>
        </fieldset>

        <script type="text/javascript">
            $(document).ready(function () {
                // apply disabled style to all disabled controls
                $("input:disabled").addClass("disabled");

                $("input[name='Group1']").change(function () {
                    var disabled = ($(this).val() == "No") ? "disabled" : "";
                    $("#Group1Fields input").attr("disabled", disabled);
                    // apply disabled style to all disabled controls
                    $("input:disabled").addClass("disabled");
                    // remove disabled style from all enabled controls
                    $("input:not(:disabled)").removeClass("disabled");
                });

                $("input[name='Group2']").change(function () {
                    var disabled = ($(this).val() == "No") ? "disabled" : "";
                    $("#Group2Fields input").attr("disabled", disabled);
                    // apply disabled style to all disabled controls
                    $("input:disabled").addClass("disabled");
                    // remove disabled style from all enabled controls
                    $("input:not(:disabled)").removeClass("disabled");
                });
            });
        </script>

    Read the article

  • Monitoring folders for changes

    - by blcArmadillo
    I'm working on a project that will require an application that watches a list of directories, specified by the user, for changes. I'd also like to give users the option of running the application as a service or on an individual basis. Since users can choose to run it on an individual basis, I don't think listening for an operating-system event triggered by the addition or deletion of files (if such events exist) would be sufficient. I thought about maybe calculating a checksum for the deepest folder and then building up; I could then compare these checksums on subsequent scans to try to pinpoint where the changes have occurred. Would that be an appropriate solution? If not, what would be the best way of doing this efficiently? Also, I'm not quite sure what to tag this as, so if you have any recommendations let me know and I'll add them as I see fit.

    EDIT: I'll need this method to work on Windows, OS X and, ideally, Linux.
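
    A minimal sketch of that scan-and-compare idea in C# (running on .NET or Mono would cover Windows, OS X and Linux; DirectoryScanner is an illustrative name of mine): it records size and last-write time per file rather than a full checksum, which is usually enough to pinpoint additions, deletions and modifications and is much cheaper than hashing every file.

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class DirectoryScanner
        {
            // Snapshot: full path -> (length, last write time) for every file under root.
            public static Dictionary<string, Tuple<long, DateTime>> Snapshot(string root)
            {
                var map = new Dictionary<string, Tuple<long, DateTime>>();
                foreach (string path in Directory.GetFiles(root, "*", SearchOption.AllDirectories))
                {
                    var info = new FileInfo(path);
                    map[path] = Tuple.Create(info.Length, info.LastWriteTimeUtc);
                }
                return map;
            }

            // Compare two snapshots taken at different times and report what changed.
            public static void Diff(Dictionary<string, Tuple<long, DateTime>> before,
                                    Dictionary<string, Tuple<long, DateTime>> after)
            {
                foreach (var entry in after)
                {
                    Tuple<long, DateTime> old;
                    if (!before.TryGetValue(entry.Key, out old))
                        Console.WriteLine("added:    " + entry.Key);
                    else if (!old.Equals(entry.Value))
                        Console.WriteLine("modified: " + entry.Key);
                }
                foreach (var entry in before)
                    if (!after.ContainsKey(entry.Key))
                        Console.WriteLine("removed:  " + entry.Key);
            }
        }

    When running as a Windows service, FileSystemWatcher can replace the polling loop entirely; inotify on Linux and FSEvents on OS X play the same role, with the scan above as a portable fallback.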

    Read the article

  • Possible to migrate from non-RAID to RAID 1 and then RAID 5?

    - by stueng
    Using software RAID only:

    Is it possible to start with a 2TB disk full of data and safely add it to a RAID 1 array? Is it then possible to add a third disk and migrate the RAID 1 array into a RAID 5 array? OR, is it possible to start with a two-disk degraded RAID 5 array and then add the third disk later to create a healthy RAID 5 array?

    Backstory: I wish to migrate from a two-disk NAS (RAID 1) to a three-disk NAS and only purchase one new disk in doing so.

    Read the article

  • Emulate disk IO speeds in Android SDK

    - by Ben L.
    I don't have a physical Android device to work with, so all I can use is the emulator. I'm wondering if there's something I could use to make the IO speeds more realistic: how do I slow down disk access to the speed it would have on a physical device? Also, this may be unrelated, but when I change the speed and latency options in the Eclipse ADT DDMS view, I don't notice any change in internet speed on the emulator. Is this a bug?

    Read the article

  • CPU temperature monitoring C#

    - by Paul
    For a programming project I would like to access the temperature readings from my CPU and GPUs. I will be using C#. From various forums I get the impression that there are specific pieces of information and developer resources you need in order to access that information for various boards. I have an MSI NF750-G55 board. MSI's website does not have any of the information I am looking for. I tried their tech support, and the rep I spoke with stated they do not have any such information. There must be a way to obtain that info. Any thoughts?
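
    One commonly suggested starting point, sketched below: query the ACPI thermal zone through WMI with System.Management. Whether it returns a live CPU temperature depends entirely on the board's BIOS/ACPI implementation; on many boards it reports nothing or a constant value, so treat this as a probe rather than a guaranteed solution, and expect per-core and GPU readings to require vendor-specific tools or an open-source hardware-monitoring library instead.

        using System;
        using System.Management;   // add a reference to System.Management.dll

        class ThermalZoneProbe
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    @"root\WMI",
                    "SELECT CurrentTemperature FROM MSAcpi_ThermalZoneTemperature");

                foreach (ManagementObject zone in searcher.Get())
                {
                    // CurrentTemperature is reported in tenths of a kelvin.
                    double tenthsKelvin = Convert.ToDouble(zone["CurrentTemperature"]);
                    Console.WriteLine("Thermal zone: {0:F1} deg C", tenthsKelvin / 10.0 - 273.15);
                }
            }
        }

    The query often needs to run elevated (as administrator) to read the root\WMI namespace at all.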

    Read the article

  • Shell script task status monitoring

    - by Bikram Agarwal
    I'm running an Ant task in the background and checking at 60-second intervals whether that task is complete. If it is not, then every 60 seconds a message should be displayed on screen: "Deploy process is still running. $slept seconds since deploy started", where $slept is 60, 120, 180 and so on. There's a limit of 1200 seconds, after which the script will show the log via the 'ant log' command and ask the user whether to continue. If the user chooses to continue, 300 seconds are added to the time limit and the process repeats. The code that I am using for this is:

        ant deploy &
        limit=1200

        deploy_check()
        {
            while [ ${slept:-0} -le $limit ]; do
                sleep 60 && slept=`expr ${slept:-0} + 60`
                if [ $$ = "`ps -o ppid= -p $!`" ]; then
                    echo "Deploy process is still running. $slept seconds since deploy started."
                else
                    wait $! && echo "Application ${New_App_Name} deployed successfully" || echo "Deployment of ${New_App_Name} failed"
                    break
                fi
            done
        }

        deploy_check

        if [ $$ = "`ps -o ppid= -p $!`" ]; then
            echo "Deploy process did not finish in $slept seconds. Here's the log."
            ant log
            echo "Do you want to kill the process? Press Ctrl+C to kill. Press Enter to continue."
            read log
            limit=`expr ${limit} + 300`
            deploy_check
        fi

    Now, the problem is that this code is not working. It looks perfectly good to me, and yet it does not work. Can anyone point out what is wrong with it, please?

    Read the article

  • Please recommend a CMS framework in Java good for making a monitoring system

    - by TiansHUo
    I already have Java code to display and process data from a database. I would now like to implement the code as modules and incorporate them as a whole into a CMS system.

    - The CMS MUST support Spring, Hibernate, etc.
    - The CMS MUST NOT be GPL, and should preferably be open source (LGPL, BSD, etc.).
    - The CMS MUST have good documentation and support.
    - The CMS MUST be secure (safe from XSS and injection) and support different levels of authorization (built in or via a module).
    - The CMS MUST have good navigation and tabs (built in or via a module).
    - +1 for having a good Ajax paginated table module to display data.
    - +1 for using Ajax to change pages, with support for bookmarks and history.back().
    - +1 for using jQuery or Prototype.
    - +1 for being easy to deploy and easy to add modules to.
    - +1 for supporting dynamically adding/removing and drag-and-dropping modules.

    Please recommend the CMS best suited to the job. Thank you!

    EDIT: I don't need blogs or wikis, etc. I just want a framework where I can display paginated lists, time-series graphs and log files. I will also host some pages for configuration. All of this code is already implemented.

    Read the article

  • Monitoring for non-pgp files

    - by Aheho
    For privacy reasons, I want to prevent my users from posting unencrypted files to my FTP site. It is company policy that all exchanges of sensitive data be encrypted with PGP. I'd like to set up a program that monitors the FTP folders and, whenever a new file is placed there, verifies that it is in fact encrypted. I can't just rely on the file extension because, in some cases, our trading partners require a specific filename that doesn't end in .PGP. Is there a library or another method I can use to verify that a given file is encrypted? I'm using C# and .NET on a Windows platform.
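
    A lightweight heuristic, sketched in C# (PgpHeuristic is an illustrative name and this is a rough check of my own, not a library call): OpenPGP data is either ASCII-armored, starting with a "-----BEGIN PGP" header line, or binary, in which case the first octet is a packet tag whose high bit is always set (RFC 4880). A library such as BouncyCastle could do a proper packet parse if a stronger guarantee is needed.

        using System.IO;
        using System.Text;

        static class PgpHeuristic
        {
            // Rough check: does this file look like OpenPGP data (armored or binary)?
            // It does not prove the payload is encrypted, only that it parses as PGP.
            public static bool LooksLikePgp(string path)
            {
                byte[] header = new byte[64];
                int read;
                using (FileStream fs = File.OpenRead(path))
                {
                    read = fs.Read(header, 0, header.Length);
                }
                if (read == 0)
                    return false;

                // ASCII-armored messages begin with a "-----BEGIN PGP ..." line.
                string text = Encoding.ASCII.GetString(header, 0, read);
                if (text.StartsWith("-----BEGIN PGP"))
                    return true;

                // Binary OpenPGP packets always have bit 7 of the first octet set.
                return (header[0] & 0x80) != 0;
            }
        }

    Pairing this with a FileSystemWatcher on the FTP upload folders gives a basic monitor; files that fail the check could be quarantined and reported rather than left in place.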

    Read the article

  • Monitoring File Changes in C#

    - by Pcstalljr
    Hi there, I'm using C# for a mini project of mine. I am trying to monitor files that are changed, deleted and/or created, and export that information to a file, but I am not quite sure how to monitor files. Any ideas?
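
    The standard .NET answer is System.IO.FileSystemWatcher. Below is a minimal sketch that logs change, create, delete and rename events for one folder to a text file; the watched-folder and log-file paths are placeholders.

        using System;
        using System.IO;

        class FileChangeLogger
        {
            static void Main()
            {
                const string watchedFolder = @"C:\watched";        // placeholder path
                const string logFile = @"C:\watched-changes.log";  // placeholder path

                var watcher = new FileSystemWatcher(watchedFolder)
                {
                    IncludeSubdirectories = true,
                    NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
                };

                FileSystemEventHandler log = (sender, e) =>
                    File.AppendAllText(logFile, string.Format("{0:u} {1} {2}{3}",
                        DateTime.Now, e.ChangeType, e.FullPath, Environment.NewLine));

                watcher.Changed += log;
                watcher.Created += log;
                watcher.Deleted += log;
                watcher.Renamed += (sender, e) =>
                    File.AppendAllText(logFile, string.Format("{0:u} Renamed {1} -> {2}{3}",
                        DateTime.Now, e.OldFullPath, e.FullPath, Environment.NewLine));

                watcher.EnableRaisingEvents = true;
                Console.WriteLine("Watching {0}; press Enter to stop.", watchedFolder);
                Console.ReadLine();
            }
        }

    One caveat worth knowing: Changed often fires more than once for a single save, so real code usually de-duplicates events that arrive within a short window.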

    Read the article

  • monitoring unix resources from inside a process

    - by kamziro
    I've had a bunch of EAGAINs from trying to fork() or spawn threads, which leads me to believe that I'm leaking resources somewhere. Is it possible, in POSIX, to get the following from inside the process itself:

    - number of active pthreads
    - number of active child processes
    - number of active pipes
    - number of active sockets (or maybe these and pipes would both be counted as file descriptors?)

    Or do these have to be counted manually? There are already counters for them, but I think one of them is leaking.

    Read the article

  • Real Time Monitoring System using .net [closed]

    - by sameer
    Need to know the right way of accomplishing this task.

    Task description: we need to develop an application that displays a dashboard where data fetched from SQL databases on various servers is shown. This needs to happen in near real time; we can have a refresh time of, say, 5 minutes. Here are some details, just for your reference, to help point me down the right path:

    1) How many servers? The application may need to support up to around 100 servers.
    2) The application will be used by an administrator to check the health of the SQL Server databases.
    3) It will be built by 2 developers in approximately 3 months.
    4) The refresh rate will be configurable as an administrator option.

    Regards, Sameer
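
    One workable shape for this, sketched below (the connection strings, the sample query and the ServerHealth/DashboardPoller names are placeholders of mine, not an established design): a poller runs at the configured interval, opens each configured connection, runs a cheap health query and hands the results to whatever UI renders the dashboard. Polling the servers in parallel keeps one slow or unreachable server from delaying the rest.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        class ServerHealth
        {
            public string Server;
            public bool Reachable;
            public int ActiveSessions;   // example metric only
            public string Error;
        }

        class DashboardPoller
        {
            // Poll every configured server once; call this from a timer at the
            // administrator-configured interval (for example, every 5 minutes).
            public static List<ServerHealth> PollAll(IEnumerable<string> connectionStrings)
            {
                var results = new List<ServerHealth>();
                Parallel.ForEach(connectionStrings, cs =>
                {
                    var health = new ServerHealth
                    {
                        Server = new SqlConnectionStringBuilder(cs).DataSource
                    };
                    try
                    {
                        using (var conn = new SqlConnection(cs))
                        using (var cmd = new SqlCommand(
                            "SELECT COUNT(*) FROM sys.dm_exec_sessions", conn))
                        {
                            conn.Open();
                            // Requires VIEW SERVER STATE permission on each server.
                            health.ActiveSessions = (int)cmd.ExecuteScalar();
                            health.Reachable = true;
                        }
                    }
                    catch (Exception ex)
                    {
                        health.Reachable = false;
                        health.Error = ex.Message;
                    }
                    lock (results) { results.Add(health); }
                });
                return results;
            }
        }

    With a two-person team and a three-month window, a simple poller plus a grid-style dashboard is usually achievable; the main design decisions are where to store the history and how to alert when a server stops responding.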

    Read the article

  • Monitoring GPS Coordinates

    - by Suriyan Suresh
    I need to monitor GPS coordinate changes every 15 minutes and take action based on them. The bada developer guide states that only one application is allowed to run at a time; if another application tries to run, the first one is closed. So how do I monitor GPS coordinates without interruption from other applications? How do I keep my application running at all times?

    Read the article

  • Writing direct to disk with php

    - by Jurander
    I would like to create an upload script that doesn't fall under the PHP upload limit. There might be an occasion where I need to upload a 2GB or larger file, and I don't want to have to raise the server-wide limit above 32MB for everything. Is there a way to write directly to disk from PHP? What method would you propose for accomplishing this? I have read around Stack Overflow but haven't quite found what I am looking for.

    Read the article

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures for implementing a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it.

    My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. It will also be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing a temporal database almost free, and that is something quite nice to have, especially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it.

    Here's my line of thought:

    - Since all writes are done asynchronously, and using a persistent data structure makes it possible not to invalidate the previous (and currently valid) structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this designed exactly for disk usage, but it seems to me that these techniques add more read overhead to achieve faster writes, when I think exactly the opposite is preferable. Also, many of these techniques really do end up with multi-versioned trees, but they aren't strictly immutable, which is something crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be good garbage-collection logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta-compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they require the fewest rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, though I don't think that is the case with web applications most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could also lead to some commit conflicts, though I fail to think of a good example of when that could happen. Commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right?
    - The overhead of an immutable interface like this will grow exponentially and everything is doomed to fail soon, so this is all a bad idea.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
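
    For readers tripped up by the terminology, here is a minimal C# sketch of the path-copying idea behind persistent trees (an unbalanced binary search tree rather than a red-black tree, to keep it short; Node is an illustrative type of mine): insertion allocates new nodes only along the root-to-leaf path, and every previous root still describes an old, fully valid version of the tree. A red-black variant adds rebalancing on top, but the structural sharing is the same.

        // Immutable binary search tree node; Insert returns a new version of the
        // tree and never modifies existing nodes (path copying).
        sealed class Node
        {
            public readonly int Key;
            public readonly Node Left, Right;

            public Node(int key, Node left, Node right)
            {
                Key = key; Left = left; Right = right;
            }

            public static Node Insert(Node root, int key)
            {
                if (root == null)
                    return new Node(key, null, null);
                if (key < root.Key)
                    return new Node(root.Key, Insert(root.Left, key), root.Right);
                if (key > root.Key)
                    return new Node(root.Key, root.Left, Insert(root.Right, key));
                return root; // key already present; the existing version is reused
            }
        }

        // Usage: each insert yields a new root, and every earlier root remains a
        // complete, readable snapshot -- which is what makes the temporal part cheap.
        //   Node v1 = Node.Insert(null, 5);
        //   Node v2 = Node.Insert(v1, 3);   // v1 is still intact and queryable

    On disk, the same sharing works with block or offset references instead of object references, which is roughly how append-only stores such as CouchDB keep every version of their B-trees reachable.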

    Read the article

  • Using Pisa to write a pdf to disk

    - by phoebebright
    I have Pisa producing PDFs in Django in the browser fine, but what if I want to automatically write the file to disk? What I want is to be able to generate a PDF file at specified points in time and save it in an uploads directory, so there is no browser interaction. Is this possible?

    Read the article

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native Winsock and I use a glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% CPU time on a 3GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops down to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts (sorry for the printfs ;) ):

        /* bind */
        void UDPInterface::bindToPort(unsigned short port)
        {
            struct sockaddr_in target;
            WSADATA wsaData;

            target.sin_family = AF_INET;
            target.sin_port = htons(port);
            target.sin_addr.s_addr = 0;

            if (WSAStartup(0x0202, &wsaData))
            {
                printf("WSAStartup failed!\n");
                exit(0); // :)
                WSACleanup();
            }

            sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock == INVALID_SOCKET)
            {
                printf("invalid socket!\n");
                exit(0);
            }

            if (bind(sock, (struct sockaddr*)&target, sizeof(struct sockaddr_in)) == SOCKET_ERROR)
            {
                printf("failed to bind to port!\n");
                exit(0);
            }

            printf("[UDPInterface::bindToPort] listening on port %i\n", port);
        }

        /* read */
        bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
        {
            recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
            /* process packet... */
        }

        /* glibmm connect */
        Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
        Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN);

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into non-blocking mode and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU usage? It's not clear to me. Whether I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()). I've tried compiling the program with -pg and creating a per-function CPU usage report with gprof. This wasn't useful because the time is not spent in my program, but in some external glib/glibmm DLL.

    Read the article

  • How to monitor MySQL query errors, timeouts and logon attempts?

    - by Abel
    While setting up a third-party closed-source CMS (Sitefinity), the setup doesn't create all the tables and procedures necessary to run it. The software lacks a logging system itself, and that made me wonder: could I trace and monitor failing SQL statements from MySQL? This would serve more than just solving my issue with Sitefinity; quite often I wonder what is being sent to the MySQL server without wanting to dive into the software products or set up a debugging environment, etc. I tried JetProfiler (performance only) and looked through a few others, but although they monitor a lot, they don't monitor query failures, timeouts or logon attempts. Does anyone know a profiler, tracer or monitoring tool, commercial or free, that can show me this information?

    Read the article

  • How can I Monitor the Performance of Individual Apps on Windows?

    My XP machine has become terribly slow and I want to identify the application at fault. It seems to be related to disk access rather than processor hogging. I can look at Task Manager to get a rough idea, but it's not ideal. I was wondering if there is some application that can monitor all aspects of processes effectively. Is Process Explorer my only hope?

    Read the article

  • Most efficient method of detecting/monitoring DOM changes?

    - by Graza
    I need an efficient mechanism for detecting changes to the DOM. Preferably cross-browser, but if there is an efficient means that is not cross-browser, I can implement it with a fail-safe cross-browser fallback. In particular, I need to detect changes that would affect the text on a page, so any new, removed or modified elements, or changes to inner text (innerHTML), would need to be caught. I don't have control over the changes being made (they could be due to third-party JavaScript includes, etc.), so it can't be approached from that angle; I need to "monitor" for changes somehow. Currently I've implemented a quick-and-dirty method which checks body.innerHTML.length at intervals. This won't, of course, detect changes which result in the same length being returned, but in this case it is "good enough": the chances of that happening are extremely slim, and in this project failing to detect a change won't result in lost data. The problem with body.innerHTML.length is that it's expensive. It can take between 1 and 5 milliseconds in a fast browser, and this can bog things down a lot; I'm also dealing with a large-ish number of iframes, and it all adds up. I'm pretty sure the expense comes from the fact that the innerHTML text is not stored statically by browsers and has to be built from the DOM every time it is read. The types of answers I am looking for are anything from the "precise" (for example, an event) to the "good enough": perhaps something as quick and dirty as the innerHTML.length method, but that executes faster.

    Read the article
