Search Results

Search found 26093 results on 1044 pages for 'process monitor'.


  • The Top 5 Business Challenges in Financial Services. Oracle Process Accelerators as a Solution By Lance Shaw

    - by JuergenKress
    Here at Oracle, we continue to release Process Accelerators for additional solutions. These Accelerators help achieve process excellence faster with end-to-end implementations of common business processes. They are ready to use and extensible, and include industry-specific best practices. One common industry where Process Accelerators are used to speed the delivery of business process management solutions is Financial Services. We've recently produced a whitepaper that identifies the top five business challenges in the financial services industry and outlines how adopting Oracle Process Accelerators can give a competitive edge. To get the whitepaper, please visit our website. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; to register, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: financial services, process accelerators, Lance Shaw, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Multi monitor setup with a tv as well?

    - by jasondavis
    I have had a dual-monitor setup on my PCs for years now. I am now in the market to build a new PC and get some new gear, and I need some advice on how to run 3 monitors plus a 4th display, which will be an LCD/plasma TV. I am thinking I can get 2 video cards, which will give me the hookups to run 3 monitors instead of 2. I will mount all 3 monitors in a row, side by side, and above them on the wall I would like to mount a larger 30-40+ inch LCD or plasma TV. I would then like to hook my satellite/cable feed up to this TV just as I would normally hook up a TV, but I would also like the option of viewing my PC on this TV. I know that is possible, but would it be possible to view my PC on that TV while still viewing my 3 other monitors, with the TV acting as a 4th display, so that I could dock a different app/window in each of the 4 displays (3 monitors + TV)? Please give me any tips/advice on how to do this, including what cables/software (if any)/converters/you name it. Thanks for any help

    Read the article

  • COMPAQ Tower No Signal to monitor

    - by Lancelot
    I received a Compaq tower from a friend: a Compaq Presario SR1224NX with onboard VGA, running Windows XP SP2. My plan was to turn this into an Ubuntu server. It booted up with no problems, even with the Ubuntu live disc. After a normal shutdown (not unplugging the power cord, and not doing a hard shutdown with the power button), it would not restart, even after SEVERAL attempts. I noticed the light next to the power supply was flashing very rapidly. I researched and found it was one of two things: a dead power supply, or faulty cables to the motherboard and disks. I checked the cables, and they were fine. I purchased a new power supply (this one has 400 watts; the original had 250) and installed it. The tower was then able to boot into the live disc and everything else. After another normal shutdown, it now restarts but sends no signal to my monitor. I have tried several monitors that I know work perfectly, but not with this tower (I recall that it did show a display right after I replaced the power supply). The monitors are Acers. This is different from most "no signal" problems, since I am not using an external video card; this is onboard VGA.

    Read the article

  • Configure a Windows PC as network appliance w/o monitor, keyboard and mouse

    - by Joshua Lim
    I intend to use a small form factor PC with Windows 7 Professional installed as a network appliance, attached directly to my customer's LAN without a monitor, keyboard, or mouse. How should I configure the networking on the PC so that I can access it from, say, my laptop? I figure I can do it one of two ways: 1) attach my laptop to the PC using a crossover cable, connect via RDP, and configure the networking that way; or 2) configure an IP address on the PC before I deliver it to the customer's place, then attach the PC to the LAN there and connect to the previously configured IP address from my laptop or from one of the customer's workstations. I know the first way is doable, but is the second way possible? I'm sorry if this question sounds ridiculous; I am a Delphi programmer but a novice at networking. Finally, if possible, I hope to make the configuration process web based, as I would rather not reveal that I am using Win7 Pro for the network appliance!
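
    The second way works fine as long as the address you pre-configure fits the customer's subnet. As a minimal sketch (the adapter name and all addresses are assumptions; substitute values from the customer's LAN), run something like this in an elevated command prompt before delivery:

        rem Pre-set a static address on the built-in NIC (name/addresses are examples)
        netsh interface ip set address name="Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1
        rem Allow Remote Desktop connections so the headless box is reachable
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
        rem Open the firewall's built-in Remote Desktop rule group
        netsh advfirewall firewall set rule group="remote desktop" new enable=Yes

    Once the PC is plugged into the LAN, mstsc /v:192.168.1.50 from any workstation on the same subnet should reach it.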

    Read the article

  • How can I monitor VNC via Nagios?

    - by atroon
    I have a number of remote sites which have VNC running on a few computers for support purposes. They are (obviously) only available on our internal network. I am using Nagios to keep track of all the systems in the network and I want to have it check to make sure the VNC server is running on the appropriate hosts. There is a 'check_vnc' plugin available here but it relies on VNC Snapshot which I don't want to use. Certainly I could use it, but it adds more complexity and dependency, which I want to avoid. It seems simpler to just use check_tcp to make sure I get the proper response to a connection request for VNC, e.g. port 5900, send a connect string, get back framebuffer info. My real question, I suppose, is this: What is the 'proper' generic connect string for VNC (I use both UltraVNC and RealVNC) and what is the expected response? If it's really easier to use the VNC Snapshot and check_vnc, let me know. I just can't imagine that a string of text isn't easier, faster, and less bandwidth intensive to monitor.
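
    For what it's worth, an RFB (VNC) server sends its protocol version banner, e.g. "RFB 003.008", as soon as the TCP connection opens, so no send string is needed; matching on the banner is enough to confirm a live VNC server. A minimal sketch using the stock check_tcp plugin (host and port are examples):

        ./check_tcp -H 192.168.10.5 -p 5900 -e "RFB"

    and the corresponding Nagios command definition:

        define command{
            command_name  check_vnc_banner
            command_line  $USER1$/check_tcp -H $HOSTADDRESS$ -p 5900 -e "RFB"
        }

    This only proves the server is answering on port 5900, not that the framebuffer is usable, which is the trade-off versus the snapshot-based plugin.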

    Read the article

  • "What happens?" server performance monitor

    - by AlexAtNet
    Hello! After reviewing some threads about server monitoring software, I ended up with a simple question: which of the server monitoring tools should I use for automatic detection of "abnormal" situations, with recommendations on how to fix them? I am looking for software that checks system performance after installation and calculates some average load values (memory, CPU, etc.). Then, when something happens (say, CPU load increases to 20%), it tries to detect the reason. If it is Apache, it should check the access logs. If it is MySQL, it should check the MySQL logs and tell me what happened. If it is because some user is decoding a lot of images, I'd like to know which command was executed, when, and under which user name. The same goes for disk usage, memory, number of processes, threads, and so on. Ideally, this software should periodically check the system and report problems: errors in the PHP error log, outdated packages, security vulnerabilities. In other words, I'm looking for software that will keep my simple Debian/Apache/PHP/MySQL server healthy without forcing me to monitor the charts every day. I hope such a program exists. Thanks, Alex

    Read the article

  • How to monitor bandwidth use of each device on wifi network

    - by GWLlosa
    I have in my home a standard Comcast cable internet connection, going from the wall to a cable modem, and from the modem to a late-series Linksys router, which provides wired and wireless networking. The vast majority of the users are wireless connections. For day-to-day tasks, this connection is fully sufficient for all my needs. However, on regular occasions, we have social gatherings that involve many people bringing laptops and other PCs and using the network and internet simultaneously, frequently for gaming. I have no administrative oversight over these machines; they have been known to be riddled with spyware and/or bloatware, or to be running torrents, legal or otherwise. The only reason I care is that on a regular basis, one of the machines will flatline my internet bandwidth and consume it all to upload/download/spam people/whatever. When this happens, the latency of the connections for gaming and the like becomes unacceptable, and everyone suffers. My question is: is there a system I can set up whereby I can easily monitor the various systems connected to my wireless connection, see how much bandwidth each one is using, and for what ends? That way, at a glance, I can spot the offending machine and kick it from the connection, without having to go from machine to machine, checking each one's "bandwidth used" properties manually, and dealing with the owner's indignant protests all the while. I understand this will likely involve 3rd-party software and/or hardware; my issue is I don't even know where to begin.

    Read the article

  • configure a Macbook Pro to use external monitor at boot (Debian Linux)

    - by Eric
    In the spirit of reuse, I've installed Debian (version 6.0.5 "squeeze") on my wife's old MacBook Pro (circa 2009 or so) to repurpose it for various tasks. The catch is that the display is flaky: it lasts a random amount of time, between 2 minutes and 2 hours, before freezing and graying out. This is a known issue with that generation of MBP. Fortunately it's no problem for me, as I plan to use it with an external monitor anyway. Which brings us to the problem: how do I configure this thing to output to the external display by default, and hopefully disable the built-in LCD? The ideal solution would be to modify a setting in the EFI (BIOS), but I'm not holding out much hope for that. Next best thing would be a kernel option I can pass to the NVIDIA driver. What won't work is a solution that doesn't give me a display until X starts: I need console access, especially given that the built-in LCD is dying and any day now might give out completely. So far I haven't been able to find anything online. lspci says I've got an NVIDIA GeForce 9400M. Help is much appreciated! Eric PS: if this question is better suited to the Unix & Linux area, please advise and I will move it.
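
    If the machine runs the open-source nouveau driver for the console (the proprietary NVIDIA blob ignores these parameters), kernel mode setting accepts per-connector video= options. A sketch, assuming the panel shows up as LVDS-1 and the external port as DVI-D-1; check the real connector names under /sys/class/drm/ first:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet video=LVDS-1:d video=DVI-D-1:e"

    Then run update-grub and reboot; the :d suffix disables the internal panel and :e forces the external output on, from the earliest console onward.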

    Read the article

  • ATI Firepro v4800 3 monitor support

    - by Jared275
    I have an interesting, yet very frustrating, problem. I have two computers, both running Win7 32-bit, and both with ATI FirePro V4800 graphics cards. Both are using the DVI port plus two DP-to-DVI adapters to connect 3 monitors. One of them is able to display three desktops, while the other fails at enabling the third: Display Properties reports "cannot save changes", and CCC insists that one monitor must be disabled before the change can be made. I've verified that both computers have the same driver version and that both are using the same DP-DVI adapters. This article suggests a few things to try, but none of its suggestions work either. I'm kind of at my wits' end, hence my posting here; if this is not an appropriate question for SU, I apologize. I admit that I am not very familiar with the differences between dual-link and single-link DVI, and that is something I have not verified is the same between the two computers. Is that a possible reason one is working and the other isn't? How do I check whether a DVI cable is single-link or dual-link?

    Read the article

  • How to know who accessed a file or if a file has 'access' monitor in linux

    - by J L
    I'm a noob and have some questions about seeing who accessed a file. I found there are ways to see whether a file was accessed (not modified/changed) through the audit subsystem and inotify. However, from what I have read online (http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html), to watch/monitor a file I have to set a watch with a command like: # auditctl -w /etc/passwd -p war -k password-file So if I create a new file or directory, do I have to use an audit/inotify command to set a watch first in order to see who accessed the new file? Also, is there a way to know whether a directory is being watched through the audit subsystem or inotify? And how/where can I check the log for a file? Edit: from further googling, I found this page (http://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html), which says: "The inotify API provides no information about the user or process that triggered the inotify event." So I guess this means I can't figure out which user accessed a file with inotify? Can only the audit subsystem be used to figure out who accessed a file?
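
    To the last two questions, the audit subsystem itself can answer both. A short sketch using only the standard auditd tools (the path and key are the ones from the linked article):

        # set a watch; -p war audits writes, attribute changes, and reads,
        # and -k tags matching records with a searchable key
        auditctl -w /etc/passwd -p war -k password-file
        # list the rules/watches currently in effect (answers "is it being watched?")
        auditctl -l
        # pull matching events (who, when, which program) out of /var/log/audit/audit.log
        ausearch -k password-file

    The ausearch records include the uid/auid and the executable, which is exactly what inotify cannot give you. So yes: new files need a watch set before the access happens, and audit rather than inotify is the tool for answering "who".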

    Read the article

  • Using screen to monitor non-interactive scripts (or some other solution)

    - by Michael
    I have some autonomous scripts that run commands on remote machines over ssh. These scripts rely on getting stdout, stderr, and the return code of each command run. I want to be able to monitor the progress of the scripts on each target machine so that I can see if something has hung and possibly intervene if necessary. My initial idea was to have the scripts run commands in a screen session, so that the person monitoring could simply attach to the session with screen -x. However, it was hard to do that from a script since screen is an interactive program. I can send a command to the screen session with screen -S session -X stuff "command^M", but then I don't get the output and return code that I need back. My second idea was to put script /path/to/log in ~/.bash_profile and log the entire session to a file. Then the monitoring person could simply tail the log file. However, this doesn't provide the interactivity that I was looking for. Any ideas on how to solve this problem?
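
    One alternative that keeps everything non-interactive: redirect the remote command's streams through tee into log files. The script still gets stdout, stderr, and the return code, while a human can simply tail -f the logs to watch progress. A bash sketch with hypothetical paths and host names:

        #!/bin/bash
        logdir=/var/log/autoscripts            # hypothetical location
        # live copies of both streams land in files a monitor can tail -f
        ssh target 'some_command' > >(tee "$logdir/out.log") 2> >(tee "$logdir/err.log" >&2)
        rc=$?                                  # ssh propagates the remote exit code
        stdout_data=$(<"$logdir/out.log")
        stderr_data=$(<"$logdir/err.log")

    One caveat: the tee processes run asynchronously, so on very short commands the log files may still be flushing when you read them back; a brief sleep before the reads papers over that in practice.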

    Read the article

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a bit of reading about JVM memory (here, here, here, and others I forgot...), I was expecting the resident set size of my Java process to be roughly equal to the current heap space capacity. That's not what the numbers say; it seems to be roughly equal to the max heap space capacity. Resident set size:

        # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc
        11507912
        # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};'
        Number of processes = 1
        Memory usage per process = 11237.8 MB
        Total memory usage = 11237.8 MB

    Java heap:

        # jmap -heap 1
        Attaching to process ID 1, please wait...
        Debugger attached successfully.
        Server compiler detected.
        JVM version is 24.55-b03

        using thread-local object allocation.
        Garbage-First (G1) GC with 18 thread(s)

        Heap Configuration:
           MinHeapFreeRatio = 10
           MaxHeapFreeRatio = 20
           MaxHeapSize      = 10737418240 (10240.0MB)
           NewSize          = 1363144 (1.2999954223632812MB)
           MaxNewSize       = 17592186044415 MB
           OldSize          = 5452592 (5.1999969482421875MB)
           NewRatio         = 2
           SurvivorRatio    = 8
           PermSize         = 20971520 (20.0MB)
           MaxPermSize      = 85983232 (82.0MB)
           G1HeapRegionSize = 2097152 (2.0MB)

        Heap Usage:
        G1 Heap:
           regions  = 2560
           capacity = 5368709120 (5120.0MB)
           used     = 1672045416 (1594.586769104004MB)
           free     = 3696663704 (3525.413230895996MB)
           31.144272834062576% used
        G1 Young Generation:
        Eden Space:
           regions  = 627
           capacity = 3279945728 (3128.0MB)
           used     = 1314914304 (1254.0MB)
           free     = 1965031424 (1874.0MB)
           40.089514066496164% used
        Survivor Space:
           regions  = 49
           capacity = 102760448 (98.0MB)
           used     = 102760448 (98.0MB)
           free     = 0 (0.0MB)
           100.0% used
        G1 Old Generation:
           regions  = 147
           capacity = 1986002944 (1894.0MB)
           used     = 252273512 (240.5867691040039MB)
           free     = 1733729432 (1653.413230895996MB)
           12.702574926293766% used
        Perm Generation:
           capacity = 39845888 (38.0MB)
           used     = 38884120 (37.082786560058594MB)
           free     = 961768 (0.9172134399414062MB)
           97.58628042120682% used

        14654 interned Strings occupying 2188928 bytes.

    Are my expectations wrong? What should I expect? I need the heap space to be able to grow during spikes (to avoid very slow full GCs), but I would like the resident set size to be as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that? Environment: Linux 3.13.0-32-generic x86_64, java version "1.7.0_55", running in Docker version 1.1.2. Java is running elasticsearch 1.2.0:

        /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch

    There are actually 5 elasticsearch nodes, each in a different Docker container; all have about the same memory usage. Some stats about the index: size: 9.71Gi (19.4Gi), docs: 3,925,398 (4,052,694)
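
    A quick check is to compare the heap the JVM has committed (actually reserved from the OS and touched) with what is currently used; RSS tracks committed-and-touched pages, not live data. With -Xms5g the JVM commits at least 5 GB up front, and in practice the G1 collector on JDK 7 rarely returns committed heap pages to the OS, so RSS tends to drift toward -Xmx and stay there. A sketch, assuming jstat from the same JDK can attach to the process (PID 1 inside this container):

        # committed capacities per generation, in KB (NGC/OGC/PGC columns)
        jstat -gccapacity 1
        # current used vs. capacity per space, to compare with the jmap numbers
        jstat -gc 1

    If the committed figures roughly match RSS, the JVM, not a leak, owns the gap between "used" and "resident".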

    Read the article

  • Tomcat shutdown does not kill process

    - by vijay.shad
    Hi all, I have some problems with my Tomcat instance. I am using apache-tomcat-6.0.20 for Linux; my OS is CentOS. When I execute the command # bin/shutdown.sh it does not kill the process that is running Tomcat. Can anybody please give me some idea what is happening with the process?
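
    This usually means shutdown.sh delivered the stop request, but some non-daemon or hung application thread kept the JVM alive; the script only asks Tomcat to exit, it never kills anything by itself. One workaround supported by the stock Tomcat 6 scripts, provided CATALINA_PID is set before startup (the PID-file path here is an example):

        export CATALINA_PID=/var/run/tomcat.pid
        bin/startup.sh
        # later: wait up to 30 seconds for a clean stop, then kill -KILL the PID
        bin/shutdown.sh 30 -force

    A thread dump (kill -3 <pid>) taken while the process lingers will show which thread is refusing to die.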

    Read the article

  • CPU/Mem/Disk utilization (average) after process has completed

    - by BassKozz
    Ubuntu Server 9.10. There is the time command, which shows how long a specific process/command took to run, after the command has completed. For example:

        :~$ time ls

        real    0m0.020s
        user    0m0.000s
        sys     0m0.000s

    I'd also like to collect the average CPU usage, memory, and disk (I/O) utilization after the process has completed, using time (or another command if necessary). How can I accomplish this? Mainly I am using this to benchmark MySQL import performance with different innodb_buffer_pool_size settings.
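
    The GNU time binary (as opposed to the bash builtin of the same name) already reports most of this with its verbose flag. A sketch against a hypothetical import:

        /usr/bin/time -v mysql mydb < dump.sql

    The verbose report includes "Percent of CPU this job got" (average CPU), "Maximum resident set size" (peak memory), and "File system inputs/outputs" (disk I/O counts), which covers the per-run numbers without any extra tooling.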

    Read the article

  • Establishing a web page bookmarking process - looking for ideas to improve

    - by Matt
    Like many others, I have a process for bookmarking web pages to read later. My requirements for web page bookmarking are:

    1. Ability to bookmark pages must be available from all (within reason) platforms - PC/browser, mobile device, etc.
    2. Bookmarks must be centrally stored (implicit from #2) so that I can read the bookmarks from anywhere/any device
    3. Full text of web pages must be stored

    Bonus features would be:

    - Bookmarks and page content should be full text searchable
    - Maintain an archive indefinitely
    - Distinguish between what's read vs. unread
    - Bookmarked page content is cleaned up, e.g. ads eliminated, unnecessary html removed, pages better formatted for reading

    My current process (which addresses most of these requirements) is as follows:

    - I set up a Gmail account with 2 labels, "Bookmarks Unread" and "Bookmarks Read"
    - Gmail filters are set up such that, depending on the form of the address (using Gmail's '+string' functionality in addresses), the incoming bookmark gets labeled appropriately
    - On each of my browsers/devices, I have an address book entry for [email protected] and [email protected]
    - If I want to clean up the page content, I use the Readability bookmarklet, which does a great job of giving me the essential content only
    - Anywhere I have Firefox, I use the Send Page by Email extension which, with 2 clicks, allows me to send the cleaned-up Readability page URL and content to one of the above email addresses
    - Where I don't have Firefox (e.g. iPhone or other mobile device), I use the native ability to send the current link via email (most/all apps have it, including the browser, RSS readers, NYTimes, etc.). In most cases (unless it's built into the particular app), this won't include the page body.

    The process is almost perfect. I've got the central access and ubiquitous access of Gmail as the storage mechanism, full text searchability (due to Gmail, but of course only for the URLs I send from that Firefox extension), a cleaned-up page due to Readability, the ability to read offline (assuming I use an IMAP client against Gmail), and permanent archiving of content, including what's been read vs. unread. The missing pieces are:

    - The Send Page by Email Firefox extension seems to send only X bytes of a web page, or some portion, so it limits my full text searchability.
    - Where I don't have Firefox, I can only send the link, so no full text search at all in those cases.

    Instapaper looks like it meets most of my requirements (and bonus items). The only downside to me (personal preference) is that central storage is based on Instapaper vs. something more broad like Gmail, which as a generalized service and with Google behind it pretty much means it's permanent. I'm not too hung up on this, but I would definitely prefer to keep Gmail if possible. An upside of Instapaper is that it does the page clean-up as well as stores the entire page content, unlike my Firefox extension. Thoughts on addressing the gaps and improving this process further?

    Read the article

  • Linux per-process resource limits - a deep Red Hat Mystery

    - by BobBanana
    I have my own multithreaded C program which scales smoothly in speed with the number of CPU cores: I can run it with 1, 2, 3, etc. threads and get linear speedup, up to about 5.5x on a 6-core CPU on an Ubuntu Linux box. I had an opportunity to run the program on a very high-end Sunfire X4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads... but it runs at the same speed as just TWO threads! Much hair-pulling and debugging later, I see that my program really is creating all the threads, they really are running simultaneously, but the threads themselves are slower than they should be. 2 threads run about 1.7x faster than 1, but 3, 4, 8, 10, or 16 threads all run at just a net 1.9x! I can see all the threads are running (not stalled or sleeping), they're just slow. To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores, they really do run at full speed, and there really is enough RAM (in fact this machine has 64GB, and I only use 1GB per process). So, my question is whether there's some OPERATING SYSTEM explanation, perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine. Clues are:

    - My program does not access the disk or network. It's CPU-limited.
    - Its speed scales linearly on a single-CPU box in Ubuntu Linux with a hexacore i7 for 1-6 threads; 6 threads is effectively a 6x speedup.
    - My program never runs faster than a 2x speedup on this 16-core Sunfire Xeon box, for any number of threads from 2-16.
    - Running 16 copies of my program single-threaded runs perfectly, all 16 running at once at full speed.
    - top shows 1600% of CPU allocated, and /proc/cpuinfo shows all 16 cores running at the full 2.9GHz speed (not the low-frequency idle speed of 1.6GHz).
    - There's 48GB of RAM free; it is not swapping.

    What's happening? Is there some per-process CPU limit policy? How could I measure it if so? What else could explain this behavior? Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!
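
    Two quick things worth ruling out before blaming policy, sketched with standard tools (the program name is a placeholder, and /proc/<pid>/limits needs kernel 2.6.24 or later):

        # per-process limits as the kernel actually applies them
        cat /proc/$(pgrep -o myprogram)/limits
        # limits the launching shell imposes on children
        ulimit -a
        # watch the live core clocks while the 16-thread run is underway
        watch -n1 "grep 'cpu MHz' /proc/cpuinfo"

    Neither rlimits nor frequency scaling obviously fits the symptoms here (16 independent processes do run at full speed), so cross-socket effects such as a shared lock or a cache line bouncing between the 4 sockets are also worth considering.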

    Read the article

  • Bacula backup process always blocks the restore

    - by georgehu
    Every day we have a long-running catalog backup process, and I found there is no way to restore a file during the backup. Is Bacula designed to block restores while a backup is running? I'm using disk backup, and I can't understand why I can't restore files from earlier written volumes, since the backup process should not be writing to the same volume file.

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs - a big binary and a small XML index - and we get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory; it gets rid of them when it's done. Would rsync do the best job, or do we need to get more complex? Long story as follows: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now - the SFTP servers - that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines and deletes the files from the source. However, we're finding any number of bugs and poorly handled error cases. None of us are Java experts, so fixing/improving this may be difficult. Our concerns are:

    - With a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?
    - The ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present. Our current copy jobs are supposed to copy the XML file last.
    - Sometimes the network has problems, and sometimes the SFTP source servers crap out on us. Sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file due to this sort of error.
    - We need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
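
    On the ordering requirement, rsync can be scripted to send the indexes last. A sketch with hypothetical host/path names, assuming a flat source directory and that the source boxes allow rsync over SSH (a server that only speaks SFTP cannot run the rsync protocol, which may rule this out for some of your sources):

        # pass 1: the big binaries; rsync writes to a temp name and renames,
        # so the ingester never sees a half-copied file at the destination
        rsync -av --partial --exclude='*.xml' --exclude='*.ctl' sftpbox:/outbound/ /staging/
        # pass 2: the small XML indexes and CTL manifests last, so a bundle
        # only "appears" complete once everything else has landed
        rsync -av --include='*.xml' --include='*.ctl' --exclude='*' sftpbox:/outbound/ /staging/

    What rsync cannot tell you is whether a *source* file is still growing; verifying each bundle against its CTL file (and only then deleting the sources) still needs a wrapper script, much like your Perl original.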

    Read the article

  • new ActiveXObject('Word.Application') creates new winword.exe process when IE security does not allo

    - by Mark Ott
    We are using MS Word as a spell checker for a few fields on a private company web site, and when the IE security settings are correct it works well. (The zone for the site is set to Trusted, and the Trusted zone is modified to allow the control to run without prompting.) The script we are using creates a Word object and closes it afterward. While the object exists, a winword.exe process runs, but it is destroyed when the Word object is closed. If our site is not in the Trusted zone (Internet zone with the default security level), the call that creates the Word object fails as expected, but the winword.exe process is still created. I do not have any way to interact with this process in the script, so the process stays around until the user logs off (users have no way to manually destroy the process, and it wouldn't be a good solution even if they did). The call that attempts to create the object is:

        try {
            oWordApplication = new ActiveXObject('Word.Application');
        } catch(error) {
            // irrelevant code removed, described in comments:
            // notify user spell check cannot be used
            // disable spell check option
        }

    So every time the page is loaded, this code may be run again, creating yet another orphan winword.exe process. oWordApplication is, of course, undefined in the catch block. I would like to be able to detect the browser security settings beforehand, but I have done some searching on this and do not think it is possible. Management here is happy with it as it is: as long as IE security is set correctly, it works, and it works well for our purposes. (We may eventually look at other options for spell check functionality, but this was quick, inexpensive, and does everything we need it to do.) This last problem bugs me and I'd like to do something about it, but I'm out of ideas and I have other things that are more in need of my attention. Before I put it aside, I thought I'd ask for suggestions here...

    Read the article

  • Bypassing confirmation prompt of an external process

    - by Alidad
    How can I convert this Perl code to Groovy? How do I bypass the confirmation prompts of an external process? I am trying to convert a Perl script to Groovy. The program loads/deletes Maestro (job scheduling) jobs automatically. The problem is that the delete command prompts for confirmation (Y/N) on every single job it finds. I tried executing the process in Groovy, but it stops at the prompts. The Perl script writes a bunch of Ys to the stream and prints it to the handle (if I understood it correctly) to avoid stopping. I am wondering how to do the same thing in Groovy, or any other approach to execute a command and somehow answer Y at every confirmation prompt. Perl script:

        $maestrostring = "";
        $x = 0;
        while ($x < 1500) {
            $maestrostring .= "y\n";
            $x++;
        }
        # delete the jobs
        open(MAESTRO_CMD, "|ssh mserver /bin/composer delete job=pserver#APPA@");
        print MAESTRO_CMD $maestrostring;
        close(MAESTRO_CMD);

    This is my Groovy code so far:

        def deleteMaestroJobs() {
            ...
            def commandSched = "ssh $maestro_server /bin/composer delete sched=$primary_server#$app_acronym$app_level@"
            def commandJobs  = "ssh $maestro_server /bin/composer delete job=$primary_server#$app_acronym$app_level@"
            try {
                executeCommand commandJobs
            } catch (Exception ex) {
                throw new Exception("Error executing the Maestro Composer [DELETE]")
            }
            try {
                executeCommand commandSched
            } catch (Exception ex) {
                throw new Exception("Error executing the Maestro Composer [DELETE]")
            }
        }

        def executeCommand(command) {
            def process = command.execute()
            process.withWriter { writer ->
                1500.times { writer.println 'Y' }
            }
            process.consumeProcessOutput(System.out, System.err)
            process.waitFor()
        }
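
    If all that is needed is a flood of affirmative answers, the classic Unix answer is yes(1), which prints "y" forever, no Groovy plumbing required. A sketch with the same hypothetical server/job names as the Perl version:

        yes | ssh mserver '/bin/composer delete job=pserver#APPA@'

    The Groovy version above does essentially the same thing by hand; just note that the Perl script answers with a lowercase "y" while the Groovy code sends "Y", so match whichever form composer actually accepts.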

    Read the article

  • Virus ridden computer freezes on startup - can't access safe mode

    - by Eric
    Someone whom I love, but who cannot be trusted with a live internet connection, downloaded a particularly nasty virus that in turn downloaded a variety of unknown other viruses onto my home computer. The computer now freezes completely a few seconds after reaching the desktop and is unresponsive to any keyboard or mouse command. There are videos of my little kid on this hard drive that are not backed up and that I cannot bear to lose, but if I could get in there long enough to copy them off to an external drive, I would have no problem doing a clean Windows install to fix the problem; everything else is backed up online, but the videos were too large. Normally I would start by going into Safe Mode, but I have a large Dell monitor that doesn't show anything until the welcome screen appears. I think I have gotten into the setup screen once or twice by mashing keys before I can see anything, but since this monitor doesn't display anything that early, I can't see what I'm doing to get it to boot from CD or anything else. I'm at my wits' end. Any advice?

    Read the article
