Search Results


  • Find a non-case-sensitive text string within a range of cells

    - by Iszi
    I've got a bit of a problem to solve in Excel, and I'm not quite sure how to go about doing it. I've done a few searches online, and haven't really found any formulas that seem to be useful. Here's the situation (simplified just a bit, for the purpose of this question): I have data in columns A-E. I need to match data in the cells in A and B with data in C-E, and return TRUE or FALSE to column F. Return TRUE if:
    - The string in A is found within any string in C-E, OR
    - The string in B is found within any string in C-E.
    Otherwise, return FALSE. The strings must be exact matches for whole or partial strings within the range, but the matching function must be case-insensitive. I've taken a screenshot of an example sheet for reference. I'm fairly sure I'll need to use IF on the outermost layer of the formula, probably followed by OR. Then, for the arguments to OR, I'm expecting there will be some use of IFERROR involved. But what I'm at a loss for is the function I could most efficiently use to handle the text string searches. VLOOKUP is very limited in this regard, I think. It may be workable to do whole-string against whole-string comparisons, but I'm fairly certain it won't return accurate results for partial string matches. FIND and SEARCH appear limited to only single-target searches, and are also case-sensitive. I suppose I could use UPPER or LOWER to force case-insensitivity in the search, but I still need something that can do accurate partial matching and search a specified range of cells. Is there any function, or combination of functions, that could work here? Ideally, I want to do this with a straight Excel formula. I'm not at all familiar with VBScript or similar tools, nor do I have time to learn it for this project.
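    One formula pattern that might fit (a sketch, assuming the data starts in row 2, A and B are never blank, and a delimiter such as "|" never occurs in the data): SEARCH, unlike FIND, is case-insensitive and matches partial strings, and wrapping it in ISNUMBER turns its #VALUE! error into FALSE.

        =OR(ISNUMBER(SEARCH(A2,C2&"|"&D2&"|"&E2)),ISNUMBER(SEARCH(B2,C2&"|"&D2&"|"&E2)))

    Filled down column F, this returns TRUE when the string in A2 or B2 occurs anywhere inside C2:E2; the delimiter keeps a match from spanning two adjacent cells.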

    Read the article

  • pros and cons with server management gui tools to manage linux web servers

    - by ajsie
    I have stumbled upon these GUI tools that could help you manage your Linux server through a web interface: ebox, Webmin, ISPConfig, Zivios, ispCP, Plesk, cPanel etc. I wonder what the pros and cons are with these solutions. A lot of people are saying that they are not as good as using the pure command line (ssh) to manage your server, but I think that's yet another "Linux is for advanced users" talk. I agree that a lot of things may only be done with the command line by editing the configuration files directly, but I don't really want to do that every time and for everything, especially the basic configuration these tools could manage. It's like not having phpMyAdmin for managing MySQL - it would be a pain in the ass, right? So if one wants to throw up a web server serving a PHP site one developed oneself, and wants all the usual stuff up and running (MySQL, phpMyAdmin, SVN, WebDAV etc), are these tools the right way to go? And for more advanced features, one just uses the terminal like in the old days. Is this a smart way of managing a Linux server? Which one would you choose? Have you used any of these and could you share your thoughts about them?

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?
    Discovery: In particular I have been reading "SSD and NTFS Compression Speed Increase?", "Does NTFS compression slow SSD/flash performance?", and "Will somebody benchmark whole disk compression (HD, SSD) please?" (may have to scroll up). The first link is particularly dreamy, but maybe heads a little too far into the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): "Using highly compressible data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]." Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.
    Idea: VirtualStore. It so happens that thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. Which means, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.
    Testing: The beginning of this post has me a bit timid: it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
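    For the folder-level testing, the compact utility can both apply and fully undo NTFS compression, which sidesteps the whole-drive concern; a sketch (run from an elevated prompt, and adjust the path to taste):

        rem compress Program Files and everything under it, ignoring errors on locked files
        compact /c /s:"C:\Program Files" /i
        rem report the compression state of the tree
        compact /s:"C:\Program Files" /q
        rem undo: decompress the same tree in place
        compact /u /s:"C:\Program Files" /i

    Running compact without /c or /u just reports the current state, which speaks to the "only marked for decompression?" worry: /u rewrites the files decompressed rather than deferring the work to later reads.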

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that:
    - Allows my users to share large files with:
      - The general public
      - securely, to 1 or more people (notification via email, optionally with a token that gives them x period of time to download)
    - Allows anyone in the general public to share files with my users, perhaps by invitation
    - Is user friendly enough to allow my users to use this without having to bug me as the admin
    - Is a system that we can install on our own server (we don't want shared data sitting on anyone else's server)
    - Is a web based solution. Using some kind of secure comms channel would be good too, e.g. SSH
    - Handles files to share that could be over 1 GB.
    I found the question below. WebDav does not sound user friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen
    EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • SNMPD running but not listening for connections at random

    - by Lukasz
    OS: CentOS release 5.7 (Final)
    Net-SNMP: net-snmp-5.3.2.2-14.el5_7.1 (from RPM)
    Periodically my NMS notifies me that SNMP has gone down on this machine. The service is restored within 10 to 30 minutes. My NMS also pings and checks SSH, and those services are not affected during the SNMP outage. The SNMPD log file shows that it is working and apparently receiving packets (either from local agents at 127.0.0.1 or from my NMS at 172.16.37.37), however attempting to snmpwalk locally or from the NMS system fails with a timeout. I have 7 of these servers running a mixture of CentOS 5.7 and RHEL 5.7 with this specific version of Net-SNMP installed from RPM - none of them have this issue except this one. 5 of the machines (including the NMS system and this problem server) are in the same rack, connected using one switch. Restarting SNMPD does not fix the issue - it clears up by itself eventually. Any suggestions where I can begin diagnosing the issue? It's a closed subnet so IPTables is not used. SNMPD config below:
        # Following entries were added by HP Insight Management Agents at
        #      Tue May 15 10:58:17 CLT 2012
        dlmod cmaX /usr/lib64/libcmaX64.so
        rwcommunity public 127.0.0.1
        rocommunity public 127.0.0.1
        rwcommunity 3adRabRu 172.16.37.37
        rocommunity 3adRabRu 172.16.37.37
        rwcommunity 3adRabRu 172.16.37.36
        rocommunity 3adRabRu 172.16.37.36
        trapcommunity callmetraps
        trapsink 172.16.37.37 callmetraps
        trapsink 172.16.37.36 callmetraps
        syscontact Lukasz Piwowarek
        syslocation Santiago, Chile
        # ---------------------- END --------------------
        agentAddress udp:161
        com2sec rwlocal default public
        com2sec rolocal default public
        com2sec subnet  default 3adRabRu
        group rwv2c v2c rwlocal
        group rov2c v2c rolocal
        group rov2c v2c subnet
        view all included .1
        access rwv2c "" any noauth exact all all none
        access rov2c "" any noauth exact all none none
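    A few things that might catch the failure in the act the next time the NMS alarms (a sketch; the interface name is an assumption for a typical CentOS 5 box):

        # is snmpd still bound to UDP 161, and is anything reaching it?
        netstat -anup | grep ':161'
        tcpdump -n -i eth0 udp port 161 and host 172.16.37.37
        # is the daemon stuck rather than dead? attach and watch its syscalls
        strace -p $(pidof snmpd)
        # try a local walk that bypasses the network entirely
        snmpwalk -v2c -c public 127.0.0.1 system

    If the local walk also times out while the log claims packets are arriving, the daemon is wedged (strace will usually show where); if only the remote walk fails, the switch or host networking side becomes the suspect.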

    Read the article

  • Cisco ASA intermittently fails to see traffic

    - by DrStalker
    users
      |
    Mikrotik -- Internet
      |
    ASA
      |
    ServerA and ServerB
    I'm trying to troubleshoot a problem with a new Cisco ASA 5505. The network design is as above - the Mikrotik is the existing router, and ServerA and ServerB used to plug directly into it. ServerA has IP 10.30.1.10, ServerB has IP 10.30.1.11. The ASA is configured with no NAT, an "allow anything" firewall, and uses the Mikrotik as its default gateway. In effect, it is currently a simple IP router; the firewall and VPN stuff will all come later once the basics are working. The problem is that access to ServerA and ServerB is erratic - sometimes it will work, sometimes it will fail. It can fail for either one of the servers only, or both.
    When it is working:
    - The Mikrotik logs show ping packets being sent out over the proper interface
    - The ASA logs show the incoming connections.
    When it is failing:
    - The Mikrotik logs show ping packets being sent out over the proper interface
    - The ASA logs show nothing reaching the ASA.
    This can fail for one server only (e.g. the Mikrotik is putting out packets to 10.30.1.10 and 10.30.1.11, but the ASA is only seeing packets arrive destined for 10.30.1.11). It can fail for one source only (e.g. ClientA on the users network can ping 10.30.1.11, but ClientB cannot). The problem can also be seen from the Mikrotik router itself; sometimes it can ping ServerA and ServerB, sometimes it can only ping one of them. What could be causing this? I can't think of any possible cause that is intermittent and could explain why the problem may occur for one destination server and not others. edit: Link to ASA config
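    When it is in the failing state, a capture on the ASA itself would show whether packets for the "missing" server are arriving and being dropped or never arriving at all; a sketch (the interface names "outside" and "inside" are assumptions - substitute whatever your config calls the Mikrotik-facing and server-facing interfaces):

        capture capout interface outside match ip any host 10.30.1.10
        capture capin interface inside match ip any host 10.30.1.10
        show capture capout
        show capture capin
        show arp | include 10.30.1
        no capture capout
        no capture capin

    If the Mikrotik shows the ping leaving but capout never sees it, the drop is upstream of the ASA - stale ARP on the Mikrotik for server IPs that are now answered by the ASA is a classic suspect when a firewall is inserted in front of hosts that used to be directly attached. If capout sees it and capin does not, the ASA itself is eating it and its ASP drop counters are the next thing to check.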

    Read the article

  • sendmail.exe: Error during delivery. Service not available, closing transmission channel

    - by user2810332
    I have a module in my system that will trigger an email and send it to the user. The email is sent when a product in my system is going to expire soon. I tested the whole module on localhost and there was no problem with it. Then I finally moved the module to my server, but it gives this error: "sendmail: Error during delivery: Service not available, closing transmission channel." It also creates a text file on my desktop that contains information like this:
        command line      : C:\wamp\sendmail\sendmail.exe -t -i
        executable        : sendmail.exe
        exec. date/time   : 2011-06-18 01:10
        compiled with     : Delphi 2006/07
        madExcept version : 3.0l
        callstack crc     : $fecf9b34, $5562b2fa, $5562b2fa
        exception number  : 1
        exception class   : EIdSMTPReplyError
        exception message : Service not available, closing transmission channel.
        main thread ($15b0):
        0045918a +003e sendmail.exe IdReplySMTP      501   +1 TIdReplySMTP.RaiseReplyError
        0043ff28 +0008 sendmail.exe IdTCPConnection  576   +0 TIdTCPConnection.RaiseExceptionForLastCmdResult
        004402f4 +003c sendmail.exe IdTCPConnection  751  +10 TIdTCPConnection.CheckResponse
        0043feba +002a sendmail.exe IdTCPConnection  565   +2 TIdTCPConnection.GetResponse
        004403fd +002d sendmail.exe IdTCPConnection  788   +4 TIdTCPConnection.GetResponse
        0045ab97 +0033 sendmail.exe IdSMTP           375   +4 TIdSMTP.Connect
        004b5f14 +1060 sendmail.exe sendmail         808 +326 initialization
        77013675 +0010 kernel32.dll BaseThreadInitThunk
        thread $cf8:
        77a400e6 +0e ntdll.dll NtWaitForMultipleObjects
        77013675 +10 kernel32.dll BaseThreadInitThunk
        thread $1088:
        77a41ecf +0b ntdll.dll NtWaitForWorkViaWorkerFactory
        77013675 +10 kernel32.dll BaseThreadInitThunk
    May I know what the problem behind this error is? Is it something like a firewall on the server blocking my sendmail.exe, or something else? FYI, I'm using WAMP and sendmail to send the email. This is my first time seeing an error like this. I need an explanation on this. Thank you.
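    "Service not available, closing transmission channel" is SMTP reply 421 coming back from (or on behalf of) the mail server that sendmail.exe relays through, so the usual suspects are the relay settings in sendmail.ini or the hosting server's firewall blocking outbound port 25. A sketch of the relevant sendmail.ini entries for the sendmail wrapper bundled with WAMP (server, port and credentials are placeholders, and the exact keys may differ by version):

        [sendmail]
        ; relay host this sendmail.exe should hand mail to
        smtp_server=smtp.example.com
        ; 25 is often blocked on hosted boxes; 587 with authentication is a common alternative
        smtp_port=587
        auth_username=user@example.com
        auth_password=secret

    A quick `telnet smtp.example.com 25` (or 587) run on the server itself would also show whether the relay is reachable from that machine at all.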

    Read the article

  • Printer sharing problem (win7 / WinXP): Canon pixma USB printer

    - by Rabarberski
    For a friend, I am trying to share a USB Canon Pixma iP3000 printer between two computers in his home network, but I can't get it to work due to a Canon driver problem. The printer is connected to the Windows 7 (64-bit) computer, and we would like to be able to print from a Windows XP computer. 'Normally' it should be no problem to use Windows printer sharing; however, because one machine is 32-bit and the other is 64-bit, installing an extra driver is required. The driver provided by Canon (here) is described as a 'Canon Inkjet Printer Driver Add-On Module'. The problem is that the .inf file contained in the .exe file isn't accepted as a driver when prompted by the Printer Sharing Wizard, I suspect because it is an add-on driver (whatever that may be). I've connected and installed the printer locally on the XP machine first (which works), so that the XP machine would already know the driver when using it as a network printer, but that doesn't work; the wizard still wants a driver file. Any suggestions on how to get this working? Maybe there is some sort of generic driver (it would be OK even with limited functionality)?

    Read the article

  • Debian and Multipath IO problem

    - by tearman
    Basically the situation is: I have a box running Debian. The box internally has an Intel SCSI RAID controller which is controlling 2 hard drives in RAID1 mode, which is where the OS is installed. Further, I have a QLogic fibre channel adapter that connects the unit to a Fibre Channel SAN. My process of installation is that I'll install Debian to the local drives and leave the QLogic firmware out of it for the time being. Then, once I get the unit online, I'll install the firmware drivers. This flops my internal drives from /dev/sda to /dev/sdc, which is a bit annoying, but recoverable. I probably should address these by UUID anyway. Once I get back online, I have to install multipath-tools (the framework is a multipath framework). However, once I reboot the machine again, it fails on boot after discovering multipath targets, saying my local drives are busy and cannot be mounted to /root. Any help with what may be the problem here? Or at least with how to make multipath ignore the internal drives, or disable it until after the unit boots?
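    One way to keep multipath-tools' hands off the internal RAID volume is to blacklist it in /etc/multipath.conf and rebuild the initramfs, so the early-boot multipath run sees the same exclusion. A sketch; the WWID is a placeholder you would take from something like `/lib/udev/scsi_id -g -u /dev/sda` or the output of `multipath -ll`:

        # /etc/multipath.conf
        blacklist {
            # the internal RAID1 volume that holds the OS - never multipath it
            wwid 3600508b1001c0123456789abcdef012345
            # or, less robustly, blacklist by device node pattern
            # devnode "^sd[ac]$"
        }

    followed by `update-initramfs -u` and a reboot.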

    Read the article

  • PowerPoint '10 avoid animation completion on click & advance slide or start new one

    - by ScottS
    Scenario:
    - I have PowerPoint 2010.
    - On the "Transitions" tab the "Advance Slide On Mouse Click" check box is checked.
    - I have a long, slow, timed, non-repeating animation working in the background of the slide.
    - I click to advance the slide before the animation is finished, but ...
    - Instead of advancing the slide, the animation moves to the completed state ...
    - Forcing a second click to actually advance the slide.
    Additionally: If I have other animations on the slide that are initiated by a click, the long animation also advances to a finished state before starting the new animation.
    Desired Behavior: On click, I want the slide to advance or the next on-click animation to start whether the long animation is done or not, and without having that long animation first "complete" itself. In the case of another animation, I simply want the long animation to continue, while also doing the new animation.
    Ultimate Question: Is there a way to either:
    1. Set an option somewhere to not have that animation complete on click and simply "continue" to animate with the start of a new animation or to advance the slide (as the case may be)?
    2. Create a VBA script that will produce the desired behavior for the long animation?
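    On the VBA side, one avenue worth testing is to trigger the advance from a macro assigned to a transparent full-slide shape (via Insert > Action > Run macro) rather than from the normal mouse click; whether that bypasses the "finish the animation first" behaviour is something you would have to verify in the slide show, so treat this as a sketch rather than a tested fix:

        ' Advance the running slide show to the next click event / next slide
        ' without waiting at the current animation build.
        Sub SkipAheadNow()
            ActivePresentation.SlideShowWindow.View.Next
        End Sub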

    Read the article

  • How do you host multiple public facing websites on a VPS?

    - by pedroarvy
    We host about 30 websites using typical shared hosting plans using ASP.NET and SQL 2000/2005/2008. I am now wondering about hosting all of these websites using our own virtual private server. This is clearly cheaper, but comes with a lot of questions I need answers to:
    1. Is the risk of having to keep this VPS server up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn.
    2. When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host, e.g. ns1.sharedwebhost.com and ns2.sharedwebhost.com. These name servers are going to have to change when we have a VPS. I don't understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS?
    3. The control panel we have for shared hosting comes with DNS management like this: http://www.yart.com.au/stackoverflow/dns.png What software would I need to install to create this for each site we host on a VPS?
    4. The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily by our customers. Is this something that can be easily set up on a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?
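    On the name server question, a sketch of one common arrangement (names and IP are placeholders): you do not have to run DNS on the VPS itself. The registrar's DNS, or any third-party DNS host, can serve each customer's zone, and every site then becomes an A record pointing at the VPS's single public IP, with IIS telling the sites apart by host header.

        ; zone for customer1.com
        customer1.com.       IN  A   203.0.113.10
        www.customer1.com.   IN  A   203.0.113.10

        ; zone for customer2.com - same VPS address, different host-header binding in IIS
        customer2.com.       IN  A   203.0.113.10
        www.customer2.com.   IN  A   203.0.113.10

    Running your own ns1/ns2 on the VPS (or a DNS control panel resembling the screenshot) is also possible, but it is one more service to keep patched, and your DNS then goes down with the server.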

    Read the article

  • Why do (Russian) characters in some received emails change when reading in David InfoCenter?

    - by waszkiewicz
    I'm using David InfoCenter as email software, and I have trouble with some of my emails in Russian. It's only a few letters, in some emails (sent from different people); for example the "R" ("P" in Russian) will be shown as a "T". In other emails in Russian the problem doesn't appear. Isn't that strange? Has anyone had the same problem and found where it comes from? When I forward such an email to an external mailbox (internet email account), it's even worse, and gives me symbols instead of all the Russian letters... The default encoding was "Russian (ISO)", I changed it to "Russian (Windows)", but same problem. Another weird reaction: when I write an internal email and name it TEST in Russian (Тест), with Тест in the text window, it changes the title to "Oano"? But the content stays in Russian... With Mailinator I got the following, for message and subject "Тест":
        Subject: ????
        [..]
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="----_=_NextPart_000_00017783.4AF7FB71"

        This message is in MIME format. Since your mail reader does not understand
        this format, some or all of this message may not be legible.

        ------_=_NextPart_000_00017783.4AF7FB71
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: base64

        0KLQtdGB0YI=

        ------_=_NextPart_000_00017783.4AF7FB71
        Content-Type: text/html; charset="utf-8"
        Content-Transfer-Encoding: base64

        PCFET0NUWVBFIEhUTUwgUFVCTElDICItLy9XM0MvL0RURCBIVE1MIDQuMCBUcmFuc2l0aW9uYWwv
        L0VOIj4NCjxIVE1MPjxIRUFEPg0KPE1FVEEgaHR0cC1lcXVpdj1Db250ZW50LVR5cGUgY29udGVu
        dD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjxNRVRBIG5hbWU9R0VORVJBVE9SIGNvbnRl
        bnQ9Ik1TSFRNTCA4LjAwLjYwMDEuMTg4NTIiPjwvSEVBRD4NCjxCT0RZIHN0eWxlPSJGT05UOiAx
        MHB0IENvdXJpZXIgTmV3OyBDT0xPUjogIzAwMDAwMCIgbGVmdE1hcmdpbj01IHRvcE1hcmdpbj01
        Pg0KPERJViBzdHlsZT0iRk9OVDogMTBwdCBDb3VyaWVyIE5ldzsgQ09MT1I6ICMwMDAwMDAiPtCi
        0LXRgdGCPFNQQU4gDQppZD10b2JpdF9ibG9ja3F1b3RlPjxTUEFOIGlkPXRvYml0X2Jsb2NrcXVv
        dGU+PC9ESVY+PC9TUEFOPjwvU1BBTj48L0JPRFk+PC9IVE1MPg==
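    One sanity check on that Mailinator capture: the base64 text part decodes to exactly the Cyrillic string that was typed, which suggests the message body leaves David InfoCenter correctly UTF-8 encoded and the mangling happens when a client renders or re-encodes it (the literal "????" in the Subject header, by contrast, suggests the subject line was already damaged before the mail went out). For example, on any Linux box or in Cygwin:

        $ echo '0KLQtdGB0YI=' | base64 -d
        Тест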

    Read the article

  • Weblogic 12 and the CLI

    - by Rig
    I am working with WebLogic on Fedora 19 and am attempting to use the CLI tools, to no avail. It appears these were deprecated as far back as WebLogic 9, however I was assured they are still there and still functional. As it stands, I have a need to use them if they are in fact functional. What appears to be the case is that the weblogic jar file is not being loaded onto the classpath by this script; manually adding the jars via java -classpath <path> fails as well. I've spent a lot of time so far trying to get this sorted out, but I'm wondering what I may be missing here. My Java runtime is version 7, Fedora is 19, and WebLogic is 12.1. When I run env after running the provided set-environment script, it appears to have had no impact from what I can see (I'll add that output later when I get back to that machine). I'm mostly a Windows developer, so some of this is a topic I'm not well versed in.
        [foo@localhost bin]$ ./setDomainEnv.sh
        [foo@localhost bin]$ java weblogic.Admin -url t3://localhost:7001 -username <username> -password <password> HELP
        Error: Could not find or load main class weblogic.Admin
        [foo@localhost bin]$ ls -ltar
        total 72
        drwxr-x---  2 foo foo  4096 Jun  2 07:36 service_migration
        drwxr-x---  2 foo foo  4096 Jun  2 07:36 server_migration
        drwxr-x---  2 foo foo  4096 Jun  2 07:36 nodemanager
        -rwxr-x---  1 foo foo  1267 Jun  2 07:36 setStartupEnv.sh
        -rwxr-x---  1 foo foo  1105 Jun  2 07:36 startNodeManager.sh
        -rwxr-x---  1 foo foo  5765 Jun  2 07:36 startWebLogic.sh
        -rwxr-x---  1 foo foo  2001 Jun  2 07:36 stopWebLogic.sh
        -rwxr-x---  1 foo foo  3170 Jun  2 07:36 startManagedWebLogic.sh
        -rwxr-x---  1 foo foo  2776 Jun  2 07:36 stopManagedWebLogic.sh
        -rwxrwxrwx  1 foo foo 14136 Jun  2 07:36 setDomainEnv.sh
        -rwxr-x---  1 foo foo  2060 Jun  2 07:36 startComponent.sh
        drwxr-x---  5 foo foo  4096 Jun  2 07:36 .
        -rwxr-x---  1 foo foo  1726 Jun  2 07:36 stopComponent.sh
        drwxr-x--- 12 foo foo  4096 Jun  2 07:45 ..
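    One detail in that transcript that would explain the empty env output: running ./setDomainEnv.sh executes the script in a child shell, so the CLASSPATH and WL_HOME it builds disappear as soon as it exits; it has to be sourced into the current shell. And weblogic.Admin, if it still ships in your 12.1 install at all (it has been deprecated for a long time, with WLST as the supported replacement), lives in weblogic.jar. A sketch using the usual jar location - adjust the path to your install:

        # source the script so the environment persists in this shell
        . ./setDomainEnv.sh
        # or point the JVM at weblogic.jar explicitly
        java -cp "$WL_HOME/server/lib/weblogic.jar" weblogic.Admin -url t3://localhost:7001 -username <username> -password <password> HELP

    If that still reports "Could not find or load main class", the class genuinely is not present in that jar in 12.1 and WLST (wlst.sh) is the remaining route.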

    Read the article

  • Windows 2008 Server cannot access any network share

    - by Ramesh
    Hello friends, I run a Windows 2008 server with SP2. This server acts as a desktop alone. Recently, I switched between two networks (corporate and other) using this system. Ever since, I am unable to access any network share on the original network from which I installed and configured the desktop. The message I get is "Network path was not found". Note that I am able to access the internet and my corporate mail server. I am told this is a Vista and Windows 2008 specific problem, and I have done everything I could think of:
    a) Deleted the second network's settings from the desktop
    b) Installed a patch from MS that supposedly took care of this problem (with MS clearly saying they had not tested this enough)
    c) The SP2 install was after the problem occurred, and I went ahead with it in the hope that SP2 might have something that would fix this
    Some additional details:
    a) A system admin can log into this system from a remote terminal
    b) I cannot get into my own system using the hidden share C$ - for instance \\mymachine\C$ gives me the same message as above - Network path not found
    c) I can log into my system remotely using mstsc
    d) I cannot create shares on this system - and by extension, network printers are not detected
    I have an update for you. The error message is as follows:
        Network Error
        Windows cannot access \\network_share
        Check the spelling of the name. Otherwise there might be a problem with your network.
        To try to identify and resolve network problems, click Diagnose.
    Clicking Diagnose gives:
        Error Code: 0x80070035
        The network path was not found.
    Any help will be appreciated. Thanks
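    A few checks that might narrow down whether name resolution or the SMB client side is what broke on this box (a sketch; "fileserver" stands in for one of the shares you can no longer reach):

        ping fileserver
        nslookup fileserver
        net view \\fileserver
        nbtstat -c
        sc query lanmanworkstation

    Error 0x80070035 with working internet access usually points at the Workstation service, the "Client for Microsoft Networks" binding on the adapter, or NetBIOS/DNS name resolution on this machine rather than at the remote servers - which would also explain why \\mymachine\C$ against itself fails.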

    Read the article

  • Sonicwall routing between multiple subnets on multiple interfaces

    - by Rain
    As shown by the network diagram below, I have two completely separate networks. One is being managed by a Sonicwall NSA 220, the other by some other router (the brand is not important). My goal is to allow devices within the 192.168.2.0/24 network to access devices in the 192.168.3.0/24 network. Allowing the reverse (192.168.3.0/24 - 192.168.2.0/24) is not required. So far, I have done the following: I connected the X3 Interface on the Sonicwall to the 192.168.3.0/24 network switch (shown as the dashed red line in the diagram). Next, I gave it a static ip address of 192.168.3.254 and set the Zone to LAN (the same Zone for the X0 interface). Judging by various articles and KBs I've read, this is all that should be necessary, although it does not work. I can ping 192.168.3.254 from any device in the 192.168.2.0/24 network although I cannot ping/connect to any device within the 192.168.3.0/24 network. Any help would be greatly appreciated! Network Diagram: (I asked a similar, yet more complicated, question earlier; although, I realized that I cannot solve that without first solving this (which may actually solve my original question))
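    One thing the description does not mention, and the usual missing piece in this kind of setup: the hosts (or the router) on 192.168.3.0/24 need a route back to 192.168.2.0/24 via the Sonicwall's X3 address, otherwise their replies go to their own default gateway and never return through the Sonicwall. A sketch of that return route, using the addresses above:

        # on the 192.168.3.0/24 router (Linux-style syntax shown as an example)
        ip route add 192.168.2.0/24 via 192.168.3.254

    On an individual Windows host in that subnet the stopgap equivalent would be `route add 192.168.2.0 mask 255.255.255.0 192.168.3.254 -p`. Failing that, a NAT policy on the Sonicwall that translates 192.168.2.0/24 sources to the X3 address would hide the asymmetry without touching the other network's router.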

    Read the article

  • Windows or Linux for VPN-VPN Bridge

    - by James
    I have the following network layout:
        Network1 ----VPN1-----Network2----VPN2----Network3
    I can administer everything in Network1 only, and my goal is to get to a box on Network3. I've been told by the admins of Network2 that it's not possible for them to route traffic from Network1 to Network3. I've finally been authorised to host a box in Network2, and I'm hoping with this I can set something up to resolve the issue. My question is: should I set this up as a Windows or a Linux box? My initial thought was to use iptables to reroute requests, but with my lack of experience with Windows Server (used for something or other in Network2) I'm not sure if this will work. My head's full of questions like:
    - can I get an IP without logging in to a Windows domain?
    - if I do get an IP, do Windows Servers manage routing through the VPN?
    - can I make a Linux box authenticate with Windows Server to log on to the domain?
    - would it just be easier to set up a Windows box?
    - is it possible to configure a Windows box to do routing from Network1 to Network3?
    Has anyone done anything like this before? Had experience managing Windows Server? Authenticated (or not, as the case may be) to a Windows domain? I'd really appreciate your advice. It might be worth mentioning that the overall objective is to establish a telnet connection from a box on Network1 to a box on Network3.
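    If the box in Network2 ends up being Linux, the telnet objective needs little more than IP forwarding plus a DNAT rule; a sketch, where eth0 is assumed to be the box's interface in Network2 and 10.3.0.50 is a placeholder for the target host on Network3:

        # let the kernel forward packets at all
        sysctl -w net.ipv4.ip_forward=1
        # relay telnet connections arriving at this box on to the Network3 host
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 23 -j DNAT --to-destination 10.3.0.50:23
        # rewrite the source so replies come back through this box
        iptables -t nat -A POSTROUTING -p tcp -d 10.3.0.50 --dport 23 -j MASQUERADE

    From Network1 you would then telnet to the Network2 box's address and it relays the session onward; no domain membership is required for that role.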

    Read the article

  • Use icacls to make a directory read-only on Windows 7

    - by Dave G
    I'm attempting to test some filesystem exceptions in a Java based application. I need to find a way to create a directory located under %TMP% that is set to read-only. Essentially, on UNIX/POSIX platforms I can do a chmod -w and get this effect. Under Windows 7/NTFS this is of course a different story. I'm running into multiple issues on this. My user has "administrative" rights (although this may not always be the case) and as such the directory is created with an ACL including:
    - NT AUTHORITY\SYSTEM
    - BUILTIN\Administrators
    - <my current user>
    Is there a way using icacls to essentially get this directory into a state where it is read-only PERIOD, do my test, then restore the ACL for removal?
    EDIT: With the information provided by @Ansgar Wiechers I was able to come up with a solution. I used the following:
        icacls dirname /deny %username%:(WD)
    In the page located here I found this in the remarks section:
        icacls preserves the canonical order of ACE entries as:
        * Explicit denials
        * Explicit grants
        * Inherited denials
        * Inherited grants
    By performing the above icacls command, I was able to set the current user's ability to write or append files (WD) to the directory to deny. Then it was a question of returning it to a state post test:
        icacls dirname /reset /t /c
    Done
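    Pulled together, a small batch sketch of the whole cycle (the directory name is a placeholder; (WD) blocks creating files and (AD) blocks creating subdirectories, so denying both makes the directory fully unwritable for the current user while leaving it readable):

        set TESTDIR=%TMP%\readonly-test
        mkdir "%TESTDIR%"
        icacls "%TESTDIR%" /deny %USERNAME%:(WD,AD)
        rem ... run the Java test against %TESTDIR% here ...
        icacls "%TESTDIR%" /remove:d %USERNAME%
        rmdir "%TESTDIR%"

    /remove:d strips just the deny entries for that user, which is a slightly more surgical cleanup than /reset when the directory carries other explicit ACEs you want to keep.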

    Read the article

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data in the database, I find that the JVM's process is showing 50GB in SIZE in top output. It appears that this is actually causing performance problems on the system; other processes are having trouble allocating memory. When I asked the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that didn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, not to use so much RAM? System is Solaris 10 x86 with 72GB RAM. JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".
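    One common explanation for a huge SIZE alongside a 1GB heap is memory-mapped index files (the mmapfs store), if that is the store type in use: mapped files count toward the process's virtual size even though the pages live in the OS page cache, and the virtual size does not shrink when those pages are reclaimed, so SIZE alone is not the number to watch - RSS is. Two hedged things to look at (the store setting is the classic elasticsearch.yml option; check the name against your ES version):

        # Solaris: break the address space down by mapping - large mappings of
        # files under the ElasticSearch data path point at mmapfs
        pmap -x <elasticsearch-pid>

        # elasticsearch.yml: switch to plain file I/O instead of mmap
        index.store.type: niofs

    If pmap shows the 50GB is mostly index file mappings, the memory pressure on other processes is coming from somewhere else (anonymous memory, other JVMs), not from those mappings themselves.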

    Read the article

  • MS SQL - Problem running SQL Server Agent Job via service account credentials

    - by molecule
    There are 5 steps in this job. The first step is an SSIS package store step; the second to fifth are file system steps. We configured all steps to use Windows Authentication. Under Run As, we specified a user account which was created under Security\Credentials and SQL Server Agent\Proxies\SSIS Package Execution. The job runs without any problems with this user account. We then proceeded to configure the job to use a service account instead. The service account was specified under Security\Credentials and SQL Server Agent\Proxies\SSIS Package Execution. The job fails with this error:
        Executed as user: domain\serviceaccount. ....00 for 32-bit  Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
        Started: 3:37:57 PM
        Error: 2010-03-09 15:37:57.95  Code: 0xC0016016  Source:
        Description: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B
        "Key not valid for use in specified state.". You may not be authorized to access this
        information. This error occurs when there is a cryptographic error. Verify that the
        correct key is available.  End Error
        Error: 2010-03-09 15:38:01.19  Code: 0xC0047062  Source: Get CONT_VIEW_LADDER in latest 45days OracleFMDatabase [1]
        Description: System.Data.OracleClient.OracleException: ORA-01005: null password given; logon denied
        at System.Data.OracleClient.OracleException.Check(OciErrorHandle errorHandle, Int32 rc)
        at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boo...
        The package execution fa...  The step failed.
    Based on some research, I then went into MS Visual Studio and opened the project. I changed the package's security property from "EncryptSensitiveWithUserKey" to "DontSaveSensitive", but I still get the above error. I am new to this, so any help will be very much appreciated. Thanks in advance.
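    The two errors fit together through the package's ProtectionLevel: saved with EncryptSensitiveWithUserKey, the Oracle password can only be decrypted by the Windows profile that saved the package, so when the proxy runs under the service account the password decrypts to nothing and Oracle rejects the logon (ORA-01005). Changing the property in Visual Studio only helps if the re-saved .dtsx is actually re-deployed to the package store the job step points at, and DontSaveSensitive means the password must then be supplied at run time (a package configuration, a /SET value, or similar). One hedged alternative, with placeholder names:

        rem re-save the package with ProtectionLevel = EncryptSensitiveWithPassword, redeploy it, then:
        dtexec /SQL "\MyFolder\MyPackage" /SERVER myserver /DECRYPT packagePassword
        rem the same /DECRYPT switch can be appended on the job step's command-line tab

    Either way, the step that is easy to miss is redeploying the modified package; otherwise the Agent job keeps running the old, user-key-encrypted copy.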

    Read the article

  • My computer loses network connectivity every 30 minutes

    - by Logan Garland
    My LAN-connected PC loses network connectivity every 30 minutes. It has a static IP address and I've checked to make sure there aren't any IP conflicts on my network. If I'm streaming from that PC to my Xbox, the stream will be interrupted and it normally takes about a minute to come back online. The same happens if I'm actually on the PC and just browsing the web. I'm looking for suggestions on how to track down this issue. I've tried checking the available logs on my router to see if there is an issue with DHCP, but have been unsuccessful in finding any evidence. Any suggestions would be helpful. I can't think of any recent changes to my network, PC or software installations that may have caused this. I am a software developer and have intermediate networking knowledge. EDIT: During one outage I told Windows to troubleshoot the network problem, and it said that it could automatically fix the problem by changing DHCP info. It basically switched my network adapter from a static address to obtaining one automatically. This did fix the issue quicker than just waiting it out, but the outage occurred again 30 minutes later even when leaving those settings.
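    The combination of a static address, a roughly 30-minute cycle, and a "fix" that involved switching to DHCP makes a duplicate address - another device holding a DHCP lease on the same IP and renewing it on a half-hour timer - worth ruling out, even though a manual check came up empty. A couple of hedged checks, with the address as a placeholder:

        rem on the affected PC: has TCP/IP logged any address-conflict events? (event 4199)
        wevtutil qe System /q:"*[System[(EventID=4199)]]" /f:text /c:5

        rem from another machine, with the PC's cable unplugged: does its "static" IP still answer?
        ping 192.168.1.50
        arp -a | find "192.168.1.50"

    If something answers while the PC is offline, the router's DHCP pool probably overlaps the static range, and excluding or reserving that address should stop the half-hourly fight.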

    Read the article

  • Windows 7 cannot view FAT32 formatted bootable usb drive

    - by NaimK
    I'm having an issue where, when I run bootsect with a command line of "bootsect.exe /nt52 : /force /mbr", Windows 7 (the computer I'm running bootsect on) can no longer view the contents of the USB drive. Explorer tries to look at it and then fails, and I can't even correctly eject the drive; when I try, it does nothing until I yank it out, and then I get some errors. Bootsect reports success writing the volume and the drive data to make it bootable, but it doesn't boot after copying on the necessary files (files from a created ISO; it works when it is created on XP). But this may be because I'm not following the same instructions as when building it on XP, since some of the commands don't seem to always work correctly. The drive is formatted to FAT32 (necessary, I think, because I'm installing a custom version of Win XP Embedded). Any ideas? Or perhaps a good or automated way to load a USB drive with a custom version of Win XP and make it bootable from Win 7? I am having some issues - for instance, "ufdprep.exe" rarely works when I'm running it from Windows 7, and I don't know why.
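    If the stick's volume is being left in a half-written state, rebuilding it from scratch before running bootsect sometimes gets Explorer and the boot code back in agreement; a sketch from an elevated prompt, where N is the USB disk number from "list disk" and X: is the letter it gets assigned (double-check N - "clean" wipes the whole disk):

        diskpart
          list disk
          select disk N
          clean
          create partition primary
          active
          format fs=fat32 quick
          assign
          exit
        bootsect /nt52 X: /force

    Leaving /mbr off for the first attempt also narrows things down, since /nt52 on the volume plus an active partition is normally enough for an XP-style boot.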

    Read the article

  • WebSeal and jsp content updated by Ajax

    - by lior chaga
    Hey, I have a problem running an application in an environment with WebSeal. It is a web application with a Java server that contains many parts that are replaced within the page according to user input. For instance, a form called Outer.jsp may contain a form:options combo-box (from spring-forms), and upon selection of an option a certain div is updated with content produced by a JSP and fetched by an Ajax call (the Ajax implementation in the client is done by the Prototype JavaScript framework 1.5.1.2). Let's call the content fetched by Ajax "Inner.jsp". So Outer.jsp is fetching Inner.jsp, which in turn uses JS functions in files included by Outer.jsp. This, I think, is where my problem starts - Inner.jsp is not familiar with any of the functions included in Outer.jsp, and so almost any operation performed by Inner.jsp fails miserably. Needless to say, this works perfectly when running in an environment without WebSeal. Note that scripting is enabled on the WebSeal junction (with the -J option). I also see that the content returned by the Ajax call includes a document.cookie added by WebSeal (not sure whether it matters to this problem). Can anyone assist? Thanks! Lior

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server and I'm seeing strange memory behavior. Here is a snapshot of a munin graph: as you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is actually memory freed up but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or to make CentOS reclaim the inactive memory? Thanks.
    Some extra info:
    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).
    2) cat /proc/meminfo (at a later stage than the image) gives:
        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB
    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/ ) and a Tomcat based Java servlet to manage things.
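    Two quick checks that might separate "reclaimable page cache" from "memory that is genuinely pinned" here: the kernel can be asked to drop clean caches on demand, and - relevant for this box - pages backing files on a tmpfs /tmp are counted as cache/inactive yet can never be dropped until the files are deleted (they get pushed to swap instead), which would fit both the growing inactive line and the swap use in the graph.

        # how much is the tmpfs /tmp actually holding right now?
        df -h /tmp
        du -sh /tmp

        # ask the kernel to drop clean page cache, dentries and inodes (only clean pages go)
        sync
        echo 3 > /proc/sys/vm/drop_caches

    If inactive barely moves after the drop while /tmp is large, the "inactive" memory is mostly those tmpfs files, and cleaning them up (or moving /tmp off tmpfs) is the real fix.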

    Read the article

  • DCOM configuration: accounts with same name but different passwords problem

    - by archimed7592
    Hello, everybody! I'm experiencing trouble with DCOM configuration. Here is the case: I'm using a product which supports client-server interaction through DCOM, but the client won't get any access to the server if the attempt is made from an account with a name which also exists on the server but has a different password. Basically, if we try to access the server from the Administrator account, which is obviously present on the server machine, we will fail if the client's Administrator password doesn't match the server's one. After actively collaborating with the product's developer in attempts to localize the issue, he came back with a "can't be fixed" resolution - or, to call a pikestaff a pikestaff, more likely a "don't know how to fix" resolution :). I believe there is a solution for this problem and I'm asking you, IT professionals, to help me out with this one. I do realize that the problem may be caused by the way the developer interacts with DCOM, and if so it can't be fixed by means of pure system configuration and the question should be asked at SO; but since I've bumped into the same behavior while working with file/printer sharing - Windows tried to simplify everything and used the currently impersonated credentials to access the share - I hope the solution lies at the system configuration layer. P.S. I believe that the actual software product I'm talking about is entirely irrelevant, however my experience tells me that there will always be somebody who thinks that it is, on the contrary, very relevant. Here it is: SpRecord.

    Read the article
