Search Results

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (five or so, and growing). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand until now and don't want to anymore (static IP addresses, disabling unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?

    1. Install CentOS individually on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn one of these applications, but I don't know if it is actually the best solution.

    2. Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image.

    How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot?

    I'm leaning towards the PXE solution, but I think monitoring with munin or nagios will be a little more complicated with it. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
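
    On the "generated at boot" question: on identical hardware the only per-host value in those files is the MAC address itself, which the kernel already exposes under /sys. A minimal init-time sketch, assuming Red Hat-style ifcfg files; the interface names are placeholders:

        #!/bin/sh
        # Rewrite HWADDR lines from the running hardware so one PXE image
        # fits any of the identical boxes. Paths and interface names are
        # assumptions; adjust for the distro in use.
        for dev in eth0 ib0; do
            cfg=/etc/sysconfig/network-scripts/ifcfg-$dev
            [ -f "$cfg" ] || continue
            mac=$(cat /sys/class/net/$dev/address)
            sed -i "s/^HWADDR=.*/HWADDR=$mac/" "$cfg"
        done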

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of munin and use it at home to monitor my PCs as well. What was super-duper easy under Linux is pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I use lm-sensors, and the plugin for munin was basically there.

    I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are simple, as is the information exposed via SNMP, but they do their job. When it comes to temperature and fan speed, though, I'm running against a wall. My research so far suggests that Windows provides no out-of-the-box way to retrieve temperature or fan-speed data; third-party applications are necessary which know how to communicate with the motherboard chips.

    The best I came up with is that SpeedFan exposes a shared-memory interface, and there exists a library called SFSNMP (site currently down) which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared-memory interface. Unfortunately the library doesn't work; there's an open bug report about it at SpeedFan, but it's currently not moving (although the SFSNMP author is active there).

    So, unless that's going to start working any time soon, are there any alternatives? I'm not fond of buying software for this, since I take it for granted that my system exposes the information needed to properly monitor it, but please don't withhold an answer just because of that.
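
    If the SNMP bridge ever comes up (or another agent exposes the sensors), the munin side is the easy half. A hedged plugin sketch; HOST and OID are placeholders, since SFSNMP's real OIDs would have to be substituted:

        #!/bin/sh
        # Hypothetical munin plugin: graph one temperature value over SNMP.
        HOST=${HOST:-192.168.0.10}
        OID=${OID:-.1.3.6.1.4.1.99999.1.1}
        case $1 in
        config)
            echo 'graph_title CPU temperature'
            echo 'graph_vlabel degrees C'
            echo 'temp.label CPU'
            exit 0 ;;
        esac
        echo "temp.value $(snmpget -v2c -c public -Oqv "$HOST" "$OID")"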

  • Backing up server and multiple clients

    - by inquam
    I'm running an Amahi server; it's basically a Fedora 14 x64 installation. I'm looking for a good solution to back up the 200 GB system drive on the server to an external USB/eSATA drive every night. I looked into using dd, but since other things might be running on the server at the same time, it didn't feel quite safe. I would like the backups to be incremental, so that the backups following the initial one are quite fast. The backup should also be bootable, or perhaps able to produce a bootable disk after booting from a CD or something.

    I would also like the server to be able to do similar backups of my clients running Ubuntu, Windows 7 x64, Windows 7 Starter, OS X Lion, Windows XP and so on. So no applications backing up only shared folders or the like; my guess is a client daemon would have to exist that locks the system to allow backup of a Windows system drive, which can otherwise be quite cranky. Booting a CD in a crashed client, connecting to the server, restoring the latest backup, and being up and running again is my ideal goal. Is there anything out there that would fit these needs?
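
    For the incremental half of the requirement (not the bare-metal, bootable half), hard-linked rsync snapshots are the usual building block. A minimal sketch, assuming bash, rsync and a Linux-formatted drive mounted at /mnt/backup; restoring bootability would still mean reinstalling the boot loader from the rescue CD, which fits the "boot a CD, then restore" workflow:

        #!/bin/bash
        # Nightly snapshot: unchanged files become hard links into the
        # previous snapshot, so each run only costs the changed data.
        DST=/mnt/backup
        TODAY=$(date +%F)
        LAST=$(ls -1d "$DST"/20* 2>/dev/null | tail -n 1)
        rsync -aAXH --delete \
              --exclude={"/proc/*","/sys/*","/dev/*","/tmp/*","/mnt/*"} \
              ${LAST:+--link-dest="$LAST"} \
              / "$DST/$TODAY"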

  • Why can't I open a file with a double-click after I moved the application that opens it on Windows?

    - by Glen S. Dalton
    I am on Windows Server 2003, but I guess it is the same on Windows XP. I moved some portable applications (the kind people build for USB sticks) to locations like c:\bin\app1\app1.exe; the old location was c:\programs\app1\app1.exe. app1.exe can open files of type *.ap1.

    When I right-click file.ap1 and choose "Open with...", the Open With dialog appears, but it does not behave the way I expect in this situation. I can choose c:\bin\app1\app1.exe with the Browse button, but:

      - app1.exe does not appear in the dialog's program list after I click OK on it in the browse dialog, the way I am used to.
      - app1.exe does not open the file when I click OK in the Open With dialog; the application that was assigned until then still opens it.

    What could be the reason? My account is a member of the Administrators group. I changed the permissions of the folder c:\bin\app1\ and made sure that the group Administrators has all rights, and I inherited this manually to all subfolders and files. I also tried moving the application (with its whole folder) to "c:\program files\app1\app1.exe".
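
    A hedged workaround sketch: skip the dialog and rebuild the association from a command prompt. assoc and ftype write directly to HKEY_CLASSES_ROOT, bypassing the per-user FileExts entries the Open With dialog manipulates (the "ap1file" ProgID is an arbitrary name chosen for this example); if these fail to stick too, the problem is registry permissions rather than the moved folder:

        assoc .ap1=ap1file
        ftype ap1file="C:\bin\app1\app1.exe" "%1"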

  • Is the sysadmin/netadmin the de facto project planner at your organization?

    - by user31459
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for deployment of developer applications. A typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the requirements for the site (SSL? DB? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering:

      - which servers it will live on
      - why those servers
      - the timeline for creating the resources
      - a step-by-step SOP for getting the application onto the server, with all related resources created (DNS, firewall, load balancer etc.)

    I may just be whining, but it feels like this is something better suited to our project management staff (which we have) or to the developer. I understand that I need to give them a timeline for creating the resources, but this still feels like overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

  • Tri-head Linux system with Xmonad: is it possible to have HW acceleration?

    - by progo
    What means exist to have three monitors, all controlled by Xmonad, while keeping hardware 3D acceleration as well? I had the pleasure of using three monitors earlier this year, and while Xmonad and Xinerama handle three monitors easily, I had to throw in an extra display driver and also let go of Nvidia's own TwinView (which is a hack on top of Xinerama). This left me with no HW acceleration and some flickering, as double buffering wouldn't work with certain applications. However, three monitors handle so beautifully that I had a hard time coming back to two.

    I understand the easiest way to achieve a HW-accelerated tri-head combo is to split into two X servers. I wouldn't be able to move windows between them, so I'm not really into this solution. What's more, having a cheap old PCI card alongside an even slightly better PCIe one seemed to slow things down; even when I disabled the third monitor in the Xorg configuration, I couldn't get HW acceleration to work. Only after I physically disconnected the old PCI card could I get games back in business.

    Would a Matrox DualHead2Go/TripleHead2Go plus a powerful Nvidia GPU do the trick? I understand Xmonad can be configured to "believe" that the "single" 3360x1050 display a DualHead2Go merges is actually two different ones, so that Xmonad's mod-w and mod-e would work properly there.
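
    On that last point: xmonad-contrib ships XMonad.Layout.LayoutScreens, which splits one physical screen into virtual screens that mod-w/mod-e then address normally. A minimal xmonad.hs sketch; the key bindings are this example's choice, not defaults:

        import XMonad
        import XMonad.Layout.LayoutScreens
        import XMonad.Layout.TwoPane
        import XMonad.Util.EZConfig (additionalKeysP)

        main :: IO ()
        main = xmonad $ def `additionalKeysP`
            [ ("M-S-s", layoutScreens 2 (TwoPane 0.5 0.5))  -- split in two halves
            , ("M-S-r", rescreen)                           -- revert the split
            ]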

  • How can you connect to a SQL Server not on your domain?

    - by scotty2012
    I have a test machine that's not allowed on our domain, because we are testing corporately unsupported applications (SQL Server 2008 and Server 2008). I want to use Management Studio to connect to the SQL 2008 server but can't get it working. I have authentication set to mixed mode and I've checked "Allow remote connections to this server", but when I try to access it I get the error:

        A network-related or instance-specific error occurred while establishing
        a connection to SQL Server. The server was not found or was not
        accessible. Verify that the instance name is correct and that SQL Server
        is configured to allow remote connections. (provider: Named Pipes
        Provider, error: 40 - Could not open a connection to SQL Server)
        (Microsoft SQL Server, Error: 53)

    Since it says the provider is Named Pipes, I enabled Named Pipes on the server, but still no dice. I've tried connecting to the system name, the IP, system name\instance and IP\instance, all to no avail. Is what I'm trying to do not possible?

    Edit: Through some basic troubleshooting I've found that I can't ping the server from my client computer, but I can ping the client computer from the server. They are both plugged into the same switch and are sitting next to each other. The Windows Firewall on the server is turned on; are there some specific settings I need to enable?

    DAH! So it was the firewall blocking me. How can I keep the firewall enabled and still connect?
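
    A hedged sketch of the usual exceptions, run elevated on the server once the firewall is back on; TCP 1433 assumes a default instance, and UDP 1434 is only needed so the SQL Browser service can direct clients to named instances:

        netsh advfirewall firewall add rule name="SQL Server" ^
            dir=in action=allow protocol=TCP localport=1433
        netsh advfirewall firewall add rule name="SQL Browser" ^
            dir=in action=allow protocol=UDP localport=1434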

  • What can cause the PowerShell execution policy not to be taken into account?

    - by Stephane
    We have in our infrastructure a number of PowerShell scripts used for various tasks, ranging from user logon to a support technician simulating a user context. These scripts are centralized on our file server (through DFS) for easier management. Some run at logon; some run through published Citrix applications.

    We have applied a policy for the whole domain and all users that sets the PowerShell execution policy to Unrestricted, so that the scripts can run from the file server. This works perfectly fine for logon scripts (at least so far), but for scripts that run later (usually through a published application, though the same applies when using Terminal Services and a full desktop) the results are inconsistent: some users can run the scripts fine; some are always prompted in the PowerShell console before the scripts will run.

    I cannot find anything that could cause this behavior, and it's really inconsistent: if I start PowerShell manually and run Get-ExecutionPolicy, I am told the current policy is Unrestricted. Yet from the same session, if I try to run a script through a program that calls

        powershell <script file name> <parameters>

    I get prompted before the script can run. What could cause such behavior?
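
    Two things worth checking in the failing context: the effective policy is resolved per scope, and 32-bit and 64-bit PowerShell keep separate LocalMachine policies, so a 32-bit launcher can see a different policy than the 64-bit console used for testing. A diagnostic sketch to run inside the exact context that prompts:

        Get-ExecutionPolicy -List
        $env:PROCESSOR_ARCHITECTURE   # "x86" here means the 32-bit policy applies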

  • Management network on a separate network port for munin and monit

    - by paolo
    I want to build a separate network for server management. I have a Linux (Debian/Ubuntu) machine with several network cards. Both cards are set up in /etc/network/interfaces:

        # The primary network interface
        #allow-hotplug eth0
        #iface eth0 inet dhcp
        auto eth0
        iface eth0 inet static
            address 10.0.0.240
            netmask 255.255.255.0
            network 10.0.0.0
            broadcast 10.0.0.255
            gateway 10.0.0.254

        auto eth1
        iface eth1 inet static
            address 10.0.10.240
            netmask 255.255.255.0
            network 10.0.10.0
            broadcast 10.0.10.255
            post-up ip route add 10.0.0.0/24 dev eth0 src 10.0.0.240 table eth0-WAN
            post-up ip route add default via 10.0.0.254 table eth0-WAN
            post-up ip route add 10.0.10.0/24 dev eth1 src 10.0.10.240 table eth1-LAN
            post-up ip route add default via 10.0.10.200 table eth1-LAN
            post-up ip rule add from 10.0.0.240 table eth0-WAN
            post-up ip rule add from 10.0.10.240 table eth1-LAN

    I have also adjusted /etc/iproute2/rt_tables for the routes set up in /etc/network/interfaces above. I want applications such as munin and monit to use only eth1, not eth0. After a reboot it sometimes works, but not always:

        # traceroute -i eth1 10.0.10.200

    does not go through. What am I doing wrong?
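
    For completeness, the custom tables named above must be declared before the post-up lines can reference them, and pinning the monitoring daemons to eth1 is a bind-address question for their own configs rather than a routing one. A hedged sketch (directive syntax as documented for munin-node and monit; addresses taken from the question):

        # /etc/iproute2/rt_tables -- declare the custom tables once:
        #   1 eth0-WAN
        #   2 eth1-LAN

        # /etc/munin/munin-node.conf -- listen only on the management address:
        host 10.0.10.240

        # /etc/monit/monitrc -- same idea for monit's web interface:
        set httpd port 2812 and use address 10.0.10.240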

  • Almost every Inkscape extension yields an error in Mac OS X

    - by andyvn22
    I've run the latest few versions of Inkscape (currently "0.47+devel") and have been having trouble with the Extensions menu. So far, in every version of Inkscape I've tried, nearly every extension yields the following error:

        The fantastic lxml wrapper for libxml2 is required by inkex.py and
        therefore this extension. Please download and install the latest
        version from http://cheeseshop.python.org/pypi/lxml/, or install it
        through your package manager by a command like:
        sudo apt-get install python-lxml

    I've tried the instructions listed there, of course, with no effect. I've also found many references to this issue on forums, in bug trackers, etc., and so I have also tried:

        sudo easy_install lxml
        cd /Applications/Inkscape.app/Contents/Resources/lib
        mv libxml2.2.dylib libxml2.2.dylib.old
        ln -s /usr/lib/libxml2.dylib

    and a few similar solutions. Nothing has produced any change in Inkscape's behavior. Does anyone know A) what's really going on here? Because from what I gather, the error is not describing the actual problem. And of course B) a simple solution? I need those features! :)
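
    One hedged check for A): the error only means that the Python interpreter Inkscape invokes cannot import lxml, which may not be the interpreter easy_install installed into. Comparing interpreters from a terminal can narrow that down:

        # Does the default interpreter see lxml at all?
        python -c 'import lxml.etree as e; print e.LXML_VERSION'
        # Which interpreter (and prefix) is first on the PATH Inkscape inherits?
        which python
        python -c 'import sys; print sys.prefix'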

  • Can I use a single SSLCertificateFile for all my VirtualHosts instead of creating one for each VirtualHost?

    - by user65567
    I have many Apache VirtualHosts, for each of which I use a dedicated SSLCertificateFile. This is a configuration example of one VirtualHost:

        <VirtualHost *:443>
          ServerName subdomain.domain.localhost
          DocumentRoot "/Users/<my_user_name>/Sites/users/public"
          RackEnv development
          <Directory "/Users/<my_user_name>/Sites/users/public">
            Order allow,deny
            Allow from all
          </Directory>
          # SSL Configuration
          SSLEngine on
          # Self-signed certificates
          SSLCertificateFile /private/etc/apache2/ssl/server.crt
          SSLCertificateKeyFile /private/etc/apache2/ssl/server.key
          SSLCertificateChainFile /private/etc/apache2/ssl/ca.crt
        </VirtualHost>

    Since I maintain several Ruby on Rails applications using the Passenger Preference Pane, this is part of the apache2 httpd.conf file:

        <IfModule passenger_module>
          NameVirtualHost *:80
          <VirtualHost *:80>
            ServerName _default_
          </VirtualHost>
          Include /private/etc/apache2/passenger_pane_vhosts/*.conf
        </IfModule>

    Can I use a single SSLCertificateFile for all my VirtualHosts (I have heard of wildcards) instead of creating one for each VirtualHost? If so, how should I change the files listed above?
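
    Since these are self-signed development certificates, a single wildcard certificate for *.domain.localhost is enough, and every :443 VirtualHost can point at the same file pair. A hedged sketch of generating one with OpenSSL, reusing the paths above:

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
            -subj "/CN=*.domain.localhost" \
            -keyout /private/etc/apache2/ssl/wildcard.key \
            -out /private/etc/apache2/ssl/wildcard.crt

    Note that a wildcard matches only one label (subdomain.domain.localhost but not a.b.domain.localhost), and that without SNI, name-based :443 vhosts are all handed the first certificate Apache loads anyway.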

  • Pros and Cons of a proxy/gateway server

    - by Curtis
    I'm working with a web app that uses two machines, a BSD server and a Windows 2000 server. When someone goes to our website, they are connected to the BSD server, which, using Apache's proxy module, relays the requests and responses between them and the web server on the Windows machine. The idea (designed and deployed about nine years ago) was that having outside people connect to the BSD server was more secure than exposing the Windows server running the web app. The BSD server is a bare-bones install with all unnecessary services and applications removed.

    These servers are about to be replaced, and the big question is: is a cut-down, bare-bones server in front still necessary for security in this setup? From my research online I don't see anyone else running a setup like this (at least, I don't see anyone questioning it). Where people do have a server between the user and the web app server(s), it is caching, compressing, and/or load balancing. Is there anything I'm overlooking by letting people connect directly from the internet* to a Windows 2008 R2 server that's running the web application?

    * There's a good hardware firewall between us and the internet with only minimal ports open. Thank you.
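
    For reference, the relay described above takes only a few mod_proxy lines on the front box; a hedged sketch, with the backend address as a placeholder:

        ProxyRequests Off
        ProxyPass        / http://10.0.0.5/
        ProxyPassReverse / http://10.0.0.5/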

  • Haproxy and CNAME

    - by user123354
    I want to create a simple load balancer for two servers. The problem is with CNAME records, I think. Let's say I have two identical applications on AppFog.com: app1.aws.af.cm and app2.aws.af.cm. Here is my haproxy.cfg file:

        global
            maxconn 2000
            daemon
        defaults
            mode http
            clitimeout 60000
            srvtimeout 30000
            contimeout 4000
            option httpclose
        listen http_proxy
            bind [myip]:80
            mode http
            stats enable
            stats auth user:passwd
            stats uri /stats
            balance source
            option httpchk
            option forwardfor
            server host01 app1.aws.af.cm:80 maxconn 300 check
            server host02 app2.aws.af.cm:80 maxconn 300 check

    But this only resolves the IPs behind app1.aws.af.cm and app2.aws.af.cm, which obviously doesn't work: AppFog (like OpenShift) has no dedicated public IP per application, so the shared address serves many apps and selects mine by hostname. For example, for the real app http://freechat.eu01.aws.af.cm, HAProxy resolves the IP 46.51.204.8:80, and opening that IP directly in a web browser only shows an error page. How do I make HAProxy perform a proper connection between the load balancer and these two servers? Sorry for my poor English.
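
    One hedged fix, assuming a HAProxy build that has http-send-name-header (added during the 1.4 series): name each backend server after the vhost it must present, and let HAProxy rewrite the Host header to that server name so AppFog's shared frontend can route the request:

        listen http_proxy
            bind [myip]:80
            mode http
            balance source
            option forwardfor
            http-send-name-header Host
            server app1.aws.af.cm app1.aws.af.cm:80 maxconn 300 check
            server app2.aws.af.cm app2.aws.af.cm:80 maxconn 300 check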

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 database mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However, after the failover the principal was in single-user mode instead of the expected Principal/Synchronized state we usually get; the database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing ALTER DATABASE ... SET MULTI_USER to bring the database back to the expected Principal/Synchronized state in multi-user mode.

    Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular, we once had an incident with the database on the original principal (the one I was failing over to) where, while trying to detach the database, it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.
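
    A hedged pointer: SINGLE_USER/RESTRICTED_USER is a persisted, database-scoped option, so it travels with the database (and therefore with the mirror) rather than living in the instance, and it can be inspected through sys.databases. A sketch with a hypothetical database name:

        -- Check the recorded access mode after a failover:
        SELECT name, user_access_desc, state_desc
        FROM sys.databases
        WHERE name = N'MirroredDb';   -- hypothetical name

        -- Clearing it once the applications are stopped:
        ALTER DATABASE MirroredDb SET MULTI_USER WITH ROLLBACK IMMEDIATE;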

  • Hyper-V VM Lab + RRAS + RDP

    - by Dennis Evans
    My background is primarily .NET development with some system administration skills. I'm trying to set up a VM lab to test system applications I'm developing, but I've only ever done system administration in environments that were already set up; I've never built my own. My current setup:

      - Server 2008 R2 Hyper-V host on the physical machine (only that role enabled) with two NICs: the first dedicated to management, with a DHCP address from the company's network; the second dedicated to the RRAS VM, also with a DHCP address from the company's network.
      - The RRAS VM has two NICs: one is a virtual, internal-only private NIC with a static entry; the other is the physical NIC mentioned above. I've joined it to my VMLab.net internal domain.
      - My Active Directory domain controller (ADCT) also runs DNS, DHCP, and Certificate Services, which I'm familiar with but don't understand completely.
      - RRAS is already set up with NAT to give the private internal network Internet access.

    What I would like to do is RDP into the servers/computers on the VMLab.net domain from my computer. Do I need to add the Remote Desktop Services role and enable the Remote Desktop Gateway service on RRAS to do this, or is there a way to set up port forwarding on RRAS to allow a direct connection to the internal servers... or both? What would the best practices be here? Network diagram: http://i.stack.imgur.com/4qfnk.png
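
    For a lab, plain port forwarding on the RRAS VM is the lighter option; the RD Gateway role buys TLS-wrapped RDP and per-user authorization policies, which matter more in production. A hedged sketch using netsh's portproxy (addresses and the external port are assumptions), with one external port needed per internal machine:

        netsh interface portproxy add v4tov4 ^
            listenaddress=0.0.0.0 listenport=33890 ^
            connectaddress=10.10.10.5 connectport=3389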

  • Setting Outlook Web Access as default mail client in Firefox

    - by Barton Chittenden
    The company that I work for is nice enough to let me use Linux for work, but they use Outlook. I've been using Outlook Web Access (OWA) as my mail client, which is more or less acceptable. The only problem is that whenever I click a mailto link or use the "Send Link" menu option in Firefox, I'm prompted to use Evolution, and connecting to an Exchange server through Evolution seems sketchy at best, so I would like to set OWA as my default mail client. I'm using Firefox 3.6.13. Here's what I've found so far:

    The default mail client is set at Edit Menu -> Preferences -> Applications tab -> mailto. The drop-down menu there has an "Application Details" option, which lists two entries by default, Google and Yahoo! Mail, along with how each service is launched. For Gmail:

        https://mail.google.com/mail/?extsrc=mailto&url=%s

    For Yahoo!:

        http://compose.mail.yahoo.com/?To=%s

    I presume that Outlook Web Access has something similar. Based on the googling I've done so far, I think it should look something like:

        https://<server name>/owa/?cmd=compose...

    A little experimentation on my part shows that the following will compose a message:

        https://<email server>/owa/?ae=Item&a=New&t=IPM.Note

    but I still don't know how to specify the recipient, subject or body of the email to be composed. What I want to know is: a) does anyone know the URL parameters to compose a mailto in Outlook Web Access, including subject, recipient and body? Or else b) can someone give me a decent pointer to where to get this information?
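
    A hedged, unverified extension of the URL already shown to open a compose form: OWA 2007/2010 is reported to accept at least the recipient through the same query string, which is the one parameter Firefox's %s placeholder needs. Whether subject and body parameters exist in this form varies by OWA version and would have to be confirmed against the server in question:

        https://<email server>/owa/?ae=Item&a=New&t=IPM.Note&to=%s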

  • Office 2010 Trusted Locations not working after restart

    - by Josh King
    In Excel 2010 on Windows XP, I am unable to open files through the Open dialog box from a network drive. The server has already been added to the Trusted Locations, and most security settings are now turned down or off. Excel shows "Downloading..." on the status bar with a progress bar that doesn't progress; we have left Excel sitting in this state for 30+ minutes with no change. A similar problem occurs when saving files to network shares. If we use Explorer to navigate to the files and double-click them, they open flawlessly. No add-ins are active.

    We also have this problem in Word 2010, where the server was not initially in the Trusted Locations. I added it and it worked, until the PC was restarted; it now exhibits the same issue as Excel, where the server is in the Trusted Locations but files will not open. I have tried removing the server from the Trusted Locations in both applications, restarting the PC, and re-adding it (testing before, after and in between), with no luck.
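
    One hedged place to look: per-user Trusted Locations live in the registry, so comparing the key before and after a restart shows whether the entry is being rewritten (by policy, for instance) or kept but ignored. Network paths additionally require the AllowNetworkLocations flag to be set to 1:

        reg query "HKCU\Software\Microsoft\Office\14.0\Excel\Security\Trusted Locations" /s
        reg query "HKCU\Software\Microsoft\Office\14.0\Excel\Security\Trusted Locations" /v AllowNetworkLocations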

  • Can you transfer a Windows 7 XP-mode VHD to a new PC without the need for reactivation?

    - by Henk
    Hi. At the moment I am running Vista 32-bit and use VPC 2007 a lot. I have several Windows 2000 VMs set up with complex software configurations. For me, one of the advantages of working with VPCs is that a VM can easily be copied to another PC (like a notebook). Also, when I move to a new PC, I can immediately work with my existing VMs without reinstalling all the applications.

    I am thinking about buying a new PC with Windows 7 64-bit Pro and using the XP-mode VM that comes with Windows 7. The advantages would be support for more memory (so I could run more VMs simultaneously) and having a licensed XP VM. I would like to know whether a VHD based on the XP-mode VHD can be transferred to another machine without activation issues; the other machine would, of course, also be running Windows 7. I would guess that this is not a problem. However, I did read about someone who had to reinstall his Windows 7 because of a hard-disk failure and could no longer access his XP-mode VHD because it demanded reactivation. Thanks, Henk.

  • Windows and file system abstraction - how much does it matter where something comes from?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is, or whether something else is involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64-bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. AFAIK it's read-only access, but it works.

    Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that leads to all sorts of non-solutions on Google; one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When I move the image files to another Windows 7 computer on the network and mount them from the network share, they work fine.

    My question is: why do applications behave differently depending on whether the read-only image file, which should be abstracted away behind the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll roll this into the question as well, since I was wondering: does the file system of a network share matter? Does the client system need to understand the file system of the share host, or is that abstracted away in SMB?

  • My processor is running slower than it should

    - by Soham
    I have a Core 2 Duo E7400 2.80 GHz processor on an Intel D945GCNL motherboard. From CPU-Z, I see that my processor speed is 1596 MHz, with a x6 multiplier and a 266 MHz bus speed on each core. Why is my processor operating at 1596 MHz rather than 2.80 GHz?

    On my side, I've tried disabling SpeedStep in the BIOS by setting EIST to Disabled, and I've tried changing the power plan to High Performance in Windows 7. I've also tried what was suggested in this question: http://superuser.com/questions/119176/processor-not-running-at-max-speed. None of it gained me anything. I've also run a few massive applications together to check whether the speed increased under load, but it stayed the same.

    Should I raise the multiplier or overclock to regain the lost speed? Should I check my power supply for a problem? Or is it something else? Please help me with this. (I have a desktop computer, so no battery is involved.) Here's my CPU-Z screenshot: http://i56.tinypic.com/2lk4mqc.jpg
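
    For reference, the numbers CPU-Z reports are exactly what the idle multiplier produces, while the rated speed corresponds to the E7400's top multiplier of 10.5. A x6 reading at idle is therefore normal power management (C1E can keep parking the multiplier even with EIST disabled) rather than lost speed, provided the multiplier rises under load:

        266    MHz x  6   = 1596 MHz   (idle multiplier, what CPU-Z shows)
        266.67 MHz x 10.5 ~ 2800 MHz   (rated speed, top multiplier under load)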

  • How does one make sure, or even guarantee, that server time is synced correctly between dozens of servers across multiple datacenters in different locations?

    - by forestclown
    Currently our web applications contain logic to check whether data sent to the web server has expired, by comparing the data's timestamp with the server's date/time. Everything went well until some dude at a data center accidentally modified one web server's date/time and caused disruptions in our web services. My managers are of course not happy with this, and said we shouldn't have used timestamps to check expiry in the first place... anyway...

    Network Time Protocol is implemented. Because the data centers are spread across different continents, we have one NTP server in each data center. The servers within a data center have cron jobs that check their time against the NTP server in the same data center; if the time is out of sync, they auto-update their date/time. But my managers are still not happy and think this could easily cause the same problem again: what if someone accidentally modifies the NTP server's date/time? What if all the NTP servers are out of sync with each other? Which NTP servers can we really trust? And so on.

    So my questions are:

      - What are the current practices for syncing date/time between servers across multiple data centers or locations?
      - How does one manage timestamps between web apps? E.g. server A sends data (containing a timestamp from server A) to server B, which compares that timestamp with its own date/time to decide whether the data has expired (this is to avoid HTTP replay).
      - Should we really not use timestamp checks?

    Thanks & best regards.
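
    On the first question: the standard defence is to give every clock several independent sources, since ntpd can only outvote a "falseticker" when it has three or more sources (four, to survive one failing). Running ntpd continuously also slews small corrections instead of stepping the clock the way a cron job does, which is kinder to timestamp comparisons. A hedged per-datacenter ntp.conf sketch; the peer hostnames are placeholders:

        # Several independent upstream sources, so no single server is trusted:
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst
        server 2.pool.ntp.org iburst
        server 3.pool.ntp.org iburst
        # The other datacenters' NTP hosts as peers, for mutual sanity checking:
        peer ntp.dc2.example.com
        peer ntp.dc3.example.com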

  • GNU Screen and Finch Not Playing Nicely

    - by Sean M
    I use finch for instant messaging, and for persistence, finch is one of the things that runs in my screen session. There are three main computers from which I access the session, each running at a different screen resolution. Because of the different resolutions, I use screen -rd to attach to the session when I switch computers (using screen -x results in problems).

    When I attach to the session, though, finch has display problems. I have to wait up to several minutes for finch to become responsive; it doesn't redraw properly at all. Trying to switch between chats just writes ^n and ^p, or ^(1-9) for numbers. It fixes itself after some time. Using Ctrl-L does not help, and switching back and forth between screen windows does not help either. This is an annoying behavior that I don't experience with any other application running in screen. Is this a bug in screen or finch, and if not, what can I change about my configuration to correct it?

    (I would appreciate it if "finch" could be used as a tag for this, instead of or in addition to "pidgin".)

  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and use it without extracting it.

    Details: we would like to use GeckoFX (Gecko as a C# component) in one of our applications. GeckoFX needs the XULRunner and needs to find its folder structure. We also have some other data which I would prefer not to extract to the customer's PC, at least not onto anything persistent like a hard disk.

    Getting the complete directory into the resources is not that big a deal: compress to one file and done. But using it without writing it to disk is something else; I have a strong dislike of temp folders and such things. Would anything like a RAM drive be possible, with some part of RAM being mounted? Does something like this exist as a library, or would it only be possible with a device driver? Any thoughts on this? Thanks in advance! Corelgott

  • Committed virtual memory

    - by vinu
    After a server bounce, and after around 40-45 days, we receive continuous "Committed Virtual Memory" alerts indicating swap usage on the order of 4 GB. This also causes the application to perform very slowly and to experience a number of stalled transactions.

    Server setup: four Tomcat servers (version 7.0.22) that are load balanced (not clustered) by two Apache servers; the Apache servers themselves serve static content and route requests to the four Tomcats. Each of the servers also hosts two other separate Tomcat domains that run different applications.

    Java runtime version:

        java version "1.6.0_30"
        Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
        Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)

    Memory startup parameters:

        MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"

    Monitoring: Wily monitoring is available on all the production servers; it tracks key server parameters and sends configurable alert emails based on predefined settings.

    Investigated areas:

      - There is no heap memory leak, and GC runs fine without issues over any period of time.
      - The current busy thread count corresponds directly to application usage; weekends and nights have fewer threads than business hours.
      - ThreadLocal uses a WeakReference internally: if the ThreadLocal is not strongly referenced, it will be garbage-collected even though various threads have values stored via it, and since ThreadLocal values are actually stored in the Thread, a dying thread releases all values associated with it. A ThreadLocal held as a final class member is a strong reference and cannot be collected until the class is unloaded, but that is how any class member works and isn't considered a memory leak. The often-cited leak only arises when the value stored in a ThreadLocal strongly references that ThreadLocal (a sort of circular reference); in our case the value (a SimpleDateFormat) has no backward reference to the ThreadLocal, so there is no memory leak in that code.

    Can anyone please let me know what could be the cause of this, and what should be monitored?
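
    Since each JVM's heap is capped at 1 GiB and GC looks healthy, one hedged way to confirm that the growth is native (threads, classloaders, direct buffers) rather than heap is to compare each Tomcat's process footprint against its heap. A sketch; the pgrep pattern is a placeholder for however the instances are identified:

        pid=$(pgrep -f 'org.apache.catalina.startup.Bootstrap' | head -n 1)
        pmap -x "$pid" | tail -n 1      # total mapped / resident size
        jmap -heap "$pid"               # heap usage versus -Xmx

    With three Tomcat JVMs per box and -Xss192k stacks, thread counts and non-heap pools are worth graphing alongside the Wily alerts.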

  • Pin the Dock to the top

    - by Chris Buchholz
    I wonder if it is possible to pin the Mac OS X Dock to the top of the screen in Snow Leopard. I see lots of ideas on how to do this when I google for it, and Secrets (the tweaking app) also provides it as an option, but none of the methods work for me. I guess it must have worked at some point, since people said it did, but I believe the feature may have been removed in Snow Leopard, which would explain why it doesn't work for me. Is this so? Is there really no way to pin the Dock to the top of the screen?

    If not, what ways of "getting rid of the Dock" can you recommend? I have tried auto-hiding, but my problem is that it leaves a 4px strip at the edge the Dock is pinned to that applications won't cover; that's not ideal for me. As far as I understand from googling, this strip does not appear when the Dock is pinned to the top, hence my question. What other ways do you use to get rid of it?
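
    For reference, the tweak those guides rely on is the Dock's orientation preference; left, bottom and right still work, but "top" is reported to be ignored from Snow Leopard on, which matches the behavior described above:

        defaults write com.apple.dock orientation -string top
        killall Dock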
