Search Results

Search found 8589 results on 344 pages for 'pre production'.


  • using munin-plugins-rails to monitor rails app performance

    - by user2099762
    I have been trying to configure munin-plugins-rails to monitor the performance of our Rails apps from Munin. The graphs appear, but they contain no data. The log files show: Error output from : 2013/06/27-15:39:06 [5540] Request-log-analyzer, by Willem van Bergen and $ 2013/06/27-15:39:06 [5540] Website: http://railsdoctors.com I have tried running request-log-analyzer manually, pointing it at the production log file, and it reports % for every item even though there is data in the log file. I have tried changing the version of the installed gems and the type of the log file, but no luck. Any ideas, anyone? Thanks
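
    A quick sanity check is to run the analyzer by hand against the same log the plugin reads; a minimal sketch, assuming the gem's request-log-analyzer binary is on the PATH and an invented log path:

      # Run the analyzer directly against the production log (path is an assumption)
      request-log-analyzer /var/www/myapp/shared/log/production.log
      # For a Rails 3 app, forcing the log format sometimes matters (hedged; check --help)
      request-log-analyzer /var/www/myapp/shared/log/production.log --format rails3

    If this run also prints empty percentages, the problem lies with the analyzer/log-format pairing rather than with the Munin plugin itself.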

    Read the article

  • VMware + SQL Server - sqlserver.exe not using both CPU cores

    - by fistameeny
    Hi, I am working on a virtual machine that runs SQL Server Express (as part of Sage Line 50 Manufacturing). The details are as follows:

      Physical server (host machine):
      - Intel Xeon quad core 2.1 GHz
      - 4 GB RAM
      - VMDK image stored on RAID-5 500 GB SATA drives (7200 RPM)
      - Running Ubuntu 10.04 Server 64-bit
      - VMware Server 2

      Virtual machine:
      - Windows Small Business Server 2003
      - Allocated 2 vCPUs and 2 GB RAM
      - Using a 100 GB pre-allocated flat VMDK file

    The problem I have is that a CPU-intensive process runs in SQL Server. On the old physical server that we migrated from, this would utilise both CPU cores, so the sqlserver.exe process would be running at 100% on each core. On the virtual machine it only seems to use one of the two CPU cores, meaning that the process is much slower to run. Question: is there a way to force SQL Server (the sqlserver.exe process) to use both CPU cores and distribute its load between them? Is this a VMware setting that needs changing to allow processes to use both cores?
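
    One thing worth ruling out first is how the two vCPUs are presented to the guest: SQL Server Express is licensed for a single CPU (socket), so if the VM exposes two sockets it will only ever use one of them. A rough, hedged sketch for checking the VM's CPU settings on the Ubuntu host (the .vmx path is an assumption, and coresPerSocket may not exist on VMware Server 2):

      # Inspect how many vCPUs the guest gets and how they are grouped into sockets
      grep -iE 'numvcpus|coresPerSocket' '/var/lib/vmware/Virtual Machines/sbs2003/sbs2003.vmx'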

    Read the article

  • How to suppress "Not collecting exported resources without storeconfigs"?

    - by Andy Shinn
    I'm getting the following in my Puppet master syslog over and over:

      Sep 27 11:52:05 puppet1 puppet-master: Not collecting exported resources without storeconfigs
      Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs
      Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs

    I'm not actually using storeconfigs:

      [ashinn@puppet1 ~]$ cat /etc/puppet/puppet.conf
      [agent]
      server = puppet.mydomain.com
      environment = production
      report = true

      [main]
      logdir = /var/log/puppet
      vardir = /var/lib/puppet
      ssldir = /var/lib/puppet/ssl
      rundir = /var/run/puppet
      factpath = $vardir/lib/facter
      pluginsync = true
      certname = puppet1.mydomain.com

      [master]
      modulepath = $confdir/environments/$environment/modules
      manifest = $confdir/environments/$environment/manifests/site.pp
      templatedir = $confdir/templates
      autosign = $confdir/autosign.conf
      ssl_client_header = SSL_CLIENT_S_DN
      ssl_client_verify_header = SSL_CLIENT_VERIFY
      report = true
      reports = hipchat

    Is there any way I can suppress these messages? Where do they actually come from?
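
    The warning is emitted each time the master evaluates a collector for exported resources (the <<| ... |>> syntax) while storeconfigs is disabled, so locating those collectors shows where the messages come from; a hedged sketch using the modulepath from the config above:

      # Find exported-resource collectors in the production environment (paths assumed)
      grep -rnE '<<\|' /etc/puppet/environments/production/modules /etc/puppet/environments/production/manifests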

    Read the article

  • Nginx + PHP-FPM ignores no-cache headers

    - by Eric Winchell
    I'm using the following headers on a PHP page:

      // Prevent page caching.
      header('Expires: Tue, 20 Oct 1981 05:00:00 GMT');
      header('Cache-Control: no-store, no-cache, must-revalidate');
      header('Cache-Control: post-check=0, pre-check=0', FALSE);
      header('Pragma: no-cache');

    I'm also appending rand=999999999 (with a real random number) to the URLs. But pages are still being cached: a reload works, but the first load is served from cache. Anyone know where I can change this?
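
    If nginx itself (or anything in front of it) has caching configured, or applies an expires directive to the location, the PHP headers may not reach the client unchanged; a quick, hedged check of the nginx configuration (path is an assumption):

      # Look for nginx-level caching or header rewriting that could override the PHP headers
      grep -rnE 'fastcgi_cache|proxy_cache|expires' /etc/nginx/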

    Read the article

  • How to control remote access to Sonicwall VPN beyond passwords?

    - by pghcpa
    I have a SonicWall TZ-210. I want an extremely easy way to limit external remote access to the VPN beyond just username and password, but I do not wish to buy/deploy an OTP appliance because that is overkill for my situation. I also do not want to use IPSec because my remote users are roaming. I want the user to be in physical possession of something, whether that is a pre-configured client with an encrypted key or a certificate (.cer/.pfx) of some sort. SonicWall used to offer "Certificate Services" for authentication, but apparently discontinued that a long time ago. So, what is everyone using in its place? Short of the expensive "Fortune 500" solutions, how do I limit access to the VPN to only those users who have possession of a certificate file, some other file, or something else beyond passwords? Thanks.

    Read the article

  • How to determine keyboard variation when the manufacturer changes it

    - by Maksee
    When I decided to purchase a Toshiba Z830, I made a point of checking in photos that the keyboard suited me (wide Enter, left Shift, and Backspace); you can see this on images.google.com, where most photos show the wide keys. When I finally bought it (Z830-A2S), the keyboard was different: the Enter is narrow and the left Shift is "split" into Shift and backslash keys (maybe 5% of the photos on images.google.com show this layout). Is it normal for manufacturers to change this during the production cycle, or can this vary between contractors? But the main point: is it possible to determine this from the full model name, or from somewhere else, without visiting a store?

    Read the article

  • ESXi 5.1 - Unable to register host

    - by deanvz
    I downloaded and successfully installed ESXi 5.1. I am, however, unable to get the licence key I received installed. An error occurred when assigning the specified licence key: "The system Memory is not satisfied with the 32 GB of Maximum memory limit. Current with 80.00 GB of Memory." Is there any way around this? A quick Google search revealed that this is a widespread problem with no real answer or resolution. The only workaround is to remove the physical RAM chips, but as this is going to be in production I don't want to do that, as it would mean downtime when I have to reinsert the memory.

    Read the article

  • What do you do with a 15 node decommissioned Itanium cluster?

    - by Gomibushi
    We are decommissioning a 15-node Itanium cluster. We don't know what to do with it. Being geeks, we want to put it (or its individual nodes) to some cool use, but since it is Itanium we are a bit unsure what that could or would be. We are not bringing it back as production servers, and we are considering giving the nodes away if anyone wants them. It's not the most spiffy hardware, but being 2U rack servers they pack a decent amount of CPU and memory; they're perhaps about 3 years old. Any ideas for what runs well on them? Or something else one could use them for?

    Read the article

  • unable to connect to remote sql server from SHDSL router

    - by user529265
    We got a new leased-line network to our office that came with an SHDSL router (Watson). Currently, we are unable to use SQL Server Management Studio to connect to remote SQL databases. It errors out saying: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no longer available.) (Microsoft SQL Server, Error: 64) I logged into the Watson management panel and unblocked all the ports for TCP traffic (specified the range as 0 to 60000, and UDP as well; this includes 1433, the default port for connecting to SQL Server). The router is the only thing that has changed, and we are able to connect to the SQL server from other networks just fine. Is there something we are missing here? Any help would be greatly appreciated.
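
    Since everything else is unchanged, a first hedged check is plain TCP reachability of the SQL port from a machine behind the new router (hostname invented; 1433 is the default SQL Server port):

      # Test that the router passes traffic to the SQL Server port at all
      nc -vz sql.example.com 1433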

    Read the article

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I've got an Ubuntu 10.04 production MySQL database server where the total database size is 260 GB, while the root partition on which the DB is stored is only 300 GB; essentially, around 96% of / is full and there's no space left for storing a dump or backup. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently, with minimum downtime. I'm thinking along these lines:

      1. Request an extra drive to be attached to the server and take a dump onto that drive.
      2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
      3. When the migration is due, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't accept any writes, and tell the app developers to update their config with the new IP address for the DB.

    What are your suggestions to improve this, or is there a better alternative approach for this task?
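
    For step 1, the dump does not necessarily need local disk at all; a minimal sketch that streams a consistent InnoDB dump straight to the new server over SSH (hostnames, paths, and credentials are assumptions to adapt):

      # Stream the dump to the new server without writing anything to the full / partition
      mysqldump --single-transaction --all-databases -u root -p \
        | gzip -c \
        | ssh user@new-db-server 'cat > /data/migration/all-databases.sql.gz'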

    Read the article

  • How to route traffic through a VPN tunnel?

    - by Gabriel
    The problem with our server is that we need to use the bug-ridden and awful AT&T network client, which causes our server to bluescreen once every 24 hours. Does anyone know how (or have a good guide on how) to quickly set up a workstation running Windows Server 2008 R2 as a proxy? This spare workstation would run the AT&T client and act as a bridge between our server and the server that can only be reached via the AT&T VPN software. That way our own production server would not crash so often (or at all), and the workstation can happily crash whenever it wants to.
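
    The rough shape of the bridge idea is: enable routing on the workstation (RRAS, or the IPEnableRouter registry value) so it forwards between its LAN interface and the AT&T tunnel, then point the production server at it for the remote network. A heavily hedged sketch, with every address invented for illustration:

      rem On the production server: send traffic for the remote subnet via the workstation's LAN IP
      route -p add 10.50.0.0 mask 255.255.0.0 192.168.1.50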

    Read the article

  • fastcgi-mono-server with Nginx is much slower than xsp4

    - by marxin
    We started testing our MVC4 app on the xsp4 server compiled with mono-3.0.3; the speed was acceptable, and we decided to set up production with fastcgi-mono-server4 (version 2.11.0.0) and nginx (1.2.6-r1). A single request that loads some JSON took ~200 ms on xsp4, but nginx serves the same request in about 1.2 s, and I am wondering where such a slowdown could come from. I followed the nginx configuration guide at http://www.mono-project.com/FastCGI_Nginx, and fastcgi-mono-server4 listens on a socket that nginx connects to. Do you have any ideas on how to log some timestamps that would help me track this down? Thanks
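
    One easy source of timing data is the client side: curl can report total time and time-to-first-byte for the same URL against both servers, which at least shows whether the extra second is spent before or during the response; a minimal sketch (URLs and ports are assumptions):

      # Compare timings for the same JSON request on xsp4 vs nginx + fastcgi-mono-server4
      curl -s -o /dev/null -w 'total=%{time_total}s ttfb=%{time_starttransfer}s\n' http://localhost:8080/api/data.json
      curl -s -o /dev/null -w 'total=%{time_total}s ttfb=%{time_starttransfer}s\n' http://localhost/api/data.json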

    Read the article

  • DNS Resolver Speed Techniques

    - by Rob Olmos
    I recently received a reply to my concerns about some DNS servers being slower than others despite all of the servers being anycast: In practice, most resolvers won't be impacted by the slower paths to some of the name servers in the set. Most resolvers employ various techniques to provide fast lookups, such as preferring name servers that were previously seen to be faster, sending simultaneous queries to multiple name servers, or pre-fetching queries before the TTL has expired. I was not aware that resolvers used these techniques, and my searches for more information about them came up empty. Are there names for these techniques? Which resolvers employ which of them?
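
    The server-preference behaviour is easy to observe from a client by querying each name server in the delegation set directly and comparing the reported query times; a minimal sketch with placeholder names:

      # Compare response times across the name servers in the set
      for ns in ns1.example.net ns2.example.net ns3.example.net; do
        echo -n "$ns: "
        dig @"$ns" www.example.com +noall +stats | grep 'Query time'
      done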

    Read the article

  • Only ONE Outlook 2010 installation gets "Cannot connect to Exchange server" when setting up a new profile

    - by Johnny PDEX
    - Exchange 2010, one-server installation (small production, I know not best practice)
      - OWA connectivity has been confirmed; Autodiscover is configured and working properly for EVERY other installation
      - Other user accounts tested on the problem Outlook; none can connect
      - Windows Firewall is pre-configured by Group Policy, the only modifications being related to remote management; the firewall has also been disabled during the diagnostic period
      - Network discovery and file sharing is enabled on the workstation as well
      - Windows 7 Professional, latest updates installed

    Driving me nuts. Help, serverfault?

    Read the article

  • One bigger Virtual Machine distributed across many Nodes [on hold]

    - by flyer
    I just set up virtual machines on one piece of hardware with Vagrant (this is just a test environment, not production!). I want to use Puppet to configure them and then try to set up OpenStack. I am not sure I understand how this should look in the end. Is it possible to end up with the architecture below in OpenStack, where I run one virtual machine with Linux?

      -------------------------------
      |             VM              |
      -------------------------------
      |  NOVA   |  NOVA   |  NOVA   |
      -------------------------------
      |          OpenStack          |
      -------------------------------
      |  Node   |  Node   |  Node   |
      -------------------------------

    (In my environment the nodes are just virtual machines, but my question concerns separate hardware nodes.) After some comments... is it a language barrier, or? This is only my 'virtual environment'. If we imagine these virtual machines are separate nodes (e.g. each has 4 cores), OpenStack is still the same, right? Can I run one virtual machine across many nodes with OpenStack? Is it possible to aggregate the computation power of separate machines into one virtual distributed operating system?

    Read the article

  • Is there a way to import a scheduled task from windows 2003 (.job) to windows 2008 (.xml)?

    - by Rodrigo
    I have some jobs to move from the old production server (Windows Server 2003 Standard) to the new machine (Windows Server 2008 Standard), but the new server is unable to read the old .job format, and the import wizard only imports .xml job files (from the same version). Obviously I don't want to rebuild all the jobs by hand, but I can't find a tool that makes the process even a little easier. I don't trust Microsoft for this kind of tool; my previous experiences have been too bad (DTS - SSIS). Any ideas? Thanks in advance.
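
    I am not aware of a supported converter from .job to the 2008 XML task format either; one low-tech route is to dump the full task definitions on the 2003 box and recreate them with schtasks on the new server. A hedged sketch, with the task name, command, and schedule invented for illustration:

      rem On the Windows 2003 server: capture every task's settings in readable form
      schtasks /query /fo LIST /v > C:\temp\tasks-2003.txt

      rem On the Windows 2008 server: recreate each task from that listing
      schtasks /create /tn "Nightly backup" /tr "C:\scripts\backup.cmd" /sc DAILY /st 02:00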

    Read the article

  • How to generate new CSRs for TLS use in sendmail?

    - by Mikey B
    SendMail 8.13.8 | CentOS 5.x Hi guys, I'm using CA-signed TLS certificates on my sendmail server and they are up for renewal soon. Our new CA doesn't like our old CSR, so I need to generate a new one. Can someone point me to the procedure for doing this (without affecting the production certs that are already in use)? I'm paranoid about overwriting the old TLS certs in the process of generating a CSR. Most of the instructions I've found are for implementing self-signed TLS certs, which isn't an option for me at this time. I'm thinking it would be something like: openssl req -new -nodes -out new-tls.csr -keyout new-tls-private.key But I wasn't sure if I was missing some options there, such as the -x509 option... -M
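
    For what it's worth, the command in the question is close, and leaving out -x509 is correct: -x509 would produce a self-signed certificate rather than a CSR. A minimal sketch that only writes new files and so cannot disturb the key and certificate sendmail is already using (file names, the 2048-bit key size, and the subject are assumptions):

      # Generate a fresh key and CSR without touching the existing production key/cert
      openssl req -new -newkey rsa:2048 -nodes \
        -keyout new-tls-private.key -out new-tls.csr \
        -subj "/O=Example Org/CN=mail.example.com"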

    Read the article

  • How do I make a backup of a live server?

    - by Jurily
    At my new job, I have a production server with the following qualities:

      - Windows (XP I think), ancient hardware
      - Absolutely vital database
      - No backups whatsoever
      - Everyone in the company has full admin rights; the passwords are stored in a .txt on the global share
      - No installers, except for the OS
      - The machine itself is sitting on a wooden shelf 5 feet above the ground against an external wall with frequent truck traffic on the other side; the shelf is already bent from the constant load
      - Hasn't been rebooted in $DEITY knows how long; my predecessor wasn't even sure if it would survive it
      - UPS is installed, but since everything is hooked up to it, it would last 10 minutes tops
      - No spare parts or hardware budget

    How do I make a full backup with minimal impact on the server? I'm not sure how close it is to a total meltdown. For all I know, plugging in a USB stick could kill the company, and of course it will be all my fault, since "it was running fine before you touched it". The ideal solution would be a VM, so I have a test environment as well (separate of course).

    Read the article

  • rsync : Read input from a file and sync accordingly

    - by Dheeraj
    I have a text file containing the list of files and directories that I want to copy (one per line). Now I want rsync to take this input from my text file and sync it to the destination that I provide. I've tried playing around with the "--include-from=FILE" and "--files-from=FILE" options of rsync, but it is just not working. I also tried prefixing "+" to each line in my file, but still it is not working. I have tried coming up with various filter PATTERNs as outlined in the rsync man page, but it is not working. Could someone provide the correct syntax for this use case? I've tried the above on Fedora 15, RHEL 6.2 and Ubuntu 10.04 and none worked, so I am definitely missing something. Many thanks.
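
    A minimal sketch of the --files-from form, assuming list.txt holds one path per line, interpreted relative to the source root given on the command line (here /, so absolute paths in the file also work); host and destination directory are invented:

      # Copy exactly the files and directories named in list.txt
      rsync -av --files-from=list.txt / user@backup-host:/backup/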

    Read the article

  • Best VoIP VPS Location?

    - by ToiletOverflow
    Our office currently runs a few phones over a VoIP line. Through our VoIP provider, we have a virtual private server. We chose them as the VPS provider because the VPS came pre-loaded with all of the necessary software. However, I've discovered that I would rather manage the software myself, and I would prefer to work on a different platform. The primary reason we have stayed with them is that, as our VoIP provider, they have "direct access to the PSTN", which I presume is an advantage for call termination and overall call quality. My question boils down to: which is better from a call quality perspective?

      1) A server located 20 ms closer to us (60 ms), offered by a different company
      2) The current server at the VoIP/SIP provider (80 ms)
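
    Round-trip time is only one ingredient of call quality; packet loss and jitter along each path matter at least as much, and they are easy to compare before deciding. A minimal sketch with placeholder hostnames:

      # Compare loss and latency over 100 probes to each candidate server
      mtr --report --report-cycles 100 vps.current-provider.example
      mtr --report --report-cycles 100 vps.other-provider.example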

    Read the article

  • can I display a JPG or PNG to the framebuffer (/dev/fb*)?

    - by ndmweb
    I know I can capture the framebuffer in Linux using something like cp /dev/fb0 ~/myimage and re-display it by copying back to the device, like so: cp ~/myimage /dev/fb0. What format is the framebuffer image data in, and how would I go about displaying a pre-made image (JPG, PNG) on the framebuffer? Can I convert to this format using ImageMagick? P.S. I'm using a Raspberry Pi running Raspbian. Update 11-12-2012: I ended up using pygame to display images in my application. I'm not sure whether it uses the framebuffer to display the images, but it meets my needs quite well.
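
    The raw format is simply whatever mode the framebuffer is in; fbset reports the geometry and bit depth (on the Pi it is often 16-bit RGB565, sometimes 32-bit BGRA). A hedged sketch of the two usual approaches, with the file name, geometry, and 32-bit BGRA layout all assumptions:

      # Option 1: let fbi decode the image and draw it on the framebuffer directly
      sudo fbi -d /dev/fb0 -T 1 image.jpg

      # Option 2: convert to the framebuffer's raw pixel layout with ImageMagick and write it out
      # (check `fbset` first and adjust the geometry/layout to match the real mode)
      convert image.png -resize '1920x1080!' -depth 8 bgra:- > /dev/fb0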

    Read the article

  • Why the different coarse threaded screws?

    - by Luke
    I'm seeing more and more of these screws (pictured below), which are almost triangular. I find I can only put them into power supplies and PCI(e) cards in cases, but they break or strip if I put them into a hard drive or a motherboard standoff. Notice the triangular shape? I started asking about this in the Root Access chat, but got no concrete answer yet. I don't assume it's a production flaw, as I've seen hundreds of them and replaced them with the "proper" round screws. They are coarse-threaded, not fine-threaded (i.e. not the thread used for a DVD drive or floppy drive). What are they for, and why do we need them instead of the regular round ones?

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share that several frontend servers are connected to in order to make the files stored on it available for HTTP downloads. It looks like I have problems with the way Apache is serving the files: there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing with loading the whole requested file into memory at once and serving it to the client from memory; with this technique I need fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could maybe use nginx for it, because the documentation says that it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Does anyone have experience with large buffers for static file serving? Is there a better way to reduce disk seeks?
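
    For plain static files the relevant nginx knob is output_buffers, which applies when sendfile is off (with sendfile on, nginx hands the copy to the kernel and does no read buffering of its own). A hedged configuration sketch; the location and the 2 x 4m sizing are just values to experiment with:

      location /downloads/ {
          sendfile       off;
          output_buffers 2 4m;
      }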

    Read the article

  • Can a working Tomcat 6 webapp be turned into a usable .war file?

    - by Bill Cole
    Problem: I have a working webapp on a FreeBSD 8.1 Tomcat 6 test server that I need to move to a production system. The developer who last touched it (and had root on that server) has moved on and isn't helpful. The running app seems to have been deployed from a CVS server that is now unavailable. My thinking is that I would like to find a way to wrap the working webapp into a proper .war so that I can deploy it on a pristine host and (after testing) send the existing system to a very deep bitbucket. But I'm not having any luck finding a way to do that. I'm a sysadmin, not a developer, and I don't work much with Tomcat systems, so I may be (and likely am) overlooking something blindingly simple. I gather that I may be able to just tar up the deployed directory and untar it on the new machine, but I have a nagging feeling that there are pitfalls in that.
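
    A deployed (exploded) webapp directory can indeed be repackaged into a .war with the JDK's jar tool, since a .war is just a zip archive with a defined layout; a minimal sketch with an assumed path:

      # Repackage the exploded webapp as a .war (run from inside the deployed directory)
      cd /usr/local/tomcat/webapps/myapp
      jar cvf /tmp/myapp.war .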

    Read the article

  • SSL certificates and whether a wildcard common name will support domain.com

    - by timpone
    Sorry if this is very vendor-specific, but I purchased an inexpensive SSL cert from GoDaddy. Right now everything on production is hosted off of www.domain.com. When specifying the common name, would a wildcard (i.e. *.domain.com) cover the bare domain, domain.com, which has no third-level label? Just to be sure, I made the cert for www.domain.com rather than a wildcard. If it matters, I will be using it with nginx and mod_passenger. If I want to cover everything, including domain.com, staging.domain.com, www.domain.com, etc., would a wildcard be the proper cert? Does the inexpensive GoDaddy cert ($12.99/year) cover wildcards (it didn't seem to for me)? Again, sorry for asking vendor-specific questions, and thanks in advance.

    Read the article
