Search Results

Search found 4561 results on 183 pages for 'production'.

  • How do I capture and play back HTTP requests against multiple web servers?

    - by KevM
    My overall goal is to capture the HTTP POSTs that a closed application sends to a web application, without interrupting the production system, so that I can reverse engineer its telemetry. I have control over the transmitter of the HTTP POSTs but not the receiving web application. It seems like I need a request-"forking" proxy: a sort of reverse proxy that pushes each request to two endpoints, a master and a slave, relaying only the master endpoint's response back to the requester. I am not a server geek, so something like this may already exist and I just don't know the term of art for it. Another possibility could be a simple logging proxy: capture a log of the web requests, rewrite the log to target my "slave" web application, and play the log back with curl or something. Thank you for your assistance.
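
    A hedged sketch of the logging-proxy route, assuming tcpdump and tcpflow are available somewhere on the wire path; the interface, capture filter, file names and slave URL are all placeholders, not the asker's actual setup:

        # capture the POSTs on the wire without touching either application
        tcpdump -i eth0 -s 0 -w posts.pcap 'tcp dst port 80'
        # reassemble each TCP stream into its own file for inspection
        mkdir -p flows && tcpflow -r posts.pcap -o flows
        # replay one captured body against the slave endpoint
        curl -X POST -H 'Content-Type: application/xml' \
             --data-binary @flows/request-body.xml http://slave.example.com/endpoint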

  • Does CHECK TABLE take read/write locks?

    - by Ztyx
    Hi, Yesterday I ran CHECK TABLE on a table that is read very frequently. I scanned the MySQL documentation for CHECK TABLE for any mention of "lock" (and found none) and also noticed that only the SELECT privilege is required to run the command. I therefore concluded that the command did not take any read lock and was safe to run even in production. Sadly, running the command took 1 minute and 37 seconds and seemed to block all read access. My question is therefore: does CHECK TABLE take any read lock? Is there any other reason why I would experience a read block on the table? Thanks
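
    For what it's worth, CHECK TABLE does take a read lock on MyISAM tables, and a write that queues behind that lock will in turn block later reads, which matches the symptom. A hedged way to confirm, and to shorten the window (table name and credentials are placeholders):

        mysql -u root -p -e "SHOW PROCESSLIST"                 # threads in the 'Locked' state point at the culprit
        mysql -u root -p -e "CHECK TABLE mydb.mytable QUICK"   # QUICK skips the full row scan and finishes much sooner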

  • CentOS 6 init script doesn't work properly

    - by user711643
    I'm setting up my Ruby production server, based on CentOS 6. I need a process called god (a process-monitoring tool) to start at boot. I'm using an init script that I found here. Just as stated in the guide, I ran chkconfig --add god and then chkconfig --level 345 god on. After this, if I run "service god start|restart", everything works: it loads the available configurations and brings up the related processes (if they are not running). The problem is that it doesn't work at boot. If I reboot the system and then run "ps -aux | grep god", god is running, but apparently it didn't load the configuration files. If I then run service god restart, it loads everything without problems. What am I doing wrong?
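
    One common culprit, offered as a guess: the init script, or the god config it points at, relies on a relative path or an environment variable that exists in a login shell but not at boot time. A fragment showing the shape of the fix, with every path a placeholder:

        # in /etc/init.d/god, make paths absolute and explicit
        DAEMON=/usr/local/bin/god
        CONFIG=/etc/god/master.god
        start() {
            $DAEMON -c $CONFIG --log /var/log/god.log
        }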

  • Securing internal data accessed by a website on the big, bad internet

    - by aehiilrs
    A close relative of this question on Stack Overflow: when you have a web site in your DMZ that needs to access production data stored on an internal DB, what strategies do you recommend for lowering the risks that come with accessing live data? Is it even considered acceptable to have a connection initiated from the DMZ come inside your network? An extra detail about the nature of the site that throws a monkey wrench into the machinery: people using the web site will be competing for "spots" on a first-come, first-served basis with others using the internal software. Because of this, as close to zero lag between the two applications as possible is ideal.
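
    Common practice rather than a rule: a DMZ-initiated connection is usually considered acceptable when the inner firewall allows exactly one pinhole, from the web host to the DB host on the DB port, and nothing else. A sketch in iptables terms, with all addresses and the port as placeholders:

        # on the inner firewall: one pinhole from the DMZ web host to the DB
        iptables -A FORWARD -s 10.0.1.10 -d 192.168.1.20 -p tcp --dport 3306 -j ACCEPT
        # ...and everything else from that host is dropped
        iptables -A FORWARD -s 10.0.1.10 -j DROP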

  • How to find malformed / corrupted / DOS / BOM-byte files in Linux

    - by Syquus
    I have several problems maintaining large production servers on which some developers drop files from Windows environments, sometimes with BOM bytes (we use UTF-8 and have no need for those), causing a lot of trouble. Other times I get "no end of line" and "[dos]" labels when editing files with vim directly on the server. I recently discovered how to find the BOM byte and how to delete it in a batch script. What about illegal bytes and bad EOLs? Is it safe to use DOS text files in a Linux environment? Are there any drawbacks if I convert them with the dos2unix command? Regards
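
    A small toolbox for finding the offenders; the paths and glob are placeholders, and grep's $'...' byte escape assumes bash:

        grep -rl $'\xef\xbb\xbf' /var/www                         # files containing a UTF-8 BOM
        find /var/www -name '*.php' -exec file {} + | grep CRLF   # DOS line endings
        dos2unix -k somefile.php                                  # convert in place, keeping the timestamp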

  • How to monitor nginx proxy cache?

    - by Isaac
    I would like to see which objects get cached by my nginx reverse proxy (with Apache as a backend). So far I could not find a way, only the information that it's not implemented yet. The reason is that I would like to tune my configuration for best performance without putting too much stress on the server, as the backend is a production system. I know benchmarking would be better, but it's not an option right now. So I thought an alternative measure would be to monitor the cache. Is that possible, and if so, how (apart from patching nginx with the patch mentioned in the link above)?
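
    One approach that works without patching, assuming an nginx build recent enough to expose $upstream_cache_status; the log name and format are placeholders:

        # log HIT/MISS/EXPIRED per request, then tally hit ratios from the log
        log_format cache '$remote_addr $upstream_cache_status "$request"';
        access_log /var/log/nginx/cache.log cache;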

  • Is using the hosts file to resolve a SQL Server more performant?

    - by Ice
    Hi, we have a legacy application which uses an access.mdb with hundreds of ODBC-connected tables on a SQL Server; the access.mdb contains nothing other than these ODBC connections. We are now considering using a virtual SQL Server name for these ODBC connections and resolving it in the local hosts file with the IP address of the real SQL Server. That way we can easily switch between a test database server and the production server by changing one single entry in the hosts file. EVERYTHING works fine, and now comes the question: could it be that this is more performant, because there is a single point at which the SQL Server (name or IP address) is resolved? Is there something like a network cache / DNS cache? peace Ice
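
    For reference, the hosts file is consulted before DNS and Windows caches resolved names anyway, so any performance gain should be negligible; the switching trick itself is just one line per client (names and addresses are placeholders):

        # C:\Windows\System32\drivers\etc\hosts on each client
        192.168.0.10   sqlvirtual     # production server
        # 192.168.0.20 sqlvirtual     # test server -- swap the comment to switch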

  • Practical way of keeping up-to-date backup servers?

    - by ftkg
    What is the approach generally used when you want to have backup physical servers? Currently I have a Linux server running a database, a Samba share, a web app and some scripts, and a Windows server running some third-party software. What I would like is a ready backup server that can enter production in case of failure, but how do I keep the two up to date? I've seen some expensive solutions for Windows; for Linux, I've wondered if I really have to build an array of scripts.
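
    On the Linux side the unglamorous answer is often exactly that small array of scripts; a minimal cron-driven sketch with rsync and a dump, assuming the database is MySQL and with host and paths as placeholders:

        # mirror the web root and the Samba share to the standby box
        rsync -az --delete /var/www/ standby:/var/www/
        rsync -az /srv/samba/ standby:/srv/samba/
        # ship a nightly logical dump of the database
        mysqldump --all-databases | ssh standby 'cat > /backup/all-databases.sql'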

  • Cannot SSH into Virtual Machine

    - by MasterGberry
    I am running a CentOS VM on my desktop that I use for development testing when coding in Python. At school I have a dedicated IP set up for both the VM and my desktop, so I never have an issue SSHing from the desktop into the VM. I am now at home for winter break and cannot seem to SSH into the VM using the local IP address behind my router, the external IP with port 22 forwarded to the VM, or anything else. Strangely enough, I can SSH into my production server and from there SSH into the VM, but not from my desktop to the VM directly. What should I do to get this to work? Thanks
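
    A few checks that usually localize this, with the VM's address as a placeholder; an inbound hop working while the LAN path fails often points at the VM's own firewall rules or at the virtual network mode (NAT vs. bridged):

        ping 192.168.1.50            # is the VM reachable on the LAN at all?
        ssh -v user@192.168.1.50     # -v shows whether the TCP connect or the auth stalls
        # and on the VM itself: is sshd listening on all interfaces?
        netstat -tlnp | grep :22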

  • If a SQL Server Replication Distributor and Subscriber are on the same server, should a PUSH or PULL subscription be used?

    - by userx
    Thanks in advance for any help. I'm setting up a new Microsoft SQL Server replication, and I have the Distributor and Subscriber running on the same server. The Publisher is on a remote server (as it is a production database, and MS recommends that for high volumes the Distributor should be remote). I don't know much about the inner workings of PUSH vs. PULL subscriptions, but my gut tells me that a PUSH subscription would be less resource intensive because (1) the Distributor is already remote, so this shouldn't negatively affect the Publisher, and (2) pushing the transactions from the Distributor to the Subscriber is more efficient than the Subscriber polling the distribution database. Does anyone have any resources or insight into PUSH vs. PULL which would recommend one over the other? Is there really going to be that big of a difference in performance / reliability / security?

  • Options for the physical architecture of a Rails site: caching server or CDN?

    - by timpone
    I have a Rails app that currently sits on a single server. In production I force_ssl for everything. I am interested in using a caching server for images (I'm fine with CSS and JS being served from the origin for the time being). Would nginx or Varnish (which I have no experience with) be the better solution (as of October 2012)? I'd imagine it would be easy to swap these around while still on this single-server architecture. Or would something like CloudFront (which I also have no experience with) make sense for hosting image files? I know this is a vague question, but I'd appreciate any current feedback. thx in advance
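
    For scale, a minimal nginx cache-in-front for images only might look like the sketch below; the zone name, cache path and backend port are placeholders, and with force_ssl the cache has to either terminate SSL itself or sit behind whatever does:

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=imgcache:10m max_size=1g;
        server {
            listen 80;
            location ~* \.(png|jpe?g|gif)$ {
                proxy_cache imgcache;
                proxy_cache_valid 200 60m;          # keep good image responses for an hour
                proxy_pass http://127.0.0.1:8080;   # the Rails origin
            }
        }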

  • Shrink NTFS Partition Windows 2003

    - by Coops
    We have an iSCSI target, provided by a CentOS server, attached to a Windows Server 2003 Standard box and formatted as NTFS. My question is this: I know we can resize the backend block device fine (LVM et al.), but how do you tell Windows that the NTFS filesystem has shrunk afterwards? [Note: we want to shrink.] I'm imagining a world of pain if it's not done correctly! This is a production box, so ideally we'd like to keep the drive mounted and online during the process, but downtime can be scheduled if need be. 90% of what I've found on the subject so far basically involves using the ntfsresize command in Linux to do the job -- but surely Windows can do this itself? Cheers!
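
    As far as I know, Server 2003 cannot shrink NTFS natively (diskpart only gained a shrink command in Vista/2008), which is why ntfsresize keeps coming up. The usual offline order of operations, with the device name as a placeholder: filesystem first, block device second:

        # with the volume unmounted/offline, from the CentOS side:
        ntfsresize --info /dev/vg0/iscsi-lv        # how far can it safely shrink?
        ntfsresize --size 40G /dev/vg0/iscsi-lv    # shrink the filesystem first
        lvreduce -L 41G /dev/vg0/iscsi-lv          # then the LV, leaving a safety margin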

  • Varnish with multiple hosts/subdomains

    - by jerhinesmith
    I'm new to Varnish, and I'm hoping it already does this "out of the box", but I'd like to clarify before I consider using it in production. Here's my setup: I have multiple sites running off the same machine that vary by subdomain (i.e. user1.example.com, user2.example.com, etc.). Each "site" has a profile picture with the same name (i.e. user1.example.com/profile.png, user2.example.com/profile.png). Will Varnish recognize these as separate resources and cache them accordingly? Or will I need to change something in the VCL to tell it to include the full host in the cache lookup?
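
    Varnish does handle this out of the box: the default hash keys on the Host header as well as the URL, so the two profile.png files are cached as separate objects. The built-in logic, in Varnish 2.x VCL syntax, is roughly:

        sub vcl_hash {
            set req.hash += req.url;
            if (req.http.host) {
                set req.hash += req.http.host;
            } else {
                set req.hash += server.ip;
            }
            return (hash);
        }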

  • UFW blocking webrick on port 3000

    - by t Book
    On an Ubuntu 10.04 server running Redmine, starting WEBrick with ./server webrick -e production -b lvps46-173-79-113.dedicated.hosteurope.de -d makes Redmine available in the browser. As soon as we enable ufw, WEBrick can't be accessed anymore, even though we of course allowed port 3000 from anywhere: ufw allow 3000/tcp and ufw allow 3000/udp. A grep of iptables doesn't show a deny rule either: iptables -nL | grep 3000 (find the whole iptables output at http://pastebin.com/k6WNqdPU). Checking lsof -ni tcp:3000 tells me Ruby is listening on port 3000: ruby 3457 root 5u IPv4 864846667 0t0 TCP 46.173.79.113:3000 (LISTEN). What else can we check? What's wrong with the ufw rules for port 3000?
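
    Some places worth looking, since ufw allow rules land in ufw's own chains rather than directly in INPUT (chain names as ufw creates them on 10.04):

        ufw status verbose                 # confirm the rule really registered
        iptables -nL ufw-user-input        # where 'ufw allow 3000/tcp' should show up
        iptables -nL INPUT --line-numbers  # check nothing jumps away before the ufw chains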

  • MongoDB PHP EC2 Setup Configuration

    - by nathansizemore
    I am new to web development and server setup, and I am looking for some advice, or a link to a tutorial, on setting up a production system. Right now I have one server (Ubuntu, Apache, MongoDB, and PHP). It receives a request, PHP queries Mongo, and PHP sends out the requested data. How do I make that work with more servers? I've read that you can make a cluster of a primary and two secondary nodes which work as separate servers running Mongo, but do those also run PHP? Or does only the primary run PHP? I have read some docs on the Mongo site and watched a video of someone from 10gen going through it, but they are geared towards people who seem to already understand this stuff; I have no idea and need to start from the beginning. If anyone can help me understand where PHP (acting as my API) lives in these clusters, that would be greatly appreciated! Thanks in advance for any help!
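
    The short version: PHP lives on the web servers, not on the Mongo boxes; every web server runs Apache+PHP, and the driver connects to the whole replica set. A sketch using the old PECL mongo driver, with hostnames and the set name as placeholders:

        <?php
        // each web box runs this same code; the driver finds the current
        // primary among the seed hosts and fails over automatically
        $m  = new Mongo("mongodb://db1.example.com,db2.example.com",
                        array("replicaSet" => "rs0"));
        $db = $m->selectDB("mydb");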

  • Solr performance (tomcat) - High load

    - by Ward Loockx
    I'm relatively new to Solr. I have a production site running on a VPS, but now I'm having serious load issues, and I don't know where to start in order to get the load down. VPS specs (linode.com 512): 512 MB RAM, 4 CPUs (1x priority). It looks like my Solr server (Tomcat) is using a lot of CPU power. You can find my solrconfig.xml at http://pastebin.com/qdfi8Med and my schema.xml at http://pastebin.com/rRusDP8b. I've tried to increase the cache sizes, but this didn't do anything for the load. You can see the stats page below. EDIT - Because the screenshot was unclear, I took smaller screenshots of what (I think) is important: the dismax query handler stats and the cache stats. Thanks for the help!

  • Redirect Domain Name to Localhost

    - by somebody
    I have a Linux test machine on which I would like to run a copy of a production web server. This is a legacy application which does not use a properties file for its server name; throughout the application, the server name is hard-coded (example: open a connection to myServer.myCompany.com). Is there any Linux trick I can use to redirect all requests for a certain host back to localhost? I know that on Windows I can add an entry to the hosts file and have it redirect back to localhost. How do I do this on Linux?
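
    Linux has the same mechanism in a file of the same name; a single line in /etc/hosts does it (using the hostname from the question):

        # /etc/hosts
        127.0.0.1   myServer.myCompany.com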

  • Need a way to set PHP configuration options with a single method that works across multiple server configurations

    - by JMC Creative
    I'm trying to set post_max_size and upload_max_filesize in a specific directory for a web application I'm building. I've tried the following in a .htaccess file in the script directory (upload.php is the script that needs the special configuration): <Files upload.php> php_value upload_max_filesize 9998M php_value post_max_size 9999M </Files> That doesn't work at all. I've also tried it without the script-name specificity, where the only thing in the .htaccess file is: php_value upload_max_filesize 9998M php_value post_max_size 9999M This works on my PC-based XAMPP server but throws a "500 Misconfiguration Error" on my production server. I've also tried creating a php.ini file in the directory with: post_max_size = 9999M upload_max_filesize = 9998M But this doesn't always work either. And lastly, using the following in the PHP script doesn't work, supposedly because the settings have already been applied by the time the parser reaches that line: <?php ini_set('post_max_size','9999M'); ini_set('upload_max_filesize','9998M'); ?>
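
    The pattern of failures suggests the two servers run PHP under different SAPIs: php_value in .htaccess only works under mod_php, and under CGI/FastCGI it produces exactly that 500 error, with a per-directory php.ini being the applicable route instead. A one-line check to drop on each server:

        <?php
        // prints e.g. "apache2handler" (mod_php: use php_value) or
        // "cgi-fcgi" (CGI/FastCGI: use a per-directory php.ini instead)
        echo php_sapi_name();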

  • php-cgi.exe process on IIS

    - by HYP
    Our production server runs a PHP application on IIS 6.0. During peak hours we have had a few issues where the php-cgi.exe processes increase in number and approach around 200. The server slows to a crawl, and we have to restart it multiple times to restore normal behavior. When the server is running normally, I have noticed that there are only 10-15 php-cgi.exe processes in Task Manager. What could be causing the php-cgi.exe processes to increase in number from 10-15 to around 200 during peak hours? Where should I look for a cause?
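
    If PHP runs through the IIS FastCGI extension (an assumption; plain CGI spawns one php-cgi.exe per request and would explain the runaway count by itself), the process pool is capped in fcgiext.ini. The relevant settings look roughly like this, with the numbers as examples only:

        ; %windir%\system32\inetsrv\fcgiext.ini
        [Types]
        php=PHP
        [PHP]
        ExePath=C:\PHP\php-cgi.exe
        ; hard cap on concurrent php-cgi.exe processes
        MaxInstances=16
        ; recycle workers instead of letting them accumulate
        InstanceMaxRequests=10000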

  • For web development, which is more important: CPU and graphics card, OR RAM and SSD hard drive?

    - by adam
    Buying a laptop is always hard work, and questions about specific models don't age well on forums. A popular dilemma (especially with Apple MacBooks) is whether to spend more for a faster CPU and graphics card but settle for standard RAM and hard drive, OR drop down a model and spend the savings on more RAM and a faster hard drive such as an SSD. I'm wondering which would provide better performance for web development, i.e. an IDE, unit tests, Photoshop work and some user-testing screen capturing now and again. (No games, music production or Spielberg-standard video editing.) For example's sake, from the current Apple lineup of 15-inch MacBook Pros, at roughly the same price: a 2.66 GHz Core i7 with 4 GB RAM and a 5400 rpm drive vs. a 2.4 GHz Core i5 with 8 GB RAM and a 128 GB SSD.

  • How should a small team using multiple OS's deploy over github?

    - by Toby
    We have a small development team that has recently moved to GitHub to host our projects. The team consists of three developers, two on Windows and one on a Mac. I am currently researching the best way to deploy applications to our Linux servers (dev and production). Capistrano running locally would be ideal, but from what I read this won't work for the Windows machines. It looks like the best way is to use a post-receive hook on GitHub: I can see how this would work for auto-deploying to dev, but I don't see how we could then deploy to live. I have found paid services like http://www.deployhq.com/, but it feels like something a quick bit of code should be able to do for free; I just can't seem to get myself pointed in the right direction! I was wondering what would be considered best practice for small-team deployment involving multiple local OSes and GitHub.
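
    One sketch of the free route, assuming GitHub's post-receive hook (an HTTP callback) is pointed at a tiny endpoint on the dev server that simply runs a script like the one below; branch names and paths are placeholders, and live stays a deliberate manual step:

        #!/bin/sh
        # deploy-dev.sh -- run whenever GitHub notifies us of a push
        cd /var/www/dev/myapp || exit 1
        git fetch origin
        git checkout -f origin/develop    # dev always tracks the develop branch
        # live is promoted by hand (or by a second, manually run script):
        #   git checkout -f v1.2.3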

  • Is there a way to import a scheduled task from Windows 2003 (.job) to Windows 2008 (.xml)?

    - by Rodrigo
    I have some jobs to move from the old production server (Windows Server 2003 Standard) to the new machine (Windows Server 2008 Standard), but the new server is unable to read the old .job format, and the import wizard only imports from .xml job files (its own version). Obviously I don't want to rebuild all the jobs by hand, but I can't find a tool that makes the process even a little easier. I don't trust Microsoft for this kind of tool; my previous experiences have been too bad (DTS - SSIS). Any ideas? Thanks in advance.
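
    Short of a converter, one workable fallback is to dump every task's full definition on the 2003 box and recreate each one with schtasks on the 2008 box; the task name, command and schedule below are examples only:

        :: on the Windows 2003 server -- full details of every job, to a file
        schtasks /query /v /fo LIST > jobs.txt
        :: on the Windows 2008 server -- recreate each one from that listing
        schtasks /create /tn "NightlyBackup" /tr "C:\scripts\backup.cmd" /sc daily /st 02:00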

  • Use server git installation in GitHub for Windows

    - by Lg102
    We are using Git as the version control for our website development. I work from a laptop connected to the internal network via WiFi, and I've mapped the server drives as network drives in Windows. Commands such as git status take significantly longer for me than they do for my co-workers on wired connections. When I connect to the server using SSH and run commands on the Git installation there, performance is better still. Is there a way to configure GitHub for Windows to use the server-installed Git (with my credentials)? Note: while our production server has a user configuration with proper permissions, the development server has only one root user.
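
    As far as I know, GitHub for Windows cannot delegate to a remote Git, and the real cost is git stat-ing every working-tree file across SMB. The usual cure is to keep the working copy on the laptop and treat the server as a plain remote (host and path are placeholders):

        # clone onto the laptop once, over ssh rather than a mapped drive
        git clone ssh://user@devserver/srv/git/website.git
        cd website
        git status    # now only local files are stat'ed -- fast even over WiFi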

  • Too many files open issue (in CentOS)

    - by Ram
    Recently I ran into this issue on one of our production machines. The actual error from PHP looked like this: fopen(dberror_20110308.txt): failed to open stream: Too many open files. I am running a LAMP stack along with memcached on this machine, and I also run a couple of Java applications. While I did increase the limit on the number of files that can be opened to 10000 (from 1024), I would really like to know if there is an easy way to track the number of files open at any moment as a metric. I know lsof is a command which will list the file descriptors opened by processes. I'm wondering if there is any other, better (in terms of reporting) way of tracking this using, say, Nagios.
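
    A few counters cheap enough to poll from a Nagios check (the user and process selection are examples):

        lsof -u apache | wc -l                 # descriptors held by one user
        ls /proc/$(pgrep -o httpd)/fd | wc -l  # per-process count, cheaper than lsof
        cat /proc/sys/fs/file-nr               # system-wide: allocated, free, max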

  • Windows Service stops on Server Patch

    - by Carel
    I'm a developer with a Windows service that runs on a production server and sends emails that are entered into a database on a database server. Although the service is set to start automatically, whenever the web server gets patched (which happens every other week) the service for some reason fails to start, and various emails don't get sent. I don't actually have access to the server, so I have to ask a build administrator to start the service. What I want to know is whether there is any reason for the service to fail to start when the server is patched.
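
    A common cause is the service coming up before something it depends on (the network, the database client stack) is ready after the post-patch reboot. Two mitigations the administrator could apply, with the service name as a placeholder (delayed start exists only on Server 2008 and later):

        :: restart automatically if the service fails, up to three times a day
        sc failure "EmailService" reset= 86400 actions= restart/60000/restart/60000/restart/60000
        :: start late in the boot sequence, after networking is up
        sc config "EmailService" start= delayed-auto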
