Search Results

Search found 8589 results on 344 pages for 'pre production'.

Page 92/344 | < Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box is running Red Hat Enterprise Linux 5. Our backup box is running Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3GB. Every now and again (~5% of the time), the backup script fails. The shell script on the "client side" - which looks at return codes - never identifies any problem, since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

        #Software: Microsoft Internet Information Services 5.1
        #Version: 1.0
        #Date: 2009-08-08 07:04:25
        #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
        07:04:25 192.168.111.235 [15]USER backup 331 0
        07:04:25 192.168.111.235 [15]PASS - 230 0
        07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
        07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort of a catch-all for "IIS was not happy". The real puzzler is the Win32 code: 10035 (WSAEWOULDBLOCK -- resource temporarily unavailable). My understanding is that this code is normal when using non-blocking socket calls, which would almost certainly be used by any FTP server implementation. My first guess that it might be a timeout issue doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!
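    Not an explanation of the 426, but since the classic ftp client exits 0 even on aborted transfers, the client-side script could at least detect the failure itself. A minimal sketch using curl, with host, credentials and file name as placeholders:

        #!/bin/bash
        # Upload the nightly dump with curl, which - unlike the classic ftp
        # client - returns a non-zero exit code when the transfer is aborted.
        BACKUP=backup_$(date +%Y%m%d).zip
        if ! curl -sS -T "$BACKUP" "ftp://backup:secret@192.168.111.20/"; then
            echo "FTP upload of $BACKUP failed" >&2
            exit 1
        fi

    Comparing the remote file size against the local one afterwards would also catch silently truncated transfers.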

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application. When I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq I see that the wrong Ruby looks to be used:

        Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch)
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup'
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17

    When I run the command from the CLI, Sidekiq launches correctly:

        $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &
        $> ps -aef | grep sidekiq
        root 1236 1235 8 20:54 pts/0 00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy]

    My sidekiq.monitrc file:

        check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'"
          stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"
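    A common cause of exactly this mismatch is that Monit starts processes with a minimal environment, so the system Ruby 1.8.7 is found instead of the 1.9.3 the login shell uses. A sketch of a sidekiq check that pins the environment explicitly - the pid file location and sidekiq flags are assumptions, since the monitrc above is actually watching unicorn:

        check process sidekiq with pidfile /srv/www/myapp/shared/pids/sidekiq.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && PATH=/usr/local/bin:$PATH RAILS_ENV=production /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml -P /srv/www/myapp/shared/pids/sidekiq.pid -L /srv/www/myapp/shared/log/sidekiq.log -d'"
          stop program = "/bin/bash -c 'kill -TERM `cat /srv/www/myapp/shared/pids/sidekiq.pid`'"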

    Read the article

  • Firefox and Chrome keep forcing HTTPS on Rails app using nginx/Passenger

    - by Steve
    I've got a really weird problem here where every time I try to browse my Rails app in non-SSL mode, Chrome (v16) and Firefox (v7) keep forcing my website to be served over HTTPS. My Rails application is deployed on an Ubuntu VPS using Capistrano, nginx, Passenger and a wildcard SSL certificate. I have set these parameters for port 80 in the nginx.conf:

        passenger_set_cgi_param HTTP_X_FORWARDED_PROTO http;
        passenger_set_cgi_param HTTPS off;

    The long version of my nginx.conf can be found here: https://gist.github.com/2eab42666c609b015bff

    The ssl-redirect.include file contains:

        rewrite ^/sign_up https://$host$request_uri? permanent ;
        rewrite ^/login https://$host$request_uri? permanent ;
        rewrite ^/settings/password https://$host$request_uri? permanent ;

    It is there to make sure those three pages use HTTPS when coming from a non-SSL request. My production.rb file contains this line:

        # Enable HTTP and HTTPS in parallel
        config.middleware.insert_before Rack::Lock, Rack::SSL, :exclude => proc { |env| env['HTTPS'] != 'on' }

    I have tried redirecting to HTTP via nginx rewrites, Ruby on Rails redirects, and also Rails view URLs using the HTTP protocol. My application.rb file contains this method, used in a before_filter hook:

        def force_http
          if Rails.env.production?
            if request.ssl?
              redirect_to :protocol => 'http', :status => :moved_permanently
            end
          end
        end

    Every time I try to redirect to non-SSL HTTP, the browser attempts to redirect back to HTTPS, causing an infinite redirect loop. Safari, however, works just fine. Even when I've disabled serving SSL in nginx, the browsers still try to connect to the site using HTTPS. I should also mention that when I pushed my app to Heroku, the Rails redirect worked just fine for all browsers. The reason why I want to use non-SSL is that my homepage contains non-secure dynamic embedded objects and a non-secure CDN, and I want to prevent security warnings. I don't know what is causing the browser to keep forcing HTTPS requests.
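    One thing that may be worth ruling out, given that Chrome and Firefox misbehave while Safari does not: Rack::SSL sends a Strict-Transport-Security header by default, and browsers that support HSTS will silently rewrite http:// requests to https:// until that header expires. A quick check from the command line (hostname is a placeholder):

        # Inspect the headers the HTTPS site is sending; an HSTS header here
        # makes Chrome/Firefox force https:// for future requests to the host.
        curl -skI https://www.example.com/ | grep -i strict-transport-security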

    Read the article

  • Instabilities with Bridged and bonded interfaces

    - by Henry-Nicolas Tourneur
    I posted yesterday to get a working setup with several bridged interfaces used for virtual machines (KVM/libvirt). One of the bridged interfaces just uses eth3 as its port, while the second one (public traffic) uses an Ethernet bonded interface. That setup is working, but not all the time! I can start a download from a VM, then it will stop and freeze. So I don't know if my bridge parameters are correct - could you check the config below?

        iface eth3 inet manual

        auto bond0
        iface bond0 inet manual
            slaves eth1 eth2
            pre-up ip link set bond0 up
            down ip link set bond0 down

        auto br0
        iface br0 inet static
            address 10.160.0.7
            netmask 255.255.255.128
            bridge_ports eth3
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp on

        auto br0:1
        iface br0:1 inet static
            address 10.160.0.9
            netmask 255.255.255.255

        auto br0:2
        iface br0:2 inet static
            address 10.160.0.10
            netmask 255.255.255.255

        auto br1
        iface br1 inet static
            address 217.4.40.242
            netmask 255.255.255.240
            gateway 217.4.40.241
            pre-up /etc/network/firewall start
            bridge_ports bond0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp on

        auto br1:1
        iface br1:1 inet static
            address 217.4.40.252
            netmask 255.255.255.255

        auto br1:2
        iface br1:2 inet static
            address 217.4.40.253
            netmask 255.255.255.255

    And yes, the host also sometimes logs martian packets:

        kernel: [249146.055172] martian source 10.160.0.17 from 10.160.0.10, on dev vnet2
        kernel: [249146.073122] ll header: ff:ff:ff:ff:ff:ff:54:52:00:76:c3:5c:08:06
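    Not an answer, but when a bridge over a bond stalls mid-transfer it helps to capture the bridge, STP and bonding state at the moment of the freeze. A minimal sketch of the usual diagnostics, using the interface names from the config above:

        # Which ports belong to each bridge, and are they in the forwarding state?
        brctl show
        brctl showstp br1

        # The bond's view of its slaves: active slave, link failure counts
        cat /proc/net/bonding/bond0

        # Capture on the public bridge while a guest download is stalled
        tcpdump -ni br1 host 217.4.40.242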

    Read the article

  • Cheapest iSCSI SAN for Windows 2008/SQL Server clustering?

    - by MichaelGG
    Are there any production-quality iSCSI SANs suitable for use with Windows Server 2008/SQL Server failover clustering? So far, I've only seen Dell's MD3000i and HP's MSA 2000 (2012i), which both come in at around $6K with a minimal disk configuration. Buffalo (yeah, I know) has a $1,000 device with iSCSI support, but they say it will not work for 2008 failover clustering. I'm interested in seeing something suitable for failover in a production environment, but with very low IO requirements (clustering, say, a 30GB DB).

    As for using software: on Windows, StarWind seems to have a great solution, but it's actually more money than buying a hardware SAN. (As I understand it, only the enterprise edition supports replicas, and that's $3,000 a license.) I was thinking I could use Linux - something like DRBD plus an iSCSI target would be fine. However, I haven't seen any free or low-cost iSCSI software that supports SCSI-3 persistent reservations, which Windows 2008 needs for failover clustering.

    I know $6K isn't much at all; I'm just curious to see if there are practical cheaper solutions out there. And finally, yes, the software is expensive, but many small businesses get MS BizSpark, so the Windows 2008 Enterprise / SQL 2008 licenses are completely free.

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our host of choice has a six-drive configuration available with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment.

    We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x480GB SSDs in RAID 1 plus another 2x1TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal compared to the overall cost of the rental. We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it and not have to worry about separate 'fast and small' and 'slow and large' disks.

    Read the article

  • Mac WLAN 802.11b+g WPA1 connection issues

    - by Peto
    Hi, I have a Telewell TW-EA510v4 ADSL modem + WLAN router configured as follows:

        Mode: 802.11b+g
        Security Mode: WPA1 Pre-shared Key
        WPA Algorithms: TKIP

    Connections are allowed only from certain MAC addresses, and the MAC address of my Mac is in that list. The WLAN works just fine with an iPhone and an old Acer laptop. It has worked for about two months or so with my MacBook Pro (a model about a year and a half old). Occasionally I've had minor problems with it, which have required either a reboot of the ADSL modem or a reboot of my Mac. However, for the last week or so I haven't been able to connect to it at all. This is what I get in the console when I try to connect:

        5.5.2010 20.54.53 airportd[73731] Apple80211Associate() failed -3924 (Invalid PMK)
        5.5.2010 20.54.53 Apple80211 framework[584] airportd MIG failed (Associate Event) = -3924 (Invalid PMK) (port = 104599)
        5.5.2010 20.54.53 SystemUIServer[584] Error joining WLAN-M: Invalid password (-3924 Invalid master key)

    The pre-shared key I use is not incorrect - I'm 100% sure of that. The error log from the router only says this when I try to connect to it:

        May 05 21:09:54 home.gateway:i802_1x:none: <my mac address> associated
        May 05 21:10:00 home.gateway:i802_1x:none: <my mac address> disassociated
        May 05 21:10:01 home.gateway:i802_1x:none: <my mac address> disassociated

    Any ideas or tips to troubleshoot this further?

    Read the article

  • Mac compatible Web-Development Tools

    - by Derek Adair
    Hi, I'm looking for any Mac-compatible development tools that are tried and tested. I'm quite sick of my MySQL Query Browser crashing, and I'm sure there is probably better software out there anyway. At the moment my focus is dedicated to application development using PHP/MySQL/Ajax (although I will be learning ActionScript/Flex shortly). Here are the apps I regularly use and a general idea of the environment I work in:

        Hosting - localhost: MAMP; production: Amazon EC2
        IDE - Coda: PHP/MySQL/JavaScript
        MySQL - phpMyAdmin, MySQL Query Browser
        FTP - Coda, FileZilla
        X11 is used for connecting to my production environment

    I am mainly looking for:

        - tools that will help me better manage (create, edit... whatever) databases
        - version control; this is huge, as I'm working with a team of three other developers (we really need to standardize our environments)

    Although any cool little tools to make my job easier are always a plus! I am unable to insert more than one link as I'm not rated on this forum.

    Read the article

  • Does the mysql Client API Library version have to match the installed MySQL/Percona server version?

    - by William Jamieson
    I'm running Scientific Linux 6.3 (binary compatible with Red Hat/CentOS/etc.) as a LAMP stack. I've installed Percona server and client v5.5 from the Percona yum repository. However, when I run phpinfo() I notice that under the MySQL and mysqli sections it lists the Client API Library version as 5.1.66, and not 5.5.x. I'm guessing these need to match, at least to major versions, and I have no idea what the possible consequences of such a mismatch could be. Do I need to revert to Percona server and client v5.1? This is for a production environment so it needs to be right. I'd appreciate any input or experience people could offer. (Note: I will also be cross-posting this on the Percona forums.)
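    A quick way to see where the 5.1 client library is coming from before deciding whether to downgrade - a minimal sketch, assuming the stock php-mysql package from the distribution is installed alongside the Percona packages:

        # Which MySQL/Percona packages are actually installed?
        rpm -qa | grep -iE 'mysql|percona'

        # Which client library version is PHP's mysqli extension actually using?
        php -r 'echo mysqli_get_client_info(), PHP_EOL;'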

    Read the article

  • litespeed issue with content-type

    - by sandeep.s85
    I am running Magento with LiteSpeed. The problem I am facing is that an Ajax call is being made whose response header is set to x-json, but LiteSpeed is sending a different header with a text/html content type. I've checked that page with Apache and everything works fine. I checked the response headers with Apache and LiteSpeed, and here they are.

    With Apache:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 05:58:47 GMT
        Server: Apache
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 06:58:48 GMT; path=/; domain=mydomain.com; httponly
        Connection: close
        Transfer-Encoding: chunked
        Content-Type: application/x-json

    With LiteSpeed:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 06:10:55 GMT
        Server: LiteSpeed
        Connection: close
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 07:10:55 GMT; path=/; domain=mydomain.com; httponly
        Content-Type: text/html; charset=UTF-8
        Content-Length: 474
        Vary: User-Agent

    I've also added application/json to LiteSpeed's mime.properties and restarted it, but that did not work. Here is the screenshot.
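    A way to narrow down whether LiteSpeed is rewriting the header or PHP never sends it for that particular request - a sketch only; the endpoint URL and the X-Requested-With header are assumptions about how the Magento Ajax call is made:

        # Dump only the response headers for the Ajax endpoint, pretending to be
        # an XMLHttpRequest; run it against the LiteSpeed and Apache backends in
        # turn and diff the output.
        curl -s -D - -o /dev/null \
             -H 'X-Requested-With: XMLHttpRequest' \
             'http://mydomain.com/some/ajax/endpoint'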

    Read the article

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in the performance metrics when I run Apache Bench (ab) against my local machine vs. production hosted on an Amazon medium instance. The same concurrency (5) and the same total number of requests (111) were used against both. Amazon has better memory than my local machine, but there are 2 CPUs in my local machine vs. 1 CPU in m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also do not know how to interpret Connect, Processing, Waiting and Total in the ab output.

    Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999
        Document Path:          /
        Document Length:        7631 bytes
        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0    0.5      0      1
        Processing:    14   63   99.9     43    562
        Waiting:       14   60   96.7     39    560
        Total:         14   63   99.9     43    563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes
        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297   14.0    293    413
        Processing:   897 1178   63.4   1176   1391
        Waiting:      296  606  135.6    588   1171
        Total:       1191 1475   66.0   1471   1684

    Read the article

  • VirtualBox problems writing to shared folders (Guest Additions installed)

    - by vincent
    I am trying to set up a shared folder from the host (Ubuntu 10.10) to mount on a virtualized CentOS 5.5 guest with Guest Additions (4.0.0) installed (Guest Additions features are working, i.e. seamless mode etc.). I am able to successfully mount the share with:

        mount -t vboxsf -o rw,exec,uid=48,gid=48 sf_html /var/www/html/

    (uid and gid belong to the apache user/group). The only problem is that once it is mounted and I try to write or create directories and files, I get the following:

        mkdir: cannot create directory `/var/www/html/test': Protocol error

    I am using the proprietary version of VirtualBox, version 4.0.0 r69151. Has anyone had the same problem and been able to fix it, or has any idea how to potentially fix this?

    Another question: the reason for setting this up is that our production servers are on CentOS 5.5, but I am a great fan of Ubuntu and would like to develop on Ubuntu rather than CentOS. In order to stay as close to the production environment as possible, I would like to virtualize CentOS as the web server and use the shared folder as the web root. Does anyone know whether this is a bad idea? Has anyone successfully been able to set this up? Thanks guys, your help is always much appreciated, and if you need any more information please let me know.

    Read the article

  • What's the lowest cost, legal, Microsoft server stack you can assemble?

    - by McKAMEY
    Assuming that you have an app infrastructure that generally only requires:

        - ASP.NET MVC / C# / .NET
        - Database or NoSQL data store (must be accessible from C#)

    Here's the challenge to you server gods: What is the least expensive configuration that will allow you to deploy to production in a way that doesn't break any licensing rules? In what ways does this solution differ from the "standard" Microsoft deployment scenario? Where does this solution's performance break down once the app begins to scale?

    I'm not concerned about the hardware, only the server software itself. I would love to hear about any solutions you've personally put into production, especially if they are unique alternatives. For ideas, consider some of the possible variations: a) any Microsoft server solutions where they have lowered the barrier to entry to compete with OSS, or b) any OSS alternatives to Microsoft products which perform at a similar level.

    An example of a): SQL Server 2008 Express Edition SP1 is a 100% free version of SQL Server which will scale to the needs of many smaller / early-stage applications. An example of b): running the Mono Framework on Linux. An example of differing from the "standard" stack: running Mono on Linux will require a completely different server OS familiarity; none of the Windows-based knowledge really transfers. An example of breaking down under scale: SQL Server Express will only scale to 1GB of memory and 4GB of disk storage. After that point, the application will need to move to one of the paid versions of SQL Server.

    Read the article

  • How do I create a "here document" within a shell function?

    - by BenU
    I'm working my way through William Shotts Jr.'s great The Linux Command Line on my Mac OS X 10.7.5 system. 90% of the Linux that Shotts covers is close enough to Darwin that I can figure out, or Google to figure out, what's going on. I've made it to chapter 27 on "Writing Shell Scripts" and am getting hung up creating "here documents" within a function. I get a syntax error: unexpected end of file error when I include the following function:

        report_uptime () {
            cat <<- _EOF_
            <H2>System Uptime</H2>
            <PRE>$(uptime)</PRE>
            _EOF_
            return
        }

    The error goes away if I use the following function placeholder:

        report_uptime () {
            return
        }

    Also, elsewhere in the script, outside of a function, I use the cat << _EOF_ format to create a "here document" with no trouble:

        cat << _EOF_
        <HTML>
            <HEAD>
                <TITLE>$TITLE</TITLE>
            </HEAD>
            <BODY>
                <H1>$TITLE</H1>
                <P>$TIME_STAMP</P>
                $(report_uptime)
                $(report_disk_space)
                $(report_home_space)
            </BODY>
        </HTML>
        _EOF_

    If anyone has any idea what I'm doing wrong I would be grateful!
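    A likely explanation for the error: with cat <<- _EOF_, bash strips only leading tab characters from the here-document body and its terminator, never spaces, so a function body indented with spaces never matches _EOF_ and the script dies with "unexpected end of file". A minimal sketch of a variant that works regardless of how the editor indents:

        # Plain <<, with the here-document body and the closing _EOF_ starting in
        # column one (no leading whitespace at all). The book's <<- form also
        # works, but only if the indentation inside the function consists of real
        # tab characters.
        report_uptime () {
            cat << _EOF_
<H2>System Uptime</H2>
<PRE>$(uptime)</PRE>
_EOF_
            return
        }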

    Read the article

  • How to configure a shortcut for an SSH connection through an SSH tunnel

    - by Simone Carletti
    My company's production servers (FOO, BAR...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open an SSH connection with server A or B using my username JOHNDOE, then from A (or B) I can access any production server by opening an SSH connection with a standard username (let's call it WEBBY). So, each time, I have to do something like:

        ssh johndoe@a
        ...
        ssh webby@foo
        ...
        # now I can work on the server

    As you can imagine, this is a hassle when I need to use scp or when I need to quickly open multiple connections. I have configured an SSH key and I'm also using .ssh/config for some shortcuts. I was wondering if I can create some kind of SSH configuration so that I can type ssh foo and let SSH open/forward all the connections for me. Is it possible?

    Edit: womble's answer is exactly what I was looking for, but it seems right now I can't use netcat because it's not installed on the gateway server.

        weppos:~ weppos$ ssh foo -vv
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/xyz/.ssh/config
        debug1: Applying options for foo
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
        debug1: permanently_drop_suid: 501
        debug1: identity file /Users/xyz/.ssh/identity type -1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_rsa type 1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_dsa type 2
        bash: nc: command not found
        ssh_exchange_identification: Connection closed by remote host
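    The netcat-based ProxyCommand can be replaced with ssh's own -W forwarding, which needs nothing installed on the gateway. A minimal ~/.ssh/config sketch, assuming the client runs OpenSSH 5.4 or newer (the output above shows 5.1, so this may require an upgrade) and using the question's placeholder names:

        Host gw
            HostName a
            User johndoe

        Host foo
            HostName foo
            User webby
            # Tunnel through the gateway using ssh itself instead of nc
            ProxyCommand ssh -W %h:%p gw

    With that in place, ssh foo and scp somefile foo: both hop through the gateway transparently.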

    Read the article

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with:

        oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

        Thu Jun 10 14:14:48 2010
        C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
        Thu Jun 10 14:14:48 2010
        ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

        Fatal NI connect error 12560, connecting to:
        (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

        VERSION INFORMATION:
        TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
        Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
        Time: 10-JUN-2010 14:14:48
        Tracing not turned on.
        Tns error struct:
        ns main err code: 12560
        TNS-12560: TNS:protocol adapter error
        ns secondary err code: 0
        nt main err code: 530
        TNS-00530: Protocol adapter error
        nt secondary err code: 2
        nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing, which shows:

        [10-JUN-2010 15:09:33.919] snlpcss: entry
        [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
        [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found), I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.

    Read the article

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. The traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests), and the webapp depends on many servers (databases, frontends, etc.). I can think of two basic directions:

        1. Recording every incoming request with its timestamp in production in a centralized manner, and replaying the log from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, getting the centralized log is not trivial.
        2. Having a system that duplicates requests to a staging area, so that I could "plug" a dev version of my webapp into it at any time without affecting production. Issue: I have not found much information about it except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition.

    What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.
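    For the first direction, a deliberately naive sketch of a replay client; it ignores the session state, timing and request bodies that make this traffic complex - which is exactly the hard part the question is about - so it is only a starting point (the log format and staging host are assumptions):

        #!/bin/bash
        # requests.log is assumed to contain one "<timestamp> <method> <path>" per line.
        STAGING=http://staging.example.com
        while read ts method path; do
            if [ "$method" = "GET" ]; then
                # Report the status code so regressions stand out in the output
                curl -s -o /dev/null -w "%{http_code} $path\n" "$STAGING$path"
            fi
        done < requests.log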

    Read the article

  • HTTPS and Certification for dummies

    - by Poxy
    I had never used HTTPS on a site and now want to try it. I did some research, but I'm not sure I understood everything. Answers and corrections are greatly appreciated. Here we go:

        - To use HTTPS I need to generate 'private' and 'public' keys for the web server I use. In my case it's Apache (manual: http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html).
        - The HTTPS protocol should be bound to port 443. Q: How do I do that? Is it done by default? Where can I check the configuration?
        - Applying HTTPS. Q: If I see https in the browser, does it mean that the data traffic on the page IS encrypted? Would any form on the page submit data via HTTPS?
        - Though all the data will be encrypted, browsers will still show ugly red warnings. This is just because they do not know anything about my certificate. They have about a hundred certificates pre-installed, but mine is not one of them, obviously. The data IS still encrypted by HTTPS.
        - If I want browsers to recognize my certificate, I need to have it signed by one of the certification authorities (CAs) whose certificates are pre-installed (e.g. Thawte, GeoTrust, RapidSSL etc.).

    UPD: To read about SSL/TLS: The First Few Milliseconds of an HTTPS Connection - I found it very informative. Examples for PHP (openssl.org) of how to make use of SSL/TLS on the server side are published here.
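    To make the first two points concrete, a minimal sketch of generating a key and certificate with OpenSSL and binding them to port 443 in Apache - the paths are illustrative, and the self-signed step is only for testing (a real deployment sends the CSR to a CA instead):

        # Generate a private key and a certificate signing request; self-sign for testing.
        openssl genrsa -out server.key 2048
        openssl req -new -key server.key -out server.csr
        openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

        # Apache: listen on port 443 and point the vhost at the key and certificate
        # (httpd.conf or extra/httpd-ssl.conf, depending on the distribution).
        Listen 443
        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key
        </VirtualHost>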

    Read the article

  • Troubleshooting wireless client-bridged networks between two DD-WRT routers?

    - by KronoS
    I recently purchased a Buffalo N600 wireless router which came with DD-WRT pre-installed. I want to take my old wireless router, a Linksys WRT54GL, also with DD-WRT pre-installed, and use it as a wireless bridge for my HTPC and Blu-ray player in the other room. In other words, I'm trying to connect two WIRED networks via the wireless on the routers. I followed exactly the instructions from DD-WRT's manual for 'Client Bridged'; however, I'm still not able to connect the two routers correctly when encryption is enabled (WPA2-Personal Mixed), though I am able to connect them when there is NO encryption. I've checked, double checked, and triple checked that EVERYTHING is the same on BOTH routers:

        Routers 1 & 2
            Encryption: WPA2-Personal Mixed
            Wireless Mode: G-Only
            Wireless Channel: 6
            Subnet Mask: 255.255.255.0
            Subnet: 192.168.1.0/254
            SSID: Krono$

        Primary Router #1 (Buffalo N600)
            IP Address: 192.168.1.1
            Firewall: Enabled with defaults
            DHCP: Enabled as DHCP server

        Secondary Router #2 (Linksys WRT54GL)
            IP Address: 192.168.1.2
            Firewall: Disabled as per DD-WRT instructions

    I'm looking for any configurations that I may have missed, or settings that may need to change in order for this to work.

    Read the article

  • Walkthrough/guide building an application server for a multi-tenant web app [on hold]

    - by Khalid Adisendjaja
    The web app will detect a subdomain such as tenant1.app.com, tenant2.app.com, etc. to identify the tenant environment. Each tenant environment will have different database credentials (port, db name, etc.) but will still connect to the same database server. Each tenant should use app.com as their main domain; using their own domain is prohibited. Each tenant will have their own REST API endpoint such as tenant1.app.com/api/v1/xxxx, tenant2.app.com/api/v1/xxxx, tenant3.app.com/api/v1/xxxx.

    I've come to a simple solution by setting a wildcard subdomain (*.app.com) in the web server's Apache/Nginx vhost configuration file. I have googled so many concepts for building a multi-tenant app server but still don't understand how to really do it, what the right way to do it is, and what is actually required for this task. So I've come to these questions:

        - Do I need a proxy server, DNS masking, etc.?
        - How do I monitor each tenant's activity?
        - What about server performance, load balancing, and scalability?
        - How do I set up an SSL certificate for each tenant?
        - What about an application cache for each tenant?
        - Is it reliable to use this setup for production?
        - etc...

    I have very little experience with server infrastructure, so I'm looking for a DIY walkthrough, step-by-step guide, or sophisticated solution ready to be implemented for production.
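    As an illustration of the wildcard-vhost idea described above, a minimal nginx sketch that captures the tenant name from the Host header and passes it to the application, which can then select the matching database credentials - the backend socket and paths are placeholders:

        server {
            listen 80;
            # Capture the left-most label of the host name as $tenant
            server_name ~^(?<tenant>[a-z0-9-]+)\.app\.com$;

            location / {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /srv/app/public/index.php;
                # Expose the tenant name to the application layer
                fastcgi_param TENANT $tenant;
                fastcgi_pass unix:/var/run/php-fpm.sock;
            }
        }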

    Read the article

  • Debugging apache seg fault with gdb

    - by Joyce Babu
    Apache on a production server of mine is segfaulting intermittently. I have enabled the core dump option in the Apache configuration and have several dumped core files. Unfortunately, since it is a production server, Apache and the loaded modules are not compiled with debug symbols. From what I understand, gdb cannot do much without debug symbols. Can I at least find out which module is causing the segfault, without debug symbols? If so, how? Following is the output from a gdb backtrace:

        (gdb) bt full
        #0  0xb7f1f832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
        No symbol table info available.
        #1  0xb7be82bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
        No symbol table info available.
        #2  0xb771652a in ?? () from /usr/local/apache/modules/mod_pagespeed.so
        No symbol table info available.
        #3  0xb75df576 in ?? () from /usr/local/apache/modules/mod_pagespeed.so
        No symbol table info available.
        #4  0xb7715c20 in ?? () from /usr/local/apache/modules/mod_pagespeed.so
        No symbol table info available.
        #5  0xb7be4a49 in start_thread () from /lib/libpthread.so.0
        No symbol table info available.
        #6  0xb7b2a63e in clone () from /lib/libc.so.6
        No symbol table info available.

    Does this mean that /lib/ld-linux.so.2 is causing the segfault?
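    Even without debug symbols, gdb can map each backtrace address to the shared object it falls inside, which is usually enough to name the offending module - a minimal sketch, with the httpd path and core file name as placeholders:

        gdb /usr/local/apache/bin/httpd core.1234
        (gdb) bt                  # frames #2-#4 above already resolve to mod_pagespeed.so
        (gdb) info sharedlibrary  # address ranges of every loaded module
        (gdb) x/i $pc             # instruction at the faulting address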

    Read the article

  • Why isn't 'Low Fragmentation Heap' LFH enabled by default on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions of how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to 'turn on' the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:

        Set LFHObj=CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult")=CStr(LFHObj.TurnOnLFHResult)

    (Really the code isn't that important to the question.) An author of a linked post reported a seemingly magic resolution to this issue and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to both questions is the same (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to judge whether it does help us. But aside from this, I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.

    Read the article

  • Copy one db diagram from one db to another on different servers? (Same db)

    - by sah302
    I used the Copy Database Wizard to copy my database from our test server to our production server. The database copied everything fine except for the diagram. Okay, no problem: first I make sure the target database on production has the support objects created to use database diagramming. Then I select to import data from the other database and choose dbo.sysdiagrams, go through the rest of the Import Data wizard, but then I get the following error:

        Validating (Error)
        Messages
        Error 0xc0202049: Data Flow Task: Failure inserting into the read-only column "diagram_id". (SQL Server Import and Export Wizard)
        Error 0xc0202045: Data Flow Task: Column metadata validation failed. (SQL Server Import and Export Wizard)
        Error 0xc004706b: Data Flow Task: "component "Destination - sysdiagrams" (31)" failed validation and returned validation status "VS_ISBROKEN". (SQL Server Import and Export Wizard)
        Error 0xc004700c: Data Flow Task: One or more component failed validation. (SQL Server Import and Export Wizard)
        Error 0xc0024107: Data Flow Task: There were errors during task validation. (SQL Server Import and Export Wizard)

    So apparently it didn't like that. What's the problem? I am pretty much a beginner in SQL Server and usually only do things via the GUI, so I am not sure what to do at this point. The databases are the same, but on different servers. Thanks!
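    The failure on the read-only column "diagram_id" is what the wizard reports when it tries to write to an identity column; ticking "Enable identity insert" under Edit Mappings for the sysdiagrams destination is usually enough to get past it. The same thing as a T-SQL sketch, assuming a linked server named TESTSRV pointing at the test instance and a source database named MyDb:

        -- Run in the production database; allow explicit values for the identity column
        SET IDENTITY_INSERT dbo.sysdiagrams ON;

        INSERT INTO dbo.sysdiagrams (diagram_id, name, principal_id, version, definition)
        SELECT diagram_id, name, principal_id, version, definition
        FROM TESTSRV.MyDb.dbo.sysdiagrams;

        SET IDENTITY_INSERT dbo.sysdiagrams OFF;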

    Read the article

  • Disadvantages of enabling 'Low Fragmentation Heap' LFH on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions of how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to 'turn on' the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:

        Set LFHObj=CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult")=CStr(LFHObj.TurnOnLFHResult)

    (Really the code isn't that important to the question.) An author of a linked post reported a seemingly magic resolution to this issue and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to both questions is the same (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to judge whether it does help us. But aside from this, I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.

    Read the article

  • Issue with multiple bridging for KVM hosts

    - by Henry-Nicolas Tourneur
    I'm using KVM and libvirt on my host (Debian Lenny) plus 2 bridges per guest (one for management, one for public traffic). That setup isn't stable at all: sometimes I can ping a management IP, sometimes not. I don't know if my bridging parameters are correct - could you check, or tell me if there is anything wrong? Please also note that the interface on the guest doesn't flap and that I get no logs on my host. Of course forwarding is enabled.

        iface eth3 inet manual

        auto bond0
        iface bond0 inet manual
            slaves eth1 eth2
            pre-up ip link set bond0 up
            down ip link set bond0 down

        auto br0
        iface br0 inet static
            address 10.160.0.7
            netmask 255.255.255.128
            bridge_ports eth3
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto br0:1
        iface br0:1 inet static
            address 10.160.0.9
            netmask 255.255.255.128

        auto br0:2
        iface br0:2 inet static
            address 10.160.0.10
            netmask 255.255.255.128

        auto br1
        iface br1 inet static
            address 217.4.40.242
            netmask 255.255.255.240
            gateway 217.4.40.241
            pre-up /etc/network/firewall start
            bridge_ports bond0
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

        auto br1:1
        iface br1:1 inet static
            address 217.4.40.252
            netmask 255.255.255.240

        auto br1:2
        iface br1:2 inet static
            address 217.4.40.253
            netmask 255.255.255.240

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >