Search Results

Search found 29432 results on 1178 pages for 'mite fine dailes'.

Page 129/1178 | < Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >

  • Mysql master-master not replicating

    - by frankil
    I'm setting up master-master MySQL replication on two servers (db1 and db2). I started by setting up db2 as a slave to db1, and that works fine. But when I set up db1 as a slave to db2, it isn't replicating. On the face of it everything looks fine, but the data isn't replicating and there are no errors in either error log. The slave status is updating the binlog position. I have used mysqlbinlog to examine both the binlog on db2 and the relay log on db1, and all of the queries are going in there, but they are not being executed on db1. "show slave status" on both servers shows that both the slave IO and SQL threads are "Yes" and that the relay log position is updated by the SQL thread. Also on both servers:

        >echo "show processlist" | mysql | grep "system user"
        166819 system user NULL Connect 3655 Waiting for master to send event NULL
        166820 system user NULL Connect 3507 Has read all relay log; waiting for the slave I/O thread to update it NULL

    Relevant config for db1:

        server-id = 1
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset = 1
        master-host = db2
        master-port = 3306
        master-user = slaveuser
        master-password = ***
        skip-slave-start
        sync_binlog = 1
        binlog-ignore-db=mysql

    Config for db2:

        server-id = 2
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset = 2
        master-host = db1
        master-port = 3306
        master-user = slaveuser
        master-password = ***
        sync_binlog = 1
        relay-log=mysql-relay-bin
        binlog-ignore-db=mysql

    What else can I look for to make sure db1 executes the queries from db2?
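
    A few checks worth running on db1 (the side that is not applying changes) — a minimal diagnostic sketch using standard MySQL statements, with the server names taken from the question:

        # run in the mysql client on db1
        SHOW SLAVE STATUS\G
        # in that output, check Last_SQL_Error, Exec_Master_Log_Pos, and the Replicate_*_DB filter fields
        # a read-only server or a clashing server-id will also make events be skipped silently
        SELECT @@global.read_only, @@global.server_id;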

    Read the article

  • Cannot access server shares over VPN

    - by DuncanDavies
    I've set up a single hosted server to use as a development environment for a web-based application. The web app is served up fine on port 80; however, I'm struggling to get my VPN to behave how I'd expect, so the developers don't have the access they require. The VPN connects fine and I can access the back-end database (SQL Server), which resides on the server, with the client tools from the laptops. However they cannot access any shared folders. The server's local IP address is 10.x.x.x, and I've assigned a static IP address pool to RRAS (192.168.100.1 - 20). The clients pick up a valid IP address (e.g. 192.168.100.9) when they connect. There is no name resolution set up, DNS or WINS. When connected via VPN the clients can ping the server (192.168.100.1) by IP address, but cannot map a drive to a shared folder (net use * \\192.168.100.1\xxxxx) - I get 'System error 53 has occurred. The network path was not found.' I don't understand why I can ping by the IP, but not map by it. Some details:

      • Server OS is Windows 2008 (Datacenter)
      • VPN is SSTP using RRAS
      • Clients are all Windows 7
      • I've tried temporarily disabling the firewalls

    So, why can we not access the file system when everything else (ping, RDP, SQL Server client tools) works? Thanks for your help. Duncan
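
    A couple of checks from a VPN-connected client that help narrow this down — error 53 with a working ping usually points at name resolution or the SMB port rather than routing. This is only a sketch using standard Windows commands; the address and share name are the placeholders from the question:

        rem is the SMB port reachable over the VPN? (the telnet client may need to be enabled first)
        telnet 192.168.100.1 445
        rem retry the mapping by IP, then check whether any shares are visible at all
        net use * \\192.168.100.1\xxxxx
        net view \\192.168.100.1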

    Read the article

  • Search inside of text files

    - by Matt
    So here is the situation. I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail as .tmp files. Essentially these are just text files that are kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned. Then when I search the files one by one (painful at best), I find the files I need. Do I not understand the search feature well enough, or do I maybe have indexing configured wrong? I really don't care what I need to use to search the files, even a third-party app is fine with me; I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else, as long as I can find a way to get the job done I will be happy. Paid solutions are fine as well. Thanks everyone in advance.
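
    One low-tech option that bypasses the Windows Search indexer entirely is findstr from a command prompt — a sketch only, with a made-up log directory and address to adjust:

        rem recursive (/s), case-insensitive (/i) search; /m prints just the matching file names
        findstr /s /i /m "someone@example.org" "C:\MerakMail\Logs\*.log" "C:\MerakMail\Mail\*.tmp"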

    Read the article

  • Logging into Local Statusnet instance on Apache causes browser to download a file

    - by DilbertDave
    I've installed StatusNet 0.9.1 on a Windows Server via the WAMP stack and on the whole it seems to be fine. However, when logging in using IE7 or Chrome the browsers invoke a file download, i.e. the File Download dialog is displayed. In IE7 the file is called notice, with the content below (some parts starred out):

        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
          <ShortName>Mumble Notice Search</ShortName>
          <Contact>david.carson@*****.com</Contact>
          <Url type="text/html" method="get" template="http://voice.*****.com/mumble/search/notice?q={searchTerms}"></Url>
          <Image height="16" width="16" type="image/vnd.microsoft.icon">http://voice.*****.com/mumble/favicon.ico</Image>
          <Image height="50" width="50" type="image/png">http://voice.******.com/mumble/theme/cloudy/logo.png</Image>
          <AdultContent>false</AdultContent>
          <Language>en_GB</Language>
          <OutputEncoding>UTF-8</OutputEncoding>
          <InputEncoding>UTF-8</InputEncoding>
        </OpenSearchDescription>

    In Chrome (Linux and Windows!) the file is called people and contains similar XML. This is not an issue when logging in using Firefox. This is obviously a configuration issue but I'm not having much luck tracking it down. I tested the previous version of StatusNet on an Ubuntu Server VM on our network and it worked fine for months. Thanks in advance.
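
    Since only some browsers treat the responses as downloads, one way to narrow it down is to compare the Content-Type headers the server actually sends for the login page and for the OpenSearch documents. A hedged sketch with curl; the real hostname is starred out above, so substitute your own, and the login path is the usual StatusNet default rather than something confirmed here:

        # headers only; look at Content-Type and Content-Disposition on each response
        curl -I http://voice.example.com/mumble/main/login
        curl -I "http://voice.example.com/mumble/search/notice?q=test"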

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how I should move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade this afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the hard drive, this should be the fastest way. This also has the benefit that the 2008 MacBook then contains a brand new installation and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea would be: does Snow Leopard handle this correctly? I reboot with the new hardware and everything just works fine? So in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

    Read the article

  • System freezes during boot process

    - by slugster
    Hi everyone, I have a machine running Win7 Ultimate. It was running fine, then it just froze - all the stuff I was doing was still on the screen, but mouse and keyboard input was ignored, any animation that was happening on the screen stopped; the machine literally just froze. So I rebooted (power off button), and from then on the machine will reboot, but it ultimately freezes again. The point at which this happens varies - I have made it as far as the Windows login screen, but mostly it will do the POST, then give me the option to press F1 to continue or Del to enter BIOS settings (but of course pressing a key has no effect - it's frozen!). I have disconnected everything not necessary for the boot process; the only peripheral that remains attached is the keyboard (even the network cable is disconnected). Prior to this the machine was operating fine. The install of Win7 is only 2 days old, and it was a fresh reinstall (i.e. not an upgrade or repair). Can anyone give me an indication of what may be wrong here? I'm not sure if this question should be here or on Super User; please migrate it if I have chosen the wrong board.

    Read the article

  • Windows Remote Desktop: "configuring remote session" closes without error

    - by icelava
    I have a desktop/laptop pair at home running x64 Windows 7 (the desktop was upgraded from Windows Vista and works just fine). I remote desktop to them on a daily basis when outside. In recent weeks, I would occasionally fail to connect to my desktop. It connects and authenticates fine, but the "configuring remote session" dialog simply closes and does not show me the desktop window or any error message. There is no related error in the event log on the desktop computer. Some suggestions call for disabling remote audio, which I have already done, and trying different audio modes did not yield any different result. I am not too sure if this is related to video card drivers (they do get auto-updated), since remote desktop video is supposed to be routed through a virtual display driver? Nonetheless the desktop drives three monitors via an ATI Radeon HD5770 (1 DisplayPort, 2 DVI). I do not see a real problem with that, since I can mostly connect and operate it remotely. I tried to "remote tunnel" via my home laptop, but obviously that won't work either, as the problem lies in the desktop. What other conditions can cause remote desktop to break without an error? UPDATE: I came home and still couldn't connect to the desktop until I restarted the entire system.
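
    Two low-risk things to try from the client side while the desktop is in that state — only a sketch, using standard mstsc switches and the default client cache location; the host name is a placeholder:

        rem connect to the console/admin session, which sometimes gets through when a normal session stalls
        mstsc /v:mydesktop /admin
        rem rule out a corrupted client-side bitmap cache
        del /q "%LOCALAPPDATA%\Microsoft\Terminal Server Client\Cache\*"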

    Read the article

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by ThatGraemeGuy
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each; lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge one of the datastores. Does anyone know what's going on?

    Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to SVmotion everything off that datastore and then format it.

    Update 2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN. At this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with an error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it is also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
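
    When a host flags a copy of a VMFS volume like this, the command-line equivalent of the wizard can sometimes succeed where vCenter's ResolveMultipleUnresolvedVmfsVolumes call fails. A hedged sketch for the ESX 4.x service console (ESXi hosts would use vicfg-volume from the vSphere CLI instead); the label or UUID comes from the listing output:

        # list VMFS volumes this host considers snapshots / unresolved copies
        esxcfg-volume -l
        # mount one persistently while keeping its existing signature
        esxcfg-volume -M <label-or-uuid-from-the-list>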

    Read the article

  • Nginx proxy hangs when proxying to itself

    - by Thomas
    I have Nginx running as a proxy for a number of services, including a Geoserver running on port 8080 with the following config:

        location ^~ /wms/ {
            rewrite ^/wms/(.*)$ /geoserver/ows$1 break;
            proxy_pass http://127.0.0.1:8080;
            proxy_connect_timeout 60s;
            proxy_read_timeout 150s;
        }

    and a proxy service to avoid SOP problems, which works as follows:

        location ^~ /proxy/?targetURL= {
            rewrite ^/proxy/?targetURL=(.*)$ $1 break;
            proxy_pass $1;
            proxy_connect_timeout 60s;
            proxy_read_timeout 150s;
        }

    My web server is also under the same domain, run by a Jetty on port 8888 and handled by the same proxy:

        location / {
            proxy_pass http://127.0.0.1:8888;
            proxy_connect_timeout 60s;
            proxy_read_timeout 150s;
        }

    From my web application I make WMS server calls for data via my proxy service. It works fine for external servers, but it hangs when I call my own internal Geoserver. My Geoserver proxy works fine; I can make WMS service queries with the said URL. The call that hangs is basically:

        http://mywebappdomain.com/proxy/?targetURL=http://mywebappdomain.com/wms/?my_set_of_parameters

    which means that the proxy rule applies and the WMS service is called from the same server. Is there an issue with proxying over itself?
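
    One thing worth knowing here is that nginx location matching and rewrite patterns only ever see the normalised URI, never the query string, which makes configs in this shape behave surprisingly. A hedged sketch of the same idea expressed with the built-in $arg_ variables instead; proxy_pass with a variable needs a resolver when the target is a hostname, and an open forwarder like this should be locked down before going anywhere near production:

        location ^~ /proxy/ {
            # query-string parameters are exposed as $arg_<name>
            resolver 8.8.8.8;
            proxy_pass $arg_targetURL;
            proxy_connect_timeout 60s;
            proxy_read_timeout 150s;
        }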

    Read the article

  • ESXi Guests will not boot on IBM x3550 M3

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server.

    VM host server:
      • IBM x3550 M3 7944AC1 server with 2x Intel Xeon E5607 2.27GHz CPUs
      • ESXi 5.0.0 Build 623860, built for IBM hardware, downloaded from IBM
      • Storage: 2x 500GB SAS local storage
      • 8GB RAM
      • VT is verified to be ENABLED
      • Server health status: Normal

    The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot. The initial guest memory consumption jumps up to 560MB and drops down to 40MB after a few seconds. Initial CPU usage is one full CPU (3000MHz per the chart) and immediately drops down to 29MHz. Guests do not display any output in the Console tab but show a state of 'Powered On'. VMs are listed as version 7 and the behavior is duplicated across all available guest OS flavors. The problem is also duplicated when the server is booted up in Legacy Only mode. Logs do not contain anything particularly suspicious.

    Edit: No firewalls, routers, or VLANs in between the client and server.
    Edit 2: We have tried the "Boot Guest into BIOS screen at Next Boot" checkbox in the guest settings. It was not successful.
    Edit 3: 500GB datastore with one 40GB VM on it. Plenty of space.

    Read the article

  • django : nginx : jquery css not being served

    - by PlanetUnknown
    I'm using apache+mod_wsgi for django, and all css/js/images are served through nginx. For some odd reason, when others/friends/colleagues try accessing the site, jquery/css is not getting loaded for them, hence the page looks jumbled up. My html files use code like this -

        <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
        <script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script>

    My nginx configuration in sites-available is like this -

        server {
            listen 8000;
            server_name localhost;

            access_log /var/log/nginx/aa8000.access.log;
            error_log /var/log/nginx/aa8000.error.log;

            location / {
                index index.html index.htm;
            }

            location /static/ {
                autoindex on;
                root /opt/aa/webroot/;
            }
        }

    There is a directory /opt/aa/webroot/static/ which has the corresponding css & js directories. The odd thing is that the pages show fine when I access them. I have cleared my cache/etc, but the page loads fine for me, from various browsers. Also, I don't see any 404s or other errors in the nginx log files. Actually the logs for nginx are not getting refreshed at all. I restarted the nginx server as root; is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.
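
    Given that the HTML asks for /css/... and /js/... while nginx only defines a /static/ location, those requests fall through to the bare location / block. A hedged sketch of one way to line them up, assuming the files actually live under /opt/aa/webroot/static/css and .../static/js (the simpler alternative is to prefix the URLs in the templates with /static/):

        location ~ ^/(css|js|images)/ {
            # /css/custom.css -> /opt/aa/webroot/static/css/custom.css
            root /opt/aa/webroot/static;
        }

    The stale access/error logs are also a hint that the running nginx may not be reading this config at all, so an explicit "nginx -t" followed by a reload is worth doing after any change.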

    Read the article

  • My Boot order changed. Why?

    - by Chris
    I have a laptop running Windows XP SP3 with one internal hard drive partitioned into C: (system) and D: (storage), and I have an external hard drive, F: (external drive). Yesterday the machine was running fine. Today, I go back to it and see that it's just showing a blinking cursor. I checked through the BIOS and the hard drive checked out fine. I CTRL-ALT-DELETED the machine a few times, but I was never able to boot back into the operating system. I threw in a live CD and found out that the boot order of the drives has changed. The external drive is now C:, the system partition is D:, and the storage partition is E:. Does anyone have any idea of how or why this would have occurred? Automatic system updates are turned off, so there should have been no automatic reboot of the system overnight, and the anti-virus running on the machine found no infections before this occurred. Edit: When I was looking through the BIOS of the machine, I did see that the boot order was changed. But still the same question remains: what would have caused this to happen? I can't believe that a random reboot happened and totally changed my hard drive setup.

    Read the article

  • Running Debian as guest operating system on a Hyper-V VM

    - by kce
    Hello. Layer-9 considerations are prompting a migration from Citrix XenServer to Hyper-V as our shop's virtualization platform of choice. This will require me to migrate our existing virtual machines from XenServer to Hyper-V. A handful of these VMs are running Debian. Unfortunately, Debian does not seem to be on the list of approved/supported guest operating systems. In fact it seems that running Debian as a guest operating system is rather difficult, although apparently not impossible. I have two interrelated questions:

      • Does anyone have any experience running a Debian guest on Hyper-V? Is it one of those things where it just will not work at all, or is it more along the lines of "it will probably work fine, but we won't support it"? Any experience here, positive or negative, would be helpful.
      • How much of a bad idea is it to deviate from Hyper-V's list of supported guest operating systems? Again, is it basically asking for Bad Things (TM) to happen, or is it just another instance of "it will probably work fine, but we won't support it"? Or is it somewhere in the middle?

    Thank you.

    Read the article

  • Corrupt install of Ruby?

    - by wilhelmtell
    I'm having issues requiring 'digest/sha1'.

        ~$ ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared
        ~$ make
        ~$ make install
        ~$ irb19
        irb(main):001:0> require 'digest/sha1'
        LoadError: dlopen(/Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle, 9): Symbol not found: _rb_Digest_SHA1_Finish
          Referenced from: /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
          Expected in: flat namespace
         - /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle
            from (irb):1:in `require'
            from (irb):1
            from /Users/matan/usr/bin/irb19:12:in `<main>'
        irb(main):002:0>

    I know some standard modules require fine, while others don't. If I say require 'digest' then that works fine. Just a couple of days ago I uninstalled Ruby and re-installed it. When I first installed Ruby I installed it in a similar manner, from source, prefixed at my local $HOME/usr directory. I tried removing each and every file make install installs, then re-installing, but that didn't help. Do you have an idea what the issue is and how to resolve it?
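
    The "Symbol not found" from dlopen usually means a sha1.bundle built against a different libruby is being loaded, either left over in the install prefix or as a stale object in the source tree. A hedged clean-rebuild sketch, reusing the paths from the question:

        # clear the previously installed compiled extensions and any stale build products
        rm -rf ~/usr/lib/ruby19/1.9.1/i386-darwin9.8.0
        make distclean
        ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared
        make && make install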

    Read the article

  • Remote server hangs, gets stuck. How to debug?

    - by bibstha
    I have a VPS running on VMware ESX with Ubuntu 8.04 LTS. It has been running smoothly for the past 3 months; however, recently we've noticed two strange bugs.

    a. The server hangs; today was the second time. The nature of the hang is very strange. I can ping the server and it sends back responses fine. However, all other services like sshd, apache, mysql etc. do not respond at all. When working:

        telnet servername 22
        Escape character is '^]'.
        SSH-2.0-OpenSSH_5.X Debian-5ubuntu1

    and other web services run fine. When it's hung, I can make TCP connections to 22 as well as 80 but receive no response at all:

        telnet servername 22
        Escape character is '^]'.

    How can I debug this problem? Are there any daemons I can run that will periodically log status? Please tell me how to proceed with it.

    b. The other strange problem is that lately I am unable to transfer files larger than around 100KB; smaller files of around 1-2KB work fine. scp anotherserver:filename . or wget http://www.example.com/file would get stuck. There is still around 6GB of space remaining, so I don't think that is an issue. Any pointers on where I should look?
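
    For the "log status periodically so there is something to inspect after the next hang" part, a minimal sketch using cron and standard tools (the log path and interval are arbitrary; installing the sysstat package and enabling sar would give similar data with less noise):

        # /etc/cron.d/hangwatch -- every minute, snapshot load, memory, I/O and recent kernel messages
        * * * * * root (date; uptime; free -m; vmstat 1 5; dmesg | tail -n 20) >> /var/log/hangwatch.log 2>&1

    The second symptom, transfers stalling after the first ~100KB, is the classic signature of an MTU / path-MTU problem rather than disk space; "ping -M do -s 1472 anotherserver" from the VPS is a quick way to test that theory.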

    Read the article

  • Windows Media Center showing Jerky Video on PC

    - by Kris Erickson
    I had to repave my Windows 7 x64 box last week due to a hard drive crash, and for a while everything was running perfectly, but now all videos in Windows Media Center are jerky (the sound is fine, they just seem to skip a ton of frames all the time). This is on the local machine, but the same thing happens when I try to stream to my Xbox. The videos all show fine in VLC and Windows Media Player. I guess I must have installed something recently (in the process of getting all the apps I usually have running on my PC) that caused this, but for the life of me I can't figure it out. I have updated to the latest video driver (and then rolled back to the standard Windows 7 driver), and I have rolled back all the other drivers that I have installed (I believe). I have uninstalled all the codec packs (I also run TVersity, so I had the TVersity codec pack installed), and I uninstalled TVersity. Nothing seems to help. I have uninstalled Windows Media Center and reinstalled it from Programs and Features. I have basically run out of things to try to fix this, and am almost thinking about reinstalling Windows again. Any suggestions?

    Read the article

  • I am looking for a tool to measure or detect "unresponsiveness" of a desktop PC

    - by Tom H
    I have a client that provides some server systems to a hospital, and a support ticket was raised that the desktop application was hanging while waiting for the server. We did some extensive testing and it's pretty clear that the server is responsive and the network is fine, and that the problem is on the client end (no requests are received during the hang, etc.). We take a look at the desktop machines and they should be fine, so we raise tickets with the software vendor, who says that it must be the hardware; the hardware company says that it is the software, etc., etc. Anyway, talking to the nurses, they say that these machines often "hang" for 30 seconds at a time, sometimes during important moments where they need to get data for a patient who is unwell, such as charts and status. So I want to stick a client on these machines that would be able to detect arbitrary "unresponsiveness" of the keyboard/mouse and log that for analysis later. Obviously I am wary of suggesting some application that takes resources and makes the problem even worse, so I would be interested to see any tools that would detect these scenarios (is it correct to say that the keyboard interrupts are being discarded?) by looking for the OS discarding the interrupts, or whatever is appropriate here. So go on then Server Fault, here is your chance to save a life.... ;-) Edit: I am starting to think that some of the tools associated with real-time systems might be appropriate, at least as a diagnostic.
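
    One cheap, hedged way to get a log of "the whole machine stalled" events without installing anything heavy is a tiny PowerShell watchdog that notices when a one-second sleep takes far longer than a second. This only detects system-wide stalls, not a single application's frozen UI thread, and the log path and threshold below are arbitrary:

        # stall-watch.ps1 -- append a line whenever a 1s sleep overruns by more than 5s (assumes C:\Temp exists)
        $log = 'C:\Temp\stall-log.csv'
        while ($true) {
            $t0 = Get-Date
            Start-Sleep -Seconds 1
            $overrun = ((Get-Date) - $t0).TotalSeconds - 1
            if ($overrun -gt 5) {
                '{0},{1:N1}' -f $t0.ToString('s'), $overrun | Out-File $log -Append
            }
        }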

    Read the article

  • Fedora 12 on VMware network disabled on restore

    - by Chaitanya
    I have a Fedora 12 guest running on VMware on Windows 7. I use it mainly for the occasional Linux dev work. Whenever I restart the guest, networking works fine. But if I close the VMware Player and save state, the next time I start the image, networking is disabled (red X on the network icon, with a message saying networking is disabled). I can't seem to find a way to restore networking; I have to reboot the guest to get my network access back again. My Ubuntu image doesn't have this problem: I can close the player and when I re-run the image, I can pick up where I left off, with all the open Firefox windows and application windows as I left them. Fedora saves state, but doesn't seem to enable networking. There is a relevant warning I have seen: "SELinux is preventing /sbin/ifconfig "read" access to /var/run/vmware-active-nics." But I am not sure how to solve it. I know Fedora isn't officially supported by VMware, but it seems to be working fine for the most part and meeting my needs, except for this one little issue. Any help would be much appreciated.
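
    Given the SELinux denial quoted above, a reasonable first step is to confirm it in the audit log and, if it really is the culprit, generate a small local policy module for it — a hedged sketch using the standard Fedora audit tools (the module name is made up):

        # is SELinux actually blocking ifconfig's access to the VMware state file?
        grep vmware-active-nics /var/log/audit/audit.log
        # quick test: does networking survive a suspend/resume with SELinux permissive?
        setenforce 0
        # if so, build and load a local policy module instead of leaving it permissive
        grep vmware-active-nics /var/log/audit/audit.log | audit2allow -M vmwarenics
        semodule -i vmwarenics.pp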

    Read the article

  • Exchange 2013 attachments too big?

    - by KPS
    I am having the toughest time sending large attachments. Everywhere I have checked, my file size limit for send/receive is 100MB, yet users are unable to receive files even at a size of 14MB. I'm using a spam filter (Appriver) and have worked with their support for a very long time; we see the following errors in the logs:

        13:32:40.260 4 SMTP-000036([myserverIP]) rsp: 354 Start mail input; end with <CRLF>.<CRLF>
        13:33:41.038 3 SMTP-000033([myserverIP]) write failed. Error Code=connection reset by peer
        13:33:41.038 3 SMTP-000033([myserverIP]) [659500] failed to send. Error Code=connection reset by peer
        13:33:41.038 4 SMTP([myserverIP]) [659500] batch reenqueued into tail

    Windows Firewall is disabled on the Exchange server, and all other emails of smaller size come through just fine. Here is a printout of the size limits:

        ConnectorType    ConnectorName                     MaxReceiveMessageSize        MaxSendMessageSize
        -------------    -------------                     ---------------------        ------------------
        Send             InternetSendConnector             -                            35 MB (36,700,160 bytes)
        Send             Appriver-Smarthost                -                            35 MB (36,700,160 bytes)
        Receive          Default EXCHSRVR                  100 MB (104,857,600 bytes)   -
        Receive          Client Proxy EXCHSRVR             100 MB (104,857,600 bytes)   -
        Receive          Default Frontend EXCHSRVR         100 MB (104,857,600 bytes)   -
        Receive          Outbound Proxy Frontend EXCHSRVR  100 MB (104,857,600 bytes)   -
        Receive          Client Frontend EXCHSRVR          100 MB (104,857,600 bytes)   -
        Receive          ExchangeRelay                     100 MB (104,857,600 bytes)   -
        TransportConfig  -                                 100 MB (104,857,600 bytes)   10 MB (10,485,760 bytes)
        ADSiteLink       DEFAULTIPSITELINK                 Unlimited                    Unlimited

    There is no anti-virus on the server that could be interfering either; I am out of ideas at this point :(

    EDIT 1: After running the BPA, it gives an error: Exchange Organization: Check whether the incoming message (CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local) size isn't set. "The maximum incoming message size isn't set in organization 'CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=WG,DC=local'. This can cause reliability problems." Here are the sizes as of now:

        [PS] C:\Temp>Get-TransportConfig | ft MaxSendSize, MaxReceiveSize

        MaxSendSize   MaxReceiveSize
        -----------   --------------
        Unlimited     Unlimited

        [PS] C:\Temp>Get-ReceiveConnector | ft name, MaxMessageSize

        Name                              MaxMessageSize
        ----                              --------------
        Default EXCHSRVR                  100 MB (104,857,600 bytes)
        Client Proxy EXCHSRVR             100 MB (104,857,600 bytes)
        Default Frontend EXCHSRVR         100 MB (104,857,600 bytes)
        Outbound Proxy Frontend EXCHSRVR  100 MB (104,857,600 bytes)
        Client Frontend EXCHSRVR          100 MB (104,857,600 bytes)
        ExchangeRelay                     100 MB (104,857,600 bytes)

    Again, smaller emails come through just fine. It seems like there is a 10MB receive limit somewhere that I cannot find.
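
    For reference, Exchange applies the lowest limit anywhere along the path, so the 10 MB organization-wide MaxSendMessageSize in the first table and the 35 MB send connectors both sit below the 100 MB receive connectors. A hedged sketch of raising them from the Exchange Management Shell, using the connector names from the output above (the sizes are examples rather than recommendations, and Appriver applies its own limits upstream):

        Set-TransportConfig -MaxSendSize 100MB -MaxReceiveSize 100MB
        Set-SendConnector "InternetSendConnector" -MaxMessageSize 100MB
        Set-SendConnector "Appriver-Smarthost" -MaxMessageSize 100MB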

    Read the article

  • What can cause Powershell execution policy not to be taken into account?

    - by Stephane
    We have in our infrastructure a number of PowerShell scripts used for various tasks, ranging from user logon to a support technician simulating a user context. These scripts are centralized on our file server (through DFS) for easier management. Some of them are run at logon, some are run through published Citrix applications. We have applied a policy for the whole domain and all users that sets the PowerShell execution policy to "unrestricted" so that the scripts can run from the file server. This works perfectly fine for logon scripts (at least, so far), but for scripts that are run later (usually through a published application, though the same applies when using Terminal Services and a full desktop), the results are inconsistent: some users can run the script fine, some are always prompted in the PowerShell console before the scripts will run. I cannot find anything that could cause this behavior and it's really inconsistent: if I start PowerShell manually and run get-executionpolicy, I am told that the current policy is unrestricted. Yet, if from the same session I try to run a script through a program that calls powershell <script file name> <parameters>, I get prompted before the script can run. What could cause such behavior?
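
    Two things worth ruling out, since the effective policy is resolved per scope and separately for the 32-bit and 64-bit engines (a launcher that invokes the 32-bit powershell.exe reads a different registry hive than the console you tested in). A hedged check to run inside the exact host that prompts:

        Get-ExecutionPolicy -List   # shows MachinePolicy, UserPolicy, Process, CurrentUser, LocalMachine
        [IntPtr]::Size              # 4 means the 32-bit engine, which keeps its own execution policy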

    Read the article

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows. In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now, %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in, and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that this has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
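
    For reference, %v always logs the canonical ServerName of the vhost that served the request, while %V honours the UseCanonicalName setting; with UseCanonicalName Off, %V follows the client-supplied Host header, much like %{Host}i. So an alternative sketch of the same fix is:

        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog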

    Read the article

  • Mongrel Cluster on Ubuntu Server Karmic

    - by trobrock
    I am trying to get Mongrel Cluster working on my Ubuntu Server Karmic box in preparation for setting up Capistrano. I've been trying to get the two to work all day and finally decided to completely remove Capistrano and see if I can just get Mongrel Cluster to work. I ran this to install mongrel cluster:

        gem install mongrel mongrel_cluster

    Everything installed fine. When I change into my app's directory...

        # mongrel_rails
        -bash: mongrel_rails: command not found

    I can run it from its install location:

        # /var/lib/gems/1.8/bin/mongrel_rails
        Usage: mongrel_rails <command> [options]
        Available commands are:
        ...

    It lets me build the cluster configuration file fine, but when I run the cluster::start command:

        # /var/lib/gems/1.8/bin/mongrel_rails cluster::start
        starting port 8000
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8000 -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log
        starting port 8001
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8001 -P tmp/pids/mongrel.8001.pid -l log/mongrel.8001.log
        starting port 8002
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31: command not found: mongrel_rails start -d -e production -p 8002 -P tmp/pids/mongrel.8002.pid -l log/mongrel.8002.log

    It seems it isn't calling it from the right directory after that command. What can I do to fix this? I tried setting the path previously when trying to set up Capistrano, but the path didn't stay set when Capistrano used SSH to run the commands.
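
    The "command not found: mongrel_rails start ..." lines suggest this is simply PATH: the gem wrappers live in /var/lib/gems/1.8/bin on Karmic, which is not on PATH by default, so mongrel_rails cannot re-invoke itself (and a non-interactive Capistrano SSH session never reads your interactive shell profile). Two hedged ways around it:

        # put the wrapper somewhere that is always on PATH
        sudo ln -s /var/lib/gems/1.8/bin/mongrel_rails /usr/local/bin/mongrel_rails
        # or export the gem bin directory for the current shell / the Capistrano environment
        export PATH=/var/lib/gems/1.8/bin:$PATH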

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:

        package { "rubygems":
            ensure   => installed,
            provider => "gem";
        }

        package { "cloudservers":
            ensure   => installed,
            provider => "gem",
            require  => Package["rubygems"];
        }

        class hosts::us {
            $hosts = get_hosts("us")
            hostentry { $hosts: }
        }

        define hostentry() {
            $parts   = split($name, ',')
            $address = $parts[0]
            $ip      = $parts[1]
            $aliases = $parts[2]

            host { $address:
                ip           => $ip,
                host_aliases => $aliases
            }
        }

    Is there a way to stop the function getting run so early, or at least have its run depend on the library being installed? Alternatively, is there a way that I can add dependencies somewhere in the functions folder that will be available to the function?
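
    Part of what is going on is that custom parser functions execute wherever the catalog is compiled (normally the puppetmaster), and they run during compilation, so a package resource in the same catalog can never satisfy them. One hedged pattern is to require the gem lazily inside the function and fail with a clear message — a sketch of the function file layout, not the author's original code:

        # lib/puppet/parser/functions/get_hosts.rb (sketch)
        Puppet::Parser::Functions.newfunction(:get_hosts, :type => :rvalue) do |args|
          begin
            require 'cloudservers'   # loaded only when the function is actually evaluated
          rescue LoadError
            raise Puppet::ParseError,
                  "get_hosts needs the cloudservers gem installed on the machine compiling the catalog"
          end
          # ... query Rackspace for the datacenter in args[0] and return the host list ...
        end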

    Read the article

  • Does MySQL log successful or attempted queries?

    - by Nathan Long
    I'm trying to track down a hit-or-miss bug in a web application. Sometimes a request completes just fine; sometimes it hangs and never finishes. I see that Apache now has several requests listed on the server-status page as "sending reply," and that doesn't change. I'm testing on localhost, so there shouldn't ever be more than one. Out of curiosity, I set MySQL to log all queries and I'm tail -fing the log file. When things go OK, I see a pattern like this:

        20 Connect root@localhost on dbname
        20 Query   (some query #1)
        20 Query   (some query #2)
        (etc)
        20 Quit
        21 Connect (etc)

    When it hangs, I see a pattern like this:

        22 Connect root@localhost on dbname
        22 Query   (some query #1)
        // nothing happens, so I try the post again
        23 Connect root@localhost on dbname
        23 Query   (some query #1)
        // nothing happens; try again
        24 Connect (etc)

    Here's my question: is MySQL logging attempted queries, or successful queries? In other words, if the last line I see is query #1, does that imply that query #1 or query #2 is hanging? My guess is that the one I don't see is the problem, because the last one I see looks fine, but maybe the one I don't see is too screwed up for MySQL to process. Thoughts?
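
    For what it's worth, the general query log records statements when the server receives them, not when they finish, so the last line you see may well be the query that is still stuck rather than one that completed. A hedged pair of checks from another mysql session while a request is hanging:

        -- is query #1 still executing (long Time, odd State), or did the app simply never send #2?
        SHOW FULL PROCESSLIST;
        -- on MySQL 5.1+ the general log can also be toggled at runtime instead of via a restart
        SET GLOBAL general_log = 'ON';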

    Read the article

  • Delete ARP cache on Mac OS when moving from one Wifi network to the other

    - by Puneet
    I am facing wireless connectivity problems when I move from one Wifi network to the other. Here is how it happens: I am at my friend's place. I connect to his Wifi. His Wifi router's IP address is 192.168.0.1. Everything is fine. I close my laptop, come back to my house, open my laptop and connect to the Wifi network at my place. Different ESSID, but the Wifi router address is the same, 192.168.0.1. At this point I can't get to anything on the internet. To debug, I try to see if I can ping the router (192.168.0.1); I can't. I get "no route to host". Meanwhile AirPort tells me I'm connected to Wifi. I look at the ARP cache and I see a permanent entry for 192.168.0.1:

        ? (192.168.0.1) at 5c:d9:98:65:73:6c on en1 permanent [ethernet]

    This permanent bit looks problematic. I go ahead and delete the ARP cache entry and all is fine with the world, until I go back to my friend's place where the same situation plays out. Now my question is, why the hell is this happening? If there is no way around it, can I run a script on Wifi connect/disconnect to clear out the ARP cache? I'm using Mac OS X:

        $ uname -a
        Darwin 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
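
    For the "clear it automatically" part, the manual commands are below; one hedged way to automate it is a launchd job with a WatchPaths entry on /etc/resolv.conf (which the system rewrites whenever it joins a network) that runs a script containing the flush command — the automation details are an assumption, the arp commands themselves are standard:

        # delete just the problematic entry
        sudo arp -d 192.168.0.1
        # or flush every cached entry
        sudo arp -d -a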

    Read the article
