Search Results

Search found 4071 results on 163 pages for 'debugging visualizers'.


  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's XEN). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request: GET / HTTP/1.1 Host: asdfasdf And hit enter a couple of times, it just sits there... Nothing. No response requesting with a browser either. There doesn't appear to be anything helpful in the error log: [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache] [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ... [Sun Jan 09 16:56:25 2011] [notice] Digest: done [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17 The access log stays empty: root:/var/log# wc httpd-access.log 0 0 0 httpd-access.log root:/var/log# I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... Still no luck. I tried removing all of the LoadModule lines from httpd.conf and just re-enabled the ones that were necessary to parse the config. Ended up with only the following loaded: root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M Loaded Modules: core_module (static) mpm_peruser_module (static) http_module (static) so_module (static) authz_host_module (shared) log_config_module (shared) alias_module (shared) Syntax OK root:/usr/local/etc/apache22# Same results. Apache is definitely what's listening on port 80: root:/usr/local/etc/apache22# sockstat -4 | grep httpd root httpd 43789 3 tcp4 6 *:80 *:* root httpd 43789 4 tcp4 *:* *:* root:/usr/local/etc/apache22# And I know it's not a firewall issue as there is nothing running locally, and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on? Why it would be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!
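    One hedged way to narrow this down, not covered in the question itself: check whether the FreeBSD HTTP accept filter is actually loaded (accf_http can hold a connection open until it sees what it considers a complete request) and then watch what the listening process does while a request is pending. The PID is the one from the sockstat output above; the request itself is just a placeholder.

      # Is the accept filter the ports build expects actually loaded?
      kldstat | grep accf
      # Send a complete HTTP/1.0 request over the loopback and wait briefly for any reply
      printf 'GET / HTTP/1.0\r\nHost: localhost\r\n\r\n' | nc -w 5 127.0.0.1 80
      # Attach to the listening httpd and watch its system calls while the request hangs
      truss -p 43789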


  • How to pass alias through sudo

    - by Tanktalus
    I have an alias that passes in some parameters to a tool that I use often. Sometimes I run as myself, sometimes under sudo. Unfortunately, of course, sudo doesn't recognise the alias. Does anyone have a hint on how to pass the alias through? In this case, I have a bunch of options for perl when I'm debugging: alias pd='perl -Ilib -I/home/myuser/lib -d' Sometimes, I have to debug my tools as root, so, instead of running: pd ./mytool --some params I need to run it under sudo. I've tried many ways: sudo eval $(alias pd)\; pd ./mytool --some params sudo $(alias pd)\; pd ./mytool --some params sudo bash -c "$(alias pd)\; pd ./mytool --some params" sudo bash -c "$(alias pd); pd ./mytool --some params" sudo bash -c eval\ "$(alias pd)\; pd ./mytool --some params" sudo bash -c eval\ "'$(alias pd)\; pd ./mytool --some params'" I was hoping for a nice, concise way to ensure that my current pd alias was fully used (in case I need to tweak it later), though some of my attempts weren't concise at all. My last resort is to put it into a shell script and put that somewhere that sudo will be able to find. But aliases are soooo handy sometimes, so it is a last resort.
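    One workaround that is often suggested for exactly this, offered as a hedged sketch rather than part of the question: bash also alias-expands the word that follows an alias whose value ends in a space, so aliasing sudo itself lets pd expand before sudo ever sees it.

      # A trailing space makes bash check the next word for alias expansion too
      alias sudo='sudo '
      alias pd='perl -Ilib -I/home/myuser/lib -d'
      sudo pd ./mytool --some params

    This only helps in interactive shells where both aliases are defined; the wrapper-script fallback mentioned above remains the more robust option for scripts and cron jobs.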


  • Is Cloud Print to blame for my inability to print?

    - by VarLogRant
    I have just moved to the Chrome beta (first 9.something and currently 10.0.648.127) on my Windows 7 64-bit machine, and my good web browsers (Firefox Beta and Chrome) can no longer print, and neither can Komodo Edit. The problem seems to have started when I tried to enable Google Cloud Print. Frustratingly, I can still print from IE 9 RC, and also from other machines in the office, which means the problem is entirely on this Windows machine. When I try to print from Chrome, first time I get the printer dialog, and once I click print, it waits, then pops up "Something went wrong", and each time after, it pops up with "No Printer Found". Under Firefox, I get "An unknown error occurred while printing", as does Komodo. I have printed from these applications before, and have even printed from Google Cloud Print. Once. But then it stopped. I really don't know where to go next in debugging this thing, and the Google results I see tend to be from 2007. I'm not ready for the paperless office yet, so please help!


  • Completely remove and freshly install MySql on XP?

    - by Corey Ogburn
    I have read this question and it did not solve my problem, and I have attempted much more besides. I've uninstalled MySql 5.5.18 and deleted: C:\Program Files\MySql and C:\Documents and Settings\All Users\Application Data\MySql. After uninstalling, I restart the computer. When I reinstall, in the MySql Server Instance Configuration Wizard I leave everything at the defaults except: I add a firewall exception, I check Launch MySQL Server Automatically, I check Include BIN directory in windows path, and I enable root access from remote machines (I'll lock that down later, just debugging for now; I have also tried installing without this option, to no avail). I've tried Typical and Complete while installing, as well as with and without strict mode. No combination shows a difference. After all this, it cannot Apply Security Settings and I get a 10061 error (it also said error number 2003), and this article didn't help. I've tried everything I can to completely uninstall and successfully reinstall so I can start from scratch. I've uninstalled and reinstalled about a dozen times with minor changes (including turning off the firewall at times), each time deleting the above folders and any relevant registry entries, with no success. Note: by "success" I mean that the security settings apply and a remote connection works. I can connect locally every time, but it's the remote connection that counts. I have looked for external problems such as port forwarding in the router, and (even though the installer should add it) I do double-check the firewall settings, which have always allowed the default port. I'm out of ideas.
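    A hedged checklist for the connection side of this (the service name, paths, and addresses are assumptions; error 2003 with code 10061 generally just means nothing reachable is listening on the port):

      rem Before reinstalling, stop and remove any leftover MySQL service registration
      net stop MySQL
      sc delete MySQL
      rem After reinstalling, confirm the server is actually listening on 3306
      netstat -an | findstr 3306
      rem Then test from the remote machine (placeholder address)
      mysql -h 192.168.1.50 -u root -p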


  • Windows 7 mapped drive kicking off OS X users

    - by Collin White
    I've mapped a network drive on my Windows 7 PC at my office. The Windows machine has a few TB of storage that is being accessed by my development team (all running Mac OS X 10.7). The share seems to work fine for a little while, but it will time out and kick the Mac users off, and sometimes disallows a connection on the next attempt. Restarting the Windows machine fixes the problem. I've tried this tutorial, as well as setting the maximum session length in the Local Security Policy section to 99999 (I discovered 0 did not mean unlimited, only a 'reasonable amount of time'); anyway, the setting is now ~208 days, which is sufficient (see attached). I'm having trouble debugging this in general, so if anyone has some pointers I'm all ears. This is an intermittent issue, which in my opinion is the hardest kind to debug. If anyone knows how I might monitor connections from the PC, that would also be pretty cool. Previously the files were hosted on a Mac mini and everything was working just fine (the mini just didn't have the storage capacity we needed), so I believe it is some Windows setting that is kicking users off. Anyway, thanks for reading.
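    Two things that are often checked in this situation, offered as a hedged sketch rather than a known fix: the server-side idle-session autodisconnect timer, and the live SMB session list (which also answers the monitoring question). Run on the Windows 7 box from an elevated prompt:

      rem Show current SMB sessions and open files from the sharing PC
      net session
      net files
      rem Disable the idle-session autodisconnect timer (-1 means never disconnect)
      net config server /autodisconnect:-1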


  • How to debug silent hang on shutdown of Solaris 10?

    - by jblaine
    We're experiencing a mysterious hang on shutdown of a newly-imaged Oracle/Sun Solaris 10 SPARC box. It is repeatable (in the same spot ... from what we can tell). We let it try to work itself out multiple times for 5-10 minutes and it never progressed. I've never seen this happen before. The last thing displayed on the console is that syslogd was sent signal 15. Prior to us disabling snmpdx on the box, the last thing on the console was that snmpdx was sent signal 15 (after syslogd was sent signal 15). While very rare to find, in Solaris days past, I'd have a better idea from experience where the problem might be, and could then narrow things down further with silly (but effective) debugging echo statments in /etc/*.d scripts. With SMF in the picture, I'm not really quite sure where to start. We forced a crash dump via sync at the {ok} prompt for later analysis, and then let the box come up because it's a production server and our scheduled outage window was closing. /var/adm/messages shows nothing of use. How would you debug this situation? mdb ps of the savecore shows the following processes were running at hang time (afsd is the OpenAFS client and that many are expected): > > ::ps S PID PPID PGID SID UID FLAGS ADDR NAME R 0 0 0 0 0 0x00000001 00000000018387c0 sched R 108 0 0 0 0 0x00020001 00000600110fe010 zpool-silmaril-p R 3 0 0 0 0 0x00020001 0000060010b29848 fsflush R 2 0 0 0 0 0x00020001 0000060010b2a468 pageout R 1 0 0 0 0 0x4a024000 0000060010b2b088 init R 1327 1 1327 329 0 0x4a024002 00000600176ab0c0 reboot R 747 1 7 7 0 0x42020001 0000060017f9d0e0 afsd R 749 1 7 7 0 0x42020001 00000600180104d0 afsd R 752 1 7 7 0 0x42020001 0000060017cb44b8 afsd R 754 1 7 7 0 0x42020001 0000060017fc8068 afsd R 756 1 7 7 0 0x42020001 0000060017fcb0e8 afsd R 760 1 7 7 0 0x42020001 00000600177f4048 afsd R 762 1 7 7 0 0x42020001 000006001800f8b0 afsd R 764 1 7 7 0 0x42020001 000006001800ec90 afsd R 378 1 378 378 0 0x42020000 0000060013aee480 inetd R 7 1 7 7 0 0x42020000 0000060010b28008 svc.startd R 329 7 329 329 0 0x4a024000 00000600110ff850 sh Z 317 7 317 317 0 0x4a014002 0000060013b3a490 sac
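    One hedged approach that avoids sprinkling echo statements through /etc/*.d: snapshot the SMF state before the next attempt and read the per-service logs afterwards, since each SMF service logs its own stop method. The paths below are standard Solaris 10 locations.

      # Before the shutdown attempt: record what is running
      svcs -a > /var/tmp/svcs.before-shutdown
      # After the forced dump and reboot: anything stuck or in maintenance?
      svcs -xv
      # The most recently written service logs are the likeliest suspects
      ls -lt /var/svc/log | head
      tail -50 /var/svc/log/milestone-multi-user-server:default.log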


  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right stackexchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' appearing in GitLab. The problem looks like: $ git push #... error: cannot run hooks/post-receive: No such file or directory Hook Exists The post-receive hook/symlink exists and is executable: -rwxr-xr-x 1 git git 470 Oct 3 2012 .gitolite/hooks/common/post-receive lrwxrwxrwx 1 git git 45 Oct 3 2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive It's Executable By GitLab The gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output): sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive OK GitLab Can Find It GitLab is looking for hooks in the correct location: $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml hooks_path: /home/git/.gitolite/hooks/ and $ bundle exec rake gitlab:app:status RAILS_ENV=production # ... /home/git/.gitolite/hooks/common/post-receive exists? ............YES Environment The env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK: $ env -i redis-cli redis> I've run out of debugging ideas on this one. Does anybody have any suggestions?
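    One more hedged check, since "No such file or directory" for a script that plainly exists often points at the interpreter named in the hook's shebang rather than at the hook itself (GitLab-era hooks are typically Ruby scripts; that part is an assumption):

      # Does the symlink target resolve from inside the repository?
      readlink -f repositories/project.git/hooks/post-receive
      # Which interpreter does the hook ask for, and does the pushing user have it on PATH?
      head -1 /home/git/.gitolite/hooks/common/post-receive
      su - git -c 'command -v ruby; echo $PATH'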


  • Running docker in VPC and accessing container from another VPC machine

    - by Bogdan Gaza
    I'm having issues while running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC, 10.0.100.150 and 10.0.100.151, both with Elastic IPs assigned to them and both running in the same internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine; the container's IP is 172.17.0.2, and port 8111 in the container is forwarded to port 8111 on the machine. When I try to access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC), it works flawlessly. If I try to access the files from the other machine (10.0.100.151), it hangs. I'm using wget to pull the files. I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine, I see the requests going in but no response going back. If I ngrep on the container, I see the requests going in and the response going back. I've tried multiple iptables setups (with POSTROUTING enabled, with manually forwarded ports, etc.) but with no success. Help in any way - even debugging directions - would be much appreciated. Thanks!
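    A hedged set of next steps, not from the question itself: confirm that the host is allowed to forward and masquerade the return traffic, watch both legs of the conversation at once, and rule out an MTU problem between the two VPC instances. The addresses are the ones above; interface names are assumptions.

      # On 10.0.100.150: forwarding and NAT rules the docker bridge relies on
      sysctl net.ipv4.ip_forward
      iptables -t nat -L POSTROUTING -n -v
      iptables -L FORWARD -n -v
      # While wget runs from 10.0.100.151, watch the outer and inner interfaces
      tcpdump -ni eth0 port 8111
      tcpdump -ni docker0 port 8111
      # Quick MTU sanity check from the other instance
      ping -M do -s 1472 10.0.100.150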


  • pfsense, active directory, local domain

    - by Dalton Conley
    First things first, I have no idea what I'm doing. Certainly not afraid to admit that... but here is my network setup. I have 2 servers, one of which is a domain controller. Both are running Windows Server 2008. They have replicated directories. Each server is at a different location and has its own firewall for the network at that location. Both firewalls are using pfSense. Recently a firewall went down and my coworker reinstalled pfSense, and everything seems set up correctly. Again, I have no idea what I'm doing, so I'm not sure. I have records from when the previous IT person set up this network, and the firewall settings are the same, but those records could be extremely old. Now, I have a domain name for my network... we'll call it "mydomain.net". I used to be able to access this domain name and it would bring up the servers' replicated drives (i.e. \\mydomain.net). Now I cannot. I can, however, access the servers' individual host names on my network (i.e. \\server1, \\server2). We didn't change anything on the server, which is what makes me think it's something to do with the firewall. I know this is probably a very general question and I don't have a lot of detail to add, but could anyone give me some insight into what could be causing this, or some debugging techniques I can apply? I'm a programmer, not a network administrator.
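    Since \\server1 works but \\mydomain.net does not, a hedged first check is name resolution: reaching \\mydomain.net depends on the domain name itself resolving to the domain controllers, and DNS Forwarder settings and host overrides on a pfSense box are easy to lose in a reinstall. From a workstation behind the rebuilt firewall (the DC address below is a placeholder):

      nslookup mydomain.net
      nslookup server1
      # Compare with what the domain controller's own DNS returns
      nslookup mydomain.net 192.168.1.10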


  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of those nightmare days: a virtualized server running on a Linux SW-RAID1 runs a VM that exhibits random segfaults in seemingly random code chunks. While debugging, I find that a file gives different md5sums on each and every run. Digging deeper, I find this: the raw disc partitions that make up the RAID1 mirror contain 2 bit-differences, and ca. 9 sectors are completely empty on one disc and filled with data on the other disc. Obviously Linux returns a sector from a nondeterministically chosen disc of the mirror set. So sometimes the same sector is returned OK, sometimes the corrupted one is returned. The docs say: "RAID cannot and is not supposed to guard against data corruption on the media. Therefore, it doesn't make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted." Thanks. That will help me sleep. :-/ Is there a way to have Linux at least detect this corruption by using sector checksumming or something like that? Would this be detected in a RAID5 setup? Is this the moment I wish I had used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
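    The md layer can at least count disagreements between the mirror halves on demand, which is a hedged partial answer to the detection question (md0 is a placeholder device name; a check finds mismatches but cannot say which copy is correct):

      # Ask md to read and compare both halves of the mirror
      echo check > /sys/block/md0/md/sync_action
      # Watch progress, then see how many sectors disagreed
      cat /proc/mdstat
      cat /sys/block/md0/md/mismatch_cnt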


  • (manually configured) kernel update leaves wireless in a mess

    - by Mala
    I recently upgraded my kernel from 2.6.31-gentoo-r6 to 2.6.32-gentoo-r7. In both cases, I configured everything manually. However, since the upgrade, my wireless card appears to be on the fritz. It will connect to networks just fine, and remain connected, but can only access the internet (and other hosts on the network) for about 3 seconds after connecting. Reconnecting to the network appears to fix the problem... for another 3 seconds or so. The problem is "solved" by booting into the older kernel. The relevant lspci entry is 02:00.0 Network controller: Intel Corporation PRO/Wireless 5300 AGN [Shiloh] Network Connection I'm pretty sure I have the correct drivers enabled in the kernel Device Drivers ---> Network device support ---> Wireless LAN (IEEE 802.11) ---> <*> Intel Wireless Wifi [*] Enable LED support in iwlagn and iwl3945 drivers [*] Enable Spectrum Measurement in iwlagn driver [*] Enable full debugging output in iwlagn and iwl3945 drivers <*> Intel Wireless WiFi Next Gen AGN (iwlagn) [*] Intel Wireless WiFi 4965AGN [*] Intel Wireless WiFi 5000AGN; Intel WiFi Link 1000, 6000, and 6050 Series I tried with the other intel drivers enabled as well (iwl3945) and no difference. Is there something stupid I'm missing? Is there something I have to recompile after upgrading the kernel (a la nvidia)? Thanks Mala
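    Not covered in the question, but with iwlagn the "associates, then dies after a few seconds" symptom is often the microcode load rather than the Kconfig options, and that only shows up in the kernel log; a hedged check on the new kernel (the package name is an assumption; older Gentoo setups shipped the blob in net-wireless/iwl5000-ucode):

      # Look for microcode load errors or restarts from the Intel driver
      dmesg | grep -iE 'iwl|firmware'
      # Is the 5000-series firmware the newer driver wants actually present?
      ls /lib/firmware | grep -i iwlwifi-5000
      # Reinstall the firmware package if it is missing (package name is an assumption)
      emerge --ask sys-kernel/linux-firmware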


  • Problem routing between directly connected Subnets w/ ASA-5510

    - by Zephyr Pellerin
    This is an issue I've been struggling with for quite some time, with a seemingly simple answer (aren't all IT problems?), and that is the problem of passing traffic between two directly connected subnets with an ASA. While I'm aware that best practice is to have Internet - Firewall - Router, in many cases this isn't possible. For example, I have an ASA with two interfaces, named OutsideNetwork (10.19.200.3/24) and InternalNetwork (10.19.4.254/24). You'd expect Outside to be able to get to, say, 10.19.4.1, or at LEAST 10.19.4.254, but pinging the interface gives only bad news. Result of the command: "ping OutsideNetwork 10.19.4.254" Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 10.19.4.254, timeout is 2 seconds: ????? Success rate is 0 percent (0/5) Naturally, you'd assume that you could add a static route, to no avail. [ERROR] route Outsidenetwork 10.19.4.0 255.255.255.0 10.19.4.254 1 Cannot add route, connected route exists At this point, you might wonder whether it's a NAT or access-list problem. access-list Outsidenetwork_access_in extended permit ip any any access-list Internalnetwork_access_in extended permit ip any any There is no dynamic NAT (or static NAT, for that matter), and un-NATted traffic is permitted. When I try pinging the above address (10.19.4.254 from Outsidenetwork), I get this error message from level 0 logging (debugging): Routing failed to locate next hop for icmp from NP Identity Ifc:10.19.200.3/0 to Outsidenetwork:10.19.4.1/0 This led me to set same-security-traffic permit, and to assign the same, lesser, and greater security levels to the two interfaces. Am I overlooking something obvious? Is there a command to set static routes that are classified higher than connected routes?
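    Two hedged notes, not from the original post: an ASA generally will not answer pings sent to its far-side interface address, so 10.19.4.254 failing from Outsidenetwork is expected even when transit routing is fine, and packet-tracer will show exactly which phase drops traffic headed to a real host behind the other interface (the source address below is a placeholder host on the outside subnet):

      show running-config same-security-traffic
      packet-tracer input Outsidenetwork icmp 10.19.200.50 8 0 10.19.4.1 detailed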


  • Intermittent Trouble Entering Hibernate on WinXP

    - by kquinn
    My personal desktop, running 32-bit Windows XP SP2 (with 4GB RAM, 2.75GB addressable, swap disabled, hiberfil.sys existing and contiguous on C:\; SP3 is not installed because SP2 has been working fine and I do not want to re-qualify with SP3 just for sheer perversity) typically gets hibernated at night. For a long time this worked great, but recently the machine has had trouble entering hibernation. Sometimes when I press my power button (configured to hibernate), the box will start the procedure for hibernating (i.e., go to the blue "Windows XP" background logo and display a message about entering hibernation), but before displaying the usual blue-on-black hibernation progress bar it will drop back to the desktop. No error messages appear, on screen or in the system log. The only record of unsuccessful hibernation attempts in the system log, which proudly proclaims that "The Windows Image Acquisition (WIA) service entered the running state." once per failed hibernation attempt. The problem is almost certainly resource related: if I then close one or more applications which are running, and repeat the exact same process, the machine will hibernate perfectly. There does not appear to be a reliable high-water mark for virtual or physical memory use, below which the machine is guaranteed to hibernate; it's different every time (though typically, below about 1.1–1.4 GB memory usage seems to be where hibernate succeeds most often). Memory may not even be the relevant resource; as far as I know, it could also be handles or sockets. This behavior is relatively recent: it has only started in the last few months; before then, I could hibernate reliably no matter what the current resource use of the system. This machine claims to have hotfix Q909095 installed, but since the symptoms of my problem match KB909095 rather well, I'm suspicious if this fix is actually working as intended. Any ideas on how to fix this or where to start debugging?
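    One low-tech way to find the resource high-water mark, offered only as a hedged sketch: snapshot processes and free memory immediately before each hibernate attempt, then compare the snapshots from failed and successful attempts (the log folder is a placeholder):

      rem Run just before pressing the power button; one snapshot per attempt
      tasklist /v /fo csv > "C:\hiblog\tasks-%RANDOM%.csv"
      systeminfo | findstr /C:"Available Physical Memory" >> "C:\hiblog\memory.log"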


  • Tomcat Solr times out

    - by user568458
    (Plesk 10.4, CentOS 5.8 Linux, Apache 2 server, with Tomcat5 on port 8080 and Apache Solr.) I get "The connection has timed out" when requesting domain.com:8080 or www.domain.com:8080 or ip.ad.dr.ess:8080. Every reason I can find why this might happen seems not to be the case: Plesk thinks Tomcat is running fine and lists it as an active service. The firewall currently has an accept-all rule on port 8080. There's nothing relevant in the Catalina Tomcat logs (/var/log/tomcat5) - just some stuff from the last time Tomcat was started. There's no record at all of the requests that fail. netstat -lnp | grep 8080 gives the following, which I believe means Tomcat is listening for requests to port 8080 on all IP addresses, from any IP and any port (please correct me if I'm wrong): tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 4018/java This covers every cause of this timeout that I can find - so I must be missing something fundamental. It seems Tomcat is running, listening on the right port, getting an appropriate IP address, not obstructed by a firewall, and not failing after receiving a request in a way which would be recorded in the logs (so I believe it can't be out of memory, or anything like that). I'm all out of ideas on how to continue debugging this. I must have overlooked something obvious. Can anyone help?
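    A hedged way to split "Tomcat is broken" from "the packets never arrive", not part of the question: test over the loopback on the server itself, then watch the port while repeating the request from outside (standard CentOS 5 tools assumed):

      # On the server: does Tomcat answer locally at all?
      curl -v http://127.0.0.1:8080/
      # Anything silently dropping inbound 8080 despite the accept-all rule?
      iptables -L -n -v | grep 8080
      # Watch for the remote request actually arriving
      tcpdump -ni eth0 port 8080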


  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know that there are a lot of other questions that seem exactly like this out there. I think I must've looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me. First, the question: What do I need to do to get gzip compression to work? I have an Ubuntu 12.04 server running nginx 1.1.19. Nginx was installed with the following packages: nginx nginx-common nginx-full The http block of my nginx.conf looks like this: http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "msie6"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Both PageSpeed and YSlow are reporting that I need to enable compression. I can see that the request headers include Accept-Encoding:gzip,deflate,sdch, but the response headers do not have the corresponding Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support, but if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to Nginx, so I've exhausted everything I can think of or have read.
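    Two hedged checks that often settle this: look at what the server actually returns for a compressible type, and keep in mind that with only "gzip on" nginx compresses just text/html by default, so the CSS/JS that PageSpeed and YSlow test needs gzip_types added. The URL path below is a placeholder.

      # Ask the way a browser would and look for Content-Encoding in the reply
      curl -sI -H 'Accept-Encoding: gzip,deflate' http://localhost/ | grep -i content-encoding
      curl -sI -H 'Accept-Encoding: gzip,deflate' http://localhost/css/site.css | grep -i content-encoding
      # Verify how this binary was built (gzip is core unless --without-http_gzip_module was used)
      nginx -V 2>&1 | tr ' ' '\n' | grep -i gzip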


  • Authenticate domain-user credentials on unjoined virtual machine?

    - by bwerks
    Hi all, This question may sound silly, and perhaps a bit insane, but--is there any way to run a process on a machine not joined to a domain using credentials from a user in that domain? In my case, I'm running virtual machines installed with release binaries from our build process, as well as Visual Studio. Visual Studio is there to debug our release binaries, however it's being executed with vm-local user credentials. This means that it can't authenticate to our TFS deployment when executing "tf.exe view" to utilize our Source Server for debugging. Team Explorer manages to authenticate to TFS using a UI prompt, however I suspect that it's because we supply it with the TFS deployment's URI, and it's designed to display a prompt to facilitate workgroup scenarios; i.e. it's not like we're getting it for free. My instincts tell me the only way to authenticate on this vm is to join it or somehow form a one-way trust or something, but is there an easier way? For automation we're going to want to script this eventually, but I'm first surveying the feasibility of the thing.
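    The switch usually suggested for this scenario, offered as a hedged sketch rather than part of the question, is runas /netonly: the process runs under the local account but presents the domain credentials whenever it authenticates over the network, which is typically enough for tf.exe to reach TFS; the credential for the TFS host can also be stored persistently with cmdkey. The account and path below are placeholders.

      runas /netonly /user:CORP\builduser "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe"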


  • Microsoft Access: computer freezes when user tries to update record

    - by CarlF
    A colleague and I have developed an Access 2003 database which is used throughout our department. Currently about four dozen people do data entry using one of two very similar forms. If 47 of us use them, they work perfectly. If Mr. 48 clicks the "Save" button, Windows XP freezes and a hard reset is needed. The problem has to be on his specific computer (Dell latitude D630) and not in the code because this problem only affects him. Complicating the matter: I don't work for IS, and this project is not supported by IS. If I'm going to get our tech support to fix the problem I had better be able to explain exactly what to do and how to do it, because they aren't going to invest any resources. I don't even have admin rights on the computer (and neither does its regular user). I've asked him to bring his laptop the next time he visits my building. (Just to make matters worse, he doesn't usually work in the same location as me or the other developer.) Any suggestions on debugging the problem? My first try will be to uninstall and reinstall Office, which I can do using corporate utilities without being admin. Note: yes, those are old versions of Office and Windows. We expect to upgrade later this year.


  • Setting up squid proxy server to in turn connect using another proxy server [closed]

    - by AnkurVj
    My institute uses the Squid proxy server, and its authentication mechanism requires a username and password to be entered. This means that I can log in on only one machine at a time, and Internet access for me is restricted to that machine. I sometimes require Internet access on multiple machines simultaneously. What previously worked for me was the following: on one of my own machines, A, I set up a Squid proxy server that allowed all local machines without any username and password. I configured the rest of the machines to use machine A as their proxy server. On machine A I logged into the institute proxy server using my browser. This way, I could access the Internet from all my machines by effectively channeling my requests through server A. Recently, I lost that machine and its configuration, and now I have tried to set it up again in the same manner. However, I can't seem to remember exactly how I made it work. I keep getting Connection Refused (111) on the other machines. My guess is that my squid server isn't able to forward requests from the other machines to the actual squid server. I could use any help in debugging this problem. I don't want to use alternatives such as ssh tunneling. This solution has worked for me in the past; I just don't remember how to set it up the same way again.
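    A hedged sketch of the usual way to express "relay everything through the institute's authenticating proxy" in machine A's squid.conf - the host, port, and credentials are placeholders, and the institute proxy has to accept the credentials supplied on the cache_peer login option:

      # squid.conf on machine A (sketch)
      acl localnet src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
      http_access allow localnet
      # Send every request to the parent proxy, authenticating as this user
      cache_peer proxy.institute.example parent 3128 0 no-query default login=myuser:mypassword
      never_direct allow all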


  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is a configuration thing with IIS but is right now giving a lot of trouble. Basically I have a controller that accepts a JSON payload and does some processing. It generally works fine, but every now and then, when the system is under some load, I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem, we wrote a simple controller that accepts a JSON payload and tries to deserialize it; if that fails, it just logs the error. This works fine, but when I hit it using a load testing tool (JMeter) it throws the same error (truncation) for a few requests. The number of failures increases when I increase the number of parallel connections; it starts showing up at around 150 concurrent requests. We are running IIS 7 on Windows Server 2008 with ASP.NET MVC 3 and a more or less default IIS configuration. More information is available in my question here: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size
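    It can also help to reproduce the truncation outside JMeter and from a different client stack; a hedged repro loop (the URL and payload file are placeholders) that fires parallel POSTs of a known-size body and records the status codes:

      seq 1 200 | xargs -P 50 -I{} curl -s -o /dev/null -w '%{http_code}\n' \
          -H 'Content-Type: application/json' --data-binary @payload.json \
          http://server.example.com/api/ingest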


  • Applying ACLs to a Dovecot public namespace

    - by larsks
    I have a public namespace defined in my dovecot (dovecot-2.0.9) configuration that looks like this: namespace { type = public separator = . prefix = news. location = maildir:/var/spool/news subscriptions = no } I would like to make all the mailboxes in this namespace read-only. I've got the following configuration for the ACL plugin: plugin { acl = vfile:/etc/dovecot/acls:cache_secs=300 } After perusing the documentation, it seemed as if, given a mail folder /var/spool/news/.foo.bar, I could place the following into /var/spool/news/.foo.bar/dovecot-acl: anyone rl But that doesn't have any effect. I also tried creating a file /usr/local/etc/dovecot/acls/news.foo.bar with the same contents, but that didn't do anything either. I've turned on mail debugging: mail_debug = yes But the log doesn't produce anything that appears to be relevant to ACL processing. I'm curious to know if anyone has gotten this to work correctly and, if so, whether you could provide some configuration examples. Also, if there's any way to do this that doesn't involve per-mailbox configuration (e.g., the ability to apply an ACL to news.* or something), that would be awesome. Getting the documented behavior for default ACLs working would be a step in the right direction.
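    One hedged thing to rule out first: dovecot-acl files are only consulted when the acl plugin is actually loaded for the protocol in use, which is the most common reason they appear to be ignored. A quick check, plus a sketch of what is typically added if it is missing (the exact config layout is an assumption):

      # Is the plugin active at all?
      doveconf -n | grep -E 'mail_plugins|acl'
      # Typically needed somewhere in the dovecot config if not already present:
      #   mail_plugins = $mail_plugins acl
      #   protocol imap {
      #     mail_plugins = $mail_plugins imap_acl
      #   }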


  • Bacula virtual backup job doesn't run, no output?

    - by Zoredache
    I am trying to get Virtual Backups working, but when I try to run a virtual backup job, it appears to get created, but then never seems to actually run. I have a full, and a couple incremental backups. status director JobId Level Files Bytes Status Finished Name ==================================================================== 1283 Full 10,565 1.963 G OK 21-Dec-12 09:47 nms-Job 1284 Incr 314 129.6 M OK 21-Dec-12 09:49 nms-Job 1285 Incr 230 147.2 M OK 21-Dec-12 09:51 nms-Job 1288 Incr 525 138.8 M OK 21-Dec-12 11:25 nms-Job I attempt to start a job from bconsole like this. *run job=nms-Job level=VirtualFull Using Catalog "MySQL" Run Backup job JobName: nms-Job Level: VirtualFull Client: nms-FileDaemon FileSet: nms-FileSet Pool: nms-pool (From Job resource) Storage: File_d1 (From Pool resource) When: 2012-12-21 13:07:54 Priority: 10 OK to run? (yes/mod/no): Job queued. JobId=1291 Then my new job, just sits there, doing nothing. The JobStatus shows that the job was created, but it appears to never run? All the full, and incremental backups are terminating normally. *llist jobid=1291 JobId: 1,291 Job: nms-Job.2012-12-21_13.07.56_07 Name: nms-Job PurgedFiles: 0 Type: B Level: F ClientId: 4 Name: nms-FileDaemon JobStatus: C SchedTime: 2012-12-21 13:07:54 StartTime: 2012-12-21 13:07:56 EndTime: 0000-00-00 00:00:00 RealEndTime: 0000-00-00 00:00:00 JobTDate: 1,356,124,076 VolSessionId: 0 VolSessionTime: 0 JobFiles: 0 JobErrors: 0 JobMissingFiles: 0 PoolId: 19 PooLname: nms-pool PriorJobId: 0 FileSetId: 11 FileSet: nms-FileSet I am getting very frustrated, that this isn't working, mostly because it isn't giving me any error logs, or output at all. I submit the job, and as far as I can tell nothing happens. Is there some status, or debugging level that I can set to get a useful information about why this isn't working? What can I do to make this work? I was originally running Bacula 5.0.2 on Debian Squeeze, out of frustration, I upgraded to the 5.2.6 in the backports repository, hoping that a new version might give me better results.
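    Two hedged things worth checking, plus a way to make the director explain itself: a VirtualFull consolidates into the pool named by "Next Pool" in the Pool resource, and on a disk-only setup it has to read and write on different devices, so a single File device can leave the job queued indefinitely. A bconsole sketch:

      *setdebug level=100 dir
      *run job=nms-Job level=VirtualFull yes
      *messages
      *status dir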


  • Ubuntu 10.04->10.10 in failed state - how to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL 128kbits/sec) and copying the upgrade files took about 26 hours. I of course let it run unattended. When I came back, I notice the following 3 dlgs: (1) "Could not install the upgrades The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg -- configure -a)." (2) "gpk-update-icon Distribution upgrades available maverick 10.10 (stable) [more information] [Do no show this again] [Cancel] [Ok]" (3) "gpk-update-icon Security updates available The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols libcupsimage2 - Common UNIX Printing System(tm) - Raster image library ..." What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and want to make sure the system is still bootable and not lose my data this time.
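    The standard recovery sequence after an aborted release upgrade, offered as a hedged sketch to run before rebooting: let dpkg finish configuring, repair dependencies, then pull in whatever is still missing from maverick.

      sudo dpkg --configure -a
      sudo apt-get -f install
      sudo apt-get update && sudo apt-get dist-upgrade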


  • Want to install GDB on Fedora 7 machine..

    - by RBA
    Hi, I want to install GDB, the GNU debugger, for debugging C programs on my Fedora machine. I installed the gzipped source from the GNU website, but it gives an error during the make command. I am doing all the steps correctly, following the README file and a tutorial on the internet. Please guide. I have also tried the yum install gdb command and sudo apt-get install gdb. yum was not found on my system; I installed it, but now it gives some unusual error about a missing file, so no success with this. sudo apt-get is working, but it is also giving the following errors... [oracle@localhost Programs]$ sudo apt-get install gdb Password: Sorry, try again. Password: Sorry, try again. Password: oracle is not in the sudoers file. This incident will be reported. [oracle@localhost Programs]$ sudo apt-get install gdb I am in real need of this gdb tool. How should I go about it? Please share your experiences with this. Thanks.
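    On Fedora the packaged route is far less painful than building from source, and the sudo message only means the oracle account is not in sudoers; a hedged sketch:

      # Install the prebuilt package as root (apt-get is not Fedora's native tool)
      su -c 'yum install gdb'
      # Or, as root, grant the account sudo rights by editing sudoers safely
      su -
      visudo      # add a line such as:  oracle ALL=(ALL) ALL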


  • Poor performance on IIS7, only on Windows Server 2008, fine on Windows 7?

    - by user32005
    Hi, I'm new to IIS7 but have experience with other versions. I've been working on an application that works great in a dev environment (as always), but when I push it to a Windows Server 2008/IIS7 box, performance takes a noticeable hit. The dev environment is Windows 7/IIS7. The configuration in IIS is the same on the dev box as on the server. I've tried all sorts of things to find a reason for this, but I can't come to any conclusion. I've ruled out database problems on the live box, as all data is cached after the first request; I've confirmed this to be true and made sure there is no additional database traffic. I've ruled out network issues with a combination of monitoring requests with Fiddler and local debugging on the server. Whenever the code runs on the server, there seems to be a performance issue. The server: Intel Core 2 Quad CPU Q6600 @ 2.40GHz with 2GB RAM. I know this is not fantastic, but I was expecting it to at least perform as well as my dev environment (which is running much more on a lower spec). CPU usage peaks at under 60%, and memory usage is less than half of what is available. I've enabled failed request tracing, and most of the time is spent in a custom HttpModule; this module runs for every request, and I can't get any more detail as to what within the module may be causing the problem. Any ideas? I've been pulling my hair out for days now. Thanks
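    Since the suspicion already points inside one HttpModule, a hedged comparison of a few ASP.NET counters on both machines under the same load can show whether the extra time is queuing, garbage collection, or the module itself (the counter names exist on both Windows 7 and Server 2008, but which one matters here is an open question):

      typeperf "\ASP.NET\Requests Queued" "\ASP.NET Applications(__Total__)\Requests/Sec" ^
               "\.NET CLR Memory(_Global_)\% Time in GC" -si 1 -sc 60 -o baseline.csv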


  • NFS: Server says "authenticated mount request", but client sees "access denied"

    - by zigdon
    I have two machine, an NFS server (RHEL) and a client (Debian). The server has NFS set up, exporting a particular directory: server:~$ sudo /usr/sbin/rpcinfo -p localhost program vers proto port 100000 2 tcp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 910 status 100024 1 tcp 913 status 100021 1 udp 53391 nlockmgr 100021 3 udp 53391 nlockmgr 100021 4 udp 53391 nlockmgr 100021 1 tcp 32774 nlockmgr 100021 3 tcp 32774 nlockmgr 100021 4 tcp 32774 nlockmgr 100007 2 udp 830 ypbind 100007 1 udp 830 ypbind 100007 2 tcp 833 ypbind 100007 1 tcp 833 ypbind 100011 1 udp 999 rquotad 100011 2 udp 999 rquotad 100011 1 tcp 1002 rquotad 100011 2 tcp 1002 rquotad 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100005 1 udp 1013 mountd 100005 1 tcp 1016 mountd 100005 2 udp 1013 mountd 100005 2 tcp 1016 mountd 100005 3 udp 1013 mountd 100005 3 tcp 1016 mountd server$ cat /etc/exports /dir *.my.domain.com(ro) client$ grep dir /etc/fstab server.my.domain.com:/dir /dir nfs tcp,soft,bg,noauto,ro 0 0 All seems well, but when I try to mount, I see the following: client$ sudo mount /dir mount.nfs: access denied by server while mounting server.my.domain.com:/dir And on the server I see: server$ tail /var/log/messages Mar 15 13:46:23 server mountd[413]: authenticated mount request from client.my.domain.com:723 for /dir (/dir) What am I missing here? How should I be debugging this?
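    With a wildcard hostname export, two usual culprits (offered as a hedged sketch, not a diagnosis) are a stale export table and reverse DNS: the export check needs the client's address to resolve back into something matching *.my.domain.com. On the server:

      # Re-read /etc/exports and show what is exported, and to whom
      exportfs -ra
      exportfs -v
      showmount -e localhost
      # Does the client's IP resolve back to a *.my.domain.com name? (placeholder address)
      host 10.0.0.25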

