Search Results

Search found 4065 results on 163 pages for 'psychic debugging'.

Page 120/163 | < Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >

  • GitLab post-receive hook not firing

    - by Ben Graham
    Apologies if this isn't the right stackexchange. I have a GitLab install. It was installed over the top of a gitolite install that was only a few days old, and I assume this non-standard setup is at the root of my problem, but I cannot pin it down. The problem is straightforward: post-receive hooks are not fired. This prevents 'project activity' appearing in GitLab. The problem looks like:
        $ git push
        #... error: cannot run hooks/post-receive: No such file or directory
    Hook exists: the post-receive hook/symlink exists and is executable:
        -rwxr-xr-x 1 git git 470 Oct 3 2012 .gitolite/hooks/common/post-receive
        lrwxrwxrwx 1 git git 45 Oct 3 2012 repositories/project.git/hooks/post-receive -> /home/git/.gitolite/hooks/common/post-receive
    It's executable by GitLab: the gitlab user can execute the script (I have removed the /dev/null redirect and fed in blank input to get an 'OK' as output):
        sudo su - gitlab -c /home/git/.gitolite/hooks/common/post-receive
        OK
    GitLab can find it: GitLab is looking for hooks in the correct location:
        $ grep hooks /srv/gitlab/gitlab/config/gitlab.yml
        hooks_path: /home/git/.gitolite/hooks/
    and
        $ bundle exec rake gitlab:app:status RAILS_ENV=production
        # ... /home/git/.gitolite/hooks/common/post-receive exists? ............YES
    Environment: the env -i line in the hook is commonly cited as an issue. I think that would occur after this problem, but for completeness, redis-cli is found OK:
        $ env -i redis-cli
        redis>
    I've run out of debugging ideas on this one. Does anybody have any suggestions?
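
    A "No such file or directory" from a hook that is itself present and executable usually points at the interpreter named in the hook's shebang rather than the hook file. A few checks worth running, offered as a sketch (the su/namei invocations and the shebang theory are assumptions; paths are adapted from the question):
        # What interpreter does the hook ask for, and is it on disk?
        head -1 /home/git/.gitolite/hooks/common/post-receive
        # Re-run the hook as the repository owner (git), not gitlab, with an empty environment:
        sudo su - git -s /bin/sh -c 'env -i /home/git/.gitolite/hooks/common/post-receive < /dev/null'
        # Confirm every path component of the symlink target is readable/traversable by that user:
        namei -l /home/git/repositories/project.git/hooks/post-receive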

    Read the article

  • pfsense, active directory, local domain

    - by Dalton Conley
    First things first, I have no idea what I'm doing. Certainly not afraid to admit that.. but here is my network setup. I have 2 servers, one of them is a domain controller. Both are running Windows Server 2008. They have replicated directories. Each server is at a different location and has its own firewall for the network at that location. Both firewalls are using pfSense. Recently a firewall went down and my coworker reinstalled pfSense, and everything seems set up correctly. Again, I have no idea what I'm doing so I'm not sure. I have records from when the previous IT person had set up this network and the firewall settings are the same, but those records could have been extremely old. Now, I have a domain name for my network.. we'll call it "mydomain.net". I used to be able to access this domain name and it would bring up the servers' replicated drives (i.e. \\mydomain.net). Now I cannot. I can however access the servers' individual host names on my network (i.e. \\server1, \\server2). We didn't change anything on the server, which is what makes me think it's something to do with the firewall. I know this is probably a very general question and I don't have a lot of detail to add, but could anyone give me some insight into what could be causing this, or some debugging techniques I can apply to this? I'm a programmer, not a network administrator.

    Read the article

  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of the nightmare days: a virtualized server running on a Linux SW-RAID1 runs a VM that exhibits random segfaults in seemingly random code chunks. While debugging, I find that a file gives different md5sums on each and every run. Digging deeper, I find this: the raw disc partitions that make up the RAID1 mirror contain 2 bit-differences, and ca. 9 sectors are completely empty on one disc and filled with data on the other disc. Obviously Linux gives back a sector from a nondeterministically chosen disc of the mirror set. So sometimes the same sector is returned OK, sometimes the corrupted one is given back. The docs say: "RAID cannot and is not supposed to guard against data corruption on the media. Therefore, it doesn't make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted." Thanks. That will help me sleep. :-/ Is there a way to have Linux at least detect this corruption by using sector checksumming or something like that? Would this be detected in a RAID5 setup? Is this the moment I wish I used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
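
    md can at least count inconsistencies between the mirror halves, though on RAID1 it has no way to know which copy is the good one. A minimal sketch (md0 is a placeholder for the actual array name):
        # Kick off a scrub of the array and then read the mismatch counter:
        echo check > /sys/block/md0/md/sync_action
        cat /proc/mdstat                       # watch the check progress
        cat /sys/block/md0/md/mismatch_cnt     # non-zero means the two halves disagree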

    Read the article

  • (manually configured) kernel update leaves wireless in a mess

    - by Mala
    I recently upgraded my kernel from 2.6.31-gentoo-r6 to 2.6.32-gentoo-r7. In both cases, I configured everything manually. However, since the upgrade, my wireless card appears to be on the fritz. It will connect to networks just fine, and remain connected, but can only access the internet (and other hosts on the network) for about 3 seconds after connecting. Reconnecting to the network appears to fix the problem... for another 3 seconds or so. The problem is "solved" by booting into the older kernel. The relevant lspci entry is:
        02:00.0 Network controller: Intel Corporation PRO/Wireless 5300 AGN [Shiloh] Network Connection
    I'm pretty sure I have the correct drivers enabled in the kernel:
        Device Drivers --->
          Network device support --->
            Wireless LAN (IEEE 802.11) --->
              <*> Intel Wireless Wifi
              [*]   Enable LED support in iwlagn and iwl3945 drivers
              [*]   Enable Spectrum Measurement in iwlagn driver
              [*]   Enable full debugging output in iwlagn and iwl3945 drivers
              <*>   Intel Wireless WiFi Next Gen AGN (iwlagn)
              [*]     Intel Wireless WiFi 4965AGN
              [*]     Intel Wireless WiFi 5000AGN; Intel WiFi Link 1000, 6000, and 6050 Series
    I tried with the other Intel drivers enabled as well (iwl3945) and no difference. Is there something stupid I'm missing? Is there something I have to recompile after upgrading the kernel (a la nvidia)? Thanks, Mala
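
    One thing a new kernel can handle differently is the microcode load for the 5300; if iwlagn fails to load its firmware (or a debug/power-save option behaves differently), the result is often exactly this connect-then-stall pattern. A couple of hedged checks to run under the new kernel (the firmware naming is an assumption):
        # Did iwlagn load its microcode, and is it complaining afterwards?
        dmesg | grep -i iwl
        # The 5300 uses the 5000-series ucode; make sure the blob is actually present:
        ls /lib/firmware | grep -i iwlwifi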

    Read the article

  • Problem routing between directly connected Subnets w/ ASA-5510

    - by Zephyr Pellerin
    This is an issue I've been struggling with for quite some time, with a seemingly simple answer (aren't all IT problems?), and that is the problem of passing traffic between two directly connected subnets with an ASA. While I'm aware that best practice is to have Internet - Firewall - Router, in many cases this isn't possible. For example, I have an ASA with two interfaces, named OutsideNetwork (10.19.200.3/24) and InternalNetwork (10.19.4.254/24). You'd expect Outside to be able to get to, say, 10.19.4.1, or at LEAST 10.19.4.254, but pinging the interface gives only bad news.
        Result of the command: "ping OutsideNetwork 10.19.4.254"
        Type escape sequence to abort.
        Sending 5, 100-byte ICMP Echos to 10.19.4.254, timeout is 2 seconds:
        ?????
        Success rate is 0 percent (0/5)
    Naturally, you'd assume that you could add a static route, to no avail.
        [ERROR] route Outsidenetwork 10.19.4.0 255.255.255.0 10.19.4.254 1
        Cannot add route, connected route exists
    At this point, you might gander if it's a NAT or access-list problem.
        access-list Outsidenetwork_access_in extended permit ip any any
        access-list Internalnetwork_access_in extended permit ip any any
    There is no dynamic NAT (or static NAT for that matter), and unnatted traffic is permitted. When I try pinging the above address (10.19.4.254 from Outsidenetwork), I get this error message from level 0 logging (debugging):
        Routing failed to locate next hop for icmp from NP Identity Ifc:10.19.200.3/0 to Outsidenetwork:10.19.4.1/0
    This led me to set same-security traffic permit, and assigned the same, lesser and greater security numbers between the two interfaces. Am I overlooking something obvious? Is there a command to set static routes that are classified higher than connected routes?
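
    Two notes, offered as a sketch rather than a definitive fix: an ASA will not answer pings sent to its far-side interface through the box, so 10.19.4.254 is a poor test target from Outsidenetwork even when routing works, and for host-to-host traffic between the connected subnets the usual requirements are the same-security statement plus NAT exemption/identity NAT. packet-tracer shows exactly which step drops the packet (the 10.19.200.50 source host below is hypothetical):
        ! Allow traffic to flow between interfaces that share a security level:
        same-security-traffic permit inter-interface
        ! Trace a simulated ICMP echo from an outside host to an inside host and see
        ! which phase (route lookup, ACL, NAT, ...) fails:
        packet-tracer input Outsidenetwork icmp 10.19.200.50 8 0 10.19.4.1 detailed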

    Read the article

  • Intermittent Trouble Entering Hibernate on WinXP

    - by kquinn
    My personal desktop, running 32-bit Windows XP SP2 (with 4GB RAM, 2.75GB addressable, swap disabled, hiberfil.sys existing and contiguous on C:\; SP3 is not installed because SP2 has been working fine and I do not want to re-qualify with SP3 just for sheer perversity) typically gets hibernated at night. For a long time this worked great, but recently the machine has had trouble entering hibernation. Sometimes when I press my power button (configured to hibernate), the box will start the procedure for hibernating (i.e., go to the blue "Windows XP" background logo and display a message about entering hibernation), but before displaying the usual blue-on-black hibernation progress bar it will drop back to the desktop. No error messages appear, on screen or in the system log. The only record of unsuccessful hibernation attempts is a system log entry proudly proclaiming that "The Windows Image Acquisition (WIA) service entered the running state.", once per failed hibernation attempt. The problem is almost certainly resource related: if I then close one or more applications which are running, and repeat the exact same process, the machine will hibernate perfectly. There does not appear to be a reliable high-water mark for virtual or physical memory use, below which the machine is guaranteed to hibernate; it's different every time (though typically, below about 1.1–1.4 GB memory usage seems to be where hibernate succeeds most often). Memory may not even be the relevant resource; as far as I know, it could also be handles or sockets. This behavior is relatively recent: it has only started in the last few months; before then, I could hibernate reliably no matter what the current resource use of the system. This machine claims to have hotfix Q909095 installed, but since the symptoms of my problem match KB909095 rather well, I'm suspicious that this fix is not actually working as intended. Any ideas on how to fix this or where to start debugging?

    Read the article

  • Tomcat Solr times out

    - by user568458
    (Plesk 10.4, CentOS 5.8 Linux, Apache2 server, with Tomcat5 on port 8080 and Apache Solr.) I get "The connection has timed out" on requesting domain.com:8080 or www.domain.com:8080 or ip.ad.dr.ess:8080. Every reason I can find why this might be seems not to be the case: Plesk thinks Tomcat is running fine and lists it as an active service. The firewall currently has an accept-all rule on port 8080. There's nothing relevant in the catalina tomcat logs (/var/log/tomcat5) - just some stuff from last time Tomcat was started. There's no record at all of the requests that fail. netstat -lnp | grep 8080 gives the following, which I believe means Tomcat is listening to requests to port 8080 on all IP addresses, from any IP and any port (please correct me if I'm wrong):
        tcp   0   0 0.0.0.0:8080   0.0.0.0:*   LISTEN   4018/java
    This covers every cause of this time out that I can find - so I must be missing something fundamental. It seems Tomcat is running, listening to the right port, is getting an appropriate IP address, is not obstructed by a firewall and is not failing after receiving a request in a way which would be recorded in the logs (so I believe it can't be out of memory, or anything like that). I'm all out of ideas on how to continue debugging this. I must have overlooked something obvious. Can anyone help?
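
    A quick way to split the problem in half is to make the same request from the server itself: if the loopback works but the public address does not, whatever sits in front of Tomcat (Plesk's firewall rules, a NAT, the provider's filtering) is the suspect rather than Tomcat. A sketch of checks to run on the server (substitute the real public IP):
        curl -sv --max-time 10 http://localhost:8080/ -o /dev/null       # Tomcat itself
        curl -sv --max-time 10 http://ip.ad.dr.ess:8080/ -o /dev/null    # via the public address
        iptables -L -n -v | grep 8080                                    # is the accept rule actually matching packets?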

    Read the article

  • Authenticate domain-user credentials on unjoined virtual machine?

    - by bwerks
    Hi all, This question may sound silly, and perhaps a bit insane, but--is there any way to run a process on a machine not joined to a domain using credentials from a user in that domain? In my case, I'm running virtual machines installed with release binaries from our build process, as well as Visual Studio. Visual Studio is there to debug our release binaries, however it's being executed with vm-local user credentials. This means that it can't authenticate to our TFS deployment when executing "tf.exe view" to utilize our Source Server for debugging. Team Explorer manages to authenticate to TFS using a UI prompt, however I suspect that it's because we supply it with the TFS deployment's URI, and it's designed to display a prompt to facilitate workgroup scenarios; i.e. it's not like we're getting it for free. My instincts tell me the only way to authenticate on this vm is to join it or somehow form a one-way trust or something, but is there an easier way? For automation we're going to want to script this eventually, but I'm first surveying the feasibility of the thing.

    Read the article

  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know that there are a lot of other questions that seem exactly like this out there. I think I must've looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me. First, the question: what do I need to do to get gzip compression to work? I have an Ubuntu 12.04 server installed running nginx 1.1.19. Nginx was installed with the following packages: nginx, nginx-common, nginx-full. The http block of my nginx.conf looks like this:
        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "msie6";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
    Both PageSpeed and YSlow are reporting that I need to enable compression. I can see that the request headers indicate Accept-Encoding: gzip,deflate,sdch, but the response headers do not have the corollary Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support, but if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to Nginx so I've exhausted everything I can think of or have read. Thanks.
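
    Two things worth checking, offered as a sketch: the gzip module is compiled in unless nginx was built with --without-http_gzip_module (nginx -V shows the build flags), and by default gzip only compresses text/html, so CSS/JS/JSON need an explicit gzip_types line and responses smaller than gzip_min_length are skipped. Testing with a plain GET avoids the quirks of HEAD requests:
        # Confirm the gzip module wasn't compiled out:
        nginx -V 2>&1 | tr ' ' '\n' | grep -i gzip
        # Do a real GET with an Accept-Encoding header and dump the response headers:
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://localhost/ | grep -i content-encoding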

    Read the article

  • Microsoft Access: computer freezes when user tries to update record

    - by CarlF
    A colleague and I have developed an Access 2003 database which is used throughout our department. Currently about four dozen people do data entry using one of two very similar forms. If 47 of us use them, they work perfectly. If Mr. 48 clicks the "Save" button, Windows XP freezes and a hard reset is needed. The problem has to be on his specific computer (Dell latitude D630) and not in the code because this problem only affects him. Complicating the matter: I don't work for IS, and this project is not supported by IS. If I'm going to get our tech support to fix the problem I had better be able to explain exactly what to do and how to do it, because they aren't going to invest any resources. I don't even have admin rights on the computer (and neither does its regular user). I've asked him to bring his laptop the next time he visits my building. (Just to make matters worse, he doesn't usually work in the same location as me or the other developer.) Any suggestions on debugging the problem? My first try will be to uninstall and reinstall Office, which I can do using corporate utilities without being admin. Note: yes, those are old versions of Office and Windows. We expect to upgrade later this year.

    Read the article

  • Setting up squid proxy server to in turn connect using another proxy server [closed]

    - by AnkurVj
    My institute uses the Squid proxy server, and the authentication mechanism requires a username and password to be entered. This means that I can log in on only one machine at a time, and Internet access for me is restricted to that machine. I sometimes require Internet access on multiple machines simultaneously. What previously worked for me was the following: on one of my own machines, A, I set up a Squid proxy server that allowed all local machines without any username and password. I configured the rest of the machines to use this machine A as the proxy server. On machine A I logged into the institute proxy server using my browser. This way, I could access the Internet from all my machines, by effectively channeling my requests through the server A. Recently, I lost that machine and configuration, and now I have tried to set it up again in the same manner. However, I can't seem to remember exactly how I made it work. I keep getting Connection Refused (111) on other machines. My guess is that my squid server isn't able to forward requests from other machines to the actual squid server. I could use any help for debugging this problem. I don't want to use alternatives such as ssh tunneling. This solution has worked for me in the past, I just don't remember how to set it up the same way again.
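
    Connection Refused (111) on the other machines means they are not even reaching a listening Squid on machine A, so the first things to confirm are the http_port and the http_access rule for the LAN; chaining to the institute proxy is then a cache_peer line. A minimal squid.conf sketch (hostname, ports, subnet and credentials are all placeholders):
        # Listen for the other machines and let the local subnet through:
        http_port 3128
        acl localnet src 192.168.0.0/16
        http_access allow localnet
        # Forward everything to the institute proxy, supplying the login there:
        cache_peer proxy.institute.example parent 8080 0 no-query default login=USERNAME:PASSWORD
        never_direct allow all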

    Read the article

  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is a configuration thing with IIS, but it is right now giving a lot of trouble. Basically I have a controller that accepts a JSON and does some processing. It generally works fine, but every now and then, when the system has some load, I get an error. After some painful debugging, we figured out the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem, we wrote a simple controller that accepts a JSON and tries to deserialize it; in case it fails it just logs it. This works fine, but when I hit it using a load testing tool (JMeter) it throws the same error (truncation) for a few requests. The number of failures increases when I increase parallel connections. It starts showing with 150 concurrent requests. We are running IIS 7 on a Windows 2008 server with ASP.NET MVC 3 with more or less the default configuration of IIS. More information is available in my question below: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size

    Read the article

  • Applying ACLs to a Dovecot public namespace

    - by larsks
    I have a public namespace defined in my dovecot (dovecot-2.0.9) configuration that looks like this:
        namespace {
          type = public
          separator = .
          prefix = news.
          location = maildir:/var/spool/news
          subscriptions = no
        }
    I would like to make all the mailboxes in this namespace read-only. I've got the following configuration for the ACL plugin:
        plugin {
          acl = vfile:/etc/dovecot/acls:cache_secs=300
        }
    After perusing the documentation, it seemed as if, for a mail folder /var/spool/news/.foo.bar, I could place the following into /var/spool/news/.foo.bar/dovecot-acl:
        anyone rl
    But that doesn't have any effect. I also tried creating a file /usr/local/etc/dovecot/acls/news.foo.bar with the same contents, but that didn't do anything, either. I've turned on mail debugging (mail_debug = yes), but the log doesn't produce anything that appears to be relevant to ACL processing. I'm curious to know if anyone has gotten this to work correctly and, if so, if you could provide some configuration examples. Also, if there's any way to do this that doesn't involve per-mailbox configuration (e.g., the ability to apply an ACL to news.* or something), that would be awesome. Getting the documented behavior for default ACLs working would be a step in the right direction.
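
    One thing the configuration above never states is that the acl plugin is actually loaded; if it is not listed in mail_plugins, the dovecot-acl files and the global ACL directory are silently ignored. A hedged sketch of the possibly missing pieces for dovecot 2.0 (the imap_acl part only advertises the ACL extension to clients and is optional):
        # Load the ACL plugin for all mail protocols:
        mail_plugins = $mail_plugins acl
        protocol imap {
          # Optionally expose the IMAP ACL commands as well:
          mail_plugins = $mail_plugins imap_acl
        }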

    Read the article

  • Ubuntu 10.04->10.10 in failed state - how to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL, 128 kbit/s) and copying the upgrade files took about 26 hours. I of course let it run unattended. When I came back, I noticed the following three dialogs:
    (1) "Could not install the upgrades. The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg --configure -a)."
    (2) "gpk-update-icon: Distribution upgrades available: maverick 10.10 (stable) [more information] [Do not show this again] [Cancel] [Ok]"
    (3) "gpk-update-icon: Security updates available. The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols; libcupsimage2 - Common UNIX Printing System(tm) - Raster image library; ..."
    What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and want to make sure the system is still bootable and not lose my data this time.
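
    A sketch of the usual recovery path, assuming the system still boots (or from a recovery-mode root shell): let the aborted configuration run finish, repair any broken dependencies, and then resume the half-completed upgrade using the packages that were already downloaded:
        sudo dpkg --configure -a      # finish configuring the packages that were unpacked
        sudo apt-get -f install       # repair broken/missing dependencies
        sudo apt-get update
        sudo apt-get dist-upgrade     # complete the 10.10 upgrade from the cached packages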

    Read the article

  • Bacula virtual backup job doesn't run, no output?

    - by Zoredache
    I am trying to get Virtual Backups working, but when I try to run a virtual backup job, it appears to get created, but then never seems to actually run. I have a full and a couple of incremental backups:
        status director
        JobId  Level  Files   Bytes    Status  Finished         Name
        ====================================================================
        1283   Full   10,565  1.963 G  OK      21-Dec-12 09:47  nms-Job
        1284   Incr   314     129.6 M  OK      21-Dec-12 09:49  nms-Job
        1285   Incr   230     147.2 M  OK      21-Dec-12 09:51  nms-Job
        1288   Incr   525     138.8 M  OK      21-Dec-12 11:25  nms-Job
    I attempt to start a job from bconsole like this:
        *run job=nms-Job level=VirtualFull
        Using Catalog "MySQL"
        Run Backup job
        JobName:  nms-Job
        Level:    VirtualFull
        Client:   nms-FileDaemon
        FileSet:  nms-FileSet
        Pool:     nms-pool (From Job resource)
        Storage:  File_d1 (From Pool resource)
        When:     2012-12-21 13:07:54
        Priority: 10
        OK to run? (yes/mod/no):
        Job queued. JobId=1291
    Then my new job just sits there, doing nothing. The JobStatus shows that the job was created, but it appears to never run. All the full and incremental backups are terminating normally.
        *llist jobid=1291
        JobId: 1,291
        Job: nms-Job.2012-12-21_13.07.56_07
        Name: nms-Job
        PurgedFiles: 0
        Type: B
        Level: F
        ClientId: 4
        Name: nms-FileDaemon
        JobStatus: C
        SchedTime: 2012-12-21 13:07:54
        StartTime: 2012-12-21 13:07:56
        EndTime: 0000-00-00 00:00:00
        RealEndTime: 0000-00-00 00:00:00
        JobTDate: 1,356,124,076
        VolSessionId: 0
        VolSessionTime: 0
        JobFiles: 0
        JobErrors: 0
        JobMissingFiles: 0
        PoolId: 19
        PooLname: nms-pool
        PriorJobId: 0
        FileSetId: 11
        FileSet: nms-FileSet
    I am getting very frustrated that this isn't working, mostly because it isn't giving me any error logs or output at all. I submit the job, and as far as I can tell nothing happens. Is there some status or debugging level that I can set to get useful information about why this isn't working? What can I do to make this work? I was originally running Bacula 5.0.2 on Debian Squeeze; out of frustration, I upgraded to the 5.2.6 in the backports repository, hoping that a new version might give me better results.
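
    Two hedged suggestions: a VirtualFull consolidates the existing jobs into the pool named by the Pool resource's "Next Pool" directive, and it has to read from one storage device while writing to another, so a missing Next Pool (or a single device that cannot do both at once) is a common reason for a job that stays parked in Created status. bconsole can also be made much more talkative while the job sits there:
        *setdebug level=100 dir            # make the director log what it is waiting for
        *status storage=File_d1            # is the device reserved, or waiting on a mount?
        *messages                          # flush any queued job messages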

    Read the article

  • Want to install GDB on Fedora 7 machine..

    - by RBA
    Hi, I want to install GDB, the GNU debugger, for debugging C programs on my Fedora machine. I downloaded the gzipped source from the GNU website, but it gives an error during the make command. I am doing all the steps as described in the README file and a tutorial on the internet. Please guide. I have also tried yum install gdb and sudo apt-get install gdb. yum was not found on my system; I installed it, but now it is giving some unusual error about a missing file, so no success with this. sudo apt-get is working, but it is also giving the following errors:
        [oracle@localhost Programs]$ sudo apt-get install gdb
        Password:
        Sorry, try again.
        Password:
        Sorry, try again.
        Password:
        oracle is not in the sudoers file. This incident will be reported.
        [oracle@localhost Programs]$ sudo apt-get install gdb
    I am in real need of this gdb tool. How do I go about it? Please share your experiences over this. Thanks.
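
    On Fedora the native package manager is yum (apt-get is an add-on there and isn't needed), and since the oracle user is not in the sudoers file, becoming root with su is the simpler route. A sketch, plus the usual sequence if building GDB from the unpacked source tarball instead:
        # Install the packaged gdb as root (prompts for the root password):
        su -c 'yum install gdb'
        # Or, from inside the unpacked source directory, the standard GNU build steps:
        ./configure && make && su -c 'make install'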

    Read the article

  • Poor performance on IIS7, only on Windows Server 2008, fine on Windows 7?

    - by user32005
    Hi, I'm new to IIS7 but have experience with other versions. I've been working on an application that works great in a dev environment (as always), but when I push it to a Windows Server 2008/IIS7 box, performance takes a noticeable hit. The dev environment is Windows 7/IIS7. The configuration in IIS is the same on the dev box as on the server. I've tried all sorts of things to try and find a reason for this, but I can't come to any conclusion. I've ruled out database problems on the live box, as all data is cached after the first request; I've confirmed this to be true and made sure there is no additional database traffic. I've ruled out network issues with a combination of monitoring requests with Fiddler and local debugging on the server. Whenever the code runs on the server there seems to be a performance issue. The server: Intel Core 2 Quad CPU Q6600 @ 2.40GHz with 2GB RAM. I know this is not fantastic, but I was expecting it to at least perform as well as my dev environment (which is running much more on a lower spec). CPU usage peaks at under 60%, and memory usage is less than half of the available. I've enabled failed request tracing and most of the time is spent in a custom HttpModule; this module works to handle every request, but I can't get any more detail as to what within the module may be causing the problem. Any ideas? I've been pulling my hair out for days now. Thanks

    Read the article

  • NFS: Server says "authenticated mount request", but client sees "access denied"

    - by zigdon
    I have two machines, an NFS server (RHEL) and a client (Debian). The server has NFS set up, exporting a particular directory:
        server:~$ sudo /usr/sbin/rpcinfo -p localhost
           program vers proto   port
            100000    2   tcp    111  portmapper
            100000    2   udp    111  portmapper
            100024    1   udp    910  status
            100024    1   tcp    913  status
            100021    1   udp  53391  nlockmgr
            100021    3   udp  53391  nlockmgr
            100021    4   udp  53391  nlockmgr
            100021    1   tcp  32774  nlockmgr
            100021    3   tcp  32774  nlockmgr
            100021    4   tcp  32774  nlockmgr
            100007    2   udp    830  ypbind
            100007    1   udp    830  ypbind
            100007    2   tcp    833  ypbind
            100007    1   tcp    833  ypbind
            100011    1   udp    999  rquotad
            100011    2   udp    999  rquotad
            100011    1   tcp   1002  rquotad
            100011    2   tcp   1002  rquotad
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100005    1   udp   1013  mountd
            100005    1   tcp   1016  mountd
            100005    2   udp   1013  mountd
            100005    2   tcp   1016  mountd
            100005    3   udp   1013  mountd
            100005    3   tcp   1016  mountd

        server$ cat /etc/exports
        /dir *.my.domain.com(ro)

        client$ grep dir /etc/fstab
        server.my.domain.com:/dir /dir nfs tcp,soft,bg,noauto,ro 0 0
    All seems well, but when I try to mount, I see the following:
        client$ sudo mount /dir
        mount.nfs: access denied by server while mounting server.my.domain.com:/dir
    And on the server I see:
        server$ tail /var/log/messages
        Mar 15 13:46:23 server mountd[413]: authenticated mount request from client.my.domain.com:723 for /dir (/dir)
    What am I missing here? How should I be debugging this?
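
    Since the export is restricted to *.my.domain.com, a mismatch between the client's reverse DNS and that wildcard is a classic cause of "access denied" even when mountd logs the request, and it is also worth re-exporting after any /etc/exports change. A sketch of checks (run on the machine named in each prompt; the client IP is a placeholder):
        client$ showmount -e server.my.domain.com     # is /dir in the export list offered to this client?
        server$ host <client-ip>                      # does reverse DNS give a name ending in .my.domain.com?
        server$ sudo exportfs -rv                     # re-read /etc/exports and print what is exported to whom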

    Read the article

  • netlogon errors

    - by rorr
    I have two instances of MSSQL 2005 and am using CA XOsoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 SP2 x64, with the same patch levels on all servers. This setup has worked great for several months until we recently restricted the RPC ports on both nodes of the master (5000-6000, using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for SQL Windows authentication and NETLOGON Event ID 5719: "This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator." We also see group policies failing to update and cluster file shares go offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers rebooted, but the problems persist. The domain controllers are not showing any errors. Running dcdiag and netdiag shows everything is fine. We have noticed that the XOsoft service ws_rep.exe is using a lot of handles (8-9k), about the same number that sqlserver is using. As soon as XOsoft replication is stopped, the login errors cease and everything functions correctly. I have opened a ticket with CA for XOsoft, but I'm not sure that the problem is actually XOsoft; it may just be the one bringing the problem to light. I'm looking for tips on debugging RPC problems, specifically on limiting the ports and then reverting the changes.

    Read the article

  • Apache https is slow

    - by raucous12
    Hey, I've set apache up to use SSL with a self signed certificate. With http (KeepAlive off), I can get over 5000 requests per second. However, with https, I can only get 13 requests per second. I know there is supposed to be a bit of an overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this. Here is the ab log for https:
        Server Software:        Apache/2.2.3
        Server Hostname:        127.0.0.1
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256
        Document Path:          /hello.html
        Document Length:        29 bytes
        Concurrency Level:      5
        Time taken for tests:   30.49425 seconds
        Complete requests:      411
        Failed requests:        0
        Write errors:           0
        Total transferred:      119601 bytes
        HTML transferred:       11919 bytes
        Requests per second:    13.68 [#/sec] (mean)
        Time per request:       365.565 [ms] (mean)
        Time per request:       73.113 [ms] (mean, across all concurrent requests)
        Transfer rate:          3.86 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      190  347   74.3    333    716
        Processing:     0   14   24.0      1    166
        Waiting:        0   11   21.6      0    165
        Total:        191  361   80.8    345    716

        Percentage of the requests served within a certain time (ms)
          50%    345
          66%    377
          75%    408
          80%    421
          90%    468
          95%    521
          98%    578
          99%    596
         100%    716 (longest request)
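
    The ab output itself points at the handshake: nearly all of the 365 ms per request is in Connect, and the negotiated cipher is DHE-RSA-AES256-SHA with a 4096-bit ephemeral DH exchange, which is expensive to compute on every new connection. A hedged mod_ssl sketch: let repeat connections reuse sessions and steer the key exchange away from ephemeral DH (the cipher string and cache path are assumptions to adapt):
        # Cache SSL sessions so repeat connections can skip the full handshake:
        SSLSessionCache        shmcb:/var/run/ssl_scache(512000)
        SSLSessionCacheTimeout 300
        # Exclude ephemeral-DH key exchange so the 4096-bit DHE computation is avoided:
        SSLCipherSuite         HIGH:!aNULL:!MD5:!kEDH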

    Read the article

  • Synergy: Cannot send media keys from Linux to Mac

    - by CraftyThumber
    I have a Linux Synergy server (Si-Linux) serving just one Mac client (MacBook Pro UK) (SiBook-Pro.local). On my Linux server I am using a USB Apple keyboard with the exact layout of the laptop's keyboard (the compact UK aluminium keyboard). I would like to send the media keys to the Mac client at all times, and I have attempted the following in my synergy.conf:
        keystroke(AudioPlay) = keystroke(AudioPlay,SiBook-Pro.local)
    This did not seem to work, so I ran both the server and client as foreground processes with debugging enabled and observed the following.
    Server log:
        DEBUG1: activate actions
        DEBUG1: hotkey: keyDown(AudioPlay,SiBook-Pro.local)
        DEBUG1: onKeyDown id=57523 mask=0x0000 button=0x0000
        DEBUG1: send key down to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000
        DEBUG1: deactivate actions
        DEBUG1: hotkey: keyUp(AudioPlay,SiBook-Pro.local)
        DEBUG1: onKeyUp id=57523 mask=0x0000 button=0x0000
        DEBUG1: send key up to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000
    Client log:
        DEBUG1: recv key down id=0x0000e0b3, mask=0x0000, button=0x0000
        DEBUG1: mapKey e0b3 (57523) with mask 0000, start state: 0000
        DEBUG1: key e0b3 is not on keyboard
        DEBUG1: recv key up id=0x0000e0b3, mask=0x0000, button=0x0000
        DEBUG1: recv enter, 1279,386 5 2000
    As you can see, the client claims the key received is not on the keyboard. I don't understand, since it is the same key as is on the MacBook's keyboard. I tried to reverse the client-server config to see if I could capture the key being sent if I pressed the Play button on the MacBook, but the key doesn't seem to even make it to Synergy. Almost all keyboard presses get logged, but the media keys seem to bypass the logs and just execute their function locally. E.g. I press play on the MacBook (with the MacBook as the server) and the key plays music on the MacBook, and the key is not logged to the debug log.

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web services. Our database is a multi-az RDS MySQL server running 5.1.57 and 3-4 app servers talk to it. Today, we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect. Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server, and our app servers ended up in different availability zones, but all within the same region. But yesterday, everything was fine so I'm not convinced that's related. The lock timeouts are in different types of requests and occur in different InnoDB tables. I have noticed the number of open connections jumped when we started seeing problems, but they may be a symptom and not a cause. What are my next steps in debugging this?
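
    When the next timeout happens, InnoDB's own monitor output shows which transaction holds the blocking lock and what every connection is doing; both statements below exist in MySQL 5.1 and can be run against RDS (the endpoint and user are placeholders):
        mysql -h my-instance.rds.amazonaws.com -u admin -p -e 'SHOW ENGINE INNODB STATUS\G' > innodb-status.txt
        mysql -h my-instance.rds.amazonaws.com -u admin -p -e 'SHOW FULL PROCESSLIST'      >> innodb-status.txt
        mysql -h my-instance.rds.amazonaws.com -u admin -p -e "SHOW VARIABLES LIKE 'innodb_lock_wait_timeout'"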

    Read the article

  • Print from Linux to Windows networked printer

    - by wonkothenoob
    I want to print from a Debian (Lenny) workstation to a Windows networked printer. I'm not even sure what type of Windows network this is. Our tech support is friendly but doesn't want to get involved with supporting Linux. I need to use it for a variety of reasons and am completely stumped because I know nothing about Windows networking. They gave me the URI smb://msprint.ourorg.edu as the "address" of the printer and further confirmed that the domain is "OURORG" and the share is "PHYS-PRI". I've installed CUPS and made sure that it's running as a daemon, I've clicked on the system-config-printer icon (a GUI to CUPS configuration, http://cyberelk.net/tim/software/system-config-printer/), selected the printer as a Windows printer shared via SAMBA and entered the above URI. Attempting to print a test page just sees it sit in the queue. I attempted to see if I could access the share using two other methods.
    Method 1: first I tried smbclient from the CLI:
        $ smbclient -L //msprint.ourorg.edu -U user23
        timeout connecting to 192.168.44.3:445
        timeout connecting to 192.168.44.3:139
        Connection to msprint.ourorg.edu failed (Error NT_STATUS_ACCESS_DENIED)
    Method 2: I tried to use the GUI tool Smb4K. This shows me four other top-level groupings (I'm assuming they're domains?), one of which is the one which our IT department supplied to me. Clicking them shows a bunch of other machines with (what I assume are NetBIOS names?) including my own. I see all sorts of other networked printers belonging to other departments but none within mine. Certainly not the PHYS-PRI one suggested to me by the IT folks. I realize that I'm probably using the wrong terminology for the Windows network, but can anyone help me with this? What steps should I be taking in debugging this? Do I need to actually run my machine as a SAMBA server to authenticate to the printer or should I just be able to communicate using CUPS?
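
    Since the timeouts on ports 445/139 happen before any authentication, the first suspects are a firewall between the workstation and the print server and the missing domain in the credentials; the same domain also belongs in the CUPS device URI. A hedged sketch (the queue name and password are placeholders, and the printer may need a proper driver rather than a raw queue):
        # Try the share again, explicitly naming the domain:
        smbclient -L //msprint.ourorg.edu -U user23 -W OURORG
        # Define a CUPS queue whose smb:// URI carries domain, user and password:
        lpadmin -p phys-pri -E -v 'smb://OURORG/user23:PASSWORD@msprint.ourorg.edu/PHYS-PRI' -m raw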

    Read the article

  • Emails not sending from outlook / OWA - Not even hitting the mail queue in exchange

    - by webnoob
    We are having an issue this morning where we can receive external emails but cannot send internal or external ones from Outlook or OWA. If I use:
        Send-MailMessage -From <[email protected]> -To <[email protected]> -Subject "Test #01" -Body "Just a test message." -SMTPServer <Server-Name> -Credential <domain\user>
    the email is sent correctly, which makes me think there is a connection issue with OWA and Outlook. However, Outlook is reporting as connected with Exchange. I have checked the message tracking in the Exchange tools, and emails sent via Outlook and OWA do not appear. Nothing has changed on the server over the weekend, so I don't really know where to start debugging this issue. We are using Windows SBS 2011. We only have one send connector, which isn't using smart hosts and is set to use DNS MX records. "Use external DNS" is not checked, and I can ping google.com etc., so it doesn't appear to be a DNS issue (plus the email sends from the console anyway).
    EDIT: It appears that users using IMAP can send emails correctly; it's only ones that rely on the normal Exchange connection type that don't work.
    EDIT: Emails from IMAP are hitting the email queues, whereas emails from the normal Exchange accounts aren't.
    EDIT: It seems that some of the emails we tried to send yesterday sent at about 1am, but now it won't work again..
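
    Given that IMAP submissions reach the queues but Outlook/OWA submissions never do, the place to look first is the mailbox-side submission path (on SBS 2011 / Exchange 2010, the Mail Submission and Transport services) rather than the send connector. A few Exchange Management Shell checks, offered as a sketch (the target address is a placeholder):
        Test-ServiceHealth                                           # flags required Exchange services that are not running
        Get-Queue | Format-Table Identity,Status,MessageCount        # confirm nothing is backing up in transport
        Test-Mailflow -TargetEmailAddress someone@yourdomain.local   # does a mailbox-submitted test message ever reach transport?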

    Read the article

  • Ubuntu in failed state after upgrade from 10.04 to 10.10 - How to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL, 128 kbit/s) and copying the upgrade files took about 26 hours. I of course let it run unattended. When I came back, I noticed the following 3 dialogs:
    1. Could not install the upgrades: "The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg --configure -a)."
    2. gpk-update-icon: "Distribution upgrades available: maverick 10.10 (stable) [more information] [Do not show this again] [Cancel] [Ok]"
    3. gpk-update-icon: "Security updates available. The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols; libcupsimage2 - Common UNIX Printing System(tm) - Raster image library; ..."
    What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and want to make sure the system is still bootable and not lose my data this time.

    Read the article
