Search Results

Search found 20817 results on 833 pages for 'general exception'.


  • Gentoo Linux useful utilities

    - by Alakdae
    I want to make a list of utilities that come in handy in Gentoo (general Linux tools available in all distributions are also appreciated). What tools and commands do you use and consider helpful in administering a Gentoo server? I will update the list with commands from answers from time to time.

        eclean: cleans distfiles and binary packages. Usage example: eclean distfiles (cleans out the files in /usr/portage/distfiles; pretty handy). Package: app-portage/gentoolkit

        eix: very useful tool for getting information about a package. Similar to "emerge -s" but much faster and more precise. Usage example: eix gentoolkit (shows information about the package, such as available versions, masked versions, installed versions, and its description). Package: app-portage/eix

        eix-test-obsolete: checks the system for obsolete, redundant, or uninstalled entries in package.keywords, package.mask, package.unmask, package.use, and package.cflags. Usage example: eix-test-obsolete (shows non-matching, redundant, and uninstalled entries). Package: app-portage/eix

        equery: another very useful tool for getting information about packages (listing package files, checking which files belong to which package, and much more). Usage example: equery b emerge (shows which packages installed a file called emerge). Package: app-portage/gentoolkit

        genlop: extracts information about emerged ebuilds. Usage example: genlop -l --date yesterday (shows a list of packages that were emerged yesterday). Package: app-portage/genlop

        glsa-check: checks whether the system is affected by GLSAs (security issues). Usage example: glsa-check -l affected (lists the GLSAs the system is affected by). Package: app-portage/gentoolkit

        rc-update: manages (adds, deletes) runlevel scripts. Usage example: rc-update add syslog-ng default (adds syslog-ng to the default runlevel). Package: sys-apps/baselayout

        revdep-rebuild: scans libraries and binaries for missing shared library dependencies. Usage example: revdep-rebuild (gathers binary and library information, checks for dependencies, and rebuilds packages with missing dependencies). Package: app-portage/gentoolkit
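
    For reference, the tools above chain together naturally into a periodic maintenance pass. A minimal sketch, assuming the packages listed above are installed (emerge --sync and eix-update are the standard companions of the tools described, not utilities from the list itself):

        #!/bin/sh
        # Hypothetical weekly maintenance pass for a Gentoo server
        emerge --sync                # refresh the Portage tree
        eix-update                   # rebuild the eix cache after syncing
        glsa-check -l affected       # list GLSAs the system is affected by
        eclean distfiles             # drop stale files from /usr/portage/distfiles
        revdep-rebuild --pretend     # preview rebuilds for broken library deps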


  • How do I install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)?

    - by Svyatoslavik
    How do I install ImageMagick 6.6.2 on Ubuntu 10.04 (Lucid)? The problem is that Lucid ships an old ImageMagick version (6.5.2). This is very important because I need to work with SVG graphics. On my local PC I have Ubuntu 11.04 with ImageMagick 6.6.2 and everything works fine; the server has 6.5.x, and there I have the problem. Reinstalling Ubuntu as 11.* is not a solution. I tried changing /etc/apt/sources.list from the Ubuntu 10.04 (lucid) list to the Ubuntu 11.04 (natty) one and updating ImageMagick. After this I had ImageMagick 6.6.2 (according to phpinfo()), but ImageMagick no longer works. If I try any action I get this error:

        [error] 8996#0: *19983 FastCGI sent in stderr: "PHP Fatal error: Uncaught exception 'ImagickException' with message 'no decode delegate for this image format `/tmp/magick-XXnYKWKC' @ error/constitute.c/ReadImage/532'

    How can I fix it? Or how do I return to the old version of ImageMagick? And this is the problem when I try to install from source:

        /tmp/image/ImageMagick-6.7.2-7# ./configure
        configuring ImageMagick 6.7.2-7
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-pc-linux-gnu
        checking whether build environment is sane... yes
        checking for a BSD-compatible install... /usr/bin/install -c
        checking for a thread-safe mkdir -p... /bin/mkdir -p
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for style of include used by make... GNU
        checking for gcc... gcc
        checking whether the C compiler works... no
        configure: error: in `/tmp/image/ImageMagick-6.7.2-7':
        configure: error: C compiler cannot create executables
        See `config.log' for more details
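
    A note on the source build: "C compiler cannot create executables" is configure's way of saying the build toolchain itself is missing or broken, not anything ImageMagick-specific, and SVG decoding needs the librsvg delegate present at configure time. A minimal sketch of the usual remedy on Ubuntu (package names are the stock ones; treat the exact configure flags as an assumption):

        sudo apt-get install build-essential    # gcc, make, libc headers
        sudo apt-get install librsvg2-dev       # SVG decode delegate for ImageMagick
        cd /tmp/image/ImageMagick-6.7.2-7
        ./configure --with-rsvg
        make && sudo make install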


  • Server 2008 R2 DNS Lockup / Stops Resolving Internet Names

    - by Richard Maynard
    We've deployed our first 2008 R2 server on a client site, replacing their existing 2003 DC. This server provides DNS resolution to all client machines on that site for general internet usage. Since moving to the 2008 R2 DNS service, we have noticed that every couple of days the DNS server starts timing out on requests for certain sites (Google is the only example I can provide at this time, although it seems to affect larger sites rather than small ones; a CDN compatibility issue?). Restarting the DNS Server service returns resolution to normal... but only for a day or so. Is anybody aware of any significant changes to the DNS server architecture or out-of-the-box configuration in R2 that might explain this intermittent behaviour? I have already tried the fix listed here, to no avail: http://weblogs.asp.net/owscott/archive/2009/09/15/windows-server-2008-r2-dns-issues.aspx The following PowerShell session illustrates the issue:

        PS C:\Users\Administrator.UK> nslookup
        Default Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4

        > www.google.com
        Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4

        Non-authoritative answer:
        Name:    www.l.google.com
        Addresses:  66.102.9.99 66.102.9.104 66.102.9.105 66.102.9.103 66.102.9.147
        Aliases:  www.google.com

        > www.google.co.uk
        Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4

        *** s8209001.uk.kingdomfaith.com can't find www.google.co.uk: Server failed
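
    For anyone comparing notes: the symptom pattern here (large, CDN-backed domains failing until the service restarts) matches the widely reported EDNS0 probe problem on 2008 R2, which is also what the linked article describes. A sketch of the commonly suggested workaround - an assumption about the cause, not a confirmed diagnosis:

        rem Disable EDNS0 probes, clear the cache, then retest
        dnscmd /Config /EnableEDnsProbes 0
        dnscmd /ClearCache
        nslookup www.google.co.uk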


  • Logitech webcam device only recognised by one application, without drivers

    - by Ben Franchuk
    A couple of weeks ago I purchased a Logitech webcam at a garage sale; it did not come with any driver DVDs or anything like that. I plugged it in, turned on my computer, and continued work as usual. I did not at the time (and still have not) installed any drivers for the device. Recently, though, I started up an audio application named Cubase, only to find that it was picking up an audio signal from... something. I checked my sound card and everything else plugged into my computer, but couldn't find where in the world this audio was coming from. There were no microphones listed in the device manager, and no "unknown devices" or the like; everything seemed as it always was. Running out of ideas, I blew an air horn directly in the general area of the webcam, located right in front of me. Sure enough, the audio peaked, indicating that the microphone was definitely in the webcam and that Cubase was somehow picking up its audio, even without drivers. The software lists the device as a "Universal USB Microphone". Adobe Audition, Soundbooth, and other audio applications cannot find the device. Why can this one application (Cubase) use the device without a driver, while every other piece of software on the computer can't? Not even the operating system recognizes it. Windows 7 Professional x64.


  • Postfix SASL Authentication using PAM_Python

    - by Christian Joudrey
    Cross-post from: http://stackoverflow.com/questions/4337995/postfix-sasl-authentication-using-pam-python Hey guys, I just set up a Postfix server on Ubuntu and I want to add SASL authentication using PAM_Python. I've compiled pam_python.so and made sure that it is in /lib/security. I've also created the /etc/pam.d/smtp file and added:

        auth required pam_python.so test.py

    The test.py file has been placed in /lib/security and contains:

        #
        # Duplicates pam_permit.c
        #
        DEFAULT_USER = "nobody"

        def pam_sm_authenticate(pamh, flags, argv):
            try:
                user = pamh.get_user(None)
            except pamh.exception, e:
                return e.pam_result
            if user == None:
                pamh.user = DEFAULT_USER
            return pamh.PAM_SUCCESS

        def pam_sm_setcred(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_acct_mgmt(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_open_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_close_session(pamh, flags, argv):
            return pamh.PAM_SUCCESS

        def pam_sm_chauthtok(pamh, flags, argv):
            return pamh.PAM_SUCCESS

    When I test the authentication using auth plain amltbXkAamltbXkAcmVhbC1zZWNyZXQ= I get the following response:

        535 5.7.8 Error: authentication failed: no mechanism available

    In the Postfix logs I have this:

        Dec  2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication problem: unknown password verifier
        Dec  2 00:37:19 duo postfix/smtpd[16487]: warning: SASL authentication failure: Password verification failed
        Dec  2 00:37:19 duo postfix/smtpd[16487]: warning: localhost.localdomain[127.0.0.1]: SASL plain authentication failed: no mechanism available

    Any ideas? tl;dr: does anyone have step-by-step instructions on how to set up PAM_Python with Postfix? Christian
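
    For context, the "unknown password verifier" / "no mechanism available" pair typically means Postfix's Cyrus SASL layer was never pointed at saslauthd, the component that actually talks to PAM (and hence to pam_python); the /etc/pam.d/smtp file alone is not enough. A minimal sketch of the usual wiring on Ubuntu (stock file locations; treat the exact layout as an assumption, and note Ubuntu runs smtpd chrooted, so saslauthd's socket usually has to live under /var/spool/postfix):

        # /etc/postfix/sasl/smtpd.conf - have Cyrus SASL verify via saslauthd
        pwcheck_method: saslauthd
        mech_list: plain login

        # /etc/default/saslauthd - have saslauthd use PAM (service name "smtp")
        START=yes
        MECHANISMS="pam"
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

        # /etc/postfix/main.cf - enable SASL in smtpd
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_type = cyrus
        smtpd_sasl_path = smtpd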


  • Windows roaming profile when creating a new user profile

    - by molecule
    Hi all, When a particular user is having a lot of problems with Windows XP, e.g. applications crashing or previously working applications becoming unresponsive, a general troubleshooting practice for a domain user is to rename the user's old profile and have him/her log on to create a "fresh" profile (on the same PC). More often than not this solves the problem, albeit with some reconfiguration, i.e. Outlook, Excel add-ins, etc. As I took over the systems admin role from another administrator, I would like to know the easiest way to find out (either through a third-party or a Windows administrative tool) which settings are carried over when the profile is a roaming profile. I tested creating a new profile for one of my users, and basic Outlook settings such as the user's mailbox and PSTs were carried over automatically when the new profile was created. I suspect this is done through a batch file loaded as part of the login script. However, my knowledge of scripting is limited and I don't want any corruption to be carried over to the new profile. Can someone share their experiences with this? Thanks in advance.
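
    Two read-only checks can answer most of this without any scripting: the user object shows whether a roaming profile path is set at all, and the per-user exclusion list shows which folders are deliberately left out of the roamed copy. A sketch (run in a command prompt on the user's PC; the username is a placeholder):

        rem Shows the "User profile" field: blank = local, UNC path = roaming
        net user someuser /domain

        rem Folders excluded from the roaming copy, if any
        reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v ExcludeProfileDirs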


  • GRE Tunnel over IPsec with Loopback

    - by Alek
    I'm having a really hard time trying to establish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand -- let alone am able to configure -- and the only help I could find relates to configuring Cisco routers. My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, particularly intended to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network. As I said, this GRE tunnel involves a "loopback" connection which I have no idea how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I don't know about. I have managed to properly establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and the addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests, and debugging the general setup is very difficult due to the encrypted nature of the traffic. There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to hold the final IP addresses for the hosts inside the VPN. My question is: how (or if) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please let me know if any extra information would be useful. Any help is greatly appreciated.
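
    For what it's worth, on Linux the Cisco-style "loopback" is usually nothing more than a /32 address used as a stable tunnel endpoint, and the kernel handles GRE natively, so no extra daemon is needed beyond racoon for the IPsec layer. A sketch with placeholder addresses (the real values would come from the form mentioned above):

        # put the local "loopback" /32 on the lo device
        ip addr add 10.255.0.1/32 dev lo

        # key the GRE tunnel to the loopback pair, not the public peer pair
        ip tunnel add gre1 mode gre local 10.255.0.1 remote 10.255.0.2 ttl 255
        ip link set gre1 up

        # tunnel-internal addressing plus a route for the remote range
        ip addr add 172.16.0.1/30 dev gre1
        ip route add 192.168.50.0/24 dev gre1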



  • High virtual memory usage in OpenVZ?

    - by freedrull
    We're having a lot of memory problems on a new OpenVZ box. It is supposed to have 1 GB of memory; I'm not sure how much of that is burstable versus guaranteed memory. Programs in general seem to take up more virtual memory than they do on my box at home and on our other OpenVZ box. I wrote this simple C program:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            char *thingy = malloc(500);  /* allocate 500 bytes */
            getchar();                   /* pause so memory usage can be inspected */
            return 0;
        }

    So it simply allocates 500 bytes and then returns. I ran the program on three computers. On my home machine and our other OpenVZ box it shows about 1 KB of virtual memory being used. On the new, problematic machine it's about 3 KB. I know this is just virtual memory and not resident memory, but why is this machine allocating so much virtual memory? Are there some OpenVZ memory settings I need to adjust? I tried changing the stack size with ulimit -s 256 and restarting some daemons, but I still saw the same results. I'm doing all of my monitoring with htop; is that even a good program to use with an OpenVZ VPS? I've read I should be parsing the output of /proc/user_beancounters instead, or something.
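
    Inside an OpenVZ container, the numbers worth watching are the beancounters rather than htop's virtual-size column: a "held" value near its barrier/limit, or a non-zero failcnt, is the real sign of memory pressure. A sketch of the usual check (assuming the host kernel exposes this file inside the container):

        # failcnt > 0 means the container hit a limit at least once
        cat /proc/user_beancounters

        # pick out the rows that matter for memory
        egrep 'privvmpages|physpages|failcnt' /proc/user_beancounters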


  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something which can employ pattern matching in sequential order to find the appropriate back-end server. For example:

        Server 1: *.dev.mysite.com
        Server 2: *.stage.mysite.com
        Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
        Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration; nginx can, as sketched below. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems poorly suited to the partly performance-oriented task. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired 'hands-free' configuration. But so is frequently having to adjust the gateway for all production services and risking a network-wide outage... and so is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it / would do it best? And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in apache2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it, the event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in the official/stable package channels of various distros.
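
    Of the three candidates, nginx can express this without per-vhost stanzas: server_name accepts wildcards, exact names beat wildcards, and longer wildcards beat shorter ones, which reproduces the sequential matching above. A sketch (these blocks live inside the http context; upstream hostnames and ports are placeholders):

        server { listen 80; server_name *.dev.mysite.com;
                 location / { proxy_pass http://server1.internal:8080; } }

        server { listen 80; server_name *.stage.mysite.com;
                 location / { proxy_pass http://server2.internal:8080; } }

        server { listen 80; server_name mysite.com dev.mysite.com stage.mysite.com *.mysite.com;
                 location / { proxy_pass http://server3.internal:8080; } }

        # catch-all for everything else
        server { listen 80 default_server; server_name _;
                 location / { proxy_pass http://server4.internal:8080; } }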


  • How do I fix a corrupt calendar cache?

    - by Blacklight Shining
    I was tailing /var/log/system.log and noticed a sudden wall of text. Looking closer, I saw it was an error CalendarAgent got while trying to save something:

        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: CoreData: error: (11) Fatal error.  The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted.  SQLite error code:11, 'database disk image is malformed'
        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: Core Data: annotation: -executeRequest: encountered exception = Fatal error.  The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted.  SQLite error code:11, 'database disk image is malformed' with userInfo = {
            NSFilePath = "/Users/blackl/Library/Calendars/Calendar Cache";
            NSSQLiteErrorDomain = 11;
        }
        2 messages repeated several times
        Nov 18 11:42:49 rainbow-dash.local CalendarAgent[12321]: [com.apple.calendar.store.log.subscription] [WARNING: CalSubscriptionSession :: persistError :: save failed]

    This entire sequence is repeated many times throughout the log. file said the file in question was a SQLite 3.x database, so I did a bit of searching and came up with a way to check those:

        blackl% cp -i ~/Library/Calendars/Calendar\ Cache /tmp
        blackl% sqlite3 /tmp/Calendar\ Cache
        SQLite version 3.7.12 2012-04-03 19:43:07
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> pragma integrity_check ;
        *** in database main ***
        Main freelist: Bad ptr map entry key=863 expected=(2,0) got=(5,21)
        On page 21 at right child: 2nd reference to page 863

    This is followed by a few dozen lines like these:

        rowid <number> missing from index <name>

    and then:

        wrong # of entries in index <name>

    I'm at a bit of a loss as to what to do now; I couldn't find anything on how to fix the errors that I found. Also, it would probably be a good idea to disable CalendarAgent so it doesn't try to use the database while it's being fixed (that's why I copied it to /tmp before running sqlite3 on it). How do I disable CalendarAgent and fix its cache?
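
    SQLite's usual escape hatch for a corrupt file is to dump whatever is still readable and rebuild a fresh database from the dump, and on OS X the agent can be held off via launchd while you work. A sketch (the plist path is the usual 10.8-era location; treat it as an assumption for your OS version):

        # stop CalendarAgent so it releases the file
        launchctl unload /System/Library/LaunchAgents/com.apple.CalendarAgent.plist

        # rebuild from whatever the dump can salvage
        cd ~/Library/Calendars
        sqlite3 "Calendar Cache" .dump | sqlite3 "Calendar Cache.rebuilt"
        mv "Calendar Cache" "Calendar Cache.corrupt"
        mv "Calendar Cache.rebuilt" "Calendar Cache"

        # let the agent back in
        launchctl load /System/Library/LaunchAgents/com.apple.CalendarAgent.plist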


  • IPv6 port 445 does not accept requests from a global-scope address

    - by blacktea
    I want to scan port 445 on Windows Server 2003, but my scanner only has one type of IPv6 address, which is global rather than link-local. When I do this, I find that port 445 does not appear open, even though the command "netstat -an" confirms that port 445 is listening. Eventually I found this confusing phenomenon: 1. When I set a link-local address in my scanner, scanning port 445 works. 2. When I set only a global address in my scanner, it does not work. This means that if a host with a link-local address uses a socket to send a SYN packet to port 445 on Server 2003, it receives an ACK packet, but with a global address it receives an RST packet. Thus, I can't scan port 445 on Server 2003 from a global address, and I need to know why. Can anybody help? I used netsh firewall to check the exceptions and netsh interface ipv6 to turn off the firewall on the specific interface, and I still can't establish a connection with port 445. Do you have any ideas about this?
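
    The described behaviour (link-local source accepted, global source reset) is exactly what a firewall scope restriction produces: the File and Printer Sharing exception defaults to the local subnet, and a global-scope source address falls outside it. A sketch of checking and widening the scope on Server 2003, assuming the built-in Windows Firewall is the filter in play:

        rem Show current exceptions and their scopes
        netsh firewall show portopening

        rem Open 445 to all source addresses instead of LocalSubnet only
        netsh firewall set portopening protocol = TCP port = 445 name = SMB mode = ENABLE scope = ALL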


  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with restoring sites/site collections using stsadm (tasks generated from our workflows were not restored), we've taken a different route for backup/restore. We are planning a major customization to our SP site and want to take a backup so we can roll back in case the install fails. In our system testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points at SharePoint, and the SharePoint databases in SQL (using SQL Server to do the DB backups). We have custom event handlers and workflows built with Visual Studio, and we deploy the DLLs to the GAC as version 2 (signed and versioned in Visual Studio). So after deployment the GAC contains two versions of the workflows: version 1 and version 2. During the deploy we use stsadm features to install/activate the WFs. We also go to each library and add the new version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 WFs as active - perfect so far. When we've completed the install, we then simulate a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear its cache of these version 2 WF DLLs), restore the 12-hive and virtual directory folders, then restore the SQL DBs. This is all just as manual as it reads - no stsadm here. All seems to work after our restore; it appears the restore was successful, and the modifications we made to column names, data, etc. during the install are all reverted to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12 hive indicate the WF is still trying to use version 2 of the DLL (a System.IO file-not-found error). We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF DLLs are still being referenced even though we restored all the folders and DBs of SharePoint? Thanks, Kevin
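
    One piece a file-and-database restore cannot roll back is assembly binding: if a restored web.config (or a publisher policy assembly left in the GAC) redirects the version 1 workflow assembly to 2.0.0.0, SharePoint will keep asking the GAC for the uninstalled version 2 DLL. A sketch of where to look, assuming the SDK's gacutil is available; the assembly name is a placeholder for your workflow DLL:

        rem List every version of the workflow assembly still registered
        gacutil /l MyCompany.Workflows

        rem Then search each web application's web.config for a redirect like:
        rem   <dependentAssembly>
        rem     <assemblyIdentity name="MyCompany.Workflows" publicKeyToken="..." />
        rem     <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
        rem   </dependentAssembly>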


  • Troubleshooting iptables and configuring it to drop the priority of long-term connections

    - by intuited
    I'm somewhat familiar with the general concepts of iptables, and would like to learn it in more detail; I'm hoping that my learning experience can also be useful to others. The situation: I'm running dd-wrt on my router. Despite its purported QoS skills, I'm still seeing connection latency shoot up hugely whenever there's an ongoing HTTP connection, e.g. some large download. Under such conditions it can take 10 seconds or more to load a basic webpage; sometimes the connections are dropped entirely. I've tried adjusting the parameters, dropping the allotted bandwidth for upload and download to well under my limit, but nothing seems to work. dd-wrt is configured to use HTB as the QoS algorithm; HFSC, although presented as an option, seems to cause the router to crash, and is rumoured to not actually work on any Linux system. I'd like to be able to troubleshoot this issue and hopefully improve the settings that dd-wrt is using, but I'm finding the learning curve a bit overwhelming. For starters, I am not sure what HTB actually specifies: is it a set of iptables commands, or do some of those commands specify how HTB is to be used? I would like it to prioritize based on protocol the way it's already supposed to, and in addition I'd like it to drop the priority of connections which have a high total byte count, say over 400KB. Also, tips on utilities that can be run under dd-wrt to get more info on what's going on in there are appreciated. I've tried to get iftop to work, but there were issues running curses. I'm leaning towards replacing dd-wrt with OpenWrt; comments on this strategy are also welcome. I suspect that I would be well advised to get a second router as a stand-in before trying that. It may be worth noting that my total bandwidth is pretty limited (256 kbit/s).
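
    On the "drop the priority of connections over 400KB" part specifically: netfilter's connbytes match can tag a connection once it crosses a byte threshold, and the QoS tree can then file marked packets into a bulk class. A minimal sketch, assuming the dd-wrt build includes the connbytes match and that the mark value and HTB classid match classes your tree actually defines:

        # mark any connection that has moved more than ~400 KB in either direction
        iptables -t mangle -A FORWARD -m connbytes --connbytes 409600: \
                 --connbytes-dir both --connbytes-mode bytes \
                 -j MARK --set-mark 3

        # hand marked packets to a low-priority HTB class (classid is a placeholder)
        tc filter add dev eth0 parent 1:0 protocol ip handle 3 fw flowid 1:30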


  • AWS Application - Subscription Payments Scheduled but Not Initializing

    - by nicorellius
    I briefly browsed the AWS forums, but those are not nearly as easy to use and efficient as Stack Exchange derived ones, so here I am... The company I work for has an app on the AWS cloud. In general the app works well. However, since it is new, we haven't had many customers, and the ones who signed up are now coming to the point where their subscriptions to our service should be renewed. In comes my question. When I query the database for the payment status of certain subscriptions using the console, I get what I would expect:

        Instrument XYZ is on a Monthly subscription (subscription=xxxyyyzz).
        The subscription is active with an expiration date of 19 Apr 2010 10:43 Z
        The payment token will expire on 1 Dec 2012 12:00 Z
        There are xy runs remaining.
        The subscription fee of $XX.xx will be charged on 17 Apr 2010 10:43 Z

    OK, this is great. According to this, I would expect the next payment to be initiated on 17 Apr. The reason I am asking is that for a different user last month I got this same output, but the payment never went through, i.e. it was not initiated through Amazon Payments. The user didn't see the payment go through and neither did we. It should be noted that the initial payments were received. The sign-up process works, and if the user goes to "pay" from within our app, they are directed to https://payments.amazon.com to make their payment. Any ideas?


  • Port forwarding to put my web server on the Internet

    - by Chadworthington
    I went to http://canyouseeme.org/ to check my external IP address. Regardless of what port I enter, it tells me that the port is blocked. I have a Linksys router with basically the default settings, with the exception that I have WEP encryption set up and have forwarded a few ports, including 80 and 69. I forwarded them to the 192.x.x.103 IP address of the PC which is running IIS. That PC runs Symantec Endpoint Protection, which I right-clicked in the tray to disable. These steps used to make my PC visible so I could host my own web site in IIS on port 80, or some other port, like 69. Yet the open-port tool cannot see my IP when it checks either port, and when I navigate to http://my external ip/ I get "page can't be displayed". At first I was thinking that maybe Comcast is blocking port 80, but 69 doesn't work either. I do not see any other blocking set up in my router and, as I mentioned, I went with the defaults except where discussed. This is a corporate PC, and Symantec Endpoint Protection is new to it (this previously worked on the same PC with Symantec Protection Agent), but I thought that disabling it from the tray would effectively neutralize it. I do not have the rights to kill the program itself. Any suggestions on what else to try to make my PC externally visible?
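
    Before blaming the router, it's worth confirming that IIS is actually listening on the forwarded ports and answers from the LAN side; note also that residential ISPs (Comcast included) commonly block inbound port 80 outright, though that alone wouldn't explain port 69 failing too. A sketch of the local checks (the address is a placeholder for the IIS PC's LAN IP):

        rem On the IIS machine: is anything listening on the forwarded ports?
        netstat -ano | findstr ":80"
        netstat -ano | findstr ":69"

        rem From another LAN machine: does the site answer locally?
        telnet <lan-ip-of-the-iis-pc> 80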


  • Acer Aspire One is falling apart on Ubuntu

    - by Narcolapser
    Question: how do I get my netbook to detect and install hardware? Info: I have an Acer Aspire One; it came with Windows XP and I loaded it with Win7. I decided I wanted to change to Ubuntu, so I tried Ubuntu Netbook Remix, which failed horribly, and after attempting three or so other OSes I ended up with Ubuntu Desktop 9.10. That worked fine for a while, but there were some minor issues, so I asked a question about it here and decided to change my OS again. This last weekend I tried Mandriva, like the guy in my other question suggested - no success. When my netbook lost the ability to use its touchpad, I didn't think much of it; I just assumed it must be a driver issue or something. But when Mandriva failed, and I also tried Damn Small Linux and Debian, which both failed too, I decided to switch back to Ubuntu Desktop (somewhere in here my keyboard also stopped working for one attempt). But first I gave the Netbook Remix one more try. It worked this time, with the exception that it didn't have any networking. I thought it was a driver issue again and finished the weekend with Ubuntu Desktop 9.10 again. But now things get really crazy: it doesn't know it has a wireless card or an ethernet card. It doesn't know my phone is connected trying to provide wireless broadband either. I'm clueless about what the problem could be, and with only a minimal amount of experience with Ubuntu I can't navigate the entire interface with only my keyboard (it doesn't detect a USB mouse when I plug it in, though it had when I installed it; in fact, the network interfaces were working just fine when I live-booted Ubuntu to install it). Even so, I don't know where to go or what to do to make it recognize its hardware. I'm in a dire situation; any help is welcome.
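
    To answer the literal question - how to see what hardware the system detects - a handful of read-only commands in a terminal (or a live session) will show whether the cards have vanished at the PCI/USB level or are present but driverless; all of these ship with Ubuntu:

        sudo lspci -nn         # PCI devices: the wireless/ethernet cards should appear here
        lsusb                  # USB devices: the mouse or phone should appear here
        dmesg | tail -n 50     # recent kernel messages, e.g. firmware load failures
        sudo lshw -C network   # driver and state for each network interface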


  • Apache2, Tomcat6, and proxy redirects

    - by Randal Hale
    So here is my question - go easy and slow. I'm a GIS consultant and general hack with Linux. I inherited this volunteer job essentially because I knew more than the rest of the team - or the rest of the team isn't as stubborn as I am... With that said, a number of people had been mucking around on the server before I got involved, so I've been cleaning up a lot of things. The domain names have been changed to protect the innocent. I have a server running Apache2 (port 80) and Tomcat6 (8080) on Ubuntu Server 10.4. There is a virtual host on Apache2 called "Runner" (the domain is runner.org). I have mod_proxy loaded. I am trying to redirect everyone that visits runner.org to http://some.ip.address:8080/openrunner-webapp/ So far I've gotten runner.org assigned to the Apache2 server. Someone set up a redirect in the httpd.conf file, but I believe it needs to go into the virtual host. I tried setting the redirect in the virtual host as:

        ProxyPass / http://localhost:8080/openrunner-webapp

    All that does is show me the root of the Apache webserver. Anyway, I'm stuck.
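
    The redirect does belong inside the runner.org virtual host, and it wants a matching ProxyPassReverse so redirects coming back from Tomcat get rewritten too. A sketch of the vhost, assuming mod_proxy and mod_proxy_http are both enabled (a2enmod proxy proxy_http) and using the target URL from the question:

        <VirtualHost *:80>
            ServerName runner.org
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / http://localhost:8080/openrunner-webapp/
            ProxyPassReverse / http://localhost:8080/openrunner-webapp/
        </VirtualHost>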


  • Windows Update stuck, Fix It stuck, and so is KB947821. What should I do?

    - by Jim Thio
    I installed a new computer. After I installed it, I updated everything and let the computer run for days. Then, I don't know what my daughter did, but the computer stopped responding and Windows Update no longer works. People said to run Fix It. I ran Fix It, and after that the problem still persists. The problem changed, though: before, there was an error code; now Windows Update simply says "updating" and never finishes. So I downloaded KB947821 (the System Update Readiness Tool). It's been 3 hours and it's still installing. It looks like it's hitting the firewall or something; I don't see Windows Update on the firewall exception list. However, I've never heard that this is an issue. The firewall only protects against incoming connections, not outgoing ones, right? Or what am I missing? What should I do?
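
    Two notes that may help: Windows Firewall only filters inbound connections by default, so it would not block Windows Update's outgoing traffic; and when the update client hangs indefinitely, the usual blunt fix is to stop the service, rename its state store, and let it rebuild. A sketch (run from an elevated command prompt; the folder is recreated automatically):

        net stop wuauserv
        net stop bits
        ren %windir%\SoftwareDistribution SoftwareDistribution.old
        net start bits
        net start wuauserv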


  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side-by-side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers to avoid conflict, etc. Testing started well: MX7 over regular HTTP and SSL still worked, and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, FireFox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this:

        Secure Connection Failed
        An error occurred during a connection to localhost:9101.
        Peer reports it experienced an internal error.
        (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

        <service class="jrun.servlet.http.SSLService" name="SSLService">
          <attribute name="enabled">true</attribute>
          <attribute name="interface">*</attribute>
          <attribute name="port">9101</attribute>
          <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
          <attribute name="keyStorePassword">*deleted*</attribute>
          <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
          <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
          <attribute name="deactivated">false</attribute>
          <attribute name="bindAddress">*</attribute>
          <attribute name="clientAuth">false</attribute>
        </service>

    Anyone here know of any issues setting up SSL with CF9? Anyone had success with it? Dave
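
    One classic cause of an "internal error" alert at handshake time is a key algorithm mismatch: older keytool versions generate DSA keys by default, while the server side may expect RSA. A sketch of regenerating the keystore with an explicit RSA key (the alias and filename follow the jrun.xml above; the validity period is a placeholder):

        rem Regenerate the keystore where jrun.xml's keyStore attribute points
        keytool -genkey -alias mykey -keyalg RSA -keysize 2048 -validity 365 -keystore mykey

        rem Verify what is actually in the store
        keytool -list -v -keystore mykey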


  • Ubuntu 13.10 on Acer V5-472 with HD 4000

    - by Hyperboreus
    I have an Acer V5-472 with the Intel HD 4000 graphics chipset and a built-in 1366*768 display. I have installed Ubuntu 13.10 amd64 in legacy boot mode with an external monitor. The installation showed no problems; I can boot from the HDD and log into my system. However, the internal display doesn't work and I have to use an external monitor. I have tried the following (found in other threads) to no avail: setting the grub option "acpi_backlight=vendor" or "acpi_osi=Linux" or both; installing the Intel HD drivers for Linux from their homepage; running in circles, screaming and shouting. The internal display lights up (I can change the brightness with Fn-Left and Fn-Right), but that's all. When I boot, I get a purple splash screen, and from then on only the external monitor works. I read somewhere that this might be a problem with kernel 3.11? Does anybody have Ubuntu running on an Acer V5-472? Should I change Ubuntu version or use 32-bit instead? In general, how can I get the internal display to work? Edit: the Settings > Displays dialogue shows the internal display correctly, with its supported resolution of 1366*768.


  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering slow-downs, and I'm not surprised (it's around 6 years old). Here's what I've verified:

        - They are not very frequent (only a couple of times a day).
        - When they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow.
        - Even as it is happening, CPU usage stays low.
        - It happens to applications such as a text editor, Firefox, and Skype.
        - It never happens to some applications (such as games) which I use for hours under heavy CPU load.

    Also of note: the graphics card and PSU are new (around a year old). Though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows. This HDD has been through many partitioning schemes and a few heavy operations (such as moving around 200GB of data). Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other, less likely possibilities (such as RAM, software, or the PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue; that is not what I'm looking for here. My main question is: what tests or benchmarks can I run to verify I have a problematic hard drive? I don't need to solve this problem; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since the new hard drive won't have any of the software I use daily). Running on Windows/Linux.
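
    The two checks that most directly implicate or clear a drive are its SMART self-assessment and a raw sequential-read benchmark, both read-only. A sketch for the Linux side (smartmontools may need installing; the device name is a placeholder):

        sudo smartctl -H /dev/sda        # overall health verdict
        sudo smartctl -a /dev/sda        # watch Reallocated_Sector_Ct and pending sectors
        sudo smartctl -t short /dev/sda  # run the drive's own short self-test
        sudo hdparm -tT /dev/sda         # buffered vs. cached read throughput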


  • Nginx's speed, and how to replicate it

    - by Mediocre Gopher
    I'm interested in this from an academic standpoint rather than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google response for this is this thread, but it merely links to a cryptic slideshow and a general overview of different IO strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple Erlang server to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given Erlang's lightweight processes and underlying AIO structure, I thought it would compete, yet nginx still wins out by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought was keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great a difference. Am I wrong? Or is there something else that I'm missing?
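
    Two concrete differences from a process-per-request design are visible right in nginx's stock configuration: a small fixed pool of workers, each running an epoll/kqueue event loop over thousands of sockets, and zero-copy file transmission via sendfile(2), so file bytes never pass through userspace at all. The relevant stock directives, shown for orientation rather than as tuning advice:

        worker_processes  4;           # a few long-lived workers, not one per request
        events {
            use epoll;                 # readiness notification across many sockets
            worker_connections 1024;   # per worker
        }
        http {
            sendfile   on;             # kernel copies file -> socket directly
            tcp_nopush on;             # coalesce headers with the first file chunk
        }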


  • iRedMail home setup - use different SMTP relay for different destination domains

    - by John
    Hello helpful server folks, I'm messing with iRedMail. I've mostly been successful; I think I have an SMTP problem. I have changed RoundCube (webmail) to use BrightHouse's (my ISP's) SMTP server for outgoing mail. It works fine: I click send and, poof, it arrives at Gmail. I can reply from Gmail to my email server, and it works. It took 10 hours for the email to show up, which is a different problem, I think, but it does work. But when I send from my server TO my own server, my ISP's postmaster account sends me a cryptic blurb. I just got off the phone with them, and they say it "should work" and that they can't reach my POP3 server. (POP3, POP3S, IMAP, and IMAPS are all open on my router and forwarded to the server; I'm not sure what I need, I'm just covering my bases...) POP3 and/or IMAP as external interfaces are just formalities; I really just want webmail to work. RoundCube only takes one SMTP server in its configs. How can I configure Postfix to relay/forward emails to my ISP's SMTP server while itself processing messages bound for my own domain, since my ISP won't let me "bounce" my emails off of it? Maybe I'm vastly misunderstanding how email works in general: to receive mail, I should only need port 25 (SMTP) open to the internet, correct? Should I be concerned about some authentication failure from the outside to my relay? (My relay requires user/pass to use; my ISP's requires none.)
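
    A Postfix detail that may settle this: relayhost only applies to mail Postfix does not consider its own, so messages addressed to the server's hosted domains are delivered locally regardless, while everything else follows relayhost - no second SMTP server in RoundCube is needed. A sketch (the ISP hostname is a placeholder; transport_maps is only needed for finer per-domain routing):

        # /etc/postfix/main.cf - default next hop for all non-local mail
        relayhost = [smtp.isp.example]

        # optional: per-destination-domain overrides
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport (example entry)
        # gmail.com    smtp:[smtp.isp.example]

        # rebuild the map and reload
        # postmap /etc/postfix/transport && postfix reload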


  • Cannot connect to SQL Server Express from SQL Server Standard

    - by Jackson Sunuwar
    ...like my title says, I cannot connect to my instance of SQL Server Express from SQL Server Standard. I have tried disabling the firewall and checked that sqlbrowser is started, but for some reason I cannot connect to my database, called server_name\sqlexpress. I have a virtual machine with a full-scale MS SQL Server 2008 R2 running on it, and I have several other VMs running SQL Server Express. They run fine and I can connect to them using Express, but when I try to access them from SQL Server I get this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    Digging deeper into the error, I found this:

        Error Number: -1
        Severity: 20
        State: 0

    and finally this:

        Program Location:
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject)
        at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
        at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
        at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
        at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
        at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
        at Microsoft.SqlServer.Management.SqlStudio.Explorer.ObjectExplorerService.ValidateConnection(UIConnectionInfo ci, IServerType server)
        at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()

    The firewall is turned off on the VM that's running mssqlserver. I turned off the firewall on one of the VMs running SQL Express, but I still get the error... can someone please help? Thank you.
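
    Error 26 specifically means the client never got an answer from the SQL Browser service, which resolves named instances over UDP 1434; TCP connectivity to the instance itself is a later step. A sketch of the usual checks on each VM hosting a named instance (2008-era syntax; note Express listens on a dynamic TCP port by default, visible in SQL Server Configuration Manager, where the TCP/IP protocol must also be enabled):

        rem Is the Browser actually running?
        sc query sqlbrowser

        rem Let instance resolution and the instance's TCP port through the firewall
        rem (localport for the second rule is a placeholder - use the port shown in Configuration Manager)
        netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434
        netsh advfirewall firewall add rule name="SQL instance TCP port" dir=in action=allow protocol=TCP localport=49172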

