Search Results

Search found 709 results on 29 pages for 'matthew carriere'.

Page 14 of 29

  • On freenode, how do I keep NickServ from messaging me when I log in?

    - by Matthew
    I have Empathy set to run whenever I log in to Ubuntu. As soon as Empathy connects to freenode, I get these messages: "This nickname is registered. Please choose a different nickname, or identify via /msg NickServ identify <password>." followed by "You are now identified for [my name]." This is pretty annoying, since Empathy handles identification for me anyway. Is there any way to keep this from happening?
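    A heavily hedged sketch of one common workaround (the settings labels are from memory, and freenode's forwarding of the server password to NickServ is assumed): supply the NickServ password as the IRC *server* password, so identification happens during connection, before the greeting fires.

        # Empathy account settings for the freenode account (illustrative values):
        Server:   chat.freenode.net
        Port:     6667
        Password: <your NickServ password>   # sent as the server password;
                                             # freenode hands it to NickServ at connect time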

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine. However, if I stop and then restart the varnish daemon for whatever reason, it doesn't always start up properly, and no errors go into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then 7 times out of 10 it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
            -T 127.0.0.1:6082 \
            -s malloc,1GB \
            -f /home/deploy/mysite.vcl \
            -u deploy \
            -g deploy \
            -p obj_workspace=4096 \
            -p sess_workspace=262144 \
            -p listen_depth=2048 \
            -p overflow_max=2000 \
            -p ping_interval=2 \
            -p log_hashstring=off \
            -h classic,5000009 \
            -p thread_pool_max=1000 \
            -p lru_interval=60 \
            -p esi_syntax=0x00000003 \
            -p sess_timeout=10 \
            -p thread_pools=1 \
            -p thread_pool_min=100 \
            -p shm_workspace=32768 \
            -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin paths
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such an inconsistent behaviour.
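    One way to chase the silent exits, sketched on the assumption that the stock Varnish 2.x tools are available (paths are taken from the question):

        # Compile-check the VCL without starting the daemon; errors go to stderr
        varnishd -C -f /home/deploy/mysite.vcl > /dev/null

        # Run the daemon in the foreground with debug output on the terminal
        varnishd -d -a :80 -T 127.0.0.1:6082 -s malloc,1GB -f /home/deploy/mysite.vcl

        # In a second terminal, watch the shared-memory log while the child starts
        varnishlog

    If the management process starts but the child dies, the shared-memory log will often capture the failure even when nothing reaches syslog.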

    Read the article

  • How can I use an SSH tunnel for all traffic from a single application, without knowing the ports used?

    - by Matthew Read
    I have an application that opens connections on dozens of ports, and doesn't provide documentation about which ports it uses. I could use Wireshark or something to capture the traffic and export the ports from that, but I think it should be simpler than that. (And I'm not sure I would be able to cover all use cases and ensure the app used every single port it can ever use.) So I'm looking for a way to just say "forward all traffic from this application" (bonus points for all traffic from child processes as well) without needing to worry about specific ports. I'm sure there must be a way, but I couldn't hit on the right keywords while searching Google. How can I do this?
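    One widely used approach, sketched on the assumption that OpenSSH and proxychains are installed (host and application names are placeholders): open a dynamic SOCKS tunnel and "socksify" the application, so individual ports never need to be enumerated.

        # 1. Dynamic application-level forwarding: a SOCKS proxy on localhost:1080
        ssh -fND 1080 user@tunnel-host.example.com

        # 2. Point proxychains at it, e.g. in /etc/proxychains.conf:
        #        socks5 127.0.0.1 1080

        # 3. Launch the app through the proxy; dynamically linked child
        #    processes inherit the LD_PRELOAD wrapper and are tunnelled too
        proxychains someapp

    One caveat: statically linked binaries escape LD_PRELOAD-based socksifiers, so this is not airtight for every program.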

    Read the article

  • How do I get details about how Network and Sharing Center detected a problem?

    - by Matthew Scouten
    When I open Network and Sharing Center in Windows 7, it puts a red X between the network and the internet, but when I run Troubleshoot Problems it tells me that it does not know what the problem is. Is there any way to tell what test Windows used to place that red X, and how it failed? The system obviously knows something that it is not telling me. Knowing the details would help me solve this problem.
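    For what it's worth, the red X in Windows 7 is driven by the Network Connectivity Status Indicator (NCSI), whose probes Microsoft documents; checking them by hand can reveal which one is failing (a sketch from a command prompt):

        rem NCSI's documented probes:
        rem   1. HTTP GET http://www.msftncsi.com/ncsi.txt  (expects the text "Microsoft NCSI")
        rem   2. DNS lookup of dns.msftncsi.com             (expects 131.107.255.255)
        nslookup dns.msftncsi.com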

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking. Background: I am currently wrapping up a development contract, and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app, and the future ones they wish to install, are all MVC based, which intercepts and re-writes URLs. I assume this takes care of itself if I can just successfully get my app into a subdomain to begin with. What I have tried:

    - Creating a new site on the server in its own app pool
    - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
    - Setting the binding for that site to MYAPP
    - Setting the binding for that site to MYAPP.SERVERNAME
    - Setting the binding for that site to MYAPP.SERVERNAME.COM
    - Setting the binding for that site to MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here? Thanks, Matt
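    A binding alone does not create the host name: DNS (or at least a hosts-file entry on the client) has to resolve MYAPP.SERVERNAME.COMPANYNAME.COM to this server before IIS ever sees the request. A sketch of the two halves, with hypothetical names and paths:

        rem 1. Have the DNS admin add a record along these lines:
        rem      MYAPP.SERVERNAME.COMPANYNAME.COM.  IN  CNAME  SERVERNAME.COMPANYNAME.COM.
        rem 2. Then bind the new site to that exact host name:
        %windir%\system32\inetsrv\appcmd add site /name:"MyApp" ^
            /bindings:http/*:80:myapp.servername.companyname.com ^
            /physicalPath:"C:\inetpub\myapp"

    With a host-header binding in place, IIS routes requests for that name to the right site, and MVC's URL rewriting works as it would anywhere else.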

    Read the article

  • Good process/software for organizing photos past/present

    - by Matthew
    So I have tons of photos taken all the time. I have a lot from years past that I never went through (meaning deleting duplicates, etc.). I've got a new PC with Windows 7, and I'm wondering what a good process is to organize those photos. They're in folders that have really no meaning (it used to be people would put them in a folder wherever, even the desktop or somewhere else, not just the My Pictures folder). I'm going to keep all pictures in the "My Pictures" folder from now on. I've used Picasa from Google, and it works great. Is this the recommended free software for this? What process do I use to move the old pictures over into new "organized" folders? Lately in Picasa, when I import off my camera card, I select the option that names the folder after the date the photos were taken. Is this advised? Just give me ideas on how to stay organized with photos. Should I tag them also? Should I rename the file names? Keep in mind I have over 16,000 photos to go through, so it can't be anything too thorough.

    Read the article

  • Is it dangerous to use both Sky Drive and Dropbox?

    - by Matthew
    I'd like to experiment with Sky Drive, but keep using my Dropbox account unless I decide to switch. This answer gives instructions for how to set up both at the same time, but I'm a little worried about data integrity. Is there any danger involved here? Will Sky Drive and Dropbox fight each other? Note that I am using Sky Drive/Dropbox on multiple computers, so they will be writing data as well as reading it. Is this safe? Edit: I can use them with different folders if necessary, but I'm particularly curious what would happen if they sync from the same folder.

    Read the article

  • Delay init from starting a service for a period of time?

    - by Matthew
    I am trying to get a rudimentary NFS server up and running. Right now the server is configured as an NFS server as a workaround for a vendor issue: the vendor does not support direct-attached clustered storage, which we are trying to get them to resolve. The vendor software is Splunk. The Splunk feature we are using requires files to be located on shared storage (which for us is /mnt/nfs until they support a real clustered filesystem). Currently the server has a GFS2 filesystem mounted at bootup (it is the only server with the filesystem actively mounted, so there should be no problems with locking). We went with GFS2 so that switching over to a clustered filesystem is easy should the vendor begin supporting it. NFS is configured to mount that filesystem at /mnt/nfs, which the Splunk installation then sees. Splunk is configured to find its configuration files in /mnt/nfs. However, I am running into a problem where the Splunk daemon starts before NFS is finished loading, and because it sees nothing at /mnt/nfs it starts creating files there; then, when those files disappear (NFS finishes mounting the share), Splunk craps out. Splunk is set to run at runlevel 3, S90. NFS is set at runlevels 2-5, S60. Is there any way to delay the startup of the splunk process further?
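    A minimal sketch of one common workaround: guard the init script so Splunk waits for the mount before starting (this assumes the mountpoint utility from util-linux and edits a hypothetical /etc/init.d/splunk):

        # At the top of the start) stanza: wait up to ~60s for /mnt/nfs
        tries=0
        until mountpoint -q /mnt/nfs; do
            tries=$((tries + 1))
            [ "$tries" -ge 12 ] && { echo "/mnt/nfs never mounted" >&2; exit 1; }
            sleep 5
        done

    Since init runs S-scripts in numeric order, it is also worth confirming that the NFS *client* mount script (netfs on Red Hat-style systems, if memory serves) really sits below S90 in runlevel 3, not just the NFS server script.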

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know that with RAID 5 the best practice is 6 disks per array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: several RAID 1s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The concern is more whether we want one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0s that we then stripe across using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.
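    For comparing the two layouts concretely, here is a sketch with mdadm and LVM (device names are hypothetical; holding a disk or two back as hot spares may also be worth considering):

        # Option A: one 14-disk RAID 10
        mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]

        # Option B: seven RAID 1 pairs, striped together with LVM
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        # ... repeat for md2..md7 with the remaining pairs ...
        pvcreate /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7
        vgcreate warehouse /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5 /dev/md6 /dev/md7
        lvcreate -i 7 -I 256 -l 100%FREE -n data warehouse   # 7 stripes, 256 KB stripe size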

    Read the article

  • Testing performance from around the world - how do I get a linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed into 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is and, importantly, what difference it makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors'. This is easy enough using Amazon, however Amazon only have 7 data centres. We would like to know, for example, how we perform in locations all over the world, such as South Africa, Australia, China, Peru, etc. Does anyone know of a service where we could piggyback on their global infrastructure and run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt
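    Whichever provider ends up supplying the remote shells, the per-location probe itself can stay tiny; a sketch using curl (URLs are placeholders):

        #!/bin/sh
        # Crude latency/throughput probe, run from each remote location
        for url in https://ours.example.com/ping https://competitor.example.com/ping; do
            curl -o /dev/null -s -w "$url dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n" "$url"
        done

    For raw TCP round-trip numbers rather than HTTP timings, the same loop could shell out to ping or hping against the socket endpoints instead.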

    Read the article

  • PC powers up but there is no display

    - by Matthew
    I built a computer with standard components I bought from Newegg about three years ago. It ran great for 2 years and has sat powered down for the last year. I tried to power it up today and the display was blank. It powers up, lights come on, drives start spinning, but there is nothing on the display. I verified that the monitor and video adapter work. I also tried the video adapter in a different slot on the motherboard with no luck. What's the next thing I should try? Is the motherboard shot? Thanks.

    Read the article

  • International Calls For Free

    Are you sick of paying high fees for making international calls? There are much better ways out there and doing your research on the best product for you will save you a ton of money. First of course... [Author: Matthew Bailey - Computers and Internet - April 21, 2010]

    Read the article

  • How do I remove the ServerSignature added by mod_fcgid?

    - by matthew
    I'm running ModSecurity and I'm using the SecServerSignature directive to customize the Server header that Apache returns. This part works fine; however, I'm also running mod_fcgid, which appends "mod_fcgid/2.3.5" to the header. Is there any way I can turn this off? Setting ServerSignature off doesn't do anything. I was able to make it go away by changing ServerTokens, but that removed the customization I had added.
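    Two details may explain what's happening here: ServerSignature only controls the footer on server-generated pages (error documents, index listings), not the Server header, which is why turning it off has no effect; and SecServerSignature works by overwriting the existing header text in place, so ModSecurity's documentation asks for ServerTokens Full to make the original string (including what mod_fcgid appends) long enough to cover. A sketch, assuming Apache 2.2 with ModSecurity 2.x:

        # ServerTokens Full makes the stock header (with mod_fcgid/2.3.5 etc.)
        # long enough for SecServerSignature to overwrite completely
        ServerTokens Full
        <IfModule mod_security2.c>
            SecServerSignature "MyCustomServer"
        </IfModule>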

    Read the article

  • A web app provider has asked for specific browser config

    - by Matthew
    They have asked us to turn off caching in our browsers. I was aghast that they would ask such a thing. I said to them:

        To avoid caching it is best practice to use:

            <meta http-equiv="pragma" content="no-cache" />
            <meta http-equiv="cache-control" content="no-cache" />

        This should work across all browsers.

    Their reply was:

        We need to refresh javascript at runtime, this will not help us - any more ideas?

    I replied:

        Unsure what you mean by "refresh javascript at runtime". If you are using Ajax, browser caching can affect the XMLHttpRequest open method. Adding these meta tags to the source has fixed this for me in the past. Browser caching only caches resources; it should have no effect on site scripting. These meta tags will bypass browser caching.

    This is a reasonable request, isn't it?
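    On the "refresh javascript at runtime" point, the usual alternative to disabling caching wholesale is cache-busting: version the script URL so a changed file is fetched as a brand-new resource. A sketch (the file name and version scheme are illustrative):

        <!-- Bump the query string whenever app.js changes -->
        <script type="text/javascript" src="/js/app.js?v=20100421"></script>

    It's also worth noting that those meta tags only apply to the HTML document itself; for scripts and other resources, real Cache-Control response headers set by the server are the reliable mechanism.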

    Read the article

  • Why won't my files push to my SFTP server?

    - by Matthew
    I'm having trouble pushing my branch to an SFTP server. I'm following the instructions here. When I push the branch, everything seems to complete successfully. I get the message "Created new branch.", and if I do "bzr push" again, it says "No new revisions to push." But when I ssh to the SFTP server to look at the directory I put my branch in, only the .bzr directory is there. None of my files are there. Does anyone have any idea why this might be?
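    If I recall Bazaar's behaviour correctly, this is expected rather than a failure: bzr push over SFTP populates only the branch data in .bzr and does not create or update a remote working tree. With shell access to the server, something along these lines (paths are placeholders) materialises the files:

        ssh user@sftp-server 'cd /path/to/branch && bzr checkout'   # create the missing working tree
        ssh user@sftp-server 'cd /path/to/branch && bzr update'     # refresh it after later pushes

    There is also a push-and-update plugin for Bazaar that automates the update step after each push.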

    Read the article

  • Terminal Server 2003 Gaining Time when Windows 7 Client Connects

    - by Matthew
    A Windows 2003 Terminal Server keeps time perfectly until a Windows 7 Home client connects. Then it gains time at a rate of several seconds per minute. The client connects through a firewall with only the RDP port open. The client runs the same apps on the terminal server that XP clients run with no issues. Using the Microsoft Terminal Server Client application copied to the W7 computer from an XPsp3 computer gives the same results. Current workaround is to sync time every 5 minutes. Any better ideas?
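    If a tighter workaround is acceptable, Server 2003's built-in time service can be pointed at an external NTP peer and forced to resync, which keeps the drift bounded regardless of which client connects (pool.ntp.org is just an example peer):

        w32tm /config /manualpeerlist:pool.ntp.org /syncfromflags:manual /update
        net stop w32time && net start w32time
        w32tm /resync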

    Read the article

  • How do I connect to my home's primary wired network through an extra (wireless) router?

    - by Matthew Patrick Cashatt
    Thanks for looking! I have set up a desktop PC in my workshop. The Cat 5 cable connects from this PC to a wireless router which is connected to my home network. The Internet connection is working just fine. However, the "wired" network this is on shows up as a different wired network than the one that the PCs inside my house are connected to. This is a problem because I would like to connect this workshop PC to various shared resources like printers, HD Homerun (cable tv card), shared drives, etc. When I go to "Network and Sharing" and attempt to find the network that the PCs inside my home are connected to, I don't see it. Any help is appreciated. Thanks!

    Read the article

  • DNS - wildcard vs. CNAME subdomains

    - by Matthew
    Alright, I have to admit I'm confused about how DNS works. I've always just added things until they worked, and now it's time to learn how it actually works. One confusing thing to me is that there are sort of two places where I can have records: I have an account with Rackspace Cloud Servers, and then there's the place where I registered the domain, but both allow me to edit DNS records. Should I do everything in both places, or is one better than the other, or am I missing the point? Subdomains confuse me too. I'd like to be able to just have a wildcard subdomain (I've done this in the past); I just don't like the idea of adding a CNAME record or an A record every time I need a new subdomain. Then I read this and it says: "The exact rules for when a wild card will match are specified in RFC 1034, but the rules are neither intuitive nor clearly specified. This has resulted in incompatible implementations and unexpected results when they are used."
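    On the "two places" point: only the records held by whichever nameservers the domain's NS delegation points to actually matter, so it's simplest to pick one provider (the registrar's DNS or Rackspace's) and manage everything there. For wildcard versus explicit records, a BIND-style sketch (the IP is a placeholder):

        ; explicit records take precedence over the wildcard
        example.com.       IN  A      203.0.113.10
        www.example.com.   IN  CNAME  example.com.
        ; the wildcard catches any subdomain with no record of its own
        *.example.com.     IN  A      203.0.113.10

    The RFC 1034 caveats in the quoted text mostly concern names that already have records of another type; for a simple wildcard plus a handful of explicit entries, it generally behaves as expected.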

    Read the article

  • running autobench (httperf)

    - by Matthew
    So I ran apt-get install httperf on my system and I can now run httperf. But how can I run autobench? I downloaded the file and unarchived it, and if I go into the directory and run autobench, it says:

        -bash: autobench: command not found

    I think it's a perl script, but if I run perl autobench, it says:

        root@example:/tmp/autobench-2.1.2# perl autobench
        Autobench configuration file not found - installing new copy in /root/.autobench.conf
        cp: cannot stat `/etc/autobench.conf': No such file or directory
        Installation complete - please rerun autobench

    Even if I run it again, it says the same thing.
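    A possible fix, assuming the unpacked tarball really does ship a sample autobench.conf (worth confirming with ls): the error shows the script trying to seed ~/.autobench.conf from /etc/autobench.conf, so put the file where it looks.

        cd /tmp/autobench-2.1.2
        ls                                  # confirm autobench.conf is in the tarball
        cp autobench.conf /etc/             # the script copies this to ~/.autobench.conf
        chmod +x autobench
        ./autobench --help                  # note the ./ - the current directory is not in $PATH

    The earlier "command not found" is just the shell refusing to search the current directory; ./autobench (or installing the script into /usr/local/bin) gets around that.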

    Read the article

  • What causes Mac OS X Permission errors?

    - by Matthew Savage
    This is out of interest rather than looking for a fix to a problem. What actually causes permissions on Mac OS X systems to become messed up? It's an easily fixed problem (i.e. there's a quick and easy fix via Disk Utility), but it's something I'd encountered a few times doing support in a Mac-reseller store without actually understanding the causes. I'd guess that part of it is due to some applications not playing nicely, but what else might be the source of this issue?

    Read the article
