Search Results

Search found 716 results on 29 pages for 'matthew lanham'.

Page 14 of 29

  • What are potential reasons a user could be connected to a home network, but not to the internet?

    - by Matthew
    I have a friend who recently started using Ubuntu, and I've been answering his questions over the internet. However, I'm stuck on this one. He bought a Linksys WPC11 wireless card and says he was able to create a network connection, but was unable to ping or use a browser. I'm not quite sure where to start in figuring this out -- what are some common causes of this sort of problem?
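    A minimal first-pass diagnostic sketch, assuming a typical Ubuntu client behind a home router (the 192.168.1.1 gateway address and the test hosts are assumptions, not from the question):

        # Does the card have an address and a default route?
        ip addr show && ip route show
        # Can we reach the router, then an outside IP, then a DNS name?
        ping -c 3 192.168.1.1
        ping -c 3 8.8.8.8
        ping -c 3 www.google.com
        # If the bare IP works but the name doesn't, it's a DNS problem:
        cat /etc/resolv.conf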

    Read the article

  • Where does HTTPS on Apache get its configuration?

    - by Matthew
    So originally my host (MediaTemple dv) has two default document roots: 1) httpdocs/ and 2) httpsdocs/. In the conf directory I changed vhosts.conf, httpd.include and others to point from httpdocs to custom folders. Now I've installed a new SSL certificate, and https://example.com goes to the default page located in httpsdocs. I'm just wondering where Apache's SSL configuration is stored. Ideas?
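    A hedged way to track down which file actually defines the SSL vhost; the search paths below are assumptions based on a typical Plesk/(mt) dv layout, so adjust for the actual install:

        # List every vhost Apache knows about, with the file and line it came from:
        apachectl -S
        # Or search the usual config locations for anything still pointing at httpsdocs:
        grep -Ril "httpsdocs" /etc/httpd/conf /etc/httpd/conf.d /var/www/vhosts/*/conf 2>/dev/null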

    Read the article

  • What Makes an Interesting Custom Logo Design

    As we all know, a custom logo design is an essential part of an effective brand identity and marketing strategy, since it acts as the visual representation of the brand or the busin... [Author: Emily Matthew - Web Design and Development - March 31, 2010]

    Read the article

  • Windows startup Powershell script not closing after Start-Process

    - by Matthew Phipps
    I've got a Powershell V2.0 startup script for my work computer (XP Professional 64-bit), as follows:

        start "C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE" -ArgumentList "/recycle"
        sleep -S 2
        start "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -ArgumentList "https://mail.google.com"
        sleep -S 2
        start "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -ArgumentList "-new-window https://www.google.com/calendar"
        sleep -S 2
        start "C:\Program Files (x86)\Skype\Phone\Skype.exe"

    The sleeps are to ensure that the windows appear on the taskbar in the correct order. I run this from a shortcut on my Quick Launch with the following Target: C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\scripts\initialize.ps1 (Yes, this is 2.0: powershell -Version 2.0 works, as does -Version 1.0, but not -Version 3.0.) Problem is, the command window stays open until the Firefox windows are closed, which is not what I want. Looking at Process Explorer when I run the script, here's what happens:

    1. powershell.exe appears under explorer.exe and the Powershell window appears (with a black background, oddly -- but it's not cmd.exe, since when I was debugging the script, error messages would appear in red).
    2. outlook.exe appears under powershell.exe and the Outlook window appears.
    3. firefox.exe appears under powershell.exe and a Firefox window appears.
    4. A second firefox.exe appears under powershell.exe and another Firefox window appears. The second Firefox process then exits, as expected, since Firefox only uses one process.
    5. skype.exe appears under powershell.exe and the Skype window appears.
    6. The powershell.exe process inexplicably sticks around, as does the Powershell window.

    If I close both Firefox windows, the powershell.exe process exits and the Powershell window closes, and the outlook.exe and skype.exe processes appear under explorer.exe as expected. I suspect this has something to do with Firefox's standard input, output and error: I wouldn't expect Outlook or Skype to ever output anything to the console, but Firefox has command-line options that allow it to do so. I've looked over my about:config's user-set values and didn't find anything suspicious. Finally, if I have a firefox.exe instance already running (started from the desktop shortcut) the problem doesn't occur (the powershell.exe process exits as it ought to). So what's going on here? I'm going to try adding -WindowStyle hidden to the shortcut next (gotta close this Firefox to test it), but I want to get to the bottom of this, if only to improve my understanding of how Windows consoles work.

    Read the article

  • Nginx Proxying to Multiple IP Addresses for CMS' Website Preview

    - by Matthew Borgman
    First-time poster, so bear with me. I'm relatively new to Nginx, but have managed to figure out what I've needed... until now. Nginx v1.0.15 is proxying to PHP-FPM v5.3.10, which is listening at http://127.0.0.1:9000. [Knock on wood] everything has been running smoothly in terms of hosting our CMS and many websites. Now, we've developed our CMS and configured Nginx such that each supported website has a preview URL (e.g. http://[WebsiteID].ourcms.com/) where the site can be, you guessed it, previewed in those situations where DNS doesn't yet resolve to our server, etc. Specifically, we use Nginx's Map module (http://wiki.nginx.org/HttpMapModule) and a regular expression in the server_name of the CMS' server{ } block to 1) look up a website's primary domain name from its preview URL and then 2) forward the request to the "matched" primary domain. The corresponding Nginx configuration:

        map $host $h {
            123.ourcms.com www.example1.com;
            456.ourcms.com www.example2.com;
            789.ourcms.com www.example3.com;
        }

    and

        server {
            listen [OurCMSIPAddress]:80;
            listen [OurCMSIPAddress]:443 ssl;
            root /var/www/ourcms.com;
            server_name ~^(.*)\.ourcms\.com$;
            ssl_certificate /etc/nginx/conf.d/ourcms.com.chained.crt;
            ssl_certificate_key /etc/nginx/conf.d/ourcms.com.key;
            location / {
                proxy_pass http://127.0.0.1/;
                proxy_set_header Host $h;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    (Note: I do realize that the regex in the server_name should be "tighter" for security reasons and match only the format of the website ID (i.e. a UUID in our case).) This configuration works for 99% of our sites... except those that have a dedicated IP address for an installed SSL certificate. A "502 Bad Gateway" is returned for these and I'm unsure as to why. This is how I think the current configuration works for any request that matches the regex (e.g. http://123.ourcms.com/): Nginx looks up the website's primary domain from the mapping and, as a result of the proxy_pass http://127.0.0.1 directive, passes the request back to Nginx itself, which, since the proxied request carries the website's primary domain name (via the proxy_set_header Host $h directive), handles the request as if it were a direct request for that hostname. Please correct me if I'm wrong in this understanding. Should I be proxying to those websites' dedicated IP addresses? I tried this, but it didn't seem to work. Is there a setting in the Proxy module that I'm missing? Thanks for the help. MB
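    A hedged way to narrow down where the 502 originates, by reproducing what the proxy_pass/Host-header pair actually sends versus a real preview request (the hostnames and the [OurCMSIPAddress] placeholder come from the question and are not real values):

        # What the proxy sends on behalf of 123.ourcms.com:
        curl -v -H "Host: www.example1.com" http://127.0.0.1/
        # The original preview request, forced at the CMS IP:
        curl -v --resolve 123.ourcms.com:80:[OurCMSIPAddress] http://123.ourcms.com/
        # If only the sites with a dedicated SSL IP fail the first test, their backend
        # server{} blocks are probably listening on that dedicated IP rather than on
        # 127.0.0.1, so the loopback proxy_pass never reaches them.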

    Read the article

  • Best way to replicate servers

    - by Matthew
    I currently have two servers, both with Linux software RAID 1 configurations. They use Heartbeat and DRBD to create a shared DRBD device that hosts an exported NFS directory. The servers run Ubuntu Server with an LXDE GUI and some IP. These servers are going to be placed on fishing vessels to act as redundant storage for IP cameras. My boss wants me to figure out the most efficient way to create these servers. We might be looking at pushing out several systems a week. Each configuration will be almost identical apart from IP addressing. What would be the best method to automate the configuration process? We are trying to cut down on the labor costs of setting these up. Imaging and preseeding are both on my mind right now.
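    A hedged sketch of the clone-and-stamp route: build one golden unit, image it, then apply the per-vessel differences with a small one-shot script. The interface file, address scheme and hostname pattern below are assumptions for illustration, not part of the question:

        #!/bin/sh
        # stamp-unit.sh <unit-id> -- run once on each cloned box
        UNIT_ID="$1"
        sed -i "s/^\( *address\).*/\1 10.0.${UNIT_ID}.10/" /etc/network/interfaces
        echo "vessel-${UNIT_ID}" > /etc/hostname
        hostname "vessel-${UNIT_ID}"
        # Clear the cloned MAC-to-interface mapping so udev re-detects the NICs:
        rm -f /etc/udev/rules.d/70-persistent-net.rules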

    Read the article

  • Control Panel as menu includes a blank item

    - by Matthew Ferreira
    When viewed as a menu attached to the Start Menu in Windows 7 Ultimate x64, the Control Panel contains a blank item. This item cannot be deleted or removed, and I also cannot create a shortcut to it; no error message is displayed, simply nothing happens. I've tried using Shell Object Editor (using Run as Administrator) to find out if there is an errant entry in the Control Panel, but many entries (almost two dozen) are blank. There are several valid entries as well. I've looked through the registry and through C:\Windows, \system32, and \SysWOW64 but have had no success. I looked at this question, but I am not using Windows XP and thus have no option to use Tweak UI's Rebuild Icons function.

    Please note that there is no empty entry in the Control Panel when opened normally, only when attached to the Start Menu as a menu. I have compared the list of entries on the attached menu to the normal Control Panel and, other than the blank entry, they are exactly the same. Nothing is missing from one or the other. I've also compared the menu and the normal view to reference images and lists of Control Panel items and have found no irregularities.

    Is anyone familiar with this problem, or knows of a solution? I've performed virus and malware scans and found nothing. I've used CCleaner with no change. Nothing with Shell Object Editor. Nothing with Registry Editor. Certainly someone here knows how to fix this. My only guess is the many blank entries visible in Shell Object Editor, but I am reluctant to delete that many items without further analysis and guidance. I appreciate your time and consideration.

    Read the article

  • User reduced LVM logical volume without resizing filesystem

    - by Matthew
    I received an email yesterday that one of our users was trying to make room for a heartbeat/clustering package which requires its own partition to act as a voting disk. To do this, he attempted to reduce the size of the root partition's logical volume and then create a new logical volume for that purpose. However, he forgot to resize the filesystem first (or to include the -r switch in the command). He also forgot to unmount the root partition by running the process from a rescue CD. The system is now refusing to boot into the OS with the following error:

        Either the superblock or the partition table is likely to be corrupt!
        Unexpected Inconsistency; run fsck manually.

    The system then drops the user into single-user mode. Is it possible to rescue the filesystem, or is it hosed? It's running ext3.
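    A hedged recovery sketch from a rescue CD, ideally attempted against a dd image of the volume first. The VG/LV names and the archive file name are assumptions (typical defaults), and it presumes /etc/lvm/archive on the root volume is still readable; the key idea is to roll the LVM metadata back to before the lvreduce before letting fsck touch the filesystem:

        vgcfgrestore --list VolGroup00                        # find the archive taken just before the resize
        vgcfgrestore -f /etc/lvm/archive/VolGroup00_00001.vg VolGroup00
        vgchange -ay VolGroup00                               # re-activate the restored mapping
        e2fsck -f /dev/VolGroup00/LogVol00                    # then let fsck repair the ext3 metadata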

    Read the article

  • Apple Software Update Server for Windows

    - by Matthew Iselin
    We have just added a few iMacs to our system and we've found that they all want to download about 1.6 GB of updates... not so great when we only have limited monthly bandwidth! All of our Windows machines just use WSUS, which works great in our environment. It'd be nice if we could do the same for the iMacs without purchasing an additional Mac for a server role. So is it possible to run Apple's Software Update Server on a Windows server? Or do we need to look at purchasing a Mac in order to distribute updates across our clients? Alternatively, could we set up one of the iMacs to run the update server for the other iMacs whilst it is being used as a standard machine (i.e. not installing OS X Server, and keeping it available for staff to use)?
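    If a locally hosted update catalog does end up being reachable on the network, Mac clients of that era could be pointed at it per machine with a single command -- a hedged sketch, where the catalog URL and port are placeholders/assumptions:

        sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL \
            "http://updates.internal.example.com:8088/index.sucatalog"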

    Read the article

  • On freenode, how do I keep NickServ from messaging me when I log in?

    - by Matthew
    I have Empathy set to run whenever I log in to Ubuntu. As soon as Empathy connects to freenode, I get these messages:

        This nickname is registered. Please choose a different nickname, or identify via /msg NickServ identify <password>.
        You are now identified for [my name].

    This is pretty annoying, since Empathy handles identification for me anyway. Is there any way to keep this from happening?

    Read the article

  • How do I get details about how Network and Sharing Center detected a problem?

    - by Matthew Scouten
    When I open Network and Sharing Center in Windows 7, it puts a red X between the network and the internet, but when I run Troubleshoot Problems it tells me that it does not know what the problem is. Is there any way to tell what test Windows used to place that red X, and how it failed? The system obviously knows something that it is not telling me. Knowing the details would help me solve this problem.

    Read the article

  • How can I use an SSH tunnel for all traffic from a single application, without knowing the ports used?

    - by Matthew Read
    I have an application that opens connections on dozens of ports, and doesn't provide documentation about which ports it uses. I could use Wireshark or something to capture the traffic and export the ports from that, but I think it should be simpler than that. (And I'm not sure I would be able to cover all use cases and ensure the app used every single port it can ever use.) So I'm looking for a way to just say "forward all traffic from this application" (bonus points for all traffic from child processes as well) without needing to worry about specific ports. I'm sure there must be a way, but I couldn't hit on the right keywords while searching Google. How can I do this?
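    One hedged approach is a dynamic (SOCKS) SSH tunnel combined with a SOCKSifier, so every TCP connection the application and its child processes open goes through the tunnel without enumerating ports. The host names and application path below are placeholders, and proxychains must be configured to point at the local SOCKS port:

        ssh -N -D 1080 user@tunnel-host &        # SOCKS5 proxy listening on localhost:1080
        proxychains /path/to/the-application     # LD_PRELOAD-based; child processes inherit it
        # For all TCP traffic to chosen subnets instead of a single app, sshuttle is another option:
        # sshuttle -r user@tunnel-host 0.0.0.0/0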

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine; however, if for whatever reason I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then seven times out of ten it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
          -T 127.0.0.1:6082 \
          -s malloc,1GB \
          -f /home/deploy/mysite.vcl \
          -u deploy \
          -g deploy \
          -p obj_workspace=4096 \
          -p sess_workspace=262144 \
          -p listen_depth=2048 \
          -p overflow_max=2000 \
          -p ping_interval=2 \
          -p log_hashstring=off \
          -h classic,5000009 \
          -p thread_pool_max=1000 \
          -p lru_interval=60 \
          -p esi_syntax=0x00000003 \
          -p sess_timeout=10 \
          -p thread_pools=1 \
          -p thread_pool_min=100 \
          -p shm_workspace=32768 \
          -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
          .host = "127.0.0.1";
          .port = "81";
        }

        sub vcl_recv {
          # Don't cache the /useradmin or /admin path
          if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
            pipe;
          }
          # If cache is 'regenerating' then allow for old cache to be served
          set req.grace = 2m;
          # Forward to cache lookup
          lookup;
        }

        # This should be obvious
        sub vcl_hit {
          deliver;
        }

        sub vcl_fetch {
          # See link #16, allow for old cache serving
          set obj.grace = 2m;
          if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
            deliver;
          }
          remove obj.http.Set-Cookie;
          remove obj.http.Etag;
          set obj.http.Cache-Control = "no-cache";
          set obj.ttl = 7d;
          deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such an inconsistent behaviour.
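    A hedged way to catch what the silently dying child is complaining about, since 2.0.x doesn't always get its last words into syslog. The storage size is deliberately shrunk for the test and the paths come from the question:

        # Run in debug mode with a smaller cache and watch the console output directly:
        varnishd -d -a :80 -T 127.0.0.1:6082 -s malloc,256M -f /home/deploy/mysite.vcl
        # Replay whatever made it into the shared-memory log before the child died:
        varnishlog -d | tail -n 200
        # Check whether the kernel OOM killer or a resource limit took the process down:
        dmesg | grep -i -E 'oom|varnish'
        ulimit -a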

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking.

    Background: I am currently wrapping up a development contract and the client would like for me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to 'SERVERNAME.COMPANYNAME.COM/admin.html' as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC based, which intercepts and rewrites URLs. I assume that this takes care of itself if I can just successfully get my app into a subdomain to begin with.

    What I have tried:
    - Creating a new site on the server in its own app pool
    - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
    - Setting the binding for that site to MYAPP
    - Setting the binding for that site to MYAPP.SERVERNAME
    - Setting the binding for that site to MYAPP.SERVERNAME.COM
    - Setting the binding for that site to MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here? Thanks, Matt

    Read the article

  • Is it dangerous to use both Sky Drive and Dropbox?

    - by Matthew
    I'd like to experiment with Sky Drive, but keep using my Dropbox account unless I decide to switch. This answer gives instructions for how to set up both at the same time, but I'm a little worried about data integrity. Is there any danger involved here? Will Sky Drive and Dropbox fight each other? Note that I am using Sky Drive/Dropbox on multiple computers, so they will be writing data as well as reading it. Is this safe? Edit: I can use them with different folders if necessary, but I'm particularly curious what would happen if they sync from the same folder.

    Read the article

  • Good process/software for organizing photos past/present

    - by Matthew
    So I have tons of photos taken all the time. I have a lot from years past that I never went through (meaning deleting duplicates, etc.). I've got a new PC with Windows 7, and I'm wondering what a good process is to organize those photos. They're in folders that have really no meaning (it used to be that people would put them in a folder wherever -- even the desktop or somewhere else, not just the My Pictures folder). I'm going to keep all pictures in the "My Pictures" folder from now on. I've used Picasa from Google, and it works great. Is this the recommended free software for this? What process do I use to move the old pictures over into new "organized" folders? Lately in Picasa, when I import off my camera card, I just select an option that names the folder after the date the photos were taken. Is this advised? Just give me ideas on how to stay organized with photos. Should I tag them also? Should I rename the files? Keep in mind I have over 16,000 photos to go through, so it can't be anything too thorough.
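    A hedged starting point for the backlog, if a command-line pass is acceptable alongside Picasa: fdupes can flag byte-identical duplicates, and exiftool can sort everything into dated folders by capture date. The source and destination paths are placeholders:

        fdupes -r "/path/to/old/photos"                      # list exact duplicates before moving anything
        exiftool -r "-Directory<DateTimeOriginal" -d "/path/to/My Pictures/%Y-%m-%d" "/path/to/old/photos"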

    Read the article

  • Delay init from starting a service for a period of time?

    - by Matthew
    I am trying to get a rudimentary NFS server up and running. Right now the server is configured as an NFS server as a workaround for a vendor issue: they don't support direct-attached clustered storage, which we are trying to get them to resolve. The vendor software is Splunk. The Splunk feature we are using requires files to be located on shared storage (which for us is /mnt/nfs until they support a real clustered filesystem). Currently the server has a GFS2 filesystem mounted at boot (it is the only server with the filesystem actively mounted, so there should be no problems with locking). We went with GFS2 so that switching over to a clustered filesystem is easy should the vendor begin supporting it. NFS is configured to mount that filesystem at /mnt/nfs, which the Splunk installation then sees. Splunk is configured to find its configuration files in /mnt/nfs. However, I am running into a problem where the Splunk daemon starts before NFS has finished loading, and because it sees nothing at /mnt/nfs it starts creating files there; then, when those files disappear (NFS finishes mounting the share), Splunk craps out. Splunk is set to run at runlevel 3, S90. NFS is set at runlevels 2-5, S60. Is there any way to delay the startup of the Splunk process further?
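    One hedged option that fits a plain SysV init setup: insert a small helper ordered just before Splunk (e.g. S89) that simply blocks until the mount is present; since the rc scripts run sequentially, S90 won't start until it exits. The mount point comes from the question; the script name and timeout are assumptions:

        #!/bin/sh
        # /etc/rc3.d/S89waitnfs -- block until /mnt/nfs is mounted; give up after ~2 minutes
        for i in $(seq 1 60); do
            mountpoint -q /mnt/nfs && exit 0      # or: grep -qs ' /mnt/nfs ' /proc/mounts
            sleep 2
        done
        exit 1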

    Read the article

  • Testing performance from around the world - how do I get a linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed into 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is and, importantly, what difference it makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors', which is easy enough using Amazon; however, Amazon only has 7 data centres. We would like to know, for example, how we perform in locations all over the world such as South Africa, Australia, China, Peru etc. Does anyone know of any service where we could piggyback on their global infrastructure and run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt
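    A minimal probe that could be dropped onto any rented shell or VM in a target region, assuming the service exposes an HTTP(S) endpoint; the hostname, path and sample count are placeholders:

        HOST="our-service.example.com"
        ping -c 20 "$HOST" | tail -n 2              # raw round-trip latency summary
        for i in $(seq 1 20); do                    # connect / first-byte / total time per request
            curl -o /dev/null -s -w "%{time_connect} %{time_starttransfer} %{time_total}\n" "https://$HOST/"
        done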

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know that with RAID 5 the best practice is six disks per array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: several smaller RAID 1+0s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The concern is more whether we want to use one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0s and then stripe across them using LVM. I know the best practice for higher RAID levels is to never use more than six disks in an array.
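    A hedged sketch of the two layouts described in the question, for comparison; the device names, array split and stripe size are assumptions for illustration only:

        # Option A: one 14-disk RAID 10
        mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]

        # Option B: two smaller RAID 10s, striped together with LVM
        mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[b-g]
        mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[h-o]
        pvcreate /dev/md0 /dev/md1
        vgcreate vg_dw /dev/md0 /dev/md1
        lvcreate -i 2 -I 256 -l 100%FREE -n lv_data vg_dw   # -i 2 stripes the LV across both PVs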

    Read the article
