Search Results

Search found 4244 results on 170 pages for 'nocturnal thoughts'.


  • Understanding Authorized Access to your Google Account

    - by firebush
    I'm having trouble understanding what I am granting to sites when they have "Authorized Access to my Google Account." This is how I see what has authorized access:
      1. Log into Gmail.
      2. Click on the link that is my name in the upper-right corner, and from the drop-down select Account.
      3. From the list of links to the left, select Security.
      4. Click on Edit next to "Authorized applications and sites".
      5. Authenticate again.
    At the top of the page, I see a set of sites that have authorized access to my account in various ways. I'm having trouble finding out what is being told to me here: there's no "help" link anywhere on the page, and my Google searches are coming up unproductive. From the looks of it, Google has access to my Google Calendar. I feel comfortable with that, I think. But other sites have authorization to "Sign in using your Google account". My question is, what exactly does that authorization mean? What do the sites that have authorization to "Sign in using my Google account" have the power to do? I hope this simply means that they authenticate using the same criteria that Gmail does, and I assume it doesn't grant them the ability to access my email. Can someone please calm my paranoia by describing (or simply pointing me to a site that describes) what these terms mean exactly? Also, if you have any thoughts about the safety of this feature, please share. Thanks!


  • Boot drive not found issue after cloning using Apricorn EZgig

    - by TomWilsonFL
    A couple of days ago I cloned a drive for someone using the EZgig software. Usually this goes without a hitch, but this particular drive is quite old. When I restarted with the new drive I received the typical "bootable disk not found" message, so I turned the machine off, poked around in the BIOS, restarted, and it came up fine. That night I was working remotely on the computer and had to restart it. It didn't come back up; not a good sign. When the user came to the computer in the morning it was giving the same message. I have found that to make the computer boot, all I have to do is go into the BIOS, "Load Defaults", and restart. It will boot and then runs great. Any thoughts on what is causing this? Is it MBR corruption? Are some settings being saved in the CMOS? A couple of points worth mentioning: I have already looked for a BIOS update for the computer, but the newest is already installed (from 2003). When the computer reboots it either shows "None" for Primary Master, or sometimes it will just not show anything. Thanks, Tom


  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use Ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host:

        ansible -i ansible_hosts host1 --sudo -K # + commands ...

    My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up-front, which then seems to be tried for all subsequent hosts without prompting:

        host1 | ...
        host2 | FAILED => Incorrect sudo password
        host3 | FAILED => Incorrect sudo password
        host4 | FAILED => Incorrect sudo password
        host5 | FAILED => Incorrect sudo password

    Research so far:
      - a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"
      - the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it's not required." (emphasis mine)
      - this security StackExchange question, which takes it as read that NOPASSWD is required
      - the article "Scalable and Understandable Provisioning...", which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"
      - the article "Basic Ansible Playbooks", which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don't" - my thoughts exactly, but then how do you extend beyond a single server?
      - ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."
    So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing a password across hosts, or enabling root SSH login all seem like rather drastic reductions in security.
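    One hedged sketch of a workaround, assuming the Ansible version in use (the 1.x era, matching the issue cited above) supports the per-host ansible_sudo_pass inventory variable. The host names and passwords below are placeholders, and the passwords sit in plain text, so keep the file mode tight or move the variables somewhere encrypted:

        # Hypothetical inventory carrying a different sudo password per host
        # (assumes ansible_sudo_pass is honoured by this Ansible version).
        cat > ansible_hosts <<'EOF'
        host1 ansible_sudo_pass=password-for-host1
        host2 ansible_sudo_pass=password-for-host2
        EOF
        chmod 600 ansible_hosts                       # passwords are stored in the clear
        ansible -i ansible_hosts all --sudo -m ping   # no single -K prompt needed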


  • Power Outage Interrupted Upgrade from Windows Vista Ultimate to 7 Ultimate, Reverted to Vista, Now Vista is failing... What Next?

    - by tednewk
    I was in the midst of what seemed to be a successful upgrade from Vista Ultimate to 7 Ultimate when there was a brief blackout. The upgrade failed and Windows reverted back to Vista. Now Vista is very slow to boot, has problems waking back up from inactivity, and quickly loses its wireless connection. The wake-up problem manifests itself as a mouse cursor clearly shown on a black screen, but with no access to the Desktop, Taskbar, or Explorer. Even Ctrl-Alt-Delete doesn't seem to work: no task menu, no reboot. Hitting the reset button reboots the machine with the usual black-screen warnings offering Safe Mode. I tried to do a System Restore to a point before the upgrade; that didn't seem to work. My guess is that my system is a mutant with parts of Vista and parts of 7 crashing each other. I would like to avoid a clean install if at all possible, to avoid reinstalling other software. What should I try now? My thoughts are:
      1. Make a system back-up to lock the computer in place.
      2. Try a second 7 upgrade.
      3. If that appears to work, make another back-up.
      4. If not, reload the back-up and try repairing Vista from DVD.
      5. If that appears to work, make another back-up, let the system stabilize for about a week, then try the 7 install again.
    If that doesn't work, are there any other options to try before settling for a clean install? Another complication: I am doing this by "remote control". I'm traveling with my job, and I'll be talking my son through it over the phone. (Kind of like the landing-the-747 cliche from all the 70's adventure shows!) So is there a way of simplifying the steps? Thanks, Ted


  • All applications quit when printing on Mac OS X 10.5.8

    - by Tamany
    I recently ran a software update. I'm not sure my problems are associated with this, but I'm pretty sure they are, as I printed successfully before the update. I checked the log at the time of printing:

        03/05/2010 22:03:15 Microsoft Word[697] *** -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x17a82b50
        03/05/2010 22:03:15 [0x0-0x51051].com.microsoft.Word[697] Ignoring Quickdraw drawing between QDBeginCGContext and QDEndCGContext
        (the Quickdraw line above repeats five more times at 22:03:16)
        03/05/2010 22:03:17 [0x0-0x51051].com.microsoft.Word[697] Mon May 3 22:03:17 leopards-imac-2.local Word[697] <Error>: The function `CGPDFDocumentGetMediaBox' is obsolete and will be removed in an upcoming update. Unfortunately, this application, or a library it uses, is using this obsolete function, and is thereby contributing to an overall degradation of system performance. Please use `CGPDFPageGetBoxRect' instead.
        03/05/2010 22:22:09 Microsoft Word[697] *** -[NSCFString _getValue:forType:]: unrecognized selector sent to instance 0x1b036500

    Any thoughts on how to fix this?


  • Fill down in Excel, but based on multiple values

    - by Jenn D.
    I have spreadsheets (not created by me) that have blank entries in one column where they should really have data. I want to take every empty cell and fill it with the nearest value above it. I'm looking for as little manual intervention as possible, because I'll have to do it repeatedly. I thought some previous version of Excel, or maybe another spreadsheet from the distant past, would do this by default -- that is, if you selected the column with foo and bar, and chose the equivalent of "fill down", you would get what's in the WANT column. What I actually get in Excel is the GET column.

        HAVE:    WANT:    GET:
        foo 1    foo 1    foo 1
            2    foo 2    foo 2
        bar 1    bar 1    foo 1
            2    bar 2    foo 2
            3    bar 3    foo 3

    I'm worried that this might need a macro to be done properly. I used to be a whiz with Excel macros, and then suddenly they were all in VB. My fallback position will be to dump the whole thing to CSV and write a Python script, but if there's any way to do it in Excel that would be much preferable. Even if it involves a couple of different manual steps, that's fine; just not one step per group of lines. That is, a process of "copy the column, do X to it, cut and paste it back" would work, but "do X for each occurrence of foo or bar" won't. The files are too big for that. Any thoughts are appreciated!
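    If the CSV fallback does happen, the fill-down itself doesn't even need a Python script; a minimal awk sketch, assuming the sparse labels sit in the first field of a plain comma-separated file with no quoted commas (have.csv and want.csv are placeholder names):

        # Carry the last non-empty first field down into blank cells.
        awk -F, 'BEGIN { OFS = "," }
                 { if ($1 == "") $1 = last; else last = $1; print }' have.csv > want.csv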


  • Vyatta masquerade out bridge interface

    - by miquella
    We have set up a Vyatta Core 6.1 gateway on our network with three interfaces:
      eth0 - 1.1.1.1 - public gateway/router IP (to public upstream router)
      eth1 - 2.2.2.1/24 - public subnet (connected to a second firewall 2.2.2.2)
      eth2 - 10.10.0.1/24 - private subnet
    Our ISP provided the 1.1.1.1 address for us to use as our gateway. The 2.2.2.1 address is so the other firewall (2.2.2.2) can communicate with this gateway, which then routes the traffic out through the eth0 interface. Here is our current configuration:

        interfaces {
            bridge br100 {
                address 2.2.2.1/24
            }
            ethernet eth0 {
                address 1.1.1.1/30
                vif 100 {
                    bridge-group {
                        bridge br100
                    }
                }
            }
            ethernet eth1 {
                bridge-group {
                    bridge br100
                }
            }
            ethernet eth2 {
                address 10.10.0.1/24
            }
            loopback lo {
            }
        }
        service {
            nat {
                rule 100 {
                    outbound-interface eth0
                    source {
                        address 10.10.0.1/24
                    }
                    type masquerade
                }
            }
        }

    With this configuration it routes everything, but the source address after masquerading is 1.1.1.1, which is correct, because that's the interface it's bound to. But because of some of our requirements here, we need it to source from the 2.2.2.1 address instead (what's the point of paying for a class C public subnet if the only address we can send from is our gateway!?). I've tried binding to br100 instead of eth0, but it doesn't seem to route anything if I do that. I imagine I'm just missing something simple. Any thoughts?
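    For what it's worth, masquerade always translates to the primary address of the outbound interface, which is why 1.1.1.1 keeps appearing. A hedged sketch of the usual alternative, from memory of the VC6.x CLI (verify the option names against your release), is a source NAT rule pinned to an explicit outside address:

        configure
        # Replace masquerade with source NAT so the translated address
        # can be 2.2.2.1 rather than eth0's own 1.1.1.1.
        set service nat rule 100 type source
        set service nat rule 100 outbound-interface eth0
        set service nat rule 100 source address 10.10.0.1/24
        set service nat rule 100 outside-address address 2.2.2.1
        commit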


  • ext3: maximum recommended partition size / handling large partitions

    - by Hansi
    Hi! I would like to do an encrypted install of Ubuntu on a 2-terabyte drive (i.e., using LUKS/dm-crypt). In order not to have to type in passwords too often, the partitioning scheme will be 50 GB for / and about 1 TB for /home (and the rest for Windows 7), just for clarity. Even though by now LVM is regarded as stable, I don't want to leave more room for error by introducing unnecessary layers of complexity. For both Ubuntu partitions I want encrypted ext3 with the default block size of ext3 (4k?). Thoughts: when I look at most partition schemes here on this site or elsewhere, I usually see at most about 400 or 500 GB partitions (maybe I didn't see enough). There may be different reasons for this, but is reliability an issue here? Are larger ext3 partitions, like about 1 TB, harder to handle for the OS or filesystem driver, or at some other level? If I make the partition too large, will it be harder to repair in case of corruption? Are there some default settings for ext3 that I should change for 1 TB partitions? Question: what maximum partition size for ext3 do you recommend, and why? Thanks!


  • How can I get ffmpeg to convert a .mov to a .gif?

    - by user29336
    I'm trying to convert a .mov to a .gif and I'm not having success. Here's the error:

        $ ffmpeg -pix_fmt rgb24 -i yesbuddy.mov output.gif
        ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
          built on Jun 12 2012 17:47:34 with clang 2.1 (tags/Apple/clang-163.7.1)
          configuration: --prefix=/usr/local/Cellar/ffmpeg/0.11.1 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --enable-libvo-aacenc --disable-ffplay
          libavutil      51. 54.100 / 51. 54.100
          libavcodec     54. 23.100 / 54. 23.100
          libavformat    54.  6.100 / 54.  6.100
          libavdevice    54.  0.100 / 54.  0.100
          libavfilter     2. 77.100 /  2. 77.100
          libswscale      2.  1.100 /  2.  1.100
          libswresample   0. 15.100 /  0. 15.100
          libpostproc    52.  0.100 / 52.  0.100
        Option pixel_format not found.

    If I leave out the -pix_fmt rgb24 part, it complains. Thoughts on how to fix?
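    A likely culprit, offered as a hedge rather than a certainty: placed before -i, -pix_fmt is parsed as an input (demuxer) option, which this build rejects for a .mov; moved after the input it becomes an output option, where the GIF encoder can act on it:

        # Same command with -pix_fmt applied to the output instead of the input.
        ffmpeg -i yesbuddy.mov -pix_fmt rgb24 output.gif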


  • Why didn't database partitioning work? Extract from thedailywtf.com

    - by questzen
    Original link: http://thedailywtf.com/Articles/The-Certified-DBA.aspx. Article summary: the DBA suggests an approach involving rigorous partitioning, 10 partitions per disk (3 actual disks and 3 RAID). The stats show that the performance is non-optimal. Then the DBA suggests an alternative of 1 partition per disk (with more disks added). This also fails. The sysadmin then sets up a single disk with a single partition and saves the day. The size of the disks was not mentioned, but given today's typical disk sizes (on the order of 100 GB), the partitions would be huge; it surprises me that a single disk with a single partition outperformed the partitioned setups. Initially I suspected that the data was segregated, hence the faster reads. But how come the performance didn't degrade as time went by, with all the inserts and updates happening? I saw this on reddit, but the explanation there was mostly spindle/platter-centered, and there was no mention of that in the article. Is there any other reason? I can only guess that the tables were using an incorrect hash distribution, causing non-uniform allocation across disks (wrong partitioning); this would increase fetch times. Any thoughts?


  • SQL Server 2005 instance unresponsive and all DBs are 'in recovery'

    - by user44650
    We've got a SQL Server 2005 instance that one of our guys messed up; I believe they killed the SQL Server service and restarted the computer, and when it came back all of our databases were "in recovery". It has been 'in recovery' and unable to connect to 'msdb' (also in recovery) whenever we try to use SSMS, for the last 4 days now, and it times out every time we try to connect. I'm unsure how to use the DBCC CHECKDB command to check the database integrity. We have backups (which we can't restore from because it keeps timing out), and it's a testing server, so nothing in production is really lost. Is there any way to get it out of recovery mode? We have another SQL Server instance running that's just fine, but this instance keeps timing out. The errors I keep seeing are "database msdb is being recovered. wait until recovery is finished" and "an exception occurred while executing a transact-sql statement or batch: Timeout expired". Any thoughts? We don't really have a DBA here, or anyone with much SQL experience.
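    On the DBCC CHECKDB part of the question: it can't run against a database that is still in recovery, but once a database is back online, a minimal sketch from the command line looks like this (the instance and database names are placeholders; -E uses Windows authentication):

        sqlcmd -S .\YOURINSTANCE -E -Q "DBCC CHECKDB ('YourTestDB') WITH NO_INFOMSGS, ALL_ERRORMSGS"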


  • Faster, secure protocol/client required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So, a few questions/thoughts:
      - Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux.
      - Should I be using hardware to do the encryption, either in the client or the network parts? If so, what would you recommend?
      - I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load - moving to Linux at the client end may be a better idea but would be quite disruptive.
      - Am I missing a trick?
    Thanks in advance.
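    One cheap experiment to separate encryption cost from protocol cost, hedged in that it assumes an OpenSSH client is available on or near the Windows box (e.g. via Cygwin) and that the server's sshd permits the ciphers: time the same file with a cheaper cipher and compression off, then compare against the WinSCP number (arcfour is cryptographically weak, so treat it as a diagnostic, not a recommendation):

        # bigfile.bin, user and server are placeholders.
        scp -c arcfour    -o Compression=no bigfile.bin user@server:/data/
        scp -c aes128-ctr -o Compression=no bigfile.bin user@server:/data/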


  • K8NDRE motherboard in server fails to complete BIOS load with error 0078

    - by John
    K8NDRE motherboard with 4 SATA drives, was running fine. The drives had RAID-0 and RAID-1 partitions, using mdadm; the onboard RAID is disabled. Upon reformatting the drives and setting up a new partition structure and new RAID partitions, the BIOS fails to finish loading, with 0078 in the bottom right corner. I tried using a completely new set of drives, and the BIOS worked fine. I was able to boot from a USB drive, format the drives, partition them, start RAID, and then install the OS. On reboot I received the same error from the BIOS, 0078. It works fine if I unplug the SATA drives. Any thoughts? Physical inspection reveals no damaged cables, connectors, or capacitors. The server was running happily for over a year, and this is the first problem it has had. Per Michael Hampton's answer: the drives, unjumpered and supporting SATA III, worked fine originally, and worked fine for formatting and having new partitions and RAID installed on them. I did try jumpering one, with no change. If I put a brand new unformatted drive in, the motherboard recognizes it and I can proceed with formatting and installing. When I reboot, I get the 0078. I have 4 SATA cables - the board supports 4 drives - so I tried each, with no change. I am close to calling the motherboard done.


  • Windows Server 2003 IPSec Tunnel Connected, But Not Working (Possibly NAT/RRAS Related)

    - by Kevinoid
    Configuration: I have set up a "raw" IPSec tunnel between a Windows Server 2003 (SBS) machine and a Netgear FVG318, according to the instructions in Microsoft KB816514. The configuration is as follows (using the same conventions as the article):

        NetA        | SBS2003   | FVG318   | NetB
        10.0.0.0/24 | 216.x.x.x | 69.y.y.y | 10.0.254.0/24

    Both the Main Mode and Quick Mode Security Associations complete successfully and appear in the IP Security Monitor. I am also able to ping the SBS2003 server on its private address from any computer on NetB.
    The problem: any traffic sent from a computer on NetA to NetB, or from SBS2003 to NetB (excluding ICMP ping responses), is sent out on the public network interface outside the IPSec tunnel (no encryption or header authentication, as if the tunnel were not there). Pings sent from a computer on NetB to a computer on NetA successfully reach computers on NetA, but the responses are silently discarded by SBS2003 (they do not go out in the clear and do not generate any encrypted traffic).
    Possible explanations:
      - Incorrect configuration: I could have mistyped something somewhere, or KB816514 could be incorrect in some way. I have tried very hard to eliminate this option: I have re-created the configuration several times and tried tweaking and adjusting all the settings I could, without success (most changes prevent the SA from being established).
      - NAT/RRAS: I have seen multiple posts elsewhere suggesting that this could be due to interaction between NAT and the IPSec filters. Possibly the NetA private addresses get rewritten to 216.x.x.x before being compared with the Quick Mode IPSec filters, and don't get tunneled because of the mismatch. In fact, The Cable Guy article from June 2005, "TCP/IP Packet Processing Paths", suggests that this is the case (see steps 2 and 4 of the Transit Traffic path). If so, is there a way to exclude NetA-NetB traffic from NAT?
    Any thoughts, ideas, suggestions, and/or comments are appreciated.


  • Best SSD tweaks for Windows 7

    - by Nick Berardi
    I have seen many articles about tweaking an SSD, but many of them seem outdated, or too broad (read: general tweaks for all of Windows XP, Vista, and Windows 7). And I know that Windows 7 has been specifically tuned for SSDs by the Windows team, so I don't want to do something that was written with Windows XP in mind and end up circumventing something the Windows team specifically designed into Windows 7. So my question is: what are the best SSD tweaks for Windows 7 that you have found to get the most performance out of your drive? I hope to build a comprehensive list in the answers below so there won't be so much misinformation in the forums about what to do and what not to do. Here are a few that I see posted on the forums a lot, and some questions to get the discussion started:
      - Disabling Superfetch: yes or no?
      - Disabling the page file, or limiting it to a really small size such as 500 MB: yes or no?
      - Disabling indexing: yes or no?
      - Disabling defragmenting: yes or no?
    What are your thoughts - do you have any that have worked for you? When providing an answer, please do your best to back it up with a reason and possibly some documentation from MSDN, TechNet, or another credible source.
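    One check that arguably belongs at the top of any such list, hedged only in that it assumes a TRIM-capable SSD: confirm Windows 7 actually detected the drive as an SSD and enabled TRIM before tweaking anything else:

        rem 0 means TRIM ("delete notifications") is enabled; 1 means disabled.
        fsutil behavior query DisableDeleteNotify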


  • What's a good solution for file-tagging in Linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements:
      - any file readable by the user can be tagged freely
      - a user can search for files matching one or several tags
      - files can be moved around without losing their previously associated tags
      - the system could be backed up easily
      - no dependencies on any desktop environment
      - if any GUI is involved, there must be a CLI fallback
    I've been hoping for some basic filesystem & coreutils hackery to handle this, but I haven't thought about this hard enough yet. Meanwhile I'll review Beagle and MetaTracker, which have been mentioned here, and see how they perform. OK, so Beagle has huge GNOME dependencies, and Tracker is OK-ish, but still has some dependencies I don't like... I've been doing some more research, and the way to go could very well be extended file attributes. They're a native solution for most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp, for example, needs the -a flag to preserve them). I would like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
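    A minimal sketch of the xattr route, assuming a filesystem mounted with user_xattr and the attr package installed (the file names and the user.tags attribute name are arbitrary conventions, not a standard):

        # Tag a file, read the tags back, then do a naive (slow) tag search.
        setfattr -n user.tags -v "music,lossless" album.flac
        getfattr -n user.tags album.flac
        find ~/media -type f -exec sh -c \
          'getfattr -n user.tags --only-values "$1" 2>/dev/null | grep -qw lossless && printf "%s\n" "$1"' _ {} \;

    The search scans every file, which is the weakness of bare xattrs; any real setup would want an index kept alongside.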


  • Can we do a DNSSEC 101? [closed]

    - by PAStheLoD
    Please share your opinions, FAQs, HOWTOs, best practices (or links to the ones you think are best), and your fears and thoughts about the whole migration (or should I just call it a new piece of tech?).
      - Is DNSSEC just for DNS providers (name server operators)? What should John Doe do, who hosts johndoe.com at some random provider (GoDaddy, DreamHost and such)?
      - Also, what if the provider's name server doesn't do automatic signing magic - can John do it manually? In a fire-and-forget way, without touching KSK and ZSK rollovers and updates and headaches?
      - Does it bring any change regarding CERT records?
      - Do browsers support it?
      - How come it became so complex? Why didn't they just merge it with SSL? DKIM is pretty straightforward; IANA/IETF could've opted for something like that. (Yes, I know that creating a trust anchor would still be problematic, but browsers are already full of CA certs. So they could've just let anyone get a cert for a domain for shiny green padlocks, or generate one for a poor blue lock, put it into a TXT record, encrypt the other records, and let the parent zone sign the whole thing for you with its cert.)
    Thanks! And for disclosure (it seemed like the customary thing to do around here), I've asked the same on the netsec subreddit.


  • Are there any tools to migrate your files, applications, and settings to a new Windows computer?

    - by calbar
    I've decided to upgrade my laptop on a regular basis and one of my main concerns is recreating my entire Windows 7 environment every time I do this. I'm talking toolbar positions, login settings, start menu items, applications and all their customizations... everything but my drivers. It literally takes weeks to fully recreate my working environment, not to mention the risk of user error or just simply forgetting "how I liked it." I'm assuming I won't find something as painless as Apple's Migration Assistant for Windows, but maybe there's something out there that can at least package up your apps and their settings? Bonus points if you can point it to your personal files, too - whatever's the quickest way to get from one machine to the next. I intend to install Windows fresh to remove bloatware on every machine that I buy, then selectively install the drivers I need. Something that accommodates loading my old apps into this newly prepared environment would be ideal. One random point of concern is in regard to application settings that refer to old hardware. I'm not sure if there's anything that can be done about this. If you have any thoughts, feel free to share. Thanks for your help!


  • CentOS 5.8 - Can't login to tty1 as root after updates?

    - by slashp
    I've run a yum update on my CentOS 5.8 box and now I am unable to log into the console as root. Basically what happens is: I receive the login prompt, enter the correct username and password, and am immediately spat back to the login prompt. If I enter an incorrect password, I am told the password is incorrect, so I know that I am using the proper credentials. The only log I can seem to find of what's going on is /var/log/secure, which simply contains:

        15:33:41 centosbox login: pam_unix(login:session): session opened for user root by (uid=0)
        15:33:41 centosbox login: ROOT LOGIN ON tty1
        15:33:42 centosbox login: pam_unix(login:session): session closed for user root

    The shell is never spawned. I've checked my inittab, which looks like so:

        1:2345:respawn:/sbin/mingetty tty1
        2:2345:respawn:/sbin/mingetty tty2
        3:2345:respawn:/sbin/mingetty tty3
        4:2345:respawn:/sbin/mingetty tty4
        5:2345:respawn:/sbin/mingetty tty5
        6:2345:respawn:/sbin/mingetty tty6

    And my /etc/passwd, which properly has bash listed for the root user:

        root:x:0:0:root:/root:/bin/bash

    Permissions are correct on /tmp (1777) and /root (750). I've attempted re-installing bash, pam, and mingetty to no avail, and confirmed /bin/login exists. Any thoughts would be greatly appreciated. Thanks!! -slashp
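    The symptom pattern (pam_unix opens the session and closes it one second later, with the shell never spawned) often points at an SELinux or PAM regression after an update. Some hedged first checks, assuming SELinux may be in play at all:

        tail -n 50 /var/log/messages                # any audit/AVC denials at login time?
        getenforce                                  # if Enforcing, retest after: setenforce 0
        ls -Z /bin/login /sbin/mingetty /bin/bash   # do the security contexts look sane?
        restorecon -v /bin/login /bin/bash          # relabel if they do not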


  • Nginx proxy upstream cached?

    - by Julian H. Lam
    Attempting to resolve an issue that's been annoying me for a bit. I've distilled the symptoms into a set of reproducible steps: I have two sites, siteA, and siteB. They are both Node.js applications running on different ports (for the sake of example, 4567 and 4568) Both applications have their own file in sites_available (plus a symlink from sites_enabled), which contain the directives proxy_pass http://node_siteA/ and proxy_pass http://node_siteB/ respectively, inside of a location block. They also each have an upstream block (defined globally?): upstream node_siteA { upstream node_siteB { server 127.0.0.1:4567; server 127.0.0.1:4568; } } Site A and Site B have nothing to do with each other. Yes, I am restarting (reloading, actually) nginx every time I make a change. If I take down site B and attempt to access it via the web, I am served site A. Why is this? Thoughts Other times, when I create a new Site C, for example, nginx refuses to show me anything except "Welcome to nginx!" for ~5 minutes. This suggests a resolver timeout, perhaps? When I access Site B after its config has been deleted, and it sends me to Site A, this sounds like nginx sending me to servers in a round-robin fashion...


  • VMware Workstation Bridged Network Host UnReachable

    - by user2097818
    VMware Workstation 7 on Win7-64 (Home Premium). I have confirmed this on any guest running on this machine (from winxp to debian). I am using a bridged network connection for my guests (Automatic on VMnet0). All of the network configuration is done with DHCP (including on the host). Problem What I can not do: Ping my host machine from inside any VM. (either shows me "Destination Host Unreachable" or will just timeout) What I CAN do right after power up, with no problems at all. I can connect to the internet from inside the VM I can ping my router from inside the VM I can ping other machines on my network from inside the VM Other machines can ping the VM Other machines can ping the host My host machine can ping the VM (this one is important. read further) Details So I have my router assigned as 192.168.2.1/255.255.255.0, and the router provides the DHCP service (and it seems to be doing so successfully). There are no IP conflicts on the network that I am aware of. All Gateways and Subnet masks are appropriate and matching. My entire workshop is on one single subnet, with one single DHCP server and gateway. There is one method in which I can ping successfully, but it requires an active connection initiated from the host (I start pinging from host to VM). During the period of the active connection, I can successfully ping from VM to host, using explicit IP address. As soon as the host connection is closed, the VM ping starts hanging with the same old messages. My Thoughts This really feels like a firewall problem, but I have turned off all firewalls on host and VM, powered down the network, powered back up, and the problem still persists. And if it was firewall, why would only the IP address associated with bridged VM networks be blocked. I feel as though my host operating system (Win7) is somehow configured incorrectly, or, VMware Workstation is configured incorrectly from the host side. Although I have done my best to put everything in default, I feel like I am missing something silly.


  • Google Analytics ignoring "required step" in goals

    - by Matt Huggins
    I am A/B testing a landing page to see which converts better. The funnels are set up exactly the same in terms of the goal completion URL and funnel steps, with one exception: the first step (which is a required step) has a different URL to represent each of the two landing pages. Unfortunately, Google is tracking a conversion for both of these goals regardless of which landing page a user is reaching! It looks like the "required step" is broken...perhaps it is a deeper issue if others haven't noticed it, such as it only not working when the goal URL is the same between multiple goals. Here is an example of what I mean. Goal 1: Goal URL: /users/dashboard (head match) Funnel: 1. /homepages/index1 (required step) 2. /users/register 3. /users/edit Goal 2: Goal URL: /users/dashboard (head match) Funnel: 1. /homepages/index2 (required step) 2. /users/register 3. /users/edit As you can see, the only difference is step #1 of the funnel. Since I am A/B testing the landing page of the site, this should be the only difference. However, when I look at the goal page of Google Analytics, I see that the goal is being recorded for both of these regardless of the landing page being reached. The only tinkering I've done is attempting to wrap each funnel step's goal in ^ and $ characters for an exact regular expression match, but this didn't make a difference. Thoughts?


  • USB Device Not Recognized (Mac)

    - by Nargis
    Unfortunately, my Mac Pro has also made one of my USB storage devices inoperable. I lost the data on that USB device, but other devices, such as another USB drive and my USB keyboard, are unaffected. I have heard that my friend usually triggers this problem by having at least two devices plugged in - typically thumb drives/USB flash drives - and once a second flash drive is plugged in, one of them becomes unrecognized. I have only two USB ports, and at first I thought a port had come loose when I connected two USB devices. But later I found that these hidden files (".Spotlight-V100", ".TemporaryItems", ".Trashes", and "._.Trashes") are created by Mac OS. Before that USB device became unrecognized, I had deleted these files, and my friend had done the same. Now I don't want the next USB device to become unrecognized, so I won't delete any hidden system files inside the flash drives. But I really want to know why these problems happened. Can I delete these hidden files when I connect the drive only to a virtual machine (Vista), since I used to delete all useless hidden files from USB flash drives? Any suggestions or thoughts to prevent this, or alternative suggestions to fix the problem without data loss, would be much appreciated.
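    Rather than deleting the hidden folders after the fact (which Mac OS recreates, and which seems correlated with the failures), Spotlight can be told to leave a volume alone; a hedged sketch, with the volume name as a placeholder:

        mdutil -i off /Volumes/USBSTICK   # stop Spotlight indexing this volume
        mdutil -E /Volumes/USBSTICK       # erase the index it already built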


  • Win 7 firewall won't turn on, nor will the McAfee firewall; hit by the "Win 7 Anti-virus 2012" trojan, since removed, but a downed firewall is a lasting legacy

    - by PhxTitan
    I caught the trojan right away, I think, but neither my McAfee nor my Win 7 (x64) firewall can be turned on now. I get MS error code 0x80070424 when attempting to turn on the Win 7 firewall. No viruses: I swept the machine with McAfee AV, Malwarebytes Anti-Malware, and Microsoft's malware removal tools. I followed the three alternative courses of action Microsoft posted for getting the Win 7 firewall back up and on. Nothing; same error code. The post just said to see MS support if those fixes failed. So I removed McAfee altogether. The Win 7 (Professional) firewall still won't come on, and the machine is clean of detectable bugs. I'm also fully updated with MS Windows 7 updates, though updating is no longer automatic - that too a legacy of the trojan, I think. Any thoughts on how to get the Win 7 firewall operational? And auto-updating re-engaged?
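    For what it's worth, 0x80070424 generally means a service the firewall depends on is missing or unregistered - a common leftover from this family of malware. Some hedged first steps from an elevated command prompt:

        rem Check that the firewall service and its dependency still exist.
        sc query mpssvc
        sc query bfe
        rem Repair protected system files the trojan may have damaged.
        sfc /scannow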


  • ATI Radeon 5670 Won't Show Resolutions over 1400x900

    - by Phil Sandler
    Just got my new Dell computer with Windows 7 and an ATI Radeon 5670. I attached it to my current monitor, which is a Samsung 24" (2443BWT). Windows 7 does not allow me to display at resolutions greater than 1400 x 900. The setup runs a VGA cable into the VGA port of the card. The card also has a DVI port, but I need to use the VGA port because my KVM supports VGA only. My old PC (Windows XP, GeForce 8600 video) can display at 1900 x 1200 on the same monitor (which is what I want) and even higher; it does this through a VGA cable also connected to the KVM (through the DVI port but using an adapter). I have tried the same setup (DVI-VGA adapter) on the new PC and nothing changed. I have tried:
      - Updating the drivers via Windows "Update Driver" (says they are current)
      - Installing the updated version of the drivers from ATI (made no difference)
      - Installing PowerStrip (all the options I would need for a custom resolution are greyed out)
    Installing the drivers/software from ATI caused the ATI Catalyst Control Center software to stop functioning, so I can no longer even start it. I have found some references to other people having this problem, along with instructions for cleaning the software off and reinstalling it (as uninstalling normally doesn't solve it); I will try this tonight. In any case, I didn't see any options in CCC that would allow me to override the settings for maximum resolution. However, I didn't tinker with it much before I tried updating the drivers, so I may have missed a setting. I contacted Samsung via online chat and they say it's a problem with the video card/driver (of course - what else would they say?). Any thoughts on what else I could try?

