Search Results

Search found 14080 results on 564 pages for 'known types'.


  • Wifi network undetectable on a Lenovo G470

    - by Rex
    I have a WPA2-secured network with a visible SSID at home that works perfectly fine with a Dell laptop, an HP netbook, and sundry mobile phones. When I try connecting my sister's Lenovo G470, it refuses to detect the wifi network no matter what, but shows up the neighbors' networks. The Lenovo also works correctly at her office. Both laptops run Windows 7. Already tried/checked the following:

    - Manually configured the wifi network settings (copied over from the Dell)
    - Ensured there is no MAC address filtering on the router
    - Ensured the router's DHCP server is not running out of addresses to assign (I have set it to allocate up to 10)
    - Rebooted the laptop, router, etc.

    Is this a known problem, and is there anything else one could try? Update: the problematic Lenovo runs Windows 7 Home Basic, while the Dell that works runs Home Premium and the HP netbook runs Starter edition, if that makes any difference. Further update: it is able to connect if I reboot into safe mode with networking. However, in normal mode it shows the network only sporadically, and then says there was an error connecting to it. All the network parameters, password, encryption, etc. are EXACTLY the same as they are on the Dell.
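
    Since the neighbors' networks are visible but this one is not, a useful first check is what the Lenovo's radio and driver actually report; Windows' built-in netsh tool shows both. A minimal sketch, run from a command prompt on the Lenovo (one hedged guess worth testing: if the router sits on channel 12 or 13, a driver locked to a US region profile will hide that SSID while still showing neighbors on lower channels):

        rem Show the wireless driver version and the radio types it supports
        netsh wlan show drivers

        rem List every SSID the radio currently hears, with channel and signal
        netsh wlan show networks mode=bssid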

    Read the article

  • Is there a reason to use internal DNS over 8.8.8.8?

    - by skylarking
    I've inherited a LAN where there is really no name resolution being done for local resources... i.e. all users enter IP addresses manually to access printers and network shares. There are no LDAP servers or domains either... workstations simply connect to the network without authentication. DHCP is handled via a core switch, and DNS settings are also handed out by this same core switch. Currently, the DNS assignments are as follows, in this order:

    - 10.1.1.50 / old Pentium III Windows 2003 box running the DNS service, 128 MB RAM
    - 169.200.x.x / ISP
    - 4.2.2.2 / the well-known public one

    There are a couple thousand clients on the LAN... and most of the activity is web browsing (this is an educational setting). First of all, the server seems woefully underpowered for this task... yet there is virtually no slowness when web surfing by clients. How much horsepower should a heavily used DNS server have? I have also heard using 4.2.2.2 is a bad idea... since it has been so overused. Finally, wouldn't it make sense to have a robust external DNS server listed first? (Google's 8.8.8.8 would seem to be a logical candidate.)
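
    One way to turn the "woefully underpowered" hunch into numbers is to time each resolver directly from a client; a minimal comparison with dig (resolver addresses taken from the question, test name arbitrary):

        # Each run reports a line like ';; Query time: 23 msec'
        dig @10.1.1.50 www.ubuntu.com | grep 'Query time'
        dig @4.2.2.2   www.ubuntu.com | grep 'Query time'
        dig @8.8.8.8   www.ubuntu.com | grep 'Query time'

    Caching also explains why the old box keeps up: after the first lookup of a name, a couple thousand web-browsing clients are mostly being answered from its cache, which is cheap.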

    Read the article

  • Unable to login to Amazon EC2 compute server

    - by MasterGaurav
    I am unable to log in to the EC2 server. Here's the log of the connection attempt:

        $ ssh -v -i ec2-key-incoleg-x002.pem [email protected]
        OpenSSH_5.6p1, OpenSSL 0.9.8p 16 Nov 2010
        debug1: Reading configuration data /home/gvaish/.ssh/config
        debug1: Applying options for *
        debug1: Connecting to ec2-50-16-0-207.compute-1.amazonaws.com [50.16.0.207] port 22.
        debug1: Connection established.
        debug1: identity file ec2-key-incoleg-x002.pem type -1
        debug1: identity file ec2-key-incoleg-x002.pem-cert type -1
        debug1: identity file /home/gvaish/.ssh/id_rsa type -1
        debug1: identity file /home/gvaish/.ssh/id_rsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'ec2-50-16-0-207.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /home/gvaish/.ssh/known_hosts:8
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: ec2-key-incoleg-x002.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /home/gvaish/.ssh/id_rsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    What can be the possible reason? How do I fix the issue?
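
    The log shows the key being read and offered but rejected, which usually means either the wrong login name for the AMI or a key that is not in the instance's authorized_keys. Two hedged checks from the client side (the ec2-user name is an assumption: Amazon Linux AMIs use ec2-user, Ubuntu AMIs use ubuntu):

        # Print the public half of the .pem; it should match a line in
        # ~/.ssh/authorized_keys on the instance
        ssh-keygen -y -f ec2-key-incoleg-x002.pem

        # Retry with the AMI's default user instead of the one that failed
        ssh -i ec2-key-incoleg-x002.pem ec2-user@ec2-50-16-0-207.compute-1.amazonaws.com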

    Read the article

  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something which can employ pattern matching in a sequential order to find the appropriate back-end server. For example:

    - Server 1: *.dev.mysite.com
    - Server 2: *.stage.mysite.com
    - Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
    - Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com would go to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems poorly suited to what is largely a performance-sensitive task. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired 'hands-free' configuration. But so is frequently having to adjust the gateway for all production services and risk network-wide outage... and so also is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it/would do it best? And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in apache2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it the Event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in official/stable package channels of various distros.
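
    For what it's worth, nginx can express exactly this kind of ordered matching without per-vhost blocks: exact server_name entries win first, then wildcards, then regex names in their order of appearance. A minimal sketch of the layout above, assuming placeholder upstream addresses:

        # Regex names are tried in order once exact/wildcard names fail
        server {
            listen 80;
            server_name ~^.+\.dev\.mysite\.com$;            # Server 1
            location / { proxy_pass http://10.0.0.1; }
        }
        server {
            listen 80;
            server_name ~^.+\.stage\.mysite\.com$;          # Server 2
            location / { proxy_pass http://10.0.0.2; }
        }
        server {
            listen 80;
            server_name ~(^|\.)mysite\.com$;                # Server 3
            location / { proxy_pass http://10.0.0.3; }
        }
        server {
            listen 80 default_server;                       # Server 4: everything else
            server_name _;
            location / { proxy_pass http://10.0.0.4; }
        }

    Regex names are used deliberately even for Server 3, because an nginx wildcard like *.mysite.com can match multiple labels (e.g. task.dev.mysite.com) and would capture traffic before any regex is consulted.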

    Read the article

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my MacBook's hard disk to my NAS. The latter is a ReadyNAS Duo and is mounted as an AFP volume. The files are about 5 MB each, and I copy them by selecting all the files I need in a Finder window and then dropping them onto the destination directory. Almost always, some of the files do not get copied to the NAS. For example, if I select 200 files and then start the copying, everything looks normal at the beginning (while the copying takes place, the Finder window for the destination directory is updated to show 200 files while it was empty before), but after the copying ends the destination directory shows fewer than 200 files (let's say 190). If I copy the same 200 files to the NAS again, without replacing already-copied files, the remaining 10 files are usually copied correctly. In a few cases, I have to repeat the process a third time. Notice that at no stage does the Finder give any warning that some of the files have not been copied. I am wondering if this is a known problem with AFP and the Finder, and/or if there is something I can do to solve it.
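
    Until the Finder behaviour is explained, a workaround that verifies its own work is to copy with rsync onto the mounted volume; a hedged sketch with placeholder paths (a second identical run should transfer nothing if the first pass was complete):

        # -a preserves attributes, -v lists each file as it copies
        rsync -av ~/exports/batch/ /Volumes/ReadyNAS/backup/

        # Quick sanity check: the counts on both sides should match
        ls ~/exports/batch | wc -l
        ls /Volumes/ReadyNAS/backup | wc -l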

    Read the article

  • Help Installing SQL Server 2008 Express Edition

    - by Jordan S
    OK, I am running Windows 7, 64-bit. I cleaned SQL Server 2005 completely off my system, leaving only SQL Compact Edition. I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&displaylang=en and installed SQL Server 2008 Express Edition Service Pack 1. After the install, under my Start menu all I have for SQL configuration tools are the Configuration Manager, Error and Usage Reporting, and the Install Center. I don't have SQL Management Studio. So I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=08e52ac2-1d62-45f6-9a4a-4b76a8564a2b&displaylang=en and downloaded SQL Server 2008 Management Studio Express, but when I try to install it I get a warning that says this program has known compatibility issues and that I need to install SQL Server 2008 Service Pack 1. I thought that is what I installed. So I tried to continue running the install, but I then get an error message that says "Invoke or BeginInvoke cannot be called on a Form before it is opened"... How can I check whether Service Pack 1 is installed or not? What should I do? Also, I rebooted my system and checked for Windows Updates, and it says that Windows is up to date.
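
    The service pack question can be answered directly from the instance itself; a minimal check via sqlcmd, assuming the default Express instance name .\SQLEXPRESS (ProductLevel comes back as RTM or SP1; for 2008, RTM is version 10.0.1600 and SP1 builds are 10.0.25xx):

        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT SERVERPROPERTY('ProductVersion') AS Version, SERVERPROPERTY('ProductLevel') AS Level"

    The "Invoke or BeginInvoke" crash is widely reported as a bug in the 2008 setup splash screen itself rather than anything about the target system; closing other programs and re-running setup as administrator is the commonly suggested dodge.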

    Read the article

  • Does Windows XP automatically reassemble UDP fragments?

    - by Matt Davis
    I've got a Windows application that receives and processes XML messages transmitted via UDP. The application collects the data using Windows "raw" sockets, so the entire layer 3 packet is visible. We've recently run across a problem that has me stumped. If the XML messages (i.e., UDP packets) are large (e.g., 1500 bytes), they get fragmented as expected. Ordinarily, this will cause the XML processor to fail because it attempts to process each UDP packet as if it were a complete XML message. This is a known shortcoming in the system at this stage of its development. On Windows 7, this is exactly what happens. The fragments are received and logged, but no processing occurs. On Windows XP, however, the same fragments are seen, and the XML processor seems to handle everything just fine. Does Windows XP automatically reassemble UDP fragments? I could expect this for a normal UDP socket, but it's not expected behavior for a "raw" socket, IMO. Further, if this is the case on Windows XP, why isn't the behavior the same on Windows 7? Is there a way to enable this?
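
    A hedged way to reproduce this on demand, instead of waiting for large XML messages: send one oversized UDP datagram from a third machine and watch what each OS's raw socket delivers. A one-line sketch (address and port are placeholders; a 3000-byte datagram must fragment on a 1500-byte Ethernet MTU):

        python -c "import socket; s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); s.sendto(b'x' * 3000, ('192.168.1.50', 5000))"

    Comparing what the raw socket hands the application against a Wireshark capture on the same host would then show definitively whether XP is delivering reassembled datagrams while Windows 7 delivers raw fragments.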

    Read the article

  • Is "DSLAM congestion" a legitimate reason for slow DSL?

    - by Jay Bazuzi
    My DSL has been extremely slow in the evenings recently. To test it, I telnet to my DSL modem and ping the gateway; this way I eliminate internet congestion and local network issues. In the mornings I get 30-50 ms pings. In the evenings it bounces around a lot, but 10000 ms pings are common. I complained to Qwest support, and they said it was a known issue on their end, their engineers were working on it, and they wouldn't say anything else. A couple days later I complained again, and they sent out a technician. He tested my house wiring and found that one of the lines had a short. It was an unused line, so we disconnected it, and he said things looked better and left. My daytime speeds improved at this point, but evening is still bad. I complained to Qwest support again, and they said it was a problem with DSLAM congestion at their end, and that they were working on it, but no ETA. My neighbor has Qwest DSL and doesn't seem to have these problems. That seems strange; I go use her network when I absolutely must get online and mine is behaving badly. I can't tell if they're yanking my chain or not. Regardless, these speeds are crap. I'm paying for 7 Mbps but am lucky if I get 1/10th of that in the evenings. My kids like to watch Netflix streaming movies, and it's just impossible after 5 pm or so. Should I wait it out? Will complaining again produce any results? Should I change my subscription to a lower speed until they fix their end? Or switch to cable?
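
    A timestamped log of gateway pings makes the evening pattern much harder for support to wave away; a minimal cron-able sketch (the gateway address is a placeholder; use the one from the telnet test):

        #!/bin/sh
        # Append one line per run: local time plus ping round-trip stats
        echo "$(date '+%Y-%m-%d %H:%M') $(ping -c 10 192.168.0.1 | tail -1)" >> "$HOME/dsl-latency.log"

    Run from cron every 15 minutes (*/15 * * * *) for a few days, and the morning-versus-evening difference becomes data rather than anecdote.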

    Read the article

  • Can't find created directory on zfs

    - by maniat1k
    I'm using openSUSE 13.1. I created a new dataset on a zpool:

        zfs create zpgd0/iSCSI -o compression=lz4 -o atime=off

    but I don't see it anywhere. So I tried again, and got:

        zfs create zpgd0/iSCSI -o compression=lz4 -o atime=off
        cannot create 'zpgd0/iSCSI': dataset already exists

    Adding some data:

        zpool history
        History for 'zpgd0':
        2014-08-11.13:38:21 zpool create -f zpgd0 raidz2 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0490461 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0603473 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0606817 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0670246 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0673599 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0715212 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0722699 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0731193 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0732862 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0806663 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0807385 scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0816943
        2014-08-11.14:13:09 zpool set autoexpand=on zpgd0
        2014-08-11.14:14:32 zfs create zpgd0/espacio
        2014-08-19.11:47:47 zfs create zpgd0/iSCSI -o compression=lz4 -o atime=off

        zpool status -v
          pool: zpgd0
         state: ONLINE
          scan: none requested
        config:

                NAME                                            STATE     READ WRITE CKSUM
                zpgd0                                           ONLINE       0     0     0
                  raidz2-0                                      ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0490461  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0603473  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0606817  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0670246  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0673599  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0715212  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0722699  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0731193  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0732862  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0806663  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0807385  ONLINE       0     0     0
                    scsi-SATA_WDC_WD4001FAEX-0_WD-WMC1F0816943  ONLINE       0     0     0

        errors: No known data errors

    I have no errors, but the folder does not appear. What can I do? Sorry, adding the zfs list output:

        zfs list
        NAME          USED  AVAIL  REFER  MOUNTPOINT
        zpgd0         933K  35,5T  54,7K  /zpgd0
        zpgd0/iSCSI  54,7K  35,5T  54,7K  /zpgd0/iSCSI
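
    Since zfs list shows the dataset with a mountpoint, the likely gap is mounting rather than creation; a couple of hedged checks:

        # Does ZFS think the dataset is mounted, and where?
        zfs get mounted,mountpoint zpgd0/iSCSI

        # Mount just this dataset, or everything mountable in the pool
        zfs mount zpgd0/iSCSI
        zfs mount -a

    If zfs get reports mounted=yes, then /zpgd0/iSCSI does exist, and the confusion may just be that a freshly created dataset is an empty directory, which is easy to overlook in some listings.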

    Read the article

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself, with any user besides 'sa'. I am running in mixed mode. I have a login/user with a known username. Using that login, I can connect properly when directly on the server. When I attempt to log in from anywhere else, I receive a "Login failed for user ''" with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that login DOES exist, as I can use it locally. There are no further details about the error. Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch; same result, locally fine, remotely an error. EDIT: Partially resolved. I'm now past the base issue; the clients were trying to connect via the default instance, I don't know why. So, once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect, BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
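
    On the SQL Browser piece: the browser service answers discovery requests on UDP 1434 and tells clients which port the named instance is using, so that UDP port must be allowed through the firewall in addition to the instance's static TCP port. Hedged commands, run on the server (the instance name is a placeholder):

        rem Let SQL Browser discovery traffic through the Windows firewall
        netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434

        rem Confirm which TCP port the named instance is actually listening on
        sqlcmd -S .\INSTANCENAME -E -Q "SELECT local_net_address, local_tcp_port FROM sys.dm_exec_connections WHERE session_id = @@SPID"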

    Read the article

  • How to disable auto insert notification in Windows 7?

    - by White Phoenix
    Alright, here's the problem. My hard drive activity light on my custom-built PC is blinking exactly once every second. Microsoft has this to say on the issue: http://support.microsoft.com/kb/138598 There has been discussion of this issue several months ago: Why does my hard drive LED light blink every second? The problem seems to stem primarily from Windows 7 polling the CD-ROM/DVD drive every second to see if something is inserted. The Windows 7 users in the thread linked in the superuser question, https://social.technet.microsoft.com/Forums/fi-FI/w7itprohardware/thread/4f6f63b3-4b58-4154-9298-1566100f9d00, have confirmed that this IS a known issue with Windows 7. Some people point at the motherboard circuitry causing the CD-ROM and SATA activity to both be linked to that hard drive activity light, but whatever the case, the temporary solution seems to be to disable the CD/DVD-ROM drive in Device Manager. In fact, disabling the CD/DVD-ROM does stop the blinking, but of course this solution is counterproductive, because I shouldn't have to entirely disable a device to fix this problem. I've done the following suggestions from that thread:

    - Changed the autorun registry entry to 0
    - Completely disabled autoplay in the AutoPlay control panel
    - Disabled autoplay in the Local Group Policy Editor

    None of these stop the blinking; apparently these solutions work for both XP and Vista, but it seems to be different in Windows 7. So I'm wondering if anyone has found out how to completely disable the polling in Windows 7, or if this will just have to be an issue we have to live with. There's no option to disable the auto insert notification when you go to the device within Device Manager (there was in XP), so I've got no idea where this option is hidden, or whether there's a registry key I could change to stop the polling. Anyone have any idea?
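
    For completeness, the value usually meant by "the autorun registry entry" is the one below, under Services\cdrom; it is worth double-checking that this key was the one changed, rather than Explorer's NoDriveTypeAutoRun policy, which controls a different feature. A hedged sketch (reboot required; on Windows 7 many reports say even this does not stop the once-per-second polling, consistent with what is described above):

        rem 0 disables the CD-ROM driver's media change notification
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\cdrom" /v AutoRun /t REG_DWORD /d 0 /f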

    Read the article

  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/ When I run the command

        realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended

    everything seems to connect. My sssd.conf looks like the following:

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        [sssd]
        domains = DOMAIN.COMPANY.COM
        config_file_version = 2
        services = nss, pam

        [domain/DOMAIN.COMPANY.COM]
        ad_domain = DOMAIN.COMPANY.COM
        krb5_realm = DOMAIN.COMPANY.COM
        realmd_tags = manages-system joined-with-adcli
        cache_credentials = True
        id_provider = ad
        krb5_store_password_if_offline = True
        default_shell = /bin/bash
        ldap_id_mapping = True
        use_fully_qualified_names = True
        fallback_homedir = /home/%d/%u
        access_provider = ad

    My /etc/pam.d/common-auth looks like this:

        auth    [success=3 default=ignore]  pam_krb5.so minimum_uid=1000
        auth    [success=2 default=ignore]  pam_unix.so nullok_secure try_first_pass
        auth    [success=1 default=ignore]  pam_sss.so use_first_pass
        # here's the fallback if no module succeeds
        auth    requisite                   pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth    required                    pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth    optional                    pam_cap.so

    However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log:

        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip
        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth]
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module)
        Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2

    What do I need to do to allow Active Directory users the ability to log in?
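
    One hedged observation: with use_fully_qualified_names = True, sssd only knows the account as nwalke@DOMAIN.COMPANY.COM, so a bare nwalke at the SSH prompt fails exactly like this ("User not known to the underlying authentication module"). Two ways to test:

        # Either log in with the fully qualified name (ssh splits on the last @)...
        ssh nwalke@DOMAIN.COMPANY.COM@c-u14-dev1

        # ...or flip the sssd.conf option to short names and restart sssd:
        #     use_fully_qualified_names = False
        sudo service sssd restart

        # Either way, confirm the user resolves at all before blaming PAM
        getent passwd nwalke@DOMAIN.COMPANY.COM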

    Read the article

  • Does a 3ware "ECC-ERROR" matter on a JBOD when I have ZFS?

    - by Stefan Lasiewski
    I have a FreeBSD 8.x machine running ZFS with a 3ware 9690SA controller. The 3ware controller shows an ECC-ERROR on one of the disks:

        //host> /c0 show
        VPort Status         Unit  Size       Type  Phy Encl-Slot  Model
        ------------------------------------------------------------------------------
        p0    OK             u0    279.39 GB  SAS   0   -          SEAGATE ST3300657SS
        p1    OK             u0    279.39 GB  SAS   1   -          SEAGATE ST3300657SS
        p2    OK             u1    931.51 GB  SAS   2   -          SEAGATE ST31000640SS
        p3    ECC-ERROR      u2    931.51 GB  SAS   3   -          SEAGATE ST31000640SS
        p4    OK             u3    931.51 GB  SAS   4   -          SEAGATE ST31000640SS

    /c0 show events shows no ECC errors in its recent history. ZFS does not currently detect any errors; zpool status says "No known data errors". My question: is this ECC-ERROR something that I need to be concerned about? According to the 3ware CLI 9.5.2 manual, an ECC-ERROR means that the 3ware controller caught a read error for one or more sectors on this drive. This sometimes occurs when a RAID array is recovering from a failed disk. I believe ECC-ERRORs can also be detected when the 3ware controller verifies each disk. None of the drives have failed and thus there was no drive rebuild, so I assume that 3ware discovered a bad sector when it ran its weekly auto-verify scan of the disks. Is this a safe assumption? According to our logs, ZFS has not detected any bad sectors on this drive. ZFS can work around read errors: if ZFS detects a bad sector on the drive, it will simply mark that sector as bad and never use it again. From the ZFS perspective one bad sector isn't a big deal, although it might indicate that the drive is starting to go bad.
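
    A hedged way to decide how worried to be: pull the drive's own SMART counters through the 3ware card and re-run the controller's verify on that unit. The device node and port number below are examples (on FreeBSD the 9690SA typically appears via the twa driver as /dev/twa0):

        # SMART attributes for the disk on controller port 3
        smartctl -a -d 3ware,3 /dev/twa0

        # Kick off a fresh surface verify of unit u2
        tw_cli /c0/u2 start verify

    Rising Reallocated_Sector_Ct or Current_Pending_Sector values across repeated checks would support the "drive starting to go bad" reading; a single remapped sector that never grows is usually just noted and watched.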

    Read the article

  • Opera 10.5 RAM usage and Google Reader?

    - by David
    Hi all, today I upgraded to Opera 10.5 from Google Chrome, and I have two really important questions about it. 1) Is it normal for it to use SO MUCH RAM? Closing tabs doesn't help, but opening new ones adds to the usage. I can have just 4 tabs open and it goes up to the 300 MB mark, and I only have 1.5 GB in my laptop, 596 MB of it used by the graphics card, so this is really unacceptable. Is there a way to fix it? 2) Why does Google Reader feel so slow and unresponsive on it? It lags badly when I just try scrolling through the page. I know Opera is known for being really smooth while scrolling through pages. There's also a white bar at the bottom of the page that I can't get rid of; it blocks the "Next" and "Previous" buttons. The text between articles is also sort of intersecting, which just looks completely unattractive, and that's something I'm not used to with any web browser. I realize there's a built-in RSS reader, but it doesn't sync across multiple computers and is very late at updating. Here are my specs: Windows 7 Ultimate (x86), Intel Pentium M 1.86 GHz, 1.5 GB RAM, ATI Mobility Radeon X600 (64 MB dedicated, 596 MB shared)

    Read the article

  • BGP Multipath & return routes

    - by Dennis van der Stelt
    I'm probably a complete n00b concerning serverfault-related questions, but our IT department makes a bold statement I wish to verify. I've searched the internet but can find nothing related to my question, so I come here. We have Threat Management Gateway 2010, and we used to just route the request to IIS; it contained the client IP address, so we could see where it was coming from. But now they have turned on "Requests appear to come from the TMG server", so IP addresses aren't forwarded anymore; every request has the IP of the TMG server. The idea behind this is that, because of multipath BGP routes, an incoming request may go over route A while the acknowledgement messages return over route B. The claim is that because the request doesn't come from the first known source (our proxy) but instead from IIS, some smart routers at our website visitors' end don't recognize the acknowledgement message and filter it out. In other words, the response never arrives. Again, this is the claim. But I cannot find ANY resources on the internet that support it. I do read about BGP multipath, but more in the case that there are alternative routes when the fastest route fails for some reason. So is the claim completely bogus, or is there (some) truth to it? Can someone explain or point me to resources? Thanks in advance!

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant; would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant testing machines that process the same node definition as the real production machines? I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner, and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one get that node entry on the Puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good... However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it, and then re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the SQL machine! One solution I can think of is to avoid autosigning, put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, and then have a Vagrant SSH job move it into place. The downside of this is I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this. tl;dr: How do other people deal with Vagrant & Puppet SSL certificates for development or testing clones of production machines?
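
    For reference, the pre-seeded-certificate variant looks roughly like this in a Vagrantfile (a sketch in Vagrant 1.x syntax; box name, paths, and hostnames are placeholders), keeping autosigning switched off entirely:

        Vagrant.configure("2") do |config|
          config.vm.box      = "precise64"
          config.vm.hostname = "sql.vagrant.domain.com"

          # Copy a pre-generated agent certificate into place before Puppet runs,
          # so the master never needs to autosign *.vagrant.domain.com
          config.vm.provision :shell, inline:
            "mkdir -p /var/lib/puppet/ssl/certs /var/lib/puppet/ssl/private_keys && " \
            "cp /vagrant/ssl/sql.vagrant.domain.com.pem /var/lib/puppet/ssl/certs/ && " \
            "cp /vagrant/ssl/sql.vagrant.domain.com.key /var/lib/puppet/ssl/private_keys/"

          config.vm.provision :puppet_server do |puppet|
            puppet.puppet_server = "puppet.domain.com"
            puppet.puppet_node   = "sql.domain.com"   # reuse the production node definition
          end
        end

    Keeping the ssl/ directory out of the shared git repo (a .gitignore entry plus per-developer distribution over an authenticated channel) addresses the certs-in-repo worry while still avoiding autosigning.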

    Read the article

  • How to restore broken Ethernet functionality on Mac G5 running Mac OS 10.4.11 (Tiger)

    - by willc2
    I had a disk error that rendered my Mac unbootable. I repaired it with Tech Tool 4, but now networking does not work. Network Preferences reports that my Ethernet cable is unplugged. I know this is bogus because when I boot from an emergency partition, networking works correctly. Furthermore, wireless networking is also broken, which I tested with a known-good Wi-Fi dongle. Whenever I try to change the Network Port Configurations by creating a new one or renaming an existing one, I get this message in the console, in sets of two as shown:

        Error - PortScanner - setDevice, device == nil!
        Error - PortScanner - setDevice, device == nil!

    When I try to invoke the Network Diagnostics app, it immediately crashes. My first thought was to reinstall Tiger with the Archive and Install method so I don't have to reinstall all my applications, but I have lost my Tiger installer disk. My next thought was to buy Leopard for $107 on Amazon. If there is any way I can just repair my Tiger install, I would be happy to save that money. This is not my main machine and I am loath to put more money into it. How can I recover my network functionality? UPDATE: I found my Tiger install disk and tried an Archive and Install. It failed with an unhelpful error message along the lines of "Can't install, try again". I tried again but got the same error. My guess is that some corrupt or missing file in my User folder is preventing migration. I have a backup created with Super Duper that is a bit out of date but will start up the machine (with functional networking). I would love to just copy over the file(s) that got messed up, but I don't even know where to look. What is the likely location of the system files that would cause the aforementioned symptoms?
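
    On the "likely location" question, a hedged pointer: Tiger keeps its network interface state in plists under /Library/Preferences/SystemConfiguration/ on the boot volume (not in the User folder), and they are rebuilt automatically at boot, so moving them aside is a low-risk experiment before spending money on Leopard:

        # Park the files rather than deleting them, then reboot
        sudo mv /Library/Preferences/SystemConfiguration/NetworkInterfaces.plist ~/Desktop/
        sudo mv /Library/Preferences/SystemConfiguration/preferences.plist ~/Desktop/
        sudo reboot

    Since the emergency partition's networking works, comparing its SystemConfiguration directory against the broken one is another angle on the same idea.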

    Read the article

  • USB mouse disconnecting and reconnecting in windows and linux

    - by Kalak
    I have a problem similar to what is described at "Why is my USB mouse disconnecting and reconnecting randomly and often?", except it is happening in both Windows 7 and Linux (Ubuntu 12.04 LTS, fully patched), with multiple mice and multiple OSs. The mouse stops responding to input for about 3-5 seconds, then starts responding again. It's more frequent and lasts longer when running games (TF2, L4D, Dishonored, Borderlands 2, and more), but happens when just running the OS as well. I was hoping it was the motherboard, so I bought a USB 2.0 PCI card to try that, but it's still happening. I've stripped it down to just the keyboard and mouse (a different keyboard too, in case the keyboard was the problem), but it's still happening. All the hardware (mice and keyboards) works fine on other computers. I have literally pulled the mouse and keyboard out, plugged them into another computer (laptop), re-joined the online game, and had no problem with the keyboard and mouse combo that had just failed on my gaming rig. Please, no driver-related or Windows-only or Linux-only suggestions, as those wouldn't affect both OSs. Edit: known-good mice I've brought home are now going bad. I suspect the hardware is messed up (voltage?) and has been frying the mice.
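
    Since it spans both operating systems, the Linux side at least makes the USB layer easy to watch while a stall happens; a few hedged diagnostics:

        # Watch kernel USB events live; a disconnect/re-enumerate cycle during a
        # stall points at hardware or power rather than drivers
        tail -f /var/log/kern.log | grep -i usb

        # Rule out USB autosuspend power management as a contributor
        cat /sys/module/usbcore/parameters/autosuspend
        echo -1 | sudo tee /sys/module/usbcore/parameters/autosuspend

    Given the suspicion that the machine is damaging mice, testing through an externally powered USB hub would also isolate the ports' 5 V supply from the devices.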

    Read the article

  • How do I determine whether this email bounce is my fault?

    - by David Zaslavsky
    I use Google Apps to handle email for my personal website, so I have an email address [email protected] through that, and I also have a Gmail account [email protected]. Now, I've been trying to send emails to a particular recipient who shall be known as [email protected]. When I send the email from my Gmail account with the @gmail.com address, it works fine. However, when I send it from my Google Apps account with the @ellipsix.net address, I get a bounce message which includes the following text:

        Delivery to the following recipient failed permanently:

            [email protected]

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient
        domain. We recommend contacting the other email provider for further
        information about the cause of this error. The error that the other server
        returned was: 554 554 mail server permanently rejected message (#5.3.0) (state 17).

    The bounce message suggests that it is up to the mail administrator of the recipient domain example.com to fix the problem, whatever it is. But I would like to be as sure as possible that nothing needs to be fixed on my end. I already have DKIM signatures enabled for my domain, and I have published an SPF DNS record. Is there something else I should check or do, or can I be confident that it's up to the recipient to fix this issue? Does the "state 17" in the bounce message mean something relevant? I've included my domain name in the question so people who know more than me about this stuff can independently check the relevant DNS records or other information. This other question seems similar, but I've already investigated everything suggested in the answers there (except for contacting Google, which I don't want to do unless I suspect it's their issue to fix).
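
    For the "is my end clean" check, the relevant records are publicly queryable; hedged commands (the DKIM selector "google" is an assumption, being the one Google Apps historically used):

        # SPF record published for the sending domain
        dig +short TXT ellipsix.net

        # DKIM public key, assuming the selector is "google"
        dig +short TXT google._domainkey.ellipsix.net

    If both come back sane, a 554 permanent rejection with no further detail is the recipient server's own policy decision, which is consistent with the bounce pointing at the other administrator.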

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host has available a 6-drive configuration with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256 GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6, or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43 GB of writes per day and currently use about 300 GB of storage. The server acts as webserver, database, and image store for approximately 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x480 GB SSDs in RAID 1 and another 2x1 TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2 SSDs and 6 SSDs is minimal (compared to the overall cost of the rental). We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it and not have to worry about having separate 'fast and small' and 'slow and large' disks.
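
    For the capacity half of the RAID 5 / 6 / 10 decision, the arithmetic with six 256 GB drives is simple (before the planned 10-20% underprovisioning):

        RAID 5:  (6 - 1) x 256 = 1280 GB usable, survives any single drive failure
        RAID 6:  (6 - 2) x 256 = 1024 GB usable, survives any two drive failures
        RAID 10: (6 / 2) x 256 =  768 GB usable, survives one failure per mirror pair

    At roughly 300 GB used and 43 GB of writes per day, even RAID 10 leaves ample headroom, so the choice rests more on rebuild risk and how the card's parity RAID behaves on SSDs than on space.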

    Read the article

  • 32bit SQLServer with AWE NOT enabled. Buffer Cache Hit Ratio High, Disk Read Queue VERY HIGH, WHY?

    - by chenwq
    We have a "SQL Server 2005 SP3 32-bit Enterprise Edition" running on Windows 2003 32-bit Enterprise Edition with 12 GB RAM, on RAID 5 (5 physical disks). AWE had not been enabled; we enabled it and restarted SQL Server this afternoon after work, hoping performance will be better than before. But there is something we are very confused about. On working days, SQL Server has very bad performance. When we looked for reasons, we checked the Windows performance counters:

    - Avg. Disk Read Queue Length > 140
    - Avg. Disk Write Queue Length < 1
    - SQL Server Buffer Cache Hit Ratio > 96%
    - % Processor Time < 30%
    - SQL Server Total Server Memory < 1.8 GB

    Obviously, without AWE enabled, SQL Server can use less than 2 GB of memory. My questions: why is "SQL Server Total Server Memory" less than 2 GB? I thought SQL Server would use the whole 2 GB process address space; does this counter leave something out? And we know the server is suffering from a lack of memory, so why is the "Buffer Cache Hit Ratio" as high as 96%? Any advice is welcome!
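
    To confirm whether AWE actually took effect after the restart (it also requires the "Lock Pages in Memory" privilege for the SQL Server service account), a hedged check via sqlcmd:

        rem 'awe enabled' is an advanced option, so expose those first
        sqlcmd -S . -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'awe enabled'; EXEC sp_configure 'max server memory (MB)'"

    With AWE working, Total Server Memory can climb well past 2 GB, because buffer pool pages then come from AWE-mapped physical memory rather than the 2 GB virtual address space; capping max server memory (for example around 10 GB on this 12 GB box) leaves room for the OS.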

    Read the article

  • How To Map Fn+Key (Multiple Keys) To Other Key in Windows?

    - by Mohamed Meligy
    I've got a ThinkPad W530 laptop, which replaces the Application key (also known as the Menu key or mouse right-click key, usually between Right Alt and Right Ctrl) with the Print Screen key. So I need to replace Prt/Scr with the Application key. That's easy; all key mapping software like SharpKeys or whatever can do that, and there are even a few threads on Super User about dozens of those. But then, the part that makes this question different from the others (which is why it's not a duplicate, I think) is that I also don't want to completely lose the Prt/Scr key. I'm thinking about replacing it with either:

    - Fn + Prt/Scr
    - Fn + F2

    These seemed to have no special meaning, so I'm not overriding anything, just adding functionality to either of them (one of them, not both) to be the new Print Screen key. I couldn't find any key mapping software that can detect or let me select more than one key to map, even when the other key is something like the Fn key (although they can all map the Fn key itself, without combination). I know it may make sense why this restriction exists, but it would be really useful if I could get around it. Do you know any program that can do that? Thanks a lot.
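
    One hedged caveat before any tool recommendation: on most ThinkPads the Fn key is handled by the embedded controller and never reaches Windows as a scancode, so software generally cannot distinguish Fn+PrtScr from plain PrtScr. What a scripting remapper such as AutoHotkey can do is the first half plus a substitute chord that Windows can actually see; a minimal AutoHotkey (v1) sketch, with Ctrl+F2 as an example stand-in:

        ; Physical Print Screen key now acts as the Application (menu) key
        PrintScreen::AppsKey

        ; Ctrl+F2 becomes the new Print Screen
        ^F2::Send {PrintScreen}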

    Read the article

  • SQL transaction log backups conflicting with full backups?

    - by BradC
    On our SQL servers (2000, 2005, and 2008), we run full backups once a day in the evening, and transaction log backups every 2 hours. We haven't really worried about these two processes conflicting, but lately we've run into some of the following issues:

    - On one server, the trans log backup occasionally blocks the full backup, and must be manually stopped before the full backup can complete
    - We sometimes end up with a massively sized trans log backup file (sometimes larger than the full backup!) that seems to be produced at the same time the full backup is running

    I found a reference indicating that these are "not allowed" to run at the same time, whatever that means: SQL 2000 Books Online and SQL 2005 Books Online. I'm not sure whether that means the server will simply prevent them from running simultaneously, or if we ought to be explicitly stopping the log backups while the full backups are running. So are there known conflicts/issues between these? Does the answer differ between SQL versions? Should I have the trans log backup job check to see if the full backup is running before it executes? (And how do I do that...?)
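
    On the last question: on SQL 2005 and 2008, the log backup job can check a DMV for a running full backup before it starts; a hedged sketch (database name and path are placeholders, and SQL 2000 would need master..sysprocesses instead, since it predates DMVs):

        -- Skip this run if any full backup is currently executing
        IF EXISTS (SELECT 1 FROM sys.dm_exec_requests WHERE command = 'BACKUP DATABASE')
            RAISERROR('Full backup in progress; skipping log backup.', 10, 1)
        ELSE
            BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase.trn'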

    Read the article

  • Can't login via ssh after upgrading to Ubuntu 12.10

    - by user42899
    I have an Ubuntu 12.04 LTS instance on AWS EC2, and I upgraded it to 12.10 following the instructions at https://help.ubuntu.com/community/QuantalUpgrades. After upgrading I can no longer SSH into my VM. It isn't accepting my SSH key, and my password is also rejected. The VM is running, reachable, and SSH is started. The problem seems to be with the authentication part. SSH has been the only way for me to access that VM. What are my options?

        ubuntu@alice:~$ ssh -v -i .ssh/sos.pem [email protected]
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /home/ubuntu/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to www.hostname.com [37.37.37.37] port 22.
        debug1: Connection established.
        debug1: identity file .ssh/sos.pem type -1
        debug1: identity file .ssh/sos.pem-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: RSA 33:33:33:33:33:33:33:33:33:33:33:33:33:33
        debug1: Host '[www.hostname.com]:22' is known and matches the RSA host key.
        debug1: Found key in /home/ubuntu/.ssh/known_hosts:12
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: .ssh/sos.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentications that can continue: publickey,password
        Permission denied, please try again.
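
    Since SSH was the only door, the standard EC2 recovery path is to inspect the system from a second instance; a hedged outline (device names vary by virtualization type):

        # 1. Stop the broken instance in the AWS console (do not terminate)
        # 2. Detach its root EBS volume and attach it to a healthy instance as /dev/sdf
        # 3. On the healthy instance:
        sudo mkdir -p /mnt/rescue
        sudo mount /dev/xvdf1 /mnt/rescue

        # Check what the upgrade did to sshd and to the account
        less /mnt/rescue/etc/ssh/sshd_config
        ls -la /mnt/rescue/home/ubuntu/.ssh/
        sudo tail -50 /mnt/rescue/var/log/auth.log

    Fix or re-add the key in /mnt/rescue/home/ubuntu/.ssh/authorized_keys, unmount, reattach the volume to the original instance as its root device, and start it again.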

    Read the article

  • Passwordless SSH not working - keys copied and permissions set

    - by Comcar
    I know this question has been asked, but I'm certain I've done what all the other answers suggest.

    Machine A:
    - used ssh-keygen -t rsa to create id_rsa.pub in ~/.ssh/
    - copied Machine A's id_rsa.pub to Machine B user's home directory
    - made the file permissions of id_rsa.pub 600

    Machine B:
    - added Machine A's pub key to authorised_keys and authorised_keys2: cat ~/id_rsa.pub >> ~/.ssh/authorised_keys2
    - made the file permissions of id_rsa.pub 600

    I've also ensured both .ssh directories have permission 700 on both machines A and B. If I try to log in to machine B from machine A, I get asked for the password, not the SSH passphrase. I've got the root users on both machines to talk to each other using passwordless SSH, but I can't get a normal user to do it. Do the user names have to be the same on both sides? Or is there some setting elsewhere I've missed? Machine A is an Ubuntu 10.04 virtual machine running inside VirtualBox on a Windows 7 PC; Machine B is a dedicated Ubuntu 9.10 server.

    UPDATE: I've run ssh with the option -vvv, which produces many, many lines of output; these are the last few:

        debug3: check_host_in_hostfile: filename /home/pete/.ssh/known_hosts
        debug3: check_host_in_hostfile: match line 1
        debug1: Host '192.168.1.19' is known and matches the RSA host key.
        debug1: Found key in /home/pete/.ssh/known_hosts:1
        debug2: bits set: 504/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug3: Wrote 16 bytes for a total of 1015
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug3: Wrote 48 bytes for a total of 1063
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /home/pete/.ssh/identity ((nil))
        debug2: key: /home/pete/.ssh/id_rsa (0x7ffe1baab9d0)
        debug2: key: /home/pete/.ssh/id_dsa ((nil))
        debug3: Wrote 64 bytes for a total of 1127
        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/pete/.ssh/identity
        debug3: no such identity: /home/pete/.ssh/identity
        debug1: Offering public key: /home/pete/.ssh/id_rsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug3: Wrote 368 bytes for a total of 1495
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /home/pete/.ssh/id_dsa
        debug3: no such identity: /home/pete/.ssh/id_dsa
        debug2: we did not send a packet, disable method
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password
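
    One hedged observation on the steps above: OpenSSH reads ~/.ssh/authorized_keys (and, on older servers, authorized_keys2), spelled with a z, and the file must live inside ~/.ssh rather than the home directory; also, it is the private key ~/.ssh/id_rsa on Machine A that needs mode 600, while the .pub file's permissions don't matter. The debug output (key offered, server silently falls through to password) is exactly what a missing or misnamed authorized_keys produces. A minimal reset on Machine B, as the target user:

        mkdir -p ~/.ssh && chmod 700 ~/.ssh
        cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 600 ~/.ssh/authorized_keys

    From Machine A, ssh-copy-id user@machineB automates the same thing.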

    Read the article
