Search Results

Search found 36081 results on 1444 pages for 'object expected'.

Page 605/1444

  • Windows DNS Server 2008 R2 fallaciously returns SERVFAIL

    - by Easter Sunshine
    I have a Windows 2008 R2 domain controller which is also a DNS server. When resolving certain TLDs, it returns a SERVFAIL: $ dig bogus. ; <<>> DiG 9.8.1 <<>> bogus. ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31919 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;bogus. IN A I get the same result for a real TLD like com. when querying the DC as shown above. Compare to a BIND server that is working as expected: $ dig bogus. @128.59.59.70 ; <<>> DiG 9.8.1 <<>> bogus. @128.59.59.70 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 30141 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;bogus. IN A ;; AUTHORITY SECTION: . 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012012501 1800 900 604800 86400 ;; Query time: 18 msec ;; SERVER: 128.59.59.70#53(128.59.59.70) ;; WHEN: Wed Jan 25 14:09:14 2012 ;; MSG SIZE rcvd: 98 Similarly, when I query my Windows DNS server with dig . any, I get a SERVFAIL but the BIND servers return the root zone as expected. This sounds similar to the issue described in http://support.microsoft.com/kb/968372 except I am using two forwarders (128.59.59.70 from above as well as 128.59.62.10) and falling back to root hints so the preconditions to expose the issue are not the same. Nevertheless, I also applied the MaxCacheTTL registry fix as described and restarted DNS and the whole server as well but the problem persists. The problem occurs on all domain controllers in this domain and has occurred since half a year ago, even though the servers are getting automatic Windows updates. EDIT Here is a debug log. The client is 160.39.114.110, which is my workstation. 1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Rcv 160.39.114.110 2e94 Q [0001 D NOERROR] A (5)bogus(0) UDP question info at 000000001EA6BFD0 Socket = 508 Remote addr 160.39.114.110, port 49710 Time Query=1077016, Queued=0, Expire=0 Buf length = 0x0fa0 (4000) Msg length = 0x0017 (23) Message: XID 0x2e94 Flags 0x0100 QR 0 (QUESTION) OPCODE 0 (QUERY) AA 0 TC 0 RD 1 RA 0 Z 0 CD 0 AD 0 RCODE 0 (NOERROR) QCOUNT 1 ACOUNT 0 NSCOUNT 0 ARCOUNT 0 QUESTION SECTION: Offset = 0x000c, RR count = 0 Name "(5)bogus(0)" QTYPE A (1) QCLASS 1 ANSWER SECTION: empty AUTHORITY SECTION: empty ADDITIONAL SECTION: empty 1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Snd 160.39.114.110 2e94 R Q [8281 DR SERVFAIL] A (5)bogus(0) UDP response info at 000000001EA6BFD0 Socket = 508 Remote addr 160.39.114.110, port 49710 Time Query=1077016, Queued=0, Expire=0 Buf length = 0x0fa0 (4000) Msg length = 0x0017 (23) Message: XID 0x2e94 Flags 0x8182 QR 1 (RESPONSE) OPCODE 0 (QUERY) AA 0 TC 0 RD 1 RA 1 Z 0 CD 0 AD 0 RCODE 2 (SERVFAIL) QCOUNT 1 ACOUNT 0 NSCOUNT 0 ARCOUNT 0 QUESTION SECTION: Offset = 0x000c, RR count = 0 Name "(5)bogus(0)" QTYPE A (1) QCLASS 1 ANSWER SECTION: empty AUTHORITY SECTION: empty ADDITIONAL SECTION: empty Every option in the debug log box was checked except "filter by IP". By contrast, when I query, say, accounts.google.com, I can see the DNS server go out to its forwarder (128.59.59.70, for example). In this case, I didn't see any packets going out from my DNS server even though bogus. was not in the cache (the debug log was already running and this is the first time I queried this server for bogus. or any TLD). It just returned SERVFAIL without consulting any other DNS server, as in the Microsoft KB article linked above.
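    For reference, a quick sketch of checking whether the KB968372 MaxCacheTTL change actually took effect and whether the forwarder answers correctly when queried from the DC itself (the 172800/0x2A300 value is the one from the KB article; the forwarder address is the one quoted above):

        rem should report 172800 (0x2A300) if the registry fix from the KB is in place
        dnscmd /Info /MaxCacheTtl
        rem run on the DC itself: does the forwarder return NXDOMAIN for the bogus TLD?
        nslookup -type=A bogus. 128.59.59.70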

    Read the article

  • PowerDNS 3+ - Recursive queries for subdomains

    - by PDNS Troubles
    We are trying to find functionality in PDNS 3.x that existed in PDNS < 2.9.2.5, whereby if we have a domain in the database backend with records and a query is unable to resolve a subdomain, it would then query the recursor set up in the pdns.conf file. We have found that on CentOS 6.x the RPM packages are the latest version of pdns, whereas on 5.x the available package was pdns-2.9.22-4.el5. The pdns-2.9.22-4.el5 package works as expected, but when upgrading servers to CentOS 6.x we lose this required functionality. pdns-backend-mysql-2.9.22-4.el5.rpm fails to install on CentOS 6.x due to MySQL libs that aren't available; this is caused by an upgrade in the MySQL version, whereby the pdns MySQL backend requires older MySQL libs than what is available on CentOS 6.x. Installing from source is also troublesome, with the following errors - http://pastebin.com/B5cUuD08
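    A rough sketch of the setup usually suggested for PDNS 3.x, where the old overlay-recursion behaviour is no longer available: run pdns_recursor in front on port 53 and forward only the locally hosted zones to the authoritative server. The zone name, port and addresses below are placeholders, not values taken from the question:

        # recursor.conf (pdns-recursor) - answers clients, recurses for everything it does not host
        local-address=0.0.0.0
        forward-zones=example.com=127.0.0.1:5300   # hosted zone handed to the authoritative pdns

        # pdns.conf (authoritative server) - moved off port 53
        local-address=127.0.0.1
        local-port=5300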

    Read the article

  • Linux process management

    - by tanascius
    Hello, I started a long-running background process (dd reading from /dev/urandom) in my ssh console. Later I had to disconnect. When I logged in again (this time directly, without ssh), the process still seemed to be running. I am not sure what happened - I did not use disown. When I logged in later, the process was not listed in top at first, but after a while it reclaimed a high CPU percentage, as I expected. So I assume dd is still running. Now, I'd like to see the progress. I use kill -USR1 <pid> but nothing is printed. Is there any way to get the output again?
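    For what it's worth, the USR1 report goes to dd's stderr, which was attached to the SSH session that is now gone, so nothing will appear in the new terminal. A rough way to estimate progress without that output, assuming a Linux kernel with per-process I/O accounting (the pgrep call and paths are illustrative):

        pid=$(pgrep -x dd)
        cat /proc/$pid/io        # read_bytes / write_bytes show how much dd has processed so far
        ls -l /proc/$pid/fd      # confirms which input/output files dd still has open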

    Read the article

  • RDMA architecture - do you need adapters on both ends?

    - by Bobb
    I know Linux can use RDMA NICs like Solarflare... I just found that Intel has something similar, the NetEffect cards. But Intel talks all about clusters... Can someone please explain? If I want low-latency networking and install an RDMA NIC in my server, is there a limitation on where the cable can go? Is a specific device expected on the other end? Is it a special RDMA switch, an RDMA adapter before the switch, or something else? Why all this talk about clusters? What if I want a single server with Windows (I can install Windows HPC or Windows 2008 R2)?

    Read the article

  • Varnish Running VCC-compiler failed on purge

    - by FLX
    I've been following this guide which uses this default.vcl. However, when starting Varnish I get the following error: * Starting HTTP accelerator [fail] storage_malloc: max size 1024 MB. Message from VCC-compiler: Expected '(' got ';' (program line 341), at (input Line 43 Pos 22) purge; ---------------------# Running VCC-compiler failed, exit 1 VCL compilation failed Which means that there is something wrong with purge here: sub vcl_hit { if (req.request == "PURGE") { purge; error 200 "Purged."; } } I don't see anything wrong, can someone explain? Thanks!
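    One possible reading of the error (an assumption, since the guide and the full VCL are not shown here): the bare purge; statement inside vcl_hit is Varnish 3.x syntax, while a 2.x varnishd expects purge to be a function call, hence "Expected '(' got ';'". If the installed Varnish turns out to be 2.x, the roughly equivalent 2.x idiom would look like this:

        sub vcl_hit {
            if (req.request == "PURGE") {
                set obj.ttl = 0s;        # expire the cached object immediately
                error 200 "Purged.";
            }
        }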

    Read the article

  • .inputrc settings: delete-char and [] keybindings not working

    - by tanascius
    Hello, I am using mingw under windows. When I am using ruby (irb) my 'special' characters like []{} and \ are not working. This is because of my german keyboard, where these keys are used together with AltGr (Alt + Ctrl). I found a solution for this here or here. Now, when I add the line "\M-[": "[" to my .inputrc file the delete-key no longer works. It is defined as usual: "\e[3~": delete-char Pressing delete just returns [3, while Ctrl + v, delete returns ^[[3~ as expected. Somehow these two definitions in .inputrc do not work together. Any ideas? EDIT: It is only the delete key that is not working, my other bindings all work, like: "\e[1~": beginning-of-line # home (ok) "\e[2~": paste-from-clipboard # insert (ok) "\e[3~": delete-char # delete (PROBLEM) "\e[4~": end-of-line # end (ok) "\e[5~": history-search-backward # pageup (ok) "\e[6~": history-search-forward # pagedown (ok)
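    A possible explanation (not verified against this exact setup): in readline the Meta prefix is ESC, so "\M-[" is the same two bytes (ESC [) that begin "\e[3~"; once ESC [ is bound to insert a literal bracket, the longer delete sequence can no longer be matched. One workaround to try instead of remapping \M-[ is to let readline pass 8-bit AltGr input through unchanged, via these .inputrc settings:

        set input-meta on
        set output-meta on
        set convert-meta off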

    Read the article

  • Require TLS on RDP for all connections

    - by MarkM
    I have a 2008 DC and a 2008 AD CS server and a Windows 7 client. What I would like is to require the certificate to be used when RDPing to the server. The certificate is valid, and if I connect using the FQDN I am shown that I was authenticated by both the certificate and Kerberos, as expected. When I connect with just the hostname I am allowed to connect and am only authenticated by Kerberos, even though I have Require TLS 1.0 set on the server that I am RDPing to. I fully understand that the certificate will not be valid unless the server is accessed by FQDN; what I want to do is disallow connections that do not use the certificate AND Kerberos. I thought that setting Require TLS 1.0 would do it. What am I missing?
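    For reference, a sketch of the registry value behind the "require a specific security layer" setting on the default RDP listener - worth verifying against the policy you actually applied before relying on it:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v SecurityLayer /t REG_DWORD /d 2 /f
        rem SecurityLayer: 0 = RDP security layer, 1 = negotiate, 2 = TLS (SSL) required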

    Read the article

  • ls returns nothing only in certain directories

    - by Jakobud
    I have a RAID drive mounted here: /data/ And in certain directories like this one: /data/somedir/somesubdir/ when I run ls with or without any flags, the terminal doesn't return anything. It does not return an empty directory listing. It simply goes to the next line and sits there blank with no prompt coming up. I cannot CTRL-C out of it. I have to close this terminal instance and start over. At first I thought it was something to do with the ls command, but it's pointing to /bin/ls and I can ls other directories just fine. Also, running find /data/somedir/somesubdir immediately finds all the files just as expected.
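    A few diagnostic steps that might narrow this down, run from a second terminal while ls is hung (paths as above):

        ps -o pid,stat,wchan:20,cmd -C ls       # a 'D' state means ls is stuck in uninterruptible I/O, which also explains why CTRL-C does nothing
        strace -f ls /data/somedir/somesubdir   # shows the exact syscall (often a stat() on one bad entry) where it blocks
        dmesg | tail -n 50                      # look for RAID or filesystem errors logged at the same moment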

    Read the article

  • Security log overflowing with filtering blocks

    - by Jacob
    I have a Windows 7 workstation whose security log is overflowing with the following errors: Audit Failure 3/31/2010 2:00:50 PM Microsoft-Windows-Security-Auditing 5157 Filtering Platform Connection "The Windows Filtering Platform has blocked a connection." Audit Failure 3/31/2010 2:00:50 PM Microsoft-Windows-Security-Auditing 5152 Filtering Platform Packet Drop "The Windows Filtering Platform has blocked a packet." These are not unexpected events; the firewall is expected to drop unsolicited traffic. However, I can't figure out how to tell Windows to stop writing these events to the security log. I've seen this problem before and have been able to find an answer with the use of Google, but I wasn't able to locate one this time. Thanks!
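    A sketch of what usually stops these entries - disabling the two Filtering Platform audit subcategories from an elevated prompt (check this against your own audit policy requirements first):

        auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:disable /failure:disable
        auditpol /set /subcategory:"Filtering Platform Connection" /success:disable /failure:disable
        rem confirm the two subcategories now show "No Auditing"
        auditpol /get /category:"Object Access"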

    Read the article

  • What FTP clients securely handle FTP/TLS where the server has a self-signed cert?

    - by billpg
    I'm trying to connect to an FTP server that uses TLS on port 990. Unfortunately, the server uses a self-signed cert. What FTP clients for Windows handle this type of connection securely, such that I can securely verify the cert before continuing with the connection and logging in? (The server admin has supplied me with the expected certificate thumbprint to look for.) As an example of doing it wrongly, Core FTP LE 2.2 presents a dialog with basic information about the cert presented, inviting me to accept-once, accept-always or cancel. The dialog does not include the cert's hash/thumbprint, and without that thumbprint I can't verify whether the cert I'm being presented with is the right one.
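    Whatever client ends up being used, the thumbprint can also be checked out of band; a sketch assuming OpenSSL is available somewhere (the hostname is a placeholder, and port 990 is implicit TLS so no STARTTLS step is needed):

        openssl s_client -connect ftp.example.com:990 < /dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1
        # compare the printed SHA1 fingerprint with the thumbprint the server admin supplied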

    Read the article

  • Enable H.264 (or x264) in AVIDemux

    - by Thomas
    I am trying to get AVIDemux set up with the X264 codec using this tutorial. The following is what goes down when I get to the ./configure --enable-mp4-output command Thomas-Phillipss-MacBook:x264 tomdabomb2u$ sudo ./configure --enable-mp4-output Password: Unknown option --enable-mp4-output, ignored Found no assembler Minimum version is yasm-0.6.2 If you really want to compile without asm, configure with --disable-asm. So I tried it. Thomas-Phillipss-MacBook:x264 tomdabomb2u$ sudo ./configure --enable-mp4-output --disable-asm Unknown option --enable-mp4-output, ignored Warning: gpac is too old, update to 2007-06-21 UTC or later Platform: X86_64 System: MACOSX asm: no avs: no lavf: no ffms: no gpac: no pthread: yes filters: crop select_every debug: no gprof: no PIC: no shared: no visualize: no bit depth: 8 You can run 'make' or 'make fprofiled' now. I issued make, and then Thomas-Phillipss-MacBook:x264 tomdabomb2u$ ./x264 -v -q 20 -o foreman.mp4 foreman_part_qcif.yuv 176x144. And as expected, the results are: x264 [error]: not compiled with MP4 output support So I'm stuck. Any ideas?
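    A guess at what the configure output is pointing to: this x264 checkout no longer recognises --enable-mp4-output (MP4 muxing is enabled automatically when a recent GPAC is found), and the build is missing yasm as well as a new enough gpac. A sketch of the usual remedy on OS X, assuming Homebrew or MacPorts is available:

        brew install yasm gpac        # or: sudo port install yasm gpac
        ./configure                   # the summary should now show asm: yes and gpac: yes
        make
        ./x264 -v -q 20 -o foreman.mp4 foreman_part_qcif.yuv 176x144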

    Read the article

  • Red Hat cluster: Failure of one of two services sharing the same virtual IP tears down IP

    - by js01
    I'm creating a 2+1 failover cluster under Red Hat 5.5 with 4 services of which 2 have to run on the same node, sharing the same virtual IP address. One of the services on each node needs a (SAN) disk, the other doesn't. I'm using HA-LVM. When I shut down (via ifdown) the two interfaces connected to the SAN to simulate SAN failure, the service needing the disk is disabled, the other keeps running, as expected. Surprisingly (and unfortunately), the virtual IP address shared by the two services on the same machine is also removed, rendering the still-running service useless. How can I configure the cluster to keep the IP address up?

    Read the article

  • HTTP/1.1 Status Codes 400 and 417, cannot choose which

    - by TheDeadLike
    I have been referred here in the hope it might be of better help. I've got a processing file which handles the user-sent data; before that, however, it compares the input from the client to the expected values to ensure no client-side data change. I can say I don't know a lot about HTTP status codes, but I have done some research to choose which one is best for unexpected-input handling. So I came up with: 400 Bad Request: The request cannot be fulfilled due to bad syntax. 417 Expectation Failed: The server cannot meet the requirements of the Expect request-header field. Now, I cannot really be sure which one to use. I have seen 400 Bad Request used a lot; however, what I get from its description is that the error is due to a malformed request rather than illegal input. On the other hand, 417 Expectation Failed seems to fit my use, but I have never seen or experimented with this status before. I need your experience and opinions, thanks a lot! For full details, with form/process page drafts and my experiments, follow this link.
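    For context (general HTTP knowledge, not specific to this site): 417 is defined as a reply to the Expect request header, which an ordinary form post never sends, so 400 is the conventional choice for rejecting tampered input. A minimal response from the processing page might look something like this:

        HTTP/1.1 400 Bad Request
        Content-Type: text/plain

        Submitted values do not match the expected values.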

    Read the article

  • What is a plain text password and why can it be decrypted?

    - by Misha
    I was trying to understand the level of security offered by Windows picture passwords and ran across this claim on this website. Some of our password recovery utilities already implement Windows 8 plain-text password decryption. The upcoming release of Windows Password Recovery is expected to have a full-fledged Vault analyzer and offline decoder. I'm trying to understand what a plain text password is and if it is the default kind of password when I add a password to my account. My head is a bit muddled on this one so any clarification can help. It seems there are passwords that can be decrypted and those that can't. What can be decrypted? Is the password I enter in Windows exposed?

    Read the article

  • Theoretical Wi-Fi decay

    - by lithiium
    Is there a way to (theoretically at least) calculate the decay in bandwidth of a Wi-Fi link as a function of signal strength? For example, I know that I can theoretically expect 54 Mbps from 802.11g at 100% signal; what bandwidth should I expect at 30% signal? Is it linear? Is it the same? I could not find any source for this, but considering the error retransmissions involved, I guess it should be possible to calculate something like this. Does anybody know?

    Read the article

  • How to set WAN side buffers for WRT54GL running Tomato Firmware

    - by Vickash
    I've recently set up a machine running m0n0wall to try and fight buffer bloat and do some traffic shaping. It was more convenient (geographically speaking) to connect the cable modem directly to my old WRT54GL, then pass everything to the m0n0wall machine and have that do the real routing work. It took a bit of work, but it's working pretty well. I have a cable connection. I have m0n0wall set up to utilize only 90% of the specified speed of my subscription, which is fine. But I've noticed that at certain times of the day (possibly when my true bandwidth drops below that 90%), there's more latency if the connection is used heavily, and traffic shaping doesn't seem to work as well. I suspect this is caused by the buffers on the WRT54GL still being unnecessarily large. If the connection is working as expected, they won't get filled, but in times of reduced bandwidth they would. Does anyone know the command I need to execute, on the WRT54GL running Tomato Firmware, to reduce the buffers on the WAN interface to the minimum size possible?
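    One commonly suggested lever on these routers is the transmit queue length of the WAN interface; a sketch assuming the WAN side is vlan1, which is typical for a WRT54GL but worth confirming (e.g. with ifconfig) before using:

        ifconfig vlan1 txqueuelen 4     # shrink the WAN transmit queue (the stock default is usually 1000)

    Putting the line in Administration > Scripts (Init or Firewall) in the Tomato GUI should make it persist across reboots.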

    Read the article

  • jump to page of a pdf in google docs / drive / apps

    - by Aaron - Solution Evangelist
    I want to jump to a specific page of a PDF file in Google Docs, via the editor URL https://docs.google.com/file/d/xxx/edit or the embed URL https://docs.google.com/file/d/xxx/preview. I am not looking to use the http://docs.google.com/gview?url= approach referenced in the Stack Overflow question "how to open specific page on Google's docs viewer", as I want to do this for documents where authentication is required and the document is not available via a public URL. Is there some way of appending an anchor (I would have expected it to be https://docs.google.com/file/d/xxx/preview#10) or a query (e.g. https://docs.google.com/file/d/xxx/preview?page=10) to the Google Docs / Drive / Apps viewer?

    Read the article

  • grep pattern interpretted differently in 2 different systems with same grep version

    - by Lance Woodson
    We manufacture a Linux appliance for data centers, and all are running Fedora installed from the same kickstart process. There are different hardware versions, some with IDE hard drives and some SCSI, so the filesystems may be at /dev/sdaN or /dev/hdaN. We have a web interface into these appliances that shows disk usage, which is generated using "df | grep /dev/*da". This generally works for both hardware versions, giving output like the following: /dev/sda2 5952284 3507816 2137228 63% / /dev/sda5 67670876 9128796 55049152 15% /data /dev/sda1 101086 11976 83891 13% /boot However, for one machine, we get the following result from that command: Binary file /dev/sda matches It seems that it's grepping files matching /dev/*da for an unknown pattern for some reason, only on this box, which is seemingly identical in grep version, packages, kernel, and hardware. I switched the grep pattern to "/dev/.da" and everything works as expected on this troublesome box, but I hate not knowing why this is happening. Anyone have any ideas? Or perhaps some other tests to try?
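    A plausible explanation (worth verifying with ls /dev/*da on the odd box): the unquoted /dev/*da is expanded by the shell before grep ever runs. If only /dev/sda matches, the expansion becomes the grep pattern and everything works by luck; if that one box also has, say, a /dev/hda node, the first match becomes the pattern and the second becomes a file argument, so grep searches the device file itself and prints "Binary file /dev/sda matches". Quoting the pattern keeps it out of globbing:

        df | grep '/dev/[sh]da'     # the quoted pattern is passed to grep literally, never expanded by the shell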

    Read the article

  • Mac Joining Active Directory Still Prompts For Authentication

    - by David Potter
    My Mac is joined to an Active Directory domain. What I expected to see was the same ease of access to file shares and internal websites that Windows computers joined to the domain experience (i.e., no authentication needed; it just uses Windows Integrated Authentication). Instead I am asked for credentials each time I try to access those shares and protected websites (e.g. SharePoint). Is this normal behavior, or is something wrong with my Mac that it prompts me for my username and password for the domain when I access Windows file shares or intranet sites protected by NTLM/Kerberos? Machines include: MacBook Pros running Mountain Lion MacBook Pros running Lion MacServer running Lion Server

    Read the article

  • How to configure LAN IP Filter on Huawei E586 UMTS Modem

    - by Mose
    I need to prevent clients of a Huawei E586 UMTS-to-WiFi modem from downloading much data from specific servers, e.g. Windows Update or OS X update. On the config page of the device there's a "LAN IP Filter" which seems pretty good, but I can't figure out the right settings. (The original post includes screenshots of the filter form and of the device's help page.) My problem with this is that I want some wildcard for the local IP and port. In my opinion the help page cannot be correct with "lan port: enter 80", because the source port is normally dynamic. I tried to set it up as stated there, but as expected it doesn't work. As wildcards I tried * and "ALL", but nothing worked, as it prevents me from saving the settings with a "wrong value" error. Any suggestions? Thanks in advance!

    Read the article

  • Powershell (sqlps) lastbackupdate not changing despite having run a sqlserver backup

    - by user1666376
    I'm using Powershell to check last backup times across all our sqlserver databases. This seems to work really well, but I've got a question If I run this (a cut-down version of the actual script): dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate I get: Parent Name LastBackupDate ------ ---- -------------- [Server1] ADBA 10/09/2012 21:15:37 [Server1] ReportServer 10/09/2012 21:00:17 [Server1] ReportServerTempDB 10/09/2012 21:00:18 [Server1] db1 10/09/2012 21:15:35 If I then run a sql backup of the Server1 default instance, and run the same query the last backup date doesn't change: PS C:\temp> dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate Parent Name LastBackupDate ------ ---- -------------- [Server1] ADBA 10/09/2012 21:15:37 [Server1] ReportServer 10/09/2012 21:00:17 [Server1] ReportServerTempDB 10/09/2012 21:00:18 [Server1] db1 10/09/2012 21:15:35 ..but if I open a new powershell window, it shows the backup I just took: PS SQLSERVER:\> dir SQLSERVER:\SQL\Server1\default\databases | select parent, name, lastbackupdate Parent Name LastBackupDate ------ ---- -------------- [server1] ADBA 12/09/2012 09:03:23 [server1] ReportServer 12/09/2012 08:48:03 [server1] ReportServerTempDB 12/09/2012 08:48:04 [server1] db1 12/09/2012 09:03:21 My guess is that this is expected behaviour, but could anybody show me where it's documented/explained - I just want to understand what's going on. This is running the SQlps which came with 2008, against a 2008 instance. Thanks Matt
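    A likely explanation (an assumption based on how the SQL Server provider works): the provider hands back SMO objects that are cached for the lifetime of the PowerShell session, which is why a brand-new window sees the new backup while the existing one does not. Asking the objects to refresh themselves in the existing session is one possible workaround; a sketch using the same path as above:

        dir SQLSERVER:\SQL\Server1\default\databases |
            ForEach-Object { $_.Refresh(); $_ } |
            Select-Object parent, name, lastbackupdate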

    Read the article

  • HP Recovery Manager failed creating a backup disc

    - by Baehr
    I had to restore my computer (running Windows 7) back to its factory default, so I ran the HP Recovery Manager disc and chose to reformat my computer. Before doing this I was given an option to back up my current files to a DVD, and I did so, getting confirmation of a successful DVD burn. Booting up my computer, everything looks like it did when I first purchased it (as expected). But when I pop in the backup DVD and run the recovery EXE inside it, the application creates a directory inside C:\System Recovery Files which contains the empty directories [(^_^)] and Program Data. So essentially what I'm getting out of this is a failure in creating a backup, which is not only confusing but incredibly frustrating. Is there any way that I can recover the files lost in the

    Read the article

  • SSH keys fail for one user

    - by Eli
    I just set up a new Debian server. I disabled root SSH and password auth, so you've gotta use a key file. For my primary user, everything works exactly as expected. I used ssh-keygen -t dsa and got myself a public and private key. Put one in authorized keys, put the other in a pem file locally. I wanted to create a user that I can deploy things with, so I did basically the same process. I addusered it, made a .ssh folder, ran ssh-keygen -t dsa (I also tried RSA), put the keys in their appropriate locations. No luck. I'm getting a Permission denied (publickey) error. When I use the exact same keys as the account that works, same error. When I enable password authentication, I can log in via SSH with the password. How do I debug this?
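    The usual suspects for one-account-only publickey failures are ownership and permissions on the account's home and .ssh paths; a sketch of what to check, assuming the account is called deploy and lives under /home/deploy (adjust to the real name):

        ls -ld /home/deploy /home/deploy/.ssh /home/deploy/.ssh/authorized_keys
        chown -R deploy:deploy /home/deploy/.ssh
        chmod 700 /home/deploy/.ssh
        chmod 600 /home/deploy/.ssh/authorized_keys
        ssh -vvv deploy@server          # client-side view of which keys are offered
        tail -f /var/log/auth.log       # on the Debian server: the exact reason sshd rejects the key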

    Read the article

  • Problem with Windows activation on a VM (Virtual machine)

    - by Daisetsu
    I backed up a number of laptops to virtual machines before they were to be re-purposed, in case I need the data at some later time. While the physical-to-VM processes worked fine, I am encountering issues on some of the VMs. When I boot them I get an error message saying I MUST activate Windows in order to log in. This is expected because the hardware changed (from physical hardware to virtualized hardware). I click the OK button and expect to be prompted with ways to activate; Windows sits there for quite a while, then tells me that "Windows has already been activated". I click OK at that message and get taken back to the beginning, where I am asked to activate Windows. I have done some fairly intensive googling but haven't been able to find a real solution. EDIT: The laptops with the issues are 2 Sony Vaios; I believe they have the OEM version of the OS originally installed by the factory.
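    One hypothesis worth checking, given the Sony OEM detail: factory (SLP) activation is tied to the BIOS of the original laptop, so the copied install may never re-activate inside a VM with that key. A possible route - an assumption, not a tested fix - is to boot the VM into Safe Mode (which normally allows logon without activation), switch to the product key printed on the laptop's COA sticker, and activate that key instead:

        rem the key below is a placeholder for the COA key printed on the laptop's sticker
        slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
        slmgr.vbs /ato
        rem if online activation fails, slui.exe 4 starts the phone-activation wizard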

    Read the article

  • How to disable Tcp/Ip settings in windows 7 via GPO?

    - by Akash Kava
    I have enabled the following policies: "Prohibit TCP/IP advanced configuration", "Prohibit access to properties of components of a LAN connection", "Enable Windows 2000 Network Connections settings for Administrators". After doing all this, all machines running Windows XP, 2000 and Vista have the network settings Properties button disabled as expected. However, on all machines running Windows 7 there is no effect; I believe there are a few more steps. All Windows 7 machines are on the domain and we want to control this via the domain controller's GPO. Please let me know what I need to do to have Windows 7 disable the properties of a network connection. I am not a network expert; I read a few articles about what is new in Windows 7 GPO, but I am blank. Everything works fine on Windows XP, Vista and 2003 Server. Only Windows 7 is a problem.

    Read the article
