Search Results

Search found 13889 results on 556 pages for 'results'.


  • Email being marked as spam

    - by Rodrigo Ferrari
    Hello friends! I have tried a lot of changes, but still have no success sending the email correctly formatted. I'm using the same domain to send mail, and the message passes SPF and authentication, but it has been marked as spam for some accounts using Gmail or Google Apps. The headers are:

        Delivered-To: [email protected]
        Received: by 10.231.208.5 with SMTP id ga5cs194453ibb; Mon, 17 Jan 2011 11:08:33 -0800 (PST)
        Received: by 10.142.213.18 with SMTP id l18mr4141524wfg.192.1295291312735; Mon, 17 Jan 2011 11:08:32 -0800 (PST)
        Return-Path: <[email protected]>
        Received: from hm1315-29.locaweb.com.br (hm1315-29.locaweb.com.br [201.76.49.185]) by mx.google.com with ESMTP id a70si8528144yhd.33.2011.01.17.11.08.32; Mon, 17 Jan 2011 11:08:32 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 201.76.49.185 as permitted sender) client-ip=201.76.49.185;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 201.76.49.185 as permitted sender) [email protected]
        Received: from hm1974.locaweb.com.br (189.126.112.86) by hm1315-38.locaweb.com.br (PowerMTA(TM) v3.5r15) id h6i9r00nvfo8 for <[email protected]>; Mon, 17 Jan 2011 17:08:31 -0200 (envelope-from <[email protected]>)
        X-Spam-Status: No
        Received: from bart0020.locaweb.com.br (bart0020.email.locaweb.com.br [200.234.210.22]) by hm1974.locaweb.com.br (Postfix) with ESMTP id 9C03511E00B5; Mon, 17 Jan 2011 17:08:31 -0200 (BRST)
        X-LocaWeb-COR: locaweb_2009_x-mail
        Received: from admin.domain.com.br (hm686.locaweb.com.br [200.234.200.116]) (Authenticated sender: [email protected]) by bart0020.locaweb.com.br (Postfix) with ESMTPA id 4B2F08CAFD6B; Mon, 17 Jan 2011 17:08:31 -0200 (BRST)
        Message-ID: <[email protected]>
        Date: Mon, 17 Jan 2011 17:08:31 -0200
        Subject: Domain - Assunto
        From: Sistema <[email protected]>
        Reply-To: rodrigo <[email protected]>
        To: balucia <[email protected]>
        MIME-Version: 1.0
        Content-Type: text/html; charset=utf-8
        Content-Transfer-Encoding: quoted-printable
        X-Virus-Scanned: clamav-milter 0.96.5 at hm1974
        X-Virus-Status: Clean

    Messages with these headers keep being marked as spam. I have run out of ideas for fixing it, and there are people bothering me about this. Thanks and best regards.
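
    One way to approach this methodically is to read back what Gmail itself asserted about the message. Here is a small sketch that pulls the authentication headers out of a saved copy, using only the Python standard library; "message.eml" is a placeholder for wherever you saved the raw message from the Gmail account:

        import email

        # hypothetical path: the raw message saved from the recipient's mailbox
        with open("message.eml", "rb") as f:
            msg = email.message_from_binary_file(f)

        # show exactly which checks the receiver recorded
        for name in ("Received-SPF", "Authentication-Results", "DKIM-Signature"):
            value = msg.get(name)
            print(f"{name}: {value if value else '(missing)'}")

    In the headers above, SPF passes but there is no DKIM-Signature header at all, and the body is HTML-only (text/html with no text/plain alternative); signing with DKIM and sending a multipart/alternative body with a plain-text part are the usual first steps for improving Gmail deliverability.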

    Read the article

  • Proxy / Squid 2.7 / Debian Wheezy 6.7 / lots of TCP Timed-out

    - by Maroon Ibrahim
    I'm facing a lot of TCP timed-outs on a busy cache server. Below are my sysctl.conf configuration and the output of "netstat -st". Kernel: 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3 x86_64 GNU/Linux. Any advice or help would be highly appreciated.

    Sysctl.conf (cat /etc/sysctl.conf):

        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.tcp_tw_recycle = 1
        fs.file-max = 65536
        net.ipv4.tcp_low_latency = 1
        net.core.wmem_max = 8388608
        net.core.rmem_max = 8388608
        net.ipv4.ip_local_port_range = 1024 65000
        fs.aio-max-nr = 131072
        net.ipv4.tcp_fin_timeout = 10
        net.ipv4.tcp_keepalive_time = 60
        net.ipv4.tcp_keepalive_intvl = 10
        net.ipv4.tcp_keepalive_probes = 3
        kernel.threads-max = 131072
        kernel.msgmax = 32768
        kernel.msgmni = 64
        kernel.msgmnb = 65536
        kernel.shmmax = 68719476736
        kernel.shmall = 4294967296
        net.ipv4.ip_forward = 1
        net.ipv4.tcp_timestamps = 0
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv4.tcp_window_scaling = 1
        net.ipv4.tcp_sack = 0
        net.ipv4.tcp_syncookies = 1
        net.ipv4.ip_dynaddr = 1
        vm.swappiness = 0
        vm.drop_caches = 3
        net.ipv4.tcp_moderate_rcvbuf = 1
        net.ipv4.tcp_no_metrics_save = 1
        net.ipv4.tcp_ecn = 0
        net.ipv4.tcp_max_orphans = 131072
        net.ipv4.tcp_orphan_retries = 1
        net.ipv4.conf.default.rp_filter = 0
        net.ipv4.conf.default.accept_source_route = 0
        net.ipv4.tcp_max_syn_backlog = 32768
        net.core.netdev_max_backlog = 131072
        net.ipv4.tcp_mem = 6085248 16227328 67108864
        net.ipv4.tcp_wmem = 4096 131072 33554432
        net.ipv4.tcp_rmem = 4096 174760 33554432
        net.core.rmem_default = 33554432
        net.core.rmem_max = 33554432
        net.core.wmem_default = 33554432
        net.core.wmem_max = 33554432
        net.core.somaxconn = 10000

    Netstat results (netstat -st):

        IcmpMsg:
            InType0: 2
            InType3: 233754
            InType8: 56251
            InType11: 23192
            OutType0: 56251
            OutType3: 437
            OutType8: 4
        Tcp:
            20680741 active connections openings
            63642431 passive connection openings
            1126690 failed connection attempts
            2093143 connection resets received
            13059 connections established
            2649651696 segments received
            2195445642 segments send out
            183401499 segments retransmited
            38299 bad segments received.
            14648899 resets sent
        UdpLite:
        TcpExt:
            507 SYN cookies sent
            178 SYN cookies received
            1376771 invalid SYN cookies received
            1014577 resets received for embryonic SYN_RECV sockets
            4530970 packets pruned from receive queue because of socket buffer overrun
            7233 packets pruned from receive queue
            688 packets dropped from out-of-order queue because of socket buffer overrun
            12445 ICMP packets dropped because they were out-of-window
            446 ICMP packets dropped because socket was locked
            33812202 TCP sockets finished time wait in fast timer
            622 TCP sockets finished time wait in slow timer
            573656 packets rejects in established connections because of timestamp
            133357718 delayed acks sent
            23593 delayed acks further delayed because of locked socket
            Quick ack mode was activated 21288857 times
            839 times the listen queue of a socket overflowed
            839 SYNs to LISTEN sockets dropped
            41 packets directly queued to recvmsg prequeue.
            79166 bytes directly in process context from backlog
            24 bytes directly received in process context from prequeue
            2713742130 packet headers predicted
            84 packets header predicted and directly queued to user
            1925423249 acknowledgments not containing data payload received
            877898013 predicted acknowledgments
            16449673 times recovered from packet loss due to fast retransmit
            17687820 times recovered from packet loss by selective acknowledgements
            5047 bad SACK blocks received
            Detected reordering 11 times using FACK
            Detected reordering 1778091 times using SACK
            Detected reordering 97955 times using reno fast retransmit
            Detected reordering 280414 times using time stamp
            839369 congestion windows fully recovered without slow start
            4173098 congestion windows partially recovered using Hoe heuristic
            305254 congestion windows recovered without slow start by DSACK
            933682 congestion windows recovered without slow start after partial ack
            77828 TCP data loss events
            TCPLostRetransmit: 5066
            2618430 timeouts after reno fast retransmit
            2927294 timeouts after SACK recovery
            3059394 timeouts in loss state
            75953830 fast retransmits
            11929429 forward retransmits
            51963833 retransmits in slow start
            19418337 other TCP timeouts
            2330398 classic Reno fast retransmits failed
            2177787 SACK retransmits failed
            742371590 packets collapsed in receive queue due to low socket buffer
            13595689 DSACKs sent for old packets
            50523 DSACKs sent for out of order packets
            4658236 DSACKs received
            175441 DSACKs for out of order packets received
            880664 connections reset due to unexpected data
            346356 connections reset due to early user close
            2364841 connections aborted due to timeout
            TCPSACKDiscard: 1590
            TCPDSACKIgnoredOld: 241849
            TCPDSACKIgnoredNoUndo: 1636687
            TCPSpuriousRTOs: 766073
            TCPSackShifted: 74562088
            TCPSackMerged: 169015212
            TCPSackShiftFallback: 78391303
            TCPBacklogDrop: 29
            TCPReqQFullDoCookies: 507
            TCPChallengeACK: 424921
            TCPSYNChallenge: 170388
        IpExt:
            InBcastPkts: 351510
            InOctets: -609466797
            OutOctets: -1057794685
            InBcastOctets: 75631402
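
    Buried in that wall of counters is one number worth isolating: the retransmit rate. A quick plain-Python check of the two relevant Tcp: lines above (a rough sanity check, not a diagnosis):

        # counters copied from the netstat -st output above
        segments_sent = 2195445642
        segments_retransmitted = 183401499
        print(f"retransmit rate: {segments_retransmitted / segments_sent:.1%}")  # ~8.4%

    A retransmit rate above roughly 1-2% on a wired network usually points at congestion, a saturated link, or NIC/offload trouble rather than anything sysctl tuning can paper over; together with the 742 million packets collapsed in the receive queue, it suggests looking at the network path and interface statistics before the kernel settings.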

    Read the article

  • Hylafax / Capi4hylafax: faxgetty does not recognize number of lines

    - by Wrikken
    We've got a T.30 card with 30 working lines on it, but for some reason, if I add more than 30 faxes to the queue at any time (and we're busy enough at peak times that this happens a lot), faxgetty sends faxes to non-existent lines and they appear in the error queue as a 'busy' signal on the line, which results in a lot of failed faxes because the counter of max 3 tries increases rapidly. This is using faxgetty (USE_FAXGETTY="y" in /etc/default/hylafax). I've inherited this thing, so I'm not entirely sure how faxgetty is supposed to know the number of lines. However, if I alter the script to use faxmodem (USE_FAXGETTY="n" in /etc/default/hylafax and manually enabling 30 modems), this behavior goes away: new faxes 'wait' for a line to be available before trying to send, so each try/fail is a valid one on a working line, majorly decreasing the amount of failed faxes. However, when researching this, almost everyone talks about faxgetty being the preferred, more robust method, and on top of that, for some unexplained reason all FIFOs disappeared after several errorless hours with faxmodem, forcing a hylafax restart using faxgetty until we figure out why the faxmodem solution failed (which is another question, and somewhat out of scope here).

    Environment:

        Debian 2.6.26-2-amd64
        capi4hylafax 1:01.03.00.99.svn.300-12
        hylafax-client 2:4.4.4-10.1
        hylafax-server 2:4.4.4-10.1

    Config:

        --hfaxd.conf--
        LogFacility: daemon
        ServerTracing: 0x1ff

        --hyla.conf--
        Host: localhost
        Verbose: No
        VRes: 196
        TimeZone: local
        DialRules: "/etc/hylafax/dialrules.europe"

        --/etc/hylafax/config--
        InternationalPrefix: 00
        LongDistancePrefix: 0
        AreaCode: 99999
        CountryCode: 31
        DialStringRules: "etc/dialrules.europe"
        ModemGroup: any:faxCAPI
        SendFaxCmd: "/usr/bin/wrapc2faxsend"

        --/etc/hylafax/config.faxCAPI--
        SpoolDir: /var/spool/hylafax
        FaxRcvdCmd: /var/spool/hylafax/bin/faxrcvd
        PollRcvdCmd: /var/spool/hylafax/bin/pollrcvd
        FaxReceiveUser: uucp
        FaxReceiveGroup: dialout
        LogFile: /var/spool/hylafax/log/capi4hylafax  #no, checking this log did not yield anything interesting
        LogTraceLevel: 4
        LogFileMode: 0600
        ModemGroup: any:faxCAPI
        #repeats for faxCAPI2 through faxCAPI30, each of course with another device name / local ident:
        {
          HylafaxDeviceName: faxCAPI
          RecvFileMode: 0600
          FAXNumber: ****redacted****
          LocalIdentifier: ****some-ident-per-device***
          MaxConcurrentRecvs: 0
          OutgoingController: 1
          OutgoingMSN:
          SuppressMSN: 0
          NumberPrefix:
          NumberPlusReplacer: "00"
          UseISDNFaxService: 0
          RingingDuration: 0
          {
            Controller: 1
            AcceptSpeech: 0
            UseDDI: 0
            DDIOffset:
            DDILength: 0
            IncomingDDIs:
            IncomingMSNs:
            AcceptGlobalCall: 1
          }
        }

    So, in short: how does faxgetty determine the number of lines available? (The man page isn't terribly revealing, and I can't find an appropriate setting in hylafax-config.) And how can I get a capi4hylafax/hylafax setup that correctly queues more faxes than there are available lines, without immediately incrementing the fail count? We will not be receiving any faxes on this machine, b.t.w. As I said, I've inherited this thing, so if there are important configuration options I'm not including, please let me know.

    Read the article

  • Email Tracking - By IP

    - by disasm
    I am trying to track an email I received to see where it originated from (a harmful scam email). Here is the full header:

        Received-SPF: pass (domain of aol.com designates 205.188.109.203 as permitted sender)
        ZSwgeW91ciBpbmZvcm1hdGlvbiBoYXMgYmVlbiBhY2NlcHRlZCBhbmQgYXBw
        cm92ZWQgYnkgb3VyIFJlY3J1aXRpbmcgRGVwYXJ0bWVudC4gWU9VUiBGSVJT
        VCBBU1NJR05NRU5UIERFVEFJTFMuIFdlc3Rlcm4gVW5pb24gTW9uZXkgVHJh
        bnNmZXIgaGFkIHJlcG9ydHMgYWJvdXQgbGFwc2VzIGluIHRoZSBzZXJ2aWNl
        cyBvZiBzb21lIG9mIHRoZWlyIG91dGxldHMgYW5kIHNvbWUgbwEwAQEBAQN0
        ZXh0L3BsYWluAwMwAgN0ZXh0L2h0bWwDAzIx
        X-YMailISG: HeDj35sWLDvNbQaO2B72kIm7hhiubE2qUysyBFo2m3GE4wsk
         FOs6uaYWwBVUBbL_ubGEs5Gitm1b86QFPuQYdS3g5s2f_uY3dyHhXh7DcBIB
         Ad4mJBzeRozs5.0s6vbqhsIEYlaKI4EDrsocJEbDbTUiUq2UyxZ7Ery8Iqow
         _sBVN0msHJvcI09KwmaXUteV_qCL7qFlj7WNSmdMM.wVUm3pyiWzw1VUZlyD
         nwoEzdEImQdwmJqoTM1YE8XU6BE8IkmUkh1Z8XkfLtHqmAsPi1_.Msbi8ubk
         QD71BcTABjb7ixELg5NfomsyZKVN.9G.TnuISlX5umByjS701ITyQ2PUYXai
         hoVCWg37bKWNR9MAWdogUK8PIV3MWPv5gdglNAKuPdS5Z7.01J39UGyH7R60
         aIiIWdAsQ7_3VQBgIi9Seg2YM2j1U5g9QtdcJxBe0.1oigmj7G2sC9.YXNGX
         3abQ1EcWVlJLuSuBbQ4Flpbe_Y3_ssz8nZIK2YjKy0U8WWe77vfnxdEBsknf
         w2OA_PAzHtuAAuxETnAOU_MeMIssgRAtihKC_26Au1LnKYCGPGADFBLaLNHF
         30itI.kBvUjdvUfqV11dnGe50kFVzBGDMJFm8mXvb5WtIKq6qU1ZZmUroCew
         EgXjVZ.JKbux2KQmHh2zZbIJO3nOmLGkzuRczYiWCUNBDtmUZE6imuIQ4P6S
         RjusSbMITf2fIL_xe.qFCnW563sOdc4u.uXLDx.lq30740l8lWkkLX6KaDMF
         k9TY0VQKsMynqa4vXKpkTVNdukAcGd0p2i3newxY4q_9eZLn9czsJimfpKNW
         SX1bqjs0iCQHb4FTydf1Zpa2b.6lIhdjVlIM8tiWhfGhlUeM267T3njEM6nz
         0vxyjparR_G_s0VnIVhSeLw2F5KpAL226w2yA.WBcqoG2ROSa7fK.0ZYwy36
         Qcmk8C.HKj8Fng1qFLtEfaI4F66rCEJi7h1d6EK0Jk4a_TJnBBub1VQVoU.s
         SJ2ehs8aDjDqJw27_Ia4vYekKhIU8Oak0vYSmMXhZ5IvJfGfOHYVy4ebkoQf
         IDE3lSfex1nHZqcMqq0agPOZUOdznSIGJVx4T8m6MGwrEouvL.grhT6KUJQ5
         g8UX6DVTgj.8lHuTyOzj3A3NRwDFs2JqicprOMJRS4UWYX8eQ1y4j.4ora36
         LnWYm7k1n6X0lDBW5ZdZlsLy7.0al0G1uCIAZwBNo7FnHr6q2mQNwgFaPkNO
         FOiykqFHu0khLO_cZw87MpDslZO_3lFbJGlnchSs81hkESSQsldUxqdNkIV.
         yWsS58p1uuwVNksp4NB.QW41wfBtY5FU.Q80g8KiOZIz0daou3GlzoahcHoQ
         GPgSa86GKtSo.ew2xEUKk6c.ffAT9RjqNh5fzyhBdzEYURxJBYgMlL5DQp2G
         yYIGhlIS5h9JzPFVkk2XhBoY2NgEAAfJfAfqKoMNNKIW.bEwbgNa9xtSzHNg
         YdmDfOSkYkAGZDqwa.uONguq5.jqtnWDnx3GDyuoVg--
        X-Originating-IP: [205.188.109.203]
        Authentication-Results: mta1343.mail.bf1.yahoo.com from=aol.com; domainkeys=neutral (no sig); from=mx.aol.com; dkim=pass (ok)
        Received: from 127.0.0.1 (EHLO omr-d06.mx.aol.com) (205.188.109.203) by mta1343.mail.bf1.yahoo.com with SMTP; Fri, 18 Oct 2013 10:11:15 -0700
        Received: from mtaomg-da05.r1000.mx.aol.com (mtaomg-da05.r1000.mx.aol.com [172.29.51.141]) by omr-d06.mx.aol.com (Outbound Mail Relay) with ESMTP id ACFAD7012DFB4 for <.confidential.; Fri, 18 Oct 2013 13:11:15 -0400 (EDT)
        Received: from core-mfb002c.r1000.mail.aol.com (core-mfb002.r1000.mail.aol.com [172.29.47.199]) by mtaomg-da05.r1000.mx.aol.com (OMAG/Core Interface) with ESMTP id 2ADB6E000089 for <.confidential.; Fri, 18 Oct 2013 13:11:15 -0400 (EDT)
        References: <[email protected] <[email protected] <[email protected]
        To: .confidential.
        Subject: .confidential.
        In-Reply-To: <[email protected]
        X-MB-Message-Source: WebUI
        MIME-Version: 1.0
        From: .confidential.
        X-MB-Message-Type: User
        Content-Type: multipart/alternative; boundary="--------MB_8D09A3C2FD3105D_1338_33A66_webmail-d257.sysops.aol.com"
        X-Mailer: AOL Webmail 38109-STANDARD
        Received: from 66.199.226.81 by webmail-d257.sysops.aol.com (205.188.17.42) with HTTP (WebMailUI); Fri, 18 Oct 2013 13:11:15 -0400
        Message-Id: <[email protected]
        X-Originating-IP: [66.199.226.81]
        Date: Fri, 18 Oct 2013 13:11:15 -0400 (EDT)
        x-aol-global-disposition: G
        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mx.aol.com; s=20121107;
         t=1382116275; bh=9TXLF90L8beaMnjNzoKDwcv3Eq06jiZGN40YTBw2YOI=;
         h=From:To:Subject:Message-Id:Date:MIME-Version:Content-Type;
         b=xHVjoH5AccrOpPZoZZW+b41uJ7nzHDrryGsO6WzvtBOFGWX3xJMO3RB1ILFlJAsF6
          P9olk8Gz6LDydX9SOZ4w/yPI8y8eU6z1AauwOPxw9F1lu82goIGwK3jIcvOv72koB5
          Izq9By7L6PESEmmJ5nFc4ko9vH2CBMcJKPV95HTg=
        x-aol-sid: 3039ac1d338d52616bb37d53
        Content-Length: 26445

    The email was from the aol.com domain, so I understand the IP of AOL. My question is: looking at 66.199.226.81, would it be safe to say that the email originated from "Access Integrated Technologies"? Thanks for any help!
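
    For the final hop this can be answered mechanically: a PTR lookup plus a WHOIS query on 66.199.226.81 shows who the address block is allocated to. A sketch using only the Python standard library (WHOIS is a plain-text protocol on TCP port 43; whois.arin.net covers North American allocations):

        import socket

        ip = "66.199.226.81"

        # reverse DNS, if a PTR record exists at all
        try:
            print("PTR:", socket.gethostbyaddr(ip)[0])
        except socket.herror:
            print("PTR: none")

        # plain-text WHOIS query against ARIN
        s = socket.create_connection(("whois.arin.net", 43), timeout=10)
        s.sendall((ip + "\r\n").encode())
        chunks = []
        while (data := s.recv(4096)):
            chunks.append(data)
        s.close()
        print(b"".join(chunks).decode(errors="replace"))

    Note what the header itself already establishes: the "Received: from 66.199.226.81 ... with HTTP (WebMailUI)" line means someone logged into AOL webmail from that address, so the WHOIS OrgName identifies the network the sender connected from (a hosting company, possibly a proxy or compromised host), not necessarily the person.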

    Read the article

  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    after much testing, hundreds of tries and hours invested, I decided to consult you experts here.

    Overview: I want to apply a GPO to our users which will add a specific site to the Trusted Sites in the Internet Explorer settings for all users. However, the more I try, the more confusing the results become. The GPO is applied either to one group of users or to another one. Finally, I came to the conclusion that this weird behavior is caused by the poor organization of Users and Groups in Active Directory. As such, I want to attack the problem at the root: redesign the Active Directory Users and Groups.

    Scenario: There is one Domain Controller, and we use Terminal Services (so there is a Terminal Server as well). Users usually log on to the Terminal Server using Remote Desktop to perform their daily tasks. I would classify the users in the following way:

        IT: Admins, Software Development
        Business: Administration, Management

    The current structure of the Active Directory Users and Groups is a result of the previous IT management. The company used Small Business Server, which created multiple default user groups and containers. Unfortunately, the guys working here before me left no documentation at all. Now, as I inherit this structure, I am in no man's land, with no idea which direction to head first. As you can see, the Active Directory Users and Groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment, the guys before me simply copied the same structure.

    The real question: Where should I start cleaning, while ensuring that I won't totally break the current infrastructure? What is a sensible organization for the scenario I have explained above?

    Possibly useful info on the current structure:

        Computers folder: contains the Terminal Services Computers user group
            Members: TerminalServer computer located at Server -> Terminalserver OU
            Member of: NONE
        Foreign Security Principals: EMPTY
        Managed Service Accounts: EMPTY
        Microsoft Exchange Security Groups: not sure if needed; our email is administered by an external service provider
        Distribution Groups: not sure if needed
        Security Groups: there are a couple of groups which are needed
        SBS users: contains all the users
        Terminalserver: contains only the TerminalServer machine

    Read the article

  • Is it reasonable that a random disk seek & read costs ~16ms?

    - by fzhang
    I am frustrated by the latency of random reads from a non-SSD disk. Based on the results from the following test program, it takes ~16 ms for a random read of just 512 bytes without help from the OS cache. I tried changing 512 to larger values, such as 25k, and the latency did not increase much; I guess that is because the disk seek dominates the time. I understand that random reading is inherently slow, but I just want to be sure that ~16 ms is reasonable, even for a non-SSD disk.

        #include <sys/stat.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <fcntl.h>
        #include <limits.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(int argc, char** argv) {
            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) {
                fprintf(stderr, "Failed open %s\n", argv[1]);
                return -1;
            }
            /* enum, because a const size_t cannot size an initialized array in C */
            enum { count = 512 };
            const off_t offset = 25990611 / 2;
            char buffer[count] = { '\0' };

            struct timeval start_time;
            gettimeofday(&start_time, NULL);

            off_t ret = lseek(fd, offset, SEEK_SET);
            if (ret != offset) {
                perror("lseek error");
                close(fd);
                return -1;
            }
            ssize_t nread = read(fd, buffer, count);
            if (nread != (ssize_t)count) {
                fprintf(stderr, "Failed reading all: %zd\n", nread);
                close(fd);
                return -1;
            }

            struct timeval end_time;
            gettimeofday(&end_time, NULL);
            /* combine the fields: subtracting sec and usec separately can print a negative usec */
            long elapsed_usec = (end_time.tv_sec - start_time.tv_sec) * 1000000L
                              + (end_time.tv_usec - start_time.tv_usec);
            printf("elapsed: %ld usec\n", elapsed_usec);

            close(fd);
            return 0;
        }

    Read the article

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons being given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM), then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read. So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM" or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

    Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?

    Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague and I find it hard to believe, given that MS has provided the option to disable the pagefile.

    Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.

    Read the article

  • Reproducible file corruption for files on windows share

    - by bbuser
    We have about 40 file servers in our intranet to distribute software packages. The servers have names like example01, example02, etc. Every name resolves to a single IP address (A record) and the IP resolves back to that name (PTR) for every single server. The thing is that, for a certain file (mypackage.cab), I get different results depending on whether I use:

        \\192.0.2.01\fs\pkg\X12345678 or
        \\example01.foo\fs\pkg\X12345678

    In one case the file is correct; in the other case the file has exactly the right size, but it is all zeros. For a certain combination of client and server I can reproduce this reliably. It doesn't matter whether I download in Windows Explorer, via robocopy, or even from Linux with smbclient. It's always the same: one file corrupt, the other OK. It happens only for certain combinations of clients and servers, not others. For example:

        client01 example01.foo -> OK (192.0.2.01 is also OK)
        client01 example02.foo -> broken (but 192.0.2.02 is OK)
        client02 example01.foo -> broken (but 192.0.2.01 is OK)
        client02 example02.foo -> OK (192.0.2.02 is also OK)
        client03 example06.foo -> OK (but 192.0.2.06 is broken)
        client03 example07.foo -> OK (192.0.2.07 is also OK)
        etc...

    In some cases I get the broken file when I use the IP address, in other cases when I use the name. For every client the majority of servers is OK, but from every client I tested I have at least 4 cases of broken files. All this happens only for mypackage.cab (about 5k in size); it never happened for any of the other files in the same directory. Confused? Certainly I am. Any idea what can cause this, or any idea what to try in order to figure it out, is welcome. Clients are Windows XP. Servers are NetApp filers I don't have access to. I can (and will) contact the filer team again, but first I have to have an idea what is going on.
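
    Since the corrupt copy reportedly has the right size but zeroed content, hashing the file over both paths from the same client turns the comparison into something scriptable that you can hand to the filer team. A sketch (Python, run from one of the affected clients; the UNC paths follow the question's placeholders - adjust the trailing file name if X12345678 is a directory):

        import hashlib

        def md5_of(path):
            """Stream the file in 1 MB chunks so large packages do not fill RAM."""
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # same share, addressed by IP and by name
        for path in (r"\\192.0.2.01\fs\pkg\X12345678\mypackage.cab",
                     r"\\example01.foo\fs\pkg\X12345678\mypackage.cab"):
            print(path, md5_of(path))

    If the zero-filled copy always arrives via one particular name/IP combination, that narrows the fault to something in that resolution path (for instance a caching or WAN-acceleration device between client and filer) rather than to the filer or the file itself.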

    Read the article

  • How to move Mailboxes over from old Exchange 2007 to new EBS 2008 network?

    - by Qwerty
    This q is similar to: http://serverfault.com/questions/39070/how-to-move-exchange-2003-mailbox-or-store-from-2003-to-2007-on-separate-networks

    Basically I am trying to move our Exchange mailboxes over to a test domain that is hosting EBS 2008 with Exchange 2007. We plan to move as soon as we can when we have our Exchange data over. I have tried moving a DB with mailboxes over but cannot get it to mount in the new Exchange in any way possible, including mounting it onto a recovery store. From my understanding, the ONLY prerequisite for moving Exchange DBs across is that it must have the same Organizational name (unlike previous versions of Exchange). If anyone has any insight as to why I cannot mount and simply reattach the mailboxes, please give me an idea as to what could be wrong. It should be as simple as this. Note that the DBs I have are in a clean state. I cannot use ExMerge because I am not running any mailboxes on 2003. I have also tried using a 32-bit Vista machine with the Export-Mailbox cmdlet to extract mailboxes, but anything I do to it results in permission errors. I have tried to troubleshoot these with no success. I am running as full admin with proper Exchange roles and yet it still gives me access denied errors:

        Export-Mailbox : MapiExceptionNetworkError: Unable to make admin interface connection to server. (hr=0x80040115, ec=-2147221227)

    Also, some errors show in the management console:

        get-MailboxDatabase Completed
        Warning: ERROR: Could not connect to the Microsoft Exchange Information Store service on server TATOOINE.baytech.local. One of the following problems may be occurring:
        1- The Microsoft Exchange Information Store service is not running.
        2- There is no network connectivity to server TATOOINE.baytech.local.
        3- You do not have sufficient permissions to perform this command. The following permissions are required to perform this command: Exchange View-Only Administrator and local administrators group for the target server.
        4- Credentials have been cached for an unpriviledged user. Try removing the entry for this server from Stored User Names and Passwords.

    Why I have to use a 32-bit machine to export a simple .pst file is beyond me... So yeah, I am now out of ideas and any help would be great! Thanks in advance.

    Read the article

  • OpenWRT based gateway with dnsmasq and internal server with bind

    - by Peter
    I have a router based on OpenWRT which has dnsmasq 2.59. Inside my local area network I have an NS server running bind. This server has internal and external views for a couple of my domains. My router forwards port 53 TCP and UDP from the outside IP (router WAN) to this server. For external clients everything works fine. In order to organize the internal view, I decided to add these exceptions to /etc/dnsmasq.conf:

        server=/mydomain1.com/192.168.1.1
        server=/mydomain2.com/192.168.1.1
        server=/mydomain3.com/192.168.1.1

    (192.168.1.1 is the IP address of the NS server.) According to the dnsmasq man page:

        More specific domains take precedence over less specific domains, so:
        --server=/google.com/1.2.3.4 --server=/www.google.com/2.3.4.5
        will send queries for *.google.com to 1.2.3.4, except *www.google.com, which will go to 2.3.4.5

    so each of these domain names, with all its sub-domains, is supposed to be forwarded to my NS server. Everything works (SOA, NS, MX, CNAME, TXT, SRV etc.) except the A record:

        # nslookup -type=a mydomain1.com
        Server: 192.168.1.100
        Address: 192.168.1.100#53
        *** Can't find mydomain1.com: No answer

    (192.168.1.100 is the IP address of my router, i.e. dnsmasq.) However, I can get the answer for the TXT record query:

        # nslookup -type=txt mydomain1.com
        Server: 192.168.1.100
        Address: 192.168.1.100#53
        mydomain1.com text = "v=spf1 include:mydomain1.com -all"

    When I specify the local IP of my NS server instead (direct access to the server, bypassing dnsmasq), the results are:

        # nslookup -type=a mydomain1.com 192.168.1.1
        Server: 192.168.1.1
        Address: 192.168.1.1#53
        Name: mydomain1.com
        Address: 192.168.1.1

    There is a similar situation with the MX record:

        C:\>nslookup -type=mx mydomain1.com
        Server: router.lan
        Address: 192.168.1.100
        mydomain1.com MX preference = 10, mail exchanger = mail.mydomain1.com
        mydomain1.com nameserver = ns.mydomain1.com
        mail.mydomain1.com internet address = 192.168.1.1
        ns.mydomain1.com internet address = 192.168.1.1

        C:\>nslookup -type=a mail.mydomain1.com
        Server: router.lan
        Address: 192.168.1.100
        *** No address (A) records available for mail.mydomain1.com

    This is a dig result:

        # dig +nocmd mydomain1.com any +multiline +noall +answer
        mydomain1.com. 86400 IN SOA ns.mydomain1.com. hostmaster.mydomain1.com. (
            121204007 ; serial
            28800 ; refresh (8 hours)
            7200 ; retry (2 hours)
            604800 ; expire (1 week)
            3600 ; minimum (1 hour)
        )
        mydomain1.com. 86400 IN NS ns.mydomain1.com.
        mydomain1.com. 86400 IN A 192.168.1.1
        mydomain1.com. 604800 IN MX 10 mail.mydomain1.com.
        mydomain1.com. 3600 IN TXT "v=spf1 include:mydomain1.com -all"

    When I try to ping:

        # ping mydomain1.com
        ping: cannot resolve mydomain1.com: Unknown host

    Is it a bug in dnsmasq 2.59? How do I deal with this problem?
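
    To pin down which side drops the A answer, it helps to script the identical query against dnsmasq and against bind and compare the results. A sketch with dnspython (assuming pip install dnspython; on dnspython older than 2.0, use r.query instead of r.resolve):

        import dns.resolver

        def query_a(server, name):
            """Ask one specific server for an A record, ignoring /etc/resolv.conf."""
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            try:
                return [rr.address for rr in r.resolve(name, "A")]
            except dns.resolver.NoAnswer:
                return "NoAnswer"
            except dns.resolver.NXDOMAIN:
                return "NXDOMAIN"

        for server in ("192.168.1.100", "192.168.1.1"):  # dnsmasq first, then bind
            print(server, query_a(server, "mydomain1.com"))

    If bind returns the address while dnsmasq returns NoAnswer only for the A type, the bind views are fine and the problem is isolated to how this dnsmasq build forwards or filters that record - worth retesting against a newer dnsmasq before digging any deeper.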

    Read the article

  • Adobe Reader not loading form content

    - by wullxz
    We have an FDF file which is used to offer an online application option. The FDF is filled out and sent to a mailbox. When I open the received file, Adobe Reader starts, loads the document in Internet Explorer (I had to change my default browser because it doesn't work in Chrome - the customer uses IE as default) and displays a warning that Adobe Reader has blocked the connection to the server where the initial document is saved. I can then click on "Trust this document once" (translated by me!) or "Add this host to trusted hosts" (also translated by me!). The second option doesn't work at all. The first option works, but is a little bit annoying. I looked into Adobe Reader's options (Edit - "Voreinstellungen" in German / the last option - Security (advanced)) and found the possibility to add hosts, files and directories, or to let Adobe Reader use the Trusted Websites list from the Internet options. When I add the website either to Trusted Websites or to the trusted list in Adobe Reader's options, the warning doesn't pop up, but the content of the input boxes prefilled by the applicant doesn't show up on Windows 7 - though it does show up on Windows XP. (A screenshot, not included here, shows the settings window described in the last paragraph; the big input box at the bottom normally holds the trusted files/directories/hosts list.)

    System information:

        Windows 7 Enterprise x64
        Adobe Reader X
        multiple IE versions (mine is the latest, but there's also IE 7 or 8)

    How do I get Adobe Reader to load the content of the form? This behaviour can be reproduced on a PC: when opening an FDF from a command line, the form fields are blank even though there is data in the FDF and the PDF is located in a manually entered trusted folder. Steps to reproduce:

        Clean install a Windows 7 PC (or use a virtual box)
        Map a network drive to a shared folder with a subfolder, e.g. c:\test\docs becomes m:\docs
        Set security permissions to allow full control to everyone
        Add an FDF and a matching PDF in the subfolder
        Manually add m:\docs to each of the trusted folders in the trust manager registry settings
        Ensure that Enhanced Security is on
        Run a command line to open the FDF file

    Expected result: the PDF is opened in Adobe Reader with the form fields filled out with data.
    Actual results: the PDF is opened with blank fields, and a 'yellow bar' appears asking to add the document to trusted locations.

    It appears that Adobe Reader XI is ignoring the privileged locations entries in the registry. Adding the document via the 'yellow bar' adds the individual document, within the same folder, to the privileged locations, but that means the process has to be repeated for every document that needs to be opened from the folder.
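
    Since the trusted folders were added manually in the trust manager registry settings, a first check is to dump that registry branch and confirm the entries really are where this Reader build looks. A hedged sketch (Python with the standard winreg module, run on the affected machine; the key path is an assumption - the version component is 10.0 for Reader X and differs for XI, so adjust it to what regedit actually shows):

        import winreg

        def dump(root, path, indent=0):
            """Recursively print every value and subkey under a registry path."""
            try:
                key = winreg.OpenKey(root, path)
            except OSError:
                print(" " * indent + path + " (missing)")
                return
            print(" " * indent + path)
            i = 0
            while True:  # values: EnumValue raises OSError past the last index
                try:
                    name, value, _ = winreg.EnumValue(key, i)
                    print(" " * (indent + 2) + f"{name} = {value!r}")
                    i += 1
                except OSError:
                    break
            j = 0
            while True:  # subkeys, recursively
                try:
                    sub = winreg.EnumKey(key, j)
                    dump(root, path + "\\" + sub, indent + 2)
                    j += 1
                except OSError:
                    break

        # assumed key path; change "10.0" to match the installed Reader version
        dump(winreg.HKEY_CURRENT_USER, r"Software\Adobe\Acrobat Reader\10.0\TrustManager")

    Comparing this dump against the entry the 'yellow bar' creates for a single document shows exactly which value name and format Reader expects for a whole-folder entry.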

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Win 2003 64-bit, 8 CPUs, 16GB memory, separate SQL 2005 DB server. We had a serious slowdown today on an otherwise fairly well-performing ASP.NET site. For a period of a couple of hours, all page requests were taking a very long time to be served - e.g. 30-60s compared to the usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocks occurring, and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters:

        Current Anonymous Users (for the site in question)
        Get Requests/sec (ditto)
        Requests/sec for the ASP.NET application running the site

    Get Requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However, Current Anonymous Users was around 200. And then, as I was watching, Current Anonymous Users began to climb steeply, going up to about 500 within a few minutes. All this time, Get Requests/sec and Requests/sec for ASP.NET were, if anything, going down. I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (with a Current Connections counter added):

        Current Anonymous Users: average 30
        Get Requests/sec: average 100
        Requests/sec for ASP.NET: 5
        Current Connections: average 300

    I have also observed an inverse relationship between Get Requests/sec and Current Anonymous Users. Usually both are fairly steady, but there are short periods when Get Requests/sec goes down dramatically and Current Anonymous Users goes up in a perfect mirror image. Then they flip back to their usual levels. So, my questions are:

    Thinking of the original performance issue - if w3wp.exe CPU and memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer to be served than usual?
    What other counters should I be looking at if this happens again?
    What explains the inverse relationship between Get Requests/sec and Current Anonymous Users?
    What could explain Current Anonymous Users going from 200 to 500 within a few minutes?

    Many thanks for any insight into this.
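
    Both the inverse relationship and the sudden climb drop out of Little's law: concurrency = arrival rate x time each request stays open. When pages get slower at a constant arrival rate, the number of in-flight users has to rise. A toy check against the figures above (plain Python; the latencies are the reported ones):

        # Little's law: L = lambda * W (concurrent requests = arrival rate x residence time)
        arrival_rate = 100  # Get Requests/sec, roughly constant per the counters
        for latency in (0.3, 2.0, 5.0):  # seconds per request: calm, normal page time, incident
            print(f"{arrival_rate} req/s x {latency}s -> ~{arrival_rate * latency:.0f} concurrent users")

    At 2 s per request, 100 req/s sustains roughly 200 concurrent users - the steady reading before the climb - and stretching request time toward 5 s pushes concurrency toward 500 with no change in traffic, which matches what the counters did. In other words, Current Anonymous Users climbing while Requests/sec fell is a symptom of requests queueing somewhere, not of extra load arriving.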

    Read the article

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the response increases. For example, I ran four different tests; here are the results:

        15kb response through HAProxy:
            Avg. response time: 0.34 secs
            Transaction rate: 763 trans/sec
            Throughput: 11.08 MB/sec

        2kb response through HAProxy:
            Avg. response time: 0.08 secs
            Transaction rate: 1171 trans/sec
            Throughput: 2.51 MB/sec

        15kb response directly to server:
            Avg. response time: 0.11 secs
            Transaction rate: 1046 trans/sec
            Throughput: 15.20 MB/sec

        2kb response directly to server:
            Avg. response time: 0.05 secs
            Transaction rate: 1158 trans/sec
            Throughput: 2.48 MB/sec

    All transactions are HTTP requests. As you can see, the difference in response times is much bigger when the response is bigger than when it is smaller. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the test itself was run using siege, and during the test there was only one server behind HAProxy (the same one that was used in the direct-to-server tests). Here is my haproxy.config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 10000
            user haproxy
            group haproxy
            daemon
            #debug

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            option httpclose
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

        listen lb1 10.1.10.26:80
            maxconn 10000
            server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in terms of options in this file that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings, but I could not find a lot of information on this; most documentation covers the sysctl stuff for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to auto-tune, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect or a negative effect on performance. Just a notice: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec across many servers, so if anyone could provide information to help me reach that goal, it would be much appreciated. Thank you very much for any information you can provide, and if you need any more information from me, please let me know; I will get you anything I can.
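
    Before touching sysctls, it is worth extracting what the four runs already imply about per-request proxy overhead. A quick plain-Python pass over the figures above (a rough reading of the numbers, not a diagnosis):

        # (avg response seconds) from the siege runs above
        direct = {"15kb": 0.11, "2kb": 0.05}
        proxied = {"15kb": 0.34, "2kb": 0.08}

        for size in ("15kb", "2kb"):
            overhead = proxied[size] - direct[size]
            print(f"{size}: proxy adds ~{overhead:.2f}s per request")

        # 15kb: ~0.23s added; 2kb: ~0.03s added. The overhead grows with body
        # size rather than being a fixed routing cost, which is consistent with
        # per-request connection churn ('option httpclose' forces a fresh
        # connection for every request) plus relaying the larger body.

    Under that reading, enabling connection reuse toward the server and retesting would be a cheaper first experiment than kernel tuning, since a fixed-cost proxy hop would have added the same delay to both response sizes.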

    Read the article

  • How do I correct a directory incorrectly copied into itself?

    - by Peter Boughton
    Given the following situation...

        <path>/mydir1/mydir2

    ...where mydir2 should have overwritten mydir1, but was instead placed inside it - and both directories actually have the same name. How is that fixed? Attempting to do

        mv <path>/mydir/mydir/* <path>/mydir/    or
        mv <path>/mydir <path>/

    results in:

        mv: cannot move `<path>/mydir/mydir' to a subdirectory of itself, `<path>/mydir'

    This seems stupidly simple, but it's late here and I can't figure it out. There are seventeen such directories to fix (the path differs for each, but the mydir name is the same). To confirm, the error message can be caused with this:

        # cd /path/to/directory
        # mv mydir/mydir ./
        mv: cannot move `mydir/mydir' to a subdirectory of itself, `./mydir'

    Also tried:

        # mv mydir/mydir/* mydir/
        mv: cannot move `mydir/mydir/otherdir1' to a subdirectory of itself, `mydir/otherdir1'
        mv: cannot move `mydir/mydir/otherdir2' to a subdirectory of itself, `mydir/otherdir2'

    and...

        # mv /path/to/directory/mydir/mydir/otherdir1 /path/to/directory/mydir/
        mv: cannot move `/path/to/directory/mydir/mydir/otherdir1' to a subdirectory of itself, `/path/to/directory/mydir/otherdir1'

    and using a temporary directory:

        # mv mydir/mydir ./mydir-temp
        # mv mydir-temp/* mydir/
        mv: cannot move `mydir-temp/otherdir1' to a subdirectory of itself, `mydir/otherdir1'
        mv: cannot move `mydir-temp/otherdir2' to a subdirectory of itself, `mydir/otherdir2'

    I found a similar question, "How to recursively move all files (including hidden) in a subfolder into a parent folder in *nix?", which suggested that mv bar/{,.}* . would do this. But this also gives the same errors, as well as confusingly picking up . and .. from somewhere.

        # cd mydir
        # mv mydir/{,.}* .
        mv: cannot move `mydir/otherdir1' to a subdirectory of itself, `./otherdir1'
        mv: cannot move `mydir/otherdir2' to a subdirectory of itself, `./otherdir2'
        mv: cannot move `mydir/.' to `./.': Device or resource busy
        mv: cannot move `mydir/..' to `./..': Device or resource busy
        mv: overwrite `./.file'? y

    Another similar question, "linux mv command weirdness", suggests that mv doesn't overwrite and a copy is required.

        # cd mydir
        # cp -rf ./mydir/* ./
        cp: overwrite `./otherdir1/file1'? y
        cp: overwrite `./otherdir1/file2'? y
        cp: overwrite `./otherdir1/file3'?

    This appears to be working... except there's a lot of files (and dirs) - I don't want to confirm every one! Isn't the f there supposed to prevent this? OK, so cp was aliased to cp -i (which I found out with type cp), and I bypassed it using \cp -rf ./mydir/* ./, which seems to have worked. Although I've solved the problem of getting dirs/files from one place to another, I'm still curious as to what's going on with the mv stuff - is this really a deliberate feature, as suggested by Warner?
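
    Since mv refuses to move a directory into (what it considers) itself, one way out is to first take the nested directory out from under its parent with a plain rename, and only then let it take the parent's place. A sketch of the replace variant (Python, run from <path>; the temporary name is made up, and the rmtree step is destructive, so check the old parent's leftovers first):

        import os
        import shutil

        outer = "mydir"                        # the parent that should have been replaced
        inner = os.path.join(outer, "mydir")   # the copy that landed inside it

        tmp = outer + ".rescue"                # hypothetical temporary name
        os.rename(inner, tmp)                  # plain rename: no "subdirectory of itself" check
        shutil.rmtree(outer)                   # deletes whatever was left of the old parent -- verify first!
        os.rename(tmp, outer)

    If what you actually want is the merge that the \cp -rf run performed, move the children of the rescued directory one at a time instead of removing the old parent: os.replace overwrites existing files, though directories that exist on both sides still need a recursive merge.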

    Read the article

  • How to get rid of messages addressed to non-existent subdomains?

    - by user71061
    Hi! I have a small problem with my sendmail server and need your little help :-) My situation is as follows: user mailboxes are placed on an MS Exchange server, and all mail to and from the outside world is relayed through my sendmail box.

        Exchange server ----- sendmail server ------ Internet

    My servers accept messages for one main domain (say, my.domain.com) and for a few other domains (let us narrow it to just one, say my_other.domain.com). After configuring sendmail with the abbreviated sendmail.mc file shown below, essentially everything works OK, but there is a small problem. I want to reject messages addressed to non-existent recipients as soon as possible (to avoid sending non-delivery reports), so my sendmail server makes LDAP queries to the Exchange server, validating every recipient address. This works well for both domains, but not for subdomains. Such subdomains do not exist, but someone (I mean those heated spammers :-) could try addresses like user@any_host.my.domain.com or user@any_host.my_other.domain.com, and for those addresses the results are as follows:

    Messages to user@sendmail_hostname.my.domain.com are rejected with the error "Unknown user" (due to the additional LDAPROUTE_DOMAIN line in my sendmail.mc file; this is the expected behaviour).

    Messages to user@any_other_hostname.my.domain.com are rejected with the error "Relaying denied". A little strange to me why the error is different this time, but still OK. After all, the message was rejected, and I don't care very much which error code is returned to the sender (spammer).

    Messages to user@sendmail_hostname.my_other.domain.com and user@any_other_hostname.my_other.domain.com are rejected with the error "Unknown user", but only when there is no user@my_other.domain.com mailbox (on the Exchange server). If such a mailbox exists, then all three addresses (i.e. user@my_other.domain.com, user@sendmail_hostname.my_other.domain.com and user@any_other_hostname.my_other.domain.com) will be accepted. (Adding an additional line LDAPROUTE_DOMAIN(my_sendmail_host.my_other.domain.com) to my sendmail.mc file doesn't change anything.)

    My abbreviated sendmail.mc file is as follows (sendmail 8.14.3-5). Both domains are listed in the /etc/mail/local-host-names file (FEATURE(use_cw_file)):

        define(`_USE_ETC_MAIL_')dnl
        include(`/usr/share/sendmail/cf/m4/cf.m4')dnl
        OSTYPE(`debian')dnl
        DOMAIN(`debian-mta')dnl
        undefine(`confHOST_STATUS_DIRECTORY')dnl
        define(`confRUN_AS_USER',`smmta:smmsp')dnl
        FEATURE(`no_default_msa')dnl
        define(`confPRIVACY_FLAGS',`needmailhelo,needexpnhelo,needvrfyhelo,restrictqrun,restrictexpand,nobodyreturn,authwarnings')dnl
        FEATURE(`use_cw_file')dnl
        FEATURE(`access_db', , `skip')dnl
        FEATURE(`always_add_domain')dnl
        MASQUERADE_AS(`my.domain.com')dnl
        FEATURE(`allmasquerade')dnl
        FEATURE(`masquerade_envelope')dnl
        dnl define(`confLDAP_DEFAULT_SPEC',`-p 389 -h my_exchange_server.my.domain.com -b dc=my,dc=domain,dc=com')dnl
        dnl define(`ALIAS_FILE',`/etc/aliases,ldap:-k (&(|(objectclass=user)(objectclass=group))(proxyAddresses=smtp:%0)) -v mail')dnl
        FEATURE(`ldap_routing',, `ldap -1 -T<TMPF> -v mail -k proxyAddresses=SMTP:%0', `bounce')dnl
        LDAPROUTE_DOMAIN(`my.domain.com')dnl
        LDAPROUTE_DOMAIN(`my_other.domain.com ')dnl
        LDAPROUTE_DOMAIN(`my_sendmail_host.my.domain.com')dnl
        define(`confLDAP_DEFAULT_SPEC', `-p 389 -h "my_exchange_server.my.domain.com" -d "CN=sendmail,CN=Users,DC=my,DC=domain,DC=com" -M simple -P /etc/mail/ldap-secret -b "DC=my,DC=domain,DC=com"')dnl
        FEATURE(`nouucp',`reject')dnl
        undefine(`UUCP_RELAY')dnl
        undefine(`BITNET_RELAY')dnl
        define(`confTRY_NULL_MX_LIST',true)dnl
        define(`confDONT_PROBE_INTERFACES',true)dnl
        define(`MAIL_HUB',` my_exchange_server.my.domain.com.')dnl
        FEATURE(`stickyhost')dnl
        MAILER_DEFINITIONS
        MAILER(smtp)dnl

    Could someone more experienced with sendmail advise me how to reject messages to those unwanted subdomains? P.S. Mailboxes @my_other.domain.com are used only for receiving messages and never for sending.
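
    When experimenting with LDAPROUTE rules, it is much faster to probe the RCPT stage directly than to send real messages and wait for bounces. A sketch with Python's smtplib (host and sender are placeholders following the question's naming; run it from a host that is allowed to submit if you want to observe the accept case):

        import smtplib

        # the three address shapes from the problem description
        probes = [
            "user@my_other.domain.com",
            "user@sendmail_hostname.my_other.domain.com",
            "user@any_other_hostname.my_other.domain.com",
        ]

        smtp = smtplib.SMTP("my_sendmail_host.my.domain.com", 25, timeout=30)
        smtp.ehlo()
        smtp.mail("[email protected]")  # placeholder envelope sender
        for rcpt in probes:
            code, resp = smtp.rcpt(rcpt)
            # 250 = would be accepted and relayed; 550/553 = rejected at RCPT time
            print(code, rcpt, resp.decode(errors="replace"))
        smtp.quit()

    Nothing is actually delivered because no DATA command is ever sent, so the probe can be repeated after every sendmail.mc change to verify which of the three addresses are still being accepted.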

    Read the article

  • Why does my mail get marked as spam?

    - by schoen
    I have the server afspraakmanager.be. It meets every criterion for not being a spam server (and it isn't, by the way): it has reverse DNS, SPF, DKIM, ... . But Hotmail marks it as spam. I think the problem is the SPF/DKIM records. When I send an email to my Gmail account it says:

        Received-SPF: neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=2a02:348:8e:6048::1;
        Authentication-Results: mx.google.com; spf=neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]; dkim=neutral (bad format) [email protected]

    So I guess my SPF and DKIM records aren't set up right, but I also don't have a clue what is wrong with them. This is the zone file:

        ; zone file for afspraakmanager.be
        $ORIGIN afspraakmanager.be.
        $TTL 3600
        @ 86400 IN SOA ns1.eurodns.com. hostmaster.eurodns.com. (
            2013102003 ; serial
            86400 ; refresh
            7200 ; retry
            604800 ; expire
            86400 ; minimum
            )
        @ 86400 IN NS ns1.eurodns.com.
        @ 86400 IN NS ns2.eurodns.com.
        @ 86400 IN NS ns3.eurodns.com.
        @ 86400 IN NS ns4.eurodns.com.
        ; Mail Exchanger definition
        @ 600 IN MX 10 smtp
        ; IPv4 Address definition
        @ IN A 37.230.96.72
        afspraakmanager.be 600 IN A 37.230.96.72
        localhost 86400 IN A 127.0.0.1
        smtp 600 IN A 37.230.96.72
        www 600 IN A 37.230.96.72
        ; Text definition
        default._domainkey 600 IN TXT "v=DKIM1\\; k=rsa\\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC6pvlZKnbSVXg1Bf3MF2l8xRrKPmqIw2i9Rn1yZ3HEny9qH1vyGXUjdv2O0aQbd5YShSGjtg5H/GedRMLpB0Qb+hBj1yGofOQTdcVtZZfj8qBY5Z7vEkhvtdaogQ0vLjgcwhg0BBuTewEkLxrl9IIzkPMZ1SCtM2Y0RtiUhg2cjQIDAQAB"
        ; Sender Policy Framework definition
        afspraakmanager.be 600 IN SPF "v=spf1 a mx ptr +all"

    The DKIM signature in the header:

        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=afspraakmanager.be; s=mail;
         t=1382361029; bh=4pDpXBY8rCbX8+MfrklZzpQxaUsa3vSPUYjcDR3KAnU=;
         h=Date:From:To:Subject:From;
         b=SoBBaAlrueD8qID8txl2SBSqnZgN2lkPCdSPI/m7/YLezIcBedkgIX1NswYiZFl6Z
          AmF8dES73WUaaJjItVHSrdCJK2mJ/Az+vrgNsyk+GqZZ1YPiIlH3gqRrsguhoofXUX
          /gqLlqsLxqxkKKd9EbSzKRHuDGlJCLm5SlL8wnL0=
    Read the article

  • Hibernating and booting into another OS: will my filesystems be corrupted?

    - by Ryan Thompson
    Suppose I have Windows and Linux installed on the same computer. If I hibernate Windows, can I boot into Linux without corrupting the Windows filesystem when I resume Windows? What about the other way around? What if I hibernate one, boot into the other, and mount the hibernated filesystem read/write? Read-only? If this is unsafe, is there any way to detect the hibernated state of the other OS and prevent mounting its filesystem? Basically, how far can I push this before it breaks, and how dangerous is it near the edge? I think I know the answers to some of the above questions, but for other ones, I have no idea, and for obvious reasons I have not tested this on my own computer. If someone has tested these, please enlighten the rest of us. I'm not necessarily looking for a specific answer to every question; I'll accept any response that answers a reasonable portion.

    EDIT: Let me clarify that when I say "hibernate," I mean the process of writing the contents of RAM to the hard disk and completely powering down the computer. In this state, powering the computer back on brings you through the BIOS and bootloader again, and you could theoretically select another operating system on a multi-boot system. Anyway, on with the original question:

    RESULTS: OK, after everyone's assurances that this would work, I tested it for myself. I set up Ubuntu to remount all NTFS filesystems and external drives read-only before hibernating. There was no need for a similar Windows setup because Windows does not read Linux filesystems. Then, I tried alternately hibernating one operating system and resuming the other, back and forth a few times. I even tried mounting the Windows filesystem from Ubuntu read-write, and creating a few files. Windows didn't complain when I resumed. So, in conclusion, you can more or less freely hibernate in a dual-boot Windows/Linux scenario. Note that I did not test a dual Linux/Linux co-hibernation situation. If you have two or more Linux installs and you hibernate one of them, you might be able to corrupt the filesystem by mounting it from another.

    Read the article

  • PDF Corruption When Sending with Microsoft Products

    - by Winner
    I have the same PDF corruption problem in two different offices that I provide tech support for.

    Office 1: Started in the middle of December. A PDF is received from outside the office and is viewable with no problems. I have no control over how it is created. If it is forwarded to anyone else, the PDF is corrupted. I have forwarded it to multiple people in the office. I have tried viewing it with Reader 8, 9, Sumatra and Foxit. I have tried forwarding it to Gmail, and their viewer says it is corrupted. If I save the PDF and create a new email, it will be corrupted when sent using Outlook 2003, Outlook 2007, Microsoft Live Mail or Outlook Express. If I create the email using Thunderbird 3, Gmail or the web client IClient for Ipswitch IMail, it will not be corrupted. I have confirmed the same results when using our IMail SMTP server and also when using Gmail as the SMTP server. To be clear: if the email is created in Thunderbird, Gmail or IClient and received on any of the MS products, it will be viewable. This office receives PDFs daily from multiple sources, and only a small subset are having this problem. So far the problem PDFs come from two different companies they deal with, but not all of the PDFs from those companies are bad.

    Office 2: PDFs are created by a management system. I'm not sure what engine is used to create them. Same exact issues.

    In both offices, I noticed that the file size is wrong. For one small PDF, the proper file size is 12kb when it's viewable; when it shows up corrupted, it is only 8kb. We handle the email for both offices. Both are POP servers, not Exchange. IMail was updated after these issues started. I have tried different SMTP servers, and it still seems to happen only when using Microsoft products to send. Anyone else having problems with PDFs getting corrupted? Any ideas how to find a resolution?
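
    A 12kb attachment that reliably arrives as 8kb of garbage when (and only when) an MS mail client sends it smells like a transfer-encoding or truncation problem rather than random damage, and comparing one good and one bad copy byte-for-byte narrows it down quickly. A sketch (plain Python; the filenames are placeholders for saved copies of the same attachment):

        # hypothetical filenames: a known-good and a corrupted copy of the same attachment
        good = open("good.pdf", "rb").read()
        bad = open("bad.pdf", "rb").read()

        # a valid PDF starts with b'%PDF' and ends near b'%%EOF'
        print("good:", good[:8], "...", good[-8:])
        print("bad: ", bad[:8], "...", bad[-8:])

        # if the bad copy is simply cut short, it is a prefix of the good one
        print("bad is a truncated prefix of good:", good.startswith(bad))

        # text-mode line-ending conversion changes these counts while the size changes
        for label, data in (("good", good), ("bad", bad)):
            print(label, "size", len(data), "CR", data.count(b"\r"), "LF", data.count(b"\n"))

    If the bad copy turns out to be a prefix, look at message-size or attachment limits along the sending path; if the CR/LF counts shift while the content is otherwise recognizable, something is treating the binary attachment as text, which points at the Content-Transfer-Encoding the sending client negotiated.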

    Read the article

  • apcupsd on Linux does not report on APC BackUPS Pro 900

    - by lserni
    From what documentation I could find, the UPS should be (is!) supported by Linux and ought to work with apcupsd. I looked for specific problems such as the infamous Microlink protocol, and found none. I have found feedback from a guy in the UK who reports using this very model on a not-too-different OS version (his OpenSuSE 12.1, mine 12.3 x86_64). The USB port is detected; lsusb reports:

        Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply

    and lsusb -v -s002:003 confirms and expands:

        Bus 002 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               2.00
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0        64
          idVendor           0x051d American Power Conversion
          idProduct          0x0002 Uninterruptible Power Supply
          bcdDevice            0.90
          iManufacturer           1 American Power Conversion
          iProduct                2 Back-UPS RS 900G FW:879.L4 .I USB FW:L4
          bNumConfigurations      1
          Configuration Descriptor: [...]
            Interface Descriptor: [...]
              bInterfaceClass         3 Human Interface Device
              bInterfaceSubClass      0 No Subclass
              bInterfaceProtocol      0 None
              iInterface              0
              HID Device Descriptor:
                bLength                 9
                bDescriptorType        33
                bcdHID               1.00
                bCountryCode           33 US
                bNumDescriptors         1
                bDescriptorType        34 Report
                wDescriptorLength    1134
                Report Descriptors:
                  ** UNAVAILABLE **
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x81 EP 1 IN
                bmAttributes            3
                  Transfer Type  Interrupt
                  Synch Type     None
                  Usage Type     Data
                wMaxPacketSize     0x0008 1x 8 bytes
                bInterval             100
        Device Status: 0x0000 (Bus Powered)

    The kernel recognizes this and duly sets up:

        crw------- 1 root root 180, 96 Nov 4 16:11 /dev/usb/hiddev0

    As far as I know, everything is as it should be. I have put the standard configuration in /etc/apcupsd/apcupsd.conf (which is Unix-terminated, ASCII-only, no BOM, just in case):

        UPSCABLE usb
        UPSTYPE usb
        DEVICE

    (I have also tried commenting out DEVICE; and setting a device of /dev/puppa results in an access attempt to /dev/puppa, not some /var/lib/dev/puppa or /dev/puppa\r\n.) Yet, what apcaccess tells me is:

        VERSION  : 3.14.10 (13 September 2011) suse
        CABLE    : USB Cable
        DRIVER   : USB UPS Driver
        UPSMODE  : Stand Alone
        STARTTIME: 2013-11-04 16:24:22 +0100
        MODEL    :
        STATUS   : NOBATT
        LINEV    : 000.0 Volts
        LOADPCT  : 0.0 Percent Load Capacity
        BCHARGE  : 000.0 Percent
        TIMELEFT : 0.0 Minutes
        MBATTCHG : 5 Percent
        MINTIMEL : 3 Minutes
        MAXTIME  : 0 Seconds
        SENSE    : Low
        LOTRANS  : 000.0 Volts
        HITRANS  : 000.0 Volts

    It doesn't recognize the model, and reports no battery (and no voltage). This confirms that it's not the Microlink problem, or it would report the battery status, if precious little else. If I disconnect the USB cable, I get an apcupsd message to the effect that communications have been lost; and I get the "communication restored" broadcast too, if I reconnect the cable. apcupsd is monitoring. So everything tells me that it should work -- only it doesn't. Does anyone spot what I'm missing?

    Read the article

  • Why would a PCI scan fail because of components that are not even installed?

    - by Brandon
    Recently a PCI scan was run against a web server and the result was a failure. Some of the issues could be fixed; however, others simply make no sense to me. The machine was a clean install; there are only two things running: the .NET 3.5 website and the dotDefender web application firewall. Yet there are several errors similar to:

        Web server vulnerability
        Impact: /servlet/SessionServlet: JRun or Netware WebSphere default servlet found. All default code should be removed from servers.
        Risk Factor: Medium / CVSS2 Base Score: 6.4
        CVE: CVE-2000-0539

    I'm not sure what this is, but I can't find anything on the server that looks anything like it.

        Web server vulnerability
        Impact: /some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests that contain specific QUERY strings.
        Risk Factor: Medium / CVSS2 Base Score: 5.0

    PHP is not installed. Adding that query string to any page does nothing, because the application ignores it, and the PHP version-check URL results in a 404. Similarly, there are dozens of errors related to JSP and Oracle, which are also not installed.

        Web server vulnerability
        Impact: /admin/database/wwForum.mdb: Web Wiz Forums pre 7.5 is vulnerable to Cross-Site Scripting attacks. Default login/pass is Administrator/letmein
        Risk Factor: Medium / CVSS2 Base Score: 4.0

    There are several errors like this, telling me that Web Wiz Forums, Alan Ward A-Cart 2.0, IlohaMail, etc. are all vulnerable. None of these is installed or referenced anywhere I can find. There are even references to pages that simply don't exist, like OpenAutoClassifieds. Can anyone point me in the right direction as to why these errors are showing up, or where I might look to find these components if they are in fact installed?

    Note: This website and server are for a subdomain of the main website. The main website runs on a server running Apache/PHP, but I don't have access to that server. The report says the subdomain was the site being scanned, but is it possible that it scanned the main site as well?
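    One practical way to dispute findings like these is to request each flagged path yourself and record what the server actually returns; scanners often key on response heuristics rather than file contents, so a custom error page that answers 200 instead of 404 can trigger dozens of findings for software that was never installed. A minimal sketch (the paths come from the report; the host name is hypothetical):

        # Probe the paths the PCI scan flagged and record the real responses.
        # A clean 404 is evidence for disputing a finding as a false positive.
        import requests

        BASE = "https://sub.example.com"
        FLAGGED = [
            "/servlet/SessionServlet",
            "/some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42",
            "/admin/database/wwForum.mdb",
        ]

        for path in FLAGGED:
            r = requests.get(BASE + path, timeout=10, allow_redirects=False)
            print("%s -> %d (%d bytes, server: %s)"
                  % (path, r.status_code, len(r.content), r.headers.get("Server", "?")))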

    Read the article

  • How to enable caching on Apache / Ubuntu Linux?

    - by Jim Mischel
    I have a large (several megabytes) XML file that's updated rather frequently (every 10 minutes or less) and gets a lot of traffic. I'd like to implement some caching to reduce bandwidth and server load. Looking at the Apache documentation, I see a dizzying array of configuration options involving various combinations of mod_expires, mod_headers, and mod_cache (and variants). I end up running in circles, and the results aren't what I expect. I'm comfortable editing the various configuration files if I have some idea what I'm supposed to change, but at the moment I'm poking around in the dark, and that's never a comfortable feeling. So perhaps if I describe what I want, somebody here can take me by the hand and say, "This is what you need to do."

    Periodically this file, call it "stuff.xml", is updated and a new version is copied to the directory. The external URL would be, for example, http://example.com/stuff.xml. Understand, this part works: whenever I request the file, I get the expected result. But the file is big and I want to save bandwidth, so first I'd like to implement conditional GET semantics with the If-Modified-Since header. How do I do this? I've enabled mod_headers and mod_expires and added the <FilesMatch> section in my httpd.conf as recommended in countless examples I've seen online, but that didn't change the behavior when I made a conditional GET request. I always get a status 200 with the entire document. So how the heck do I implement this? That'll cut down on needless transfers.

    I'd also like to limit the amount of data transferred. Seeing as this is XML, gzipping it should save me 50% or more. My next step would be to somehow gzip the file and, if it's not too difficult, store it in memory. That'll cut down on per-access data transfer and also reduce disk transfers. So how do I implement this type of caching? Thanks in advance.
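    Before changing more configuration, it helps to measure what the server is actually doing from a client's point of view. A minimal sketch using Python's requests library (the URL is the example from the question):

        # Verify conditional GET and gzip support from the outside.
        # Expect 200 on the first request, 304 on the revalidation, and a
        # Content-Encoding: gzip header if compression is active.
        import requests

        url = "http://example.com/stuff.xml"

        r1 = requests.get(url, headers={"Accept-Encoding": "gzip"})
        print(r1.status_code,
              r1.headers.get("Last-Modified"),
              r1.headers.get("Content-Encoding"))

        # Revalidate with the timestamp the server just handed back.
        r2 = requests.get(url, headers={
            "Accept-Encoding": "gzip",
            "If-Modified-Since": r1.headers["Last-Modified"],
        })
        print(r2.status_code)   # 304 = conditional GET works; 200 = it does not

    For a static file served straight off disk, Apache answers If-Modified-Since with a 304 out of the box, so a permanent 200 usually means the request is being handled by something other than the static-file handler, or an intermediary is stripping the validator headers.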

    Read the article

  • Problems migrating an EBS backed instance over AWS Regions

    - by gshankar
    Note: I asked this question on the EC2 forums too but haven't received any love there. Hopefully the ServerFault community will be more awesome.

    The new AWS Sydney region opening up is something we've been waiting for for a long time, but I'm having a lot of trouble migrating our instances over from N. California. I managed to migrate one instance using CloudyScripts to move a snapshot and then firing up a new instance in the Sydney region. This was a very new instance, so both the source and destination were running Ubuntu 12.04 LTS, and I had no issues there. However, the rest of our instances are all Ubuntu 10.04 LTS, and with these I'm having a lot of problems. I've tried the following:

    1. Following the AWS whitepaper on moving instances, which was given to us at the recent Customer Appreciation Day in Sydney where the new region was launched. The problem with this approach was the last step (step 19), where you register the image:

        ec2-register -s snap-0f62ec3f -n "Wombat" -d "migrated Wombat" --region ap-southeast-2 -a x86_64 --kernel aki-937e2ed6 --block-device-mapping "/dev/sdk=ephemeral0"

    I keep getting this error:

        Client.InvalidAMIID.NotFound: The AMI ID 'ami-937e2ed6' does not exist

    which I think is due to the kernel_id not existing in the Sydney region?

    2. Using CloudyScripts to move a snapshot, then creating a new volume and attaching it to a new instance in Sydney. This results in the instance just hanging on boot and failing the status checks; I can't SSH in or look at the server log.

    I suspect my issue is finding the right kernel_id for the volume in the new region. However, I can't work out how to go about finding this kernel_id: the ones I've tried from the original instance don't produce the Client.InvalidAMIID.NotFound error, and any other kernel_id just won't boot. I've tried both 12.04 and 10.04 versions of Ubuntu. Nothing seems to work; I've been banging my head against a wall for a while now. Please help!

    New (broken) instance: i-a1acda9b, ami-9b8611a1, aki-31990e0b
    Source instance: i-08a6664e, ami-b37e2ef6, aki-937e2ed6

    P.S. I also tried following this guide on upgrading my Ubuntu LTS version to 12.04 before doing the migration, but it didn't seem to work either; I still got stuck on the kernel_id: http://ubuntu-smoser.blogspot.com.au/2010/04/upgrading-ebs-instance.html
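    Kernel IDs (aki-...) are region-specific, so an aki from N. California will never resolve in Sydney; the image has to be registered against the equivalent pv-grub kernel that exists in ap-southeast-2. A minimal sketch using boto3, a current AWS SDK that postdates the question (assumes configured credentials), lists the Amazon-published kernels in the new region:

        # List the Amazon-published kernel images (AKIs) in ap-southeast-2 so
        # the matching pv-grub kernel (hd0 vs hd00, i386 vs x86_64) can be
        # picked for ec2-register. Assumes boto3 and configured credentials.
        import boto3

        ec2 = boto3.client("ec2", region_name="ap-southeast-2")
        resp = ec2.describe_images(
            Owners=["amazon"],
            Filters=[{"Name": "image-type", "Values": ["kernel"]}],
        )

        for img in sorted(resp["Images"], key=lambda i: i.get("Name", "")):
            print(img["ImageId"], img.get("Name", ""), img.get("Architecture", ""))

    For a 64-bit single-partition EBS image, the pv-grub-hd0 x86_64 kernel is typically the right pick.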

    Read the article

  • Trying to install Rmagick on Debian

    - by Janak
    One of the things you need to do to get RMagick installed is:

        apt-get install libmagick9-dev

    When I try that, I get the following errors:

        Reading package lists... Done
        Building dependency tree... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.

        Since you only requested a single operation it is extremely likely
        that the package is simply not installable and a bug report against
        that package should be filed.

        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          libmagick9-dev: Depends: libjpeg62-dev but it is not going to be installed
                          Depends: libbz2-dev but it is not going to be installed
                          Depends: libtiff4-dev but it is not going to be installed
                          Depends: libwmf-dev (>= 0.2.7-1) but it is not going to be installed
                          Depends: libz-dev
                          Depends: libpng12-dev but it is not going to be installed
                          Depends: libfreetype6-dev but it is not going to be installed
                          Depends: libexif-dev but it is not going to be installed
                          Depends: libdjvulibre-dev but it is not going to be installed
                          Depends: librsvg2-dev but it is not going to be installed
                          Depends: libgraphviz-dev but it is not going to be installed
        E: Broken packages

    I don't know what to do to fix this.

    EDIT 1: I tried sudo apt-get install librmagick-ruby and it worked fine, but then I needed to install the fleximage gem:

        gem1.8 install fleximage

    and I got the following error message:

        Building native extensions. This could take a while...
        ERROR: Error installing fleximage:
        ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for cc... yes
        checking for Magick-config... no
        Can't install RMagick 2.12.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.

        Provided configuration options:
          --with-opt-dir
          --without-opt-dir
          --with-opt-include
          --without-opt-include=${opt-dir}/include
          --with-opt-lib
          --without-opt-lib=${opt-dir}/lib
          --with-make-prog
          --without-make-prog
          --srcdir=.
          --curdir
          --ruby=/usr/bin/ruby1.8

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2/ext/RMagick/gem_make.out
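    The second error follows directly from the first: Magick-config ships in the ImageMagick development package that apt refused to install, so RMagick's extconf.rb cannot find it. A quick sanity check before retrying, as a minimal Python sketch (the tool names come from the error output, plus ImageMagick's standard convert binary):

        # Check whether the binaries RMagick's build actually needs are on the
        # PATH before retrying the gem install.
        import shutil

        for tool in ("ruby1.8", "gem1.8", "Magick-config", "convert"):
            path = shutil.which(tool)
            print("%-14s %s" % (tool, path or "NOT FOUND"))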

    Read the article
