Search Results

Search found 24931 results on 998 pages for 'information visualization'.

  • Googlemail users can't email my email address

    - by Jack W-H
    Hi folks, I have a GridServer account at MediaTemple. The address linked up to my MT account is [email protected]. My non-Google email address can email [email protected] just fine, but when my friend tried to email it from his Gmail address, he got the following bounce:

        From: Mail Delivery Subsystem
        Date: Thu, Apr 15, 2010 at 12:02 PM
        Subject: Delivery Status Notification (Failure)
        To: [email protected]
        Delivery to the following recipient failed permanently: [email protected]
        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 relay not permitted (state 14).
        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.231.205.139 with HTTP; Thu, 15 Apr 2010 12:02:26 -0700 (PDT)
        In-Reply-To: <[email protected]
        References: <[email protected]
        Date: Thu, 15 Apr 2010 12:02:26 -0700
        Received: by 10.231.169.144 with SMTP id z16mr211585iby.25.1271358147047; Thu, 15 Apr 2010 12:02:27 -0700 (PDT)
        Message-ID:
        Subject: Re: Hi Friend
        From: My Friend
        To: "[email protected]"
        Content-Type: multipart/alternative; boundary=0016e6d26c5abcb2a704844b22bf
        Does this work. Does this work. Does this work?
        On Thu, Apr 15, 2010 at 11:30 AM, [email protected] wrote: Hi Friend. Just testing the email address I set up for My Site. Could you please reply so I can check if it's working OK? Cheers Jack

    I thought it was just a fluke, but exactly the same thing happens when I send from my own Gmail address. Can anyone shed some light on the problem? Jack
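
    One way to start narrowing this down (a suggested check, not something from the post): a 550 "relay not permitted" comes from whichever server Gmail actually handed the message to, so it is worth comparing the domain's MX records with the host that really holds the (mt) mailboxes. A minimal lookup, with the domain name as a placeholder:

        dig MX mysite.com +short
        dig A mysite.com +short

    If the MX records point somewhere other than MediaTemple's mail servers, Gmail would be delivering to a box that has no idea it should accept mail for the domain, which would produce exactly this kind of rejection.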

  • IIS 7.5, Multiple Application Pools, and URL Rewriting (403.18 -- Forbidden)

    - by Jerry Hewett
    Is there any way to configure IIS 7.5 to perform URL rewrites to different application pools on the same site without running into a 403.18 error? We're using Helicon ISAPI Rewrite 3 on IIS 6 and it's working like a charm. The root-level "application" runs under its own application pool, and on IIS 6 we have no problems doing URL rewrites from that application pool to any one of the other four application pools. But when I copy the same server configuration over to IIS 7.5, the URL rewrites to any of the other application pools fail with a "403.18 -- Forbidden" error. The weird bit is that IIS 6 is not (at least as far as I can tell from the site's service configuration dialog) running in IIS 5 emulation mode, yet its rewrites don't throw 403.18 errors. So something must be different... but whatever it is, I sure haven't been able to figure it out. By the way, we're not married to Helicon ISAPI Rewrite; if there's another way to preserve our current rewrite rules using a different module or method, I'd be more than happy to use it.

  • Install a web certificate on an Android device

    - by martani_net
    To gain access to the WiFi at university I have to log in with my user/pass credentials. The certificate of their website (the local home page that asks for the credentials) is not recognized as a trusted certificate, so we install it separately on our computers. The problem is that I don't often take my laptop to university, so I usually want to connect using my HTC Magic, but I have no clue how to install the certificate separately on Android; it is always rejected. [Edit2]: this is what their website says (roughly translated from the French):

        Official CyberTrust certificates validated by the CRU (http://www.cru.fr/wiki/scs/) need to be installed. The certificates contain certified information used to generate encryption keys for exchanging "sensitive" data, such as a user's password. When connecting to CanalIP-UPMC, for example, the user is asked to validate the server's identity by accepting the certificate shown in a pop-up window. In reality, the user cannot validate a certificate this way, because a simple visual check of the certificate is impossible. Therefore the certification authority's certificates (CRU-Cybertrust Educationnal-ca.ca and Cybertrust-global-root-ca.ca) must be installed in the browser beforehand, so that the validity of the server certificate can be checked automatically. Before connecting to the CanalIP-UPMC network you must register the certification authority Cybertrust-Educationnal-ca.ca in your browser. Download Cybertrust-Educationnal-ca.ca using the link for your browser: with Internet Explorer, click the following link; with Firefox, click the following link; with Safari, click the following link. If this procedure is not followed, the user runs a real risk: having their UPMC LDAP directory password stolen. A malicious server could very easily mount a "man-in-the-middle" attack by posing as the legitimate UPMC server, and a stolen password lets the attacker assume an identity for transactions over the Internet that could engage the responsibility of the user who was trapped...

    This is their website: http://www.canalip.upmc.fr/doc/Default.htm (in French, Google-translate it :)) Anyone know how to install a web certificate on Android?
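
    If the sticking point is the file format rather than Android itself, one thing that may be worth trying (an assumption on my part, not something from the university's instructions) is converting the CA certificate from PEM to DER before copying it to the phone, since some Android builds only pick up DER-encoded .crt/.cer files from the SD card. A sketch with placeholder file names:

        openssl x509 -in cybertrust-educationnal-ca.pem -inform PEM -outform DER -out cybertrust-educationnal-ca.crt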

  • apache Client Certificate Authentication errors: Certificate Verification: Error (18): self signed certificate

    - by decoy
    So I have been following instructions on setting up Client Certificate Authentication in Apache2 w/ mod_ssl. This is solely for the purpose of testing an application against CAA, not for any sort of production use. So far I've followed http://www.impetus.us/~rjmooney/projects/misc/clientcertauth.html for advice on generating my CA, server, and client encryption information. I've put all three of them into /etc/ssl/ca/private. I've setup the following additional directives in my default_ssl site file: <IfModule mod_ssl.c> <VirtualHost _default_:443> ... SSLEngine on SSLCertificateFile /etc/ssl/ca/private/server.crt SSLCertificateKeyFile /etc/ssl/ca/private/server.key SSLVerifyClient require SSLVerifyDepth 2 SSLCACertificatePath /etc/ssl/ca/private SSLCACertificateFile /etc/ssl/ca/private/ca.crt <Location /> SSLRequireSSL SSLVerifyClient require SSLVerifyDepth 2 </Location> <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> ... </VirtualHost> </IfModule> I've install the p12 file into Chrome, but when I go to visit https://localhost, I get the following errors Chrome: Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error. Apache: Certificate Verification: Error (18): self signed certificate If I had to guess, one of my directives is not setup right to load and verify the p12 w/ my self created CA. But I can't for the life of me figure out what it is. Would anyone have more experience here who could point me in the right direction?
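
    Before changing any directives, it may help to confirm two things outside Apache: that the client certificate really chains to the CA configured above, and that the SSLCACertificatePath directory contains the hash symlinks mod_ssl expects when a path (rather than just a file) is used. A quick sketch, where client.crt stands for the PEM form of the certificate that went into the p12:

        openssl verify -CAfile /etc/ssl/ca/private/ca.crt /etc/ssl/ca/private/client.crt
        c_rehash /etc/ssl/ca/private

    If the verify step already reports "self signed certificate", the problem lies in how the client certificate was generated or signed rather than in the Apache configuration.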

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel.

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctls, so i'm posting them with comments. Based on Igor Sysoev (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Sysctls are for 7.x FreeBSD. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. Highload web server sysctls: # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Do not use lager sockbufs on 8.0 # ( http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=262144 # Recive clusters (on amd64 7.2+ 65k is default) # For such high value vm.kmem_size must be increased to 3G #kern.ipc.nmbclusters=229376 # Jumbo pagesize(4k/8k) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=192000 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=24000 #kern.ipc.nmbjumbo16=10240 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # Turn off receive autotuning #net.inet.tcp.recvbuf_auto=0 # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This should be enabled if you going to use big spaces (>64k) #net.inet.tcp.rfc1323=1 # Turn this off on highspeed, lossless connections (LAN 1Gbit+) #net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. # You can try setting it to 0 on fileserver with 1GBit+ interfaces # Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) #net.inet.tcp.inflight.enable=0 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't tested it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds (default: 30000 from RFC) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=40960 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. 
# So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becames lower) vfs.ufs.dirhash_maxmem=67108864 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 /boot/loader.conf: # Accept filters for data, http and DNS requests # Usefull when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load= #siis_load= # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! # # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) 
#kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 200M) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=100 # Incresed hostcache net.inet.tcp.hostcache.hashsize="16384" net.inet.tcp.hostcache.bucketlimit="100" # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # `sysctl dev.em.0.stats=1 ; dmesg` # #Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. See sysctl net.isr #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # Nicer boot logo =) loader_logo="beastie" And finally here is my additions to GENERIC kernel # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU of 8K(amd64) # (4k for i386). This req. is only for receiving data. # Read more in man zero_copy_sockets #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL options IPFIREWALL_VERBOSE options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_DEFAULT_TO_ACCEPT options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. 
for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron #device amdtemp # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. #options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # DTrace options KDTRACE_HOOKS # all architectures - enable general DTrace hooks options DDB_CTF # all architectures - kernel ELF linker loads CTF data #options KDTRACE_FRAME # amd64-only # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (9.x+) #options TEKEN_UTF8 #options TEKEN_XTERM # NCQ support # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ #options ATA_CAM # FreeBSD 9+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html #options DEADLKRES PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!
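
    For anyone experimenting with the values above, a small practical note: a sysctl set at runtime is lost at reboot, so the usual pattern is to test a knob live and then persist it in /etc/sysctl.conf (loader-only tunables go in /boot/loader.conf instead). A sketch using one variable from the list as an example:

        sysctl kern.ipc.somaxconn=4096
        echo 'kern.ipc.somaxconn=4096' >> /etc/sysctl.conf
        sysctl -d kern.ipc.somaxconn    # print the knob's description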

  • Windows Backup failed as another backup or recovery is in progress.

    - by remunda
    Hi, I have set up a backup schedule on our server: a SQL Server 2008 backup at 01:00 and a Windows Server Backup (on Windows Server 2008 R2) at 04:00. The SQL backup runs fine, but the system backup sometimes ends with an error. The error is described here: http://technet.microsoft.com/en-us/library/dd364768%28WS.10%29.aspx I suspect it is caused by SQL Server, because a SQL backup unexpectedly runs at the same time. This is from the SQL Server log (it appears for every database):

        Date 4.2.2010 4:00:21 Log SQL Server (Current - 29.1.2010 23:25:00) Source spid72
        Message: I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

    and

        Date 4.2.2010 4:00:24 Log SQL Server (Current - 29.1.2010 23:25:00) Source Backup
        Message: Database backed up. Database: master, creation date(time): 2010/01/29(23:25:32), pages dumped: 370, first LSN: 859:56:37, last LSN: 859:80:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{17B91D5C-9968-4D11-A7F1-1A31523D32F0}25'}). This is an informational message only. No user action is required.

    My questions are: why does a SQL backup run together with the Windows backup, and how can I resolve these errors? Thank you.
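
    The "I/O is frozen" lines quoted above are what SQL Server logs when a VSS snapshot is taken, so one reading (an interpretation, not something stated in the logs) is that Windows Server Backup's 04:00 job is quiescing the databases through the SQL Writer rather than SQL Server starting a backup of its own. To see which VSS writers are involved and whether any are left in a failed state after the job:

        vssadmin list writers
        vssadmin list providers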

  • How to get robocopy running in powershell?

    - by Mark Allison
    Hi, I'm trying to use robocopy inside powershell to mirror some directories on my home machines. Here's my script: param ($configFile) $config = Import-Csv $configFile $what = "/COPYALL /B /SEC/ /MIR" $options = "/R:0 /W:0 /NFL /NDL" $logDir = "C:\Backup\" foreach ($line in $config) { $source = $($line.SourceFolder) $dest = $($line.DestFolder) $logfile = $logDIr $logfile += Split-Path $dest -Leaf $logfile += ".log" robocopy "$source $dest $what $options /LOG:MyLogfile.txt" } The script takes in a csv file with a list of source and destination directories. When I run the script I get these errors: ------------------------------------------------------------------------------- ROBOCOPY :: Robust File Copy for Windows ------------------------------------------------------------------------------- Started : Sat Apr 03 21:26:57 2010 Source : P:\ C:\Backup\Photos \COPYALL \B \SEC\ \MIR \R:0 \W:0 \NFL \NDL \LOG:MyLogfile.txt\ Dest - Files : *.* Options : *.* /COPY:DAT /R:1000000 /W:30 ------------------------------------------------------------------------------ ERROR : No Destination Directory Specified. Simple Usage :: ROBOCOPY source destination /MIR source :: Source Directory (drive:\path or \\server\share\path). destination :: Destination Dir (drive:\path or \\server\share\path). /MIR :: Mirror a complete directory tree. For more usage information run ROBOCOPY /? **** /MIR can DELETE files as well as copy them ! Any idea what I need to do to fix? Thanks, Mark.
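
    One likely culprit (a guess based on the output above, where the options show up inside the Source path) is that the single quoted string reaches robocopy as one argument; the stray trailing slash in "/SEC/" does not help either. A minimal sketch of the call with the arguments passed separately, reusing the variables built earlier in the script and the $logfile path instead of MyLogfile.txt:

        robocopy $source $dest /COPYALL /B /SEC /MIR /R:0 /W:0 /NFL /NDL /LOG:$logfile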

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (Apart from a warning about Windows Firewall and opening ports which is unrelated to this and shouldn't be an issue - I can open those ports). Half way through the actual installation, I get a popup with this error: Could not continue scan with NOLOCK due to data movement. The installation still runs to completion when I press ok. However, at the end, it states that the following services "failed": database engine services sql server replication full-text search reporting services How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc) is missing? I know from my programming experience that locks are for concurrency control and the Microsoft help on this issue points to changing my query's lock/transactions in a certain way to fix the issue. But I am not touching any queries? Also, now that I have installed the app, when I login, I keep getting this message: TITLE: Connect to Server ------------------------------ Cannot connect to MSSQLSERVER. ------------------------------ ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476 ------------------------------ BUTTONS: OK ------------------------------ I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors. I think these two errors are related. Thanks
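
    To find out whether the components flagged as "failed" are actually present, two checks that do not depend on the installer UI are querying the SQL-related services and reading the setup summary log; the service names assume the default instance and the log path is the usual SQL Server 2008 location, so both may differ on your system:

        sc query MSSQLSERVER
        sc query ReportServer
        type "C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\Summary.txt"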

  • Having trouble using psservice and sc.exe between Windows Server 2008 machines

    - by Teflon Mac
    I'm trying to control services on one W2k8 machine from another; no domain, just a workgroup. The user account I'm logged in as is an administrator on both machines. I've tried both psservice and sc.exe. These work in a Windows Server 2003 environment, but it looks like I need an extra step or two due to the changed security model in 2008. Any ideas as to how to grant permission to the Service Control Manager (psservice) or OpenService (sc)? I tried running the DOS window with "Run As Administrator" and it made no difference. With psservice I get the following:

        D:\mydir>psservice \\REMOTESERVER -u "adminid" -p "adminpassword" start "Display Name of Service"
        PsService v2.22 - Service information and configuration utility
        Copyright (C) 2001-2008 Mark Russinovich
        Sysinternals - www.sysinternals.com
        Unable to access Service Control Manager on \\REMOTESERVER: Access is denied.

    On the remote server I get the following message in the Security log, so I know I connect and log in to the remote machine; I assume it then fails on a subsequent authorization step. The logoff message in the security log is just that ("An account was logged off."), so no extra info there.

        Special privileges assigned to new logon. Subject: Security ID: REMOTESERVER\adminid Account Name: adminid Account Domain: REMOTESERVER Logon ID: 0xxxxxxxx Privileges: SeSecurityPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeDebugPrivilege SeSystemEnvironmentPrivilege SeLoadDriverPrivilege SeImpersonatePrivilege

    sc.exe is similar. The command syntax and error differ as below, but I also see the same logon message in the remote server's security log.

        D:\mydir>sc \\REMOTESERVER start "Registry Name of Service"
        [SC] StartService: OpenService FAILED 5: Access is denied.
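
    One setting that is frequently suggested for exactly this symptom on workgroup (non-domain) Server 2008 machines is relaxing remote UAC token filtering on the target, so that a remote logon with a local administrator account keeps its full administrative token; treat this as a possibility to test rather than a confirmed diagnosis, and note that it does loosen security on the target machine:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f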

  • Why is the System process listening on port 80?

    - by Seth Spearman
    I am running Windows 7 RC1. I have multiple issues getting IIS to work on my system and today when I installed a new application and I tried to load it using http:\localhost\MyApplication I get absolutely no errors and I get no page load. Just a pretty, white blank page. I did some digging and I found something about some other process listening on port 80 so I did a scan using netstat -aon | findstr 0.0:80 and discovered that PID 4 was listening on that port. PID 4 does not show in task manager so I fired up Process Explorer and it showed me that PID 4 is the System process. (Multiple google searches seems to indicate that System always uses PID 4). Since then I am basically stuck. I have no idea why System needs port 80 and what to do about it. If you google the following strings you will find two helpful Experts-Exchange articles at the top of the search results and you can read them for some helpful information. (If I gave the direct URL to the pages then Experts-Exchange would ask you to pay...but when you click on the results from a google search you can scroll all of the way to the bottom to read the exchanges.) Here are the google searches... "System Process is listening on port 80 (Vista)" "SYSTEM Process is listening on Port 80 and Preventing IIS Default Website from Running" The last entry from the first result showed how to do a trace of http.sys at the following URL: http://blogs.msdn.com/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx Trace showed nothing useful. Any thoughts?
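
    Since PID 4 on port 80 normally means the kernel-mode HTTP listener (http.sys) rather than a separate web server, it can help to ask http.sys directly what has registered with it; both commands below are read-only:

        netsh http show servicestate
        netsh http show urlacl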

  • Task Scheduler not able to execute .vbs scripts successfully

    - by Django Reinhardt
    Apologies if this has a really obvious answer! We have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs scripts stopped successfully executing (always timing out)... but could still be executed manually with no problems(!). Not knowing any good reason why the Task Scheduler should start having problems, we thought we'd try a little "creative thinking", and run the .vbs another way: Via a .bat file executed by the Task Scheduler. Again we hit weird issues, but with a little more debugging information, this time around. The .bat file run by Task Scheduler is nothing more than... CScript "C:\location\script.vbs" > Log.txt But after an attempt to run it, the Task Scheduler fails with the following error: 0x1: An incorrect function was called or an unknown function was called. The Log.txt (as output from the .bat file above) says: CScript Error: Initialization of the Windows Script Host failed. (Not enough storage is available to process this command. ) But get this: The .bat file executes perfectly (vbs script and all) if it's executed with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64) and yes, the Task Sheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually, is also the same user that runs the scripts in Task Scheduler. Thanks for any help for this weird problem!
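
    One detail worth ruling out (an assumption, not something confirmed above): a task set to run "whether the user is logged on or not" can start with a different working directory, so the relative Log.txt redirect, and any relative paths inside the .vbs, behave differently than on a double-click. A sketch of the .bat contents with nothing left relative; the paths are placeholders:

        CScript //Nologo "C:\location\script.vbs" > "C:\location\Log.txt" 2>&1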

  • Mac OS X Server (10.5) mail trapped in queue

    - by Meltemi
    We've got mail accumulating in our Leopard Server's queue and we're not sure exactly why. This machine has required little maintenance over the years, so I'm hoping someone here can spot the obvious and save us some time. Let me know what other information would be helpful. The server appears to be functioning normally except for the "clogged" queue and the following error associated with each "trapped" message. Looking at the messages in the queue, each one states something like this:

        Message ID: 4213C3B8B3F
        Date: October 27, 2009 11:33:27 AM
        Size: 1824
        Sender: [email protected]
        Recipient(s) & Status:
        ----------------------
        [email protected]: connect to 127.0.0.1[127.0.0.1]: Connection refused

    Under Settings > Relay we have checked "Accept SMTP relays only from these hosts and networks": 127.0.0.0/8 and 10.0.1.0/24. The mail in the queue is addressed to users whose accounts are on this server. Mail.app on the clients appears to be functioning normally and is checking mail on the server. We did add a virtual domain some time ago, but all of that was working fine for some time... this just started happening recently. Any ideas? Edit: toggling the filter services on and off seems to have fixed this, except for 2 remaining queued messages that show "mail transport unavailable" as an error!?!
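
    Leopard Server's mail service is Postfix underneath, so once the filter services have been toggled back into shape, the leftover messages can usually be inspected and re-queued from the command line (run as root):

        postqueue -p        # list what is still sitting in the queue
        postsuper -r ALL    # requeue everything for another delivery attempt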

  • Is there a good Lotus Notes open-source alternative?

    - by Ben S
    At my work we use Lotus Notes 6.5 for our email, meeting scheduling and instant messaging. I can't stand the horrible UI, buggy meeting scheduling and overall '90s feel when using it and would love to replace it with open-source alternatives. So far I've been able to setup Thunderbird for email, and I should also be able to configure pidgin to do IM, but I can't find any replacement for the meeting scheduling. I need to be able to receive meeting requests and respond to them. I've looked around trying to get the Thunderbird plugin Lightning to manage the scheduling, but everything I've read so far requires me to export .ics files from Lotus Notes or otherwise keep Lotus Notes around for day-to-day activities. I've also looked into using Evolution as the client, but I found even less information for it than I did for Thunderbird. How can I easily send, receive and respond to Lotus Notes meetings using an open-source alternative? Alternatively, if there exists a full drop-in replacement to Lotus Notes I would also consider it. Note: My desktop at work is a Windows XP machine, though I wouldn't be opposed to a solution requiring cygwin at this point. Edit: I have no power over the server. I only want a compatible client.

  • saslauthd + PostFix producing password verification and authentication errors

    - by Aram Papazian
    So I'm trying to setup PostFix while using SASL (Cyrus variety preferred, I was using dovecot earlier but I'm switching from dovecot to courier so I want to use cyrus instead of dovecot) but I seem to be having issues. Here are the errors I'm receiving: ==> mail.log <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.info <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.warn <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure I tried $testsaslauthd -u xxxx -p xxxx 0: OK "Success." So I know that the password/user I'm using is correct. I'm thinking that most likely I have a setting wrong somewhere, but can't seem to find where. Here is my files. Here is my main.cf for postfix: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname # This is already done in /etc/mailname #myhostname = crazyinsanoman.xxxxx.com smtpd_banner = $myhostname ESMTP $mail_name #biff = no # appending .domain is the MUA's job. #append_dot_mydomain = no readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # Relay smtp through another server or leave blank to do it yourself #relayhost = smtp.yourisp.com # Network details; Accept connections from anywhere, and only trust this machine mynetworks = 127.0.0.0/8 inet_interfaces = all #mynetworks_style = host #As we will be using virtual domains, these need to be empty local_recipient_maps = mydestination = # how long if undelivered before sending "delayed mail" warning update to sender delay_warning_time = 4h # will it be a permanent error or temporary unknown_local_recipient_reject_code = 450 # how long to keep message on queue before return as failed. # some have 3 days, I have 16 days as I am backup server for some people # whom go on holiday with their server switched off. maximal_queue_lifetime = 7d # max and min time in seconds between retries if connection failed minimal_backoff_time = 1000s maximal_backoff_time = 8000s # how long to wait when servers connect before receiving rest of data smtp_helo_timeout = 60s # how many address can be used in one message. # effective stopper to mass spammers, accidental copy in whole address list # but may restrict intentional mail shots. smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. 
smtpd_hard_error_limit = 12 # Requirements for the HELO statement smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit # Requirements for the sender details smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit # Requirements for the connecting server smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org # Requirement for the recipient address smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_data_restrictions = reject_unauth_pipelining # require proper helo at connections smtpd_helo_required = yes # waste spammers time before rejecting them smtpd_delay_reject = yes disable_vrfy_command = yes # not sure of the difference of the next two # but they are needed for local aliasing alias_maps = hash:/etc/postfix/aliases alias_database = hash:/etc/postfix/aliases # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/vmail # this is for the mailbox location for each user virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf # and this is for aliases virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf # and this is for domain lookups virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf # this is how to connect to the domains (all virtual, but the option is there) # not used yet # transport_maps = mysql:/etc/postfix/mysql_transport.cf # Setup the uid/gid of the owner of the mail files - static:5000 allows virtual ones virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 inet_protocols=all # Cyrus SASL Support smtpd_sasl_path = smtpd smtpd_sasl_local_domain = xxxxx.com ####################### ## OLD CONFIGURATION ## ####################### #myorigin = /etc/mailname #mydestination = crazyinsanoman.xxxxx.com, localhost, localhost.localdomain #mailbox_size_limit = 0 #recipient_delimiter = + #html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 #virtual_alias_domains = ##virtual_alias_maps = hash:/etc/postfix/virtual #virtual_mailbox_base = /home/vmail ##luser_relay = webmaster #smtpd_sasl_type = dovecot #smtpd_sasl_path = private/auth smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes #smtpd_sasl_authenticated_header = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination #virtual_create_maildirsize = yes #virtual_maildir_extended = yes #proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps #virtual_transport = dovecot #dovecot_destination_recipient_limit = 1 Here is my master.cf: # # Postfix master process configuration file. For details on the format # of the file, see the master(5) manual page (command: "man 5 master"). # # Do not forget to execute "postfix reload" after editing this file. 
# # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd submission inet n - - - - smtpd -o smtpd_tls_security_level=encrypt -o smtpd_sasl_auth_enable=yes -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # cyrus unix - n n - - pipe user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. 
user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} #dovecot unix - n n - - pipe # flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient} Here is what I'm using for /etc/postfix/sasl/smtpd.conf log_level: 7 pwcheck_method: saslauthd pwcheck_method: auxprop mech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5 allow_plaintext: true auxprop_plugin: mysql sql_hostnames: 127.0.0.1 sql_user: xxxxx sql_passwd: xxxxx sql_database: maildb sql_select: select crypt from users where id = '%u' As you can see I'm trying to use mysql as my authentication method. The password in 'users' is set through the 'ENCRYPT()' function. I also followed the methods found in http://www.jimmy.co.at/weblog/?p=52 in order to redo /var/spool/postfix/var/run/saslauthd as that seems to be a lot of people's problems, but that didn't help at all. Also, here is my /etc/default/saslauthd START=yes DESC="SASL Authentication Daemon" NAME="saslauthd" # Which authentication mechanisms should saslauthd use? (default: pam) # # Available options in this Debian package: # getpwent -- use the getpwent() library function # kerberos5 -- use Kerberos 5 # pam -- use PAM # rimap -- use a remote IMAP server # shadow -- use the local shadow password file # sasldb -- use the local sasldb database file # ldap -- use LDAP (configuration is in /etc/saslauthd.conf) # # Only one option may be used at a time. See the saslauthd man page # for more information. # # Example: MECHANISMS="pam" MECHANISMS="pam" MECH_OPTIONS="" THREADS=5 OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r" I had heard that potentially changing MECHANISM to MECHANISMS="mysql" but obviously that didn't help as is shown by the options listed above and also by trying it out anyway in case the documentation was outdated. So, I'm now at a loss... I have no idea where to go from here or what steps I need to do to get this working =/ Anyone have any ideas? EDIT: Here is the error that is coming from auth.log ... 
I don't know if this will help at all, but here you go: Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql auxprop plugin using mysql engine Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'
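
    One reading of that auth.log excerpt (an interpretation, not a confirmed diagnosis): the sql auxprop plugin is the thing answering the authentication, not saslauthd, because smtpd.conf sets pwcheck_method twice and the auxprop line wins; an auxprop lookup compares the client's plaintext against the stored crypt() hash directly, which can never match. A minimal smtpd.conf sketch that forces saslauthd instead:

        log_level: 7
        pwcheck_method: saslauthd
        mech_list: PLAIN LOGIN

    With that in place, saslauthd has to be able to see the MySQL users through whatever MECHANISMS points at (pam in the /etc/default/saslauthd shown above), for example via a module such as pam_mysql; that part is outside this sketch.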

  • sendmail: Error during delivery. Service not available, closing transmission channel

    - by user2810332
    I have a module in my system that will trigger an email and send it to user. The email will be sent to user when the product in my system is going to expired soon. I test the whole module in localhost and there is no problem with it. Then, I finally moved the module in my server but it gives this error. sendmail: Error during delivery: Service not available, closing transmission channel. It will also create a notepad in my desktop that contains information like this : command line : C:\wamp\sendmail\sendmail.exe -t -i executable : sendmail.exe exec. date/time : 2011-06-18 01:10 compiled with : Delphi 2006/07 madExcept version : 3.0l callstack crc : $fecf9b34, $5562b2fa, $5562b2fa exception number : 1 exception class : EIdSMTPReplyError exception message : Service not available, closing transmission channel. main thread ($15b0): 0045918a +003e sendmail.exe IdReplySMTP 501 +1 TIdReplySMTP.RaiseReplyError 0043ff28 +0008 sendmail.exe IdTCPConnection 576 +0 TIdTCPConnection.RaiseExceptionForLastCmdResult 004402f4 +003c sendmail.exe IdTCPConnection 751 +10 TIdTCPConnection.CheckResponse 0043feba +002a sendmail.exe IdTCPConnection 565 +2 TIdTCPConnection.GetResponse 004403fd +002d sendmail.exe IdTCPConnection 788 +4 TIdTCPConnection.GetResponse 0045ab97 +0033 sendmail.exe IdSMTP 375 +4 TIdSMTP.Connect 004b5f14 +1060 sendmail.exe sendmail 808 +326 initialization 77013675 +0010 kernel32.dll BaseThreadInitThunk thread $cf8: 77a400e6 +0e ntdll.dll NtWaitForMultipleObjects 77013675 +10 kernel32.dll BaseThreadInitThunk thread $1088: 77a41ecf +0b ntdll.dll NtWaitForWorkViaWorkerFactory 77013675 +10 kernel32.dll BaseThreadInitThunk May I know what is the problem of this error ? Is it something like firewall in the server that block my sendmail.exe or anything else ? FYI, I'm using Wamp and Sendmail to send the email. This is my first time seeing error like this. I need an explanation on this. Thank you.
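
    "Service not available, closing transmission channel" is the standard wording of an SMTP 421 reply, which points at the remote SMTP server (or something between your server and it, such as a hosting provider block on outbound port 25) refusing the session, rather than at sendmail.exe itself; that is an inference from the message text, not something visible in the log. A quick connectivity test from the server, with a placeholder relay host:

        telnet smtp.example.com 25

    If the banner never appears, or appears and is immediately followed by a 421, the fix lies with the relay host or the network path to it rather than with the PHP/Wamp side.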

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each users Temp folder. This is due to a legacy app that requries a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried; The simplest version: 'Mapping a temp drive Set WinShell = WScript.CreateObject("WScript.Shell") WinShell.Run "subst T: %temp%", 2, True That didn't work, so tried this for more debug information: 'Mapping a temp drive Set WinShell = WScript.CreateObject("WScript.Shell") Set procEnv = WinShell.Environment("Process") wscript.echo(procEnv("TEMP")) tempDir = procEnv("TEMP") WinShell.Run "subst T: " & tempDir, 3, True This shows me the correct temp path when the user logs in - but still no T: Drive. Decided to resort to brute force and put this in my login script: 'Mapping a temp drive Set WinShell = WScript.CreateObject("WScript.Shell") WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True where \domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content: echo on subst t: %temp% pause When I log in with the above then the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive. I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.
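
    One Server 2008 behaviour that matches this symptom (offered as a possibility to test, not a confirmed cause): with UAC's split token, drive letters created by subst or net use in one token's session are not visible to the other, so a drive created under an elevated context never shows up in the user's filtered Explorer session. The commonly referenced registry switch that links the two sets of drive letters is:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLinkedConnections /t REG_DWORD /d 1 /f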

  • Windows 8 and SMB2 Issues

    - by Rhys Paterson
    We're playing with the consumer preview of Windows 8 and having issues accessing some network shares in our environment. Basically, when I attempt to access a share directly (\[SERVER].[DOMAIN].[NETWORK]\Share$) I get 'An extended error has occured'. The shares reside on an EMC Celerra system. Sorry, I don't really have much more information about it (this is just a little side project). Accessing shares that reside on Windows machines are fine. The Firewall is completley disabled and I am running under full domain administrative credentials. A quick wireshark shows the following group of packets between myself and the server: SMB2 164 NegotiateProtocol Request SMB2 274 NegotiateProtocol Response SMB2 981 SessionSetup Request SMB2 281 SessionSetup Response SMB2 200 TreeConnect Request Tree: \\[SERVER].[DOMAIN].[NETWORK]\[SHARE]$ SMB2 138 TreeConnect Response SMB2 202 Ioctl Request NETWORK_FILE_SYSTEM Function:0x0080 SMB2 131 Ioctl Response, Error: STATUS_INVALID_DEVICE_REQUEST SMB2 126 SessionLogoff Request SMB2 126 SessionLogoff Respons This repeats five times and then (I assume) Windows throws me the above error. A quick Google shows me: 0xC0000010 STATUS_INVALID_DEVICE_REQUEST The specified request is not a valid operation for the target device. Which shows me that NETWORK_FILE_SYSTEM Function:0x0080 request is invalid.. Does anyone know what would cause this? Thanks in advance. Rhys. Edit: FYI - as a workaround, you can disable SMB 2.2 as noted in the EMC thread: sc config lanmanworkstation depend= bowser/mrxsmb10/nsi sc config mrxsmb20 start= disabled This will allow the machine to access the shares. The below answer still stands though :)

  • Dropouts when accessing a share by its DFS name

    - by Stephen Woolhead
    I have a strange problem, aren't they all! I have a DFS root \domain\files\vms, it has a single target on a different server than the namespace. I can copy a test file set from the target directly via \server\vms$\testfiles and all is well, the files copy fine. I have repeated these tests many times. If I try and copy the files from the dfs root I get big pauses in the network traffic, about 50 seconds every couple of minutes, all the traffic just stops for the copy. If I start another copy between the same two machines during this pause, it starts copying fine, so I know it's not an issue with the disks on the server. Every once in a while the copy will fail, no errors, the progress bar will just zip all the way to 100% and the copy dialog will close. Checking the target folder show that the copy is incomplete. I've moved the LUN to another server and had the same problem. The servers are all 2008 R2, the clients are Vista x64, Windows7 x64 and 2008 R2, all have the same problem. Anyone got any ideas? Cheers, Stephen More Information: I've been running a NetMon trace on the connection when the file copy fails and what seems to be standing out is that when opening a file that the copy completes on the SMB command looks like this: SMB2: C CREATE (0x5), Name=Training\PDC2008\BB34 Live Services Notifications, Awareness, and Communications.wmv@#422082, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 245376 SMB2: R CREATE (0x5), Context=MxAc, Context=RqLs, Context=DHnQ, Context=QFid, FID=0xFFFFFFFF00000015, Mid = 245376 But for the last file when the copy dialog closes looks like this: SMB2: C CREATE (0x5), Name=gt\files\Media\Training\PDC2008\BB36 FAST Building Search-Driven Portals with Microsoft Office SharePoint Server 2007 and Microsoft Silverlight.wmv@#859374, Context=DHnQ, Context=MxAc, Context=QFid, Context=RqLs, Mid = 77 SMB2: R , Mid = 77 - NT Status: System - Error, Code = (58) STATUS_OBJECT_PATH_NOT_FOUND The main difference seems to be in the name, one is relative to the open file share, the other has gained the gt\files\media prefix which is the name of the DFS target. These failures are always preceded by logoff and back on of the SMB target. Might have to bump this one to PSS.
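
    Given that direct \\server\vms$ copies are clean and the failures follow a logoff/logon against the SMB target, it may be worth checking what referral the client is actually holding for the namespace when a pause starts; both tools ship with Windows 7 / Server 2008 R2, and the path echoes the one in the post:

        dfsutil cache referral
        dfsdiag /testreferral /dfspath:\\domain\files\vms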

  • Strange stream of HTTP GET requests in Apache logs, from Amazon EC2 instances

    - by Alexandre Boeglin
    I just had a look at my apache logs, and I see a lot of very similar requests: GET / HTTP/1.1 User-Agent: curl/7.24.0 (i386-redhat-linux-gnu) libcurl/7.24.0 \ NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2 Host: [my_domain].org Accept: */* there's a steady stream of those, about 2 or 3 per minute; they all request the same domain and resource (there are slight variations in user agent version numbers); they come form a lot of different IPv4 and IPv6 addresses, in blocs that belong to amazon ec2 (in Singapore, Japan, Ireland and the USA). I tried to look for an explanation online, or even just similar stories, but couldn't find any. Has anyone got a clue as to what this is? It doesn't look malicious per say, but it's just annoying me, and I couldn't find any more information about it. I first suspected it could be a bot checking if my server is still up, but: I don't remember subscribing to such a service; why would it need to check my site twice every minute; why doesn't it use a clearly identifying fqdn. Or, should I send this question to amazon, via their abuse contact? Thanks!
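
    Before reporting it, it can be useful to see how many distinct source addresses are involved and whether they really all come from EC2 ranges; a small sketch against a combined-format access log (the log path is a placeholder):

        grep 'curl/7.24.0' /var/log/apache2/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head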

    Read the article

  • No VMKernel Dump File on PSOD for ESXi 4

    - by user66481
    On PSOD, no VMKernel dump file is written to disk and no message is written to the screen (the screen is either blank or full of dashes). I need this data to understand why the system crashes; any help on getting a dump file written would be appreciated. Thanks.

    Notes:
    - The VMKCore partition exists, is active, and is configured (esxcfg-dumppart -l).
    - esxcfg-advcfg -g /Misc/PsodOnCosPanic = 1
    - esxcfg-advcfg -g /Misc/CosCoreFile = /var/core
    - esxcfg-dumppart -C -D /vmfs/devices/disks/ returns "Error running command. Unable to copy the dump partition: Couldn't find a valid VMKernel dump file. Dump partition might be uninitialized."
    - Hardware diagnostics (Dell) check okay.

    Hardware: VMware ESXi 4.1.0 (VMKernel Release Build 320137), Dell Inc. OptiPlex 960 (2 drives), Intel Core 2 Quad CPU Q9400 2.66GHz

    Configuration: 2 virtual machines running Windows Server 2003 R2 Enterprise Edition SP2 (1 on each drive). VM 1 executes batch jobs (has Internet Information Services 6); VM 2 is a database server (has SQL Server 2000).

    Read the article

  • Mounting a TrueCrypt volume over FTP

    - by Maxim Zaslavsky
    Is it possible to mount a TrueCrypt volume file over FTP? Here's how TrueCrypt works with a local file:

    1. User inputs path to volume file, enters password.
    2. TrueCrypt verifies that the password is correct (probably by decrypting the very first part of the volume file?).
    3. TrueCrypt reads the directory listing from the volume file and mounts the volume. However, in this step, TrueCrypt does NOT process the whole volume file.
    4. The user browses the directory listing and opens a file. TrueCrypt reads only the part of the volume file that contains the file the user wants, and then decrypts it. Once again, TrueCrypt doesn't process the whole volume file - it only reads part of it.
    5. The user edits part of the file and saves it. TrueCrypt encrypts the change and edits the volume file.

    I'm pretty sure it should be possible to mount a volume over FTP, without undermining security and without having to transfer the whole volume file just to read one small part of the volume. Here's how I imagine it:

    1. User inputs FTP path to volume file, enters FTP login information, enters password to volume.
    2. TrueCrypt downloads the very first part of the volume file and verifies that the password is correct.
    3. TrueCrypt downloads the part of the volume file that contains the directory listing - the data is sent encrypted over FTP and is decrypted locally.
    4. The user browses the directory listing and opens a file. TrueCrypt downloads only the part of the volume file that contains the file the user wants, and then decrypts it locally.
    5. The user edits part of the file and saves it. TrueCrypt encrypts the change and edits the volume file over FTP, transferring encrypted data only.

    Is such a feature available?
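    For what it's worth, the "read only part of the volume file" half seems feasible in principle, since FTP's REST command lets a client start a transfer at an arbitrary offset. Below is a rough Python sketch of fetching one byte range that way; it is purely illustrative (nothing TrueCrypt actually exposes), and the host, credentials and path are made up:

        # Fetch `length` bytes starting at `offset` from a remote file over FTP.
        # Relies on the server honouring the REST (restart) command; binary mode
        # must be selected before RETR for the offset to be byte-accurate.
        from ftplib import FTP

        def read_range(host, user, password, path, offset, length):
            ftp = FTP(host)
            ftp.login(user, password)
            ftp.voidcmd("TYPE I")                    # binary mode
            conn = ftp.transfercmd("RETR " + path, rest=offset)
            chunks, remaining = [], length
            while remaining > 0:
                data = conn.recv(min(8192, remaining))
                if not data:
                    break
                chunks.append(data)
                remaining -= len(data)
            conn.close()
            try:
                ftp.abort()                          # we only wanted a slice, not the whole file
            except Exception:
                pass
            ftp.close()
            return b"".join(chunks)

        # e.g. grab the first 512 bytes (the header area) of a hypothetical volume:
        header = read_range("ftp.example.com", "me", "secret", "volumes/secret.tc", 0, 512)

    Writing a changed range back is the harder part: plain FTP has no reliable partial-upload equivalent (some servers accept REST before STOR, but that can't be counted on).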

    Read the article

  • Can I change the file system on the OS partition on Server 2008 R2?

    - by KCotreau
    I have a client using R1Soft Continuous Data Protection backup, and two of the Server 2008 R2 boxes were erroring out with these errors:

        Unable to obtain NTFS volume data for device '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}': Incorrect function.
        Unable to discover information for filesytem volume '\\?\Volume{f612849e-7125-11e0-8772-806e6f6e6963}'; Unable to obtain NTFS volume

    So I backed up all the registry entries with {f612849e-7125-11e0-8772-806e6f6e6963} in them and deleted them, based on some VERY sparse info from R1Soft. I then decided to restore them before rebooting and to do a system state backup first using MS backup, and even it errored out, saying that there were FAT32 partitions. This was a major clue, as the only two computers with problems had these FAT32 partitions. I figured if MS backup can't back up something, any other program is likely to have problems. Also, now that I realize the servers have FAT32 partitions on them, the error referencing NTFS takes on more weight. The partitions on both servers have the label "OS"; on one of the computers it has a drive letter, but on the other it does not. So I am thinking that if I just convert those partitions from FAT32 to NTFS, it may solve the backup problem. So the question is this: can I just convert those partitions, and does anyone have any concrete knowledge of any major downsides, like the servers not coming back up (of course, I would do one at a time)? My thinking is that the answer is probably at least 95% no, but they are production servers, so I wanted to get some second opinions.
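    For reference, my plan for the conversion itself is the built-in convert tool. The drive letter below is just an example; the volume that currently has no letter would need one assigned first (Disk Management or diskpart), and if the volume is in use, convert will offer to schedule the conversion for the next reboot:

        rem Confirm the volume really is FAT32 before touching it
        fsutil fsinfo volumeinfo D:\
        rem One-way, in-place conversion to NTFS (data is preserved)
        convert D: /FS:NTFS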

    Read the article

  • ASP.NET AJAX, WebSEAL Junctions, and Sessions

    - by powella
    I've run across a problem with ASP.NET AJAX (hooked up to web services directly) when accessing our site through a WebSEAL junction. Listing 11 on this page, http://www.ibm.com/developerworks/tivoli/library/t-ajaxtam/index.html, explains that requests to pages which do not result in a content type of text/html are not sent with cookie data. Hence, no session. ASP.NET AJAX requests are returned with a content type of "application/json; charset=utf-8", so the WebSEAL junction is not appending the session cookie to the request. This results in our web service seeing the user as invalid, due to the missing session information. The junction has been set up properly with the -J parameter (that's an uppercase J, which appends the required script for WebSEAL to the bottom of the page; this prevents forcing IE into quirks mode), and we've confirmed that the necessary script exists in the output source. I'm open to any suggestions at this point, as I'm out of ideas. FWIW, the site runs perfectly when not accessed through the WebSEAL junction.

    Read the article

  • Accessing network shares on Windows 7 via the SonicWall VPN client

    - by Jack Lloyd
    I'm running Windows 7 x64 (fully patched) and the SonicWall 4.2.6.0305 client (64-bit, which claims to support Windows 7). I can log in to the VPN and access network resources (e.g. SSH to a machine that lives behind the VPN). However, I cannot access shared filesystems: Windows refuses to do discovery on the VPN network. I suspect part of the problem is that Windows persistently considers the VPN connection to be a 'public network'. Normally you can open the Network and Sharing Center and modify this setting, but it does not give me a choice for the VPN. So I did the expedient thing and turned on file sharing for public networks, and I also disabled the Windows firewall for good measure. Still no luck. I can access the server directly by putting \\192.168.1.240 in the taskbar, which brings up the list of shares on the server. However, trying to open any of the shares simply tells me "Windows cannot access \\192.168.1.240\share You do not have permission to access ..."; it never asks for a domain password. I also tried Windows 7's native VPN functionality, but it couldn't connect to the VPN at all. I suspect this is because SonicWall is using some obnoxious special/undocumented authentication system; I had similar problems trying to connect on Linux with the normal IPsec tools there. What magical invocation or control panel option am I missing that will let this work? Are there any reasonable debugging strategies? I'm feeling quite frustrated at Windows' tendency not to give me much useful information that might let me understand what it is trying to do and what is going wrong.
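    One more thing I intend to try, since nothing ever prompts me for a domain password, is mapping the share with explicit domain credentials via net use (the share and account names below are placeholders):

        rem The trailing * makes net use prompt for the password rather than
        rem silently reusing the locally cached credentials.
        net use Z: \\192.168.1.240\share /user:MYDOMAIN\myuser *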

    Read the article

  • Time Machine disk icon on boot disk

    - by Ben Lings
    The icon for Macintosh HD (my boot disk) shows as a Time Machine disk. There is a file .com.apple.timemachine.supported in the root of the disk. If I delete the file and restart the computer, the icon goes back to a normal HD icon. However, the .com.apple.timemachine.supported file is recreated at some point during boot, because when I log in again the file is back, and if I then reboot again, the icon goes back to being a Time Machine one. Any ideas about what is creating this file and why? More importantly, how can I get it to stop? It looks like something thinks the boot disk should be a Time Machine volume, but what? Console.app shows the following messages at approximately hourly intervals:

        19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Starting standard backup
        19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Cookie file is not readable or does not exist at path: /.<12 hex digits of MAC address for en0>
        19/01/2010 19:23:54 /System/Library/CoreServices/backupd[7459] Volume at path / does not appear to be the correct backup volume for this computer. (Cookies do not match)
        19/01/2010 19:23:59 /System/Library/CoreServices/backupd[7459] Backup failed with error: 18

    Other possibly relevant information: the boot HD isn't the original; the original failed, so this is a SuperDuper'd clone of the original drive. I used to use the same disk for a SuperDuper clone as for Time Machine. These are the same symptoms as this and this.
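    In case it helps anyone answering: my next step is to watch filesystem activity for whatever recreates the cookie file (this needs root, and the grep pattern is just my guess at matching the path):

        # fs_usage prints each matching filesystem call along with the name of the
        # process that made it, so whatever recreates the file should show up here.
        sudo fs_usage -w -f filesys | grep -i "timemachine.supported"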

    Read the article
