Search Results

Search found 23890 results on 956 pages for 'issue'.


  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SaaS application specially customized for multiple clients. For one customer in particular -- they are reporting sporadic performance issues from various locations on their network, in particular UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application, but our application has MANY clients and all features are working fine for all other clients. Interestingly enough -- DOWNLOADS (i.e. just accessing the website, or downloading documents) are working fine. A speed test shows that they should get 1.2Mbps UP. So, a 3MB file should take 20 secs to upload. It takes 60+ seconds on their network. Sometimes even small files take OVER 10 minutes to upload, or they time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SaaS applications they use allow them to upload just fine. Both IT teams are working together to resolve this issue. What kind of data can I request from the clients to begin ruling things out? Seems like we need to somehow measure LATENCY of the networks involved, or even at the switch level we need to understand if packets are getting dropped somewhere and why. Where should I start? Any help is appreciated. I'll provide more info upon request.
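
    As a starting point (a sketch, not from the original post), raw upload throughput and a real HTTP upload can be measured from the client's network independently of the application; the hostnames and file names below are placeholders, and the sketch assumes iperf3 and curl are available on both ends:

        # On a host at our end, reachable from the client's network:
        iperf3 -s

        # From a machine on the client's network; measures client -> server (upload) throughput:
        iperf3 -c test.example.com -t 30

        # Time a real HTTP upload with curl (upload.bin and the URL are placeholders):
        curl -s -o /dev/null -F 'file=@upload.bin' \
             -w 'upload speed: %{speed_upload} bytes/s, total: %{time_total}s\n' \
             https://app.example.com/upload

    Comparing the two numbers helps separate a network problem (both slow) from an application problem (only the HTTP upload slow).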

    Read the article

  • Linux: CIFS/Samba mount hangs for several minutes

    - by Pistos
    I have a small local network which has a Gentoo box and a Windows box. I mount a share originating on the Windows box onto the Gentoo box with a command like: mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos //192.168.0.103/Users /mnt/windowsbox Most of the time, everything Just Works, and I can read and write without problems. However, every few weeks or so, the connection or the mount point seems to go dead or hang, such that any process that tries to access the mount point gets stuck in D state (disk, or I/O wait). These processes become impervious to TERM and KILL signals. Disconnecting and reconnecting the Windows box from the network does not help. The frozen state lasts for 5+ minutes. It's really frustrating and gets in the way of normal work, because it freezes Save As dialogues, ls commands, etc. If I issue a umount on the mount point, it either hangs also, or reports that the mount point is in use. Eventually, the dead state resolves itself, and the mount point gets unmounted, or it becomes possible to umount with no delay. My guess is that this happens when the connection/mount has gone idle, or when the Windows machine has been idle. I am not really sure. Why is this happening, and what can I do to prevent it? Or how can I successfully kill these D-state processes at will? Possibly related: CIFS mounts hang on read
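
    Two things worth trying here (suggestions, not from the original post): mounting with the soft option so that stalled CIFS I/O eventually returns an error instead of blocking forever (exact behaviour depends on the kernel's cifs client), and dumping blocked tasks to the kernel log while the hang is happening to see where the processes are stuck:

        mount -t cifs -o username=WindowsUsername,password=thepassword,uid=pistos,soft \
            //192.168.0.103/Users /mnt/windowsbox

        # While processes are stuck in D state, log what they are blocked on
        # (output appears in dmesg; requires root and sysrq enabled):
        echo w > /proc/sysrq-trigger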

    Read the article

  • Vim clobbering scrollback buffer outside of screen

    - by dotancohen
    If I'm not in a screen session, then when exiting Vim I get a bash prompt below the remnants of the Vim window. A side effect of this is that my scrollback buffer is clobbered, especially if I have paged through a long file in Vim. The problem only occurs when I'm not in screen; inside a screen window, Vim exits to show the bash prompt and the previous lines just as before. I tried adding set t_ti= t_te= to my .vimrc to fix the problem, but the only effect it had was to break Vim such that the problem occurs inside screen as well as outside. Thus, I removed the line. For good measure I do have altscreen on in .screenrc. This is on Ubuntu Server 12.04.1 LTS, with Bash 4.2.24, Screen 4.00, and Vim 7.3 (not vim-tiny), accessed over SSH in Cygwin version NT-6.1-WOW64 on a Windows 7 laptop. Thanks. EDIT: Note that in the same Cygwin install I can SSH into a different server (CentOS) and there Vim does not clobber the scrollback buffer. Therefore, I do not suspect a Cygwin issue. The CentOS machine does not have screen installed, and I did not have to add set t_ti= t_te= to .vimrc.

    Read the article

  • Google Drive terminates without error on startup

    - by Iszi
    I've used Google Drive for a while now, but it won't start up after installing on my latest system rebuild. I'm still using the same OS, hardware, and basic software load (antivirus, firewall, etc.) that I have for years, during which I had not previously had problems with Drive. OS: Windows 7 Ultimate x64. Google Drive version: 1.12.5329.1887. Now, whenever I try to run Google Drive, it just spawns two instances of the executable which die shortly after. No error messages are posted to the desktop, and nothing indicating any problem is written to the Event Log. After some research, I've yet to find anyone having the same problem who's found an answer. I did find out how to run Google Drive in diagnostic mode, using the --vv parameter at the command line. After that, I opened up the sync log and got this:
        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 OS: Windows/6.1.7601-SP1
        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 Google Drive (build 1.12.5329.1887)
        2013-10-31 17:11:24,039 DEBUG pid=3664 1892:MainThread logging:1608 DEBUGGING DUMP is ON.
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 ERROR, UNEXPECTED EXCEPTION
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 [Error 5] Access is denied
        Traceback (most recent call last):
          File "<string>", line 232, in Main
          File "<string>", line 118, in RegisterCustomFileTypes
          File "P:\p\agents\hpal4.eem\recipes\353983091\base\b\drb\googleclient\apps\webdrive_sync\windows\build\pyi.win32\main\outPYZ1.pyz/windows.registry", line 62, in GetValue
        WindowsError: [Error 5] Access is denied
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Crash reporting disabled. Ignoring report.
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Exiting with error code: 0
    I'm running on an account with Administrator-level permissions, and have even tried using "Run As Administrator" on the EXE. I'm not sure why it's looking for a P:\ drive, as no such volume has ever been mounted on this system. What should I do to try to further troubleshoot, and resolve, this issue?

    Read the article

  • Ownership of hard drive and recovery of files on Windows 7

    - by Jeff
    Here is the issue. I have an old laptop that died that was running XP. I now have a Win 7 laptop. I need to get the files off the old drive, but Win 7 will not let me take ownership of the drive. I can run the command prompt in regular mode and do a dir on the drive, which shows up as the Q: drive in regular mode. It gives me the volume as C and the serial number. I cannot take ownership in regular mode with the command prompt or other means. The Microsoft site says to use safe mode with networking, so I go to safe mode, where the drive shows up as G:. I use the command prompt Takeown /f G: and get "the device is not ready" error. I am at a loss. All I want is to retrieve my files from this drive. Any ideas or suggestions? I don't see how you can dir and get some info in one mode and not access it in another. I have to get ownership and permissions fixed to get into the drive to get my files. Thanks in advance. I might add that I am using a USB 3.0 to IDE/SATA cable adapter. Software came with the device, but I can't make heads or tails out of the manual to know if any of the software can help me. The software is PCClone EX Lite, and Clone Drive software.
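
    One approach worth trying (a sketch, not from the original post): from an elevated command prompt, take ownership recursively and then grant yourself full control, using whatever letter the volume currently has; YourUserName is a placeholder:

        takeown /f G:\ /r /d y
        icacls G:\ /grant YourUserName:F /t

    That said, "the device is not ready" usually points at the drive or the USB adapter rather than permissions, so it is worth first checking whether the disk shows up healthy in Disk Management.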

    Read the article

  • GPO - Setting not applied, although policy is applied

    - by Kenny Bones
    This is rather strange. In our domain we have several terminal servers, and this morning a user reported that no drives are mapped when he logs on to the terminal server. So, I checked Group Policy Results and compared two users. Both users have the exact same policies applied. But for this particular user, the Scripts section under User Configuration - Policies - Windows Settings is just not there. For the other user, for whom this is working fine, the Scripts section says the winning GPO is Terminal2008, which is the GPO that contains the script. And the Terminal2008 GPO is applied to both users. Also, loopback processing is set to Replace. What could be the cause of this? I've never seen this particular issue before. I mean, both users are in the same OU, they log on to the same terminal server and the same policies are applied to both. They do not, however, have the exact same group memberships, but should that matter? It's not stated that the script should run only if the user is a member of a certain group, and I'm not sure that could even be done through that specific setting. All I know is, the very same policies are applied to both users, in the same OU and on the same computer. Meaning, the same settings should be applied? Edit: I just ran Group Policy Results on one of the other terminal servers, which is also in the same OU, and the Scripts section is there! This means that this particular user doesn't get this setting when he's logged onto this particular server. What could be the cause of this?
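
    A side-by-side comparison of the resulting set of policy for the affected user on the problem server and on a working server can make the difference visible (run on each terminal server; DOMAIN\problemuser and the report file name are placeholders):

        gpresult /h gpo-report.html /user DOMAIN\problemuser /scope user
        gpupdate /force

    Since group membership is the one thing that differs between the two users, the security filtering on Terminal2008 and the ACL on the script file itself are worth comparing across the two reports.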

    Read the article

  • Exchange 2003: Accounts with only OWA access unable to change passwords when expired or forced

    - by radioactive21
    We have accounts with only OWA access, because they are generic accounts and we do not want the accounts to be used as machine logins. We have a password policy that users must change their passwords every 6 months. The problem we are having is that since the accounts are not logging into the machines, when the password policy kicks in it prevents users with OWA-only access from changing their password. Also, when we select "User must change the password at next logon" it causes the same issue. We have two Exchange servers, the main one and a front-end one. What we have been doing with these generic accounts is, in properties, under the "Account" tab, restricting "Log on to" to the front-end server. Just to clarify, when we have no restrictions, users can change their passwords via the web without any issues. It is only when we force them to only log in via OWA that they can't change passwords. I tried adding our domain controller and main Exchange server to the "This user can log on to the following computers" list in the Account tab, but it is still not allowing them to change passwords. Currently I have to manually reset the passwords for OWA-only accounts. Is there any way to allow OWA-only accounts to change passwords? EDIT: Users restricted to only OWA can change their password via the web browser without any issues when there are no restrictions. In other words, normally they can just log into Outlook via the web and change their password, but when the password policy expires their password or we force them to change their password at next login, they are unable to.

    Read the article

  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt appears to be still unclear, I figured I'd try to get my stuff migrated into BitLocker at least for the time being. I nearly never have to access my encrypted data from anything that's not BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's still relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for needing volumes to be 64 MB large, when it's not even appearing to use that space? BitLocker FAQ on TechNet

    Read the article

  • WSUS performance for unneeded updates

    - by mhouston100
    We have a WSUS server serving around 300 PCs and a couple of dozen servers, and a discussion came up at work as to what products to include. We have a single SQL 2005 instance on one of the servers and it has NEVER been updated. My first thought was to just tick the box for SQL 2005 and let WSUS do its thing to upgrade to the latest service pack at least. One of the other guys here has the opinion that having updates that are relevant to only a small selection of hosts would affect the performance of WSUS as a whole, claiming that each update does a 'check' against all the hosts or something similar. My argument is that manually updating these servers is obviously not working, as the admins are not paying attention to what is needed. So my question is: Do updates that only affect a subset of the hosts affect the overall performance of the WSUS server in relation to ALL the hosts? (Disk space is not an issue at this point.) Is there any performance justification for or against manually updating a small number of products? Basically I'm needing a rebuttal against his argument, and I'm unable to find any concrete documentation to prove him wrong.

    Read the article

  • In Windows 7 power management, is it possible to set different sleep settings for different SATA disks?

    - by Ben Voigt
    I'm having an issue with Windows 7 either freezing up or generating a BSOD coming out of sleep. I suspect that it is related to my boot/OS drive, an OCZ Vertex SE SSD, because numerous other Vertex users have reported sleep problems. Notably, if I put the computer to sleep, it almost always wakes correctly. If it goes to sleep after a timeout, it almost always BSODs. I disabled timed sleep and now it freezes when left unattended. My next step is to disable "Put hard disks to sleep after X minutes", but I'd like to change this setting only for the SSD and not for the rotating data disks, which I would like to spin down normally. Does anyone know a place to configure sleep on a per-disk basis? I don't need to set different timeouts on different disks (although that would be nice), simply setting "this disk sleeps" and "sleep is disabled for this disk" would be great. Additional system information: Windows 7 Ultimate x64, Core i5 - P55 chipset, Intel RST drivers are installed. One SSD, two rotating HDD, and a DVD-RW drive are all connected to the Intel SATA ports. I could potentially move some of these to my motherboard's other SATA controller if that would help.
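
    For reference, the command-line equivalent of that setting (shown here only as an illustration; Windows exposes the disk idle timeout globally, which is exactly the limitation in question, so this disables disk sleep for all disks, not just the SSD):

        powercfg /change disk-timeout-ac 0
        powercfg /change disk-timeout-dc 0
        REM Equivalent, via the power-scheme aliases:
        powercfg -setacvalueindex SCHEME_CURRENT SUB_DISK DISKIDLE 0
        powercfg -setactive SCHEME_CURRENT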

    Read the article

  • processes slow after some time of actively running

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine. Each one does some pretty heavy-load stuff. The cron jobs are parsing files, and the bigger the file, the longer it takes to parse. The strange thing is that if I make the files too big (like 30 MB) the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it gets into some "zombie" state. If prior to this the process in htop was using 70-80% of the CPU, then after this drop occurs it slows down to something like 5-10%. The load average drops down as well. The status of the processes sometimes changes to D in htop, which stands for uninterruptible sleep (usually waiting on disk I/O), rather than an actual zombie state. Today I noticed the same behavior in MySQL processes when executing heavy queries (a query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU is used by the php process and not mysql, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance, when after some aggressive use of CPU the CPU quota was taking effect and everything was slowing down dramatically. This is a dedicated machine running Ubuntu. What may be the cause?
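
    When a process sits in D state it is almost always blocked on I/O rather than CPU, so a reasonable first step (a sketch, not from the original post) is to watch iowait and per-device latency while the slowdown is happening (iostat needs the sysstat package):

        # Per-second overview; a high 'wa' column means the CPU is idle waiting on I/O:
        vmstat 1 5

        # Per-device utilisation and average wait times:
        iostat -x 1 5

        # Which processes are in uninterruptible sleep, and what they are waiting in:
        ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'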

    Read the article

  • 32bit dll not loading with Enable 32 Bit Applications on IIS 7.5

    - by Jon
    I have an MVC 3 web site referencing a 32-bit DLL. The OS is Windows 2008 R2 x64. The website is in the ASP.NET 4 app pool. I have turned on Enable 32-Bit Applications but it doesn't work: I get a Bad Image exception, and I can't find out how to turn this level of logging on in IIS. I have set up a page that outputs whether it's running 32-bit or 64-bit, and when I turn Enable 32-Bit Applications on/off on the app pool I get the correct output. The website is also in Full Trust. I'm at a loss to try and get it to work. I do know that it works on Win7 32-bit. Can you suggest some things to try? UPDATE: I have just written a simple Windows Forms app with a button on it which calls my DLL. This was built with a target of x86 and it worked fine, so there is an issue with IIS or ASP.NET, I think. UPDATE 2: Does it matter if the ASP.NET pipeline is Classic or Integrated? I've tried both with the same problem, but thought it was worth asking. UPDATE 3: I found this question from someone trying to do the same thing, and he gave up, which isn't too helpful!
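
    The app pool setting can be confirmed from the command line (the pool name "ASP.NET v4.0" below is an assumption; substitute the pool the site actually uses):

        %windir%\system32\inetsrv\appcmd.exe list apppool "ASP.NET v4.0" /text:enable32BitAppOnWin64
        %windir%\system32\inetsrv\appcmd.exe set apppool "ASP.NET v4.0" /enable32BitAppOnWin64:true

    Since a plain x86 WinForms app loads the DLL fine, a Bad Image exception under IIS often means a dependency of the DLL (a VC++ runtime, for example) is missing or resolving to a 64-bit copy in the worker process; Dependency Walker or Process Monitor against w3wp.exe can show which module fails to load.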

    Read the article

  • Juniper SSG20 IP settings for email server

    - by codemonkie
    We have 5 usable external static IP addresses leased by our ISP, .49 to .53, where:
        .49 is assigned to the Juniper SSG20 firewall and NATed for 172.16.10.0/24
        .50 is assigned to a Windows box for the web server and domain controller
        .51 is assigned to another Windows box with Exchange Server (domain: mycompany1.com); the mx record points to 20x.xx.xxx.51
    Currently there is a policy set for all incoming SMTP traffic addressed to .51 to forward to the NATed address of the Exchange Server box (private IP: 172.16.10.194). We can send and receive emails both internally and externally, but Gmail is saying mail from mycompany1.com is not sent from the same IP as the mx lookup, but rather from 20x.xx.xxx.49:
        Received-SPF: neutral (google.com: 20x.xx.xxx.49 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=20x.xx.xxx.49;
        Authentication-Results: mx.google.com; spf=neutral (google.com: 20x.xx.xxx.49 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
    The mx record in global DNS space, as well as in the domain controller .50, for mail.mycompany1.com is set to 20x.xx.xxx.51. My attempt to resolve the above issue was to:
        1. Update the mx record from 20x.xx.xxx.51 to 20x.xx.xxx.49
        2. Create a new VIP for SMTP traffic addressed to 20x.xx.xxx.49 to forward to 172.16.10.194
    After my changes, incoming email stopped working. I believe it has something to do with the Juniper setting, i.e. SMTP addressed to .49 is not being forwarded to 172.16.10.194. Also, I have been wondering: is it mandatory to assign an external static IP address to the Juniper firewall? Any help appreciated. TIA
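
    Separately from the routing question, the "neutral" result from Gmail is just the absence of an SPF record; publishing one that lists every address mail can legitimately leave from would address that part. A sketch only (the elided octets are kept exactly as written above, and the choice of mechanisms is an assumption):

        mycompany1.com.   IN  TXT  "v=spf1 ip4:20x.xx.xxx.49 ip4:20x.xx.xxx.51 mx ~all"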

    Read the article

  • No sound out of headphone port

    - by Thanatos
    I cannot get sound out of the headphone port. Headphones are plugged in, and sound comes out of the internal speakers. Windows behaves normally (sound switches to headphones when headphones are inserted). It did work in Linux at one point, but something changed; we're just not sure what. Rebooting doesn't fix it. This appears to occur whether or not PulseAudio is running. Things I've tried:
        Rebooting. No effect.
        Booting into Windows. It works properly, so probably not a hardware issue.
        All of alsamixer. My only controls are: "Master" (volume bar and mutable, unmuted, controls volume), "PCM" (volume bar only, 100%), "S/PDIF" (mutable only, currently muted, has no effect), "S/PDIF Default PCM" (mutable only, currently unmuted, has no effect).
        Killing PulseAudio. No effect. (It also won't stay dead! Something appears to be restarting it, and I can't tell what, but it is annoying as fuck.)
        alsactl init 0, no effect.
        sudo rm -f /var/lib/alsa/asound.state, no effect.
    General system info: Ubuntu 10.04 LTS, Toshiba Satellite T135D-S1324. lspci says I have:
        00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
        01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller
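
    One class of fix worth trying (a suggestion, not from the original post) is overriding the HDA codec's model so the driver wires the headphone jack correctly; the right model string depends on the codec, so identify it first and treat model=auto below as a placeholder (valid values are listed in the kernel's HD-Audio-Models documentation):

        # Identify the codec the driver detected:
        cat /proc/asound/card0/codec* | grep -i 'codec:'

        # Try a model override (model=auto is only an example):
        echo 'options snd-hda-intel model=auto' | sudo tee -a /etc/modprobe.d/alsa-base.conf

        # Reload ALSA (or reboot):
        sudo alsa force-reload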

    Read the article

  • Nginx + Apache + Wordpress redirects to localhost/127.0.0.1

    - by jcrcj
    Anyone know how to fix an issue with Nginx + Apache + WordPress redirecting to localhost/127.0.0.1? I've tried a lot of different fixes, but none have worked for me. I can go to http://domain.com/wp-admin just fine and use everything there normally. But if I try to go to http://domain.com it redirects to 127.0.0.1. Everything also works fine if I just run through Apache. Here are the relevant portions of my nginx.conf:
        server {
            listen 80;
            server_name domain.com;
            root /var/www/html/wordpress;
            location / {
                try_files $uri $uri/ /index.php;
            }
            location ~ \.php$ {
                proxy_pass http://127.0.0.1:8080;
            }
        }
    Here are the relevant portions of my httpd.conf:
        Listen *:8080
        ServerName <ip>
        <VirtualHost *:8080>
            ServerAdmin test@test
            DocumentRoot /var/www/html/wordpress
            ServerName domain.com
        </VirtualHost>
    This is what my nginx log looks like:
        <ip> - - [19/Jun/2012:22:35:35 +0400] "GET / HTTP/1.1" 301 0 "-" "Mozilla/5.0
    This is what my httpd log looks like:
        127.0.0.1 - - [19/Jun/2012:22:24:46 +0400] "GET /index.php HTTP/1.0" 301 - "-"
    WordPress Address (URL) and Site Address (URL) are both set to http://domain.com
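
    A likely culprit (a suggestion, not from the original post): when nginx proxies to Apache without forwarding the original Host header, Apache and WordPress see the request as being for 127.0.0.1 and issue a 301 to that address. Forwarding the host and client address is a small change to the PHP location quoted above:

        location ~ \.php$ {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }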

    Read the article

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL certificate on my server. I use a web hosting panel called ZPanel, which is an open source project. It then set up a redirect for all traffic on my domain on port 80 to redirect it to port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache virtual hosts file with something like this:
        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
    My question is, are there any drawbacks to using SSL? Since this is not a 301 redirect, will I lose link juice/ranking in search engines by switching to https? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not eCommerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning. UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere...
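
    If the ranking concern is the main worry, the quoted rule can be made an explicit permanent redirect: the R flag without an argument sends a 302, while R=301 tells crawlers the HTTPS URL is canonical. A minimal variant of the same rule:

        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L]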

    Read the article

  • Associate email account with "Personal Folders" Outlook data file?

    - by TheLQ
    In the process of migrating email servers I've run into an interesting problem: In Outlook 2007 you have the default "Personal Folders" item. This contains the email for the account that was originally set up with Outlook. My issue is that I have deleted the account associated with that and created an entirely new account. So now I have "Personal Folders" and "[email protected]". However, I can't delete "Personal Folders" nor associate "[email protected]" with that PST file. Deleting it in Outlook (Tools > Account Settings > Data Files) gave the error "The default data file cannot be removed, because it is your default delivery location. After you have selected a different default delivery location, your current file can be removed." Deleting the PST file itself (outlook.pst) made Outlook demand to know where its default file would be. So I selected my "[email protected]" PST file and restarted Outlook. Now "Personal Folders" is called "[email protected]", but I still have a duplicate account called this. Which is bad. Worse, my email is associated with the duplicate PST, not the default. How can I associate my email with my default PST, or delete the default PST entirely? Luckily I have backups.

    Read the article

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where one of the two replicated glusterfs bricks will go offline and take all of the client mounts down with it. As I understand it, this should not be happening; it should fail over to the brick that is still online, but this hasn't been the case. I suspect that this is due to a configuration issue. Here is a description of the system:
        2 gluster servers on dedicated hardware (gfs0, gfs1)
        8 client servers on VMs (client1, client2, client3, ..., client8)
    Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1. Each of the clients is mounted with the following entry in /etc/fstab:
        /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0
    Here is the content of /etc/glusterfs/datavol.vol:
        volume datavol-client-0
            type protocol/client
            option transport-type tcp
            option remote-subvolume /data/datavol
            option remote-host gfs0
        end-volume
        volume datavol-client-1
            type protocol/client
            option transport-type tcp
            option remote-subvolume /data/datavol
            option remote-host gfs1
        end-volume
        volume datavol-replicate-0
            type cluster/replicate
            subvolumes datavol-client-0 datavol-client-1
        end-volume
        volume datavol-dht
            type cluster/distribute
            subvolumes datavol-replicate-0
        end-volume
        volume datavol-write-behind
            type performance/write-behind
            subvolumes datavol-dht
        end-volume
        volume datavol-read-ahead
            type performance/read-ahead
            subvolumes datavol-write-behind
        end-volume
        volume datavol-io-cache
            type performance/io-cache
            subvolumes datavol-read-ahead
        end-volume
        volume datavol-quick-read
            type performance/quick-read
            subvolumes datavol-io-cache
        end-volume
        volume datavol-md-cache
            type performance/md-cache
            subvolumes datavol-quick-read
        end-volume
        volume datavol
            type debug/io-stats
            option count-fop-hits on
            option latency-measurement on
            subvolumes datavol-md-cache
        end-volume
    The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab:
        gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0
    This was the entry for half of the clients, while the other half had:
        gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0
    The results were exactly the same as with the above configuration. Both configs connect everything just fine; they just don't fail over. Any help would be appreciated.
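
    One thing worth checking (a suggestion, not from the original post): when a brick disappears without closing its TCP connections, replica clients block all I/O until network.ping-timeout expires (42 seconds by default), which can look like the whole mount going down rather than a clean failover. The current setting can be inspected and lowered on the volume:

        gluster volume info datavol
        gluster volume set datavol network.ping-timeout 10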

    Read the article

  • Nginx Ubuntu Postfix Config - Can't connect to incoming IMAP server 'server not responding' but can send mail via outgoing using same details?

    - by daveaspinall
    I'm pretty new to server admin, and especially nginx, but seem to be getting on fine apart from accessing my mail via my iPhone. I've changed my domain to 'domain.com'. The thing is, I can send mail via my outgoing server but can't connect to the incoming one? I just get the message "the mail server at mail.domain.com is not responding". /etc/postfix/main.cf:
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = all
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = domain.com, mail.domain.com, localhost.com, , localhost, localhost.localdomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_security_level = may
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_security_level = may
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom
    telnet localhost 25, followed by ehlo localhost, gives:
        250-mail.domain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
    Using the following details to connect: username, password, hostname mail.domain.com, port 25. iptables --list:
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
    I also sent mail to the server as a test and got this message, if it helps:
        Technical details of temporary failure: [mail.domain.com. (10): Connection refused]
    I also looked in /var/log/mail.log and it has multiple entries of:
        postfix/smtpd[12239]: connect from 5acefc9a.bb.sky.com[90.206.252.xxx]
        Mar 23 06:47:09 new-domain postfix/smtpd[12239]: lost connection after CONNECT from 5acefc9a.bb.sky.com[90.206.252.154]
    Notice new-domain, which is incorrect, but the server hostname and the hostname in the configs are correct? I recently moved servers and the host has set the primary domain on the service as new-domain.com, so this may be the issue? Like I said, it works to connect to the outgoing server, but incoming gets the not-responding error? Any ideas would be much appreciated!
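
    Worth noting (a suggestion, not from the original post): Postfix only speaks SMTP, which is why sending works, while an incoming IMAP connection on 143/993 needs a separate IMAP server such as Dovecot. A quick check of whether anything is listening on those ports, and one way to add a server on Ubuntu, might look like:

        # Is anything listening on the IMAP/IMAPS ports?
        sudo netstat -tlnp | grep -E ':(143|993) '

        # If not, install an IMAP server (Dovecot is one common choice) and test the banner:
        sudo apt-get install dovecot-imapd
        telnet mail.domain.com 143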

    Read the article

  • WAMP running extremely slow on Windows 7

    - by JavaCake
    After 2 days of a tough fight trying to figure out what the problem is with my Windows 7 32-bit machine at work, I have nearly given up. The issue is that pages load extremely slowly; the poor performance occurs both when accessed locally (127.0.0.1) and from another computer on the intranet. First, to explain the system:
        WAMP version: Apache 2.2.22, MySQL 5.5.24, PHP 5.4.3
        XDebug 2.1.2, XDC 1.5, phpMyAdmin 3.4.10.1, SQLBuddy 1.3.3, webGrind 1.0
        DocumentRoot: located on a network drive
        MySQL: InnoDB
        Pages: PHP, MySQL, AJAX, etc.
    So basically the changes I have made in order to get greater performance:
        Changed C:\windows\system32\drivers\etc\hosts: 127.0.0.1 localhost and 127.0.0.1 127.0.0.1
        Modified my.ini: innodb_flush_log_at_trx_commit = 2
        Modified httpd.ini: EnableMMAP on, EnableSendfile on
        Modified php.ini: realpath_cache_size = 4m
    How I measure the performance is the overall load time of the page. I run it locally on my Mac OS X machine as well (MAMP), and typically the front page load time is 0.06 seconds, but on the Windows 7 machine it is 6-10 seconds. I have verified the load time with developer tools in Chrome as well. Furthermore, the result is identical in XAMPP.
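
    A quick, low-effort check (a suggestion, not from the original post; assumes curl for Windows is available) is to time the same page via 127.0.0.1 and via localhost from the command line. A large difference points at name resolution / IPv6 (::1) handling rather than Apache, PHP, or MySQL themselves; similar times point back at the stack, with the DocumentRoot on a network drive being an obvious suspect:

        curl -s -o NUL -w "via 127.0.0.1: %{time_total}s\n" http://127.0.0.1/
        curl -s -o NUL -w "via localhost: %{time_total}s\n" http://localhost/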

    Read the article

  • Connecting to Server 2008 shares fails

    - by Chris J
    I'm having problems getting a reliable share working on an x64 Server 2008 R1 SP1 server. All works well after a reboot, but after some time (within a day) the shares become unavailable to XP and Server 2003 machines. Interestingly, they remain available to other Server 2008 servers. On trying to access \\server\share, Server 2003 returns immediately and simply gives me the message "The specified network name is no longer available"; XP takes a minute or two to time out before giving the same message. There doesn't seem to be anything in the event logs indicating a problem. Doing some googling over the last day or two I've seen the following blamed:
        Bad network drivers ... I've updated to the latest drivers with no result.
        Symantec anti-virus ... we're not using it (currently no AV on the server).
        Receive window auto-tuning ... I've disabled it with netsh int tcp set global autotuninglevel=disabled and netsh int tcp set global rss=disabled.
    None of these have had an effect. Windows Firewall is currently disabled. As other Server 2008 boxes (both x86 and x64) can connect, I can only assume that there's some new security configuration that's not quite right, or there's an AD issue that I need to trace, but I don't know where to start. Even if no one knows how to resolve this, a pointer to what I should look for with Wireshark would be a help.
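
    Two hedged suggestions, not from the original post: for the capture, filtering on the SMB ports while reproducing the failure from an XP client shows whether the TCP session is reset or the SMB negotiation fails; and one workaround reported for mixed 2008/XP environments is disabling SMB2 on the server so older clients always negotiate SMB1 (registry value per Microsoft's guidance for disabling SMB2; back up the key first, and note the service restart drops the shares briefly):

        REM Wireshark display filter while reproducing from an XP client:
        REM   tcp.port == 445 || tcp.port == 139

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Smb2 /t REG_DWORD /d 0 /f
        net stop server /y && net start server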

    Read the article

  • How can I get around windows 8.1 store (& metro apps) not working with UAC disabled

    - by Enigma
    I have UAC disabled because it is annoying and causes more problems than it could ever possibly solve, at least for me. Here is yet another problem, and it seems to be due to a recent update, as I don't remember it in the past. Even with an MS account, I can't use the Store because UAC is disabled. How can I get around this? Short term I can just enable it, reboot, use the Store, disable it, reboot and be on my way, but there has to be a better way (other than MS getting their software completely right - like that will happen anytime, ever). Edit: Apparently this is far more of an issue than I originally thought. Now every (or at least many) Metro apps require UAC. Anyone aware of the update this got rolled in with? Thankfully Netflix isn't affected, which is the only Metro app I use at the moment. What I see in the Event Log: Activation of app winstore_cw5n1h2txyewy!Windows.Store failed with error: This app can't be activated when UAC is disabled. See the Microsoft-Windows-TWinUI/Operational log for additional information.

    Read the article

  • optimal folder structure for storing 100k files on a USB drive

    - by cherouvim
    I need to store 100k files (around 40 GB) on a USB drive. Each file has a unique integer id (e.g. 45000). Option one is to put all files in a single folder:
        root/
        root/1.pdf
        root/2.pdf
        root/3.pdf
        ...
        root/567.pdf
        root/568.pdf
        root/569.pdf
        ...
        root/10001.pdf
        root/10002.pdf
        root/10003.pdf
        ...
        root/99998.pdf
        root/99999.pdf
        root/100000.pdf
    Option two is to create a [1-9][0-9]* folder hierarchy based on that id:
        root/
        root/1/file.pdf
        root/2/file.pdf
        root/3/file.pdf
        ...
        root/5/6/7/file.pdf
        root/5/6/8/file.pdf
        root/5/6/9/file.pdf
        ...
        root/1/0/0/0/1/file.pdf
        root/1/0/0/0/2/file.pdf
        root/1/0/0/0/3/file.pdf
        ...
        root/9/9/9/9/8/file.pdf
        root/9/9/9/9/9/file.pdf
        root/1/0/0/0/0/0/file.pdf
    Which option will scale better? I can understand that the second option will require tons of folders, but each folder will contain at most 10 folders and 1 file. Maintenance will not be an issue, since everything will be controlled by an application. Note that this is a USB drive on Linux, and based on the above I'd also like to know whether I should go with FAT32 or NTFS.
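
    A small sketch of how option two's per-digit layout can be generated (file names here are illustrative, and the example assumes the source files already exist as <id>.pdf):

        #!/bin/sh
        # Build the nested path for one id, e.g. 45000 -> root/4/5/0/0/0/file.pdf
        id=45000
        nested=$(printf '%s' "$id" | sed 's|.|&/|g')   # -> "4/5/0/0/0/"
        mkdir -p "root/$nested"
        cp "$id.pdf" "root/${nested}file.pdf"

    On the filesystem question, it is worth noting that FAT32 caps a single directory at roughly 65,000 entries (fewer in practice with long filenames), so option one would not even fit in one FAT32 folder, whereas NTFS and the common Linux filesystems can hold it but may slow down on directory listings.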

    Read the article

  • javascript doesn't seem to be able to post form data (nginx server w/ php-fpm)

    - by Jones
    So the situation is like so: I have an nginx server with php-fpm installed. All is well; the site scripts all work perfectly. I am able to use HTML to POST form data and it works just fine. However, there seems to be some correlation between JavaScript, the POST method and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses JavaScript on submit to take the fields and POST the data to a backend auth script, which returns a server message that then populates the login box saying something like "Login Successful", followed by reloading the page to properly enable content. The problem is, nothing happens when you hit submit. I do know the setup works because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:
        upstream backend_get {
            server 127.0.0.1:80 weight=1;
        }
        upstream backend_post {
            server 127.0.0.1:80 weight=1;
        }
        #Main website url
        server {
            listen 80;
            server_name server.com;
            #charset koi8-r;
            access_log logs/host.access.log main;
            error_log logs/host.error.log;
            location / {
                root /usr/share/nginx/html;
                index index.php index.html index.htm;
                if ($request_method = POST) {
                    proxy_pass http://backend_post;
                    break;
                }
            }
            location ~ \.php$ {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
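
    One thing that stands out (a suggestion, not from the original post): the POST branch proxies to backend_post, which is 127.0.0.1:80, i.e. this same nginx server, so POSTs loop back into nginx instead of reaching PHP. If the intent is simply for POSTs to PHP scripts to be handled by php-fpm, dropping the if block and letting try_files route everything to the existing PHP location is the usual pattern:

        location / {
            root /usr/share/nginx/html;
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php?$args;
        }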

    Read the article

  • BIND having trouble resolving service.graphicly.com

    - by Keith Burgoyne
    Since about two weeks ago, we haven't been able to resolve service.graphicly.com:
        dig @192.168.0.12 service.graphicly.com
        ; <<>> DiG 9.3.4-P1 <<>> @192.168.0.12 service.graphicly.com
        ; (1 server found)
        ;; global options: printcmd
        ;; connection timed out; no servers could be reached
    Digging on the name servers listed for graphicly.com shows that service.graphicly.com is a CNAME to takecomicsadmin.cloudapp.net. Digging on cloudapp.net's name servers seems to fail:
        dig @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net
        ; <<>> DiG 9.3.4-P1 <<>> @NS1.LIVEDNS.MSFT.NET takecomicsadmin.cloudapp.net
        ; (1 server found)
        ;; global options: printcmd
        ;; connection timed out; no servers could be reached
    Somehow, my home ISP's name servers can resolve service.graphicly.com without issue. Has anyone else noticed this problem? Does anyone know what the cause of this problem could be? Thanks!
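
    Two checks that can narrow this down (suggestions, not from the original post): following the delegation from the root servers bypasses the local BIND cache and any forwarders, and retrying the failing query over TCP helps distinguish a broken server from dropped or oversized UDP responses (for example a firewall mangling EDNS):

        dig +trace takecomicsadmin.cloudapp.net
        dig @ns1.livedns.msft.net takecomicsadmin.cloudapp.net +tcp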

    Read the article
