Search Results

Search found 16324 results on 653 pages for 'per thread'.


  • MS licensing of multiple RDP sessions for non-MS products in Windows XP Pro

    - by vgv8
    Questions 1) and 2) were moved into a separate thread: "Which Windows remote connections bypass LSA? And what are the definitions of login vs. logon session?"

    3) Do I understand correctly that multiple remote RDP sessions are supported by Windows XP but require additional (or modified) licensing? Which one? Or is it always illegal to run multiple RDP sessions on Windows XP, even through non-MS commercial software?

    Update 1: I have already understood my error - the main questions were about definitions (important for finding a common language with others) and the licensing questions were collateral - but it was already answered. I shall try to separate these questions, leaving here the questions about RDP licensing and migrating the other questions into a separate thread.

    Update 2: "Trying to 'work around' licensing terms is pointless and a waste of time." I never try to "work around" anything, and I never ask anything like this; I am not a specialist in licensing. My clients/employers provide me with tools and licensing support; they have corporate lawyers and planning/accounting/purchasing departments for these issues. The questions I ask are a matter of scalability and efficiency (saving my own and others' time) in my development work. For example: given that I need authentication against Windows AD, is it time-saving to use ADAM instead of deploying a full-fledged AD with a DC + servers + whatever else? "Nobody is forcing you to use Windows XP." I shall not rush into re-installing the operating systems on all my development machines (at home and at client premises) just because a few guys have a lot of fun downvoting development-related questions on serverfault.com. If I did so, I would make a joker of myself in the eyes of my colleagues et al.

    Update: I unmarked this question as answered since the answer had not even addressed the question - at least not mine. Should I understand that Terminal Server PRO, which allows Windows XP and Windows Small Business Server 2003 to host multiple remote desktop sessions, is illegal?

    Related: my answer to the question "Does Windows XP support multiple remote login (RDP) sessions at a time?"

    Read the article

  • How to diagnose occasional sudden resets?

    - by steve314
    I have a Windows XP system, and have recently upgraded it by adding two 1 GB sticks of RAM to the 2 x 0.5 GB already present. Since then, about once per day (the system is used 8+ hours per day), the system has suddenly and unexpectedly reset. On a couple of occasions, the system has frozen completely, only responding to the power button being held in for several seconds to force power off. Nothing at all ever appears in the system event log that might indicate a possible cause - everything suggests business as usual.

    It sounds like faulty memory - but memtest86+ says otherwise. A full test, taking over an hour, found no issues. The next likely suspicion, then, is that I knocked something while installing the RAM. The trouble is, everything I can think of to test seems fine. I've opened up the case and prodded a few things around, hoping to get better contact on connections etc., but there's no sign yet as to whether that has made a difference or not. I thought about a malware-related timing fluke, but again, as far as I can tell I'm all clear.

    All I can think of to add to my checklist (mainly things that I can't easily check) is:

    The power supply is (1) only 350 W, (2) not necessarily the best quality, and (3) powering a Prescott P4 640 3.2 GHz. Could that be borderline overloaded or about to die? How do I check?

    Is it possible that the CPU isn't being cooled properly? The fan hasn't gone past its normal tickover even while doing video encoding, and the only sane temperature reading from SpeedFan is pretty steady at 36 Celsius, so probably not.

    Any thoughts? Is there a standard procedure for diagnosing this kind of fault?

    Read the article

  • Distributed File Systems

    - by GruffTech
    So, I've been reading several articles around Server Fault as well as Google (for example, this link). My requirements are very similar to the link above, but I'd also like to have dynamic, or at least resizeable, file volumes, so that if necessary I can add 4-5 servers to the pool and then expand the volume. Are there any distributed file systems that support that, to save me some time? Thanks!

    LustreFS will be my next test cluster to build.

    GlusterFS: I've built a 3-machine test GlusterFS cluster, but I quickly became aware of several limitations that it doesn't seem to make clearly public. For one, I can't seem to resize a volume - once a volume is created, it's done. That seems absurd; why have a fully scalable file system if I can't scale a volume? So maybe I'm doing something wrong; I'm not sure.

    Amazon S3, while it has the cheapest startup, adds too much cost when broken down to per client per month, so it's out. Building my own system, when prorated over several years with no bandwidth costs, is significantly cheaper.

    MogileFS isn't an option, as we'd like this server to be a SAN replacement for storing tons of media from a multitude of systems, which for us means it needs to be POSIX compliant so it can be remotely mounted via NFS or CIFS.

    Read the article

  • apache performance timing out

    - by Mike
    I'm running a web server hosting about 6-7 websites. Most of these websites get their content from MySQL, which is hosted on the same server. Traffic averages about 500-600 unique visitors per day, about 150K hits per week. But for some reason the websites sometimes return a timeout, or sometimes don't load all their images. I know that I should perhaps separate static content from dynamic content, but for now that's not a possibility. I would appreciate any suggestions on how I could improve Apache's performance so it stops timing out.

    The server is a Sempron LE-1300 (2.3 GHz, 512 KB cache) with 2 GB RAM on a 10 Mbps/1 Mbps line, running MySQL, ProFTPD and Apache.

    Memory usage (private + shared = RAM used):

         1.2 MiB +  54.0 KiB =  1.2 MiB   proftpd
         4.1 MiB +  23.0 KiB =  4.1 MiB   munin-node
        20.8 MiB + 120.5 KiB = 20.9 MiB   mysqld
        47.3 MiB +   9.9 MiB = 57.3 MiB   apache2 (22)

    top: Mem: 2075356k total, 1826196k used, 249160k free

    Apache configuration:

        Timeout 35
        KeepAlive On
        MaxKeepAliveRequests 300
        KeepAliveTimeout 5

        <IfModule mpm_prefork_module>
            StartServers          10
            MinSpareServers       20
            MaxSpareServers       20
            MaxClients            60
            MaxRequestsPerChild 1000
        </IfModule>

        <IfModule mpm_worker_module>
            StartServers           2
            MaxClients           150
            MinSpareThreads       25
            MaxSpareThreads       75
            ThreadsPerChild       25
            MaxRequestsPerChild    0
        </IfModule>
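    With prefork, timeouts like this are often MaxClients starvation: at roughly 57 MB per Apache child, 60 clients can outrun 2 GB once MySQL grows, and further requests then queue until they time out. A rough sizing check, as a sketch (process name per the output above; the arithmetic is illustrative, not a recommendation):

        # Average resident size of an Apache child vs. reclaimable memory,
        # to estimate a safe prefork MaxClients on this box.
        CHILD_MB=$(ps -C apache2 -o rss= | awk '{s+=$1; n++} END {printf "%d", s/n/1024}')
        AVAIL_MB=$(awk '/^(MemFree|Buffers|Cached):/ {s+=$2} END {printf "%d", s/1024}' /proc/meminfo)
        echo "avg apache2 child:        ${CHILD_MB} MB"
        echo "rough MaxClients ceiling: $(( AVAIL_MB / CHILD_MB ))"

    If the ceiling comes out below the configured 60, lowering MaxClients (and keeping KeepAliveTimeout short) usually trades a little latency for no more hangs.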

    Read the article

  • Xen Windows guest doesn't spawn a VNC display

    - by Henrik P. Hessel
    I'm using this HVM file to create a new guest:

        kernel = "/usr/lib/xen-3.2-1/boot/hvmloader"
        builder = 'hvm'
        memory = 4096
        # Should be at least 2KB per MB of domain memory, plus a few MB per vcpu.
        shadow_memory = 64
        name = "hessel-windows2008"
        vif = [ 'ip=188.40.xx.xx,mac=00:16:3E:C1:8F:CE' ]
        acpi = 1
        apic = 1
        disk = [ 'file:/home/xen/disks/hessel/win2008/win2008.img,hda,w',
                 'file:/home/xen/isopool/win2008_32.iso,hdc:cdrom,r' ]
        device_model = '/usr/lib/xen/bin/qemu-dm'
        #---------------------------------------------------------------------
        # boot on floppy (a), hard disk (c) or CD-ROM (d)
        # default: hard disk, cd-rom, floppy
        boot = "dc"
        sdl = 0
        vnc = 1
        vncdisplay = 1
        vnclisten = "0.0.0.0"
        vncconsole = 1
        vncpasswd = 'howtoforge'
        stdvga = 0
        serial = 'pty'
        usbdevice = 'tablet'

    The guest is created without an error, but no VNC display is created. Any ideas how to fix that?

        prometheus:~# netstat -ant
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address        Foreign Address      State
        tcp        0      0 127.0.0.1:615        0.0.0.0:*            LISTEN
        tcp        0      0 0.0.0.0:111          0.0.0.0:*            LISTEN
        tcp        0      0 0.0.0.0:8080         0.0.0.0:*            LISTEN
        tcp        0      0 0.0.0.0:53           0.0.0.0:*            LISTEN
        tcp        0      0 0.0.0.0:22           0.0.0.0:*            LISTEN
        tcp        0      0 127.0.0.1:6010       0.0.0.0:*            LISTEN
        tcp        0    232 188.40.xx.xx:8080    195.36.75.26:54032   ESTABLISHED
        tcp        0      0 188.40.xx.xx:8080    195.36.75.26:53085   ESTABLISHED
        tcp6       0      0 :::8080              :::*                 LISTEN
        tcp6       0      0 :::53                :::*                 LISTEN
        tcp6       0      0 :::22                :::*                 LISTEN
        tcp6       0      0 ::1:6010             :::*                 LISTEN
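    With vnc=1 and vncdisplay=1, qemu-dm should be listening on TCP 5901 (5900 + display), and nothing in the 59xx range appears in the netstat output above. A hedged first check is whether the device model process started at all, and which VNC port Xen recorded for the domain (the xenstore path can vary between Xen versions):

        # Is qemu-dm running for this domain?
        ps aux | grep [q]emu-dm
        # Anything listening on the VNC range?
        netstat -lnt | grep ':59'
        # Ask xenstore which VNC port was assigned (path may differ by version):
        xenstore-read /local/domain/$(xm domid hessel-windows2008)/console/vnc-port

    If qemu-dm isn't running, its log (commonly /var/log/xen/qemu-dm-hessel-windows2008.log) usually says why; a rejected vncpasswd or an unreadable disk image are typical causes.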

    Read the article

  • Performance monitoring on Linux/Unix

    - by ervingsb
    I run a few Windows servers as well as (Debian and Ubuntu) Linux and AIX servers. I would like to continuously monitor performance on these systems in order to easily identify bottlenecks, as well as to have an overview of the general activity on the servers.

    On Windows, I use Windows Performance Monitor (perfmon) for this, with these counters:

    For bottlenecks:
    - Processor utilization: System\Processor Queue Length
    - Memory utilization: Memory\Pages Input/Sec
    - Disk utilization: PhysicalDisk\Current Disk Queue Length\driveletter
    - Network problems: Network Interface\Output Queue Length\nic name

    For general activity:
    - Processor utilization: Processor\% Processor Time_Total
    - Memory utilization: Process\Working Set_Total (or per specific process)
    - Memory utilization: Memory\Available MBytes
    - Disk utilization: PhysicalDisk\Bytes/sec_Total (or per process)
    - Network utilization: Network Interface\Bytes Total/Sec\nic name

    (More information on the choice of these counters at: http://itcookbook.net/blog/windows-perfmon-top-ten-counters )

    This works really well. It allows me to look in one place and identify most common bottlenecks. So my question is, how can I do something equivalent (or just very similar) on Linux servers? I have looked a bit at nmon (http://www.ibm.com/developerworks/aix/library/au-analyze_aix/), a free performance monitoring tool developed for AIX but also available for Linux. However, I am not sure whether nmon allows me to set up the above counters - maybe Linux and AIX do not expose these exact same measures. If so, which ones should I choose, and why? If nmon is not the tool to use for this, what do you recommend?
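    Most of those counters have close Linux equivalents in the standard sysstat tools, which read from the same /proc sources nmon does. A rough mapping, as a sketch (5-second sampling; package names vary by distro):

        vmstat 5         # 'r' column ~ System\Processor Queue Length; si/so ~ paging
        sar -q 5         # runq-sz: run queue length over time
        sar -B 5         # pgpgin/s, majflt/s ~ Memory\Pages Input/sec
        free -m          # free memory ~ Memory\Available MBytes
        iostat -dx 5     # avgqu-sz ~ Current Disk Queue Length; rkB/s+wkB/s ~ Disk Bytes/sec
        sar -n DEV 5     # rxkB/s, txkB/s per NIC ~ Network Interface\Bytes Total/sec
        top              # per-process CPU and resident memory ~ Process\Working Set

    For continuous collection rather than interactive use, sysstat's sadc collector or nmon's -f spooling mode can log the same figures for later graphing.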

    Read the article

  • Wastage of resources in Virtualization

    - by Sabeen Malik
    I have asked this question on SO but was advised to ask it here on SF, so here goes: http://stackoverflow.com/questions/3010753/wastage-of-resources-in-virtualization

    I am not sure if this is the right place to ask the question; however, I hope it is. When looking for a VPS earlier today, I was trying to understand how each container would work in the background. Keeping in mind the fact that the operating system uses most of the memory and power on a system, wouldn't having multiple operating systems on the same machine mean more wastage of resources? For instance, suppose I was running CentOS on a dedicated box and it was running, let's say, 20 background OS-level processes. Then I go and install a virtualization platform and install 5 more CentOS virtual machines on the same system, exactly the same as the host operating system. Doesn't this mean duplication of those 20 processes 6 times? So internally the context switching is happening between 120 processes instead of 20?

    Further notes - here is an example of what I am thinking: I have a master-slave configuration for a long-running, CPU + memory intensive process, which can be distributed to 4 machines. Let's say that when the process runs on these 4 machines, each with a 1 GHz CPU and 1 GB RAM, I get 400 results per hour from the cluster (assuming 100 results from one machine). Now I get a bigger machine (say 4 GHz and 4 GB RAM) and put 4 virtual hosts on it, each with a 1 GHz CPU and 1 GB RAM. Will this configuration give me the same 400 results per hour from these 4 virtual hosts?

    Read the article

  • MySQL master-master not replicating

    - by frankil
    I'm setting up master-master MySQL replication on two servers (db1 and db2). I started by setting up db2 as a slave to db1, and that works fine. But when I set up db1 as a slave to db2, it doesn't replicate. On the face of it everything looks fine, but the data isn't replicating, and there are no errors in either error log.

    The slave status is updating the binlog position. I have used mysqlbinlog to examine both the binlog on db2 and the relay log on db1, and all of the queries are going in there - but they are not being executed on db1. "show slave status" on both servers shows that both the slave IO and SQL threads are "Yes" and that the relay log position is updated by the SQL thread. Also, on both servers:

        >echo "show processlist" | mysql | grep "system user"
        166819  system user  NULL  Connect  3655  Waiting for master to send event  NULL
        166820  system user  NULL  Connect  3507  Has read all relay log; waiting for the slave I/O thread to update it  NULL

    Relevant config for db1:

        server-id                = 1
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset    = 1
        master-host              = db2
        master-port              = 3306
        master-user              = slaveuser
        master-password          = ***
        skip-slave-start
        sync_binlog              = 1
        binlog-ignore-db         = mysql

    Config for db2:

        server-id                = 2
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset    = 2
        master-host              = db1
        master-port              = 3306
        master-user              = slaveuser
        master-password          = ***
        sync_binlog              = 1
        relay-log                = mysql-relay-bin
        binlog-ignore-db         = mysql

    What else can I look for to make sure db1 executes the queries from db2?
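    One hedged way to narrow this down is to push a marker statement through the db2 -> db1 channel and watch where it stops (the test database and table names below are made up). Since skip-slave-start is set on db1, it is also worth confirming the slave threads were actually started after the last restart:

        # On db2: write a row that must replicate to db1.
        mysql -e "CREATE DATABASE IF NOT EXISTS repltest;
                  CREATE TABLE IF NOT EXISTS repltest.t (id INT);
                  INSERT INTO repltest.t VALUES (42);"

        # On db1: did the row arrive, and is the SQL thread applying events?
        mysql -e "START SLAVE;"   # in case skip-slave-start left it stopped
        mysql -e "SELECT * FROM repltest.t;"
        mysql -e "SHOW SLAVE STATUS\G" | egrep \
          'Slave_IO_Running|Slave_SQL_Running|Relay_Master_Log_File|Exec_Master_Log_Pos|Last_SQL_Error'

    If Exec_Master_Log_Pos advances but the row never appears, replication filters become the prime suspect - binlog-ignore-db is evaluated against the *default* database for statement-based replication, which regularly surprises people.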

    Read the article

  • Active FTP client blocked on Windows XP

    - by Ian Hannah
    Summary: We have an FTP server running in active mode. We have an FTP client which connects to the server, carries out a task and then closes the connection. The FTP client can perform this operation on multiple threads.

    Problem: We have a situation where customers are experiencing occasional failures to carry out operations on an FTP connection. The actual connection has been made to the server, but when the server attempts to return data on the data port, it fails.

    Observations: We have a simple test FTP client which runs two separate threads, each performing a recursive listing of files from a root directory. With the firewall running on the client machine, the hang happens within a few minutes. If the firewall is turned off on the client machine, the test application seems to run correctly. This does point to a potential firewall issue. However, with the firewall on, we can list files on our company FTP server without any issues. If the simple test FTP client runs a single thread, then we do not experience any problems whether or not the firewall is turned on. We have another simple test FTP client which ran 4 threads (each opening a new FTP connection, doing a directory listing and closing the FTP connection as fast as possible) overnight with the firewall turned off; with the firewall turned on, it fails in a short space of time. The confusing thing is that if the test FTP client and the FTP server are run on the same machine, the failure occurs even though the firewall is turned off. This means the problem may not be firewall related.

    Any help with what this could be would be much appreciated. Thanks, Ian
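    Worth keeping in mind: in active mode the *server* opens the data connection back to the client, so the client-side Windows Firewall has to accept one inbound connection per transfer, and its stateful FTP tracking can miss some of them under concurrency. A quick hedged experiment on XP SP2+ is to explicitly allow the client binary (the path below is a placeholder) and retest:

        rem Allow the FTP client application through Windows Firewall:
        netsh firewall add allowedprogram program="C:\Program Files\MyApp\FtpClient.exe" name="FTP client" mode=ENABLE
        rem Or rule the firewall in/out entirely for one test run:
        netsh firewall set opmode mode=DISABLE
        rem (re-enable afterwards)
        netsh firewall set opmode mode=ENABLE

    The same-machine failure with the firewall off also suggests checking for data-port collisions between threads (two concurrent transfers announcing the same PORT), which would produce identical symptoms without any firewall involved.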

    Read the article

  • How to manage processes-to-CPU cores affinities ?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS), and I would like to be sure the GlusterFS processes always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-)

    I consider two main points:

    - GlusterFS processes
    - I/O for data access (on local disks, or remote disks)

    I thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that:

    - no grid job will impact the kernel or the GlusterFS instances;
    - researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs).

    But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
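    One common recipe combines a boot-time CPU reservation with explicit pinning and IRQ steering. A hedged sketch (core numbers are illustrative for a 16-thread box; GlusterFS process names may differ by version):

        # 1) Keep the scheduler from placing ordinary tasks on cores 2-15,
        #    so grid jobs can be pinned there deliberately (kernel cmdline):
        #      isolcpus=2-15

        # 2) Pin the GlusterFS daemons to cores 0-1:
        for pid in $(pgrep -f gluster); do taskset -pc 0-1 "$pid"; done

        # 3) Steer disk/NIC interrupts onto core 1 so they land next to
        #    GlusterFS rather than on job cores (IRQ 24 is an example):
        echo 2 > /proc/irq/24/smp_affinity    # hex bitmask: 0x2 = CPU1

        # 4) Launch grid jobs confined to the reserved pool:
        taskset -c 2-15 ./run_job.sh          # run_job.sh is a placeholder

    cpusets (the cgroup cpuset controller) achieve the same separation more manageably than raw taskset if the job scheduler supports them; irqbalance should be disabled or constrained when pinning IRQs by hand.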

    Read the article

  • dovecot login issue with plain passwords

    - by user3028
    I am having an odd problem with Dovecot: the first time I try to log in via telnet, Dovecot returns an error; the second time it works, both within the same telnet session. This is the telnet session - note the 'BAD Error in IMAP command received by server' and the "a OK" just after it:

        telnet 192.168.1.2 143
        * OK Waiting for authentication process to respond..
        * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot ready.
        a login someUserLogin supersecretpassword
        * BAD Error in IMAP command received by server.
        a login someUserLogin supersecretpassword
        a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS] Logged in

    Dovecot configuration (dovecot -n):

        # 2.0.19: /etc/dovecot/dovecot.conf
        # OS: Linux 3.5.0-34-generic x86_64 Ubuntu 12.04.2 LTS
        auth_debug = yes
        auth_verbose = yes
        disable_plaintext_auth = no
        login_trusted_networks = 192.168.1.0/16
        mail_location = maildir:~/Maildir
        passdb {
          driver = pam
        }
        protocols = " imap"
        ssl_cert = </etc/ssl/certs/dovecot.pem
        ssl_key = </etc/ssl/private/dovecot.pem
        userdb {
          driver = passwd
        }

    This is the log file:

        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:27:51 linuxServer dovecot: auth: Debug: auth client connected (pid=23499)
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client in: AUTH#0111#011PLAIN#011service=imap#011secured#011no-penalty#011lip=192.168.1.2#011rip=192.169.1.3#011lport=143#011rport=50438#011resp=<hidden>
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): lookup service=dovecot
        Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): #1/1 style=1 msg=Password:
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client out: OK#0111#011user=someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master in: REQUEST#0111823473665#01123499#0111#0113a58da53e091957d3cd306ac4114f0b9
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: passwd(someUserLogin,192.169.1.3): lookup
        Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master out: USER#0111823473665#011someUserLogin#011system_groups_user=someUserLogin#011uid=1000#011gid=1000#011home=/home/someUserLogin
        Jul 3 12:28:06 linuxServer dovecot: imap-login: Login: user=<someUserLogin>, method=PLAIN, rip=192.169.1.3, lip=192.168.1.2, mpid=23503, secured
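    A first BAD followed by an immediate success often means the first command line reached Dovecot with extra leading bytes (telnet option negotiation or a stray CR) rather than that authentication failed - and indeed the log shows the auth itself succeeding. A hedged way to take telnet out of the equation is to send the exact bytes yourself:

        # One clean LOGIN with CRLF endings, no telnet negotiation involved:
        { sleep 1
          printf 'a login someUserLogin supersecretpassword\r\n'
          sleep 2
          printf 'b logout\r\n'
        } | nc 192.168.1.2 143

    If the scripted login succeeds on the first try, the server is fine and the telnet client was the artifact.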

    Read the article

  • How to eliminate black bars from PowerPoint videos?

    - by appu
    Hi, I am running a digital signage system for my client. The basic installation is a vertically oriented 42" LCD TV with a 1920x1080 screen resolution (the reverse of 1920x1080 when set up normally, i.e. landscape). Please check out the following link for the basic screen-division layout I want to set up: http://flickr.com/photos/55097319@N03/5410208856

    In the division labeled "ppt" I plan to run a PowerPoint presentation. That screen division is 360x1476. As there isn't an option in PowerPoint to specify slide size in terms of resolution, then following this article on Indezine, http://indezine.com/products/powerpoint/books/perfectmedicalpres02.html, I divided 360 and 1476 each by 72, which gives me 5" x 20.5" as the slide size for my presentation. After setting up the slide size to those dimensions, I used Sizer (http://brianapps.net) to resize my PowerPoint window to 360x1476 so that the recording would contain no black bars. But after starting the recording, side black bars are still visible, which Camtasia records and brings into Camtasia Studio with the black bars included: http://www.flickr.com/photos/55097319@N03/5409597049

    My question is: after doing the above, and as explained in the following video, why do I still get black bars? http://feedback.techsmith.com/techsmith/topics/eliminate_black_bars_in_your_powerpoint_recordings Is there an option in Camtasia to stretch the recording to cover up the black bars, or any other way I can get rid of them while recording the presentation at my preferred dimensions?

    Notes: the TechSmith video above asks me to adjust my desktop screen resolution, which my display chipset does not allow me to set. I set up the PowerPoint show to be "browsed by an individual (window)". Also, the signage software only supports SWFs and video file formats natively, not ppt, pptx etc.

    Thanks, bhavani.

    Read the article

  • ipmi - can't ping or remotely connect

    - by Fidel
    I've tried configuring the IPMI controller to accept remote connections, but I can't even ping it. Here is its status:

        # /usr/local/bin/ipmitool lan print 2
        Set in Progress       : Set Complete
        Auth Type Support     : NONE PASSWORD
        Auth Type Enable      : Callback :
                              : User     : NONE PASSWORD
                              : Operator : PASSWORD
                              : Admin    : PASSWORD
                              : OEM      :
        IP Address Source     : Static Address
        IP Address            : 192.168.1.112
        Subnet Mask           : 255.255.255.0
        MAC Address           : 00:a0:a5:67:45:25
        IP Header             : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
        BMC ARP Control       : ARP Responses Enabled, Gratuitous ARP Enabled
        Gratuitous ARP Intrvl : 8.0 seconds
        Default Gateway IP    : 192.168.1.1
        Default Gateway MAC   : 00:00:00:00:00:00
        802.1q VLAN ID        : Disabled
        802.1q VLAN Priority  : 0
        RMCP+ Cipher Suites   : 0,1,2,3
        Cipher Suite Priv Max : uaaaXXXXXXXXXXX
                              : X=Cipher Suite Unused
                              : c=CALLBACK
                              : u=USER
                              : o=OPERATOR
                              : a=ADMIN
                              : O=OEM

        # /usr/local/bin/ipmitool user list 2
        ID  Name   Enabled  Callin  Link Auth  IPMI Msg  Channel Priv Limit
        1          true     false   true       true      USER
        2   admin  true     false   true       true      ADMINISTRATOR

        # /usr/local/bin/ipmitool channel getaccess 2 2
        Maximum User IDs    : 5
        Enabled User IDs    : 2
        User ID             : 2
        User Name           : admin
        Fixed Name          : No
        Access Available    : callback
        Link Authentication : enabled
        IPMI Messaging      : enabled
        Privilege Level     : ADMINISTRATOR

        # /usr/local/bin/ipmitool channel info 2
        Channel 0x2 info:
          Channel Medium Type   : 802.3 LAN
          Channel Protocol Type : IPMB-1.0
          Session Support       : multi-session
          Active Session Count  : 0
          Protocol Vendor ID    : 7154
          Volatile(active) Settings
            Alerting         : disabled
            Per-message Auth : disabled
            User Level Auth  : disabled
            Access Mode      : always available
          Non-Volatile Settings
            Alerting         : disabled
            Per-message Auth : disabled
            User Level Auth  : disabled
            Access Mode      : always available

        # /usr/local/bin/ipmitool chassis status
        System Power         : on
        Power Overload       : false
        Power Interlock      : inactive
        Main Power Fault     : false
        Power Control Fault  : false
        Power Restore Policy : unknown
        Last Power Event     :
        Chassis Intrusion    : inactive
        Front-Panel Lockout  : inactive
        Drive Fault          : false
        Cooling/Fan Fault    : false

        # arp
        Address        HWtype  HWaddress          Flags Mask  Iface
        192.168.1.112  ether   00:A0:A5:67:45:25  C           bond0

        # /usr/local/bin/ipmitool -I lan -H 192.168.1.112 -U admin -P admin chassis power status
        Error: Unable to establish LAN session
        Unable to get Chassis Power Status

    In summary: it exists in the ARP table, so ARPs are being broadcast, but I can't ping it and can't connect to it. Can anyone spot any glaring mistakes in the configuration? Many thanks, Fidel
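    Two hedged observations from the output above: many BMCs never answer ICMP, so the failed ping by itself proves little; and "Access Available : callback" for user 2 looks suspicious next to "Callin : false" - the user may not be permitted to open inbound LAN sessions at all. A sketch of things to try (channel 2 and user 2 as in the output):

        # Grant user 2 full call-in/LAN access on channel 2 and enable it:
        ipmitool channel setaccess 2 2 callin=on ipmi=on link=on privilege=4
        ipmitool user enable 2
        # Make sure the LAN channel itself accepts sessions:
        ipmitool lan set 2 access on
        # Retry, preferring RMCP+ since cipher suites 0-3 are advertised:
        ipmitool -I lanplus -H 192.168.1.112 -U admin -P admin chassis power status

    If the BMC shares its NIC with the host (sideband), also confirm you are testing from a different machine - many shared-NIC BMCs cannot be reached from their own host.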

    Read the article

  • Setting up PerformancePoint Services on Sharepoint 2010: connection errors

    - by Rik
    I have tried to set up PerformancePoint Services on SharePoint 2010, but every time I try to use the Dashboard Designer, I get this error: "An error has occurred attempting to contact the specified SharePoint site". I have tried these steps, but it hasn't helped. Any ideas? The event log gives the following information:

        WebHost failed to process a request.
        Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/24724999
        Exception: System.ServiceModel.ServiceActivationException: The service '/_vti_bin/client.svc' cannot be activated due to an exception during compilation. The exception message is: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item.
        ---> System.ArgumentException: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item
           at System.ServiceModel.UriSchemeKeyedCollection.InsertItem(Int32 index, Uri item)
           at System.Collections.Generic.SynchronizedCollection`1.Add(T item)
           at System.ServiceModel.UriSchemeKeyedCollection..ctor(Uri[] addresses)
           at System.ServiceModel.ServiceHost..ctor(Type serviceType, Uri[] baseAddresses)
           at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(Type serviceType, Uri[] baseAddresses)
           at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses)
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.CreateService(String normalizedVirtualPath)
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.ActivateService(String normalizedVirtualPath)
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
           --- End of inner exception stack trace ---
           at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
           at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)
        Process Name: w3wp
        Process ID: 2576

    Read the article

  • Why do I have to log into hotmail twice?

    - by Tony Lee
    I just recently noticed that I have to attempt a login to Hotmail twice before it succeeds. Although I'm using Google Chrome (3.0.195.21), the symptoms are well described in a Mozilla thread. In short, I'm told: "The e-mail address or password is incorrect. Please try again." The thread on Mozilla's site that supposedly describes the latest details (and the first hit on Google when I search for "hotmail login twice") requires an account to read, so I'm hoping someone here has a good synopsis of the cause.

    I normally start at hotmail.com, which redirects to login.live.com/.... I can log in on the first try by starting at mail.live.com, by using IE8, or by attempting a second login. Oddly, if I start at login.live.com, Chrome tells me there is a redirect loop. Does anyone know, or have a public link to, the root cause of the double login? (It is a Hotmail account I'm logging into.)

    EDIT - Caused by my restricting how third parties can use cookies. If I allow all cookies, it works the first time.

    Read the article

  • Courier IMAP always disconnects since update

    - by Raffael Luthiger
    Since one of our customers updated their server, Courier no longer handles IMAP connections properly. POP3 works without any problems. When I test IMAP with telnet, it always goes like this:

        $ telnet domain.com 143
        Trying 188.40.46.214...
        Connected to domain.com.
        Escape character is '^]'.
        * OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
        01 LOGIN [email protected] test
        Connection closed by foreign host.

    I enabled debugging in authdaemond, but the output does not really help much:

        Apr 12 23:10:04 servername authdaemond: received auth request, service=imap, authtype=login
        Apr 12 23:10:04 servername authdaemond: authmysql: trying this module
        Apr 12 23:10:04 servername authdaemond: SQL query: SELECT login, password, "", uid, gid, homedir, maildir, quota, "", concat('disableimap=',disableimap,',disablepop3=',disablepop3) FROM mail_user WHERE login = '[email protected]'
        Apr 12 23:10:04 servername authdaemond: password matches successfully
        Apr 12 23:10:04 servername authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n
        Apr 12 23:10:04 servername authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n

    Right after the "Authenticated" line the output stops. There is no other message, and in no other log file I've checked could I find any related message. The system was updated from Ubuntu 10.10 to 12.04. How could I get more information? Or does anybody have an idea what could go wrong here?
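    Since authentication succeeds and the connection dies immediately afterwards, the next clue is what the IMAP process does between "Authenticated" and the disconnect - a failed chdir() into the maildir (wrong path, wrong ownership, or a maildir that was never created) is the classic cause of exactly this pattern. A hedged sketch (binary names per a standard Courier install; they may differ here):

        # Follow the listener and its children during one login attempt:
        strace -f -e trace=chdir,open,stat,execve -p "$(pidof couriertcpd)" 2>&1 | tail -n 50

        # Independently, verify the maildir from the authdaemond output:
        ls -ld /var/vmail/domain.com/test /var/vmail/domain.com/test/{cur,new,tmp}

    If the maildir checks out, comparing the installed courier-imap and courier-authlib versions is worthwhile - partial upgrades between Ubuntu releases have been known to leave mismatched pairs that fail right after authentication.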

    Read the article

  • Disconnect from PHP after output generated

    - by Oli
    I have a LEMP stack: Nginx sitting in front of PHP-FPM. Because some of the sites are heavy and there's opcode caching, PHP is set up with only 5 child processes running; the aim is for each child to deal with any request in less than half a second and then move on to the next request. One problem I've found is that if a big chunk of HTML is being sent out and the user has a slow connection, that PHP worker stays occupied until they've finished downloading.

    Because of my current setup, I have a pretty unforgiving timeout inside PHP where the script is killed after 20 seconds. This is to make sure everybody gets a turn, but on a slow connection it can mean the user gets cut off with a 504 Gateway Timeout.

    I was wondering if there is some sort of buffer solution that I could implement within or just behind Nginx that passes the request through and then buffers the content into its own cache, feeding it to the client as and when they can download it - the aim being that the underlying PHP worker is freed up. What I'm asking for doesn't have to be PHP-specific; anything that deals with FastCGI, or even any Nginx upstream, might have a similar issue to this.
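    For what it's worth, Nginx's FastCGI proxying buffers responses by default, which is exactly this behaviour: the PHP worker writes into Nginx's buffers (spilling to a temp file when large) and is released, while Nginx trickles the bytes to the slow client. If workers are still being held, the buffers may simply be too small for these pages. A sketch of the relevant directives (sizes are illustrative; the socket path is an assumption):

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php-fpm.sock;  # adjust to your setup
            fastcgi_buffer_size 32k;        # first buffer (headers + start of body)
            fastcgi_buffers 64 8k;          # per-request in-memory buffer pool
            fastcgi_max_temp_file_size 1g;  # spill larger responses to disk
        }

    With buffering effective, the 20-second PHP limit then only has to cover generation time, not client download time.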

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on Stack Overflow, but it seems to be more server/MySQL setup related than coding. The queries below all execute instantly on our development server, whereas they can take up to 2 minutes 20 seconds on production. The query execution time seems to be affected by how ambiguous the LIKE strings are: if they closely match a country that has few matches, it takes less time, and if you use something like 'ge' for Germany, it takes longer. But this doesn't always work out like that; at times it's quite erratic. "Sending data" appears to be the culprit - but why, and what does that mean? Also, memory on production looks to be quite low (free memory)?

    Production:
        Intel Quad Xeon E3-1220 3.1GHz
        4GB DDR3
        2x 1TB SATA in RAID1
        Network speed 100Mb
        Ubuntu

    Development:
        Intel Core i3-2100, 2C/4T, 3.10GHz
        500 GB SATA - no RAID
        4GB DDR3

    We are running the same code on both servers.

    UPDATE 2: mysqltuner output.

    [prod]
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 103M (Tables: 180)
        [--] Data in InnoDB tables: 491M (Tables: 19)
        [!!] Total fragmented tables: 38
        -------- Security Recommendations --------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (12K/53M)
        [OK] Highest usage of available connections: 22% (34/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M
        [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads)
        [OK] Query cache efficiency: 20.7% (7M cached / 36M selects)
        [!!] Query cache prunes per day: 3934
        [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts)
        [!!] Joins performed without indexes: 71068
        [OK] Temporary tables created on disk: 24% (3M on disk / 13M total)
        [OK] Thread cache hit rate: 99% (690 created / 14M connections)
        [!!] Table cache hit rate: 0% (64 open / 85M opened)
        [OK] Open file limit used: 12% (128/1K)
        [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks)
        [!!] InnoDB data size / buffer pool: 491.9M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            join_buffer_size (> 128.0K, or always use indexes with joins)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 491M)

    [dev]
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1
        [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 185M (Tables: 632)
        [--] Data in InnoDB tables: 967M (Tables: 38)
        [!!] Total fragmented tables: 73
        -------- Security Recommendations --------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M)
        [--] Reads / Writes: 99% / 1%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (0/5K)
        [OK] Highest usage of available connections: 1% (2/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M
        [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads)
        [OK] Query cache efficiency: 44.5% (1K cached / 2K selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts)
        [OK] Temporary tables created on disk: 24% (162 on disk / 666 total)
        [OK] Thread cache hit rate: 99% (2 created / 1K connections)
        [!!] Table cache hit rate: 1% (64 open / 4K opened)
        [OK] Open file limit used: 8% (88/1K)
        [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
        [!!] InnoDB data size / buffer pool: 967.7M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            table_cache (> 64)
            innodb_buffer_pool_size (>= 967M)

    UPDATE 1: When testing the queries listed here, there is usually no more than one other query taking place, and usually none. Because production is actually handling Apache requests that development sees very few of (it's only myself and one other person who access it), could the 4 GB of RAM be getting exhausted by using a single machine for both the Apache and MySQL servers?

    Production:
        sudo hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   24872 MB in 2.00 seconds = 12450.72 MB/sec
         Timing buffered disk reads:  368 MB in 3.00 seconds = 122.49 MB/sec
        sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   24786 MB in 2.00 seconds = 12407.22 MB/sec
         Timing buffered disk reads:  350 MB in 3.00 seconds = 116.53 MB/sec
        Server version (MySQL + Ubuntu): 5.1.61-0ubuntu0.10.04.1

    Development:
        sudo hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   10632 MB in 2.00 seconds = 5319.40 MB/sec
         Timing buffered disk reads:  400 MB in 3.01 seconds = 132.85 MB/sec
        Server version (MySQL + Ubuntu): 5.1.62-0ubuntu0.11.10.1

    ORIGINAL DATA: This query is NOT the query in question, but is related, so I'll post it:

        SELECT f.form_question_has_answer_id
        FROM form_question_has_answer f
        INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
        INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
        INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
        INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
        INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
        WHERE (f2.form_template_name = 'custom'
           AND p.project_company_has_user_garbage_collection = 0
           AND p.project_company_has_user_project_id = '29')
          AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
          AND f.form_question_has_answer_form_id = '174'

    The explain plan for the above query (dev and production produce the same plan):

        id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra
        1  | SIMPLE      | p2    | const  | PRIMARY | PRIMARY | 4 | const | 1 | Using index
        1  | SIMPLE      | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where
        1  | SIMPLE      | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index
        1  | SIMPLE      | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where
        1  | SIMPLE      | f2    | ref    | form_project_id | form_project_id | 4 | const | 15 | Using where
        1  | SIMPLE      | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where

    This query takes 2 minutes ~20 seconds to execute.

    The query that is ACTUALLY being run on the server is this one:

        SELECT COUNT(*) AS num_results
        FROM (SELECT f.form_question_has_answer_id
              FROM form_question_has_answer f
              INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
              INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
              INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
              INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
              INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
              WHERE (f2.form_template_name = 'custom'
                 AND p.project_company_has_user_garbage_collection = 0
                 AND p.project_company_has_user_project_id = '29')
                AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
                AND f.form_question_has_answer_form_id = '174'
              GROUP BY f.form_question_has_answer_id) dctrn_count_query;

    With explain plans (again the same on dev and production):

        id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra
        1  | PRIMARY     | NULL  | NULL   | NULL | NULL | NULL | NULL | NULL | Select tables optimized away
        2  | DERIVED     | p2    | const  | PRIMARY | PRIMARY | 4 |  | 1 | Using index
        2  | DERIVED     | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 |  | 797 | Using where
        2  | DERIVED     | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where
        2  | DERIVED     | f2    | ref    | form_project_id | form_project_id | 4 |  | 15 | Using where
        2  | DERIVED     | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where
        2  | DERIVED     | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index

    On the production server the information I have is as follows.

    Upon execution:

        +-------------+
        | num_results |
        +-------------+
        |           3 |
        +-------------+
        1 row in set (2 min 14.28 sec)

    Show profile:

        Status                         | Duration
        starting                       | 0.000016
        checking query cache for query | 0.000057
        Opening tables                 | 0.004388
        System lock                    | 0.000003
        Table lock                     | 0.000036
        init                           | 0.000030
        optimizing                     | 0.000016
        statistics                     | 0.000111
        preparing                      | 0.000022
        executing                      | 0.000004
        Sorting result                 | 0.000002
        Sending data                   | 136.213836
        end                            | 0.000007
        query end                      | 0.000002
        freeing items                  | 0.004273
        storing result in query cache  | 0.000010
        logging slow query             | 0.000001
        logging slow query             | 0.000002
        cleaning up                    | 0.000002

    On development the results are as follows:

        +-------------+
        | num_results |
        +-------------+
        |           3 |
        +-------------+
        1 row in set (0.08 sec)

    Again the profile for this query:

        Status                         | Duration
        starting                       | 0.000022
        checking query cache for query | 0.000148
        Opening tables                 | 0.000025
        System lock                    | 0.000008
        Table lock                     | 0.000101
        optimizing                     | 0.000035
        statistics                     | 0.001019
        preparing                      | 0.000047
        executing                      | 0.000008
        Sorting result                 | 0.000005
        Sending data                   | 0.086565
        init                           | 0.000015
        optimizing                     | 0.000006
        executing                      | 0.000020
        end                            | 0.000004
        query end                      | 0.000004
        freeing items                  | 0.000028
        storing result in query cache  | 0.000005
        removing tmp table             | 0.000008
        closing tables                 | 0.000008
        logging slow query             | 0.000002
        cleaning up                    | 0.000005

    If I remove the user and/or project inner joins, the query time is reduced to about 30 seconds.

    Last bit of information I have: the MySQL server and Apache are on the same box; there is only one box for production.

    Production output from top, before and after:

        top - 15:43:25 up 78 days, 12:11,  4 users,  load average: 1.42, 0.99, 0.78
        Tasks: 162 total,   2 running, 160 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.1%us, 50.4%sy,  0.0%ni, 49.5%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4037868k total,  3772580k used,   265288k free,   243704k buffers
        Swap:  3905528k total,   265384k used,  3640144k free,  1207944k cached

        top - 15:44:31 up 78 days, 12:13,  4 users,  load average: 1.94, 1.23, 0.87
        Tasks: 160 total,   2 running, 157 sleeping,   0 stopped,   1 zombie
        Cpu(s):  0.2%us, 50.6%sy,  0.0%ni, 49.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4037868k total,  3834300k used,   203568k free,   243736k buffers
        Swap:  3905528k total,   265384k used,  3640144k free,  1207804k cached

    But this isn't a good representation of production's normal status, so here is a grab of it from today, outside of executing the queries:

        top - 11:04:58 up 79 days,  7:33,  4 users,  load average: 0.39, 0.58, 0.76
        Tasks: 156 total,   1 running, 155 sleeping,   0 stopped,   0 zombie
        Cpu(s):  3.3%us,  2.8%sy,  0.0%ni, 93.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4037868k total,  3676136k used,   361732k free,   271480k buffers
        Swap:  3905528k total,   268736k used,  3636792k free,  1063432k cached

    Development (this one doesn't change during or after):

        top - 15:47:07 up 110 days, 22:11,  7 users,  load average: 0.17, 0.07, 0.06
        Tasks: 210 total,   2 running, 208 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.1%us,  0.2%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   4111972k total,  1821100k used,  2290872k free,   238860k buffers
        Swap:  4183036k total,    66472k used,  4116564k free,   921072k cached
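    Both tuner runs flag the same mismatch: roughly 0.5-1 GB of InnoDB data against an 8 MB buffer pool, so the LIKE scan has to re-read pages from disk while streaming rows, which is exactly where "Sending data" time accrues; development likely gets away with it only because its dataset sits in the OS page cache of a lightly loaded box. A hedged starting point, following mysqltuner's own recommendations (values are illustrative for a 4 GB box that also runs Apache, and on MySQL 5.1 they require a mysqld restart):

        # /etc/mysql/my.cnf fragment - illustrative sizes, not tested values
        [mysqld]
        innodb_buffer_pool_size = 1G     # tuner asks for >= 491M (prod) / 967M (dev)
        table_cache             = 512    # tuner: 0-1% hit rate at the default 64
        query_cache_size        = 32M    # tuner: prunes on production at 16M

    After the restart, re-running the SHOW PROFILE comparison should show whether "Sending data" collapses toward the development figure.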

    Read the article

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first outline the requirement: we have a storage array and 2 servers at present. We receive data files from remote servers onto our servers, and our application runs on both servers, accessing that data and producing the final results we need. In perhaps 3-4 months we may add more servers to the current cluster pool to handle increased data load from the remote senders.

    So my requirement is to share the same storage partition across 2-3 servers - possibly 4-5 more in future - with my application reading from and writing back to that partition. Are there any bottlenecks or limitations in RHCS, GFS2 or anything similar? We are new to RHCS + GFS and all of this. Is there a better or more lightweight approach to our requirement? What is the best OS version for this - how is RHEL 6.4 64-bit? Please share any case studies or reference guides from past experience with such environments.

    Regards, Ben

    Read the article

  • How to interpret iozone values

    - by Henno
    I ran a test to measure my I/O IOPS on Linux:

        iozone -s 4g -r 2k -r 4k -r 8k -r 16k -r 32k -O -b /tmp/results.xls

    iozone claims that the output is in operations per second, yet the numbers are too big for that to be plausible: I observe a maximum of some 320 commands/s on the VMware ESX console (esxtop, then v).

        File size set to 4194304 KB
        Record Size 2 KB
        Record Size 4 KB
        Record Size 8 KB
        Record Size 16 KB
        Record Size 32 KB
        OPS Mode. Output is in operations per second.
        Command line used: iozone -s 4g -r 2k -r 4k -r 8k -r 16k -r 32k -O -b tmpresults.xls
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

                                                          random  random    bkwd   record   stride
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read  fwrite frewrite   fread  freread
         4194304       2   19025    5580   27581     29848     284     198     415  1103217     1498   18541     4340   24245    25618
         4194304       4   15650   21942   18962     21068     252    1198     193   976164     1677   22802    23093   21089    21232
         4194304       8   11121   11638   10273     10165     247    1196     202   625020^C

    The test ran for 15 hours before I pressed ^C. Is that an ordinary expectation for such a command line (dedicated 4-drive RAID10 LUN, 10k RPM SAS drives in an EMC CX300)?
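    The implausibly large figures are consistent with most operations being served from the guest's page cache rather than from the LUN - the ~320 commands/s in esxtop would then be just the cache misses and writebacks actually reaching storage. A hedged way to test that theory is to re-run with O_DIRECT so every operation bypasses the cache (and with a smaller file so the run stays short):

        # -I requests O_DIRECT where the filesystem supports it.
        iozone -I -s 1g -r 4k -O -b /tmp/results_direct.xls

    If the O_DIRECT numbers drop into the few-hundred-ops/s range, the original run was largely benchmarking RAM; the other standard trick is to size -s well above guest RAM so the cache cannot hold the working set.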

    Read the article

  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which binds STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling:

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall.

    The problem is that, instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    Here is an excerpt from my ~/.ssh/config:

        Host *
          Protocol 2
          ControlMaster auto
          ControlPath ~/.ssh/cm_socket/%r@%h:%p
          ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
          HostName proxy-host.my-domain.tld
          ForwardAgent yes

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -W %h:%p proxy-host
          ForwardAgent yes

    As you can see, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is Ubuntu 11.10 (x86_64), and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call "ssh target-server". When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
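    One detail worth decoding: 1397966893 is 0x5353482D, which is the ASCII bytes "SSH-". In other words, the client received a second SSH banner where it expected a binary packet - something in the proxy path injected extra protocol text into the tunnel, a likely suspect being the Host * ControlMaster settings also applying to the ssh run inside ProxyCommand. Two hedged experiments:

        # 1) Look at the raw bytes the proxy hop delivers; there should be
        #    exactly one "SSH-2.0-..." banner from target-server:
        ssh -o ControlMaster=no -W target-server.my-domain.tld:22 proxy-host </dev/null | head -c 64

        # 2) Disable multiplexing just for the proxy hop in ~/.ssh/config:
        #    ProxyCommand ssh -o ControlMaster=no -o ControlPath=none -W %h:%p proxy-host

    If the unmultiplexed variant works, the collision between the shared control socket and the ProxyCommand's own ssh is the culprit.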

    Read the article

  • Slow performance of MySQL database on one server and fast on another one, with similar configurations

    - by Alon_A
    We have a web application that runs on two GoDaddy servers. We are experiencing slow performance on our production server, although it has stronger hardware than the testing one and is dedicated. I'll start with the configurations.

    Testing:
        CentOS Linux 5.8, Linux 2.6.18-028stab101.1 on i686
        Intel(R) Xeon(R) CPU L5609 @ 1.87GHz, 8 cores
        60 GB total, 6.03 GB used
        Apache/2.2.3 (CentOS)
        MySQL 5.5.21-log
        PHP Version 5.3.15

    Production:
        CentOS Linux 6.2, Linux 2.6.18-028stab101.1 on x86_64
        Intel(R) Xeon(R) CPU L5410 @ 2.33GHz, 8 cores
        120 GB total, 2.12 GB used
        Apache/2.2.15 (CentOS)
        MySQL 5.5.27-log - MySQL Community Server (GPL) by Remi
        PHP Version 5.3.15

    We are running the same code on both servers.

    The problem: we have a function that executes ~30,000 PDO exec commands. On our testing server it takes about 1.5-2 minutes to complete; on our production server it can take more than 15 minutes, as the qcachegrind screenshot shows.

    Researching the problem, we checked the live graphs in phpMyAdmin and discovered that the MySQL server on our testing server was performing at a steady level of 1000 execution statements per 2 seconds, while the slow production MySQL server was managing only 250 execution statements per 2 seconds - and not steadily at all, jumping between 0 and 250 every second. You can clearly see it in the graphs (testing server vs. production server screenshots).

    You can also see the comparison between both MySQL server configurations: on the left the fast testing server, on the right the slow production one. The differences are highlighted, but I can't find anything that could cause such a difference in behaviour, as the configs are mostly the same. Maybe you can see something I can't. Note that our tables are all InnoDB, so the MyISAM differences are (probably) not relevant. Maybe it is the MySQL Community Server (GPL) installed on the production server that causes the slow performance? Or maybe it needs to be configured differently for 64-bit? I'm currently out of ideas...
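    With ~30,000 individual statements, per-commit log flushing is a classic amplifier of exactly this gap: under innodb_flush_log_at_trx_commit = 1 (the default), every autocommitted statement waits for an fsync, so the production disk's sync latency dominates while raw CPU and RAM hardly matter. Two hedged experiments that don't require a restart (the setting is dynamic on MySQL 5.5):

        # Temporarily relax per-commit flushing on production, re-run the job,
        # then revert (value 1 is the durable default):
        mysql -u root -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
        mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush%';"

    Independently, wrapping the 30,000 statements in a single transaction on the PHP side (one beginTransaction()/commit() around the loop) reduces the flushes to one and is a fair quick test of the same hypothesis.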

    Read the article

  • Creating an ec2 image on amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my EC2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

        ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    which returned:

        Copying / into the image file /mnt/image...
        Excluding:
             /sys/kernel/debug
             /sys/kernel/security
             /sys
             /proc
             /dev/pts
             /dev
             /dev
             /media
             /mnt
             /proc
             /sys
             /etc/udev/rules.d/70-persistent-net.rules
             /etc/udev/rules.d/z25_persistent-net.rules
             /mnt/image
             /mnt/img-mnt
        1+0 records in
        1+0 records out
        1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
        mkfs.ext3: option requires an argument -- 'L'
        Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
            [-i bytes-per-inode] [-I inode-size] [-J journal-options]
            [-G meta group size] [-N number-of-inodes]
            [-m reserved-blocks-percentage] [-o creator-os]
            [-g blocks-per-group] [-L volume-label] [-M last-mounted-directory]
            [-O feature[,...]] [-r fs-revision] [-E extended-option[,...]]
            [-T fs-type] [-U UUID] [-jnqvFKSV] device [blocks-count]
        ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants -L, which is a volume label. But ec2-bundle-vol doesn't seem to take a volume label as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

        # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?

    Read the article

  • What is the sysadmin's dream network printer? 6-8k pg/mo. Xerox, OkiData, Lexmark and HP are all fail

    - by Jacob
    How do I find out which printer brand and/or model doesn't suck? This information is hard to find, and manufacturers' websites won't reveal any issues with particular printers. After 10 years of dealing with network-shared printers, I can't say that I have been impressed with any of the printer brands I've seen. Brother's little laser MFPs have been close to ideal for low volume, but that is it, period. OkiData, Lexmark, HP, Xerox solid-ink printers - they have all sucked in one way or another.

    Currently I'm looking to replace a Xerox ColorQube 8570 because it fails to print on a regular basis. Sometimes it doesn't even boot VxWorks fully - it just hangs at 2% or wherever. I've used Xerox 8860MFPs and they sucked just as badly. I won't talk about inkjets here; that's most likely not what I'm looking for.

    We currently spend about $4k per year on paper and ink for this printer, at up to 6-8k pages per month: letter size, mostly black and white, low color usage. I want the printer to feed paper correctly, not crash and burn when a PDF isn't to its taste (my favorite Xerox problem here), and to have decent drivers for Windows and OS X. Print quality is not of the utmost importance, but paper does get sent to customers.

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, saves it for some time and serves it to clients. We have run into the problem of insufficient channel capacity on a 1 Gbit link: peak load within a 4-hour window completely saturates the channel. Server performance is sufficient for now. Approximately 4.5 TB of data are transmitted per day; more than 100 TB accumulate per month.

    The first thought that comes to mind is simply to add one more 1 Gbit port and sleep peacefully until 2 Gbit is no longer enough (which may happen quite quickly), or until one server can no longer handle it - and then we just add new caching servers. But then we need a load balancer which sends requests for one and the same URL always to one and the same server (to avoid multiple copies of the same cached objects).

    Here are the questions:

    - Does a balancer need bandwidth equal to the sum of the bandwidth of all the caching servers?
    - What shall we do if the balancer runs out of ports? Should we add more balancers, or solve the problem by means of round-robin DNS?
    - What are the standard approaches to such problems?
    - Can anyone recommend hosting companies that can solve this problem? We are interested in the American and European markets.

    Read the article
