Search Results

Search found 8367 results on 335 pages for 'temporal difference'.

Page 261/335

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances, i.e. if content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if it is subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he claimed that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this? I'm a little worried because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.
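
    For what it's worth, the decoupled approach hinted at above (S3 + CloudFront for static content) usually means the CMS writes uploads straight to shared object storage instead of the local filesystem, so it no longer matters which instance in the auto-scaling group handled the request. A minimal sketch, assuming the boto3 library and a hypothetical bucket name, not the provider's actual mechanism:

        import boto3

        s3 = boto3.client("s3")

        def save_upload(local_path, key):
            # Push the upload to shared object storage instead of (or as well as)
            # the web server's local disk, so every instance sees the same content.
            s3.upload_file(local_path, "cms-assets-example", key)
            # Serve it back via the bucket/CDN URL rather than a local path.
            return "https://cms-assets-example.s3.amazonaws.com/" + key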

  • ATI Radeon 5670 Won't Show Resolutions over 1400x900

    - by Phil Sandler
    Just got my new Dell computer with Windows 7 and an ATI Radeon 5670. I attached it to my current monitor, which is a Samsung 24" (2443BWT). Windows 7 does not allow me to display in resolutions greater than 1400 x 900. The setup runs through a VGA cable into the VGA port of the card. The card also has a DVI port, but I need to use the VGA port because my KVM supports VGA only. My old PC (Windows XP, GeForce 8600 video) can display at 1900 x 1200 on the same monitor (which is what I want) and even higher. It does this through a VGA cable also connected to the KVM (through the DVI port but using an adapter). I have tried the same setup (DVI-to-VGA adapter) on the new PC and nothing changed. I have tried: updating the drivers via Windows "Update Driver" (says they are current), installing the updated version of the drivers from ATI (made no difference), and installing PowerStrip (all the options I would need for a custom resolution are greyed out). Installing the drivers/software from ATI caused the ATI Catalyst Control Center software to stop functioning, so I can no longer even start it. I have found some references to other people having this problem and instructions on cleaning the software off and reinstalling it (as uninstalling normally doesn't solve it); I will try this tonight. In any case, I didn't see any options in CCC that would allow me to override the settings for max resolution. However, I didn't tinker with it too much before I tried updating the drivers, so I may have missed a setting. I contacted Samsung via online chat and they say it's a problem with the video card/driver (of course--what else would they say?). Any thoughts on what else I could try?

  • What user information is exposed via a browser?

    - by ipso
    Is there a function or website that can collect and display ALL of the user information that can be obtained via a browser? Background: This of course does not account for the significant cross-reference abilities of large corporations to collate multiple sources and signals from users across various properties, but it's a first step. Ghostery is a great idea: it shows people all of the surreptitious scripts that run on any given website. But what information is available – what is the total set of values stored – that those scripts can collect from? If you log in to a search engine and stay logged in but leave their tab, is that company still collecting your webpage viewing and activity from other tabs? Can past or future inputs to pages be captured – say comments on another website? What types of activities are stored as variables in the browser app that can be collected? This is surely a highly complex question, given the countless user scenarios – but my whole point is to be able to cut through all that – and just show the total set of data available at any given point in time. Then you can A/B test and see what is available in a fresh session with one tab open vs. the same webpage but with 12 tabs open, and a full day of history to boot. (Latest Firefox & Chrome – on Win7, Win8 or Mint13 – although I'd like to think that won't make too much of a difference. Make assumptions. Simple is better.)
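
    One narrow, concrete slice of this: before any script runs, the browser already volunteers a set of request headers (user agent, language, referrer, cookies) on every request. A minimal sketch using only the Python standard library, on a hypothetical local port, that echoes back whatever your own browser sends; it can serve as a baseline for the A/B comparison described above:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class HeaderEcho(BaseHTTPRequestHandler):
            def do_GET(self):
                # Everything listed here was sent by the browser before any page script ran.
                body = "\n".join(f"{name}: {value}" for name, value in self.headers.items())
                self.send_response(200)
                self.send_header("Content-Type", "text/plain; charset=utf-8")
                self.end_headers()
                self.wfile.write(body.encode("utf-8"))

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8080), HeaderEcho).serve_forever()  # port is arbitrary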

  • Hyper-V and attaching physical disks [migrated]

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:
      - Windows 7 Ultimate
      - 1TB boot drive (my smallest drive)
      - Windows dynamic spanned volume, containing 1x 1TB drive and 2x 2TB drives, totalling 5TB
    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a homegroup, so I must use Windows 7. I know VHDs can only be like 127GB or something, so I obviously need to directly connect disks to my Windows 7 machine. Here is my plan:
      - Server Core 2008 R2 (Hyper-V)
      - 1TB boot drive (storing VHDs for the boot drives of the VMs) - possibly in a RAID 1 with my other 1TB drive
      - 5x 2TB drives (1x 2TB drive as hot spare), totalling 10TB, directly attached to a Windows 7 VM, for use of the homegroup for this array
    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is, with hardware RAID, will it really make that much of a difference? Server specs: Intel Core 2 Quad Q9550 2.83GHz, Asus Maximus II Formula (PCI-E x16), 8GB DDR2 RAM PC2-6400. (Yes, I know it's a bit out of date.)

  • How to improve Samba performance on VirtualBox machine?

    - by ColinM
    I am running a Windows 7 64bit host and an Ubuntu 9.04 32bit guest inside VirtualBox 4.0.0 on a laptop which has internet connectivity via wifi. The main use is writing code, for which I use NetBeans. My dev environment is the virtual machine, and I use Samba on the VM to share the code directory so that I can use NetBeans on the host as my IDE. Unfortunately NetBeans does a lot of disk access, and due to the poor Samba performance the IDE is hardly usable. How can I improve performance of the Samba share? On my desktop it isn't so bad, but I don't know what the difference would be since they are similar setups (Win 7 hosts, cloned guests, SSDs, VBox guests using SATA in AHCI mode, etc.). With bridged networking, is the performance between the host and guest limited by the physical hardware (Intel 6200 AGN on the laptop)? I switched to host-only and it didn't seem to improve performance at all. To quantify the bad performance, I used 7zip to zip a project directory and got 19 KB/s to 500 KB/s depending on the size of the files being zipped. On my desktop it was in the ~10 MB/s range. Any tips for VirtualBox/Samba configuration to improve the performance? I am using Samba 3.3.2. Hopefully Samba with SMB2 support will be released soon.
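
    To separate Samba itself from everything else in the stack, it can help to measure raw sequential read throughput over the share with no IDE or archiver involved. A rough sketch, assuming a hypothetical UNC path to a large file on the share:

        import time

        SHARE_FILE = r"\\vm-hostname\code\some-large-file.bin"  # hypothetical path on the Samba share
        CHUNK = 1024 * 1024

        start = time.time()
        total = 0
        with open(SHARE_FILE, "rb") as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                total += len(chunk)
        elapsed = time.time() - start
        print(f"{total / elapsed / 1024 / 1024:.1f} MB/s over {elapsed:.1f}s")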

  • Apache and Virtual Hosts Problem on OS X

    - by Charles Chadwick
    I recently formatted and installed my iMac. I am running 10.6.5. Prior to this format, I had the default Apache web server up and running with several virtual hosts, and everything ran beautifully. After formatting, I set everything back up again, and now Apache is acting funny. Here is a description of what I have going on. My default root directory for the Apache web server is pointed to an external hard drive. In my httpd.conf, here is what I have:

        DocumentRoot "/Storage/Sites"

    Then a few lines beneath that:

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

    And then beneath that:

        <Directory "/Storage/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from All
        </Directory>

    At the end of this file, I have commented out the user dir include conf file:

        Include /private/etc/apache2/extra/httpd-userdir.conf

    And uncommented the virtual hosts conf file:

        Include /private/etc/apache2/extra/httpd-vhosts.conf

    Moving on, I have the following entry in my vhosts file:

        <VirtualHost *:80>
            DocumentRoot "/Storage/Sites/mysite"
            ServerName mysite.dev
        </VirtualHost>

    I also have a host record in my /etc/hosts file that points mysite.dev to 127.0.0.1 (I also tried using my router IP, 192.168.1.2). The problem I am coming across is, despite having PHP files in /Storage/Sites/mysite, the server is still looking at /Storage/Sites. I know this because the DocumentRoot contains a PHP file with phpinfo() (whereas the index.php file in mysite has different code). I have tried setting up other virtual hosts, but they are still doing the same thing. Also, "NameVirtualHost *:80" is in my vhosts file; I saw that as a solution on another thread here, but it doesn't seem to make a difference. Any ideas on this? Let me know if this is not enough information.

  • linux shutdown hang with wifi cifs mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a CIFS mount via Puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection it shuts down just fine, but if it's on the wifi at the time (and no wired connection) it'll hang at the Fedora "f" logo. I'm not sure if it's indefinite or just a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying, so is there a way of causing the machine to shut down even if network connectivity has been lost at unmount time -- or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom script for shutdowns, which is a bit of a kludge)? It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue I'd kind of assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet. Additional info: I can confirm that manually unmounting the mounts and then shutting down (sudo shutdown or the Xfce shutdown button) will shut down just fine; it only hangs if the mounts are still mounted. The Puppet config that sets the mount looks like this (now with the _netdev entry, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
            ensure => directory,
        }

        mount { "/mnt/share":
            atboot   => true,
            ensure   => mounted,
            remounts => false,
            fstype   => cifs,
            device   => "//srv/share",
            options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
            require  => [ File["/mnt/share"], Group["shareusers"] ],
        }

  • What switch should we use for PCoIP?

    - by Jay R.
    We have a small lab space that seats 10 people and has 20 machines. Each machine is set to 1920x1200 resolution because the user apps are best used at that resolution. Currently the machines are all located close enough to monitors that a DisplayPort cable will reach, but the pending lab remodel positions them around 80 feet or more away in racks. Our proposed solution is to use PCoIP. We purchased 10 PCoIP portals and 20 PCoIP host cards. We plan to set up a dedicated network to handle just the PCoIP traffic. After testing just one portal and one host card with a cheap 1G switch from a local office supply store, we were left with a less than good impression of how useful it would be in our lab. The framerates were not spectacular and the mouse seemed jerky. Our concern is that we can't get away with the cheap 1G stuff from the store because adding more machines to the switch will just make the user experience worse. What switch would be recommended to best support our PCoIP situation? We will need to plug in at least 30 cables based on just those machines. Is there a particular feature to search for that makes a difference? Is there a switch that works best with PCoIP? Added Info: The reporting webapp for the host card shows maximum bandwidth usage to be 220000 kbps. The average appears to be around 180000 kbps. The reverse direction is much lower, like 15000 kbps.
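
    A rough back-of-the-envelope check of the aggregate load, using the per-host numbers from the reporting webapp above multiplied across the 20 machines (only a sketch, since real PCoIP traffic depends heavily on what is on screen):

        hosts = 20
        peak_kbps = 220000   # per-host peak reported by the host card webapp
        avg_kbps = 180000    # per-host average

        def to_gbps(kbps):
            return kbps * 1000 / 1e9  # kbps -> bit/s -> Gbit/s

        print(f"peak aggregate: {to_gbps(peak_kbps * hosts):.1f} Gbit/s")  # ~4.4 Gbit/s
        print(f"avg aggregate:  {to_gbps(avg_kbps * hosts):.1f} Gbit/s")   # ~3.6 Gbit/s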

  • Why does my simple Raid 1 backup storage perform really slow sometimes?

    - by randomguy
    I bought 2x Samsung F3 EcoGreen 2TB hard disks to make backup storage. I put them in RAID 1 (mirror) mode, made a single partition and formatted it as NTFS, running Windows 7. For some reason, accessing the drive's contents (simply by navigating folders) is sometimes really slow. Opening D:/photos/, for example, can sometimes take several seconds before it starts showing any of the folder's contents. The same applies to other folders. What could be causing this, and what could I do to improve the performance? I remember that there was an option somewhere inside Windows to choose fast access but less reliable persistence operations (read/write). It was a tick box inside some dialog. At the time, it felt like a good idea to untick that option and get more reliable persistence but slower access, but now I'm regretting it. I'm unable to find that dialog; I've looked hard, and I don't know if it would make any difference anyway. Oh, and I've run scan disk and defrag on the drive. No errors, and speed isn't improved.
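
    To put a number on the slow folder opens, a small sketch that times a directory listing and the first read from the array (the path is the one mentioned above; adjust as needed). A mirror whose drives have spun down, or one that is quietly resyncing, often shows up as exactly this kind of multi-second stall on first access:

        import os
        import time

        FOLDER = r"D:\photos"  # the folder that is slow to open

        start = time.time()
        entries = os.listdir(FOLDER)
        print(f"listing {len(entries)} entries took {time.time() - start:.2f}s")

        # Time the first read of one file as well.
        for name in entries:
            path = os.path.join(FOLDER, name)
            if os.path.isfile(path):
                start = time.time()
                with open(path, "rb") as f:
                    f.read(1024 * 1024)
                print(f"first 1 MB of {name}: {time.time() - start:.2f}s")
                break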

  • wireless to wired internet

    - by Mark T
    My wife uses an old computer which doesn't really like having an internet connection via a USB wireless adapter. I've tried several, and none have been satisfactory. The connection gets dropped often and it is frustrating for her. A wireless card also had the same trouble. Repositioning the computer made no difference. (Two laptops, nearby, connect just fine and stay up, so it isn't the router.) However, the computer always worked well when I had an Ethernet cable connected to it. I know there is a box which will connect to my wireless network and provide an Ethernet cable connection. But the terminology used is so complex, I can't tell what it is I really need. In case that wasn't clear, here it is in different words: What I need is just the opposite of a wireless router. My wireless router takes my cable modem's Ethernet connection and makes it available to wireless clients. What I want is a box which is a wireless client to my wireless router and provides an Ethernet cable connection that I can connect to any device. I need to know the right name for a box with such capabilities. If you know of some inexpensive examples, that would also be helpful. I'm running a wireless G network with a Linksys Router.

  • AT&T Filtering FTP traffic?

    - by xpda
    Using AT&T DSL, I cannot upload or download a few files (out of a large set of about 1,500) via FTP. The problem is the file name. I can change a few characters of the file name, and they upload fine. I can change the file names from upper to lower case and they upload fine. If I change back to the original file name, it will not upload again. When it doesn't upload, it starts, transfers about 5% of a 5-10 meg file, and then times out. I have uploaded one of the files under a different name, changed the name back to the original, and it will not download via FTP. It will download in a browser, and it will download just fine over FTP with a different name. It just will not download over FTP under the original name. I have reproduced this uploading to three different servers on 1and1 and Amazon EC2. When I try it from a client on a non-AT&T ISP, it works OK. Here is a file that did not upload until I had renamed it (I have changed it back to the original name): "http://xpda.com/nautnew/11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png" This problem is unrelated to connection, speed, and file content. The only things I can see that make a difference are the file name and AT&T DSL. Does AT&T have some kind of FTP file filtering? Is there anything else that could cause this behavior?
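
    A small, self-contained way to reproduce the experiment outside any particular FTP client is to upload the same local file once under its original name and once under a changed name, and see which transfer stalls. A sketch using Python's ftplib, with hypothetical credentials:

        from ftplib import FTP

        HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"   # hypothetical credentials
        LOCAL = "11302 STOVER POINT TO PORT BROWNSVILLE SIDE A.png"  # the problem file

        def try_upload(remote_name):
            with FTP(HOST) as ftp:
                ftp.login(USER, PASSWORD)
                with open(LOCAL, "rb") as f:
                    ftp.storbinary("STOR " + remote_name, f)
            print("uploaded as", remote_name)

        try_upload(LOCAL)          # original name (the one that times out)
        try_upload(LOCAL.lower())  # same bytes, lower-cased name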

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the http server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the http headers I send. I tried with this config (taken from the docs) but it didn't work. /etc/nginx/nginx.conf:

        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

    /etc/nginx/sites-available/website:

        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings, I need to control them from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs. Instead, nginx will store it for 1h, because of those directives. I was also wondering whether I should try proxy_cache instead; I'm not sure what the difference is. In both the fastcgi and proxy module docs it says this: "The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented." nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy that is offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
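
    A quick way to watch what the cache is doing from the outside, independent of the config, is to request the same URL twice and compare the X-Cache header added above with the Cache-Control sent by the backend. A sketch using the requests library and a hypothetical URL:

        import requests

        URL = "http://localhost/some-page"  # hypothetical URL served through the fastcgi cache

        for attempt in (1, 2):
            r = requests.get(URL)
            print(f"request {attempt}: "
                  f"X-Cache={r.headers.get('X-Cache')} "
                  f"Cache-Control={r.headers.get('Cache-Control')} "
                  f"Age={r.headers.get('Age')}")
        # If nginx honored the backend headers, the second request should be a HIT
        # until the s-maxage sent by PHP expires.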

  • Outlook 2010 Crashing Unpredictably

    - by cbkadel
    Very often when I open up Outlook 2010 and start doing things in it, it will hang and become non-responsive. I have tried letting it finish, but it never does come back (up to 20 minutes of letting it try). I generally have to restart Outlook and try again. Usually after about an hour of this, Outlook somehow snaps out of it and works for the rest of the day. It's generally in the morning (though I doubt that's the key variable). Generally, the emails that cause problems are HTML/formatted, but not always. What I've done so far to troubleshoot: installed the latest Outlook hotfix (I think Dec 14, 2010) and started Outlook in safe mode. Neither of those steps seems to make a difference. Usually, after about 10-15 restarts of Outlook on any given day, it starts working thereafter. My next step is to uninstall/reinstall Office 2010, but I'm hoping someone has seen this and knows what to do about it. My configuration is like this: Microsoft Online Services (using Microsoft's Sign In app) connecting to Exchange; two other Exchange accounts in this profile (new feature in 2010) connected through Outlook Anywhere; and the Live Meeting conferencing add-in. I've disabled the People tab/add-in and the "Send to Bluetooth" add-in. Not sure what else to do?

  • ubuntu suspend works, but then immediately starts

    - by Yoav Aner
    Having a strange problem after upgrading from Ubuntu 11.04 to 12.04. Previously I could suspend just fine: the computer would switch itself off, and pressing the ON button would switch it on and it would resume. After upgrading to 12.04, however, when I suspend it does (almost) the same and turns itself off, but about 2 seconds later the computer turns itself on again and comes back from suspend. I haven't changed any of the hardware or BIOS, and it was working fine before. I also tried every possible switch of pm-suspend, setting acpi_sleep=nonvs in /etc/default/grub, and also this suggestion, but nothing seems to make a difference... UPDATE: I just tested suspend using the 12.04 live CD and it worked perfectly fine... but when I boot normally it doesn't. ANOTHER UPDATE: After re-installing I noticed that I can suspend. However, after restoring my home folder, the strange suspend problem happens again. I then created a new user account, and when I log in to the other account I can suspend without a problem... so this seems specific to my account only. How/what can cause this in my own user settings?

  • After RAID failure SBS 2008 issues logging in and Exchange store does not mount

    - by Josh R
    Today has been one of those days. Yesterday a hard drive in our Dell PowerEdge 2900 server failed and the RAID array didn't degrade gracefully, so I called Dell (the server is still under warranty) and got an engineer to work through the RAID issues with me. He was a nice guy but didn't do too much. We tried to put the RAID in a state where it was bootable, and even though we only lost one disk there are still issues with the server. Once we got the server to boot there was an error message saying that logonui.exe was corrupted and we needed to run chkdsk. I clicked through the error messages and the login screen never came up. So I power cycled the server and it ran chkdsk automatically, but the login screen still didn't appear. I tried safe mode; no difference there either. The issues I am currently having are:
      1) The server boots up and the loading Windows screen comes up, then it dumps me into a black screen where I can only see my mouse cursor. Ctrl+Esc doesn't work and Ctrl+Alt+Del doesn't work.
      2) Only some of the services come up: DHCP, DNS, DFS, and Print.
      3) The Exchange Information Store and Transport services don't start. I tried using MMC to connect to services.msc on the computer and start them, but they throw an error message of "Can't start because group or dependency failed".
    Has anyone had a problem like this? Can anyone offer some guidance? Thanks a bunch!

  • Can I clone my hard drive to an external and boot from the clone?

    - by willbuntu
    First thing: I am not asking what software I'm supposed to use. I already know the answer: Ghost (proprietary), Clonezilla, and dd (if I'm careful). What I really want to know is if it is possible to (essentially) bit-for-bit clone my entire installation (OS, installed software, activation(s), etc.) to an external USB hard-drive, and then boot off of that (if I need to, I know how to edit BIOS settings and use Plop boot manager), and work with it day-to-day as if there was virtually no difference from using my internal HDD now. Again, I'm not asking how to install Windows to an external (because I know I'd need to do some special workaround), I'm asking if I can clone everything and boot off of it. In case you're wondering why I'm going to this trouble: I'm using a Lenovo Essentials laptop that has an unmodifiable partition table (due to recovery crap), and has all 4 of its partitions spoken for (3 primary, one extended, cannot change the extended). Anyway, my thought is that if I can clone everything and boot off of it when I need to, and just have a Linux distro on the internal HDD, then that could work.

  • certificate SSH login does not work on 22 but other port

    - by Hugo
    On my Red Hat server, sshd will not accept my correct certificate login on port 22. However, if I start another sshd on another port, it works! (I assume the second sshd loads the same configuration files.) The second sshd is started with:

        sudo /usr/sbin/sshd -p 54321 -d    # -d is optional and prints debug output

    ssh strange-host -p 22 -vvv prints:

        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
        debug1: Offering public key: /home/me/.ssh/id_dsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug3: Wrote 528 bytes for a total of 2389
        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
        debug2: we did not send a packet, disable method
        debug3: authmethod_lookup password

    ssh strange-host -p 54321 -vvv prints:

        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
        debug1: Offering public key: /home/me/.ssh/id_dsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug3: Wrote 528 bytes for a total of 2389
        debug1: Server accepts key: pkalg ssh-dss blen 433
        debug2: input_userauth_pk_ok: SHA1 fp 0f:1c:df:27:f7:86:49:a8:47:7e:7f:f3:32:1c:7d:04:a3:73:a5:72

    So the question is: why the difference? I have thought of no way to get any helpful logging from the "standard" sshd to troubleshoot the problem.

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

        A connection was successfully established with the server, but then an error occurred during the
        login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the
        pipe.) (Microsoft SQL Server, Error: 233)

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority order in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason? When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this is making a difference in some way (as some of the discussion in this post about a "SuperSocketNetLib\Lpc" registry key seems to suggest). I have tried restarting the SQL Server service, by the way, and also tried to get someone to log onto the machine with an admin account to restart the SQL Server service. Still no luck.
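
    One way to narrow down which protocol is actually failing is to force each one explicitly from a test client: SQL Server accepts lpc:, tcp:, and np: prefixes on the server name. A rough sketch with pyodbc; the instance name and ODBC driver name are assumptions, so adjust them to the local setup:

        import pyodbc

        INSTANCE = r".\SQLEXPRESS"  # assumed local instance name

        for prefix in ("lpc:", "tcp:", "np:"):  # shared memory, TCP/IP, named pipes
            conn_str = (
                "DRIVER={SQL Server};"
                f"SERVER={prefix}{INSTANCE};"
                "Trusted_Connection=yes;"
            )
            try:
                with pyodbc.connect(conn_str, timeout=5) as conn:
                    print(prefix, "connected OK")
            except pyodbc.Error as exc:
                print(prefix, "failed:", exc)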

  • Over gigabit connection, Teracopy does 31MB/s, but Windows 8 does it at ~109MB per second?

    - by Gaurang
    I got my brain-melting first taste of Gigabit networking today, between my 2011 Mac Mini and Windows 8 Pro desktop, connected via Cat.5e to a Linksys WRT320N (sporting dd-WRT). After making sure that the line speed on both systems showed 1Gbps, I proceeded to copy a 2.4GB MP4 from the Mini to the Win 8 desktop (SMB sharing). Although satisfied with the 30-34 MB/s that Teracopy was showing (that was a proper step-up for me from 10 MB/s), I was still curious about this massive difference between the advertised and real-world speed. Two hours of Google had me believing that there were other factors that resulted in less speed, SMB being one. So just for the sake of doing it, I iPerf'd both systems, and guess what that showed - around 875 Mbps on both systems! I then stumbled upon this little piece of info, after which I turned off Teracopy and copied the same file through Windows 8's regular copier. 109 MB/s. Molten brains :) What exactly is causing this? And can I enable such speeds via Teracopy? I really dig the extra features that Teracopy has; I will surely miss them now :D
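
    As a third data point independent of both Teracopy and the Explorer copy dialog, a small sketch that times a plain copy of the same file over the SMB share (both paths are placeholders):

        import os
        import shutil
        import time

        SRC = r"\\macmini\share\video.mp4"  # placeholder UNC path to the 2.4GB file
        DST = r"C:\temp\video.mp4"          # local destination

        start = time.time()
        shutil.copyfile(SRC, DST)
        elapsed = time.time() - start

        size_mb = os.path.getsize(DST) / (1024 * 1024)
        print(f"{size_mb:.0f} MB in {elapsed:.1f}s = {size_mb / elapsed:.1f} MB/s")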

  • How do I view source in Outlook 2010?

    - by Martin Duys
    This question has already been asked here, but the answer only gives advice on how to view the email header, not the actual HTML source of the email. There is another question here, which I think may be caused by the same issue as mine, but it does not have a satisfactory answer (the answer does not work for the person who posted the question). If I right-click at the bottom of an email I can see the option to 'view source', but when I select it nothing happens. I have done a bit of research and came across a post for a much earlier version of Outlook that suggested adding something to the registry. I applied this advice, but it made no difference; still, I'm pretty sure that applying this kind of solution correctly for my circumstances will do the trick. When I first received this machine it had a demo version of UltraEdit installed. I uninstalled UltraEdit and installed Notepad++ instead. I am convinced that there is a registry entry pointing to UltraEdit as the default viewer for 'view source' in my email, and I need to replace this entry with a reference to Notepad or Notepad++, but I don't know how to do this. Any suggestions?

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting, but keep it going for the loyal users of the site and API. The database has a nearly 4 million row table, and sits on a 4GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, I get returns of mysql running at about 11% capacity, so no serious load. The main query against mysql runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360MB RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the mysql variables, and they are both very similar (I've included the show variables output below, if anybody is interested). Is there a good way to decide on what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference with the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of show variables (though I'm not sure it is important):

        +---------------------------------+------------------------------------------------------------+
        | Variable_name                   | Value                                                      |
        +---------------------------------+------------------------------------------------------------+
        | auto_increment_increment        | 1                                                          |
        | auto_increment_offset           | 1                                                          |
        | automatic_sp_privileges         | ON                                                         |
        | back_log                        | 50                                                         |
        | basedir                         | /usr/                                                      |
        | bdb_cache_size                  | 8384512                                                    |
        | bdb_home                        | /var/lib/mysql/                                            |
        | bdb_log_buffer_size             | 262144                                                     |
        | bdb_logdir                      |                                                            |
        | bdb_max_lock                    | 10000                                                      |
        | bdb_shared_data                 | OFF                                                        |
        | bdb_tmpdir                      | /tmp/                                                      |
        | binlog_cache_size               | 32768                                                      |
        | bulk_insert_buffer_size         | 8388608                                                    |
        | character_set_client            | latin1                                                     |
        | character_set_connection        | latin1                                                     |
        | character_set_database          | latin1                                                     |
        | character_set_filesystem        | binary                                                     |
        | character_set_results           | latin1                                                     |
        | character_set_server            | latin1                                                     |
        | character_set_system            | utf8                                                       |
        | character_sets_dir              | /usr/share/mysql/charsets/                                 |
        | collation_connection            | latin1_swedish_ci                                          |
        | collation_database              | latin1_swedish_ci                                          |
        | collation_server                | latin1_swedish_ci                                          |
        | completion_type                 | 0                                                          |
        | concurrent_insert               | 1                                                          |
        | connect_timeout                 | 10                                                         |
        | datadir                         | /var/lib/mysql/                                            |
        | date_format                     | %Y-%m-%d                                                   |
        | datetime_format                 | %Y-%m-%d %H:%i:%s                                          |
        | default_week_format             | 0                                                          |
        | delay_key_write                 | ON                                                         |
        | delayed_insert_limit            | 100                                                        |
        | delayed_insert_timeout          | 300                                                        |
        | delayed_queue_size              | 1000                                                       |
        | div_precision_increment         | 4                                                          |
        | keep_files_on_create            | OFF                                                        |
        | engine_condition_pushdown       | OFF                                                        |
        | expire_logs_days                | 0                                                          |
        | flush                           | OFF                                                        |
        | flush_time                      | 0                                                          |
        | ft_boolean_syntax               | + -                                                        |
        (the rest of the output was cut off here)

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.
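
    On the "how much RAM would be ideal" question, a common rule of thumb is to compare the data and index sizes of the hot tables against candidate RAM sizes: if at least the indexes fit comfortably in memory, lookups against the 4-million-row table tend to stay fast. A hedged sketch that pulls those numbers from information_schema (connection details are placeholders, and it assumes the mysql-connector-python package):

        import mysql.connector  # pip install mysql-connector-python

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret", database="mysite")  # placeholders
        cur = conn.cursor()
        cur.execute("""
            SELECT table_name,
                   ROUND(data_length  / 1024 / 1024) AS data_mb,
                   ROUND(index_length / 1024 / 1024) AS index_mb
            FROM information_schema.tables
            WHERE table_schema = DATABASE()
            ORDER BY data_length + index_length DESC
        """)
        for table, data_mb, index_mb in cur:
            print(f"{table}: {data_mb} MB data, {index_mb} MB index")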

  • Why is REMOTE_ADDR only sometimes available as an Apache environment variable?

    - by Xiong Chiamiov
    To avoid having to parse X-Forwarded-For in Varnish, I'm trying to just set a header on the SSL terminator (currently Apache) that stores the direct client IP in a header. On our development machine, this works:

        RequestHeader set X-Foo %{REMOTE_ADDR}e

    However, in staging it doesn't. Specifically, the header is empty, as illustrated by varnishlog:

        13 TxHeader b X-Foo: (null)

    (On the development machine, this shows the IP address as expected.) Similarly, logging REMOTE_ADDR shows that it only appears to be populated on the dev machine:

        # Config
        LogFormat "%{X-Forwarded-For}i %{REMOTE_ADDR}e" combined
        CustomLog "/var/log/httpd/access_log" combined

        # Log file, staging
        <my ip> -

        # Log file, development
        <my ip> <my ip>

    Since the dev machine is, well, a dev machine, it is different in a number of ways; however, I can't track down which difference is causing this. The versions of Apache are the same (2.2.22), and I don't see anything relevant in any of the standard config files or /etc/sysconfig/httpd. And the rest of the system is reasonably similar, since they're built off the same CentOS 5 base image. I can't even tell from the Apache documentation whether REMOTE_ADDR is expected to exist or not as an environment variable, but it clearly works on one machine, whether by fluke or design, and the inconsistency is driving me mad.

  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup and I'm a bit confused about the different memory types. The advantage of ECC is clear - single-bit error correction. When it comes to registered memory it seems more vague, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However I can't find any quantification of this. What I'm wondering about is: Is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported? If yes, at what point (amount of modules or GB of memory) do these improvements start to become noticeable? For a specific example, the HP ProLiant DL 120 G6 server manual states that maximum supported memory configuration is 16 GB unbuffered (4x4GB) or 12 GB registered (6x2GB). In this case I'd rather have the extra 4GB of memory if the reliability difference is negligible.

  • Windows Server 2003 (w/Exchange) move to new machine

    - by James Booker
    I have an ageing domain controller (the only one on a 10-PC network) which needs rebooting often. I have a Dell PowerEdge 2850 server doing nothing, so I'd like to move the DC to that, but here's the catch - I don't have the Win2k Server Std install media any more as it's been lost. I purchased "EaseUS Todo Backup Advanced Server", which claims to be able to recover to dissimilar metal, but it's not quite working (although I don't think it's the product's fault). I know the server and PERC RAID card are good because I installed Ubuntu on the logical drive (4 x 72GB disks, RAID 5) with no problems. I've booted from the EaseUS Todo Backup CD (which is WinPE based) and recovered to the logical disk on the RAID (after installing the driver inside the WinPE environment from a NAS drive). The problem is that when I boot the server, I can get the OS selection menu, but any option results in a blank screen, with no errors. I figure this is probably because the driver wasn't installed on the old machine (which is IDE-based (I know, I know!) and doesn't have a RAID controller). I've booted from the CD and copied the mraid35x.sys file to the c:\windows\system32\drivers folder on the recovered system, but it makes no difference. I made a boot.ini with rdisks 0-10 defined, and booting from each of these resulted in a file error (i.e. 'this isn't a real disk') - the only disk that gets any response (the blank screen) is multi(0)disk(0)rdisk(0)partition(1), which just gives me the blank black screen and no disk activity. Is there any way I can force the driver to be installed on the source system (so I can do a full backup again)? I've tried right-clicking the oemsetup.inf and clicking Install, but it didn't actually do anything. I attempted to force it with the 'Add new hardware' wizard, forcing with the 'Have Disk' option, but it still gave me no hardware to select. Also, I've got an identical machine running WinXP which uses the PERC driver successfully (which was obviously done at install time) and the boot.ini settings are the same: multi(0)disk(0)rdisk(0)partition(1). Any ideas would be appreciated.

  • Network card shuts down when stressed

    - by user142485
    I have a network card that functions fine with light use, but quits functioning after heavy use. I replaced it with a brand new one and still have the same issue, also updated drivers. It is a wired D-Link card. The Internet seems fine for a small amount of web browsing but when I run a bandwidth test it starts out fast and slows quickly until the card completely quits; I have a constant ping of the gateway going while I run this and it starts timing out after a couple seconds into the speed test. The card will stay on and the data light on it still flashes some but I cannot ping the gateway or anything else until the computer is rebooted. When I boot into safe mode I can browse and run the speed test fine with no problems. I am guessing that this is probably some program that is loading in regular mode but not safe mode that is causing the issue? I have very limited software (turned off av and firewall) on the computer but I am thinking that I'll just have to start eliminating start-up programs and see if that helps. This is on Windows 7 if that makes any difference. Anyone had a similar issue or have any other suggestions/ideas for narrowing down this problem?
