Search Results

Search found 9403 results on 377 pages for 'high tech'.


  • HTML tabindex: Put some links last without complete enumeration

    - by Emanuel Berg
    I know I can use the HTML anchor attribute tabindex to set the tab index of links, i.e., in what order they get focused when the user hits Tab (or Shift-Tab). But I have a home page with tons of links, and enumerating all of those is a lot of work.

    The actual case is, I have four image links that by default get indexes 1, 2, 3, and 4 (well, the behavior is equivalent, at least), but I'd much rather have the first non-image link as number 1. Check it out here and you'll understand immediately. I tried to give the first non-image link (the link I desire to have tabindex 1) an explicit tabindex of 1, hoping that the rest would cascade from there, but it didn't (i.e., the first image link got an implicit tabindex of 2). I also tried to give the image links ridiculously high tabindexes, but that didn't work: as the other links didn't have tabindexes at all, those high values still came "first". As a last resort (the solution currently employed) I gave the image links all tabindex -1. That makes for logical tabbing, but it is suboptimal, as those image links are excluded from the tab loop; a user tabbing away will probably never realize that the images are clickable. I'd like them to be reachable by tabbing, but last, after all the ordinary links.

    If you wonder why I'm so determined to achieve this, it has to do with my own finger habits: I almost exclusively search for links, tab back, tab forth, etc., and very seldom use the mouse. Note: I'll accept a script to change the actual HTML for a complete enumeration, if you convince me there is no "set" way to solve this problem.

    Read the article

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for MPM worker:

        KeepAlive ON
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very much appreciated!

    Read the article

  • graphics performance better on battery?

    - by Scott Beeson
    Anyone have any idea why my laptop would perform (considerably) better while on battery than while plugged in? It's a Dell Latitude E6420 with Windows 8 Pro. I tried mirroring all the settings in the selected power plan from "On battery" to "Plugged in" and that didn't help. I then just restored the defaults for all power plans (balanced and high performance). I'm still seeing the same results. The best example where it is most noticeable (don't laugh) is Sim City Social in Chrome. I'm probably seeing a performance increase of 5x on battery versus plugged in. This is easily reproducible too. I'm very confused.

    Could it be caused by dust? The laptop isn't that old and there is no visible dust. I'm not going to take it apart to check the insides as it's a corporate laptop. Could it be overheating? Temperatures:

        On battery: Sim City Social 68 degrees max; Civ V 77 degrees max
        On charger: Sim City Social 68 degrees; Civ V not tested

    See the answer below... my mistake.

    Read the article

  • FreeBSD slow transfers - RFC 1323 scaling issue?

    - by Trey
    I think I may be having an issue with window scaling (RFC 1323) and am hoping that someone can enlighten me on what's going on. Server: FreeBSD 9, apache22, serving a static 100MB zip file, 192.168.18.30. Client: Mac OS X 10.6, Firefox, 192.168.17.47. Network: only a switch between them; the subnet is 192.168.16/22. (In this test, I also have dummynet filtering simulating an 80ms ping time on all IP traffic. I've seen nearly identical traces with a "real" setup, with real internet traffic/latency also.)

    Questions: Does this look normal? Is packet #2 specifying a window size of 65535 and a scale of 512? Is packet #5 then shrinking the window size so it can use the 512 scale and still keep the overall calculated window size near 64K? Why is the window scale so high?

    Here are the first packets from Wireshark. For packets 5 and 6 I've included the details showing the window size and scaling factor being used for the data transfer.

        No.  Time      Source         Destination    Protocol Length Info
        108  6.699922  192.168.17.47  192.168.18.30  TCP  78  49190 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=8 TSval=945617489 TSecr=0 SACK_PERM=1
        115  6.781971  192.168.18.30  192.168.17.47  TCP  74  http > 49190 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=2617517338 TSecr=945617489
        116  6.782218  192.168.17.47  192.168.18.30  TCP  66  49190 > http [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSval=945617490 TSecr=2617517338
        117  6.782220  192.168.17.47  192.168.18.30  HTTP 490 GET /utils/speedtest/large.file.zip HTTP/1.1
        118  6.867070  192.168.18.30  192.168.17.47  TCP  375 [TCP segment of a reassembled PDU]

    Details:

        Transmission Control Protocol, Src Port: http (80), Dst Port: 49190 (49190), Seq: 1, Ack: 425, Len: 309
            Source port: http (80)
            Destination port: 49190 (49190)
            [Stream index: 4]
            Sequence number: 1 (relative sequence number)
            [Next sequence number: 310 (relative sequence number)]
            Acknowledgement number: 425 (relative ack number)
            Header length: 32 bytes
            Flags: 0x018 (PSH, ACK)
            Window size value: 130
            [Calculated window size: 66560]
            [Window size scaling factor: 512]
            Checksum: 0xd182 [validation disabled]
            Options: (12 bytes)
                No-Operation (NOP)
                No-Operation (NOP)
                Timestamps: TSval 2617517423, TSecr 945617490
            [SEQ/ACK analysis]
            TCP segment data (309 bytes)

    Note: originally posted at http://forums.freebsd.org/showthread.php?t=32552
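    For reference, the "calculated window size" figures in the trace are just the advertised window field multiplied by the scale factor negotiated in the SYN/SYN-ACK (WS=8 from the client, WS=512 from the server):

        # worked out from the capture above
        echo $(( 65535 * 8 ))    # 524280: the client's window in packet 116
        echo $(( 130 * 512 ))    # 66560:  the server's calculated window in packet 118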

    Read the article

  • What Wireless Router/ADSL Modem to get? N-band a must!!

    - by JJarava
    I'm looking for a Dual-N band router or ADSL gateway and I'd like some recommendations.

    Situation: I have an 802.11b/g ADSL gateway provided by my telco, but the Wi-Fi signal won't cover all the house (especially the living room, so my TV-connected Mac Mini has poor to no internet access). So I'm looking to either replace the DSL modem with an N-enabled one, or to add a router to the mix. I've had a modem+router setup for many years, and I know the advantages (double NAT, double FW = more security) and issues (more complex to troubleshoot, two possible points of failure), so I'd rather live with a single device (an ADSL gateway), if possible.

    Requirements:
    - Dual-N band (300 Mbps Wi-Fi)
    - Gigabit Ethernet ports
    - ADSL2+ support (if it's an ADSL gateway, which would be desirable)
    - "Best" range and speed possible

    Nice to have:
    - USB port to share disks/printers on the network
    - Media streaming

    I've been a long-time user of Linksys, so googling around I found the WRT610N (http://www.linksysbycisco.com/US/en/products/WRT610N) from a "pure router" perspective, and it's one of those that Linksys styles "N++" (http://www.linksysbycisco.com/US/en/promo/Promotion-Go-Wireless?stepname=Promotion-Step-Go-Wireless-High-Performance). But I haven't been able to find similar ADSL gateways. I've found the WAG320N, but there is little to no info on the Linksys site (i.e., I don't know if it's dual band, or if it has Gigabit Ethernet). Any opinions/recommendations of other products/suggestions are more than welcome.

    Read the article

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5, an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU, and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver, as there is no proprietary ATI driver for this board on Linux. The xorg.conf:

        Section "Device"
            Identifier "Videocard0"
            Driver "ati"
        EndSection
        Section "Screen"
            Identifier "Screen0"
            Device "Videocard0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
                Modes "1024x768" "800x600" "640x480"
            EndSubSection
        EndSection
        Section "DRI"
            Group 0
            Mode 0666
        EndSection

    A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not. This is the Xorg.0.log for the Centos5 machine:

        drmOpenDevice: node name is /dev/dri/card0
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: Open failed
        drmOpenDevice: node name is /dev/dri/card0
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: Open failed
        [drm] failed to load kernel module "mach64"
        (II) ATI(0): [drm] drmOpen failed
        (EE) ATI(0): [dri] DRIScreenInit Failed
        (II) ATI(0): Largest offscreen areas (with overlaps):
        (II) ATI(0): 1024 x 1279 rectangle at 0,768
        (II) ATI(0): 768 x 1280 rectangle at 0,768
        (II) ATI(0): Using XFree86 Acceleration Architecture (XAA)
            Screen to screen bit blits
            Solid filled rectangles
            8x8 mono pattern filled rectangles
            Indirect CPU to Screen color expansion
            Solid Lines
            Offscreen Pixmaps
            Setting up tile and stipple cache:
            32 128x128 slots
            10 256x256 slots
        (==) ATI(0): Backing store disabled
        (==) ATI(0): Silken mouse enabled
        (II) ATI(0): Direct rendering disabled
        (==) RandR enabled

    I also tried using EXA instead of XAA and setting:

        Option "AccelMethod" "XAA"
        Option "XAANoOffscreenPixmaps" "true"

    uname -a:

        Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux

    rpm -qa | grep xorg-x11-server:

        xorg-x11-server-utils-7.1-4.fc6
        xorg-x11-server-sdk-1.1.1-48.52.el5
        xorg-x11-server-Xvfb-1.1.1-48.52.el5
        xorg-x11-server-Xnest-1.1.1-48.52.el5
        xorg-x11-server-Xorg-1.1.1-48.52.el5

    The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".
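    Since the log shows the DRM layer failing to load the "mach64" kernel module, one thing worth checking first (a sketch; whether the Centos5 kernel ships a mach64 DRM module at all is exactly what this verifies) is whether the module exists for the running kernel, and comparing against the Centos3 machine where DRI still works:

        # Does the running kernel know about a mach64 DRM module?
        modinfo mach64 2>/dev/null || echo "modinfo: no mach64 module found"
        find /lib/modules/$(uname -r)/ -name 'mach64*'

        # On the working Centos3 box, see what is actually loaded for comparison
        lsmod | grep -i mach64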

    Read the article

  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that'd make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information is not simply there. That said, I'm wondering if heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types. For photorealistic pictures, for cartoons with large flat areas, for pixel art... One algorithm I'm aware of is Seam Carving. It works for some kinds of pics, especially ones with a plain, undetailed or uninteresting background, and a subject that strongly stands out. But it's far from being general-purpose. Applying it to the above pic produces this. It looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic. Another is Pixel art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the scale2x windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling because the algorithm didn't detect any isolated pixely fragments to work from. Something else I tried was: I enlarged the image in Photoshop with bicubic interpolation, then I applied unsharp mask. The result looks pretty bad. The red blotch is actually resized reasonably well, but the dove is far from it. What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters... the result doesn't need to be completely faithful to the original small image.

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    Ok, so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows: I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard edition doesn't support partitioning. So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article

  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications ranging from Joomla, Drupal, and some legacy (read: PHP 4) and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers, and issues like fluctuating loads or particularly high load expectations are not important.

    Now, my question: are there any concerns I should know about when using Apache with MPM Worker, PHP 4/PHP 5.2/PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64MB)? I have not tested the various applications extensively, and I have only recently learned how to install PHP and use it via FastCGI (rather than mod_php, which in this case seemed impossible, considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern?

    I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox), which has 2GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):

    Apache:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Apr 13 2010 20:22:19
        Server's Module Magic Number: 20051115:23
        Server loaded:  APR 1.3.8, APR-Util 1.3.9
        Compiled using: APR 1.3.8, APR-Util 1.3.9
        Architecture:   64-bit
        Server MPM:     Worker
          threaded:     yes (fixed thread count)
            forked:     yes (variable process count)

    Worker:

        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          400
            MaxRequestsPerChild 2000
        </IfModule>

    PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):

        --enable-bcmath \
        --enable-calendar \
        --enable-exif \
        --enable-ftp \
        --enable-mbstring \
        --enable-pcntl \
        --enable-soap \
        --enable-sockets \
        --enable-sqlite-utf8 \
        --enable-wddx \
        --enable-zip \
        --enable-fastcgi \
        --with-zlib \
        --with-gettext \

    Apache php-fastcgi-setup.conf:

        FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
        FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
        FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9
        ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/

    Read the article

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration, I am currently handling larger backup/resize tasks with partitions of around 900 GB, which are 70-90% full.

    Some background: the first thing I noticed was that the Acronis-WesternDigital TrueImage was extremely slow while running under Windows 7, even on high priority. To create a normal backup of 650 GB of data (900 GB partition), it would have taken 3 days! The same task done with the boot-CD version of this Acronis release took about 2 hours (SATA3 copy from one disk to another, both around 110 MB/s). Now, after I have done all my backups, I wanted to remove some obsolete partitions and resize the leftovers to the full HDD size. Of course, usually this takes quite some time; in this case, extending this 900 GB partition to 931 GB (30+ GB from the front, 1+ GB from the end) will take around 6 hours (using gparted)! Had I known that earlier, I would have just restored the image. But no: first it showed a reasonable time of 1:45h and 0 of 1 operations, but after finishing the 1:45h it started again, only this time with 4h to go, still 0 of 1 operations, but now it was copying instead of moving.

    Question: why does it have to be this slow to resize a partition? I am asking for a good explanation. This has bugged me since I started partitioning: why does it have to copy all the data around, can't it just stay in place?!

    Read the article

  • My desktop has started overheating -- how hot is hot?

    - by Jerry
    I have a two year old desktop, some random quad core HP desktop. It used to run very quietly, but in the past month, the fans start up anytime anything "serious" is being done -- compiles, playing video, etc. Right now, speedfan and speccy report the cores are between 50C and 70C. Speedfan reports this as hot. (Nice flame icon.) Well, the system does sit on my carpet, so two weeks ago, I took off the lid, and cough *cough* it was pretty filled with dust. I got out an air can, turned on a vacuum and carefully got out all the dust that I saw on the CPU fan the case fans any fan I saw (graphics board) and blew out all the dust I could from all the circuit boards. And then I closed the case back up. It has definitely run cooler since then, but it still runs hot, and I hear high speed fan noise I never heard before. How hot is too hot? At what temps do consumer grade CPUs die? What should I be looking to do? Replace CPU fan? (It seems to work) Replace power supply fan? Assuming the dust problem is gone, where should I be looking to determine why the machine is heating up? Epilogue: After following the various pieces of advice given here, the system did run cooler, but it was still noticeably running louder (hotter) than just a few months prior. I ended up purchasing a new cpu heatsink and fan and during installation found the cooling grease from the original heatsink was just a dried, cracked layer, probably more of an insulator than heat transfer agent. With the new fan AND the new heatsink compound, the system ran much much cooler and the fan rarely turns on.

    Read the article

  • The bottlenecks of any computer, what to look for?

    - by WebDevHobo
    Whether it is a laptop or a desktop, any computer is made up of several pieces of hardware that communicate with each other, sending data back and forth to ensure that the user gets the desired results. I have seen some theoretical material on computers & hardware, but I wonder how it all comes together:

    - CPU
    - RAM
    - Graphics card
    - L1 cache
    - L2 cache
    - L3 cache
    - FSB
    - ... and all other things

    Which is the biggest bottleneck? Why would a person not want/need a big value in one of those categories in certain situations?

    P.S.: when reading the specs of the i5 750 processor, I came across this description: "In place of the FSB, one or more high speed, point-to-point buses called Quick Path Interconnect (QPI) are used, formerly known as Common Serial Interconnect Bus or CSI. QPI features higher bandwidth than the traditional FSB and is better suited to system scaling." What is this, and how does it compare to the FSB?

    EDIT: I am not planning to buy a computer at all. The goal of this question is to understand the internal relation of the various hardware pieces, their specific functions, and how they work together. For instance, I have heard that a somewhat higher-than-usual amount of L2/L3 cache can help speed up your computer. What's up with saying that? Also, I forgot to mention hard-disk RPM.

    Read the article

  • How does the "Steam" platform work? Is it DRM? Can I trust "Steam"-powered software? [closed]

    - by Chris W. Rea
    So – I just bought the new game Supreme Commander 2. This question is not about the game, but about the online software installation platform that it seems to require. I haven't bought a game in a long time, and I'm puzzled: Apparently, SC2 is a "Steam"-powered game. When I went to install the game, it asked me to either create a new Steam account, or log in with an existing account. I clicked "Cancel" because I don't plan to play online and I don't want anything unnecessary installed on my computer, since I only plan to play single player! However, after clicking "Cancel", the installer asked for my confirmation that I indeed wanted to cancel installation of the game! I thought I was just canceling the "online" portions! So I really want to know: How do "Steam" powered games work? Is this essentially a form of DRM (Digital Rights Management)? Can I trust this software platform? Has anybody done any independent verification on how this platform works? (I'm very leery of any DRM after the Sony BMG CD copy protection scandal. Thank goodness for Mark Russinovich.) Does the "Steam" platform install anything particularly nasty or unwanted on my computer? High-rep users: Please vote to reopen this question. It is not about the game, but about the software update platform / updater / DRM. Imagine if the software in question were a productivity application. The issues remain the same.

    Read the article

  • LDAP authentication issue with Kerio Connect

    - by djk
    Hi, we have Kerio Connect (mail server) running on a Windows Server 2003 server on a domain. In the webmail client, users are able to change their domain password. This functionality used to work fine until a user tried to change their password a few days ago, when every password they'd try would result in the webmail client claiming their password was "invalid". I spoke to Kerio about this and they claim that this error is returned by the domain controller, which supports my initial investigations. The error that the DC is logging when an attempt is made to change the password is this:

        "80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece"

    The "data 52e" part indicates that this is an "invalid credentials" error. I don't see how this can be, as I've tried (in the Kerio Connect configuration) various accounts that have privileges to modify accounts, including my own, as I am a domain admin. I have run 'dcdiag' (all tests) on the DC and it came back passing every single one of them. I've searched high and low for an answer to this and came up empty. Does anyone have any idea why this may have suddenly started happening? Thanks!

    Edit: I should mention that the passwords we are changing to do comply with the complexity policy.
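    One way to narrow this down (a sketch; the host name and bind DN below are placeholders, not values from this setup) is to test the account Kerio is configured to use for directory operations directly against the DC, since "data 52e" means the bind credentials themselves were rejected:

        # A simple bind as the service account; run from any box with the
        # OpenLDAP client tools. A "data 52e" failure here points at that
        # account (wrong password, expired, or locked out), not at the user
        # whose password is being changed.
        ldapsearch -x -H ldap://dc01.example.local \
          -D "CN=Kerio Service,CN=Users,DC=example,DC=local" -W \
          -b "DC=example,DC=local" "(sAMAccountName=someuser)" dn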

    Read the article

  • Apache Process question about RAM usage

    - by Andrew Fashion
    Every time I load a new page, I notice a new httpd process opens, and each process says it's using anywhere from 2-4.5% of memory. Does that mean every single process is running at that time using 2-4% of RAM? It's a brand new server and I'm the only one on it at the moment. Or does it mean all the other processes are dying, and only the new one is active? Because 4% of my 2048MB of RAM is already 82MB for just one process!

    Let me know, because I am trying to determine what I need to beef my server up with in order to handle high loads of traffic. I expect to get 20,000 uniques per day on launch. I am currently running a dual quad-core Xeon server with only 2GB of RAM; I will upgrade to 8GB or more shortly. Let me know what you suggest! Thank you.

        [root@D18634 log]# top | grep 'httpd'
        11315 apache 15 0 362m 82m 24m R 12.3 4.1 0:03.00 httpd
        11310 apache 16 0 322m 41m 21m S  5.7 2.1 0:02.98 httpd
        11315 apache 15 0 362m 83m 25m S 24.3 4.1 0:03.73 httpd
        11319 apache 16 0 324m 42m 20m R  1.0 2.1 0:01.85 httpd
        11319 apache 16 0 362m 82m 23m R 78.5 4.1 0:04.21 httpd
        11321 apache 16 0 323m 44m 23m S 35.3 2.2 0:04.13 httpd
        11319 apache 15 0 361m 82m 23m S  8.3 4.1 0:04.46 httpd
        11321 apache 15 0 323m 44m 23m S 35.9 2.2 0:05.21 httpd
        11313 apache 15 0 324m 41m 19m S 48.6 2.1 0:03.23 httpd
        11322 apache 16 0 354m 72m 20m R 11.0 3.6 0:05.11 httpd
        11322 apache 16 0 354m 72m 20m S 23.9 3.6 0:05.83 httpd
        11314 apache 16 0 355m 75m 22m R 18.3 3.7 0:04.64 httpd
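    For reference, one quick way to total the resident memory of the whole httpd pool rather than eyeballing per-process %MEM (a sketch, assuming a Linux box with the usual procps ps; note that RSS counts memory shared between workers more than once, so the sum overstates real usage):

        ps -C httpd -o rss= | awk '{ total += $1 } END { printf "%d workers, %.0f MB resident\n", NR, total/1024 }'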

    Read the article

  • Apache configuration with virtual hosts and SSL on a local network

    - by Petah
    I'm trying to set up my local Apache configuration like so:

        http://localhost/                        should serve ~/
        http://development.somedomain.co.nz/     should serve ~/sites/development.somedomain.co.nz/
        https://development.assldomain.co.nz/    should serve ~/sites/development.assldomain.co.nz/

    I only want to allow connections from our local network (192.168.1.* range) and myself (127.0.0.1). I have set up my hosts file with:

        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1       development.somedomain.co.nz
        127.0.0.1       development.assldomain.co.nz
        127.0.0.1       development.anunuseddomain.co.nz

    My Apache configuration looks like:

        Listen 80
        NameVirtualHost *:80

        <VirtualHost development.somedomain.co.nz:80>
            ServerName development.somedomain.co.nz
            DocumentRoot "~/sites/development.somedomain.co.nz"
            DirectoryIndex index.php
            <Directory ~/sites/development.somedomain.co.nz>
                Options Indexes FollowSymLinks ExecCGI Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost localhost:80>
            DocumentRoot "~/"
            ServerName localhost
            <Directory "~/">
                Options Indexes FollowSymLinks ExecCGI Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <IfModule mod_ssl.c>
            Listen *:443
            NameVirtualHost *:443
            AcceptMutex flock
            <VirtualHost development.assldomain.co.nz:443>
                ServerName development.assldomain.co.nz
                DocumentRoot "~/sites/development.assldomain.co.nz"
                DirectoryIndex index.php
                SSLEngine on
                SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
                SSLCertificateFile /Applications/XAMPP/etc/ssl.crt/server.crt
                SSLCertificateKeyFile /Applications/XAMPP/etc/ssl.key/server.key
                BrowserMatch ".*MSIE.*" \
                    nokeepalive ssl-unclean-shutdown \
                    downgrade-1.0 force-response-1.0
                <Directory ~/sites/development.assldomain.co.nz>
                    SSLRequireSSL
                    Options Indexes FollowSymLinks ExecCGI Includes
                    AllowOverride All
                    Order allow,deny
                    Allow from all
                </Directory>
            </VirtualHost>
        </IfModule>

    http://development.somedomain.co.nz/, http://localhost/ and https://development.assldomain.co.nz/ work fine. The problem is that when I request http://development.anunuseddomain.co.nz/ or http://development.assldomain.co.nz/ it responds with the same content as http://development.somedomain.co.nz/. I want it to deny all requests that do not match a virtual host ServerName, and all requests to an https host that are made over plain http.

    PS: I'm running XAMPP on Mac OS X 10.5.8
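    One direction that may help (a sketch, not tested against this exact XAMPP setup; "default.invalid" is just a placeholder name): with name-based virtual hosting, Apache hands any unrecognised Host header to the first matching <VirtualHost> for that address, so a catch-all default vhost placed before the real ones can refuse unknown names. The same idea applies on the *:443 side.

        # Hypothetical catch-all vhost: declared first, so it becomes the default
        # for any hostname that no other <VirtualHost> claims, and denies it.
        <VirtualHost *:80>
            ServerName default.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>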

    Read the article

  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with heartbeat (floating VIP) using an active passive approach. The heartbeat is over both a serial connection as well as an ethernet connection. Ultimately, my goal is to maintain this same level of availability but between data centers. I want to dynamically failover between both data centers without manual intervention and still maintain data integrity. There would be BGP on top. Web clusters in both locations, which would have the potential to route to the databases between both sides. If the Internet connection went down on site 1, clients would route through site 2, to the Web cluster, and then to the database in site 1 if the link between both sites is still up. With this scenario, due to the lack of physical link (serial) there is a more likely chance of split brain. If the WAN went down between both sites, the VIP would end up on both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is difficulty scaling this infrastructure to a third data center in the future. The network layer is not a focus. The architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases. I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this. I look forward to reading your recommendations.

    Read the article

  • Video output vanishes and it isn't the video card... suggestions?

    - by Ira Baxter
    I have a high-end dual-processor Dell workstation that operated for several years without problems. Recently, the video output has gone flaky, in the sense that after 2-3 days it loses sync with the monitor (sometimes you can still see a hashed mess of what you expect to see on the screen). Poking at the keyboard, and accessing the file system on the machine over the network, indicates that the machine is running fine and it's just the video. A reboot fixes the problem... for another day or two. This in effect makes the machine unusable. So, I replaced the video card with an exact duplicate bought from eBay. (I checked after the dupe arrived to be sure: yep, same model number.) I still get the same behavior. So unless I believe that both video cards are broken in the same way, I have to believe this is a problem with the motherboard/power supply, neither of which I am inclined to replace. The only other possibility I can think of is a Windows update to the graphics driver. How would I check for this? Has anybody had a similar problem? Otherwise we're junking what used to be a perfectly good machine. (Another couple of hours of wasted effort and that's it in terms of economics anyway.)

    Read the article

  • Overheating Toshiba Satellite L300

    - by ldigas
    A colleague of mine is having trouble with his new Toshiba Satellite L300... I took a look at it, and indeed it's hot as hell. I couldn't hold my hand on it for too long. He says it also has a tendency to turn itself off (running 32-bit WinXP) with no forewarning. That hasn't happened to me while I was using it, but that wasn't for long anyway. The first guess was that it was too dirty... the problem is it's new; it came out of its packaging a quarter of a year ago. It's kept in a clean environment (an office), looks clean, and there's no dust in sight. The second guess is that the fan isn't working properly, because it does have intervals of working and not working. But when I listen to it, it sounds like normal usage. I took a SpeedFan measurement, and it reports temperatures of up to 85 Celsius... which is definitely too high. Does anyone know what else I could do to it? It is under warranty and it will go in for service, but I thought I'd ask in case there is something we can do, so as to avoid carrying it there / being without it for a week...

    Read the article

  • What is the fastest and best way to convert an rmvb video to mp4/mkv without losing any quality?

    - by Eric Leung
    The file will be played in a Popbox 3D. My old method was to convert the video using VidCoder (an offshoot of HandBrake) with normal settings, but I've recently confirmed that this significantly reduces video and audio quality. I bumped up the conversion quality to 'high profile' and this produced a higher-quality video, but it raised the conversion time to about twice the video length (95 minutes to convert a 45-minute video) on a Core 2 Duo laptop. This is less than ideal when a large number of videos need to be converted.

    I have tried a direct remuxing using MKVToolNix, but this produced a file that refused to display video on the Popbox 3D, which is consistent with the reported:

        [quote=other old thread] it is possible to put RealMedia A/V in MKV container (used MKVtoolnix) - however, it is awkward to play later. RV40 is only suspected to be based on H.264 - simplify, is not consistent with MPEG-4 AVC specification. [/quote]

    I have read that...

        [quote=from old thread] Under normal circumstances, [ffmpeg] should convert the video to .video.mp4 and the audio to (.wav then to) .audio.mp4, then mux the video and audio into a new .mp4 file and delete the temporary video-only and audio-only files. [/quote]

    ...and I am currently attempting to discover how this is done. Help?

    PS: I download a lot of series from Asia and for some strange reason, rmvb is a really popular format over there. Sometimes it's the only format that's available. Unfortunately, it's a format that is incompatible with the Popbox 3D, so I have to convert the files before I can watch them on my TV.
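    For what it's worth, a single ffmpeg command can do the whole conversion in one step, without the temporary video-only and audio-only files described in that quote (a sketch, assuming an ffmpeg build with the RealVideo/RealAudio decoders, libx264 and an AAC encoder; the quality numbers are placeholders to tune). RV40 cannot simply be stream-copied into MP4, so the video is re-encoded to H.264 and the audio to AAC:

        # -crf 18 is close to visually lossless for x264; lower = better/larger.
        ffmpeg -i input.rmvb \
               -c:v libx264 -crf 18 -preset slow \
               -c:a aac -b:a 192k \
               output.mp4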

    Read the article

  • PhpMyAdmin Missing parameter:

    - by Ali
    Everything was working fine until this moment. I want to create a new database and I'm receiving this error:

        db_create.php: Missing parameter: new_db (FAQ 2.8)

    Also, when I try to export my database I receive the following errors:

        export.php: Missing parameter: what (FAQ 2.8)
        export.php: Missing parameter: export_type (FAQ 2.8)

    When I looked up FAQ 2.8 from the link suggested in phpMyAdmin ("2.8 I get 'Missing parameters' errors, what can I do?"), these are the points it lists to check:

    - In config.inc.php, try to leave the $cfg['PmaAbsoluteUri'] directive empty. See also FAQ 4.7.
    - Maybe you have a broken PHP installation or you need to upgrade your Zend Optimizer. See http://bugs.php.net/bug.php?id=31134.
    - If you are using Hardened PHP with the ini directive varfilter.max_request_variables set to the default (200) or another low value, you could get this error if your table has a high number of columns. Adjust this setting accordingly. (Thanks to Klaus Dorninger for the hint.)
    - In the php.ini directive arg_separator.input, a value of ";" will cause this error. Replace it with "&;".
    - If you are using Hardened-PHP, you might want to increase request limits.
    - The directory specified in the php.ini directive session.save_path does not exist or is read-only.

    I did check php.ini to make sure that I have session.save_path = "/tmp". I'm using a Mac Xserve and running MAMP. I have tried restarting my server and nothing helped. Please, if anyone can help, give me some suggestions. Apologies if I posted this in the wrong place.
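    A quick way to confirm the two php.ini settings that FAQ entry points at (a sketch; with MAMP, the command-line php may read a different php.ini than the Apache module, so point it at MAMP's binary and ini if they differ):

        # Which php.ini is loaded, and the two relevant directives
        php -i | grep -E 'Loaded Configuration File|arg_separator.input|session.save_path'

        # session.save_path must exist and be writable by the web server user
        ls -ld /tmp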

    Read the article

  • Help with memory usage issues on VPS

    - by Niall Collins
    Hi there, I am running a VPS with 6 .NET web sites/applications on it. I am having performance issues with the server, mainly it running out of memory. I contacted the company that leases the server to me and they told me it was because I also had SQL Server 2008 Express running on the server, so I went ahead and uninstalled it. However, I still seem to be having issues. For example, at present, looking at resource consumption, the virtual memory is:

        ID: vprvmem
        Current Use: 894,328,832 bytes
        Limit: 1,073,741,824 bytes

    This means usage of ~80%. Is there any way I can check exactly which applications, web sites, or software are taking up most of the server's memory, so I can look at rectifying it? I feel that 80% is much too high to allow contingency for a spike in traffic. I had extra memory resources added to the box recently, but I would prefer finding the source of the problem rather than throwing extra memory at it. Maybe these levels are correct and all is running OK, but I would like to investigate to make sure. My knowledge of hardware is limited, as I mostly deal in software, so I'd appreciate any tools that can help me or any pertinent advice.

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of big files (up to 60 TB total, usually 30 to 40 GB each) into tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs lack the necessary "plugin architecture". Does anyone know of an existing solution to this before I start code-churning?
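    For the record, one direction that looks workable (a sketch, assuming GNU tar, whose --to-command option pipes each extracted member's data to the command's stdin and exports the member name as $TAR_FILENAME; the paths below are placeholders): duplicate the tar stream with tee, send one copy to tape, and let a second tar re-parse the other copy and checksum every member, so each file is still read from disk only once.

        #!/bin/sh
        # md5-per-member.sh : run once per archive member by `tar --to-command`;
        # the member's data arrives on stdin, its name in $TAR_FILENAME.
        # (chmod +x this file before use)
        sum=$(md5sum | cut -d' ' -f1)
        echo "$sum  $TAR_FILENAME" >> /var/tmp/archive-checksums.md5

        # Read the files once, stream the archive to tape, checksum in flight:
        tar -cf - /data/to/archive \
          | tee /dev/st0 \
          | tar -xf - --to-command=./md5-per-member.sh

    The second tar only parses the in-memory stream, so the disks are not hit a second time; the cost is one extra pass of CPU and memory bandwidth over the data.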

    Read the article

  • Apache + PHP via FastCGI

    - by Wilco
    I'm running into some problems while attempting to run PHP via FastCGI in Apache. I have the FastCGI module loaded, but get the following error when attempting to load a page: The requested URL /fastcgi/php54.fcgi/index.php was not found on this server. Somewhere, it seems that the script to be executed is appended to the executable without any spaces. Is this where the problem likely is? Below I've included snippets from my Apache configuration (hopefully this is enough): LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so FastCgiIpcDir /var/run/fastcgi AddHandler fastcgi-script .fcgi FastCgiConfig -autoUpdate -singleThreshold 100 -killInterval 300 AddType application/x-httpd-php .php ScriptAlias /fastcgi/ /Library/WebServer/FCGI-Executables/ <Directory "/Library/WebServer/FCGI-Executables"> Options +ExecCGI SetHandler fastcgi-script Order allow,deny Allow from all <VirtualHost *:80> ServerName www.somedomain.com ServerAdmin [email protected] DocumentRoot "/Web/www.somedomain.com" DirectoryIndex index.html index.php default.html CustomLog /var/log/apache2/access_log combinedvhost ErrorLog /var/log/apache2/error_log Action application/x-httpd-php /fastcgi/php54.fcgi <IfModule mod_ssl.c> SSLEngine Off SSLCipherSuite "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM" SSLProtocol -ALL +SSLv3 +TLSv1 SSLProxyEngine On SSLProxyProtocol -ALL +SSLv3 +TLSv1 </IfModule> <Directory "/Web/www.somedomain.com"> Options All -Indexes +ExecCGI +Includes +MultiViews AllowOverride All <IfModule mod_dav.c> DAV Off </IfModule> <IfDefine !WEBSERVICE_ON> Deny from all ErrorDocument 403 /customerror/websitesoff403.html </IfDefine> </Directory> </VirtualHost> ... and this is the executable: #!/bin/sh PHP_FCGI_CHILDREN=1 PHP_FCGI_MAX_REQUESTS=5000 export PHP_FCGI_CHILDREN export PHP_FCGI_MAX_REQUESTS exec /opt/local/bin/php-cgi54

    Read the article

  • DoS/Flood Lag even though Port not Saturated

    - by Asad Moeen
    My GameServers had been under some UDP Floods due to which they generated outputs to the attacker which gave the GameServers some huge lags. Thanks to friends at ServerFault that upon different kind of testing, I was able to successfully block the attack. My question is actually something else but it is important to know how the GameServers reacted to the attack and if the machine kept stable or not: 300kb/s Input would cause GameServer to generate 2mb/s Output. So as the Input Rate kept increasing, output rate would reach so high that it would no longer be possible for the GameServer to control it and hence it would give a huge Lag until the attack is stopped. Usually the game server starts to lag when it sends out something greater than 5mb/s and under that is controllable. Theoretically, I was able to receive a 60mb/s output from my GameServer on inputting 10mb/s. Its just the way the GameServer works if not protected. Now on some of my machines, only the GameServer under attack lagged and although the server was generating 60mb/s output, rest of the gameservers on other ports would run fine without lags on the same machine. But there was another machine which also runs on a 100 MBPS Network port, even 1 mbps input ( and ZERO output because attack is blocked ) even on an unused port would give a constant yellow line ( on the Lag-o-Meter ) to all the clients on all GameServers indicating lag because that line is actually blue under normal conditions. It would remain the same even on 50mbps or 900mbps input. I tried contacting the host about it because I believe its the way their Network is bridged, but they can't help me about it. Anyone else knowing about such issues because if 900mbps input does not Saturate the port, how can 1mbps input lag the servers although port is not saturated and enough bandwidth is available?

    Read the article
