Search Results

Search found 6431 results on 258 pages for 'cache invalidation'.


  • Need software to save videos from 4tube.com - to watch the videos smoothly

    - by Harold34
    Isn't there a program that will capture screen writes at the hardware level? I have tried several Firefox add-ons and several stand-alone programs, and none of them will save videos from this site. I even paid for Replay Media Catcher, and it didn't work, so I got a refund. (The website for the best Firefox video downloader I have, DownloadHelper, said Replay Media Catcher worked with that site.) I have a slow internet connection and cannot watch videos smoothly unless I can cache them. This site (4tube.com) doesn't cache: when you restart, it reloads, and when you pause, it stops - so I need to be able to save the videos in order to watch them.

    Read the article

  • XFS and loss of data when power goes down

    - by culebrón
    Each time the electricity goes down, my desktop (without a UPS) loses some data: Opera can lose settings, history, cache, or mail accounts, partially or all together (thank heavens I was wise to use IMAP); a whole file (complete and saved) in Geany appeared empty (and I hadn't committed it to Git); Rhythmbox lost all its podcast subscription data. I'm afraid there are other losses I just didn't see. What's the reason? An in-memory file cache, a RAM disk? Or non-atomic file writes in XFS? I have Ubuntu 9.10 and XFS on both the / and /home partitions. Is ext4 safer in such circumstances? I've seen that ext3 is faster - is it as safe as ext4? Given that the apartment I rent shares a common bus and a single safety switch with several other apartments, and the neighbors - alone or together - overload it at least once a week, the lights go down often enough for this to be an issue.
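
    The losses described above are consistent with recently written data still sitting in the kernel's page cache (and, on XFS, with files whose metadata was committed before their data) when the power cut hits, rather than with XFS being uniquely broken. A minimal shell sketch for inspecting and forcing writeback - the mount points and kernel defaults shown are assumptions about a stock Ubuntu install:

        # Check whether / and /home are mounted with write barriers disabled
        # (nobarrier / barrier=0 makes power-loss corruption far more likely).
        mount | grep -E ' / | /home '

        # Dirty pages may legally sit in RAM for vm.dirty_expire_centisecs
        # (default 3000, i.e. 30 seconds) before the kernel writes them back.
        sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs

        # Flush everything currently held only in the page cache to disk now;
        # applications that must not lose data (editors, mail clients) need to
        # call fsync() themselves at save time.
        sync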

    Read the article

  • Performance difference between MacBook Pro (2.8 GHz) vs Air (1.7 GHz)?

    - by jonathanconway
    I'm comparing these two Apple laptops: MacBook Pro (13", 2011 model): 2.8GHz dual-core Intel Core i7 processor with 4MB shared L3 cache 4GB (two 2GB SO-DIMMs) of 1333MHz DDR3 SDRAM AMD Radeon HD 6770M graphics processor with 1GB of GDDR5 memory on 2.4GHz configuration MacBook Air (13", 2011 model): 1.7GHz dual-core Intel Core i5 with 3MB shared L3 cache 4GB of 1333MHz DDR3 onboard memory Intel HD Graphics 3000 processor with 384MB of DDR3 SDRAM shared with main memory There's definitely a gap between them in terms of CPU speed and graphics, but what practical difference would this make on a day-to-day basis? On the one hand, I love the sleek, thin appearance of the Air. On the other hand, I don't want a machine that's going to be dog-slow when doing tasks such as running Virtual Machines, dual-booting to Windows and running multiple instances of Visual Studio, and maybe some light gaming. Is there going to be a major difference that makes the MacBook Pro a more attractive purchase?

    Read the article

  • Monitoring outgoing bandwidth of application

    - by jnolte
    I currently have a VPS that is consuming a ton of outgoing bandwidth and I am trying to drill down to where this may be coming from. Does anyone know of a logical way to find out which pages on the site are consuming the most outgoing data? We have done a ton of front-end optimizations to the site and our Google PageSpeed score is 85%, so I feel we have done a pretty good job of optimizing the site for speed. Can someone lend some insight on how they have made similar optimizations? Application / server stack: LEMP running Varnish Cache / PHP5-FPM, WordPress running W3 Total Cache, Ubuntu 12.04 LTS
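
    Since the stack already logs every request, one low-tech approach is to aggregate bytes sent per URL from the access logs. A rough sketch, assuming nginx's default combined log format and log path; if Varnish serves most hits itself, run the same pipeline over varnishncsa output instead, since those requests never reach nginx:

        # Sum response bytes per request path and list the top 20 offenders.
        # In the combined format, field 7 is the path and field 10 is
        # $body_bytes_sent.
        awk '{ bytes[$7] += $10 } END { for (u in bytes) printf "%12d %s\n", bytes[u], u }' \
            /var/log/nginx/access.log | sort -rn | head -20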

    Read the article

  • Setting lusca and dansguardian iptables on Ubuntu 12.04 to prevent loop

    - by Heri YT
    I have a server running Ubuntu 12.04 which acts as a Lusca proxy cache server and also runs DansGuardian as an internet content filter, arranged as follows: client browser - Lusca - DansGuardian - internet. All of this runs on a single machine. Here is a partial configuration of my Lusca server: http_port 3128 transparent cache_peer 192.168.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default and these are the relevant DansGuardian settings (the rest are defaults): filterip="blank" filterport=8080 proxyip=192.168.0.1 proxyport=3128 The questions are: Can this all work well by relying on just one machine? What causes the "WARNING: Forwarding loop detected for:" messages? Is it a problem if we just leave them? How do I solve the "WARNING: Forwarding loop detected for:" warnings found in /var/log/lusca/cache.log? Thank you.
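
    A hedged reading of the configs quoted above: the two daemons appear to point at each other on the same box - Lusca forwards to its cache_peer 192.168.0.1:8080 (DansGuardian), while DansGuardian forwards to proxyip 192.168.0.1:3128 (Lusca) - so a request bounces between them until the Via header triggers the forwarding-loop warning. Whichever daemon is meant to be last in the chain has to fetch from the internet directly instead of pointing back at the other. For reference, a typical transparent-interception rule into the front of the chain (interface, subnet and entry port are assumptions):

        # Steer LAN web traffic into the first proxy in the chain; the proxies'
        # own outbound traffic is locally generated and is not caught by
        # PREROUTING, so it goes straight out without being re-intercepted.
        iptables -t nat -A PREROUTING -i eth0 -s 192.168.0.0/24 -p tcp --dport 80 \
                 -j REDIRECT --to-port 3128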

    Read the article

  • Mac OSX Server 10.6 DNS Issues

    - by dallasclark
    Hi, the server was upgraded from 10.5 to 10.6; during the upgrade the Reverse Zones were lost, so I tried to recreate them but found that it's best to delete all zones and definitions and start again. So I've started again and the Reverse Zones are appearing, but I'm still having issues. I receive the following errors (if they help): 01-Nov-2010 12:52:01.254 client 192.168.1.52#57051: view com.apple.ServerAdmin.DNS.public: query (cache) 'server.dev.home.gateway/A/IN' denied 01-Nov-2010 12:59:24.487 client 192.168.1.52#52858: view com.apple.ServerAdmin.DNS.public: query (cache) 'earth.server.dev.home.gateway/A/IN' denied At the moment I have the following setup in the DNS: 1.168.192.in-addr.arpa. Reverse Zone 192.168.1.100 Reverse Mapping MacPro-Server.local. server.dev. Primary Zone server.dev. Machine 192.168.1.100 earth.server.dev. Alias server.dev.
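
    The "query (cache) ... denied" lines generally mean the client was refused a recursive/cached answer for a name the server is not authoritative for - and note that the client is actually asking for 'server.dev.home.gateway', i.e. its search domain 'home.gateway' is being appended to the short name. A quick sketch for checking which lookups this server really answers (addresses and names taken from the question):

        # The name the server should be authoritative for...
        dig @192.168.1.100 server.dev. A +short

        # ...versus the name the client is actually sending (search domain
        # appended), which falls outside the configured zones.
        dig @192.168.1.100 server.dev.home.gateway. A +short

        # Reverse lookup against the rebuilt 1.168.192.in-addr.arpa. zone.
        dig @192.168.1.100 -x 192.168.1.100 +short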

    Read the article

  • VMWare tools not installing with an error

    - by JDS
    VMWare tools not installing on Ubuntu 12.04. I'm using Chef to manage the installation, but the Apt commands fail if run manually. I'm using the VMWare tool Debian repo. Example: $ cat /etc/apt/sources.list.d/vmware-tools-source.list deb http://packages.vmware.com/tools/esx/5.0u2/ubuntu precise main When trying to install, most packages seem to go ok, but one, "vmware-tools-foundation", does not. Example: $ apt-get -q -y install vmware-tools-esx-nox=8.6.10-1.precise Reading package lists... Building dependency tree... Reading state information... You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: vmware-tools-esx-kmods-3.2.0-23-generic : Depends: vmware-tools-foundation (>= 8.6.10) but it is not going to be installed vmware-tools-esx-nox : Depends: ...snip list of deps... E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). $ apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: vmware-tools-foundation The following NEW packages will be installed: vmware-tools-foundation 0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded. 7 not fully installed or removed. Need to get 0 B/5,886 B of archives. After this operation, 86.0 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 103499 files and directories currently installed.) Unpacking vmware-tools-foundation (from .../vmware-tools-foundation_8.6.10-1.precise_all.deb) ... VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again. dpkg: error processing /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/vmware-tools-foundation_8.6.10-1.precise_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) The key seems to be this error: "VMware Tools cannot install because it appears that another installation of VMware Tools is already present. Please remove the previous installation and then attempt to install this copy of VMware Tools again." However, I've tryed removing and purging and can't seem to "trick" VMWare tools into thinking the packages are gone. Apt thinks they are gone. Is there some service/file/cache/lock left that VMWare tools sees that makes it think that VMWare tools are still installed? I've googled and googled but there is no answer to this question with my particular circumstances on the interwebs. VMWare's documentation of this error is minimal.
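
    The pre-installation script is refusing to run because it finds traces of a previous VMware Tools installation that dpkg/apt knows nothing about - typically one done with the upstream tarball installer. A cleanup sketch under that assumption (the paths are the tarball installer's usual locations, not something apt tracks):

        # If the old tarball-based install is still registered, let it remove itself.
        [ -x /usr/bin/vmware-uninstall-tools.pl ] && sudo /usr/bin/vmware-uninstall-tools.pl

        # Directories a tarball install leaves behind; clear them if the
        # uninstaller itself is already gone.
        sudo rm -rf /etc/vmware-tools /usr/lib/vmware-tools

        # Purge any half-installed packaged bits as well, then retry.
        dpkg -l | awk '/vmware-tools/ { print $2 }' | xargs -r sudo apt-get purge -y
        sudo apt-get -f install
        sudo apt-get install vmware-tools-esx-nox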

    Read the article

  • Debian server doesn't free memory after backup

    - by stan31337
    I have a production server running Debian 6.0.6 Squeeze: #uname -a Linux debsrv 2.6.32-5-xen-amd64 #1 SMP Sun Sep 23 13:49:30 UTC 2012 x86_64 GNU/Linux Every day cron executes a backup script as root: #crontab -e 0 5 * * * /root/sites_backup.sh > /dev/null 2>&1 #nano /root/sites_backup.sh #!/bin/bash str=`date +%Y-%m-%d-%H-%M-%S` tar pzcf /home/backups/sites/mysite-$str.tar.gz /var/sites/mysite/public_html/www mysqldump -u mysite -pmypass mysite | gzip -9 > /home/backups/sites/mysite-$str.sql.gz cd /home/backups/sites/ sha512sum mysite-$str* > /home/backups/sites/mysite-$str.tar.gz.DIGESTS cd ~ Everything works perfectly, but I noticed that Munin's memory graph shows an increase in cache and buffers after the backup. Then I just download the backup files and delete them, and after deletion Munin's memory graph shows cache and buffers returning to the state they were in before the backup. Here's the Munin graph: unfortunately I don't have enough rep to add an image here, so here's a link:
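
    What the graph shows is almost certainly the Linux page cache: tar and mysqldump push several gigabytes through the filesystem, the kernel keeps those pages cached in otherwise idle RAM, and deleting the backup files invalidates the cached pages, so the graph drops back. That memory is not lost - it is reclaimed automatically as soon as an application needs it. A quick sketch for confirming this (the drop_caches step is purely a demonstration and is not needed in normal operation):

        # On this procps version the "-/+ buffers/cache" line is the figure
        # that reflects what applications actually use.
        free -m

        # The backup files are roughly the size of the cache growth in Munin.
        ls -lh /home/backups/sites/

        # Only to demonstrate that the cache is reclaimable - doing this
        # routinely just makes the next reads slower.
        sync && echo 3 > /proc/sys/vm/drop_caches
        free -m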

    Read the article

  • Something eating space on OS drive

    - by noquery
    I have been facing a low disk space issue for the last few days. I checked the Restore, System Volume Information, and $Recycled folders, but there is nothing there occupying the space. I have scanned my system for viruses too. The total size of C: is 18 GB, but when I select all the folders inside C: and query the used space, it shows 20+ GB in use. I freed up about 3 GB by deleting temp files and programs' cache files, running disk cleanup, etc., and I made sure that no cache/temp files were recreated that could use the space again. Even after cleaning that much data, I am again facing the low disk space issue - something is eating disk space within 15-20 minutes.

    Read the article

  • Best Processor for MediaSmart Server?

    - by Kent Boogaart
    I'm trying to figure out what the best possible processor is that I can stick in my HP MediaSmart server. I'm clueless when it comes to correlating CPUs to motherboards. I suspect it's the socket type I care about, but I worry that there's more to it. CPU-Z gives me (excerpt): Processors Information ------------------------------------------------------------------------- Processor 1 ID = 0 Number of cores 1 (max 1) Number of threads 1 (max 1) Name AMD Sempron LE-1150 Codename Sparta Specification AMD Sempron(tm) Processor LE-1150 Package Socket AM2 (940) CPUID F.F.1 Extended CPUID F.7F Brand ID 1 Core Stepping DH-G1 Technology 65 nm Core Speed 1000.0 MHz Multiplier x FSB 5.0 x 200.0 MHz HT Link speed 800.0 MHz Stock frequency 2000 MHz Instructions sets MMX (+), 3DNow! (+), SSE, SSE2, SSE3, x86-64 L1 Data cache 64 KBytes, 2-way set associative, 64-byte line size L1 Instruction cache 64 KBytes, 2-way set associative, 64-byte line size L2 cache 256 KBytes, 16-way set associative, 64-byte line size FID/VID Control yes Max FID 10.0x Max VID 1.350 V P-State FID 0x2 - VID 0x12 (5.0x - 1.100 V) P-State FID 0xA - VID 0x0C (9.0x - 1.250 V) P-State FID 0xC - VID 0x0A (10.0x - 1.300 V) K8 Thermal sensor yes K8 Revision ID 6.0 Attached device PCI device at bus 0, device 24, function 0 Attached device PCI device at bus 0, device 24, function 1 Attached device PCI device at bus 0, device 24, function 2 Attached device PCI device at bus 0, device 24, function 3 Chipset ------------------------------------------------------------------------- Northbridge SiS 761GX rev. 02 Southbridge SiS 966 rev. 59 Graphic Interface AGP AGP Revision 3.0 AGP Transfer Rate 8x AGP SBA supported, enabled Memory Type DDR2 Memory Size 2048 MBytes Channels Single Memory Frequency 200.0 MHz (CPU/5) CAS# latency (CL) 5.0 RAS# to CAS# delay (tRCD) 5 RAS# Precharge (tRP) 5 Cycle Time (tRAS) 15 Bank Cycle Time (tRC) 21 Command Rate (CR) 1T DMI ------------------------------------------------------------------------- DMI BIOS vendor Phoenix Technologies, LTD version R03 date 05/08/2008 DMI System Information manufacturer HP product MediaSmart Server version unknown serial CN68330DGH UUID A482007B-B0CC7593-DD11736A-407B7067 DMI Baseboard vendor Wistron model SJD4 revision A.0 serial unknown DMI System Enclosure manufacturer HP chassis type Desktop chassis serial unknown DMI Processor manufacturer AMD model AMD Sempron(tm) Processor LE-1150 clock speed 2000.0 MHz FSB speed 200.0 MHz multiplier 10.0x DMI Memory Controller correction 64-bit ECC Max module size 4096 MBytes DMI Memory Module designation A0 size 2048 MBytes (double bank) DMI Memory Module designation A1 DMI Memory Module designation A2 DMI Memory Module designation A3 DMI Port Connector designation PS/2 Mouse (internal) port type Mouse Port connector PS/2 connector PS/2 DMI Port Connector designation USB0 (external) port type USB DMI Physical Memory Array location Motherboard usage System Memory correction None max capacity 16384 MBytes max# of devices 4 DMI Memory Device designation A0 format DIMM type unknown total width 64 bits data width 64 bits size 2048 MBytes DMI Memory Device designation A1 format DIMM type unknown total width 64 bits data width 64 bits DMI Memory Device designation A2 format DIMM type unknown total width 64 bits data width 64 bits DMI Memory Device designation A3 format DIMM type unknown total width 64 bits data width 64 bits How do I figure out what options I have for an upgrade?

    Read the article

  • Verizon HTC Eris - No sound on incoming phone call after 2.1 droid upgrade. Help!?

    - by Michael Rosario
    Has anyone had the following issue? I've had several issues as well: No sound when call connects, ringing or people talking. Apps would force close like weather. I did call HTC support and they had me go into Menu, Settings, Manage Applications and then clear the cache of the problem app. They also had me clear the cache once the browser was open and then do a soft reset (power off the phone and take the battery out for 15 seconds) This did fix some issues, but I am constantly turning my phone on and off to get sound back on call or to make the assigned ringtones work. There's no rhyme or reason as to why they stop working... Anyone else tried anything different??? Related problem statement... http://community.htc.com/na/htc-forums/android/f/32/p/2601/10344.aspx#10344 My wife and I are most concerned about the incoming call issue.

    Read the article

  • best practice with memcache/php - multi memcache nodes

    - by user62835
    So I am working on a web app that has to be built for scalability. It stores the results of frequent MySQL queries in the cache. I have pretty much everything built and ready to go, but I am concerned about best practices for deciding where to cache the data. I've talked to a few people, and one of them suggested splitting each key/value across all the memcache nodes - meaning if I store, for example, 'somekey' => 'this is the value', it would be split across, let's say, 3 memcache servers. Is that a better way? Or is memcache built more on a 1-to-1 relationship - for example, store the value on server A until it faults out, then go to server B and store it there? That is my current understanding from the research I have done and past experience working with memcache. Could someone please point me in the right direction and let me know which way is best, or whether I have this completely mixed up? Thanks
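
    For reference, stock memcached does neither of those things: each key lives on exactly one node, chosen by the client hashing the key over the server list (most clients use consistent hashing), and the nodes themselves never replicate or fail over to one another. A rough bash illustration of that client-side selection using the plain text protocol - the server list, key and value are made-up examples:

        #!/bin/bash
        servers=(10.0.0.1 10.0.0.2 10.0.0.3)   # pool of memcached nodes
        key="somekey"
        value="this is the value"

        # Pick one node by hashing the key; real clients use consistent hashing
        # so that adding or removing a node remaps as few keys as possible.
        hash=$((16#$(printf '%s' "$key" | md5sum | cut -c1-8)))
        node=${servers[$((hash % ${#servers[@]}))]}

        # The whole value is stored on that single node (memcached text protocol).
        printf 'set %s 0 300 %s\r\n%s\r\nquit\r\n' "$key" "${#value}" "$value" \
            | nc "$node" 11211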

    Read the article

  • Apache and MySQL taking all the memory? Maximum connections?

    - by lpfavreau
    I've a had one of our servers going down (network wise) but keeping its uptime (so looks the server is not losing its power) recently. I've asked my hosting company to investigate and I've been told, after investigation, that Apache and MySQL were at all time using 80% of the memory and peaking at 95% and that I might be needing to add some more RAM to the server. One of their justifications to adding more RAM was that I was using the default max connections setting (125 for MySQL and 150 for Apache) and that for handling those 150 simultaneous connections, I would need at least 3Gb of memory instead of the 1Gb I have at the moment. Now, I understand that tweaking the max connections might be better than me leaving the default setting although I didn't feel it was a concern at the moment, having had servers with the same configuration handle more traffic than the current 1 or 2 visitors before the lunch, telling myself I'd tweak it depending on the visits pattern later. I've also always known Apache was more memory hungry under default settings than its competitor such as nginx and lighttpd. Nonetheless, looking at the stats of my machine, I'm trying to see how my hosting company got those numbers. I'm getting: # free -m total used free shared buffers cached Mem: 1000 944 56 0 148 725 -/+ buffers/cache: 71 929 Swap: 1953 0 1953 Which I guess means that yes, the server is reserving around 95% of its memory at the moment but I also thought it meant that only 71 out of the 1000 total were really used by the applications at the moment looking a the buffers/cache row. Also I don't see any swapping: # vmstat 60 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 57612 151704 742596 0 0 1 1 3 11 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 1 1 24 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 2 1 18 0 0 100 0 0 0 0 57604 151704 742596 0 0 0 0 1 13 0 0 100 0 And finally, while requesting a page: top - 08:33:19 up 3 days, 13:11, 2 users, load average: 0.06, 0.02, 0.00 Tasks: 81 total, 1 running, 80 sleeping, 0 stopped, 0 zombie Cpu(s): 1.3%us, 0.3%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1024616k total, 976744k used, 47872k free, 151716k buffers Swap: 2000052k total, 0k used, 2000052k free, 742596k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 24914 www-data 20 0 26296 8640 3724 S 2 0.8 0:00.06 apache2 23785 mysql 20 0 125m 18m 5268 S 1 1.9 0:04.54 mysqld 24491 www-data 20 0 25828 7488 3180 S 1 0.7 0:00.02 apache2 1 root 20 0 2844 1688 544 S 0 0.2 0:01.30 init ... So, I'd like to know, experts of serverfault: Do I really need more RAM at the moment? How do they calculate that for 150 simultaneous connections I'd need 3Gb? Thanks for your help!
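
    A sanity check on those numbers: on Linux, memory used for buffers and cache is reclaimable, so the "-/+ buffers/cache" row (71 MB used here) is the figure that reflects what Apache and MySQL actually consume, and zero swap plus an idle CPU supports that reading. The "150 connections need 3 GB" estimate presumably comes from multiplying MaxClients by the per-child Apache footprint, which you can measure yourself - a sketch assuming the Debian-style process names shown in the top output:

        # Average resident size (RSS) of the running Apache children, in MB.
        # RSS double-counts shared pages, so this is if anything an overestimate.
        ps -o rss= -C apache2 | awk '{ sum += $1; n++ } END { printf "%.1f MB avg over %d children\n", sum/n/1024, n }'

        # Rough worst case: avg_child_RSS * MaxClients (150) + MySQL's RSS.
        ps -o rss= -C mysqld | awk '{ printf "mysqld RSS: %.1f MB\n", $1/1024 }'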

    Read the article

  • Changing Network Path of Offline Files

    - by Adam
    Many of our users have their Home folder set as Available Offline. Their Windows 7 laptops will not be back on our network for a few weeks. In the mean time, we're setting up new servers and reorganizing our files, so the network path to the Home folder is going to be completely different. Based on some testing I did, when the users return, any files they've created or modified while offline will be gone, and the new Home folder will be there and not set to sync. The offline cache of the old Home folder is still accessible through the Sync Center, but they're not going to want to dig through that and try to find what's missing. Avoiding this would involve keeping the old server around and moving everyone to the new location in person, so we know for sure they're synced first. Is there any way to avoid this that isn't as tedious, like a quick registry edit or something that will point the old offline cache to the new location?

    Read the article

  • Macbook cannot see specific wireless network after being connected to it for a while

    - by donut
    Okay, so there's a single wireless network that my laptop has troubles with. My Macbook Pro used to be fine with it until it changed to using channel 13 (or 11?). Since then, after being connected to it for a while it disappears from my laptop's view. Other networks are showing up fine and other computers (including several Macs) have no troubles connecting to this network. If I clear my system cache using Onyx and then restart (sometimes a couple times) my laptop can see and connect to it again. But it seems that if I disconnect and try reconnecting I have to clear my cache again. One thing to note is that if I put my computer to sleep while connected to this network it has no problems reconnecting on wake up. I've got a 15" Macbook Pro 2,2 with Leopard 10.5.8.

    Read the article

  • Where did this incorrect cached DNS lookup come from?

    - by Stephen Jennings
    Somehow, I've been having a chronic issue where my computer will get an invalid DNS lookup in its cache for either of the two Exchange servers I use from Mail.app. My workplace runs one of the Exchange servers and I run the other (they are totally unrelated, hosted by different companies, etc.). The problem manifests as a certificate domain error. When it happens, I can run nslookup mail.mydomain.com and I see the incorrect IP address (usually owned by either Apple or Akamai), but if I run nslookup mail.mydomain.com 8.8.8.8, I get the correct address. My real quest is to find out why this keeps happening, and to do that, I'd like to know which server is supplying me this bad DNS entry. Is there a way to check my DNS cache to see where this bad lookup came from?
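
    Mac OS X doesn't let you dump the resolver cache contents directly, but you can see which nameserver each domain is being sent to and flush the cache once the bad record shows up; comparing the system resolver's answer with a direct query to 8.8.8.8 (as the question already does) narrows it down to either the local cache or whatever resolver a VPN or the current network has pushed. A few commands worth running while the problem is happening, assuming 10.5/10.6-era tools:

        # All resolvers in use, including per-domain resolvers supplied by a
        # VPN or the current network - a frequent source of stale answers.
        scutil --dns

        # What the system resolver (the one Mail.app uses) returns right now,
        # versus a direct query to a public resolver.
        dscacheutil -q host -a name mail.mydomain.com
        dig mail.mydomain.com @8.8.8.8 +short

        # Flush the local cache once the bad record has been identified.
        dscacheutil -flushcache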

    Read the article

  • write-through RAM disk, or massive caching of file system?

    - by Will
    I have a program that is very heavily hitting the file system, reading and writing randomly to a set of working files. The files total several gigabytes in size, but I can spare the RAM to keep them all mostly in memory. The machines this program runs on are typically Ubuntu Linux boxes. Is there a way to configure the file system to have a very very large cache, and even to cache writes so they hit the disk later? I understand the issues with power loss or such, and am prepared to accept that. Crashing aside, in normal operation the writes should eventually reach the disk! Or is there a way to create a RAM disk that writes-through to real disk?
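
    Linux already does most of this through the page cache: repeated random reads of a several-gigabyte working set end up served from RAM, and writes land in the cache first and are flushed to disk later. What can be tuned is how much dirty data may accumulate and how long it may linger before it must hit the disk, which gives the write-behind behaviour described above at exactly the power-loss risk already accepted. A sketch of the relevant knobs plus the explicit RAM-disk alternative (sizes and paths are examples):

        # Let dirty (written-but-not-yet-flushed) data occupy up to 60% of RAM
        # and age for up to 10 minutes before it is considered due for writeback.
        sudo sysctl vm.dirty_ratio=60 \
                    vm.dirty_background_ratio=50 \
                    vm.dirty_expire_centisecs=60000 \
                    vm.dirty_writeback_centisecs=6000

        # Alternative: an explicit tmpfs RAM disk, persisted only as often as
        # the final rsync runs.
        sudo mkdir -p /mnt/ramwork
        sudo mount -t tmpfs -o size=6g tmpfs /mnt/ramwork
        rsync -a /data/workfiles/ /mnt/ramwork/     # load the working set
        # ... run the program against /mnt/ramwork ...
        rsync -a /mnt/ramwork/ /data/workfiles/     # write the results back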

    Read the article

  • User given a login prompt when closing Word documents after viewing them in IE7

    - by Martin Owen
    When using IE7 to view Word documents on our CRM system (an ASP.NET 2.0 application running on Windows Server 2003 and IIS 6 and using Windows authenticaton) I'm finding that a prompt appears when the user closes the document. The Word document is originally opened by clicking a link in the CRM system. Are there permissions that I can set on the folder containing the Word documents to prevent this prompt? I've already tried only allowing the Read permission for the Users group (I've left Administrators with Full Control.) If there's another solution to this without using permissions please let me know. UPDATE: I ran Fiddler as suggested by JD and here is the output from the two responses after the request for the document. The first seems to be a DAV response and the second is the authentication request. How do I prevent the DAV response and just return the .doc on the server? OPTIONS / HTTP/1.1 Translate: f User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery Host: <REMOVED> Content-Length: 0 Connection: Keep-Alive Pragma: no-cache X-NovINet: v1.2 HTTP/1.1 200 OK Date: Thu, 18 Feb 2010 13:37:36 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET MS-Author-Via: DAV Content-Length: 0 Accept-Ranges: none DASL: <DAV:sql> DAV: 1, 2 Public: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH Allow: OPTIONS, TRACE, GET, HEAD, COPY, PROPFIND, SEARCH, LOCK, UNLOCK Cache-Control: private ------------------------------------------------------------------ OPTIONS /docs/ZONE%20100-105.doc HTTP/1.1 Translate: f User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery Host: <REMOVED> Content-Length: 0 Connection: Keep-Alive Pragma: no-cache X-NovINet: v1.2 HTTP/1.1 401 Unauthorized Content-Length: 83 Content-Type: text/html Server: Microsoft-IIS/6.0 WWW-Authenticate: Basic realm="<REMOVED>" X-Powered-By: ASP.NET Date: Thu, 18 Feb 2010 13:37:36 GMT ------------------------------------------------------------------ UPDATE 2: I found a potential workaround for the problem via this post: http://forums.iis.net/p/1149091/1868317.aspx. I moved all of the documents that are being requested into a folder outside of the web root, and created a virtual directory for it (also outside of the web root). When I followed a link to one of the documents in IE and then closed the document I wasn't presented with a login prompt. I should point out that I'm not using FPSE, unlike the person in the forum post. Ideally I don't want to have to put the documents in a separate virtual directory, but this is the simplest solution I've found so far.

    Read the article

  • Best SQL Server Configuration with this hardware.

    - by DavidStein
    I just received my new SQL Server from Dell. The server will serve approximately 15 OLTP databases which average 10GB in size. Here are the basic specs: Dell PowerEdge R510 with up to 12 Hot Swap HDDs, LED Intel Xeon E5649 2.53GHz, 12M Cache, 5.86 GT/s QPI, 6 core (Quantity of 2) 48GB Memory (6x8GB), 1333MHz Dual Ranked RDIMMs for 2 Processors, Optimized PERC H700 Integrated RAID Controller, 1GB NV Cache 300GB 15K RPM SA SCSI 6Gbps 3.5in Hotplug Hard Drive (Quantity of 4) 600GB 15K RPM SA SCSI 6Gbps 3.5in Hotplug Hard Drive (Quantity of 6) My first thought was to use 3 arrays: OS - RAID 1 - (2) 300GB; T-Log - RAID 1 - (2) 300GB; DB - RAID 5 - (5) 600GB; Backup - (1) 600GB, non-RAIDed. However, I could also do the following after purchasing one more drive for backup: OS and T-Log - RAID 10 - (4) 300GB; DB - RAID 10 - (6) 600GB. The hard drive space is not an issue as the databases are not that large. I'm just trying to optimize the speed of the applications using these databases. So, what would you guys recommend?

    Read the article

  • Varnish, hide port number

    - by George Reith
    My set up is as follows: OS: CentOS 6.2 running on an OpenVZ virtual machine. Web server: Nginx listening on port 8080 Reverse proxy: Varnish listening on port 80 The problem is that Varnish redirects my requests to port 8080 and this appears in the address bar like so http://mysite.com:8080/directory/, causing relative links on the site to include the port number (8080) in the request and thus bypassing Varnish. The site is powered by WordPress. How do I allow Varnish to use Nginx as the backend on port 8080 without appending the port number to the address? Edit: Varnish is set up like so: I have told the Varnish daemon to listen to port 80 by default. VARNISH_VCL_CONF=/etc/varnish/default.vcl # # # Default address and port to bind to # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets. # VARNISH_LISTEN_ADDRESS= VARNISH_LISTEN_PORT=80 # # # Telnet admin interface listen address and port VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1 VARNISH_ADMIN_LISTEN_PORT=6082 # # # Shared secret file for admin interface VARNISH_SECRET_FILE=/etc/varnish/secret # # # The minimum number of worker threads to start VARNISH_MIN_THREADS=1 # # # The Maximum number of worker threads to start VARNISH_MAX_THREADS=1000 # # # Idle timeout for worker threads VARNISH_THREAD_TIMEOUT=120 # # # Cache file location VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin # # # Cache file size: in bytes, optionally using k / M / G / T suffix, # # or in percentage of available disk space using the % suffix. VARNISH_STORAGE_SIZE=1G # # # Backend storage specification VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" # # # Default TTL used when the backend does not specify one VARNISH_TTL=120 The VCL file that Varnish calls (through an include in default.vcl) consists of: backend playwithbits { .host = "127.0.0.1"; .port = "8080"; } acl purge { "127.0.0.1"; } sub vcl_recv { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { set req.backend = playwithbits; set req.http.Host = regsub(req.http.Host, ":[0-9]+", ""); if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } if (req.url ~ "^/$") { unset req.http.cookie; } } } sub vcl_hit { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } } sub vcl_miss { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.request == "PURGE") { error 404 "Not in cache."; } if (!(req.url ~ "wp-(login|admin)")) { unset req.http.cookie; } if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") { unset req.http.cookie; set req.url = regsub(req.url, "\?.$", ""); } if (req.url ~ "^/$") { unset req.http.cookie; } } } sub vcl_fetch { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.url ~ "^/$") { unset beresp.http.set-cookie; } if (!(req.url ~ "wp-(login|admin)")) { unset beresp.http.set-cookie; } } }
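
    The :8080 is almost certainly being added by the backend rather than by Varnish: when nginx or WordPress issues a redirect it appends the port it is listening on, and from then on the browser talks to nginx directly. Two places worth checking, sketched below - the nginx directive is real, but the server-block layout, table prefix and database name are assumptions:

        # 1) Stop nginx appending its listen port (8080) to redirects: in the
        #    server { } block that listens on 8080, add
        #        port_in_redirect off;
        #    then verify and reload.
        sudo nginx -t && sudo service nginx reload

        # 2) WordPress must also generate links without the port - siteurl and
        #    home should read http://mysite.com, not http://mysite.com:8080.
        mysql -e "SELECT option_name, option_value FROM wp_options
                  WHERE option_name IN ('siteurl','home');" mysite_wp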

    Read the article

  • Why so Long time span in creating Session Factory?

    - by vijay.shad
    Hi My project is web application running in the tomcat container. This application is a spring framework based hibernate application. The problem with this is it takes a lot of time when creates session factory. here is the logs 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] Session factory constructed with filter configurations : {} 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] instantiating session factory with properties: {java.vendor=Sun Microsystems Inc., sun.java.launcher=SUN_STANDARD, catalina.base=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.management.compiler=HotSpot Tiered Compilers, catalina.useNaming=true, os.name=Linux, sun.boot.class.path=/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes, java.util.logging.config.file=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/conf/logging.properties, java.vm.specification.vendor=Sun Microsystems Inc., hibernate.generate_statistics=true, java.runtime.version=1.6.0_17-b04, hibernate.cache.provider_class=org.hibernate.cache.EhCacheProvider, user.name=root, shared.loader=, tomcat.util.buf.StringCache.byte.enabled=true, hibernate.connection.release_mode=auto, user.language=en, java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory, sun.boot.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386, java.version=1.6.0_17, java.util.logging.manager=org.apache.juli.ClassLoaderLogManager, user.timezone=Canada/Pacific, sun.arch.data.model=32, java.endorsed.dirs=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/endorsed, sun.cpu.isalist=, sun.jnu.encoding=UTF-8, file.encoding.pkg=sun.io, package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans., file.separator=/, java.specification.name=Java Platform API Specification, java.class.version=50.0, user.country=US, java.home=/usr/java/jdk1.6.0_17/jre, java.vm.info=mixed mode, os.version=2.6.18-128.el5, path.separator=:, java.vm.version=14.3-b01, hibernate.jdbc.batch_size=25, java.awt.printerjob=sun.print.PSPrinterJob, sun.io.unicode.encoding=UnicodeLittle, package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper., java.naming.factory.url.pkgs=org.apache.naming, sun.rmi.dgc.client.gcInterval=3600000, user.home=/root, java.specification.vendor=Sun Microsystems Inc., java.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386/server:/usr/java/jdk1.6.0_17/jre/lib/i386:/usr/java/jdk1.6.0_17/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib, java.vendor.url=http://java.sun.com/, java.vm.vendor=Sun Microsystems Inc., hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect, sun.rmi.dgc.server.gcInterval=3600000, common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar, java.runtime.name=Java(TM) SE Runtime Environment, java.class.path=:/usr/local/InstalledPrograms/apache-tomcat-6.0.20/bin/bootstrap.jar, hibernate.bytecode.use_reflection_optimizer=false, java.vm.specification.name=Java Virtual Machine Specification, java.vm.specification.version=1.0, catalina.home=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.cpu.endian=little, sun.os.patch.level=unknown, hibernate.cache.use_query_cache=true, hibernate.connection.provider_class=org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider, 
java.io.tmpdir=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/temp, java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport.cgi, server.loader=, os.arch=i386, java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, java.ext.dirs=/usr/java/jdk1.6.0_17/jre/lib/ext:/usr/java/packages/lib/ext, user.dir=/, line.separator=, java.vm.name=Java HotSpot(TM) Server VM, hibernate.cache.use_second_level_cache=true, file.encoding=UTF-8, java.specification.version=1.6, hibernate.show_sql=true} 2010-04-15 23:08:53,516 DEBUG [AbstractEntityPersister] Static SQL for entity: com.vsd.model.Order There you can see the time delay of more than 3 mins in executing these processes. My database is mysql and database server is running on the local machine only. The container environment is Centos Linux system. I am clueless about why it takes that much of time in executing these process, But when i do the same task from under eclipse it does not take that much of time. Development environment is Windows.

    Read the article

  • USB Flash not recognised by Windows and BIOS, but works fine in Linux

    - by bbalegere
    I have a Transcend JetFLash 2GB USB Drive.It was working fine and I had been using it occasionally. All of sudden it stopped working in all versions of Windows . The USB Drive is also not recognised by the BIOS.It does not show in the list of bootable devices.(It used show up in the list earlier) However the USB Drive works fine in my Linux Mint 11 OS. Running dmesg gives this [ 941.812192] usb 1-2: new high speed USB device using ehci_hcd and address 4 [ 941.936178] usb 1-2: device descriptor read/64, error -71 [ 942.164188] usb 1-2: device descriptor read/64, error -71 [ 942.380189] usb 1-2: new high speed USB device using ehci_hcd and address 5 [ 942.504138] usb 1-2: device descriptor read/64, error -71 [ 942.732179] usb 1-2: device descriptor read/64, error -71 [ 942.948154] usb 1-2: new high speed USB device using ehci_hcd and address 6 [ 943.364134] usb 1-2: device not accepting address 6, error -71 [ 943.476172] usb 1-2: new high speed USB device using ehci_hcd and address 7 [ 943.892140] usb 1-2: device not accepting address 7, error -71 [ 943.892191] hub 1-0:1.0: unable to enumerate USB device on port 2 [ 944.296190] usb 2-2: new full speed USB device using uhci_hcd and address 3 [ 944.438251] usb 2-2: not running at top speed; connect to a high speed hub [ 944.709928] usbcore: registered new interface driver uas [ 944.729999] Initializing USB Mass Storage driver... [ 944.730509] scsi6 : usb-storage 2-2:1.0 [ 944.730908] usbcore: registered new interface driver usb-storage [ 944.730917] USB Mass Storage support registered. [ 945.736320] scsi 6:0:0:0: Direct-Access JetFlash Transcend 2GB 8.07 PQ: 0 ANSI: 2 [ 945.744547] sd 6:0:0:0: Attached scsi generic sg1 type 0 [ 945.753316] sd 6:0:0:0: [sdb] 3944448 512-byte logical blocks: (2.01 GB/1.88 GiB) [ 945.758274] sd 6:0:0:0: [sdb] Write Protect is off [ 945.758288] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00 [ 945.765167] sd 6:0:0:0: [sdb] No Caching mode page present [ 945.765181] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 945.784309] sd 6:0:0:0: [sdb] No Caching mode page present [ 945.784323] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 946.239512] sdb: sdb1 [ 946.257279] sd 6:0:0:0: [sdb] No Caching mode page present [ 946.257292] sd 6:0:0:0: [sdb] Assuming drive cache: write through [ 946.257302] sd 6:0:0:0: [sdb] Attached SCSI removable disk Looks like there is something wrong the USB Drive.It is not recognised in any computer running Windows. Is there any way to fix this? Any idea why this problem occurred ?

    Read the article

  • Caching DNS server (bind9.2) CPU usage is so so so high.

    - by Gk
    Hi, I have a caching-only DNS server which gets ~3k queries per second. Here are the specs: Xeon dual-core 2.8GHz, 4GB of RAM, CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE), BIND 9.4.2. rndc status: recursive clients: 666/4900/5000 About 300 new queries (not in cache) per second. BIND always uses 100% of one core with the single-threaded build; after I recompiled it multi-threaded, it uses nearly 200% across two cores :( No iowait, only sys and user. I searched around but didn't find any info about how BIND uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage: cat /proc/meminfo MemTotal: 4147876 kB MemFree: 1863972 kB Buffers: 143632 kB Cached: 372792 kB SwapCached: 0 kB Active: 1916804 kB Inactive: 276056 kB I've set max-cache-size to 0 to let BIND use as much RAM as it wants, but it always stops at ~2GB. Since we get uncached queries every second, theoretically the RAM should eventually be exhausted, but it isn't. Do you have any idea? TIA, -Gk
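
    Two details that may explain both symptoms: the kernel is a 32-bit PAE build, so a single named process tops out at roughly 2-3 GB of address space no matter what max-cache-size says, and BIND 9.4-era threading reportedly scales poorly, so the recursion work itself stays CPU-bound. A few commands for seeing where the time and memory actually go - the statistics-file path is an assumption (it is whatever named.conf sets):

        # Per-thread CPU usage of named, to see whether the extra thread helps.
        top -H -b -n 1 -p "$(pidof named)" | head -20

        # Dump the server's internal counters and read the tail of the file.
        rndc stats
        tail -50 /var/named/data/named_stats.txt    # assumed statistics-file

        # Start named with an explicit worker-thread count matching the cores.
        named -u named -n 2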

    Read the article
