Search Results

Search found 12397 results on 496 pages for 'maybe'.


  • RSS "Newspaper" / Google Reader replacement

    - by Sean D
    With the impending demise of Google Reader I've been looking at ways to replace it. I've decided that what might be cool is to get an email every morning, with all the updates from the last twenty-four hours, maybe in the style of a newspaper. That's not a very original idea, since sites like http://fivefilters.org/pdf-newspaper/ and http://feedjournal.com/ already do this, but they both have various drawbacks. In particular, both require a single feed, only take the last n items, and involve clicking around on their website. The Pro option for feedjournal seems almost like it would do the job, but the project seems to be dead, and there's no way to buy it. Before I hack together something crazy I'd like to know if there's a better solution to my problem. In short: I want to replace Google Reader with a daily PDF email; how should I do this? edit: I didn't award the bounty because nobody solved the problem (not that I'm assuming it has a solution). Answers like "well, for the way I do things this wouldn't work" aren't actually helpful, even if they are well-meaning.
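
    If the "hack something together" route wins out, one minimal sketch is a nightly cron job that turns an already-assembled HTML digest into a PDF and mails it. Everything here is an assumption for illustration: the digest.html produced from the feeds, the wkhtmltopdf/mutt tooling, the script path and the address are all placeholders.

        # /etc/cron.d/feed-digest (hypothetical): build and mail the digest at 06:00
        0 6 * * *  root  /usr/local/bin/feed-digest.sh

        # /usr/local/bin/feed-digest.sh
        #!/bin/sh
        # digest.html is assumed to already hold the last 24 hours of feed items
        wkhtmltopdf /tmp/digest.html /tmp/digest.pdf
        echo "Your morning feed digest is attached." \
          | mutt -s "Daily newspaper" -a /tmp/digest.pdf -- you@example.com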

    Read the article

  • Mail server checklist

    - by Jeff
    We recently ran into some issues with our mail server setup. I'm preparing a list of actions that we should enforce in order to maintain a proper email solution within our company. We have around 80 Exchange users, and we send mass emails out almost monthly to 20,000+ customers each time. The checklist I currently have:

    1) McAfee MX Logic 'cloud' anti-spam functionality for incoming messages.
    2) Antivirus on each computer in the company.
    3) Antivirus on the Exchange and DNS servers.
    4) Set up an SPF record.
    5) Set up DKIM.
    6) Set up DomainKeys.
    7) Set up Sender ID.
    8) Submit the SPF record to Microsoft, Yahoo, etc. for whitelisting purposes.
    9) Configure message size limits in Exchange to safe numbers.
    10) I have two outside IPs for my email server; in case one gets blacklisted, switch to the backup.
    11) My internet site rests on a different IP than the mail server.
    12) All mass emails for the company are sent through a third-party company (listtrak.com).
    13) Set up domain aliases (media, enews, and bounce) for the third-party mass-mail software.
    14) Verify the setup using [email protected].
    15) Configure group policy and our opendns.org account to prevent unwanted actions and website viewing.

    Mass emails:

    1) Schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day).
    2) Set up user preferences so recipients can decide what they want to receive (their interests).
    3) Send a steadier flow of email, maybe 100 a week with top new products instead of 20,000 every other month.

    If anyone has suggestions or additions/subtractions to this checklist they are greatly appreciated. Thank you
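
    For reference, items 4-7 all come down to publishing DNS TXT records. A hedged sketch with placeholder values (your real sending IPs, the mass-mail provider's include, and the DKIM public key would go in their place):

        ; SPF: list every host allowed to send mail for the domain
        example.com.                  IN TXT  "v=spf1 mx ip4:203.0.113.10 include:thirdparty-mailer.example ~all"
        ; DKIM: the public key published under a selector of your choosing
        mail._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AB"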

    Read the article

  • Can I install windows on an SSD and access data from my old windows HDD?

    - by nzifnab
    I purchased new computer components, switching my hardware from AMD and Radeon to Intel and Nvidia. I kept components from my old computer like the power supply and two HDDs. Everything appeared to install correctly and the system booted into the BIOS just fine (after a brief snafu with the CPU fan). My goal was to use the two hard drives and just be able to turn on the computer and load up my old Windows install with all the files, programs, and documents. I expected to have to call Microsoft to re-register the Windows install for the new hardware (since I had to do that last time I upgraded with the same Windows version). When the computer attempts to boot into Windows it briefly flashes a bluescreen and then restarts. System recovery gives a message something like "BadDriver Failover" something something. I assume this is because it's trying to use AMD drivers for an Intel chipset (or something...?) and I've been as yet unsuccessful in getting it to boot into my old Windows partition. SO! I decided eff it, maybe I'll go visit my nearest Micro Center and buy a 200 GB SSD, install Windows onto that, and then... be able to access just the contents of both of my other hard drives? I don't intend on running any of the programs, but there were some saved files I would like to salvage from the 500 GB hard drive. The 1.5 TB hard drive only had files on it, no OS or applications, so I'd also expect to still be able to access it. Is this possible? Can I install the SSD and only format/install Windows onto it, and still access the contents of my two transferred-over drives?

    Read the article

  • How to troubleshoot this memory usage?

    - by Camran
    I have a classifieds website. I use PHP, MySQL, and Solr. Solr uses a servlet container, in my case Jetty, which is a Java application. I just noticed that something was terribly wrong on my website. I opened the terminal and entered the "top" command and noticed that Java was EATING all the CPU and memory. Now I thought "OK, maybe I need more memory and CPU", so I increased it. But along with the increase, the Java app started eating more. This has never happened before, and it is either a bug or a hack of some kind. Anyway, I need to troubleshoot this now, so I wonder how I do this. Can I somehow pinpoint exactly when the memory usage started to go up from some error log? How does one troubleshoot this? How do I prevent it? Is it possible to prevent too many requests somehow, if they are within a timeline? Thanks
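
    Before throwing more hardware at it, it helps to see what the Jetty JVM is actually doing and to put a ceiling on it. A rough sketch with the standard JDK tools; the PID (1234) and the JAVA_OPTIONS location are placeholders that depend on how Jetty is launched on this box:

        # find the Jetty JVM and watch its heap / GC behaviour
        ps aux | grep [j]etty
        jstat -gcutil 1234 5s                            # GC and heap utilisation every 5 seconds
        jmap -heap 1234                                  # one-off heap summary
        jmap -dump:format=b,file=/tmp/jetty.hprof 1234   # heap dump for offline analysis

        # cap the heap so Solr/Jetty cannot eat the whole box
        # (where this goes depends on the install, e.g. /etc/default/jetty)
        JAVA_OPTIONS="-Xms256m -Xmx512m -XX:+HeapDumpOnOutOfMemoryError"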

    Read the article

  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    My three partitions for my system are created with LVM on a LUKS partition (dm-crypt). These are /home, / and swap. The filesystem is ext4. They are encrypted because they are on my laptop and I don't want laptop thieves to get my data. But I often share my laptop with other people, so they can access my encrypted partitions. I don't want these people to be able to recover my cache and all the data I deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.) But still I haven't found any solution to wipe this free space successfully. I tried dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors] so my partition was completely full of data. df /dev/mapper/home said 100% was used and there were 0 sectors available. But I could still recover gigs of data with photorec, although I selected to recover just from the free space. photorec displays: /dev/mapper/home - 340 GB / 317 GiB (RO), but df displays that the size of /home is just 313G. Why are there these differences, and what does the 340 GB mean? It looks like there is a place on my /dev/mapper/home partition that I can't access to overwrite, but can access to recover. I also checked for corrupted sectors, but there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of the loads of recoverable files, to securely delete them?
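
    Part of the gap is likely bookkeeping: 340 GB and 317 GiB are the same size in different units, df reports the ext4 filesystem size after metadata overhead, and the ~5% of blocks reserved for root never count as "available", so a non-root dd can't fill them. A hedged sketch of a fill-and-delete pass run as root, so the reserved blocks get overwritten too:

        # run as root so the ext4 reserved blocks (5% by default) are filled as well
        dd if=/dev/zero of=/home/zerofile bs=1M || true   # dd stops when the filesystem is full
        sync                                              # make sure the zeros actually hit the disk
        rm /home/zerofile

        # optional: see how much space is reserved for root on this filesystem
        tune2fs -l /dev/mapper/home | grep -i 'reserved block count'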

    Read the article

  • Silent install of Japanese Language Pack in Win7

    - by Doltknuckle
    Every year, due to re-imaging, I am forced to find a way to install the Japanese language pack on a collection of 30 computers. Each year I look for a way to automate this process, and each year I am forced to do this manually. Maybe this year will be different. Has anyone had any luck with installing and configuring Far East language support for Windows 7 without user interaction? I have already downloaded kb972813 and have a way to get it out to the computers. What I normally do is this: Run the EXE, use the default settings. Open up language settings and create the JP keyboard. Configure the language bar settings. Copy settings to the default user. Delete the local user cache. Sign in to the different user accounts to make sure that the default settings are correct. This whole process takes about 10 minutes; multiply that out by 30 machines and you are looking at a 5-hour process. If I can log into all of the computers at once, I can normally cut that down to about an hour. Any ideas would be appreciated. Thanks in advance
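
    One hedged avenue for the unattended route, assuming you can get at the lp.cab inside the kb972813 download: DISM adds the language pack silently, and control intl.cpl can apply keyboard/locale settings from an XML answer file (including the copy-to-default-user step). The paths and the japanese-input.xml file are hypothetical placeholders, not a tested recipe.

        REM install the language pack without prompts
        dism /online /add-package /packagepath:C:\langpacks\ja-jp\lp.cab

        REM apply regional/keyboard settings from an unattend-style XML
        control intl.cpl,, /f:"C:\langpacks\japanese-input.xml"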

    Read the article

  • Missing Memory on Windows Server 2008

    - by Chris Lively
    I have a Windows 2008 x64 server with 8GB of RAM installed. Task Manager and Resource Monitor both insist that 7.5GB of the RAM is in use. However, the memory list under Processes (Memory Private Bytes) doesn't add up. I do have Show Processes from all users checked, and adding the numbers by hand I come up with about 3.5GB of RAM. I also looked at the latest copy of SysInternals Process Explorer, and neither the Private Bytes nor the Working Set adds up to more than about 3.5GB of RAM in use. What's going on? ===== Update: I bounced the server to see what would happen with the memory utilization. After boot and regular operations began, it sat at 3GB of RAM usage. 18 hours later, it's back up to 6.8GB of usage with no indication as to where the additional 3.5GB or so of RAM is being used. Here are links to screen shots of the resource monitor and task manager: Resource Monitor Task Manager Update 2: Well, I believe I located the problem. When I detached one of the larger databases from my SQL Server, the amount of RAM shown as "in use" dropped drastically. The Memory Private Bytes count barely moved. So I'm guessing that SQL Server has some way of allocating memory where it doesn't really show up in any of the monitors. I went further and created a new database file, then transferred all of the data from the one I detached. Even though it has the same data, and the same transactions going through it, the memory in use has stayed low. Maybe there was some corruption in the DB? I'll leave it to the DB gods and go searching for another "problem" ;)
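
    The guess at the end is close: when SQL Server runs with AWE/locked pages, its buffer pool is allocated outside the working set and Private Bytes, so per-process counters miss it even though the RAM is genuinely in use. If the goal is simply to keep it from absorbing the whole box, a hedged sketch of capping it (the 4096 MB figure is just an example, not a recommendation):

        REM cap the buffer pool so SQL Server leaves headroom for everything else
        sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -Q "EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"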

    Read the article

  • IIS 7: One Page Works, All Others Fail With "Error code: ssl_error_rx_record_too_long"

    - by Michael
    On my local machine, I have a second site bound to port 81. Within that site is a certain page which I can browse from other machines with no problems, but all other pages fail with "Error code: ssl_error_rx_record_too_long". Each of the failing pages (as well as the lone working page) works with localhost. So, from any machine, local or remote: http://cmwmach01.mydomain.biz:81/RD/SS/SS.aspx (works) http://cmwmach01:81/RD/SS/SS.aspx (works) http://cmwmach01.mydomain.biz:81/RD/POV/SC.aspx (fails - gets changed to https) http://cmwmach01:81/RD/POV/SC.aspx (fails - gets changed to https) Everything works with localhost (locally, of course). I've tagged this question with SSL because, at one point, it would warn about an SSL cert issue (maybe this was self-signed at one point?), but now it doesn't. While there may be an issue around that, I don't see how this could cause the issue I am seeing (but, as I mention below, I am way out of my depth here). I am way out of my depth in trying to figure out why that one page works (or the others don't), so that I can make them all work. Any ideas?
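
    That error is what a browser reports when it speaks TLS to a port that answers in plain HTTP, which fits the "gets changed to https" symptom: something (an app-level redirect, a rewrite rule, or a "Require SSL" setting) is presumably bouncing those URLs to https on port 81, where no SSL binding exists. A hedged way to see exactly what the server sends back for a failing page, from any machine with curl:

        # show just the status line and headers; look for a Location: https://... redirect
        curl -sI http://cmwmach01.mydomain.biz:81/RD/POV/SC.aspx | head -n 20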

    Read the article

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2 and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space -- in fact, much greater than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:

        Mem:  7136868k total, 5272300k used, 1864568k free,  256876k buffers
        Swap: 1048572k total,       0k used, 1048572k free, 2526504k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  SWAP COMMAND
         4121 jboss  20  0 5913m 603m  14m S  0.7  8.7 3:59.90  5.2g java
        22730 root   20  0 2394m 4012 1976 S  2.0  0.1 4:20.57  2.3g PassengerHelper
        20564 rails  20  0 2539m 220m 9828 S  0.3  3.2 0:23.58  2.3g java
         1423 nscd   20  0  877m 1464  972 S  0.0  0.0 0:03.89  876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 gigs of swap space which is definitely impossible since there's only 1G allocated and none is being used (probably because there's still 1.8G of RAM free). And here's the results of uname -a:

        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off of the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL5 and RHEL 6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap or maybe even total memory usage isn't what it appears to be... Any help would be appreciated!
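
    For what it's worth, procps top computes that SWAP column as simply VIRT minus RES, not from what the kernel has actually swapped out, which is why it can exceed the swap partition. The kernel's own per-process figure is exposed in /proc (VmSwap, present on 2.6.34+ kernels like this one); a quick check, reusing the PIDs from the sample above:

        # real swap usage per process as the kernel reports it
        for pid in 4121 22730 20564 1423; do
            printf '%6s  ' "$pid"; grep VmSwap /proc/$pid/status
        done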

    Read the article

  • Multiple SSL certificates on Apache using multiple public IPs - not working

    - by St. Even
    I need to configure multiple SSL certificates on a single Apache server. I already know that I need multiple external IP addresses as I cannot use SNI (only running Apache 2.2.3 on this server). I assumed that I had everything configured correctly; unfortunately things are not working as they should (or maybe I should say, as I expected them to work)... In my httpd.conf I have:

        NameVirtualHost *:80
        NameVirtualHost *:443

    Let's say my public IP is 12.0.0.1 and my private IP is 192.168.0.1. When I use the public IP in my vhost, my default website is being shown instead of the one defined in my vhost, e.g.:

        <VirtualHost 12.0.0.1:443>
            ServerAdmin [email protected]
            ServerName blablabla.site.com
            DocumentRoot /data/sites/blablabla.site.com
            ErrorLog /data/sites/blablabla.site.com-error.log
            #CustomLog /data/sites/blablabla.site.com-access.log common
            SSLEngine On
            SSLCertificateFile /etc/httpd/conf/ssl/blablabla.site.com.crt
            SSLCertificateKeyFile /etc/httpd/conf/ssl/blablabla.site.com.key
            SSLCertificateChainFile /etc/httpd/conf/ssl/blablabla.site.com.ca-bundle
            <Location />
                SSLRequireSSL On
                SSLVerifyDepth 1
                SSLOptions +StdEnvVars +StrictRequire
            </Location>
        </VirtualHost>

    When I use the private IP in my vhost, everything works as it should (the website defined in my vhost is being shown), e.g.:

        <VirtualHost 192.168.0.1:443>
            ...same as above...
        </VirtualHost>

    My server is listening on all interfaces:

        [root@grbictwebp02 httpd]# netstat -tulpn | grep :443
        tcp   0   0 0.0.0.0:443   0.0.0.0:*   LISTEN   5585/httpd

    What am I doing wrong? If I cannot get this to work I cannot continue to add the second SSL certificate on the other public IP... If more information is required just let me know!
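
    A plausible explanation, given that the private-IP vhost works: if the public address is NATed to the private one by a firewall or load balancer, Apache never sees 12.0.0.1 on the local socket, so a <VirtualHost 12.0.0.1:443> block never matches and the default vhost wins. In that case each certificate would get its own private IP that a distinct public IP maps to. A hedged sketch (the second IP and hostname are made up for illustration):

        # one internal IP per certificate, since SNI is not available on Apache 2.2.3
        Listen 192.168.0.1:443
        Listen 192.168.0.2:443

        <VirtualHost 192.168.0.1:443>
            ServerName blablabla.site.com
            SSLEngine On
            SSLCertificateFile /etc/httpd/conf/ssl/blablabla.site.com.crt
            ...
        </VirtualHost>

        <VirtualHost 192.168.0.2:443>
            ServerName othersite.example.com
            SSLEngine On
            SSLCertificateFile /etc/httpd/conf/ssl/othersite.example.com.crt
            ...
        </VirtualHost>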

    Read the article

  • Screen scraper templates for various websites

    - by intuited
    I'm looking specifically for a convenient way to locally archive posts from this and other similar sites. I'd like to separate the question itself from the answers, or maybe crop the question and store it, keeping the page title. Obviously I don't need to store the menu or the various other bits of site interface chrome. The best way to do this would seem to be to associate an XSLT template with a match on the URL and use that template to pull out the various relevant pieces of information and format them. My two-part question: Is there a tool specifically built for this task? I.e. something that takes a URL and checks it against a map of path-matching expressions to templates, and outputs the result of applying the template to that resource? xmlto seems to be most of the way there, and could probably just be called from a script that does the pattern-matching, but something already integrated would be more convenient. Is such a URL_pattern-to-XSLT_template map publicly available somewhere? Question 2.5: Is it legal to do this with sites like this one that have public licenses on their content?
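
    Absent an integrated tool, the glue script is short. A hedged sketch that maps URL patterns to stylesheets and pipes the page through tidy so the XSLT sees well-formed XHTML; the .xsl files themselves are hypothetical, and you would still have to write one per site layout:

        #!/bin/sh
        # scrape.sh <url>: pick a stylesheet by URL pattern, fetch, tidy, transform
        url="$1"
        case "$url" in
            *superuser.com/questions/*)   xsl=superuser-question.xsl ;;
            *serverfault.com/questions/*) xsl=serverfault-question.xsl ;;
            *) echo "no template for $url" >&2; exit 1 ;;
        esac
        curl -sL "$url" | tidy -q -asxml --numeric-entities yes 2>/dev/null | xsltproc "$xsl" -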

    Read the article

  • Need to set up a proxy on Linksys E3200 to filter home internet

    - by Justin Amberson
    The fact that I have a Linksys E3200 may not be important. I can configure the router through the web interface, but I don't know what the things I will be toggling are called. I already do simple port forwarding to access applications on my Mac remotely, so router admin is not something I technically need explained. I'm looking to run a proxy on my home computer that filters all HTTP traffic that goes through my router. So if my daughter is on her iPad and accesses Safari, my Mac will be the judge of the validity of the request. I need something like NetNanny I guess, but local. Actually, anything that can just filter all port 80 traffic, that runs locally, but maybe validates with a password? I truly, truly hope this question falls within the bounds of Serverfault. I'm not a total internet newb but I'm at a loss for what to Google. If possible answer this question: Is there a webapp that can listen on port 80, and validate requests to port 80 with a password? If so, can I forward all traffic on port 80 to my Mac, to be re-routed to the user? Is this the same as a VPN? Thank you for your help. Justin
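
    This is not a VPN; the usual shape of it is a proxy such as squid running on the Mac. One caveat worth knowing up front: a transparent intercept (where the router silently forwards all port 80 traffic to the proxy) cannot prompt for a password, so the password idea really implies an explicit proxy that the iPad and other devices are pointed at in their network settings. A hedged squid.conf sketch, with install-dependent paths as placeholders:

        # squid.conf on the Mac: explicit proxy on port 3128 that requires a login
        auth_param basic program /usr/local/libexec/squid/basic_ncsa_auth /usr/local/etc/squid/passwd
        acl authed proxy_auth REQUIRED
        http_access allow authed
        http_access deny all
        http_port 3128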

    Read the article

  • Issue with a secure login - Why am I being redirected to the insecure login?

    - by mstrmrvls
    I'm having some issues getting a website working at my place of work. The issue was raised when a "double login" occurred from the secure login site. The second login was actually being prompted by the HTTP domain and not HTTPS. In essence the situation is like this:

    1) The user navigates to https://mysite.com/something
    2) The login prompt pops up
    3) Enter username and password
    4) The user is presented with ANOTHER login prompt (IE will say it's insecure, and the address bar reflects that)
    5) If the user puts in their password at the insecure prompt, they will log in to the insecure site. If they hit cancel, they are presented with a 401 page
    6) Navigating back to https://somesite.com/something will bypass the login prompt and log them in to the secure site automatically (a cookie maybe)

    I'm a bit confused as to why the user isn't being logged in properly the first time (redirected to non-SSL) but any subsequent login is okay. I've been trying to use Fiddler to see what is happening after the user puts in their password the first time, and trying to get Fiddler to automatically log in to the site (with no luck). I believe the website in question is using Basic Digest authentication. Thanks for any help

    Read the article

  • What causes this sonar sound on OS X?

    - by Richard Metzler
    Both of my Macs play this sonar sound that sounds like "ping ping ping ping" with a small amount of delay / echo. It seems to be played once a day but I'm not sure why. I checked iCal but didn't find anything (I don't use iCal anyway, but maybe it's connected to Google Calendar or my iPhone). I've heard this sound played by both my MacBook and my iMac, but not yet simultaneously. Update: This sound is not submarine.aiff. It sounds much more like what skub linked to, but there are 4 "pings" instead of 1. It is played at different times (today around 5pm and again at 8.45, but as far as I remember not every day). That's why I'm not sure I could record it, but I could try. The sound might come from my iPhone, though I'm not sure which apps are allowed to play sound when they are not running. Also, I don't see any indication in the message center or anything similar. I think I have to start taking notes on which apps are running.

    Read the article

  • Outlook 2010 search not working after upgrade to Windows 8

    - by Klaaz
    After upgrading my computer to Windows 8, Outlook 2010 has stopped displaying search results. Normally you can enter (part of) a word in the search box on top of the inbox list and it will show you results immediately. Even mails already visible on the screen are not found. Is anybody familiar with this issue? Update: maybe relevant: I use a Google Apps Pro account. All mail is synced and locally available in Outlook 2010. I did not change this in any way while upgrading; it was working perfectly before. I can scroll through all the e-mails, and new mails are coming in as expected. This morning I received two mails from a person by the name of Rosanne. Searching on her name in Outlook gives me one (1) result, the last mail from today. Update 2: Rebuilding the index seemed to be working, but after another day it stopped working again. No results whatsoever in Outlook search. Rebuilding indexes every day is not an option as it takes several hours. I suspect it has something to do with the fact that I use Google Apps Pro. It acts like an Exchange server to Outlook. In indexing options (configuration) I added the directories containing the PSTs from this service (mail is also synced locally).

    Read the article

  • How to stop Vista from auto changing video resolution?

    - by bialix
    I have a new Acer Aspire Revo R3600 computer with Vista pre-installed. The computer has an NVIDIA video adapter. When connected to a 17" LCD monitor (LG L1742S) via VGA cable it works fine, and I can change the resolution of the display from the maximum of 1920*1024 down to some other value, and after a reboot the settings are restored correctly. But when I connect a bigger full HD 1920*1080 display (LG E2250) via VGA cable, then on every boot I have the same problem: I see the boot progress window, then the MS logo, then the welcome screen, then I start to see the desktop, and suddenly the monitor switches off and shows me a message about an unsupported frequency of the input signal. As I understand it, Vista tries to auto-change the resolution and sets wrong parameters. I've tried to boot into safe mode and into low-resolution mode; every time I have the same problem: Vista boots up and suddenly the monitor stops working. I've tried connecting this monitor to a notebook with Windows XP and it has no problem working with this display at its native resolution. How can I disable this display resolution auto-changer in Vista? Or maybe there is another workaround?

    Read the article

  • Linux servers in a (primarily) Windows (AD) environment

    - by HannesFostie
    When I arrived at my current position, our environment consisted almost exclusively of Windows servers. However, I am a big fan of using Linux for certain applications, like the web gallery I was asked to set up, a simple SFTP server, Nagios for monitoring, etc. I do fine setting these up, but not being the Linux expert, I am not sure how to properly join these servers to the domain and was therefore wondering what procedures or guidelines other people follow. We often use ping -a to quickly figure out the hostname of a certain server, but this does not seem to work for the Linux machines, most likely because of the whole WINS/NetBIOS thing, I assume. I just joined one server to the domain, but probably missed something because it's not working even after a DNS flush. Besides that, the couple of procedures I've found so far are pretty extensive and most of the time don't seem worth the effort. Best case scenario, I download some kind of client (smbclient?), enter the domain name and maybe the server to use, supply an administrator password and that's it. Is that possible at all? Thanks
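
    The short path is usually Samba's winbind rather than smbclient: a few lines of smb.conf, one join command, and the box appears in AD (recent Samba versions also try to register its A record in AD DNS, which is what makes the ping -a style lookups start working). A hedged sketch with a placeholder realm/workgroup:

        # /etc/samba/smb.conf (minimal AD member server)
        [global]
            security  = ads
            realm     = CORP.EXAMPLE.COM
            workgroup = CORP
            winbind use default domain = yes

        # join the domain and verify winbind can see AD users
        net ads join -U Administrator
        wbinfo -u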

    Read the article

  • JBoss database connection pool configuration

    - by Qben
    I am facing a connection pool issue in my clustered JBoss installation. From time to time one of my connection pools will hit the roof and I get a lot of these in my logfile: java.sql.SQLException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] ); The odd thing is that I can see in the JMX console that the ConnectionCount hit the roof, but at the same time InUseConnectionCount is often quite small. The problem will resolve itself after a couple of minutes, but during the recovery phase my application will not work (for obvious reasons). The question is whether this indicates an error in the configured timeouts of the connections (I pretty much use defaults), or whether my pool is simply too small to handle the peaks. Under normal operation I would say I use ~40% of the configured max number of connections. The reason I just don't increase the max number of connections is that if I actually used up all connections I suspect that InUseConnectionCount would hit the roof. Hence I suspect I might have more issues than just a too-small pool size. Maybe InUseConnectionCount has decreased by the time I check the JMX console and it actually does hit the roof? I tend to collect data every second minute. Any hints are more than welcome.
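
    ConnectionCount pinned at the maximum while InUseConnectionCount stays low often points at connections that die or leak without being returned, rather than genuine demand. The pool knobs live in the datasource's *-ds.xml; a hedged fragment with illustrative values (not recommendations), including a validation query that helps the pool notice dead connections:

        <local-tx-datasource>
            <jndi-name>MyDS</jndi-name>
            <min-pool-size>10</min-pool-size>
            <max-pool-size>50</max-pool-size>
            <blocking-timeout-millis>30000</blocking-timeout-millis>
            <idle-timeout-minutes>5</idle-timeout-minutes>
            <!-- run before handing a connection out, so stale ones get dropped -->
            <check-valid-connection-sql>SELECT 1</check-valid-connection-sql>
        </local-tx-datasource>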

    Read the article

  • Read access to Active Directory property (uSNChanged)

    - by Tom Ligda
    I have an issue with read access to the uSNChanged property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNChanged property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNChanged property for some users (UserGroupA) and not for others (UserGroupB). When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference in the "Security" tab. The users in UserGroupA have the "Include inheritable permissions from this object's parent" option unchecked. The users in UserGroupB have that option checked. I also noticed that the users in UserGroupA are users that were created earlier. The users in UserGroupB are users created recently. It's difficult to quantify, but I estimate the dividing line in creation time between the users in UserGroupA and UserGroupB is about 6 months ago. What can cause user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security property actually the cause of the issue with read access to the uSNChanged property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNChanged property for all users when doing an LDAP search. I would also be OK if I could grant read access for that property to an AD group. Then I can control access by adding members to the group.

    Read the article

  • Throughput and why do ISPs sell too much bandwidth?

    - by jonescb
    I hope the question makes sense the way I worded it. :) I've been wondering: maximum theoretical bandwidth is measured as RWIN/RTT (window size / round trip time), Source 1 and Source 2. So if a major city only 100 miles away gives me a ping of 50ms, and I have the default 64kb TCP window size, then my maximum throughput will be 12.5Mb/s. Everything further away would give me a higher ping and therefore a lower throughput. Is there any reason to buy something like FiOS with a 50Mb/s or greater connection? Will you ever be able to reach that kind of speed? I know you can increase the TCP window size to increase throughput, but it has to be done at both ends, which is a deal breaker because you can't control the server. I'm assuming other network protocols like UDP aren't quite as affected by latency as TCP is, but how much of overall network traffic is non-TCP vs TCP? Am I just misguided about how throughput works? But if the above is correct, then why should a consumer like me buy way more bandwidth than can realistically be used? Maybe the only reason is for downloading multiple things at once, or one thing from multiple servers/peers?
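
    Spelling the arithmetic out (a true 64 KiB window actually works out a little lower than the 12.5 Mb/s quoted above):

        throughput = RWIN / RTT = 65,536 bytes / 0.050 s ≈ 1.3 MB/s ≈ 10.5 Mbit/s
        window needed for 50 Mbit/s at 50 ms RTT ≈ (50,000,000 / 8) B/s × 0.050 s ≈ 312 KB

    In practice modern stacks negotiate TCP window scaling (RFC 1323) automatically, so a single connection is not actually stuck at 64 KB; that, plus parallel downloads, is largely why the faster tiers remain usable.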

    Read the article

  • Did Adobe Photoshop just kill my graphics card for good?

    - by user6004
    I was working with Adobe Photoshop, just some regular work, when I came to edit a PSD file and change the text of some layer, and all of a sudden the PC froze. No mouse, the screen is frozen, keyboard strokes aren't getting me anything, no Task Manager, nada. So I rebooted my PC, and then something quite terrifying appeared before my eyes. It was not the Checkdisk utility being launched that terrified me (by the way, that reboot damaged the partition table of an external HDD that was connected at the time to my PC, but that's another story). It was the screen itself. Please have a look. So after Checkdisk finished and Windows loaded, I noticed that the resolution was not right. Instead of 1440x900, which I had set, it was 1280x1024. When I went to change it back, I had no option to change back to my old resolution, and had only 3 other generic resolution options, as if my video card (GeForce 8800 GTS btw) was not recognized. And what do you know, in the Device Manager it appeared with an exclamation mark. Inside the hardware properties, it said this: Windows has stopped this device because it has reported problems. (Code 43) Uninstalling the drivers, downloading the newest drivers from NVIDIA and installing them did not work. It always comes back to this. So, do you have any advice before I go out and buy a new graphics card? I thought this was the only option left, but maybe the experts at Super User can help me out. By the way, the dotted screen appears after every reboot, and I see the dots when the ASUS motherboard screen shows up at boot. Thanks in advance.

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment. I have a node.js server running which is serving either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node which is serving all static content (CSS, JS, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that varnish can cache static content quite well and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use redis at the moment for holding session data, and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more: Throw varnish in front of nginx and let varnish cache static pages; no app code changes. Redis would cache dynamic db calls, which would require modifying my app code. Ignore using varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files). I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
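
    A third option worth benchmarking alongside those two: nginx's own proxy_cache can hold the node.js responses in nginx itself, with no varnish in the stack and no app code changes beyond sane cache headers. A hedged sketch; the paths, port and timings are placeholders, and anything cookie-bound or socket.io-related would bypass the cache:

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app:10m max_size=1g inactive=10m;

        server {
            listen 80;

            location /static/ {
                root /srv/myapp/public;            # nginx serves these directly
                expires 1h;
            }

            location / {
                proxy_pass http://127.0.0.1:3000;  # the node.js app
                proxy_cache app;
                proxy_cache_valid 200 1m;          # cache OK responses briefly
                proxy_cache_bypass $http_cookie;   # skip the cache for cookie-carrying requests
            }
        }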

    Read the article

  • Copy a hard drive from a failed desktop machine using a second working one

    - by MrEyes
    Here's the scenario: I have PC-A, an old PC that runs Windows XP but now refuses to boot due to a failed motherboard (or maybe PSU). This PC has a single 80gb IDE drive. I also have PC-B, running Windows Vista; this is working fine. I want to copy all the data off PC-A's HDD onto PC-B. To do this I have taken the HDD out of PC-A and connected it as a slave to PC-B. PC-B now boots and sees the additional drive. However, when I attempt to access/copy user folders (i.e. Documents and Settings/[username]/*) I am told that I cannot access the folders due to user permissions. I am doing this under an administrator account on PC-B. So the question is, how can I "back up" the data? Preferably without making any changes to the drive contents. The reason for this is that it is possible that PC-A is failing due to a bad PSU, so I intend to replace it before writing off the machine. However, I would feel much happier if I had a backup of the data on the HDD.
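
    The block is NTFS permissions tied to the old machine's accounts. The stock Windows answer is to take ownership and re-grant access, sketched below with a placeholder drive letter and username, but note that this does write new ACLs to the old disk; if the drive must stay untouched, copying the files from a Linux live CD (which ignores NTFS ACLs) is the gentler route.

        REM run from an elevated command prompt on PC-B; E: is the slaved drive
        takeown /F "E:\Documents and Settings\username" /R /D Y
        icacls  "E:\Documents and Settings\username" /grant Administrators:F /T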

    Read the article

  • iptables rules for DNS/Transparent proxy with ip exceptions

    - by SlimSCSI
    I am running a router (A Netgear WNDR3700 if that matters) with dd-wrt. For content filtering I am using OpenDNS. I wanted to make sure a user could not bypass OpenDNS by putting in their own name servers, so I have a rule to catch all DNS traffic. iptables -t nat -A PREROUTING -i br0 -p all --dport 53 -j DNAT --to $LAN_IP I did have one computer on the network I wanted to allow past OpenDNS filters. On that machine I manually set the name servers, and created another rule to allow it to pass iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -j ACCEPT This worked well. Today, I installed a transparent proxy (squid) on the router and added these rules: iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT This also works, however the 192.168.1.2 address does not get routed through squid. How can I have 192.168.1.2 (and maybe others in the future) by-pass the port 53 rules, but not the port 80 rules?
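
    The usual fix is to make the exception rule match only DNS traffic, so the later port-80 DNAT still catches the same host; something along these lines (inserted ahead of the existing rules):

        # let 192.168.1.2 use its own name servers, but nothing else bypasses the proxy
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p udp --dport 53 -j ACCEPT
        iptables -t nat -I PREROUTING -i br0 -s 192.168.1.2 -p tcp --dport 53 -j ACCEPT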

    Read the article

  • 2.6.9 kernel on virtual server (non-upgradable) - any expected problems?

    - by chris_l
    Hi, I'm considering renting a virtual server (for me personally). The product I'm currently looking at offers IMO fair pricing, very good hardware, etc. The only problem is that I won't be able to upgrade to a newer kernel than 2.6.9 (running Debian Etch). Also, I can't install my own kernel modules. (The server runs with Virtuozzo, so as far as I understand it, it just does some chroot instead of real virtualization (?)) I want to run GlassFish, Postgres, Subversion, Trac and maybe some other things on it. It will also have to employ a firewall, and provide OpenSSL for https. Ideally, it would also be able to do AIO (asynchronous I/O), which could speed up some server I/O. Should I expect problems with that old kernel version in conjunction with the software I want to install (I'd like to use current versions of the software)? One thing I already found out is that you can't do everything with iptables, since some kernel modules are missing / things are not built into the kernel. GlassFish v3 appears to run fine at first glance. I was able to test the server for a few hours. Installing my whole setup wasn't feasible in that time, but what I can say is that it's amazingly fast for an entry-level vserver, especially hard disk and network performance (averaging ca. 400MBit/s). So if the kernel won't be a problem, I'd really like to take it. Thanks, Chris PS Exact kernel version: 2.6.9-023stab051.3-smp

    Read the article
