Search Results

Search found 17953 results on 719 pages for 'someone someoneelse'.


  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the Pagespeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster. I'm trying to speed things up a little more, so I've also installed Memcached and Batcache. I've installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, and after that I've installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. I've also included the line define('WP_CACHE', true); in the wp-config.php file. It doesn't seem to work, though. If I quickly reload the page several times, Batcache should serve the cached page, but it doesn't. It's easy to check by reloading the page several times (Cmd+R in Chrome on OS X) and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I could add some other component to push performance even further. I'm thinking about Varnish, but I'm not sure whether it would help or whether it's just another way of doing what I'm already doing. Any other component worth adding? (I'll test a CDN for images, minifying JS, and some other tricks as well, but I'm asking from the server perspective.)
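
    A quick way to test from a shell instead of the browser (a sketch; the URL is a placeholder): fetch the page twice so the second hit can be served from cache, then look for the stats comment Batcache writes into the page source:

      # first request primes the cache, second should show the debug comment
      curl -s http://blog.example.com/ > /dev/null
      curl -s http://blog.example.com/ | grep -i -E 'batcache|generated'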

    Read the article

  • Why is my Mac not displaying anything to my LCD tv using HDMI?

    - by Pure.Krome
    Hi folks, I've got an iMac desktop computer. Love it. I wish to connect it to my LCD TV using HDMI. There is no HDMI output on the iMac, so I had to buy one of those Mini DisplayPort-to-HDMI adapter boxes: now I can output video (via the Mini DisplayPort) and sound (via USB) through this box to my LCD. Works great ... with a single direct cable. I also have a 3 or 5 metre cable run inside my wall, so I don't have a silly HDMI cable floating in the air between my iMac and my LCD TV. When I use that, there is no picture. To better explain all of this, I made a quick video describing my problem in detail, so you can see exactly what is going on/wrong. I've also tried changing the output format for the TV from 1080i down to 720p and even lower, in case the cable in the wall doesn't support 1080i. Here's the video with the full explanation: http://www.youtube.com/watch?v=ZkKRKnRIh6Q (NOTE: I incorrectly said in the video that the hidden wall cable is 10 metres long. me == fail. It's 3m or 5m...). Can someone please watch it and suggest some ideas for getting it working?

    Read the article

  • Trouble Downloading from some sites

    - by Fletch
    I am trying to download the new Microsoft Security Essentials, but when I click the Download button, instead of the download box popping up, nothing comes up. The progress bar at the bottom shows it doing something, then when it reaches 100%, nada. I can download from HP (drivers) and sites like MajorGeeks with no problem. I also have this problem on the Adobe download page when trying to get the Shockwave and Flash players. I am fixing my granddaughter's laptop, which she got from someone else. There were over 26 trojans listed on it when I installed AVG, and they would not go away. I used CCleaner and HijackThis, deleted everything I could, and wiped the free space. Then I ran AVG again, and this time, after finding a few trojans and deleting them, the system was reported as clean. IE8 then would not connect to the net, so I used my computer to download a copy and put it on the laptop; after that I was able to use the laptop to connect to the net and download a driver to get the sound working again. Laptop: HP dv4000, XP Pro.

    Read the article

  • Why is it a bad idea to use a customer email as the from address

    - by Crab Bucket
    I've got an application that emails users once they have filled in a form. It uses [email protected] as the from address. The customer wants it to use the email from the form as the from address, which could be anything. I have been told that this is a bad idea due to spoofing/blacklisting and spam. I feel really vague about the exact reason why this is a bad idea, particularly as I've got to try to counsel the client out of it. Can someone explain to me why this is a bad idea? Interestingly, the client used a Gmail account as the from address in a demo, which not only worked fine but enabled the application to start sending emails at all (it wouldn't do it before with an email that was [email protected]). Erm - what is going on? I'm told one thing and the opposite works. Sorry - I know this is basic, but I couldn't find anything on a Google search; largely, I think, because I'm having trouble even framing the question. EDIT: Thank you everyone - great answers. Interestingly, the server sending the email and the mailbox it is going to are both behind the same firewall, so the client says they are unconcerned about spam. Oh well.
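
    The usual compromise, sketched below assuming a PHPMailer-style library (the variable names and include path are placeholders): keep From on a domain whose SPF/DKIM you control, so receivers can verify it, and carry the customer's address in Reply-To, so replies still reach them:

      <?php
      require 'class.phpmailer.php';   // path is an assumption

      $mail = new PHPMailer();
      // From stays on a domain you control and have SPF/DKIM set up for
      $mail->From     = 'noreply@yourcompany.example';
      $mail->FromName = 'Your Company';
      // the address typed into the form goes only into Reply-To
      $mail->AddReplyTo($formEmail);
      $mail->AddAddress($customerEmail);
      $mail->Subject = 'Your form submission';
      $mail->Body    = 'Thanks - we received your form.';
      $mail->Send();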

    Read the article

  • KVM Guest Reboot Loop

    - by javano
    I have been pulled into a situation where a KVM server (CentOS 6.2) lost power, and upon reboot one of the guests hasn't started up again (XP SP3). I have SSH'ed in, and someone must have changed something relating to the hypervisor prior to the power loss but not rebooted all the guests. This particular guest wouldn't start because it was configured to use /usr/bin/qemu-system-x86_64, which isn't there now (assuming it was before?). I changed it to use /usr/libexec/qemu-kvm, as this is what all the other guests on this server seem to be using, and it's booting up. Using virt-manager on my local machine I can connect to the display of the XP machine, and it gets as far as this screen: http://support.gateway.com/emachines/issues/2-1131285152-01.gif The problem I face now is that whichever option I choose, the machine just reboots, so it's an endless loop. I thought that perhaps a file system error may be present due to the unclean shutdown. There is an XP SP3 ISO mounted under the guest, which I booted from in an attempt to access the recovery tools, but I don't have the Administrator password! I am out of ideas, and it's turning out to be quite the conundrum. Should I use a third-party live CD to test the FS for errors? How else can I troubleshoot these restarts?
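
    A sketch of some first checks on the storage side, before fighting the recovery console; the guest name and image path are placeholders, and qemu-img check is only meaningful for qcow2 images:

      virsh dumpxml xp-guest | grep 'source file'     # locate the disk image
      qemu-img check /var/lib/libvirt/images/xp.img   # qcow2 only; no-op for raw
      # For the NTFS volume inside, attach any Linux live-CD ISO to the guest
      # (no Administrator password needed) and run something like:
      #   ntfsfix /dev/sda1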

    Read the article

  • How to have LiveJournal delegate my OpenID to something else?

    - by T-Boy
    I understand that if I have full control over my domain, I can set it up to delegate the task of authentication to another OpenID service provider. The problem is that what I'd like to do is get the LiveJournal server to pass authentication to someone else, instead of having LJ do it. Preferably I'd like LiveJournal, when asked by a website, to say, "No, I don't do it anymore -- go to this address". The plan was that this address would be in a domain I fully control, which would then pass it on to whichever service provider I choose. I don't even know if I've got my understanding of OpenID right, if all these shenanigans are necessary, if my question makes sense, or if it's even possible with a service provider like LiveJournal. ETA: Doing a little more reading, and examining the source of my LiveJournal user page, I note this particular line in the page's <head> section: <link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" /> I suspect that changing this would let me forward OpenID requests to whomever I wish; so far so good. Now comes the hard part -- figuring out how to change that using LiveJournal's customization options, if that is at all possible (here's hoping I don't need to pay to get that functionality).
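
    For reference, the delegation half on a domain you control is just two link tags in that page's <head>; provider.example and the identity URL are placeholders for whichever provider you pick:

      <!-- OpenID 1.1 delegation -->
      <link rel="openid.server"   href="https://provider.example/openid/server" />
      <link rel="openid.delegate" href="https://yourname.provider.example/" />
      <!-- OpenID 2.0 equivalents -->
      <link rel="openid2.provider" href="https://provider.example/openid/server" />
      <link rel="openid2.local_id" href="https://yourname.provider.example/" />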

    Read the article

  • How to go about rotating logs which are arbitrary named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each developer has a directory under /tmp where he is free to do whatever he wants - store files, write logs, etc. Of course, the logs have to be rotated, or else the disk will be 100% full within a week. The files can be plentiful, and I've dealt with that with paths like /tmp/[a-e]*/* and so on and lived happily for a while, but as they try new cool stuff on the machine, the logrotate rules grow ugly and unmanageable, and it's getting harder to understand which files a glob hits. Also, logrotate will segfault if asked to rotate a socket. I don't feel like trying to enforce naming policies in that environment; I think it would take quite a lot of time, get people annoyed, and still fail at some point. And I still need to manage the logs, not just rm the dirs at night. So is it a good idea, in circumstances like these, to write a script to handle these temporary files (a sketch follows below)? I prefer sticking with standard utilities whenever possible, but here I think logrotate is getting less and less manageable. And perhaps someone has heard of a logrotate alternative which would work well in such an environment? I don't need emailed logs or other advanced features, so theoretically some well-commented find | xargs would do. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.
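
    A minimal sketch of that find | xargs route, assuming the per-developer directories sit directly under /tmp and that any regular file matching *log* and quiet for a day is fair game (both globs are placeholders to tune):

      #!/bin/sh
      # compress day-old log-ish regular files; -type f skips the
      # sockets that make logrotate segfault
      find /tmp/*/ -type f -name '*log*' ! -name '*.gz' -mtime +0 -print0 \
          | xargs -0 -r gzip -9
      # prune compressed rotations after a week
      find /tmp/*/ -type f -name '*.gz' -mtime +7 -print0 \
          | xargs -0 -r rm --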

    Read the article

  • How do I change the default ftp folder in MacOS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 on a Mac running 10.6.3. WordPress is installed in the /Library/WebServer/Documents folder. WordPress has a feature called AutoUpdate: clicking an AutoUpdate button will download and install updated versions of the WordPress software or of third-party plugins. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP, set up a user account, and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone with access to a remote shared web server, and that this is merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is: how do I adjust which folder is presented when a given user connects to a Mac running 10.6.3 via FTP?
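
    A side note rather than an answer to the FTP question itself: on a self-hosted box, WordPress can bypass FTP for AutoUpdate entirely. A minimal sketch, assuming the web server user can write to the WordPress tree under /Library/WebServer/Documents - add to wp-config.php:

      // let WordPress write updates to disk directly instead of via FTP
      define('FS_METHOD', 'direct');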

    Read the article

  • Error on LDAP Login - xsessions error - Session lasted less than 10 seconds

    - by Draineh
    I have two machines, both running CentOS 5.6 64-bit. The first machine runs DHCP, BIND, and an OpenLDAP server. LDAP is correctly configured, and users can authenticate against it. Using root, I configured machine 2 to use LDAP for authentication; when logging in, it successfully authenticates against a saved user on the LDAP server, but produces the following errors and then throws me back to the login screen. I can still sign in as root and use the machine as normal. The syslog doesn't show any errors, and I disabled SELinux to see if it was interfering. The error:

      Your session only lasted less than 10 seconds. If you have not logged
      out yourself, this could mean that there is some installation problem
      or that you may be out of disk space. Try logging in with one of the
      failsafe sessions to see if you can fix this problem.

    There is then a tickbox to view the contents of ~/.xsession-errors, which contains:

      /etc/gdm/PreSession/Default: Registering your session with utmp
      /etc/gdm/PreSession/Default: running: /usr/bin/sessreg -a -u /var/run/utmp -x "/var/gdm:0:Xservers" -h "" -l ":0" "admin"
      localuser:admin being added to access control list
      No profile for user 'admin' found
      /bin/sh: /usr/bin/dbus-launch --exit-with-session /etc/X11/xinit/Xclients: No such file or directory
      /bin/sh: line 0: exec: /usr/bin/dbus-launch --exit-with-session /etc/X11/xinit/Xclients: cannot execute: No such file or directory

    Apologies if something isn't spelt quite right; the system never actually creates or saves this file, so I have had to type it across from the screen. Through the authentication panel in CentOS on the client I have set it to create users' home directories on login. The user is correctly authenticated and the /home/admin folder has been created, but this error would suggest otherwise? The client is a new install on an 80 GB hard drive, so well over 80% of the drive is still available. Thanks for any suggestions or pointers.
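
    Reading the excerpt, the LDAP part has already succeeded; what kills the session looks like the missing /etc/X11/xinit/Xclients (and possibly /usr/bin/dbus-launch). A sketch of how to confirm and fix on CentOS 5 - the package names are the usual suspects, verified by the yum provides step:

      ls -l /etc/X11/xinit/Xclients /usr/bin/dbus-launch
      yum provides '*/X11/xinit/Xclients' '*/dbus-launch'
      # on CentOS 5 these normally come from:
      yum install xorg-x11-xinit dbus-x11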

    Read the article

  • explanation of RAM specs, and what do I need for a Gaming rig

    - by ewok
    I am looking into upgrading my custom-built PC's RAM. I use the machine mostly for gaming, but I don't really know a ton about RAM, so I wanted to ask a few questions. The research I've done tells me there is a negligible increase in speed for anything above 1600 MHz. Is this true, or is it worth the extra money to go higher? Other than drawing more power from the PSU, is there any real difference in performance between voltages (1.5V vs 1.65V)? Most of the 2x4GB 1600 MHz kits I've found have a CAS latency of 9 and timings of 9-9-9-24. For a significant increase in price (usually about 1.5x), I can get a CAS latency of 8 or 7 and tighter timings. Is it worth the cost? What I am looking for here is a good explanation of what the different specs represent and how they relate to the performance of the machine. Specifically, I'm looking for which specs to focus on for a good gaming rig. I am NOT looking for a "buy this, it's the best RAM" answer without an explanation of why. The information will be much more valuable, as it will allow me to make my own informed decision. As they say: give a man a fish and he'll eat for a day; teach a man to fish and he'll eat for the rest of his life.
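
    For what it's worth when weighing those kits: CAS latency only means something relative to the clock, so converting to nanoseconds makes kits at different speeds directly comparable. Roughly:

      latency (ns) = 2000 x CL / (transfer rate in MT/s)

      DDR3-1600 CL9: 2000 x 9 / 1600 = 11.25 ns
      DDR3-1600 CL7: 2000 x 7 / 1600 =  8.75 ns
      DDR3-2133 CL9: 2000 x 9 / 2133 =  8.44 ns

    So a faster-clocked kit at a looser CAS can match or beat an expensive low-CAS kit, and the differences above sit in a range games rarely notice.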

    Read the article

  • Using awk to split text file every 10,000 lines

    - by Sneaky Wombat
    I have a large gzipped text file. I'd like to do something like:

      zcat BIGFILE.GZ | awk (snag 10,000 lines and redirect to...) | gzip -9 > smallerPartFile.gz

    In the awk part, I basically want it to take 10,000 lines, send them to gzip, and then repeat until all lines in the original input file are consumed. I found a script that claims to do this, but when I run it on my files and then diff the original against the parts split and merged back together, lines are missing. So something is wrong with the awk part, and I'm not sure which part is broken. Here's the code. Can someone tell me why it doesn't yield a file that can be split, merged, and then diffed against the original successfully?

      # Generate files part0.dat.gz, part1.dat.gz, etc.
      # Restore with: zcat foo* | gzip -9 > restoredFoo.sql.gz (or something like that)
      prefix="foo"
      count=0
      suffix=".sql"
      lines=10000   # Split every 10000 lines.

      zcat /home/foo/foo.sql.gz |
      while true; do
          partname=${prefix}${count}${suffix}

          # Use awk to read the required number of lines from the input stream.
          awk -v lines=${lines} 'NR <= lines {print} NR == lines {exit}' >${partname}

          if [[ -s ${partname} ]]; then
              # Compress this part file.
              gzip -9 ${partname}
              (( ++count ))
          else
              # Last file generated is empty, delete it.
              rm -f ${partname}
              break
          fi
      done
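
    The missing lines are consistent with stdio read-ahead: each awk invocation buffers more of the pipe than the 10,000 lines it prints, and the surplus dies with the process. A sketch of a one-pass alternative, assuming a GNU split new enough (coreutils 8.13+) to have --filter:

      zcat /home/foo/foo.sql.gz \
          | split -l 10000 -d --filter='gzip -9 > $FILE.sql.gz' - part

    Each chunk lands in part00.sql.gz, part01.sql.gz, ...; zcat part*.sql.gz should then diff clean against the original.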

    Read the article

  • Computer hangs at BIOS screen. Cannot enter setup

    - by d2jxp
    I have an HP Pavilion a6500f (a year out of warranty) that hangs on the blue HP BIOS screen. If I mash F10 while it's starting up, it will say "Entering Setup..." but show nothing; it hangs there and does nothing more. If I wait until I can see the screen and then hit F10, there's no response at all, and the computer sits at the BIOS screen. I've dusted and cleaned it out, reseated the memory, switched the RAM slots, and reset the CMOS using the reset jumper. I'm out of ideas. I'm pretty sure it's not a hard drive issue, since my problem is at the BIOS. After this post, I'll disconnect the hard drive and try to boot without it. Anyone have any other ideas? Edit: Okay, so I tried disconnecting the hard drive, and now I can get back into the BIOS. I reconnected it and I'm locked out again. So the problem is my hard drive. I guess I should delete this post unless someone has ideas as to what's wrong with the drive?

    Read the article

  • File locked / read-only

    - by oshirowanen
    On a networked computer, I have a file which comes up as read-only because someone else supposedly has it open. This is not true: the file is stored locally on the computer and is not being used by anyone else. I can log in to the same computer as a different user, and the file opens fine; I only get the issue with one particular user account. Other than deleting that account/profile and creating it again, how can I unlock this file? Double-clicking the file gives me a message saying "The file is locked for editing by another user, or the file (or the folder in which it is located) is marked as read-only, or you specified that you wanted to open this file read-only." I don't think the folder is locked, because I can use the other files in that folder fine; it's just this one file causing the issue. I know that only one user is using this file, as the file is on his C: drive, and the same file works fine if he logs off to let another user log in.
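
    That wording is the Microsoft Office lock dialog, so one guess worth checking: an orphaned hidden owner file (~$something) left beside the document by a crash under that user's profile. A sketch, with the path a placeholder:

      :: run in cmd.exe as the affected user
      dir /a:h "C:\path\to\folder\~$*"
      :: if a stale owner file for that document appears, delete it:
      del /a:h "C:\path\to\folder\~$TheFile.xlsx"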

    Read the article

  • Puppet write hosts using api call

    - by Ben Smith
    I'm trying to write a Puppet function that calls my hosting environment (Rackspace Cloud, at the moment) to list servers, then updates my hosts file. My get_hosts function is currently this:

      require 'rubygems'
      require 'cloudservers'

      module Puppet::Parser::Functions
        newfunction(:get_hosts, :type => :rvalue) do |args|
          unless args.length == 1
            raise Puppet::ParseError, "Must provide the datacenter"
          end
          DC = args[0]
          USERNAME = DC == "us" ? "..." : "..."
          API_KEY  = DC == "us" ? "..." : "..."
          AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
          DOMAIN   = "..."

          cs = CloudServers::Connection.new(:username => USERNAME,
                                            :api_key  => API_KEY,
                                            :auth_url => AUTH_URL)
          cs.list_servers_detail.map {|server|
            server.map {|s|
              { s[:name] + "." + DC + DOMAIN =>
                { :ip => s[:addresses][:private][0], :aliases => s[:name] } }
            }
          }
        end
      end

    And I have a hosts.pp that calls this and 'should' write it to /etc/hosts:

      class hosts::us {
        $hosts = get_hosts("us")
        hostentry { $hosts: }
      }

      define hostentry() {
        host { $name:
          ip           => $name[ip],
          host_aliases => $name[aliases],
        }
      }

    As you can imagine, this isn't currently working, and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine that once I've realised what I'm doing wrong, there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?
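
    The error itself comes from $name[ip]: Puppet's DSL has no symbols, so data handed back to it needs string keys. A sketch of one fix, assuming get_hosts is reshaped to merge everything into a single hash of the form { 'web1.usdomain.com' => { 'ip' => '10.1.2.3', 'host_aliases' => 'web1' } } (string keys throughout, hostname and address illustrative); the built-in create_resources (Puppet 2.6+) then replaces the custom define entirely:

      # hosts.pp - each hash entry becomes one host resource,
      # with the inner hash supplying its parameters
      class hosts::us {
        $hosts = get_hosts('us')
        create_resources(host, $hosts)
      }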

    Read the article

  • MySQL (local) owner and permissions

    - by Steve Nelson
    I asked this question on the MySQL forums and got no answer. I asked on StackOverflow and received a recommendation to try on ServerFault. So here I am. I recently successfully installed the 64 bit version of mysql-5.5.8 on a MacBook Pro in the /usr/local directory. To address a completely unrelated software (RVM actually) , I chown-ed my /usr/local directory to $USER, Which made MySQL very unhappy. It complained specifically about the /usr/local/mysql/data directory, so I chown-ed that directory to _mysql:wheel. Everything appears to work again, but it made me wonder if I would have been better off changing the owner of the whole /usr/local/mysql directory, not just the data subdirectory. Since I neglected to make notes of what owner the default installation runs under before rashly changing the owner of the /usr/local directory, could someone tell me what owner and permissions the /usr/local/mysql directory is by default if you don't inadvertently screw it up? :-/ In terms of permissions I'm guessing rwxr-xr-x would be appropriate (that's what the data directory currently has and it appears to be working fine), but reinforcement for that hunch would be appreciated. Thanks for any help. Steve

    Read the article

  • SPF for two different outgoing servers?

    - by Marcus
    I have run into a problem that I think someone should have a really clever answer for. Today we have our own mail server, "mail.domain.com", which we use to send mail to our customers (with a modified PHPMailer script) - usually around 5,000 mails every day. Everything from customer support to invoices goes through there. The from header is set to "[email protected]". We are now thinking of migrating to Google Apps for internal use (with 70+ users). However, we cannot use Gmail's SMTP for sending "bulk" mail (there is a limit of 500 outgoing mails per day), so we really want to keep using our current system for sending automated mail to our customers - and use Gmail's SMTP internally. So, how do we set up our SPF (Sender Policy Framework) records for this? We do not want to get stuck in any filters for "spoofing" the sender from either type of account (mail sent from our own server, and mail sent through Gmail). In short: we want to be able to use the same email address (for sending) on two different SMTP servers, and therefore two different IP addresses. Is this even possible, and how should we go about it? Anything else we should think of when switching to Google Apps?
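
    For reference, this is exactly the case SPF's syntax handles: one TXT record listing every legitimate source. A sketch with placeholder values - 203.0.113.25 standing in for your own mail server's address, and include:_spf.google.com being Google's published include for Google Apps:

      ; one record authorizes both senders
      domain.com.  IN  TXT  "v=spf1 ip4:203.0.113.25 include:_spf.google.com ~all"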

    Read the article

  • What Windows app can sort a huge XML file?

    - by Torben Gundtofte-Bruun
    I have some enormous XML-based configuration files, with 125,000 lines in them. The problem is that they are auto-generated by the system I use, and "child" tags appear in a random order within their respective parent tags. This means that a diff comparison is impossible. I want to recursively sort all tags within a parent tag by the value of their name="" attribute. Some parent tags only appear once and don't have a name="" attribute; these should be sorted by the tag name itself. Once the files are sorted like this, they can be compared quite easily using normal tools. We are currently using ExamXML, which can match unsorted XML files, but it fails because the files are too big. Is there an application that can do this? (Windows much preferred; Linux only as a last resort.) I do not want to dive into development or XSLT jobs. I am thinking that someone must have made a simple sorting tool like this already - I just can't find it using Google. Update: With help from this site, I created a small package that I want to share: XML-Sorter_v0.3.zip Update: Follow-up question here.

    Read the article

  • Site name questions [closed]

    - by avenas8808
    I'm creating a website about German cars. No issues with the site itself: it's basically a PHP site, and let's say for the sake of argument there are no server issues. However, I plan to call it "autohaus" (well, for the sake of this scenario anyway; it's not its actual name). The site is not for commercial gain; it's a fansite. Legally, can I use a name for a website if it's a commonly used word, as "autohaus" appears to be? There are other sites called "Autohaus", but could someone use a similar name for non-commercial, informational/educational use? Basically, what's the trademark law for such a situation? Currently my site is only on localhost, but what if I did want to release it to the web? Obviously the legal issues have to be considered before the site goes live, but there's no Data Protection Act involvement, since it's not selling anything and no one's buying anything, and the pictures on it are being used educationally (but that's for another question entirely). [NOTE: This question is about trademarks/names etc., not domains.]

    Read the article

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: there are now two possible URLs for the same resource: http://www.example.com/foo/bar/ (slash URL) and http://www.example.com/foo/bar/index.html (index.html URL). By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this: location ~ /index\.html$ { return 404; } But it seems that Nginx does an internal rewrite of the slash URL to the index.html URL and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
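
    One sketch that turns that internal rewrite to your advantage: nginx's internal directive restricts a location to internally redirected requests and answers external ones with exactly the 404 you want, so the slash URL (served via the index rewrite) keeps working while a direct request for .../index.html fails:

      location ~ /index\.html$ {
          # reachable only via nginx's own index rewrite;
          # direct external requests get a 404
          internal;
      }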

    Read the article

  • Managing SharePoint permissions via Active Directory?

    - by rgmatthes
    My company has thousands of employees organized thoroughly via Active Directory. I have confidence in the accuracy of the Department and Title information displayed in the user profiles. I'm helping to put up a brand new SharePoint 2007 site, and I contacted IT about managing the site's permissions through AD Groups. The goal is to have the site automatically assign read/write/contribute/whatever permissions based on the information in AD. For example, we could create an AD Group called "Managers" that would contain anyone with the "Manager" title in their AD user profile. I would have SharePoint tap into this AD Group to mass assign permissions if I knew all managers would need a certain level of access (read/write/contribute/whatever). Then if a manager joins the company or leaves it, the group is automatically updated (provided AD gets updated, of course). My IT rep called back and said it couldn't be done. This seems like a pretty straightforward business requirement, and one of the huge benefits of having Active Directory, but maybe I'm mistaken. Could anyone shed some light on this? A) Is it possible to use dynamically-updated AD Groups when assigning permissions via SharePoint? (Does anyone know of a guide I could show my doubtful IT rep?) B) Is there a "best practice" way to go about this? I've read some debate on whether SharePoint Groups or AD Groups are the way to go. My main concern is dynamic updating. C) If this isn't available out of the box, can someone recommend third-party software that will provide the functionality I'm looking for? A big thanks to anyone who can help me out!!

    Read the article

  • VBA Solution to VLOOKUP with Hyperlinks

    - by Emily2
    I am looking for some help with a VBA solution for preserving hyperlinks when using VLOOKUP in Excel (2010). I have a load of data on Sheet 1 for internal use only, and a cut-down version of it on Sheet 2. Instead of recreating Sheet 2 every time, I want a working version which updates whenever Sheet 1 is updated. Thus, I have used VLOOKUP on Sheet 2 so that only the desired info is returned there. However, the problem was that Sheet 1 contained hyperlinks to external websites in many cells, and these would not pull through to Sheet 2 using VLOOKUP. With some help, however, using the following VBA solution, the hyperlinks now pull through:

      Function GetHyperLink(r As Range) As String
          If r.Hyperlinks.Count Then
              GetHyperLink = r.Hyperlinks(1).Address
          End If
      End Function

    And I am using the following formula in the relevant cell(s) on Sheet 2:

      =HYPERLINK(GetHyperLink(INDEX('Sheet 1'!$B$1:$B$10001,MATCH(A4,'Sheet 1'!$A$1:$A$10001,0))),(VLOOKUP(A4,'Sheet 1'!$A$1:$B$10001,2,FALSE)))

    However, the problem is with formatting: every cell on Sheet 2 is formatted blue and underlined, even though some of them do not contain a hyperlink! Is someone able to help with a VBA solution/formula to fix this last piece of the puzzle? Many thanks, in anticipation.
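
    A sketch of one way to clean the formatting up, assuming the layout implied by the formula above (keys in A4 down, formulas in column B, data in 'Sheet 1'!A:B; the B4:B100 range is a placeholder to adjust): re-use GetHyperLink to decide, cell by cell, whether the hyperlink look is deserved:

      Sub FormatLinkCells()
          Dim c As Range, m As Variant
          For Each c In Worksheets("Sheet2").Range("B4:B100")
              ' locate the Sheet 1 row this Sheet 2 row points at
              m = Application.Match(c.Offset(0, -1).Value, _
                                    Worksheets("Sheet 1").Range("A1:A10001"), 0)
              If Not IsError(m) Then
                  If Len(GetHyperLink(Worksheets("Sheet 1").Cells(m, 2))) > 0 Then
                      ' real link: apply the hyperlink look
                      c.Font.Color = RGB(0, 0, 255)
                      c.Font.Underline = xlUnderlineStyleSingle
                  Else
                      ' no link: restore plain formatting
                      c.Font.ColorIndex = xlColorIndexAutomatic
                      c.Font.Underline = xlUnderlineStyleNone
                  End If
              End If
          Next c
      End Sub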

    Read the article

  • Many different BSOD

    - by Exa
    I've been experiencing multiple bluescreens for a couple of months now. Their error codes are as diverse as their times of occurrence... Sometimes it happens during gaming, sometimes when watching videos, sometimes when the computer is idle. These are the bluescreens I see most often:

      PAGE_FAULT_IN_NONPAGED_AREA
      KMODE_EXCEPTION_NOT_HANDLED
      IRQL_NOT_LESS_OR_EQUAL
      SYSTEM_SERVICE_EXCEPTION
      SYSTEM_THREAD_EXCEPTION
      INTERRUPT_EXCEPTION_NOT_HANDLED
      DRIVER_IRQL_NOT_LESS_OR_EQUAL
      DRIVER_OVERRAN_STACK_BUFFER

    Responsible drivers (according to the memory dumps): hal.dll, tcpip.sys, dxgmms1.sys, ndis.sys, mouhid.sys, atikmdag.sys, dump_atapi.sys - and of course ntoskrnl.exe. My first thought was a driver incompatibility, because I am using Windows 8 and some of the bluescreens seem to come from driver issues. All drivers are up to date. I'm afraid that my memory is broken, or the mainboard, or both. I used the built-in Windows memory test, which didn't find any errors; Memtest86 found some. Does it make sense to buy new memory? Couldn't it be a problem with the board as well? I also read that my memory could be running at too low a voltage, but it's set to 1.5V as recommended. Another guess would be to set the memory's latencies manually, but how do I know which ones to try? Here is a screenshot of BlueScreenView showing the latest bluescreens. Maybe someone has faced the same behavior before and found a solution. Any ideas or suggestions? Current setup on which the bluescreens occur: Windows 8 RTM (6.2.9200), ASRock 970 Extreme4, AMD FX-8150, ATI Radeon HD5850, 16 GB RAM (DDR3-1800), latest drivers for all devices.

    Read the article

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

      Warning: The file properties have changed:
      [15:52:25] File: /bin/ps
      [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
      [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
      [15:52:25] Current inode: 142902    Stored inode: 130894
      [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:33] File: /usr/bin/ldd
      [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
      [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
      [15:52:33] Current inode: 2236210    Stored inode: 2234359
      [15:52:33] Current size: 5280    Stored size: 5279
      [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
      [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

      Warning: The file properties have changed:
      [15:52:37] File: /usr/bin/pgrep
      [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
      [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
      [15:52:37] Current inode: 2229646    Stored inode: 2224867
      [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:41] File: /usr/bin/top
      [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
      [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
      [15:52:41] Current inode: 2229629    Stored inode: 2224862
      [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
      [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

      Warning: The file properties have changed:
      [15:52:53] File: /usr/sbin/cron
      [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
      [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
      [15:52:53] Current inode: 2224719    Stored inode: 2228839
      [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and do monitor failed SSH logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives... Edit: Is there something else like rkhunter I can run for a second scan?
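
    Those five binaries (ps, top, and pgrep from procps, ldd from libc-bin, cron from cron) are exactly what routine security updates replace, and the quoted modification times look like package-update timestamps rather than anything attacker-shaped, so false positives from a recent apt upgrade are the likely story. A sketch of how to cross-check before trusting them, with package names assumed from the standard Ubuntu layout:

      # verify on-disk files against the shipped package checksums
      sudo apt-get install debsums
      debsums -c procps libc-bin cron
      # compare with what apt actually upgraded, and when
      grep -h upgrade /var/log/dpkg.log*
      # if it all lines up, refresh rkhunter's baseline so the
      # warnings disappear until something really changes:
      sudo rkhunter --propupd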

    Read the article

  • Does SNI represent a privacy concern for my website visitors?

    - by pagliuca
    Firstly, I'm sorry for my bad English; I'm still learning it. Here goes: When I host a single website per IP address, I can use "pure" SSL (without SNI), and the key exchange occurs before the user even tells me the hostname and path he wants to retrieve. After the key exchange, all data can be exchanged securely. That said, if anybody happens to be sniffing the network, no confidential information is leaked* (see footnote). On the other hand, if I host multiple websites per IP address, I will probably use SNI, and therefore my website visitor needs to tell me the target hostname before I can provide him with the right certificate. In this case, someone sniffing his network can track all the website domains he is accessing. Are there any errors in my assumptions? If not, doesn't this represent a privacy concern, assuming the user is also using encrypted DNS? Footnote: I also realize that a sniffer could do a reverse lookup on the IP address and find out which websites were visited, but the hostname travelling in plaintext through the network cables seems to make keyword-based domain blocking easier for censorship authorities.

    Read the article
