Search Results

Search found 11197 results on 448 pages for 'related'.

  • How does IPv6 subnetting work and how does it differ from IPv4 subnetting?

    - by Michael Hampton
    This is a Canonical Question about IPv6 Subnetting. Related: How does IPv4 Subnetting Work? I know a lot about IPv4 subnetting, and as I prepare to (deploy|work on) an IPv6 network I need to know how much of this knowledge is transferable and what I still need to learn. IPv6 seems at first glance to be much more complex than IPv4. So I would like to know:
    - IPv6 is 128 bits, so why is /64 the smallest recommended subnet for hosts?
    - Related to this: Why is it recommended to use /127 for point-to-point links between routers, and why was it recommended against in the past? Should I change existing router links to use /127?
    - Why would virtual machines be provisioned with subnets smaller than /64?
    - Are there other situations in which I would use a subnet smaller than /64?
    - Can I map directly from IPv4 subnets to IPv6 subnets?
    - My interfaces have several IPv6 addresses. Must the subnet be the same for all of them?
    - Why do I sometimes see a % rather than a / in an IPv6 address, and what does it mean?
    - Am I wasting too many subnets? Aren't we just going to run out again? (See the quick sketch after this list.)
    - In what other major ways is IPv6 subnetting different from IPv4 subnetting?
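
    A sense of the scale behind the "am I wasting subnets?" worry, as a quick sketch (the /48 is a hypothetical allocation, used only for illustration):

      # number of /64 networks inside a single /48 allocation: 2^(64-48)
      echo $((1 << 16))      # 65536
      # interface addresses inside one /64 (exceeds 64-bit shell math, so use bc)
      echo '2^64' | bc       # 18446744073709551616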

    Read the article

  • Computer turns off unexpectedly

    - by Shahar
    My computer turns itself off unexpectedly after some time of use. It appears that this might be temperature-related, but I'm not sure. I installed two tools that monitor temperature: SpeedFan and CPU Thermometer. The only definite finding is that there is a sensor (labelled temp1 in SpeedFan and CPU in CPU Thermometer) which shows a temperature of 108C a second before the computer powers down. Until that moment, this sensor shows a constant temperature of 40C. I can usually reproduce the shutdown by playing a few movies at once, which causes another sensor (labelled CPU in SpeedFan) to climb into the 60s (C), but I do experience the problem even at times when this sensor remains low and cool. The problem does seem to be more frequent if the computer is turned back on immediately after a shutdown, but not always. I have had other hardware problems recently, which might be related: My hard disk heated up. I installed a fan on it, which worked to reduce the heat; the hard disk sensor now shows around 40C. I also had occasional blue screens and hard disk failures. Replacing the power supply seemed to solve both of those issues, but then this powerdown problem began appearing. I would appreciate any suggestions as to how to determine where the fault is, or what needs to be replaced.

    Read the article

  • Network share not always available on Windows 2003

    - by JP Hellemons
    Hello everybody, we have a Windows 2003 server with a shared directory/folder. I've seen this thread, but it wasn't any help: http://superuser.com/questions/58890/the-specified-network-name-is-no-longer-available I have a ping -t running from 3 PCs (one Vista and two Windows 7); they all work. The problem occurs when two users enter the network share: the 'network share is no longer available' error appears and the Explorer windows turn white. After F5 or a refresh, the shared directory is back. This is really strange. There is no antivirus (Kaspersky or otherwise) running on either end, and this is all in the same LAN. The internet connection is really stable, which should imply that the local network connection is also stable and that this is a Windows issue. Can it be a router issue? I have checked the event log on the server for disk-failure-related messages, but there are none. EDIT: can this be related to mapping the shared directory to a drive letter, with a router between me and the mapped network drive? Or is it just Windows not working well with two users on the same shared folder? Should I install Samba or something?

    Read the article

  • What could cause a huge packet loss in Ubuntu 9.10, for both wired and wireless?

    - by xzenox
    I was previously using 9.04 fine (and in fact, I am posting this from my old 9.04 live CD). I tested the following install steps in a VirtualBox VM prior to following the same ones to upgrade my laptop:
    - Download/burn the Ubuntu minimal CD (the 12 MB one)
    - Install Ubuntu minimal
    - sudo apt-get update
    - sudo apt-get upgrade
    - sudo apt-get install ubuntu-desktop ubuntu-standard
    In the VM this worked fine and I found myself with a working 9.10 Ubuntu; the network worked fine and I was able to test my backups and Dropbox without a hitch (the host was 9.04). When I followed the same steps on my laptop, everything worked up to the point of 9.10 being installed and running. As far as I can tell, everything besides eth0/wireless works. For some reason, I am unable to access the internet: ping reports that over 99% of packets get lost (over an hour or so of pinging). This means, for example, that if I try hard enough I can load a webpage, but only at the cost of much patience... This happens both for a wired and a wireless connection to my WRT310N (updated with the latest firmware). At first I thought it could be related to the IPv6 issues people have been experiencing, but even after disabling IPv6 at the kernel level (through GRUB), I still get the issue. I do not think this is related to DNS issues or the like, since even when I ping my ISP's gateway IP I get the same amount of packet loss, and no DNS resolution should be required there. Access to my router works peachy, with no packet loss there. I've tried different MTU values, but to no avail. Note that this issue affects every network-enabled application: Firefox, ping, Synaptic, etc. The same hardware/router combo works with 9.04 but not with 9.10. In fact, when I ran sudo apt-get install ubuntu-desktop ubuntu-standard after 9.10 minimal was installed, it downloaded over 400 MB of packages without a hitch, so my guess is that one of the packages in ubuntu-desktop or ubuntu-standard is causing havoc. Thoughts?
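
    One quick check worth running, as a sketch (the gateway address 192.168.1.1 is a placeholder for the WRT310N):

      # 1472 bytes of ICMP payload + 28 bytes of headers = a full 1500-byte frame;
      # -M do forbids fragmentation, so this fails cleanly if the path MTU is smaller
      ping -c 20 -s 1472 -M do 192.168.1.1
      # compare against small packets to see whether the loss depends on size at all
      ping -c 20 -s 56 192.168.1.1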

    Read the article

  • BGP Multipath & return routes

    - by Dennis van der Stelt
    I'm probably a complete n00b concerning Server Fault-related questions, but our IT department makes a bold statement that I wish to verify. I've searched the internet but can find nothing related to my question, so I come here. We have Threat Management Gateway 2010, and we used to just route requests to IIS; each request contained the client IP address, so we could see where it was coming from. But now they have turned on "Requests appear to come from the TMG server", so IP addresses aren't forwarded anymore; every request has the IP of the TMG server. The idea behind this is that, because of multipath BGP routes, an incoming request may arrive over route A while the acknowledgement messages return over route B. The claim is that because the response doesn't come from the first known source (our proxy) but instead from IIS, some smart routers on the visitor's side don't recognize the acknowledgement message and filter it out. In other words, the response never arrives. Again, this is the claim. But I cannot find ANY resources on the internet that support it. I do read about BGP multipath, but more in the case where there are alternative routes when the fastest route fails for some reason. So is the claim completely bogus, or is there (some) truth to it? Can someone explain or point me to resources? Thanks in advance!
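
    A hedged way to test the claim directly (the interface name and client address are placeholders): capture the traffic for a single visitor and see whether the replies really leave with an unexpected source address or interface.

      # both directions for one client; an asymmetric or rewritten return path shows up
      # as replies leaving a different interface/source than the requests arrived on
      tcpdump -ni eth0 host 203.0.113.10 and port 80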

    Read the article

  • Why is a FLAC encoded from a decoded MP3 bigger than the MP3?

    - by Ryan Thompson
    To be more precise than in the title, suppose I have an MP3 file that is 320 kbps. If I decompress it, then logically, all the data except for roughly 320 kilobits out of each second of audio should be redundant data, able to be compressed away. So, when I encode the decompressed file to FLAC, or any other lossless codec, why is it so much larger? On a related note, is it theoretically possible to losslessly recover the source MP3 audio from a decompressed WAV? (I know the MP3 itself is lossy. I'm asking if it's possible to re-encode without any further loss.) EDIT: Let me clarify the related question, and the rationale behind it. Suppose I have a WAV that was decompressed from an MP3 file (and assume I don't have the MP3 itself for some reason). If I don't want to lose any more quality, I can re-encode it with FLAC or any other lossless encoder and get a larger file just to maintain the same quality. Or, I can re-encode it to MP3 again and get the same size as the original but lose more data. Obviously, neither of these cases is ideal. I can either have the original size or the original quality, but not both (I mean the quality of the original MP3, not the original lossless source). My question is: Can we get both? Is it theoretically possible to recover the lossy compressed data from the lossy decompressed data, without losing even more? If it is possible, I could imagine a lossless compression algorithm that compresses the audio with FLAC. Then it also scans the audio for any signs of previous lossy compression, and if detected, recompresses it losslessly to the original lossy file. Then it keeps whichever file is smaller.
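
    For anyone who wants to reproduce the size comparison, a minimal sketch (file names are placeholders; assumes lame and flac are installed):

      # decode the MP3 to PCM, then compress the PCM losslessly
      lame --decode input.mp3 decoded.wav
      flac --best -o decoded.flac decoded.wav
      # the FLAC typically comes out several times larger than the 320 kbps MP3
      ls -lh input.mp3 decoded.flac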

    Read the article

  • How do I find information about a particular trojan? "W32/Smalltroj.XVGT", as reported by Norman

    - by Lasse V. Karlsen
    I tried checking the Norman antivirus page, Virus-descriptions, but sadly it seems Norman has intentionally obfuscated their search results (I tried clicking on W, and it seems they just list viruses with a W somewhere in the description, instead of the more typical listing of all viruses whose name starts with W). Is there a common virus list somewhere, or is it as I suspect: every antivirus manufacturer is free to come up with their own identification tags for each virus? Several "vshost32.exe" files, related to Microsoft Visual Studio 2008, have been quarantined on our server today, probably related to a test deployment of some internal software. Some developer machines that have grabbed the latest version of our program have also had the same files quarantined. Now, these files should not have been deployed in the first place, so I'll be looking into that, but whenever any developer now builds a program locally and attempts to debug, the same file is placed in the build output directory and promptly quarantined. Does anyone have any clues as to how I can go about verifying this before I pointedly ask the antivirus software to go take a hike on this particular virus? Edit: I've copied one of the quarantined files manually to a machine over the network that doesn't have antivirus installed, and compared the file on that machine with a local copy (on that machine) of the vshost32.exe template file, and they're bit-for-bit identical. I guess this is a false positive. I would still like to know if it would be possible for me to verify this in any other way, though, since next time such a trojan might be reported in a compiled file that we won't have a pristine copy of.
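
    A quick way to repeat the bit-for-bit check on any machine (paths are placeholders): compare the quarantined copy against a known-good vshost32.exe directly, or compare hashes.

      rem binary comparison; reports the first differing bytes, or "no differences encountered"
      fc /b C:\quarantine\vshost32.exe C:\knowngood\vshost32.exe
      rem alternatively, compare SHA-256 hashes of the two files
      certutil -hashfile C:\quarantine\vshost32.exe SHA256
      certutil -hashfile C:\knowngood\vshost32.exe SHA256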

    Read the article

  • Resources for Smartphone Security

    - by Shial
    My organization is currently working on improving our data and network security due to tightening HIPAA requirements and a general need to get a better grasp on controlling our health-related information. We are a non-profit working with people with developmental disabilities, so we handle a lot of medical-related information. One area that has been identified as a risk is our use of smartphones, specifically at this time Windows Mobile 6.1 devices from T-Mobile. We do not use the VPNs on the phones, so there isn't any way they can access our databases or file servers (the VPN username/password is not the domain logon). What would be exposed, however, is the particular user's email account, since you could extract the username/password and access the email either on the device or on our web email (Exchange 2003), which could contain HIPAA-protected confidential information about clients and services; that would be an incident that would have to be reported. What resources or ideas would help us secure these devices? I'm not worried about data interception (we use SSL) but more about physical theft or loss of the device. Are there websites that I just have not found with guidelines and suggestions, or particular products that would help protect us? I also don't want to limit the discussion to Windows Mobile. I myself am looking at an Android 2.0 device, and there is always the eventual possibility we could be pushed to enable the VPNs. I know this is a subject that likely won't have any single correct answer, and it is something we should all be aware of, since these devices sit outside of our immediate control most of the time.

    Read the article

  • How to set up Zabbix to monitor SQL Server Failover Active-Passive Cluster?

    - by Sebastian Zaklada
    It should be simple, so it is most likely my approach being totally off, and someone will hopefully prod me in the right direction. We have a Zabbix 2.0.3 server instance set up monitoring a bunch of different servers, but now we need to set it up to monitor and send notifications for any alerts regarding a SQL Server 2008 R2 failover active-passive cluster. Essentially, this is a 2-server cluster, where only one of its nodes can be "active" at a given time, serving all SQL Server-related requests, while the other server just "sleeps" and - from the point of view of anyone logged on to that server - has all of the SQL Server-related services in a stopped state. We have tried setting up Zabbix agents on both servers, using the SQL Server 2005 templates (we could not find any 2008-specific ones, and the 2005 ones always seemed to work just fine for monitoring 2008 R2 instances) and configuring the Zabbix server for both of the servers, but we end up with constant alerts for whichever server is currently the passive one in the cluster. We have been able to look up various methods of actually monitoring the failover, but we have not been able to find any guidance on how to instruct Zabbix that, in this particular case, only one of the servers in the group is expected to be in the online state, while the other can be ignored and should not raise any alerts. I hope I made myself clear. Thanks for any guidance. I am out of ideas.

    Read the article

  • Exchange server intermittently not receiving or delivering emails to a few addresses?

    - by Gary Willoughby
    This is a strange problem. We are using an Exchange 2007 server to handle the email to and from the company. There are two main problems, which are probably related. None of our mails sent to one single customer are ever received: when we send any type of mail to this particular customer, they never get it. We have confirmed the address and tried sending mail to other addresses on the same domain, and they still don't receive it. No error (email or otherwise) is ever issued. (Domain-related? Blacklisted?) Sometimes (intermittently) a mail sent to our company (to any address on our domain) is never received. I tried this the other day from home and sent a mail to my work address; it was never received. But then a day later I sent another and it was received fine (so the mail address is fine). No error (email or otherwise) is ever issued. Any ideas where to start looking for causes?

    Read the article

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is properly processed and login works fine, but the CSS files are not loaded and Firefox is throwing the following errors:
    Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css". Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0
    Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css". Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0
    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related? Edit: It seems that it's nginx-related. The content type isn't set for any type other than text/html. I had to manually include the following declarations to force the CSS and JS content types. That's ugly, and I never had the problem before... any idea?
    location ~ \.css { add_header Content-Type text/css; }
    location ~ \.js { add_header Content-Type application/x-javascript; }
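
    For what it's worth, nginx matches locations against the URI without the query string, so the ?s= argument should not matter; extension-to-MIME-type mapping normally comes from the mime.types file. A sketch of the usual http-block setup (Debian paths assumed), in case the active config is missing the include or overrides default_type:

      http {
          # maps .css -> text/css, .js -> application/x-javascript, etc.
          include       /etc/nginx/mime.types;
          default_type  application/octet-stream;
          ...
      }

    If a per-location override is really needed, default_type is the directive intended for that job, rather than add_header Content-Type.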

    Read the article

  • Software Diagnostics Tool recommendations for Debugging a Windows 8 freeze

    - by Stuart
    I've had my HP Pavilion dv6 laptop since last November; it has had 8 GB RAM and a 256 GB Crucial M4 SSD installed since the start. I use it for software development, and it has had a Windows 8 RTM installation since early September. Yesterday I had to give a presentation at a customer site, so I used PowerPoint for the first time since installing Win8. Since that point, my machine has 'frozen' every 2 hours or so after startup. There doesn't seem to be any easily visible reason behind the freeze; the system just locks up, even if I have left it idle with just the desktop showing. My immediate suspicion is that the SSD is the most likely cause of the problem. I've looked at some of the questions on here - e.g. How do I troubleshoot hardware issues related to a computer freeze/crash? - but don't really want to start taking my laptop apart. Another suspicion is that this might be related to the WiFi adapter (Broadcom 802.11n), since I have noticed that this doesn't seem to play perfectly with things like Hyper-V in Win8. Can anyone recommend any software diagnostic tools that I can run in order to evaluate the health of the SSD or of other parts of the system? Thanks, Stuart. P.S. I doubt PowerPoint is the cause of this, but I may use it as an excuse never to use it again... More realistically, perhaps something got damaged during travel to the customer site?
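
    One low-effort starting point, as a sketch: pull the newest System-log entries right after a freeze and look for disk or Kernel-Power events around the timestamp.

      rem 30 newest System events, newest first, as readable text
      wevtutil qe System /c:30 /rd:true /f:text

    Given that the SSD is the prime suspect, checking the M4's firmware version against Crucial's release notes is also a cheap test (my suggestion, not something from the question).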

    Read the article

  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers; site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the Win2k3 servers. I understand what the message means, no need to send me links to the TechNet article explaining it. I'll improve my backups. What I'm more curious about is why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!? Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper DCPROMO exercises and made sure we didn't lose anything. But would shutting those down possibly be related to why I get this error now? This won't keep me awake at night but I am curious as to what changed...
    Event Type: Warning
    Event Source: NTDS Replication
    Event Category: Backup
    Event ID: 2089
    Date: 3/28/2010
    Time: 9:25:27 AM
    User: NT AUTHORITY\ANONYMOUS LOGON
    Computer: RedactedName
    Description: This directory partition has not been backed up since at least the following number of days. Directory partition: DC=MyDomain,DC=com 'Backup latency interval' (days): 30 It is recommended that you take a backup as often as possible to recover from accidental loss of data. However if you haven't taken a backup since at least the 'backup latency interval' number of days, this message will be logged every day until a backup is taken. You can take a backup of any replica that holds this partition. By default the 'Backup latency interval' is set to half the 'Tombstone Lifetime Interval'. If you want to change the default 'Backup latency interval', you could do so by adding the following registry key. 'Backup latency interval' (days) registry key: System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • MS licensing of multiple RDP sessions for non-MS products in Windows XP Pro

    - by vgv8
    Questions 1) and 2) were moved into a separate thread: Which Windows remote connections bypass LSA? and what are the definitions of login vs. logon session? 3) Do I understand correctly that multiple remote RDP sessions are supported by Windows XP but require additional (or modified) licensing? Which one? Or is it always illegal to run multiple RDP sessions on Windows XP, even through non-MS commercial software? ---------- Update 1: I already understood my error - the main questions were about definitions (important to find a common language with others) and the licensing questions were collateral - but they were already answered. I shall try to separate these questions, leaving here the questions about RDP licensing and migrating the other questions into a separate thread. ---------- Update 2: "Trying to 'work around' licensing terms is pointless and wasteful of time" - I never try "working around" and I never ask anything like this; I am not a specialist in licensing. My clients/employers provide me with tools and licensing support. They have corporate lawyers and planning/accounting/purchase departments for these issues. The questions that I ask are a matter of scalability and efficiency (saving my time and others') in my development work. For example, when I just need authentication against Windows AD, is it time-saving to use ADAM instead of deploying a full-fledged AD with DC + servers + whatever else? "Nobody is forcing you to use Windows XP" - I shall not rush into re-installing all my operating systems on all my development machines (at home, at client premises) just because a few guys have a lot of fun downvoting development-related questions on serverfault.com. If I did so, I would make a joke of myself in the eyes of my colleagues et al. Update: I unmarked this question as answered since the answer did not even address the question, at least mine. Should I understand that Terminal Server PRO, allowing Windows® XP and Windows® Small Business Server 2003 to host multiple remote desktop sessions, is illegal? Related: My answer to the question "Has windows XP support multiple remote login session (RDP) at a time?"

    Read the article

  • RoboCopy errors on Windows Server 2008

    - by Steve
    I am getting a bizarre error with RoboCopy on Server 2008. It will randomly hang with the error "The specified network name is no longer available." Once that happens, it will continue to fail on the retries, but of course the remote server IS still available on the network and can be reached with other tools. I think it must somehow be permission-related, but I can't figure out what is wrong. Any ideas would be much appreciated. Other info:
    - Options: *.* /S /E /COPY:DAT /NP /R:10 /W:30
    - If I turn on the /B option it fails 100% of the time at the very beginning (that's why I think it has to be somehow permission-related)
    - The two servers are standalone and I am doing a NET USE command prior to the robocopy
    - It does not matter which user account I use on the remote server; I tried both Administrator and another user which was also a member of the local Administrators group
    - UAC is turned off on both sides
    - It is not always the same file that hangs; sometimes it will get through half or more and sometimes it will fail on the first file
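
    A hedged suggestion for narrowing this down (paths are placeholders): run in restartable mode with a log, so the exact failure point is recorded and an interrupted copy can resume instead of restarting.

      rem /Z = restartable mode; /LOG writes a log file; /TEE echoes output to the console too
      robocopy \\remoteserver\share D:\dest *.* /S /E /COPY:DAT /Z /R:10 /W:30 /LOG:C:\robocopy.log /TEE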

    Read the article

  • Incorrect directory permissions with OpenSSH on Cygwin on Windows Server 2008 SP2

    - by Davy Brion
    I ran into a weird directory permission problem when logged in to a Win2008SP2 (not R2) server through SSH. When I open a local cygwin shell on the server, i can do this: myUser@myServer ~ $ cd /cygdrive/c/Windows/System32/inetsrv/ myUser@myServer /cygdrive/c/Windows/System32/inetsrv $ cd config myUser@myServer /cygdrive/c/Windows/System32/inetsrv/config $ I have no issues accessing the 'config' directory when using a local cygwin shell. 'myUser' has all necessary permissions to access the directory as well. In fact, 'myUser' is a local administrator on the machine. Listing the permissions of the config folder through the local cygwin shell shows the following output: 4 drwx------+ 1 SYSTEM SYSTEM 0 Aug 2 09:38 config But when I log into the server with a SSH client (in this case Putty), i run into the following problem: myUser@myServer ~ $ cd /cygdrive/c/Windows/System32/inetsrv/ myUser@myServer /cygdrive/c/Windows/System32/inetsrv $ cd config -bash: cd: config: Permission denied It also doesn't list the proper permissions through SSH: 0 drwxr-x--- 1 ???????? ???????? 0 Aug 2 09:38 config When I look at the running processes on the server with Task Manager (with a remote desktop connection), it shows that all bash.exe processes are running under the 'myUser' account, so I don't understand why I can't access that particular directory through SSH but have no problems accessing it in a local cygwin shell. I'm using OpenSSH 5.9p1-1. I'm not sure what the Cygwin version is... I used the latest setup.exe (version 2.738) of Cygwin, but I can't seem the find any other Cygwin-related version number. I doubt that it's related to SSH/Cygwin though, because when I connect from the Win2008SP2 server to my local Win7 machine through SSH (using the same OpenSSH/Cygwin versions) I can access the /cygdrive/c/Windows/System32/inetsrv/config folder without issues. Does anyone have an idea on what the issue could be?

    Read the article

  • How can I make the XAnalogTV xscreensaver fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen. If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen. For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true. Is there any way to force XAnalogTV to fill my entire screen? Alternatively, as I believe the problem is aspect-ratio related, is there any way I can get the screensaver to render larger than my display, and simply discard the extra pixels? Relevant screenshots (windowed, maximized, and full-screen, respectively): You can see in the last two that the scrollbar from Firefox is clearly visible, even though this is a full-screen screensaver.

    Read the article

  • how to back up data from a machine that keeps hanging

    - by Amit Phatarphekar
    Hello - I have a storage server running OpenSolaris, but lately it's been acting up: it hangs at random times due to some SCSI/ATA-related error messages. I've tried to fix it without any progress, so I'm giving up now. The machine keeps hanging every 30 minutes or 1 hour, sometimes only after 4 hours; it's very unpredictable. So I've decided to just reformat the storage server and start from scratch. Maybe I just won't use Solaris and will install something else, since the errors are related to Solaris running on ATA HDDs or something. Question: before I reformat it, I want to back up some of the important data on it. For example, it has a VM with 200 GB of disk files, a whole bunch of ISOs, etc. I'm using a simple scp to copy the files over to a different machine. My issue is that, because the machine hangs, sometimes my file copy is incomplete and I have to start all over again. Let's say I'm trying to copy a 200 GB file, which takes around 4 hours. If the machine hangs before the whole file is copied over, I have to recopy the file from scratch. Is there a solution to copy the files over such that if the machine hangs or the network goes down, the copying can resume from where it left off? For example, if 50 GB of a 200 GB file was copied and the machine hung, next time it would just continue to copy the rest instead of starting all over again. Thanks, Amit
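
    rsync can do exactly this. A minimal sketch (host name and paths are placeholders; assumes rsync is available on the OpenSolaris box), run from the destination machine so the flaky server only has to serve reads:

      # --partial keeps partially transferred files so a retry resumes instead of restarting;
      # the loop simply re-runs rsync until it exits successfully
      until rsync -av --partial --progress amit@storage-server:/export/vms/ /backup/vms/; do
          echo "transfer interrupted, retrying in 10s..."
          sleep 10
      done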

    Read the article

  • Blue screen error code 1000008e

    - by Kas
    I'm getting blue screens, mostly when trying to launch a program that requires a lot of memory (games, photo-editing software). So far I've only managed to catch one set of error codes: BCCode: 1000008e BCP1: C0000005 BCP2: ADA393BA BCP3: E9BCEBC4 BCP4: 00000000 OS Version: 6_0_6002 Service Pack: 2_0 Product: 768_1. It's on a Sony VAIO laptop VGN-FW41E, Vista service pack 2. Besides these codes, it lists two 'temporary' files that were related to this crash: ...AppData\Local\Temp\WER-134925-0.sysdata.xml and ...AppData\Local\Temp\WERDA66.tmp.version.txt. When I googled these files, some site said they were linked to a worm called 'Yodo', but virus scans don't return any results (Hitman Pro, Malwarebytes, and Avast antivirus all turn up empty). Upon further searching about this Yodo worm, I came across Security Stronghold, where someone posted that they had acquired this worm when downloading Access and Excel templates. Now, I actually did download templates for the same programs; they might have been the same, they may be related, or I might be grasping at straws here. I have not noticed any other performance issues of late, just BSODs when I start software that requires some memory, and I never had issues with these exact same programs before. Help and/or hints are needed on how to figure out the root of this BSOD issue and how to fix it. Do you reckon it's actually a virus? What program would be able to remove the Yodo worm?

    Read the article

  • IP-dependent local port-forwarding on Linux

    - by chronos
    I have configured my server's sshd to listen on a non-standard port 42. However, at work I am behind a firewall/proxy, which only allow outgoing connections to ports 21, 22, 80 and 443. Consequently, I cannot ssh to my server from work, which is bad. I do not want to return sshd to port 22. The idea is this: on my server, locally forward port 22 to port 42 if source IP is matching the external IP of my work's network. For clarity, let us assume that my server's IP is 169.1.1.1 (on eth1), and my work external IP is 169.250.250.250. For all IPs different from 169.250.250.250, my server should respond with an expected 'connection refused', as it does for a non-listening port. I'm very new to iptables. I have briefly looked through the long iptables manual and these related / relevant questions: http://serverfault.com/questions/57872/iptables-question-forwarding-port-x-to-an-ssh-port-of-different-machine-on-the-n http://serverfault.com/questions/140622/how-can-i-port-forward-with-iptables However, those questions deal with more complicated several-host scenarios, and it is not clear to me which tables and chains I should use for local port-forwarding, and if I should have 2 rules (for "question" and "answer" packets), or only 1 rule for "question" packets. So far I have only enabled forwarding via sysctl. I will start testing solutions tomorrow, and will appreciate pointers or maybe case-specific examples for implementing my simple scenario. Is the draft solution below correct? iptables -A INPUT [-m state] [-i eth1] --source 169.250.250.250 -p tcp --destination 169.1.1.1:42 --dport 22 --state NEW,ESTABLISHED,RELATED -j ACCEPT Should I use the mangle table instead of filter? And/or FORWARD chain instead of INPUT?
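
    For what it's worth, a port redirect like this is normally done in the nat table's PREROUTING chain rather than with an INPUT accept rule, and -d cannot carry a port number. A minimal sketch using the question's example addresses:

      # packets from the work IP arriving on :22 are redirected to sshd on :42 before routing
      iptables -t nat -A PREROUTING -i eth1 -s 169.250.250.250 -d 169.1.1.1 -p tcp --dport 22 -j REDIRECT --to-ports 42

    Conntrack translates the replies back automatically, so no second rule is needed for the "answer" packets, and other source IPs hitting port 22 still get the usual refusal as long as nothing listens there.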

    Read the article

  • Is there a way to keep the Wacom Control Panel from failing to show in certain cases?

    - by S.gfx
    This is the problem: suddenly, you double-click the Wacom tablet settings icon on the desktop, and it won't show the dialog. It appears to be loaded, as it's shown down in the Windows taskbar. I suspect it is caused by a change of resolution or some setting; maybe it suddenly sets the origin of the panel dialog at some 3000 pixels to the right or something. I have dug in the wacom_tablet.dat file to see if I can fix it by changing some value there, but it looks like a log, a history, more than an ini for settings. And anyway that does not solve it. My workaround is to always keep a very complete settings file backed up to restore (with the Wacom utility for this, which in previous driver versions did not exist) when this happens, but it is counter-productive: you change the settings per project (and per software) as needs change. I have seen it happening with a Cintiq 12", an Intuos4 A6, Graphires, and an Intuos 1. Is it just me, doing something weird every time? I doubt it; this is normal use, and I am amazed that it seems no one else has had this problem (or nobody asked), since it happens often with typical use. Maybe it's because I'm setting a shortcut on the desktop? Weird, as it works perfectly until some random moment. (I have dug through Wacom's forums and FAQs, then here, but found nothing related to it. Neither in "related questions".) This happens in Win XP, 7, etc. It has been like this for years in my experience, and several times at work, which is worse.

    Read the article

  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes, which exports storage via several NFS shares for a handful of Linux and Solaris clients. There is a Linux system that has one of these NFS filesystems mounted, and I/O to this filesystem is moderately heavy from that system. Every 3-4 weeks (it's not on any kind of discernible schedule, and sometimes it's more or less frequent than this), we notice that all activity ceases on this NFS mount (the process hangs, as if the network stopped working, so the process is stuck in uninterruptible sleep); 30 minutes later, the share recovers and things continue to work normally. The kernel log from the affected machine is as follows:
    Dec 3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
    Dec 3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK
    The relevant /etc/fstab line: nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0. I've checked to see if there are any scheduled processes (e.g. cron jobs) or Isilon-related functions (e.g. snapshots) that might be causing these hangups, but I can't seem to find anything. I'm also not aware of any network-related issues or maintenance that would cause this. All of the lockups last almost exactly 30 minutes per the kernel logs. Perhaps someone has some suggestions I could try? (I considered a soft mount to avoid the problem of processes hanging while accessing the filesystem; however, I am wary of the corruption that could result, and it would not really solve the underlying issue anyway.)
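
    A hedged diagnostic for the next occurrence (the interface name is a placeholder): check whether the client is retransmitting RPCs into a void or simply seeing no traffic at all, which helps separate a network stall from a server-side one.

      # client-side RPC statistics: a climbing retrans counter points at lost replies
      nfsstat -rc
      # watch the NFS traffic to the cluster during the hang
      tcpdump -ni eth0 host nfs-redacted and port 2049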

    Read the article

  • iptables rules to allow HTTP traffic to one domain only

    - by Emily
    Hi everyone, I need to configure my machine as to allow HTTP traffic to/from serverfault.com only. All other websites, services ports are not accessible. I came up with these iptables rules: #drop everything iptables -P INPUT DROP iptables -P OUTPUT DROP #Now, allow connection to website serverfault.com on port 80 iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT #allow loopback iptables -I INPUT 1 -i lo -j ACCEPT It doesn't work quite well: After I drop everything, and move on to rule 3: iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT I get this error: iptables v1.4.4: host/network `serverfault.com' not found Try `iptables -h' or 'iptables --help' for more information. Do you think it is related to DNS? Should I allow it as well? Or should I just put IP addresses in the rules? Do you think what I'm trying to do could be achieved with simpler rules? How? I would appreciate any help or hints on this. Thanks a lot!
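
    The error itself is the DNS symptom suspected above (a hedged sketch follows): with the OUTPUT policy already set to DROP, the DNS lookup that iptables performs while adding the hostname rule is blocked. Allowing DNS first avoids that; note that iptables expands a hostname to its current IP address(es) once, at insert time, so the rule goes stale if the site's addresses change.

      # allow DNS lookups first, otherwise the hostname in the next rule cannot resolve
      # (the ESTABLISHED,RELATED INPUT rule must already be in place so the replies get back in)
      iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
      # now the hostname-based rule can be added (it is stored as the resolved IPs)
      iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT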

    Read the article

  • Blocking an IP from connecting

    - by Sam W.
    I have a problem with my Apache webserver: there's an IP connecting to my server, using a lot of connections that won't die, which eventually makes my webserver time out. The connections stay in the SYN_SENT state when I check using netstat -netapu. I have even flushed my iptables and used the basic rules below, and it still doesn't work; the IP gets connected again as soon as I start Apache. Basic rules that I use:
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
    iptables -A INPUT -s 89.149.244.117 -j REJECT
    iptables -A OUTPUT -s 89.149.244.117 -j REJECT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT
    iptables -A INPUT -p tcp --dport 21 -j ACCEPT
    iptables -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
    iptables -A INPUT -j REJECT
    iptables -A FORWARD -j REJECT
    The two rules for 89.149.244.117 are the ones in question. Not sure if this is related, but the tcp_syncookies value is 1. Can someone point out my mistake? Is there a way to block this IP for good? Thank you
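
    A hedged observation, not a confirmed diagnosis: SYN_SENT on the local box means these connections are being initiated from the server itself (by Apache or something it runs), and in the OUTPUT chain the remote host is the destination, not the source, so -s 89.149.244.117 never matches an outbound packet. A sketch of the corrected pair:

      # outbound packets carry the remote host as the destination: match with -d in OUTPUT
      iptables -A OUTPUT -d 89.149.244.117 -j REJECT
      # inbound packets from that host keep the -s match
      iptables -A INPUT -s 89.149.244.117 -j REJECT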

    Read the article
