Search Results

Search found 9335 results on 374 pages for 'random thoughts'.

Page 265/374

  • Multiple Video Cards - Stuttering

    - by jstawski
    I have two video cards:
    - XFX PVT84JUDD3 GeForce 8600GT XXX 256MB 128-bit GDDR3 PCI Express x16 SLI Supported Video Card
    - EVGA 256-P1-N399-LX GeForce 6200 256MB 64-bit GDDR2 PCI Video Card
    both running the same set of drivers on Windows 7 64-bit. When I work with 2 monitors connected to the 8600GT card, everything works smoothly. When I connect the third one to the 6200, Windows works well and then all of a sudden the screens turn black for up to 5 minutes. Then it comes back, and at some random interval it goes black again. I can still see the pointer, and I can hit CTRL+ALT+DEL and see the menu to log off, bring up the task manager, etc. I've tried changing the 6200 to another PCI slot and the error persists. I've tried connecting only 2 monitors, one to each card: same problem. I swapped them and mixed and matched the monitors to see if it was a problem with a monitor, and my conclusion was that it is not. The problem occurred with Vista 64 as well. What could be generating this problem? Could it be the fact that the cards use different interfaces? Maybe the motherboard? Should I change something in the BIOS? What do you guys think?

  • Load is 0, yet site crawls (sometimes). What gives?

    - by Yegor
    I have a site with ~1.5-2 million page views per day running on 2 servers: one for MySQL, the other for everything else. The MySQL box has a load of 3; the frontend is usually at 0.0-0.1. Both are dual quad-core with 8GB RAM, running SAS drives in RAID 5. The CPU is idle for the majority of the time and iowait is non-existent. I'm running nginx and memcache, and the site is built on PHP. Half the time everything runs perfectly, while at other times it lags something severe, taking 10-15 seconds for a page to load. Page execution time is always super low, but the page seems to hang, waiting for something before it actually loads. What's even more weird is that it only happens to one file on the site (but it's the one that's most commonly accessed, the one that actually loads the content on the site). Other pages are super fast at all times, even when it takes 15 seconds to load the actual content. I have the nginx_stats plugin installed, and if I monitor it, the lag spikes happen when the write column goes above 100, and it frequently does... all the way to 500-1000. It does so at totally random times... not when traffic is heavy... it can do this in the middle of the night and work perfectly at 5pm when traffic is at its highest. Any ideas?

  • Upgrade SQL Server 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It's not intentional; I'm a senior-level developer in a very small company having to act like a manager at the moment. Anyway, the story is that we have 2 older Dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear what that means. We have 2 brand new blade servers and want to move the existing databases to the new hardware. OK, so here is the gotcha: we need to do this with little or no downtime. I'm being told that we can evict the passive node and then pull in one of the new servers. But I'm also being told that this is a dangerous step, because something could go wrong that would cause the cluster to fail, and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of downtime during which we bring up a new cluster on the new hardware and then migrate the databases one by one.

  • Second monitor stopped being recognized by Windows

    - by Eric J.
    One of the developer PCs running Windows Vista Ultimate had its second monitor stop being recognized by Windows overnight. There were no hardware or driver changes at the time, though I have subsequently updated to the latest NVIDIA drivers (the card is an NVIDIA GeForce 210). The non-recognized monitor IS recognized during the boot sequence. In fact, only the "bad" one shows the POST or the Windows loading screen. At some point during Windows initialization, after the loading screen disappears and before the logon screen appears, the active monitor switches. Any thoughts? UPDATE: When I open the Vista monitor properties window, I see my primary display and secondary display depicted. The primary one is portrayed as the regular blue box, but the secondary one is portrayed greyed-out. I have the option to "Extend desktop to this monitor", the only resolution is 800x600, and all of the advanced monitor properties are greyed out as well. If I opt to extend the desktop, the greyed-out box turns blue; when I then select Apply, the screens flash as usual and I'm given the 15-second countdown to accept the new settings. When I do, everything returns to the previously broken state... the secondary monitor is portrayed greyed-out again. At no point is the desktop shown on the secondary monitor.

  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                +--------------------+
        |    [A]     |                |        [B]         |
        | Management | -------------- | SQL Server 2008 R2 |
        |   Studio   |                |   Enterprise x64   |
        +------------+                +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), the parse operation fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example. The original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
        AND SC.COLUMN_NAME = 'Identity'
        AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = Sa?

    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script is fine. It appears that something is corrupting the SQL that is being received by the server. This leads me to think that the problem lies either with the client end or in the transmission of the SQL from the client to the server. I have a SQL trace from the period where an error occurs, which shows the SQL has already been corrupted when SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to generate reproduction steps to submit a bug report. Any ideas?

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's an Ubuntu-specific StackExchange website, but I thought that I'd ask here because it's a server-specific question. If I'm wrong in my logic... Well, you people are better at this than I am! O=) On with the show! I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format   crypt(3) salt format
          -g          generate random password
          -h hash     password scheme
          -n          omit trailing newline
          -s secret   new password
          -u          generate RFC2307 values (default)
          -v          increase verbosity
          -T file     read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After many Google searches and much forum trawling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!

  • Best NIC config when virtual servers need iSCSI storage?

    - by icky2000
    I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server, configured like this:

    NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc.)
    NIC03: connected to iSCSI VLAN #1
    NIC04: connected to iSCSI VLAN #2
    NIC05: dedicated to one virtual switch for VMs
    NIC06: dedicated to another virtual switch for VMs

    The iSCSI NICs are used, obviously, for the storage that hosts the VMs. I put half the VMs on the host on the switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports that NIC05 & NIC06 are connected to are trunked, and we then tag the NIC on the VM for the appropriate VLAN. There is no clustering on this host. Now I wish to assign some iSCSI storage directly to a VM. As I see it, I have 2 options:

    1. Add the iSCSI VLANs to the trunked ports (NIC05 and NIC06), add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.
    2. Create two additional virtual switches on the host. Assign one to NIC03 and one to NIC04. Add two NICs to the VM that needs iSCSI storage and let them share that path to the SAN with the host.

    I'm wondering how much overhead the VLAN tagging in Hyper-V has, and I haven't seen any discussion about that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs or cause some other problem that could threaten storage access for the entire host, which would be bad. Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?

  • Switched from DVI to HDMI, possible audio artifacts?

    - by I take Drukqs
    I'm using an ASUS VH236H monitor and an EVGA GeForce 570 GTX, both of which are brand new. My monitor has an audio out port for speakers/headphones, so I plugged in my headphones and made a random selection from my library, when I noticed two things:

    1. There are static-like artifacts during "louder" parts of songs.
    2. There seems to be a volume cap in place: when I crank the volume past 100% in VLC, the decibel level does not truly increase, but the amount of static does.

    The cable is not new; I yanked it off of my PS3 when my DVI cable broke. It has been used a good amount on my HDTV and PS3, so I doubt it's a matter of burn-in. I like the way the setup works with an HDMI cable as opposed to DVI, because my headphones barely reach my rig, whereas I have plenty of slack when they're plugged into my monitor. Thanks in advance for any support. Note: I'm using a high-quality HDMI cable from Monoprice, AKG K702 headphones, and the VLC media player.

  • Best way to troubleshoot intermittent network outages?

    - by Ben Scheirman
    We have a Comcast 50/10 line into our office. We keep seeing very short but sometimes frequent drops in our internet service. It's enough to kick you off of Skype and stop any websites from loading, which is obviously affecting our productivity. We've tried 4 different routers, and we've tried moving everyone off of wireless and onto wired via a switch, and so far nothing has helped. Right now we're on a Cisco SB WRP400-G1 router. Attached to the router is a 16-port switch going to the ports in all of the offices. We've moved to OpenDNS in case it was the Comcast DNS servers going down. Today we tried putting the modem, router, and switch on a UPS to make sure it wasn't power fluctuations causing it. Every time we call Comcast, by the time they are here the internet is working fine. I'd like to somehow prove that the problem is with Comcast, so if that means plugging a machine directly into their router and collecting data all day, I'm up for that. I just want to hear ideas on what tools to run and how to collect this data. I could just continuously ping google.com all day long, but I'm not sure how valuable that data would be. Thoughts?
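
    One way to make the ping idea useful is to timestamp every probe and log it, so the outage windows and their durations are on record when talking to Comcast. A minimal sketch in PowerShell, assuming a Windows box wired straight to the modem; the target host, one-second interval, and log path are arbitrary choices:

        # Probe once per second and log timestamped results to a CSV.
        $log = "C:\temp\connectivity.csv"
        "Timestamp,Success" | Out-File $log
        while ($true) {
            $ok = Test-Connection -ComputerName google.com -Count 1 -Quiet
            "{0},{1}" -f (Get-Date -Format o), $ok | Out-File $log -Append
            Start-Sleep -Seconds 1
        }

    Probing two targets (the modem's gateway and an outside host) would also help distinguish a LAN problem from a Comcast problem.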

  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on exact browser behavior when a browser gets multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I'm interested in EXACT details, like (but not limited to):

    1. Will the browser get 2 IPs from the OS, or will it get only one?
    2. Which IP will the browser try first (random, or always the first one)?

    Now, let's say the browser started with the failed ip1:

    3. For how long will the browser try ip1?
    4. If the user hits "stop" while it waits for ip1, and then clicks refresh, which IP will the browser try?
    5. What will happen when it times out: will it start trying ip2 or give an error? (And if an error, which IP will the browser try when the user clicks refresh?)
    6. When the user clicks refresh, will any browser attempt a new DNS lookup?

    Now let's assume the browser tried the working ip2 first:

    7. For the next page request, will the browser still use ip2, or may it randomly switch IPs?
    8. For how long do browsers keep IPs in their cache?
    9. When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch, so it may try either of the two?

    Of course it all may be browser-dependent, and may also vary between versions and platforms; I'd be happy to have the maximum of details. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, and please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.

  • CMD/ADB - Autorun script to search, copy, and paste a file from android system to flash drive

    - by Outride
    I've looked around and can't find anything that answers my question. This is my first question, so any tips or thoughts are welcome, as well as an answer :p As explained in the title, I want to create a script that launches, finds a file on an Android phone, copies it, and pastes it to a flash drive. As of right now, it's a mix of multiple tutorials and trial and error, and I'm at the point of giving up. I have a flash drive loaded with three scripts, as follows (the file name precedes each script):

    file.bat

        @echo off
        :: variables
        /min
        SET odrive=%odrive:~0,2%
        set backupcmd=xcopy /s /c /d /e /h /i /r /y
        echo off
        %backupcmd% "C:\Users\Outride\Desktop\kikDatabase.db" "%drive%\all"
        @echo off
        cls

    invisible.vbs

        CreateObject("Wscript.Shell").Run """" & WScript.Arguments(0) & """", 0, False

    launch.bat

        wscript.exe \invisible.vbs file.bat

    So far, I had to use Android Commander, manually go through the directory, find /data/data/kik.android/databases, copy kikDatabase.db to my desktop, and then run this script. Yes, I'm trying to pull the database to copy all my email contacts. I use launch.bat, which then makes file.bat invisible due to the invisible.vbs script. What would I need to do now to have the file searched for and copied to the flash drive? Thanks in advance; I'll be glad to answer any questions if there are any :p just remember that I'm not exactly a tech expert haha

    EDIT: Cleared junk from prior edits. New: I now have a .bat script to recognize what drive the USB is on and launch py_cmd (the adb shell). This is the current script:

    pull.bat

        @echo off
        :: variables
        SET odrive=%odrive:~0,2%
        set launching=start "%drive%\Minimal ADB and Fastboot\py_cmd"
        echo off
        %launching%

    So how could I make the .bat, or a new script, type the following into the adb terminal: "adb pull /data/data/kik.android/databases/ %drive%\All\Database"? Please help! I've been racking my brain over this all night :3
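
    For what it's worth, adb pull is a host-side command, so rather than typing it into the opened adb shell window, a script can invoke adb.exe directly. A hedged sketch in PowerShell, assuming adb.exe lives in the "Minimal ADB and Fastboot" folder on the flash drive, $drive holds the drive letter, and the phone actually permits access to /data/data (a stock, unrooted device usually refuses this):

        # Call adb.exe directly; "pull" copies from the phone to the PC.
        $adb = Join-Path $drive "Minimal ADB and Fastboot\adb.exe"
        & $adb pull /data/data/kik.android/databases/kikDatabase.db `
            (Join-Path $drive "All\Database\kikDatabase.db")

    The same invocation works as one line in a .bat file by replacing the PowerShell variables with %drive%.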

  • Exchange 2010 Transport rules stepping on each other

    - by TopHat
    I have a group of users that I have to restrict email access for, and so far using Exchange Transport Rules has worked very well. The problem I am having is that Rule 0 is supposed to BCC the email to a review mailbox but otherwise not change anything, and Rule 9 is supposed to block the email and throw a custom NDR to tell the user why they were blocked. Here are my results in practice, however:

    - If Rule 0 is enabled and Rule 9 is enabled, then only Rule 9 functions.
    - If Rule 0 is disabled and Rule 9 is enabled, then Rule 9 functions.
    - If Rule 0 is enabled and Rule 9 is disabled, then Rule 0 functions.

    This is after the Transport service has been restarted (multiple times, actually). I have other rule pairs that work correctly; none of these are overlapping rulesets, however: one pair copies email going to an address outside the domain and then blocks it, and another copies email coming in from outside and then blocks it. Here is the rule for copying internal emails (Rule 0):

        Apply rule to messages from a member of
        Blind carbon copy (Bcc) the message to
        except when the message is sent to a member of or [email protected]

    Here is the rule to block the same email (Rule 9):

        Apply rule to messages from a member of
        send 'Email to non-supervisors or managers has been prohibited. Please contact your supervisor for more information.' to sender with 5.7.420
        except when the message is sent to , [email protected],

    The distribution group used for membership in these rules is used for the other blocking and copying rules and works as expected. Is there something I missed in this setup? All of the copy rules are at the front of the transport rule group and all the actual blocking rules are at the end of the queue, if that makes a difference. Any thoughts as to why the email doesn't get copied when it gets blocked?
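
    For reference, a hedged sketch of how such a pair might look in the Exchange Management Shell; the group and mailbox names here are hypothetical placeholders, and the parameters are worth checking against Get-TransportRule output for the real rules:

        # Rule 0: copy mail from the restricted group to a review mailbox.
        New-TransportRule -Name "Rule 0 - copy restricted mail" -Priority 0 `
            -FromMemberOf "Restricted Users" `
            -BlindCopyTo "[email protected]" `
            -ExceptIfSentToMemberOf "Supervisors"

        # Rule 9: reject the same mail with a custom NDR.
        New-TransportRule -Name "Rule 9 - block restricted mail" -Priority 9 `
            -FromMemberOf "Restricted Users" `
            -RejectMessageReasonText "Email to non-supervisors or managers has been prohibited. Please contact your supervisor for more information." `
            -RejectMessageEnhancedStatusCode "5.7.420" `
            -ExceptIfSentToMemberOf "Supervisors"

    Comparing the real rules in shell form makes it easier to spot whether a predicate differs between the two than the wizard view does.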

  • RD Gateway reporting features/capabilities

    - by Don
    We have just implemented RD Gateway for our own department in preparation for a push to the whole agency for telecommuting. It is all set up and working great, but I was trying to figure out how best to go about monitoring/reporting on users. I see third-party software out there that will do it, but is there anything built in, or available via PowerShell/scripting, that I could use to get a report of the daily activity of users? Something to say, "User A connected at this time, was on for this long, sent/received this much data"? Basically some of the same stuff you can see in Event Viewer. Ideally I'd like to have this set up so that once a day it emails me the daily usage; then, when a supervisor asks whether their person is actually working (or at least online, sending and receiving x amount of data), I'll have some metrics to give them. I realize that actual work output is relevant and more of a managerial issue, but I would like to be able to offer as much as I can from my end when asked. Thoughts? Thanks!
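
    Since the data is already in Event Viewer, one scripted approach is to query the RD Gateway operational log and mail the result. A sketch, with the caveat that the log name and event IDs (302 for connect, 303 for disconnect) are from memory and worth confirming in Event Viewer first; the server and address names are placeholders:

        # Pull the last 24 hours of RD Gateway connect/disconnect events.
        $events = Get-WinEvent -FilterHashtable @{
            LogName   = 'Microsoft-Windows-TerminalServices-Gateway/Operational'
            Id        = 302, 303
            StartTime = (Get-Date).AddDays(-1)
        } | Select-Object TimeCreated, Id, Message

        # Mail it to yourself once a day (run from Task Scheduler).
        Send-MailMessage -To "[email protected]" -From "[email protected]" `
            -Subject "RD Gateway daily usage" -SmtpServer "mail.agency.example" `
            -Body ($events | Out-String)

    If memory serves, the disconnect event's message includes session duration and bytes sent/received, which covers the "how long and how much" part of the report.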

  • What does "incoming" and "outgoing" traffic mean?

    - by mgibsonbr
    I've seen many resources explaining how to set up a server's firewall to allow incoming and outgoing traffic on the standard HTTP ports (80 and 443), but I can't figure out why I would need either of them. Do I need to unblock both for a "regular" web site to work? For file uploads to work? Are there situations where it would be advisable to unblock one and leave the other blocked? Sorry if that's a basic question, but I couldn't find it explained anywhere (also, I'm not a native English speaker). I know that on a "regular" web site the client is always the one who initiates a request, so I'm assuming a web server must accept incoming traffic on those ports, and my common sense tells me the server is allowed to send a response without unblocking anything else (otherwise it wouldn't make sense to have two types of rules). Is that correct? But what is outgoing web (service) traffic, and what would be its use? AFAIK, if the server wanted to initiate a connection with another machine, the specific port that matters is the one on the other end (i.e. the destination port would be 80); on its end any free port could be used (the source port would be random). I can open HTTP requests from my server (using wget, for instance) without unblocking anything. So I'm assuming my concepts of "incoming" and "outgoing" are wrong somehow.
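
    The reasoning in the question is essentially right for any stateful firewall: inbound rules cover connections that clients initiate to the server, and the responses flow back automatically; outbound rules only matter for connections the server itself initiates. A sketch using the Windows Firewall cmdlets as one concrete illustration (these cmdlets assume Windows Server 2012 or later; iptables or any other stateful firewall expresses the same idea):

        # Inbound: let clients open connections to the web ports.
        New-NetFirewallRule -DisplayName "HTTP in"  -Direction Inbound  -Protocol TCP -LocalPort 80  -Action Allow
        New-NetFirewallRule -DisplayName "HTTPS in" -Direction Inbound  -Protocol TCP -LocalPort 443 -Action Allow

        # Outbound: only needed if this server itself calls out to other
        # web servers (wget, API calls). Note it matches the REMOTE port,
        # since the local source port is random.
        New-NetFirewallRule -DisplayName "Web out" -Direction Outbound -Protocol TCP -RemotePort 80, 443 -Action Allow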

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this info published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD, for instance. Nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD". Is that the same thing? Just so you know where I'm getting this info (book: "Information Storage and Management: Storing, Managing..."):

    Consider an example with the following specifications provided for a disk:

    - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
    - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which the rotational latency (L) can be determined; this is one-half of the time taken for a full rotation, or L = (0.5/250 rps, expressed in ms).
    - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O; for example, for an I/O with a block size of 32 KB, X = 32 KB/40 MB.

    Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is (1/TS) = 1/(7.8 × 10^-3) = 128 IOPS.
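
    The book's arithmetic checks out; as a quick sanity check, here is the same sum in PowerShell (the KB/MB suffixes make the unit conversion explicit):

        $seek    = 0.005        # 5 ms average seek time, in seconds
        $latency = 0.5 / 250    # half a rotation at 250 rps = 2 ms
        $xfer    = 32KB / 40MB  # 32 KB block at 40 MB/s = 0.8 ms
        $ts      = $seek + $latency + $xfer
        "{0:N1} ms per I/O, about {1:N0} IOPS" -f ($ts * 1000), (1 / $ts)
        # -> 7.8 ms per I/O, about 128 IOPS

    Note that the internal transfer time is the smallest of the three terms here, which is why seek time and rotational speed dominate random-I/O estimates.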

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a Linux-based web infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef. Most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get update once per day, and we definitely do not want to run it unconditionally on each chef-client wake. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature). The current process has a number of drawbacks we want to address. First off, it is asynchronous: the deployment script does not check the chef-client logs after the restart, so we don't even know if the deployment was successful; it does not even wait for the Chef clients to complete their run. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure that using chef-client for deployment is legitimate; perhaps we are just doing it wrong from the start. Please share your thoughts/experience.

  • Server 2008 R2 domain Windows update strategy

    - by Joost Verdaasdonk
    Let me explain my question a bit. We are a small company that has now made the first move to a bigger network. For now the network consists of 5 servers running 2008 R2 (DC, SQL, web, etc.). Everything we need is now in place, but for now we cannot afford to finish the network by implementing redundant systems (secondary DC, DNS, SQL cluster, etc.). For some people this is hard to understand, but this is the current situation (and we are aware, and will fix this when we can). Because we want to keep our systems secure and up to date, I've made sure that all systems are updated regularly. The problem, of course, is that the number of updates Microsoft rolls out that need a system reboot seems to be growing (maybe I'm wrong and it just feels like this). ;-) In our domain, servers depend on each other for services (like SQL, web, or whatever), so just rebooting a server at will is NOT a good idea! For now I update all of them without rebooting at once. After all are up to date, I bring them down in the order they depend on each other. After this I reboot all of them in the reverse order. I understand, of course, that if I DID have redundancy in my system, updating and rebooting would not be such a problem, because the server's tasks could be taken over by another node, but this is something we generally need to add when we can. So my question is: given the above situation, can you suggest more update strategies or general ideas that could help me do this process in a better or faster way? Thanks for your thoughts!
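
    Until redundancy arrives, one small improvement is scripting the ordered reboot so each server is confirmed back online before the one that depends on it is touched. A minimal sketch, assuming PowerShell 3.0+ remoting from a management box and hypothetical server names, rebooting the most-depended-on machine first:

        # Reboot in reverse dependency order, waiting for each box to
        # answer PowerShell remoting again before touching the next one.
        $order = 'dc01', 'sql01', 'web01'
        foreach ($server in $order) {
            Restart-Computer -ComputerName $server -Force `
                -Wait -For PowerShell -Timeout 600
        }

    This turns the manual shut-down/boot-up sequence into one unattended pass, and the -Wait check catches a server that fails to come back before its dependents are disturbed.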

  • Automated Linux VMs on Hyper-V 2012

    - by Mick
    I have a requirement to create a ton of Linux VMs for our customers (we run managed infrastructure) on Hyper-V 2012 in the coming months, and I have an issue with automating it. Here is how I need it to work:

    1. A user accesses their web page and creates a VM.
    2. The VM is created with a unique IP and name.
    3. The user logs in over SSH.

    I know Hyper-V quite well, can work with PowerShell, and am a C# programmer, so the development side of things is taken care of. I also know enough about Linux to be at least competent: I have used it on and off for a number of years, but I haven't done anything enterprise-level with it. All this can be done easily by manual processes, but I need to be able to script or program this to automate it, as there could be hundreds of VMs being created, and I don't know how. My first thought is to have a database of pre-generated random names and IPs, but I don't know how to get a Linux VM to boot up and grab one from the database... I suppose a kickstart script would take care of it, but I don't know what to do from there. Here is what is bouncing around in my head:

    1. Create a standard Linux build. (Easy to do.)
    2. Someone clicks "Create VM"; I pull a name and IP from the database and write them to a kickstart script. (Easy to do.)
    3. I could then open the template VHDX file, copy in the script, and save it. (Not sure if possible.)
    4. The user boots up the new VM and the kickstart script gives it the name and IP I assigned.

    My problem is that I don't know how to open a VHDX file and insert a kickstart script into it... I can't figure it out. I am reaching here, and this solution may be miles off... I am more used to creating Windows VMs with scripts and so on, which I am more familiar with... any help would be appreciated. Thanks, Mick
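
    On the "open the VHDX" step: Windows can mount a VHDX natively, but it can only read and write partitions in filesystems it understands, so this approach only works if the template exposes something like a FAT32 /boot partition (Windows cannot mount ext4). A hedged sketch under that assumption, with hypothetical paths and names:

        # Mount the template, drop the per-VM kickstart file onto the
        # first partition that gets a drive letter, then unmount.
        $vhd  = 'C:\Templates\linux-template.vhdx'
        $part = Mount-VHD -Path $vhd -Passthru | Get-Disk |
                Get-Partition | Where-Object DriveLetter | Select-Object -First 1
        Copy-Item 'C:\Kickstart\ks-vm0042.cfg' ("{0}:\ks.cfg" -f $part.DriveLetter)
        Dismount-VHD -Path $vhd

    If the template is pure ext4, an alternative that avoids touching the disk at all is a first-boot script baked into the image that fetches its name and IP from your web service over DHCP-assigned networking.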

  • how to diagnose a hard system seizure? Dell+Ubuntu

    - by rob
    I've got Ubuntu 9.10 on a Dell Vostro 420 desktop, a little over a year old, which I use for plain vanilla work stuff (email, web, terminal, text editor). Every now and then, at totally random times, it completely freezes on me. Hard. Mouse and keyboard stop working, cursor stops blinking, clock stops moving. All I can do is hold down the power button on the front of the box to shut it off. Sometimes it happens after several months of continuous uptime; sometimes it happens a few minutes after a reboot, while all I've done is open a terminal to look at log files, or maybe firefox to do a google search. Each time, there is nothing at all in /var/log/messages at the time of the crash. This makes it seem like a hardware problem, and indeed a few months ago I opened the box and wiggled everything and the problem went away for a while. But now it's back. I went in and checked everything, took out each RAM card and reseated. No luck. I ran all the system diagnostics (the long version) and everything passed with flying colors. Something is messed up in this box, but without any useful logs or failed tests, how in the world am I going to find it? And of course, Dell's not gonna help me cause I went and replaced Windows with Ubuntu. What steps would you take next to track down this problem?

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting a current pending sector count of 45. I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I write data directly to a bad sector, it should trigger a reallocation, reducing the current pending sectors by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

        # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
        dd: error writing ‘/dev/sdb’: Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine; I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

        Model Family:     Western Digital Caviar Green (AF)
        Device Model:     WDC WD15EARS-00Z5B1
        Serial Number:    WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity:    1,500,301,910,016 bytes [1.50 TB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS (minor revision not indicated)
        SATA Version is:  SATA 2.6, 3.0 Gb/s
        Local Time is:    Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions:

    1. Why aren't the reallocations being recorded by the disk? I'm assuming a reallocation took place, as I can now write data directly to the sectors and couldn't before.
    2. Why did shred cause reallocation when dd didn't? Does the fact that shred writes random data instead of just zeros make a difference?

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (an ADSL2+ connection in Australia). We get random drop-outs of the authentication connection: the modem keeps the connection to the DSL service, but we lose authentication and either have to restart the router/modem (it's a combined unit, a Belkin, not sure of the model number) or unplug the phone cable, wait about 30 seconds, and plug it in again. I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a fair distance from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but that hasn't helped. Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time; would it make any difference to change to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It was fine for at least 12 months, then this suddenly started about 2 months ago.

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files: I will reset my MacBook and some of the files will be deleted, and then, if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help. Next, I attempted to back up my MacBook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the Trash problem as well, and whether or not these problems are related.

    EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

        6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
        6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
        6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
        6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
        6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
        6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
        6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
        6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
        6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
        6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
        6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19

  • Strange problem with Google Mail and IMAP on Outlook 2007

    - by Alex C.
    I work for a small non-profit organization. We have about 35 administrative employees who use e-mail. We're on a Windows network with a domain. Everyone is running XP Pro and Office 2007 with all updates/patches. We used to use POP3 mail through a local provider. However, we recently signed up for a free Google Apps account, and we switched to IMAP mail through Google. Everyone uses Outlook 2007 as the client. For about ten days, everything was working fine. Yesterday afternoon, we suddenly developed a strange and annoying problem: every time you send an e-mail message, a copy of your outgoing message shows up in your inbox. It's as if you're adding your own address to the CC: line of every message. Nothing has changed on our end. I was hoping that the problem was a temporary glitch that would resolve itself, but here we are, about 24 hours later, and it's still happening. I searched Twitter, and there were a handful of vague messages about issues with Google mail and IMAP, but I didn't see any references to this specific problem. Any thoughts on what's going on here and how to fix it?

  • I would like to edit the layout of my keyboard just a bit - what's the best way?

    - by Codemonkey
    I'm using an Apple keyboard, which has some annoyances compared to other keyboards. Namely, the Alt_L and Super_L keys are swapped, and the bar and less-than keys are swapped ("|" and "<"). I've written an Xmodmap file to swap the keys back:

        keycode 49 = less greater less greater onehalf threequarters
        keycode 64 = Super_L NoSymbol Super_L
        keycode 94 = bar section bar section brokenbar paragraph
        keycode 108 = Super_R NoSymbol Super_R
        keycode 133 = Alt_L Meta_L Alt_L Meta_L
        keycode 134 = Alt_R Meta_R Alt_R Meta_R

    I did this by identifying the keys using xev and the default modmap (xmodmap -pke) and swapping the keycodes. xev now identifies all my keys as correct, which is awesome! I can also use the correct keys to type the bar and less-than symbols. (I followed this answer on askubuntu: http://askubuntu.com/q/24916/52719) But it seems the change isn't very deep. For instance, the Super key is now broken in the Compiz Settings Manager: no shortcuts involving the Super key work (but the Alt key does). Also, the settings dialog for Gnome Do doesn't heed the changes in xmodmap, and I can't open the Gnome Do window anymore if I use any of the remapped keys. So to summarize, everything broke. I would like a deeper way of telling Ubuntu (or any other Linux distro, for that matter) which keys are which on the keyboard. Is there a way to edit the keyboard layout directly? I'm using the Norwegian Bokmål keyboard layout. Does it reside in a file somewhere I could edit? Any comments, previous experiences or relevant stray thoughts would be greatly appreciated - Thanks

  • How to import a text file into PowerShell and email it, formatted as HTML

    - by Don
    I'm trying to get a list of all Exchange accounts, format them in descending order from largest mailbox and put that data into an email in HTML format to email to myself. So far I can get the data, push it to a text file as well as create an email and send to myself. I just can't seem to get it all put together. I've been trying to use ConvertTo-Html but it just seems to return data via email like "pageFooterEntry" and "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" versus the actual data. I can get it to send me the right data if i don't tell it to ConvertTo-Html, just have it pipe the data to a text file and pull from it, but it's all ran together with no formatting. I don't need to save the file, i'd just like to run the command, get the data, put it in HTML and mail it to myself. Here's what I have currently: #Connects to Database and returns information on all users, organized by Total Item Size, User $body = Get-MailboxStatistics -database "Mailbox Database 0846468905" | where {$_.ObjectClass -eq “Mailbox”} | Sort-Object TotalItemSize -Descending | ft @{label=”User”;expression={$_.DisplayName}},@{label=”Total Size (MB)”;expression={$_.TotalItemSize.Value.ToMB()}} -auto | ConvertTo-Html #Pause for 5 seconds for Exchange write-host -foregroundcolor Green "Pausing for 5 seconds for Exchange" Start-Sleep -s 5 $toemail = "[email protected]" # Emails report to this address. $fromemail = "[email protected]" #Emails from this address. $server = "Exchange.company.com" #Exchange server - SMTP. #Email the report. $email = New-Object System.Net.Mail.MailMessage $email.IsBodyHtml = $True $email.To.Add($toemail) $email.From = $fromemail $email.Subject = "Exchange Mailbox Sizes" $email.Body = $body $client = New-Object System.Net.Mail.SmtpClient $server $client.UseDefaultCredentials = $true $client.Send($email) Any thoughts would be helpful, thanks!
