Search Results

Search found 22546 results on 902 pages for 'mode management'.


  • Hard drives (SATA/ATA) corrupting

    - by JC Denton
    Hello all, two years ago I bought relatives a new computer for Christmas and installed Ubuntu on it. Ever since then it has been experiencing problems with its hard drives.

    The hard drive supplied with the machine was a SATA drive. When it appeared to be having problems (files and folders with invalid encoding started appearing), I replaced the SATA drive with the drive from their previous computer. I later replaced that one as well, since it was rather old and thus more prone to failure. The current replacement is an IDE drive, but the same problems have started to appear: files and folders showing up in Nautilus with invalid encoding. I fear the files and folders being shown are existing filesystem entries that are starting to corrupt. As it is happening to both the IDE and the SATA drive, I believe it is unlikely to be the drives themselves or the IDE/SATA controller.

    Any ideas as to what could be causing the (assumed) corruption?

    EDIT: You're right about the paragraphs. They were there in edit mode, but I'm still getting to grips with the whitespace formatting codes. The system is a "Primo Pro" AMD Phenom II X4 Quad Core 920 2.80GHz SILENT DDR2, ordered from overclockers.co.uk, and nothing has been added to it except for the drive replacements described above. It would seem unlikely for a barebones system to be underpowered.
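
    Since the corruption follows the machine rather than a single drive, bad RAM or the controller are the usual suspects alongside the disks themselves. A minimal diagnostic sketch, assuming the drive shows up as /dev/sda (adjust the device name to match):

      # Drive health first (smartmontools package)
      sudo apt-get install smartmontools
      sudo smartctl -H /dev/sda    # overall health verdict
      sudo smartctl -A /dev/sda    # watch Reallocated_Sector_Ct and Current_Pending_Sector

      # Filesystem damage report, read-only, on an unmounted partition
      sudo fsck -fn /dev/sda1

      # Bad RAM corrupts data on any attached drive; run the Memtest86+
      # entry from the GRUB boot menu overnight
      sudo apt-get install memtest86+

    If Memtest86+ reports errors, the drives were never the problem.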

    Read the article

  • BGP multipath & return routes

    - by Dennis van der Stelt
    I'm probably a complete n00b concerning Server Fault-related questions, but our IT department makes a bold statement I wish to verify. I've searched the internet but can find nothing related to my question, so I come here.

    We have Threat Management Gateway 2010, and we used to just route requests to IIS; each request contained the client IP address, so we could see where it was coming from. But now they have turned on "Requests appear to come from the TMG server", so IP addresses aren't forwarded anymore; every request has the IP of the TMG server.

    The idea behind this is that, because of multipath BGP routes, an incoming request goes over route A, but the acknowledgement messages could return over route B. The claim is that because the response doesn't come from the first known source (our proxy), but instead from IIS, some smart routers at the visitor's end don't recognize the acknowledgement message and filter it out. In other words, the response never arrives.

    Again, this is the claim, but I cannot find ANY resources on the internet that support it. I do read about BGP multipath, but more in the case that there are alternative routes when the fastest route fails for some reason. So is the claim completely bogus, or is there (some) truth to it? Can someone explain or point me to resources? Thanks in advance!

    Read the article

  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology considering that we have:

    - 6 x Exchange 2010 Standard licenses
    - 2 x separate locations that are supposed to provide redundancy in case of link problems
    - 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection
    - Multiple locations worldwide using those Exchange servers; most locations (certainly the ones hosting Exchange) will be connected with a VPN tunnel

    I was thinking something like this (a cmdlet sketch for the mailbox redundancy follows below):

    Location MAIN (about 70-100 people):
    - 2 x TMG 2010 in NLB
    - 1 x Exchange 2010 CAS/HUB role
    - 2 x Exchange 2010 Mailbox role (active + passive)

    Location SUPPORT (about 20 people):
    - 2 x TMG 2010 in NLB
    - 1 x Exchange 2010 CAS/HUB role
    - 2 x Exchange 2010 Mailbox role (active + passive)

    Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice versa. We have 6-7 locations, with more coming up (not big ones, roughly 10+ people per location). I do know that the CAS/HUB is a single point of failure (and no NLB), but I simply lack the licenses to add redundancy there. What do you think about this approach? What would be a better approach, in your opinion?
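
    For the mailbox role, the active/passive pairing you sketch per site maps onto a Database Availability Group; a rough sketch of the cmdlets involved, with hypothetical names for the DAG, witness server, and mailbox servers:

      New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1
      Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-MAIN
      Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-SUPPORT
      # second copy of each database in the other site, lower activation preference
      Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX-SUPPORT -ActivationPreference 2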

    Read the article

  • Linksys router cannot change default password

    - by Jessica M.
    My wireless internet suddenly stopped working today. I have Windows 7 and a Linksys WRT54G router. I tried to log into the Linksys router setup page by typing 192.168.1.1 into Firefox, and it prompted me for a username and password as usual. The problem is that when I tried to enter my regular username and password, it did not work.

    I finally got in when I came across a forum post whose very last reply solved my problem: it suggested trying username: root / password: admin. For some reason the username and password had been changed. When I tried root/admin, it worked and let me into the Linksys setup page.

    The remaining problem is that I can't change the username or password anymore: every time I want to log into the setup page I have to enter root/admin, and I can't change the "WPA shared key" (password). For the security settings I selected WPA2 Personal + AES. The post also said, "If the firmware was upgraded to non-Linksys firmware, the default will be different." But I didn't update anything, and I'm worried that someone installed a virus or somehow changed the firmware on my router. How did I get non-Linksys firmware on my router?

    EDIT: I figured out how to change the password when I log into the Linksys setup page: Administration -> Management -> Password. I still don't understand whether my router firmware was changed, who changed it, or if it happened by mistake.

    Read the article

  • How harmful is a hard disk spin cycle?

    - by Gilles
    It is conventional wisdom¹ that each time you spin a hard disk down and back up, you shave some time off its life expectancy. The topic has been discussed before: Is turning off hard disks harmful? What's the effect of standby (spindown) mode on modern hard drives?

    Common explanations for why spindowns and spinups are harmful are that they induce more stress on the mechanical parts than ordinary running, and that they cause heat variations that are harmful to the device mechanics.

    Is there any data showing quantitatively how bad a spin cycle is? That is, how much life expectancy does a spin cycle cost? Or, more practically: if I know that I'm not going to need a disk for X seconds, how large should X be to warrant spinning down?

    ¹ But conventional wisdom has been wrong before; for example, it is commonly held that hard disks should be kept as cool as possible, but the one published study on the topic shows that cooler drives actually fail more. This study is no help here since all the disks surveyed were powered on 24/7.
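
    SMART exposes the counters this tradeoff turns on; a quick sketch for watching them on a given disk (the device name is an assumption):

      # lifetime spin-up counters (smartmontools)
      sudo smartctl -A /dev/sda | grep -Ei 'start_stop|load_cycle|power_cycle'

      # set the standby (spindown) timer: values 1-240 are multiples of
      # 5 seconds, so 120 = 10 minutes; 0 disables spindown
      sudo hdparm -S 120 /dev/sda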

    Read the article

  • Outlook shared address book contacts not displaying

    - by user224061
    We have a shared Exchange address book containing distribution email groups. When someone connects to the shared address book and composes an email to one of the groups, the distribution list comes up empty when it is expanded. In troubleshooting, I noticed that when we expand the distribution list to view the recipients, most of the recipients are missing and only semicolons appear.

    Troubleshooting further, I noticed that if I open the distribution list in my Outlook client, click the Update Now icon, and then go to create the email, the email addresses do appear when I expand the group.

    My Outlook profile is a cached profile, while the shared contact list I pulled the distribution list from is an online (non-cached) shared contact list. I also found that if I switch my Outlook client to online-only (not cached), the shared address book lists appear correctly when expanded.

    Is there any way to make these lists appear correctly without having to click Update Now for each and every distribution list in the shared contact list on the server? I would really prefer that people using this shared contact list not have to click the Update Now button or switch out of cached mode to make it work. T.I.A.
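
    Cached mode resolves recipients against the Offline Address Book rather than live directory queries, so a stale OAB is the first thing to rule out; a sketch from the Exchange Management Shell (the OAB name is an assumption - list yours first):

      Get-OfflineAddressBook
      Update-OfflineAddressBook -Identity "Default Offline Address Book"
      Update-GlobalAddressList -Identity "Default Global Address List"

    Clients pick up the rebuilt OAB on their next sync (Send/Receive -> Download Address Book forces it).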

    Read the article

  • Linux Mint VIA sound issue

    - by user2699451
    So I installed Linux Mint 15 "Olivia" 64-bit on my Mecer W550EU laptop. I have HD Audio with a VIA chipset:

      charles-W55xEU charles # lsmod | grep snd
      snd_hda_codec_hdmi     36913  1
      snd_hda_codec_via      51018  1
      snd_hda_intel          39619  5
      snd_hda_codec         136453  3 snd_hda_codec_hdmi,snd_hda_codec_via,snd_hda_intel
      snd_hwdep              13602  1 snd_hda_codec
      snd_pcm                97451  4 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
      snd_page_alloc         18710  2 snd_pcm,snd_hda_intel
      snd_seq_midi           13324  0
      snd_seq_midi_event     14899  1 snd_seq_midi
      snd_rawmidi            30180  1 snd_seq_midi
      snd_seq                61554  2 snd_seq_midi_event,snd_seq_midi
      snd_seq_device         14497  3 snd_seq,snd_rawmidi,snd_seq_midi
      snd_timer              29425  2 snd_pcm,snd_seq
      snd                    68876 19 snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_via,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device
      soundcore              12680  1 snd

    And my sound card:

      charles-W55xEU charles # aplay -l
      **** List of PLAYBACK Hardware Devices ****
      card 0: PCH [HDA Intel PCH], device 0: VT1802 Analog [VT1802 Analog]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 0: PCH [HDA Intel PCH], device 2: VT1802 HP [VT1802 HP]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
      card 0: PCH [HDA Intel PCH], device 3: HDMI 0 [HDMI 0]
        Subdevices: 1/1
        Subdevice #0: subdevice #0

    And my audio device:

      charles-W55xEU charles # lspci -v | grep -A7 -i "audio"
      00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
        Subsystem: CLEVO/KAPOK Computer Device 0550
        Flags: bus master, fast devsel, latency 0, IRQ 47
        Memory at f7c10000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [50] Power Management version 2
        Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
        Capabilities: [100] Virtual Channel

    Sometimes when I boot up, sound works; other times it doesn't. It is completely random. So far, no one on XChat Linux help or the Linux Mint forums has been able to help me; I have always had issues with sound on VIA chipsets. I have run: sudo apt-get upgrade && apt-get install mint-meta-cinnamon. It seemed to help, but after 2-3 reboots the problem came back. By the way, every time I checked, PulseAudio was set to Duplex Audio Input & Output, and alsamixer was always unmuted!
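
    When HDA codec detection is only intermittently right, forcing a codec model for snd-hda-intel sometimes stabilizes it; a sketch, with model=auto as a guess (valid model strings for the VT1802 are listed in the kernel's HD-Audio-Models documentation):

      echo "options snd-hda-intel model=auto" | sudo tee /etc/modprobe.d/via-sound.conf
      sudo alsa force-reload    # or reboot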

    Read the article

  • How to troubleshoot Hyper-V VSS writer causing backup failure on Server 2008 R2

    - by Tim Anderson
    I have a Windows Server 2008 R2 machine running Hyper-V. Backups using Windows Server Backup fail with the error:

      The backup operation that started at '2011-01-02T10:37:01.230000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.

    I have traced this to a problem with the Hyper-V VSS writer. vssadmin list writers reports:

      Writer name: 'Microsoft Hyper-V VSS Writer'
         Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
         Writer Instance Id: {fcf0dd79-d282-4465-88ae-7b6857e055c2}
         State: [8] Failed
         Last error: Inconsistent shadow copy

    However, I can't get any further. A few relevant facts:

    - I get the error even if all the VMs are shut down.
    - If I disable the Hyper-V VSS writer by stopping the Hyper-V Management Service, backup completes OK.
    - There are no errors in the Hyper-V-VMMS application log.
    - I tried to set tracing for VSS but can't get any output for some reason; I set the correct registry entries, but no trace log is generated.

    Tim
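
    A writer stuck in a failed state can usually be reset by restarting its host service rather than rebooting; a short sketch from an elevated prompt (vmms is the Hyper-V Virtual Machine Management service):

      vssadmin list writers
      net stop vmms && net start vmms
      vssadmin list writers

      REM also check that the volume holding the VMs has shadow storage headroom
      vssadmin list shadowstorage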

    Read the article

  • Can a hardware firewall block a server accessing its OWN UNC shares?

    - by Simon
    I need to set up a UNC share for my hosted dedicated server to access a share on itself; unfortunately, TFS requires a UNC share. I am on a Windows Server 2008 Standard SP2 64-bit dedicated server behind a PIX 501 firewall, hosted with GoDaddy. I just cannot get the server to access itself, and I get this error:

      Windows cannot access \\SERVER\SHARE. Check the spelling of the name... etc.

    I've found numerous questions about this but no answer to my problem:

    - Server 2008 Standard x64 SP2, workgroup (not domain)
    - Windows Firewall is off
    - Computer Browser service is on
    - I am trying to access \\MYMACHINE\TFS-BUILDS by typing it in or by double-clicking; neither works
    - Machine has a single network card
    - File Sharing wizard says the share was created OK
    - Share is showing under Computer Management
    - Permissions are set to Everyone / Full Control
    - No obvious errors in the event log
    - Reboot didn't fix it

    Unfortunately, I cannot try to access other shares into or out of this machine, because it is a hosted dedicated server and the only machine behind the hardware firewall. The only thing left I can think of is that the hardware firewall needs to be configured. Is this possible? Does 'UNC traffic' go out of the machine and then back in again?
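
    For what it's worth, SMB traffic to the machine's own name should stay on the loopback interface and never reach the PIX. On a stand-alone Server 2008 box, a more common culprit is the NTLM loopback check, which rejects authentication against the local machine name; a sketch of the usual workaround (reboot afterwards):

      reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

      REM then retest locally:
      net view \\MYMACHINE
      dir \\MYMACHINE\TFS-BUILDS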

    Read the article

  • Struggling to set up an NLB cluster

    - by Chris W
    I'm trying to set up NLB on a couple of Windows 2008 R2 virtual servers running on top of Hyper-V R2. The servers each have a single vNIC for LAN access (and a second vNIC for SAN access). I'm setting up the cluster to use multicast mode, and the vNICs are each set to allow MAC spoofing.

    Essentially I'm finding that I can add SERVER1 as a host and it will pick up and respond to the cluster IP from a remote subnet. If I then 'stop' the node in NLB Manager, it still responds when I would expect it to stop answering on that IP. If I recreate the cluster and add SERVER2 as the first host, the wizard completes correctly and an IPCONFIG on the server shows that it now has the cluster IP, but I can't ping the cluster IP from a remote subnet; I can from another machine on the same subnet. As a final test, with both servers in the cluster and pinging from another machine on the same subnet, I still get a response from the cluster IP when both nodes are stopped according to NLB Manager.

    The two VMs sit on the same physical blade and are built exactly the same, as they'll be used as SharePoint web front-end servers. I'm at a loss as to what could be wrong with the second VM that prevents it taking on the address as the sole node in the cluster, never mind the strange behaviour of the cluster when I stop/start nodes.
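
    One classic multicast-NLB gotcha matches part of this: routers refuse to learn a unicast IP that resolves to a multicast MAC, so the cluster answers on its own subnet (where stale ARP caches may also explain replies after nodes are stopped) but not from remote subnets until the upstream router gets a static ARP entry. A sketch in Cisco IOS syntax, assuming the router is Cisco and with made-up addresses - take the real cluster MAC from NLB Manager:

      ! map the cluster IP to the cluster's multicast MAC
      arp 10.0.1.50 03bf.0a00.0132 ARPA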

    Read the article

  • I/O APIC on VirtualBox

    - by RidDeBakTiYar
    I'm trying to use the PIT to do APIC timer calibration, and I want to drive the PIT through the I/O APIC instead of the PIC. On Bochs I get interrupts from the PIT at the requested frequency through the I/O APIC, while on VirtualBox I can't receive a single interrupt. It must be an I/O APIC configuration problem, because as soon as I unmask the first PIC entry the IRQ fires; however, that's not what I want. Can you imagine any possible condition that would keep VirtualBox from firing the IRQ?

    - I'm not assuming a single I/O APIC configuration (even though VirtualBox has only one).
    - I'm not assuming identity mappings between ISA IRQs and I/O APIC GSIs (I use the ACPI MADT table to get the I/O APIC base address and the interrupt source overrides).
    - I'm setting the trigger mode and polarity bits correctly (on VirtualBox they are reported as '00 - default', which means edge-triggered, active-high, right?).
    - I'm putting the BSP APIC ID into the destination field (using physical destination mode) with vector 0x20. The BSP APIC ID being 0 on VirtualBox, this ends up with 0x0000000000000020 written to the IOREDTBL entry.
    - Just in case I'm getting the wrong values from the interrupt override descriptor, I'm writing this value to all the IOREDTBL entries (I know this is very, very bad, and it won't be kept once I understand what's going on).

    The only thing I haven't checked is the local APIC configuration. I'm not writing any value to the BSP LAPIC, just reading the APIC ID and using it to boot the APs through IPIs. And obviously I'm setting bit 11 in the IA32_APIC_BASE MSR to enable the LAPIC.

    Any ideas? Thanks in advance.

    Read the article

  • Why would the SQL 2008 "Generate scripts..." utility generate an invalid SQL script?

    - by Deane
    I have a SQL 2008 database that needs to be restored to a SQL 2005 instance. I have gone through the "Generate scripts..." wizard, set it for SQL 2005 compatibility, and generated a 62 MB SQL script. When I run it on the SQL 2005 instance, it throws all kinds of errors, some of them really strange in that they describe an invalid database: FK constraints are wrong, it's trying to create FKs on columns that don't exist, it's trying to insert records that raise duplicate key errors, and it's trying to create the same objects twice.

    Any idea how this could happen? This SQL script was generated by SQL Server Management Studio just minutes before I tried to restore it, and was not modified. Why would this generate an invalid SQL file? Doesn't it just describe the SQL 2008 database, which is presumably valid since we're using it?

    In particular, the duplicate key insertion errors mystify me. If there's a key constraint in the SQL script, then there must be the same thing in the SQL 2008 table. So how could we get rows in there that violate that key constraint?
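
    One quick sanity check on the source side is whether the "duplicate" rows genuinely violate the scripted key - for example, rows differing only in case under a case-sensitive source collation but colliding on a case-insensitive target. A hedged T-SQL sketch with hypothetical table and column names:

      -- run on the SQL 2008 source database
      SELECT KeyCol, COUNT(*) AS cnt
      FROM dbo.SomeTable
      GROUP BY KeyCol
      HAVING COUNT(*) > 1;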

    Read the article

  • Which ports are needed for NTLM (Windows Authentication) to connect to SQL Server?

    - by Adam Bellaire
    I've got SQL Server running on a machine which is not in a domain and which is not operating in mixed mode (it's running with "Windows Authentication"). I'm trying to connect to it from a Linux web server running FreeTDS via TCP/IP, using NTLM to authenticate.

    The firewall on the SQL Server machine is very restrictive. Port 1433 is open to my web server, but I'm getting conflicting information from the web on what additional ports (TCP/UDP) are needed for NTLM to succeed. It currently fails: I can talk on 1433 to request NTLM, but the actual authentication always fails. One source says 137, 138 and 139, but those are just the NetBIOS ports; do I really need those? Another source says 135. Still others seem to say 1434. I can't make heads or tails of it. Dammit Jim, I'm a programmer, not a network administrator!

    EDIT: The exact error message:

      Msg 18452, Level 14, State 1, Server , Line 0
      Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
      Msg 20002, Level 9, State -1, Server OpenClient, Line -1
      Adaptive Server connection failed

    I am attempting to connect with a remote machine username, i.e. 'servername\username'. Some sources recommend that I set up mirrored accounts on the local and remote machines, but the local machine is running Linux, not IIS under Windows.
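
    As I understand it, the NTLM exchange for SQL Server logins is carried inside the TDS stream on 1433 itself, so no extra inbound ports should be needed for NTLM proper (Kerberos or NetBIOS name resolution are a different story). For reference, a minimal FreeTDS sketch for this kind of login - the server entry name, IP, and TDS version are assumptions:

      # /etc/freetds/freetds.conf
      [mssql]
          host = 192.0.2.10
          port = 1433
          tds version = 7.1

      # quick test with the bundled client; note the machine\user syntax
      tsql -S mssql -U 'servername\username' -P 'secret'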

    Read the article

  • How do I configure nVidia drivers on a Portable Ubuntu setup?

    - by Nicholas Flynt
    I've been pulling my hair out over this one for a couple of days now, and Google is no help. I've created a wonderful (until this issue) portable copy of Ubuntu Linux that will boot on almost anything, by using a USB enclosure for my laptop's 80 GB SATA drive. So far so good: it boots and runs on everything, and on non-nVidia setups it was even detecting the hardware and letting me install the required drivers for hardware acceleration and Compiz. Because, you know, the wobbly windows are the most awesome thing ever.

    Anyway, my desktop machine has an nVidia card, so I figured I'd just install the nVidia drivers like before and everything would work happily. Not so. Now the desktop, and any other machine with an nVidia card, works great, but it seems to have completely disabled any other graphics card: when the kernel module detects that an nVidia card isn't present, it shoots up a nasty little dialog box giving me the option to boot into "low graphics" mode, which doesn't even allow me to use the correct screen resolution, much less see the installed graphics card and try to configure a driver for it.

    Is there any way to configure Ubuntu (with the dreaded nVidia kernel module) so that it uses nVidia's drivers when an nVidia card is present, and defaults to the normal (not low-graphics) setup in other cases, so that it has a fair chance of using what's actually present? I'm not afraid to muck with config files, I just don't know the underlying system well enough to feel comfortable diving in without a push in the right direction. Thanks, guys!
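
    One approach is to keep two xorg.conf variants on the stick and pick one early at boot based on what lspci reports; a rough sketch (the script location, init hook, and file names are all assumptions, and the nvidia variant has to be prepared in advance):

      #!/bin/sh
      # run before the display manager starts, e.g. from an init script
      if lspci | grep -qi 'vga.*nvidia'; then
          cp /etc/X11/xorg.conf.nvidia /etc/X11/xorg.conf
      else
          # with no xorg.conf, X autodetects and uses the free drivers
          rm -f /etc/X11/xorg.conf
      fi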

    Read the article

  • Hard drive not correctly recognized on a new Windows 7 installation, but works correctly on Windows XP

    - by david
    I'm having problems configuring a hard disk in a brand new, clean Windows 7 installation.

    System specs:
    - Hard disk: WD VelociRaptor WD6000HLHX (600 GB, 10,000 RPM)
    - Motherboard: Gigabyte Z77X-UD3H
    - BIOS SATA mode set to AHCI (not RAID), with the disk connected to SATA0 (6 Gb/s port)
    - Windows 7 Enterprise SP1 64-bit

    The disk is recognized by the BIOS and is correctly identified, with the name and size correctly reported. Windows recognizes the disk itself and reports that the device is functioning correctly, but it doesn't appear in Explorer. Disk Management shows the drive, but incorrectly states that it is uninitialized and has no partitions. If I try to initialize the drive, I get an error saying that "the system cannot find the file specified" (what file?).

    Before connecting the drive to the new machine, I partitioned and formatted it under Windows XP SP2, creating two partitions (MBR, not GPT) and copying over a boatload of data. However, none of this data appears under Windows 7. If I put the disk back into the Windows XP machine, I can access the disk and all of its data.

    Is it possible to get Windows 7 to correctly recognize the disk without having to erase it and start over? If so, how do I do so? I checked this question, which seems to cover the same issue, but it didn't help.
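
    Before trying anything destructive, it's worth seeing how Windows 7 itself enumerates the disk; a read-only diskpart sketch (save as inspect.txt, run diskpart /s inspect.txt; disk 1 is an assumption - pick the VelociRaptor from the list):

      rem what status does the 600 GB disk report?
      list disk
      select disk 1
      rem detail disk shows offline/read-only flags and any partitions Windows sees
      detail disk
      attributes disk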

    Read the article

  • Windows installation repair option not showing up

    - by Jason
    I'm trying to repair an existing Windows XP installation. Following the instructions from http://www.microsoft.com/windowsxp/using/helpandsupport/learnmore/tips/doug92.mspx, this should work:

    1. When the "Press any key to boot from CD" message is displayed on your screen, press a key to start your computer from the Windows XP CD.
    2. Press ENTER when you see the message "To setup Windows XP now, and then press ENTER" displayed on the Welcome to Setup screen.
    3. Do not choose the option to press R to use the Recovery Console.
    4. In the Windows XP Licensing Agreement, press F8 to agree to the license agreement.
    5. Make sure that your current installation of Windows XP is selected in the box, and then press R to repair Windows XP.
    6. Follow the instructions on the screen to complete Setup.

    On step 5, pressing R does nothing, and there is nothing on the screen saying it would. When I just select to install, I get a message that a previous installation is there and proceeding will destroy it and the installed applications; I can optionally select a directory other than C:\Windows, and I can optionally format before continuing.

    I had tried to go from SP2 to SP3. It failed, and then I couldn't get into Safe Mode. I put the SP1 disk back in to do a repair, and I don't see that option. (I don't have an SP2 boot/install disk; I just have the non-boot upgrade package.)
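
    The usual reason the repair option vanishes at step 5 is that the installation on disk is at a newer service pack than the CD. One workaround is slipstreaming SP3 into a copy of the install media; a sketch (the target path is an assumption; the .exe is the standalone SP3 network installer):

      REM copy the XP CD's contents to C:\XPCD, then:
      WindowsXP-KB936929-SP3-x86.exe /integrate:C:\XPCD
      REM burn C:\XPCD to a bootable CD; the repair option should reappear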

    Read the article

  • Mac OS X and root login enabled

    - by reza
    All, I am running OS X 10.6.8. I have enabled root login through Directory Utility and assigned a password, but I get an error when I try to ssh root@localhost:

      ssh -v root@localhost
      OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
      debug1: Reading configuration data /Users/rrazavipour-lp/.ssh/config
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to localhost [127.0.0.1] port 22.
      debug1: Connection established.
      debug1: identity file /Users/rrazavipour-lp/.ssh/identity type -1
      debug1: identity file /Users/rrazavipour-lp/.ssh/id_rsa type 1
      debug1: identity file /Users/rrazavipour-lp/.ssh/id_dsa type 2
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2
      debug1: match: OpenSSH_5.2 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.2
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'localhost' is known and matches the RSA host key.
      debug1: Found key in /Users/rrazavipour-lp/.ssh/known_hosts:47
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: Next authentication method: publickey
      debug1: Offering public key: /Users/rrazavipour-lp/.ssh/id_dsa
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: Trying private key: /Users/rrazavipour-lp/.ssh/identity
      debug1: Offering public key: /Users/rrazavipour-lp/.ssh/id_rsa
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: Next authentication method: keyboard-interactive
      Password:
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: Authentications that can continue: publickey,keyboard-interactive
      debug1: No more authentication methods to try.
      Permission denied (publickey,keyboard-interactive).

    What am I doing wrong? I know I have the password correct.
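
    Enabling root in Directory Utility and what sshd will accept are separate gates; a quick sketch of both checks on 10.6 (stock paths; the password argument to dscl is a placeholder):

      # does sshd allow root logins? On 10.6 the server config is /etc/sshd_config
      grep -i 'PermitRootLogin' /etc/sshd_config

      # re-set root's password from an admin account, bypassing Directory Utility
      sudo dscl . -passwd /Users/root 'newpassword'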

    Read the article

  • Android failure to boot on LG [migrated]

    - by Ukavi
    I need to recover data from my AT&T LG Thrill Android phone.

    Background: my phone's battery died a couple of days ago because I forgot to charge it. When I charged the phone and tried to turn it on, it showed the LG logo, followed by the dropping balls and the AT&T "Rethink Possible" screen. I then get a message that the application Google Services Framework has crashed, and the phone goes into a loop: the dropping balls show again, followed by the "Rethink Possible" screen, over and over. The phone never gets out of this loop. I have been able to get into the recovery screens (both Safe Mode and the Android recovery service) and have cleared the cache, etc. However, I DO NOT want to wipe user data and restore to factory settings, as this will wipe all of my data (pictures, application data, etc.).

    Solution needed: I need a way of accessing my data so that I can back it up onto an SD card or a computer. I DO NOT want to root the phone, as this may void the warranty. What I'm looking for is a way of perhaps putting the original flash image on the micro SD card and having the phone read that image, or some other similar solution that will get the phone out of this loop and allow me to get to the data.
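
    If USB debugging was ever enabled, a boot loop doesn't necessarily block adb, and the data can often be pulled without rooting; a sketch using the Android SDK platform-tools (the /sdcard path is an assumption about where the Thrill mounts its storage):

      adb devices                          # should list the phone as "device" or "recovery"
      adb pull /sdcard/ ./thrill-backup/   # copy everything
      adb pull /sdcard/DCIM/ ./photos/     # or just the pictures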

    Read the article

  • Why is 10-year-old software still so slow, even today?

    - by Cawas
    I just noted this question due to a game (which happens to be Diablo 2), but the real matter is: why can't my brand new MacBook Pro, made in 2009 with the latest technology (though it's the cheapest one), rival the computer I used to run this game on back in 2000? Really, it was much faster on my AMD K6 450 back in those days, and I could even run two clients at the same time with no slowdown.

    I've always had the feeling this machine was slow, but this is a very odd way to attest it. Granted, the machine is smaller, runs on WiFi, and "boots" way faster thanks to sleep mode. But other than that, what have we evolved after all?! I'm pretty sure this shouldn't be the graphics card's fault. Sure, if I buy the latest technology it will run fast, and probably most people here can confirm this and won't even understand my question. But the thing is, all the hardware is supposedly much faster and better than the stuff from 10 years ago, and the software and operating system became more complex but also more refined. Now I'm trying a piece of software that is actually 10 years old and it's not getting any better results! Why?

    Read the article

  • How to configure a web.config file to allow custom 404 handling while still displaying on-page 500 errors

    - by Mark
    To customize 404 handling, and based on the hosting company's suggestion, we are currently using the following web.config setup. However, we quickly realized that with this configuration, any page errors (500 errors) also get redirected to the custom error page. How can I modify this config file so we can continue to handle 404s with the custom file while still being able to view on-page errors?

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <system.webServer>
          <httpErrors errorMode="DetailedLocalOnly" defaultPath="/Custom404.html" defaultResponseMode="ExecuteURL">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" prefixLanguageFilePath="" path="/Custom404.html" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>
        <system.web>
          <customErrors mode="On">
            <error statusCode="404" redirect="/Custom404.html" />
          </customErrors>
        </system.web>
      </configuration>
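
    A variant worth trying, sketched below and not verified against this host's configuration: keep httpErrors scoped to the 404 override only (dropping defaultPath), and switch customErrors to RemoteOnly so detailed 500 output stays visible at least when browsing from the server itself:

      <system.webServer>
        <httpErrors errorMode="DetailedLocalOnly">
          <remove statusCode="404" subStatusCode="-1" />
          <error statusCode="404" path="/Custom404.html" responseMode="ExecuteURL" />
        </httpErrors>
      </system.webServer>
      <system.web>
        <!-- no defaultRedirect: only 404s leave the page -->
        <customErrors mode="RemoteOnly">
          <error statusCode="404" redirect="/Custom404.html" />
        </customErrors>
      </system.web>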

    Read the article

  • Can I get 2 Gbit over 1 Gbit NICs?

    - by Daniel
    So this really baffles me. Apparently, because 1 Gbit Ethernet can transmit data in both directions simultaneously, it should be possible to get 2 Gbit of data transfer on a single NIC (1 Gbit send and 1 Gbit receive). People claim that because 1 Gbit is (almost always) full duplex, it is exactly 2 Gbit in total. My intuition and electrical background tell me that something is not right here: 4 twisted pairs with 250 Mbit capacity each gives 1 Gbit, unless it really is possible to transfer data in both directions simultaneously.

    I did a test with iperf: Ubuntu Server 12.04 <-- MacBook Pro, both with decent CPU speed. Testing the speed of the connection individually, on the Mac I see 112 MB/s regardless of which direction the data is going; on Ubuntu, with vnstat and ifstat, I get 970 Mbit speeds. Now, launching iperf in server mode on both machines at the same time and sending data using two iperf clients shows that the Ubuntu box is, for example, sending at 600 Mbit and receiving at 350 Mbit, which adds up to pretty much a 1 Gbit link. So to me there is no magical 2 Gbit. Can someone confirm that, or tell me why I'm wrong?

    Another thing that confuses me is the fact that a 24-port switch has, for example:

      Throughput up to: 50.6 Mpps
      Switching capacity: 68 Gbps
      Switch fabric speed: 88 Gbps

    which would suggest it can handle 2 Gbit per port.
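
    For what it's worth, iperf can drive both directions from one command, which removes the coordination of two separate client/server pairs; a sketch with iperf 2.x flags (the host name is a placeholder):

      # on the Ubuntu box
      iperf -s

      # on the Mac: -d runs both directions simultaneously, -r sequentially
      iperf -c ubuntu-box -d -t 30
      iperf -c ubuntu-box -r -t 30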

    Read the article

  • Word 2013 can't compare readonly files

    - by Moshe Katz
    I am using TortoiseSVN to work with a repository that contains some documentation saved as Word documents. On my old computer, with Office 2010, I was able to compare with previous revisions: Tortoise would open Word in compare view so I could see the differences between the files.

    I have installed Office 2013 (the final version from TechNet, not the preview) on my new laptop for testing, and now I can no longer compare Word documents; Tortoise pops up a generic error that it was unable to compare the two files. Tortoise uses a JScript file to interface with Word, so I ran that file through a debugger and found that the actual error is:

      The Compare method or property is not available because this command is not available for reading.

    Some Googling followed by some testing revealed that the actual error is caused by the first file opened (in this case, the previous version) being opened as read-only. If I change the JScript code to open it in normal mode, and I find the file on the system and uncheck the "Read Only" property (if necessary), then the comparison opens as expected.

    I was unable to find any documentation about this change to Word on any Microsoft site. Does anyone know why this has been changed and, if it is intentional and not a bug, what the benefit is of requiring the file to be writable in order to compare it with another?

    Note: This is tagged word-2013-preview, but it is actually about the release version of Word that is available on MSDN and TechNet. I do not have enough rep on this site to create new tags (yet).
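
    For reference, the change described above: Word's Documents.Open takes ReadOnly as its third argument, so forcing it to false keeps Compare available in Word 2013. A sketch of the relevant call (variable names are illustrative, not the exact ones in TortoiseSVN's diff-doc.js):

      // Open(FileName, ConfirmConversions, ReadOnly) - ReadOnly forced to false
      var objBaseDoc = objWord.Documents.Open(sBaseDoc, true, false);

    The file's read-only attribute may still need clearing beforehand, since Word opens such files read-only regardless of that argument.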

    Read the article

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320 GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine, Disk Utility on the Mac reports no issues with the volume, and I can read/write all data on the drive. However, when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says:

      [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices
      [ 2752.335040] usb-storage: device found at 3
      [ 2752.335044] usb-storage: waiting for device to settle before scanning
      [ 2757.330301] usb-storage: device scan complete
      [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0
      [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0
      [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
      [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off
      [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00
      [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through
      [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through
      [ 2757.367631] sdb: sdb1
      [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through
      [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk
      [ 2822.866228] FAT: bogus number of reserved sectors
      [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1.

    When I run fsck.vfat -a /dev/sdb1 I get:

      root@cartman:~# fsck.vfat -a /dev/sdb1
      dosfsck 3.0.3, 18 May 2009, FAT32, LFN
      Logical sector size is zero.

    Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible, because it contains about 280 GB of data I would rather not have to find a temporary home for. Any suggestions?
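
    FAT32 keeps a backup of the boot sector, normally at logical sector 6 of the partition, and "Logical sector size is zero" suggests sector 0 got clobbered; a read-only sketch for comparing the two copies before attempting any repair (device names as above):

      sudo dd if=/dev/sdb1 bs=512 count=1 skip=0 of=/tmp/bs0
      sudo dd if=/dev/sdb1 bs=512 count=1 skip=6 of=/tmp/bs6
      cmp /tmp/bs0 /tmp/bs6 || echo "copies differ - primary boot sector is suspect"

      # testdisk (package: testdisk) can restore the backup boot sector
      # interactively: Advanced -> Boot -> Backup BS / Rebuild BS
      sudo testdisk /dev/sdb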

    Read the article

  • Mustek 1200 CP driver SFC4.SYS bluescreens with BAD_POOL_HEADER

    - by Slink84
    I have Windows XP SP2. Recently it started bluescreening right after startup with a 'BAD_POOL_HEADER' 0x00000019 error caused by the SFC4.SYS driver. After googling for a while, I found out that this is the driver for my Mustek 1200 CP scanner. Booting into Safe Mode and uninstalling it solved the problem... and created another one: now I can't use my scanner.

    The weird thing is that it had been working for a while on this PC without any problems. It all started suddenly, and I can't remember installing anything that might have affected it. Reverting to several earlier System Restore points didn't help. I've tried re-installing the driver from the Mustek website, just in case my copy got corrupted or infected by a virus, but it did not help: it still bluescreens. Also, I've installed Avast and scanned my PC; no viruses were found. If anyone has had this problem before or has an idea what might have caused it, please help.

    ED: @Michael Todd: "...try installing on another PC..." I've installed it on my friend's PC. He has the same OS version, with the latest updates, just like mine (he wasn't too happy, even after I assured him that it is easy to fix by uninstalling that driver :] ). It worked fine: no bluescreens whatsoever. So I think I've narrowed it down to either BIOS settings or some wicked driver conflict. The next thing I'm going to try is re-installing XP, or installing Windows 7. I'm not too happy with the prospect of mucking about with BIOS settings...
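
    Before reinstalling Windows, Driver Verifier (built into XP) can often name the other half of a driver conflict; a sketch - expect it to produce a more informative bugcheck rather than a fix:

      verifier /standard /driver SFC4.SYS
      REM reboot and use the scanner; a verified crash names the faulting driver
      REM switch it off again afterwards:
      verifier /reset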

    Read the article

  • Exchange 2010 Recovery: Mailbox not found using Restore-Mailbox

    - by user146665
    An Exchange 2010 SP1 Update Rollup 5 server's information store database was restored to a recovery database using EMC NetWorker successfully. The recovery database is in a mounted state with mailboxes listed within it. However, restoring the mailbox content using the following command:

      Restore-Mailbox -Identity MYMAILBOX -RecoveryDatabase MYRECOVERYDB -RecoveryMailbox LOSTMAILBOX -TargetFolder FOLDERFORLOSTMAILBOX

    returns the following error:

      Mailbox "LOSTMAILBOX" doesn't exist on database "MYRECOVERYDB".
          + CategoryInfo : NotSpecified: (0:Int32) [Restore-Mailbox], ManagementObjectNotFoundException
          + FullyQualifiedErrorId : 66265C53,Microsoft.Exchange.Management.RecipientTasks.RestoreMailbox

    Note: I've used the correct alias for the mailbox name; I've also tried combinations such as first name, last name, or both, and so forth. Issuing Get-MailboxStatistics -Database MYRECOVERYDB to see if the mailbox is there shows that it is:

      DisplayName    ItemCount    StorageLimitStatus
      LOSTMAILBOX    39495        MailboxDisabled

    Note the strange StorageLimitStatus output of MailboxDisabled; perhaps this is the culprit. Going by the article's documentation, I cannot complete the restore of the mailbox, as I'm stuck at the Restore-Mailbox error that it cannot be found. Please advise, and thank you!

    Source of article: http://www.testlabs.se/blog/2012/07/05/exchange-2010-restore-to-recovery-database-using-emc-networker/
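
    As I understand it, -RecoveryMailbox is resolved against the recovery database by display name or mailbox GUID, so pulling the exact identifiers from the recovery database and retrying with those is a reasonable next step; a sketch using the names from the question:

      Get-MailboxStatistics -Database MYRECOVERYDB |
          Format-List DisplayName, MailboxGuid, LegacyDN

      Restore-Mailbox -Identity MYMAILBOX -RecoveryDatabase MYRECOVERYDB `
          -RecoveryMailbox "LOSTMAILBOX" -TargetFolder FOLDERFORLOSTMAILBOX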

    Read the article
