Search Results

Search found 2898 results on 116 pages for 'solid'.

Page 81 of 116

  • ESX hosts lose connectivity with iSCSI SAN LUNs

    - by Themist
    I've been experiencing this issue for a couple of months now: my ESX hosts lose connectivity with my iSCSI SAN VMFS volumes. As a result the ESX hosts enter a nonresponsive mode, the associated VMs disconnect, and the only remedy is to reboot the host. This happens randomly. I have escalated the issue with VMware but I haven't had any solution yet. I see no errors on my switches and there are no hardware issues either. My SAN infrastructure is solid and there are 2 paths for every VMFS volume. Has anybody else experienced a similar issue? edit: Here are some more details: The iSCSI SAN software is DataCore SANmelody 2.0.4.2 running on 2 HP ProLiant G5 servers. The storage attached to each of the servers is an HP MSA70, and all the iSCSI SAN volumes presented to my 4 ESX hosts are mirrored. I have two iSCSI switches (HP ProCurve 1800G-24) that are trunked together. My SANmelody servers are using NC360T NICs. I team two NICs and have one cable connecting to each iSCSI switch. Each ESX server uses two NICs for the iSCSI network as well.

    Read the article

  • SkyDrive broken after upgrade to Windows 8.1: "This location can't be found, please try later"

    - by avo
    Upgrading from Windows 8 to Windows 8.1 via the Store upgrade path has screwed up my SkyDrive. The C:\Users\<user name>\SkyDrive folder is empty (it only has the single file desktop.ini). When I open the native (Store) SkyDrive app, I see "This location can't be found, please try later". I'm glad to still have my files alive online in my SkyDrive account. I tried disconnecting from / reconnecting to my Microsoft account with no luck. Does anyone have an idea how to fix this without reinstalling/refreshing Windows 8.1? From Event Viewer: Faulting application name: skydrive.exe, version: 6.3.9600.16412, time stamp: 0x5243d370 Faulting module name: unknown, version: 0.0.0.0, time stamp: 0x00000000 Exception code: 0x00000000 Fault offset: 0x0000000000000000 Faulting process ID: 0x4e8 Faulting application start time: 0x01cece256589c7ee Faulting application path: C:\Windows\System32\skydrive.exe Faulting module path: unknown Report ID: {...} Faulting package full name: Faulting package-relative application ID: Also: The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {C2F03A33-21F5-47FA-B4BB-156362A2F239} and APPID {316CDED5-E4AE-4B15-9113-7055D84DCC97} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19) from address LocalHost (Using LRPC) running in the application container Unavailable SID (Unavailable). This security permission can be modified using the Component Services administrative tool. I never was a big fan of in-place upgrades anyway, but this time it was a machine which I use for work, with a lot of stuff already installed on it. I shouldn't have tried to upgrade it in the first place, but I was convinced Windows 8.1 was a solid update. Another lesson learnt.

    Read the article

  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers instead of rack servers. Of course the technology vendors make them sound very nice. A concern that I read very often in different forums is that there is a theoretical possibility of the chassis going down, which would in consequence take all the blades down with it. That is due to the shared infrastructure. My reaction to this risk would be to have redundancy and buy two chassis instead of one (very costly, of course). Some people (including, e.g., HP vendors) try to convince us that the chassis is very, very unlikely to fail because of its many redundancies (redundant power supplies, etc.). Another concern on my side is that if something goes down, spare parts might be required, which is difficult in our location (Ethiopia). So I would ask experienced administrators who have managed blade servers: What is your experience? Do they go down as a whole, and what is the sensitive shared infrastructure that might fail? The question could be extended to shared storage. Again I would say that we need two storage units instead of only one, and again the vendors say that these things are so rock solid that no failure is expected. Well, I can hardly believe that such a critical piece of infrastructure can be very reliable without redundancy, but maybe you can tell me whether you have successful blade-based projects that work without redundancy in their core parts (chassis, storage...). At the moment we are looking at HP, as IBM looks much too expensive... thanks a lot, best regards, Christian
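
    A rough way to weigh the one-chassis-versus-two question is simple availability arithmetic. The sketch below is only an illustration: the availability figures in it are placeholders rather than vendor data, and it assumes the two chassis fail independently.

        # Downtime comparison: one chassis vs. a redundant pair.
        # The availability numbers are made-up placeholders, not vendor figures,
        # and independent failures are assumed.

        HOURS_PER_YEAR = 24 * 365

        def downtime_hours(availability):
            """Expected hours of downtime per year at the given availability."""
            return (1 - availability) * HOURS_PER_YEAR

        single_chassis = 0.999                          # placeholder: 99.9%
        redundant_pair = 1 - (1 - single_chassis) ** 2  # both must be down at once

        print(f"one chassis:  ~{downtime_hours(single_chassis):.1f} h/year of downtime")
        print(f"two chassis:  ~{downtime_hours(redundant_pair):.2f} h/year of downtime")

    Note that where spare parts take weeks to arrive, repair time dominates the availability figure, which tends to strengthen the case for redundancy regardless of how rarely a single chassis fails.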

    Read the article

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak. The first question is: what's the equivalent of RFC1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC4193), which states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address space into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is meant to be globally routable. Does this mean I'll need extra protections so my router will not automatically start advertising these private IPv6 addresses to the world? Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks. Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there is almost no explanation at the smaller home-network level. Can I see a scope ID and CIDR notation used together? I.e., fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
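
    For what it's worth, RFC 4193 wants the 40-bit global ID chosen randomly rather than picked by hand (fd00::/8 is the locally assigned half of fc00::/7). Here is a minimal sketch of generating such a prefix, using plain randomness instead of the SHA-1 procedure the RFC actually suggests:

        # Sketch: build a locally assigned ULA /48 in the spirit of RFC 4193.
        # The RFC suggests hashing a timestamp and an EUI-64 with SHA-1 to get the
        # 40-bit Global ID; for brevity this just takes 40 random bits.
        import ipaddress
        import secrets

        global_id = secrets.randbits(40)
        prefix_int = (0xfd << 120) | (global_id << 80)   # fd00::/8 + Global ID -> a /48
        site = ipaddress.IPv6Network((prefix_int, 48))
        print("site prefix:", site)                      # e.g. fd3b:9c41:7a02::/48

        # a /64 for "LAN number 4", loosely analogous to 192.168.4.0/24
        lan4 = ipaddress.IPv6Network((prefix_int | (4 << 64), 64))
        print("lan 4:      ", lan4)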

    Read the article

  • Monitor goes black for a few seconds

    - by privatehuff
    I have a Hanns G 28" monitor, model # HG281D. It has its issues (the viewing angle sucks) but it has been functional and solid, great for desktop stuff. It worked without any sign of problems for 6-12 months. However, now the monitor "goes black" for about 2-3 seconds, almost like when you click "detect displays". It does not turn off (the power light does not go amber). The computer is completely unaffected and the video mode never changes when the picture returns. The computer is fully responsive and will keep playing music or taking my keypresses during the time I can't see anything (it just happened and I kept typing, etc.). It happens on multiple computers across several operating systems. (I have an 8-port IOGEAR KVM switch that has several computers connected.) But it seems to happen only on certain computers. I have a hackintosh that does it, a Windows 7 PC that does not, a Lenovo laptop that does not, and my old Ubuntu 8.10 box did not do it, but my new Mint 8 box does. I've checked the connections and tried changing out the power cable and the VGA cable. Sometimes it won't happen for hours (or days) and sometimes it happens several times per hour. It was happening many months ago, did not happen for months, and has now started happening again. Does this make any sense? What could it be?

    Read the article

  • Overriding vhost.conf to always allow PHP include access to directory

    - by Jeremy Dentel
    My predecessor in my job developed a simplistic newsletter system for our school's newspaper utilizing PEAR's Mail package. As I grow this system (and our site), we are constantly stuck with Plesk rewriting the vhost.conf file in which the PEAR include path has been manually entered. This has become an unwieldy task to actually manage and keep running. There's been a "note" from both the previous developer and me to attempt to solve this problem, but we can't entirely figure it out. I'm attempting a move to cPanel through another host, so hopefully it'll go away there, but until then it is tedious and extremely difficult to keep the system up reliably without constant attention. I've searched around and haven't found a solution. I'm rather new to the server management scene (the command line was non-existent for me until around a year ago =/), so I haven't found anything. Any help would be useful. "Similar Questions" popped this up, but it still seems to rely on vhost.conf, and would still allow changes within Plesk to overwrite the changes.

    Read the article

  • Feasibility of Windows Server 2008 DFS replication over WAN link

    - by CesarGon
    We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point-to-point line. We have a Windows Server 2008 R2 domain controller on each side of the link. Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB, and it will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope that this will ensure that any user is directed to the closest (and fastest) server when requesting a file from the DFS folders. My concern is whether a 100-Mbps WAN link is good enough to guarantee DFS replication. I've no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't drop often) and our data transfer tests show that a 5 MB/sec transfer rate is easily achieved; this is approximately 40% of the nominal bandwidth. I am also concerned about the latency; I mean, how long will users need to wait to see a change on one side of the link after it has been made on the other side? My questions are: Is this link between networks a reliable infrastructure on which to set up DFS replication? What latency times would be typical (seconds, minutes, hours, days)? Would you recommend that we go for DFS in this scenario, or is there a better alternative? Many thanks.
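
    As a back-of-the-envelope check on whether the link can keep up, here is a quick worked calculation (a sketch: the 5 MB/s figure comes from the transfer tests above, while the daily change rate is a placeholder to replace with a real measurement):

        # Back-of-the-envelope: can a 100 Mbps link keep a 2 TB DFS replica in sync?
        # Throughput is the measured figure from the question; the daily change
        # rate is a made-up placeholder to substitute with real numbers.

        link_throughput_mb_s = 5       # measured: ~5 MB/s over the WAN link
        initial_data_gb = 2048         # ~2 TB to seed
        daily_change_gb = 20           # placeholder: data modified per day

        seed_days = initial_data_gb * 1024 / link_throughput_mb_s / 86400
        daily_sync_hours = daily_change_gb * 1024 / link_throughput_mb_s / 3600

        print(f"initial replication: ~{seed_days:.1f} days (often better pre-seeded locally)")
        print(f"daily churn of {daily_change_gb} GB: ~{daily_sync_hours:.1f} h of link time")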

    Read the article

  • How/when do you study?

    - by Sergei
    Would be interesting to find out how other sysadmins are educating themselves. I find myself in need of constantly learning new things. I prefer to spend more time on a subject and know it thoroughly, knowing that not doing so will kick me back in the future. This can be frustrating sometimes as it feels that I move too slowly. Our company has an account on safari.oreilly.com and I am reading a book or two at any given time. I also read sysadmin-related blogs for ideas and tips and to keep myself in tune with the trends. I cannot do any study at home as I would rather spend my out-of-work hours with my family, plus I find it hard/impossible to study at home due to the inability to concentrate there. So I mostly study while on the train; luckily my commute takes up to 2 hours a day. I also read a lot at work and don't feel guilty about it. To fix/implement/plan, I need to have solid knowledge, and if that requires time then it is part of my job as a sysadmin. There is a joke that says "a sysadmin is a person that knows a lot about everything and as a result knows nothing" - I think there is a grain of truth here...

    Read the article

  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some kind of fire/flood disaster in my home. I'm looking for: Affordable: less than $100/yr or as a one-time cost. Reliable: at least a smaller chance of failing than there is of fire or flood. Easy for the initial backup and for adding to it, and at least semi-easy to recover. I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above. Hard drive - I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice - very easy to take out once a month and add files to. DVDs - way too much of a hassle for both backup and restore. Tape - no idea about the affordability of this option. Online - given that I already have at least 300GB, ever-increasing megapixels mean ever-bigger files, and my ISP upload is about 2 Mbps at best, this just doesn't sound like a good option for me, but I could be convinced. Other - have I missed something? Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and in a disaster would be destroyed along with it. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two, to reduce the likelihood of failure?
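
    To put the online option in perspective, here is a quick worked estimate of the initial upload (the 300 GB and ~2 Mbps uplink figures are from the question; the sustained-utilisation figure is a guess):

        # How long would seeding ~300 GB of photos to an online backup take
        # over a ~2 Mbit/s uplink? Sizes from the question; 70% sustained
        # utilisation is an assumption.

        data_gb = 300
        uplink_mbit_s = 2
        utilisation = 0.7                                # assume the link isn't saturated 24/7

        effective_mb_s = uplink_mbit_s / 8 * utilisation # ~0.175 MB/s
        days = data_gb * 1024 / effective_mb_s / 86400
        print(f"initial upload: ~{days:.0f} days")       # on the order of three weeks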

    Read the article

  • Deciphering Seagate drive model numbers?

    - by Stefan Lasiewski
    I'm comparing Seagate's Enterprise and Desktop drives for a variety of old and new servers. These servers come from different generations, so options like size (73GB, 2TB) and interface (SATA vs SAS 3.0Gbps vs SAS 6Gbps vs SCSI Ultra320) are widely variable. I'm trying to compare the sizes, speeds and interfaces, but I'm getting thrown off by the different model numbers. Also, their website is not the best. Does anyone know of a documented explanation of the Seagate model numbers? And is there a single spreadsheet which compares the features for all drives (or all 'Enterprise' drives)? Seagate drives have model numbers like this:

        Model ST3600057SS    6-Gb/s SAS     600 GB   None       Cheetah® 15K Hard Drives
        Model ST373455LW     Ultra320 SCSI  73.4 GB  68-Pin LW  Cheetah® 15K Hard Drives
        Model ST32000644NS   SATA 3Gb/s     2 TB     None       Constellation™ ES Hard Drives
        Model ST973452SS     6-Gb/s SAS     73 GB    None       Savvio® 15K Hard Drives
        Model ST9200011FS    SATA 3Gb/s     200 GB              Pulsar™ Solid State Drives

    I understand the model numbers read something like this: ST - SOMETHING1 - SIZE - SOMETHING2 - INTERFACE, where the fields mean something like this:

        ST         : For 'Seagate'? 'Seagate Technologies'?
        SOMETHING1 : This field has a number, but I'm not sure what it represents.
        SIZE       : Size in gigabytes; a number like '73' or '300' or '2000'.
        SOMETHING2 : This field also has a number, but I'm not sure what it means.
        INTERFACE  : Seems to indicate the interface. 'SS' means SAS and 'FC' means Fibre Channel, but I don't see how to distinguish between 6Gbps and 3Gbps SAS, or different SATA or FC speeds.

    I don't see a field which indicates the RPM (15K, 10K, 7.2K), etc. Is this part of the model number?
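
    A toy parser along the lines of the layout hypothesised above might look like the sketch below. To be clear, the field meanings are guesses built from the sample models (for instance, reading the leading 3/9 as a 3.5"/2.5" form-factor digit and the last three digits before the interface code as a family number), not an official Seagate decoding.

        # Toy parser for the guessed layout ST + form factor + capacity + family + interface.
        # Field meanings are illustrative guesses from the sample models above,
        # not an official Seagate decoding.
        import re

        INTERFACES = {"SS": "SAS", "NS": "SATA (nearline)", "FS": "SATA (SSD in the sample)",
                      "LW": "Ultra320 SCSI 68-pin", "FC": "Fibre Channel"}
        FORM_FACTORS = {"3": '3.5"', "9": '2.5"'}

        def parse(model):
            m = re.fullmatch(r"ST(\d)(\d+)(\d{3})([A-Z]{2})", model)
            if not m:
                return None
            form, capacity, family, iface = m.groups()
            return {"form factor": FORM_FACTORS.get(form, "?"),
                    "capacity (GB?)": capacity,
                    "family": family,
                    "interface": INTERFACES.get(iface, iface)}

        for model in ("ST3600057SS", "ST373455LW", "ST32000644NS", "ST973452SS", "ST9200011FS"):
            print(model, parse(model))

    Even under this guessed scheme, nothing obviously encodes the spindle speed, so RPM still has to come from the family number or the datasheet.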

    Read the article

  • Backup Exec 10 - Network connection to the remote agent has been lost

    - by jherlitz
    Okay, so I have 4 remote offices, all running off of 3 Mb Ethernet connections. Two sites are part of a WAN and two sites use 3 Mb connections over a site-to-site tunnel. I am using Backup Exec 2010, and I have the remote agent installed on all the remote servers. For the past few weeks, the backups for the two sites running over the site-to-site tunnel have been failing with the following error message: "The network connection to the Backup Exec Remote Agent has been lost. Check for network errors." We used to run the site-to-site tunnel over a DSL connection; now we have changed to the 3 Mb Ethernet connection using a site-to-site tunnel. I need to find out whether it has been failing ever since we changed, or only recently. Backup Exec support is telling me it is a network issue. My communication and connection to the server is solid; we don't have any issues or outages. So I am baffled as to why this continues to fail, and why just those two sites. Any advice?
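
    One way to test the "it's a network issue" theory is to log connectivity to the remote agent over the tunnel for a few days and see whether drops line up with failed jobs. A rough sketch (the hostname is a placeholder, and TCP 10000 is the usual Backup Exec remote agent port - verify it against your own configuration):

        # Crude connectivity logger: poll the remote agent port over the tunnel and
        # record any failures, to test the "it's a network issue" theory.
        # Port 10000 is the usual Backup Exec remote agent (NDMP) port -- verify
        # in your environment. The hostname is a placeholder.
        import datetime
        import socket
        import time

        HOST = "remote-office-server"
        PORT = 10000

        while True:
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            try:
                with socket.create_connection((HOST, PORT), timeout=10):
                    print(f"{stamp} ok")
            except OSError as exc:
                print(f"{stamp} FAILED: {exc}")
            time.sleep(60)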

    Read the article

  • How is made sure magnetic or electric fields from devices like transformers or fans close nearby do

    - by matnagel
    Fans and transformers inside the server case create magnetic and electric fields. Electric fields can be shielded easily, but what about magnetic fields? They can only be shielded with high-cost materials like mu-metal (http://en.wikipedia.org/wiki/Mu-metal). If a hard drive is installed too close to an intense transformer field, how is the magnetically stored information on the ferromagnetic surfaces of the disk kept safe? Even if drives are shielded, where are the limits? Is there some technical investigation or recommendation from manufacturers about this? (I have never heard of anything and never had any problem, but I am interested in some facts. Those are much preferred over what you believe or a habit you developed. Please try to give some solid information.) I have built and repaired many servers and sometimes I did put the hard drive on top of the power supply. Edit: This question is not about frequencies that could affect the drive via its power or data connectors; those are electronically decoupled and that's another question. Edit 2: The Wikipedia page states that the motor inside the drive is shielded with mu-metal. It is obvious that manufacturers have to take care of this. This question is about such influences from outside the drive.

    Read the article

  • RAM ok in memtest86+ == RAM ok after wake from sleep?

    - by twon33
    I have a Windows XP (32-bit) system that appears stable in normal operation, but was repeatably freezing (hard lock, no BSOD) a minute or so after waking from S3 sleep. Some Googling against the motherboard model and memory manufacturer suggested that I might need to bump up the memory voltage, so I tried it and it now seems to resume without freezing. However, I don't really trust it and I'd like to validate that it's actually stable, especially after resuming from sleep. I've run Prime95 for a few hours with no issues, and am planning an overnight run of Memtest86+, which I expect to pass because the system has been solid whenever I've run it without putting it to sleep. Does something like Memtest86+ exist that actually invokes S3 sleep during operation? Clearly it would need an operator to wake the computer to resume testing, but I don't think I've ever heard of a memory test tool that can do this. Alternately, am I wasting my time? Should a clean bill of health from Memtest86+ indicate stability regardless of whether sleep is involved, or, conversely, does my original problem indicate that Memtest86+ would have failed eventually with the stock voltage if I'd run it, sleep or not?
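
    There is no standard memory tester I know of that suspends itself mid-run, but a crude user-space check is easy to improvise: fill a large buffer with a known pattern, suspend and resume the machine by hand, then verify the pattern. A minimal sketch follows; it only exercises whatever RAM the OS happens to map into the process, so it is a sanity check rather than a replacement for Memtest86+.

        # Crude check that RAM contents survive an S3 sleep/resume cycle.
        # Fill a big buffer with a pattern, suspend the machine manually, then verify.
        # Only tests memory mapped into this process -- not a Memtest86+ substitute.

        SIZE_MB = 512                        # adjust to fit comfortably in free RAM
        pattern = bytes(range(256)) * 4096   # a 1 MB repeating pattern
        chunk = len(pattern)

        buf = bytearray(SIZE_MB * chunk)
        for i in range(SIZE_MB):             # fill chunk by chunk, avoiding a huge temporary
            buf[i * chunk:(i + 1) * chunk] = pattern

        input("Buffer filled. Put the machine into S3 sleep, wake it, then press Enter...")

        errors = sum(1 for i in range(0, len(buf), chunk)
                     if buf[i:i + chunk] != pattern)
        print("corrupted 1 MB chunks:", errors)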

    Read the article

  • What is the sysadmin's dream network printer? 6-8k pg/mo. Xerox, OkiData, Lexmark and HP are all fail

    - by Jacob
    How do I find out which printer brand and/or type doesn't suck? This information is hard to find, and manufacturers' websites won't reveal any issues with certain printers. After 10 years of dealing with network-shared printers, I can't say that I have been impressed with any of the printer brands I've seen. Brother's little laser MFPs have been close to ideal for low volume, but that is it, period. OkiData, Lexmark, HP, Xerox solid ink printers - they all sucked in one way or another. Currently I'm looking to replace a Xerox ColorQube 8570 because it fails to print on a regular basis. Sometimes it doesn't even boot VxWorks fully - it just hangs at 2% or whatever. I've used Xerox 8860MFPs and they sucked just as badly. I won't talk about inkjets here; that's most likely not what I'm looking for. We currently spend about $4k on paper and ink per year for this printer at up to 6-8k pages per month - letter size, mostly black and white, low color usage. I want the printer to feed paper correctly, not crash and burn when a PDF isn't to its taste (my favorite Xerox problem here), and to come with decent drivers for Windows and OS X. Print quality is not of the utmost importance, but paper does get sent to customers.
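
    For comparison shopping it can help to boil the consumables spend down to a cost per page; a quick sketch using the figures above (7,000 pages/month is just the midpoint of the stated 6-8k range):

        # Consumables cost per page from the figures in the question.
        # 7,000 pages/month is the assumed midpoint of the stated 6-8k range.
        annual_spend = 4000          # USD per year on paper and ink
        pages_per_month = 7000
        cost_per_page = annual_spend / (pages_per_month * 12)
        print(f"~{cost_per_page * 100:.1f} cents per page")   # roughly 4.8 cents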

    Read the article

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer and after nine years of hard work, I'm looking to become a bit more "vagrant" starting next year, do some much-needed traveling and work off and on, making use of one of the greatest advantages of a programming job: the ability to work from virtually anywhere. For that, I am looking for a reliable hosting company I can entrust my code to, in the form of a number of Subversion repositories, plus an installation of the Redmine project management tool. As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two and that is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations? Preferably from your own, personal, good experience. I have looked at CVSDude / Codesion and while they are certainly great, they don't offer Redmine of course, and seem to be aimed mainly at bigger organizations. What I would need: 2-5 gigs of space minimum, freely distributable between SVN and Redmine attachments; an unlimited number of Subversion projects; access control (team members / checkout-only accounts / etc.) - I don't mind configuring the svn settings on a file basis myself; the possibility to map a custom domain, hosted elsewhere, to the package; frequent backups and access to those backups through FTP or other means. I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that come up.

    Read the article

  • Problem in accessing Windows shared folder on Ubuntu using terminal

    - by vikramtheone
    Hi guys, Description: I have 2 systems with me, one running Windows (the host) and one running Ubuntu, both on a LAN. On the Windows host I develop software intended for the Linux system, and because the Linux system has little storage, my idea to work around this is to make the project folder on the host side a shared folder with full access and access it on Ubuntu over the network. To achieve this, I have installed Samba on Ubuntu; when I go to Places -> Network I can see the shared project folder and I simply mount it. A link appears on the desktop. Next, using Nautilus, I open the link and I can access the contents of the shared folder. Problem: Even though I mount the shared project folder, I don't see it appearing in the /media or /mnt folders, and as a result I don't know what path to use to access this folder from the terminal. For example: when I mounted my USB stick, as expected, a link for the device appeared on the desktop and I also saw a folder under /media. Similarly, a mounted shared folder should have appeared under /mnt, too. Can anyone suggest what I should do now? There are many posts around, but no solid solution for this problem. Help!!! :) Vikram
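
    One way to sidestep the question of where the desktop mount lives is to mount the share yourself at an explicit mount point, so it has a predictable path for terminal use. A minimal sketch, assuming the CIFS mount helpers are installed; the hostname, share name and credentials below are placeholders:

        # Sketch: mount the Windows share at an explicit mount point so it has a
        # predictable path for terminal use, instead of relying on the desktop mount.
        # Hostname, share name and credentials are placeholders.
        import subprocess

        share = "//windows-host/ProjectFolder"            # placeholder UNC path
        mount_point = "/mnt/project"
        options = "username=WINUSER,uid=1000,gid=1000"    # adjust to your account

        subprocess.run(["sudo", "mkdir", "-p", mount_point], check=True)
        subprocess.run(["sudo", "mount", "-t", "cifs", share, mount_point, "-o", options],
                       check=True)
        print(f"Share mounted; access it at {mount_point}")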

    Read the article

  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all, We currently have a Sun x4270 with 2x quad-core Xeon Nehalem 2.93GHz CPUs (16 threads), 72 GB of RAM and 16 x 10k SAS disks split between the OS RAID 1, a partition for the write-ahead logs which is RAID 10, and a partition for the database tables and indexes which is also RAID 10, all XFS. I'm currently evaluating which path to go down in terms of upgrades. We'll be sharding the DB at some point soon, but for now I need to focus on hardware upgrades specifically. The machine is not CPU or memory bound at all at the moment; iowait is what is becoming an issue. The load is mostly write access as we have a heavy caching layer. We're seeing about 300 write IOPS on average on both the database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern, something in line with the size of this server (i.e. no $1m IBM machines). Space is OK on the DB side of things; we're running out, obviously, but there's also some reduction we can do. Additional space would be good, though. My current thoughts are either: an iSCSI SAN, possibly with a 10Gbit network, that has solid state acceleration; a FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?); or a DAS shelf (something like this: http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card - do I have to use Sun's own card or will any PCI Express card work? Anything else? What would you do, from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!
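
    A rough write-IOPS budget helps frame how far the current spindles can stretch. The sketch below uses generic rules of thumb: the per-spindle IOPS figure, the RAID 10 write penalty and the spindle split are assumptions, not measurements from this box.

        # Rough write-IOPS budget for the current RAID 10 data volume vs. observed load.
        # Per-spindle IOPS, the RAID 10 write penalty and the spindle split are
        # generic rules of thumb, not measurements from this machine.

        spindles_in_data_raid10 = 6    # assumption: 6 of the 16 disks back the data volume
        iops_per_10k_sas = 140         # rule-of-thumb random IOPS for a 10k SAS disk
        raid10_write_penalty = 2       # each logical write costs two disk writes

        max_write_iops = spindles_in_data_raid10 * iops_per_10k_sas / raid10_write_penalty
        observed_write_iops = 300      # from the question

        print(f"estimated ceiling: ~{max_write_iops:.0f} write IOPS")
        print(f"observed:           {observed_write_iops} write IOPS "
              f"({observed_write_iops / max_write_iops:.0%} of ceiling)")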

    Read the article

  • Problems using InfraRecorder to back up ISOs of certain CDs

    - by Voyagerfan5761
    I've gotten into the habit of backing up my CDs as ISO files, just in case the discs should be damaged, lost, or destroyed. Using InfraRecorder, the process is pretty painless. Unfortunately, I have run into at least two discs that don't back up. I get the error message: "Can't read source disc. Retrying from sector 252270". Sometimes this will appear repeatedly. One of the discs is my retail copy of Star Trek: Armada II; the other is disc one of DOOM 3. Both discs run flawlessly when I put them in the drive and let Windows AutoPlay them. Armada II appears as two tracks (one data, one audio) in InfraRecorder, and the error happens at the approximate track boundary. DOOM 3's first disc, however, fails much sooner (around sector 990) and appears as one solid data track. Am I simply using the wrong tool for this job? InfraRecorder is a nice free tool that I can run from my flash drive and use for most tasks of this type, but it does seem to have trouble with certain things. Ideally I'd like to hear about any workarounds people have found for this issue, but if I must switch tools I'm open to it (preferably other free tools).

    Read the article

  • Server freezes while installing Redhat Enterprise Linux Server 6

    - by eisaacson
    We've tried both of the first two options: 1) Install or upgrade an existing system, and 2) Install system with basic video driver. When trying option #1, it gets to a screen that has a solid cursor about halfway down, then freezes. When trying option #2, it freezes at the point where it says: "Waiting for hardware to initialize..." Of course, we bought the unsupported version and haven't found anything to help us so far. Here are the specs of the server from the original post: ASUS P8Z68-M Pro LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard with UEFI BIOS; RAIDMAX Reiter ATX-305WBP Black Steel / Plastic ATX Mid Tower Computer Case with 450W Power Supply; Intel Core i7-2600 Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor with Intel HD Graphics 2000 BX80623I72600; 16GB RAM; OCZ Agility 3 SSD 120GB. From some of the posts out there, could the UEFI BIOS or the Sandy Bridge processor be a culprit here? We just tried the DVD on a different computer and it got past that point with ease; that is a standard Dell build compared to our custom machine. Could it be having difficulty recognizing drivers? How do we get past this?

    Read the article

  • How to send from my Z88 to my PC

    - by Bevan
    I've got a Cambridge Z88 that I want to get working with my PC. Around 6 years ago - in 2004 - I made heavy use of my Z88 to do a whole bunch of writing on the train while commuting to and from work. The Z88 is solid state, lightweight and has a full-size silent keyboard, so it works very well as a writing instrument. I still have the serial cable I soldered up back then and used successfully in 2004. It has these connections:

        Z88              9 pin
        -----            -------
        2 TxD   ------>  RxD 2
        3 RxD   <------  TxD 3
        7 GND   <----->  GND 5
        4 RTS   ------>  CTS 8
        5 CTS   <-+      RTS 7
        8 DCD   <-+----  DTR 4
        9 DTR   ----+->  DCD 1
                    +->  DSR 6

    Unfortunately, I haven't been able to find my notes from 2004 that describe how I got it to work back then. I've spent several hours trying to Google a result, but to no avail. I'm pretty sure the cable is fine - after all, it's what I used successfully six years ago, and I've checked it out with a multimeter - so I'm focusing on the PC end of things, which is where I'd like some assistance. Q1: In my recent attempts, I've been using both Hyperterminal (as built into Windows XP) and the command line (copy com2: con:), but with no success. What's a good (better!) serial communications application to use? Is there one that lets me see as deep as the signalling that's occurring on the wire? Q2: If you have a Z88 that works correctly with your PC, what software do you use on the PC end, and what's the pinout of your cable? I'm pretty sure that the Z88 itself is working properly: when using the built-in Import/Export tool to send a file, I see different behaviour when my serial cable is connected compared to disconnected. When disconnected, the transmission appears to work, with a progress meter counting up and then finishing; when connected, nothing happens 'cept a timeout if I wait long enough.
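
    On Q1, a tiny script can give more visibility than Hyperterminal because every byte that arrives gets logged. A minimal sketch using the pyserial package on the PC end (the port name, baud rate and flow-control settings are assumptions and must match whatever the Z88's Import/Export tool is configured for):

        # Minimal serial receiver: log every byte coming from the Z88 to a file,
        # so you can see whether anything at all arrives on the wire.
        # Port, baud rate and flow control must match the Z88 Import/Export settings
        # (9600 8N1 with hardware handshaking is a guess, not a verified value).
        import serial   # pip install pyserial

        port = serial.Serial("COM2", baudrate=9600, bytesize=8, parity="N",
                             stopbits=1, rtscts=True, timeout=5)
        with open("z88_capture.bin", "wb") as out:
            print("Waiting for data; start the export on the Z88...")
            while True:
                data = port.read(256)          # returns b"" on timeout
                if data:
                    out.write(data)
                    print(f"got {len(data)} bytes: {data[:16]!r}...")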

    Read the article

  • Value of Itanium or Sparc over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, currently running on SUSE (potentially migrating to RedHat). Our database is approximately 100GB and performs adequately on our current hardware (x86_64) with approximately 6GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are: Are there substantial benefits to looking at Itanium or SPARC hardware, and are there any drawbacks? Is there a point where one starts to scale out better? What are the long-term support options for Itanium? Given the dominance of x86, would it be safer long term to stick with x86? On average, what would be the performance benefit of implementing an Oracle database on Itanium or SPARC over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first? If anyone can point me towards some solid documentation comparing the platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer. Edit: Added SPARC as an option, as it was previously not considered; however, with the recent Oracle-Sun acquisition it seems very relevant.

    Read the article

  • Wake-on-lan only works so many times

    - by Chance
    I have Wake-on-LAN configured on my Windows XP machine so that the computer will wake up from Standby. Waking the computer from Standby via network traffic seems to work a certain number of times, say 4 or 5, then it stops working. If I restart the computer, the behavior seems to reset, so that I can use WOL a few more times before it stops working again. I use the command "wol" on my other, Linux machine with the appropriate IP address and MAC address of the card. I looked at the network card to see if it had different lights when WOL worked and when it didn't. When it has a solid amber light where the Ethernet cable connects, WOL seems to work. When it has a flashing amber light, WOL does not. It seems that the system almost "shuts off" the card when it falls asleep, but I don't know if this is a function of time or of the number of standby/wake-up cycles. I have a 3Com 3c920 network card. If I look at its properties in Device Manager, I have "Allow this device to bring the computer out of standby" checked. In the Advanced tab I have everything related to RWU (Remote Wake Up) enabled. I also believe I have the appropriate BIOS settings related to Remote Wake-Up, and I have tried both S1 and S3 power configurations in the BIOS. Intuitively, I would think I should uncheck "Allow the computer to turn off this device to save power", but doing so disables the "Allow this device to bring the computer out of standby" option. Does anyone know what is happening here or if there is a way to fix it? I have an integrated network card; would getting one that goes into a slot be better? I am running Windows XP on a Dell Optiplex GX240 with a 3Com 3c920 network card.
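
    For reference, the wol command is sending a standard magic packet: six 0xFF bytes followed by the target MAC repeated 16 times, broadcast over UDP. A minimal equivalent sketch (the MAC and broadcast address are placeholders), handy for ruling out the sending side while debugging:

        # Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the target MAC
        # repeated 16 times, sent as a UDP broadcast. MAC/broadcast are placeholders.
        import socket

        mac = "00:11:22:33:44:55"                       # placeholder target MAC
        payload = b"\xff" * 6 + bytes.fromhex(mac.replace(":", "")) * 16

        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, ("192.168.1.255", 9))     # port 9 (discard) is conventional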

    Read the article

  • Completely unintuitive Apache/PHP memory-freeing behavior

    - by David
    Okay, this one's weird. I have a Turnkey Linux server with a gig of dedicated RAM. It's running WP3.2 with a boatload of plug-ins. It's a new site, so it has very limited traffic (other than search engines, maybe 20 hits a week). Now, for a few weeks, every few days, it would max out on main RAM, start eating up virtual RAM, and then crash. It's had this behavior for a while and I've been trying to figure out which element was causing the crash. Nine days ago, I pointed my external server monitor to this server. I wrote a 5-line HTML file (not PHP and not WP) that the server monitor accesses every minute, to see if the server is up. So, now, nine days later, the server has been rock solid, up all the time, no memory leak at all. I changed NOTHING on the server itself to see this behavior change. Have you EVER seen anything like this? All the server monitor is doing is retrieving a single, super-simple HTML file and all the memory leak problems have gone away. Weird, eh?
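
    If the external monitor ever goes away, the same once-a-minute request is trivial to reproduce locally; a minimal sketch (the URL is a placeholder for the 5-line HTML file mentioned above):

        # Reproduce the monitor's behaviour: fetch a tiny static page once a minute.
        # The URL is a placeholder for the 5-line HTML file mentioned above.
        import time
        import urllib.request

        URL = "http://example.com/keepalive.html"

        while True:
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    resp.read()
            except OSError as exc:
                print("fetch failed:", exc)
            time.sleep(60)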

    Read the article

  • How to restore and boot Windows 7 after upgrading to an SSHD on a SONY VAIO with recovery discs?

    - by Boris Okun
    The original HDD in my Sony VAIO still works, but it has a damaged sector 0 and I was constantly prompted to replace the HDD because of imminent failure. I created recovery discs as instructed and used a USB external HDD for a complete backup (including a Windows image backup). After installing the SSHD and using the recovery discs to restore Windows and boot, I get the Windows welcome screen. Right after that, I get the following message: "Windows couldn't complete the installation. To install Windows on this computer, restart the installation." I have tried repeating the process many times in all kinds of different ways and I still receive the same message. Also, when I tried the other partitioning option offered, I got the message: "Windows Setup could not configure Windows to run on this computer's hardware." All troubleshooting for the hardware and CPU came out solid. I tried to load the image backup from the external drive, but it can't load the driver; the computer doesn't see it. Does anyone have a clue, or has anyone encountered something similar?

    Read the article

  • No boot device found. Press any key to continue

    - by Andrew Banks
    I took out the hard drive from my Dell Latitude E5420 notebook, put in an ADATA S599 solid state drive, and installed Ubuntu 11.10. When I boot, the Dell BIOS splash screen appears with a progress bar, which quickly fills up, and the screen goes black. All of this is like it was before. At this point, the OS splash screen should fade in. Instead, I was dismayed to see simply the following, in white text on a black screen: No boot device found. Press any key to continue After looking around for the Any key (just kidding) I press a key, and the Dell BIOS splash screen appears again with a progress bar, which quickly fills up, and the screen goes black. This time, however, the Ubuntu splash screen shows up, Ubuntu opens up, and all is normal. Every time I shut down, however, this happens again. It's like a game the computer and I play together. The computer has never started up without first saying: No boot device found. Press any key to continue and it has always started up after I press any key to continue. It also starts up fine if I click Restart instead of Shut Down. Thoughts?

    Read the article
