Search Results

Search found 17366 results on 695 pages for 'memory card'.

Page 103 of 695

  • Why doesn't 12.04 recognize my ATI Radeon X300SE graphics card?

    - by Nomad Zero
    So far everything, including the BCM4321 card, is working; however, the OEM ATI Radeon X300SE (RV370) is not recognized. When I go into System Details the card comes up as unknown. When I run lspci the card shows up as:
      01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV370 5B60 [Radeon X300 (PCIE)]
      01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI RV370 [Radeon X300SE]
    I have tried checking Additional Drivers, but the card does not show up as needing a driver. This is causing the system to run warmer, since the processor is doing part of the work of running Unity, and any full-screen videos tend to be rather choppy. Is there an available driver that will fix my issue, and what is the install method? I am not scared to run a terminal, so no worries there; as long as I have instructions, I can find my way. I am running 12.04 LTS, kernel 3.2.0-25-generic-pae.
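
    For reference, here is the kind of terminal check I was planning to run. I am assuming the open-source radeon driver (xserver-xorg-video-ati) is the relevant package for this old card on 12.04, so treat the package names below as a guess rather than a confirmed fix:

      # Confirm which kernel driver is actually bound to the card
      lspci -nnk | grep -A3 VGA

      # Reinstall the open-source radeon stack (package names are my guess for 12.04)
      sudo apt-get install --reinstall xserver-xorg-video-ati libgl1-mesa-dri

      # After a reboot, check whether 3D rendering is still falling back to software
      sudo apt-get install mesa-utils
      glxinfo | grep -i renderer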

    Read the article

  • ATI Radeon HD 5770 Cooling

    - by Murtez
    I have a Radeon HD 5770 manufactured by ATI. In the course of using compressed air to clean it the fan shattered. I put a case fan on the card to cool it, but it's not doing a great job; the temperature goes to nearly 100 sometimes. The card is not overclocked or anything, the PC itself is clean. I looked around for a third party cooling system but the only one I found was the Accelero L2 Pro. It's low on stock and I don't know how great it will be, some review say it may not fit on all 5770 cards. Does anyone know of another one that will work? Help is much appreciated.

    Read the article

  • How do I disable an nVidia graphics card (which has recently died) so that I can boot to the desktop?

    - by shan23
    I have a 3-4 year old laptop (Compaq V3000), which had Windows Vista and Ubuntu 10.10 in a dual-boot configuration. The graphics card inside is an old Nvidia GeForce Go 7200. One fine day the graphics card died (of old age, presumably), leaving me unable to boot into either Vista or Ubuntu 10.10. I solved the problem for Vista (I disabled the Nvidia card after booting into Safe Mode), but I don't know how to do the same for Ubuntu. I can only disable the third-party driver after I boot to the desktop, but since it crashes before that, I'm unable to do so. Can anyone help me disable the graphics card in Ubuntu?
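
    For what it's worth, the approach I was thinking of trying is to remove or blacklist the proprietary driver from the recovery-mode root shell (hold Shift at boot and pick the recovery entry). This is only a sketch, and I'm assuming the packaged driver on 10.10 is called nvidia-current:

      # From the recovery-mode root shell, remount the filesystem read-write
      mount -o remount,rw /

      # Remove the proprietary nVidia driver (assuming the 10.10 package name)
      apt-get remove --purge nvidia-current

      # Or, instead of removing it, blacklist the kernel module
      echo "blacklist nvidia" >> /etc/modprobe.d/blacklist-nvidia.conf

      # Fall back to a generic X configuration (if an xorg.conf exists) and reboot
      mv /etc/X11/xorg.conf /etc/X11/xorg.conf.broken
      reboot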

    Read the article

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port): it's faster than USB 1.0 speeds (1 MB/s) but much slower than USB 2.0 speeds (12 MB/s). Copying 1.8 GB takes me over 10 minutes (it should take under 3 minutes). I have two identical SanDisk Cruzer 8 GB sticks, and I have the same problem with both. I have a Super Talent 32 GB USB SSD in the neighboring port and it works at the expected speeds. The problem I see in the GUI is that the progress bar goes to 90% almost instantly, completes to 100% a little more slowly, and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file; if I wait for it to complete, the copy is successful. Any ideas? dmesg output below:
      [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
      [64059.526419] scsi8 : usb-storage 2-1.2:1.0
      [64060.529071] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.14 PQ: 0 ANSI: 2
      [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
      [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
      [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
      [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
      [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
      [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
      [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
      [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
      [64060.541290] sdd: sdd1
      [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
      [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
      [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk
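
    In case it helps with diagnosis, the way I have been measuring the "real" write speed is to force the timing to include the final flush, so the progress can't run ahead of the actual writes (just a rough measurement; /media/usb and the file names are placeholders for my actual mount point and files):

      # Write a 100 MB test file and include the final flush in the timing
      dd if=/dev/zero of=/media/usb/testfile bs=1M count=100 conv=fdatasync

      # Or time an ordinary copy followed by an explicit flush of the page cache
      time sh -c 'cp bigfile /media/usb/ && sync'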

    Read the article

  • Memory allocation strategy for the vertex buffers (DirectX 10/11)

    - by Alex
    I have the following question. I am writing a CAD system, so I have a 3D scene with many different objects (walls, doors, windows and so on), and the user can add or delete objects. The question is: how should I organise the storage of vertices for all my objects? I could create a vertex buffer for every object, but I think drawing/switching from one buffer to another would carry a performance penalty. Alternatively, I could create one big buffer per object type, but I don't understand how to update such buffers: they are too big to update wholesale (for example, the buffer for all walls), and what do I need to do if I want to delete an object from the middle of a buffer? I have a similar question to this one: http://stackoverflow.com/questions/5515700/how-to-properly-update-vertex-buffers-in-directx-10 Most examples I've found work with very static models; they tend to create a single vertex buffer with their list of points, which is then only manipulated by matrix transformations. I, on the other hand, will be updating the scene very often.

    Read the article

  • How can I configure the embedded wireless card in a Toshiba Satellite Pro 4600 to work under Lubuntu 10.10?

    - by MoLE
    I'm struggling to get the embedded wireless card in this laptop to work. In 7.10 (Gutsy) it worked fine. Now I'm trying to get 10.10 (Maverick) working on it, and am using the Lubuntu flavour due to the low resources of this laptop.
    The hardware appears to be an embedded PCMCIA card. pccardctl ident gives:
      Socket 0:
        product info: "TOSHIBA", "Wireless LAN Card", "Version 01.01", ""
        manfid: 0x0156, 0x0002
        function: 6 (network)
    The default kernel recognises the card and loads the orinoco_cs driver:
      orinoco_cs 0.0: Hardware identity 0005:0002:0001:0002
      orinoco_cs 0.0: Station identity 001f:0001:0006:000e
      orinoco_cs 0.0: Firmware determined as Lucent/Agere 6.14
    Then for some reason the driver isn't happy with this and gives:
      orinoco_cs 0.0: Hardware identity 0005:0002:0001:0002
      orinoco_cs 0.0: Station identity 001f:0002:0009:0030
      orinoco_cs 0.0: Firmware determined as Lucent/Agere 9.48
    All seems OK until I try to associate with my access point using Network Manager:
      eth1: Lucent/Agere firmware doesn't support manual roaming
    repeated about 10 times, then NM gives up. According to the linuxwireless.org wiki page on this driver, this is a known issue, and I quote:
      Known issues - Roaming and WPA_supplicant: "Lucent/Agere firmware doesn't support manual roaming." On the Agere cards, roaming is controlled by the firmware instead of userspace. You will get the above message if userspace attempts to associate with a specific AP rather than by SSID. If you are using wpa_supplicant use ap_scan=2 mode. NetworkManager uses wpa_supplicant, so the above also applies.
    At this point my google-fu has failed me, and I can't find how to configure Network Manager to use the mystical "ap_scan=2" mode via wpa_supplicant. I have tried the following suggested solutions (from Launchpad or the forums):
      - deleting the agere* files from /lib/firmware
      - using wicd instead of Network Manager
      - combining both of the above
      - blacklisting the orinoco_cs driver in an attempt to force use of the hostap_cs driver instead (in case it is a Prism2 card)
    Obviously none of them have worked for me. Any hints on how to perform the suggested workaround above?
    Edit: I have also confirmed the card working on an 8.10 (Intrepid) live CD.
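
    For reference, this is the sort of standalone wpa_supplicant configuration I understand the wiki to be suggesting. The SSID and passphrase are placeholders, it bypasses Network Manager entirely rather than configuring it, and I am not even sure this old Agere firmware supports WPA at all, so treat the security settings as placeholders too:

      # /etc/wpa_supplicant.conf -- minimal sketch; ap_scan=2 means "associate by SSID,
      # let the firmware pick the AP" instead of associating with a specific BSSID
      ap_scan=2

      network={
          ssid="MyAccessPoint"      # placeholder SSID
          proto=WPA                 # ap_scan=2 expects explicit, single-option
          key_mgmt=WPA-PSK          # security settings in each network block
          pairwise=TKIP
          group=TKIP
          psk="my-passphrase"       # placeholder passphrase
      }

      # Run it by hand against the orinoco interface (eth1 here), then get a lease
      sudo wpa_supplicant -B -i eth1 -c /etc/wpa_supplicant.conf -D wext
      sudo dhclient eth1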

    Read the article

  • Mini DisplayPort and DVI conversion

    - by Kairan
    My video card is an AMD HD 6950. It comes with 1x HDMI, 2x DVI, and 2x Mini DisplayPort outputs. I have always used VGA and DVI only, and I already have the HDMI connected. I want to hook up two more displays: one of them can take VGA only, and the other is an LCD TV, so it has VGA, HDMI and component inputs. I am wondering:
    1) What is the best output port to use for the two extra monitors - the Mini DisplayPort or the DVI?
    2) I need to run a long cable, maybe 15-25 feet, which might change the answer to question 1. Is a long (20+ ft) HDMI cable going to be better than a 20 ft DVI or Mini DisplayPort cable?

    Read the article

  • Multiple EyeFinity Display groups

    - by Shinrai
    Is it possible with an EyeFinity enabled card to make multiple display groups at once? I was playing with a FirePro 2460 and while a 4x1 or 2x2 display group works quite nicely, if I make a 2x1 display group and then select one of the other displays to try to make a second 2x1 display group, it disables the first one. Is there any way to circumvent this behavior and set up two separate spans on the same card? Additionally, can you set up distinct display groups if they're on different cards? I will have the opportunity to test several of these cards in one machine very shortly, but I'm curious if anyone has any experience. EDIT: I can confirm that you can make multiple spans on multiple cards (as long as they don't cross cards, obviously) (If the answers are different for FirePro/FireMV cards and Radeon cards, that is helpful and relevant knowledge - I doubt it, though.)

    Read the article

  • System won't boot: Gigabyte HD 7790 1GB OC GPU issue or Corsair VS550 PSU issue?

    - by MGOwen
    Installed a new GPU, and the PC won't boot. Turn it on and:
    - No monitor signal at all (tried HDMI and VGA via DVI, on 2 working monitors).
    - CPU and GPU fans DO spin, but no system beeps and no sounds from the drives (they might make a small noise in the first second or so, but there's definitely no OS loading or anything like that).
    - If I hit the power-off button it turns off immediately (no holding it down for 3 seconds like usual).
    If I put my old HD 5670 GPU back in, everything works fine. But (plot twist!) the card is not totally dead: my friend put it in his PC, and it works fine (he even played a game for 15 minutes, no issues). He has a Corsair TX850 850W and a Gigabyte MB. So my main theory is that the GPU isn't getting enough power from the PSU. But is it:
    - Bad PSU? Seems unlikely, since it works fine with the other GPU. Also, the PSU is brand new and 550W (single 42A/504W 12V rail) - overkill for this GPU. Corsair is a decent brand, but maybe just mine is faulty?
    - Bad GPU? Could it be drawing more power than it should be, somehow, or something? Supposedly the HD 7790 needs only 21A/75W on the 12V rail, though this one is factory overclocked a bit... but should that triple the power requirement?
    - Something else? Could there be a motherboard incompatibility somehow? Both MB and GPU are less than a year old and PCI Express 3.0 x16.
    Things I've tried:
    - Re-seating the video card.
    - Testing the PC with the old GPU (works fine, same PCIe slot).
    - Checking AMD's stated amp/watt requirements of a 7790 against my PSU (see above): my PSU can output twice the amps (single rail) and 5x the wattage a 7790 needs.
    Here are the full specs:
    - Gigabyte HD 7790 1GB OC GPU
    - Corsair VS550 550W PSU
    - 4GB RAM
    - AsRock H61M U3S3 motherboard
    - i3-2100
    - 500GB SATA HDD (2007-ish)
    - blu-ray drive (new)
    - PCI 802.11g card
    Edit: A motherboard BIOS update seems to have fixed it. (If anyone has the same problem and it doesn't work, comment here.)

    Read the article

  • NK2 file doesn't keep email addresses in memory

    - by r0ca
    When I send an email to someone outside the firm and type only the first letters of the contact's name, I normally get the auto-suggest list of addresses I have already sent to. For a few days now, though, new addresses are no longer being remembered by Outlook (the NK2 file). I see that the file is only 2 KB, while on my old machine it's almost 200 KB (so a lot more email addresses kept in memory). Should I just rebuild the Outlook profile, or the whole Windows profile? Would a simple Outlook reinstall be enough, or do I need to set up a new PC?

    Read the article

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here - please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/) and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram).
    Running the same job - high writes and high reads on about 10,000,000 records in a single collection - on my 4-processor, 4GB RAM MacBook and an 8-core Ubuntu box with 64GB RAM, I saw dramatically WORSE read performance on the Linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do.
    The Linux box's default settings were as follows:
      vm.swappiness = 60
      vm.dirty_background_ratio = 10
      vm.dirty_ratio = 20
      vm.dirty_expire_centisecs = 3000
      vm.dirty_writeback_centisecs = 500
    I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MySQL, etc.), experimented, and adjusted as below:
      vm.swappiness = 10
      vm.dirty_background_ratio = 5
      vm.dirty_ratio = 5
      vm.dirty_writeback_centisecs = 250
      vm.dirty_expire_centisecs = 500
    I saw some immediate apparent improvement in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then I REBUILT the collection from an available data source - and suddenly I can read at 1 ms or less per record WHILE doing the write job! So the question is really two-fold:
    1) What are appropriate VM settings for MongoDB on Linux?
    2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road?
    Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated... thanks! -j
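
    For completeness, this is how I have been applying and persisting the tweaked values while experimenting - plain sysctl, nothing MongoDB-specific, and the numbers are just the ones I guessed at above:

      # Apply at runtime (lost on reboot)
      sudo sysctl -w vm.swappiness=10
      sudo sysctl -w vm.dirty_background_ratio=5
      sudo sysctl -w vm.dirty_ratio=5
      sudo sysctl -w vm.dirty_writeback_centisecs=250
      sudo sysctl -w vm.dirty_expire_centisecs=500

      # Persist by adding the same lines to /etc/sysctl.conf, then reload
      sudo sysctl -p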

    Read the article

  • Fail2ban memory usage

    - by ltsstar
    Since my server is under a sustained DNS amplification attack (DDoS), I configured fail2ban, and initially my outgoing traffic dropped markedly. However, after a few hours (usually 10+), fail2ban is using about 75% of RAM and seems to have crashed in some way, because the outgoing traffic rises again immediately afterwards. When I searched the web for the memory problem, I found other people complaining about high fail2ban memory usage as well, but the recommended solution - inserting an ulimit command into a fail2ban config file - did not change much for me.
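
    For reference, this is the ulimit workaround I tried, roughly as I found it recommended - the exact file path and value are my assumptions and may differ on other setups:

      # In the fail2ban defaults file (e.g. /etc/default/fail2ban on Debian-style
      # systems): shrink the per-thread stack size so each thread costs less memory
      ulimit -s 256

      # then restart the service
      sudo service fail2ban restart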

    Read the article

  • Is PCI Express x4 faster or slower than a standard PCI slot for graphics cards?

    - by Stephen R
    I am looking at potential motherboards for a computer I want to build and ran into this conundrum. The motherboard has two PCI Express slots that physically accept x16 cards. The catch is that only one of them operates at x16; the other operates at only x4. My question is: would it be faster to buy a PCI Express graphics card and install it in the x4 PCI Express slot, or would it be better to buy a standard PCI graphics card and install it in one of the available PCI slots?

    Read the article

  • The sound freezes while listening to music

    - by Scott
    I have bought a sound card - a Sound Blaster X-Fi Surround 5.1 Pro - and while I'm listening to music, after a few minutes or even hours (it happens randomly) the sound disappears and I have to go to Playback Devices - Advanced and change the Default Format (sampling frequency), switching from 16 bit, 48000 Hz (DVD Quality) to any other entry in the list (16 bit, 96000 Hz or 24 bit, 48000 Hz or 24 bit, 96000 Hz). I have tried all of the options above, but with every one of them the sound eventually freezes while listening to music. It is getting annoying, and I hope it's a settings problem and not my sound card. I installed the drivers from the bundled CD and I'm running Windows 7 HP, 64-bit. I would really appreciate any help.

    Read the article

  • Memory upgrade for Toshiba P20 S203?

    - by pjc50
    I've had an offer of a 256MB PC2700 SODIMM, apparently from an iBook, to upgrade a Toshiba laptop. Is that suitable? I've seen "DDR 266 SODIMM" on sale as the official upgrade memory. How in general should I work this out? I've long since lost track of what memory goes with what system.

    Read the article

  • Full computer freeze with audio stuttering after playing games for a period of time

    - by Wes
    I've been having a problem with my computer freezing completely when playing games like LA Noire or SW:TOR (yay early access!). Basically, what happens is that I play for around an hour or so (depending on the game), and when the freeze happens the entire computer locks up and any audio that was playing glitches out and stutters broken-record style (only much shorter. Very techno). I think it might be heat related and thought it might be my video card overheating, so I have been setting my video card (Nvidia GeForce GTX 260 Core 216) fan to the highest setting, but that has little to no effect. Now I'm beginning to think it's either my FSB or CPU overheating. Can anyone provide some insight or similar experiences? I'm really at a loss and don't want to damage my rig beyond repair.

    Read the article

  • Enabling Squid delay pool eats up the entire memory

    - by Supratik
    I am using "squid-3.1.8-1.el5" in my CentOS 5 32 bit system. In normal condition Squid uses 85m - 90m, but when I enable the delay pool parameters the memory usage suddenly rise up 2GB. The memory keeps on increasing until the system is out of resource. The following are my delay pool settings: delay_pools 1 delay_class 1 1 delay_access 1 allow all delay_parameters 1 192000/192000 Is there anything I am missing here or is it a bug with Squid ?

    Read the article

  • Building my own computer

    - by There is nothing we can do
    I'm planning to build my own computer, but I do not have enough cash to buy all the components I need in one go. I want to ask: if I buy a motherboard which is compatible with any i7 processor and with an Nvidia GTX 780 graphics card, does that mean this motherboard will also be compatible with the Intel processors that will be released next year? And the same for graphics cards? The point is that I'd like to avoid a situation where I buy a motherboard now, and in a couple of months there is a new graphics card/processor which is not compatible with my motherboard. Or maybe I should start somewhere else completely?

    Read the article

  • Shared memory multiprocesses

    - by poly
    I'm building a multi-process application and I need to allocate session IDs. A session ID is 32 bits and of course it can't be used twice during its lifetime. I'm currently using a DB that stores all the IDs in a table, and I do the following. The ID table is (int key, char used(1)), where used is 1 for used and 0 for free. To allocate a key for a session:
      1. lock the table
      2. get one unused key
      3. update its used field to used
      4. unlock
    After the session is finished, the process frees the key with:
      1. lock the table
      2. update the key's used field to not used
      3. unlock
    I'm really wondering whether this is a good/fast implementation - and please note it's a multi-process application.

    Read the article

  • Multi-monitor resolution and position settings lost after reboot

    - by SoftDeveloper
    I've had two 1280x1024 monitors running for years on an nVidia 8800GT card with no problems. I've now replaced one monitor with a new 2560x1440 one. The card seems to support both fine; however, every time I reboot, the resolutions and monitor positions revert to the old settings. I've tried upgrading, downgrading, stripping out and reinstalling many versions of the nVidia drivers, to no avail. Logging in as another user doesn't help - same problem. Booting into another OS (Win7 64) works OK, so it is just this OS installation. During boot-up everything looks fine (i.e. the native 2560x1440 resolution) until the nVidia control panel or something is loaded, which flips it back into the old mode. I have no old saved nVidia profiles, and I can't find anything in the registry relating to these old settings. It's driving me crazy having to set resolutions and realign monitors on every reboot! Can anybody help?

    Read the article

  • Is the GPU active when there are no monitors attached?

    - by Mixer
    Does the GPU render anything when there is no monitor plugged in? Today I turned my old PC into some kind of a "server", which means that I want it running 24/7. I don't need any display (I will operate it through ssh), so once everything was set up I removed my monitor. An hour later I checked my graphics card's cooling system and it was still hot. My graphics card is a GeForce 8600 (with a DVI connector) and the OS is Debian Linux. What is the best solution in this situation (a standalone server) if the GPU is active and I don't want it to waste power?

    Read the article

  • Apache with mod_php high memory utilization

    - by Raj
    We have a Magento application deployed on Apache with mod_php and MySQL. I have observed that sometimes the Apache server starts consuming a lot of memory, which causes swapping and results in high load on the server. Whenever there is high load on the Apache server, the Apache processes that are causing the high load are in sleep state on the MySQL side and in CLOSE_WAIT state on the client side. Any help in resolving this issue is appreciated.
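
    In case it's useful, this is the rough check I run to see how much memory each Apache child actually uses, to sanity-check MaxClients against available RAM (I'm assuming the process is named httpd, as on a typical RHEL-style mod_php setup; it may be apache2 instead):

      # Average and total resident memory of all Apache children, in MB
      ps -o rss= -C httpd | awk '{n++; sum+=$1} END {print "children:", n, " avg MB:", sum/n/1024, " total MB:", sum/1024}'

      # Watch swap activity while the load spike is happening
      vmstat 5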

    Read the article

  • ATI HD 5970 display

    - by user55406
    Hello everyone. I'm a little worried about my graphics card. Before I get into that, I will tell you my system specs: an Intel DX58SO mainboard, 6 GB Corsair XMS DDR3 RAM, an Intel i7 960 CPU, and an ATI HD 5970, in a Coolermaster HAF92 case. My OS is Vista x64. Here is my issue: when I type dxdiag into the Start search box and look at the Display tab, I see 716 MB under "Approx. Total Memory". The ATI HD 5970 is a 2 GB graphics card. Am I being stupid, or is there an issue?

    Read the article
