Search Results

Search found 5942 results on 238 pages for 'total'.


  • Understanding the memory consumption on iPhone

    - by zoul
    Hello! I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with the error code 101. I tried really hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect.

    I ran the application with the Memory Monitor, Object Alloc, Leaks and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, the Object Alloc settles around 7 MB, Leaks show two or three leaks a few bytes in size, the Gart Object Size is about 5 MB and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went – when I dig into the Object Allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB. I am not aware of allocating anything else that would be worth mentioning, and the Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)

    Update: I really tried to find where I could allocate so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forgot about, they should still show up in the Object Allocations, shouldn't they? I've already tried the usual suspects, i.e. the UIImage with its caching, but that did not help. Is there a way to track memory usage "debugger-style", line by line, watching each statement's impact on memory usage?

    What I have found so far:

    • I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption really is that high. My fault.

    • I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but the Memory Monitor can't tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from "my" memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure.

    • I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further out, but I could not find any hard numbers. The consensus is that 25–30 MB is the ceiling.

    • When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching websites). When the free memory as shown in the Memory Monitor hits zero, the system starts killing.

    I had to bite the bullet and rewrite some parts of the code to be more memory-efficient, but I am probably still pushing it.
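
    To make the question more concrete, this is the kind of per-statement probe I have in mind: a minimal sketch using the Mach task_info call, on the assumption that resident_size is the same figure Memory Monitor reports:

        #include <mach/mach.h>
        #include <stdio.h>

        /* Resident memory of the current task in bytes, or 0 on failure. */
        static vm_size_t resident_memory(void) {
            struct task_basic_info info;
            mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;
            if (task_info(mach_task_self(), TASK_BASIC_INFO,
                          (task_info_t)&info, &count) != KERN_SUCCESS)
                return 0;
            return info.resident_size;
        }

        /* usage: log around a suspect statement, e.g.
           printf("before texture load: %lu\n", (unsigned long)resident_memory()); */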

    Read the article

  • Fresh Red Hat Enterprise Linux fails to install httpd using yum

    - by Julian
    I'm trying to install a LAMP stack on a fresh Red Hat server, but yum is misbehaving. Being Linux-illiterate, I'm at a loss.

        $ yum install httpd
        Loaded plugins: security
        Setting up Install Process
        No package httpd available.
        Nothing to do

    My yum config:

        $ cat /etc/yum.conf
        [main]
        cachedir=/var/cache/yum
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        distroverpkg=redhat-release
        tolerant=1
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        # Note: yum-RHN-plugin doesn't honor this.
        metadata_expire=1h
        # Default.
        # installonly_limit = 3
        # PUT YOUR REPOS HERE OR IN separate files named file.repo
        # in /etc/yum.repos.d

    Other stuff in the yum.repos.d dir:

        $ ls -lah /etc/yum.repos.d/
        total 12K
        drwxr-xr-x  2 root root 4.0K Feb  4 01:15 .
        drwxr-xr-x 59 root root 4.0K Feb  4 01:28 ..
        -rw-r--r--  1 root root  561 Mar 10  2010 rhel-debuginfo.repo

    What could be going on? I thought "out of the box" RHEL 5.5 would be friendlier :)
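
    If it helps, this is what I was planning to try next. The nearly empty repos directory makes me suspect the box was never registered with RHN, though that is just a guess on my part:

        yum repolist all     # probably shows no enabled repositories
        rhn_register         # register so the RHEL channels become available
        yum install httpd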

    Read the article

  • Printing a dynamic sheet as one document

    - by Sux2Lose
    I have a spreadsheet structured as follows:

    • Summary section at the top
    • Detail section on the bottom

    The summary section summarizes the detail section, which is filtered using auto filters. There are ten products that all need to be printed individually, but I want the page footer to show the overall page position across all the print jobs and the total number of pages. That is probably not clear, so for example: if I print the two-page Product A view, it will print page 1 of 2 and 2 of 2. If I print the one-page Product B, it will show page 1 of 1. What I want is to print both and have Product A show Page 1 of 3 and Page 2 of 3, and Product B be Page 3 of 3. Is there any way to accomplish this?

    Read the article

  • What does the "Max Memory Size" on the new Intel Core i3 / i5 / i7 CPUs mean?

    - by Josh
    I just noticed in the specs of the new Intel Core i-series processors that there is a "Max Memory Size" that is usually pretty small -- anywhere from 8GB to 24GB. See here: http://ark.intel.com/Product.aspx?id=41316 Core 2-based motherboards were just starting to roll out support for 32GB and greater memory sizes. Anyone have any idea what the Max Memory Size indicates? Is this the total limitation of the on-chip memory controller? Limitation per channel? Limitation per stick (e.g. density??)? Thinking of building a decent machine that needs lots of RAM, so I'm looking at the i7 860.

    Read the article

  • Grow/shrink a ZFS RAIDZ

    - by c2h2
    I'm going to build a FreeNAS server, and would like to make sure of what I can do with such a magical and advanced filesystem as ZFS. Say I have 5 × 3 TB disks in RAIDZ (12 TB storage in total), and I now try to add another 2 × 3 TB disks to this existing array. Q: Am I able to do it without affecting or touching any existing data on the RAIDZ volume? What about taking away an existing disk? Say, taking 1 disk out of the 5, assuming only a very small portion of data exists on the RAIDZ.
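
    For reference, this is what I think the commands would look like if "growing" can only mean adding a whole new vdev (device names are made up and 'tank' is a placeholder pool name):

        zpool status tank              # the existing 5-disk raidz1 vdev
        # disks cannot be added to an existing raidz vdev; the two new
        # drives can only join the pool as a separate vdev striped
        # alongside it (-f because its redundancy differs from the raidz):
        zpool add -f tank mirror da5 da6
        # shrinking is not supported at all: zpool remove only handles
        # hot spares, cache and log devices, not raidz data disks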

    Read the article

  • How do I send traffic from my Mac's wifi to my VPN client?

    - by Heath Borders
    I need to connect my Android to a Juniper VPN. Unfortunately, Juniper doesn't support Android on our VPN version. We've already put in a feature request for it, but we have no idea how long it will take to be completed.

    Right now, I connect to the Juniper VPN with a Juniper Mac OS X VPN client that uses Java to install kernel extensions to start and stop the VPN. Thus, I can't use the Network panel in System Preferences to create a VPN device, which means it won't show up in the Sharing panel's Internet Sharing "Share your connection from:" menu, as suggested here.

    I used newproc.d to see what /usr/libexec/InternetSharing did when it ran, and it runs the following processes:

        2013 Nov  1 00:26:54  5565 <1>    64b  /usr/libexec/launchdadd
        2013 Nov  1 00:26:55  5566 <1>    64b  /usr/libexec/InternetSharing
        2013 Nov  1 00:26:56  5568 <5566> 64b  natpmpd -d -y bridge100 en0
        2013 Nov  1 00:26:56  5569 <1>    64b  /usr/libexec/pfd -d
        2013 Nov  1 00:26:56  5567 <5566> 64b  bootpd -d -P

    My Juniper VPN client creates the following devices (output of ifconfig):

        jnc0: flags=841<UP,RUNNING,SIMPLEX> mtu 1400
                inet 10.61.9.61 netmask 0xffffffff
                open (pid 920)
        jnc1: flags=841<UP,RUNNING,SIMPLEX> mtu 1450
                closed

    So, it seems like I should just be able to do this and have everything work:

        sudo killall -9 natpmpd
        sudo /usr/libexec/natpmpd -y bridge100 jnc0

    My Android connected fine and could hit public internet sites, but it couldn't hit private VPN sites. I assume this is because I need to change the routes that /usr/libexec/InternetSharing sets up.

    This is the output from sudo pfctl -s all before starting Internet Sharing:

        No ALTQ support in kernel
        ALTQ related functions disabled
        TRANSLATION RULES:
        nat-anchor "com.apple/*" all
        rdr-anchor "com.apple/*" all
        FILTER RULES:
        scrub-anchor "com.apple/*" all fragment reassemble
        anchor "com.apple/*" all
        DUMMYNET RULES:
        dummynet-anchor "com.apple/*" all
        INFO:
        Status: Disabled for 0 days 00:11:02             Debug: Urgent
        State Table                          Total             Rate
          current entries                        0
          searches                           22875             34.6/s
          inserts                             1558              2.4/s
          removals                            1558              2.4/s
        Counters
          match                               2005              3.0/s
          bad-offset                             0              0.0/s
          fragment                               0              0.0/s
          short                                  0              0.0/s
          normalize                              0              0.0/s
          memory                                 0              0.0/s
          bad-timestamp                          0              0.0/s
          congestion                             0              0.0/s
          ip-option                             12              0.0/s
          proto-cksum                            0              0.0/s
          state-mismatch                         1              0.0/s
          state-insert                           0              0.0/s
          state-limit                            0              0.0/s
          src-limit                              0              0.0/s
          synproxy                               0              0.0/s
          dummynet                               0              0.0/s
        TIMEOUTS:
        tcp.first 120s  tcp.opening 30s  tcp.established 86400s  tcp.closing 900s
        tcp.finwait 45s  tcp.closed 90s  tcp.tsdiff 60s  udp.first 60s
        udp.single 30s  udp.multiple 120s  icmp.first 20s  icmp.error 10s
        grev1.first 120s  grev1.initiating 30s  grev1.estblished 1800s
        esp.first 120s  esp.estblished 900s  other.first 60s  other.single 30s
        other.multiple 120s  frag 30s  interval 10s
        adaptive.start 6000 states  adaptive.end 12000 states  src.track 0s
        LIMITS:
        states        hard limit  10000
        app-states    hard limit  10000
        src-nodes     hard limit  10000
        frags         hard limit   5000
        tables        hard limit   1000
        table-entries hard limit 200000
        OS FINGERPRINTS: 696 fingerprints loaded

    This is the output from sudo pfctl -s all after starting Internet Sharing:

        No ALTQ support in kernel
        ALTQ related functions disabled
        TRANSLATION RULES:
        nat-anchor "com.apple/*" all
        nat-anchor "com.apple.internet-sharing" all
        rdr-anchor "com.apple/*" all
        rdr-anchor "com.apple.internet-sharing" all
        FILTER RULES:
        scrub-anchor "com.apple/*" all fragment reassemble
        scrub-anchor "com.apple.internet-sharing" all fragment reassemble
        anchor "com.apple/*" all
        anchor "com.apple.internet-sharing" all
        DUMMYNET RULES:
        dummynet-anchor "com.apple/*" all
        STATES:
        ALL tcp 10.0.1.32:50593 -> 74.125.225.113:443   SYN_SENT:CLOSED
        ALL udp 10.0.1.32:61534 -> 10.0.1.1:53          SINGLE:NO_TRAFFIC
        ALL udp 10.0.1.32:55433 -> 10.0.1.1:53          SINGLE:NO_TRAFFIC
        ALL udp 10.0.1.32:64041 -> 10.0.1.1:53          SINGLE:NO_TRAFFIC
        ALL tcp 10.0.1.32:50619 -> 74.125.225.131:443   SYN_SENT:CLOSED
        INFO:
        Status: Enabled for 0 days 00:00:01              Debug: Urgent
        State Table                          Total             Rate
          current entries                        5
          searches                           22886          22886.0/s
          inserts                             1563           1563.0/s
          removals                            1558           1558.0/s
        Counters
          match                               2010           2010.0/s
          bad-offset                             0              0.0/s
          fragment                               0              0.0/s
          short                                  0              0.0/s
          normalize                              0              0.0/s
          memory                                 0              0.0/s
          bad-timestamp                          0              0.0/s
          congestion                             0              0.0/s
          ip-option                             12             12.0/s
          proto-cksum                            0              0.0/s
          state-mismatch                         1              1.0/s
          state-insert                           0              0.0/s
          state-limit                            0              0.0/s
          src-limit                              0              0.0/s
          synproxy                               0              0.0/s
          dummynet                               0              0.0/s
        TIMEOUTS:
        tcp.first 120s  tcp.opening 30s  tcp.established 86400s  tcp.closing 900s
        tcp.finwait 45s  tcp.closed 90s  tcp.tsdiff 60s  udp.first 60s
        udp.single 30s  udp.multiple 120s  icmp.first 20s  icmp.error 10s
        grev1.first 120s  grev1.initiating 30s  grev1.estblished 1800s
        esp.first 120s  esp.estblished 900s  other.first 60s  other.single 30s
        other.multiple 120s  frag 30s  interval 10s
        adaptive.start 6000 states  adaptive.end 12000 states  src.track 0s
        LIMITS:
        states        hard limit  10000
        app-states    hard limit  10000
        src-nodes     hard limit  10000
        frags         hard limit   5000
        tables        hard limit   1000
        table-entries hard limit 200000
        TABLES:
        OS FINGERPRINTS: 696 fingerprints loaded

    It looks like I need to change the pf settings that /usr/libexec/InternetSharing set up, but I have no idea how to do that.
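
    For what it's worth, this is the direction I was going to poke at next: inspecting and overriding the Internet Sharing anchor directly. The nat rule is a guess on my part (interface names taken from the ifconfig output above), not something I have confirmed works:

        # see what InternetSharing actually loaded into its anchor:
        sudo pfctl -a 'com.apple.internet-sharing' -s nat
        sudo pfctl -a 'com.apple.internet-sharing' -s rules
        # speculative override: NAT the sharing bridge out through the VPN device
        echo 'nat on jnc0 from bridge100:network to any -> (jnc0)' \
            | sudo pfctl -a com.apple.internet-sharing -f -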

    Read the article

  • OSX: Python packages fail to install, error message "/usr/local/bin: File Exists"

    - by kylehotchkiss
    I keep trying to install Django and other Python packages, and I keep getting the exact same error message:

        Installing django-admin.py script to /usr/local/bin
        error: /usr/local/bin: File exists

    So I look to make sure that my /usr/local folder is okay. At first glance it appears okay, until I try cd-ing into bin. It says it can't, because it's not a directory. Peculiar, I thought, so then I tried an ls:

        Anchorage:local khotchkiss$ ls -a -l
        total 26168
        drwxr-xr-x   6 root wheel      204 Dec 26 20:18 .
        drwxr-xr-x@ 14 root wheel      476 Feb 24 12:54 ..
        -rwxr-xr-x@  1 root wheel 13395080 Oct 22 23:04 bin
        drwxr-xr-x   8 root wheel      272 Dec 26 20:18 git
        drwxr-xr-x   4 root wheel      136 Dec 18 11:31 include
        drwxr-xr-x  12 root wheel      408 Dec 18 11:31 lib

    I haven't a clue what the 'bin' is, why it's so large, or why it's preventing me from installing Python packages. Any clue?
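
    My next step, unless someone warns me off it, would be to identify the stray file and put a real directory back (the rename target is arbitrary):

        file /usr/local/bin                 # what is this 13 MB file, anyway?
        sudo mv /usr/local/bin /usr/local/bin.old
        sudo mkdir /usr/local/bin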

    Read the article

  • Is RAID 0 or JBOD better for home media server?

    - by Donald Hughes
    I have an external two-bay drive enclosure (the OWC Mercury Elite-AL Pro) connected to a Mac Mini (my home media server) over FireWire 800. I'm streaming media to other computers in the house over wired gigabit. I have two 1.5 TB drives that I'm using independently right now. The media is on one, and I'm mirroring the files to the other drive at night as a backup. But as I approach filling up the drive, I'm wanting to span those two drives together to give me a total of about 3 TB, and then buy another drive for backups. The external enclosure supports both RAID 0 and JBOD, but I'm not clear on which would be better in this situation. Would RAID 0 provide any performance improvements over JBOD for streaming video (possibly several streams at once)? How does each affect the MTBF of the drives? In general, should I choose RAID 0, JBOD, or keep them independent?

    Read the article

  • I started getting a weird message "Encrypting file system - Back up your file encryption key"

    - by user22559
    Hello. I started getting a strange message when I start my computer. An icon appears in the system tray, and a popup tells me "Encrypting file system - Back up your file encryption key". I know what EFS is, but I don't use it. To my knowledge, I don't have any encrypted files on my partition. I have searched all the partitions using Total Commander for files that have the "encrypted" attribute, but I found nothing. So I don't have any encrypted files. Does anyone know what I did to get this message?
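
    One more check I know of, using the cipher tool built into Windows; as far as I know, /U /N lists every encrypted file on the drive without modifying anything:

        cipher /U /N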

    Read the article

  • Cannot resize OS X partition

    - by David Pearce
    I am trying to resize my existing Mac OS Extended partition on my MacBook to install Windows 7 (using steps similar to these), but whenever I go to apply the changes, I get this error:

        Partition failed
        Partition failed with the error:
        The partition cannot be resized. Try reducing the amount of change
        in the size of the partition.

    The total capacity of the hard drive in question is 260 GB, with the entirety being taken up by the OS X boot partition. I am aiming to shrink that partition down to 60 GB. How can I fix this problem? I have been reducing the amount of change by 10 GB each attempt, but it still is not working. I assume the problem is that there is not a large amount of contiguous space on the device. Is there some way I can do a manual defrag that would rectify this problem?
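
    In case the GUI is the problem, there is a command-line equivalent I was going to try. The disk0s2 here is a guess at the boot volume's identifier; diskutil list shows the real one:

        diskutil list
        sudo diskutil resizeVolume disk0s2 60G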

    Read the article

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done:

    First, I tested out s3curl.pl with a set of credentials without a hitch:

        $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/ | xmllint --format -
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100   884    0   884    0     0   4399      0 --:--:-- --:--:-- --:--:--  5703
        <?xml version="1.0" encoding="UTF-8"?>
        <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
          <Name>testbucket-0</Name>
          <Prefix/>
          <Marker/>
          <MaxKeys>1000</MaxKeys>
          <IsTruncated>false</IsTruncated>
          <Contents>
            <Key>file_1</Key>
            <LastModified>2012-03-22T17:08:17.000Z</LastModified>
            <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag>
            <Size>400000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
          <Contents>
            <Key>file_2</Key>
            <LastModified>2012-03-22T17:08:19.000Z</LastModified>
            <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag>
            <Size>600000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
        </ListBucketResult>

    Then, I followed s3curl.pl's usage instructions:

        $ s3curl.pl --help
        Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL]
         options:
          --key SecretAccessKey       id/key are AWSAcessKeyId and Secret (unsafe)
          --contentType text/plain    set content-type header
          --acl public-read           use a 'canned' ACL (x-amz-acl header)
          --contentMd5 content_md5    add x-amz-content-md5 header
          --put <filename>            PUT request (from the provided local file)
          --post [<filename>]         POST request (optional local file)
          --copySrc bucket/key        Copy from this source key
          --createBucket [<region>]   create-bucket with optional location constraint
          --head                      HEAD request
          --debug                     enable debug logging
         common curl options:
          -H 'x-amz-acl: public-read' another way of using canned ACLs
          -v                          verbose logging

    Then, I tried the following, and always got back an error. I would appreciate it very much if someone could point out where I made a mistake.

        $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete
        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes><RequestId>707FBE0EB4A571A8</RequestId><HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId><SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided><StringToSign>POST

        Thu, 05 Apr 2012 00:50:08 +0000

    The file multi_delete.xml contains the following:

        $ cat multi_delete.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <Delete>
          <Quiet>true</Quiet>
          <Object>
            <Key>file_1</Key>
            <VersionId> </VersionId>>
          </Object>
          <Object>
            <Key>file_2</Key>
            <VersionId> </VersionId>
          </Object>
        </Delete>

    Thanks for any help! --Zack
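
    One thing I notice while re-reading: the StringToSign has empty Content-MD5 and Content-Type lines, and the Multi-Object Delete API requires a Content-MD5 header, so my guess is the request needs --contentMd5 (and --contentType) so that s3curl signs what curl actually sends. An untested sketch:

        md5=$(openssl dgst -md5 -binary multi_delete.xml | openssl enc -base64)
        s3curl.pl --id=s3 --post multi_delete.xml --contentMd5 "$md5" \
            --contentType text/xml -- "http://testbucket-0.s3.amazonaws.com/?delete"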

    Read the article

  • CentOS only detecting 50% of RAM

    - by Devator
    I have 16 GB of RAM in my machine. Before, free -m showed the normal 16 GB; however, now (after a reboot) it only detects 8 GB. Is one RAM module damaged?

    grep -i memory /var/log/dmesg outputs:

        Memory: 15621184k/16017200k available (2535k kernel code, 387120k reserved, 1748k data, 196k init)

    (which looks like 16 GB to me). free -m outputs:

                     total       used       free     shared    buffers     cached
        Mem:          7484       7415         68          0       6104        524
        -/+ buffers/cache:        786       6697
        Swap:         2055          0       2054

    Anything I might be missing? Thanks in advance.
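
    A check that might distinguish a dead module from a software issue; dmidecode reads the DMI tables, where an empty bank usually shows up (and note /var/log/dmesg can be stale from an earlier boot):

        dmidecode --type memory | grep -i size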

    Read the article

  • Overhead of Perfmon -> direct to SQL Database

    - by StuartC
    Hi all,

    First up, I'm a total newb at performance monitoring. I'm looking to set up central performance monitoring of some boxes:

    • 2K3 TS (monitor general OS perf & session-specific counters)
    • 2K8 R2 (XenApp 6 = monitor general OS perf & session-specific counters)
    • File server (standard file I/O)

    My ultimate aim is to get as many counters and as much information as possible, without impacting the clients' session experience at all, including counters specific to their sessions. I was thinking of logging directly to SQL on another server, instead of the two-part process of a blg file and then relog to SQL. Would that work OK? Does anyone know the overhead of going straight to SQL from the client? I've searched around a bit, but have found so much information it can be overwhelming. Thanks
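
    For concreteness, this is roughly how I picture the direct-to-SQL collector being defined with the built-in logman tool. The DSN name and counter list are made up for the example, not a tested configuration:

        logman create counter TSPerf -f sql -o SQL:PerfDB!TSPerf ^
            -c "\Processor(_Total)\% Processor Time" "\Terminal Services\Active Sessions" ^
            -si 00:00:15
        logman start TSPerf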

    Read the article

  • KVM memory changes via virsh not propagating to VM

    - by kevintmckay
    Hi, I just started using KVM on RHEL 6, and after creating a VM I tried to increase its memory, but the changes I made in the XML file do not propagate to the VM, even after bouncing the VM and restarting libvirt.

        [root@kvm01 qemu]# virsh dominfo dev-kvm01
        Id:             2
        Name:           dev-kvm01
        UUID:           9b2bf581-2807-3116-b176-60e9c0559943
        OS Type:        hvm
        State:          running
        CPU(s):         2
        CPU time:       1975.3s
        Max memory:     7864320 kB
        Used memory:    7864320 kB
        Persistent:     yes
        Autostart:      disable
        Security model: selinux
        Security DOI:   0
        Security label: system_u:system_r:svirt_t:s0:c47,c760 (enforcing)

        [iknowmed@dev-kvm01 ~]$ free
                     total       used       free     shared    buffers     cached
        Mem:       3632284    3614508      17776          0       3980    3491676
        -/+ buffers/cache:     118852    3513432
        Swap:      5668856          0    5668856
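
    Two things I plan to check, both hedged guesses: changes made with virsh edit only take effect after a full domain shutdown and start, not a reboot from inside the guest; and a guest that reports ~3.6 GB despite a 7.5 GB allocation is what a 32-bit non-PAE guest kernel would show:

        uname -m                        # inside the guest: i686 would explain the ~3.6 GB cap
        virsh shutdown dev-kvm01        # on the host: cold restart to pick up XML edits
        virsh start dev-kvm01
        virsh setmem dev-kvm01 7864320  # balloon a running guest up to its configured max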

    Read the article

  • Slow NFS transfer performance of small files

    - by Arie K
    I'm using Openfiler 2.3 on an HP ML370 G5 (Smart Array P400, SAS disks combined using RAID 1+0). I set up an NFS share from an ext3 partition using Openfiler's web-based configuration, and I succeeded in mounting the share from another host. Both hosts are connected using a dedicated gigabit link.

    A simple benchmark using dd:

        $ dd if=/dev/zero of=outfile bs=1000 count=2000000
        2000000+0 records in
        2000000+0 records out
        2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s

    I see it can achieve a moderate transfer speed (58.0 MB/s). But if I copy a directory containing many small files (.php and .jpg, around 1-4 kB per file) with a total size of ~300 MB, the cp process takes about 10 minutes. Is NFS not suitable for small-file transfers like the above case? Or are there some parameters that must be adjusted?
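
    These are the client mount options I have seen suggested for this situation, as starting points to experiment with rather than known-good values for this exact setup (the export path is a placeholder):

        mount -t nfs -o rsize=32768,wsize=32768,noatime,nfsvers=3 \
            openfiler:/mnt/vg0/vol1/share /mnt/nfs
        # on the server, the 'async' export option also trades write safety
        # for a large speedup on many small files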

    Read the article

  • How do I reset the time on my computer, without turning it on?

    - by chipperyman573
    Alright, so today I did something very stupid: an experiment on my computer without backing it up. I saw that the calendar in Windows could only go up to 12/31/2999 (or something like that). I was wondering, if I set the time to 11:59:59 PM on that date, whether it would crash my computer, thinking that if it did I could just restore it from the recovery disc or something. Well, I was right: it did crash it. However, now I can't turn my computer on AT ALL. When I try to, it plays a 1-2 second beep, then 1 second of silence, repeated a total of 3 times. My manufacturer is Dell. I'm sending this from my phone; I apologise for any typos. My last backup was from a few months ago, so that won't work.

    Read the article

  • Implementing a Linux-HA based clustering setup on Windows

    - by Alex
    I have a (tried and tested) setup involving:

    • 2x load-balancing nodes on a floating IP via Heartbeat, load balancing 2 Tomcat servers
    • 2x Tomcat servers
    • 2x Galera Cluster MySQL servers replicating synchronously (+1 arbitrator node)

    All are evenly spread across 2 physical nodes. Now I have to somehow get the same functionality on Windows Server (2008, I think) nodes running under Xen virtualization. There is no possibility of using Linux for any of the nodes. I count two main problems:

    • No Linux-HA heartbeat daemon for the load balancing
    • No Galera synchronous replication for MySQL

    I freely admit to having nearly no Windows knowledge when it comes to clustering. Is there a way to closely mimic the setup I have described, or is it a total write-off?

    Read the article

  • hosting people asking for my account username and password to enable curl and socket function only for me

    - by Jayapal Chandran
    I have hosted my site in a shared environment. My hosting people disabled socket functions altogether, and they said they can enable them just for me if I give a written statement. I did, but then they asked for my control panel login details so they can run some kind of script to enable it. Is it right for the hosting company to ask for my credentials? They have total control, so why can't they do it themselves?

    Edit: Six months ago, many websites on their server got hacked. They think it was because of the socket functions, and so they disabled them. They say they can enable them for specific users who do programming with them, and only by email request.

    Read the article

  • Content server backups

    - by Dan Sosedoff
    What is the best way to back up data on content servers? For example, I have 15 servers that just hold content; no applications run on them. Each server has a 250 GB hard drive, so it's a pretty big amount of data. All the data is externally accessible (via HTTP). So, the question is: what methodology is best in my case? The most useful method I know is cross-backup: each server contains its own data plus the backup of one other server. But there is a significant reduction in total capacity. RAID?
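
    As I understand it, the cross-backup idea boils down to a nightly job like this on each box, where server N pushes to server N+1 (hostnames and paths are placeholders):

        rsync -a --delete /srv/content/ content02:/srv/backup/content01/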

    Read the article

  • How to get AMD Catalyst working on Arch x86_64

    - by gh403
    I've got a Dell Inspiron 15R 7520 with AMD's hybrid "PowerXpress" graphics. The integrated graphics are (if I understand it correctly) part of the i7-3612QM processor, and the discrete card is a "Southern Islands" Radeon HD 7730M. The integrated graphics work perfectly under Arch. However, the discrete graphics don't. I have tried several different methods, and the one that seems to get me the farthest with the least effort is the AUR package catalyst-total-pxp. After installing, rebooting, and issuing the commands

        # aticonfig --initial
        # pxp_switch_catalyst amd
        # X

    X completely fails to start. The X log can be found here. I don't understand what is failing; potentially, it has something to do with the way my card is hooked up. I think it's muxless, but I really don't know. What is the matter here? Any help would be appreciated.

    Read the article

  • Incredibly high latency for Ubuntu guest on Hyper-V

    - by Mark Henderson
    I've got several Ubuntu 10.04 virtual machines running as Hyper-V guests on Windows Server 2008 R2 SP1 and they're all perfectly fine. Today I installed my first Ubuntu 11.10 virtual machine and I'm seeing ridiculous pings. These servers are all connected via gigabit to a local LAN, with almost no network traffic at all [1], with a legacy network adapter in Hyper-V. I'm a bit of an Ubuntu n00b, so I don't really know where to go from here. Any ideas?

    free -m reports:

                     total       used       free     shared    buffers     cached
        Mem:           485        470         15          0         63        299
        -/+ buffers/cache:        107        378
        Swap:          507         20        487

    This is within a few MB of our other Ubuntu servers that are on 10.04. I removed the legacy NIC and installed a synthetic one in Hyper-V, and this did improve the numbers, in that they're around 10-30 ms now, but I would still expect <1 ms response times.

    [1] As a comparison, I have another Ubuntu 10.04 guest on Hyper-V almost 1,000 km away that has a ping of 33 ms.
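
    The first thing I would rule out, as a guess: whether the synthetic-driver modules are actually loaded in the 11.10 guest (module names as they shipped in mainline kernels of that era):

        lsmod | grep ^hv_
        # if absent, add them and rebuild the initramfs:
        sudo sh -c 'printf "hv_vmbus\nhv_storvsc\nhv_netvsc\n" >> /etc/initramfs-tools/modules'
        sudo update-initramfs -u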

    Read the article

  • Laptop battery lifetime from Dell specs?

    - by user26535
    Question: When I buy a Dell laptop, I get the following choices for the battery (a lithium-ion main battery with X cells and Y Wh, included in the price or at an additional cost):

    • Lithium-ion main battery with 4 cells and 24 Wh [included in price]
    • Lithium-ion main battery with 9 cells and 85 Wh [plus CHF 120.01]
    • Lithium-ion main battery with 6 cells and 46 Wh [plus CHF 30.00]

    I figured that 85 Wh offers +254% of the 24 Wh lifetime, but... is there any way to calculate what battery time this amounts to in hours? I mean, how many hours will the 24 Wh last (at normal operation, e.g. writing a document, not watching video)? Otherwise the +254% is a pretty useless number... Also, does anybody know whether 4 cells means 4 times 24 Wh, or 24 Wh in total?
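
    Rough arithmetic, assuming a typical light-load draw of around 12 W for a laptop of this class (the 12 W is my assumption, not a Dell figure), and noting that the Wh rating is the whole pack's capacity, not per cell:

        24 Wh / 12 W ≈ 2.0 hours
        46 Wh / 12 W ≈ 3.8 hours
        85 Wh / 12 W ≈ 7.1 hours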

    Read the article

  • How do you recommend installing Linux on a computer that has no external drive or ability to boot from external media?

    - by 7777
    I have an old Toshiba Portege 3505 "ultralight" laptop (meaning it doesn't have any kind of disk drive on it at all) that I'd like to completely reformat and install Linux on. It won't boot from any drive (and I don't have any on hand), so I'll have to install it from a USB stick, which I doubt it boots from either. (I'm not sure how to change the settings in my BIOS to get the computer to boot from a USB stick. Any ideas for this?) How do you recommend I do this? I want to note that I don't want to run Linux off a LiveUSB; I want to actually install it on the machine. I was thinking about Damn Small Linux: it's tiny and all I need. Any advice or suggestions for something else, though? Finally, I'm a total newbie at this: I've never installed Linux on anything before, so I might be a little slow on some stuff! Thanks!

    Read the article

  • How to make an Excel formula which totals several adjacent rows based on cell values

    - by Yishai
    I have an Excel sheet with three columns: date, person and percentage. I would like to put in a data validation that flags cells if the total for a given date/person combination does not equal 100%. Is that possible? In other words, in the custom formula of a data validation, I would like to make the following type of formula:

        =if(sum( cells with (date = the date on this row, person = person on this row) )=1)

    Is there a function which will return the cells in a range conditioned on certain values, or will sum such cells? Note that if it is not possible to condition on two cells, I have no issue adding a cell which combines both values for the purpose of effecting the lookup.
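
    A sketch of what I mean, in case it is possible: SUMPRODUCT can evaluate a two-condition sum directly in a data-validation custom formula. The ranges assume dates in column A, people in column B and percentages in column C, in rows 2-100; adjust to the real layout:

        =SUMPRODUCT(($A$2:$A$100=A2)*($B$2:$B$100=B2)*$C$2:$C$100)=1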

    Read the article

  • Looking for the best EC2 setup for 3 sites totaling 1.5 million hits monthly

    - by john h.
    I am looking to consolidate our current AWS setup of 2 large EC2 servers and 2 large RDS servers for our 3 websites, which get a total of about 1.5 million hits a month, increasing every month, with the majority of the traffic (1 million) going to one forum site in the group and the rest to an ecommerce site and a small WordPress site. So here is my question:

    • Would it be better for us to combine the two large EC2 servers into just one, and likewise the 2 RDS servers, so we run all three sites off one large EC2 instance and one RDS instance?
    • Or should we set up maybe 2-3 smaller EC2 servers, load-balanced, with a single RDS?
    • Or something completely different?

    One concern is that if one site crashes, it takes the others with it. It happened in the past, but I am pretty sure that was because of the forum software and not the server setup. -john

    Read the article
