Search Results

Search found 20163 results on 807 pages for 'struct size'.


  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find deep-dive technical explanations sufficient to feel confident doing dangerous things. Here is my question: I have a Mac Pro running OSX 10.6.2. My main root/boot disk is a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640 GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable...), removed the 640 GB disk when it synced, replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror. Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640 GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

      -> diskutil list
      /dev/disk0
         #: TYPE NAME SIZE IDENTIFIER
         0: GUID_partition_scheme *1.0 TB disk0
         1: EFI 209.7 MB disk0s1
         2: Apple_RAID 999.9 GB disk0s2
         3: Apple_Boot Boot OSX 134.2 MB disk0s3
      /dev/disk1
         #: TYPE NAME SIZE IDENTIFIER
         0: GUID_partition_scheme *1.0 TB disk1
         1: EFI 209.7 MB disk1s1
         2: Apple_RAID 999.9 GB disk1s2
         3: Apple_Boot Boot OSX 134.2 MB disk1s3
      /dev/disk2
         #: TYPE NAME SIZE IDENTIFIER
         0: GUID_partition_scheme *640.1 GB disk2
         1: EFI 209.7 MB disk2s1
         2: Apple_HFS Mac Disk 2 536.7 GB disk2s2
         3: Microsoft Basic Data BOOTCAMP 103.1 GB disk2s3
      /dev/disk3
         #: TYPE NAME SIZE IDENTIFIER
         0: Apple_HFS Mirror1 *639.8 GB disk3

      -> diskutil appleraid list
      AppleRAID sets (1 found)
      ===============================================================================
      Name: Macintosh HD
      Unique ID: 1953F864-B474-4EB6-8E69-41834EBD0247
      Type: Mirror
      Status: Online
      Size: 639.8 GB (639791038464 Bytes)
      Rebuild: manual
      Device Node: disk3
      -------------------------------------------------------------------------------
      # Device Node UUID Status
      -------------------------------------------------------------------------------
      0 disk1s2 25109BAE-5697-40EA-B612-0217851444F7 Online
      1 disk0s2 11B83AB0-8148-4DB6-8761-DEF08C855F8D Online
      ===============================================================================

    Thanks in advance.

  • GlusterFs - high load 90-107% CPU

    - by Sara
    I have tried over and over to improve performance and fix a problem with Gluster; I have tried everything. I serve web pages, PHP files, images, etc. from Gluster. The problem appeared after updating from 3.3.0 to 3.3.1. I tried 3.4 thinking it might fix it, but the problem is still the same. I temporarily have 1 brick, but before the upgrade this was fine. Config:

      Volume Name: ...
      Type: Replicate
      Volume ID: ...
      Status: Started
      Number of Bricks: 0 x 2 = 1
      Transport-type: tcp
      Bricks:
      Brick1: ...:/...
      Options Reconfigured:
      cluster.stripe-block-size: 128KB
      performance.cache-max-file-size: 100MB
      performance.flush-behind: on
      performance.io-thread-count: 16
      performance.cache-size: 256MB
      auth.allow: ...
      performance.cache-refresh-timeout: 5
      performance.write-behind-window-size: 1024MB

    I use FUSE. "Maybe the high load is due to the unavailable brick" - I have thought about that, but I can't find information on how to safely change the type of the volume. Maybe you know how?

  • "Unable to create direct context rendering" error when running an OpenGL application

    - by Rodnower
    Hello, I try to run the Mesa gears example and I get the following error:

      freeglut (./gears): Unable to create direct context rendering for window 'Gears'
      This may hurt performance.

    The application runs successfully anyway, but I guess that in the future I will have performance problems. I run Linux CentOS 5 on VMware 7. Mesa's version is 6.5. The relevant output of lspci -v gives:

      00:0f.0 VGA compatible controller: VMware SVGA II Adapter (prog-if 00 [VGA controller])
      Subsystem: VMware SVGA II Adapter
      Flags: bus master, medium devsel, latency 64, IRQ 9
      I/O ports at 10d0 [size=16]
      Memory at d0000000 (32-bit, non-prefetchable) [size=128M]
      Memory at d8000000 (32-bit, non-prefetchable) [size=8M]
      [virtual] Expansion ROM at 30000000 [disabled] [size=32K]
      Capabilities: [40] Vendor Specific Information

    Does anyone have an idea? Is there a VMware guest driver for CentOS? Thank you in advance.

  • Mail server not sending or receiving after removal from barracuda blacklist to white list

    - by user137765
    Mail server not sending or receiving after removal from barracuda blacklist to white list. I've checked against black lists and the ip and domain are clean. 1and1 are saying its Barracuda black list and barracuda are saying its not blacklisted and that its somethign with 1and1 server. section from log file... Sep 20 04:29:25 vegaserve postfix/smtpd[16906]: connect from mta860.chtah.net[63.236.31.146] Sep 20 04:29:25 vegaserve postfix/smtpd[16070]: connect from host81-136-144-117.in-addr.btopenworld.com[81.136.144.117] Sep 20 04:29:27 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: raidon - short names not allowed from @ [201.80.253.153]ERR: 1348111767.185119 LOGOUT, [email protected], ip=[86.143.136.249], top=0, retr=0, time=151, rcvd=18, sent=283, maildir=/var/qmail/mailnames/mbelectrics.net/mb/Maildir Sep 20 04:29:28 vegaserve pop3d: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:28 vegaserve postfix/smtpd[15388]: connect from mta965.emails.itv.com[8.30.201.55] Sep 20 04:29:29 vegaserve postfix/smtpd[18194]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:29 vegaserve postfix/cleanup[24879]: 95CB31E87556C: message-id=<[email protected] Sep 20 04:29:29 vegaserve postfix/qmgr[14378]: 95CB31E87556C: from=, size=975, nrcpt=1 (queue active) Sep 20 04:29:29 vegaserve postfix/smtpd[18194]: disconnect from uspmta172097.emarsys.net[195.54.172.97] Sep 20 04:29:29 vegaserve postfix/smtp[25748]: 95CB31E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.05/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:29 vegaserve postfix/bounce[25897]: warning: 95CB31E87556C: undeliverable postmaster notification discarded Sep 20 04:29:29 vegaserve postfix/qmgr[14378]: 95CB31E87556C: removed Sep 20 04:29:32 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:29:37 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rei - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:38 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:29:38 vegaserve postfix/smtpd[19328]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:40 vegaserve postfix/smtpd[18331]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:40 vegaserve postfix/smtpd[24464]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:40 vegaserve postfix/cleanup[24825]: BD1A71E87556C: message-id=<[email protected] Sep 20 04:29:40 vegaserve postfix/qmgr[14378]: BD1A71E87556C: from=, size=673, nrcpt=1 (queue active) Sep 20 04:29:40 vegaserve postfix/smtpd[24464]: disconnect from unknown[118.97.212.190] Sep 20 04:29:40 vegaserve postfix/smtp[25748]: BD1A71E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:40 vegaserve postfix/bounce[25995]: warning: BD1A71E87556C: undeliverable postmaster notification discarded Sep 20 04:29:40 vegaserve postfix/qmgr[14378]: BD1A71E87556C: removed Sep 20 04:29:41 vegaserve postfix/cleanup[24879]: 0A42B1E87556C: message-id=<[email protected] Sep 20 04:29:41 vegaserve postfix/qmgr[14378]: 0A42B1E87556C: from=, size=961, nrcpt=1 (queue active) Sep 20 04:29:41 vegaserve postfix/smtpd[18331]: disconnect from bay0-omc4-s10.bay0.hotmail.com[65.54.190.212] Sep 20 04:29:41 vegaserve postfix/smtp[25748]: 0A42B1E87556C: to=, orig_to=, 
relay=none, delay=0.03, delays=0.03/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:41 vegaserve postfix/bounce[25897]: warning: 0A42B1E87556C: undeliverable postmaster notification discarded Sep 20 04:29:41 vegaserve postfix/qmgr[14378]: 0A42B1E87556C: removed Sep 20 04:29:43 vegaserve postfix/smtpd[17511]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:43 vegaserve postfix/cleanup[24825]: 8F8991E87556C: message-id=<[email protected] Sep 20 04:29:43 vegaserve postfix/qmgr[14378]: 8F8991E87556C: from=, size=946, nrcpt=1 (queue active) Sep 20 04:29:43 vegaserve postfix/smtpd[17511]: disconnect from blu0-omc4-s22.blu0.hotmail.com[65.55.111.161] Sep 20 04:29:43 vegaserve postfix/smtp[25748]: 8F8991E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.02/0/0.02/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:43 vegaserve postfix/bounce[25995]: warning: 8F8991E87556C: undeliverable postmaster notification discarded Sep 20 04:29:43 vegaserve postfix/qmgr[14378]: 8F8991E87556C: removed Sep 20 04:29:44 vegaserve postfix/cleanup[24879]: 088641E87556C: message-id=<[email protected] Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 088641E87556C: from=, size=1078, nrcpt=1 (queue active) Sep 20 04:29:44 vegaserve postfix/smtpd[19328]: disconnect from smtp10.bis7.eu.blackberry.com[178.239.85.15] Sep 20 04:29:44 vegaserve postfix/smtp[25748]: 088641E87556C: to=, orig_to=, relay=none, delay=0.05, delays=0.03/0/0.01/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:44 vegaserve postfix/bounce[25995]: warning: 088641E87556C: undeliverable postmaster notification discarded Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 088641E87556C: removed Sep 20 04:29:44 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rin - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:44 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:29:44 vegaserve postfix/smtpd[18965]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:44 vegaserve postfix/cleanup[24825]: 946F51E87556C: message-id=<[email protected] Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 946F51E87556C: from=, size=1173, nrcpt=1 (queue active) Sep 20 04:29:44 vegaserve postfix/smtpd[18965]: disconnect from hubrelay-rd.bt.com[62.239.224.99] Sep 20 04:29:44 vegaserve postfix/smtp[25748]: 946F51E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:29:44 vegaserve postfix/bounce[25897]: warning: 946F51E87556C: undeliverable postmaster notification discarded Sep 20 04:29:44 vegaserve postfix/qmgr[14378]: 946F51E87556C: removed Sep 20 04:29:45 vegaserve postfix/smtpd[14816]: connect from col0-omc2-s12.col0.hotmail.com[65.55.34.86] Sep 20 04:29:47 vegaserve postfix/smtpd[16900]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:29:47 vegaserve postfix/cleanup[24879]: 961721E87556C: message-id=<[email protected] Sep 20 04:29:47 vegaserve postfix/qmgr[14378]: 961721E87556C: from=, size=1082, nrcpt=1 (queue active) Sep 20 04:29:47 vegaserve postfix/smtpd[16900]: disconnect from mta-35d2.livingsocial.com[199.91.53.210] Sep 20 04:29:47 vegaserve postfix/smtp[25748]: 961721E87556C: to=, orig_to=, relay=none, delay=0.04, delays=0.04/0/0/0, dsn=5.4.6, status=bounced (mail for 
vegaserve.com loops back to myself) Sep 20 04:29:47 vegaserve postfix/bounce[25995]: warning: 961721E87556C: undeliverable postmaster notification discarded Sep 20 04:29:47 vegaserve postfix/qmgr[14378]: 961721E87556C: removed Sep 20 04:29:50 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: rini - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:50 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:29:52 vegaserve postfix/smtpd[24478]: connect from col0-omc2-s13.col0.hotmail.com[65.55.34.87] Sep 20 04:29:52 vegaserve postfix/smtpd[18923]: connect from www.idbwplan.com[193.181.254.21] Sep 20 04:29:55 vegaserve postfix/smtpd[15968]: connect from 105-48.mta.dotmailer.com[94.143.105.48] Sep 20 04:29:56 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: ringo - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:29:56 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:30:00 vegaserve postfix/smtpd[18772]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:30:01 vegaserve postfix/cleanup[24825]: 1DAD71E87556C: message-id=<[email protected] Sep 20 04:30:01 vegaserve postfix/qmgr[14378]: 1DAD71E87556C: from=, size=1022, nrcpt=1 (queue active) Sep 20 04:30:01 vegaserve postfix/smtpd[18772]: disconnect from mail95.us2.mcsv.net[173.231.139.95] Sep 20 04:30:01 vegaserve postfix/smtp[25748]: 1DAD71E87556C: to=, orig_to=, relay=none, delay=0.06, delays=0.05/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself) Sep 20 04:30:01 vegaserve postfix/bounce[25897]: warning: 1DAD71E87556C: undeliverable postmaster notification discarded Sep 20 04:30:01 vegaserve postfix/qmgr[14378]: 1DAD71E87556C: removed Sep 20 04:30:02 vegaserve pop3d: IMAP connect from @ [201.80.253.153]checkmailpasswd: FAILED: ritsuko - short names not allowed from @ [201.80.253.153]ERR: LOGIN FAILED, ip=[201.80.253.153] Sep 20 04:30:02 vegaserve postfix/smtpd[16911]: warning: connect to proxy service 127.0.0.1:10025: Connection timed out Sep 20 04:30:02 vegaserve pop3d: Connection, ip=[201.80.253.153] Sep 20 04:30:02 vegaserve postfix/cleanup[24879]: 8AADD1E87556C: message-id=<[email protected] Sep 20 04:30:02 vegaserve postfix/qmgr[14378]: 8AADD1E87556C: from=, size=1003, nrcpt=1 (queue active) Sep 20 04:30:02 vegaserve postfix/smtpd[16911]: disconnect from mr133.createsend.com[184.106.86.133] Sep 20 04:30:02 vegaserve postfix/smtp[25748]: 8AADD1E87556C: to=, orig_to=, relay=none, delay=0.02, delays=0.02/0/0/0, dsn=5.4.6, status=bounced (mail for vegaserve.com loops back to myself)

  • limits.conf to set memory limits

    - by Rupert Jipe
    I would like to limit any process from using more than 500 MB of RAM. AFAIK this is done using RSS in /etc/security/limits.conf but the process called gnome-panel apparently is using 618436 kB of VmRSS. How can this be ? /etc/security/limits.conf * hard rss 512000 username@debian:~$ cat /proc/3002/status Name: gnome-panel State: S (sleeping) Tgid: 3002 Pid: 3002 PPid: 2910 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 64 Groups: 20 24 25 29 44 46 112 116 117 1000 1002 1003 VmPeak: 916636 kB VmSize: 916636 kB VmLck: 0 kB VmHWM: 618436 kB VmRSS: 618436 kB VmData: 601972 kB VmStk: 104 kB VmExe: 516 kB VmLib: 29232 kB VmPTE: 1760 kB Threads: 1 SigQ: 0/14001 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000020001000 SigCgt: 0000000180000000 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: ffffffffffffffff Cpus_allowed: 3 Cpus_allowed_list: 0-1 Mems_allowed: 00000000,00000001 Mems_allowed_list: 0 voluntary_ctxt_switches: 871965 nonvoluntary_ctxt_switches: 47553 PaX: PeMRs username@debian:~$ cat /proc/3002/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 0 bytes Max resident set 524288000 524288000 bytes Max processes 100 100 processes Max open files 1024 1024 files Max locked memory 65536 65536 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 14001 14001 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us

  • Texture quality issues with libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without quality loss (of course, it's vector ^^). I simulated my gameboard directly in Illustrator just before setting up my assets in libgdx to implement them in my game. I set all the objects at the right size so that they fit perfectly on the XHDPI device I am running my tests on. As you can see it works great (for me at least ^^); the PNG quality is good for me, as expected! So I exported all my PNGs at this size, set up my assets in libgdx and built my game APK. And here is a screenshot of my gameboard (don't pay attention to the differences in placement, but compare the objects present in both screenshots). As you can see, I lose PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (less obviously) on the mushroom (check the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible in my first screenshot, and I just can't figure out why this is happening. If you have any ideas, you are very welcome! Thanks. PS: You can check the two gameboards more clearly at these two links (look at them at 100%, displayed at high resolution): the "good quality" link, from Illustrator, and the "poor quality" link, from the game. Second phase of tests: we display an object (the hedgehog) on our main menu screen to see how it looks. The thing is, it looks like it is supposed to, which means high quality with no stray pixels. The hedgehog PNG comes from an atlas:

      layer.addActor(hedgehog);

    There is no loss of quality with this method, so we think the problem comes from the method we are using to display it on our gameboard:

      blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3));

    The block gets its size from the vector we associate with it, but we have a loss of quality with this method.

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to automatic in DB2 V9.7 by default so that DB2 can adjust the values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.

    DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 allocates a small page size (64KB) for all memory allocations and expands and shrinks the memory as needed. In order to take advantage of the large page size (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use the 256MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.

    NUM_IOCLEANERS: This parameter defines the number of page cleaners. The default value of this parameter is AUTOMATIC, which is calculated based on the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. This leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease the value. A good practice is to set the value to the number of physical devices that are used by the database table space containers.

  • 2D Skeletal Animation Transformations

    - by Brad Zeis
    I have been trying to build a 2D skeletal animation system for a while, and I believe that I'm fairly close to finishing. Currently, I have the following data structures:

      struct Bone {
          Bone *parent;
          int child_count;
          Bone **children;
          double x, y;
      };

      struct Vertex {
          double x, y;
          int bone_count;
          Bone **bones;
          double *weights;
      };

      struct Mesh {
          int vertex_count;
          Vertex **vertices;
          Vertex **tex_coords;
      };

    Bone->x and Bone->y are the coordinates of the end point of the Bone. The starting point is given by (bone->parent->x, bone->parent->y) or (0, 0). Each entity in the game has a Mesh, and Mesh->vertices is used as the bounding area for the entity. Mesh->tex_coords are texture coordinates. In the entity's update function, the position of the Bone is used to change the coordinates of the Vertices that are bound to it. Currently what I have is:

      void Mesh_update(Mesh *mesh) {
          int i, j;
          double sx, sy;
          for (i = 0; i < vertex_count; i++) {
              if (mesh->vertices[i]->bone_count == 0) {
                  continue;
              }
              sx, sy = 0;
              for (j = 0; j < mesh->vertices[i]->bone_count; j++) {
                  sx += (/* ??? */) * mesh->vertices[i]->weights[j];
                  sy += (/* ??? */) * mesh->vertices[i]->weights[j];
              }
              mesh->vertices[i]->x = sx;
              mesh->vertices[i]->y = sy;
          }
      }

    I think I have everything I need, I just don't know how to apply the transformations to the final mesh coordinates. What transformations do I need here? Or is my approach just completely wrong?
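
    One way to fill in the missing terms is sketched below, in the same C style as the structs above. This is an illustration, not the poster's code: the BindPose struct and the bone_angle()/bone_transform() helpers are hypothetical, and it assumes you capture each bone's start point and direction once, at the moment the vertices are bound, and that each vertex's weights sum to 1.

      #include <math.h>

      /* Same layout as the Bone struct above. */
      struct Bone {
          struct Bone *parent;
          int child_count;
          struct Bone **children;
          double x, y;
      };

      /* Captured once per bone when the mesh is attached (the "bind pose"). */
      struct BindPose {
          double start_x, start_y;   /* bone start point at bind time */
          double angle;              /* bone direction at bind time   */
      };

      /* Current direction of a bone, measured from its start point (the parent's end). */
      static double bone_angle(const struct Bone *b)
      {
          double sx = b->parent ? b->parent->x : 0.0;
          double sy = b->parent ? b->parent->y : 0.0;
          return atan2(b->y - sy, b->x - sx);
      }

      /* Contribution of one bone to one vertex. (local_x, local_y) is the vertex
         position expressed relative to the bone's start point at bind time.     */
      static void bone_transform(const struct Bone *bone, const struct BindPose *bind,
                                 double local_x, double local_y,
                                 double *out_x, double *out_y)
      {
          double cur_sx = bone->parent ? bone->parent->x : 0.0;
          double cur_sy = bone->parent ? bone->parent->y : 0.0;
          double delta  = bone_angle(bone) - bind->angle;  /* how far the bone has rotated */
          double c = cos(delta), s = sin(delta);

          /* rotate the bind-pose offset by the bone's rotation, then move it to
             the bone's current start point */
          *out_x = cur_sx + local_x * c - local_y * s;
          *out_y = cur_sy + local_x * s + local_y * c;
      }

    In Mesh_update() the two "???" terms would then be the out_x and out_y produced by a call like this for bone j, so each vertex ends up at the weighted sum of the positions its bones carry it to.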

  • Missing Operating System after trying to upgrade to Ubuntu 11

    - by Mauricio
    Hi there! After trying to upgrade from Ubuntu 10.04 to 11, the upgrade process stopped while running, and then I got an "out of disk, grub rescue" message when booting. After running Boot Repair, I got these results. Now I get "Missing Operating System" when trying to boot. Below I show the results of some commands I gathered from help forums, but I still haven't reached a solution. Could you please help me? Any enlightenment will be very helpful! Disk Utility says "Disk has a few bad sectors". When trying to run the self-test I get "FAILED (Read)". Here is what GParted says about the /dev/sda1 partition (ext4):

      Flags: boot
      Status: not mounted
      Warning: e2label: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda1
      Couldn't find valid filesystem superblock
      Unable to read the contents of this filesystem!

    From sudo fdisk -l I got:

      Disk /dev/sda: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000e0596

      Device Boot     Start        End     Blocks  Id  System
      /dev/sda1   *    2048  607428607  303713280  83  Linux
      /dev/sda2   607430654  625141759    8855553   5  Extended
      /dev/sda5   607430656  625141759    8855552  82  Linux swap / Solaris

      Disk /dev/sdb: 320.1 GB, 320072933376 bytes
      255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000c3c41

      Device Boot  Start        End     Blocks  Id  System
      /dev/sdb1   *   63  625137344  312568641   c  W95 FAT32 (LBA)

    From sudo fdisk /dev/sda1 I got:

      fdisk: unable to read /dev/sda1: Inappropriate ioctl for device

    From sudo mount /dev/sda1 /mnt I got:

      mount: wrong fs type, bad option, bad superblock on /dev/sda1,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try dmesg | tail or so

    From sudo update-grub I got:

      error: cannot read from `/dev/sda'.
      /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is currently being developed for Android. I use OGG files: 96kbps VBR, 44.1KHz, 2 channels (that means stereo, right?). I read the other stackexchange topics about "acceptable sound quality", but they're too general and address too many things. My experience is that even at 80kbps my effects sound OK, but I have only tested on a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD). My questions: For mobile phones and tablets in general, what parameters are recommended? Won't my 80kbps sounds be bad on a newer device (such as a modern tablet)? I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?) - is there any noticeable difference at all for mobile phones / tablets, in terms of the player experience? Is it worth it at all? I assume that stereo sounds take much more memory (when they're decoded to PCM), despite the fact that the compressed OGG size is practically the same. Reacting to Roy T.'s great comment: Actually, I couldn't measure the PCM size (Android decodes OGG internally), but I thought that stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting it:
    - The new WAV file size is half of what it was before
    - The OGG file size is practically the same as before
    The sound effects and game music were recorded by my friend, who is an experienced hobby musician/composer, but he knows little about computers and software, so he just gave me some high-quality WAV files generated with his hardware. These were stereo, but if I check them in Audacity, both channels appear to be exactly the same. Can I consider them the same (= move to mono), or might there be some differences unnoticeable to the human ear?
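
    As a rough illustration of that assumption (my back-of-the-envelope arithmetic, assuming the OGG files are decoded to 16-bit PCM, which is the common case; the actual buffer format depends on the device and audio backend):

      stereo: 44,100 samples/s x 2 bytes/sample x 2 channels = 176,400 bytes/s (~172 KiB per second of sound)
      mono:   44,100 samples/s x 2 bytes/sample x 1 channel  =  88,200 bytes/s (~86 KiB per second of sound)

    So once decoded, a stereo effect costs roughly twice the RAM of the mono version, even though the compressed OGG files are almost the same size.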

  • My Intel Core 2 Duo processor is not detected

    - by walid
    My processor is not detected intel core 2 duo When I type $uname -m -p I get this i686 unknown I have Ubuntu 10.10 netbook remix but the cat /proc/cpuinfo gives right identification of two processors as below processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz stepping : 6 cpu MHz : 1826.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts tpr_shadow bogomips : 3657.99 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz stepping : 6 cpu MHz : 1826.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dts tpr_shadow bogomips : 3657.53 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: The problem is with programs that uses more than one core like virtualbox and bitcoin which refuses to use more than one core Is there anythign wrong or anything that I can do? My installation is from a live usb iso on a USB

  • Ubuntu installation does not recognize drive partitioning

    - by Woltan
    I have a 1TB drive and installed Windows 7 on a 128GB partition. When I now try to install Ubuntu 11.04 it does not recognize the Windows partition but offers the complete 1TB drive to install Ubuntu on instead. It displays: However, in the Ubuntu Disk Utility the Windows partitions are recognized. What do I need to do in order for Ubuntu to recognize the Windows 7 partition and install Ubuntu as a dual boot? Response to comments The following commands were executed and the results are shown below: fdisk -l WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x34a38165 Device Boot Start End Blocks Id System /dev/sda1 * 1 13 102400 7 HPFS/NTFS Partition 1 does not end on cylinder boundary. /dev/sda2 13 16318 130969600 7 HPFS/NTFS Disk /dev/sdb: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x14a714a6 Device Boot Start End Blocks Id System /dev/sdb1 1 60801 488384001 83 Linux parted -l Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only. Error: /dev/sr0: unrecognised disk label

  • Patterns for Handling Changing Property Sets in C++

    - by Bhargav Bhat
    I have a bunch "Property Sets" (which are simple structs containing POD members). I'd like to modify these property sets (eg: add a new member) at run time so that the definition of the property sets can be externalized and the code itself can be re-used with multiple versions/types of property sets with minimal/no changes. For example, a property set could look like this: struct PropSetA { bool activeFlag; int processingCount; /* snip few other such fields*/ }; But instead of setting its definition in stone at compile time, I'd like to create it dynamically at run time. Something like: class PropSet propSetA; propSetA("activeFlag",true); //overloading the function call operator propSetA("processingCount",0); And the code dependent on the property sets (possibly in some other library) will use the data like so: bool actvFlag = propSet["activeFlag"]; if(actvFlag == true) { //Do Stuff } The current implementation behind all of this is as follows: class PropValue { public: // Variant like class for holding multiple data-types // overloaded Conversion operator. Eg: operator bool() { return (baseType == BOOLEAN) ? this->ToBoolean() : false; } // And a method to create PropValues various base datatypes static FromBool(bool baseValue); }; class PropSet { public: // overloaded[] operator for adding properties void operator()(std::string propName, bool propVal) { propMap.insert(std::make_pair(propName, PropVal::FromBool(propVal))); } protected: // the property map std::map<std::string, PropValue> propMap; }; This problem at hand is similar to this question on SO and the current approach (described above) is based on this answer. But as noted over at SO this is more of a hack than a proper solution. The fundamental issues that I have with this approach are as follows: Extending this for supporting new types will require significant code change. At the bare minimum overloaded operators need to be extended to support the new type. Supporting complex properties (eg: struct containing struct) is tricky. Supporting a reference mechanism (needed for an optimization of not duplicating identical property sets) is tricky. This also applies to supporting pointers and multi-dimensional arrays in general. Are there any known patterns for dealing with this scenario? Essentially, I'm looking for the equivalent of the visitor pattern, but for extending class properties rather than methods. Edit: Modified problem statement for clarity and added some more code from current implementation.
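
    For readers who want to compile and poke at the idea, here is a minimal self-contained sketch of the same map-of-variants approach described above (my own condensed version, not the question's actual code: it only supports bool and int, FromInt and the main() usage are mine, and error handling is omitted):

      #include <iostream>
      #include <map>
      #include <string>

      // Minimal variant-like value type, as described in the question (bool/int only).
      class PropValue {
      public:
          static PropValue FromBool(bool v) { PropValue p; p.type = BOOLEAN; p.b = v; return p; }
          static PropValue FromInt(int v)   { PropValue p; p.type = INTEGER; p.i = v; return p; }

          operator bool() const { return type == BOOLEAN ? b : false; }
          operator int()  const { return type == INTEGER ? i : 0; }

      private:
          enum BaseType { BOOLEAN, INTEGER };
          BaseType type = BOOLEAN;
          bool b = false;
          int  i = 0;
      };

      // Property set keyed by name: operator() adds or overwrites, operator[] reads.
      class PropSet {
      public:
          void operator()(const std::string& name, bool v) { props[name] = PropValue::FromBool(v); }
          void operator()(const std::string& name, int v)  { props[name] = PropValue::FromInt(v); }

          PropValue operator[](const std::string& name) const {
              auto it = props.find(name);
              return it != props.end() ? it->second : PropValue();
          }

      private:
          std::map<std::string, PropValue> props;
      };

      int main() {
          PropSet propSetA;
          propSetA("activeFlag", true);
          propSetA("processingCount", 42);

          bool activeFlag = propSetA["activeFlag"];
          int  count      = propSetA["processingCount"];
          std::cout << std::boolalpha << activeFlag << " " << count << "\n";
      }

    Extending it to new types still means touching both classes, which is exactly the pain point described; where C++17 is available, std::variant with std::visit takes over most of that boilerplate.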

  • Only draw visible objects to the camera in 2D

    - by Deukalion
    I have Map, each map has an array of Ground, each Ground consists of an array of VertexPositionTexture and a texture name reference so it renders a texture at these points (as a shape through triangulation). Now when I render my map I only want to get a list of all objects that are visible in the camera. (So I won't loop through more than I have to) Structs: public struct Map { public Ground[] Ground { get; set; } } public struct Ground { public int[] Indexes { get; set; } public VertexPositionNormalTexture[] Points { get; set; } public Vector3 TopLeft { get; set; } public Vector3 TopRight { get; set; } public Vector3 BottomLeft { get; set; } public Vector3 BottomRight { get; set; } } public struct RenderBoundaries<T> { public BoundingBox Box; public T Items; } when I load a map: foreach (Ground ground in CurrentMap.Ground) { Boundaries.Add(new RenderBoundaries<Ground>() { Box = BoundingBox.CreateFromPoints(new Vector3[] { ground.TopLeft, ground.TopRight, ground.BottomLeft, ground.BottomRight }), Items = ground }); } TopLeft, TopRight, BottomLeft, BottomRight are simply the locations of each corner that the shape make. A rectangle. When I try to loop through only the objects that are visible I do this in my Draw method: public int Draw(GraphicsDevice device, ICamera camera) { BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection); // Visible count int count = 0; EffectTexture.World = camera.World; EffectTexture.View = camera.View; EffectTexture.Projection = camera.Projection; foreach (EffectPass pass in EffectTexture.CurrentTechnique.Passes) { pass.Apply(); foreach (RenderBoundaries<Ground> render in Boundaries.Where(m => frustum.Contains(m.Box) != ContainmentType.Disjoint)) { // Draw ground count++; } } return count; } When I try adding just one ground, then moving the camera so the ground is out of frame it still returns 1 which means it still gets draw even though it's not within the camera's view. Am I doing something or wrong or can it be because of my Camera? Any ideas why it doesn't work?
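
    The culling test itself is independent of XNA; as a point of reference, here is a compact C++ sketch of the standard plane/box check that a Contains-vs-Disjoint test boils down to for an axis-aligned box (my own illustration, with hypothetical Vec3/Plane/AABB types; plane sign conventions differ between libraries, so treat it as the idea rather than a drop-in):

      #include <array>

      struct Vec3  { float x, y, z; };
      struct Plane { Vec3 n; float d; };            // points with dot(n, p) + d >= 0 are "inside"
      struct AABB  { Vec3 min, max; };

      // Returns false only when the box is completely outside one of the planes
      // (the equivalent of ContainmentType.Disjoint in the code above).
      bool intersectsFrustum(const AABB& box, const std::array<Plane, 6>& planes)
      {
          for (const Plane& pl : planes) {
              // "Positive vertex": the box corner farthest along the plane normal.
              Vec3 p{ pl.n.x >= 0 ? box.max.x : box.min.x,
                      pl.n.y >= 0 ? box.max.y : box.min.y,
                      pl.n.z >= 0 ? box.max.z : box.min.z };
              if (pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d < 0)
                  return false;                      // even the farthest corner is behind this plane
          }
          return true;                               // inside or intersecting (conservative)
      }

    If a box that is clearly off-screen still passes a test like this, it is worth checking which space the box corners are in and which way the plane normals point.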

  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
    Boss asked me to check to see if I could figure out why he's had to restart the services on the Exchange server three mornings in a row now. While going through the system logs I ran across an error from the MSExchangeIs Mailbox Store, category General, Event 9690. The message said (edited to make generalized): Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical size of this database (the logical size equals the physical size of the .edb file and the .stm file minus the logical free space in each) is 22GB. This database size has exceeded the size limit of 22 GB. This database will be dismounted immediately. Hmm...happened at five in the morning, and I'm thinking this is a pretty good hint that this leads to the culprit. Thing is I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", standard, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.

  • What does "cpuid level" mean? Asking just out of curiosity

    - by ogzylz
    For example, I put just 2 core info of a 16 core machine. What does "cpuid level : 6" line means? If u can provide info about lines "bogomips : 5992.10" and "clflush size : 64" I will be appreciated ------------- processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 6 model name : Intel(R) Xeon(TM) CPU 3.00GHz stepping : 8 cpu MHz : 2992.689 cache size : 4096 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 6 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm bogomips : 5992.10 clflush size : 64 cache_alignment : 128 address sizes : 40 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 15 model : 6 model name : Intel(R) Xeon(TM) CPU 3.00GHz stepping : 8 cpu MHz : 2992.689 cache size : 4096 KB physical id : 1 siblings : 4 core id : 0 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 6 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm bogomips : 5985.23 clflush size : 64 cache_alignment : 128 address sizes : 40 bits physical, 48 bits virtual power management:

  • Technique for a screen-independent, grid-based puzzle with sprite animation

    - by Yan Cheng CHEOK
    Hello all, let's say I have a fixed-size grid puzzle game (8 x 10). I will be using sprite animation when the "pieces" in the puzzle move from one grid cell to another. I was wondering what the technique is for implementing this game so that it is screen-resolution independent. Here is what I plan to do:

    1) The data structure coordinates will be represented using doubles, with 1.0 as the maximum value.

      // Puzzle grid of 8 x 10
      Environment {
          double width = 0.8;
          double height = 1.0;
      }

      // Location of Sprite at coordinate (1, 1)
      Sprite {
          double posX = 0.1;
          double posY = 0.1;
          double width = 0.1;
          double height = 0.1;
      }

      // scale = PHYSICAL_SCREEN_SIZE
      drawBitmap (
          sprite_image,
          sprite_image_rect,
          new Rect(sprite.posX * Scale, sprite.posY * Scale,
                   (sprite.posX + sprite.width) * Scale, (sprite.posY + sprite.Height) * Scale),
          paint
      );

    2) A large sprite image will be used (128x128), since a sprite image will look fine when scaled down from a large size to a small size, but not vice versa.

    Besides the above-mentioned technique, is there any other consideration I have missed?
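
    To make the scale step concrete, here is a small sketch of the mapping the plan describes, written in plain C++ rather than the Android API above (drawBitmap and Rect belong to the poster's pseudo-code; the types, names and screen sizes here are illustrative). The only real decision is the scale factor - using the screen's shorter side keeps the 0..1 square on screen for any aspect ratio, which is an assumption on my part:

      #include <algorithm>
      #include <cstdio>

      // Normalized (0..1) sprite description, as in the plan above.
      struct NormSprite { double posX, posY, width, height; };

      // Pixel-space rectangle produced for the actual draw call.
      struct PixelRect { int left, top, right, bottom; };

      PixelRect toPixels(const NormSprite& s, int screenW, int screenH)
      {
          // One scale for both axes so cells stay square; shorter side wins.
          const double scale = std::min(screenW, screenH);
          return PixelRect{
              static_cast<int>(s.posX * scale),
              static_cast<int>(s.posY * scale),
              static_cast<int>((s.posX + s.width) * scale),
              static_cast<int>((s.posY + s.height) * scale)
          };
      }

      int main()
      {
          NormSprite sprite{0.1, 0.1, 0.1, 0.1};          // the (1,1) cell from the example
          for (int w : {480, 1080}) {                      // two hypothetical screen widths
              PixelRect r = toPixels(sprite, w, w * 10 / 8);
              std::printf("%dpx wide screen -> rect %d,%d - %d,%d\n",
                          w, r.left, r.top, r.right, r.bottom);
          }
      }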

  • Creating GPT partitions on EFI raw disk using diskpart

    - by kafka
    I've got a raw, blank GPT disk for use in a UEFI system. I need to create the partitions on it using diskpart. The only tutorial I've found so far is for diskpart.efi (the MS guide to GPT partitions with diskpart.efi), which I believe is slightly different from the command-line diskpart. Also, the guide says to create an MSR of 32 MB, but for a disk of 16 GB or larger I know it needs to be 128 MB. I'm happy doing it with diskpart, I just want to be sure I understand the fundamentals. I'm planning on creating, in this order:
    - ESP partition, size 102 MB (create partition esp size=102)
    - MSR partition, size 128 MB (create partition msr size=128)
    - data partition, the remaining space (approx 460 GB)
    Is this the correct thing to do, or is there anything I'm missing?

  • Scale a game object to Bounds

    - by Spikeh
    I'm trying to scale a lot of dynamically created game objects in Unity3D to the bounds of a sphere collider, based on the size of their current mesh. Each object has a different scale and mesh size. Some are bigger than the AABB of the collider, and some are smaller. Here's the script I've written so far: private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere) { var currentScale = objectToScale.transform.localScale; var currentSize = objectToScale.GetMeshHierarchyBounds().size; var targetSize = (sphere.radius * 2); var newScale = new Vector3 { x = targetSize * currentScale.x / currentSize.x, y = targetSize * currentScale.y / currentSize.y, z = targetSize * currentScale.z / currentSize.z }; Debug.Log("{0} Current scale: {1}, targetSize: {2}, currentSize: {3}, newScale: {4}, currentScale.x: {5}, currentSize.x: {6}", objectToScale.name, currentScale, targetSize, currentSize, newScale, currentScale.x, currentSize.x); //DoorDevice_meshBase Current scale: (0.1, 4.0, 3.0), targetSize: 5, currentSize: (2.9, 4.0, 1.1), newScale: (0.2, 5.0, 13.4), currentScale.x: 0.125, currentSize.x: 2.869114 //RedControlPanelForAirlock_meshBase Current scale: (1.0, 1.0, 1.0), targetSize: 5, currentSize: (0.0, 0.3, 0.2), newScale: (147.1, 16.7, 25.0), currentScale.x: 1, currentSize.x: 0.03400017 objectToScale.transform.localScale = newScale; } And the supporting extension method: public static Bounds GetMeshHierarchyBounds(this GameObject go) { var bounds = new Bounds(); // Not used, but a struct needs to be instantiated if (go.renderer != null) { bounds = go.renderer.bounds; // Make sure the parent is included Debug.Log("Found parent bounds: " + bounds); //bounds.Encapsulate(go.renderer.bounds); } foreach (var c in go.GetComponentsInChildren<MeshRenderer>()) { Debug.Log("Found {0} bounds are {1}", c.name, c.bounds); if (bounds.size == Vector3.zero) { bounds = c.bounds; } else { bounds.Encapsulate(c.bounds); } } return bounds; } After the re-scale, there doesn't seem to be any consistency to the results - some objects with completely uniform scales (x,y,z) seem to resize correctly, but others don't :| Its one of those things I've been trying to fix for so long I've lost all grasp on any of the logic :| Any help would be appreciated!
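
    For reference, here is the arithmetic from the script above separated from Unity so it is easier to test in isolation (Vec3 and the function names are mine; the first function is the per-axis formula used in ScaleToCollider, the second is the usual uniform-fit variant that preserves the mesh's proportions):

      #include <algorithm>

      struct Vec3 { float x, y, z; };

      // Per-axis rescale, mirroring the script above:
      // newScale = targetSize * currentScale / currentSize on each axis.
      Vec3 scaleToDiameter(const Vec3& currentScale, const Vec3& currentSize, float targetDiameter)
      {
          return Vec3{
              targetDiameter * currentScale.x / currentSize.x,
              targetDiameter * currentScale.y / currentSize.y,
              targetDiameter * currentScale.z / currentSize.z
          };
      }

      // Uniform variant: one factor for all axes, driven by the largest extent,
      // so the mesh fits inside the sphere without changing its proportions.
      Vec3 scaleToDiameterUniform(const Vec3& currentScale, const Vec3& currentSize, float targetDiameter)
      {
          const float largest = std::max({currentSize.x, currentSize.y, currentSize.z});
          const float k = targetDiameter / largest;
          return Vec3{ currentScale.x * k, currentScale.y * k, currentScale.z * k };
      }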

  • Best video codec for filmed powerpoint presentation

    - by rslite
    I have some presentations that are filmed. The audio is the presenter and the video is all the PowerPoint slides (size 1024x768, video codec H264, audio codec AAC). I would like to reduce their final file size, since a 1-hour presentation is about 800 MB. Most of it is the video part, which as I said is mostly PowerPoint slides that don't change much over a matter of several seconds. Which codec would be better suited to encode these images and reduce the size of the final file?

  • Recommended RAM and disc space for Oracle 11g on Windows

    - by Álvaro G. Vicario
    I need to provide the recommended amount of RAM and disc space (divided into two partitions) so the customer can create an appropriate virtual machine to run Oracle. All I could find in the documentation was a brief listing with minimum RAM and typical/advanced install types. The virtual machine will run the latest Oracle Standard Edition One (11g Release 2 so far) under Windows Server 2008 x64 and will host a reasonably low-traffic web application. How much RAM and disc space must I ask for in order to be safe? (Feel free to ask for further details if I've omitted something relevant.) Update - rough estimations:
    - Database size: 10 MB after installation
    - Growth rate: +3 MB per day on average
    - Size of database 'active' data: (not sure what this means; there's no actual archive, so I guess all data is current)
    - Amount of data written per second in peak hours: a few KB
    - Number of client sessions: 3 or 4 at most
    - Frequency and response size of the heaviest requests: some reports make heavy table JOINs that need up to 20 seconds to complete, but they won't return more than a few thousand rows of plain text. The app also handles BLOBs (typical size from 50 KB to 200 KB).
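
    As a quick sanity check on the storage side (my arithmetic from the figures above, ignoring indexes, redo/archive logs, temp space and backups):

      3 MB/day x 365 days ≈ 1,095 MB ≈ 1.1 GB of growth per year

    So even several years of table data stays in the low single-digit GB range; the disc request is likely to be driven more by the Oracle installation itself, logs and backup copies than by the data.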

  • Nothing drawing on screen OpenGL with GLSL

    - by codemonkey
    I hate to be asking this kind of question here, but I am at a complete loss as to what is going wrong, so please bear with me. I am trying to render a single cube (voxel) in the center of the screen, through OpenGL with GLSL on Mac I begin by setting up everything using glut glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH); glutInitWindowSize(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); glutCreateWindow("Cubez-OSX"); glutReshapeFunc(reshape); glutDisplayFunc(render); glutIdleFunc(idle); _electricSheepEngine=new ElectricSheepEngine(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); _electricSheepEngine->initWorld(); glutMainLoop(); Then inside the engine init camera & projection matrices: cameraPosition=glm::vec3(2,2,2); cameraTarget=glm::vec3(0,0,0); cameraUp=glm::vec3(0,0,1); glm::vec3 cameraDirection=glm::normalize(cameraPosition-cameraTarget); cameraRight=glm::cross(cameraDirection, cameraUp); cameraRight.z=0; view=glm::lookAt(cameraPosition, cameraTarget, cameraUp); lensAngle=45.0f; aspectRatio=1.0*(windowWidth/windowHeight); nearClippingPlane=0.1f; farClippingPlane=100.0f; projection=glm::perspective(lensAngle, aspectRatio, nearClippingPlane, farClippingPlane); then init shaders and check compilation and bound attributes & uniforms to be correctly bound (my previous question) These are my two shaders, vertex: #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } and fragment: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } init the cube: setPosition(glm::vec3(0,0,0)); struct voxelData data[]={ //front face {{-1.0, -1.0, 1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, 1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, 1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, 1.0}, {0.0, 1.0, 1.0}}, //back face {{-1.0, -1.0, -1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, -1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, -1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, -1.0}, {0.0, 1.0, 1.0}} }; glGenBuffers(1, &modelVerticesBufferObject); glBindBuffer(GL_ARRAY_BUFFER, modelVerticesBufferObject); glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); const GLubyte indices[] = { // Front 0, 1, 2, 2, 3, 0, // Back 4, 6, 5, 4, 7, 6, // Left 2, 7, 3, 7, 6, 2, // Right 0, 4, 1, 4, 1, 5, // Top 6, 2, 1, 1, 6, 5, // Bottom 0, 3, 7, 0, 7, 4 }; glGenBuffers(1, &modelFacesBufferObject); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelFacesBufferObject); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); and then the render call: glClearColor(0.52, 0.8, 0.97, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); //use the shader glUseProgram(shaderProgram); //enable attributes in program glEnableVertexAttribArray(shaderAttribute_position); glEnableVertexAttribArray(shaderAttribute_color); //model matrix using model position vector glm::mat4 mvp=projection*view*voxel->getModelMatrix(); glUniformMatrix4fv(shaderAttribute_mvp, 1, GL_FALSE, glm::value_ptr(mvp)); glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_position, // attribute 3, // number of elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements 0 // offset of first element ); glBindBuffer(GL_ARRAY_BUFFER, 
voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_color, // attribute 3, // number of colour elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements (GLvoid *)(offsetof(struct voxelData, color3D)) // offset of colour data ); //draw the model by going through its elements array glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject); int bufferSize; glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize); glDrawElements(GL_TRIANGLES, bufferSize/sizeof(GLushort), GL_UNSIGNED_SHORT, 0); //close up the attribute in program, no more need glDisableVertexAttribArray(shaderAttribute_position); glDisableVertexAttribArray(shaderAttribute_color); but on screen all I get is the clear color :$ I generate my model matrix using: modelMatrix=glm::translate(glm::mat4(1.0), position); which in debug turns out to be for the position of (0,0,0): |1, 0, 0, 0| |0, 1, 0, 0| |0, 0, 1, 0| |0, 0, 0, 1| Sorry for such a question, I know it is annoying to look at someone's code, but I promise I have tried to debug around and figure it out as much as I can, and can't come to a solution Help a noob please? EDIT: Full source here, if anyone wants

  • My Android phone isn't being detected by Ubuntu

    - by Lara
    This is what I've got from terminal, looks like it can see the phone as a USB device just fine but isn't showing up under fdisk so I can't mount it. It automounts just fine in my VMWare Windows. And Internet tethering works fine while under Linux (haven't tried under Windows). Here's lsusb: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 019: ID 04d9:1135 Holtek Semiconductor, Inc. Bus 001 Device 003: ID 05ca:18c0 Ricoh Co., Ltd Bus 001 Device 004: ID 0489:e00f Foxconn / Hon Hai Foxconn T77H114 BCM2070 [Single-Chip Bluetooth 2.1 + EDR Adapter] Bus 003 Device 010: ID 04e8:6860 Samsung Electronics Co., Ltd Bus 002 Device 012: ID 046d:c315 Logitech, Inc. Classic New Touch Keyboard And here's sudo fdisk -l: Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0001ff06 Device Boot Start End Blocks Id System /dev/sda1 2048 681845797 340921875 7 HPFS/NTFS/exFAT /dev/sda2 * 681846784 845686783 81920000 83 Linux /dev/sda3 845686784 968566783 61440000 7 HPFS/NTFS/exFAT /dev/sda4 968568830 972475081 1953126 5 Extended /dev/sda5 968568832 972475081 1953125 82 Linux swap / Solaris Disk /dev/mapper/cryptswap1: 2000 MB, 2000000000 bytes 255 heads, 63 sectors/track, 243 cylinders, total 3906250 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xbe4c2ec7 Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table

  • Why does 12.04 upgrade abort with out of space error when I have lots of it?

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem. The "Disk Usage Analyser" is opened. In the top it says: Total filesystem capacity: 47.0 GB (used: 13.5 GB available: 33.4 GB) Folder -- Usage -- Size / -- 100% -- 12.5 GB usr -- 44.8 % -- 5.6 GB home -- 30.3 % -- 3.8 GB lib -- 13.0 % -- 1.6 GB var -- 9.1 % -- 1.1 GB boot 2.5 % 309.5 GB and a lot of small contributors like: etc, opt, sbin, bin etc. I do not really understand this problem since the analyser in the top says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda7 610800 576874 33926 95% / udev 213451 563 212888 1% /dev tmpfs 218524 486 218038 1% /run none 218524 3 218521 1% /run/lock none 218524 7 218517 1% /run/shm /dev/sda8 2264752 16371 2248381 1% /home The output of df -h Filesystem Size Used Avail Use% Mounted on /dev/sda7 9,3G 7,8G 1,1G 88% / udev 993M 4,0K 993M 1% /dev tmpfs 401M 884K 400M 1% /run none 5,0M 0 5,0M 0% /run/lock none 1003M 152K 1002M 1% /run/shm /dev/sda8 35G 4,0G 29G 13% /home /dev/sda2 101G 64G 37G 64% /media/A2C8E28BC8E25CD3 Running sudo fdisk -l gives Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000080 Device Boot Start End Blocks Id System /dev/sda1 63 96389 48163+ de Dell Utility /dev/sda2 * 98304 210434488 105168092+ 7 HPFS/NTFS/exFAT /dev/sda3 210436094 312576704 51070305+ f W95 Ext'd (LBA) /dev/sda5 306279288 312576704 3148708+ dd Unknown /dev/sda6 210436096 214341631 1952768 82 Linux swap / Solaris /dev/sda7 214343680 233873407 9764864 83 Linux /dev/sda8 233875456 306278399 36201472 83 Linux Partition table entries are not in disk order

  • Torque2D, Class vs Datablock

    - by Max Kielland
    I'm scripting my first game with Torque2D and have not fully understood the difference between "Class" and Datablock. To me it seems like Datablock is similar to a struct in C/C++ or a Record in Pascal. If I create Datablocks with new, are they instantiated in the same way as a "Class"? I have a large TileMap and need to attach some information to each Tile. I was thinking to use a Datablock, as a struct, to attach this information to the tile's CustomData property. The two questions are: What is a Datablock and should I use a Datablock or a "Class" for this tile information?
