Search Results

Search found 2468 results on 99 pages for 'splattered bits'.


  • What does "cpuid level" mean? Asking just out of curiosity

    - by ogzylz
    For example, here is the info for just 2 cores of a 16-core machine. What does the "cpuid level : 6" line mean? If you can also provide info about the lines "bogomips : 5992.10" and "clflush size : 64", I would appreciate it.

      processor : 0
      vendor_id : GenuineIntel
      cpu family : 15
      model : 6
      model name : Intel(R) Xeon(TM) CPU 3.00GHz
      stepping : 8
      cpu MHz : 2992.689
      cache size : 4096 KB
      physical id : 0
      siblings : 4
      core id : 0
      cpu cores : 2
      fpu : yes
      fpu_exception : yes
      cpuid level : 6
      wp : yes
      flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
      bogomips : 5992.10
      clflush size : 64
      cache_alignment : 128
      address sizes : 40 bits physical, 48 bits virtual
      power management:

      processor : 1
      vendor_id : GenuineIntel
      cpu family : 15
      model : 6
      model name : Intel(R) Xeon(TM) CPU 3.00GHz
      stepping : 8
      cpu MHz : 2992.689
      cache size : 4096 KB
      physical id : 1
      siblings : 4
      core id : 0
      cpu cores : 2
      fpu : yes
      fpu_exception : yes
      cpuid level : 6
      wp : yes
      flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
      bogomips : 5985.23
      clflush size : 64
      cache_alignment : 128
      address sizes : 40 bits physical, 48 bits virtual
      power management:

    Read the article

  • Ubuntu dmidecode is not functioning properly

    - by Alaa Alomari
    dmidecode is giving irrelevant and conflicting results: it shows that I have two memory slots, while the correct number is 8 (the board is a Tyan S5350).

      uname -a
      Linux synd01 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

      root@synd01:/home/badmin# dmidecode -t 16
      dmidecode 2.9
      SMBIOS 2.33 present.

      Handle 0x0011, DMI type 16, 15 bytes
      Physical Memory Array
      Location: System Board Or Motherboard
      Use: System Memory
      Error Correction Type: None
      Maximum Capacity: 4 GB
      Error Information Handle: Not Provided
      Number Of Devices: 2

    while

      root@synd01:/home/badmin# dmidecode -t 17 | grep Size
      Size: No Module Installed
      Size: No Module Installed
      Size: 1024 MB
      Size: 1024 MB
      Size: No Module Installed
      Size: No Module Installed
      Size: 1024 MB
      Size: 1024 MB

    Also, lshw shows:

      *-memory
          description: System Memory
          physical id: 11
          slot: System board or motherboard
          size: 4GiB
        *-bank:0
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 0
            slot: J3B1
            clock: 166MHz (6.0ns)
        *-bank:1
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 1
            slot: J3B3
            clock: 166MHz (6.0ns)
        *-bank:2
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 2
            slot: J2B2
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
        *-bank:3
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 3
            slot: J2B4
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
        *-bank:4
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 4
            slot: J3B2
            clock: 166MHz (6.0ns)
        *-bank:5
            description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty]
            physical id: 5
            slot: J2B1
            clock: 166MHz (6.0ns)
        *-bank:6
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 6
            slot: J2B3
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)
        *-bank:7
            description: DIMM DDR Synchronous 166 MHz (6.0 ns)
            physical id: 7
            slot: J1B1
            size: 1GiB
            width: 64 bits
            clock: 166MHz (6.0ns)

    What might cause this conflict, and how can I fix it? Thanks

    Read the article

  • Getting to grips with the stack in nasm

    - by MarkPearl
    Today I spent a good part of my day getting to grips with the stack and nasm. After looking at my notes on nasm, I think this is one area the course I am doing could focus on more… So here are some snippets I have put together that have helped me understand a little bit about the stack…

    Simplest example of the stack

    You will probably see examples like the following in circulation… these demonstrate the simplest use of the stack…

      org 0x100
      bits 16
      jmp main

      main:
      push 42h
      push 43h
      push 44h
      mov ah,2h ;set to display characters
      pop dx    ;get the first value
      int 21h   ;and display it
      pop dx    ;get 2nd value
      int 21h   ;and display it
      pop dx    ;get 3rd value
      int 21h   ;and display it
      int 20h

    The output from the above code would be…

      DCB

    Decoupling code using "call" and "ret"

    This is great, but it oversimplifies what I want to use the stack for… I do not know if this goes against the grain of assembly programmers or not, but I want to write loosely coupled assembly code – and I want to use the stack as a mechanism for passing values into my decoupled code. In nasm we have the call and ret instructions, which provide a mechanism for decoupling code. For example, the following could be done…

      org 0x100
      bits 16
      jmp main
      ;----------------------------------------
      displayChar:
      mov ah,2h
      mov dx,41h
      int 21h
      ret
      ;----------------------------------------
      main:
      call displayChar
      int 20h

    This would output the following to the console…

      A

    So it would seem that call and ret allow us to jump to segments of our code and then return back to the calling position – a form of segmenting the code into what we would call in higher order languages "functions" or "methods". The only issue is, in higher order languages there is a way to pass parameters into the functions and return results. Because of the primitive nature of the call and ret instructions, this does not seem to be obvious. We could of course use the registers to pass values into the subroutine and set values coming out, but the problem with this is that we…

      - Have a limited number of registers
      - Are threading our code with tight coupling (it would be hard to migrate methods outside of their intended use in a particular program to another one)

    With that in mind, I turn to the stack to provide a loosely coupled way of calling subroutines…

    First attempt with the stack

    Initially I thought this would be simple… we could use code that looks as follows to achieve what I want…

      org 0x100
      bits 16
      jmp main
      ;----------------------------------------
      displayChar:
      mov ah,2h
      pop dx
      int 21h
      ret
      ;----------------------------------------
      main:
      push 41h
      call displayChar
      int 20h

    However, running this application does not give the desired result. I want an 'A' to be returned, and I am getting something totally different (you will too). Reading up on the call and ret instructions, a discovery is made… they are pushing and popping things onto and off the stack as well… When the call instruction is executed, the current value of IP (the address of the instruction to follow) is pushed onto the stack; when ret is executed, the last value on the stack is popped off into the IP register. In effect, what the above code is doing with the stack is as follows…

      push 41h
      push current value of ip
      pop current value of ip to dx
      pop 41h to ip

    This is not what I want. I need to access the 41h that I pushed onto the stack, but the value pushed by call (which is necessary) is putting something in my way. So, what to do? Remember, we have other registers we can use, as well as a thing called indirect addressing… So, after some reading around, I came up with the following approach using indirect addressing…

      org 0x100
      bits 16
      jmp main
      ;----------------------------------------
      displayChar:
      mov bp,sp
      mov ah,2h
      mov dx,[bp+2]
      int 21h
      ret
      ;----------------------------------------
      main:
      push 41h
      call displayChar
      int 20h

    In essence, what I have done here is used a trick with the stack pointer… it goes as follows…

      - Push 41h onto the stack
      - Make the call to the function, which will push the IP register onto the stack and then jump to the displayChar label
      - Move the value of the stack pointer into the bp register (sp currently points at the saved IP)
      - Move the value at the location bp plus 2 bytes into dx (this is now the value 41h)
      - Display it, then execute the ret instruction, which pops the ip value off the stack and goes back to the calling point

    This approach is still very raw; some further reading around shows that I should be pushing the value of bp onto the stack before replacing it with sp (a sketch of that convention is at the end of this post), but it is the starting thread to getting loosely coupled subroutines. Let's see if you can work out what the output of the following would be…

      org 0x100
      bits 16
      jmp main
      ;----------------------------------------
      displayChar:
      mov bp,sp
      mov ah,2h
      mov dx,[bp+4]
      int 21h
      mov dx,[bp+2]
      int 21h
      ret
      ;----------------------------------------
      main:
      push 41h
      push 42h
      call displayChar
      int 20h

    The output is…

      AB

    Where to from here? If by any luck some assembly programmer comes along and sees this code and notices that I have made some fundamental flaw in my logic… I would like to know, so please leave a comment… I appreciate any feedback!
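
    The conventional version of that pattern, where the caller's bp is preserved, might look like this – a minimal sketch of the standard prologue/epilogue; note the parameter offset grows to [bp+4] because of the extra push:

      org 0x100
      bits 16
      jmp main
      ;----------------------------------------
      displayChar:
      push bp          ;preserve the caller's bp
      mov bp,sp        ;[bp] = saved bp, [bp+2] = return ip, [bp+4] = first argument
      mov ah,2h
      mov dx,[bp+4]    ;fetch the pushed character
      int 21h
      pop bp           ;restore the caller's bp
      ret
      ;----------------------------------------
      main:
      push 41h
      call displayChar
      int 20h

    This prints 'A' as before, but displayChar no longer clobbers bp for its caller.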

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 5

    - by MarkPearl
    Learning Outcomes

      - Describe the operation of a memory cell
      - Explain the difference between DRAM and SRAM
      - Discuss the different types of ROM
      - Explain the concepts of a hard failure and a soft error respectively
      - Describe SDRAM organization

    Semiconductor Main Memory

    Semiconductor memory is divided into two technologies, dynamic and static; the two traditional forms of RAM used in computers are DRAM and SRAM.

    DRAM (Dynamic RAM)

    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device: the capacitor can store any charge value within a range, and a threshold value determines whether the charge is interpreted as a 1 or a 0.

    SRAM (Static RAM)

    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it; unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM

    DRAM is simpler and smaller than SRAM, and thus more dense and less expensive. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, so SRAM is generally used for cache memory and DRAM for main memory.

    Types of ROM

    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce the costs of fabrication, we have PROMs. PROMs are…

      - Written only once
      - Non-volatile
      - Written after fabrication

    Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely…

      - EPROM
      - EEPROM
      - Flash memory

    Error Correction

    Semiconductor memory is subject to errors, which can be classed into two categories…

      - Hard failure – a permanent physical defect, so that the memory cell or cells cannot reliably store data
      - Soft error – a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.)

    Most modern main memory systems include logic for both detecting and correcting errors. Error detection works as follows…

      - When data is to be read into memory, a calculation is performed on the data to produce a code
      - Both the code and the data are stored
      - When the previously stored word is read out, the code is used to detect and possibly correct errors

    The error checking provides one of 3 possible results…

      - No errors are detected – the fetched data bits are sent out
      - An error is detected, and it is possible to correct it. The data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out
      - An error is detected, but it is not possible to correct it. This condition is reported

    Hamming Code

    See the wiki for a detailed explanation. We will probably need to know how to do a Hamming code – refer to the textbook (pg. 188 – 189). (A small worked sketch follows at the end of these notes.)

    Advanced DRAM organization

    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory; this interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including…

      - SDRAM (Synchronous DRAM)
      - DDR-DRAM
      - RDRAM

    SDRAM (Synchronous DRAM)

      - SDRAM exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states.
      - SDRAM employs a burst mode to eliminate the address setup time and the row and column line precharge time after the first access. In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed.
      - SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism.
      - SDRAM performs best when it is transferring large blocks of data serially.
      - There is now an enhanced version of SDRAM known as double data rate SDRAM (DDR-SDRAM) that overcomes the once-per-cycle limitation of SDRAM.
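
    Since the notes flag Hamming codes as examinable, here is a small worked sketch (my own illustration, not from the textbook) of computing the even-parity check bits of a Hamming(7,4) code in C++:

      #include <bitset>
      #include <iostream>

      // Encode 4 data bits into a 7-bit Hamming codeword.
      // Positions (1-based): p1 p2 d1 p3 d2 d3 d4; parity bit p_k covers
      // every position whose 1-based index has bit k set.
      std::bitset<7> hammingEncode(std::bitset<4> d) {
          std::bitset<7> c;
          c[2] = d[0]; c[4] = d[1]; c[5] = d[2]; c[6] = d[3]; // data at positions 3,5,6,7
          c[0] = c[2] ^ c[4] ^ c[6];  // p1 (position 1) covers positions 1,3,5,7
          c[1] = c[2] ^ c[5] ^ c[6];  // p2 (position 2) covers positions 2,3,6,7
          c[3] = c[4] ^ c[5] ^ c[6];  // p3 (position 4) covers positions 4,5,6,7
          return c;
      }

      int main() {
          // Encode data bits 1011; a single flipped bit in the stored word can
          // later be located by recomputing the three parity checks.
          std::cout << hammingEncode(std::bitset<4>("1011")) << "\n";
      }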

    Read the article

  • ASM programming, how to use loop?

    - by chris
    Hello, I'm here for the first time. I am a college student. I've created a simple program in assembly language, and I'm wondering if I can use a loop to do almost the same thing the program below does. I'm also eager to find someone I can talk to through MSN Messenger so I can ask questions right away (if possible). Thank you.

      .MODEL small
      .STACK 400h
      .data
      prompt db 10,13,'Please enter a 3 digit number, example 100:',10,13,'$' ;10,13 cause a jump to the next line
      first_digit db 0d
      second_digit db 0d
      third_digit db 0d
      Not_prime db 10,13,'This number is not prime!',10,13,'$'
      prime db 10,13,'This number is prime!',10,13,'$'
      question db 10,13,'Do you want to contine Y/N $'
      counter dw 0d
      number dw 0d
      half dw ?
      .code
      Start:
      mov ax, @data        ;establish access to the data segment
      mov ds, ax
      mov number, 0d
      LetsRoll:
      mov dx, offset prompt ;print the string (please enter a 3 digit...)
      mov ah, 9h
      int 21h              ;execute
      ;read FIRST DIGIT
      mov ah, 1d           ;bios code for read a keystroke
      int 21h              ;call bios, it is understood that the ascii code will be returned in al
      mov first_digit, al  ;may as well save a copy
      sub al, 30h          ;convert code to an actual integer
      cbw                  ;CONVERT BYTE TO WORD. This takes whatever number is in al and
                           ;extends it to ax, doubling its size from 8 bits to 16 bits.
                           ;The first digit now occupies all of ax as an integer
      mov cx, 100d         ;this is so we can calculate 100*1st digit + 10*2nd digit + 3rd digit
      mul cx               ;start to accumulate the 3 digit number in the variable
      imul cx              ;it is understood that the other operand is ax
                           ;AND that the result will use both dx::ax,
                           ;but we understand that dx will contain only leading zeros
      add number, ax       ;save
                           ;variable <number> now contains 1st digit * 10
      ;----------------------------------------------------------------------
      ;read SECOND DIGIT, multiply by 10 and add in
      mov ah, 1d           ;bios code for read a keystroke
      int 21h              ;call bios, it is understood that the ascii code will be returned in al
      mov second_digit, al ;may as well save a copy
      sub al, 30h          ;convert code to an actual integer
      cbw                  ;CONVERT BYTE TO WORD. This takes whatever number is in al and
                           ;extends it to ax, doubling its size from 8 bits to 16 bits.
                           ;The first digit now occupies all of ax as an integer
      mov cx, 10d          ;continue to accumulate the 3 digit number in the variable
      mul cx               ;it is understood that the other operand is ax, containing first digit,
                           ;AND that the result will use both dx::ax,
                           ;but we understand that dx will contain only leading zeros. Ignore them
      add number, ax       ;save -- nearly finished
                           ;variable <number> now contains 1st digit * 100 + second digit * 10
      ;----------------------------------------------------------------------
      ;read THIRD DIGIT, add it in (no multiplication this time)
      mov ah, 1d           ;bios code for read a keystroke
      int 21h              ;call bios, it is understood that the ascii code will be returned in al
      mov third_digit, al  ;may as well save a copy
      sub al, 30h          ;convert code to an actual integer
      cbw                  ;CONVERT BYTE TO WORD, extends al to ax, doubling its size
                           ;from 8 bits to 16 bits.
                           ;The first digit now occupies all of ax as an integer
      add number, ax       ;both my variable number and ax are 16 bits, so equal size
      mov ax, number       ;copy contents of number to ax
      mov cx, 2h
      div cx               ;divide by cx
      mov half, ax         ;copy the contents of ax to half
      mov cx, 2h
      mov ax, number       ;copy number to ax
      xor dx, dx           ;flush dx
      jmp prime_check      ;jump to prime check
      print_question:
      mov dx, offset question ;print string (do you want to continue Y/N?)
      mov ah, 9h
      int 21h              ;execute
      mov ah, 1h
      int 21h              ;execute
      cmp al, 4eh          ;compare with 'N'
      je Exit              ;jump to exit
      cmp al, 6eh          ;compare with 'n'
      je Exit              ;jump to exit
      cmp al, 59h          ;compare with 'Y'
      je Start             ;jump to start
      cmp al, 79h          ;compare with 'y'
      je Start             ;jump to start
      prime_check:
      div cx               ;divide by cx
      cmp dx, 0h           ;check the remainder
      je print_not_prime   ;jump to not prime
      xor dx, dx           ;flush dx
      mov ax, number       ;copy the contents of number to ax
      cmp cx, half         ;compare half with cx
      je print_prime       ;jump to print prime section
      inc cx               ;increment cx by one
      jmp prime_check      ;repeat the prime check
      print_prime:
      mov dx, offset prime ;print string (this number is prime!)
      mov ah, 9h
      int 21h              ;execute
      jmp print_question   ;jumps to question (do you want to continue Y/N?) for the repeat
      print_not_prime:
      mov dx, offset Not_prime ;print string (this number is not prime!)
      mov ah, 9h
      int 21h              ;execute
      jmp print_question   ;jumps to question (do you want to continue Y/N?) for the repeat
      Exit:
      mov ah, 4ch
      int 21h              ;execute exit
      END Start
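
    The `loop` instruction is the usual answer here: it decrements cx and jumps to its label while cx is not zero. A minimal illustrative sketch (not a rewrite of the program above – note that the prime check above also uses cx, so it would need a different counter or a saved copy):

      mov cx, 3        ;repeat the block three times
      mov dl, 'A'      ;character to print (hypothetical example)
      print_loop:
      mov ah, 2h       ;DOS print-character service
      int 21h          ;print the character in dl
      inc dl           ;move to the next character
      loop print_loop  ;dec cx, then jump to print_loop while cx != 0

    This prints ABC.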

    Read the article

  • Paste multiple lines before a line in vim?

    - by Umar
    How do I copy multiple lines and paste them as a block before a line? As an example, I have the following code, and I want to copy the three lines after the if statement to after the else statement, but before the line below it.

      [row col] = find(H);
      if (nargin < 4)
          delqmn = sparse(row, col, 0, M, N); % diff of msgs from bits to checks
          delrmn = sparse(row, col, 0, M, N); % diff of msgs from checks to bits
          rmn0 = sparse(row, col, 0, M, N);   % msgs from checks to bits (p=0)
      else
          // Insert 3 lines after if statement here
          qn0 = 1-r; % pseudoposterior probabilities
          qn1 = r;   % pseudoposterior probabilities

    Thanks
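
    For reference, one common way to do this in vim, using a line-wise yank and put (cursor positions are the assumption here):

      3yy    (with the cursor on the first of the three lines, yank three lines)
      P      (with the cursor on the line below the else, put the block above it)

    With explicit ranges instead: :3,5y yanks lines 3-5, then P on the target line puts them above it (the line numbers are illustrative).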

    Read the article

  • Is a larger hard drive with the same cache, rpm, and bus type faster?

    - by Joel Coehoorn
    I recently heard that, all else being equal, larger hard drives are faster than smaller ones. It has to do with more bits passing under the read head as the drive spins – since a large drive packs the bits more tightly, the same amount of spin/time presents more data to the read head. I had not heard this before, and was inclined to believe the read heads expected bits at a specific rate and would instead stagger the data, so that the two drives would be the same speed. I now find myself looking at purchasing one of two computer models for the school where I work. One model has an 80GB drive, the other a 400GB (for ~$13 more). The size of the drive is immaterial, since users will keep their files on a file server where they can be backed up. But if the 400GB drive will really deliver a performance boost, the extra money is probably worth it. Thoughts?
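
    A rough worked example of the claim (illustrative numbers, not these specific drives): sequential throughput is approximately bits-per-track × revolutions-per-second, so at a fixed 7,200 rpm (120 rev/s):

      1,000,000 bits/track × 120 rev/s = 120 Mbit/s ≈ 15 MB/s
      2,000,000 bits/track × 120 rev/s = 240 Mbit/s ≈ 30 MB/s

    Doubling the linear density doubles the streaming rate at the same spin speed; seek time and rotational latency, however, are unchanged.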

    Read the article

  • Compiling Assembly Manually [migrated]

    - by John Smith
    I am having trouble with translating a specific line in assembly to machine code for the Nios II. I have successfully compiled these lines:

      START_TIMER = 0xF68C
      r0 = 0x0
      r8 = 0x8
      label = 50000

      addi r8, r8, %lo(label)    - 01000 01000 1100001101010000 000100
      subi r8, r8, 1             - 01000 01000 1111111111111111 000100
      bne r8, r0, START_TIMER    - 01000 00000 1111011010001100 011110

    The line in question that I have trouble with is this one:

      orhi r8, r0, %hiadj(label)

    As explained in the handbook linked above, "%lo" means "Extract bits [15..0] of immed32" and "%hiadj" means "Extract bits [31..16] and adds bit 15 of immed32". However, 50000 in binary is 1100001101010000, and is therefore a 16-bit number. As far as I can see, it doesn't contain any bits between 16 and 31. I tried with 0000000000000001, but it's incorrect. What am I doing wrong?
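
    Working through the handbook's definitions by hand (my own arithmetic – verify against the assembler's output):

      label         = 50000 = 0x0000C350
      %lo(label)    = bits [15..0]                    = 0xC350 = 1100001101010000
      %hiadj(label) = bits [31..16] + bit 15 = 0 + 1  = 0x0001 = 0000000000000001

    So 0000000000000001 does look like the right immediate for %hiadj(50000); if the reference encoding still disagrees, the mismatch may lie in another field of the instruction word rather than in the immediate.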

    Read the article

  • ssh login fails for user with empty password

    - by Reid
    How do you enable ssh login on OS X 10.8 (Mountain Lion) for a user with an empty password? I've seen others asking this question, and like me it's for the same reason: a parent who can't deal with passwords. So "set a password" is not an option. I found references to adding "nullok" to various PAM config files. Didn't work. Found the sshd config "PermitEmptyPasswords yes". Didn't work. I've done a diff on "ssh -vvv" between a successful ssh with a password-enabled account and the one with no password:

      54,55c54,55
      < debug2: dh_gen_key: priv key bits set: 133/256
      < debug2: bits set: 533/1024
      ---
      > debug2: dh_gen_key: priv key bits set: 140/256
      > debug2: bits set: 508/1024
      67c67
      < debug2: bits set: 509/1024
      ---
      > debug2: bits set: 516/1024
      79c79
      < debug2: key: /Users/rae/.ssh/rae (0x7f9a0241e2c0)
      ---
      > debug2: key: /Users/rae/.ssh/rae (0x7f81e0c1e2c0)
      90,116c90,224
      < debug1: Authentications that can continue: publickey,keyboard-interactive
      < debug2: we did not send a packet, disable method
      < debug3: authmethod_lookup keyboard-interactive
      < debug3: remaining preferred: password
      < debug3: authmethod_is_enabled keyboard-interactive
      < debug1: Next authentication method: keyboard-interactive
      < debug2: userauth_kbdint
      < debug2: we sent a keyboard-interactive packet, wait for reply
      < debug2: input_userauth_info_req
      < debug2: input_userauth_info_req: num_prompts 1
      < debug3: packet_send2: adding 32 (len 14 padlen 18 extra_pad 64)
      < debug1: Authentications that can continue: publickey,keyboard-interactive
      < debug2: userauth_kbdint
      < debug2: we sent a keyboard-interactive packet, wait for reply
      < debug2: input_userauth_info_req
      < debug2: input_userauth_info_req: num_prompts 1
      < debug3: packet_send2: adding 32 (len 14 padlen 18 extra_pad 64)
      < debug1: Authentications that can continue: publickey,keyboard-interactive
      < debug2: userauth_kbdint
      < debug2: we sent a keyboard-interactive packet, wait for reply
      < debug2: input_userauth_info_req
      < debug2: input_userauth_info_req: num_prompts 1
      < debug3: packet_send2: adding 32 (len 14 padlen 18 extra_pad 64)
      < debug1: Authentications that can continue: publickey,keyboard-interactive
      < debug2: we did not send a packet, disable method
      < debug1: No more authentication methods to try.
      < Permission denied (publickey,keyboard-interactive).
      ---
      > debug1: Server accepts key: pkalg ssh-dss blen 433
      > debug2: input_userauth_pk_ok: fp 6e:02:20:63:48:6a:08:99:b8:5f:12:d8:d5:3d:e1:fb
      > debug3: sign_and_send_pubkey: DSA 6e:02:20:63:48:6a:08:99:b8:5f:12:d8:d5:3d:e1:fb
      > debug1: read PEM private key done: type DSA
      > debug1: Authentication succeeded (publickey).
      > Authenticated to cme-mini.local ([192.168.1.5]:22).
      > debug2: fd 7 setting O_NONBLOCK
      > debug3: fd 8 is O_NONBLOCK
      > debug1: channel 0: new [client-session]
      > debug3: ssh_session2_open: channel_new: 0
      > debug2: channel 0: send open
      > debug1: Requesting no-more-sessions@openssh.com
      > debug1: Entering interactive session.
      > debug2: callback start
      > debug2: client_session2_setup: id 0
      > debug2: fd 5 setting TCP_NODELAY
      > debug2: channel 0: request pty-req confirm 1
      > debug1: Sending environment.

    Read the article

  • Linux File Permissions & Access Control Query

    - by Jason
    Hi. Let's say I am user bob, in group users. There is this file:

      -rw----r-- 1 root users 4 May 8 22:34 testfile

    First question: why can't bob read the file, given that it's readable by others? Is it simply that if you are denied by group, then you are auto-blacklisted for others? I always assumed that the final 3 bits took precedence over the user/group permission bits; guess I was wrong... Second question: how is this implemented? I suppose it's linked to the first query, but how does this work in relation to access control – is it related to how ACLs work / are queried? I'm just trying to understand how these 9 permission bits are actually implemented/used in Linux. Thanks a lot.
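
    For illustration, the classic (pre-ACL) UNIX check consults exactly one class – owner, else group, else other – and never falls through to a later class. A simplified C++ sketch of that logic (ignoring root, supplementary groups, capabilities, and POSIX ACLs):

      #include <sys/types.h>

      // Simplified: the FIRST matching class decides; its bits are
      // not OR'ed with the classes that follow it.
      bool may_read(uid_t uid, gid_t gid, uid_t fuid, gid_t fgid, mode_t mode) {
          if (uid == fuid)            // requester owns the file
              return mode & 0400;     // owner read bit only
          if (gid == fgid)            // requester is in the file's group
              return mode & 0040;     // group read bit only
          return mode & 0004;         // everyone else
      }

    So for testfile (owner root, group users), bob matches the group class, whose read bit is clear, and the "others" bits are never consulted.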

    Read the article

  • C++ std::vector insert segfault

    - by ArunSaha
    I am writing a test program to understand vectors better. In one of the scenarios, I am trying to insert a value into the vector at a specified position. The code compiles cleanly. However, on execution, it throws a segmentation fault from the v8.insert(..) line (see code below). I am confused. Can somebody point me to what is wrong in my code?

      #define UNIT_TEST(x) assert(x)
      #define ENSURE(x) assert(x)

      #include <vector>

      typedef std::vector< int > intVector;
      typedef std::vector< int >::iterator intVectorIterator;
      typedef std::vector< int >::const_iterator intVectorConstIterator;

      intVectorIterator find( intVector v, int key );
      void test_insert();

      intVectorIterator find( intVector v, int key )
      {
          for( intVectorIterator it = v.begin(); it != v.end(); ++it ) {
              if( *it == key ) {
                  return it;
              }
          }
          return v.end();
      }

      void test_insert()
      {
          const int values[] = {10, 20, 30, 40, 50};
          const size_t valuesLength = sizeof( values ) / sizeof( values[ 0 ] );
          size_t index = 0;
          const int insertValue = 5;
          intVector v8;

          for( index = 0; index < valuesLength; ++index ) {
              v8.push_back( values[ index ] );
          }
          ENSURE( v8.size() == valuesLength );

          for( index = 0; index < valuesLength; ++index ) {
              printf( "index = %u\n", index );
              intVectorIterator it = find( v8, values[ index ] );
              ENSURE( it != v8.end() );
              ENSURE( *it == values[ index ] );
              // intVectorIterator itToInsertedItem = v8.insert( it, insertValue ); // line 51
              // UNIT_TEST( *itToInsertedItem == insertValue );
          }
      }

      int main()
      {
          test_insert();
          return 0;
      }

      $ ./a.out
      index = 0
      Segmentation Fault (core dumped)

      (gdb) bt
      #0 0xff3a03ec in memmove () from /platform/SUNW,T5140/lib/libc_psr.so.1
      #1 0x00012064 in std::__copy_move_backward<false, true, std::random_access_iterator_tag>::__copy_move_b<int> (__first=0x23e48, __last=0x23450, __result=0x23454) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/stl_algobase.h:575
      #2 0x00011f08 in std::__copy_move_backward_a<false, int*, int*> (__first=0x23e48, __last=0x23450, __result=0x23454) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/stl_algobase.h:595
      #3 0x00011d00 in std::__copy_move_backward_a2<false, int*, int*> (__first=0x23e48, __last=0x23450, __result=0x23454) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/stl_algobase.h:605
      #4 0x000119b8 in std::copy_backward<int*, int*> (__first=0x23e48, __last=0x23450, __result=0x23454) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/stl_algobase.h:640
      #5 0x000113ac in std::vector<int, std::allocator<int> >::_M_insert_aux (this=0xffbfeba0, __position=..., __x=@0xffbfebac) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/vector.tcc:308
      #6 0x00011120 in std::vector<int, std::allocator<int> >::insert (this=0xffbfeba0, __position=..., __x=@0xffbfebac) at /local/gcc/4.4.1/lib/gcc/sparc-sun-solaris2.10/4.4.1/../../../../include/c++/4.4.1/bits/vector.tcc:126
      #7 0x00010bc0 in test_insert () at vector_insert_test.cpp:51
      #8 0x00010c48 in main () at vector_insert_test.cpp:58
      (gdb) q
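
    One thing that stands out (an observation, not a verified fix): find takes its intVector by value, so the iterator it returns points into a temporary copy that is destroyed when the function returns; comparing that iterator against v8.end() and passing it to v8.insert is then undefined behaviour. Passing by reference keeps the iterator valid:

      // Take the vector by reference so the returned iterator
      // refers to the caller's vector, not a destroyed copy.
      intVectorIterator find( intVector& v, int key )
      {
          for( intVectorIterator it = v.begin(); it != v.end(); ++it ) {
              if( *it == key ) {
                  return it;
              }
          }
          return v.end();
      }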

    Read the article

  • Convert mkv to mp4 with ffmpeg

    - by JohnS
    When I try converting mkv to mp4 using ffmpeg, the following error occurs:

      [ipod @ 0x16fa0a0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: -2 = -2
      av_interleaved_write_frame(): Invalid argument

    I used this command to convert the file:

      ffmpeg -i input.mkv -vcodec copy -acodec copy -absf aac_adtstoasc output.m4v

    The input file has the following characteristics:

      mediainfo input.mkv

      General
      Unique ID : 200459305952356554213392832683163418790 (0x96CF0ED8DB5914CBB9E18163689280A6)
      Complete name : input.mkv
      Format : Matroska
      Format version : Version 2
      File size : 1.46 GiB
      Duration : 1h 5mn
      Overall bit rate : 3 168 Kbps
      Encoded date : UTC 2010-09-26 21:44:02
      Writing application : mkvmerge v2.9.5 ('Tu es le seul') built on Jun 17 2009 16:28:30
      Writing library : libebml v0.7.8 + libmatroska v0.8.1

      Video
      ID : 1
      Format : AVC
      Format/Info : Advanced Video Codec
      Format profile : [email protected]
      Format settings, CABAC : Yes
      Format settings, ReFrames : 4 frames
      Codec ID : V_MPEG4/ISO/AVC
      Duration : 1h 5mn
      Bit rate : 2 910 Kbps
      Width : 1 280 pixels
      Height : 720 pixels
      Display aspect ratio : 16:9
      Frame rate : 25.000 fps
      Color space : YUV
      Chroma subsampling : 4:2:0
      Bit depth : 8 bits
      Scan type : Progressive
      Bits/(Pixel*Frame) : 0.126
      Stream size : 1.31 GiB (90%)
      Writing library : x264 core 105 r1724 b02df7b
      Encoding settings : cabac=1 / ref=3 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=6 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=0 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=0 / chroma_qp_offset=-2 / threads=18 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=0 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc=2pass / mbtree=0 / bitrate=2910 / ratetol=1.0 / qcomp=0.60 / qpmin=10 / qpmax=51 / qpstep=4 / cplxblur=20.0 / qblur=0.5 / ip_ratio=1.40 / pb_ratio=1.30 / aq=1:1.00
      Default : Yes
      Forced : No

      Audio
      ID : 2
      Format : AC-3
      Format/Info : Audio Coding 3
      Mode extension : CM (complete main)
      Codec ID : A_AC3
      Duration : 1h 5mn
      Bit rate mode : Constant
      Bit rate : 256 Kbps
      Channel(s) : 2 channels
      Channel positions : Front: L R
      Sampling rate : 48.0 KHz
      Bit depth : 16 bits
      Compression mode : Lossy
      Stream size : 121 MiB (8%)
      Language : English
      Default : Yes
      Forced : No

    Being new to ffmpeg, I'm not sure what the error means or how to correct it. Thanks!
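
    One hedged observation, not a verified fix: the audio track here is AC-3, so the aac_adtstoasc bitstream filter has nothing to act on, and the iPod-flavoured .m4v muxer (the [ipod @ ...] in the error) may be rejecting the copied stream. A sketch of an alternative that transcodes the audio to AAC instead of copying it (option names from ffmpeg builds of that era; -strict experimental enables the built-in AAC encoder):

      ffmpeg -i input.mkv -c:v copy -c:a aac -strict experimental -b:a 256k output.mp4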

    Read the article

  • NVIDIA proprietary driver logging me to console instead of GUI

    - by Woozie
    Firstly, I want to apologise for any mistakes; English is not my native language. My problem is I can't get the NVIDIA proprietary drivers to work. I tried to install them on Ubuntu 12.04.1 32 and 64 bits, Ubuntu 12.10 Beta 2, Linux Mint 13 Cinnamon 64 bits and openSUSE 12.2 64 bits, and the error code and symptoms (logging in to tty1 instead of the GUI login, low-res boot screen) are the same for all of these distros. Right, I didn't say what the error code is. It appears on sudo startx:

      NVIDIA: could not open the device file /dev/nvidia0 (Input/output error).

    I know that's a common problem, but I tried blacklisting and even removing the nouveau drivers, installing the NVIDIA driver from the repo/from the official script/in "Additional drivers", editing xorg.conf, using Xorg -configure and nvidia-xconfig, updating the kernel and the entire distro, and many, many other things that I don't remember. But the problem gets even better: the entire Cinnamon desktop (Mint) freezes during work. I found the error code which appears during the freeze:

      Oct 1 20:57:17 WoozieLaptop kernel: [ 308.120176] [drm] nouveau 0000:01:00.0: PFIFO_CACHE_ERROR - Ch 4/1 Mthd 0x0060 Data 0xbcef0201

    My Xorg.0.log is here. It was made on Ubuntu 12.04.1 after installing the NVIDIA drivers (obviously).

    inxi -G from Mint:

      Graphics: Card: NVIDIA GT216 [GeForce GT 240M]
      X.org: 1.11.3 drivers: (unloaded: nvidia) FAILED: nouveau,vesa,fbdev
      tty size: 80x25 Advanced Data: N/A for root out of X

    lspci -k | grep -A2 VGA from Mint:

      01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 240M] (rev a2)
      Subsystem: Lenovo Device 38ff
      Kernel driver in use: nvidia

    My hardware is:

      Lenovo IdeaPad Y550
      Intel C2D T6600
      NVIDIA GeForce GT 240M
      4 GB of RAM

    Any help will be appreciated. This problem has totally disabled my laptop for daily use. Cheers, Woozie
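
    Worth noting: the Cinnamon freeze message comes from nouveau, so on that install the open-source driver was still bound to the card. A hedged first step is making sure nouveau really is blacklisted before the NVIDIA module loads – e.g. on the Ubuntu/Mint installs (the file name is conventional, not mandatory):

      # /etc/modprobe.d/blacklist-nouveau.conf
      blacklist nouveau
      options nouveau modeset=0

      # then rebuild the initramfs and reboot
      sudo update-initramfs -u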

    Read the article

  • Why do my videos play sped up with no audio, but work fine if I log in as a guest?

    - by Martins Kruze
    Since the start of this week I have been experiencing a multimedia glitch on my Samsung R518 laptop. I have 2 problems:

      1. Videos in every player are sped up around 2 or 4 times (including youtube.com (both the HTML5 and Flash variants), any other video on the web, and videos on my laptop played by Totem Media Player). The exception is VLC, but the 2nd problem concerns even that.
      2. There is no sound – simple as that (with or without headphones plugged in).

    All these problems are new, never seen before. I upgraded to Ubuntu 10.10 as soon as it was possible, and from the start I had none of this – it just started this week. I haven't even installed new software. I have more or less solved the question (kind of) – I just logged in as a guest, and it all works – but when I make a new user, it does not. Please help me. Some stats below:

      sudo lshw -c sound
      *-multimedia
          description: Audio device
          product: RV710/730
          vendor: ATI Technologies Inc
          physical id: 0.1
          bus info: pci@0000:01:00.1
          version: 00
          width: 32 bits
          clock: 33MHz
          capabilities: pm pciexpress msi bus_master cap_list
          configuration: driver=HDA Intel latency=0
          resources: irq:48 memory:cfeec000-cfeeffff
      *-multimedia
          description: Audio device
          product: 82801I (ICH9 Family) HD Audio Controller
          vendor: Intel Corporation
          physical id: 1b
          bus info: pci@0000:00:1b.0
          version: 03
          width: 64 bits
          clock: 33MHz
          capabilities: pm msi pciexpress bus_master cap_list
          configuration: driver=HDA Intel latency=0
          resources: irq:47 memory:fc200000-fc203fff

      sudo lshw -c video
      *-display
          description: VGA compatible controller
          product: M92 LP [Mobility Radeon HD 4300 Series]
          vendor: ATI Technologies Inc
          physical id: 0
          bus info: pci@0000:01:00.0
          version: 00
          width: 32 bits
          clock: 33MHz
          capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
          configuration: driver=radeon latency=0
          resources: irq:46 memory:d0000000-dfffffff ioport:2000(size=256) memory:cfef0000-cfefffff memory:cfe00000-cfe1ffff
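
    Since the guest session plays correctly, one common remedy from the 10.10 era (hedged – a fresh user also failing suggests it may not be the whole story) is to reset the per-user PulseAudio state and log back in:

      # back up, then remove the per-user PulseAudio configuration and cookie
      mv ~/.pulse ~/.pulse.bak
      rm -f ~/.pulse-cookie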

    Read the article

  • Ubuntu won't fit 10" netbook's native display

    - by Daniel
    I recently removed Windows 7 Starter from my netbook and replaced it with Ubuntu 12.10. The problem is that parts of the system don't fit the native display resolution of 1024x600, i.e. the bottom of the Ubuntu UI is hidden beneath the screen, and the only 2 available resolutions are the default 1024x768 and 800x600. I've also thought about replacing Ubuntu with Lubuntu or Puppy Linux, as the system does run a bit slow, but then I wouldn't be able to access the taskbar and application menu, which would be hidden beneath the screen. Only Ubuntu with Unity is currently usable, as I can see the Unity launcher. My netbook model is an HP Mini 210-1004sa, which comes with an Intel Graphics Media Accelerator 3150 and a 10.1" Active Matrix Colour TFT display, 1024 x 600. I was able to define a custom resolution of 1024x600 using the Q&A: How set my monitor resolution? but when I set that resolution, the desktop area is lowered, with parts of it hidden beneath the screen, and a black space is left at the top. I had to revert to the old setting, 1024x768, to push the desktop upwards and remove the black space.
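
    For reference, the usual xrandr recipe for adding a 1024x600 mode (the modeline below is what cvt 1024 600 typically prints; the output name LVDS1 is an assumption – check xrandr -q):

      cvt 1024 600
      xrandr --newmode "1024x600_60.00" 49.00 1024 1072 1168 1312 600 603 613 624 -hsync +vsync
      xrandr --addmode LVDS1 1024x600_60.00
      xrandr --output LVDS1 --mode 1024x600_60.00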

    Read the article

  • wireless card not detected

    - by user281219
    I have an old HP Compaq NX6125 with a fresh Lubuntu 14.04 32-bit install, but the wireless card is not working. I have already done the steps below, and the wireless LEDs on the laptop are on, but I can't see the wireless interface anywhere, and couldn't do the last two steps:

      sudo apt-get remove bcmwl-kernel-source
      sudo apt-get install firmware-b43-installer b43-fwcutter

    Then reboot. And you need to activate the card from settings/network/wireless, making it ON. After this, search for the wireless icon in the task bar and look for your wifi.

    lshw -class network:

      *-network:0
          description: Ethernet interface
          product: NetXtreme BCM5788 Gigabit Ethernet
          vendor: Broadcom Corporation
          physical id: 1
          bus info: pci@0000:02:01.0
          logical name: eth0
          version: 03
          serial: 00:0f:b0:f7:d1:0f
          size: 100Mbit/s
          capacity: 1Gbit/s
          width: 32 bits
          clock: 66MHz
          capabilities: pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
          configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.134 duplex=full firmware=5788-v3.26 ip=192.168.1.120 latency=64 link=yes mingnt=64 multicast=yes port=twisted pair speed=100Mbit/s
          resources: irq:23 memory:d0000000-d000ffff
      *-network:1
          description: Network controller
          product: BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller
          vendor: Broadcom Corporation
          physical id: 2
          bus info: pci@0000:02:02.0
          version: 02
          width: 32 bits
          clock: 33MHz
          capabilities: bus_master
          configuration: driver=b43-pci-bridge latency=64
          resources: irq:22 memory:d0010000-d0011fff
      *-network
          description: Wireless interface
          physical id: 1
          logical name: wlan2
          serial: 00:14:a5:77:9e:10
          capabilities: ethernet physical wireless
          configuration: broadcast=yes driver=b43 driverversion=3.13.0-29-generic firmware=666.2 link=no multicast=yes wireless=IEEE 802.11bg

    Read the article

  • Low Graphics and laggy screen after update to 13.10 from 13.04

    - by Wh0RU
    After updating from 13.04 to 13.10, much of the 3D Unity stuff has stopped working. I am using a Samsung Series 3 (NP350V5X) laptop, which has switchable Intel and AMD Radeon HD 7670M graphics.

      - xserver-xorg-video-ati does work, but with NO 3D support, and graphics are very low. [I am currently using this]
      - fglrx & fglrx-updates show a blank screen after login.
      - The Intel graphics installer doesn't work either (dependency error).

    Output of sudo lshw -c video:

      *-display
          description: VGA compatible controller
          product: Thames [Radeon HD 7500M/7600M Series]
          vendor: Advanced Micro Devices, Inc. [AMD/ATI]
          physical id: 0
          bus info: pci@0000:01:00.0
          version: 00
          width: 64 bits
          clock: 33MHz
          capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
          configuration: driver=radeon latency=0
          resources: irq:45 memory:e0000000-efffffff memory:c0120000-c013ffff ioport:3000(size=256) memory:c0100000-c011ffff
      *-display
          description: VGA compatible controller
          product: 3rd Gen Core processor Graphics Controller
          vendor: Intel Corporation
          physical id: 2
          bus info: pci@0000:00:02.0
          version: 09
          width: 64 bits
          clock: 33MHz
          capabilities: msi pm vga_controller bus_master cap_list rom
          configuration: driver=i915 latency=0
          resources: irq:46 memory:bfc00000-bfffffff memory:d0000000-dfffffff ioport:4000(size=64)

    Similarly, /usr/lib/nux/unity_support_test -p:

      OpenGL vendor string: VMware, Inc.
      OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.3, 256 bits)
      OpenGL version string: 2.1 Mesa 9.2.1
      Not software rendered: no
      Not blacklisted: yes
      GLX fbconfig: yes
      GLX texture from pixmap: yes
      GL npot or rect textures: yes
      GL vertex program: yes
      GL fragment program: yes
      GL vertex buffer object: yes
      GL framebuffer object: yes
      GL version is 1.4+: yes
      Unity 3D supported: no

    This shows no 3D support. Can anyone please guide me on how to make things work again?

    Read the article

  • Problem with std::map and std::pair

    - by Tom
    Hi everyone. I have a small program I want to execute to test something:

      #include <map>
      #include <iostream>

      using namespace std;

      struct _pos {
          float xi;
          float xf;
          bool operator<(_pos& other) {
              return this->xi < other.xi;
          }
      };

      struct _val {
          float f;
      };

      int main()
      {
          map<_pos,_val> m;
          struct _pos k1 = {0,10};
          struct _pos k2 = {10,15};
          struct _val v1 = {5.5};
          struct _val v2 = {12.3};

          m.insert(std::pair<_pos,_val>(k1,v1));
          m.insert(std::pair<_pos,_val>(k2,v2));
          return 0;
      }

    The problem is that when I try to compile it, I get the following error:

      $ g++ m2.cpp -o mtest
      In file included from /usr/include/c++/4.4/bits/stl_tree.h:64,
                       from /usr/include/c++/4.4/map:60,
                       from m2.cpp:1:
      /usr/include/c++/4.4/bits/stl_function.h: In member function 'bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const [with _Tp = _pos]':
      /usr/include/c++/4.4/bits/stl_tree.h:1170: instantiated from 'std::pair<typename std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::iterator, bool> std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_insert_unique(const _Val&) [with _Key = _pos, _Val = std::pair<const _pos, _val>, _KeyOfValue = std::_Select1st<std::pair<const _pos, _val> >, _Compare = std::less<_pos>, _Alloc = std::allocator<std::pair<const _pos, _val> >]'
      /usr/include/c++/4.4/bits/stl_map.h:500: instantiated from 'std::pair<typename std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp> >, _Compare, typename _Alloc::rebind<std::pair<const _Key, _Tp> >::other>::iterator, bool> std::map<_Key, _Tp, _Compare, _Alloc>::insert(const std::pair<const _Key, _Tp>&) [with _Key = _pos, _Tp = _val, _Compare = std::less<_pos>, _Alloc = std::allocator<std::pair<const _pos, _val> >]'
      m2.cpp:30: instantiated from here
      /usr/include/c++/4.4/bits/stl_function.h:230: error: no match for 'operator<' in '__x < __y'
      m2.cpp:9: note: candidates are: bool _pos::operator<(_pos&)

    I thought that declaring operator< on the key would solve the problem, but it's still there. What could be wrong? Thanks in advance.
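
    A likely culprit (offered as a hedged diagnosis): std::map compares keys through std::less<_pos>, which invokes operator< on const references, so the comparator must be a const member function taking a const parameter – exactly what the note at the bottom of the error is hinting at. For example:

      struct _pos {
          float xi;
          float xf;
          // const-correct comparator: std::less<_pos> calls this on
          // const operands, so both 'this' and 'other' must be const
          bool operator<(const _pos& other) const {
              return xi < other.xi;
          }
      };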

    Read the article

  • Quartz compositions created in Snow Leopard (10.6) don't work in Leopard (10.5) despite testing in

    - by adib
    Hi, I have a reasonably advanced (many patches and subpatches) Quartz composition that was created in Snow Leopard but doesn't run well (many elements are not rendered) in Leopard. The composition tested OK via Quartz Composer's Test in Runtime option and works fine for both Leopard 32-bit and Leopard 64-bit (menu item "File | Test in Runtime | Leopard 32-bits"). On an actual Leopard (32-bit) system, a lot of elements are not rendered in the composition. Below is the log file excerpt when the composition is run in QuickTime Player under Leopard:

      QuickTime Player[134] *** <QCNodeManager | namespace = "com.apple.QuartzComposer" | 335 nodes>: Patch with name "/units to pixels" is missing
      QuickTime Player[134] *** Message from <QCPatch = 0x06D82880 "(null)">: Cannot create node of class "/units to pixels" and identifier "(null)"
      QuickTime Player[134] *** Message from <QCPatch = 0x06D7C130 "(null)">: Cannot create node of class "/resize image to target" and identifier "(null)"
      QuickTime Player[134] *** Message from <QCPatch = 0x06D7C130 "(null)">: Cannot create connection from ["outputValue" @ "Math_1"] to ["Target_Pixels" @ "Patch_2"]

    The patch "units to pixels" is a system patch in Snow Leopard, whereas the patch "resize image to target" is a custom virtual patch located in my home directory. It seems we can rule out the composition referencing a missing virtual patch: I have tested the composition under another user's account, and it ran fine, which shows that it already embeds the "resize image to target" virtual patch from my home directory. I'm really puzzled why the composition passes the Leopard runtime test, yet fails to run on an actual Leopard OS. Is there a post-processing step that I need to run on the composition file? Is there any way to make this composition more compatible with Leopard? Thanks in advance.

    Read the article

  • 3x3 Sobel operator and gradient features

    - by pithyless
    Reading a paper, I'm having difficulty understanding the algorithm described: given a black-and-white digital image of a handwriting sample, cut out a single character to analyze. Since this can be any size, the algorithm needs to take this into account (if it makes things easier, we can assume the size is 2^n x 2^m). Now, the description states that, given this image, we will convert it to a 512-bit feature (a 512-bit hash) as follows:

      - (192 bits) Compute the gradient of the image by convolving it with a 3x3 Sobel operator. The direction of the gradient at every edge is quantized to 12 directions.
      - (192 bits) The structural feature generator takes the gradient map and looks in a neighborhood for certain combinations of gradient values (used to compute 8 distinct features that represent lines and corners in the image).
      - (128 bits) The concavity generator uses an 8-point star operator to find coarse concavities in 4 directions, holes, and large-scale strokes. The image feature maps are normalized with a 4x4 grid.

    For now I'm struggling with how to take an arbitrary image, split it into 16 sections, and use a 3x3 Sobel operator to come up with 12 bits for each section. (But if you have some insight into the other parts, feel free to comment :)
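
    As a concrete (and hedged) reading of the first step, here is a minimal C++ sketch that convolves an image with the 3x3 Sobel kernels and quantizes each gradient direction into one of 12 bins of 30°; how those bins map onto the paper's 192 bits (12 directions x 16 grid cells would fit) is my assumption, not something the excerpt states:

      #include <cmath>
      #include <vector>

      // For each interior pixel, compute the Sobel gradient and quantize
      // its direction into one of 12 bins of 30 degrees (-1 = no edge).
      std::vector<std::vector<int>> gradientBins(const std::vector<std::vector<float>>& img) {
          const double kPi = 3.14159265358979323846;
          const int h = (int)img.size(), w = (int)img[0].size();
          std::vector<std::vector<int>> bins(h, std::vector<int>(w, -1));
          for (int y = 1; y + 1 < h; ++y) {
              for (int x = 1; x + 1 < w; ++x) {
                  // 3x3 Sobel kernels for horizontal (gx) and vertical (gy) gradients
                  float gx = -img[y-1][x-1] + img[y-1][x+1]
                             - 2*img[y][x-1] + 2*img[y][x+1]
                             - img[y+1][x-1] + img[y+1][x+1];
                  float gy = -img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1]
                             + img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1];
                  if (gx == 0 && gy == 0) continue;          // flat region: no edge
                  double angle = std::atan2(gy, gx);         // range [-pi, pi]
                  int bin = (int)std::floor((angle + kPi) / (2 * kPi) * 12);
                  bins[y][x] = (bin == 12) ? 11 : bin;       // clamp the angle == +pi edge case
              }
          }
          return bins;
      }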

    Read the article

  • 48-bit bitwise operations in Javascript?

    - by randomhelp
    I've been given the task of porting Java's java.util.Random to JavaScript, and I've run across a huge performance hit/inaccuracy using bitwise operators in JavaScript on sufficiently large numbers. Some cursory research states that "bitwise operators in JavaScript are inherently slow," because internally it appears that JavaScript casts all of its double values into signed 32-bit integers to do the bitwise operations (see https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Operators/Bitwise_Operators for more on this). Because of this, I can't do a direct port of the Java random number generator, and I need to get the same numeric results as java.util.Random. Writing something like this:

      this.next = function(bits) {
          if (!bits) {
              bits = 48;
          }
          this.seed = (this.seed * 25214903917 + 11) & ((1 << 48) - 1);
          return this.seed >>> (48 - bits);
      };

    (which is an almost-direct port of the java.util.Random code) won't work properly, since JavaScript can't do bitwise operations on an integer that size. I've figured out that I can make a seedable random number generator in 32-bit space using the Lehmer algorithm, but the trick is that I need to get the same values as I would with java.util.Random. What should I do to make a faster, functional port?
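
    One hedged approach: keep the 48-bit LCG state in two 24-bit halves, so every intermediate product stays below 2^53 and is exact in a double. A sketch of the idea (the constant 0x5DEECE66D split as 0x5DE * 2^24 + 0xECE66D; not a verified drop-in for java.util.Random):

      function Random48(seed) {
          // seed = hi * 2^24 + lo, both halves < 2^24
          this.hi = Math.floor(seed / 0x1000000) % 0x1000000;
          this.lo = seed % 0x1000000;
      }

      Random48.prototype.next = function(bits) {
          // (seed * 0x5DEECE66D + 0xB) mod 2^48, computed in 24-bit limbs
          var p = this.lo * 0xECE66D + 0xB;              // < 2^48, exact in a double
          var lo = p % 0x1000000;
          var carry = Math.floor(p / 0x1000000);
          var hi = (this.lo * 0x5DE + this.hi * 0xECE66D + carry) % 0x1000000;
          this.hi = hi;
          this.lo = lo;
          // equivalent of seed >>> (48 - bits); exact because seed < 2^48 < 2^53
          return Math.floor((hi * 0x1000000 + lo) / Math.pow(2, 48 - bits));
      };

    Note that java.util.Random also scrambles the initial seed (seed XOR 0x5DEECE66D) and casts next()'s result to a signed 32-bit int, so matching Java exactly needs those steps too.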

    Read the article

  • 32 bit depth jpg images problem in IE when referenced locally

    - by Stefan
    We have a web application that takes an image that is uploaded and resized. The resize library we used saved all pictures with 32-bit depth, whatever the depth was before. We have an online client that can view the pictures via an HTML file, and all is fine there: all pictures are shown correctly. The problem: we also have a VB WinForms application that downloads the pictures and shows them in an HTML file locally in a WebBrowser control. But here all pictures are rejected (not rendered) – just the red cross. If we create a static HTML file with img tags in it locally, it's the same: all pictures that have 32-bit depth are shown as red crosses. If we re-save the pictures with 24-bit depth, it magically works again. So of course that became our "workaround": have the resize library save all pictures with 24-bit depth instead. Summary: 32-bit JPG files show correctly in IE when online, but not when referenced locally in a local HTML file. (This is true for IE8 on both WinXP and Windows 7.) The same local HTML file opened in Mozilla showed OK. Question: I have googled this a lot but have not found anything about this "problem". Is this a bug in IE8?

    Read the article

  • Compilation error while compiling an existing code base

    - by brijesh
    Hi, while building an existing code base on Mac OS using its native build setup, I am getting some strange errors during the compilation phase:

      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/bits/locale_facets.h: In constructor 'std::collate_byname<_CharT>::collate_byname(const char*, size_t)':
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/bits/locale_facets.h:1072: error: '_M_c_locale_collate' was not declared in this scope
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/ppc-darwin/bits/messages_members.h: In constructor 'std::messages_byname<_CharT>::messages_byname(const char*, size_t)':
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/ppc-darwin/bits/messages_members.h:79: error: '_M_c_locale_messages' was not declared in this scope
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits: At global scope:
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: 'float __builtin_huge_valf()' cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: a function call cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: 'float __builtin_huge_valf()' cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:897: error: a function call cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: 'float __builtin_nanf(const char*)' cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: a function call cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: 'float __builtin_nanf(const char*)' cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:899: error: a function call cannot appear in a constant-expression
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:900: error: field initializer is not constant
      /Developer/SDKs/MacOSX10.3.9.sdk/usr/include/gcc/darwin/3.3/c++/limits:915: error: field initializer is not constant

    Read the article

  • Serial.begin(speed, config) not compiling for Leonardo Board

    - by forgemo
    I would like to configure my serial communication to have no parity, 1 start bit, and 2 stop bits. The documentation for Serial.begin(speed, config) states: "(...) An optional second argument configures the data, parity, and stop bits. The default is 8 data bits, no parity, one stop bit." The documentation also lists the possible configuration values. According to my (limited) understanding, I need SERIAL_7N2 or SERIAL_8N2 to meet my requirements. (I'm not sure how the data bits relate to the 1 start bit that I need.) However, I can't even compile, because I have no idea how to supply that config value to the begin method. (I don't have much Arduino/C++ experience.) I've tried the following two variants in my code:

      Serial.begin(9600, SERIAL_8N2);
      Serial.begin(9600, "SERIAL_8N2");

    Am I missing something?

    Additional information: Serial.begin(speed, config) was introduced with the latest Arduino 1.0.2 IDE version. The code defining/implementing the begin methods can be found here: HardwareSerial.h, HardwareSerial.cpp

    Edit: According to the replies from PeterJ and borges, the following variant is correct:

      Serial.begin(9600, SERIAL_8N2);

    However, it's still not working. I found that the compile error doesn't occur if I change the configured board from my Arduino Leonardo to an Arduino Uno. Therefore, it could be a bug occurring only with a subset of boards... or maybe it's not supported?!

    Read the article
