Search Results

Search found 2468 results on 99 pages for 'splattered bits'.

  • Rails ActiveRecord BigNum to JSON

    - by Jon Hoffman
    Hi, I am serializing an ActiveRecord model in Rails 2.3.2 with to_json and have noticed that Bignum values are serialized to JSON without quotes. However, JavaScript uses 64 bits to represent large numbers, and only 53 of those bits are available for the integer part; the rest are for the exponent and sign. So my 17-digit numbers get rounded off, grrr. Try the following in the Firebug console: console.log(123456789012345678) So I'm thinking that the JSON encoder should be smart enough to quote numbers that are too big for JavaScript engines to handle. How do I fix up Rails to do that? Or is there a way to override the encoding for a single property on the model (I don't want to_s elsewhere)? Thanks.

    Read the article

  • C++ long long manipulation

    - by Krakkos
    Given two 32-bit ints iMSB and iLSB int iMSB = 12345678; // Most Significant Bits of file size in Bytes int iLSB = 87654321; // Least Significant Bits of file size in Bytes the long long form would be... // Always positive so use 31 bits long long full_size = ((long long)iMSB << 31); full_size += (long long)(iLSB); Now, I don't need that much precision (the exact number of bytes), so how can I convert the file size to MiB to 3 decimal places and convert it to a string? Tried this... long double file_size_megs = file_size_bytes / (1024 * 1024); char strNumber[20]; sprintf(strNumber, "%ld", file_size_megs); ... but it doesn't seem to work. i.e. 1234567899878 Bytes = 1177375.698 MiB ??
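
    A minimal C sketch of the two fixes implied here (a sketch, not the asker's code): the high word must be shifted by a full 32 bits, since shifting by 31 misplaces the value, and a floating-point value needs a floating-point format specifier, not %ld.

      #include <stdio.h>
      #include <stdint.h>

      int main(void) {
          uint32_t iMSB = 12345678;   /* most significant 32 bits  */
          uint32_t iLSB = 87654321;   /* least significant 32 bits */

          /* Combine the halves: shift by a full 32 bits, not 31. */
          uint64_t full_size = ((uint64_t)iMSB << 32) | iLSB;

          /* Convert to MiB and format to 3 decimal places; %f takes a
             double, so cast instead of printing with %ld. */
          double file_size_megs = (double)full_size / (1024.0 * 1024.0);
          char strNumber[32];
          snprintf(strNumber, sizeof strNumber, "%.3f", file_size_megs);
          printf("%s MiB\n", strNumber);
          return 0;
      }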

    Read the article

  • What is the bit size of long on 64-bit Windows?

    - by acidzombie24
    Not too long ago someone told me that long is not 64 bits on 64-bit machines and I should always use int. This did not make sense to me. I've seen docs (such as the one on Apple's official site) say that long is indeed 64 bits when compiling for a 64-bit CPU. I looked up what it was on Windows and found Windows: long and int remain 32-bit in length, and special new data types are defined for 64-bit integers. from http://www.intel.com/cd/ids/developer/asmo-na/eng/197664.htm?page=2 What should I use? Should I define something like uw, sw ((un)signed width) as a long if not on Windows, and otherwise do a check on the target CPU bit size?
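
    A short C sketch of the usual way out, assuming a C99 compiler: the fixed-width types in <stdint.h> have the same size under both LP64 (64-bit Unix, where long is 64 bits) and LLP64 (64-bit Windows, where long stays 32 bits), so no hand-rolled uw/sw typedefs are needed.

      #include <stdio.h>
      #include <stdint.h>
      #include <inttypes.h>

      int main(void) {
          /* int64_t/uint64_t are exactly 64 bits on every platform that
             provides them, regardless of what plain long is. */
          int64_t  sw = -1234567890123456789LL;
          uint64_t uw = 12345678901234567890ULL;

          printf("long    : %zu bits\n", sizeof(long) * 8);
          printf("int64_t : %zu bits\n", sizeof(int64_t) * 8);
          printf("sw = %" PRId64 ", uw = %" PRIu64 "\n", sw, uw);
          return 0;
      }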

    Read the article

  • Simple hardware RNG

    - by roygbiv
    I made a tongue-in-cheek comment to this question about making a hardware RNG. Does anyone know of any simple plans, or can anyone describe a simple hardware-based RNG and the software to drive it? Go to Radio Shack. Buy a diode, an NTR resistor, a capacitor and a serial cable. Cut off the end of the serial cable that does not fit on your computer. Solder the diode and resistor in series between pins DTR and DSR of the cable. Solder the capacitor between the DSR and TXD pins. Write a small C program to do the following: Set DTR to 1. Start a timer. Monitor DSR until it goes to 1. Stop the timer. Calculate the resistance from the elapsed time. Retrieve several bits from that value to use as part of a random number. Repeat until enough bits have accumulated.
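
    A rough Linux sketch of the timing loop described above, assuming the RC network sits between DTR and DSR as suggested; the device path, the discharge delay, and the number of entropy bits kept per measurement are all assumptions, so treat this as an illustration rather than a vetted entropy source.

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/ioctl.h>
      #include <time.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
          if (fd < 0) { perror("open"); return 1; }

          int dtr = TIOCM_DTR;
          ioctl(fd, TIOCMBIC, &dtr);           /* DTR low: let the cap discharge */
          usleep(10000);

          struct timespec t0, t1;
          clock_gettime(CLOCK_MONOTONIC, &t0);
          ioctl(fd, TIOCMBIS, &dtr);           /* DTR high: start charging */

          int status = 0;
          do {                                 /* poll DSR until it reads 1 */
              ioctl(fd, TIOCMGET, &status);
          } while (!(status & TIOCM_DSR));
          clock_gettime(CLOCK_MONOTONIC, &t1);

          long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                  + (t1.tv_nsec - t0.tv_nsec);
          printf("charge time %ld ns, keeping low bits 0x%lx\n", ns, ns & 0xFL);
          close(fd);
          return 0;
      }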

    Read the article

  • Volatile or synchronized for primitive type?

    - by DKSRathore
    In Java, assignment is atomic if the size of the variable is less than or equal to 32 bits, but is not if it is more than 32 bits. What (volatile or synchronized) would be more efficient to use in the case of a double or long assignment? Like: volatile double x = y; synchronized is not applicable with a primitive argument. How do I use synchronized in this case? Of course I don't want to lock my class, so that should not be used.

    Read the article

  • Git pack file entry format

    - by Ben Collins
    My understanding of the Git pack file format is something like the following: the table is 32 bits wide, and the first three 32-bit words are the pack file header. The last row of 32 bits is the first 4 bytes of an entry. As I understand it, the size of the entry is specified by consecutive bytes with the MSB set, followed by compressed data. In the first byte whose MSB is not set, is the MSB part of the compressed data, or is it a gap? If it's part of the compressed data, how can you guarantee that bit won't be set when the data is compressed?
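
    A small C sketch of decoding an entry header as described, which also answers the question: the clear MSB is the header's own stop flag, neither compressed data nor a gap, and the deflate stream begins at the next byte boundary, so the flag can never collide with a compressed bit.

      #include <stddef.h>
      #include <stdint.h>

      /* First byte: MSB = continuation, bits 6-4 = object type,
         bits 3-0 = low size bits. Each following byte: MSB =
         continuation, low 7 bits = more size bits. */
      size_t decode_entry_header(const uint8_t *p, int *type, uint64_t *size) {
          size_t i = 0;
          uint8_t b = p[i++];
          *type = (b >> 4) & 0x7;
          *size = b & 0xF;
          int shift = 4;
          while (b & 0x80) {                /* MSB set: header continues */
              b = p[i++];
              *size |= (uint64_t)(b & 0x7F) << shift;
              shift += 7;
          }
          return i;                         /* offset of the compressed data */
      }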

    Read the article

  • OS memory allocation addresses

    - by user1777914
    Quick curious question: are memory allocation addresses chosen by the language compiler, or is it the OS which chooses the addresses for the memory requested? This stems from a doubt about virtual memory, which could be quickly explained as "let the process think it owns all the memory", but what happens on 64-bit architectures, where only 48 bits are used for memory addresses, if the process wants a higher address? Let's say you do int *a = malloc(sizeof(int)); and you have no memory left from the previous system call, so you need to ask the OS for more memory. Is the compiler the one who determines the memory address at which to allocate this variable, or does it just ask the OS for memory, which allocates it at the address it returns?
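
    A little Linux-flavored C sketch of who picks what: the compiler picks nothing at run time; malloc hands out addresses inside pages the kernel already gave the process, and the kernel chooses those page addresses (all of them virtual, within the 48-bit range the hardware supports).

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>

      int main(void) {
          int *a = malloc(sizeof(int));    /* address chosen by the allocator */
          void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                                           /* address chosen by the kernel */
          printf("malloc gave %p, mmap gave %p\n", (void *)a, page);
          munmap(page, 4096);
          free(a);
          return 0;
      }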

    Read the article

  • Getting an unexpected "?" at the end of a Registry GetValue in C#

    - by Wilhelm Peraud
    Hi, I use the Registry class to manage values in the Registry on Windows 7 in C#. Registry.GetValue(...); But I'm facing a curious behavior: every time, the returned value is the correct one, but sometimes it is followed by an unexpected "?" When I check the Registry (regedit), the "?" doesn't exist. I really don't understand where this question mark comes from. Could someone help me please? Info: C#, 3.5 framework, Windows 7 64-bit (and I want my application to work on both 32- and 64-bit systems). Thank you in advance, Wilhelm

    Read the article

  • c# interop with ghostscript

    - by yodaj007
    I'm trying to access some Ghostscript functions like so: [DllImport(@"C:\Program Files\GPLGS\gsdll32.dll", EntryPoint = "gsapi_revision")] public static extern int Foo(gsapi_revision_t x, int len); public struct gsapi_revision_t { [MarshalAs(UnmanagedType.LPTStr)] string product; [MarshalAs(UnmanagedType.LPTStr)] string copyright; long revision; long revisiondate; } public static void Main() { gsapi_revision_t foo = new gsapi_revision_t(); Foo(foo, Marshal.SizeOf(foo)); This corresponds with these definitions from the iapi.h header from ghostscript: typedef struct gsapi_revision_s { const char *product; const char *copyright; long revision; long revisiondate; } gsapi_revision_t; GSDLLEXPORT int GSDLLAPI gsapi_revision(gsapi_revision_t *pr, int len); But my code is reading nothing into the string fields. If I add 'ref' to the function, it reads gibberish. However, the following code reads in the data just fine: public struct gsapi_revision_t { IntPtr product; IntPtr copyright; long revision; long revisiondate; } public static void Main() { gsapi_revision_t foo = new gsapi_revision_t(); IntPtr x = Marshal.AllocHGlobal(20); for (int i = 0; i < 20; i++) Marshal.WriteInt32(x, i, 0); int result = Foo(x, 20); IntPtr productNamePtr = Marshal.ReadIntPtr(x); IntPtr copyrightPtr = Marshal.ReadIntPtr(x, 4); long revision = Marshal.ReadInt64(x, 8); long revisionDate = Marshal.ReadInt64(x, 12); byte[] dest = new byte[1000]; Marshal.Copy(productNamePtr, dest, 0, 1000); string name = Read(productNamePtr); string copyright = Read(copyrightPtr); } public static string Read(IntPtr p) { List<byte> bits = new List<byte>(); int i = 0; while (true) { byte b = Marshal.ReadByte(new IntPtr(p.ToInt64() + i)); if (b == 0) break; bits.Add(b); i++; } return Encoding.ASCII.GetString(bits.ToArray()); } So what am I doing wrong with marshaling?

    Read the article

  • How do I detect if a display is in High Contrast mode?

    - by banjollity
    I'm testing my company's established Swing application for accessibility issues. With high contrast mode enabled on my PC certain parts of this application are rendered properly (white-on-black) and some incorrectly (black-on-white). The bits that are correct are the native components (JButton, JLabel and whatnot) and third party components from the likes of JIDE. The incorrect bits are custom components and renderers developed in-house without consideration for high-contrast mode. Clearly it's possible to detect when high-contrast mode is enabled. How do I do this?

    Read the article

  • G++, compiler warnings, c++ templates

    - by Ian
    During the compilation of the C++ program these warnings appeared: c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bc:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2317: instantiated from `void std::partial_sort(_RandomAccessIterator, _RandomAccessIterator, _RandomAccessIterator, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<Object<double>**, std::vector<Object<double>*, std::allocator<Object<double>*> > >, _Compare = sortObjects<double>]' c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2506: instantiated from `void std::__introsort_loop(_RandomAccessIterator, _RandomAccessIterator, _Size, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<Object<double>**, std::vector<Object<double>*, std::allocator<Object<double>*> > >, _Size = int, _Compare = sortObjects<double>]' c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2589: instantiated from `void std::sort(_RandomAccessIterator, _RandomAccessIterator, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<Object<double>**, std::vector<Object<double>*, std::allocator<Object<double>*> > >, _Compare = sortObjects<double>]' io/../structures/objects/../../algorithm/analysis/../../structures/list/ObjectsList.hpp:141: instantiated from `void ObjectsList <T>::sortObjects(unsigned int, T, T, T, T, unsigned int) [with T = double]' I do not know why, because all objects have only the template parameter T, and their local variables are also T. The only place where I am using double is main. There are objects of type double being created and added into the ObjectsList... Object <double> o1; ObjectsList <double> olist; olist.push_back(o1); .... T xmin = ..., ymin = ..., xmax = ..., ymax = ...; unsigned int n = ...; olist.sortAllObjects(xmin, ymin, xmax, ymax, n); and the comparator template <class T> class sortObjects { private: unsigned int n; T xmin, ymin, xmax, ymax; public: sortObjects ( const T xmin_, const T ymin_, const T xmax_, const T ymax_, const int n_ ) : xmin ( xmin_ ), ymin ( ymin_ ), xmax ( xmax_ ), ymax ( ymax_ ), n ( n_ ) {} bool operator() ( const Object <T> *o1, const Object <T> *o2 ) const { T dmax = (std::max) ( xmax - xmin, ymax - ymin ); T x_max = ( xmax - xmin ) / dmax; T y_max = ( ymax - ymin ) / dmax; ... return ....; } and the corresponding ObjectsList method: template <class T> void ObjectsList <T> ::sortAllObjects ( const T xmin, const T ymin, const T xmax, const T ymax, const unsigned int n ) { std::sort ( objects.begin(), objects.end(), sortObjects <T> ( xmin, ymin, xmax, ymax, n ) ); }

    Read the article

  • Unreachable code detected by using const variables

    - by Anton Roth
    I have the following code: private const FlyCapture2Managed.PixelFormat f7PF = FlyCapture2Managed.PixelFormat.PixelFormatMono16; public PGRCamera(ExamForm input, bool red, int flags, int drawWidth, int drawHeight) { if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono8) { bpp = 8; // unreachable warning } else if (f7PF == FlyCapture2Managed.PixelFormat.PixelFormatMono16){ bpp = 16; } else { MessageBox.Show("Camera misconfigured"); // unreachable warning } } I understand why this code is unreachable, but I don't want that warning to appear, since this is a compile-time configuration which just needs a change in the constant to test different settings, and the bits per pixel (bpp) change depending on the pixel format. Is there a good way to have just one variable being constant, deriving the other from it, but not resulting in an unreachable-code warning? Note that I need both values: on start, the camera needs to be configured to the proper pixel format, and my image-understanding code needs to know how many bits the image is in. So, is there a good workaround, or do I just live with this warning?

    Read the article

  • Why is Read-Modify-Write necessary for registers on embedded systems?

    - by Adam Shiemke
    I was reading http://embeddedgurus.com/embedded-bridge/2010/03/different-bit-types-in-different-registers/, which said: With read/write bits, firmware sets and clears bits when needed. It typically first reads the register, modifies the desired bit, then writes the modified value back out and I have run into that construct while maintaining some production code written by the old-salt embedded guys here. I don't understand why this is necessary. When I want to set or clear a bit, I always just or/and with a bitmask. To my mind, this solves any thread-safety problems, since I assume setting a register (either by assignment or by or-ing with a mask) only takes one cycle. On the other hand, if you first read the register, then modify, then write, an interrupt happening between the read and the write may result in writing an old value to the register. So why read-modify-write? Is it still necessary?
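
    A C sketch of the distinction, using a hypothetical memory-mapped GPIO block (the register addresses are invented): |= on a register is itself a read-modify-write at the instruction level, so it has exactly the interrupt window described above, which is why many peripherals add dedicated write-1-to-set and write-1-to-clear registers that change the masked bits with a single store.

      #include <stdint.h>

      /* Hypothetical register map, for illustration only. */
      #define GPIO_OUT (*(volatile uint32_t *)0x40020014)
      #define GPIO_SET (*(volatile uint32_t *)0x40020018) /* write 1 to set   */
      #define GPIO_CLR (*(volatile uint32_t *)0x4002001C) /* write 1 to clear */

      void rmw_set_bit(uint32_t mask) {
          /* This compiles to load, OR, store: an interrupt that writes
             GPIO_OUT between the load and the store gets lost. */
          GPIO_OUT |= mask;
      }

      void atomic_set_bit(uint32_t mask) {
          /* One store, no read, no window. */
          GPIO_SET = mask;
      }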

    Read the article

  • gcc, UTF-8 and limits.h

    - by bobby
    My OS is Debian, my default locale is UTF-8, and my compiler is gcc. By default CHAR_BIT in limits.h is 8, which is OK for ASCII, because in ASCII 1 char = 8 bits. But since I am using UTF-8, chars can be up to 32 bits, which contradicts the CHAR_BIT default value of 8. If I modify CHAR_BIT to 32 in limits.h to better suit UTF-8, what do I have to do in order for this new value to come into effect? I guess I have to recompile gcc? Do I have to recompile the Linux kernel? What about the default installed Debian packages, will they work?
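
    A minimal C sketch of why CHAR_BIT stays 8: UTF-8 spreads one code point across one to four 8-bit bytes, so the 32 bits belong to the code point, not to char (and editing limits.h changes nothing anyway; the header only documents what the compiler and ABI already do).

      #include <stdio.h>

      /* Encode a code point (up to U+10FFFF) as 1-4 UTF-8 bytes. */
      int utf8_encode(unsigned long cp, unsigned char out[4]) {
          if (cp < 0x80)    { out[0] = (unsigned char)cp; return 1; }
          if (cp < 0x800)   { out[0] = 0xC0 | (cp >> 6);
                              out[1] = 0x80 | (cp & 0x3F); return 2; }
          if (cp < 0x10000) { out[0] = 0xE0 | (cp >> 12);
                              out[1] = 0x80 | ((cp >> 6) & 0x3F);
                              out[2] = 0x80 | (cp & 0x3F); return 3; }
          out[0] = 0xF0 | (cp >> 18);
          out[1] = 0x80 | ((cp >> 12) & 0x3F);
          out[2] = 0x80 | ((cp >> 6) & 0x3F);
          out[3] = 0x80 | (cp & 0x3F);
          return 4;
      }

      int main(void) {
          unsigned char buf[4];
          int n = utf8_encode(0x20AC, buf);      /* U+20AC, the euro sign */
          for (int i = 0; i < n; i++) printf("%02X ", buf[i]);
          printf("\n");                          /* prints: E2 82 AC */
          return 0;
      }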

    Read the article

  • Optimal password salt length

    - by Juliusz Gonera
    I tried to find the answer to this question on Stack Overflow without any success. Let's say I store passwords using a SHA-1 hash (so it's 160 bits) and let's assume that SHA-1 is enough for my application. How long should the salt used to generate the password's hash be? The only answer I found was that there's no point in making it longer than the hash itself (160 bits in this case), which sounds logical, but should I make it that long? E.g. Ubuntu uses an 8-byte salt with SHA-512 (I guess), so would 8 bytes be enough for SHA-1 too, or would it maybe be too much?
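
    A short C sketch of one common choice; the 16-byte length here is an assumption of mine: 128 bits of salt from /dev/urandom sits comfortably above Ubuntu's 8 bytes and below the 160-bit ceiling mentioned above, and costs essentially nothing to store.

      #include <stdio.h>
      #include <stddef.h>

      /* Fill `salt` with `len` random bytes from the kernel CSPRNG. */
      int make_salt(unsigned char *salt, size_t len) {
          FILE *f = fopen("/dev/urandom", "rb");
          if (!f) return -1;
          size_t got = fread(salt, 1, len, f);
          fclose(f);
          return got == len ? 0 : -1;
      }

      int main(void) {
          unsigned char salt[16];
          if (make_salt(salt, sizeof salt) != 0) return 1;
          for (size_t i = 0; i < sizeof salt; i++) printf("%02x", salt[i]);
          printf("\n");
          return 0;
      }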

    Read the article

  • Bitwise Shifting in C

    - by user313943
    I've recently decided to undertake an SMS project for sending and receiving SMS through a mobile. The data is sent in PDU format; I am required to change ASCII characters to 7-bit GSM alphabet characters. To do this I've come across several examples, such as http://www.dreamfabric.com/sms/hello.html This example shows the rightmost bits of the second septet being inserted into the first septet to create an octet. Bitwise shifts do not cause this to happen, as >> will insert zeros on the left, and << on the right. As I understand it, I need some kind of bitwise rotate to create this. Can anyone tell me how to move bits from the right-hand side and insert them on the left? Thanks,
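
    A C sketch of the packing from the dreamfabric example: no rotate instruction is needed, because the bits that appear to move from the right to the left are just the low bits of the NEXT septet, shifted up and ORed into the top of the current octet.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Pack 7-bit characters into octets, GSM 03.38 style. */
      size_t pack7(const char *in, uint8_t *out) {
          size_t n = strlen(in), o = 0;
          int shift = 0;
          for (size_t i = 0; i < n; i++) {
              uint8_t sept = in[i] & 0x7F;
              uint8_t next = (i + 1 < n) ? (in[i + 1] & 0x7F) : 0;
              if (shift == 7) { shift = 0; continue; } /* septet already absorbed */
              out[o++] = (uint8_t)((sept >> shift) | (next << (7 - shift)));
              shift++;
          }
          return o;
      }

      int main(void) {
          uint8_t buf[16];
          size_t n = pack7("hello", buf);
          for (size_t i = 0; i < n; i++) printf("%02X", buf[i]);
          printf("\n");               /* prints E8329BFD06 */
          return 0;
      }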

    Read the article

  • object representation and value representation

    - by FredOverflow
    3.9 §4 says: The object representation of an object of type T is the sequence of N unsigned char objects taken up by the object of type T, where N equals sizeof(T). The value representation of an object is the set of bits that hold the value of type T. For trivially copyable types, the value representation is a set of bits in the object representation that determines a value, which is one discrete element of an implementation-defined set of values. Does "The value representation of an object" imply that values are always stored in objects? What is the value representation of non-trivially copyable types?
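
    A small C sketch of the distinction for a trivially copyable type (the same picture holds for the C++ wording quoted above): all sizeof(T) bytes form the object representation, but the padding bytes among them take no part in the value representation, which is exactly why comparing such objects with memcmp is unreliable.

      #include <stdio.h>
      #include <string.h>

      struct S {        /* on typical ABIs: 3 padding bytes after c */
          char c;
          int  i;
      };

      int main(void) {
          struct S a, b;
          memset(&a, 0xAA, sizeof a);   /* scribble over every object byte */
          memset(&b, 0x55, sizeof b);
          a.c = b.c = 'x';              /* identical values... */
          a.i = b.i = 42;
          printf("memcmp says %d\n", memcmp(&a, &b, sizeof a));
          /* likely nonzero: the padding bytes still differ */
          return 0;
      }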

    Read the article

  • Writing a C Macro

    - by shaharg
    Hi, I have to write a macro that gets some variable as a parameter and, for each two sequential bits with the value 1, replaces them with 0 bits. For example: 10110100 will become 10000100. And 11110000 becomes 00000000, and 11100000 becomes 10000000. I'm having trouble writing that macro. I've tried to write a macro that gets each bit and replaces it if the next bit is the same (and they are both 1), but it works only for 8 bits and it's not very friendly... P.S. I need a macro because I'm learning C and this is an exercise I found and couldn't solve myself. I know I can use a function to do it easily... but I want to know how to do it with macros. Thanks!
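
    Matching all three examples requires pairing adjacent 1 bits starting from the least significant end (11100000 keeps its top bit, so the pairing cannot start from the MSB). A value-yielding macro with a loop needs the GCC/Clang statement-expression extension, so this is a sketch under that assumption; a strictly portable version would be a function.

      #include <stdio.h>

      /* Clear non-overlapping pairs of adjacent 1 bits, LSB first. */
      #define CLEAR_BIT_PAIRS(x) ({                           \
          unsigned v_ = (x);                                  \
          for (unsigned i_ = 0; i_ + 1 < 8 * sizeof v_; i_++) \
              if (((v_ >> i_) & 3u) == 3u)                    \
                  v_ &= ~(3u << i_);   /* drop the pair */    \
          v_;                                                 \
      })

      int main(void) {
          printf("%x\n", CLEAR_BIT_PAIRS(0xB4)); /* 10110100 -> 84 (10000100) */
          printf("%x\n", CLEAR_BIT_PAIRS(0xF0)); /* 11110000 -> 0  (00000000) */
          printf("%x\n", CLEAR_BIT_PAIRS(0xE0)); /* 11100000 -> 80 (10000000) */
          return 0;
      }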

    Read the article

  • Inserting into a bitstream

    - by evilertoaster
    I'm looking for a way to efficiently insert bits into a bitstream and have it 'overflow', padding with 0's. So for example, if you had a byte array with 2 bytes, 231 and 109 (11100111 01101101), and did BitInsert(byteArray,4,00), it would insert two bits at bit offset 4, making 11100001 11011011 01000000 (225, 219, 64). It would be OK even if the method only allowed 1-bit insertions, e.g. BitInsert(byteArray,4,true) or BitInsert(byteArray,4,false). I have one method of doing it, but it has to walk the stream bit by bit with a bitmask, so I'm wondering if there's a simpler approach... Answers in assembly or a C derivative would be appreciated.
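
    A C sketch of one straightforward BitInsert: it still touches bits one at a time, but without the running-mask bookkeeping, by moving the tail down `count` positions starting from the end and then dropping the new bits into the gap. The caller supplies a buffer with room for the overflow; bits are numbered MSB-first within each byte, matching the example. A faster variant would memmove whole bytes and patch only the boundary byte.

      #include <stdint.h>
      #include <stdio.h>
      #include <stddef.h>

      static int get_bit(const uint8_t *buf, size_t i) {
          return (buf[i / 8] >> (7 - i % 8)) & 1;
      }

      static void set_bit(uint8_t *buf, size_t i, int v) {
          uint8_t mask = (uint8_t)(1u << (7 - i % 8));
          if (v) buf[i / 8] |= mask; else buf[i / 8] &= (uint8_t)~mask;
      }

      /* Insert the low `count` bits of `value` at `pos` in a stream
         currently `nbits` long; the buffer must hold nbits+count bits. */
      void bit_insert(uint8_t *buf, size_t nbits, size_t pos,
                      uint32_t value, unsigned count) {
          for (size_t i = nbits; i-- > pos; )     /* shift the tail, end first */
              set_bit(buf, i + count, get_bit(buf, i));
          for (unsigned k = 0; k < count; k++)    /* write the new bits */
              set_bit(buf, pos + k, (value >> (count - 1 - k)) & 1);
      }

      int main(void) {
          uint8_t buf[3] = {231, 109, 0};         /* 11100111 01101101 */
          bit_insert(buf, 16, 4, 0, 2);           /* insert "00" at offset 4 */
          printf("%u %u %u\n", buf[0], buf[1], buf[2]);  /* 225 219 64 */
          return 0;
      }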

    Read the article

  • SQL 2005 - Search stored procedures for text (Not all text is being searched)

    - by hamlin11
    The following bits of code do not seem to be searching the entire routine definition. Code block 1: select top 50 * from information_schema.routines where routine_definition like '%09/01/2008%' and specific_Name like '%NET' Code Block 2: SELECT ROUTINE_NAME, ROUTINE_DEFINITION FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_DEFINITION LIKE '%EffectiveDate%' AND ROUTINE_TYPE='PROCEDURE' and ROUTINE_NAME like '%NET' I know for a fact that these bits of SQL work under most circumstances. The problem is this: When I run this for "EffectiveDate" which is buried at line ~800 in a few stored procedures, these stored procedures never show up in the results. It's as if "like" only searches so deep. Any tips on fixing this? I want to search the ENTIRE stored procedure for the specified text. Thanks!

    Read the article

  • Bitwise operators versus .NET abstractions for bit manipulation in C# prespective

    - by Leron
    I'm trying to get basic skills in working with bits using C#/.NET. I posted an example yesterday with a simple problem that needs bit manipulation, which led me to the fact that there are two main approaches: using bitwise operators, or using .NET abstractions such as BitArray (please let me know if there are more built-in tools for working with bits other than BitArray in .NET, and how to find more info on them if there are). I understand that bitwise operators work faster, but using BitArray is much easier for me; one thing I really try to avoid, though, is learning bad practices. Even though my personal preference is for the .NET abstraction(s), I want to know which is actually better to learn and use in a real program. Thinking about it, I'm tempted to think that the .NET abstractions are not that bad; after all, there must be a reason for them to be there, and maybe, being a beginner, it's more natural to learn the abstraction and later on improve my skills with low-level operations. But these are just random thoughts.

    Read the article

  • Has anyone been successful at a assembler based led blinker for an xcore?

    - by dwelch
    I am liking the http://www.xmos.com chips but want to get a lower-level understanding of what is going on. Basically assembler. I am trying to sort out something as simple as an LED blinker: set the LED, count to N, clear the LED, count to N, loop forever. Sure, I can disassemble a 10-line XC program, but if you have tried that you will see there is a lot of bloat in there that is in every program. What bits are there to support the compiler output, and what bits are actually setting up the GPIO?

    Read the article

  • Is incrementing in a loop exponential time?

    - by user356106
    I have a simple but confusing doubt about whether the program below runs in exponential time. The question is: given a positive integer as input, print it out. The catch is that you deliberately do this in a loop, like this: int input, output = 0; cin >> input; while (input--) ++output; // takes time proportional to the value of input cout << output; I'm claiming that this program runs in exponential time, because the moment you increase the number of bits in the input by 1, the program takes double the amount of time to execute. Put another way, to print out log2(input) bits, it takes O(input) time. Is this reasoning right?
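
    The reasoning is right, and it fits in one line of math: a b-bit input can be as large as 2^b - 1, so a loop linear in the value is exponential in the length of the input (this is what makes such algorithms "pseudo-polynomial"). In LaTeX:

      T(\text{input}) \;=\; \Theta(\text{input}) \;=\; \Theta\bigl(2^{\,b}\bigr),
      \qquad b \;=\; \bigl\lceil \log_2(\text{input} + 1) \bigr\rceil .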

    Read the article
