Search Results

Search found 19796 results on 792 pages for 'bit twiddler'.


  • Segment register, IP register and memory addressing issue!

    - by Zia ur Rahman
    In the following text I ask two questions, and I also describe what I already know about them so you can follow my thinking. Your comments on the text below would be appreciated.

    Detail of the first question: if we have one megabyte of memory, we need 20 bits to address it, and each memory cell in that 1 MB has a 20-bit physical address. The IP register in the iAPX88 is only 16 bits wide. My view is that the IP register alone cannot address this memory at all, because the memory needs a 20-bit address but IP holds only 16 bits. With 64 KB of memory, IP could address everything, because that memory needs only 16 bits; with 1 MB it can't. Tell me whether I am right, and if not, why not. Suppose the physical address of a memory location is 1100 0000 0000 0000 0101; how can we reach that location with 16 bits?

    Detail of the next question: suppose IP points to a memory location, a segment register points to the start of a segment, and the memory is 1 MB. How do we access a memory location using these two 16-bit registers? Tell me the sequence of steps by which the 20-bit address is formed. If your answer is that we take the segment value, shift it left by 4 bits and add the IP value to get the 20-bit address, that raises another question about the address bus, which has to be 20 bits wide: both the segment register and the IP register are 16 bits each, so does that mean the 20-bit address bus is connected directly to both registers? If not, the other possibility that comes to mind is that the two registers together generate a 20-bit address, and there is some 20-bit register, connected to both of them and to the address bus, that holds it.
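
    For reference, a minimal C++ sketch of the usual 8086/iAPX88 real-mode address calculation (the values are only illustrative; 0xC000:0x0005 happens to produce the 20-bit address 1100 0000 0000 0000 0101 from the question):

        #include <cstdint>
        #include <cstdio>

        // Real mode: physical = segment * 16 + offset.
        // A dedicated adder in the bus interface unit combines the two 16-bit
        // registers; the 20-bit sum goes straight out on the address bus, so
        // no programmer-visible register ever holds the full 20-bit address.
        uint32_t physical_address(uint16_t segment, uint16_t offset) {
            return (static_cast<uint32_t>(segment) << 4) + offset;
        }

        int main() {
            // prints C0005
            std::printf("%05X\n", static_cast<unsigned>(physical_address(0xC000, 0x0005)));
            return 0;
        }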

    Read the article

  • Correlating traditional Windows joystick axes with HID

    - by Wade Williams
    I'm a bit confused about the description of joystick axes, and I'm hoping someone has a link or document which could clear up my confusion. I'm not a Windows guy, so trying to port some traditional Windows gameport code has me a bit confused. We all know about the common first three axes: X Y Z My understanding was that in the gameport-style interface the three other axes are: R U V However, looking in my IOHIDUsageTables (OS X), I see: kHIDUsage_GD_X = 0x30, /* Dynamic Value */ kHIDUsage_GD_Y = 0x31, /* Dynamic Value */ kHIDUsage_GD_Z = 0x32, /* Dynamic Value */ kHIDUsage_GD_Rx = 0x33, /* Dynamic Value */ kHIDUsage_GD_Ry = 0x34, /* Dynamic Value */ kHIDUsage_GD_Rz = 0x35, /* Dynamic Value */ kHIDUsage_GD_Vx = 0x40, /* Dynamic Value */ kHIDUsage_GD_Vy = 0x41, /* Dynamic Value */ kHIDUsage_GD_Vz = 0x42, /* Dynamic Value */ kHIDUsage_GD_Vbrx = 0x43, /* Dynamic Value */ kHIDUsage_GD_Vbry = 0x44, /* Dynamic Value */ kHIDUsage_GD_Vbrz = 0x45, /* Dynamic Value */ kHIDUsage_GD_Vno = 0x46, /* Dynamic Value */ This has me a bit confused because of the three R axes (though that does not appear to be uncommon) and the lack of a U axis. Two questions: 1) Can anyone confirm which axis the traditional U axis corresponds to? I saw one document describe it as "the axis for rudder pedals", leading me to believe it would be Rz. 2) Can anyone describe in more detail the typical usages of the V and Vbr axes? I understand the descriptions are "vector" and "relative vector", respectively, but I'm having difficulty visualizing what that means in terms of a physical device. All enlightenment and documentation pointers welcome.

    Read the article

  • JVM segmentation faults due to "Invalid memory access of location"

    - by Dan
    I have a small project written in Scala 2.9.2 with unit tests written using ScalaTest. I use SBT for compiling and running my tests. Running sbt test on my project makes the JVM segfault regularly, but just compiling and running my project from SBT works fine. Here is the exact error message: Invalid memory access of location 0x8 rip=0x10959f3c9 [1] 11925 segmentation fault sbt I cannot locate a core dump anywhere, but would be happy to provide it if it can be obtained. Running java -version results in this: java version "1.6.0_37" Java(TM) SE Runtime Environment (build 1.6.0_37-b06-434-11M3909) Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01-434, mixed mode) But I've also got Java 7 installed (though I was never able to actually run a Java program with it, afaik). Another issue that may be related: some of my test cases contain titles including parentheses like ( and ). SBT or ScalaTest (not sure) will consequently insert square parens in the middle of the output. For example, a test case with the name (..)..(..) might suddenly look like (..[)..](..). Any help resolving these issues is much appreciated :-) EDIT: I installed the Java 7 JDK, so now java -version shows the right thing: java version "1.7.0_07" Java(TM) SE Runtime Environment (build 1.7.0_07-b10) Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode) This also means that I now get a more detailed segfault error and a core dump: # # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x000000010a71a3e3, pid=16830, tid=19459 # # JRE version: 7.0_07-b10 # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.3-b01 mixed mode bsd-amd64 compressed oops) # Problematic frame: # V [libjvm.dylib+0x3cd3e3] And the dump.

    Read the article

  • Per-pixel per-component alpha blending in Windows

    - by Crend King
    I have a 24-bit bitmap with R, G, B color channels and a 24-bit bitmap with R, G, B alpha channels. I want to alpha blend the first bitmap onto an HDC in GDI, or a RenderTarget in Direct2D, using those per-channel alphas. For example, suppose for one pixel the bitmap color is (192, 192, 192), the HDC color is (0, 255, 255) and the alpha channels are (30, 40, 50). The final HDC color should be (22, 245, 242). I know I can BitBlt the HDC to a memory HDC first, do the alpha blending by manually calculating the color of each pixel, and finally BitBlt back. I just want to avoid the additional blitting and let the APIs do their job (faster, since they are in kernel space). The first idea that comes to mind is to split the source bitmap into three red-only, green-only and blue-only 8-bit bitmaps, do normal alpha blending, then composite the three output bitmaps into the HDC. But I can't find a way to do the splitting and composition natively in Windows (would a Direct2D layer help?). Also, the splitting and compositing may require a lot of additional copying, so the performance overhead would be too high. Or maybe do the alpha blending in three passes, where each pass applies the blending for one channel while leaving the other two unchanged. Thanks for any comment. EDIT: I found this question, and the answer should be a good reference for this problem. However, besides AC_SRC_OVER, there is no other blending operation supported. Why doesn't Microsoft improve this API?
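
    For clarity, a minimal sketch of the per-channel math the example above implies (plain C++ per-pixel arithmetic, not a GDI or Direct2D call):

        #include <cstdint>
        #include <cstdio>

        // result = (src * alpha + dst * (255 - alpha)) / 255, applied independently per channel
        uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha) {
            return static_cast<uint8_t>((src * alpha + dst * (255 - alpha)) / 255);
        }

        int main() {
            // Reproduces the example: bitmap (192,192,192), HDC (0,255,255),
            // alpha (30,40,50) -> 22 245 242
            std::printf("%d %d %d\n",
                        blend_channel(192, 0, 30),
                        blend_channel(192, 255, 40),
                        blend_channel(192, 255, 50));
            return 0;
        }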

    Read the article

  • TextMate tips for Rails Development

    - by Ganesh Shankar
    Working on Rails code for a bit has started me on the spiral into obsessively customising my dev environment (I say obsessive because at the last Rails meetup I went to there was some guy raving about shaving milliseconds off each line of code and therefore up to half an hour a day... I hope I don't become that guy...). I spend most of my time in TextMate, so it seemed like a great place to start the optimising. So far I've added a few TextMate bundles like Git Bundle, Project Plus and the theme from Railscasts. I've noticed some of the other TextMate users I've come into contact with using heaps of nifty keyboard shortcuts and other plugins to make their dev environment more friendly. Looking around the net, I was a bit overwhelmed by the number of shortcuts and plugins available... So I was hoping to hear from other Rails developers out there: what are some good keyboard shortcuts and plugins that I should be aware of for TextMate, with specific reference to Rails development? I've read this question on SO: http://stackoverflow.com/questions/99807/what-are-some-useful-textmate-shortcuts but I was wondering if there was something a bit more specific to Rails development.

    Read the article

  • Keeping track of leading zeros with BitSet in Java

    - by Ryan
    So, according to this question there are two ways to look at the size of a BitSet: size(), which is legacy and not really useful (I agree with this; the size is 64 after doing BitSet b = new BitSet(8)), and length(), which returns the index of the highest set bit. In the above example, length() will return 0. This is somewhat useful, but doesn't accurately reflect the number of bits the BitSet is supposed to be representing when you have leading zeros. The information I'm dealing with rarely (if ever) falls evenly into 8-bit bytes, and the leading 0s are just as important to me as the 1s. I have some data fields that are 333 bits long, some that are 20, etc. Is there a better way to deal with bit-level details in Java that will keep track of leading zeros? Otherwise, I'm going to have to 'roll my own', so to speak. I have a few ideas for that already, but I'd prefer not to reinvent the wheel if possible.

    Read the article

  • Find max integer size that a floating point type can handle without loss of precision

    - by Checkers
    A double has a greater range than a 64-bit integer, but its precision is lower due to its representation (since a double is also 64 bits, it can't hold more distinct values). So, when representing larger integers, you start to lose precision in the integer part. #include <boost/cstdint.hpp> #include <iostream> #include <limits> template<typename T, typename TFloat> void maxint_to_double() { T i = std::numeric_limits<T>::max(); TFloat d = i; std::cout << std::fixed << i << std::endl << d << std::endl; } int main() { maxint_to_double<int, double>(); maxint_to_double<boost::intmax_t, double>(); maxint_to_double<int, float>(); return 0; } This prints: 2147483647 2147483647.000000 9223372036854775807 9223372036854775800.000000 2147483647 2147483648.000000 Note how the max int fits into a double without loss of precision while boost::intmax_t (64-bit in this case) does not, and float can't even hold an int exactly. Now, the question: is there a way in C++ to check whether the entire range of a given integer type can fit into a floating point type without loss of precision? Preferably, it would be a compile-time check that can be used in a static assertion, and it would not involve enumerating constants the compiler should already know or be able to compute.
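
    One possible compile-time check, sketched with std::numeric_limits (this assumes a radix-2 floating point type; whether it counts as "enumerating constants" is debatable, since the limits come from the library rather than being hand-written):

        #include <limits>

        // An integer type fits losslessly when the float's mantissa has at least
        // as many bits as the integer has value bits: numeric_limits<>::digits is
        // 31 for int, 63 for a 64-bit signed integer, 53 for double and 24 for float.
        template <typename T, typename TFloat>
        struct fits_without_loss {
            static const bool value =
                std::numeric_limits<TFloat>::digits >= std::numeric_limits<T>::digits;
        };

        // Usage in a static assertion (C++11 syntax shown; BOOST_STATIC_ASSERT
        // works the same way on older compilers):
        // static_assert(fits_without_loss<int, double>::value, "int fits in double");
        // static_assert(!fits_without_loss<long long, double>::value, "64-bit int does not");
        // static_assert(!fits_without_loss<int, float>::value, "int does not fit in float");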

    Read the article

  • How do I use Perl's WWW::Facebook::API to publish to a user's newsfeed?

    - by Russell C.
    We use Facebook Connect on our site, in conjunction with the WWW::Facebook::API CPAN module, to publish to our users' newsfeeds when requested by the user. So far we've been able to successfully update the user's status using the following code: use WWW::Facebook::API; my $facebook = WWW::Facebook::API->new( desktop => 0, api_key => $fb_api_key, secret => $fb_secret, session_key => $query->cookie($fb_api_key.'_session_key'), session_expires => $query->cookie($fb_api_key.'_expires'), session_uid => $query->cookie($fb_api_key.'_user') ); my $response = $facebook->stream->publish( message => qq|Test status message|, ); However, when we try to extend the code above to publish newsfeed stories that include attachments and action links, as specified in the Facebook API documentation for Stream.Publish, nothing works; we have tried about 100 different variations without any success. According to the CPAN documentation, all we should have to do is update our code to something like the following and pass the attachments and action links appropriately, but this doesn't seem to work: my $response = $facebook->stream->publish( message => qq|Test status message|, attachment => $json, action_links => [@links], ); For example, we are passing the above arguments as follows: $json = qq|{ 'name': 'i\'m bursting with joy', 'href': ' http://bit.ly/187gO1', 'caption': '{*actor*} rated the lolcat 5 stars', 'description': 'a funny looking cat', 'properties': { 'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN'}, 'ratings': '5 stars' }, 'media': [{ 'type': 'image', 'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg', 'href': 'http://bit.ly/187gO1'}] }|; @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}","{'text':'Link 2', 'href':'http://www.link2.com'}"]; Neither the above nor any of the other representations we tried seems to work. I'm hoping some other Perl developer out there has this working and can explain how to create the attachment and action_links variables appropriately in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!

    Read the article

  • Why does the .NET tab in the 'Add Reference' dialog in Visual Studio not list the contents of the GAC?

    - by abroun
    Duplicate of: Getting assemblies to show in the .NET tab of Add Reference So, I'm using Visual C# 2008 Express Edition and I've just been on a bit of a detour, as I found out that my assumption that the .NET tab of the 'Add Reference' dialog lists the contents of the GAC was incorrect. This was a bit of a problem for me, as the assembly that I wanted to reference from my project was only available in the GAC. (It was Microsoft.XNA.Framework v2.0 obtained from the XNA 2.0 redistributable, and as far as I could see it installed only into the GAC.) I worked round the problem by setting the reference to Microsoft.XNA.Framework manually in the .csproj file and then getting a copy of the DLL out of the cache. I was then able to create a directory for the DLL, add it to Visual Studio's list of assembly directories in the registry and then voila! I could see it in the .NET tab. This all seems like a bit of a faff to me, and I don't think my initial assumption (that the .NET tab shows the contents of the GAC) was that unreasonable or that uncommon. Can someone who knows more than me tell me why the contents of the GAC aren't shown? The documentation just says that they are not, but is there a good reason? Is there actually a way to get the entire contents of the GAC to be listed? A tick box I've missed somewhere? Any info much appreciated.

    Read the article

  • Authenticated WCF: Getting the Current Security Context

    - by bradhe
    I have the following scenario: I have various users' data stored in my database. This data was entered via a web app. We'd like to expose this data back to the users over a web service so that they can integrate it with their applications. We would also like to expose some business logic over these services; as such we do not want to use OData. This is a multi-tenant application, so I only want to expose each user's own data back to them and not other users' data. Likewise, the business logic we expose should be relative to the authenticated user. I would like to let the user use an OASIS scheme to authenticate with the web service (WCF already allows for this out of the box, as far as I understand), or perhaps we can issue them certificates to authenticate with. That bit hasn't really been worked out yet. Here is a bit of pseudo-code for how I envision this working within the service: function GetUsersData(id) var user := Lookup User based on Username from Auth Context var data := Get Data From Repository based on "user" return data end function For the business logic scenario I think it would look something like this: function PerformBusinessLogic(someData) var user := Lookup User based on Username from Auth Context var returnValue := Perform some logic based on supplied data return returnValue end function The hard bit here is getting the current username (or cert info in the cert scenario) that the user authenticated with! Does WCF even enable this scenario? If not, would WSE3? Thanks,

    Read the article

  • Fast sign in C++ float...are there any platform dependencies in this code?

    - by Patrick Niedzielski
    Searching online, I have found the following routine for calculating the sign of a float in IEEE format. This could easily be extended to a double, too. // returns 1.0f for positive floats, -1.0f for negative floats, 0.0f for zero inline float fast_sign(float f) { if (((int&)f & 0x7FFFFFFF)==0) return 0.f; // test exponent & mantissa bits: is input zero? else { float r = 1.0f; (int&)r |= ((int&)f & 0x80000000); // mask sign bit in f, set it in r if necessary return r; } } (Source: "Fast sign for 32 bit floats", Peter Schoffhauzer) I am wary of using this routine, though, because of the raw bit manipulation of the float's representation. I need my code to work on machines with different byte orders, but I am not sure how much of this the IEEE standard specifies, as I couldn't find the most recent version, published this year. Can someone tell me if this will work, regardless of the byte order of the machine? Thanks, Patrick
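
    For what it's worth, a more portable C++ sketch of the same idea; it avoids the (int&) reference cast (which violates strict aliasing) and sidesteps the byte-order worry, because byte order affects how the 32-bit pattern is stored in memory, not the value of that pattern once it has been copied into an integer:

        #include <cstring>
        #include <cstdint>

        // Same semantics as fast_sign: 1.0f, -1.0f, or 0.0f for (+/-)zero.
        inline float fast_sign_portable(float f) {
            std::uint32_t bits;
            std::memcpy(&bits, &f, sizeof bits);        // well-defined type pun
            if ((bits & 0x7FFFFFFFu) == 0) return 0.0f; // +0.0f or -0.0f
            return (bits & 0x80000000u) ? -1.0f : 1.0f; // IEEE 754 keeps the sign in the top bit
        }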

    Read the article

  • Working with mongodb from Java

    - by demas
    I have launched the mongodb server: [demas@arch][~]% mongod --dbpath /home/demas/temp/ Mon Apr 19 09:44:18 Mongo DB : starting : pid = 4538 port = 27017 dbpath = /home/demas/temp/ master = 0 slave = 0 32-bit ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data ** see http://blog.mongodb.org/post/137788967/32-bit-limitations for more Mon Apr 19 09:44:18 db version v1.4.0, pdfile version 4.5 Mon Apr 19 09:44:18 git version: nogitversion Mon Apr 19 09:44:18 sys info: Linux arch.local.net 2.6.33-ARCH #1 SMP PREEMPT Mon Apr 5 05:57:38 UTC 2010 i686 BOOST_LIB_VERSION=1_41 Mon Apr 19 09:44:18 waiting for connections on port 27017 Mon Apr 19 09:44:18 web admin interface listening on port 28017 I have created documents via the console client: [demas@arch][~]% mongo MongoDB shell version: 1.4.0 url: test connecting to: test type "help" for help > db.some.find(); { "_id" : ObjectId("4bcbef3c3be43e9b7e04ef3d"), "name" : "mongo" } { "_id" : ObjectId("4bcbef423be43e9b7e04ef3e"), "x" : 3 } Now I am trying to work with MongoDB from Java: import com.mongodb.*; import java.net.UnknownHostException; public class test1 { public static void main(String[] args) { System.out.println("Start"); try { Mongo m = new Mongo("localhost", 27017); DB db = m.getDB("test"); DBCollection coll = db.getCollection("some"); coll.insert(makeDocument(10, "James", "male")); System.out.println("Finish"); } catch (UnknownHostException ex) { ex.printStackTrace(); } catch (MongoException ex) { ex.printStackTrace(); } } public static BasicDBObject makeDocument(int id, String name, String gender) { BasicDBObject doc = new BasicDBObject(); doc.put("id", id); doc.put("name", name); doc.put("gender", gender); return doc; } } But execution stops on the line coll.insert(): [demas@arch][~/dev/study/java/mongodb]% javac test1.java [demas@arch][~/dev/study/java/mongodb]% java test1 Start There are no messages from the mongodb server about an accepted connection. Why?

    Read the article

  • Pseudo code for instruction description

    - by Claus
    Hi, I am just trying to figure out the best and shortest way to describe two simple instructions with C-like pseudo code. The extract instruction is defined as follows: extract rd, rs, imm This instruction extracts the appropriate byte from the 32-bit source register rs and right-justifies it in the destination register. The byte is specified by imm and thus can take the values 0 (for the least-significant byte) to 3 (for the most-significant byte). rd = 0x0; // zero-extend result, i.e. make sure bits 31 to 8 are set to zero in the result rd = (rs && (0xff << imm)) >> imm; // this extracts the appropriate byte and stores it in rd The insert instruction can be regarded as the inverse operation: it takes a right-justified byte from the source register rs and deposits it in the appropriate byte of the destination register rd; again, this byte is determined by the value of imm. tmp = 0x0 XOR (rs << imm)) // shift the byte to the appropriate byte determined by imm rd = (rd && (0x00 << imm)) // set appropriate byte to zero in rd rd = rd XOR tmp // XOR the byte into the destination register This all looks a bit horrible, so I wonder if there is a slightly more elegant way to describe this behaviour in C-like style ;) Many thanks, Claus
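
    For comparison, one tidier way to write the two operations in C-like pseudo code (just a sketch; it assumes imm selects a byte, so the shift amount is imm * 8, and it uses bitwise & rather than logical &&):

        // extract rd, rs, imm : pull byte 'imm' out of rs, right-justified
        rd = (rs >> (imm * 8)) & 0xff;

        // insert rd, rs, imm : deposit the low byte of rs into byte 'imm' of rd
        mask = 0xff << (imm * 8);                         // the byte being replaced
        rd   = (rd & ~mask) | ((rs & 0xff) << (imm * 8)); // clear it, then OR in the new byte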

    Read the article

  • Help with animating an element in jQuery

    - by alex
    I have an unordered list with a few list elements. #tags { width: 300px; height: 300px; position: relative; border: 1px solid red; list-style: none none; } #tags li { position: absolute; background: gray; } I have also started writing a jQuery plugin to animate the list elements. So far, I place the list elements randomly in the parent container and choose a random font size for the text. My next step (which I am a bit stuck on) is to animate the list elements... Essentially, I want the list elements to do something like this: slide from left to right whilst getting slightly larger up to the middle, then dropping in size back to normal when they hit the right border. Then they should do the same in reverse, but also with a negative 'z-index' and maybe a bit of an opacity fade. The first bit I'm really stuck on is how to determine whether an element is near the middle, in a way that gives me a value that starts at 0.1 on the far left-hand side, is 1 in the middle, and goes back to 0.1 on the far right-hand side. Basically, I want them to appear as if they are going around in a faux-3D circle into the page. Then I could do something like this: $(this).css({ fontSize: percentageTowardsMiddle * 14, }); Do you know how I could do this? Thanks
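
    One way to get that value, sketched as math only (x and W here stand for the element's current left offset and the container width): percentageTowardsMiddle = 1 - 0.9 * |2*x/W - 1|. That expression is 0.1 at x = 0, rises linearly to 1 at x = W/2, and falls back to 0.1 at x = W.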

    Read the article

  • Why is unsigned int 0xFFFFFFFF equal to int -1?

    - by conejoroy
    Perhaps it's a very stupid question, but I'm having a hard time figuring this out =) In C or C++ it is said that the maximum number a size_t (an unsigned int data type) can hold is the same as casting -1 to that data type. For example, see http://stackoverflow.com/questions/1420982/invalid-value-for-sizet Why?? I'm confused. I mean (talking about 32-bit ints), AFAIK the most significant bit holds the sign in a signed data type (that is, bit 0x80000000 to form a negative number). Then 1 is 0x00000001, and 0x7FFFFFFF is the greatest positive number an int data type can hold. Then, AFAIK, the binary representation of -1 as an int should be 0x80000001 (perhaps I'm wrong). Why/how is this binary value converted to something completely different (0xFFFFFFFF) when casting an int to unsigned? Or, how is it possible to form a binary -1 out of 0xFFFFFFFF? I have no doubt that in C ((unsigned int)-1) == 0xFFFFFFFF or ((int)0xFFFFFFFF) == -1 is just as true as 1 + 1 == 2; I'm just wondering why. Thanks!
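
    A short C++ illustration of what two's complement machines actually do (the key point is that -1 is "all bits set"; 0x80000001 would be -1 only under a sign-and-magnitude representation, which mainstream CPUs do not use for integers):

        #include <cstdio>

        int main() {
            int a = -1;
            // In two's complement, -x == ~x + 1, so -1 == ~0: every bit is set.
            std::printf("%08X\n", static_cast<unsigned int>(a)); // prints FFFFFFFF
            // Converting back is implementation-defined in older C++ standards,
            // but on two's complement hardware it gives -1 again:
            std::printf("%d\n", static_cast<int>(0xFFFFFFFFu));  // prints -1
            return 0;
        }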

    Read the article

  • How to convert a printer driver to a stand-alone console application which can generate a printer file

    - by Thorbjørn Ravn Andersen
    I have a situation where the only way to generate a certain datafile is to print it manually to FILE: under Windows and save it in a file for further processing. I would really like to have a small stand-alone program which embeds this binary printer driver so I can run it from a batch file and have it generate that binary file for me, as we can then fully automate the "save file in Visio, 'print' it and upload it to the final destination and trigger a remote test". Is this possible with a suitable Windows SDK? I am a Java programmer, so I do not know Visual Studio and the possibilities with MSDN - yet! - but I'd appreciate pointers. EDIT: I have the installation files for that printer driver, both 32 and 64 bit. Older versions may include a 16 bit driver. EDIT: The "print to FILE:" functionality is just what was recommended by the documentation. I have played a little bit with using the LPR-protocol to see what it can do. I'd still prefer the "invoke small binary" approach.

    Read the article

  • Intel Assembly Programming

    - by Kay
    class MyString{ char buf[100]; int len; boolean append(MyString str){ int k; if(this.len + str.len>100){ for(k=0; k<str.len; k++){ this.buf[this.len] = str.buf[k]; this.len ++; } return false; } return true; } } Does the above translate to: start: push ebp ; save calling ebp mov ebp, esp ; setup new ebp push esi ; push ebx ; mov esi, [ebp + 8] ; esi = 'this' mov ebx, [ebp + 14] ; ebx = str mov ecx, 0 ; k=0 mov edx, [esi + 200] ; edx = this.len append: cmp edx + [ebx + 200], 100 jle ret_true ; if (this.len + str.len)<= 100 then ret_true cmp ecx, edx jge ret_false ; if k >= str.len then ret_false mov [esi + edx], [ebx + 2*ecx] ; this.buf[this.len] = str.buf[k] inc edx ; this.len++ aux: inc ecx ; k++ jmp append ret_true: pop ebx ; restore ebx pop esi ; restore esi pop ebp ; restore ebp ret true ret_false: pop ebx ; restore ebx pop esi ; restore esi pop ebp ; restore ebp ret false My greatest difficulty here is figuring out what to push onto the stack and the math for pointers. NOTE: I'm not allowed to use global variables and i must assume 32-bit ints, 16-bit chars and 8-bit booleans.

    Read the article

  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question is always going to be on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bit of a byte. So with the following structure and a=0, b=1, c=1, d=1, you get a byte of value e0. struct Bits { unsigned int a:5; unsigned int b:1; unsigned int c:1; unsigned int d:1; } __attribute__((__packed__)); (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six-bit integer. I can see why this won't work, but I coded the following structure: struct Bits2 { unsigned int a:6; unsigned int b:1; unsigned int c:1; unsigned int d:1; } __attribute__((__packed__)); Setting b, c, and d to 1, and a to 0, results in the following two bytes: c0 01 This isn't what I wanted. I was hoping to see this: e0 00 Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a bit layout defined by someone else's interface.
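
    If the layout really has to be the one described (three flag bits at the top of byte 0, then a six-bit field spanning into byte 1), one fallback is to drop bitfields and pack the bytes by hand. A minimal sketch, using the same field names as the question:

        #include <cstdint>

        // Byte 0: d c b a5 a4 a3 a2 a1   Byte 1: a0 0 0 0 0 0 0 0
        void pack_bits2(uint8_t out[2], unsigned a, unsigned b, unsigned c, unsigned d) {
            out[0] = static_cast<uint8_t>((d << 7) | (c << 6) | (b << 5) | ((a >> 1) & 0x1F));
            out[1] = static_cast<uint8_t>((a & 0x01) << 7);
        }

        // a=0, b=1, c=1, d=1  ->  out[0] == 0xE0, out[1] == 0x00, the "e0 00" asked for.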

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes; more if you address longer words). If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • How to determine if a registry key is redirected by WOW64?

    - by Luke
    Is it possible to determine whether or not a given registry key is redirected? My problem is that I want to enumerate registry keys in both the 32-bit and 64-bit registry views in a generic manner from a 32-bit application. I could simply open each key twice, first with KEY_WOW64_64KEY and then with KEY_WOW64_32KEY. However, if the key is not redirected this gives you exactly the same key and you end up enumerating the exact same content twice; this is what I am trying to avoid. I did find some documentation on it, but it looks like the only way is to examine the hive and do a bunch of string comparisons on the key. Another possibility I thought of is to try to open Wow6432Node on each subkey; if it exists then the key must be redirected. I.e. if I am trying to open HKCU\Software\Microsoft\Windows I would try to open the following keys: HKCU\Wow6432Node, HKCU\Software\Wow6432Node, HKCU\Software\Microsoft\Wow6432Node, and HKCU\Software\Microsoft\Windows\Wow6432Node. Unfortunately, the documentation seems to imply that a child of a redirected key is not necessarily redirected so that route also has issues. So, what are my options here?

    Read the article

  • Show image from clipboard in Windows' default image viewer using C#.NET

    - by Rajesh Rolen- DotNet Developer
    I am using the function below to make an image of the current form and put it on the clipboard: Image bit = new Bitmap(this.Width, this.Height); Graphics gs = Graphics.FromImage(bit); gs.CopyFromScreen(this.Location, new Point(0, 0), bit.Size); Guid guid = System.Guid.NewGuid(); string FileName = guid.ToString(); //Copy that image to the clipboard. Image imgToCopy = Image.FromFile(Path.Combine(Environment.CurrentDirectory, FileName + ".jpg")); Clipboard.SetImage(imgToCopy); Now my image is on the clipboard and I am able to show it in a picture box on another form using the code below: mypicturebox.Image = Clipboard.GetImage(); The problem is that I now want to show it in the system's default image viewer. I think this can be done with "System.Diagnostics.Process.Start", but I don't know how to find the default image viewer or how to pass the clipboard's image to it. If there is no direct solution, my fallback is to save the file from the clipboard to the hard disk and then view it in Windows' default image viewer. Please help me resolve this; I am using C#.NET.

    Read the article

  • How can I improve this search usability?

    - by Craig Whitley
    This is my first real programming attempt, and there are some major flaws. It's a learning project, and I'm currently re-writing the entire thing as my PHP is really messy. At the same time, though, I really want to get an idea of how I can improve the actual usability and accessibility of the site, so I know how to implement it correctly. The website is basically a comparison website for gameserver hosting. As I mentioned, it's a learning project and I don't actually expect any revenue from it. At the moment there's only test data in it, so in the game input box select either 'Battlefield Bad Company 2' or 'Call of Duty 4: Modern Warfare' and ignore the actual search results. http://www.laglessfrag.com I wasn't really sure how to work the search functionality. Basically, when you click a game in the drop-down box, it sends an ajax request and finds all the locations available for that specific game in the database. After selecting the country, there's another ajax request to find all the cities available for the game in that country, which gives me the two unique identifiers I need to create the search results. One major and fundamental flaw is that without JavaScript enabled, the site ceases to function. I'll overcome that in the next re-write, but without the ajax functionality stopping the user 'going wrong', how can I implement a search that requires two fields without creating extra steps in new pages after form submissions? I'm also no designer, so my whole layout and CSS is a bit rubbish, but this was mainly a learning project as I'm interested in applications / programming rather than design. It's also slow as it's on shared hosting, but if I can get it to work correctly then I'm not opposed to chucking a bit of money at it for faster hosting and maybe a bit of advertising and seeing where it goes (if anywhere!). Any info appreciated.

    Read the article

  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is: Start Swank with lein swank. Go to Emacs, connect with M-x slime-connect. Load all existing source files one by one. This also starts a Jetty server and an application. Write some code in REPL. When satisfied with experiments, write a full version of a construct I had in mind. Eval (C-c C-c) it. Switch REPL to namespace where this construct resides and test it. Switch to browser and reload browser tab with the affected page. Tweak the code, eval it, check in the browser. Repeat any of the above. There are a number of annoyances with it: I have to switch between Emacs and the browser (or browsers if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloads the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds. My JVM instance becomes "dirty" when I experiment and write test functions. Basically namespaces become polluted, especially if I'm refactoring and moving the functions between namespaces. This can lead to symbol collisions and I need to restart Swank. Can I undef a symbol? I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong. Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s). I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate if you'd share yours. P. S. I have also used Vimclojure before, so Vimclojure-based workflows are welcome too.

    Read the article

  • OpenSL ES decode 24bit FLAC

    - by yano
    I am trying to decode a FLAC file with 24bit sample format using OpenSL ES on Android. Originally, I had my SLDataFormat_PCM for the SLDataSink setup like this. _pcm.formatType = SL_DATAFORMAT_PCM; _pcm.numChannels = 2; _pcm.samplesPerSec = SL_SAMPLINGRATE_44_1; _pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16; _pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16; _pcm.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT; _pcm.endianness = SL_BYTEORDER_LITTLEENDIAN; This is working well for basically any data format. Luckily the samplesPerSec is not respected (I don't want resampling). Now I want to support the full bit-depth of a FLAC file with 24bit samples. When using this format, it apparently performs a bit-depth conversion, because once I load the file, and then check the ANDROID_KEY_PCMFORMAT_BITSPERSAMPLE info, it is 16. When I put bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_24; or SL_PCMSAMPLEFORMAT_FIXED_32, then OpenSL ES rejects it E/libOpenSLES(22706): pAudioSnk: bitsPerSample=32 W/libOpenSLES(22706): Leaving Engine::CreateAudioPlayer (SL_RESULT_CONTENT_UNSUPPORTED) Any idea how this is meant to work? Is Android currently restricted to 16 bit int only? I would also accept 32bit float, but I don't suppose that will work either.

    Read the article

  • Implementing scroll view that is much larger than the screen view with random images

    - by pulegium
    What I'm trying to do is implement something like a fruit machine scroll view. Basically I have a sequence of images (randomly generated on the fly) and I want to allow the user to scroll through this line. Approximately 10 images fit on the screen, but the line is virtually unlimited. My question is: what would be the best approach to implementing this? So far I've thought of having a UIImageView going across the screen (with width equal to the sum of 10 images), where the image associated with it would be a composite of 12 or so images, with two images falling outside the visible area; this would allow for smooth scrolling. If the user scrolls further, I would then reconstruct the image associated with the view so that new images are appended and the old ones are discarded. This image reconstruction business sounds a bit complicated, so I was wondering if there's a more logical way to implement it. There's one more thing: I want to have two lines of images crossing each other, a bit like conveyor belts crossing. If that makes any sense... A bit like below: V1 V2 H1 H2 H3 H4 H5 V4 V5 So if the vertical belt is moved it'd be like: V2 H3 H1 H2 V4 H4 H5 V5 V6 H1-H5 and V1-V6 being automatically generated images. I'm not asking for the implementation code, just thoughts on the principles of how to implement this. Thanks!! :)

    Read the article
