Search Results

  • zsh right-justify in PS1

    - by Nate
    I'd like a multi-line zsh prompt with a right-aligned part, which will look something like this:

        2.nate@host:/current/dir                                16:00
        ->

    I know about RPROMPT in zsh, but that gives a right-aligned prompt opposite your normal prompt, on the same line of text as your typing. Is there a way to have a right-aligned portion on the first line of a multi-line command prompt? I'm looking for either a directive in the PS1 variable that says 'right-align now' or a variable that is to PS1 what RPROMPT is to PROMPT. Thanks!
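    Since the question mentions PROMPT and RPROMPT already, here is one common workaround, offered only as a hedged sketch (it assumes a prompt without color escapes, since those would throw off the width arithmetic): print the right-aligned first line yourself from precmd, and keep PS1 single-line.

        # sketch: emit the top prompt line manually, padded to the terminal
        # width, then let PS1 draw only the second line
        precmd() {
          local left='%n@%m:%~' right='%T'
          # expand the prompt escapes so their visible length can be measured
          local left_exp=${(%)left} right_exp=${(%)right}
          local -i pad=$(( COLUMNS - ${#left_exp} - ${#right_exp} ))
          (( pad < 1 )) && pad=1
          local fill=''
          print -rP -- "${left}${(l:pad:: :)fill}${right}"
        }
        PS1='-> '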

    Read the article

  • GCC, -O2, and bitfields - is this a bug or a feature?

    - by Rooke
    Today I discovered alarming behavior when experimenting with bit fields. For the sake of discussion and simplicity, here's an example program:

        #include <stdio.h>

        struct Node
        {
            int a:16 __attribute__ ((packed));
            int b:16 __attribute__ ((packed));

            unsigned int c:27 __attribute__ ((packed));
            unsigned int d:3 __attribute__ ((packed));
            unsigned int e:2 __attribute__ ((packed));
        };

        int main (int argc, char *argv[])
        {
            Node n;
            n.a = 12345;
            n.b = -23456;
            n.c = 0x7ffffff;
            n.d = 0x7;
            n.e = 0x3;

            printf("3-bit field cast to int: %d\n", (int)n.d);
            n.d++;
            printf("3-bit field cast to int: %d\n", (int)n.d);
        }

    The program purposely causes the 3-bit bit-field to overflow. Here's the (correct) output when compiled using "g++ -O0":

        3-bit field cast to int: 7
        3-bit field cast to int: 0

    Here's the output when compiled using "g++ -O2" (and -O3):

        3-bit field cast to int: 7
        3-bit field cast to int: 8

    Checking the assembly of the latter example, I found this:

        movl    $7, %esi
        movl    $.LC1, %edi
        xorl    %eax, %eax
        call    printf
        movl    $8, %esi
        movl    $.LC1, %edi
        xorl    %eax, %eax
        call    printf
        xorl    %eax, %eax
        addq    $8, %rsp

    The optimizer has simply inserted "8", assuming 7+1=8, when in fact the number overflows and wraps to zero. Fortunately the code I care about doesn't overflow, as far as I know, but this situation scares me: is this a known bug, a feature, or expected behavior? When can I expect gcc to be right about this?

    Edit (re: signed/unsigned): It's being treated as unsigned because it's declared as unsigned. Declaring it as int, you get this output (with -O0):

        3-bit field cast to int: -1
        3-bit field cast to int: 0

    An even funnier thing happens with -O2 in this case:

        3-bit field cast to int: 7
        3-bit field cast to int: 8

    I admit that attribute is a fishy thing to use; in this case it's the difference in optimization settings I'm concerned about.
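    Not an answer to the bug-or-feature question, but one defensive workaround, offered as a hedged sketch: write the 3-bit wrap-around into the source explicitly, so the printed result no longer depends on how the optimizer models bit-field overflow.

        // make the modulo-8 wrap explicit instead of relying on
        // bit-field overflow semantics
        n.d = (n.d + 1u) & 0x7u;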

    Read the article

  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again let's dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. Probably every one of us has used an enumerated type at one time or another in a C# program. The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values, in the case of bit flags). But the power of enum types goes far beyond simple assignment and comparison; there are many methods in the Enum class (which all enum types “inherit” from) that can give you even more power when dealing with them.

    IsDefined() – check if a given value exists in the enum

    Are you reading a value for an enum from a data source, but unsure if it is actually a valid value or not? Casting won't tell you this, and Parse() isn't guaranteed to balk either if you give it an int or a combination of flags. So what can we do? Let's assume we have a small enum like this for result codes we want to return from our business logic layer:

        public enum ResultCode
        {
            Success,
            Warning,
            Error
        }

    In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this, where perhaps we're getting the result code from another data source (could be a database, could be a web service, etc.)?

        public ResultCode PerformAction()
        {
            // set up and call some method that returns an int.
            int result = ResultCodeFromDataSource();

            // this will succeed even if result is < 0 or > 2.
            return (ResultCode) result;
        }

    So what happens if result is –1 or 4? Well, the cast does not fail, so what we end up with is an instance of ResultCode whose value is outside the bounds of the enum constants we defined. This means that if you had a block of code like:

        switch (result)
        {
            case ResultCode.Success:
                // do success stuff
                break;

            case ResultCode.Warning:
                // do warning stuff
                break;

            case ResultCode.Error:
                // do error stuff
                break;
        }

    you would hit none of these blocks (which is a good argument for always having a default in a switch, by the way). So what can you do? Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined:

        public ResultCode PerformAction()
        {
            int result = ResultCodeFromDataSource();

            if (!Enum.IsDefined(typeof(ResultCode), result))
            {
                throw new InvalidOperationException("Enum out of range.");
            }

            return (ResultCode) result;
        }

    In fact, this is often recommended after you Parse() or cast a value to an enum, as there are ways for values to get past these methods that may not be defined. If you don't like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that lets you call IsDefined() off any instance of the enum:

        public static class EnumExtensions
        {
            // helper method that tells you if an enum value is defined for its enumeration
            public static bool IsDefined(this Enum value)
            {
                return Enum.IsDefined(value.GetType(), value);
            }
        }

    HasFlag() – an easier way to see if a bit (or bits) are set

    Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives.
    As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types, see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit-flag enum values may be obvious to an advanced developer but cryptic to a novice. For example, let's say you have an enum for a messaging platform that contains bit flags:

        // usually, we pluralize flags enum type names
        [Flags]
        public enum MessagingOptions
        {
            None = 0,
            Buffered = 0x01,
            Persistent = 0x02,
            Durable = 0x04,
            Broadcast = 0x08
        }

    We can combine these bit flags using the bitwise OR operator (the '|' pipe character):

        // combine bit flags using |
        var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast);

    Now, if we wanted to check the flags, we'd have to test them using the bitwise AND operator (the '&' character):

        if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered)
        {
            // do code to set up buffering...
            // ...
        }

    While the '|' for combining flags is easy enough to read for advanced developers, the '&' test tends to be easy for novice developers to get wrong. First of all, you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (not just for a non-zero result)! This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but will not necessarily contain all the desired bits! Thank goodness in .NET 4.0 they gave us the HasFlag() method. This method can be called on an enum instance to test whether a flag is set, and best of all you can avoid writing the bitwise logic yourself. Not to mention it will be more readable to a novice developer as well:

        if (options.HasFlag(MessagingOptions.Buffered))
        {
            // do code to set up buffering...
            // ...
        }

    It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don't allow you to specialize on Enum, which makes it a bit more difficult. It can be done, but you have to do some conversions to numeric and back to the enum, which makes it less of a payoff than having the HasFlag() method. But if you want to create it for symmetry, it would look something like this (a quick usage sketch follows below):

        public static T SetFlag<T>(this Enum value, T flags)
        {
            if (!value.GetType().IsEquivalentTo(typeof(T)))
            {
                throw new ArgumentException("Enum value and flags types don't match.");
            }

            // yes this is ugly, but unfortunately we need to use an intermediate boxing cast
            return (T)Enum.ToObject(typeof(T), Convert.ToUInt64(value) | Convert.ToUInt64(flags));
        }

    Note that since enum types are value types, we need to assign the result to something (much like string.Trim()). Also, you could chain several SetFlag() operations together or create one that takes a variable argument list if desired.
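    As promised above, a quick usage sketch of the SetFlag() extension just defined (all names as in this post); since enums are value types, each call returns a new value, which is why the result is captured and can be chained:

        // start from None and turn flags on one at a time
        var options = MessagingOptions.None
            .SetFlag(MessagingOptions.Buffered)
            .SetFlag(MessagingOptions.Broadcast);

        Console.WriteLine(options.HasFlag(MessagingOptions.Buffered));   // True
        Console.WriteLine(options.HasFlag(MessagingOptions.Broadcast));  // True
        Console.WriteLine(options.HasFlag(MessagingOptions.Durable));    // False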
    Parse() and ToString() – transitioning from string to enum and back

    Sometimes you may want to parse an enum from a string or convert it to a string; Enum has methods built in to let you do both. Now, many may already know this, but may not appreciate how much power is in these two methods. For example, if you want to parse a string as an enum, it's easy and works just like you'd expect from the numeric types:

        string optionsString = "Persistent";

        // can use Enum.Parse, which throws if it finds something it doesn't like...
        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result == MessagingOptions.Persistent)
        {
            Console.WriteLine("It worked!");
        }

    Note that Enum.Parse() will throw if it finds a value it doesn't like. But the values it likes are fairly flexible! You can pass in a single value, or a comma-separated list of values for flags, and it will parse them all and set all the bits:

        // for string values, can have one, or comma separated.
        string optionsString = "Persistent, Buffered";

        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
        {
            Console.WriteLine("It worked!");
        }

    Or you can pass in a string containing a number that represents a single value or combination of values to set:

        // 3 is the combination of Buffered (0x01) and Persistent (0x02)
        var optionsString = "3";

        var result = (MessagingOptions)Enum.Parse(typeof(MessagingOptions), optionsString);

        if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
        {
            Console.WriteLine("It worked again!");
        }

    And if you really aren't sure the parse will work and don't want to handle an exception, you can use TryParse() instead:

        string optionsString = "Persistent, Buffered";
        MessagingOptions result;

        // TryParse returns true if successful, and takes an out parameter for the result
        if (Enum.TryParse(optionsString, out result))
        {
            if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered))
            {
                Console.WriteLine("It worked!");
            }
        }

    So we've covered parsing a string to an enum; what about reversing that and converting an enum to a string? The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictates how they are written as a string?

        MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent;

        // general format, which is the default
        Console.WriteLine("Default    : " + value);
        Console.WriteLine("G (default): " + value.ToString("G"));

        // flags format, even if the type does not have the Flags attribute
        Console.WriteLine("F (flags)  : " + value.ToString("F"));

        // integer format, value as a number
        Console.WriteLine("D (num)    : " + value.ToString("D"));

        // hex format, value as hex
        Console.WriteLine("X (hex)    : " + value.ToString("X"));

    Which displays:

        Default    : Buffered, Persistent
        G (default): Buffered, Persistent
        F (flags)  : Buffered, Persistent
        D (num)    : 3
        X (hex)    : 00000003

    Now, you may not really see a difference here between G and F because I used a [Flags] enum; the difference is that the "F" option treats the enum as if it were flags even if the [Flags] attribute is not present. Let's take a non-flags enum like the ResultCode used earlier:

        // yes, we can do this even if it is not a [Flags] enum.
        ResultCode value = ResultCode.Warning | ResultCode.Error;

    And if we run that through the same formats again, we get:

        Default    : 3
        G (default): 3
        F (flags)  : Warning, Error
        D (num)    : 3
        X (hex)    : 00000003

    Notice that since we had multiple values combined, but it was not a [Flags]-marked enum, the G and default formats gave us a number instead of a value name. This is because the value was not a valid single-value constant of the enum. However, the F flags format broke the value out into its component flags even though it wasn't marked [Flags]. So, if you want an enum to display appropriately for whether or not it has the [Flags] attribute, use G, which is the default. If you always want it to attempt to break down the flags, use F. For numeric output, D or X is the best choice, depending on whether you want decimal or hex.

    Summary

    Hopefully you learned a couple of new tricks for using the Enum class today! I'll add more little wonders as I think of them, and thanks for all the invaluable input!

    Technorati Tags: C#,.NET,Little Wonders,Enum,BlackRabbitCoder

    Read the article

  • Why do 32-bit color EGL configurations fail with EGL_BAD_MATCH on Moto Droid?

    - by Gilead
    I'm trying to figure out why certain EGL configurations cause the eglMakeCurrent() call to return EGL_BAD_MATCH on a Motorola Droid running Android 2.1u1. This is the full list of hardware-accelerated EGL configurations (those with EGL_CONFIG_CAVEAT == EGL_NONE); there are a few others with EGL_CONFIG_CAVEAT == EGL_SLOW_CONFIG, but those are backed by PixelFlinger 1.2, meaning they use the software renderer.

        ID: 0   RGB: 8, 8, 8   Alpha: 8   Depth: 24   Stencil: 8   // BAD MATCH
        ID: 1   RGB: 8, 8, 8   Alpha: 8   Depth: 0    Stencil: 0   // BAD MATCH
        ID: 2   RGB: 8, 8, 8   Alpha: 8   Depth: 24   Stencil: 8   // BAD MATCH
        ID: 3   RGB: 8, 8, 8   Alpha: 8   Depth: 24   Stencil: 8   // BAD MATCH
        ID: 4   RGB: 8, 8, 8   Alpha: 8   Depth: 0    Stencil: 0   // BAD MATCH
        ID: 5   RGB: 8, 8, 8   Alpha: 8   Depth: 24   Stencil: 8   // BAD MATCH
        ID: 6   RGB: 5, 6, 5   Alpha: 0   Depth: 24   Stencil: 8
        ID: 7   RGB: 5, 6, 5   Alpha: 0   Depth: 0    Stencil: 0
        ID: 8   RGB: 5, 6, 5   Alpha: 0   Depth: 24   Stencil: 8

    Clearly, all configurations with 32-bit color depth fail and all 16-bit ones are OK, but:

    1. Why?
    2. WHY?! :)
    3. How do I tell which ones would fail before actually trying to use them?

    The code below is as simple as it can get. I put if (v[0] == 6) there to check different configs; normally they're chosen by a half-clever config matcher :)

        private void createSurface(SurfaceHolder holder) {
            egl = (EGL10)EGLContext.getEGL();
            eglDisplay = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY);
            egl.eglInitialize(eglDisplay, null);

            int[] numConfigs = new int[1];
            egl.eglChooseConfig(eglDisplay, new int[] { EGL10.EGL_NONE }, null, 0, numConfigs);
            EGLConfig[] configs = new EGLConfig[numConfigs[0]];
            egl.eglChooseConfig(eglDisplay, new int[] { EGL10.EGL_NONE }, configs, numConfigs[0], numConfigs);

            int[] v = new int[1];
            for (EGLConfig c : configs) {
                egl.eglGetConfigAttrib(eglDisplay, c, EGL10.EGL_CONFIG_ID, v);
                if (v[0] == 6) {
                    eglConfig = c;
                }
            }

            eglContext = egl.eglCreateContext(eglDisplay, eglConfig, EGL10.EGL_NO_CONTEXT, null);
            if (eglContext == null || eglContext == EGL10.EGL_NO_CONTEXT) {
                throw new RuntimeException("Unable to create EGL context");
            }

            eglSurface = egl.eglCreateWindowSurface(eglDisplay, eglConfig, holder, null);
            if (eglSurface == null || eglSurface == EGL10.EGL_NO_SURFACE) {
                throw new RuntimeException("Unable to create EGL surface");
            }

            if (!egl.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext)) {
                throw new RuntimeException("Unable to make EGL current");
            }

            gl = (GL10)eglContext.getGL();
        }
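    On question 3 specifically, two checks may help; this is a hedged sketch using standard Android and EGL calls, and whether it resolves the Droid behavior here is an assumption: request a 32-bit window format before creating the surface, and compare each config's EGL_NATIVE_VISUAL_ID against the window format before committing to it.

        // a window surface whose pixel format does not match the config's
        // color depth is a classic source of EGL_BAD_MATCH; ask for a
        // 32-bit window up front
        holder.setFormat(android.graphics.PixelFormat.RGBA_8888);

        // query the native visual id the config expects, so it can be
        // checked against the window format before eglCreateWindowSurface()
        int[] nativeVisual = new int[1];
        egl.eglGetConfigAttrib(eglDisplay, eglConfig, EGL10.EGL_NATIVE_VISUAL_ID, nativeVisual);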

    Read the article

  • Any downsides to UPX-ing my 32-bit Python 2.6.4 development environment EXE/PYD/DLL files?

    - by Malcolm
    Are there any downsides to UPX-ing my 32-bit Python 2.6.4 development environment EXE/PYD/DLL files? The reason I'm asking is that I frequently use a custom PY2EXE script that UPXes copies of these files on every build. Yes, I could get fancy and try to cache UPXed files, but I think a simpler, safer, and higher-performance solution would be for me to just UPX my Python 2.6.4 directory once and be done with it. Thoughts? Malcolm

    Read the article

  • Microsoft Windows 64-bit application development best practices: installation folder

    - by abmv
    My problem is that a vendor is providing me with a 64-bit application (packed in a 64-bit installer), but it installs to the Program Files (x86) folder. He keeps telling me it's OK, but I want it to install in the Program Files directory, since the 32-bit version does that and scripts for the app were developed based on this assumption. Can someone direct me to the Microsoft-recommended best practices for 64-bit applications (links)? Thanks in advance.
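    For reference, a hedged sketch of how this is usually authored with Windows Installer tooling (WiX 3 syntax; "VendorApp" is a hypothetical name): the package must itself be marked x64 and rooted under ProgramFiles64Folder, otherwise Windows Installer redirects it to Program Files (x86) regardless of the bitness of the binaries inside.

        <!-- sketch: mark the MSI itself as 64-bit -->
        <Package Platform="x64" InstallerVersion="200" Compressed="yes" />

        <Directory Id="TARGETDIR" Name="SourceDir">
          <!-- ProgramFiles64Folder resolves to C:\Program Files on 64-bit Windows -->
          <Directory Id="ProgramFiles64Folder">
            <Directory Id="INSTALLDIR" Name="VendorApp" />
          </Directory>
        </Directory>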

    Read the article

  • What alternatives to __attribute__ exist on 64-bit kernels?

    - by Saifi Khan
    Hi: Is there any alternative to the non-ISO, gcc-specific extension __attribute__ in 64-bit kernel code? Three kinds I've noticed are function attributes, type attributes, and variable attributes. For example, I'd like to avoid using __attribute__((__packed__)) for structures passed over the network, even though some gcc-based code does use it. Any suggestions or pointers on how to entirely avoid __attribute__ usage in C systems/kernel code? Thanks, Saifi.
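    One way to avoid __attribute__((__packed__)) for data that crosses the network, sketched below with a hypothetical two-field message (an illustration of the technique, not a claim about any particular kernel's conventions): serialize field by field, so the wire format is defined by code rather than by struct layout and padding.

        #include <stddef.h>
        #include <stdint.h>

        /* hypothetical message; natural (unpacked) layout in memory */
        struct msg {
            uint16_t type;
            uint32_t length;
        };

        /* write big-endian bytes explicitly; the on-wire layout no longer
           depends on the compiler's struct padding, so no packing
           attribute is needed */
        static size_t msg_pack(const struct msg *m, uint8_t buf[6])
        {
            buf[0] = (uint8_t)(m->type >> 8);
            buf[1] = (uint8_t)(m->type);
            buf[2] = (uint8_t)(m->length >> 24);
            buf[3] = (uint8_t)(m->length >> 16);
            buf[4] = (uint8_t)(m->length >> 8);
            buf[5] = (uint8_t)(m->length);
            return 6;
        }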

    Read the article

  • How do I bit shift a long by more than 32 bits?

    - by mach7
    It seems like I should be able to bit-shift by more than 32 bits in C/C++ provided the left operand of the shift is a long. But this doesn't seem to work, at least with the g++ compiler. Example: unsigned long A = (1L << 37) gives A = 0, which isn't what I want. Am I missing something, or is this just not possible? -J
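    For what it's worth, a hedged sketch of the usual fix, assuming a platform where long (and therefore the 1L literal) is only 32 bits wide: make the left operand a 64-bit type before shifting, since shifting by an amount greater than or equal to the width of the promoted left operand is undefined.

        #include <cstdint>
        #include <cstdio>

        int main() {
            // 1L << 37 is undefined when long is 32 bits: the shift count
            // exceeds the width of the left operand
            unsigned long long a = 1ULL << 37;           // 64-bit literal
            std::uint64_t b = std::uint64_t(1) << 37;    // explicit 64-bit type

            std::printf("%llu %llu\n", a, (unsigned long long)b);
            return 0;
        }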

    Read the article

  • Java plugin in the browser doesn't work even though it is enabled

    - by Pratyush Nalam
    I installed the Java Development Kit (64-bit) recently and saw that it includes the 64-bit JRE plugin as well. But since Firefox is 32-bit, I also installed the 32-bit JRE; both show up in Programs and Features. Now, the problem is that the other day I opened a site which required the Java plugin. The frame showed the regular Java loading animation and hung; nothing happened after that. I checked Firefox's plugins section and it shows Java is enabled, so no issue there. I tried other browsers, IE10 and Chrome, but to no avail; it doesn't work anywhere. I saw another question which said that you have to install 64-bit first, then 32-bit. That's what I actually did as well: first installed JDK 7 64-bit (which includes JRE 7 64-bit) and then installed JRE 7 32-bit. I even tried the Java website's "Do I have Java?" section, and there too it just keeps spinning for ages (I have waited for more than 10-20 seconds). How do I proceed? This never happened to me on Windows 7. I am on Windows 8 Pro.

    Read the article

  • Anyone know how to get dual screens working on a Dell E6410 laptop with Ubuntu 10.04 64 bit?

    - by Curtis
    I've installed the drivers from nVidia. When I go into the NVIDIA X Server Settings application, in the X Server Display Configuration section, and click the "Configure" button, "TwinView" is disabled. Also, clicking "Detect Displays" doesn't pick up my monitor (which is connected through a port replicator; the keyboard and mouse on that port replicator work fine). Has anyone else seen this? Is this just a limitation of the current nvidia Linux drivers?

    Read the article

  • PHP oci8 DLL not loading on 64-bit Windows XP. What am I doing wrong?

    - by user47354
    On Windows XP 64-bit, I installed Apache, PHP, etc. Everything works fine except the Oracle part. I can connect to Oracle from SQL Developer, which means my tnsnames.ora file is correct. When Apache starts, there are no errors in the logs, but when I try to connect to Oracle from PHP, the Oracle module php_oci8.dll is not loaded. What am I doing wrong?

    - The oci8 dll line in php.ini is there, and it is uncommented.
    - There are no errors in the Apache logs.
    - extension_dir in the php.ini file points to the correct location.
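    As a first diagnostic step, it may help to confirm from PHP itself whether the extension loaded and which extension directory PHP is really using at runtime; a minimal sketch with standard PHP functions:

        <?php
        // did the oci8 extension actually load?
        var_dump(extension_loaded('oci8'));

        // which extension_dir is PHP using at runtime?
        var_dump(ini_get('extension_dir'));

        // full report; search the output for an "oci8" section
        // and for startup errors
        phpinfo();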

    Read the article
