Search Results

Search found 10366 results on 415 pages for 'const char pointer'.


  • Can replacing statements with expressions using the C++ comma operator allow more compiler optimizations?

    - by Gabriel Cuvillier
    The C++ comma operator is used to chain individual expressions, yielding the value of the last evaluated expression as the result. For example, the skeleton code (6 statements, 6 expressions): step1; step2; if (condition) { step3; return step4; } else return step5; may be rewritten as (1 statement, 6 expressions): return step1, step2, condition ? step3, step4 : step5; I noticed that it is not possible to perform step-by-step debugging of such code, as the expression chain seems to be executed as a whole. Does it mean that the compiler is able to perform special optimizations which are not possible with the traditional statement approach (especially if the steps are const or inline)? Note: I'm not talking about the coding-style merits of that way of expressing a sequence of expressions! Just about the possible optimizations allowed by replacing statements with expressions.
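
    Not an answer, but a compilable restatement of the two forms being compared; the step functions below are hypothetical stand-ins, and nothing here claims the optimizer actually treats the two forms differently:

        #include <iostream>

        int step1() { return 1; }
        int step2() { return 2; }
        int step3() { return 3; }
        int step4() { return 4; }
        int step5() { return 5; }

        // Statement form: 6 statements.
        int asStatements(bool condition) {
            step1();
            step2();
            if (condition) { step3(); return step4(); }
            else return step5();
        }

        // Expression form: 1 statement; the comma operator evaluates its operands
        // left to right and yields the value of the last one.
        int asExpression(bool condition) {
            return step1(), step2(), condition ? (step3(), step4()) : step5();
        }

        int main() {
            std::cout << asStatements(true) << ' ' << asExpression(true) << '\n';  // 4 4
        }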

    Read the article

  • Determining a location's side of the road position with MapKit

    - by idStar
    Is there a way to use MapKit in iOS 7 to not only geocode an address, but also determine which side of the road it is on? I'm not trying to build a full-function navigation app, but similar to navigation apps that tell you things like "Your destination is in 100 feet on the right", I'd like to be able to obtain this information. Does MapKit give one a way to do this? Do I need to adopt routing privileges to access it, or is this simply not available in the SDK at all? If it's not available in the iOS 7 SDK, a pointer to what options exist as services would be helpful.

    Read the article

  • LLVM/Clang: creating functions on the fly from a struct definition

    - by anon
    I'm using LLVM-clang on Linux. Suppose in foo.cpp I have: struct Foo { int x, y; }; How can I create a function "magic" such that: typedef (Foo) SomeFunc(Foo a, Foo b); SomeFunc func = magic("struct Foo { int x, y; };"); so that: func(a, b); // returns a.x + b.y, where a and b are Foo? Note: so basically, "magic" needs to take a char*, have LLVM parse it to learn how C++ lays out the struct, then create a function on the fly that returns a.x + b.y. Thanks!
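
    Leaving aside the LLVM/Clang JIT machinery (which is the hard part the question is really about), a sketch of the function-pointer plumbing implied by the pseudocode above; the int return type and the addXY stand-in are assumptions, not part of the question:

        #include <cstdio>

        struct Foo { int x, y; };

        // Function-pointer type for the generated function.
        typedef int (*SomeFunc)(Foo a, Foo b);

        // Stand-in for whatever a JIT-compiled "magic" would hand back.
        int addXY(Foo a, Foo b) { return a.x + b.y; }

        int main() {
            SomeFunc func = addXY;   // real code: func = magic("struct Foo { int x, y; };");
            Foo a = { 1, 2 }, b = { 3, 4 };
            std::printf("%d\n", func(a, b));   // prints 5 (a.x + b.y)
        }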

    Read the article

  • Are there any Python reference counting/garbage collection gotchas when dealing with C code?

    - by Jason Baker
    Just for the sheer heck of it, I've decided to create a Scheme binding to libpython so you can embed Python in Scheme programs. I'm already able to call into Python's C API, but I haven't really thought about memory management. The way mzscheme's FFI works is that I can call a function, and if that function returns a pointer to a PyObject, then I can have it automatically increment the reference count. Then, I can register a finalizer that will decrement the reference count when the Scheme object gets garbage collected. I've looked at the documentation for reference counting, and don't see any problems with this at first glance (although it may be sub-optimal in some cases). Are there any gotchas I'm missing? Also, I'm having trouble making heads or tails of the cyclic garbage collector documentation. What things will I need to bear in mind here? In particular, how do I make Python aware that I have a reference to something so it doesn't collect it while I'm still using it?
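
    As a minimal sketch of the ownership pattern described (plain Python C API calls; the wrapper names are made up, and it assumes the embedded interpreter is already initialized): take a strong reference when the FFI hands the object to Scheme, and drop it in the finalizer. While the strong reference is held, the cyclic collector will not reclaim the object, since it is still reachable.

        #include <Python.h>

        // Called when the Scheme FFI receives a PyObject* it wants to keep.
        PyObject* scheme_retain(PyObject* obj) {
            Py_XINCREF(obj);      // take a strong reference (safe if obj is NULL)
            return obj;
        }

        // Registered as the finalizer run when the Scheme wrapper is collected.
        void scheme_release(PyObject* obj) {
            Py_XDECREF(obj);      // release the reference; may free the object
        }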

    Read the article

  • Objective-C key path operators @avg, @max, ...

    - by davide
    arr = [[NSMutableArray alloc] init]; [arr addObject:[NSNumber numberWithInt:4]]; [arr addObject:[NSNumber numberWithInt:45]]; [arr addObject:[NSNumber numberWithInt:23]]; [arr addObject:[NSNumber numberWithInt:12]]; NSLog(@"The avg = %@", [arr valueForKeyPath:@"@avg.intValue"]); This code works fine, but why? valueForKeyPath:@"@avg.intValue" is requesting (int) from each NSNumber, but we are outputting a %@ string in the log. If I try to output it as a decimal with %d I get a number that is possibly a pointer to something. Can somebody explain why the integers become NSNumbers when I call the @avg operator?

    Read the article

  • Deleting duplicates in a ListView (Delphi)

    - by radick
    Hi all, I am trying to remove duplicates in my ListView. My function is like this: procedure RemoveDuplicates(const LV: TbsSkinListView); var i, j: Integer; begin LV.Items.BeginUpdate; LV.SortType := stText; try for i := 0 to LV.Items.Count-1 do begin for j := i+1 to LV.Items.Count-1 do begin if SameText(LV.Items[i].SubItems[0], LV.Items[j].SubItems[0]) and SameText(LV.Items[i].SubItems[1], LV.Items[j].SubItems[1]) and SameText(LV.Items[i].SubItems[2], LV.Items[j].SubItems[2]) and SameText(LV.Items[i].SubItems[3], LV.Items[j].SubItems[3]) then LV.Items.Delete(j); end; end; finally LV.SortType := stNone; LV.Items.EndUpdate; end; ShowMessage('Deleted'); end; but it's not doing what I intended. Can anyone help me?

    Read the article

  • C++ library for endian-aware reading of raw file stream metadata?

    - by Kache4
    I've got raw data streams from image files, like: vector<char> rawData(fileSize); ifstream inFile("image.jpg"); inFile.read(&rawData[0], fileSize); I want to parse the headers of different image formats for height and width. Is there a portable library that can read ints, longs, shorts, etc. from the buffer/stream, converting for endianness as specified? I'd like to be able to do something like: short x = rawData.readLeShort(offset); or long y = rawData.readBeLong(offset); An even better option would be a lightweight & portable image metadata library (without the extra weight of an image manipulation library) that can work on raw image data. I've found that the Exif libraries out there don't support PNG and GIF.
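
    A sketch of the kind of helpers asked for, written as free functions over the vector<char> buffer above (readLeShort/readBeLong are the hypothetical names from the question, not an existing library):

        #include <cstddef>
        #include <stdint.h>
        #include <vector>

        // 16-bit little-endian read at a byte offset.
        uint16_t readLeShort(const std::vector<char>& buf, std::size_t off) {
            return static_cast<uint16_t>(static_cast<unsigned char>(buf[off]) |
                                        (static_cast<unsigned char>(buf[off + 1]) << 8));
        }

        // 32-bit big-endian read at a byte offset.
        uint32_t readBeLong(const std::vector<char>& buf, std::size_t off) {
            return (static_cast<uint32_t>(static_cast<unsigned char>(buf[off]))     << 24) |
                   (static_cast<uint32_t>(static_cast<unsigned char>(buf[off + 1])) << 16) |
                   (static_cast<uint32_t>(static_cast<unsigned char>(buf[off + 2])) <<  8) |
                    static_cast<uint32_t>(static_cast<unsigned char>(buf[off + 3]));
        }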

    Read the article

  • Pass 2d array to function in C?

    - by Evelyn
    I know it's simple, but I can't seem to make this work. My function is like so: int GefMain(int array[][5]) { //do stuff return 1; } In my main: int GefMain(int array[][5]); int main(void) { int array[1800][5]; GefMain(array); return 0; } I referred to this helpful resource, but I am still getting the error "warning: passing argument 1 of GefMain from incompatible pointer type." What am I doing wrong?
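
    For what it's worth, the snippet as quoted appears to compile cleanly, so the warning most likely comes from a prototype and definition that disagree in the real code. A self-contained version of the pattern (all but the first dimension must appear in the parameter type):

        #include <stdio.h>

        /* Prototype and definition must both say int [][5]. */
        int GefMain(int array[][5]);

        int GefMain(int array[][5]) {
            return array[0][0];          /* do stuff */
        }

        int main(void) {
            int array[1800][5] = {{0}};
            GefMain(array);              /* decays to int (*)[5], matching the parameter */
            return 0;
        }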

    Read the article

  • Add every value in stack

    - by rezivor
    I am trying to figure out a method that will add every value in a stack. The goal is to use that value to determine if all the values in the stack are even. I have written code to do this: template <class Object> bool Stack<Object>::objectIsEven( Object value ) const { bool answer = false; if (value % 2 == 0) answer = true; return( answer ); } However, I am stumped on how to add all of the stack's values in a separate method.
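
    A sketch of the summing part, using std::stack as a stand-in for the Stack class in the question (the real class's member names may differ); it consumes a copy, so the original stack is left untouched:

        #include <stack>

        template <class Object>
        Object sumOfValues(std::stack<Object> s) {   // taken by value: we pop from a copy
            Object total = Object();
            while (!s.empty()) {
                total += s.top();
                s.pop();
            }
            return total;
        }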

    Read the article

  • [C++] Instantiating a bitset from a hex character

    - by bndz
    Hey, I'm trying to figure out how to instantiate a 4-bit bitset based on a hex character. For instance, if I have a character with value 'F', I want to create a bitset of size 4 initialized to 1111, or if it is 'A', I want to initialize it to 1010. I could use a bunch of if statements like so: void fn(char c) { bitset<4> temp; if(c == 'F') temp.set(); //... if(c == '9') { temp.set(0); temp.set(3); } //... } This isn't efficient; is there a way of easily converting the character to a decimal integer and constructing the bitset using the last 4 bits of the int? Thanks for any help.
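
    A sketch of the conversion without the if-chain: turn the hex digit into its numeric value, then let the bitset constructor keep the low 4 bits (hexCharToBits is just an illustrative name):

        #include <bitset>
        #include <cstdlib>
        #include <iostream>
        #include <string>

        std::bitset<4> hexCharToBits(char c) {
            const std::string hex(1, c);
            unsigned long value = std::strtoul(hex.c_str(), 0, 16);  // 'F' -> 15, 'A' -> 10
            return std::bitset<4>(value);                            // only the low 4 bits are kept
        }

        int main() {
            std::cout << hexCharToBits('F') << ' ' << hexCharToBits('A') << '\n';  // 1111 1010
        }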

    Read the article

  • Get the Lua command name when a C function is called

    - by gamernb
    Suppose I register many different function names in Lua to the same function in C. Now, every time my C function is called, is there a way to determine which function name was invoked? For example: int runCommand(lua_State *lua) { const char *name = /* getFunctionName(lua)? how would I do this part? */; for(int i = 0; i < functions.size; i++) if(functions[i].name == name) functions[i].Call(); } int main() { ... lua_register(lua, "delay", runCommand); lua_register(lua, "execute", runCommand); lua_register(lua, "loadPlugin", runCommand); lua_register(lua, "loadModule", runCommand); lua_register(lua, "delay", runCommand); } So, how do I get the name of whatever function called it?
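
    One common approach, sketched here rather than prescribed: register each name as a C closure with the name stored as an upvalue, then read it back inside the shared handler.

        #include <lua.hpp>
        #include <cstdio>

        // Shared handler: the registered name travels along as upvalue #1.
        static int runCommand(lua_State* L) {
            const char* name = lua_tostring(L, lua_upvalueindex(1));
            std::printf("called as: %s\n", name);
            // ... dispatch on `name` here ...
            return 0;
        }

        static void registerCommand(lua_State* L, const char* name) {
            lua_pushstring(L, name);              // becomes upvalue #1 of the closure
            lua_pushcclosure(L, runCommand, 1);
            lua_setglobal(L, name);
        }

        int main() {
            lua_State* L = luaL_newstate();
            registerCommand(L, "delay");
            registerCommand(L, "execute");
            luaL_dostring(L, "delay() execute()");
            lua_close(L);
        }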

    Read the article

  • C# SendMessage to C++ WinProc

    - by jws
    I need to send a string from C# to a C++ WindowProc. There are a number of questions on SO related to this, but none of the answers have worked for me. Here's the situation: PInvoke: [DllImport("user32", CharSet = CharSet.Auto)] public extern static int SendMessage(IntPtr hWnd, uint wMsg, IntPtr wParam, string lParam); C#: string lparam = "abc"; NativeMethods.User32.SendMessage(handle, ConnectMsg, IntPtr.Zero, lparam); C++: API LRESULT CALLBACK HookProc (int code, WPARAM wParam, LPARAM lParam) { if (code >= 0) { CWPSTRUCT* cwp = (CWPSTRUCT*)lParam; ... (LPWSTR)cwp->lParam <-- BadPtr ... } return ::CallNextHookEx(0, code, wParam, lParam); } I've tried a number of different things: marshalling the string as LPStr and LPWStr, and also creating an IntPtr from unmanaged memory and writing to it with Marshal.WriteByte. The pointer is the correct memory location on the C++ side, but the data isn't there. What am I missing?

    Read the article

  • Put logic behind generated LinqToSql fields

    - by boris callens
    In a database I use throughout several projects, there is a field that should actually be a boolean but is, for reasons nobody can explain to me, duplicated over two tables, where in one it is a char ('Y'/'N') and in the other an int (1/0). When I generate a datacontext with LinqToSql the fields of course get these datatypes. It would be nice if I didn't have to drag this stupid choice of datatype through the rest of my application. Is there a way to give the generated classes a little bit of logic that just returns return this.equals('Y'); and return this == 1;? Preferably without having to make an EXTRA field in my partial class. One solution would be to give the generated field a totally different name that can only be accessed through the partial class, and then generate the extra field with the original name with my custom logic in the partial class. I don't know how to alter the accessibility level in my generated class, though. Any suggestions?

    Read the article

  • Are memcached entries saved by the Danga client compatible with the Spy client?

    - by jigjig
    After saving a String value into memcached using the Danga client, I attempted to get the entry using the Spy client. The two String values are not the same. The Danga client retrieves a string with an additional empty char prepended to the string, therefore violating the equality condition. Danga: t,e,s,t,s,t,r,i,n,g Spy: ,t,e,s,t,s,t,r,i,n,g I also attempted to save a serialized Map using the Danga client and get the Map using the Spy client. The Spy client is able to only get a String form of the Map. The string contains binary values, as below: sr

    Read the article

  • Assigning a view controller as the delegate of a subview that is not a direct descendant?

    - by ambertch
    I am writing an iPhone app where, in numerous cases, a subview needs to talk to its superview. For example: View A has a table view that contains photos A has a subview B which allows users to add photos, upon which I want to auto-append them to A's table view So far I have been creating a @protocol in B, and registering A as the delegate. The problem in my case is that B has a subview C that allows users to add content, and I want actions in C to invoke changes in its grandparent, A. Currently I am working around this by passing around a self pointer to my base view controller (C.delegate = B.delegate), but it doesn't seem very proper to me. Any thoughts? (And/or general advice on code organization when all sorts of subviews need to talk to superviews would be greatly appreciated.) Thanks!

    Read the article

  • gcc, UTF-8 and limits.h

    - by bobby
    My OS is Debian, my default locale is UTF-8, and my compiler is gcc. By default CHAR_BIT in limits.h is 8, which is OK for ASCII because in ASCII 1 char = 8 bits. But since I am using UTF-8, characters can be up to 32 bits, which contradicts the CHAR_BIT default value of 8. If I modify CHAR_BIT to 32 in limits.h to better suit UTF-8, what do I have to do for this new value to come into effect? I guess I have to recompile gcc? Do I have to recompile the Linux kernel? What about the default installed Debian packages, will they still work?
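
    For context, a tiny illustration of the distinction involved: CHAR_BIT describes the size of a byte (a char), while UTF-8 spreads one code point over one to four of those bytes, so no limits.h change is needed to handle UTF-8 text:

        #include <climits>
        #include <cstdio>
        #include <cstring>

        int main() {
            const char utf8[] = "\xC3\xA9";   // UTF-8 encoding of U+00E9 (one code point, two bytes)
            std::printf("CHAR_BIT = %d, bytes in the code point = %u\n",
                        CHAR_BIT, (unsigned)std::strlen(utf8));   // prints 8 and 2
        }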

    Read the article

  • Convert CString to string (VC6)

    - by Yan Cheng CHEOK
    I want to convert CString to string. (Yup, I know what I am doing. I know the returned string will be incorrect if the CString value range is outside ANSI, but that's OK!) The following code works under VC2008: std::string Utils::CString2String(const CString& cString) { // Convert a TCHAR string to a LPCSTR CT2CA pszConvertedAnsiString(cString); // construct a std::string using the LPCSTR input std::string strStd(pszConvertedAnsiString); return strStd; } But VC6 doesn't have the CT2CA macro. How can I make the code work in both VC6 and VC2008?
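
    One possible sketch using the older ATL conversion macros, which (to my knowledge, worth verifying on an actual VC6 install) predate CT2CA and exist in both toolchains; it assumes the project already pulls in the MFC headers that define CString:

        #include <atlconv.h>   // USES_CONVERSION / T2CA, the pre-VC7 conversion macros
        #include <string>

        std::string CString2String(const CString& cString)
        {
            USES_CONVERSION;                        // sets up the conversion scratch space
            LPCSTR ansi = T2CA((LPCTSTR)cString);   // TCHAR* -> const char* (lossy outside ANSI)
            return std::string(ansi);
        }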

    Read the article

  • How can I ensure that a Java object (containing cryptographic material) is zeroized?

    - by Jeremy Powell
    My concern is that cryptographic keys and secrets that are managed by the garbage collector may be copied and moved around in memory without zeroization. As a possible solution, is it enough to: public class Key { private char[] key; // ... protected void finalize() throws Throwable { try { for(int k = 0; k < key.length; k++) { key[k] = '\0'; } } catch (Exception e) { //... } finally { super.finalize(); } } // ... }

    Read the article

  • Puzzle: Overload a C++ function according to the return value

    - by Motti
    We all know that you can overload a function according to the parameters: int mul(int i, int j) { return i*j; } std::string mul(char c, int n) { return std::string(n, c); } Can you overload a function according to the return value? Define a function that returns different things according to how the return value is used: int n = mul(6, 3); // n = 18 std::string s = mul(6, 3); // s = "666" // Note that both invocations take the exact same parameters (same types) You can assume the first parameter is between 0 and 9; there is no need to verify the input or have any error handling.
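
    One classic way to make the puzzle work, sketched below: return a small proxy object whose conversion operators produce either result, so the "overload" is selected by the type the caller converts to.

        #include <iostream>
        #include <string>

        class MulResult {
            int digit, count;
        public:
            MulResult(int d, int c) : digit(d), count(c) {}
            operator int() const         { return digit * count; }                          // numeric meaning
            operator std::string() const { return std::string(count, char('0' + digit)); }  // string meaning
        };

        MulResult mul(int d, int c) { return MulResult(d, c); }

        int main() {
            int n = mul(6, 3);          // n = 18
            std::string s = mul(6, 3);  // s = "666"
            std::cout << n << ' ' << s << '\n';
        }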

    Read the article

  • Are there any C++ tools that detect misuse of static_cast, dynamic_cast, and reinterpret_cast?

    - by chrisp451
    The answers to the following question describe the recommended usage of static_cast, dynamic_cast, and reinterpret_cast in C++: http://stackoverflow.com/questions/332030/when-should-static-cast-dynamic-cast-and-reinterpret-cast-be-used Do you know of any tools that can be used to detect misuse of these kinds of cast? Would a static analysis tool like PC-Lint or Coverity Static Analysis do this? The particular case that prompted this question was the inappropriate use of static_cast to downcast a pointer, which the compiler does not warn about. I'd like to detect this case using a tool, and not assume that developers will never make this mistake.
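
    For reference, a minimal reproduction of the case described, where a static_cast downcast compiles silently even though the object is not actually a Derived, while dynamic_cast reports the failure:

        struct Base { virtual ~Base() {} };
        struct Derived : Base { int extra; };

        int main() {
            Base b;
            Base* pb = &b;
            Derived* pd = static_cast<Derived*>(pb);       // accepted with no warning; using pd is undefined behaviour
            Derived* checked = dynamic_cast<Derived*>(pb); // returns a null pointer instead
            return checked == 0 && pd != 0 ? 0 : 1;
        }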

    Read the article

  • What's the best practice to "look up" Java Enums?

    - by Marcus
    We have a REST API where clients can supply parameters representing values defined on the server in Java Enums. So we can provide a descriptive error, we add this lookup method to each Enum. Seems like we're just copying code (bad). Is there a better practice? public enum MyEnum { A, B, C, D; public static MyEnum lookup(String id) { try { return MyEnum.valueOf(id); } catch (IllegalArgumentException e) { throw new RuntimeException("Invalid value for my enum blah blah: " + id); } } } Update: The default error message provided by valueOf(..) would be No enum const class a.b.c.MyEnum.BadValue. I would like to provide a more descriptive error from the API.

    Read the article

  • How to keep historic details of modification in a database (Audit trail)?

    - by mada
    I'm a J2EE developer and we are using Hibernate mapping with a PostgreSQL database. We have to keep track of any change that occurs in the database; in other words, all previous & current values of any field should be saved. Each field can be any type (bytea, int, char...). With a simple table it is easy, but with a graph of objects things are more difficult. So we have, speaking from a UML point of view, a graph of objects to store in the database along with every change & the user. Any idea or pattern how to do that?

    Read the article

  • A question about linked lists in Java

    - by glacier89
    Linked-List: Mirror. Consider the following private class for a node of a singly-linked list of integers: private class Node { public int value; public Node next; } A wrapper class, called ListImpl, contains a pointer, called start, to the first node of a linked list of Node. Write an instance method for ListImpl with the signature: public void mirror(); that makes a reversed copy of the linked list pointed to by start and appends that copy to the end of the list. So, for example, the list: start 1 2 3 after a call to mirror becomes: start 1 2 3 3 2 1 Note: in your answer you do not need to define the rest of the class for ListImpl, just the mirror method.

    Read the article

  • Is Boost.Tuple compatible with C++0x variadic templates?

    - by Thomas Petit
    Hi, I was playing around with variadic templates (gcc 4.5) and hit this problem: template <typename... Args> boost::tuple<Args...> my_make_tuple(Args... args) { return boost::tuple<Args...>(args...); } int main (void) { boost::tuple<int, char> t = my_make_tuple(8, 'c'); } GCC error message: sorry, unimplemented: cannot expand 'Arg ...' into a fixed-length argument list In function 'int my_make_tuple(Arg ...)' If I replace every occurrence of boost::tuple with std::tuple, it compiles fine. Is there a problem in the Boost tuple implementation? Or is this a gcc bug? I must stick with Boost.Tuple for now. Do you know any workaround? Thanks.
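
    For comparison, the std::tuple form that the question says does compile under GCC 4.5's C++0x mode (a sketch of the same function, nothing more; whether that era's Boost.Tuple accepts a pack expansion is exactly what is being asked):

        #include <tuple>

        template <typename... Args>
        std::tuple<Args...> my_make_tuple(Args... args) {
            return std::tuple<Args...>(args...);
        }

        int main() {
            std::tuple<int, char> t = my_make_tuple(8, 'c');
            (void)t;
        }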

    Read the article

  • Why won't C++ allow this default argument?

    - by nieldw
    Why won't GCC allow a default parameter here? template<class edgeDecor, class vertexDecor, bool dir> Graph<edgeDecor,int,dir> Graph<edgeDecor,vertexDecor,dir>::Dijkstra(vertex s, bool print = false) const { This is the output I get: graph.h:82: error: default argument given for parameter 2 of ‘Graph<edgeDecor, int, dir> Graph<edgeDecor, vertexDecor, dir>::Dijkstra(Vertex<edgeDecor, vertexDecor, dir>, bool)’ graph.h:36: error: after previous specification in ‘Graph<edgeDecor, int, dir> Graph<edgeDecor, vertexDecor, dir>::Dijkstra(Vertex<edgeDecor, vertexDecor, dir>, bool)’ Can anyone see why I'm getting this?
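
    The second error line suggests the default was already given at the declaration on graph.h:36, and a default argument may only be specified once. A stripped-down, non-template sketch of the usual arrangement, with the default stated only in the in-class declaration:

        struct Graph {
            // The default argument lives here, in the declaration...
            int Dijkstra(int s, bool print = false) const;
        };

        // ...and is omitted in the out-of-class definition; repeating "= false"
        // here provokes "default argument given ... after previous specification".
        int Graph::Dijkstra(int s, bool print) const {
            return print ? -s : s;
        }

        int main() { Graph g; return g.Dijkstra(0); }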

    Read the article
