Search Results

Search found 2655 results on 107 pages for 'conversion'.


  • Short file names versus long file names in Windows

    - by normski
    I have some code which gets the short name from a file path, using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext". However, following conversion to short and then back to long, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT". The short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS2008). Does anybody have any idea why this might be happening?

        DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0);
        if(pathLengthNeeded != 0)
        {
            WCHAR* shortPath = new WCHAR[pathLengthNeeded];
            DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded);
            if(newPathNameLength != 0)
            {
                UI_STRING unicodePath(shortPath);
                std::string asciiPath = StringFromUserString(unicodePath);
                pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(), NULL, 0);
                if(pathLengthNeeded != 0)
                {
                    // convert it back to a long path if possible. For goodness sake can't we use Unicode throughout?
                    char* longPath = new char[pathLengthNeeded];
                    DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded);
                    if(newPathNameLength != 0)
                    {
                        std::string longPathString(longPath, newPathNameLength);
                        asciiPath = longPathString;
                    }
                    delete [] longPath;
                }
                SetFullPathName(asciiPath);
            }
            delete [] shortPath;
        }
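
    Not an answer to the root cause, but one way to avoid the fragile round trip through the ANSI code page entirely is to resolve the long name with the wide-character API and convert to a narrow string only at the very end. A minimal sketch (the helper name is mine):

        #include <windows.h>
        #include <string>

        // Hypothetical helper: short -> long entirely in UTF-16,
        // so nothing is lost converting through the ANSI code page.
        std::wstring LongPathFromShort(const std::wstring& shortPath)
        {
            DWORD needed = ::GetLongPathNameW(shortPath.c_str(), NULL, 0);
            if (needed == 0)
                return shortPath;            // resolution failed; keep the input
            std::wstring longPath(needed, L'\0');
            DWORD written = ::GetLongPathNameW(shortPath.c_str(), &longPath[0], needed);
            longPath.resize(written);        // drop the trailing NUL slot
            return longPath;
        }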


  • Which is the best way to encode batch videos on server side?

    - by albanx
    Hello, I am asking a general question since I am a developer and have no advanced experience with video processing. I have to prepare a web application that allows video files to be uploaded to our company server and then processed by the server on user command. The purpose of the web application is to let the user run certain operations on a video, launched from the web app. The server has to:

    - convert video to different formats (mp4, flv, ...)
    - extract keyframes from video and save them in jpeg format
    - optionally extract audio from video
    - automatically check audio & video quality (black frame and silence detection)
    - detect scene changes and extract keyframes
    - ...

    This is what my bosses want from the web-based application (with server support, obviously), and I understand only the first 3 points of this list; the rest was Greek to me. My question is: which is the best and fastest server-side application for this work, one that can support multiple batch video conversions from the command line (command line for php-soap-socket interaction or something similar)? Is Adobe Media Server suitable for batch video conversion? Which Adobe products can be used for this purpose? Note: I have experience with InDesign Server scripting (sending xml with php and soap calls...), and I am looking for something similar for video processing. I will appreciate any answers. THANKS ALL
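
    For readers weighing options: the question only names Adobe products, but the de facto command-line workhorse for batch jobs like the first three bullets is ffmpeg (my suggestion, not the asker's). A sketch of what those operations look like, so the shape of a server-side batch script is clear:

        # convert to other container/codec formats
        ffmpeg -i input.avi output.mp4
        ffmpeg -i input.avi output.flv

        # save one jpeg per second as thumbnails/keyframes
        ffmpeg -i input.avi -vf fps=1 frame_%04d.jpg

        # extract the audio track
        ffmpeg -i input.avi -vn audio.mp3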


  • Square Brackets in Python Regular Expressions (re.sub)

    - by user1479984
    I'm migrating wiki pages from the FlexWiki engine to the FOSwiki engine, using Python regular expressions to handle the differences between the two engines' markup languages (see the FlexWiki markup and the FOSwiki markup for reference). Most of the conversion works very well, except when I try to convert the renamed links. Both wikis support renamed links in their markup. For example, FlexWiki uses:

        "Link To Wikipedia":[http://www.wikipedia.org/]

    FOSwiki uses:

        [[http://www.wikipedia.org/][Link To Wikipedia]]

    both of which render as a link titled "Link To Wikipedia". I'm using the regular expression

        renameLink = re.compile("\"(?P<linkName>[^\"]+)\":\[(?P<linkTarget>[^\[\]]+)\]")

    to parse the link elements out of the FlexWiki markup, which, after running over something like "Link Name":[LinkTarget], reliably produces the groups

        <linkName>   = Link Name
        <linkTarget> = LinkTarget

    My issue occurs when I try to use re.sub to insert the parsed content into the FOSwiki markup. My experience with regular expressions isn't anything to write home about, but I'm under the impression that, given those groups, a line like

        line = renameLink.sub("[[\g<linkTarget>][\g<linkName>]]", line)

    should produce

        [[LinkTarget][Link Name]]

    However, in the output text files I'm getting

        [[LinkTarget [[Link Name]]

    which breaks the renamed links. After a little bit of fiddling I managed a workaround, where

        line = renameLink.sub("[[\g<linkTarget>][ [\g<linkName>]]", line)

    produces

        [[LinkTarget][ [[Link Name]]

    which, when displayed in FOSwiki, looks like <[[Link Name>, which WORKS, but isn't very pretty. I've also tried

        line = renameLink.sub("[[\g<linkTarget>]" + "[\g<linkName>]]", line)

    which produces

        [[linkTarget [[linkName]]

    There are probably thousands of instances of these renamed links in the pages I'm trying to convert, so fixing them by hand isn't any good. For the record, I've run the script under Python 2.5.4 and Python 2.7.3 and gotten the same results. Am I missing something really obvious with the syntax? Or is there an easy workaround?
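
    A standalone reduction of the pattern and replacement as quoted (hypothetical one-line input) does produce the expected output under Python 2.7, which suggests the template itself is fine and the mangling is introduced by some later substitution pass in the conversion script; a sketch to reproduce:

        import re

        renameLink = re.compile(r"\"(?P<linkName>[^\"]+)\":\[(?P<linkTarget>[^\[\]]+)\]")
        line = '"Link Name":[LinkTarget]'
        print(renameLink.sub(r"[[\g<linkTarget>][\g<linkName>]]", line))
        # prints: [[LinkTarget][Link Name]]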


  • Issue with VHDL structural coding

    - by user3699982
    The code below is a simple VHDL structural architecture; however, the concurrent assignment to the signal comb1 is upsetting the simulation, with the outputs (tb_lfsr_out) and comb1 becoming undefined. Please, please help. Thank you, Louise.

        library IEEE;
        use IEEE.STD_LOGIC_1164.all;

        entity testbench is
        end testbench;

        architecture behavioural of testbench is

            CONSTANT clock_frequency : REAL := 1.0e9;
            CONSTANT clock_period    : REAL := (1.0/clock_frequency)/2.0;

            signal tb_master_clk, comb1 : STD_LOGIC := '0';
            signal tb_lfsr_out : std_logic_vector(2 DOWNTO 0) := "111";

            component dff
                port (
                    q      : out STD_LOGIC;
                    d, clk : in  STD_LOGIC
                );
            end component;

        begin

            -- Clock/Start Conversion Generator
            tb_master_clk <= (NOT tb_master_clk) AFTER (1 SEC * clock_period);

            comb1 <= tb_lfsr_out(0) xor tb_lfsr_out(2);

            dff6: dff port map (tb_lfsr_out(2), tb_lfsr_out(1), tb_master_clk);
            dff7: dff port map (tb_lfsr_out(1), tb_lfsr_out(0), tb_master_clk);
            dff8: dff port map (tb_lfsr_out(0), comb1, tb_master_clk);

        end behavioural;


  • Android: Problems downloading images and converting to bitmaps

    - by Mike
    Hi all, I am working on an application that downloads images from a url. The problem is that only some images are downloaded correctly and others are not. First off, here is the problem code:

        public Bitmap downloadImage(String url) {
            HttpClient client = new DefaultHttpClient();
            HttpResponse response = null;
            try {
                response = client.execute(new HttpGet(url));
            } catch (ClientProtocolException cpe) {
                Log.i(LOG_FILE, "client protocol exception");
                return null;
            } catch (IOException ioe) {
                Log.i(LOG_FILE, "IOE downloading image");
                return null;
            } catch (Exception e) {
                Log.i(LOG_FILE, "Other exception downloading image");
                return null;
            }
            // Convert image from stream to bitmap object
            try {
                Bitmap image = BitmapFactory.decodeStream(response.getEntity().getContent());
                if(image == null)
                    Log.i(LOG_FILE, "image conversion failed");
                return image;
            } catch (Exception e) {
                Log.i(LOG_FILE, "Other exception while converting image");
                return null;
            }
        }

    So what I have is a method that takes the url as a string argument, downloads the image, converts the HttpResponse stream to a bitmap via the BitmapFactory.decodeStream method, and returns it. The problem is that when I am on a slow network connection (almost always 3G rather than Wi-Fi) some images are converted to null (not all of them, only some of them). Using a Wi-Fi connection works perfectly; all the images are downloaded and converted properly. Does anyone know why this is happening? Or better, how can I fix this? How would I even go about testing to determine the problem? Any help is awesome; thank you!
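
    A plausible culprit, hedged because it depends on the device and Android version: BitmapFactory.decodeStream can give up when a slow socket delivers the image in small bursts, a long-standing issue on older Android releases. Buffering the whole response before decoding sidesteps it; a sketch reusing the response object above (EntityUtils ships with the same Apache HttpClient already in use):

        // Read the entire entity into memory first, then decode from the
        // byte array; decodeByteArray never sees a half-filled stream.
        byte[] bytes = EntityUtils.toByteArray(response.getEntity());
        Bitmap image = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        if (image == null)
            Log.i(LOG_FILE, "image conversion failed even from buffered bytes");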


  • C++: Maybe you know this pitfall?

    - by Martijn Courteaux
    Hi, I'm developing a game. I have a header GameSystem (just methods like the game loop, no class) with two variables: int mouseX and int mouseY. These are updated in my game loop. Now I want to access them from the Game.cpp file (a class built from a header file and the source file). So I #include "GameSystem.h" in Game.h. After doing this I get a lot of compile errors. When I remove the include it says, of course:

        Game.cpp:33: error: 'mouseX' was not declared in this scope
        Game.cpp:34: error: 'mouseY' was not declared in this scope

    where I want to access mouseX and mouseY. All my .h files have header guards, generated by Eclipse. I'm using SDL, and if I remove the lines that access the variables, everything compiles and runs perfectly (*). I hope you can help me... This is the error log when I #include "GameSystem.h" (all the code it refers to works, as explained by the (*)):

        In file included from ../trunk/source/domein/Game.h:14,
                         from ../trunk/source/domein/Game.cpp:8:
        ../trunk/source/domein/GameSystem.h:30: error: expected constructor, destructor, or type conversion before '*' token
        ../trunk/source/domein/GameSystem.h:46: error: variable or field 'InitGame' declared void
        ../trunk/source/domein/GameSystem.h:46: error: 'Game' was not declared in this scope
        ../trunk/source/domein/GameSystem.h:46: error: 'g' was not declared in this scope
        ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before 'char'
        ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before 'bool'
        ../trunk/source/domein/FPS.h:46: warning: 'void FPS_SleepMilliseconds(int)' defined but not used

    This is the code which tries to access the two variables:

        SDL_Rect pointer;
        pointer.x = mouseX;
        pointer.y = mouseY;
        pointer.w = 3;
        pointer.h = 3;
        SDL_FillRect(buffer, &pointer, 0xFF0000);
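
    The error log points to two separate problems: int mouseX; defined in a header becomes a separate definition in every .cpp that includes it, and GameSystem.h naming Game while Game.h includes GameSystem.h makes an include cycle (hence the 'Game' was not declared lines). A hedged sketch of the usual repair, with the InitGame parameter list loosely reconstructed from the error lines:

        // GameSystem.h  -- declare shared state; define it exactly once in a .cpp
        extern int mouseX;
        extern int mouseY;

        class Game;  // forward declaration instead of #include "Game.h",
                     // which breaks the Game.h <-> GameSystem.h cycle
        void InitGame(Game* g, const char* title, bool fullscreen);

        // GameSystem.cpp
        int mouseX = 0;
        int mouseY = 0;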


  • Creating new image in a loop using OpenCV

    - by user565415
    I am programming some image conversion code with OpenCV and I don't know how I can create an image memory buffer to load images into on every iteration. I have the number of iterations (maxImNumber) and I have an input image. In every loop iteration the program must create an image that is a resized and modified version of the input image. Here is some basic code (concept):

        for (int imageIndex = 0; imageIndex < maxImNumber; imageIndex++) {
            cvCopy(inputImage, images[imageIndex], 0);
            cvReleaseImage(&inputImage);

            images[imageIndex+1] = cvCreateImage(cvSize((images[imageIndex]->width)/2, images[imageIndex]->height),
                                                 IPL_DEPTH_8U, 1);

            for (i = 1; i < images[imageIndex]->height; i++) {
                index = 0;
                for (j = 0; j < images[imageIndex]->width; j = j+2) {
                    // doing some basic mathematical operation on the image content
                    // and storing it to the new image
                    images[imageIndex+1][i][index] = (images[imageIndex][i][j] + images[imageIndex][i][j+2])/2;
                    index++;
                }
            }

            inputImage = cvCreateImage(cvSize((images[imageIndex+1]->width), images[imageIndex]->height), IPL_DEPTH_8U, 1);
            cvCopy(images[imageIndex+1], inputImage, 0);
        }

    Can somebody please explain how I can create this image buffer (images[]) and allocate memory for it? Also, how can I access any image in this buffer? Thank you very much in advance!
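
    For the buffer itself, a hedged sketch in the same C API: allocate an array of IplImage pointers up front, create each level once, and use OpenCV's element macro for the per-pixel work, since indexing an IplImage like a 2-D array (as in the concept code) won't compile:

        /* Sketch using the old C API from the question: a buffer of
           IplImage pointers, one slot per iteration. maxImNumber and
           inputImage are assumed to exist as in the question. */
        IplImage **images = (IplImage **)malloc(maxImNumber * sizeof(IplImage *));

        images[0] = cvCloneImage(inputImage);   /* slot 0: copy of the input */
        for (int k = 1; k < maxImNumber; k++) {
            images[k] = cvCreateImage(cvSize(images[k-1]->width / 2, images[k-1]->height),
                                      IPL_DEPTH_8U, 1);
        }

        /* An IplImage is not a 2-D array; read and write pixels through
           the element macro instead of images[k][i][j]. */
        int i = 0, j = 0;
        uchar v = CV_IMAGE_ELEM(images[0], uchar, i, j);
        CV_IMAGE_ELEM(images[1], uchar, i, j / 2) = v;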


  • Using groovy ws with enum types?

    - by Jared
    I'm trying to use groovy ws to call a webservice. One of the properties of the generated class is itself a class with an enum type. Although the debug messages show that com.test.FinalActionType is created at runtime when the WSDL is read, I can't create an instance of it using code like

        proxy.create("com.test.FinalActionType")

    When I try to assign a string to my class in place of an instance of FinalActionType, groovy is not able to do the conversion. How can I get an instance of this class to use in a webservice call? I've pasted the important part of the WSDL below.

        <xsd:simpleType name="FinalActionType">
            <xsd:restriction base="xsd:string">
                <xsd:enumeration value="stop"/>
                <xsd:enumeration value="quit"/>
                <xsd:enumeration value="continue"/>
                <xsd:whiteSpace value="collapse"/>
            </xsd:restriction>
        </xsd:simpleType>


  • What /else/ causes this?

    - by Mordachai
    MFC Toolbox Library.lib(SimpleFileIO.obj) : error LNK2005: _wcsnlen already defined in libcmtd.lib(wcslen_s.obj)
    fatal error LNK1169: one or more multiply defined symbols found

    This is driving me nuts. Normally one gets this if the various projects that are part of the solution do not agree on which CRT to use (single-threaded, multi-threaded, release or debug). However, I have been over this thing about 500 times now, and they all agree. Background: this is a VS 2010 project just converted from VS 2008. MFC Toolbox Library.lib is set to compile as a static library, using /MTd, as is the target .exe I am trying to compile in this solution. Further, the solution this is being converted from (VS 2008) already compiles & links properly! So it's not that there is a disagreement between the two .vcproj's (or at least there wasn't before the conversion). Furthermore, the MFC Toolbox Library is used by about 25 other projects in another solution, and in that solution (Master Build English) it compiles & links against those other projects without complaint in both debug and release targets. I have just spent the last hour going over every single project property for this target project (Cimex Header Viewer) vs. several different target exe projects in the Master Build English solution, and I cannot find a difference. They appear to be identical, except that they have different names. I've tried doing a clean & build all. I'm simply out of ideas. Does anyone have a thought on what else I might investigate? I think I'm ready to start chewing glass. :(
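
    One diagnostic worth trying (a suggestion, not from the thread): each .obj records the CRT it was compiled against as an embedded /DEFAULTLIB directive, and dumpbin can reveal whether the converted library still drags in a second CRT flavor alongside libcmtd:

        dumpbin /directives "MFC Toolbox Library.lib" | findstr /i defaultlib
        rem An /MTd build should request LIBCMTD; any stray LIBCMT or
        rem MSVCRT(D) lines mark objects built with mismatched settings.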


  • C++ iterator and const_iterator problem for own container class

    - by BaCh
    Hi there, I'm writing my own container class and have run into a problem I can't get my head around. Here's the bare-bones sample that shows the problem. It consists of a container class and two test classes: one test class using a std::vector, which compiles nicely, and a second test class which tries to use my own container class in exactly the same way but fails miserably to compile.

        #include <vector>
        #include <algorithm>
        #include <iterator>

        using namespace std;

        template <typename T>
        class MyContainer
        {
        public:
            class iterator
            {
            public:
                typedef iterator self_type;
                inline iterator() { }
            };

            class const_iterator
            {
            public:
                typedef const_iterator self_type;
                inline const_iterator() { }
            };

            iterator begin() { return iterator(); }

            const_iterator begin() const { return const_iterator(); }
        };

        // This one compiles ok, using std::vector
        class TestClassVector
        {
        public:
            void test() {
                vector<int>::const_iterator I = myc.begin();
            }

        private:
            vector<int> myc;
        };

        // this one fails to compile. Why?
        class TestClassMyContainer
        {
        public:
            void test() {
                MyContainer<int>::const_iterator I = myc.begin();
            }

        private:
            MyContainer<int> myc;
        };

        int main(int argc, char** argv)
        {
            return 0;
        }

    gcc tells me:

        test2.C: In member function 'void TestClassMyContainer::test()':
        test2.C:51: error: conversion from 'MyContainer<int>::iterator' to non-scalar type 'MyContainer<int>::const_iterator' requested

    I'm not sure where and why the compiler wants to convert an iterator to a const_iterator for my own class but not for the STL vector class. What am I doing wrong?
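
    myc is non-const inside test(), so begin() returns an iterator, and std::vector only gets away with that because its iterator implicitly converts to its const_iterator. Giving MyContainer the same conversion is a one-liner; a minimal sketch (the real version would also copy whatever position state the iterators carry):

        class const_iterator
        {
        public:
            typedef const_iterator self_type;
            inline const_iterator() { }
            // implicit conversion from the mutable iterator, as std::vector allows
            inline const_iterator(const iterator& other) { /* copy position here */ }
        };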


  • Defining and using controller methods in ember.js

    - by OriginalEXE
    First of all, I am a total noob when it comes to OOP in JS; this is new to me, so treat me like a noob. I am building my first ember.js application and I am stuck (not for the first time, but until now I would get unstuck by myself; this is a tough one though). I have two models: forms and entries. Entries is of course in a one-to-many relationship to forms, so each form can have many entries.

    Form properties:

        id         : DS.attr( 'number' ),
        title      : DS.attr( 'string' ),
        views      : DS.attr( 'number' ),
        conversion : DS.attr( 'number' ),
        entries    : DS.hasMany( 'entry' )

    Entry properties:

        id        : DS.attr( 'number' ),
        parent_id : DS.belongsTo( 'form' )

    Now, I have a forms route that displays all forms in a tabled view, and each table row has some info like form id, name etc., and that works great. What I wanted to do is display the number of entries each form has. I figured I should do that via a controller, so here is my controller now:

        // Form controller
        App.FormController = Ember.ObjectController.extend({
            entriescount: function() {
                var entries = this.get( 'store' ).find( 'entry' );
                return entries.filterBy( 'parent_id', this.get( 'id' ) ).get( 'length' );
            }.property( 'entries.@each.parent_id' )
        });

    Now for some reason, when I use {{entriescount}} in an {{#each}} loop, this returns nothing. It also returns nothing in the single form route. Note that in both cases {{title}}, for example, works. I am wondering, am I going the right way by using a controller for this, and how do I get the controller to output the data? Thanks
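
    Two things stand out, offered as a hedged sketch rather than a verified fix: store.find('entry') returns a promise-backed array that is empty on first render, and parent_id holds a record rather than a raw id, so the filterBy comparison never matches. If the hasMany is set up as above, the relation itself already scopes the entries, which reduces the controller to:

        // Form controller (sketch): count through the hasMany relation
        App.FormController = Ember.ObjectController.extend({
            entriescount: function() {
                return this.get( 'model.entries.length' );
            }.property( 'model.entries.length' )
        });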


  • C++ Operator Ambiguity

    - by Scott
    Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, for the code compiled on my desktop. However, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do: I have made my own vector class called Vector4, which looks something like this:

        class Vector4
        {
        private:
            GLfloat vector[4];
            ...
        };

    Then I have these operators, which are causing the problem:

        operator GLfloat* () { return vector; }
        operator const GLfloat* () const { return vector; }

        GLfloat& operator [] (const size_t i) { return vector[i]; }
        const GLfloat& operator [] (const size_t i) const { return vector[i]; }

    I have the conversion operator so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler:

        enum {x, y, z, w};
        Vector4 v(1.0, 2.0, 3.0, 4.0);
        glTranslatef(v[x], v[y], v[z]);

    Here are the candidates:

        candidate 1: const GLfloat& Vector4::operator[](size_t) const
        candidate 2: operator[](const GLfloat*, int) <built-in>

    Why would it try to convert my Vector4 to a GLfloat* first when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
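
    The ambiguity comes from the enumerator: it promotes to int, so the member operator needs an int to size_t conversion on the index, while the built-in operator needs a Vector4 to const GLfloat* conversion on the object, and neither sequence is strictly better. Adding subscript overloads that take int exactly (a hedged sketch, not the only possible fix) gives the member an exact match and resolves the calls without casts:

        GLfloat& operator [] (int i) { return vector[i]; }
        const GLfloat& operator [] (int i) const { return vector[i]; }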


  • How to cast C struct just another struct type if their memory size are equal?

    - by Eonil
    I have 2 matrix structs that represent the same data but have different forms, like these:

        // Matrix type 1.
        typedef float Scalar;

        typedef struct { Scalar e[4]; } Vector;
        typedef struct { Vector e[4]; } Matrix;

        // Matrix type 2 (you may know this if you're an iPhone developer)
        struct CATransform3D
        {
            CGFloat m11, m12, m13, m14;
            CGFloat m21, m22, m23, m24;
            CGFloat m31, m32, m33, m34;
            CGFloat m41, m42, m43, m44;
        };
        typedef struct CATransform3D CATransform3D;

    Their memory sizes are equal, so I believe there is a way to convert between these types without any pointer operations or copies, like this:

        // Implemented by an external lib.
        CATransform3D CATransform3DMakeScale (CGFloat sx, CGFloat sy, CGFloat sz);

        Matrix m = (Matrix)CATransform3DMakeScale( 1, 2, 3 );

    Is this possible? Currently the compiler prints an "error: conversion to non-scalar type requested" message.
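
    A C-style cast between unrelated struct types is never valid, no matter how the bytes line up; the defined-behavior options are memcpy or a union. A sketch of the memcpy route, with the caveat that the sizes only match where CGFloat is float (32-bit iOS); on 64-bit builds CGFloat is double and the assert fires:

        #include <string.h>
        #include <assert.h>

        CATransform3D t = CATransform3DMakeScale( 1, 2, 3 );
        Matrix m;
        assert( sizeof m == sizeof t );   /* fails where CGFloat is double */
        memcpy( &m, &t, sizeof m );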


  • Trouble swapping values as keys in generic java BST class

    - by user1729869
    I was given a generic binary search tree class with the following declaration:

        public class BST<K extends Comparable<K>, V>

    I was asked to write a method that reverses the BST such that the values become the keys and the keys become values. When I call the following method (defined in the given class)

        reverseDict.put(originalDict.get(key), key);

    I get the following two error messages from NetBeans:

        Exception in thread "main" java.lang.RuntimeException: Uncompilable source code - Erroneous sym type: BST.put

    And also:

        no suitable method found for put(V,K)
            method BST.put(BST<K,V>.Node,K,V) is not applicable
              (actual and formal argument lists differ in length)
            method BST.put(K,V) is not applicable
              (actual argument V cannot be converted to K by method invocation conversion)
          where V,K are type-variables:
            V extends Object declared in method <K,V>reverseBST(BST<K,V>)
            K extends Comparable<K> declared in method <K,V>reverseBST(BST<K,V>)

    From what the error messages are telling me, since my values do not extend Comparable I am unable to use them as keys. If I am right, how can I get around that without changing the given class (maybe a cast)?
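
    The diagnosis in the last paragraph is right, and no cast is needed; the reversing method just has to demand Comparable values in its own signature so that V qualifies as a key of the new tree. A hedged sketch (it assumes the given BST exposes some way to iterate its keys, here called keys()):

        public static <K extends Comparable<K>, V extends Comparable<V>>
                BST<V, K> reverseBST(BST<K, V> originalDict) {
            BST<V, K> reverseDict = new BST<V, K>();
            for (K key : originalDict.keys()) {   // keys() iteration is assumed
                reverseDict.put(originalDict.get(key), key);
            }
            return reverseDict;
        }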


  • OSX launchctl programmatically as root

    - by Lukas1
    I'm trying to start the samba service using launchctl from an OSX app as root, but I get error status -60031. I can run the command in Terminal without problems:

        sudo launchctl load -F /System/Library/LaunchDaemons/com.apple.AppleFileServer.plist

    In the Objective-C code I'm using the AuthorizationExecuteWithPrivileges method (I know it's deprecated, but that really shouldn't be the issue here). Here's the code:

        NSString *command = @"launchctl";
        // Conversion of NSArray args to char** args here (not relevant part of the code)

        OSStatus authStatus = AuthorizationCreate(NULL, kAuthorizationEmptyEnvironment, kAuthorizationFlagDefaults, &_authRef);
        if (authStatus != errAuthorizationSuccess) {
            NSLog(@"Failed to create application authorization: %d", (int)authStatus);
            return;
        }

        FILE* pipe = NULL;
        AuthorizationFlags flags = kAuthorizationFlagDefaults;
        AuthorizationItem right = {kAuthorizationRightExecute, 0, NULL, 0};
        AuthorizationRights rights = {1, &right};

        // Call AuthorizationCopyRights to determine or extend the allowable rights.
        OSStatus stat = AuthorizationCopyRights(_authRef, &rights, NULL, flags, NULL);
        if (stat != errAuthorizationSuccess) {
            NSLog(@"Copy Rights Unsuccessful: %d", (int)stat);
            return;
        }

        OSStatus status = AuthorizationExecuteWithPrivileges(_authRef, command.UTF8String, flags, args, &pipe);
        if (status != errAuthorizationSuccess) {
            NSLog(@"Error executing command %@ with status %d", command, status);
        } else {
            // some other stuff
        }

    I have also tried using different flags instead of kAuthorizationFlagDefaults, but that led to either the same problem or error code -60011 (invalid flags). What am I doing wrong here, please?
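
    -60031 is errAuthorizationToolExecuteFailure, i.e. the privileged tool never ran. One likely cause, offered as an educated guess: AuthorizationExecuteWithPrivileges takes the full POSIX path of the tool and does not search $PATH the way the Terminal does, so a bare @"launchctl" fails before launchctl ever sees the arguments:

        // Full path, as the pathToTool parameter expects; args stays
        // {"load", "-F", "/System/Library/LaunchDaemons/com.apple.AppleFileServer.plist", NULL}
        NSString *command = @"/bin/launchctl";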


  • Is there any need for me to use wstring in the following case

    - by Yan Cheng CHEOK
    Currently I am developing an app for a Chinese customer. Chinese customers mostly have their OS encoding switched to GB2312. I need to write a text file which will be encoded using GB2312. I use std::ofstream to write the file, and I compile my application under MBCS mode, not Unicode. I use the following code to convert CString to std::string, and write it to the file using ofstream:

        std::string Utils::ToString(CString& cString)
        {
            /* Will not work correctly if we are compiled under Unicode mode. */
            return (LPCTSTR)cString;
        }

    To my surprise, it just works. I thought I would need to at least make use of wstring. I tried to do some investigation. Here is the MBCS.txt generated. I tried to print a single character named ? (its value is 0xBDC5):

    - When I use CString to carry this character, its length is 2.
    - When I use Utils::ToString to perform conversion to std::string, the returned string length is 2.
    - I write to the file using std::ofstream.

    My question is: when I examine MBCS.txt using a hex editor, the value is displayed as BD (LSB) and C5 (MSB). But I am using a little-endian machine. Shouldn't the hex editor show me C5 (LSB) and BD (MSB)? I checked Wikipedia; GB2312 doesn't seem to specify an endianness. It seems that using std::string + CString just works fine for my case. May I know in what cases the above methodology will not work, and when I should start to use wstring?
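
    The reason no byte swap appears, for what it's worth: GB2312 as used here is a multi-byte encoding, so the character is stored as the byte sequence 0xBD 0xC5 and written out byte by byte; endianness only matters when a multi-byte integer (e.g. a UTF-16 wchar_t) is stored as one unit. A tiny sketch that produces the same file contents on any CPU:

        #include <fstream>
        #include <string>

        int main()
        {
            // One GB2312 character kept as two bytes in a byte string;
            // the stream writes them in sequence order, so a hex editor
            // shows BD C5 regardless of machine endianness.
            std::string s = "\xBD\xC5";
            std::ofstream out("MBCS.txt", std::ios::binary);
            out << s;
        }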


  • Convert Virtual Key Code to unicode string

    - by Joshua Weinberg
    I have some code I've been using to get the current keyboard layout and convert a virtual key code into a string. This works great in most situations, but I'm having trouble with some specific cases. The one that brought this to light is the accent key next to the backspace key on German QWERTZ keyboards (http://en.wikipedia.org/wiki/File:KB_Germany.svg). That key generates the VK code I'd expect, kVK_ANSI_Equal, but when using a QWERTZ keyboard layout I get no description back. It's ending up as a dead key because it's supposed to be composed with another key. Is there any way to catch these cases and do the proper conversion? My current code is below.

        TISInputSourceRef currentKeyboard = TISCopyCurrentKeyboardInputSource();
        CFDataRef uchr = (CFDataRef)TISGetInputSourceProperty(currentKeyboard, kTISPropertyUnicodeKeyLayoutData);
        const UCKeyboardLayout *keyboardLayout = (const UCKeyboardLayout*)CFDataGetBytePtr(uchr);

        if(keyboardLayout)
        {
            UInt32 deadKeyState = 0;
            UniCharCount maxStringLength = 255;
            UniCharCount actualStringLength = 0;
            UniChar unicodeString[maxStringLength];

            OSStatus status = UCKeyTranslate(keyboardLayout,
                                             keyCode, kUCKeyActionDown, 0,
                                             LMGetKbdType(), kUCKeyTranslateNoDeadKeysBit,
                                             &deadKeyState,
                                             maxStringLength,
                                             &actualStringLength, unicodeString);

            if(actualStringLength > 0 && status == noErr)
                return [[NSString stringWithCharacters:unicodeString length:(NSInteger)actualStringLength] uppercaseString];
        }
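
    One hedged approach for the dead-key cases: let UCKeyTranslate run its dead-key state machine (drop kUCKeyTranslateNoDeadKeysBit) and, when the first call yields no characters but a nonzero deadKeyState, translate a following space to obtain the standalone accent character:

        // Sketch: two-pass translation so a dead key still yields a glyph.
        UInt32 deadKeyState = 0;
        UniCharCount actualStringLength = 0;
        UniChar unicodeString[255];

        OSStatus status = UCKeyTranslate(keyboardLayout, keyCode, kUCKeyActionDown, 0,
                                         LMGetKbdType(), 0, &deadKeyState,
                                         255, &actualStringLength, unicodeString);

        if (status == noErr && actualStringLength == 0 && deadKeyState != 0) {
            // Dead key detected: translating a space (virtual key 49)
            // flushes the pending accent, e.g. '^' on a QWERTZ layout.
            status = UCKeyTranslate(keyboardLayout, 49 /* kVK_Space */, kUCKeyActionDown, 0,
                                    LMGetKbdType(), 0, &deadKeyState,
                                    255, &actualStringLength, unicodeString);
        }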


  • C++ private inheritance and static members/types

    - by WearyMonkey
    I am trying to stop a class from being able to convert its 'this' pointer into a pointer to one of its interfaces. I do this by using private inheritance via a middle proxy class. The problem is that I find private inheritance makes all public static members and types of the base class inaccessible to all classes under the inheriting class in the hierarchy.

        class Base
        {
        public:
            enum Enum
            {
                value
            };
        };

        class Middle : private Base
        {
        };

        class Child : public Middle
        {
        public:
            void Method()
            {
                Base::Enum e = Base::value;  // doesn't compile BAD!
                Base* base = this;           // doesn't compile GOOD!
            }
        };

    I've tried this in both VS2008 (the required version) and VS2010; neither works. Can anyone think of a workaround? Or a different approach to stopping the conversion? Also I am curious about the behavior: is it just a side effect of the compiler implementation, or is it by design? If by design, then why? I always thought of private inheritance as meaning that nobody knows Middle inherits from Base. However, the exhibited behavior implies private inheritance means a lot more than that; in fact, Child has less access to Base than any namespace not in the class hierarchy!
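
    This is by design: inside Child, the unqualified name Base finds Base's injected-class-name, a member reached through the private inheritance path and therefore inaccessible. The namespace-scope name is still public, so qualifying from the global namespace works, and the pointer conversion stays blocked; a sketch:

        void Method()
        {
            ::Base::Enum e = ::Base::value;  // compiles: names the class, not the member
            // Base* base = this;            // still rejected, as intended
        }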


  • How to negate a predicate function using operator ! in C++?

    - by Chan
    Hi, I want to erase all the elements that do not satisfy a criterion, for example: delete all the characters in a string that are not digits. My solution using boost::is_digit worked well.

        struct my_is_digit
        {
            bool operator()( char c ) const
            {
                return c >= '0' && c <= '9';
            }
        };

        int main()
        {
            string s( "1a2b3c4d" );
            s.erase( remove_if( s.begin(), s.end(), !boost::is_digit() ), s.end() );
            s.erase( remove_if( s.begin(), s.end(), !my_is_digit() ), s.end() );
            cout << s << endl;
            return 0;
        }

    Then I tried my own version, and the compiler complained:

        error C2675: unary '!' : 'my_is_digit' does not define this operator or a conversion to a type acceptable to the predefined operator

    I could use the not1() adapter; however, I still think the operator ! is more meaningful in my current context. How could I implement such a ! like boost::is_digit()? Any idea? Thanks, Chan Nguyen
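
    boost::is_digit can be negated because Boost's classification predicates overload operator! to return a negating wrapper; the same trick works for a hand-written functor. A minimal sketch (the negated type's name is mine):

        struct my_is_not_digit
        {
            bool operator()( char c ) const
            {
                return !my_is_digit()( c );
            }
        };

        // lets "!my_is_digit()" build the negated predicate, boost-style
        inline my_is_not_digit operator!( const my_is_digit& )
        {
            return my_is_not_digit();
        }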


  • Can I transform this asynchronous java network API into a monadic representation (or something else)?

    - by AlecZorab
    I've been given a java api for connecting to and communicating over a proprietary bus using a callback-based style. I'm currently implementing a proof-of-concept application in scala, and I'm trying to work out how I might produce a slightly more idiomatic scala interface. A typical (simplified) application might look something like this in Java:

        DataType type = new DataType();
        BusConnector con = new BusConnector();
        con.waitForData(type.getClass()).addListener(new IListener<DataType>() {
            public void onEvent(DataType t) {
                // some stuff happens in here, and then we need some more data
                con.waitForData(anotherType.getClass()).addListener(new IListener<AnotherType>() {
                    public void onEvent(AnotherType t) {
                        // we do more stuff in here, and so on
                    }
                });
            }
        });
        // now we've got the behaviours set up we call
        con.start();

    In scala I can obviously define an implicit conversion from (T => Unit) into an IListener, which certainly makes things a bit simpler to read:

        implicit def func2Ilistener[T](f: (T => Unit)): IListener[T] = new IListener[T] {
            def onEvent(t: T) = f(t)
        }

        val con = new BusConnector
        con.waitForData(DataType.getClass).addListener((d: DataType) => {
            // some stuff, then another wait for stuff
            con.waitForData(OtherType.getClass).addListener((o: OtherType) => {
                // etc
            })
        })

    Looking at this reminded me of both scalaz promises and F# async workflows. My question is this: can I convert this into either a for comprehension or something similarly idiomatic? (I feel like this should map to actors reasonably well too.) Ideally I'd like to see something like:

        for(
            d <- con.waitForData(DataType.getClass);
            val _ = doSomethingWith(d);
            o <- con.waitForData(OtherType.getClass)
            // etc
        )
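
    One way to get that for comprehension, sketched against the standard scala.concurrent library (assuming IListener and BusConnector as in the question; doSomethingWith and handle are placeholders): adapt each waitForData registration into a Future via a Promise, and the nested callbacks flatten out.

        import scala.concurrent.{ Future, Promise }
        import scala.concurrent.ExecutionContext.Implicits.global

        // Adapt one callback registration into a Future.
        def await[T](con: BusConnector, cls: Class[T]): Future[T] = {
          val p = Promise[T]()
          con.waitForData(cls).addListener(new IListener[T] {
            def onEvent(t: T): Unit = p.trySuccess(t)
          })
          p.future
        }

        val done: Future[Unit] = for {
          d <- await(con, classOf[DataType])
          _ = doSomethingWith(d)
          o <- await(con, classOf[OtherType])
        } yield handle(o)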


  • Building argv and argc

    - by Wylie Coyote SG.
    I'm a student programmer using Qt to build a GUI application for work. The primary purpose of this application is to open some of our old-style files, allow better editing, and then save the file in a new format with a new file extension. Recently I was asked to allow this conversion to take place from a terminal. While I do know what argv and argc are, and what they represent, I am unsure how to accomplish what they want. For instance, how to handle relative vs. absolute paths... maybe how to get absolute from relative; perhaps none of that is even needed. My programming experience has been primarily with GUIs, so this is a little new to me. Users would like the following to be run from the terminal:

        application -o /fileLocation /fileDestination template(to determine new format)

    I began using for loops and if statements to accomplish this when I realized that I might be taking the wrong approach to all of this. I WOULD ALSO BE REALLY INTERESTED IF QT HAS SOMETHING FOR THIS! Here is what I have begun coming up with:

        int main(int argc, char *argv[])
        {
            if(argc > 1)
            {
                for(int i = 0; i < argc; i++)
                {
                    if(argv[i] == "-c")
                    {
                        QString fileName = QString::fromStdString(argv[i+1]);
                        QString fileDestination = QString::fromStdString(argv[i+2]);
                        QString templateName = QString::fromStdString(argv[i+3]);
                        QFile fileToConvert(fileName);
                        if(fileToConvert.open(QFile::ReadOnly))
                        {
                            //do stuff

    Thanks for reading my post and a big thanks for any contributions you make to helping me overcome this issue.
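
    Qt does have something for this: QCoreApplication::arguments() gives the arguments as QStrings (avoiding the argv[i] == "-c" pointer comparison, which never compares text), Qt 5.2 and later add QCommandLineParser, and QFileInfo::absoluteFilePath() settles the relative-vs-absolute worry. A hedged sketch of the terminal mode, with names taken loosely from the question:

        #include <QCoreApplication>
        #include <QCommandLineParser>
        #include <QFileInfo>

        int main(int argc, char *argv[])
        {
            QCoreApplication app(argc, argv);

            QCommandLineParser parser;
            parser.addHelpOption();
            QCommandLineOption convertOpt("o", "Convert a file from the terminal.");
            parser.addOption(convertOpt);
            parser.addPositionalArgument("source", "File to convert.");
            parser.addPositionalArgument("destination", "Where to write the result.");
            parser.addPositionalArgument("template", "Output format template.");
            parser.process(app);

            if (parser.isSet(convertOpt)) {
                const QStringList args = parser.positionalArguments();
                // Relative or absolute input both end up absolute here.
                const QString source      = QFileInfo(args.value(0)).absoluteFilePath();
                const QString destination = QFileInfo(args.value(1)).absoluteFilePath();
                const QString templateName = args.value(2);
                // ... open source, convert using templateName, save to destination
                return 0;
            }
            // ... otherwise fall through to the usual GUI start-up
        }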


  • Attempted GCF app for Android

    - by Aaron
    I am new to Android and am trying to create a very basic app that calculates and displays the GCF of two numbers entered by the user. Here is a copy of my GCF.java:

        package com.example.GCF;

        import java.util.Arrays;
        import android.app.Activity;
        import android.os.Bundle;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.EditText;
        import android.widget.TextView;

        public class GCF extends Activity
        {
            private TextView mAnswer;
            private EditText mA, mB;
            private Button ok;
            private String A, B;
            private int iA, iB;

            public void onCreate(Bundle savedInstanceState)
            {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                mA = (EditText) findViewById(R.id.entry);
                mB = (EditText) findViewById(R.id.entry1);
                ok = (Button) findViewById(R.id.ok);
                mAnswer = (TextView) findViewById(R.id.answer1);

                ok.setOnClickListener(new OnClickListener() {
                    public void onClick(View v)
                    {
                        A = mA.getText().toString();
                        B = mB.getText().toString();
                    }
                });

                // the String to int conversion happens here
                iA = Integer.parseInt(A.trim());
                iB = Integer.parseInt(B.trim());

                while (iA != iB)
                {
                    int[] nums = { iA, iB, Math.abs(iA - iB) };
                    Arrays.sort(nums);
                    iA = nums[0];
                    iB = nums[1];
                }

                updateDisplay();
            }

            private void updateDisplay()
            {
                mAnswer.setText(new StringBuilder().append(iA));
            }
        }

    Any suggestions? Thank you!
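
    The parse-and-compute block runs inside onCreate, before the button has ever been clicked, so A and B are still null when Integer.parseInt is reached (a NullPointerException, not a parse problem). Moving the whole computation into the click handler fixes the ordering; a hedged sketch of just the listener:

        ok.setOnClickListener(new OnClickListener() {
            public void onClick(View v)
            {
                int a = Integer.parseInt(mA.getText().toString().trim());
                int b = Integer.parseInt(mB.getText().toString().trim());
                while (a != b)
                {   // subtraction-based GCD, as in the original loop
                    if (a > b) a -= b; else b -= a;
                }
                mAnswer.setText(String.valueOf(a));
            }
        });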


  • Python to C/C++ const char question

    - by tsukemonoki
    I am extending Python with some C++ code. One of the functions I'm using has the following signature:

        int PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict,
                                        char *format, char **kwlist, ...);

    (link: http://docs.python.org/release/1.5.2p2/ext/parseTupleAndKeywords.html)

    The parameter of interest is kwlist. In the link above, examples on how to use this function are given. In the examples, kwlist looks like:

        static char *kwlist[] = {"voltage", "state", "action", "type", NULL};

    When I compile this using g++, I get the warning:

        warning: deprecated conversion from string constant to 'char*'

    So I can change the static char* to a static const char*. Unfortunately, I can't change the Python code. So with this change, I get a different compilation error (can't convert char** to const char**). Based on what I've read here, I can turn on compiler flags to ignore the warning, or I can cast each of the constant strings in the definition of kwlist to char*. Currently, I'm doing the latter. What are other solutions? Sorry if this question has been asked before. I'm new.
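
    A third option that keeps both g++ and the CPython signature happy: string literals can't be handed over as char* cleanly, but named mutable arrays can, and they cost nothing extra. A sketch (the array names are mine):

        static char kw_voltage[] = "voltage";
        static char kw_state[]   = "state";
        static char kw_action[]  = "action";
        static char kw_type[]    = "type";
        static char *kwlist[] = { kw_voltage, kw_state, kw_action, kw_type, NULL };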


  • Storing "binary" data type in C program

    - by puchu
    I need to create a program that converts one number system to other number systems. I used itoa in Windows (Dev C++), and my only problem is that I do not know how to convert binary numbers to other number systems. All the other number system conversions work accordingly. Does this involve something like storing the input to be converted using %? Here is a snippet of my work:

        case 2:
        {
            printf("\nEnter a binary number: ");
            scanf("%d", &num);

            itoa(num, buffer, 8);
            printf("\nOctal %s", buffer);
            itoa(num, buffer, 10);
            printf("\nDecimal %s", buffer);
            itoa(num, buffer, 16);
            printf("\nHexadecimal %s \n", buffer);
            break;
        }

    For decimal I used %d, for octal I used %o and for hexadecimal I used %x. What could be the correct one for binary? Thanks for future answers!
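
    scanf has no binary conversion specifier, so the usual approach is to read the digits as a string and let strtol (from <stdlib.h>) do the base-2 parse; the existing itoa calls then work unchanged. A sketch of the case 2 branch:

        case 2:
        {
            char binbuf[33];
            printf("\nEnter a binary number: ");
            scanf("%32s", binbuf);
            num = (int)strtol(binbuf, NULL, 2);   /* "1010" -> 10 */

            itoa(num, buffer, 8);
            printf("\nOctal %s", buffer);
            itoa(num, buffer, 10);
            printf("\nDecimal %s", buffer);
            itoa(num, buffer, 16);
            printf("\nHexadecimal %s \n", buffer);
            break;
        }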


  • Nested bind expressions

    - by user328543
    This is a followup question to my previous question.

        #include <functional>

        int foo(void) { return 2; }

        class bar
        {
        public:
            int operator()(void) { return 3; }
            int something(int a) { return a; }
        };

        template <class C>
        auto func(C&& c) -> decltype(c())
        {
            return c();
        }

        template <class C>
        int doit(C&& c)
        {
            return c();
        }

        template <class C>
        void func_wrapper(C&& c)
        {
            func( std::bind(doit<C>, std::forward<C>(c)) );
        }

        int main(int argc, char* argv[])
        {
            // call with a function pointer
            func(foo);
            func_wrapper(foo); // error

            // call with a member function
            bar b;
            func(b);
            func_wrapper(b);

            // call with a bind expression
            func(std::bind(&bar::something, b, 42));
            func_wrapper(std::bind(&bar::something, b, 42)); // error

            // call with a lambda expression
            func( [](void)->int { return 42; } );
            func_wrapper( [](void)->int { return 42; } );

            return 0;
        }

    I'm getting compile errors deep in the C++ headers:

        functional:1137: error: invalid initialization of reference of type 'int (&)()' from expression of type 'int (*)()'
        functional:1137: error: conversion from 'int' to non-scalar type 'std::_Bind(bar, int)' requested

    func_wrapper(foo) is supposed to execute func(doit(foo)). In the real code it packages the function for a thread to execute: func would be the function executed by the other thread, and doit sits in between to check for unhandled exceptions and to clean up. But the additional bind in func_wrapper messes things up...
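
    Two different rules bite here, sketched with appropriate hedging: bound arguments decay, so foo is stored as int(*)() while doit<int(&)()> insists on a reference, and std::bind treats a bind expression passed as an argument as a nested bind, invoking it eagerly instead of forwarding it. A lambda wrapper is subject to neither rule:

        template <class C>
        void func_wrapper(C&& c)
        {
            // The closure carries the callable through to doit untouched
            // (no decay, no nested-bind evaluation). Capturing by reference
            // is only safe because func runs the closure before func_wrapper
            // returns; the real threaded version should capture by value.
            func( [&c]() -> int { return doit(std::forward<C>(c)); } );
        }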

