Search Results

Search found 3228 results on 130 pages for 'vb6 conversion'.

Page 116/130

  • Basic problems (type inference or something else?) in Objective-C/Cocoa.

    - by Matt
    Hi, Apologies for how basic these questions are to some. Just started learning Cocoa, working through Hillegass' book, and am trying to write my first program (a GUI Cocoa app that counts the number of characters in a string). I tried this: NSString *string = [textField stringValue]; NSUInteger *stringLength = [string length]; NSString *countString = (@"There are %u characters",stringLength); [label setStringValue:countString]; But I'm getting errors like: Incompatible pointer conversion initializing 'NSUInteger' (aka 'unsigned long'), expected 'NSUInteger *'[-pedantic] for the first line, and this for the second line: Incompatible pointer types initializing 'NSUInteger *', expected 'NSString *' [-pedantic] I did try this first, but it didn't work either: [label setStringValue:[NSString stringWithFormat:@"There are %u characters",[[textField stringValue] length]]] On a similar note, I've only written in easy scripting languages before now, and I'm not sure when I should be allocing/initing objects and when I shouldn't. For example, when is it okay to do this: NSString *myString = @"foo"; or int *length = 5; instead of this: NSString *myString = [[NSString alloc] initWithString:"foo"]; And which ones should I be putting into the header files? I did check Apple's documentation, and CocoaDev, and the book I'm working for but without luck. Maybe I'm looking in the wrong places.. Thanks to anyone who takes the time to reply this: it's appreciated, and thanks for being patient with a beginner. We all start somewhere. EDIT Okay, I tried the following again: [label setStringValue:[NSString stringWithFormat:@"There are %u characters",[[textField stringValue] length]]] And it actually worked this time. Not sure why it didn't the first time, though I think I might have typed %d instead of %u by mistake. However I still don't understand why the code I posted at the top of my original post doesn't work, and I have no idea what the errors mean, and I'd very much like to know because it seems like I'm missing something important there.

    Read the article

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files and none of them seem to answer this question: where does the data go once you import it? Context: I created a user like so: SQL> create user IMPORTER identified by "12345"; SQL> grant connect, unlimited tablespace, resource to IMPORTER; I then ran the 'imp' command as follows: C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp Now there were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warning was: Warning: the objects were exported by OVIEDOE, not by you import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set export client uses WE8ISO8859P1 character set (possible charset conversion) IMP-00046: using FILESIZE value from export file of 2147483648 Now it says it was terminated successfully, so my assumption (I am new to Oracle so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?

    Read the article

  • Casting Type array to Generic array?

    - by George R
    The short version of the question - why can't I do this? I'm restricted to .NET 3.5. T[] genericArray; // Obviously T should be float! genericArray = new T[3]{ 1.0f, 2.0f, 0.0f }; // Can't do this either, why the hell not genericArray = new float[3]{ 1.0f, 2.0f, 0.0f }; Longer version - I'm working with the Unity engine here, although that's not important. What is - I'm trying to throw conversion between its fixed Vector2 (2 floats) and Vector3 (3 floats) and my generic Vector<T> class. I can't cast types directly to a generic array. using UnityEngine; public struct Vector<T> { private readonly T[] _axes; #region Constructors public Vector(int axisCount) { this._axes = new T[axisCount]; } public Vector(T x, T y) { this._axes = new T[2] { x, y }; } public Vector(T x, T y, T z) { this._axes = new T[3]{x, y, z}; } public Vector(Vector2 vector2) { // This doesn't work this._axes = new T[2] { vector2.x, vector2.y }; } public Vector(Vector3 vector3) { // Nor does this this._axes = new T[3] { vector3.x, vector3.y, vector3.z }; } #endregion #region Properties public T this[int i] { get { return _axes[i]; } set { _axes[i] = value; } } public T X { get { return _axes[0];} set { _axes[0] = value; } } public T Y { get { return _axes[1]; } set { _axes[1] = value; } } public T Z { get { return this._axes.Length (Vector2 vector2) { Vector<T> vector = new Vector<T>(vector2); return vector; } public static explicit operator Vector<T>(Vector3 vector3) { Vector<T> vector = new Vector<T>(vector3); return vector; } #endregion }

    Read the article

  • problem finding a header with a c++ makefile

    - by Max
    Hi. I've started working with my first makefile. I'm writing a roguelike in C++ using the libtcod library, and have the following hello world program to test if my environment's up and running: #include "libtcod.hpp" int main() { TCODConsole::initRoot(80, 50, "PartyHack"); TCODConsole::root->printCenter(40, 25, TCOD_BKGND_NONE, "Hello World"); TCODConsole::flush(); TCODConsole::waitForKeypress(true); } My project directory structure looks like this: /CppPartyHack ----/libtcod-1.5.1 # this is the libtcod root folder --------/include ------------libtcod.hpp ----/PartyHack --------makefile --------partyhack.cpp # the above code (while we're here, how do I do proper indentation? Using those dashes is silly.) and here's my makefile: SRCDIR = . INCDIR = ../libtcod-1.5.1/include CFLAGS = $(FLAGS) -I$(INCDIR) -I$(SRCDIR) -Wall CC = gcc CPP = g++ .SUFFIXES: .o .h .c .hpp .cpp $(TEMP)/%.o : $(SRCDIR)/%.cpp $(CPP) $(CFLAGS) -o $@ -c $< $(TEMP)/%.o : $(SRCDIR)/%.c $(CC) $(CFLAGS) -o $@ -c $< CPP_OBJS = $(TEMP)partyhack.o all : partyhack partyhack : $(CPP_OBJS) $(CPP) $(CPP_OBJS) -o $@ -L../libtcod-1.5.1 -ltcod -ltcod++ -Wl,-rpath,. clean : \rm -f $(CPP_OBJS) partyhack I'm using Ubuntu, and my terminal gives me the following errors: max@max-desktop:~/Desktop/Development/CppPartyhack/PartyHack$ make g++ -c -o partyhack.o partyhack.cpp partyhack.cpp:1:23: error: libtcod.hpp: No such file or directory partyhack.cpp: In function ‘int main()’: partyhack.cpp:5: error: ‘TCODConsole’ has not been declared partyhack.cpp:6: error: ‘TCODConsole’ has not been declared partyhack.cpp:6: error: ‘TCOD_BKGND_NONE’ was not declared in this scope partyhack.cpp:7: error: ‘TCODConsole’ has not been declared partyhack.cpp:8: error: ‘TCODConsole’ has not been declared make: * [partyhack.o] Error 1 So obviously, the makefile can't find libtcod.hpp. I've double checked and I'm sure the relative path to libtcod.hpp in INCDIR is correct, but as I'm just starting out with makefiles, I'm uncertain what else could be wrong. My makefile is based off a template that the libtcod designers provided along with the library itself, and while I've looked at a few online makefile tutorials, the code in this makefile is a good bit more complicated than any of the examples the tutorials showed, so I'm assuming I screwed up something basic in the conversion. Thanks for any help.

    Read the article

  • boost::filesystem - how to create a boost path from a Windows path string on POSIX platforms?

    - by VolkA
    I'm reading path names from a database which are stored as relative paths in Windows format, and try to create a boost::filesystem::path from them on a Unix system. What happens is that the constructor call interprets the whole string as the filename. I need the path to be converted to a correct Posix path as it will be used locally. I didn't find any conversion functions in the boost::filesystem reference, nor through google. Am I just blind, is there an obvious solution? If not, how would you do this? Example: std::string win_path("foo\\bar\\asdf.xml"); std::string posix_path("foo/bar/asdf.xml"); // loops just once, as part is the whole win_path interpreted as a filename boost::filesystem::path boost_path(win_path); BOOST_FOREACH(boost::filesystem::path part, boost_path) { std::cout << part << std::endl; } // prints each path component separately boost::filesystem::path boost_path_posix(posix_path); BOOST_FOREACH(boost::filesystem::path part, boost_path_posix) { std::cout << part << std::endl; }
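
    One workaround (a minimal sketch, not part of the original question; the helper name is made up) is to normalize the separators yourself before handing the string to boost::filesystem, so the same relative path is interpreted correctly on POSIX systems:

        // Assumes the stored paths are plain relative paths where every
        // backslash is a directory separator.
        #include <algorithm>
        #include <string>
        #include <boost/filesystem.hpp>

        boost::filesystem::path from_windows_relative(std::string win_path)
        {
            // "foo\\bar\\asdf.xml" -> "foo/bar/asdf.xml"
            std::replace(win_path.begin(), win_path.end(), '\\', '/');
            return boost::filesystem::path(win_path);
        }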

    Read the article

  • Suggestions for a django db structure

    - by rh0dium
    Hi Say I have an unknown number of questions. For example: Is the sky blue [y/n] What date were you born on [date] What is pi [3.14] What is a large integer [100] Now each of these questions poses a different but very type specific answer (boolean, date, float, int). Natively Django can happily deal with these in a model. class SkyModel(models.Model): question = models.CharField("Is the sky blue") answer = models.BooleanField(default=False) class BirthModel(models.Model): question = models.CharField("What date were you born on") answer = models.DateTimeField(default=today) class PiModel(models.Model): question = models.CharField("What is pi") answer = models.FloatField() But this has the obvious problem in that each question has a specific model - so if we need to add a question later I have to change the database. Yuck. So now I want to get fancy - how do I set up a model whereby the answer type conversion happens automagically? ANSWER_TYPES = ( ('boolean', 'boolean'), ('date', 'date'), ('float', 'float'), ('int', 'int'), ('char', 'char'), ) class Questions(models.Model): question = models.CharField() answer = models.CharField() answer_type = models.CharField(choices = ANSWER_TYPES) default = models.CharField() So in theory this would do the following: When I build up my views I look at the type of answer and ensure that I only put in that value. But when I want to pull that answer back out it will return the data in the format specified by the answer_type. Example: 3.14 comes back out as a float, not as a str. How can I perform this sort of automagic transformation? Or can someone suggest a better way to do this? Thanks much!!

    Read the article

  • Using entity framework to connect to multiple similar tables in .net MVC.

    - by Dite
    A relative newcomer to .NET MVC2 and the Entity Framework, I am working on a project which requires a single web application (C# .NET 4) to connect to multiple different databases depending on the route of access (i.e. subdomain). No problem with this in principle, and all the logic is written to transform the subdomain into an entity connection and pass this through to the Entity Model. The problem comes with the fact that the different databases, whilst being largely similar in structure, contain 3 or 4 unique tables bespoke to that instance. To my mind there are two ways to solve this issue, neither of which I am sure will be possible. 1/ Use a separate entity model for each database. - Attempts down this route have thrown up conflicts where table/sp names are the same across different db's, or implicit conversion errors when I try and put the different models in different namespaces. or 2/ Overwrite the classes which refer to the changeable database objects based on the value of a base controller property. - I have found nothing to suggest I can even do this. My question is whether either of these routes can ever work in principle, or if I should just give up on the EF and connect to the databases directly using ADO. Perhaps there is another way to solve this problem I haven't thought of? Thanks for any help...

    Read the article

  • Android: Problems downloading images and converting to bitmaps

    - by Mike
    Hi all, I am working on an application that downloads images from a url. The problem is that only some images are being correctly downloaded and others are not. First off, here is the problem code: public Bitmap downloadImage(String url) { HttpClient client = new DefaultHttpClient(); HttpResponse response = null; try { response = client.execute(new HttpGet(url)); } catch (ClientProtocolException cpe) { Log.i(LOG_FILE, "client protocol exception"); return null; } catch (IOException ioe) { Log.i(LOG_FILE, "IOE downloading image"); return null; } catch (Exception e) { Log.i(LOG_FILE, "Other exception downloading image"); return null; } // Convert images from stream to bitmap object try { Bitmap image = BitmapFactory.decodeStream(response.getEntity().getContent()); if(image==null) Log.i(LOG_FILE, "image conversion failed"); return image; } catch (Exception e) { Log.i(LOG_FILE, "Other exception while converting image"); return null; } } So what I have is a method that takes the url as a string argument and then downloads the image, converts the HttpResponse stream to a bitmap by means of the BitmapFactory.decodeStream method, and returns it. The problem is that when I am on a slow network connection (almost always 3G rather than Wi-Fi) some images are converted to null--not all of them, only some of them. Using a Wi-Fi connection works perfectly; all the images are downloaded and converted properly. Does anyone know why this is happening? Or better, how can I fix this? How would I even go about testing to determine the problem? Any help is awesome; thank you!

    Read the article

  • Disable ARC with Xcode 5

    - by user2187565
    First, sorry for my bad English; I'm French and 15 years old, but StackOverflow is for me the best forum for developers. So, in the previous versions of Xcode, we could disable ARC (Automatic Reference Counting) in the project settings when we created the project. Not any more with Xcode 5, and ARC poses a problem for me: with a property list file, at the reading step, Xcode gives me an error: "implicit conversion of 'int' to 'id' is disallowed with ARC". I did not have the problem with the same code in Xcode 4. In my property list file, the keys are numbers, and also in my viewController.m. NIKOS M.: No problem, but I don't see how I can add a compiler flag with the 5th version of Xcode. The code (with French strings...): NSString *error; NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]; NSString *plistPath = [rootPath stringByAppendingPathComponent:@"Save.plist"]; NSArray *keys = [NSArray arrayWithObjects:@"valeurCompteur1", @"valeurCompteur2", @"valeurCompteur3", @"valeurCompteur4", @"valeurCompteur5", @"nomCompteur1", @"nomCompteur2", @"nomCompteur3", @"nomCompteur4", @"nomCompteur5", nil]; NSArray *objs = [NSArray arrayWithObjects: compteur1, compteur2, compteur3, compteur4, compteur5, nameC1, nameC2, nameC3, nameC4, nameC5, nil]; REVIEW: When I disable ARC for the target, a warning persists. How can I resolve that, please? Thank you very much.

    Read the article

  • The same C# code produces different output in Visio Professional and Premium

    - by user615993
    I have built a simple conversion Add-In, but its behavior is unfortunately different in the different Visio editions (Visio 2010 Professional and Visio 2010 Premium). The Add-In takes a Process-Diagram created with shapes from Stencil_1.vss and creates a new, slightly different Process-Diagram with shapes from Stencil_2.vsd. It loops through a Visio page and for each shape found creates a new shape from a new master shape (from Stencil_2.vsd) and drops it into the new page. Geometry, captions etc. are the same; only the shape appearance is changed. Below is the source diagram: When I run the code in Visio 2010 Professional the swimlane shapes are drawn correctly. When I run the same code from Visio Premium the swimlane appearance and layout are mismatched: Both times I drop the SAME shape ("Swimlane", from the same stencil) into the page with the SAME code fragment: Visio.Master vm = swimlane_stencil.Masters.get_ItemU(@"Swimlane"); Visio.Shape TargetShape = targetPage.Drop(vm, shape_x, shape_y); How can I ensure that the code produces the same (correct) output every time? Must I disable any (Premium) features in the swimlane ShapeSheet?

    Read the article

  • Square Brackets in Python Regular Expressions (re.sub)

    - by user1479984
    I'm migrating wiki pages from the FlexWiki engine to the FOSwiki engine using Python regular expressions to handle the differences between the two engines' markup languages. The FlexWiki markup and the FOSwiki markup, for reference. Most of the conversion works very well, except when I try to convert the renamed links. Both wikis support renamed links in their markup. For example, FlexWiki uses: "Link To Wikipedia":[http://www.wikipedia.org/] FOSwiki uses: [[http://www.wikipedia.org/][Link To Wikipedia]] both of which produce the same rendered link. I'm using the regular expression renameLink = re.compile ("\"(?P<linkName>[^\"]+)\":\[(?P<linkTarget>[^\[\]]+)\]") to parse out the link elements from the FlexWiki markup, which after running through something like "Link Name":[LinkTarget] is reliably producing the groups <linkName> = Link Name, <linkTarget> = LinkTarget. My issue occurs when I try to use re.sub to insert the parsed content into the FOSwiki markup. My experience with regular expressions isn't anything to write home about, but I'm under the impression that, given the groups <linkName> = Link Name, <linkTarget> = LinkTarget, a line like line = renameLink.sub ( "[[\g<linkTarget>][\g<linkName>]]" , line ) should produce [[LinkTarget][Link Name]] However, in the output to the text files I'm getting [[LinkTarget [[Link Name]] which breaks the renamed links. After a little bit of fiddling I managed a workaround, where line = renameLink.sub ( "[[\g<linkTarget>][ [\g<linkName>]]" , line ) produces [[LinkTarget][ [[Link Name]] which, when displayed in FOSwiki looks like <[[Link Name> <--- Which WORKS, but isn't very pretty. I've also tried line = renameLink.sub ( "[[\g<linkTarget>]" + "[\g<linkName>]]" , line ) which is producing [[linkTarget [[linkName]] There are probably thousands of instances of these renamed links in the pages I'm trying to convert, so fixing it by hand isn't any good. For the record I've run the script under Python 2.5.4 and Python 2.7.3, and gotten the same results. Am I missing something really obvious with the syntax? Or is there an easy workaround?

    Read the article

  • Short file names versus long file names in Windows

    - by normski
    I have some code which gets the short name from a file path, using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext" However, following conversion to short, then back to long, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT" The short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS2008). Does anybody have any idea why this might be happening? DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0); if(pathLengthNeeded != 0) { WCHAR* shortPath = new WCHAR[pathLengthNeeded]; DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded); if(newPathNameLength != 0) { UI_STRING unicodePath(shortPath); std::string asciiPath = StringFromUserString(unicodePath); pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(),NULL, 0); if(pathLengthNeeded != 0) {// convert it back to a long path if possible. For goodness sake can't we use Unicode throughout?F char* longPath = new char[pathLengthNeeded]; DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded); if(newPathNameLength != 0) { std::string longPathString(longPath, newPathNameLength); asciiPath = longPathString; } delete [] longPath; } SetFullPathName(asciiPath); } delete [] shortPath; }
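
    For comparison, a minimal sketch (not from the original code base) that keeps the round trip entirely in Unicode via GetLongPathNameW; it may not explain the mis-resolution by itself, but it removes the lossy ANSI conversion step from the middle:

        #include <windows.h>
        #include <string>
        #include <vector>

        std::wstring LongPathFromShort(const std::wstring& shortPath)
        {
            DWORD needed = ::GetLongPathNameW(shortPath.c_str(), NULL, 0);
            if (needed == 0)
                return shortPath;                      // leave unchanged on failure

            std::vector<wchar_t> buffer(needed);
            DWORD written = ::GetLongPathNameW(shortPath.c_str(), &buffer[0], needed);
            if (written == 0 || written >= needed)
                return shortPath;

            return std::wstring(&buffer[0], written);
        }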

    Read the article

  • C++ iterator and const_iterator problem for own container class

    - by BaCh
    Hi there, I'm writing an own container class and have run into a problem I can't get my head around. Here's the bare-bone sample that shows the problem. It consists of a container class and two test classes: one test class using a std:vector which compiles nicely and the second test class which tries to use my own container class in exact the same way but fails miserably to compile. #include <vector> #include <algorithm> #include <iterator> using namespace std; template <typename T> class MyContainer { public: class iterator { public: typedef iterator self_type; inline iterator() { } }; class const_iterator { public: typedef const_iterator self_type; inline const_iterator() { } }; iterator begin() { return iterator(); } const_iterator begin() const { return const_iterator(); } }; // This one compiles ok, using std::vector class TestClassVector { public: void test() { vector<int>::const_iterator I=myc.begin(); } private: vector<int> myc; }; // this one fails to compile. Why? class TestClassMyContainer { public: void test(){ MyContainer<int>::const_iterator I=myc.begin(); } private: MyContainer<int> myc; }; int main(int argc, char ** argv) { return 0; } gcc tells me: test2.C: In member function ‘void TestClassMyContainer::test()’: test2.C:51: error: conversion from ‘MyContainer::iterator’ to non-scalar type ‘MyContainer::const_iterator’ requested I'm not sure where and why the compiler wants to convert an iterator to a const_iterator for my own class but not for the STL vector class. What am I doing wrong?
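
    A common fix for exactly this error (sketched below with a simplified, hypothetical pointer member rather than the real iterator internals) is to give const_iterator a converting constructor that accepts an iterator, which is effectively what std::vector's iterator pair provides:

        template <typename T>
        class MyContainer {
        public:
            class iterator {
            public:
                explicit iterator(T* p = 0) : ptr_(p) { }
                T* ptr_;   // public only to keep the sketch short
            };

            class const_iterator {
            public:
                explicit const_iterator(const T* p = 0) : ptr_(p) { }
                // the converting constructor that makes
                // "const_iterator I = myc.begin();" legal on a non-const container
                const_iterator(const iterator& it) : ptr_(it.ptr_) { }
                const T* ptr_;
            };

            iterator begin() { return iterator(); }
            const_iterator begin() const { return const_iterator(); }
        };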

    Read the article

  • How to cast a C struct to another struct type if their memory sizes are equal?

    - by Eonil
    I have 2 matrix structs that hold equal data but have different forms, like these: // Matrix type 1. typedef float Scalar; typedef struct { Scalar e[4]; } Vector; typedef struct { Vector e[4]; } Matrix; // Matrix type 2 (you may know this if you're an iPhone developer) struct CATransform3D { CGFloat m11, m12, m13, m14; CGFloat m21, m22, m23, m24; CGFloat m31, m32, m33, m34; CGFloat m41, m42, m43, m44; }; typedef struct CATransform3D CATransform3D; Their memory sizes are equal. So I believe there is a way to convert these types without any pointer operations or copying, like this: // Implemented from external lib. CATransform3D CATransform3DMakeScale (CGFloat sx, CGFloat sy, CGFloat sz); Matrix m = (Matrix)CATransform3DMakeScale ( 1, 2, 3 ); Is this possible? Currently the compiler prints an "error: conversion to non-scalar type requested" message.
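
    One approach (a hedged sketch, not from the original code; the function name is made up) is an explicit byte copy instead of a cast, which is well defined as long as the sizes and layouts really match; it assumes CGFloat and Scalar are the same underlying type:

        #include <string.h>
        #include <assert.h>

        Matrix MatrixFromCATransform3D(CATransform3D t)
        {
            Matrix m;
            assert(sizeof m == sizeof t);   // both should be 16 floats in this layout
            memcpy(&m, &t, sizeof m);       // explicit copy, unlike a struct cast
            return m;
        }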

    Read the article

  • C++ Operator Ambiguity

    - by Scott
    Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, for the code compiled on my desktop. However, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do: I have made my own vector class called Vector4 which looks something like this: class Vector4 { private: GLfloat vector[4]; ... } Then I have these operators, which are causing the problem: operator GLfloat* () { return vector; } operator const GLfloat* () const { return vector; } GLfloat& operator [] (const size_t i) { return vector[i]; } const GLfloat& operator [] (const size_t i) const { return vector[i]; } I have the conversion operator so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler: enum {x, y, z, w} Vector4 v(1.0, 2.0, 3.0, 4.0); glTranslatef(v[x], v[y], v[z]); Here are the candidates: candidate 1: const GLfloat& Vector4:: operator[](size_t) const candidate 2: operator[](const GLfloat*, int) <built-in> Why would it try to convert my Vector4 to a GLfloat* first when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
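
    One explanation sometimes offered for this ambiguity (an assumption on my part, not something stated in the question) is that the conversion operator makes the built-in pointer subscript a viable candidate, and the enum index can match that candidate's index type at least as well as it matches size_t, so neither candidate wins outright. A reduced sketch of a workaround, keeping only the relevant members:

        typedef float GLfloat;   // normally supplied by the OpenGL headers

        class Vector4 {
        public:
            // int overloads: the enum index is an exact-or-promoted match here,
            // so the member candidates are no longer worse on the index argument
            GLfloat&       operator[](int i)       { return vector[i]; }
            const GLfloat& operator[](int i) const { return vector[i]; }

            operator GLfloat*()             { return vector; }
            operator const GLfloat*() const { return vector; }

        private:
            GLfloat vector[4];
        };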

    Read the article

  • Is there any need for me to use wstring in the following case

    - by Yan Cheng CHEOK
    Currently, I am developing an app for a Chinese customer. Chinese customers have mostly switched to GB2312 as their OS encoding. I need to write a text file which will be encoded using GB2312. I use std::ofstream. I compile my application under MBCS mode, not Unicode. I use the following code to convert CString to std::string and write it to file using ofstream: std::string Utils::ToString(CString& cString) { /* Will not work correctly, if we are compiled under unicode mode. */ return (LPCTSTR)cString; } To my surprise, it just works. I thought I would need to at least make use of wstring, so I tried to do some investigation. Here is the MBCS.txt generated. I try to print a single character named ? (its value is 0xBDC5). When I use CString to carry this character, its length is 2. When I use Utils::ToString to perform the conversion to std::string, the returned string length is 2. I write to file using std::ofstream. My question is: when I examine MBCS.txt using a hex editor, the value is displayed as BD (LSB) and C5 (MSB). But I am using a little endian machine. Shouldn't the hex editor show me C5 (LSB) and BD (MSB)? I checked Wikipedia; GB2312 doesn't seem to specify endianness. It seems that using std::string + CString just works fine for my case. May I know in what case the above methodology will not work, and when I should start to use wstring?
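
    A side note and a sketch (not from the original code): GB2312 is a byte-oriented multi-byte encoding, so the lead byte is simply written first and machine endianness never enters into it. If the code ever does have to handle the text as Unicode (a _UNICODE build, input from wide APIs, and so on), an explicit conversion to Windows code page 936 would keep the file output the same as what the MBCS build writes today:

        #include <windows.h>
        #include <string>

        std::string ToGb2312(const std::wstring& wide)
        {
            int needed = ::WideCharToMultiByte(936, 0, wide.c_str(), (int)wide.size(),
                                               NULL, 0, NULL, NULL);
            std::string narrow(needed, '\0');
            if (needed > 0)
                ::WideCharToMultiByte(936, 0, wide.c_str(), (int)wide.size(),
                                      &narrow[0], needed, NULL, NULL);
            return narrow;
        }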

    Read the article

  • What /else/ causes this?

    - by Mordachai
    MFC Toolbox Library.lib(SimpleFileIO.obj) : error LNK2005: _wcsnlen already defined in libcmtd.lib(wcslen_s.obj) fatal error LNK1169: one or more multiply defined symbols found This is driving me nuts. Normally, one would get this if the various projects that are a part of their solution do not agree on which CRT to use (single threaded, multi-threaded, release or debug). However, I have been over this thing about 500 times now, and they all agree. Background: this is a VS 2010 project just converted from VS 2008. MFC Toolbox Library.lib is set to compile as a static library, using /MTd, as is the target .exe I am trying to compile in this solution. Further, the solution that this is being converted from (VS 2008) already compiles & links properly!!! So it's not like that there is a disagreement between the two .vcproj's - or at least there wasn't before the conversion. Furthermore, the MFC Toolbox Library is used by about 25 other projects in another solution - and in that solution (Master Build English) it compiles & links against those other projects without complaint in both debug and release targets. I have just spent the last hour going over every single project property for this target project (Cimex Header Viewer) vs. several different target exe projects in Master Build English solution - and I cannot find a difference. They appear to be identical, excepting that they're different names. I've tried doing a clean & build all. I'm simply out of ideas. Does anyone have a thought on what else I might investigate??? I think I'm ready to start chewing glass. :(

    Read the article

  • how to send image to remote server using web services in android

    - by Aswan
    I need to get an image from the SD card and store that image on a remote server. I am getting the image from the SD card and I converted that image to a byte array using a bitmap. The problem is that if I observe the byte array it is showing some different values; it does not match the .NET image byte array conversion. Can you please help if you have any solution? It is very urgent for me. The following is the code I am using; can you please advise? FileInputStream fin = new FileInputStream(new File("/sdcard/pictures/1.jpg")); BufferedInputStream bis = new BufferedInputStream(fin,3000); byte[] data = new byte[bis.available()]; bis.read(data, 0, data.length); byte[] data1=new byte[data.length]; for (int i = 0; i < data.length; i++) { System.out.print(data[i]); data1[i]=data[i]; } System.out.println("5..................."+data1); Bitmap bitmap = BitmapFactory.decodeByteArray(data1,0,data1.length); System.out.println("6..................."+data1.length); Log.v("hgfjohfjghjdfhgj",""+bitmap); if(bitmap!=null) image.setImageBitmap(bitmap); else Log.e("Bitmap "," Not Created");

    Read the article

  • Which is the best way to encode batch videos on server side?

    - by albanx
    Hello, I am asking a general question since I am a developer and I have no advanced experience in video processing. I have to prepare a web application whose purpose is to allow video file uploads to our company server and then video processing by the server, on user command. The purpose of the web application is to allow the user to run some processing on a video depending on the action launched from the web app. The server has to: convert video to different formats (mp4, flv...); extract keyframes from video and save them in jpeg format; optionally extract audio from video; automatically check audio & video quality (black frames, silence detection); perform scene-change detection and keyframe extraction... This is what my bosses wanted from the web-based application (with the server support, obviously), and I understand only the first 3 points of this list; the rest was Arabic to me. My question is: which is the best and fastest server-side application for this work, one that can support multiple batch video conversions from the command line (command line for php-soap-socket interaction, or something else)? Is Adobe Media Server suitable for batch video conversion? Which Adobe products can be used for this purpose? Note: I have experience with InDesign Server scripting (sending XML with PHP and SOAP calls...), and I am looking for something similar for video processing. I will appreciate any answers. THANKS ALL

    Read the article

  • C++ compiler unable to find function (namespace related)

    - by CS student
    I'm working in Visual Studio 2008 on a C++ programming assignment. We were supplied with files that define the following namespace hierarchy (the names are just for the sake of this post, I know "namespace XYZ-NAMESPACE" is redundant): (MAIN-NAMESPACE){ a bunch of functions/classes I need to implement... (EXCEPTIONS-NAMESPACE){ a bunch of exceptions } (POINTER-COLLECTIONS-NAMESPACE){ Set and LinkedList classes, plus iterators } } The MAIN-NAMESPACE contents are split between a bunch of files, and for some reason which I don't understand the operator<< for both Set and LinkedList is entirely outside of the MAIN-NAMESPACE (but within Set and LinkedList's header file). Here's the Set version: template<typename T> std::ostream& operator<<(std::ostream& os, const MAIN-NAMESPACE::POINTER-COLLECTIONS-NAMESPACE::Set<T>& set) Now here's the problem: I have the following data structure: Set A Set B Set C double num It's defined to be in a class within MAIN-NAMESPACE. When I create an instance of the class, and try to print one of the sets, it tells me that: error C2679: binary '<<' : no operator found which takes a right-hand operand of type 'const MAIN-NAMESPACE::POINTER-COLLECTIONS-NAMESPACE::Set' (or there is no acceptable conversion) However, if I just write a main() function, and create Set A, fill it up, and use the operator- it works. Any idea what is the problem? (note: I tried any combination of using and include I could think of).
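
    One frequent cause of this kind of C2679 (an assumption, since the full project isn't shown) is that an operator<< declared outside the relevant namespaces is not visible from the point of the call, for example when another operator<< in an enclosing scope hides it. A hedged sketch of the usual remedy, with placeholder names since the real ones differ: declare the stream operator in the same namespace as Set so argument-dependent lookup finds it from any calling scope:

        #include <ostream>

        namespace MAIN_NAMESPACE {
        namespace POINTER_COLLECTIONS_NAMESPACE {

            template <typename T>
            class Set { /* ... */ };

            // Living next to Set, this operator is found by argument-dependent
            // lookup wherever a Set<T> is streamed, regardless of which
            // namespace the calling code sits in.
            template <typename T>
            std::ostream& operator<<(std::ostream& os, const Set<T>& set)
            {
                // real printing code would go here
                return os;
            }

        }  // namespace POINTER_COLLECTIONS_NAMESPACE
        }  // namespace MAIN_NAMESPACE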

    Read the article

  • C++: Maybe you know this pitfall?

    - by Martijn Courteaux
    Hi, I'm developing a game. I have a header GameSystem (just methods like the game loop, no class) with two variables: int mouseX and int mouseY. These are updated in my game loop. Now I want to access them from the Game.cpp file (a class built from a header file and a source file). So, I #include "GameSystem.h" in Game.h. After doing this I get a lot of compile errors. When I remove the include it says, of course: Game.cpp:33: error: ‘mouseX’ was not declared in this scope Game.cpp:34: error: ‘mouseY’ was not declared in this scope where I want to access mouseX and mouseY. All my .h files have header guards, generated by Eclipse. I'm using SDL, and if I remove the lines that want to access the variables, everything compiles and runs perfectly (*). I hope you can help me... This is the error log when I #include "GameSystem.h" (all the code it refers to works, as explained by the (*)): In file included from ../trunk/source/domein/Game.h:14, from ../trunk/source/domein/Game.cpp:8: ../trunk/source/domein/GameSystem.h:30: error: expected constructor, destructor, or type conversion before ‘*’ token ../trunk/source/domein/GameSystem.h:46: error: variable or field ‘InitGame’ declared void ../trunk/source/domein/GameSystem.h:46: error: ‘Game’ was not declared in this scope ../trunk/source/domein/GameSystem.h:46: error: ‘g’ was not declared in this scope ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘char’ ../trunk/source/domein/GameSystem.h:46: error: expected primary-expression before ‘bool’ ../trunk/source/domein/FPS.h:46: warning: ‘void FPS_SleepMilliseconds(int)’ defined but not used This is the code which tries to access the two variables: SDL_Rect pointer; pointer.x = mouseX; pointer.y = mouseY; pointer.w = 3; pointer.h = 3; SDL_FillRect(buffer, &pointer, 0xFF0000);
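
    A sketch of one conventional arrangement (hedged: the forward declaration and the InitGame signature are guesses based on the error log, not the actual project): keep only extern declarations of the shared variables in the header, define them once in a .cpp file, and forward-declare Game instead of letting Game.h and GameSystem.h include each other:

        // GameSystem.h
        #ifndef GAMESYSTEM_H
        #define GAMESYSTEM_H

        class Game;          // forward declaration breaks the Game.h <-> GameSystem.h cycle

        extern int mouseX;   // declarations only: no storage is allocated here
        extern int mouseY;

        // signature guessed from the error log
        void InitGame(Game* g, const char* title, bool fullscreen);

        #endif

        // GameSystem.cpp
        // #include "GameSystem.h"
        int mouseX = 0;      // the one and only definition of each variable
        int mouseY = 0;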

    Read the article

  • User Defined Conversions in C++

    - by wash
    Recently, I was browsing through my copy of the C++ Pocket Reference from O'Reilly Media, and I was surprised when I came across a brief section and example regarding user-defined conversion for user-defined types: #include <iostream> class account { private: double balance; public: account (double b) { balance = b; } operator double (void) { return balance; } }; int main (void) { account acc(100.0); double balance = acc; std::cout << balance << std::endl; return 0; } I've been programming in C++ for awhile, and this is the first time I've ever seen this sort of operator overloading. The book's description of this subject is somewhat brief, leaving me with a few unanswered questions about this feature: Is this a particularly obscure feature? As I said, I've been programming in C++ for awhile and this is the first time I've ever come across this. I haven't had much luck finding more in-depth material regarding this. Is this relatively portable? (I'm compiling on GCC 4.1) Can user-defined conversions to user defined types be done? e.g. operator std::string () { /* code */ }
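
    On the last question, a small sketch (modeled on the book's example, not taken from it) suggesting that a conversion to a user-defined type such as std::string follows the same pattern; note that marking conversion operators explicit only arrived in C++11, so on GCC 4.1 they are always implicit:

        #include <iostream>
        #include <sstream>
        #include <string>

        class account {
        private:
            double balance;
        public:
            account(double b) { balance = b; }

            // user-defined conversion to a user-defined type
            operator std::string() const {
                std::ostringstream out;
                out << "balance: " << balance;
                return out.str();
            }
        };

        int main(void) {
            account acc(100.0);
            std::string s = acc;            // uses operator std::string()
            std::cout << s << std::endl;
            return 0;
        }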

    Read the article

  • Using groovy ws with enum types?

    - by Jared
    I'm trying to use groovy ws to call a webservice. One of the properties of the generated class is itself a class with an enum type. Although the debug messages show that com.test.FinalActionType is created at runtime when the WSDL is read, I can't create an instance of it using code like proxy.create("com.test.FinalActionType"). When I try to assign a string to my class in place of an instance of FinalActionType, groovy is not able to do the conversion. How can I get an instance of this class to use in a webservice call? I've pasted the important part of the WSDL below. <xsd:simpleType name="FinalActionType"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="stop"/> <xsd:enumeration value="quit"/> <xsd:enumeration value="continue"/> <xsd:whiteSpace value="collapse"/> </xsd:restriction> </xsd:simpleType>

    Read the article

  • Convert Virtual Key Code to unicode string

    - by Joshua Weinberg
    I have some code I've been using to get the current keyboard layout and convert a virtual key code into a string. This works great in most situations, but I'm having trouble with some specific cases. The one that brought this to light is the accent key next to the backspace key on german QWERTZ keyboards. http://en.wikipedia.org/wiki/File:KB_Germany.svg That key generates the VK code I'd expect kVK_ANSI_Equal but when using a QWERTZ keyboard layout I get no description back. Its ending up as a dead key because its supposed to be composed with another key. Is there any way to catch these cases and do the proper conversion? My current code is below. TISInputSourceRef currentKeyboard = TISCopyCurrentKeyboardInputSource(); CFDataRef uchr = (CFDataRef)TISGetInputSourceProperty(currentKeyboard, kTISPropertyUnicodeKeyLayoutData); const UCKeyboardLayout *keyboardLayout = (const UCKeyboardLayout*)CFDataGetBytePtr(uchr); if(keyboardLayout) { UInt32 deadKeyState = 0; UniCharCount maxStringLength = 255; UniCharCount actualStringLength = 0; UniChar unicodeString[maxStringLength]; OSStatus status = UCKeyTranslate(keyboardLayout, keyCode, kUCKeyActionDown, 0, LMGetKbdType(), kUCKeyTranslateNoDeadKeysBit, &deadKeyState, maxStringLength, &actualStringLength, unicodeString); if(actualStringLength > 0 && status == noErr) return [[NSString stringWithCharacters:unicodeString length:(NSInteger)actualStringLength] uppercaseString]; }

    Read the article

  • Detecting well behaved / well known bots

    - by Simon_Weaver
    I found this question very interesting : Programmatic Bot Detection I have a very similar question, but I'm not bothered about 'badly behaved bots'. I am tracking (in addition to google analytics) the following per visit : Entry URL Referer UserAgent Adwords (by means of query string) Whether or not the user made a purchase etc. The problem is that to calculate any kind of conversion rate I'm ending up with lots of 'bot' visits that are greatly skewing my results. I'd like to ignore as many as possible bot visits, but I want a solution that I don't need to monitor too closely, and that won't in itself be a performance hog and preferably still work if someone has javascript disabled. Are there good published lists of the top 100 bots or so? I did find a list at http://www.user-agents.org/ but that appears to contain hundreds if not thousands of bots. I don't want to check every referer against thousands of links. Here is the current googlebot UserAgent. How often does it change? Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

    Read the article
