Search Results

Search found 2746 results on 110 pages for 'vector algebra'.

  • Calculating skew of text OpenCV

    - by Nick
    I am trying to calculate the skew of text in an image so I can correct it for the best OCR results. Currently this is the function I am using:

        double compute_skew(Mat &img)
        {
            // Binarize
            cv::threshold(img, img, 225, 255, cv::THRESH_BINARY);
            // Invert colors
            cv::bitwise_not(img, img);
            cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 3));
            cv::erode(img, img, element);
            std::vector<cv::Point> points;
            cv::Mat_<uchar>::iterator it = img.begin<uchar>();
            cv::Mat_<uchar>::iterator end = img.end<uchar>();
            for (; it != end; ++it)
                if (*it)
                    points.push_back(it.pos());
            cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
            double angle = box.angle;
            if (angle < -45.)
                angle += 90.;
            cv::Point2f vertices[4];
            box.points(vertices);
            for (int i = 0; i < 4; ++i)
                cv::line(img, vertices[i], vertices[(i + 1) % 4], cv::Scalar(255, 0, 0), 1, CV_AA);
            return angle;
        }

    When I look at the angle in the debugger I get 0.000000. However, when I give it this other image, I get proper results: a skew of about 16 degrees. How can I properly detect the skew in the first image?
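
    A hedged guess at the zero-angle result: with a fixed cutoff of 225, a darker scan can binarize to an empty (or nearly empty) foreground, and minAreaRect over a degenerate point set reports an angle of 0. A minimal sketch using Otsu's method instead of the hard-coded threshold (cv::findNonZero needs OpenCV 2.4.4 or later; otherwise keep the iterator loop above):

        double compute_skew_otsu(cv::Mat &img)
        {
            cv::Mat bin;
            // THRESH_BINARY_INV makes the ink white; OTSU picks the cutoff
            // from the image histogram instead of hard-coding 225.
            cv::threshold(img, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

            std::vector<cv::Point> points;
            cv::findNonZero(bin, points);       // all foreground (ink) pixels
            if (points.empty())
                return 0.0;                     // nothing to measure

            double angle = cv::minAreaRect(points).angle;
            if (angle < -45.0)
                angle += 90.0;
            return angle;
        }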

  • Advice on using Leaks in Instruments for noobs

    - by Gyozo Kudor
    Hello, I am pretty new to iPhone development. I have run my app for the first time using "Leaks" in Instruments. It shows me around 20 leaks; the smallest is 32 bytes and there is one of 1 KB. I have followed the memory management guidelines (I (think I) understand how and when to use release, not to use it when adding to autorelease pools, that for every copy, retain, init there should be a release, etc.). I don't think I understand the output of Leaks in Instruments, though. What do "Responsible Library" and "Responsible Frame" mean? There are some classes and methods listed that I never used directly. Are there any good tutorials for debugging memory leaks in Instruments, or other advice you can give me regarding leaks? Thanks in advance. Here are the two largest leaks:

        Leaked Object     Address    Size  Responsible Library  Responsible Frame
        Malloc 1.00 KB    0x4827400  1024  CFNetwork            std::vector<*, std::allocator<*> >::reserve(unsigned long)

    (I have no idea what this is.)

        Leaked Object     # 5        640   UIKit                UIImagePickerLoadPhotoLibraryIfNecessary
        Malloc 128 Bytes

    (So does this mean UIImagePicker is leaking memory?) And the first leak I get:

        Leaked Object     Address    Size  Responsible Library  Responsible Frame
        Malloc 128 Bytes  0x442dfd0  128   UIKit                UIKeyboardInputManagerClassForInputMode

    I don't understand any of those.

  • Typedef and Struct in C and H files

    - by Leif Andersen
    I've been using the following code to create various structs, but only give people outside of the C file a pointer to them. (Yes, I know that they could potentially mess around with it, so it's not entirely like the private keyword in Java, but that's okay with me.) Anyway, I looked at this code today, and I'm really surprised that it's actually working; can anyone explain why? In my C file, I create my struct, but don't give it a tag in the typedef namespace:

        struct LABall {
            int x;
            int y;
            int radius;
            Vector velocity;
        };

    And in the H file, I put this:

        typedef struct LABall* LABall;

    I am obviously using #import "LABall.h" in the C file, but I am NOT using #import "LABall.c" in the header file, as that would defeat the whole purpose of a separate header file. So, why am I able to create a pointer to the LABall struct in the H file when I haven't actually included it? Does it have something to do with the struct namespace working across files, even when one file is in no way linked to another? Thank you.
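
    For what it's worth, this works because of two separate C rules. First, struct tags live in their own namespace, distinct from ordinary identifiers, so typedef struct LABall* LABall; legally declares LABall as a pointer type while struct LABall remains the tag. Second, a pointer to an incomplete type may be declared, copied, and passed around freely; only dereferencing it requires the full definition, which exists only in the .c file. A minimal sketch of the more common spelling of this opaque-pointer idiom (function names hypothetical):

        /* LABall.h -- public interface; the struct stays an incomplete type here */
        typedef struct LABall LABall;          /* tag and typedef may share a name */
        LABall* LABall_create(int x, int y, int radius);
        int     LABall_radius(const LABall* ball);

        /* LABall.c -- the only file that can see the members */
        struct LABall { int x, y, radius; };
        int LABall_radius(const LABall* ball) { return ball->radius; }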

  • Spatial Rotation in Gmod Expression2

    - by Fascia
    I'm using Expression2 to program behavior in Garry's Mod (http://wiki.garrysmod.com/?title=Wire_Expression2). To set the precedent: in Gmod I have a block, and I am at a complete loss as to how to get it to rotate around its three forward, up and right vectors (which are local; i.e., if I pitch it 45 degrees the forward vector is 0.707, 0.707, 0). Essentially, from the three vectors I'd like to be able to get local pitch/roll/yaw. By local pitch/roll/yaw I mean that they are completely independent of one another, allowing true 3D rotation. So, for example: if I place my craft so its nose is parallel to the floor, the X, Y, Z would be 0, 0, 0. If I turn it parallel to the floor (world and local yaw) 90 degrees, it's now 0, 0, 90. If I then pitch it (world roll, local pitch) 180 degrees, it's now 180, 0, 90. I've already explored quaternions, but I don't believe I should post my code here, as I think I was re-inventing the wheel. I know I didn't explain that well, but I believe the problem is pretty generic: essentially, calculating the rotation around each of the craft's up/forward/right vectors using the up/forward/right vectors. I'd like to avoid gimbal lock too. To simplify the question, a generic implementation rather than one specific to Gmod is absolutely fine. Any help anyone could offer is greatly appreciated.
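
    A hedged sketch of the extraction, assuming a Z-up, X-forward, Y-left world, with yaw about Z, pitch about the local right axis, and roll about the local forward axis, applied in that order (Gmod's sign conventions may differ, so treat the signs as a starting point):

        #include <cmath>

        struct Vec3 { double x, y, z; };

        // Recover Euler angles (radians) from an orthonormal basis.
        void basisToEuler(const Vec3 &forward, const Vec3 &right, const Vec3 &up,
                          double &pitch, double &yaw, double &roll)
        {
            yaw   = std::atan2(forward.y, forward.x);
            pitch = std::atan2(-forward.z, std::hypot(forward.x, forward.y));
            roll  = std::atan2(-right.z, up.z);
        }

    Note that when the forward vector points straight up or down, atan2(forward.y, forward.x) becomes undefined: that is exactly the gimbal lock to avoid, and the usual way out is to keep the orientation as a quaternion (or as the three basis vectors) and only convert to angles for display.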

  • Drawing a PDF Right-Side-Up in CGContext

    - by Carter Allen
    I'm overriding the drawRect: method in a custom UIView, and I'm doing some custom drawing. All was going well, until I needed to draw a PDF resource (a vector glyph, to be precise) into the context. First I retrieve the PDF from a file:

        NSURL *pdfURL = [NSURL fileURLWithPath:[[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"CardKit.bundle/A.pdf"]];
        CGPDFDocumentRef pdfDoc = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL);
        CGPDFPageRef pdfPage = CGPDFDocumentGetPage(pdfDoc, 1);

    Then I create a box with the same dimensions as the loaded PDF:

        CGRect box = CGPDFPageGetBoxRect(pdfPage, kCGPDFArtBox);

    Then I save my graphics state, so that I don't screw anything up:

        CGContextSaveGState(context);

    And then I perform a scale + translate of the CTM, theoretically flipping the whole context:

        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextTranslateCTM(context, 0.0, rect.size.height);

    I then scale the PDF so that it fits into the view properly:

        CGContextScaleCTM(context, rect.size.width/box.size.width, rect.size.height/box.size.height);

    And finally, I draw the PDF and restore the graphics state:

        CGContextDrawPDFPage(context, pdfPage);
        CGContextRestoreGState(context);

    The issue is that nothing visible is drawn. All this code should theoretically draw the PDF glyph into the view, right? If I remove the scale + translate used to flip the context, it draws perfectly - it just draws upside-down. Any ideas?
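
    A hedged guess: CTM operations compose, so the scale is applied to everything that follows, including the translate. With scale-then-translate, a point y in [0, h] lands at -(y + h), i.e. between -2h and -h, entirely off-screen, which would explain the blank view. Swapping the two calls is the usual flip:

        // Flip the context: translate first, then mirror vertically.
        CGContextTranslateCTM(context, 0.0, rect.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);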

  • Subset a data.frame by list and apply function on each part, by rows

    - by aL3xa
    This may seem like a typical plyr problem, but I have something different in mind. Here's the function that I want to optimize (skip the for loop):

        # dummy data
        set.seed(1985)
        lst <- list(a = 1:10, b = 11:15, c = 16:20)
        m <- matrix(round(runif(200, 1, 7)), 10)
        m <- as.data.frame(m)

        dfsub <- function(dt, lst, fun) {
            # check whether dt is a data.frame
            stopifnot(is.data.frame(dt))
            # check if vectors in lst are "whole" / integer
            # (vector elements should be column indexes)
            is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
            # fail if any non-integers in list
            idx <- rapply(lst, is.wholenumber)
            stopifnot(idx)
            # check the list length
            stopifnot(ncol(dt) == length(idx))
            # subset the data
            subs <- list()
            for (i in 1:length(lst)) {
                # apply function on each part, by row
                subs[[i]] <- apply(dt[, lst[[i]]], 1, fun)
            }
            # preserve names
            names(subs) <- names(lst)
            # convert to data.frame
            subs <- as.data.frame(subs)
            # guess what =)
            return(subs)
        }

    And now a short demonstration... actually, I'm about to explain what I primarily intended to do. I wanted to subset a data.frame by vectors gathered in a list object. Since this is part of a function that accompanies data manipulation in psychological research, you can consider m as the results from a personality questionnaire (10 subjects, 20 vars). The vectors in the list hold column indexes that define questionnaire subscales (e.g. personality traits); each subscale is defined by several items (columns in the data.frame). If we presuppose that the score on each subscale is nothing more than the sum (or some other function) of the row values (the results on that part of the questionnaire for each subject), you could run:

        > dfsub(m, lst, sum)
            a  b  c
        1  46 20 24
        2  41 24 21
        3  41 13 12
        4  37 14 18
        5  57 18 25
        6  27 18 18
        7  28 17 20
        8  31 18 23
        9  38 14 15
        10 41 14 22

    I took a glance at this function, and I must admit that this little loop isn't spoiling the code at all... BUT, if there's an easier/more efficient way of doing this, please let me know!

  • Qt object linker problem: "undefined reference to vtable"

    - by Thomas
    This is my header:

        #ifndef BARELYSOCKET_H
        #define BARELYSOCKET_H

        #include <QObject>

        //! The first draft of the BarelySocket!
        class BarelySocket: public QObject
        {
            Q_OBJECT
        public:
            BarelySocket();

        public slots:
            void sendMessage(Message aMessage);

        signals:
            void reciveMessage(Message aMessage);

        private:
            // QVector<Message> reciveMessages;
        };

        #endif // BARELYSOCKET_H

    This is my class:

        #include <QtGui>
        #include <QObject>
        #include "type.h"
        #include "client.h"
        #include "server.h"
        #include "barelysocket.h"

        BarelySocket::BarelySocket()
        {
            //this->reciveMessages.clear();
            qDebug("BarelySocket::BarelySocket()");
        }

        void BarelySocket::sendMessage(Message aMessage)
        {
        }

        void BarelySocket::reciveMessage(Message aMessage)
        {
        }

    I get the linker problem:

        undefined reference to 'vtable for BarelySocket'

    This should mean I have a virtual function that is not implemented. But as you can see, there is none. I commented out the vector because that should solve the problem, but it does not. Message is a complex struct, but even converting it to an int did not solve it. I searched Mr G, but he could not help me. Thank you for your support, Thomas
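
    A hedged reading of this error: "undefined reference to vtable" for a Q_OBJECT class usually means the moc-generated file for the header was never compiled and linked, e.g. the header is missing from the .pro file's HEADERS, or qmake was not re-run after Q_OBJECT was added. Separately, signals are implemented by moc, so reciveMessage should be declared but never given a body; once the moc output links, the hand-written definition would collide with the generated one. A minimal sketch of the .cpp with those two points applied:

        // barelysocket.cpp -- sketch: only the constructor and the slot get
        // hand-written bodies; the signal's body is generated by moc.
        #include "barelysocket.h"

        BarelySocket::BarelySocket()
        {
            qDebug("BarelySocket::BarelySocket()");
        }

        void BarelySocket::sendMessage(Message aMessage)
        {
            emit reciveMessage(aMessage);   // signals are emitted, never defined
        }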

  • Fitting Gaussian KDE in numpy/scipy in Python

    - by user248237
    I am fitting a Gaussian kernel density estimator to a variable that is the difference of two vectors, called "diff", as follows: gaussian_kde_covfact(diff, smoothing_param), where gaussian_kde_covfact is defined as:

        class gaussian_kde_covfact(stats.gaussian_kde):
            def __init__(self, dataset, covfact='scotts'):
                self.covfact = covfact
                scipy.stats.gaussian_kde.__init__(self, dataset)

            def _compute_covariance_(self):
                '''not used'''
                self.inv_cov = np.linalg.inv(self.covariance)
                self._norm_factor = sqrt(np.linalg.det(2*np.pi*self.covariance)) * self.n

            def covariance_factor(self):
                if self.covfact in ['sc', 'scotts']:
                    return self.scotts_factor()
                if self.covfact in ['si', 'silverman']:
                    return self.silverman_factor()
                elif self.covfact:
                    return float(self.covfact)
                else:
                    raise ValueError, \
                        'covariance factor has to be scotts, silverman or a number'

            def reset_covfact(self, covfact):
                self.covfact = covfact
                self.covariance_factor()
                self._compute_covariance()

    This works, but there is an edge case where diff is a vector of all 0s. In that case, I get the error:

        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/stats/kde.py", line 334, in _compute_covariance
          self.inv_cov = linalg.inv(self.covariance)
        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/linalg/basic.py", line 382, in inv
          if info>0: raise LinAlgError, "singular matrix"
        numpy.linalg.linalg.LinAlgError: singular matrix

    What's a way to get around this? In this case, I'd like it to return a density that's essentially peaked completely at a difference of 0, with no mass everywhere else. Thanks.

  • How bad is code using std::basic_string<T> as a contiguous buffer?

    - by BillyONeal
    I know that technically the std::basic_string template is not required to have contiguous memory. However, I'm curious how many implementations for modern compilers actually take advantage of this freedom. For example, if one wants code like the following, it seems silly to allocate a vector just to turn around instantly and return it as a string:

        DWORD valueLength = 0;
        DWORD type;
        LONG errorCheck = RegQueryValueExW(
            hWin32, value.c_str(), NULL, &type, NULL, &valueLength);
        if (errorCheck != ERROR_SUCCESS)
            WindowsApiException::Throw(errorCheck);
        else if (valueLength == 0)
            return std::wstring();

        std::wstring buffer;
        do
        {
            buffer.resize(valueLength/sizeof(wchar_t));
            errorCheck = RegQueryValueExW(
                hWin32, value.c_str(), NULL, &type, &buffer[0], &valueLength);
        } while (errorCheck == ERROR_MORE_DATA);
        if (errorCheck != ERROR_SUCCESS)
            WindowsApiException::Throw(errorCheck);
        return buffer;

    I know code like this might slightly reduce portability because it implies that std::wstring is contiguous - but I'm wondering just how unportable that makes this code. Put another way, how many compilers actually take advantage of the freedom that allowing noncontiguous memory gives? Oh, and of course, given what the code's doing, this only matters for Windows compilers.
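
    Two data points, for what they are worth: C++11 requires contiguous storage for basic_string (so &buffer[0] is sanctioned there), and no mainstream pre-C++11 implementation is known to have used non-contiguous storage, rope classes aside. If portability to a hypothetical segmented implementation still matters, a vector<wchar_t> staging buffer keeps the same shape at the cost of one copy, as in this sketch:

        // Pre-C++11-safe sketch: std::vector has guaranteed contiguous storage,
        // so read into it and copy into the string once at the end.
        std::vector<wchar_t> raw(valueLength / sizeof(wchar_t));
        errorCheck = RegQueryValueExW(
            hWin32, value.c_str(), NULL, &type,
            reinterpret_cast<LPBYTE>(&raw[0]), &valueLength);
        // ... same ERROR_MORE_DATA loop as above ...
        return std::wstring(raw.begin(), raw.end());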

  • Trying to parse OpenCV YAML output with yaml-cpp

    - by Kenn Sebesta
    I've got a series of OpenCV-generated YAML files and would like to parse them with yaml-cpp. I'm doing okay on simple stuff, but the matrix representation is proving difficult:

        # Center of table
        tableCenter: !!opencv-matrix
           rows: 1
           cols: 2
           dt: f
           data: [ 240, 240 ]

    This should map into the vector (240, 240) with type float. My code looks like:

        #include "yaml.h"
        #include <fstream>
        #include <string>

        struct Matrix {
            int x;
        };

        void operator >> (const YAML::Node& node, Matrix& matrix) {
            unsigned rows;
            node["rows"] >> rows;
        }

        int main()
        {
            std::ifstream fin("monsters.yaml");
            YAML::Parser parser(fin);
            YAML::Node doc;

            Matrix m;
            doc["tableCenter"] >> m;

            return 0;
        }

    But I get:

        terminate called after throwing an instance of 'YAML::BadDereference'
          what():  yaml-cpp: error at line 0, column 0: bad dereference
        Abort trap

    I searched around for documentation for yaml-cpp, but there doesn't seem to be any, aside from a short introductory example on parsing and emitting. Unfortunately, neither of those helps in this particular circumstance. As I understand it, the !! indicates that this is a user-defined type, but I don't see in yaml-cpp how to parse that.
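
    A hedged guess at the BadDereference: in the old (0.2.x) yaml-cpp API used here, doc stays empty until a document is pulled from the parser, so indexing it throws. A minimal sketch of the missing step:

        std::ifstream fin("monsters.yaml");
        YAML::Parser parser(fin);
        YAML::Node doc;
        while (parser.GetNextDocument(doc)) {   // fills doc; this was missing
            Matrix m;
            doc["tableCenter"] >> m;
        }

    (Newer yaml-cpp, 0.5 and later, drops this API entirely in favor of YAML::LoadFile and node.as<T>().) The !! tag marks the node with an application-specific type, but it should not by itself prevent the node's children from being read as above.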

  • Trouble move-capturing std::unique_ptr in a lambda using std::bind

    - by user2478832
    I'd like to capture a variable of type std::vector<std::unique_ptr<MyClass>> in a lambda expression (in other words, "capture by move"). I found a solution which uses std::bind to capture a unique_ptr (http://stackoverflow.com/a/12744730/2478832) and decided to use it as a starting point. However, the most simplified version of the proposed code I could get doesn't compile (lots of template errors; it seems to try to call unique_ptr's copy constructor):

        #include <functional>
        #include <memory>

        std::function<void ()> a(std::unique_ptr<int>&& param)
        {
            return std::bind(
                [] (int* p) {},
                std::move(param));
        }

        int main()
        {
            a(std::unique_ptr<int>(new int()));
        }

    Can anybody point out what is wrong with this code?

    EDIT: I tried changing the lambda to take a reference to the unique_ptr, and it still doesn't compile:

        #include <functional>
        #include <memory>

        std::function<void ()> a(std::unique_ptr<int>&& param)
        {
            return std::bind(
                [] (std::unique_ptr<int>& p) {},  // also as a const reference
                std::move(param));
        }

        int main()
        {
            a(std::unique_ptr<int>(new int()));
        }
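
    A hedged explanation: std::function requires its target to be copyable, and a bind expression that owns a unique_ptr is move-only, so the conversion in the return statement tries (and fails, in a wall of template errors) to copy the unique_ptr. Taking the argument by reference in the lambda does not help, because the bound unique_ptr itself still has to be copied along with the callable. One workaround sketch, transferring ownership to a copyable shared_ptr:

        #include <functional>
        #include <memory>

        // The shared_ptr makes the closure copyable while still deleting
        // the int exactly once, whichever copy dies last.
        std::function<void ()> a(std::unique_ptr<int>&& param)
        {
            std::shared_ptr<int> p(std::move(param));
            return [p]() { /* use *p */ };
        }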

  • Issues in convergence of Sequential Minimal Optimization for SVM

    - by Amol Joshi
    I have been working on Support Vector Machines for about two months now. I have coded the SVM myself, and for the optimization problem I have used Sequential Minimal Optimization (SMO) as described by John Platt. Right now I am in the phase where I am going to grid-search to find the optimal C value for my dataset. (Please find details of my project application and dataset here: http://stackoverflow.com/questions/2284059/svm-classification-minimum-number-of-input-sets-for-each-class) I have successfully checked my custom-implemented SVM's accuracy for C values ranging from 2^0 to 2^6. But now I am having some issues regarding the convergence of the SMO for C = 128. I have tried to find the alpha values for C = 128, and it takes a long time before it actually converges and successfully gives alpha values. The time taken for the SMO to converge is about 5 hours for C = 100. That is huge, I think, because SMO is supposed to be fast, though I am getting good accuracy. I am stuck right now because I cannot test the accuracy for higher values of C. I am displaying the number of alphas changed in every pass of SMO and getting 10, 13, 8... alphas changing continuously. The KKT conditions assure convergence, so what is the strange thing happening here? Please note that my implementation is working fine for C <= 100 with good accuracy, though the execution time is long. Please give me inputs on this issue. Thank you, and cheers.

  • Stack and Hash joint

    - by Alexandru
    I'm trying to write a data structure which is a combination of Stack and HashSet with fast push/pop/membership (I'm looking for constant-time operations). Think of Python's OrderedDict. I tried a few things and I came up with the following code: HashInt and SetInt. I need to add some documentation to the source, but basically I use a hash with linear probing to store indices into a vector of the keys. Since linear probing always puts the last element at the end of a continuous range of already-filled cells, pop() can be implemented very easily without a sophisticated remove operation.

    I have the following problems:

      • the data structure consumes a lot of memory (some improvement is obvious: stackKeys is larger than needed);
      • some operations are slower than if I had used fastutil (e.g. pop(), and even push() in some scenarios); I tried rewriting the classes using fastutil and trove4j, but the overall speed of my application halved.

    What performance improvements would you suggest for my code? What open-source library/code do you know that I can try?
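
    For reference, a minimal sketch of the same idea in C++ (the question targets Java, but the shape is language-neutral): a growable array as the stack plus a hash map from key to stack slot gives amortized O(1) push, pop and membership, provided duplicates are rejected on push:

        #include <unordered_map>
        #include <vector>

        class HashStack {
            std::vector<int> stack_;
            std::unordered_map<int, std::size_t> index_;   // key -> slot in stack_
        public:
            bool push(int key) {
                if (index_.count(key)) return false;        // already present
                index_[key] = stack_.size();
                stack_.push_back(key);
                return true;
            }
            bool contains(int key) const { return index_.count(key) != 0; }
            bool pop(int* out) {
                if (stack_.empty()) return false;
                *out = stack_.back();
                stack_.pop_back();
                index_.erase(*out);
                return true;
            }
        };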

  • Rotate a feature image in OpenLayers - reapply a StyleMap to a layer

    - by Ozaki
    I have an OpenLayers map. It adds and removes my imageFeature every 10 secs or so. I have a JSON feed that gives me the rotation that I need, and a request that gets that rotation and sets it as the variable rotationv. Everything works except that when the image is recreated it does not update the rotation. My code is as follows. The JSON request:

        var rotationv = (function () {
            rotationv = null;
            $.ajax({
                'async': false,
                'global': true,
                'url': urldefault,
                'dataType': "json",
                'success': function (data) {
                    rotationv = data.rotation;
                }
            });
            return rotationv;
        })();

    Creating the image feature:

        imageFeature = new OpenLayers.Feature.Vector(
            new OpenLayers.Geometry.Point(mylon, mylat),
            { rotation: rotationv }
        );

    The image is set in the styling elsewhere. Why does this not rotate my image? Debugging with Firebug, I now notice that rotationv is updating as it should. But when the point is destroyed and then added again, it does not apply the new rotation...

    Addendum: I realised that the image is applied per the layer. So I need to figure out how to reapply the StyleMap to the layer so it will trigger the update on the page.

        // redefine layer2 so it can be updated - test.js is just a 0,0 ref
        var layer2 = new OpenLayers.Layer.GML("Layer 2", "test.js", {
            format: OpenLayers.Format.GeoJSON,
            styleMap: styleMap
        });

    But even if I add the above into my function, still no luck, even after including map.addLayers([layer2]); in the function as well.

    Edit - attaching the StyleMap:

        var styleMap = new OpenLayers.StyleMap({
            "default": new OpenLayers.Style({
                rotation: rotationv,
                labelXOffset: "0",
                labelYOffset: "-15",
                graphicZIndex: 1
            })
        });

  • Is a VCS appropriate for usage by a designer?

    - by iconiK
    I know that a VCS is absolutely critical for a developer to increase productivity and protect the code; no doubts about it. But what about a designer, using, say, Photoshop (though this isn't specific to any tool, just to make my point clearer)? VCSs use delta compression to store different versions of files. This works very well for code, but for images that's a problem. Raster image files are binary formats, though vector image files are text (SVG comes to my mind) and pose no problem. The trouble comes with .psd files (and any other image "source" file) - those can get pretty big, and since I'm not familiar with the format, I'll consider them binary files. How would a VCS work in this situation? The repository could get pretty darned big if the VCS server isn't able to diff the files efficiently (or worse, not at all), and over time this can become a really big pain when someone needs to check out the repository (or clone it if using a DVCS). Have any of you used a VCS for this purpose? How well does it work? I'm mostly interested in Mercurial, though this is a general situation that applies to any VCS.

  • C++: Check istream has non-space, non-tab, non-newline characters left without extracting chars

    - by KRao
    I am reading a std::istream, and I need to verify without extracting characters that:

    1) The stream is not "empty", i.e. that trying to read a char will not result in a fail state (solved by using the peek() member function and checking the fail state, then setting the stream back to its original state);

    2) Among the characters left there is at least one which is not a space, a tab or a newline char.

    The reason for this is that I am reading text files containing, say, one int per line, and sometimes there may be extra spaces / newlines at the end of the file, and this causes issues when I try to get the data from the file back into a vector of int. A peek(int n) would probably do what I need, but I am stuck with its implementation. I know I could just read the istream like:

        while (myIstream >> myInt) { ... }  // will fail when I am at the end

    but the same check would fail for a number of different conditions (say I have something which is not an int on some line), and being able to differentiate between the two reading errors (unexpected thing, nothing left) would help me to write more robust code, as I could write:

        while (something_left(myIstream)) {
            myIstream >> myInt;
            if (myIstream.fail()) { ... }  // horrible things happened
        }

    Thank you!
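
    A hedged sketch of something_left: std::ws skips exactly the characters in question (spaces, tabs, newlines, per the stream's locale). Strictly speaking it extracts the whitespace it skips, which falls slightly short of the "without extracting" requirement, but that is normally harmless, since the whitespace carries no data:

        #include <istream>

        bool something_left(std::istream& is)
        {
            is >> std::ws;   // consume blanks/tabs/newlines; sets eofbit at end
            return is.good() && is.peek() != std::char_traits<char>::eof();
        }

    With that, the loop above distinguishes trailing blank lines (something_left returns false, loop ends cleanly) from a genuinely malformed token (the extraction itself sets failbit).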

  • AI: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see the about page on the site for details) where:

      • I need to output a ranked list, with confidences, of categories that match a particular feature vector;
      • the binary feature vectors are a list of site IDs & whether this session detected a hit;
      • feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit);
      • categories are a large, non-closed set (user IDs);
      • my total feature space is approximately 50 million items (URLs);
      • for any given test, I can only query approx. 0.2% of that space;
      • I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100 ms (though I can take much longer to do post-processing, relevant aggregation, etc.);
      • getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries;
      • I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID).

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
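
    One classic starting point for the exploration/exploitation balance, offered as a hedged sketch rather than a fit to this exact system, is the UCB1 rule from the multi-armed-bandit literature: score each candidate feature by its observed payoff (here, an assumed measure such as information gained per query) plus an uncertainty bonus that shrinks as the feature gets tested more:

        #include <cmath>
        #include <cstddef>
        #include <vector>

        // UCB1: pick the feature maximizing mean payoff + sqrt(2 ln N / n_i).
        std::size_t pickFeature(const std::vector<double>& meanPayoff,
                                const std::vector<int>& timesTested,
                                int totalTests)
        {
            std::size_t best = 0;
            double bestScore = -1.0;
            for (std::size_t i = 0; i < meanPayoff.size(); ++i) {
                if (timesTested[i] == 0) return i;   // never tested: explore first
                double bonus = std::sqrt(2.0 * std::log((double)totalTests)
                                         / timesTested[i]);
                double score = meanPayoff[i] + bonus;
                if (score > bestScore) { bestScore = score; best = i; }
            }
            return best;
        }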

  • How do I best display CheckBoxes in SQL Server Reporting Services?

    - by Yadyn
    One of the many quirks of Reporting Services we've run across is the complete and utter lack of a CheckBox control, or even something remotely similar. We have a form that should appear automatically filled out based on information pulled from a database. We have several bit-datatype fields. Printing out "True" or "False" just looks silly; this is supposed to look like a form that has been auto-filled out, so we want a series of checkboxes and labels that are either checked or unchecked. We are running SSRS 2005, but I'm not aware of SSRS 2008 having added a CheckBox control. Even if it did, we'd need an alternative for the time being. The best we've found so far is:

      • use Wingdings;
      • use images;
      • use text boxes with borders and print a blank/space or a capital X.

    All three approaches require IIF-expression shenanigans. The Wingdings approach seemed to work acceptably and was the most aesthetically pleasing, except that for whatever reason it didn't always print correctly. More importantly, PDF exports, also for whatever reason, converted all fonts (generally) to Arial, and so we got funky letters instead of the Wingdings dingbats. Images, being pixel-based rasters, don't do so well when printed alongside vector stuff like text. Unless handled carefully, they tend to stretch, pixelate, and do other unprofessional-looking things. While these methods do work (some with limitations, as mentioned above), none of them are particularly elegant. Are we missing something obvious? Not so obvious? Does someone at Microsoft have a good reason why such a control was not provided in SSRS 2000, let alone two versions and eight years later? This can't be the first time this issue has come up...

  • WTK emulator bluetooth connection problem

    - by Gokhan B.
    Hi! I'm developing a J2ME program with Eclipse / WTK 2.5.2 and am having a problem connecting two emulators using Bluetooth. There is one server and one client running on two different emulators. The problem is that the client program cannot discover any Bluetooth device. Here are the server and client codes:

        public Server() {
            try {
                LocalDevice local = LocalDevice.getLocalDevice();
                local.setDiscoverable(DiscoveryAgent.GIAC);
                server = (StreamConnectionNotifier) Connector.open("btspp://localhost:" + UUID_STRING + ";name=" + SERVICE_NAME);
                Util.Log("EchoServer() Server connector open!");
            } catch (Exception e) {}
        }

    After calling Connector.open, I get the following warning in the console, which I believe is related:

        Warning: Unregistered device: unspecified

    And the client code that searches for devices:

        public SearchForDevices(String uuid, String nm) {
            UUIDStr = uuid;
            srchServiceName = nm;
            try {
                LocalDevice local = LocalDevice.getLocalDevice();
                agent = local.getDiscoveryAgent();
                deviceList = new Vector();
                agent.startInquiry(DiscoveryAgent.GIAC, this); // non-blocking
            } catch (Exception e) {}
        }

    The system never calls deviceDiscovered, but calls inquiryCompleted() with the INQUIRY_COMPLETED parameter, so I suppose the client program runs fine. Bluetooth is enabled in the emulator settings. Any ideas?

  • Certain transformations in Open Inventor (Coin3D)

    - by Marc
    Hi, I am quite new to Open Inventor (Coin3D) and have the following problem: I have an SoSelection holding a root node (an SoSeparator), and the root node holds a number of SoSeparator nodes. Each of these SoSeparator nodes holds an SoTransform node and an SoCube node. When I select one cube, I want all other cubes within a certain distance of the selected cube to arrange in a circle around it. (Moreover, all of the cubes should then be on a plane.) An additional piece of information: my cubes are always oriented toward the camera with

        cubeTransform_->rotation.connectFrom(&camera_->orientation);

    Assuming the selected cube is the center of the circle, how do I translate the other cubes onto a circle on a plane (perpendicular to the vector between the selected cube and the camera)? In particular, how do I find coordinates on the plane on which the circle should lie which have a certain distance from the axis (from the center cube to the camera)? What I have already done is to search for all cubes within a certain distance as soon as one cube is selected. As a result, I already have the required separators (which hold the corresponding SoTransforms and SoCubes) in an SoPathList. Now I want to arrange the cubes by modifying the corresponding SoTransform translation values. Regards, Mark
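
    A hedged sketch of the geometry (plain structs here; the resulting positions would be written into each cube's SoTransform translation field): build an orthonormal basis for the plane perpendicular to the center-to-camera axis, then step around the circle in that basis:

        #include <cmath>

        struct Vec3 {
            double x, y, z;
            Vec3 operator+(const Vec3& o) const { return Vec3{x + o.x, y + o.y, z + o.z}; }
            Vec3 operator*(double s)      const { return Vec3{x * s, y * s, z * s}; }
        };

        static Vec3 cross(const Vec3& a, const Vec3& b) {
            return Vec3{a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
        }

        static Vec3 normalize(const Vec3& v) {
            double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            return Vec3{v.x / n, v.y / n, v.z / n};
        }

        // Place `count` cubes on a circle of radius r around `center`, in the
        // plane whose normal is the center-to-camera direction.
        void circlePositions(const Vec3& center, const Vec3& toCamera,
                             double r, int count, Vec3* out)
        {
            const double kPi = 3.14159265358979323846;
            Vec3 n = normalize(toCamera);
            // Any vector not parallel to n works as a helper for the basis.
            Vec3 helper = (std::fabs(n.z) < 0.9) ? Vec3{0.0, 0.0, 1.0} : Vec3{1.0, 0.0, 0.0};
            Vec3 u = normalize(cross(n, helper));   // first in-plane axis
            Vec3 v = cross(n, u);                   // second in-plane axis
            for (int i = 0; i < count; ++i) {
                double t = 2.0 * kPi * i / count;
                out[i] = center + u * (r * std::cos(t)) + v * (r * std::sin(t));
            }
        }

    Every position returned lies at distance r from the center-to-camera axis by construction, which answers the "certain distance from the axis" part; the count and radius are free parameters.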

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring-buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator >
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate();   // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for( int i = 0 ; i < this->capacity()-1; i++ ) {
                BufferNode* p = this->allocator->allocate();   // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n-th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method, would it work? If not, what's a better way to keep the nodes - creating a linked list?

    My test harness is the following:

        // Define an STL-compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment.
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a buffer that uses the previous STL-like allocator so that it allocates
        // its values from the segment.
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[])
        {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with given name and size
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize shared memory STL-compatible allocator
            const ShmemAllocator alloc_inst(segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?
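
    On the first question, a hedged reading of the Allocator requirements: only a single allocate(n) call promises n contiguous elements; separate allocate() calls may land anywhere in the segment, so the m_start_ptr + n arithmetic is not safe as written. A sketch of the one-call variant (member names assumed, and m_allocator assumed already rebound to the node type):

        // Grab the whole ring in one call, so m_start_ptr + i is well-defined
        // for i < m_capacity.  Note: with boost.interprocess the pointer member
        // should be declared as Allocator::pointer (an offset_ptr that survives
        // the segment being mapped at different addresses), not a raw pointer.
        template<typename ItemType, class Allocator>
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer(
                unsigned long capacity, const Allocator& alloc)
            : m_allocator(alloc), m_capacity(capacity)
        {
            m_start_ptr = m_allocator.allocate(m_capacity);   // one contiguous block
        }

    Separately, the compile errors in the harness may simply be an arity mismatch: the skeleton constructor takes only capacity, while segment.construct passes (100, alloc_inst), so a two-argument constructor as sketched above is needed in any case.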

  • iPhone encryption not working

    - by Futur
    I have this encryption/decryption implemented, and it executes with no error, but I am unable to decrypt the data, and I'm not sure whether the data is even encrypted. Kindly help.

        - (NSData *)AES256EncryptWithKey:(NSString *)key {
            char keyPtr[kCCKeySizeAES256+1];
            bzero(keyPtr, sizeof(keyPtr));
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];
            NSUInteger dataLength = [strData length];
            data = [[NSData alloc] init];
            data = [strData dataUsingEncoding:NSUTF8StringEncoding];
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *pribuffer = malloc(bufferSize);
            size_t numBytesEncrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  NULL /* initialization vector (optional) */,
                                                  [data bytes], dataLength,  /* input */
                                                  [data bytes], bufferSize,  /* output */
                                                  &numBytesEncrypted);
            if (cryptStatus == kCCSuccess) {
                return [NSData dataWithBytesNoCopy:data length:numBytesEncrypted];
            }
        }

    The decryption is:

        -(NSData *)AES256DecryptWithKey:(NSString *)key andForData:(NSData *)objDataObject {
            char keyPtr[kCCKeySizeAES256+1];
            bzero(keyPtr, sizeof(keyPtr));
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];
            NSUInteger dataLength = [objDataObject length];
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);
            size_t numBytesDecrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(kCCDecrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                                  keyPtr, kCCKeySizeAES256,
                                                  NULL,
                                                  [objDataObject bytes], dataLength,
                                                  [objDataObject bytes], bufferSize,
                                                  &numBytesDecrypted);
            if (cryptStatus == kCCSuccess) {
                return [NSData dataWithBytesNoCopy:objDataObject length:numBytesDecrypted];
            }
        }

    I call the above methods like this:

        CryptoClass *obj = [CryptoClass new];
        NSData *objData = [obj AES256EncryptWithKey:@"hell"];
        NSData *val = [obj AES256DecryptWithKey:@"hell" andForData:objData];
        NSLog(@"decrypted string is : %@ AND LENGTH IS %d", [val description], [val length]);

    Decryption doesn't seem to happen at all, and I am not sure about the encryption; kindly help me here.
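
    A hedged diagnosis: in both methods the output parameter of CCCrypt is [data bytes] / [objDataObject bytes], i.e. the input buffer, while the malloc'ed buffer is never used; dataWithBytesNoCopy: is then handed the NSData object itself rather than raw bytes. A sketch of the encrypt tail with those two fixes (mirror the same change in the decrypt method):

        CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                                              keyPtr, kCCKeySizeAES256,
                                              NULL,                       /* IV */
                                              [data bytes], dataLength,   /* input  */
                                              pribuffer, bufferSize,      /* output */
                                              &numBytesEncrypted);
        if (cryptStatus == kCCSuccess) {
            /* NoCopy: the returned NSData takes ownership and frees pribuffer */
            return [NSData dataWithBytesNoCopy:pribuffer length:numBytesEncrypted];
        }
        free(pribuffer);   /* avoid leaking the buffer on failure */
        return nil;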

  • How can I change the precision of printing with the STL?

    - by mmr
    This might be a repeat, but my google-fu failed to find it. I want to print numbers to a file using the STL with a given number of decimal places, rather than overall precision. So, if I do this:

        int precision = 16;
        std::vector<double> thePoint(3);
        thePoint[0] = 86.3671436;
        thePoint[1] = -334.8866574;
        thePoint[2] = 24.2814;
        ofstream file1(tempFileName, ios::trunc);
        file1 << std::setprecision(precision) << thePoint[0] << "\\" << thePoint[1] << "\\" << thePoint[2] << "\\";

    I'll get numbers like this:

        86.36714359999999\-334.8866574\24.28140258789063

    What I want is this:

        86.37\-334.89\24.28

    In other words, truncating at two decimal places. If I set the precision to 4, then I'll get

        86.37\-334.9\24.28

    i.e., the second number is improperly truncated. I do not want to have to manipulate each number explicitly to get the truncation, especially because I seem to occasionally get a repeating 9 or a 0000000001 or something like that left behind. I'm sure there's something obvious, like using printf("%.2f") or something like that, but I'm unsure how to mix that with the STL << and ofstream.
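
    The stream equivalent of printf("%.2f") is std::fixed: it switches the stream to fixed-point notation, where setprecision counts digits after the decimal point instead of total significant digits. A minimal sketch reusing thePoint and tempFileName from above:

        #include <fstream>
        #include <iomanip>

        // std::fixed + setprecision(2) -> exactly two digits after the point.
        std::ofstream file1(tempFileName, std::ios::trunc);
        file1 << std::fixed << std::setprecision(2)
              << thePoint[0] << "\\" << thePoint[1] << "\\" << thePoint[2] << "\\";
        // writes: 86.37\-334.89\24.28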

  • How to implement RSA-CBC? (I have uploaded the request document)

    - by tq0fqeu
    I don't know much about ciphers. I just want to implement RSA-CBC, which may mean the result of RSA encryption in CBC mode, and I have already implemented RSA. Any code language will be OK; Java will be appreciated, thx. I copy the request below (translated from French):

    Mini-project overview

    The goal of the mini-project is to implement an elementary version of block encryption with RSA and to include this primitive in a block-cipher system with block chaining and a random IV (initial vector). In this system, a plaintext (to be encrypted) is split into blocks of size t (set by the user), each (plain) block is encrypted with RSA into an encrypted block of the same size, and the ciphertext associated with the original plaintext is then obtained by chaining the encrypted blocks with the CBC (cipher-block chaining) method described in the course (see the "Block Ciphers" handout).

    Your program must ask the user for the size t and then, after generating the public and private keys, offer to encrypt or decrypt a (short) ASCII file. It is essential that your program can at least handle the case (very unrealistic from a security standpoint) t = 32. To handle larger blocks, you will need to implement multi-precision arithmetic routines; for that, I advise you to call on free libraries such as GMP (the GNU Multiple Precision library). For the random generation of the prime numbers p and q, you may also use specialized libraries, provided you give me all the necessary details.

    You must send me (before a date still to be fixed) at my email address ([email protected]) a message (subject: [MI1-crypto]: assignment; body: the names of the students who worked on the mini-project) with an attached compressed folder containing your commented C or Java sources, your executable program, and a text or PDF file giving all the details of the libraries used, your algorithmic and implementation choices, and the reasons for these choices (algorithmic complexity, robustness, ease of implementation, etc.). You may work in pairs or in groups of three, but I will be noticeably more demanding with groups of three.

    I have uploaded the request at http://uploading.com/files/22emmm6b/enonce_projet.pdf/. Thx.
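
    Since the assignment describes plain CBC chaining around an RSA block primitive, here is a hedged toy sketch of the chaining layer for t = 32 (textbook RSA, no padding, NOT secure; it assumes a key (e, n) with n under 32 bits so the 64-bit products cannot overflow, and it glosses over the requirement that each XORed block stay below n - a real solution must handle that, and would use GMP for larger t):

        #include <cstdint>
        #include <vector>

        // Square-and-multiply modular exponentiation; safe for n < 2^32.
        static std::uint64_t powmod(std::uint64_t b, std::uint64_t e, std::uint64_t n)
        {
            std::uint64_t r = 1 % n;
            b %= n;
            while (e) {
                if (e & 1) r = r * b % n;
                b = b * b % n;
                e >>= 1;
            }
            return r;
        }

        // CBC over RSA blocks: c_i = RSA(m_i XOR c_{i-1}), with c_0 = IV.
        std::vector<std::uint64_t> encryptCBC(const std::vector<std::uint32_t>& blocks,
                                              std::uint64_t e, std::uint64_t n,
                                              std::uint32_t iv)
        {
            std::vector<std::uint64_t> out;
            std::uint64_t prev = iv;               // random IV, sent in the clear
            for (std::uint32_t m : blocks) {
                std::uint64_t c = powmod(m ^ static_cast<std::uint32_t>(prev), e, n);
                out.push_back(c);
                prev = c;                          // chain on the ciphertext
            }
            return out;
        }

    Decryption mirrors it: m_i = powmod(c_i, d, n) XOR c_{i-1}, consuming the same IV first.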

  • How does real-time collaboration with multiple clients work in a system using operational transformation?

    - by Saikat Chakrabarti
    I just finished reading "High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System", and I mostly followed everything until part 6: global consistency. This part describes how the system described in the paper can be extended to accommodate multiple clients connected to the server. However, the explanation is very short, and essentially says the system will work if the central server merely forwards client messages to all the other clients. I don't really understand how this works, though. What state vector would be sent in the message that is forwarded to all the other clients? Does the server maintain separate state vectors for each client? Does it maintain a separate copy of the widgets locally for each client? The simple example I can think of is this setup: imagine client A, a server, and client B, with client A and client B both connected to the server. To start, all three have the state object "ABCD". Then client A sends the message "insert character F at position 0" at the same time client B sends the message "insert character G at position 0" to the server. It seems like simply relaying client A's message to client B and vice versa doesn't actually handle this case. So what exactly does the server do?
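
    A hedged reading of part 6: the server does not keep one global state vector; it runs the two-party protocol of the earlier sections once per client, so for each connection it keeps its own message counts and its own shadow of the document, and a message from A is transformed against the A-server state before being re-sent (and re-transformed) down the B-server link. The core transform for the ABCD example, sketched for concurrent inserts:

        #include <cstddef>

        struct Insert { std::size_t pos; char ch; };

        // Returns `op` adjusted so it can be applied after `other` already has
        // been; `otherWins` breaks equal-position ties deterministically
        // (e.g. by which side the op came from, as Jupiter does).
        Insert transform(Insert op, const Insert& other, bool otherWins)
        {
            if (other.pos < op.pos || (other.pos == op.pos && otherWins))
                ++op.pos;   // `other` shifted the text at/right of op.pos
            return op;
        }

    In the example, giving A priority: B's insert('G', 0) is transformed against A's insert('F', 0) into insert('G', 1) before A applies it, while A's op keeps position 0 when B applies it, so both sides converge to "FGABCD".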
