Search Results

Search found 5861 results on 235 pages for 'rich pixel vector'.


  • Resizing a binarized image in C#

    - by mouthpiec
    Hi, I use the following code to resize a binarized image (so every pixel value is 0 [black] or 255 [white]): Bitmap ResizedCharImage = new Bitmap(newwidth, newheight); using (Graphics g = Graphics.FromImage((Image)ResizedCharImage)) { g.CompositingQuality = CompositingQuality.HighQuality; g.InterpolationMode = InterpolationMode.HighQualityBilinear; g.SmoothingMode = SmoothingMode.HighQuality; g.PixelOffsetMode = PixelOffsetMode.HighQuality; g.DrawImage(CharBitmap, new Rectangle(0, 0, newwidth, newheight), new Rectangle(0, 0, CharBitmap.Width, CharBitmap.Height), GraphicsUnit.Pixel); } The problem is that when resizing (I am enlarging the image), some of the pixel values become 254, 253, 1, 2, etc. I need this not to happen. Is that possible, perhaps by changing one of the Graphics properties?
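
    The question is C#/GDI+, but the underlying issue is language-agnostic: any smoothing interpolation invents intermediate grey values along edges. Two common fixes are to switch to nearest-neighbour interpolation (InterpolationMode.NearestNeighbor in GDI+), which only ever copies existing pixel values, or to re-binarize the enlarged image with a fixed threshold afterwards. A minimal sketch of the second idea, shown here in plain C++ over a raw 8-bit buffer (an assumption, not the asker's setup):

        #include <cstdint>
        #include <cstddef>

        // Clamp every pixel back to pure black or white after a smooth resize.
        void rebinarize(std::uint8_t* pixels, std::size_t count, std::uint8_t threshold = 128)
        {
            for (std::size_t i = 0; i < count; ++i)
                pixels[i] = (pixels[i] >= threshold) ? 255 : 0;
        }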

    Read the article

  • How to compare memory bits in C++?

    - by Trunet
    Hi, I need help with a memory bit comparison function. I bought an LED matrix here with 4 x HT1632C chips and I'm using it on my Arduino Mega 2560. There is no code available for this chipset (it's not the same as the HT1632), so I'm writing my own. I have a plot function that takes x, y coordinates and a color and turns that pixel on. On its own this works perfectly. But I need more performance from my display, so I added a shadowRam variable that is a "copy" of the device memory. Before I plot anything on the display, the code checks shadowRam to see whether it's really necessary to change that pixel. When I enable this (getShadowRam) in the plot function, my display shows a few (like 3 or 4 on the entire display) ghost pixels (pixels that are not supposed to be turned on). If I just comment out the prev_color ifs in my plot function, it works perfectly. Also, I clear my shadowRam array by setting the whole matrix to zero. variables: #define BLACK 0 #define GREEN 1 #define RED 2 #define ORANGE 3 #define CHIP_MAX 8 byte shadowRam[63][CHIP_MAX-1] = {0}; getShadowRam function: byte HT1632C::getShadowRam(byte x, byte y) { byte addr, bitval, nChip; if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } bitval = 8>>(y&3); x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) { return ORANGE; } else if (shadowRam[addr][nChip-1] & bitval) { return GREEN; } else if (shadowRam[addr+32][nChip-1] & bitval) { return RED; } else { return BLACK; } } plot function: void HT1632C::plot (int x, int y, int color) { if (x<0 || x>X_MAX || y<0 || y>Y_MAX) return; if (color != BLACK && color != GREEN && color != RED && color != ORANGE) return; char addr, bitval; byte nChip; byte prev_color = HT1632C::getShadowRam(x,y); bitval = 8>>(y&3); if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); switch(color) { case BLACK: if (prev_color != BLACK) { // compare with memory to only set if pixel is other color // clear the bit in both planes; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case GREEN: if (prev_color != GREEN) { // compare with memory to only set if pixel is other color // set the bit in the green plane and clear the bit in the red plane; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case RED: if (prev_color != RED) { // compare with memory to only set if pixel is other color // clear the bit in green plane and set the bit in the red plane; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case ORANGE: if (prev_color != ORANGE) { // compare with memory to only set if pixel is other color // set the bit in both the green and red planes; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; } } If it helps: the datasheet of the board I'm using (page 7 has the memory mapping I'm using). Also, I have a video of the display working.
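
    One plausible culprit, judging only from the code shown: the shadow buffer is declared as byte shadowRam[63][CHIP_MAX-1], which gives valid indices 0..62 and 0..6, but the addressing maths can produce addr+32 == 63 (whenever x % 16 == 15 and y % 8 >= 4) and nChip-1 == 7, both one past the end of the array. Reads and writes then land in neighbouring memory, which would explain a handful of sporadic ghost pixels. A hedged sketch of the fix to try first:

        // Sketch only ('byte' is the Arduino typedef for uint8_t): widen the shadow
        // buffer so addr+32 (max 63) and nChip-1 (max 7) are valid indices, and guard
        // against anything that still falls outside.
        #define CHIP_MAX 8
        byte shadowRam[64][CHIP_MAX] = {{0}};

        static bool shadowIndexOk(byte addr, byte nChip)
        {
            // addr is used both directly and as addr+32 (green and red planes)
            return addr < 32 && nChip >= 1 && nChip <= CHIP_MAX;
        }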

    Read the article

  • UITableViewCell and strange behaviour in grouped UITableView

    - by evangelion2100
    I'm working on a grouped UITableView with 4 sections and one row per section, and I'm seeing strange behaviour with the cells. The cells are plain UITableViewCells, but their height is around 60 - 80 pixels. The table view renders the cells correctly with rounded corners, but when I select a cell it appears blue and rectangular. I don't know why the cells behave like this, because I have another grouped UITableView with custom cells 88 pixels high and those cells work like they should. If I change the height to the default 44 pixels, the cells also behave like they should. Does anyone know about this behaviour and what causes it? Like I mentioned, I don't do any fancy stuff: I use default UITableViewCells in a static, grouped UITableView with 4 sections and 1 row in each section. evangelion2100

    Read the article

  • The easiest way to draw an image?

    - by Benno
    Assume you want to read an image file in a common format from the hard drive, change the color of one pixel, and display the resulting image on the screen, in C++. Which (open-source) libraries would you recommend to accomplish this with the least amount of code? Alternatively, which libraries would do it in the most elegant way possible? A bit of background: I have been reading a lot of computer graphics literature recently, and there are lots of relatively easy, pixel-based algorithms which I'd like to implement. However, while the algorithm itself is usually straightforward to implement, the amount of framework needed to manipulate an image on a per-pixel basis and display the result has stopped me from doing it.
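
    One widely used option (a suggestion, not the only reasonable answer) is OpenCV, which covers all three steps in a handful of lines; SDL, SFML, or stb_image plus a small window library would also work. A minimal sketch, assuming a 3-channel image file named input.png:

        #include <opencv2/opencv.hpp>

        int main()
        {
            cv::Mat img = cv::imread("input.png");             // read from disk
            if (img.empty()) return 1;                         // bail out if the load failed
            img.at<cv::Vec3b>(10, 20) = cv::Vec3b(0, 0, 255);  // pixel at row 10, col 20 -> red (BGR order)
            cv::imshow("result", img);                         // display in a window
            cv::waitKey(0);                                    // wait for a key press
            return 0;
        }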

    Read the article

  • How can I modify an Android bitmap in C++ (JNI/NDK) so that I can use on the Java side?

    - by HardCoder
    I call a C++ function over JNI and pass an RGBA_8888 bitmap, lock it, change the values, unlock it, return, and then display it in Java, using this C++ code: AndroidBitmap_getInfo(env, map, &info) < 0); AndroidBitmap_lockPixels(env, map, (void**)&pixel); for(i=info.width*info.height-1;i>=0;i--) { pixel[i] = 0xf1f1f1f1; } AndroidBitmap_unlockPixels(env, map); The problem is that the bitmap does not look as I expect, and the pixel values (verified with getPixel) that I read back in Java are not the same as what I set in C++. When I set the bitmap values to 0xffffffff I get the correct value in Java, but for many others I don't; 0xf1f1f1f1, for example, turns into 0xF1FFFFFF. What do I have to do to make it work? PS: I am using Android 2.3.4
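
    The symptom (0xf1f1f1f1 reading back as 0xF1FFFFFF) is consistent with premultiplied alpha: Android stores RGBA_8888 bitmaps with the colour channels already multiplied by alpha, and getPixel un-multiplies them on the way out, so 0xf1 stored with alpha 0xf1 rounds back up to 0xff. Writing 0xff as the alpha byte, or premultiplying the values yourself, should make the round trip consistent. A hedged sketch of a hypothetical helper (not part of the NDK API):

        #include <cstdint>

        // Pack r,g,b,a into the premultiplied RGBA_8888 layout (bytes R,G,B,A in
        // memory, i.e. 0xAABBGGRR as a little-endian 32-bit word).
        static uint32_t packRGBA8888Premultiplied(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
        {
            uint32_t pr = (uint32_t(r) * a + 127) / 255;   // premultiply each channel
            uint32_t pg = (uint32_t(g) * a + 127) / 255;
            uint32_t pb = (uint32_t(b) * a + 127) / 255;
            return (uint32_t(a) << 24) | (pb << 16) | (pg << 8) | pr;
        }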

    Read the article

  • Image processing custom filter 7 by 7

    - by ladiesMan217
    Let's say I have a 7 by 7 neighborhood around a pixel that looks like this: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 I want to filter it by replacing the pixel p with the average of those neighboring pixels whose values lie within the range p-10 <= value <= p+10. I am new to image processing; in this case the center value p is 25, and many of the surrounding pixel values fall within that range, but I don't know exactly how to construct a convolution filter out of it.
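
    As the question suspects, this is a conditional, data-dependent filter, so it cannot be written as a fixed 7x7 convolution kernel: the set of contributing neighbours changes at every pixel. A direct sketch of the operation (assuming a single-channel image stored row by row; boundary handling is left out):

        #include <vector>
        #include <cstdlib>

        // For each pixel p, average only the 7x7 neighbours within +/-range of p.
        void rangeAverage7x7(const std::vector<int>& src, std::vector<int>& dst,
                             int width, int height, int range = 10)
        {
            dst = src;
            for (int y = 3; y < height - 3; ++y) {
                for (int x = 3; x < width - 3; ++x) {
                    int p = src[y * width + x], sum = 0, count = 0;
                    for (int dy = -3; dy <= 3; ++dy)
                        for (int dx = -3; dx <= 3; ++dx) {
                            int q = src[(y + dy) * width + (x + dx)];
                            if (std::abs(q - p) <= range) { sum += q; ++count; }
                        }
                    dst[y * width + x] = sum / count;  // count >= 1: the centre always qualifies
                }
            }
        }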

    Read the article

  • How would I get an img element to render under a background-image in CSS?

    - by Nat Ryall
    Basically, I am trying to put a semi-transparent div over an image to serve as a background for text in a slideshow. Unfortunately, the browser seems intent on always rendering the img over the background-image. Is there any way to fix this? Here is my current CSS for the semi-transparent div: #slideshow .info { height: 80px; margin-top: -80px; background: url(../../images/slideshow-info-pixel.png) repeat; } ... where slideshow-info-pixel.png is a single-pixel, 50% opacity, PNG-24 image. So far I have tried z-index; the CSS must remain compatible with IE6.

    Read the article

  • Extracting pair member in lambda expressions and template typedef idiom

    - by Nazgob
    Hi, I have some complex types here, so I decided to use a nifty trick to have typedefs for templated types. I have a class some_container that has a container as a member. The container is a vector of pairs, each composed of an element and a vector. I want to use the std::find_if algorithm with a lambda expression to find an element that has a certain value. To get the value I have to access first on the pair and then value on the element. Below my std::find_if there is a normal loop that does the trick. My lambda fails to compile. How do I access value inside the element, which is inside the pair? I use g++ 4.4+ and VS 2010, and I want to stick to boost lambda for now. #include <vector> #include <algorithm> #include <boost/lambda/lambda.hpp> #include <boost/lambda/bind.hpp> template<typename T> class element { public: T value; }; template<typename T> class element_vector_pair // idiom to have templated typedef { public: typedef std::pair<element<T>, std::vector<T> > type; }; template<typename T> class vector_containter // idiom to have templated typedef { public: typedef std::vector<typename element_vector_pair<T>::type > type; }; template<typename T> bool operator==(const typename element_vector_pair<T>::type & lhs, const typename element_vector_pair<T>::type & rhs) { return lhs.first.value == rhs.first.value; } template<typename T> class some_container { public: element<T> get_element(const T& value) const { std::find_if(container.begin(), container.end(), bind(&typename vector_containter<T>::type::value_type::first::value, boost::lambda::_1) == value); /*for(size_t i = 0; i < container.size(); ++i) { if(container.at(i).first.value == value) { return container.at(i); } }*/ return element<T>(); //whatever } protected: typename vector_containter<T>::type container; }; int main() { some_container<int> s; s.get_element(5); return 0; }
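
    The expression &typename vector_containter<T>::type::value_type::first::value cannot work because first is a data member of the pair, not a type, so there is no nested ::value whose address can be taken. One hedged, untested way to express the lookup in Boost.Lambda is to chain two binds, the inner one pulling .first out of the pair and the outer one pulling .value out of the element:

        // Sketch only, reusing the typedefs from the question.
        typedef typename element_vector_pair<T>::type pair_t;  // pair<element<T>, vector<T> >
        typename vector_containter<T>::type::const_iterator it =
            std::find_if(container.begin(), container.end(),
                         boost::lambda::bind(&element<T>::value,
                             boost::lambda::bind(&pair_t::first, boost::lambda::_1)) == value);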

    Read the article

  • 2D polygon triangulation

    - by logank9
    The code below is my attempt at triangulation. It outputs the wrong angles (it reads a square's angles as 90, 90, 90, 176) and draws the wrong shapes. What am I doing wrong? //use earclipping to generate a list of triangles to draw std::vector<vec> calcTriDraw(std::vector<vec> poly) { std::vector<double> polyAngles; //get angles for(unsigned int i = 0;i < poly.size();i++) { int p1 = i - 1; int p2 = i; int p3 = i + 1; if(p3 > int(poly.size())) p3 -= poly.size(); if(p1 < 0) p1 += poly.size(); //get the angle from 3 points double dx, dy; dx = poly[p2].x - poly[p1].x; dy = poly[p2].y - poly[p1].y; double a = atan2(dy,dx); dx = poly[p3].x - poly[p2].x; dy = poly[p3].y - poly[p2].y; double b = atan2(dy,dx); polyAngles.push_back((a-b)*180/PI); } std::vector<vec> triList; for(unsigned int i = 0;i < poly.size() && poly.size() > 2;i++) { int p1 = i - 1; int p2 = i; int p3 = i + 1; if(p3 > int(poly.size())) p3 -= poly.size(); if(p1 < 0) p1 += poly.size(); if(polyAngles[p2] >= 180) { continue; } else { triList.push_back(poly[p1]); triList.push_back(poly[p2]); triList.push_back(poly[p3]); poly.erase(poly.begin()+p2); std::vector<vec> add = calcTriDraw(poly); triList.insert(triList.end(), add.begin(), add.end()); break; } } return triList; }
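
    Two hedged observations rather than a full review: the wrap-around test if(p3 > int(poly.size())) lets p3 reach poly.size() itself (it should be >=), and the angle test is fragile because differences of atan2 results wrap around at +/-180 degrees, which is likely where the stray 176 comes from. Ear-clipping implementations usually test convexity with a 2D cross product instead, for example (assuming vec exposes x and y as doubles):

        // Positive cross product means a convex corner for counter-clockwise winding;
        // flip the comparison for clockwise input.
        static bool isConvexCorner(const vec& a, const vec& b, const vec& c)
        {
            double cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
            return cross > 0.0;
        }

    A complete ear test also checks that no other polygon vertex lies inside the candidate triangle before clipping it.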

    Read the article

  • Variable sized packet structs with vectors

    - by Rev316
    Lately I've been diving into network programming, and I'm having some difficulty constructing a packet with a variable "data" property. Several prior questions have helped tremendously, but I'm still lacking some implementation details. I'm trying to avoid using variable sized arrays, and just use a vector. But I can't get it to be transmitted correctly, and I believe it's somewhere during serialization. Now for some code. Packet Header class Packet { public: void* Serialize(); bool Deserialize(void *message); unsigned int sender_id; unsigned int sequence_number; std::vector<char> data; }; Packet ImpL typedef struct { unsigned int sender_id; unsigned int sequence_number; std::vector<char> data; } Packet; void* Packet::Serialize(int size) { Packet* p = (Packet *) malloc(8 + 30); p->sender_id = htonl(this->sender_id); p->sequence_number = htonl(this->sequence_number); p->data.assign(size,'&'); //just for testing purposes } bool Packet::Deserialize(void *message) { Packet *s = (Packet*)message; this->sender_id = ntohl(s->sender_id); this->sequence_number = ntohl(s->sequence_number); this->data = s->data; } During execution, I simply create a packet, assign it's members, and send/receive accordingly. The above methods are only responsible for serialization. Unfortunately, the data never gets transferred. Couple of things to point out here. I'm guessing the malloc is wrong, but I'm not sure how else to compute it (i.e. what other value it would be). Other than that, I'm unsure of the proper way to use a vector in this fashion, and would love for someone to show me how (code examples please!) :) Edit: I've awarded the question to the most comprehensive answer regarding the implementation with a vector data property. Appreciate all the responses!
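
    A hedged reading of the code: casting a struct that contains a std::vector to raw memory only copies the vector's internal pointers, never its elements, so the data can never arrive intact on the other side. Serialization normally means building a flat byte buffer by hand: fixed-size header fields converted with htonl, followed by a length field and the payload bytes, so the receiver knows how much data to expect. A rough sketch (function and field names are illustrative, and the matching deserializer is omitted):

        #include <arpa/inet.h>   // htonl (use <winsock2.h> on Windows)
        #include <cstdint>
        #include <cstring>
        #include <vector>

        std::vector<char> serializePacket(uint32_t senderId, uint32_t seq,
                                          const std::vector<char>& data)
        {
            std::vector<char> buf(3 * sizeof(uint32_t) + data.size());
            uint32_t header[3] = { htonl(senderId), htonl(seq),
                                   htonl(static_cast<uint32_t>(data.size())) };
            std::memcpy(buf.data(), header, sizeof(header));   // fixed-size header
            if (!data.empty())
                std::memcpy(buf.data() + sizeof(header), data.data(), data.size()); // payload
            return buf;   // send buf.data(), buf.size() over the socket
        }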

    Read the article

  • GAE Datastore: persisting referenced objects

    - by David
    Two "I'm sorries" to begin with: 1) I've looked for a solution (here and elsewhere) and couldn't find the answer. 2) English is not my mother tongue, so I may have some typos and the like - please ignore them. To the point: I am trying to persist Java objects to the GAE datastore. I am not sure how to persist an object holding a ("non-trivial") reference to another object. That is, assume I have the following. public class Father { String name; int age; Vector<Child> offsprings; //this is what I call "non-trivial" reference //ctor, getters, setters... } public class Child { String name; int age; Father father; //this is what I call "non-trivial" reference //ctor, getters, setters... } The name field is unique in each type's domain and is considered a Primary-Key. In order to persist the "trivial" (String, int) fields, all I need is to add the correct annotation. So far so good. However, I don't understand how I should persist the references to the home-brewed (Child, Father) types. Should I: Convert each such reference to hold the Primary-Key (a name String, in this example) instead of the "actual" object, so that Vector<Child> offsprings; becomes Vector<String> offspringsNames;? If that is the case, how do I handle the object at run-time? Do I just query for the Primary-Key from Class.getName, to retrieve the referenced objects? Or convert each such reference to hold the actual Key provided to me by the Datastore upon the proper put() operation, so that Vector<Child> offsprings; becomes Vector<Key> offspringsHashKeys;? I would very much appreciate all kinds of comments. I have read ALL the official relevant GAE docs/examples. Throughout, they always persist "trivial" references natively supported by the Datastore (e.g. in the Guestbook example, only Strings and Longs). Many thanks, David

    Read the article

  • retrieving information from web service calls

    - by Monte Chan
    Hi all, I am trying to retrieve information from a web service call. The following is what I have so far. In my text view, it is showing Map {item=anyType{key=TestKey; value=2;}; item=anyType{key=TestField; value=adsfasd; };} When I ran that in the debugger, I can see the information above in the variable, tempvar. But the question is, how do I retrieve the information (i.e. the actual values of "key" and "value" in each of the array positions)? Yes, I know there is a lot going on in onCreate and I will fix it later. Thanks in advance, Monte My codes are as follows, import java.util.Vector; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.AndroidHttpTransport; public class ViewHitUpActivity extends Activity { private static final String SOAP_ACTION = "test_function"; private static final String METHOD_NAME = "test_function"; private static final String NAMESPACE = "http://www.monteandjanicechan.com/"; private static final String URL = "http://www.monteandjanicechan.com/ws/test_ws.cfc?wsdl"; // private Object resultRequestSOAP = null; private TextView tv; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); tv = (TextView)findViewById(R.id.people_view); //SoapObject request.addProperty("test_item", "1"); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.setOutputSoapObject(request); AndroidHttpTransport androidHttpTransport = new AndroidHttpTransport(URL); try { androidHttpTransport.call(SOAP_ACTION, envelope); /* resultRequestSOAP = envelope.getResponse(); Vector tempResult = (Vector) resultRequestSOAP("test_functionReturn"); */ SoapObject resultsRequestSOAP = (SoapObject) envelope.bodyIn; Vector tempResult = (Vector) resultsRequestSOAP.getProperty("test_functionReturn"); int testsize = tempResult.size(); // SoapObject test = (SoapObject) tempResult.get(0); //String[] results = (String[]) resultRequestSOAP; Object tempvar = tempResult.elementAt(1); tv.setText(tempvar.toString()); } catch (Exception aE) { aE.printStackTrace (); tv.setText(aE.getClass().getName() + ": " + aE.getMessage()); } } }

    Read the article

  • Should we denormalize the database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL. Our Measurement entity class is the one that holds a single measurement, and looks like this: public class Measurement { public virtual Guid Id { get; private set; } public virtual Device Device { get; private set; } public virtual Timestamp Timestamp { get; private set; } public virtual IList<VectorValue> Vectors { get; private set; } } Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.Net batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later). The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected): 1 timestamp, 1 measurement, 8 vector values, which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now. [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.

    Read the article

  • C++ Using a class template argument as a template argument for another type

    - by toefel
    Hey everyone, I'm having this problem while writing my own HashTable. It all works, but when I try to templatize the thing, it gives me errors. I recreated the problem as follows: THIS CODE WORKS: typedef double Item; class A { public: A() { v.push_back(pair<string, Item>("hey", 5.0)); } void iterate() { for(Iterator iter = v.begin(); iter != v.end(); ++iter) cout << iter->first << ", " << iter->second << endl; } private: vector<pair<string, double> > v; typedef vector< pair<string, double> >::iterator Iterator; }; THIS CODE DOES NOT: template<typename ValueType> class B { public: B(){} void iterate() { for(Iterator iter = v.begin(); iter != v.end(); ++iter) cout << iter->first << ", " << iter->second << endl; } private: vector<pair<string, ValueType> > v; typedef vector< pair<string, ValueType> >::iterator Iterator; }; the error messages: g++ -O0 -g3 -Wall -c -fmessage-length=0 -omain.o ..\main.cpp ..\main.cpp:50: error: type `std::vector<std::pair<std::string, ValueType>, std::allocator<std::pair<std::string, ValueType> > >' is not derived from type `B<ValueType>' ..\main.cpp:50: error: ISO C++ forbids declaration of `iterator' with no type ..\main.cpp:50: error: expected `;' before "Iterator" ..\main.cpp: In member function `void B::iterate()': ..\main.cpp:44: error: `Iterator' was not declared in this scope ..\main.cpp:44: error: expected `;' before "iter" ..\main.cpp:44: error: `iter' was not declared in this scope Does anybody know why this is happening? Thanks!
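
    The compiler is stumbling over a dependent name: inside the template, vector< pair<string, ValueType> >::iterator depends on ValueType, so C++ requires the typename keyword before it will treat the nested name as a type. The fix is one keyword:

        // Dependent nested types need 'typename' inside a template:
        typedef typename vector< pair<string, ValueType> >::iterator Iterator;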

    Read the article

  • Emacs, C++ code completion for vectors

    - by Caglar Toklu
    Hi, I am new to Emacs, and I have the following code as a sample. I have installed GNU Emacs 23.1.1 (i386-mingw-nt6.1.7600), installed cedet-1.0pre7.tar.gz. , installed ELPA, and company. You can find my simple Emacs configuration at the bottom. The problem is, when I type q[0] in main() and press . (dot), I see the 37 members of the vector, not Person although first_name and last_name are expected. The completion works as expected in the function greet() but it has nothing to do with vector. My question is, how can I accomplish code completion for vector elements too? #include <iostream> #include <vector> using namespace std; class Person { public: string first_name; string last_name; }; void greet(Person a_person) { // a_person.first_name is completed as expected! cout << a_person.first_name << "|"; cout << a_person.last_name << endl; }; int main() { vector<Person> q(2); Person guy1; guy1.first_name = "foo"; guy1.last_name = "bar"; Person guy2; guy2.first_name = "stack"; guy2.last_name = "overflow"; q[0] = guy1; q[1] = guy2; greet(guy1); greet(guy2); // cout q[0]. I want to see first_name or last_name here! } My Emacs configuration: ;;; This was installed by package-install.el. ;;; This provides support for the package system and ;;; interfacing with ELPA, the package archive. ;;; Move this code earlier if you want to reference ;;; packages in your .emacs. (when (load (expand-file-name "~/.emacs.d/elpa/package.el")) (package-initialize)) (load-file "~/.emacs.d/cedet/common/cedet.el") (semantic-load-enable-excessive-code-helpers) (require 'semantic-ia) (global-srecode-minor-mode 1) (semantic-add-system-include "/gcc/include/c++/4.4.2" 'c++-mode) (semantic-add-system-include "/gcc/i386-pc-mingw32/include" 'c++-mode) (semantic-add-system-include "/gcc/include" 'c++-mode) (defun my-semantic-hook () (imenu-add-to-menubar "TAGS")) (add-hook 'semantic-init-hooks 'my-semantic-hook)

    Read the article

  • Remove never-run call to templated function, get allocation error at run time

    - by Narfanator
    First off, I'm a bit at a loss as to how to ask this question. So I'm going to try throwing lots of information at the problem. Ok, so, I went to completely redesign my test project for my experimental core library thingy. I use a lot of template shenanigans in the library. When I removed the "user" code, the tests gave me a memory allocation error. After quite a bit of experimenting, I narrowed it down to this bit of code (out of a couple hundred lines): void VOODOO(components::switchBoard &board){ board.addComponent<using_allegro::keyInputs<'w'> >(); } Fundamentally, what's weirding me out is that it appears that the act of compiling this function (and the template function it then uses, and the template functions those then use...) makes this bug not appear. This code is not being run. Similar code (the same, but for different key vals) occurs elsewhere, but is within Boost TDD code. I realize I certainly haven't given enough information for you to solve it for me; I tried, but it more-or-less spirals into most of the code base. I think I'm most looking for "here's what the problem could be", "here's where to look", etc. There's something happening during compilation because of this line, but I don't know enough about that step to begin looking. Sooo, how can a (presumably) compiled, but never actually run, bit of templated code, when removed, cause another part of the code to fail? Error: Unhandled exception at 0x6fe731ea (msvcr90d.dll) in Switchboard.exe: 0xC0000005: Access violation reading location 0xcdcdcdc1. Callstack: operator delete(void * pUserData) allocator< class name related to key inputs callbacks ::deallocate vector< same class ::_Insert_n(...) vector< " " ::insert(...) vector<" "::push_back(...) It looks like maybe the vector isn't valid, because _MyFirst and similar data members are showing values of 0xcdcdcdcd in the debugger. But the vector is a member variable...

    Read the article

  • string s; &s+1; Legal? UB?

    - by John Dibling
    Consider the following code: #include <cstdlib> #include <iostream> #include <string> #include <vector> #include <iterator> #include <algorithm> using namespace std; int main() { string myAry[] = { "Mary", "had", "a", "Little", "Lamb" }; const size_t numStrs = sizeof(myAry)/sizeof(myAry[0]); vector<string> myVec(&myAry[0], &myAry[numStrs]); copy( myVec.begin(), myVec.end(), ostream_iterator<string>(cout, " ")); return 0; } Of interest here is &myAry[numStrs]: numStrs is equal to 5, so &myAry[numStrs] points to something that doesn't exist; the sixth element in the array. There is another example of this in the above code: myVec.end(), which points to one-past-the-end of the vector myVec. It's perfectly legal to take the address of this element that doesn't exist. We know the size of string, so we know where the address of the 6th element of a C-style array of strings must point to. So long as we only evaluate this pointer and never dereference it, we're fine. We can even compare it to other pointers for equality. The STL does this all the time in algorithms that act on a range of iterators. The end() iterator points past the end, and the loops keep looping while a counter != end(). So now consider this: #include <cstdlib> #include <iostream> #include <string> #include <vector> #include <iterator> #include <algorithm> using namespace std; int main() { string myStr = "Mary"; string* myPtr = &myStr; vector<string> myVec2(myPtr, &myPtr[1]); copy( myVec2.begin(), myVec2.end(), ostream_iterator<string>(cout, " ")); return 0; } Is this code legal and well-defined? It is legal and well-defined to take the address of an array element past the end, as in &myAry[numStrs], so should it be legal and well-defined to pretend that myPtr is also an array?
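
    For what it's worth, the standard treats a single object as an array of one element for the purposes of pointer arithmetic, so forming (but not dereferencing) the one-past-the-end address is fine in both snippets; a comment-sized illustration:

        // Legal: &myStr + 1 (equivalently &myPtr[1]) is a one-past-the-end pointer.
        // It may be compared against and used as an end iterator, but never dereferenced.
        std::string myStr = "Mary";
        std::string* first = &myStr;
        std::string* last  = first + 1;                 // one past the single object
        std::vector<std::string> myVec2(first, last);   // copies exactly one string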

    Read the article

  • Simplifying const Overloading?

    - by templatetypedef
    Hello all- I've been teaching a C++ programming class for many years now, and one of the trickiest things to explain to students is const overloading. I commonly use the example of a vector-like class and its operator[] function: template <typename T> class Vector { public: T& operator[] (size_t index); const T& operator[] (size_t index) const; }; I have little to no trouble explaining why two versions of the operator[] function are needed, but in trying to explain how to unify the two implementations I often find myself wasting a lot of time on language arcana. The problem is that the only good, reliable way that I know of to implement one of these functions in terms of the other is the const_cast/static_cast trick: template <typename T> const T& Vector<T>::operator[] (size_t index) const { /* ... your implementation here ... */ } template <typename T> T& Vector<T>::operator[] (size_t index) { return const_cast<T&>(static_cast<const Vector&>(*this)[index]); } The problem with this setup is that it's extremely tricky to explain and not at all intuitively obvious. When you explain it as "cast to const, then call the const version, then strip off constness" it's a little easier to understand, but the actual syntax is frightening. Explaining what const_cast is, why it's appropriate here, and why it's almost universally inappropriate elsewhere usually takes me five to ten minutes of lecture time, and making sense of this whole expression often requires more effort than explaining the difference between const T* and T* const. I feel that students need to know about const overloading and how to do it without needlessly duplicating the code in the two functions, but this trick seems a bit excessive for an introductory C++ programming course. My question is this - is there a simpler way to implement const-overloaded functions in terms of one another? Or is there a simpler way of explaining this existing trick to students? Thanks so much!
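
    One alternative that sidesteps const_cast entirely, offered as a sketch rather than the canonical answer: forward both overloads to a single static template helper and let the compiler deduce the const-ness of *this (this relies on C++14 auto& return-type deduction; in C++11 a trailing return type works too):

        #include <cstddef>
        #include <vector>

        template <typename T>
        class Vector {
        public:
            T&       operator[](std::size_t i)       { return element(*this, i); }
            const T& operator[](std::size_t i) const { return element(*this, i); }

        private:
            // Self deduces to Vector<T> or const Vector<T>, so one body serves both
            // overloads and the normal const rules still apply inside it.
            template <typename Self>
            static auto& element(Self& self, std::size_t i) { return self.storage_[i]; }

            std::vector<T> storage_;   // placeholder storage for the sketch
        };

    C++23's explicit object parameters ("deducing this") collapse this even further, but that is beyond an introductory course.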

    Read the article

  • Collision Detection problem (intersection with plane)

    - by Demi
    I'm doing a scene using openGL (a house). I want to do some collision detection, mainly with the walls in the house. I have tried the following code: // a plane is represented with a normal and a position in space Vector planeNor(0,0,1); Vector position(0,0,-10); Plane p(planeNor,position); Vector vel(0,0,-1); double lamda; // this is the intersection point Vector pNormal; // the normal of the intersection // this method is from Nehe's Lesson 30 coll= p.TestIntersionPlane(vel,Z,lamda,pNormal); glPushMatrix(); glBegin(GL_QUADS); if(coll) glColor3f(1,0,0); else glColor3f(1,1,1); glVertex3d(0,0,-10); glVertex3d(3,0,-10); glVertex3d(3,3,-10); glVertex3d(0,3,-10); glEnd(); glPopMatrix(); Nehe's method: #define EPSILON 1.0e-8 #define ZERO EPSILON bool Plane::TestIntersionPlane(const Vector3 & position,const Vector3 & direction, double& lamda, Vector3 & pNormal) { double DotProduct=direction.scalarProduct(normal); // Dot Product Between Plane Normal And Ray Direction double l2; // Determine If Ray Parallel To Plane if ((DotProduct<ZERO)&&(DotProduct>-ZERO)) return false; l2=(normal.scalarProduct(position))/DotProduct; // Find Distance To Collision Point if (l2<-ZERO) // Test If Collision Behind Start return false; pNormal= normal; lamda=l2; return true; } Z is initially (0,0,0) and every time I move the camera towards the plane, I reduce its z component by 0.1 (i.e. Z.z-=0.1 ). I know that the problem is with the vel vector, but I can't figure out what the right value should be. Can anyone please help me?
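
    Two hedged observations plus a sketch: in the call p.TestIntersionPlane(vel, Z, lamda, pNormal) the velocity is passed as the position parameter and the camera position Z as the direction, which by itself would give odd results; and the distance computation never uses the plane's own position, so the test implicitly assumes a plane through the origin rather than one at z = -10. The usual ray/plane test looks like this (assuming Vector supports subtraction and scalarProduct as in the question):

        // Returns true and fills t when rayOrigin + t * rayDir hits the plane
        // defined by planeNormal and any point planePoint lying on it.
        bool intersectRayPlane(const Vector& rayOrigin, const Vector& rayDir,
                               const Vector& planeNormal, const Vector& planePoint,
                               double& t)
        {
            double denom = planeNormal.scalarProduct(rayDir);
            if (denom > -1.0e-8 && denom < 1.0e-8)
                return false;                                  // ray parallel to the plane
            t = planeNormal.scalarProduct(planePoint - rayOrigin) / denom;
            return t >= 0.0;                                   // hit in front of the origin
        }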

    Read the article

  • I am getting the below-mentioned error in my program. What is the solution?

    - by suvirai
    // Finaldesktop.cpp : Defines the entry point for the console application. #include <windows.h> #include <conio.h> #include <string> #include <iostream> #include <vector> using namespace std; int SearchDirectory(vector<string> &refvecFiles, const string &refcstrRootDirectory, const string &refcstrExtension, bool bSearchSubdirectories = true) { string strFilePath; // Filepath string strPattern; // Pattern string strExtension; // Extension HANDLE hFile; // Handle to file WIN32_FIND_DATA FileInformation; // File information strPattern = refcstrRootDirectory + "\\*.*"; hFile = FindFirstFile(strPattern.c_str(), &FileInformation); if(hFile != INVALID_HANDLE_VALUE) { do { if(FileInformation.cFileName[0] != '.') { strFilePath.erase(); strFilePath = refcstrRootDirectory + "\\" + FileInformation.cFileName; if(FileInformation.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) { if(bSearchSubdirectories) { // Search subdirectory int iRC = SearchDirectory(refvecFiles, strFilePath, refcstrExtension, bSearchSubdirectories); if(iRC) return iRC; } } else { // Check extension strExtension = FileInformation.cFileName; strExtension = strExtension.substr(strExtension.rfind(".") + 1); if(strExtension == refcstrExtension) { // Save filename refvecFiles.push_back(strFilePath); } } } } while(FindNextFile(hFile, &FileInformation) == TRUE); // Close handle FindClose(hFile); DWORD dwError = GetLastError(); if(dwError != ERROR_NO_MORE_FILES) return dwError; } return 0; } int main() { int iRC = 0; vector<string> vecAviFiles; vector<string> vecTxtFiles; // Search 'c:' for '.avi' files including subdirectories iRC = SearchDirectory(vecAviFiles, "c:", "avi"); if(iRC) { cout << "Error " << iRC << endl; return -1; } // Print results for(vector<string>::iterator iterAvi = vecAviFiles.begin(); iterAvi != vecAviFiles.end(); ++iterAvi) cout << *iterAvi << endl; // Search 'c:\textfiles' for '.txt' files excluding subdirectories iRC = SearchDirectory(vecTxtFiles, "c:\\textfiles", "txt", false); if(iRC) { cout << "Error " << iRC << endl; return -1; } // Print results for(vector<string>::iterator iterTxt = vecTxtFiles.begin(); iterTxt != vecTxtFiles.end(); ++iterTxt) cout << *iterTxt << endl; // Wait for keystroke _getch(); return 0; }

    Read the article

  • How can I represent a line of music notes in a way that allows fast insertion at any index?

    - by chairbender
    For "fun", and to learn functional programming, I'm developing a program in Clojure that does algorithmic composition using ideas from this theory of music called "Westergaardian Theory". It generates lines of music (where a line is just a single staff consisting of a sequence of notes, each with pitches and durations). It basically works like this: Start with a line consisting of three notes (the specifics of how these are chosen are not important). Randomly perform one of several "operations" on this line. The operation picks randomly from all pairs of adjacent notes that meet a certain criteria (for each pair, the criteria only depends on the pair and is independent of the other notes in the line). It inserts 1 or several notes (depending on the operation) between the chosen pair. Each operation has its own unique criteria. Continue randomly performing these operations on the line until the line is the desired length. The issue I've run into is that my implementation of this is quite slow, and I suspect it could be made faster. I'm new to Clojure and functional programming in general (though I'm experienced with OO), so I'm hoping someone with more experience can point out if I'm not thinking in a functional paradigm or missing out on some FP technique. My current implementation is that each line is a vector containing maps. Each map has a :note and a :dur. :note's value is a keyword representing a musical note like :A4 or :C#3. :dur's value is a fraction, representing the duration of the note (1 is a whole note, 1/4 is a quarter note, etc...). So, for example, a line representing the C major scale starting on C3 would look like this: [ {:note :C3 :dur 1} {:note :D3 :dur 1} {:note :E3 :dur 1} {:note :F3 :dur 1} {:note :G3 :dur 1} {:note :A4 :dur 1} {:note :B4 :dur 1} ] This is a problematic representation because there's not really a quick way to insert into an arbitrary index of a vector. But insertion is the most frequently performed operation on these lines. My current terrible function for inserting notes into a line basically splits the vector using subvec at the point of insertion, uses conj to join the first part + notes + last part, then uses flatten and vec to make them all be in a one-dimensional vector. For example if I want to insert C3 and D3 into the the C major scale at index 3 (where the F3 is), it would do this (I'll use the note name in place of the :note and :dur maps): (conj [C3 D3 E3] [C3 D3] [F3 G3 A4 B4]), which creates [C3 D3 E3 [C3 D3] [F3 G3 A4 B4]] (vec (flatten previous-vector)) which gives [C3 D3 E3 C3 D3 F3 G3 A4 B4] The run time of that is O(n), AFAIK. I'm looking for a way to make this insertion faster. I've searched for information on Clojure data structures that have fast insertion but haven't found anything that would work. I found "finger trees" but they only allow fast insertion at the start or end of the list. Edit: I split this into two questions. The other part is here.

    Read the article

  • Calling compiled C from R with .C()

    - by Sarah
    I'm trying to call a program (function getNBDensities in the C executable measurementDensities_out) from R. The function is passed several arrays and the variable double runsum. Right now, the getNBDensities function basically does nothing: it prints to screen the values of passed parameters. My problem is the syntax of calling the function: array(.C("getNBDensities", hr = as.double(hosp.rate), # a vector (s x 1) sp = as.double(samplingProbabilities), # another vector (s x 1) odh = as.double(odh), # another vector (s x 1) simCases = as.integer(x[c("xC1","xC2","xC3")]), # another vector (s x 1) obsCases = as.integer(y[c("yC1","yC2","yC3")]), # another vector (s x 1) runsum = as.double(runsum), # double DUP = TRUE, NAOK = TRUE, PACKAGE = "measurementDensities_out")$f, dim = length(y[c("yC1","yC2","yC3")]), dimnames = c("yC1","yC2","yC3")) The error I get, after proper execution of the function (i.e., the right output is printed to screen), is Error in dim(data) <- dim : attempt to set an attribute on NULL I'm unclear what the dimensions are that I should be passing the function: should it be s x 5 + 1 (five vectors of length s and one double)? I've tried all sorts of combinations (including sx5+1) and have found only seemingly conflicting descriptions/examples online of what's supposed to happen here. For those who are interested, the C code is below: #include <R.h> #include <Rmath.h> #include <math.h> #include <Rdefines.h> #include <R_ext/PrtUtil.h> #define NUM_STRAINS 3 #define DEBUG void getNBDensities( double *hr, double *sp, double *odh, int *simCases, int *obsCases, double *runsum ); void getNBDensities( double *hr, double *sp, double *odh, int *simCases, int *obsCases, double *runsum ) { #ifdef DEBUG for ( int s = 0; s < NUM_STRAINS; s++ ) { Rprintf("\nFor strain %d",s); Rprintf("\n\tHospitalization rate = %lg", hr[s]); Rprintf("\n\tSimulation probability = %lg",sp[s]); Rprintf("\n\tSimulated cases = %d",simCases[s]); Rprintf("\n\tObserved cases = %d",obsCases[s]); Rprintf("\n\tOverdispersion parameter = %lg",odh[s]); } Rprintf("\nRunning sum = %lg",runsum[0]); #endif } naive solution While better (i.e., potentially faster or syntactically clearer) solutions may exist (see Dirk's answer below), the following simplification of the code works: out<-.C("getNBDensities", hr = as.double(hosp.rate), sp = as.double(samplingProbabilities), odh = as.double(odh), simCases = as.integer(x[c("xC1","xC2","xC3")]), obsCases = as.integer(y[c("yC1","yC2","yC3")]), runsum = as.double(runsum)) The variables can be accessed in >out.

    Read the article

  • What's the fastest lookup algorithm for a key/value pair data structure (i.e., a map)?

    - by truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 – 26 for value. The time taken (on my system) to lookup the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with O3 option turned on for g++ 4.4) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance critical aspects of C++ programming. #include <ctime> #include <map> #include <vector> #include <iostream> struct mystruct { char key; int value; mystruct(char k = 0, int v = 0) : key(k), value(v) { } }; int find(const std::vector<mystruct>& ref, char key) { for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i) if (i->key == key) return i->value; return -1; } int main() { std::map<char, int> mymap; std::vector<mystruct> myvec; for (int i = 'a'; i < 'a' + 26; ++i) { mymap[i] = i - 'a'; myvec.push_back(mystruct(i, i - 'a')); } int pre = clock(); for (int i = 0; i < 10000000; ++i) { find(myvec, 'z'); } std::cout << "linear scan: milli " << clock() - pre << "\n"; pre = clock(); for (int i = 0; i < 10000000; ++i) { mymap['z']; } std::cout << "map scan: milli " << clock() - pre << "\n"; return 0; }
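
    For a fixed, tiny key space such as 26 lowercase letters, both containers are overkill: a plain array indexed by the character gives a single memory access per lookup, which is hard to beat; for more general keys, a sorted vector with std::lower_bound or std::unordered_map are the usual next steps. A hedged sketch, assuming keys are always 'a'..'z':

        #include <iostream>

        int main()
        {
            int table[26];
            for (int i = 0; i < 26; ++i)
                table[i] = i;                        // value stored for key 'a' + i

            char key = 'z';
            std::cout << table[key - 'a'] << "\n";   // O(1): just index by the character
            return 0;
        }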

    Read the article

  • Game Object Factory: Fixing Memory Leaks

    - by Bunkai.Satori
    Dear all, this is going to be tough: I have created a game object factory that generates whatever objects I wish. However, I get memory leaks which I cannot fix. The memory leaks are generated by return new Object(); in the bottom part of the code sample. static BaseObject * CreateObjectFunc() { return new Object(); } How and where should I delete the pointers? I wrote bool ReleaseClassTypes(). Although the factory works well, ReleaseClassTypes() does not fix the memory leaks. bool ReleaseClassTypes() { unsigned int nRecordCount = vFactories.size(); for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++ ) { // if the object exists in the container and is valid, then render it if( vFactories[nLoop] != NULL) delete vFactories[nLoop](); } return true; } Before taking a look at the code below, let me explain: my CGameObjectFactory stores pointers to functions that create a particular object type. The pointers are stored in the vFactories vector container. I have chosen this approach because I parse an object map file. I have object type IDs (integer values) which I need to translate into real objects. Because I have over 100 different object data types, I wanted to avoid continuously traversing a very long switch statement. Therefore, to create an object, I call vFactories[nEnumObjectTypeID] via CGameObjectFactory::create(), which calls the stored function that generates the desired object. The position of the appropriate function in vFactories is identical to nObjectTypeID, so I can use indexing to access the function. So the question remains: how should I proceed with cleanup and avoid the reported memory leaks? #ifndef GAMEOBJECTFACTORY_H_UNIPIXELS #define GAMEOBJECTFACTORY_H_UNIPIXELS //#include "MemoryManager.h" #include <vector> template <typename BaseObject> class CGameObjectFactory { public: // cleanup and release registered object data types bool ReleaseClassTypes() { unsigned int nRecordCount = vFactories.size(); for (unsigned int nLoop = 0; nLoop < nRecordCount; nLoop++ ) { // if the object exists in the container and is valid, then render it if( vFactories[nLoop] != NULL) delete vFactories[nLoop](); } return true; } // register new object data type template <typename Object> bool RegisterClassType(unsigned int nObjectIDParam ) { if(vFactories.size() < nObjectIDParam) vFactories.resize(nObjectIDParam); vFactories[nObjectIDParam] = &CreateObjectFunc<Object>; return true; } // create new object by calling the pointer to the appropriate type function BaseObject* create(unsigned int nObjectIDParam) const { return vFactories[nObjectIDParam](); } // resize the vector array containing pointers to function calls bool resize(unsigned int nSizeParam) { vFactories.resize(nSizeParam); return true; } private: //DECLARE_HEAP; template <typename Object> static BaseObject * CreateObjectFunc() { return new Object(); } typedef BaseObject*(*factory)(); std::vector<factory> vFactories; }; //DEFINE_HEAP_T(CGameObjectFactory, "Game Object Factory"); #endif // GAMEOBJECTFACTORY_H_UNIPIXELS
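
    A hedged reading of the leak: vFactories holds plain function pointers, so delete vFactories[nLoop]() in ReleaseClassTypes() actually calls the factory function, allocates a brand-new object and deletes that one; the objects previously handed out by create() are never touched, and the function pointers themselves need no delete at all. Ownership has to live with whoever receives the created objects. One way to make that explicit (a hypothetical variant, not the existing API) is to return a smart pointer from create():

        #include <memory>

        // The caller now owns the created object and it is destroyed automatically,
        // so ReleaseClassTypes() can shrink to vFactories.clear().
        std::unique_ptr<BaseObject> create(unsigned int nObjectIDParam) const
        {
            return std::unique_ptr<BaseObject>(vFactories[nObjectIDParam]());
        }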

    Read the article

  • Issues with glVertexAttribPointer's last 2 parameters?

    - by NoobScratcher
    Introduction Hello I will start out by explaining my setup, showing samples as I go along explaining the situation. I'm using these tools: OpenGL 3.3 GLSL 330 C++ Problem The problem is when I render the wavefront obj 3d model it gives a very weird visual glitch the model was supposed to be a square but instead its a triangluated mess with parts of the vertexes pointing in a stretched direction in massive amounts towards the bottom left side of the frustum.... Explanation: I'm using std::vectors to store my wavefront .obj model data using sscanf to get the floating point values into the structure members x,y,z and store them into the Points structure variable p; int index = IndexAssigner(1, 1); ifstream file (list[index].c_str() ); points.push_back(Point()); Point p; int face[4]; while (!file.eof() ) { char modelbuffer[10000]; file.getline(modelbuffer, 10000); switch(modelbuffer[0]) { case 'v' : sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z); points.push_back(p); break; case 'f': sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 ); faces.push_back(face[0]); faces.push_back(face[1]); faces.push_back(face[2]); faces.push_back(face[3]); } //Turn on FileReader aka "RENDER CODE" FileReader = true; } then I render the Points vector using the .data() member of std::vectors to the frustum. Other declarations: int numfloats = 4; float* point=reinterpret_cast<float*>(&points[0]); int num_bytes=numfloats*sizeof(float); Vector declarations: struct Point {float x, y , z; }; std::vector<int>faces; std::vector<Point>points; Render code: glGenBuffers(1, &vertexbuffer); glGenTextures(1, &ModelTexture); glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer); glBindTexture(GL_TEXTURE_3D, ModelTexture); glTexImage2D(GL_TEXTURE_2D, 0,GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW); glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); glEnableVertexAttribArray(3); //Translation Process GLfloat TranslationMatrix[] = { 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0 }; //Send Translation Matrix up to the vertex shader glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix); glDrawElements( GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data()); I tried looking at what was causing this and went through every function every parameter ,etc looked at the man pages. Then found out that it could be my glVertexAttribPointer. Here are the man pages for glVertexAttribPointer http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml The last 2 parameters is my problem How do I write those 2 last parameters do I try putting the data from Points into it?. glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,num_bytes ,points.data()); How does it work with vectors? Is it fast?* if you can not be bothered too look at the man pages here is the scripts coming from the man pages directly. Stride Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0. Pointer Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0. If you want my full source - http://ideone.com/fPfkg Thanks Again if you do read this.
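
    Two hedged observations about the buffer setup rather than the attribute pointer itself: sizeof(points) is the size of the std::vector object (a few pointers), not of the vertex data, so almost nothing is uploaded; and once a VBO is bound with glBindBuffer, the last argument of glVertexAttribPointer is a byte offset into that buffer, not a client-memory pointer, so passing points.data() there points the attribute somewhere unrelated. A sketch of the usual pattern:

        // Upload the whole vertex array, then describe attribute 3 as three tightly
        // packed floats starting at byte offset 0 inside the currently bound VBO.
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER,
                     points.size() * sizeof(Point),   // size of the actual vertex data
                     points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE,
                              sizeof(Point),          // stride: one Point per vertex
                              (void*)0);              // offset into the VBO, not a pointer
        glEnableVertexAttribArray(3);

    Wavefront face indices are also 1-based, so they usually need a -1 applied before being handed to glDrawElements.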

    Read the article
