Search Results

Search found 2746 results on 110 pages for 'vector algebra'.


  • Instruments memory leak iphone

    - by dubbeat
    Hi, I posted this problem a few days ago but it was very muddled and my question wasn't very clear, so I removed it. I've been digging around and the memory leak is still persisting. Hopefully this attempt will be clearer. First off, I've run the static analyzer and it reports no memory leaks. I then ran Instruments and it pointed to a memory leak at this line of code. As far as I can see there is no memory leak.

        featured = [[UILabel alloc] initWithFrame:CGRectMake(130, 15, 200, 15)];
        //[featured setFont:[UIFont UIFontboldSystemFontOfSize:20]];
        featured.font = [UIFont boldSystemFontOfSize:20];
        featured.backgroundColor = [UIColor clearColor];
        featured.textColor = [UIColor blackColor];
        featured.text = @"Featured Promo";
        [self.view addSubview:featured];
        [featured release];
        featured = nil;

    If I comment out the above code, Instruments reports another memory leak in another block of code where there is no discernible leak.

        UIButton *populartbutton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        populartbutton.frame = CGRectMake(112, 145, 90, 22); // size and position of button
        [populartbutton setTitle:@"Popular" forState:UIControlStateNormal];
        populartbutton.backgroundColor = [UIColor clearColor];
        populartbutton.adjustsImageWhenHighlighted = YES;
        [populartbutton addTarget:self action:@selector(getpopular:) forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:populartbutton];

    Instruments also says Responsible Library = Core Graphics, Responsible Frame = open_handle_to_dylib_path. This is the stack trace:

        53 Promo start
        52 Promo main /Users/..2/main.m:14
        51 UIKit UIApplicationMain
        50 UIKit -[UIApplication _run]
        49 CoreFoundation CFRunLoopRunInMode
        48 CoreFoundation CFRunLoopRunSpecific
        47 GraphicsServices PurpleEventCallback
        46 UIKit _UIApplicationHandleEvent
        45 UIKit -[UIApplication sendEvent:]
        44 UIKit -[UIApplication handleEvent:withNewEvent:]
        43 UIKit -[UIApplication _reportAppLaunchFinished]
        42 QuartzCore CA::Transaction::commit()
        41 QuartzCore CA::Context::commit_transaction(CA::Transaction*)
        40 QuartzCore CALayerLayoutIfNeeded
        39 QuartzCore -[CALayer layoutSublayers]
        38 UIKit -[UILayoutContainerView layoutSubviews]
        37 UIKit -[UINavigationController _startDeferredTransitionIfNeeded]
        36 UIKit -[UINavigationController _startTransition:fromViewController:toViewController:]
        35 UIKit -[UINavigationController _layoutViewController:]
        34 UIKit -[UINavigationController _computeAndApplyScrollContentInsetDeltaForViewController:]
        33 UIKit -[UIViewController contentScrollView]
        32 UIKit -[UIViewController view]
        31 Promo -[FeaturedLevelViewController viewDidLoad] /Users/..s/FeaturedLevelViewController.m:67 // THIS IS MY CLASS WHERE THE CODE SAMPLES ABOVE ARE FROM
        30 UIKit -[UILabel initWithFrame:]
        29 UIKit -[UILabel _commonInit]
        28 UIKit +[UILabel defaultFont]
        27 UIKit +[UIFont systemFontOfSize:]
        26 GraphicsServices GSFontCreateWithName
        25 CoreGraphics CGFontCreateWithName
        24 CoreGraphics CGFontCreateWithFontName
        23 CoreGraphics CGFontFinderGetDefault
        22 CoreGraphics CGFontGetVTable
        21 libSystem.B.dylib pthread_once
        20 CoreGraphics load_vtable
        19 CoreGraphics load_library
        18 CoreGraphics CGLibraryLoadFunction
        17 CoreGraphics load_function
        16 CoreGraphics open_handle_to_dylib_path
        15 libSystem.B.dylib dlopen
        14 dyld dlopen
        13 dyld dyld::link(ImageLoader*, bool, ImageLoader::RPathChain const&)
        12 dyld ImageLoader::link(ImageLoader::LinkContext const&, bool, bool, ImageLoader::RPathChain const&)
        11 dyld ImageLoader::recursiveLoadLibraries(ImageLoader::LinkContext const&, bool, ImageLoader::RPathChain const&)
        10 dyld dyld::libraryLocator(char const*, bool, char const*, ImageLoader::RPathChain const*)
        9 dyld dyld::load(char const*, dyld::LoadContext const&)
        8 dyld dyld::loadPhase0(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*)
        7 dyld dyld::loadPhase1(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*)
        6 dyld dyld::loadPhase3(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*)
        5 dyld dyld::loadPhase4(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*)
        4 dyld dyld::loadPhase5(char const*, dyld::LoadContext const&, std::vector<char const*, std::allocator<char const*> >*)
        3 dyld dyld::mkstringf(char const*, ...)
        2 dyld strdup
        1 dyld malloc
        0 libSystem.B.dylib malloc

    I'm really not too sure how to use this information to fix the problem, so any guidance would be appreciated. Perhaps the answer is in the trace but I just don't know what to look for?

    EDIT: The above stack trace is from running on the simulator. The following is from running on a device. This trace does not point to any of my own classes.

        23 Promo 0x0
        22 libSystem.B.dylib _pthread_body
        21 Foundation __NSThread__main__
        20 Foundation +[NSThread exit]
        19 libSystem.B.dylib _pthread_exit
        18 libSystem.B.dylib _pthread_tsd_cleanup
        17 QuartzCore CA::Transaction::release_thread(void*)
        16 QuartzCore CA::Transaction::commit()
        15 QuartzCore CA::Context::commit_transaction(CA::Transaction*)
        14 QuartzCore CALayerDisplayIfNeeded
        13 QuartzCore -[CALayer display]
        12 QuartzCore -[CALayer _display]
        11 QuartzCore CABackingStoreUpdate
        10 QuartzCore backing_callback(CGContext*, void*)
        9 QuartzCore -[CALayer drawInContext:]
        8 UIKit -[UIView(CALayerDelegate) drawLayer:inContext:]
        7 UIKit -[UILabel drawRect:]
        6 UIKit -[UILabel drawTextInRect:]
        5 UIKit -[UILabel _drawTextInRect:baselineCalculationOnly:]
        4 UIKit -[NSString(UIStringDrawing) drawAtPoint:forWidth:withFont:lineBreakMode:]
        3 UIKit -[NSString(UIStringDrawing) drawAtPoint:forWidth:withFont:lineBreakMode:letterSpacing:includeEmoji:]
        2 WebCore WKSetCurrentGraphicsContext
        1 WebCore CurrentThreadContext()
        0 libSystem.B.dylib calloc

    Read the article

  • Boost::Interprocess Container Container Resizing No Default Constructor

    - by CuppM
    Hi, After combing through the Boost::Interprocess documentation and Google searches, I think I've found the reason/workaround to my issue. Everything I've found, as I understand it, seems to be hinting at this, but doesn't come out and say "do this because...". But if anyone can verify this I would appreciate it. I'm writing a series of classes that represent a large lookup of information that is stored in memory for fast performance in a parallelized application. Because of the size of the data and the multiple processes that run at a time on one machine, we're using Boost::Interprocess for shared memory to have a single copy of the structures. I looked at the Boost::Interprocess documentation and examples, and they typedef classes for shared memory strings, string vectors, int vector vectors, etc. And when they "use" them in their examples, they just construct them passing the allocator and maybe insert one item that they've constructed elsewhere. Like on this page: http://www.boost.org/doc/libs/1_42_0/doc/html/interprocess/allocators_containers.html So following their examples, I created a header file with typedefs for shared memory classes:

        namespace shm {
            namespace bip = boost::interprocess;
            // General/Utility Types
            typedef bip::managed_shared_memory::segment_manager segment_manager_t;
            typedef bip::allocator<void, segment_manager_t> void_allocator;
            // Integer Types
            typedef bip::allocator<int, segment_manager_t> int_allocator;
            typedef bip::vector<int, int_allocator> int_vector;
            // String Types
            typedef bip::allocator<char, segment_manager_t> char_allocator;
            typedef bip::basic_string<char, std::char_traits<char>, char_allocator> string;
            typedef bip::allocator<string, segment_manager_t> string_allocator;
            typedef bip::vector<string, string_allocator> string_vector;
            typedef bip::allocator<string_vector, segment_manager_t> string_vector_allocator;
            typedef bip::vector<string_vector, string_vector_allocator> string_vector_vector;
        }

    Then one of my lookup table classes is defined something like this:

        class Details {
        public:
            Details(const shm::void_allocator & alloc) : m_Ids(alloc), m_Labels(alloc), m_Values(alloc) {}
            ~Details() {}
            int Read(BinaryReader & br);
        private:
            shm::int_vector m_Ids;
            shm::string_vector m_Labels;
            shm::string_vector_vector m_Values;
        };

        int Details::Read(BinaryReader & br) {
            int num = br.ReadInt();
            m_Ids.resize(num);
            m_Labels.resize(num);
            m_Values.resize(num);
            for (int i = 0; i < num; i++) {
                m_Ids[i] = br.ReadInt();
                m_Labels[i] = br.ReadString().c_str();
                int count = br.ReadInt();
                m_Values[i].resize(count);
                for (int j = 0; j < count; j++) {
                    m_Values[i][j] = br.ReadString().c_str();
                }
            }
            return num;
        }

    But when I compile it, I get the error:

        'boost::interprocess::allocator<T,SegmentManager>::allocator' : no appropriate default constructor available

    And it's due to the resize() calls on the vector objects. Because the allocator types do not have an empty constructor (they take a const segment_manager_t &) and it's trying to create a default object for each location. So in order for it to work, I have to get an allocator object and pass a default value object on resize. Like this:

        int Details::Read(BinaryReader & br) {
            shm::void_allocator alloc(m_Ids.get_allocator());
            int num = br.ReadInt();
            m_Ids.resize(num);
            m_Labels.resize(num, shm::string(alloc));
            m_Values.resize(num, shm::string_vector(alloc));
            for (int i = 0; i < num; i++) {
                m_Ids[i] = br.ReadInt();
                m_Labels[i] = br.ReadString().c_str();
                int count = br.ReadInt();
                m_Values[i].resize(count, shm::string(alloc));
                for (int j = 0; j < count; j++) {
                    m_Values[i][j] = br.ReadString().c_str();
                }
            }
            return num;
        }

    Is this the best/correct way of doing it? Or am I missing something? Thanks!
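
    For reference, the pattern the question converges on (pass a prototype element that already carries the allocator) appears to be the intended one. A minimal self-contained sketch of the same idea, assuming only Boost.Interprocess; the segment name "Demo" and the sizes are illustrative:

        #include <boost/interprocess/managed_shared_memory.hpp>
        #include <boost/interprocess/containers/vector.hpp>
        #include <boost/interprocess/containers/string.hpp>
        #include <boost/interprocess/allocators/allocator.hpp>

        namespace bip = boost::interprocess;

        typedef bip::managed_shared_memory::segment_manager segment_manager_t;
        typedef bip::allocator<void, segment_manager_t> void_allocator;
        typedef bip::allocator<char, segment_manager_t> char_allocator;
        typedef bip::basic_string<char, std::char_traits<char>, char_allocator> shm_string;
        typedef bip::allocator<shm_string, segment_manager_t> string_allocator;
        typedef bip::vector<shm_string, string_allocator> string_vector;

        int main() {
            bip::shared_memory_object::remove("Demo");
            bip::managed_shared_memory segment(bip::create_only, "Demo", 65536);
            void_allocator alloc(segment.get_segment_manager());

            string_vector* labels = segment.construct<string_vector>("labels")(alloc);

            // resize() needs a prototype because the element type has no default
            // constructor -- every new slot is copy-constructed from it.
            labels->resize(10, shm_string(alloc));

            (*labels)[0] = "hello";
            bip::shared_memory_object::remove("Demo");
        }

    Recovering the allocator from an existing member via get_allocator(), as the question's second Read() does, is the same trick without storing the allocator separately.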

    Read the article

  • Routes on a sphere surface - Find geodesic?

    - by CaNNaDaRk
    I'm working with some friends on a browser-based game where people can move on a 2D map. It's been almost 7 years and still people play this game, so we are thinking of a way to give them something new. Up to now the game map has been a limited plane and people could move from (0, 0) to (MAX_X, MAX_Y) in quantized X and Y increments (just imagine it as a big chessboard). We believe it's time to give it another dimension so, just a couple of weeks ago, we began to wonder how the game could look with other mappings:

    - Unlimited plane with continuous movement: this could be a step forward but still I'm not convinced.
    - Toroidal world (continuous or quantized movement): honestly, I have worked with tori before, but this time I want something more...
    - Spherical world with continuous movement: this would be great!

    What we want

    Users' browsers are given a list of coordinates like (latitude, longitude) for each object on the spherical surface map; browsers must then show this on the user's screen, rendering them inside a web element (canvas maybe? this is not a problem). When people click on the plane we convert the (mouseX, mouseY) to (lat, lng) and send it to the server, which has to compute a route between the current user's position and the clicked point.

    What we have

    We began writing a Java library with many useful maths to work with rotation matrices, quaternions, Euler angles, translations, etc. We put it all together and created a program that generates sphere points, renders them and shows them to the user inside a JPanel. We managed to catch clicks and translate them to spherical coords and to provide some other useful features like view rotation, scale, translation etc. What we have now is like a little (very little indeed) engine that simulates client and server interaction. The client side shows points on the screen and catches other interactions; the server side renders the view and does other calculus like interpolating the route between the current position and the clicked point.

    Where is the problem?

    Obviously we want to have the shortest path to interpolate between the two route points. We use quaternions to interpolate between two points on the surface of the sphere and this seemed to work fine until I noticed that we weren't getting the shortest path on the sphere surface. We thought the problem was that the route is calculated as the sum of two rotations about the X and Y axes. So we changed the way we calculate the destination quaternion: we get the third angle (the first is latitude, the second is longitude, the third is the rotation about the vector which points toward our current position), which we called orientation. Now that we have the "orientation" angle, we rotate the Z axis and then use the resulting vector as the rotation axis for the destination quaternion (you can see the rotation axis in grey). What we got is the correct route (you can see it lies on a great circle), but we get this ONLY if the starting route point is at latitude, longitude (0, 0), which means the starting vector is (sphereRadius, 0, 0). With the previous version (image 1) we don't get a good result even when the starting point is (0, 0), so I think we're moving towards a solution, but the procedure we follow to get this route is a little "strange", maybe? In the following image you get a view of the problem we get when the starting point is not (0, 0): as you can see, the starting point is not the (sphereRadius, 0, 0) vector, and the destination point (which is correctly drawn!) is not on the route. The magenta point (the one which lies on the route) is the route's ending point rotated about the center of the sphere by (-startLatitude, 0, -startLongitude). This means that if I calculate a rotation matrix and apply it to every point on the route, maybe I'll get the real route, but I start to think that there's a better way to do this. Maybe I should try to get the plane through the center of the sphere and the route points, intersect it with the sphere and get the geodesic? But how? Sorry for being way too verbose and maybe for incorrect English, but this thing is blowing my mind!

    EDIT: This code version is related to the first image:

        public void setRouteStart(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(Math.toRadians(lat), 0, -Math.toRadians(lng));
            // set route start Quaternion
            qtStart.setInertialToObject(tmp);
            // do other stuff like drawing start point...
        }

        public void impostaDestinazione(double lat, double lng) {
            EulerAngles tmp = new EulerAngles(Math.toRadians(lat), 0, -Math.toRadians(lng));
            qtEnd.setInertialToObject(tmp);
            // do other stuff like drawing dest point...
        }

        public V3D interpolate(double totalTime, double t) {
            double _t = t/totalTime;
            Quaternion q = Quaternion.Slerp(qtStart, qtEnd, _t);
            RotationMatrix.inertialQuatToIObject(q);
            V3D p = matInt.inertialToObject(V3D.Xaxis.scale(sphereRadius));
            // other stuff, like drawing point ...
            return p;
        }

        // mostly taken from a book!
        public static Quaternion Slerp(Quaternion q0, Quaternion q1, double t) {
            double cosO = q0.dot(q1);
            double q1w = q1.w;
            double q1x = q1.x;
            double q1y = q1.y;
            double q1z = q1.z;
            if (cosO < 0.0f) {
                q1w = -q1w; q1x = -q1x; q1y = -q1y; q1z = -q1z;
                cosO = -cosO;
            }
            double sinO = Math.sqrt(1.0f - cosO*cosO);
            double O = Math.atan2(sinO, cosO);
            double oneOverSinO = 1.0f / sinO;
            double k0 = Math.sin((1.0f - t) * O) * oneOverSinO;
            double k1 = Math.sin(t * O) * oneOverSinO;
            // Interpolate
            return new Quaternion(
                k0*q0.w + k1*q1w,
                k0*q0.x + k1*q1x,
                k0*q0.y + k1*q1y,
                k0*q0.z + k1*q1z);
        }

    A little dump of what I get (again check image 1):

        Route info:
        Sphere radius and center: 200.000, (0.0, 0.0, 0.0)

        Route start: lat 0.000°, lng 0.000° @v: (200.000, 0.000, 0.000), |v| = 200.000
        Route end:   lat 30.000°, lng 30.000° @v: (150.000, 86.603, 100.000), |v| = 200.000
        Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
        Qt start: (1.000, 0.000, -0.000, 0.000); 0.000°; (1.000, 0.000, 0.000)
        Qt end:   (0.933, 0.067, -0.250, 0.250); 42.181°; (0.186, -0.695, 0.695)

        Route start: lat 30.000°, lng 10.000° @v: (170.574, 30.077, 100.000), |v| = 200.000
        Route end:   lat 80.000°, lng -50.000° @v: (22.324, -26.604, 196.962), |v| = 200.000
        Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
        Qt start: (0.962, 0.023, -0.258, 0.084); 31.586°; (0.083, -0.947, 0.309)
        Qt end:   (0.694, -0.272, -0.583, -0.324); 92.062°; (-0.377, -0.809, -0.450)
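
    Intersecting the plane through the sphere's centre and the two route points with the sphere does give the geodesic, but the same great circle drops out directly if you slerp the two position vectors themselves instead of composing Euler-angle quaternions. A minimal C++ sketch of that idea; the vector type and helpers are illustrative, not the library from the question:

        #include <cmath>
        #include <algorithm>

        struct V3 { double x, y, z; };

        static V3 scale(V3 v, double s) { return {v.x*s, v.y*s, v.z*s}; }
        static V3 add(V3 a, V3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
        static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // a and b are unit vectors from the sphere's centre to the two route
        // points; t runs from 0 to 1. The result lies on the great circle
        // through a and b, i.e. on the geodesic.
        V3 slerpPoint(V3 a, V3 b, double t, double radius) {
            double d = std::max(-1.0, std::min(1.0, dot(a, b)));
            double omega = std::acos(d);          // central angle
            double so = std::sin(omega);
            if (so < 1e-9) return scale(a, radius); // coincident points
            double k0 = std::sin((1.0 - t) * omega) / so;
            double k1 = std::sin(t * omega) / so;
            return scale(add(scale(a, k0), scale(b, k1)), radius);
        }

    Since no axis-aligned reference direction is involved, this behaves the same from any starting point, not just (0, 0); the (lat, lng) pairs only need converting to unit vectors first.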

    Read the article

  • getline won't let me type, c++

    - by Stijn
    I try to get the name of a game the user chooses and store it in a vector. I use getline so the user can use a space. When I try to type a new game to add, it won't let me. It automatically displays my games library. Please tell me what I'm doing wrong. The problem is at if(action == "add"). Here's my code:

        #include <iostream>
        #include <string>
        #include <vector>
        #include <algorithm>
        #include <ctime>
        #include <cstdlib>
        using namespace std;

        int main() {
            vector<string>::const_iterator myIterator;
            vector<string>::const_iterator iter;
            vector<string> games;
            games.push_back("Crysis 2");
            games.push_back("GodOfWar 3");
            games.push_back("FIFA 12");

            cout << "Welcome to your Games Library.\n";
            cout << "\nThese are your games:\n";
            for (iter = games.begin(); iter != games.end(); ++iter) cout << *iter << endl;

            //the loop!
            string action;
            string newGame;
            cout << "\n-Type 'exit' if you want to quit.\n-Type 'add' if you want to add a game.\n-Type 'delete' if you want to delete a game.\n-Type 'find' if you want to search a game.\n-Type 'game' if you don't know what game to play.\n-Type 'show' if you want to view your library.";

            while (action != "exit") {
                cout << "\n\nWhat do you want to do: ";
                cin >> action;
                //problem is here
                if (action == "add") {
                    cout << "\nType the name of the game you want to add: ";
                    getline(cin, newGame);
                    games.push_back(newGame);
                    for (iter = games.begin(); iter != games.end(); ++iter) cout << *iter << endl;
                    continue;
                }
                else if (action == "show") {
                    cout << "\nThese are your games:\n";
                    for (iter = games.begin(); iter != games.end(); ++iter) cout << *iter << endl;
                }
                else if (action == "delete") {
                    cout << "Type the name of the game you want to delete: ";
                    cin >> newGame;
                    getline(cin, newGame);
                    iter = find(games.begin(), games.end(), newGame);
                    if (iter != games.end()) {
                        games.erase(iter);
                        cout << "\nGame deleted!";
                    } else {
                        cout << "\nGame not found.";
                    }
                    continue;
                }
                else if (action == "find") {
                    cout << "Which game you want to look for in your library: ";
                    cin >> newGame;
                    getline(cin, newGame);
                    iter = find(games.begin(), games.end(), newGame);
                    if (iter != games.end()) {
                        cout << "Game found.\n";
                    } else {
                        cout << "Game not found.\n";
                    }
                    continue;
                }
                else if (action == "game") {
                    srand(static_cast<unsigned int>(time(0)));
                    random_shuffle(games.begin(), games.end());
                    cout << "\nWhy don't you play " << games[0];
                    continue;
                }
                else if (action == "quit") {
                    cout << "\nRemember to have fun while gaming!!\n";
                    break;
                }
                else {
                    cout << "\nCommand not found";
                }
            }
            return 0;
        }
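
    The symptom described is the classic one when mixing cin >> with getline: cin >> action stops before the trailing newline, so the next getline consumes that leftover '\n' and returns an empty string immediately. A minimal sketch of the usual fix:

        #include <iostream>
        #include <limits>
        #include <string>

        int main() {
            std::string action, newGame;
            std::cin >> action;                 // leaves the trailing '\n' in the stream
            if (action == "add") {
                // discard everything up to and including the leftover newline
                std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
                std::getline(std::cin, newGame); // now actually waits for input
            }
        }

    The same ignore() call belongs before each getline that follows a cin >> in the loop above.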

    Read the article

  • Using a boost::fusion::map in boost::spirit::karma

    - by user1097105
    I am using boost spirit to parse some text files into a data structure and now I am beginning to generate text from this data structure (using spirit karma). One attempt at a data structure is a boost::fusion::map (as suggested in an answer to this question). But although I can use boost::spirit::qi::parse() and get data in it easily, when I tried to generate text from it using karma, I failed. Below is my attempt (look especially at the "map_data" type). After some reading and playing around with other fusion types, I found boost::fusion::vector and BOOST_FUSION_DEFINE_ASSOC_STRUCT. I succeeded to generate output with both of them, but they don't seem ideal: in vector you cannot access a member using a name (it is like a tuple) -- and in the other solution, I don't think I need both ways (member name and key type) to access the members. #include <iostream> #include <string> #include <boost/spirit/include/karma.hpp> #include <boost/fusion/include/map.hpp> #include <boost/fusion/include/make_map.hpp> #include <boost/fusion/include/vector.hpp> #include <boost/fusion/include/as_vector.hpp> #include <boost/fusion/include/transform.hpp> struct sb_key; struct id_key; using boost::fusion::pair; typedef boost::fusion::map < pair<sb_key, int> , pair<id_key, unsigned long> > map_data; typedef boost::fusion::vector < int, unsigned long > vector_data; #include <boost/fusion/include/define_assoc_struct.hpp> BOOST_FUSION_DEFINE_ASSOC_STRUCT( (), assocstruct_data, (int, a, sb_key) (unsigned long, b, id_key)) namespace karma = boost::spirit::karma; template <typename X> std::string to_string ( const X& data ) { std::string generated; std::back_insert_iterator<std::string> sink(generated); karma::generate_delimited ( sink, karma::int_ << karma::ulong_, karma::space, data ); return generated; } int main() { map_data d1(boost::fusion::make_map<sb_key, id_key>(234, 35314988526ul)); vector_data d2(boost::fusion::make_vector(234, 35314988526ul)); assocstruct_data d3(234,35314988526ul); std::cout << "map_data as_vector: " << boost::fusion::as_vector(d1) << std::endl; //std::cout << "map_data to_string: " << to_string(d1) << std::endl; //*FAIL No 1* std::cout << "at_key (sb_key): " << boost::fusion::at_key<sb_key>(d1) << boost::fusion::at_c<0>(d1) << std::endl << std::endl; std::cout << "vector_data: " << d2 << std::endl; std::cout << "vector_data to_string: " << to_string(d2) << std::endl << std::endl; std::cout << "assoc_struct as_vector: " << boost::fusion::as_vector(d3) << std::endl; std::cout << "assoc_struct to_string: " << to_string(d3) << std::endl; std::cout << "at_key (sb_key): " << boost::fusion::at_key<sb_key>(d3) << d3.a << boost::fusion::at_c<0>(d3) << std::endl; return 0; } Including the commented line gives lots of pages of compilation errors, among which notably something like: no known conversion for argument 1 from ‘boost::fusion::pair’ to ‘double’ no known conversion for argument 1 from ‘boost::fusion::pair’ to ‘float’ Might it be that to_string needs the values of the map_data, and not the pairs? Though I am not good with templates, I tried to get a vector from a map using transform in the following way template <typename P> struct take_second { typename P::second_type operator() (P p) { return p.second; } }; // ... 
    inside main():

        pair<char, int> ff(32);
        std::cout << "take_second (expect 32): " << take_second<pair<char,int>>()(ff) << std::endl;
        std::cout << "transform map_data and to_string: " << to_string(boost::fusion::transform(d1, take_second<>())); // *FAIL No 2*

    But I don't know what types I'm supposed to give when instantiating take_second, and anyway I think there must be an easier way to get at (iterate over) the values of a map (is there?) If you answer this question, please also give your opinion on whether using an ASSOC_STRUCT or a map is better.
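
    For the *FAIL No 2* part, boost::fusion::transform does not take a functor instantiated for one pair type; it expects a polymorphic functor that advertises its result type through the result_of protocol. A sketch of what that usually looks like, untested against the exact Boost version in the question:

        #include <boost/fusion/include/transform.hpp>
        #include <boost/type_traits/remove_reference.hpp>

        struct take_second {
            template <typename Sig> struct result;

            // Fusion asks "what do you return when called with P?"
            template <typename F, typename P>
            struct result<F(P)> {
                typedef typename boost::remove_reference<P>::type::second_type type;
            };

            template <typename P>
            typename P::second_type operator()(const P& p) const {
                return p.second;
            }
        };

        // usage (d1 being the map_data from the question):
        //   to_string(boost::fusion::transform(d1, take_second()));

    transform returns a lazy fusion view of the values, which karma should be able to consume like any other fusion sequence, so no explicit template arguments are needed at the call site.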

    Read the article

  • Gradient algorithm produces little white dots

    - by user146780
    I'm working on an algorithm to generate point-to-point linear gradients. I have a rough, proof-of-concept implementation done:

        GLuint OGLENGINEFUNCTIONS::CreateGradient(std::vector<ARGBCOLORF> &input, POINTFLOAT start, POINTFLOAT end, int width, int height, bool radial)
        {
            std::vector<POINT> pol;
            std::vector<GLubyte> pdata(width * height * 4);
            std::vector<POINTFLOAT> linearpts;
            std::vector<float> lookup;

            float distance = GetDistance(start, end);
            RoundNumber(distance);
            POINTFLOAT temp;
            float incr = 1 / (distance + 1);

            for (int l = 0; l < 100; l++) {
                POINTFLOAT outA, OutB, dir, ndir, perp, nperp, perpoffset, diroffset;
                float dirlen, perplen;

                dir.x = end.x - start.x;
                dir.y = end.y - start.y;
                dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y));
                ndir.x = static_cast<float>(dir.x * 1.0 / dirlen);
                ndir.y = static_cast<float>(dir.y * 1.0 / dirlen);
                perp.x = dir.y;
                perp.y = -dir.x;
                perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y));
                nperp.x = static_cast<float>(perp.x * 1.0 / perplen);
                nperp.y = static_cast<float>(perp.y * 1.0 / perplen);
                perpoffset.x = static_cast<float>(nperp.x * l * 0.5);
                perpoffset.y = static_cast<float>(nperp.y * l * 0.5);
                diroffset.x = static_cast<float>(ndir.x * 0 * 0.5);
                diroffset.y = static_cast<float>(ndir.y * 0 * 0.5);

                outA.x = end.x + perpoffset.x + diroffset.x;
                outA.y = end.y + perpoffset.y + diroffset.y;
                OutB.x = start.x + perpoffset.x - diroffset.x;
                OutB.y = start.y + perpoffset.y - diroffset.y;

                for (float i = 0; i < 1; i += incr) {
                    temp = GetLinearBezier(i, outA, OutB);
                    RoundNumber(temp.x);
                    RoundNumber(temp.y);
                    linearpts.push_back(temp);
                    lookup.push_back(i);
                }
                for (unsigned int j = 0; j < linearpts.size(); j++) {
                    if (linearpts[j].x < width && linearpts[j].x >= 0 && linearpts[j].y < height && linearpts[j].y >= 0) {
                        pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 0] = (GLubyte) j;
                        pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 1] = (GLubyte) j;
                        pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 2] = (GLubyte) j;
                        pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 3] = (GLubyte) 255;
                    }
                }
                lookup.clear();
                linearpts.clear();
            }
            return CreateTexture(pdata, width, height);
        }

    It works as I would expect most of the time, but at certain angles it produces little white dots. I can't figure out what causes this. This is what it looks like at most angles (good): http://img9.imageshack.us/img9/5922/goodgradient.png But once in a while it looks like this (bad): http://img155.imageshack.us/img155/760/badgradient.png What could be causing the white dots? Is there maybe also a better way to generate my gradients if no solution is possible for this? Thanks
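
    The dots are characteristic of plotting a sampled curve into the target buffer: after rounding, adjacent samples can skip a pixel, leaving the background showing through (note also that the pdata index mixes the axes: x * 4 * width + y * 4 addresses the buffer as column-major while it was sized row-major). A hole-free alternative is to iterate over destination pixels and project each one onto the gradient axis. A sketch, with a grayscale ramp standing in for the question's colour handling:

        #include <vector>
        #include <algorithm>
        #include <cstdint>

        // rgba must hold width * height * 4 bytes; (sx,sy)->(ex,ey) is the axis.
        void linearGradient(std::vector<std::uint8_t>& rgba, int width, int height,
                            float sx, float sy, float ex, float ey) {
            float dx = ex - sx, dy = ey - sy;
            float len2 = dx * dx + dy * dy;
            if (len2 <= 0.0f) return;                       // degenerate axis
            for (int y = 0; y < height; ++y) {
                for (int x = 0; x < width; ++x) {
                    // t = projection of this pixel onto the axis, clamped to [0,1]
                    float t = ((x - sx) * dx + (y - sy) * dy) / len2;
                    t = std::min(1.0f, std::max(0.0f, t));
                    std::uint8_t v = static_cast<std::uint8_t>(t * 255.0f);
                    std::size_t i = (static_cast<std::size_t>(y) * width + x) * 4;
                    rgba[i] = rgba[i + 1] = rgba[i + 2] = v; // grayscale ramp
                    rgba[i + 3] = 255;
                }
            }
        }

    Because every destination pixel is written exactly once, no rounding pattern can produce gaps, at any angle.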

    Read the article

  • Inspire Geek Love with These Hilarious Geek Valentines

    - by Eric Z Goodnight
    Want to send some Geek Love to that special someone? Why not do it with these elementary school throwback valentines, and win their heart this upcoming Valentine’s day—the geek way! Read on to see the simple method to make your own custom Valentines, as well as download a set of eleven ready-made ones any geek guy or gal should be delighted to get. It’s amore!

    How to Make Custom Valentines

    A size we’ve used for all of our Valentines is 3” x 4” at 150 dpi. This is fairly low resolution for print, but makes a great graphic to email. With your new image open, navigate to Edit > Fill and fill your background layer with a rich, red color (or whatever appeals to you). By setting “Use” to “Foreground color” as shown above, you’ll paint whatever foreground color you have in your color picker. Press to select the text tool. Set a few text objects, using whatever fonts appeal to you. Pixel fonts, like this one, are freely downloadable, and we’ve already shared a great list of Valentines fonts. Copy an image from the internet if you’re confident your sweetie won’t mind a bit of fair use of copyrighted imagery. If they do mind, find yourself some great Creative Commons images. to do a free transform on your image, sizing it to whatever dimensions work best for your design. Right-click your newly added image layer in your panel and choose “Blending Effects” to pick a Layer Style. “Stroke” with this setting adds a black line around your image. Also turning on “Outer Glow” with this setting puts a dark black shadow around the top and bottom (and sides, although they are hidden). Add some more text. Double entendre is recommended. Click and hold down on the “Rectangle Tool” to get the “Custom Shape Tool.” The custom shape tool has useful vector shapes built into it. Find the “Shape” dropdown in the menu to find the heart image. Click and drag to create a vector heart shape in your image. Your layers panel is where you can change the color, if it happens to use the wrong one at first. Click the color swatch in your panel, highlighted in blue above. will transform your vector heart. You can also use it to rotate, if you like. Add some details, like this Power or Standby symbol, which can be found in symbol fonts, taken from images online, or drawn by hand. Your Valentine is now ready to be saved as a JPG or PNG and sent to the object of your affection! Keep reading to see a list of 11 downloadable How-To Geek Valentines, including this one and the three from the header image.

    Download The HTG Set of Valentines

    Download the HTG Geek Valentines (ZIP)

    When he’s not wooing ladies with Valentines cards, you can email the author at [email protected] with your Photoshop and Graphics questions. Your questions may be featured in a future How-To Geek article!

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

    Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations, in theory, is very clear to me. Yet I have only ever achieved a "2.5 rendering" when the camera rotates... fish-eyey, a bit like in Google Streetview: even though I have a volumetric world representation, it seems -- no matter what I try -- like I would first create a rendering from the "front view", then rotate that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned-no-rotation setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should be simply transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of my many matrices that I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.

    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

        fx and fy: pixel positions x and y
        rayPos:  vec3 for the ray starting position in world-space (calculated as below)
        rayDir:  vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
        rayStep: a temporary vec3
        camPos:  vec3 for the camera position in world space
        camRad:  vec3 for camera rotation in radians
        pmat:    typical perspective projection matrix

    The algorithm / pseudocode:

        // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
        rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
        // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
        rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
        // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye
        //    and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
        rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
        // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
        rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))
        // set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
        rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
        // perspective projection
        rayDir.Normalize()
        rayDir.MultMat(pmat)
        // before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
        rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
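
    For comparison, a common way to order these operations is: build the ray direction in camera space (the projection folded into the pixel-to-plane mapping), rotate only the direction into world space, and leave the origin at the eye -- the direction is never multiplied by the projection matrix. A C++ sketch with illustrative names:

        #include <cmath>

        struct Vec3 { double x, y, z; };

        // rotate a vector about the Y axis by angle a (radians)
        Vec3 rotateY(Vec3 v, double a) {
            double c = std::cos(a), s = std::sin(a);
            return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
        }

        // fx, fy in [0,width) x [0,height); fovScale = tan(fov / 2)
        void makeRay(double fx, double fy, double width, double height,
                     double fovScale, double camYaw, Vec3 camPos,
                     Vec3& rayPos, Vec3& rayDir) {
            double px = (2.0 * (fx + 0.5) / width - 1.0) * fovScale * (width / height);
            double py = (1.0 - 2.0 * (fy + 0.5) / height) * fovScale;
            Vec3 dir = { px, py, 1.0 };                 // camera-space direction
            double len = std::sqrt(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);
            dir = { dir.x/len, dir.y/len, dir.z/len };
            rayDir = rotateY(dir, camYaw);              // world-space direction
            rayPos = camPos;                            // origin stays at the eye
        }

    Rotating a position on a view plane at the origin (steps 1-2 and 4 above) and then pushing the result through pmat bakes the "front view" in first, which matches the flat-render-rotated look described.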

    Read the article

  • How do I get FEATURE_LEVEL_9_3 to work with shaders in Direct3D11?

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11-compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the vertex/pixel shader files themselves being incompatible with the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below:

    Vertex shader:

        cbuffer MatrixBuffer {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        struct VertexInputType {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelInputType {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        PixelInputType LightVertexShader(VertexInputType input) {
            PixelInputType output;
            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;
            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);
            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;
            // Calculate the normal vector against the world matrix only.
            output.normal = mul(input.normal, (float3x3)worldMatrix);
            // Normalize the normal vector.
            output.normal = normalize(output.normal);
            return output;
        }

    Pixel Shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        cbuffer LightBuffer {
            float4 ambientColor;
            float4 diffuseColor;
            float3 lightDirection;
            float padding;
        };

        struct PixelInputType {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        float4 LightPixelShader(PixelInputType input) : SV_TARGET {
            float4 textureColor;
            float3 lightDir;
            float lightIntensity;
            float4 color;

            // Sample the pixel color from the texture using the sampler at this texture coordinate location.
            textureColor = shaderTexture.Sample(SampleType, input.tex);
            // Set the default output color to the ambient light value for all pixels.
            color = ambientColor;
            // Invert the light direction for calculations.
            lightDir = -lightDirection;
            // Calculate the amount of light on this pixel.
            lightIntensity = saturate(dot(input.normal, lightDir));
            if (lightIntensity > 0.0f) {
                // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                color += (diffuseColor * lightIntensity);
            }
            // Saturate the final light color.
            color = saturate(color);
            // Multiply the texture pixel and the final diffuse color to get the final pixel color result.
            color = color * textureColor;
            return color;
        }
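
    One detail worth checking: under feature level 9_x, Direct3D 11 still compiles through the HLSL 4.0 compiler, but it expects the dedicated downlevel profiles rather than the D3D9-era "vs_3_0"/"ps_2_0" targets, which is consistent with every version string below 4_0 failing. A sketch of the compile call; the file name is illustrative, the entry points follow the question's shaders:

        // C++ sketch, assuming the D3DX11 headers/libs the tutorials use
        ID3D10Blob* vsBlob = nullptr;
        ID3D10Blob* errors = nullptr;
        HRESULT hr = D3DX11CompileFromFile(
            L"light.vs", nullptr, nullptr,
            "LightVertexShader",
            "vs_4_0_level_9_3",   // downlevel profile for D3D_FEATURE_LEVEL_9_3
            0, 0, nullptr, &vsBlob, &errors, nullptr);
        // the pixel shader compiles the same way with "ps_4_0_level_9_3"
        // (use vs/ps_4_0_level_9_1 for feature levels 9_1 and 9_2)

    If the shaders then still fail to compile, the 9_x shader-model limits (instruction counts and similar restrictions) would be the next thing to check; the error blob's contents usually say which construct is the problem.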

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the 'camera is looking' and not the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudocode for my rendering loop:

        pMatrix = new Matrix();
        pMatrix = makePerspective(...)

        mvMatrix = new Matrix()
        camera.apply(mvMatrix); // Calls gluLookAt

        // Move the object into position.
        mvMatrix.translatev(position);
        mvMatrix.rotatef(rotation.x, 1, 0, 0);
        mvMatrix.rotatef(rotation.y, 0, 1, 0);
        mvMatrix.rotatef(rotation.z, 0, 0, 1);

        var nMatrix = new Matrix();
        nMatrix.set(mvMatrix.get().getInverse().getTranspose());

        // Set vertex shader uniforms.
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
        gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));
        // ...
        gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    And the corresponding vertex shader:

        // Attributes
        attribute vec3 aVertexPosition;
        attribute vec4 aVertexColor;
        attribute vec3 aVertexNormal;

        // Uniforms
        uniform mat4 uMVMatrix;
        uniform mat4 uNMatrix;
        uniform mat4 uPMatrix;

        // Varyings
        varying vec4 vColor;

        // Constants
        const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
        const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);

        float ComputeLighting() {
            vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
            transformedNormal = uNMatrix * transformedNormal;
            float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
            return max(base, 0.0);
        }

        void main(void) {
            gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
            float lightWeight = ComputeLighting();
            vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
        }

    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that in WebGL if it's missing would be appreciated.
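
    The usual resolution is to keep the normal and the light direction in the same space: the normal matrix above maps normals into eye space, so a fixed world-space light must be rotated into eye space too (by the view matrix, without the per-object model transform). A host-side C++ sketch of that bookkeeping, using GLM as an assumed stand-in for the question's hand-rolled Matrix class:

        #include <glm/glm.hpp>

        // view mirrors camera.apply/gluLookAt; model is the per-object transform.
        void computeLightingUniforms(const glm::mat4& view, const glm::mat4& model) {
            // normals into eye space: inverse-transpose of the full model-view
            glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(view * model)));

            // the light direction is fixed in *world* space...
            glm::vec3 lightDirWorld(0.0f, 1.0f, 0.0f);
            // ...so express it in eye space with the view rotation alone
            glm::vec3 lightDirEye = glm::normalize(glm::mat3(view) * lightDirWorld);

            // upload normalMatrix and lightDirEye as uniforms; in the shader,
            // dot(normal, lightDirEye) is then independent of where the camera looks.
            (void)normalMatrix; (void)lightDirEye;
        }

    With LIGHT_DIRECTION replaced by that uniform, the ground stays lit regardless of camera pitch, while object rotations still affect the normals correctly.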

    Read the article

  • Improving the running time of Breadth First Search and Adjacency List creation

    - by user45957
    We are given an array of integers where all elements are between 0-9. We have to start from the 1st position and reach the end in the minimum number of moves, where from an index i we can move one position back or forward (i.e. to i-1 and i+1) or jump to any index having the same value as index i. Time Limit: 1 second. Max input size: 100000. I have tried to solve this problem using a single-source shortest path approach with Breadth First Search, and though BFS itself is O(V+E) and runs in time, the adjacency list creation takes O(n^2) time and therefore the overall complexity becomes O(n^2). Is there any way I can decrease the time complexity of the adjacency list creation? Or is there a better and more efficient way of solving the problem?

        int main() {
            vector<int> v;
            string str;
            vector<int> sets[10];
            cin >> str;
            int in;
            for (int i = 0; i < str.length(); i++) {
                in = str[i] - '0';
                v.push_back(in);
                sets[in].push_back(i);
            }
            int n = v.size();
            if (n == 1) { cout << "0\n"; return 0; }
            if (v[0] == v[n-1]) { cout << "1\n"; return 0; }
            vector<int> adj[100001];
            for (int i = 0; i < 10; i++) {
                for (int j = 0; j < sets[i].size(); j++) {
                    if (sets[i][j] > 0) adj[sets[i][j]].push_back(sets[i][j] - 1);
                    if (sets[i][j] < n-1) adj[sets[i][j]].push_back(sets[i][j] + 1);
                    for (int k = j+1; k < sets[i].size(); k++) {
                        if (abs(sets[i][j] - sets[i][k]) != 1) {
                            adj[sets[i][j]].push_back(sets[i][k]);
                            adj[sets[i][k]].push_back(sets[i][j]);
                        }
                    }
                }
            }
            queue<int> q;
            q.push(0);
            int dist[100001];
            bool visited[100001] = {false};
            dist[0] = 0;
            visited[0] = true;
            int c = 0;
            while (!q.empty()) {
                int dq = q.front();
                q.pop();
                c++;
                for (int i = 0; i < adj[dq].size(); i++) {
                    if (visited[adj[dq][i]] == false) {
                        dist[adj[dq][i]] = dist[dq] + 1;
                        visited[adj[dq][i]] = true;
                        q.push(adj[dq][i]);
                    }
                }
            }
            cout << dist[n-1] << "\n";
            return 0;
        }
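
    The adjacency list is the bottleneck because the same-digit indices form cliques of up to n vertices. The standard trick is to never materialise those edges: keep one bucket per digit and empty it after its first expansion, so each bucket is scanned once and the whole search is O(n). A sketch:

        #include <bits/stdc++.h>
        using namespace std;

        int minJumps(const string& s) {
            int n = s.size();
            vector<vector<int>> buckets(10);
            for (int i = 0; i < n; ++i) buckets[s[i] - '0'].push_back(i);
            vector<int> dist(n, -1);
            queue<int> q;
            dist[0] = 0;
            q.push(0);
            while (!q.empty()) {
                int u = q.front(); q.pop();
                if (u == n - 1) return dist[u];
                int d = s[u] - '0';
                for (int v : buckets[d])                      // all same-digit jumps
                    if (dist[v] < 0) { dist[v] = dist[u] + 1; q.push(v); }
                buckets[d].clear();                           // the key line: visit once
                for (int v : {u - 1, u + 1})                  // step moves
                    if (v >= 0 && v < n && dist[v] < 0) { dist[v] = dist[u] + 1; q.push(v); }
            }
            return dist[n - 1];
        }

    Clearing the bucket is safe because after one expansion every index with that digit is already enqueued at its shortest distance; on the 100000-size input this comfortably fits the one-second limit.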

    Read the article

  • Collision detection via adjacent tiles - sprite too big

    - by BlackMamba
    I have managed to create a collision detection system for my tile-based jump'n'run game (written in C++/SFML), where I check on each update what values the surrounding tiles of the player contain and then let the player move accordingly (i.e. move left when there is an obstacle on the right side). This works fine when the player sprite is not too big: given a tile size of 5x5 pixels, my solution worked quite fine with sprite sizes of 3x4 and 5x5 pixels. My problem is that I actually need the player to be quite gigantic (34x70 pixels given the same tile size). When I try this, there seems to be an invisible, notably smaller bounding box where the player collides with obstacles, and the player also seems to shake strongly. Here some images to explain what I mean:

    Works: http://tinypic.com/r/207lvfr/8
    Doesn't work: http://tinypic.com/r/2yuk02q/8
    Another example of non-functioning: http://tinypic.com/r/kexbwl/8 (the player isn't falling, he stays there in the corner)

    My code for getting the surrounding tiles looks like this (I removed some parts to make it more readable):

        std::vector<std::map<std::string, int> > Game::getSurroundingTiles(sf::Vector2f position)
        {
            // converting the pixel coordinates to tilemap coordinates
            sf::Vector2u pPos(static_cast<int>(position.x / tileSize.x), static_cast<int>(position.y / tileSize.y));
            std::vector<std::map<std::string, int> > surroundingTiles;
            for (int i = 0; i < 9; ++i)
            {
                // calculating the relative position of the surrounding tile(s)
                int c = i % 3;
                int r = static_cast<int>(i / 3);
                // we subtract 1 to place the player in the middle of the 3x3 grid
                sf::Vector2u tilePos(pPos.x + (c - 1), pPos.y + (r - 1));
                // this tells us what kind of block this tile is
                int tGid = levelMap[tilePos.y][tilePos.x];
                // converts the coords from tile to world coords
                sf::Vector2u tileRect(tilePos.x * 5, tilePos.y * 5);
                // storing all the information
                std::map<std::string, int> tileDict;
                tileDict.insert(std::make_pair("gid", tGid));
                tileDict.insert(std::make_pair("x", tileRect.x));
                tileDict.insert(std::make_pair("y", tileRect.y));
                // adding the stored information to our vector
                surroundingTiles.push_back(tileDict);
            }
            // I organise the map so that it is arranged like the following:
            /*
             * 4 | 1 | 5
             * -- -- --
             * 2 | / | 3
             * -- -- --
             * 6 | 0 | 7
             */
            return surroundingTiles;
        }

    I then check, in a loop through the surrounding tiles, if there is a 1 as gid (indicates obstacle) and then check for intersections with that adjacent tile. The problem I just can't overcome is that I think I need to store the values of all the adjacent tiles and then check for them. How? And may there be a better solution? Any help is appreciated.

    P.S.: My implementation derives from this blog entry, I mostly just translated it from Objective-C/Cocos2d.
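
    For sprites larger than a tile, one option is to derive the candidate tiles from the sprite's bounding box instead of a fixed 3x3 window around the player's position, so a 34x70 sprite checks every tile it actually overlaps. A sketch using SFML types; the callback hook is illustrative:

        #include <SFML/Graphics.hpp>
        #include <cmath>

        const int TILE = 5; // tile size from the question

        // calls hitTile(tx, ty) for every tile the bounds overlap
        template <typename F>
        void forEachOverlappedTile(const sf::FloatRect& bounds, F hitTile) {
            int left   = static_cast<int>(std::floor(bounds.left / TILE));
            int top    = static_cast<int>(std::floor(bounds.top / TILE));
            // -1 keeps an exactly tile-aligned right/bottom edge out of the next tile
            int right  = static_cast<int>(std::floor((bounds.left + bounds.width  - 1) / TILE));
            int bottom = static_cast<int>(std::floor((bounds.top  + bounds.height - 1) / TILE));
            for (int ty = top; ty <= bottom; ++ty)
                for (int tx = left; tx <= right; ++tx)
                    hitTile(tx, ty);
        }

    The shaking usually follows from the same root cause: only part of the sprite is tested, so the response pushes it back and forth between frames. Resolving against every overlapped solid tile (typically one axis at a time) tends to remove both symptoms.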

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shading in my WebGL application. I know that there are many more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture): glitchy shadow mapping <<<

    So for now, this is how I understand the usage of cubemaps in shadow mapping:

    1. Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every usage of framebufferTexture2D slows down execution, which is nicely described here <<<) and a texture cubemap. Also, in WebGL depth components are not well supported, so I need to render to RGBA first.

        this.texture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
        for (var face = 0; face < 6; face++)
            gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);
        this.framebuffer = [];
        for (face = 0; face < 6; face++) {
            this.framebuffer[face] = gl.createFramebuffer();
            gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
            gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
            gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer);
            var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
            if (e !== gl.FRAMEBUFFER_COMPLETE)
                throw "Cubemap framebuffer object is incomplete: " + e.toString();
        }

    2. Set up the light and the camera (I'm not sure if I should store all 6 view matrices and send them to the shaders later, or if there is a way to do it with just one view matrix).

    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

        for (var face = 0; face < 6; face++) {
            gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
            gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
            gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
            camera.lookAt(light.position.add(cubeMapDirections[face]));
            scene.draw(shadow.program);
        }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrix. Now I don't know if I should calculate 6 of them, because of the 6 faces of a cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region.

        vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In a fragment shader, calculate the distance between the current vertex and the light position and check if it's deeper than the depth information read from the earlier rendered shadow map. I know how to do it with a 2D texture, but I have no idea how I should use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?

    For now, my code for this part looks like this (not working yet):

        float shadow = 1.0;
        vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
        depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
        float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
        if (depth.z > shadowDepth)
            shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?

    Read the article

  • Direct3D - Zooming into Mouse Position

    - by roohan
    I'm trying to implement my camera class for a simulation, but I can't figure out how to zoom into my world based on the mouse position. I mean the object under the mouse cursor should remain at the same screen position. My zooming looks like this:

        VOID ZoomIn(D3DXMATRIX& WorldMatrix, FLOAT const& MouseX, FLOAT const& MouseY)
        {
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);
        }

    I passed the world matrix to the function because I had the idea of moving my drawing origin according to the mouse position. But I can't find out how to calculate the offset needed to move my drawing origin. Anyone got an idea how to calculate this? Thanks in advance.

    SOLVED: OK, I solved my problem. Here is the code if anyone is interested:

        VOID CAMERA2D::ZoomIn(FLOAT const& MouseX, FLOAT const& MouseY)
        {
            // Get the setting of the current view port.
            D3DVIEWPORT9 ViewPort;
            this->Direct3DDevice->GetViewport(&ViewPort);

            // Convert the screen coordinates of the mouse to world space coordinates.
            D3DXVECTOR3 VectorOne;
            D3DXVECTOR3 VectorTwo;
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort, &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort, &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ = 0.0f;
            float WorldX = ((WorldZ - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY = ((WorldZ - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Move the camera into the screen.
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);

            // Calculate the world space vector again based on the new view matrix.
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort, &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort, &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ2 = 0.0f;
            float WorldX2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Create a temporary translation matrix for calculating the origin offset.
            D3DXMATRIX TranslationMatrix;
            D3DXMatrixIdentity(&TranslationMatrix);

            // Calculate the origin offset.
            D3DXMatrixTranslation(&TranslationMatrix, WorldX2 - WorldX, WorldY2 - WorldY, 0.0f);

            // Add the offset to the camera's world matrix.
            this->WorldMatrix = this->WorldMatrix * TranslationMatrix;
        }

    Maybe someone has an even better solution than mine.

    Read the article

  • Is my class structure good enough?

    - by Rivten
    So I wanted to try out this challenge on reddit, which is mostly about how you structure your data the best you can. I decided to challenge my C++ skills. Here's how I planned this.

    First, there's the Game class. It deals with time and is the only class main has access to. A game has a Forest. For now, this class does not have a lot of things, only a size and a Factory. It will be put to better use when it comes to SDL-stuff, I guess. A Factory is the thing that deals with the Game Objects (a.k.a. Trees, Lumberjacks and Bears). It has a vector of all GameObjects and a queue of Events which will be managed at the end of one month. A GameObject is an abstract class which can be updated and which can notify the Event Listener. The EventListener is a class which handles all the Events of a simulation. It can receive events from a GameObject and notify the Factory if needed; the latter will manage the event correctly.

    So, the Tree, Lumberjack and Bear classes all inherit from GameObject. And Sapling and Elder Tree inherit from Tree. Finally, an Event is defined by an event_type enumeration (LUMBERJACK_MAWED, SAPPLING_EVOLUTION, ...) and an event_protagonists union (a GameObject or a pair of GameObjects (who killed whom?)).

    I was quite happy at first with this because it seems quite logical and flexible. But I ended up questioning this structure. Here's why:

    I dislike the fact that a GameObject needs to know about the Factory. Indeed, when a Bear moves somewhere, it needs to know if there's a Lumberjack! Or it is the Factory which handles places and objects. It would be great if a GameObject could only interact with the EventListener... or maybe it's not that big of a deal.

    Wouldn't it be better if I separated the Factory into three vectors, one for each kind of GameObject? The idea would be to optimize searching. If I'm looking to delete a dead lumberjack, I would only have to look in one shorter vector rather than a very long one. Another problem arises when I want to know if there is any particular object in a given cell, because I have to look through all the gameObjects and see if they are at the given cell. I would tend to think that the other idea would be to use a matrix, but then the issue would be that I would have empty cells (and therefore unused space).

    I don't really know if Sapling and Elder Tree should inherit from Tree. Indeed, a Sapling is a Tree, but what about its evolution? Should I just delete the sapling and tell the factory to create a new Tree at the exact same place? It doesn't seem natural to me to do so. How could I improve this?

    Is the design of an Event good enough? I've never used unions before in C++, but I didn't have any other ideas about what to use.

    Well, I hope I have been clear enough. Thank you for taking the time to help me!
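
    On the union question: a tagged struct built on std::variant (C++17) usually ends up safer than enum + raw union, because the active member is tracked for you and non-trivial types are allowed. A sketch with illustrative names:

        #include <variant>

        struct GameObject; // forward declaration

        enum class EventType { LumberjackMawed, SaplingEvolution /* ... */ };

        struct SingleActor { GameObject* who; };
        struct ActorPair   { GameObject* killer; GameObject* victim; };

        struct Event {
            EventType type;
            std::variant<SingleActor, ActorPair> protagonists;
        };

        // usage:
        //   Event e{EventType::LumberjackMawed, ActorPair{bear, lumberjack}};
        //   if (auto* p = std::get_if<ActorPair>(&e.protagonists)) { /* who killed whom */ }

    With a raw union you would have to remember which member is active based on the enum; std::get_if makes the mismatch a nullptr instead of undefined behaviour, which matters exactly in the "who killed whom" case described above.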

    Read the article

  • Android Canvas Coordinate System

    - by Mitch
    I'm trying to find information on how to change the coordinate system for the canvas. I have some vector data I'd like to draw to a canvas using things like circles and lines, but the data's coordinate system doesn't match the canvas coordinate system. Is there a way to map the units I'm using to the screen's units? I'm drawing to an ImageView which isn't taking up the entire display. If I have to do my own calculations prior to each drawing call, how do I find the width and height of my ImageView? The getWidth() and getHeight() calls I tried seem to be returning the entire canvas size and not the size of the ImageView, which isn't helpful. I see some matrix stuff; is that something that will work for me? I tried to use "public void scale(float sx, float sy)", but that works more like a pixel-level zoom than a vector scale function, expanding each pixel. This means that if the dimensions are increased to fit the screen, the line thickness is also increased.

    Update: After some research I'm starting to think there's no way to change the coordinate system to something else. I'll need to map all my coordinates to the screen's pixel coordinates, and do so by transforming each vector myself. The getWidth() and getHeight() calls seem to be working better for me now. I can't say what was wrong, but I suspect I can't use these methods inside the constructor.
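
    The manual mapping described in the update comes down to one affine transform per point, and transforming the points themselves (instead of scaling the canvas) is also what keeps stroke widths constant. The math is language-agnostic; here is a small C++ sketch of it, with the world bounds and view size as assumed inputs rather than anything from the Android API:

        struct Point { float x, y; };

        // Map a point from an arbitrary world coordinate system into view pixels.
        Point worldToScreen(Point p,
                            float worldLeft, float worldTop,
                            float worldWidth, float worldHeight,
                            float viewWidth, float viewHeight)
        {
            float sx = (p.x - worldLeft) / worldWidth * viewWidth;
            float sy = (p.y - worldTop) / worldHeight * viewHeight;  // flip here if the world y-axis points up
            return { sx, sy };
        }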

    Read the article

  • boost.serialization and lazy initialization

    - by niXman
    I need to serialize a directory tree. I have no trouble with this type:

        std::map<
            std::string,              // path name
            std::vector<std::string>  // file names in the path
        > tree;

    But to serialize the directory tree together with its contents I need another type:

        std::map<
            std::string,                  // path name
            std::vector<                  // files array
                std::pair<
                    std::string,          // file name
                    std::vector<          // array of file pieces
                        std::pair<        // <<< for this I need lazy initialization
                            std::string,      // piece buffer
                            boost::uint32_t   // CRC32 sum of the piece
                        >
                    >
                >
            >
        > tree;

    How can I serialize an object of type std::pair at the moment of its serialization?
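
    One common way to get that kind of per-piece laziness out of boost.serialization is to wrap the piece in a small class and split serialize() into save()/load(), so the expensive part (computing the CRC) only runs inside save(), i.e. at the moment of serialization. A hedged sketch: the split-member macro and boost::crc_32_type are real Boost facilities, but the Piece layout is my assumption, not the poster's code:

        #include <string>
        #include <boost/cstdint.hpp>
        #include <boost/crc.hpp>
        #include <boost/serialization/access.hpp>
        #include <boost/serialization/split_member.hpp>
        #include <boost/serialization/string.hpp>

        class Piece {
            friend class boost::serialization::access;

            std::string buf_;              // piece buffer
            mutable boost::uint32_t crc_;  // filled in lazily

            template <class Archive>
            void save(Archive& ar, const unsigned int /*version*/) const {
                boost::crc_32_type crc;    // computed only when actually serialized
                crc.process_bytes(buf_.data(), buf_.size());
                crc_ = crc.checksum();
                ar & buf_;
                ar & crc_;
            }

            template <class Archive>
            void load(Archive& ar, const unsigned int /*version*/) {
                ar & buf_;
                ar & crc_;
            }

            BOOST_SERIALIZATION_SPLIT_MEMBER()

        public:
            explicit Piece(const std::string& buf = std::string()) : buf_(buf), crc_(0) {}
        };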

    Read the article

  • How to draw an Arc in OpenGL

    - by rpgFANATIC
    While making a little Pong game in C++ OpenGL, I decided it'd be fun to create arcs (semi-circles) when stuff bounces. I decided to skip Bezier curves for the moment and just go with straight algebra, but I didn't get far. My algebra follows a simple quadratic function (y = ±sqrt(mx + c)). This little excerpt is just an example I've yet to fully parameterize; I just wanted to see how it would look. When I draw this, however, it gives me a straight vertical line where the line's tangent approaches -1.0 / 1.0. Is this a limitation of the GL_LINE_STRIP style, or is there an easier way to draw semi-circles / arcs? Or did I just completely miss something obvious?

        void Ball::drawBounce()
        {
            float piecesToDraw = 100.0f;
            float arcWidth = 10.0f;
            float arcAngle = 4.0f;

            glBegin(GL_LINE_STRIP);
            for (float i = 0.0f; i < piecesToDraw; i += 1.0f)  // positive half
            {
                float currentX = (i / piecesToDraw) * arcWidth;
                glVertex2f(currentX, sqrtf((-currentX * arcAngle) + arcWidth));
            }
            for (float j = piecesToDraw; j > 0.0f; j -= 1.0f)  // negative half (go backwards in X direction now)
            {
                float currentX = (j / piecesToDraw) * arcWidth;
                glVertex2f(currentX, -sqrtf((-currentX * arcAngle) + arcWidth));
            }
            glEnd();
        }

    Thanks in advance.
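
    GL_LINE_STRIP itself is not the limitation; sampling y as a function of x is, because the samples bunch up and the strip jumps vertically where the curve's slope blows up. The usual approach is to parameterize by angle instead, which spaces the vertices evenly along the arc. A sketch in the same immediate-mode style as the post, with the center, radius and angle range as assumed parameters:

        #include <cmath>

        // Draw an arc from startAngle to endAngle (radians) around (cx, cy).
        // Uniform steps in angle give roughly uniform segment lengths, so
        // there is no degenerate vertical-tangent region.
        void drawArc(float cx, float cy, float radius,
                     float startAngle, float endAngle, int segments = 100)
        {
            glBegin(GL_LINE_STRIP);
            for (int i = 0; i <= segments; ++i)
            {
                float t = startAngle + (endAngle - startAngle) * i / segments;
                glVertex2f(cx + radius * std::cos(t), cy + radius * std::sin(t));
            }
            glEnd();
        }

    For a semicircle, drawArc(x, y, r, 0.0f, 3.14159f) covers half the circle; a bounce arc just uses a different angle range.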

    Read the article

  • Context migration in CUDA.NET

    - by Vyacheslav
    I'm currently using the CUDA.NET library by GASS. I need to initialize CUDA arrays (actually CUBLAS vectors, but it doesn't matter) in one CPU thread and use them in another CPU thread. But a CUDA context, which holds all initialized arrays and loaded functions, can be attached to only one CPU thread at a time. There is a mechanism called the context migration API to detach a context from one thread and attach it to another, but I don't know how to use it properly in CUDA.NET. I tried something like this:

        class Program
        {
            private static float[] vector1, vector2;
            private static CUDA cuda;
            private static CUBLAS cublas;
            private static CUdeviceptr ptr;

            static void Main(string[] args)
            {
                cuda = new CUDA(false);
                cublas = new CUBLAS(cuda);
                cuda.Init();
                cuda.CreateContext(0);
                AllocateVectors();
                cuda.DetachContext();
                CUcontext context = cuda.PopCurrentContext();
                GetVectorFromDeviceAsync(context);
            }

            private static void AllocateVectors()
            {
                vector1 = new float[] { 1f, 2f, 3f, 4f, 5f };
                ptr = cublas.Allocate(vector1.Length, sizeof(float));
                cublas.SetVector(vector1, ptr);
                vector2 = new float[5];
            }

            private static void GetVectorFromDevice(object objContext)
            {
                CUcontext localContext = (CUcontext)objContext;
                cuda.PushCurrentContext(localContext);
                cuda.AttachContext(localContext);

                // change the vector somehow
                vector1[0] = -1;

                // copy the changed vector to the device, then read it back
                cublas.SetVector(vector1, ptr);
                cublas.GetVector(ptr, vector2);

                CUDADriver.cuCtxPopCurrent(ref localContext);
            }

            private static void GetVectorFromDeviceAsync(CUcontext cUcontext)
            {
                Thread thread = new Thread(GetVectorFromDevice);
                thread.IsBackground = false;
                thread.Start(cUcontext);
            }
        }

    But execution fails on the attempt to copy the changed vector to the device, presumably because the context is not attached. Any ideas how I can get this to work?
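
    Underneath any wrapper, context migration in the driver API is the floating-context pattern: the owning thread pops the context so it belongs to no thread, and the borrowing thread pushes it before making CUDA calls and pops it again when done. A sketch against the raw driver API in C++ (error handling elided); whether CUDA.NET's DetachContext/PopCurrentContext/PushCurrentContext map one-to-one onto these calls is an assumption worth checking against its docs:

        #include <cuda.h>

        // Thread A: create the context, then float it for another thread to take.
        CUcontext makeFloatingContext(CUdevice dev)
        {
            CUcontext ctx;
            cuCtxCreate(&ctx, 0, dev);  // context is current on this thread now
            cuCtxPopCurrent(&ctx);      // detach: no thread owns it anymore
            return ctx;                 // hand the handle to the worker thread
        }

        // Thread B: attach, do the work, detach again.
        void useContext(CUcontext ctx, CUdeviceptr devPtr, float* host, size_t bytes)
        {
            cuCtxPushCurrent(ctx);              // make it current on this thread
            cuMemcpyDtoH(host, devPtr, bytes);  // any CUDA work happens here
            cuCtxPopCurrent(&ctx);              // release it for other threads
        }

    Compared with this pattern, the posted worker thread both pushes and attaches the same context; it may be worth checking whether that extra AttachContext call leaves the context's usage count unbalanced.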

    Read the article

  • Ruby on Rails: How to sanitize a string for SQL when not using find?

    - by williamjones
    I'm trying to sanitize a string that involves user input without having to resort to manually crafting my own possibly buggy regex if possible; however, if that is the only way, I would also appreciate it if anyone could point me toward a regex that is unlikely to be missing anything. There are a number of methods in Rails that allow you to enter native SQL commands; how do people escape user input for those? The question I'm asking is a broad one, but in my particular case, I'm working with a column in my Postgres database that Rails does not natively understand as far as I know, the tsvector, which holds plain-text search information. Rails is able to write and read from it as if it were a string; however, unlike a string, it doesn't seem to be automatically escaped when I do things like vector= inside the model. For example, when I do model.name='::', where name is a string, it works fine. When I do model.vector='::' it errors out:

        ActiveRecord::StatementInvalid: PGError: ERROR: syntax error in tsvector: "::"
        "vectors" = E'::' WHERE "id" = 1

    This seems to be a problem caused by the lack of escaping of the colons, and I can manually set vector='\:\:' fine. I also had the bright idea that maybe I could just call something like:

        ActiveRecord::Base.connection.execute "UPDATE medias SET vectors = ? WHERE id = 1", "::"

    However, this syntax doesn't work, because the raw SQL commands don't have access to find's method of escaping and inputting strings via the ? mark. This strikes me as the same problem as calling connection.execute with any type of user input, as it all boils down to sanitizing the strings, but I can't seem to find any way to manually call Rails' SQL string sanitization methods. Can anyone provide any advice?

    Read the article

  • Why is the compiler not selecting my function-template overload in the following example?

    - by Steve Guidi
    Given the following function templates:

        #include <vector>
        #include <utility>

        struct Base { };
        struct Derived : Base { };

        // #1
        template <typename T1, typename T2>
        void f(const T1& a, const T2& b) { }

        // #2
        template <typename T1, typename T2>
        void f(const std::vector<std::pair<T1, T2> >& v, Base* p) { }

    Why is it that the following code always invokes overload #1 instead of overload #2?

        int main()
        {
            std::vector<std::pair<int, int> > v;
            Derived derived;
            f(100, 200);     // clearly calls overload #1
            f(v, &derived);  // always calls overload #1
        }

    Given that the second argument of f is a pointer to a type derived from Base, I was hoping that the compiler would choose overload #2, as it is a better match than the generic type in overload #1. Are there any techniques I could use to rewrite these functions so that the user can write code as displayed in the main function (i.e., leveraging compiler deduction of argument types)?
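
    For what it's worth, this looks like plain overload resolution rather than a compiler quirk: with T2 deduced as Derived*, overload #1 matches the second argument exactly, while overload #2 needs a Derived*-to-Base* pointer conversion, and the exact match wins. One way to keep the match exact while still constraining the pointer is C++11 std::enable_if with std::is_base_of; a hedged sketch of that rewrite:

        #include <type_traits>
        #include <utility>
        #include <vector>

        struct Base { };
        struct Derived : Base { };

        // #1: the generic fallback.
        template <typename T1, typename T2>
        void f(const T1& a, const T2& b) { }

        // #2: deduce the pointee type P so the second argument matches exactly,
        // but only participate in overload resolution when P derives from Base.
        template <typename T1, typename T2, typename P,
                  typename std::enable_if<std::is_base_of<Base, P>::value, int>::type = 0>
        void f(const std::vector<std::pair<T1, T2> >& v, P* p) { }

        int main()
        {
            std::vector<std::pair<int, int> > v;
            Derived derived;
            f(100, 200);     // still calls #1
            f(v, &derived);  // now calls #2: both match exactly, and #2 is more specialized
        }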

    Read the article

  • Changing JButton background colour temporarily?

    - by sashep
    Hi, I'm super new to Java, and in need of some help. I'm making a little Java desktop application where I basically have a 2 x 2 grid of 4 JButtons, and I need the background colour of an individual JButton to change and, after one second, change back to the original colour (the game I'm trying to make is like Simon, where you have to follow a pattern of buttons that light up). I have a vector that contains randomly generated numbers in the range 1 to 4, and I want to be able to take each element from the vector and have the corresponding button change to a different colour for one second (for example, if the vector contains 2 4 1, I would want button 2 to change, then button 4, then button 1). Is this possible, or is there a better way to go about doing this with something other than JButtons? How do I implement this? Also, I'm running Mac OS X, which apparently (based on some things I've read on forums) doesn't like JButton backgrounds changing (I think it's because of the system look and feel); how can I change this so it works on a Mac? Thank you in advance for any help :)

    Read the article

  • [ActionScript 3] Array subclasses cannot be deserialized, Error #1034

    - by aaaidan
    I've just found a strange error when deserializing from a ByteArray, where Vectors cannot contain types that extend Array: there is a TypeError when they are deserialized.

        TypeError: Error #1034: Type Coercion failed: cannot convert []@4b8c42e1 to com.myapp.ArraySubclass.
            at flash.utils::ByteArray/readObject()
            at com.myapp::MyApplication()[/Users/aaaidan/MyApp/com/myapp/MyApplication.as:99]

    Here's how:

        public class Application extends Sprite
        {
            public function Application()
            {
                // register the custom class
                registerClassAlias("MyArraySubclass", MyArraySubclass);

                // write a vector containing an array subclass to a byte array
                var vec:Vector.<MyArraySubclass> = new Vector.<MyArraySubclass>();
                var arraySubclass:MyArraySubclass = new MyArraySubclass();
                arraySubclass.customProperty = "foo";
                vec.push(arraySubclass);

                var ba:ByteArray = new ByteArray();
                ba.writeObject(arraySubclass);
                ba.position = 0;

                // read it back
                var arraySubclass2:MyArraySubclass = ba.readObject() as MyArraySubclass; // throws TypeError
            }
        }

        public class MyArraySubclass extends Array
        {
            public var customProperty:String = "default";
        }

    It's a pretty specific case, but it seems very odd to me. Anyone have any ideas what's causing it, or how it could be fixed?

    Read the article
