Search Results

Search found 19855 results on 795 pages for 'game console'.


  • How do I build a matrix to translate one set of points to another?

    - by dotminic
    I've got 3 points in space that define a triangle. I've also got a vertex buffer made up of three vertices that also represent a triangle, which I will refer to as the "model". How can I find the matrix M that will transform the vertices in my buffer onto those 3 points in space? For example, let's say my three points A, B, C are at locations:

        A.x = 10, A.y = 16, A.z = 8
        B.x = 12, B.y = 11, B.z = 1
        C.x = 19, C.y = 12, C.z = 3

    Given these coordinates, how can I build a matrix that will translate and rotate my model so that both triangles occupy exactly the same space in the world? That is, I want the first vertex in my triangle model to have the same coordinates as A, the second to have the same coordinates as B, and the same for C. NB: I'm using instanced rendering, so I can't just give each vertex the same position as my 3 points; I have a set of three points defining a triangle, and only three vertices in my vertex buffer.
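
    A common way to build such a matrix is to construct a frame (basis) for each triangle and combine the target frame with the inverse of the source frame. Below is a minimal, language-agnostic sketch in Python/numpy, not engine code; using the triangle normal as the third axis is an assumption about how to fix the remaining rotational degree of freedom, and the result is a general affine map (it is a pure rotation plus translation only when the two triangles are congruent).

        import numpy as np

        def frame(p0, p1, p2):
            """4x4 matrix whose columns are two edges, the normal, and the origin p0."""
            e1, e2 = p1 - p0, p2 - p0
            n = np.cross(e1, e2)
            m = np.identity(4)
            m[:3, 0], m[:3, 1], m[:3, 2], m[:3, 3] = e1, e2, n, p0
            return m

        def triangle_to_triangle(m0, m1, m2, a, b, c):
            """Matrix M with M @ [m0,1] == [a,1], M @ [m1,1] == [b,1], M @ [m2,1] == [c,1]."""
            src = frame(*(np.asarray(p, float) for p in (m0, m1, m2)))
            dst = frame(*(np.asarray(p, float) for p in (a, b, c)))
            return dst @ np.linalg.inv(src)

        M = triangle_to_triangle((0, 0, 0), (1, 0, 0), (0, 1, 0),
                                 (10, 16, 8), (12, 11, 1), (19, 12, 3))
        print(M @ np.array([1, 0, 0, 1]))   # ~[12, 11, 1, 1], i.e. vertex B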

    Read the article

  • Vectors with Circles Physics in Java

    - by Joe Hearty
    This is a problem I've been having: when creating a set number of filled circles at random locations on a JPanel and applying gravity (a negative change in y), the circles collide with each other. I want them to have collision detection and push apart in opposite directions using vectors, but I don't know how to apply that to my scenario. Could someone help?

        public void drawballs(Graphics g) {
            g.setColor(Color.white);
            // displays circles
            for (int i = 0; i < xlocationofcircles.length - 1; i++) {
                g.fillOval((int) xlocationofcircles[i], (int) (ylocationofcircles[i]), 16, 16);
                ylocationofcircles[i] += .2;        // gravity
                if (ylocationofcircles[i] > 550)    // stops gravity at bottom of screen
                    ylocationofcircles[i] -= .2;

                // Check distance between circles (i think..)
                float distance = (xlocationofcircles[i+1] - xlocationofcircles[i])
                               + (ylocationofcircles[i+1] - xlocationofcircles[i]);
                if (Math.sqrt(distance) < 16)
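
    For the distance test, both the x and y differences need to be squared (the snippet above mixes y and x in the second term and sums unsquared differences). Here is a minimal, language-agnostic sketch in Python of the overlap test plus a simple positional push-apart along the line between centres; the radius of 8 comes from the 16-pixel ovals, and everything else (list names, the half-and-half split) is assumed:

        import math

        def resolve_circle_overlap(x, y, i, j, radius=8.0):
            dx = x[j] - x[i]
            dy = y[j] - y[i]                      # note: y minus y, not y minus x
            dist = math.hypot(dx, dy)             # sqrt(dx*dx + dy*dy)
            overlap = 2 * radius - dist
            if overlap > 0 and dist > 0:
                nx, ny = dx / dist, dy / dist     # unit vector from circle i to circle j
                # move each circle half the overlap in opposite directions
                x[i] -= nx * overlap / 2; y[i] -= ny * overlap / 2
                x[j] += nx * overlap / 2; y[j] += ny * overlap / 2

        xs, ys = [100.0, 110.0], [200.0, 202.0]
        resolve_circle_overlap(xs, ys, 0, 1)
        print(xs, ys)                             # the two circles no longer overlap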

    Read the article

  • CreateDXGIFactory Doesn't Let Program Exit

    - by smoth190
    I'm using CreateDXGIFactory to get the graphics adapters and display modes. When I call it, it works fine and I get all the data. However, when I exit my program, the main Win32 thread exits but something stays open, because the debugger keeps running. Does CreateDXGIFactory create an extra thread that I'm not closing? I don't understand. The only thing I can think of is that the documentation says it doesn't work if it's called from DllMain. My call is in a DLL, but it's not called from DllMain, and it doesn't fail, either. I'm using DirectX 11.

    Read the article

  • Adding gesture recognizer (or dragging) to CCSprite

    - by user339946
    I'm trying to allow a CCSprite to be dragged across the screen. I've succeeded so far by doing it at the layer level (from this tutorial). However, this only allows ONE sprite to be dragged at a time, as the method implementation can only identify a single sprite to move. I'd like to be able to add a gesture recognizer, or somehow implement ccTouchesBegan/Moved, in my own little CCSprite subclass. However, from what I understand you can't just add gesture recognizers to CCSprites, and ccTouchMoved isn't available on CCSprites either? I'm really confused as to how to implement touches in Cocos2D. What is the easiest way to add some position-translation code to a CCSprite so it can be dragged around? Thanks!
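
    A language-agnostic sketch of the usual per-sprite drag pattern: each sprite claims the touch that starts inside it, remembers the offset between the touch point and its own position, and applies that offset on every move. The class and method names below are illustrative, not Cocos2D API:

        class DraggableSprite:
            def __init__(self, x, y, w, h):
                self.x, self.y, self.w, self.h = x, y, w, h
                self.grab = None                      # offset while being dragged

            def contains(self, px, py):
                return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

            def touch_began(self, px, py):
                if self.contains(px, py):
                    self.grab = (px - self.x, py - self.y)
                    return True                       # claim this touch
                return False

            def touch_moved(self, px, py):
                if self.grab is not None:
                    self.x, self.y = px - self.grab[0], py - self.grab[1]

            def touch_ended(self):
                self.grab = None

        s = DraggableSprite(10, 10, 32, 32)
        if s.touch_began(20, 20):
            s.touch_moved(100, 100)
        print(s.x, s.y)    # 90 90: the sprite followed the touch, keeping its grab offset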

    Read the article

  • How do I repeatedly move an image by 1 pixel?

    - by Will
    I have a method that moves a UIImageView called shootImg across the screen:

        -(IBAction)shoot {
            if (appDelegate.shootInt > 0) {
                if (direction == 1) {
                    shootImg.center = CGPointMake(shootImg.center.x+1, shootImg.center.y);
                    appDelegate.shootInt = appDelegate.shootInt - 1;
                    shootLabel.text = [NSString stringWithFormat:@"%d", appDelegate.shootInt];
                }

    This does seem to work, but it only moves shootImg 1 pixel. What I want is for it to keep moving, 1 pixel at a time. I tried a while loop, but that didn't seem to work. I'm not using cocos2d or anything like that, and if you need to see more code just ask. Thanks :)

    Read the article

  • How do I make a dialog box? [on hold]

    - by bill
    By dialog box I mean: when the player talks to someone, a box shows up with text in it. I haven't found much about this topic online, so I created a basic dialog box:

        // in the dialog box I have only two methods
        public void createBox(int x, int y, int width, int height, String txt) {
            this.x = x;
            this.y = y;
            this.width = width;
            this.height = height;
            this.txt = txt;
        }

        // draw dialog box
        public void draw(Graphics2D g) {
            if (txt != null) {
                g.setColor(Color.red);
                g.drawRect(x, y, width, height);
                g.setColor(Color.black);
                g.fillRect(x, y, width, height);
                g.setColor(Color.white);
                g.drawString(txt, x + 10, y + 10);
            }
        }

    I wanted to know: how can I make this better?
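
    One common improvement is to wrap the text to the box width and draw it line by line instead of with a single drawString call. A rough Python sketch of just the layout step, using an assumed average character width; a real implementation would ask the font for its metrics:

        import textwrap

        def layout_dialog(txt, box_width, char_width=7, line_height=14, padding=10):
            # crude characters-per-line estimate from an assumed average glyph width
            chars_per_line = max(1, (box_width - 2 * padding) // char_width)
            lines = textwrap.wrap(txt, chars_per_line)
            # returns (text, y-offset) pairs to feed to the draw call
            return [(line, padding + i * line_height) for i, line in enumerate(lines)]

        for line, dy in layout_dialog("Hello adventurer, welcome to the village!", 160):
            print(dy, line)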

    Read the article

  • How to implement custom texture formats in Android?

    - by random1337
    What I know: Android can load PNG, BMP, WEBP, etc. via BitmapFactory.
    What I want to achieve: load my own 2D file format (e.g. a 1-bit texture with a 1-bit alpha channel) and output an RGBA8888 texture.
    Question: is there any interface to achieve this (or any other way)? The resulting image is used as a texture for a 3D model.
    Why would you do that? Saving phone memory and download bandwidth, while expanding the texture into RAM at runtime, seems reasonable for very simple textures.
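
    As far as I know BitmapFactory has no plug-in point for custom formats, but nothing stops you from decoding the file yourself and handing the expanded pixel buffer to OpenGL (or to Bitmap.createBitmap). A language-agnostic sketch in Python of the expansion step, with a hypothetical layout of one colour bit-plane followed by one alpha bit-plane, row-major, most significant bit first:

        def unpack_1bit_rgba(colour_bits, alpha_bits, width, height,
                             on=(255, 255, 255), off=(0, 0, 0)):
            """Expand two packed 1-bit planes into an RGBA8888 byte buffer."""
            out = bytearray(width * height * 4)
            for i in range(width * height):
                byte, bit = divmod(i, 8)
                c_on = (colour_bits[byte] >> (7 - bit)) & 1
                a_on = (alpha_bits[byte] >> (7 - bit)) & 1
                r, g, b = on if c_on else off
                out[i*4:i*4+4] = bytes((r, g, b, 255 if a_on else 0))
            return bytes(out)

        pixels = unpack_1bit_rgba(bytes([0b10100000]), bytes([0b11100000]), 4, 2)
        print(len(pixels))   # 32 bytes = 4x2 pixels * 4 channels, ready for glTexImage2D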

    Read the article

  • GL_INVALID_OPERATION in glEnd

    - by Killrazor
    Hello, I'm having problems drawing a simple sprite. When I draw:

        void CSprite2D::render()
        {
            CHECKGL(glLoadIdentity());
            CHECKGL(glEnable(GL_TEXTURE_2D));
            CHECKGL(glEnable(GL_BLEND));
            CHECKGL(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
            m_texture->bind();
            //CHECKGL(glPushMatrix());
            CHECKGL(glBegin(GL_TRIANGLE_STRIP));

            CHECKGL(glNormal3i(0,0,1));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaStart.t));  // 0,0 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y, 0));

            CHECKGL(glNormal3i(0,0,1));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaStart.t));    // 1,0 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y, 0));

            CHECKGL(glNormal3i(0,0,1));
            CHECKGL(glTexCoord2f(m_textureAreaEnd.s, m_textureAreaEnd.t));      // 1,1 by default
            CHECKGL(glVertex3i(m_position.x + m_dimensions.x, m_position.y + m_dimensions.y, 0));

            CHECKGL(glNormal3i(0,0,1));
            CHECKGL(glTexCoord2f(m_textureAreaStart.s, m_textureAreaEnd.t));    // 0,1 by default
            CHECKGL(glVertex3i(m_position.x, m_position.y + m_dimensions.y, 0));

            CHECKGL(glEnd());
            //CHECKGL(glPopMatrix());
            CHECKGL(glDisable(GL_BLEND));
        }

    I always get a GL_INVALID_OPERATION in glEnd(). I suspect the error is not here, but I can't detect where it may be. Actually, the rendered output looks OK, but I want to solve this now rather than chase a subtle bug tomorrow. Any idea of what could be wrong?

    Read the article

  • Triangles in a C++ STL Vector as an Objective-C member sometimes draw incorrectly in OpenGL ES

    - by Rahil627
    The polygons draw correctly 80% of the time. When it fails, a vertex is dislocated; the polygon is consistently drawn with the wrong vertex. I checked that the vector is correct during initialization, even when it's wrongly drawn. I'm using Cocos2d. The class member:

        @interface Polygon : CCSprite {
            std::vector<float> triangleVertices;
        }

    The draw function called in [Polygon draw]:

        + (void)drawTrianglesWithVertices:(const std::vector<float> &)v
        {
            //glEnableClientState(GL_VERTEX_ARRAY);
            glDisable(GL_TEXTURE_2D);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, 0, &v[0]);
            glDrawArrays(GL_TRIANGLES, 0, v.size());

            //glDisableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnable(GL_TEXTURE_2D);
        }

    Any ideas?

    Read the article

  • Should I draw directly on CCLayer or CCSprite?

    - by einverne
    Now I am a little confused in my cocos2d-x C++ project. I want to draw lines that follow the user's finger touches. Following is a screenshot of a CCScene: in the screen there are two squares. I want to show an animation in the first square and let the second one draw lines from the user's touches. Right now these two squares are CCSprites, and I can draw dots in the second one on the CCLayer. But I am a little confused whether I should draw the lines on the sprite or on the layer, or whether there are other ways to organize the code.

    Read the article

  • Keep Getting Syntax Error C2199?

    - by DARK3ZOOZ
    Here's my problem: I'm trying to define something, but I keep getting a syntax error. Code:

        #define R_RegisterShader 0x50C8A0

        int (*trap_R_RegisterShader)( const char *name, int Arg_1 ) =
            (int (_cdecl *)(const char *, int ))R_RegisterShader;
             ^^^^^^^

    This last part is where I keep getting the error. If you need more lines of code, just let me know. Thanks.
    http://gyazo.com/1a47ebc12cfbd6ea72feb72c686ae84d (screenshot of the error)

    Read the article

  • Translating multiple objects in GUI based on average position?

    - by user1423893
    I use this method to move a single object in 3D space; it accounts for a local offset based on where the cursor ray hits the widget relative to the widget's center.

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All:  goalPosition = position;     break;
            case AxisSelected.None:                              break;
            case AxisSelected.X:    goalPosition.X = position.X; break;
            case AxisSelected.Y:    goalPosition.Y = position.Y; break;
            case AxisSelected.Z:    goalPosition.Z = position.Z; break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;
        objectSelected.Position = p;

    I would like to move multiple objects on the same principle, using a widget located at the average position of all the objects currently selected. I thought I would have to translate each object based on its offset from the average point and then include the local offset:

        var cursorRay = cursor.Ray;
        Vector3 goalPosition = translationWidget.GoalPosition;
        Vector3 position = cursorRay.Origin + cursorRay.Direction * grabDistance;

        // Constrain object movement based on selected axis
        switch (translationWidget.AxisSelected)
        {
            case AxisSelected.All:  goalPosition = position;     break;
            case AxisSelected.None:                              break;
            case AxisSelected.X:    goalPosition.X = position.X; break;
            case AxisSelected.Y:    goalPosition.Y = position.Y; break;
            case AxisSelected.Z:    goalPosition.Z = position.Z; break;
        }

        translationWidget.GoalPosition = goalPosition;
        Vector3 p = goalPosition - translationWidget.LocalOffset;

        int numSelectedObjects = objectSelectedList.Count;
        for (int i = 0; i < numSelectedObjects; ++i)
        {
            objectSelectedList[i].Position = (objectSelectedList[i].Position - translationWidget.Position) + p;
        }

    This doesn't work: the objects start shaking, which I think is because I haven't accounted for the new offset correctly. Where have I gone wrong?
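
    The shaking usually comes from recomputing each object's offset against a widget position that is itself changing every frame. The standard fix is to capture the per-object offsets once, when the drag starts, and reuse them until the drag ends. A minimal sketch in Python; the names are illustrative, not the engine API:

        class MultiDrag:
            def __init__(self, widget_pos, object_positions):
                # offsets are captured once and never recomputed during the drag
                self.offsets = [(ox - widget_pos[0], oy - widget_pos[1], oz - widget_pos[2])
                                for ox, oy, oz in object_positions]

            def apply(self, widget_pos):
                # every object follows the widget while keeping its original offset
                return [(widget_pos[0] + dx, widget_pos[1] + dy, widget_pos[2] + dz)
                        for dx, dy, dz in self.offsets]

        drag = MultiDrag((1, 0, 0), [(2, 0, 0), (0, 3, 0)])
        print(drag.apply((5, 0, 0)))   # objects keep their relative layout as the widget moves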

    Read the article

  • Will you still play a good Red Alert 3 mission map? [closed]

    - by W.N.
    I've been creating an RA3 mission map (played in Skirmish), essentially a remake of the RA2 Yuri mission "To the Moon", with more interesting elements. However, because of my work, progress has been stalled for more than a year, and now I see that very few people still play RA3. So, should I continue making this map? There is still a lot of work left to complete it. I can assure you the mission will be interesting; however, if few people will play it, there's no need to waste time on it. Please give me some advice. Thank you.

    Read the article

  • Split Texture2D Across Line

    - by Simie
    Background: I'm using a texture as a damage map, represented by an array of floats, for displaying damage effects on a space ship. An upside of this is being able to 'slice' ships in half, using an input damage map with a line down the middle, as shown in the diagram below.
    Question: However, I'm encountering problems separating this split ship into two different entities. I would like to be able to split the image down the 'damage line', in a process similar to this: I don't have much idea where to start detecting the red line above. Any advice would be welcome.
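
    One way to start is to treat the damage line as a segment A-B and classify every texel by the sign of the 2D cross product: texels on one side go to the first entity's texture, the rest to the second. A small Python sketch of just that classification step (all names and the example line are assumptions):

        def side_of_line(px, py, ax, ay, bx, by):
            """> 0 on one side of line A->B, < 0 on the other, 0 on the line."""
            return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

        def split_mask(width, height, a, b):
            left, right = set(), set()
            for y in range(height):
                for x in range(width):
                    (left if side_of_line(x, y, *a, *b) > 0 else right).add((x, y))
            return left, right

        l, r = split_mask(4, 4, (2, -1), (2, 5))   # vertical cut at x == 2
        print(len(l), len(r))                      # 8 and 8 texels, one set per half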

    Read the article

  • Dropping multiple objects using an array in Actionscript?

    - by Eratosthenes
    I'm trying to get these fireBalls to drop more often; I'm not sure if I'm using Math.random correctly. Also, for some reason I'm getting a null reference, because I think the fireBalls array waits for one to leave the stage before dropping another one? This is the relevant code:

        var sun:Sun = new Sun;
        var fireBalls:Array = new Array();
        var left:Boolean;

        function onEnterFrame(event:Event) {
            if (left) {
                sun.x = sun.x - 15;
            } else {
                sun.x = sun.x + 15;
            }

            if (fireBalls.length > 0 && fireBalls[0].y > stage.stageHeight) {   // Fireballs exit stage
                removeChild(fireBalls[0]);
                fireBalls.shift();
            }

            for (var j:int = 0; j < fireBalls.length; j++) {
                fireBalls[j].y = fireBalls[j].y + 15;
                if (fireBalls[j].y > stage.stageHeight - fireBall.width/2) {
                }
            }

            if (Math.random() < .2) {   // Fireballs shooting from Sun
                var fireBall:FireBall = new FireBall;
                fireBall.x = sun.x;
                addChild(fireBall);
                fireBalls.push(fireBall);
            }
        }
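
    For reference, Math.random() < .2 fires on roughly one frame in five, so raising that threshold spawns fireballs more often. A language-agnostic Python sketch of the usual spawn-and-prune pattern: remove every off-stage fireball (not just the first), iterating backwards so removal is safe; all names are illustrative:

        import random

        STAGE_HEIGHT = 400
        fireballs = []          # each entry stands in for a fireball's y position

        def on_enter_frame():
            # move, then drop everything that has left the stage
            for i in reversed(range(len(fireballs))):
                fireballs[i] += 15
                if fireballs[i] > STAGE_HEIGHT:
                    del fireballs[i]
            # spawn a new fireball about once every 5 frames
            if random.random() < 0.2:
                fireballs.append(0)

        for _ in range(100):
            on_enter_frame()
        print(len(fireballs))   # a handful of fireballs currently falling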

    Read the article

  • sprite group doesn't support indexing

    - by user3956
    I have a sprite group created with pygame.sprite.Group() (and I add sprites to it with the add method). How would I retrieve the nth sprite in this group? Code like this does not work:

        mygroup = pygame.sprite.Group(mysprite01)
        print mygroup[n].rect

    It returns the error: group object does not support indexing. For the moment I'm using the following function:

        def getSpriteByPosition(position, group):
            for index, spr in enumerate(group):
                if index == position:
                    return spr
            return False

    Although it works, it just doesn't seem right... Is there a cleaner way to do this?
    EDIT: I mention a "position" and index, but it's not actually important; returning any single sprite from the group is enough.
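
    Group is not indexable, but it can hand you a plain list: Group.sprites() returns the contained sprites as a list you can index (the ordering still isn't a meaningful "position", so treat the result as "some sprite" rather than "the nth"). A minimal illustration:

        import pygame

        group = pygame.sprite.Group()
        for i in range(3):
            spr = pygame.sprite.Sprite()
            spr.rect = pygame.Rect(i * 10, 0, 16, 16)
            group.add(spr)

        nth = group.sprites()[1]        # sprites() is a plain list, so indexing works
        print(nth.rect)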

    Read the article

  • How do you blend multiple colors in HSV (polar) color-space?

    - by Toxikman
    In RGB color space, you can do a weighted multiple-color blend by just doing: start with R = G = B = 0, then perform the blend at index i, using a set of colors C and a set of normalized weights w, like so:

        R += w[i] * C[i].r
        G += w[i] * C[i].g
        B += w[i] * C[i].b

    But I'd like to interpolate the colors in the HSV color space instead, so that saturation and brightness are uniform across the interpolation. I know I can blend saturation and brightness in the same way as above, but the HUE component is an angle around a continuous circle, since HSV is essentially a polar coordinate system. Blending only two HSV colors makes sense to me: you just find the shortest arc around the circle and interpolate between the two hues. But when you attempt to blend more than 2 colors, it becomes a bit of a puzzle. You have to handle anomalous cases, like 4 equally-weighted colors with hues at 0, 90, 180, and 270 degrees; they basically cancel each other out, so any hue will do. Any ideas would be greatly appreciated.
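
    The usual trick for averaging more than two hues is to treat each hue as a unit vector on the colour circle, take the weighted sum, and read the resulting angle back with atan2; the length of that sum also tells you when the blend is degenerate. A small sketch in Python (pure math, no colour library), where a near-zero vector is reported as "no defined hue" and the caller decides the fallback:

        import math

        def blend_hues(hues_deg, weights):
            x = sum(w * math.cos(math.radians(h)) for h, w in zip(hues_deg, weights))
            y = sum(w * math.sin(math.radians(h)) for h, w in zip(hues_deg, weights))
            if math.hypot(x, y) < 1e-6:
                return None                       # hue is undefined; caller decides
            return math.degrees(math.atan2(y, x)) % 360

        print(blend_hues([350, 10], [0.5, 0.5]))        # ~0 (i.e. 360): blends the short way around
        print(blend_hues([0, 90, 180, 270], [0.25]*4))  # None: the vectors cancel out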

    Read the article

  • Loading a heightmap as a texture in a shader

    - by wtherapy
    I have a 256x256 height map containing, for each cell, not only the height as a plain float value (not 0-1) but also 2 gradient values (for X and Y), also as plain float values (not 0-1). I have uploaded the texture via normal texture loading:

        glEnable( GL_TEXTURE_2D );
        glGenTextures( 1, &m_uglID );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glBindTexture( GL_TEXTURE_2D, m_uglID );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB32F, unW + 1, unH + 1, 0, GL_RGB, GL_FLOAT, pvBytes );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_LINEAR);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_LINEAR);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        DEBUG_OUTPUT("Err %x\n", glGetError());

    As a parenthesis, the debug output is:

        Err 500
        Err 0
        Err 0
        Err 0
        Err 500
        Err 500
        Err 0
        Err 0

    pvBytes is a 256x256 array of:

        typedef struct _tGradientHeightCell {
            float v;
            float px;
            float py;
        } TGradientHeightCell, *LPTGradientHeightCell;

    Then:

        m_ugl_HeightMapTexture = glGetUniformLocation(m_uglProgram, "TexHeightMap");

    I load it via:

        glEnable(GL_TEXTURE_2D);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, pTexture->GetID());
        glUniform1i(m_ugl_HeightMapTexture, 0);

    In the shader, I just access it:

        uniform sampler2D TexHeightMap;

        vec4 GetVertCellParameters( uint i, uint j )
        {
            return texture( TexHeightMap, vec2( i, j ) );
        }

        vec4 vH00 = GetVertCellParameters( i, j );

    My problem is that when passing negative values in one of the fields of TGradientHeightCell (v, px, py), the texture is corrupted. I need the values to be passed exactly as I have them in memory. Any help appreciated.

    Read the article

  • Cocos2D: Change animation based on joystick direction

    - by Blade
    I'm trying to get my figure to look in the right direction based on the input of the joystick: if I tilt left, it looks left and the left animation is used; if I tilt right, it looks right and the right animation is used; if up, then up; if down, then down; and so on. I only get the animation for front and back. Also, if I press up I see the back of the figure correctly, but it won't go back into its original state when I stop pressing up.

        -(void)applyJoystick:(SneakyJoystick *)aJoystick forTimeDelta:(float)deltaTime
        {
            CGPoint scaledVelocity = ccpMult(aJoystick.velocity, 128.0f);
            CGPoint oldPosition = [self position];
            CGPoint newPosition = ccp(oldPosition.x + scaledVelocity.x * deltaTime,
                                      oldPosition.y + scaledVelocity.y * deltaTime);
            [self setPosition:newPosition];

            id action = nil;
            int extra = 50;
            if ((int) aJoystick.degrees > 180 - extra && aJoystick.degrees < 180 + extra) {
                action = [CCAnimate actionWithAnimation:walkingAnimLeft restoreOriginalFrame:NO];
            } else if ((int) aJoystick.degrees > 360 - extra && aJoystick.degrees < 360 + extra) {
                action = [CCAnimate actionWithAnimation:walkingAnimRight restoreOriginalFrame:NO];
            } else if ((int) aJoystick.degrees > 90 - extra && aJoystick.degrees < 90 + extra) {
                action = [CCAnimate actionWithAnimation:walkingAnimBack restoreOriginalFrame:NO];
            } else if ((int) aJoystick.degrees > 270 - extra && aJoystick.degrees < 270 + extra) {
                action = [CCAnimate actionWithAnimation:walkingAnimFront restoreOriginalFrame:NO];
            }

            if (action != nil) {
                [self runAction:action];
            }
        }

    Read the article

  • Using 2d collision with 3d objects

    - by Lyise
    I'm planning to write a fairly basic scrolling shoot 'em up; however, I have run into a question with regard to checking for collisions. I plan to have a fixed top-down view, where the player and enemies are all 3D objects on a fixed plane, and when the enemy or player fires at the other, their shots will also travel along this fixed plane. In order to handle the collision, I have read up a bit on collision detection in 3D, as it is not something I have looked into previously, but I'm not sure what would be ideal for this situation. My options appear to be:
      - Sphere collision; however, this lacks the pixel precision I would like.
      - Detection using all vertices and planes of each object, but this seems overly convoluted for a fixed plane of play.
      - Rendering the play screen in black and white (where white is an object and black is empty space), once for enemies and once for the player, and checking for collisions that way (if a pixel is white in both, there is a collision).
    Which of these would be the best approach, or is there another option I am missing? I have done this previously using 2D sprites, but I can't use the same thinking here, as I don't have an image to refer to.
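
    Since everything lives on one plane, the test itself can be plain 2D: project each object onto the plane, keep a simple 2D bounding shape (circle, box, or a small convex outline traced from the model's silhouette) per entity, and test those. A trivial sketch of the projected circle-vs-circle case in Python; which axis gets ignored, and all names, are assumptions:

        import math

        def hit_2d(a_pos, a_radius, b_pos, b_radius):
            ax, _, az = a_pos          # y is the fixed "height" axis and is ignored
            bx, _, bz = b_pos
            return math.hypot(bx - ax, bz - az) <= a_radius + b_radius

        print(hit_2d((0, 5, 0), 1.0, (1.5, 9, 0), 1.0))   # True: the shapes overlap in the XZ plane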

    Read the article

  • IndexOutOfRangeException on World.Step after enabling/disabling a Farseer physics body?

    - by WilHall
    Earlier, I posted a question asking how to swap fixtures on the fly in a 2D side-scroller using the Farseer Physics Engine. The ultimate goal is that the player's physical body changes when the player is in different states (i.e. standing, walking, jumping, etc.). After reading this answer, I changed my approach to the following:
      - Create a physical body for each state when the player is loaded.
      - Save those bodies and their corresponding states in parallel lists.
      - Swap those physical bodies out when the player state changes (which causes an exception, see below).
    The following is my function to change states and swap physical bodies:

        new protected void SetState(object nState)
        {
            // If mBody == null, the player is being loaded for the first time
            if (mBody == null)
            {
                mBody = mBodies[mStates.IndexOf(nState)];
                mBody.Enabled = true;
            }
            else
            {
                // Get the body for the given state
                Body nBody = mBodies[mStates.IndexOf(nState)];
                // Enable the new body
                nBody.Enabled = true;
                // Disable the current body
                mBody.Enabled = false;
                // Copy the current body's attributes to the new one
                nBody.SetTransform(mBody.Position, mBody.Rotation);
                nBody.LinearVelocity = mBody.LinearVelocity;
                nBody.AngularVelocity = mBody.AngularVelocity;
                mBody = nBody;
            }
            base.SetState(nState);
        }

    Using the above method causes an IndexOutOfRangeException when calling World.Step:

        mWorld.Step(Math.Min((float)nGameTime.ElapsedGameTime.TotalSeconds, (1f / 30f)));

    I found that the problem is related to changing the .Enabled setting on a body. I tried the above function without setting .Enabled, and no error was thrown. Turning on the debug views, I saw that the bodies were updating positions/rotations/etc. properly when the state changed, but since they were all enabled, they were just colliding wildly with each other. Does enabling/disabling a body remove it from the world's body list, which then causes the error because the list is shorter than expected?
    Update: For such a straightforward issue, I feel this question has not received enough attention. Has anyone else experienced this? Would anyone try a quick test case? I know this issue can be sidestepped (i.e. by not disabling a body during the simulation), but it seems strange that this issue would exist in the first place, especially when I see no mention of it in the documentation for Farseer or Box2D. I can't find any cases of the issue online where things are more or less kosher, like in my case. Any leads on this would be helpful.

    Read the article

  • Rectangular Raycasting?

    - by igrad
    If you've ever played The Swapper, you'll have a good idea of what I'm asking about. I need to check for, and isolate, areas of a rectangle that may intersect with either a circle or another rectangle. These selected areas will receive special properties, and the areas will be non-static, since the intersecting shapes themselves are also dynamic. My first thought was to use raycasting detection, though I've only seen that used with circles, or even ellipses. I'm curious whether there's a method of using raycasting with a more rectangular approach, or whether there's a totally different method already in use to accomplish this task. I would like something more exact than checking in large chunks, and since I'm using SDL2 with a logical renderer size of 1920x1080, checking whether each pixel is intersecting is out of the question, as it would slow things down past a playable speed. I already have a multi-shape collision function template in place, and I could use that, though it only checks whether sides or corners are intersecting; it does not compute the overlapping area, or even find the circle's secant line, though I can't imagine that would be overly complex to implement. TL;DR: I need to find and isolate areas of a rectangle that may intersect with a circle or another rectangle, without checking every single pixel on-screen.
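
    For the rectangle-vs-rectangle case, the overlapping region can be computed directly, with no rays and no per-pixel work; the circle case can reuse the same idea by clamping the circle's centre to the rectangle and working from the penetration vector. A minimal axis-aligned sketch in Python:

        def rect_intersection(a, b):
            """Rectangles as (x, y, w, h); returns the overlap rect or None."""
            left   = max(a[0], b[0])
            top    = max(a[1], b[1])
            right  = min(a[0] + a[2], b[0] + b[2])
            bottom = min(a[1] + a[3], b[1] + b[3])
            if right <= left or bottom <= top:
                return None                        # no overlap at all
            return (left, top, right - left, bottom - top)

        print(rect_intersection((0, 0, 100, 50), (60, 20, 100, 50)))   # (60, 20, 40, 30)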

    Read the article

  • Working Qt controls in a 3d environment

    - by Jay
    I need some advice from a Qt expert. The background: I have a 3D engine (Ogre3D) working in concert with Qt. The 3D content is displayed in a widget (using a custom OS window in the client area). I'm able to overlay arbitrary Qt widgets onto the 3D world using the widget render() method and a shared bitmap. This makes a great "heads-up display", and I can use standard Qt style sheets and animation with this technique.
    My goal: I'd like to go a step further and allow the user to move these rendered widgets using the mouse. I'd like some advice on the best way to implement this.
    Possible solutions:
      - The widgets in the HUD are not part of the inheritance chain; I render them manually, so they don't get events. I could add them to the inheritance chain so they get events in the usual way. Then I would need to change them to render to my shared bitmap instead of to the operating system. I looked at this once but couldn't find enough information to implement it.
      - Capture mouse events in the 3D display widget and EMIT them to child controls; I would basically create my own event-handling chain.
    Any suggestions on how to implement this? I'm also considering switching to Qt 5, and I'm not sure how that might affect this decision.

    Read the article

  • Why is my Type.GetFields(BindingFlags.Instance|BindingFlags.Public) not working?

    - by granadaCoder
    My code can see the non-public members, but not the public ones. Why?

        FieldInfo[] publicFieldInfos = t.GetFields(BindingFlags.Instance | BindingFlags.Public);

    is returning nothing. Note: I'm trying to get at the properties on the abstract class as well as the concrete class (and read the attributes as well). The MSDN example works with the 2 flags (BindingFlags.Instance | BindingFlags.Public), but my mini inheritance example below does not.

        private void RunTest1()
        {
            try
            {
                textBox1.Text = string.Empty;
                Type t = typeof(MyInheritedClass);

                // Look at the BindingFlags *** NonPublic ***
                int fieldCount = 0;
                while (null != t)
                {
                    fieldCount += t.GetFields(BindingFlags.Instance | BindingFlags.NonPublic).Length;
                    FieldInfo[] nonPublicFieldInfos = t.GetFields(BindingFlags.Instance | BindingFlags.NonPublic);
                    foreach (FieldInfo field in nonPublicFieldInfos)
                    {
                        if (null != field)
                        {
                            Console.WriteLine(field.Name);
                        }
                    }
                    t = t.BaseType;
                }

                Console.WriteLine("\n\r------------------\n\r");

                // Look at the BindingFlags *** Public ***
                t = typeof(MyInheritedClass);
                FieldInfo[] publicFieldInfos = t.GetFields(BindingFlags.Instance | BindingFlags.Public);
                foreach (FieldInfo field in publicFieldInfos)
                {
                    if (null != field)
                    {
                        Console.WriteLine(field.Name);
                        object[] attributes = field.GetCustomAttributes(t, true);
                        if (attributes != null && attributes.Length > 0)
                        {
                            foreach (Attribute att in attributes)
                            {
                                Console.WriteLine(att.GetType().Name);
                            }
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                ReportException(ex);
            }
        }

        private void ReportException(Exception ex)
        {
            Exception innerException = ex;
            while (innerException != null)
            {
                Console.WriteLine(innerException.Message + System.Environment.NewLine
                    + innerException.StackTrace + System.Environment.NewLine + System.Environment.NewLine);
                innerException = innerException.InnerException;
            }
        }

        public abstract class MySuperType
        {
            public MySuperType(string st)
            {
                this.STString = st;
            }

            public string STString { get; set; }

            public abstract string MyAbstractString { get; set; }
        }

        public class MyInheritedClass : MySuperType
        {
            public MyInheritedClass(string ic) : base(ic)
            {
                this.ICString = ic;
            }

            [Description("This is an important property"), Category("HowImportant")]
            public string ICString { get; set; }

            private string _oldSchoolPropertyString = string.Empty;
            public string OldSchoolPropertyString
            {
                get { return _oldSchoolPropertyString; }
                set { _oldSchoolPropertyString = value; }
            }

            [Description("This is a not so important property"), Category("HowImportant")]
            public override string MyAbstractString { get; set; }
        }

    Read the article

  • How do I set up my texture coordinates correctly in GLSL 150 and OpenGL 3.3?

    - by RubyKing
    I'm trying to do texture mapping in GLSL 150 and OpenGL 3.3. Here are my shaders; I've tried my best to get this as correct as possible. I'm guessing you want to know what the problem is: my texture shows, but not in its fullest form, just one section of it, not the full texture on the quad. All I can think of is that it's the texture coordinates in the main.cpp, which is linked at the bottom of this post.

    FRAGMENT SHADER

        #version 150

        in vec2 Texcoord_VSPS;
        out vec4 color;

        // Values that stay constant for the whole mesh.
        uniform sampler2D myTextureSampler;

        // Main Entry Point
        void main()
        {
            // Output color = color of the texture at the specified UV
            color = texture2D( myTextureSampler, Texcoord_VSPS );
        }

    VERTEX SHADER

        #version 150

        // Position Container
        in vec3 position;

        // Container for TexCoords
        attribute vec2 Texcoord0;
        out vec2 Texcoord_VSPS;
        //out vec2 ex_texcoord;

        // TO USE A DIFFERENT COORDINATE SYSTEM JUST MULTIPLY THE MATRIX YOU WANT

        // Main Entry Point
        void main()
        {
            // Translations and w coordinate stuff
            gl_Position = vec4(position.xyz, 1.0);
            Texcoord_VSPS = Texcoord0;
        }

    LINK TO MAIN.CPP: http://pastebin.com/t7Vg9L0k

    Read the article
