Search Results

Search found 401 results on 17 pages for 'uniform'.


  • Threshold of blurry image - part 2

    - by 1''
    How can I threshold this blurry image to make the digits as clear as possible? In a previous post, I tried adaptively thresholding a blurry image (left), which resulted in distorted and disconnected digits (right): Since then, I've tried using a morphological closing operation as described in this post to make the brightness of the image uniform: If I adaptively threshold this image, I don't get significantly better results. However, because the brightness is approximately uniform, I can now use an ordinary threshold: This is a lot better than before, but I have two problems: I had to manually choose the threshold value. Although the closing operation results in uniform brightness, the level of brightness might be different for other images. Different parts of the image would do better with slight variations in the threshold level. For instance, the 9 and 7 in the top left come out partially faded and should have a lower threshold, while some of the 6s have fused into 8s and should have a higher threshold. I thought that going back to an adaptive threshold, but with a very large block size (1/9th of the image) would solve both problems. Instead, I end up with a weird "halo effect" where the centre of the image is a lot brighter, but the edges are about the same as the normally-thresholded image: Edit: remi suggested morphologically opening the thresholded image at the top right of this post. This doesn't work too well. Using elliptical kernels, only a 3x3 is small enough to avoid obliterating the image entirely, and even then there are significant breakages in the digits:
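
    For reference, here is a minimal OpenCV sketch of the pipeline described above, under stated assumptions: the file name, kernel size, block size and the 165 threshold are placeholder values, and the divide step is only one common way of applying the closing result to flatten the brightness (the post does not spell that step out).

      import cv2

      # Hypothetical input: the blurry digits image from the post, read as grayscale.
      img = cv2.imread("blurry_digits.png", cv2.IMREAD_GRAYSCALE)

      # Morphological closing with a large elliptical kernel estimates the background.
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
      closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
      normalized = cv2.divide(img, closed, scale=255)   # roughly uniform brightness

      # Ordinary (global) threshold, with a manually chosen value as in the post.
      _, global_thresh = cv2.threshold(normalized, 165, 255, cv2.THRESH_BINARY)

      # Adaptive threshold with a very large block size (about 1/9 of the image
      # width, forced to be odd), the variant that produced the "halo effect".
      block = (img.shape[1] // 9) | 1
      adaptive = cv2.adaptiveThreshold(normalized, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, block, 2)

      cv2.imwrite("global.png", global_thresh)
      cv2.imwrite("adaptive.png", adaptive)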

    Read the article

  • Custom Font. Keeping the font width the same.

    - by user338322
    I am trying to draw a string using Quartz 2D. What I am doing is drawing each letter of the string individually, because each letter has special attributes associated with it, by taking each letter into a new string. The string gets printed, but the space between the letters is not uniform and it looks very ugly to read. I read something about using custom fonts, but I have no idea if I can do it. My code is here: (void)drawRect:(CGRect)rect { NSString *string=@"My Name Is Adam"; float j=0; const char *charStr=[string cStringUsingEncoding: NSASCIIStringEncoding]; for(int i=0; i<[string length]; i++) { NSString *str=[NSString stringWithFormat:@"%c",charStr[i]]; const char *s=[str cStringUsingEncoding:NSASCIIStringEncoding]; NSLog(@"%s",s); CGContextRef context=[self getMeContextRef]; CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1.0, -1.0)); CGContextSelectFont(context, "Arial", 24, kCGEncodingMacRoman); //CGContextSetCharacterSpacing (context, 10); CGContextSetRGBFillColor(context, 0, 0, 200, 1); CGContextSetTextDrawingMode(context, kCGTextFill); CGContextShowTextAtPoint(context, 80+j, 80, s, 1); j=j+15; } } In the output, 'My Name Is Adam' gets printed, but the space between the letters is not uniform. Is there any way to make the spacing uniform?
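
    A quick way to see why the fixed j=j+15 step looks uneven is to compare pen positions computed with a constant advance against positions computed from per-glyph widths. The Python sketch below uses made-up widths purely for illustration; in Quartz the real values would come from the font's glyph advances.

      # Made-up glyph widths standing in for real font metrics.
      widths = {'M': 20, 'y': 11, ' ': 6, 'N': 16, 'a': 11, 'm': 18,
                'e': 11, 'I': 6, 's': 10, 'A': 14, 'd': 12}

      def pen_positions(text, fixed_advance=None):
          x, positions = 80.0, []
          for ch in text:
              positions.append(x)
              x += fixed_advance if fixed_advance is not None else widths[ch]
          return positions

      text = "My Name Is Adam"
      print(pen_positions(text, fixed_advance=15))  # constant step: wide glyphs crowd, narrow ones gap
      print(pen_positions(text))                    # per-glyph advances: visually even spacing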

    Read the article

  • Manipulating gl_LightSource[gl_MaxLights]

    - by pion
    The OpenGL® Shading Language spec version 1.20 http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.20.8.pdf shows the following: struct gl_LightSourceParameters { vec4 ambient; // Acli vec4 diffuse; // Dcli vec4 specular; // Scli vec4 position; // Ppli vec4 halfVector; // Derived: Hi vec3 spotDirection; // Sdli float spotExponent; // Srli float spotCutoff; // Crli // (range: [0.0,90.0], 180.0) float spotCosCutoff; // Derived: cos(Crli) // (range: [1.0,0.0],-1.0) float constantAttenuation; // K0 float linearAttenuation; // K1 float quadraticAttenuation;// K2 }; uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights]; Also, http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir1 shows the following code snippet: lightDir = normalize(vec3(gl_LightSource[0].position)); https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.10 shows many uniform* functions, but nothing seems to deal with an array of uniform variables like gl_LightSource[0]. How do we set the gl_LightSource[0] fields in WebGL using JavaScript? For example, gl_LightSource[0].position. Thanks in advance for your help. Note: Cross post on http://www.khronos.org/message_boards/viewtopic.php?f=43&t=3373

    Read the article

  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or maybe I'm missing something in the math. It's gonna be long, but here's as much background as I can think of. First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means that whenever I export a matrix from Blender through Python, I export it (in text format) in 4 lines, each representing a column. This is so I can then read them back into my engine like this: if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[0], &skeleton.joints[currentJointIndex].inverseBindTransform.m[1], &skeleton.joints[currentJointIndex].inverseBindTransform.m[2], &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[4], &skeleton.joints[currentJointIndex].inverseBindTransform.m[5], &skeleton.joints[currentJointIndex].inverseBindTransform.m[6], &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[8], &skeleton.joints[currentJointIndex].inverseBindTransform.m[9], &skeleton.joints[currentJointIndex].inverseBindTransform.m[10], &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[12], &skeleton.joints[currentJointIndex].inverseBindTransform.m[13], &skeleton.joints[currentJointIndex].inverseBindTransform.m[14], &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) { I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code. Having said that, if I understand correctly, the basic idea of skinning/animation is: I have a mesh made up of vertices. I have the mesh model-world transform W. I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj, meaning the matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1. I have keyframes, each containing a set of current poses Cj for each joint j. These are initially imported to my engine in TQS format, but after (S)LERPing them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones), only for the current spatial configurations of each joint at that frame. Given the above, the "skeletal animation algorithm" is: On each frame: check how much time has elapsed and compute the resulting current time in the animation, from 0 meaning frame 0 to 1 meaning the end of the animation. (Oh, and I'm looping forever, so the time is mod(total duration).) Then for each joint: 1 - calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1 2 - use the current animation time to LERP the components of the TQS and come up with an interpolated current pose matrix Cj which should transform from the joint's current configuration space to world space.
Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj 3 -now that I have world versions of Bj and Cj, I store this joint's world- skinning matrix K_wj = Cj_w Bj_w^-1. The above is roughly implemented like so: - (void)update:(NSTimeInterval)elapsedTime { static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z)); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z)); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } } At this point, I should have my skinning palette. So on each frame in my vertex shader, I do: uniform mat4 modelMatrix; uniform mat4 projectionMatrix; uniform mat3 normalMatrix; uniform mat4 skinningPalette[6]; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { vec3 eyeNormal = normalize(normalMatrix * normal); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition; } The result: The mesh parts that are supposed to animate do animate and follow the expected motion, however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of rotations seem to be off. 
So a few observations: In the above shader notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen. As far as exporting the joints from Blender, my python script exports for each armature bone in bind pose, it's matrix in this way: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: poseJoint = skeleton.pose.bones[joint.name] jointTransform = poseJoint.matrix.inverted() file.write('Joint ' + joint.name + ' Transform {\n') for col in jointTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') And for current / keyframe poses (assuming I'm in the right keyframe): def exportAnimations(filepath): # Only one skeleton per scene objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE'] if len(objList) == 0: return elif len(objList) > 1: return #raise exception? dialog box? skeleton = objList[0] jointNames = [bone.name for bone in skeleton.data.bones] for action in bpy.data.actions: # One animation clip per action in Blender, named as the action animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip" file = open(animationClipFilePath, 'w') file.write('target skeleton: ' + skeleton.name + '\n') file.write('joints count: {:d}'.format(len(jointNames)) + '\n') skeleton.animation_data.action = action keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves]) keyframes = [] for fcurve in action.fcurves: for keyframe in fcurve.keyframe_points: keyframes.append(keyframe.co[0]) keyframes = set(keyframes) keyframes = [kf for kf in keyframes] keyframes.sort() file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n') for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) joint = skeleton.pose.bones[i] jointCurrentPoseTransform = joint.matrix translationV = jointCurrentPoseTransform.to_translation() rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion() scaleV = jointCurrentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') file.close() Which I believe follow the theory explained at the beginning of my question. But then I checked out Blender's directX .x exporter for reference.. 
    and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare): if joint.parent: jointTransform = poseJoint.parent.matrix.inverted() else: jointTransform = Matrix() jointTransform *= poseJoint.matrix and exporting current keyframe poses like this: if joint.parent: jointCurrentPoseTransform = joint.parent.matrix.inverted() else: jointCurrentPoseTransform = Matrix() jointCurrentPoseTransform *= joint.matrix Why are they using the parent's transform instead of the joint in question's? Isn't the joint transform assumed to exist in the context of a parent transform, since after all it transforms from this joint's space to its parent's? And why are they concatenating in the same order for both bind poses and keyframe poses, if these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.
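
    To restate the intended composition from the first half of the question in isolation, here is a minimal numpy sketch of the palette build-up. The joint data is hypothetical, parents are assumed to precede their children (as in the Objective-C loop above), and @ stands for the column-major, right-to-left multiplication used throughout the question.

      import numpy as np

      def build_skinning_palette(inverse_bind, current_pose, parent):
          # inverse_bind[j], current_pose[j]: 4x4 joint-local matrices; parent[j]: index or -1
          n = len(parent)
          inv_bind_world = [None] * n   # Bj_w^-1
          cur_world = [None] * n        # Cj_w
          palette = [None] * n          # K_wj = Cj_w * Bj_w^-1
          for j in range(n):
              if parent[j] == -1:
                  inv_bind_world[j] = inverse_bind[j]
                  cur_world[j] = current_pose[j]
              else:
                  # inverse bind chains child-to-root: Bj_w^-1 = Bj^-1 * B(j-1)_w^-1
                  inv_bind_world[j] = inverse_bind[j] @ inv_bind_world[parent[j]]
                  # current pose chains root-to-child: Cj_w = C(j-1)_w * Cj
                  cur_world[j] = cur_world[parent[j]] @ current_pose[j]
              palette[j] = cur_world[j] @ inv_bind_world[j]
          return palette

      # Sanity check: if the current pose equals the bind pose, every palette entry
      # should be the identity, so skinning leaves the mesh where it was.
      bind = [np.eye(4), np.eye(4)]
      bind[1][0, 3] = 2.0                        # child joint offset along x
      inv_bind = [np.linalg.inv(m) for m in bind]
      print(build_skinning_palette(inv_bind, bind, parent=[-1, 0])[1])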

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides a large range of interesting solutions to ensure High Availability of the database. Dataguard, RAC or even both configurations (as recommended by Oracle for a Maximum Availability Architecture - MAA) are the most frequently found and used solutions. But, when it comes to protecting a system with an Active/Passive architecture with failover capabilities, people often think of other, expensive third-party cluster systems. Oracle Clusterware technology, which comes along at no extra cost with Oracle Database or Oracle Unbreakable Linux, is - in the minds of most people - linked only to Oracle RAC and is therefore seldom used to implement failover solutions. Oracle Clusterware 11gR2 (a part of Oracle 11gR2 Grid Infrastructure) provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and then to protect, almost any kind of application (from the simple xclock to the most complex Application Server). Quoting Oracle: “Oracle Clusterware is a portable cluster software that allows clustering of single servers so that they cooperate as a single system. Oracle Clusterware also provides the required infrastructure for Oracle Real Application Clusters (RAC). In addition Oracle Clusterware enables the protection of any Oracle application or any other kind of application within a cluster.” In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system and that ASM is not a problem for you (as I am using it as a shared storage). If not, please have a look at the Oracle Documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional on my system. Unfortunately, there can always be a typo or a mistake. This blog entry does not replace a course around the Clusterware Framework. I just hope it will let you see how powerful it is and that it will give you the wish to go further with it...  Note: This entry has been revised (rev.2) following comments from Philip Newlan. Prerequisites 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with an Enterprise Kernel. Shared Storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN Oracle 11gR2 Database (11.2.0.1) Oracle 11gR2 Grid Infrastructure (11.2.0.1)   Step 1 - Install the software Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA) Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster) Use ASM_CRS to store Voting Disk and OCR. Use SCAN. Install Oracle Database Standalone binaries on both nodes. Use asmca to check/mount the disk groups on the 2 nodes Use dbca to create and configure a database on the primary node Let's name it DB11G. Copy the pfile and password file to the second node. Create the adump directory on the second node.   Step 2 - Setup the resource to be protected After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (ex: kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command: crsctl status resource, which shows an ora.db11g.db entry. 
Let's save the definition of this resource, for future use : mkdir -p /crs/11.2.0/HA_scripts chown oracle:oinstall /crs/11.2.0/HA_scripts crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So, let's remove it from the OCR definitions, we don't need it ! srvctl stop database -d DB11G srvctl remove database -d DB11G Instead of it, we need to create a new resource of a more general type : cluster_resource. Here are the steps to achieve this : Create an action script :  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh #!/bin/bash export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 export ORACLE_SID=DB11G case $1 in 'start')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   startup EOF   RET=0   ;; 'stop')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown immediate EOF   RET=0   ;; 'clean')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown abort    ##for i in `ps -ef | grep -i $ORACLE_SID | awk '{print $2}' ` ;do kill -9 $i; done EOF   RET=0   ;; 'check')    ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`    if [ $ok = 0 ]; then      RET=1    else      RET=0    fi    ;; '*')      RET=0   ;; esac if [ $RET -eq 0 ]; then    exit 0 else    exit 1 fi   This script must provide, at least, methods to start, stop, clean and check the database. It is self-explaining and contains nothing special. Just be aware that it must be runnable (+x), it runs as Oracle user (because of the ACL property - see later) and needs to know about the environment. Also make sure it exists on every node of the cluster. Moreover, as of 11.2, the clean method is mandatory. It must provide the “last gasp clean up”, for example, a shutdown abort or a kill –9 of all the remaining processes. chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh scp  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh   oracle@OELCluster02:/crs/11.2.0/HA_scripts Create a new resource file, based on the information we got from previous  myResource.txt . Name it myNewResource.txt. myResource.txt  is shown below. As we can see, it defines an ora.database.type resource, named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource. 
NAME=ora.db11g.db TYPE=ora.database.type ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_FAILURE_TEMPLATE= ACTION_SCRIPT= ACTIVE_PLACEMENT=1 AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX% AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=1 CHECK_TIMEOUT=600 CLUSTER_DATABASE=false DB_UNIQUE_NAME=DB11G DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) DEGREE=1 DESCRIPTION=Oracle Database resource ENABLED=1 FAILOVER_DELAY=0 FAILURE_INTERVAL=60 FAILURE_THRESHOLD=1 GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump GEN_USR_ORA_INST_NAME= GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G HOSTING_MEMBERS= INSTANCE_FAILOVER=0 LOAD=1 LOGGING_LEVEL=1 MANAGEMENT_POLICY=AUTOMATIC NLS_LANG= NOT_RESTARTING_TEMPLATE= OFFLINE_CHECK_INTERVAL=0 ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 PLACEMENT=restricted PROFILE_CHANGE_TEMPLATE= RESTART_ATTEMPTS=2 ROLE=PRIMARY SCRIPT_TIMEOUT=60 SERVER_POOLS=ora.DB11G SPFILE=+DTA/DB11G/spfileDB11G.ora START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STATE_CHANGE_TEMPLATE= STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h USR_ORA_DB_NAME=DB11G USR_ORA_DOMAIN=haroland USR_ORA_ENV= USR_ORA_FLAGS= USR_ORA_INST_NAME=DB11G USR_ORA_OPEN_MODE=open USR_ORA_OPI=false USR_ORA_STOP_MODE=immediate VERSION=11.2.0.1.0 I removed database type related entries from myResource.txt and modified some other to produce the following myNewResource.txt. Notice the NAME property that should not have the ora. prefix Notice the TYPE property that is not ora.database.type but cluster_resource. Notice the definition of ACTION_SCRIPT. Notice the HOSTING_MEMBERS that enumerates the members of the cluster (as returned by the olsnodes command). NAME=DB11G.db TYPE=cluster_resource DESCRIPTION=Oracle Database resource ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh PLACEMENT=restricted ACTIVE_PLACEMENT=0 AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=10 DEGREE=1 ENABLED=1 HOSTING_MEMBERS=oelcluster01 oelcluster02 LOGGING_LEVEL=1 RESTART_ATTEMPTS=1 START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h Register the resource. Take care of the resource type. It needs to be a cluster_resource and not a ora.database.type resource (Oracle recommendation) .   crsctl add resource DB11G.db  -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt Step 3 - Start the resource crsctl start resource DB11G.db This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster. Step 4 - Test this We will test the setup using 2 methods. crsctl relocate resource DB11G.db This command calls the ACTION_SCRIPT  (on the two nodes)  to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but, this time we can use a more "MS$ like" method :Turn off the server on which the database is running. After short delay, you should observe that the database is relocated on node 1. 
    Conclusion Once the software is installed and the standalone database is created (which is a rather common and usual task), the steps to reach the objective are quite easy: Create an executable action script on every node of the cluster. Create a resource file. Create/register the resource with OCR using the resource file. Start the resource. This solution is a very interesting alternative to licensable third-party solutions. References: Clusterware 11gR2 documentation, Oracle Clusterware Resource Reference, Clusterware for Unbreakable Linux, Using Oracle Clusterware to Protect A Single Instance Oracle Database 11gR1 (to have an idea of complexity), Oracle Clusterware on OTN.   Gilles Haro, Technical Expert - Core Technology, Oracle Consulting

    Read the article

  • Vertex buffer acting strange? [on hold]

    - by Ryan Capote
    I'm having a strange problem, and I don't know what could be causing it. My current code is identical to how I've done this before. I'm trying to render a rectangle using VBO and orthographic projection.   My results:     What I expect: 3x3 rectangle in the top left corner   #include <stdio.h> #include <GL\glew.h> #include <GLFW\glfw3.h> #include "lodepng.h"   static const int FALSE = 0; static const int TRUE = 1;   static const char* VERT_SHADER =     "#version 330\n"       "layout(location=0) in vec4 VertexPosition; "     "layout(location=1) in vec2 UV;"     "uniform mat4 uProjectionMatrix;"     /*"out vec2 TexCoords;"*/       "void main(void) {"     "    gl_Position = uProjectionMatrix*VertexPosition;"     /*"    TexCoords = UV;"*/     "}";   static const char* FRAG_SHADER =     "#version 330\n"       /*"uniform sampler2D uDiffuseTexture;"     "uniform vec4 uColor;"     "in vec2 TexCoords;"*/     "out vec4 FragColor;"       "void main(void) {"    /* "    vec4 texel = texture2D(uDiffuseTexture, TexCoords);"     "    if(texel.a <= 0) {"     "         discard;"     "    }"     "    FragColor = texel;"*/     "    FragColor = vec4(1.f);"     "}";   static int g_running; static GLFWwindow *gl_window; static float gl_projectionMatrix[16];   /*     Structures */ typedef struct _Vertex {     float x, y, z, w;     float u, v; } Vertex;   typedef struct _Position {     float x, y; } Position;   typedef struct _Bitmap {     unsigned char *pixels;     unsigned int width, height; } Bitmap;   typedef struct _Texture {     GLuint id;     unsigned int width, height; } Texture;   typedef struct _VertexBuffer {     GLuint bufferObj, vertexArray; } VertexBuffer;   typedef struct _ShaderProgram {     GLuint vertexShader, fragmentShader, program; } ShaderProgram;   /*   http://en.wikipedia.org/wiki/Orthographic_projection */ void createOrthoProjection(float *projection, float width, float height, float far, float near)  {       const float left = 0;     const float right = width;     const float top = 0;     const float bottom = height;          projection[0] = 2.f / (right - left);     projection[1] = 0.f;     projection[2] = 0.f;     projection[3] = -(right+left) / (right-left);     projection[4] = 0.f;     projection[5] = 2.f / (top - bottom);     projection[6] = 0.f;     projection[7] = -(top + bottom) / (top - bottom);     projection[8] = 0.f;     projection[9] = 0.f;     projection[10] = -2.f / (far-near);     projection[11] = (far+near)/(far-near);     projection[12] = 0.f;     projection[13] = 0.f;     projection[14] = 0.f;     projection[15] = 1.f; }   /*     Textures */ void loadBitmap(const char *filename, Bitmap *bitmap, int *success) {     int error = lodepng_decode32_file(&bitmap->pixels, &bitmap->width, &bitmap->height, filename);       if (error != 0) {         printf("Failed to load bitmap. 
");         printf(lodepng_error_text(error));         success = FALSE;         return;     } }   void destroyBitmap(Bitmap *bitmap) {     free(bitmap->pixels); }   void createTexture(Texture *texture, const Bitmap *bitmap) {     texture->id = 0;     glGenTextures(1, &texture->id);     glBindTexture(GL_TEXTURE_2D, texture);       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0,              GL_RGBA, GL_UNSIGNED_BYTE, bitmap->pixels);       glBindTexture(GL_TEXTURE_2D, 0); }   void destroyTexture(Texture *texture) {     glDeleteTextures(1, &texture->id);     texture->id = 0; }   /*     Vertex Buffer */ void createVertexBuffer(VertexBuffer *vertexBuffer, Vertex *vertices) {     glGenBuffers(1, &vertexBuffer->bufferObj);     glGenVertexArrays(1, &vertexBuffer->vertexArray);     glBindVertexArray(vertexBuffer->vertexArray);       glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj);     glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * 6, (const GLvoid*)vertices, GL_STATIC_DRAW);       const unsigned int uvOffset = sizeof(float) * 4;       glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);     glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)uvOffset);       glEnableVertexAttribArray(0);     glEnableVertexAttribArray(1);       glBindBuffer(GL_ARRAY_BUFFER, 0);     glBindVertexArray(0); }   void destroyVertexBuffer(VertexBuffer *vertexBuffer) {     glDeleteBuffers(1, &vertexBuffer->bufferObj);     glDeleteVertexArrays(1, &vertexBuffer->vertexArray); }   void bindVertexBuffer(VertexBuffer *vertexBuffer) {     glBindVertexArray(vertexBuffer->vertexArray);     glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->bufferObj); }   void drawVertexBufferMode(GLenum mode) {     glDrawArrays(mode, 0, 6); }   void drawVertexBuffer() {     drawVertexBufferMode(GL_TRIANGLES); }   void unbindVertexBuffer() {     glBindVertexArray(0);     glBindBuffer(GL_ARRAY_BUFFER, 0); }   /*     Shaders */ void compileShader(ShaderProgram *shaderProgram, const char *vertexSrc, const char *fragSrc) {     GLenum err;     shaderProgram->vertexShader = glCreateShader(GL_VERTEX_SHADER);     shaderProgram->fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);       if (shaderProgram->vertexShader == 0) {         printf("Failed to create vertex shader.");         return;     }       if (shaderProgram->fragmentShader == 0) {         printf("Failed to create fragment shader.");         return;     }       glShaderSource(shaderProgram->vertexShader, 1, &vertexSrc, NULL);     glCompileShader(shaderProgram->vertexShader);     glGetShaderiv(shaderProgram->vertexShader, GL_COMPILE_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to compile vertex shader.");         return;     }       glShaderSource(shaderProgram->fragmentShader, 1, &fragSrc, NULL);     glCompileShader(shaderProgram->fragmentShader);     glGetShaderiv(shaderProgram->fragmentShader, GL_COMPILE_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to compile fragment shader.");         return;     }       shaderProgram->program = glCreateProgram();     glAttachShader(shaderProgram->program, shaderProgram->vertexShader);     glAttachShader(shaderProgram->program, shaderProgram->fragmentShader);     
glLinkProgram(shaderProgram->program);          glGetProgramiv(shaderProgram->program, GL_LINK_STATUS, &err);       if (err != GL_TRUE) {         printf("Failed to link shader.");         return;     } }   void destroyShader(ShaderProgram *shaderProgram) {     glDetachShader(shaderProgram->program, shaderProgram->vertexShader);     glDetachShader(shaderProgram->program, shaderProgram->fragmentShader);       glDeleteShader(shaderProgram->vertexShader);     glDeleteShader(shaderProgram->fragmentShader);       glDeleteProgram(shaderProgram->program); }   GLuint getUniformLocation(const char *name, ShaderProgram *program) {     GLuint result = 0;     result = glGetUniformLocation(program->program, name);       return result; }   void setUniformMatrix(float *matrix, const char *name, ShaderProgram *program) {     GLuint loc = getUniformLocation(name, program);       if (loc == -1) {         printf("Failed to get uniform location in setUniformMatrix.\n");         return;     }       glUniformMatrix4fv(loc, 1, GL_FALSE, matrix); }   /*     General functions */ static int isRunning() {     return g_running && !glfwWindowShouldClose(gl_window); }   static void initializeGLFW(GLFWwindow **window, int width, int height, int *success) {     if (!glfwInit()) {         printf("Failed it inialize GLFW.");         *success = FALSE;        return;     }          glfwWindowHint(GLFW_RESIZABLE, 0);     *window = glfwCreateWindow(width, height, "Alignments", NULL, NULL);          if (!*window) {         printf("Failed to create window.");         glfwTerminate();         *success = FALSE;         return;     }          glfwMakeContextCurrent(*window);       GLenum glewErr = glewInit();     if (glewErr != GLEW_OK) {         printf("Failed to initialize GLEW.");         printf(glewGetErrorString(glewErr));         *success = FALSE;         return;     }       glClearColor(0.f, 0.f, 0.f, 1.f);     glViewport(0, 0, width, height);     *success = TRUE; }   int main(int argc, char **argv) {          int err = FALSE;     initializeGLFW(&gl_window, 480, 320, &err);     glDisable(GL_DEPTH_TEST);     if (err == FALSE) {         return 1;     }          createOrthoProjection(gl_projectionMatrix, 480.f, 320.f, 0.f, 1.f);          g_running = TRUE;          ShaderProgram shader;     compileShader(&shader, VERT_SHADER, FRAG_SHADER);     glUseProgram(shader.program);     setUniformMatrix(&gl_projectionMatrix, "uProjectionMatrix", &shader);       Vertex rectangle[6];     VertexBuffer vbo;     rectangle[0] = (Vertex){0.f, 0.f, 0.f, 1.f, 0.f, 0.f}; // Top left     rectangle[1] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top right     rectangle[2] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left     rectangle[3] = (Vertex){3.f, 0.f, 0.f, 1.f, 1.f, 0.f}; // Top left     rectangle[4] = (Vertex){0.f, 3.f, 0.f, 1.f, 0.f, 1.f}; // Bottom left     rectangle[5] = (Vertex){3.f, 3.f, 0.f, 1.f, 1.f, 1.f}; // Bottom right       createVertexBuffer(&vbo, &rectangle);            bindVertexBuffer(&vbo);          while (isRunning()) {         glClear(GL_COLOR_BUFFER_BIT);         glfwPollEvents();                    drawVertexBuffer();                    glfwSwapBuffers(gl_window);     }          unbindVertexBuffer(&vbo);       glUseProgram(0);     destroyShader(&shader);     destroyVertexBuffer(&vbo);     glfwTerminate();     return 0; }

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides many interesting ways to ensure High Availability. Dataguard configurations, RAC configurations or even both (as recommended for a Maximum Availability Architecture - MAA) are the most frequently found. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, one often thinks of expensive third-party cluster systems. Oracle Clusterware technology, which comes free with Oracle Database, is – in the minds of most people – often linked to Oracle RAC and is therefore rarely used to implement failover solutions. 11gR2 Clusterware – which is part of Oracle Grid Infrastructure – provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make “failover-able” and, therefore, to protect almost every kind of application (from xclock to the more complex Application Server). In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system and that ASM is not a problem for you (as I am using it as a shared storage). If not, please have a look at the Oracle Documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional. Unfortunately, there can always be a typo or a mistake. This blog entry is not a course around the Clusterware Framework. I just hope it will let you see how powerful it is and that it will give you the wish to go further with it…   Prerequisites 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with the Enterprise Kernel. Shared Storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN Oracle 11gR2 Database (11.2.0.1) Oracle 11gR2 Grid Infrastructure (11.2.0.1)   Step 1 – Install the software Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA) Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster) Use ASM_CRS to store Voting Disk and OCR. Use SCAN. Install Oracle Database Standalone binaries on both nodes. Use asmca to check/mount the disk groups on the 2 nodes Use dbca to create and configure a database on the primary node Let’s name it DB11G. Copy the pfile and password file to the second node. Create the adump directory on the second node.   Step 2 - Setup the resource to be protected After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (ex: kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command: crsctl status resource, which shows an ora.db11g.db entry. Let’s save the definition of this resource for future use: mkdir -p /crs/11.2.0/HA_scripts chown oracle:oinstall /crs/11.2.0/HA_scripts crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So, let’s remove it from the OCR definitions; we don’t need it! srvctl stop database -d DB11G srvctl remove database -d DB11G Instead of it, we need to create a new resource of a more general type: cluster_resource. 
Here are the steps to achieve this : Create an action script :  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh #!/bin/bash export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 export ORACLE_SID=DB11G case $1 in 'start')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   startup EOF   RET=0   ;; 'stop')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown immediate EOF   RET=0   ;; 'check')    ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`    if [ $ok = 0 ]; then      RET=1    else      RET=0    fi    ;; '*')      RET=0   ;; esac if [ $RET -eq 0 ]; then    exit 0 else    exit 1 fi   This script must provide, at least, methods to start, stop and check the database. It is self-explaining and contains nothing special. Just be aware that it is run as Oracle user (because of the ACL property – see later) and needs to know about the environment. It also needs to be present on every node of the cluster. chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh scp  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh   oracle@OELCluster02:/crs/11.2.0/HA_scripts Create a new resource file, based on the information we got from previous  myResource.txt . Name it myNewResource.txt. myResource.txt  is shown below. As we can see, it defines an ora.database.type resource, named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource. NAME=ora.db11g.db TYPE=ora.database.type ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_FAILURE_TEMPLATE= ACTION_SCRIPT= ACTIVE_PLACEMENT=1 AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX% AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=1 CHECK_TIMEOUT=600 CLUSTER_DATABASE=false DB_UNIQUE_NAME=DB11G DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) DEGREE=1 DESCRIPTION=Oracle Database resource ENABLED=1 FAILOVER_DELAY=0 FAILURE_INTERVAL=60 FAILURE_THRESHOLD=1 GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump GEN_USR_ORA_INST_NAME= GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G HOSTING_MEMBERS= INSTANCE_FAILOVER=0 LOAD=1 LOGGING_LEVEL=1 MANAGEMENT_POLICY=AUTOMATIC NLS_LANG= NOT_RESTARTING_TEMPLATE= OFFLINE_CHECK_INTERVAL=0 ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 PLACEMENT=restricted PROFILE_CHANGE_TEMPLATE= RESTART_ATTEMPTS=2 ROLE=PRIMARY SCRIPT_TIMEOUT=60 SERVER_POOLS=ora.DB11G SPFILE=+DTA/DB11G/spfileDB11G.ora START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STATE_CHANGE_TEMPLATE= STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h USR_ORA_DB_NAME=DB11G USR_ORA_DOMAIN=haroland USR_ORA_ENV= USR_ORA_FLAGS= USR_ORA_INST_NAME=DB11G USR_ORA_OPEN_MODE=open USR_ORA_OPI=false USR_ORA_STOP_MODE=immediate VERSION=11.2.0.1.0 I removed database type related entries from myResource.txt and modified some other to produce the following myNewResource.txt. Notice the NAME property that should not have the ora. prefix Notice the TYPE property that is not ora.database.type but cluster_resource. Notice the definition of ACTION_SCRIPT. Notice the HOSTING_MEMBERS that enumerates the members of the cluster (as returned by the olsnodes command). 
NAME=DB11G.db TYPE=cluster_resource DESCRIPTION=Oracle Database resource ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh PLACEMENT=restricted ACTIVE_PLACEMENT=0 AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=10 DEGREE=1 ENABLED=1 HOSTING_MEMBERS=oelcluster01 oelcluster02 LOGGING_LEVEL=1 RESTART_ATTEMPTS=1 START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h Register the resource. Take care of the resource type. It needs to be a cluster_resource and not a ora.database.type resource (Oracle recommendation) .   crsctl add resource DB11G.db  -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt Step 3 - Start the resource crsctl start resource DB11G.db This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster. Step 4 - Test this We will test the setup using 2 methods. crsctl relocate resource DB11G.db This command calls the ACTION_SCRIPT  (on the two nodes)  to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but, this time we can use a more “MS$ like” method :Turn off the server on which the database is running. After short delay, you should observe that the database is relocated on node 1. Conclusion Once the software installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy : Create an executable action script on every node of the cluster. Create a resource file. Create/Register the resource with OCR using the resource file. Start the resource. This solution is a very interesting alternative to licensable third party solutions.   References Clusterware 11gR2 documentation Oracle Clusterware Resource Reference   Gilles Haro Technical Expert - Core Technology, Oracle Consulting   

    Read the article

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I wake up with a clear vision, but sadly my laptop card doesn't do displacement mapping or glVertexAttribDivisor, so I can't test it out; I'm left sharing it here: With geomipmapping, the grid at any factor is transposable - if you pass in an offset - say as a uniform - you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that you have a world made out of quads, could you use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom? So I have some questions: Does it work? In theory, in practice? Does anyone do it? Does this technique have a name? Papers, demos, anything I can look at? Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains? Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?) Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang? Is mipmapping the displacement map going to work? On future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?
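
    As a back-of-the-envelope check on the one-byte-per-vertex question, here is a small Python calculation. The tile size and VRAM budget are arbitrary assumptions, and real drivers may pad GL_LUMINANCE internally, so treat the numbers as an upper bound on what you could pack in.

      tile = 1024                               # hypothetical heightmap tile resolution
      bytes_per_tile = tile * tile * 1          # GL_LUMINANCE / GL_UNSIGNED_BYTE: 1 byte per texel
      mip_overhead = bytes_per_tile // 3        # a full mip chain adds roughly one third again
      vram_budget = 512 * 1024 * 1024           # assume 512 MiB set aside for heightmaps

      print(bytes_per_tile / 2**20)                           # ~1.0 MiB per 1024x1024 tile
      print(vram_budget // (bytes_per_tile + mip_overhead))   # ~384 mipmapped tiles fit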

    Read the article

  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and in all the articles I've come across there is talk about object sorting. So you'd sort your objects by "material", for example. Now until I sat down and started implementing it, I kind of took this for granted, because it made sense. But now I'm wondering: what does sorting actually change? In my engine, I have a manager for UBOs; I use those to store data that'll be shared between programs, which at the moment only involves time, camera and projection matrices, and lights (I'm not worrying about managing which lights affect which objects ATM). Now for each model I have to change the model-to-world matrix uniform; no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object that bad? I vaguely remember reading somewhere that each time you change something in the pipeline, it has to get flushed, and that can cause performance issues. But for each drawing call I'm setting up a model-to-world matrix anyway, so what sense does it make to ever be concerned about this? BTW, is there any information about whether changing a uniform or calling glBufferSubData is more (or less) expensive?
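
    The usual argument for the sort is simply that identical state gets batched, so the pipeline is touched fewer times. Below is a toy Python sketch of that bookkeeping; the draw items and material ids are made up, and a "bind" stands in for whatever program/texture/uniform setup a material switch would cost in a real renderer.

      from collections import namedtuple

      DrawItem = namedtuple("DrawItem", ["material_id", "mesh_id"])

      def count_material_binds(items):
          binds, last = 0, None
          for item in items:
              if item.material_id != last:   # a material switch forces new pipeline state
                  binds += 1
                  last = item.material_id
          return binds

      items = [DrawItem(m, i) for i, m in enumerate([2, 0, 1, 0, 2, 1, 0, 2])]
      print(count_material_binds(items))                                       # unsorted: 8 binds
      print(count_material_binds(sorted(items, key=lambda d: d.material_id)))  # sorted: 3 binds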

    Read the article

  • Better solution for boolean mixing?

    - by Ruben Nunez
    Sorry if this question has been asked in the past, but searching Google and here didn't yield relevant results, so here goes. I'm working on a fragment shader that implements both conditional/boolean diffuse and bump mapping (that is to say, you don't need a diffuse texture or a normals texture, and if they're not present, they're simply changed to default values). My current solution is to use a uniform float to say "mix amount". For example, computing the diffuse texel works as: // Compute diffuse amount scaled by vCol // If no texture is present (mDif = 0.0), then DiffuseTexel = vCol // kT[0] is the diffuse texture // vTex is the texture co-ordinates // mDif is the uniform float containing the mix amount (either 0.0 or 1.0) vec4 DiffuseTexel = vCol*mix(vec4(1.0), texture2D(kT[0], vTex), mDif); While that works great and all, I was wondering if there's a better way of doing this, as I will never have any use for in-between values for funky effects. I know that perhaps the best solution is to simply write separate shaders for mDif=0.0 and mDif=1.0, but I'd like a more elegant solution than splicing shaders before compiling or writing multiple shader files and keeping each one updated. Any ideas are greatly appreciated. =)

    Read the article

  • problems texture mapping in modern OpenGL 3.3 using GLSL #version 150

    - by RubyKing
    Hi all, I'm trying to do texture mapping using modern OpenGL and GLSL 150. The problem is that the texture shows but has this weird flicker; I can show a video here: http://www.youtube.com/watch?v=xbzw_LMxlHw I have everything set up as best I can: my texcoords are in my vertex array and sent up to OpenGL, my fragment color is set to the texture and texel values, my vertex shader sends the texture coords on to the texture coordinates to be used in the fragment shader, and my ins and outs are set up. I still don't know what I'm missing that could be causing that flicker. Here is my code. FRAGMENT SHADER #version 150 uniform sampler2D texture; in vec2 texture_coord; varying vec3 texture_coordinate; void main(void){ gl_FragColor = texture(texture, texture_coord); } VERTEX SHADER #version 150 in vec4 position; out vec2 texture_coordinate; out vec2 texture_coord; uniform vec3 translations; void main() { texture_coord = (texture_coordinate); gl_Position = vec4(position.xyz + translations.xyz, 1.0); } Last bit: here is my vertex array with texture coordinates: GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f , 0.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f}; //tex x and y If you need to see all the code in its fullest glory, here is a link to every file: http://ideone.com/7kQN3 Thank you for your help.

    Read the article

  • Proper way to do texture mapping in modern OpenGL?

    - by RubyKing
    I'm trying to do texture mapping using OpenGL 3.3 and GLSL 150. The problem is the texture shows but has this weird flicker I can show a video here. My texcords are in a vertex array. I have my fragment color set to the texture values and texel values. I have my vertex shader sending the texture cords to texture cordinates to be used in the fragment shader. I have my ins and outs setup and I still don't know what I'm missing that could be causing that flicker. Here is my code: Fragment shader #version 150 uniform sampler2D texture; in vec2 texture_coord; varying vec3 texture_coordinate; void main(void) { gl_FragColor = texture(texture, texture_coord); } Vertex shader #version 150 in vec4 position; out vec2 texture_coordinate; out vec2 texture_coord; uniform vec3 translations; void main() { texture_coord = (texture_coordinate); gl_Position = vec4(position.xyz + translations.xyz, 1.0); } Last bit Here is my vertex array with texture coordinates: GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f}; //tex x and y If you need to see all the code, here is a link to every file. Thank you for your help.

    Read the article

  • Configure Oracle SOA JmsAdapter to Work with WLS JMS Topics

    - by fip
    The WebLogic JMS Topics are typically running in a WLS cluster, and so are the SOA composites that receive these Topic messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. As a JMS Topic by nature distributes the same copy of each message to all its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages against the SOA composites: How to assure all of the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic? How to make sure the messages are evenly distributed (load balanced) to SOA cluster members? Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite, so that the JMS Topic messages will be evenly distributed to the same composite running off different SOA cluster nodes without causing duplication. 1. The typical configuration In this typical configuration, we achieve the load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic along with sharable subscriptions. You can reference the documentation for an explanation of PDT. And this blog posting does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics. Our job is to apply this configuration in the context of SOA JMS Adapters. To do so involves the following steps: Step A. Configure the JMS Topic to be UDD and PDT, at the WebLogic cluster that houses the JMS Topic. Step B. Configure the JCA Connection Factory with proper ServerProperties at the SOA cluster. Step C. Reference the JCA Connection Factory and define a durable subscriber name at the composite's JmsAdapter (or the *.jca file). Here are more details of each step: Step A. Configure the JMS Topic to be UDD and PDT. You do this at the WebLogic cluster that houses the JMS Topic. You can follow the instructions at Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then at the same administration screen you can specify "Distribution Type" to be "Uniform", and the Forwarding Policy to be "Partitioned", which would make the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively. Step B. Configure ServerProperties of the JCA Connection Factory. You do this step at the SOA cluster. This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory behave as a certain type of "client". When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field, as a semicolon-separated list: ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false You can refer to Chapter 8.4.10 Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS of the Adapter User Guide for the meaning of these properties. Please note: Except for ClientID, the other properties such as ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE and TopicMessageDistributionAll=false are all default settings for the JmsAdapter's connection factory. Therefore you do NOT have to specify them explicitly. All you need to do is specify the ClientID. 
The ClientID is different from the subscriber ID that we are to discuss in the later steps. To make it simple, you just need to remember you need to specify the client ID and make it unique per connection factory. Here is the example setting: Step C. Reference the JCA Connection Factory and define a durable subscriber name, at composite's JmsAdapter (or the *.jca file) In the following example, the value 'MySubscriberID-1' was given as the value of property 'DurableSubscriber': <adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata"> <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/> <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message"> <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec"> <property name="DurableSubscriber" value="MySubscriberID-1"/> <property name="PayloadType" value="TextMessage"/> <property name="UseMessageListener" value="false"/> <property name="DestinationName" value="jms/MyTestUDDTopic"/> </activation-spec> </endpoint-activation> </adapter-config> You can set the durable subscriber name either at composite's JmsAdapter wizard,or by directly editing the JmsAdapter's *.jca file within the Composite project. 2.The "atypical" configurations: For some systems, there may be restrictions that do not allow the afore mentioned "typical" configurations be applied. For examples, some deployments may be required to configure the JMS Topic to be Replicated Distributed Topic rather than Partition Distributed Topic. We would like to discuss those scenarios here: Configuration A: The JMS Topic is NOT PDT In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out those "replicated" messages. Configuration B. The ClientIDPolicy=RESTRICTED In this case, you need separate factories for different composites. More accurately, you need separate factories for different *.jca file of JmsAdapter. References: Managing Durable Subscription WebLogic JMS Partitioned Distributed Topics and Shared Subscriptions JMS Troubleshooting: Configuring JMS Message Logging: Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API

    Read the article

  • .NET Math Libraries

    - by JoshReuben
    .NET Mathematical Libraries   .NET Builder for Matlab (The MathWorks Inc. - http://www.mathworks.com/products/netbuilder/): MATLAB Builder NE generates MATLAB-based .NET and COM components for royalty-free deployment; it creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them. .NET/Link for Mathematica (www.wolfram.com): a product that integrates Mathematica and Microsoft's .NET platform in both directions. Call .NET from Mathematica - use arbitrary .NET types directly from the Mathematica language. Use and control the Mathematica kernel from a .NET program. Turns Mathematica into a scripting shell to leverage the computational services of Mathematica. Write custom front ends for Mathematica or use Mathematica as a computational engine for another program; comes with full source code. Leverages MathLink - Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs; .NET/Link abstracts the low-level details of the MathLink C API. Extreme Optimization (http://www.extremeoptimization.com/): a collection of general-purpose mathematical and statistical classes built for the .NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package; download the trial of version 4.0 to try it out. Multi-core ready - full support for Task Parallel Library features including cancellation. Broad base of algorithms covering a wide range of numerical techniques, including: linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation), and equation solvers. Mathematics leverages parallelism using .NET 4.0's Task Parallel Library. Basic math: complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation. Solving equations: solve equations in one variable, or solve systems of linear or nonlinear equations. Curve fitting: linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials. Optimization: find the minimum or maximum of a function in one or more variables, linear programming and mixed integer programming. Numerical integration: compute integrals over finite or infinite intervals, and over 2D and higher dimensional regions; integrate systems of ordinary differential equations (ODEs). Fast Fourier Transforms: 1D and 2D FFTs using managed or fast native code (32 and 64 bit). BigInteger, BigRational, and BigFloat: perform operations with arbitrary precision. Vector and Matrix Library: real and complex vectors and matrices; single and double precision for elements; structured matrix types including triangular, symmetrical and band matrices; sparse matrices; matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition. Portability and performance: calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit). Statistics: data manipulation (sort and filter data, process missing values, remove outliers, etc.; supports .NET data binding). Statistical models: simple, multiple, nonlinear, logistic, Poisson regression; Generalized Linear Models; one- and two-way ANOVA. Hypothesis tests: 14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests such as the Anderson-Darling test for normality, one- and two-sample Kolmogorov-Smirnov tests, and Levene's test for homogeneity of variances. 
Multivariate statistics: K-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions. Statistical distributions: 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions. Random numbers: random variates from any distribution, 4 high-quality random number generators, low-discrepancy sequences, shufflers. New in version 4.0 (November 2010): support for .NET Framework 4.0 and Visual Studio 2010; parallelized with the TPL - multicore ready; sparse linear program solver that can solve problems with more than 1 million variables; mixed integer linear programming using a branch and bound algorithm; special functions: hypergeometric, Riemann zeta, elliptic integrals, Fresnel functions, Dawson's integral; full set of window functions for FFTs. Pricing (license / update subscription): Single Developer License $999 / $399; Team License (3 developers) $1999 / $799; Department License (8 developers) $3999 / $1599; Site License (unlimited developers in one physical location) $7999 / $3199. NMath (http://www.centerspace.net): .NET math and statistics libraries - matrix and vector classes, random number generators, Fast Fourier Transforms (FFTs), numerical integration, linear programming, linear regression, curve and surface fitting, optimization, hypothesis tests, analysis of variance (ANOVA), probability distributions, principal component analysis, cluster analysis. Built on the Intel Math Kernel Library (MKL), which contains highly-optimized, extensively-threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage). Pricing (license / update subscription): Single Developer License $1295 / $388; Team License (5 developers) $5180 / $1554. DotNumerics (http://www.dotnumerics.com/NumericalLibraries/Default.aspx), free: DotNumerics is a website dedicated to numerical computing for .NET that includes a C# Numerical Library for .NET containing algorithms for Linear Algebra, Differential Equations and Optimization problems. The Linear Algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively. Linear Algebra (CSLapack, CSBlas and CSEispack): systems of linear equations, eigenvalue problems, least-squares solutions of linear systems and singular value problems. Differential Equations: initial-value problems for nonstiff and stiff ordinary differential equations (ODEs) - explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton. Optimization: unconstrained and bound-constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods). Math.NET Numerics (http://numerics.mathdotnet.com/), free: an open source numerical library - includes special functions, linear algebra, probability models, random numbers, interpolation, integral transforms. A merger of dnAnalytics with Math.NET Iridium; in addition to a purely managed implementation it will also support native hardware optimization. Constants & special functions; complex type support; real and complex, dense and sparse linear algebra (with LU, QR, eigenvalue and other decompositions); non-uniform probability distributions, multivariate distributions, sample generation; alternative uniform random number generators; descriptive statistics, including order statistics; various interpolation methods, including barycentric approaches and splines; numerical function integration (quadrature) routines; integral transforms, like the Fourier transform (FFT) with arbitrary length support, and Hartley; spectral-space aware sequence manipulation (signal processing); combinatorics, polynomials, quaternions, basic number theory. Parallelized where appropriate, to leverage multi-core and multi-processor systems; fully managed or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW); provides a native facade for F# developers.

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs one that contains diffuse, normals, instance ID and depth in that order and one that I use store the ssao result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render to everything I mentioned in the first FBO, then bind depth and normals textures for reading for the ssao pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is to write the blurred ssao value to the alpha component of the Normals texture. Here are where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage ouputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets resets after running the blur shader which is weird because if I use a float texture and write to it instanceID / nInstances, the texture doesn't get reset after the blur shader has ran. Here is how I prepare my first FBO: bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){ //Create FBO glGenFramebuffers(1, &m_fbo); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo); //Create gbuffer and Depth Buffer Textures glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]); glGenTextures(1, &m_depthTexture); //prepare gbuffer for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){ glBindTexture(GL_TEXTURE_2D, m_textures[i]); if(i == GBUFF_TEXTURE_TYPE_NORMAL) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL); else if(i == GBUFF_TEXTURE_TYPE_ID) glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL); else{ std::cout << "Error in FBO initialization" << std::endl; return false; } glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); } //prepare depth buffer glBindTexture(GL_TEXTURE_2D, m_depthTexture); glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2}; glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers); GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); if(Status != GL_FRAMEBUFFER_COMPLETE){ std::cout << "FB error, status 0x" << std::hex << Status << 
std::endl; return false; } //Restore default framebuffer glBindFramebuffer(GL_FRAMEBUFFER, 0); return true; } where I use an enum defined as, enum GBUFF_TEXTURE_TYPE{ GBUFF_TEXTURE_TYPE_DIFFUSE, GBUFF_TEXTURE_TYPE_NORMAL, GBUFF_TEXTURE_TYPE_ID, GBUFF_NUM_TEXTURES }; Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow gets reset i.e. I'm using a re-size function which re-sizes the textures of the FBO but should I perhaps call glFramebufferTexture2D again too? EDIT: Here is the shader in question: #version 330 core uniform sampler2D aoSampler; uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y uniform bool use_blur; noperspective in vec2 TexCoord; layout(location = 0) out vec4 out_AO; void main(void){ if(use_blur){ float result = 0.0; for(int i = -1; i < 2; i++){ for(int j = -1; j < 2; j++){ vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j); result += texture(aoSampler, TexCoord + offset).r; // -0.004 because the texture seems to be a bit displaced } } out_AO = vec4(vec3(0.0), result / 9); } else out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r); }
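Two things that may be relevant here, with a minimal PyOpenGL sketch (sizes and names are illustrative assumptions, not taken from the engine above): a fragment output declared at layout(location = i) is written to the i-th entry of the array passed to glDrawBuffers, which in turn names a color attachment, so the mapping depends on the order of that array; and after reallocating an attachment's storage with glTexImage2D during a resize, the same texture object normally stays attached, so glFramebufferTexture2D does not need to be called again.
    # minimal sketch (assumed names/sizes): resizing G-buffer attachments with PyOpenGL
    from OpenGL.GL import *

    def resize_gbuffer(fbo, textures, width, height):
        # reallocate storage; the texture objects remain attached to the FBO
        formats = [(GL_RGB8, GL_RGB, GL_FLOAT),         # diffuse  -> fragment location 0
                   (GL_RGBA16F, GL_RGBA, GL_FLOAT),     # normals  -> fragment location 1
                   (GL_R32I, GL_RED_INTEGER, GL_INT)]   # instance -> fragment location 2
        for tex, (ifmt, fmt, typ) in zip(textures, formats):
            glBindTexture(GL_TEXTURE_2D, tex)
            glTexImage2D(GL_TEXTURE_2D, 0, ifmt, width, height, 0, fmt, typ, None)
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo)
        # output at layout(location = i) lands in DrawBuffers[i]
        glDrawBuffers(3, [GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2])
        assert glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)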

    Read the article

  • How do I pass vertex and color positions to OpenGL shaders?

    - by smoth190
    I've been trying to get this to work for the past two days, telling myself I wouldn't ask for help. I think you can see where that got me... I thought I'd try my hand at a little OpenGL, because DirectX is complex and depressing. I picked OpenGL 3.x, because even with my OpenGL 4 graphics card, all my friends don't have that, and I like to let them use my programs. There aren't really any great tutorials for OpenGL 3, most are just "type this and this will happen--the end". I'm trying to just draw a simple triangle, and so far, all I have is a blank screen with my clear color (when I set the draw type to GL_POINTS I just get a black dot). I have no idea what the problem is, so I'll just slap down the code: Here is the function that creates the triangle: void CEntityRenderable::CreateBuffers() { m_vertices = new Vertex3D[3]; m_vertexCount = 3; m_vertices[0].x = -1.0f; m_vertices[0].y = -1.0f; m_vertices[0].z = -5.0f; m_vertices[0].r = 1.0f; m_vertices[0].g = 0.0f; m_vertices[0].b = 0.0f; m_vertices[0].a = 1.0f; m_vertices[1].x = 1.0f; m_vertices[1].y = -1.0f; m_vertices[1].z = -5.0f; m_vertices[1].r = 1.0f; m_vertices[1].g = 0.0f; m_vertices[1].b = 0.0f; m_vertices[1].a = 1.0f; m_vertices[2].x = 0.0f; m_vertices[2].y = 1.0f; m_vertices[2].z = -5.0f; m_vertices[2].r = 1.0f; m_vertices[2].g = 0.0f; m_vertices[2].b = 0.0f; m_vertices[2].a = 1.0f; //Create the VAO glGenVertexArrays(1, &m_vaoID); //Bind the VAO glBindVertexArray(m_vaoID); //Create a vertex buffer glGenBuffers(1, &m_vboID); //Bind the buffer glBindBuffer(GL_ARRAY_BUFFER, m_vboID); //Set the buffers data glBufferData(GL_ARRAY_BUFFER, sizeof(m_vertices), m_vertices, GL_STATIC_DRAW); //Set its usage glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3D), 0); glVertexAttribPointer(1, 4, GL_FLOAT, GL_TRUE, sizeof(Vertex3D), (void*)(3*sizeof(float))); //Enable glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); //Check for errors if(glGetError() != GL_NO_ERROR) { Error("Failed to create VBO: %s", gluErrorString(glGetError())); } //Unbind... glBindVertexArray(0); } The Vertex3D struct is as such... struct Vertex3D { Vertex3D() : x(0), y(0), z(0), r(0), g(0), b(0), a(1) {} float x, y, z; float r, g, b, a; }; And finally the render function: void CEntityRenderable::RenderEntity() { //Render... glBindVertexArray(m_vaoID); //Use our attribs glDrawArrays(GL_POINTS, 0, m_vertexCount); glBindVertexArray(0); //unbind OnRender(); } (And yes, I am binding and unbinding the shader. That is just in a different place) I think my problem is that I haven't fully wrapped my mind around this whole VertexAttribArray thing (the only thing I like better in DirectX was input layouts D:). This is my vertex shader: #version 330 //Matrices uniform mat4 projectionMatrix; uniform mat4 viewMatrix; uniform mat4 modelMatrix; //In values layout(location = 0) in vec3 position; layout(location = 1) in vec3 color; //Out values out vec3 frag_color; //Main shader void main(void) { //Position in world gl_Position = vec4(position, 1.0); //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0); //No color changes frag_color = color; } As you can see, I've disable the matrices, because that just makes debugging this thing so much harder. I tried to debug using glslDevil, but my program just crashes right before the shaders are created... so I gave up with that. This is my first shot at OpenGL since the good old days of LWJGL, but that was when I didn't even know what a shader was. Thanks for your help :)
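One detail worth double-checking in the code above: m_vertices is a pointer, so sizeof(m_vertices) passed to glBufferData is the size of the pointer rather than of the three Vertex3D structs; the byte count normally needs to be m_vertexCount * sizeof(Vertex3D). For comparison, here is a small, self-contained sketch of the same interleaved position/color setup written in Python with PyOpenGL (my own illustration, not the poster's engine), where the byte sizes and attribute offsets are spelled out explicitly:
    # interleaved position (vec3) + color (vec4) triangle, PyOpenGL + numpy
    import ctypes
    import numpy as np
    from OpenGL.GL import *

    def make_triangle_vao():
        # x, y, z, r, g, b, a per vertex
        data = np.array([-1, -1, 0, 1, 0, 0, 1,
                          1, -1, 0, 1, 0, 0, 1,
                          0,  1, 0, 1, 0, 0, 1], dtype=np.float32)
        stride = 7 * data.itemsize                     # bytes per vertex
        vao = glGenVertexArrays(1)
        glBindVertexArray(vao)
        vbo = glGenBuffers(1)
        glBindBuffer(GL_ARRAY_BUFFER, vbo)
        glBufferData(GL_ARRAY_BUFFER, data.nbytes, data, GL_STATIC_DRAW)   # explicit byte count
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, ctypes.c_void_p(0))
        glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, stride, ctypes.c_void_p(3 * data.itemsize))
        glEnableVertexAttribArray(0)
        glEnableVertexAttribArray(1)
        glBindVertexArray(0)
        return vao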

    Read the article

  • Design pattern for parsing data that will be grouped in two different ways and flipped

    - by lewisblackfan
    I'm looking for an easily maintainable and extendable design model for a script to parse an excel workbook into two separate workbooks after pulling data from other locations like the command line, and a database. The high level details are as follows. I need to parse an excel workbook containing a sheet that lists unique question names, the only reliable information that can be parsed from the question name is the book code that identifies the title and edition of the textbook the question is associated with, the rest of the question name is not standardized well enough to be reliably parsed by computer. The general form of the question name is best described by the following regular expression. '^(\w+)\s(\w{1,2})\.(\w{1,2})\.(\w{1,3})\.(\w{1,3}\.)*$' The first sub-pattern is the book code, the second sub-pattern is 90% of the time the chapter, and the rest of the sub-patterns could be section, problem type, problem number, or question type information. There is no simple logic, at least not one I can find. There will be a minimum of three other columns in this spreadsheet; one column will be the chapter the question is associated with, the second will be the section within the chapter the question is associated with, and the third will be some kind of asset indicated by a uniform resource locator. 1 | 1 | qname1 | url | description | url | description ... 1 | 1 | qname2 | url | description 1 | 1 | qname3 | url | description | url | description | url | The asset can be indicated by a full or partial uniform resource locator, the partial url will need to be completed before it can be fed into the application. There theoretically could be no limit to the number of asset columns, the assets will be grouped in columns by type. Some times additional data will have to be retrieved from a database or combined with the book code before the asset url is complete and can be understood by the application that will be using the asset. The type is an abstraction, there are eight types right now, each with their own logic in how the uniform resource locator is handled and or completed, and I have to add a new type and its logic every three or four months. For each asset url there is the possibility of a description column, a character string for display in the application, but not always. (I've already worked out validating the description text, and squashing MSs obscure code page down to something 7-bit ascii can handle.) Now that all the details are filled-in I can get to the actual problem of parsing the file. I need to split the information in this excel workbook into two separate workbooks. The first workbook will group all the questions by section in rows. With the first cell being the section doublet and the rest of the cells in the row are the question names. 1.1 | qname1 | qname2 | qname3 | qname4 | 1.2 | qname1 | qname2 | qname3 | 1.3 | qname1 | qname2 | qname3 | qname4 | qname5 There is no set number of questions for each section as you can see from the above example. The second workbook is more complicated, there is one row per asset, and question names that have more than one asset will be duplicated. There will be four or five columns on this sheet. The first is the question name for the asset, the second is a media type used to select the correct icon for the asset in the application, the third is string representing the asset type, the four is the full and complete uniform resource locator for the asset, and the fifth columns is the optional text description for the asset. 
q1 | mtype1 | atype1 | url | description q1 | mtype2 | atype2 | url | description q1 | mtype2 | atype3 | url | description q2 | mtype1 | atype1 | url | description q2 | mtype2 | atype3 | url | description For the original six types I did have a script that parsed the source excel workbook into the other two excel workbooks, and I was able to add two more types until I ran aground on the implementation of the ninth type and tenth types. What broke my script was the fact that the ninth type is actually a sub-type of one of the original six, but with entirely different logic, and my mostly procedural script could not accommodate without duplicating a lot of code. I also had a lot of bugs in the script and will be writing the test first on this time around. I'm stuck with the format for the resulting two workbooks, this script is glue code, development went ahead with the project without bothering to get a complete spec from the sponsor. I work for the same company as the developers but in the editorial department, editorial is co-sponsor of the project, and am expected to fix pesky details like this (I'm foaming at the mouth as I type this). I've tried factories, I've tried different object models, but each resulting workbook is so different when I find a design that works for generating one workbook the code is not really usable for generating the other. What I would really like are ideas about a maintainable and extensible design for parsing the source workbook into both workbooks with maximum code reuse, and or sympathy.
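Since the recurring pain is adding a new asset type every few months without disturbing the existing ones, one shape that tends to fit is a registry of per-type handlers (a strategy / plug-in arrangement): each asset type owns its own URL-completion logic, and the parser only dispatches on the type string. A minimal Python sketch of the idea, with hypothetical type names and an assumed database helper (nothing here comes from the actual spec):
    # per-asset-type handlers registered by name; adding a type means adding one class
    from abc import ABC, abstractmethod

    HANDLERS = {}

    def asset_type(name):
        def register(cls):
            HANDLERS[name] = cls()
            return cls
        return register

    class AssetHandler(ABC):
        @abstractmethod
        def complete_url(self, partial_url, book_code, db):
            """Return the full URL the application understands."""

    @asset_type("video")                  # hypothetical type name
    class VideoHandler(AssetHandler):
        def complete_url(self, partial_url, book_code, db):
            return "http://cdn.example.com/%s/%s" % (book_code, partial_url)

    @asset_type("quiz")                   # hypothetical sub-type with entirely different logic
    class QuizHandler(AssetHandler):
        def complete_url(self, partial_url, book_code, db):
            asset_id = db.lookup(partial_url)              # assumed database helper
            return "http://quiz.example.com/%s" % asset_id

    def complete(type_name, partial_url, book_code, db):
        return HANDLERS[type_name].complete_url(partial_url, book_code, db)
The two output workbooks can then share the same parsed rows and differ only in how they group them, which keeps the per-type logic out of the workbook-writing code.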

    Read the article

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with a more than one joint, then it's almost as if the second joint rotates in it's own different coordinate space and does not follow 100% its parent's transform. Which I assume it should from my animation subsystem which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?
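One way to check the matrix bookkeeping outside the engine is to build a tiny two-joint chain in numpy and confirm that the skinning matrix composes as Cj0 * Cj1 * Bj1^-1 * Bj0^-1, i.e. that a vertex placed near a joint in bind pose ends up where the current pose says it should. This is only a sketch of the theory described above, with made-up transforms (column-vector convention, as in the post):
    # numpy sketch: skinning matrix for a 2-joint chain
    import numpy as np

    def translate(x, y, z):
        m = np.eye(4); m[:3, 3] = (x, y, z); return m

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        m = np.eye(4); m[:2, :2] = [[c, -s], [s, c]]; return m

    # bind pose: joint0 at the origin, joint1 one unit above it (local-to-parent transforms)
    B0, B1 = translate(0, 0, 0), translate(0, 1, 0)
    # current pose: joint1 rotated 45 degrees about Z in its parent's space
    C0, C1 = translate(0, 0, 0), translate(0, 1, 0) @ rot_z(np.pi / 4)

    bind_world    = B0 @ B1                               # joint1 local -> world, bind pose
    current_world = C0 @ C1                               # joint1 local -> world, current pose
    skin = current_world @ np.linalg.inv(bind_world)      # = C0 * C1 * B1^-1 * B0^-1

    v = np.array([0.5, 1.0, 0.0, 1.0])   # a world-space vertex near joint1 in bind pose
    print(skin @ v)                       # where the vertex ends up in the current pose
If the exported Bj and Cj matrices reproduce this behaviour for a simple rig, the remaining suspects are the coordinate-system conversion and the choice of matrix_local versus matrix in the export script.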

    Read the article

  • PyOpenGL - passing transformation matrix into shader

    - by M-V
    I am having trouble passing projection and modelview matrices into the GLSL shader from my PyOpenGL code. My understanding is that OpenGL matrices are column major, but when I pass in projection and modelview matrices as shown, I don't see anything. I tried the transpose of the matrices, and it worked for the modelview matrix, but the projection matrix doesn't work either way. Here is the code: import OpenGL from OpenGL.GL import * from OpenGL.GL.shaders import * from OpenGL.GLU import * from OpenGL.GLUT import * from OpenGL.GLUT.freeglut import * from OpenGL.arrays import vbo import numpy, math, sys strVS = """ attribute vec3 aVert; uniform mat4 uMVMatrix; uniform mat4 uPMatrix; uniform vec4 uColor; varying vec4 vCol; void main() { // option #1 - fails gl_Position = uPMatrix * uMVMatrix * vec4(aVert, 1.0); // option #2 - works gl_Position = vec4(aVert, 1.0); // set color vCol = vec4(uColor.rgb, 1.0); } """ strFS = """ varying vec4 vCol; void main() { // use vertex color gl_FragColor = vCol; } """ # particle system class class Scene: # initialization def __init__(self): # create shader self.program = compileProgram(compileShader(strVS, GL_VERTEX_SHADER), compileShader(strFS, GL_FRAGMENT_SHADER)) glUseProgram(self.program) self.pMatrixUniform = glGetUniformLocation(self.program, 'uPMatrix') self.mvMatrixUniform = glGetUniformLocation(self.program, "uMVMatrix") self.colorU = glGetUniformLocation(self.program, "uColor") # attributes self.vertIndex = glGetAttribLocation(self.program, "aVert") # color self.col0 = [1.0, 1.0, 0.0, 1.0] # define quad vertices s = 0.2 quadV = [ -s, s, 0.0, -s, -s, 0.0, s, s, 0.0, s, s, 0.0, -s, -s, 0.0, s, -s, 0.0 ] # vertices self.vertexBuffer = glGenBuffers(1) glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer) vertexData = numpy.array(quadV, numpy.float32) glBufferData(GL_ARRAY_BUFFER, 4*len(vertexData), vertexData, GL_STATIC_DRAW) # render def render(self, pMatrix, mvMatrix): # use shader glUseProgram(self.program) # set proj matrix glUniformMatrix4fv(self.pMatrixUniform, 1, GL_FALSE, pMatrix) # set modelview matrix glUniformMatrix4fv(self.mvMatrixUniform, 1, GL_FALSE, mvMatrix) # set color glUniform4fv(self.colorU, 1, self.col0) #enable arrays glEnableVertexAttribArray(self.vertIndex) # set buffers glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer) glVertexAttribPointer(self.vertIndex, 3, GL_FLOAT, GL_FALSE, 0, None) # draw glDrawArrays(GL_TRIANGLES, 0, 6) # disable arrays glDisableVertexAttribArray(self.vertIndex) class Renderer: def __init__(self): pass def reshape(self, width, height): self.width = width self.height = height self.aspect = width/float(height) glViewport(0, 0, self.width, self.height) glEnable(GL_DEPTH_TEST) glDisable(GL_CULL_FACE) glClearColor(0.8, 0.8, 0.8,1.0) glutPostRedisplay() def keyPressed(self, *args): sys.exit() def draw(self): glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) # build projection matrix fov = math.radians(45.0) f = 1.0/math.tan(fov/2.0) zN, zF = (0.1, 100.0) a = self.aspect pMatrix = numpy.array([f/a, 0.0, 0.0, 0.0, 0.0, f, 0.0, 0.0, 0.0, 0.0, (zF+zN)/(zN-zF), -1.0, 0.0, 0.0, 2.0*zF*zN/(zN-zF), 0.0], numpy.float32) # modelview matrix mvMatrix = numpy.array([1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.5, 0.0, -5.0, 1.0], numpy.float32) # render self.scene.render(pMatrix, mvMatrix) # swap buffers glutSwapBuffers() def run(self): glutInitDisplayMode(GLUT_RGBA) glutInitWindowSize(400, 400) self.window = glutCreateWindow("Minimal") glutReshapeFunc(self.reshape) glutDisplayFunc(self.draw) 
glutKeyboardFunc(self.keyPressed) # Checks for key strokes self.scene = Scene() glutMainLoop() glutInit(sys.argv) prog = Renderer() prog.run() When I use option #2 in the shader without either matrix, I get the following output: What am I doing wrong?
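For what it's worth, one way to take the memory-layout question out of the picture is to build the matrices as ordinary 4x4 numpy arrays in conventional (row-major, math-style) layout and let OpenGL transpose them at upload time; desktop GL accepts transpose=GL_TRUE in glUniformMatrix4fv. This is a sketch of that approach only, not a diagnosis of the specific bug above:
    # math-style 4x4 matrices uploaded with transpose=GL_TRUE (PyOpenGL + numpy)
    import math
    import numpy as np
    from OpenGL.GL import GL_TRUE, glUniformMatrix4fv

    def perspective(fov_deg, aspect, zn, zf):
        f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
        return np.array([[f / aspect, 0, 0,                     0],
                         [0,          f, 0,                     0],
                         [0,          0, (zf + zn) / (zn - zf), 2 * zf * zn / (zn - zf)],
                         [0,          0, -1,                    0]], dtype=np.float32)

    def translation(x, y, z):
        m = np.eye(4, dtype=np.float32)
        m[:3, 3] = (x, y, z)
        return m

    # pMatrix  = perspective(45.0, width / float(height), 0.1, 100.0)
    # mvMatrix = translation(0.5, 0.0, -5.0)
    # glUniformMatrix4fv(self.pMatrixUniform, 1, GL_TRUE, pMatrix)    # GL transposes on upload
    # glUniformMatrix4fv(self.mvMatrixUniform, 1, GL_TRUE, mvMatrix)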

    Read the article

  • How to set the same column width in a datagrid in flex at runtime?

    - by Ra
    Hi, in Flex I have a datagrid with 22 columns, and I initially display all of them. The width of each column starts out uniform, but when I change the visibility of a few columns, the width of each column varies. How do I maintain a uniform width for each column whether or not there are any invisible columns? Also, how do I get the count of visible columns? The columnCount property returns the total number of columns, not the number of visible ones.

    Read the article

  • Are there any tools in IDEs to automatically fix comment formatting?

    - by Fragsworth
    /* Suppose I have a multi-line comment with hard line-breaks * that are roughly uniform on the right side of the text, * and I want to add text to a line in order to make the * comment a bit more descriptive. */ Now, most unfortunately, I need to add text to one of the top lines. /* Suppose I have a multi-line comment with hard line-breaks (here is some added text for happy fun time) * that are roughly uniform on the right side of the text, * and I want to add text to a line in order to make the * comment a bit more descriptive. */ It takes O(n) time (n being the number of lines) to fix each line so that they roughly line up again. The computer should do this, not me. Are there tools to deal with this in our IDEs? What are they called?
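No particular IDE feature is named in the question, but the mechanical part is easy to script; as a rough illustration (not any specific editor's reflow command), re-wrapping the body of a block comment is a few lines of Python with textwrap:
    # re-wrap the text of a /* ... */ comment to a fixed width
    import textwrap

    def rewrap_comment(comment, width=70):
        # pull the words out of the block, ignoring the leading decoration
        words = " ".join(line.strip(" \t*/") for line in comment.splitlines()).split()
        lines = textwrap.wrap(" ".join(words), width)
        return "/* " + "\n * ".join(lines) + "\n */"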

    Read the article

  • No dropped packets when using the error models in a wireless scenario?

    I am trying to use the error model in ns2 (ns2.33) with wireless links. I tried the uniform error model and the Markov chain model. ns reports no errors, but when I try to find the packets dropped due to corruption I cannot find any of them, using either the uniform or the Markov model. Please point out any mistake I may have made in using the error model. To search for dropped packets I am using: eid@eid-laptop:~/code/ns2/noisy$ cat mixed.tr | grep d My code is available here: http://pastebin.com/f68749435
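If grep finds nothing, it can help to first tally the events in the trace by type, to confirm whether any drop ('d') records exist at all; a small Python sketch for this (assuming the event type is the first whitespace-separated field of each line, as the grep above already assumes):
    # tally ns-2 trace events by type (s, r, d, f) from mixed.tr
    from collections import Counter

    counts = Counter()
    with open("mixed.tr") as trace:
        for line in trace:
            fields = line.split()
            if fields:
                counts[fields[0]] += 1

    print(counts)                        # e.g. Counter({'r': ..., 's': ..., 'd': ...})
    print("drops:", counts.get("d", 0))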

    Read the article

  • Fragment-shader blur ... how does this work?

    - by anon
    uniform sampler2D sampler0; uniform vec2 tc_offset[9]; void blur() { vec4 sample[9]; for(int i = 0; i < 9; ++i) sample[i] = texture2D(sampler0, gl_TexCoord[0].st + tc_offset[i]); gl_FragColor = (sample[0] + (2.0 * sample[1]) + sample[2] + (2.0 * sample[3]) + sample[4] + 2.0 * sample[5] + sample[6] + 2.0 * sample[7] + sample[8] ) / 13.0; } How does the sample[i] = texture2D(sampler0, ...) line work? It seems like to blur an image, I have to first generate the image, yet here I'm somehow trying to query the very image I'm generating. How does this work?
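The usual answer is that sampler0 is not the image currently being generated: it is a texture that was filled in an earlier pass (for example by rendering the scene into a framebuffer-object texture), and the blur shader reads from that finished texture while writing its output to a different render target or to the screen. The tc_offset array is typically filled by the application with the texel-sized steps to the eight neighbours plus the centre, so the loop gathers a 3x3 neighbourhood. A small Python sketch of how those offsets might be computed (the texture size here is an assumption):
    # the nine tc_offset values for a 3x3 neighbourhood, in texture coordinates
    def make_tc_offsets(tex_width, tex_height):
        dx, dy = 1.0 / tex_width, 1.0 / tex_height
        return [(i * dx, j * dy) for j in (-1, 0, 1) for i in (-1, 0, 1)]

    offsets = make_tc_offsets(512, 512)    # assumed texture size
    # upload e.g. with glUniform2fv(tc_offset_location, 9, flattened_offsets)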

    Read the article

  • Uniform distance between points

    - by Reonarudo
    Hello, given a path defined by several points that are not at a uniform distance from each other, how could I redefine, along the same path, the same number of points but at a uniform distance? I'm trying to do this in Objective-C with NSArrays of CGPoints, but so far I haven't had any luck. Thank you for any help. EDIT: I was wondering if it would help to reduce the number of points, e.g. when 3 points are collinear we could remove the middle one, but I'm not sure that would help.
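A common approach is to walk the polyline once, compute its total arc length, and then place the same number of points at equal arc-length intervals by interpolating within the original segments. A small Python sketch of that idea (the Objective-C version with CGPoint would follow the same structure; it assumes at least two input points and n >= 2):
    # resample a polyline so its points are evenly spaced along the path
    import math

    def resample(points, n):
        # cumulative arc length at each original point
        dists = [0.0]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
        total = dists[-1]
        out, seg = [], 0
        for k in range(n):
            target = total * k / (n - 1)
            while seg < len(points) - 2 and dists[seg + 1] < target:
                seg += 1
            span = dists[seg + 1] - dists[seg]
            t = 0.0 if span == 0 else (target - dists[seg]) / span
            (x0, y0), (x1, y1) = points[seg], points[seg + 1]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        return out

    print(resample([(0, 0), (1, 0), (1, 3)], 5))   # 5 evenly spaced points along an L-shape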

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >