Search Results

Search found 2888 results on 116 pages for 'scale'.


  • Algorithm to zoom a plotted function

    - by astinx
    I'm making a game in Android and I need to plot a function. My algorithm is this: @Override protected void onDraw(Canvas canvas) { float e = 0.5f; //from -x axis to +x evaluate f(x) for (float x = -z(canvas.getWidth()); x < z(canvas.getWidth()); x+=e) { float x1,y1; x1 = x; y1 = f(x); canvas.drawPoint((canvas.getWidth()/2)+x1, (canvas.getHeight()/2)-y1, paintWhite); } super.onDraw(canvas); } This is how it works: if my function is, for example, f(x)=x^2, then z(x) is sqrt(x). I evaluate each point from -z(x) to z(x) and then draw them. As you can see, I use half the size of the screen to put the function in the middle of the screen. The problem isn't that the code isn't working; it actually plots the function. But if the screen is 320*480 then the function will be really tiny, like in the image below. My question is: how can I change this algorithm to scale the function? BTW, what I really want to do is trace a route along which to later display an animated sprite, so simply scaling the image isn't going to help me. I need to change the algorithm in order to draw the same interval but in a larger space. Any tip helps, thanks! Current working result Desired result UPDATE: I will try to explain the problem in more detail. Given the interval [-15...15] (-z(x) to z(x)), I need to spread these points over a bigger interval [-320...320] (-x to x). For example, take some plotting software like this one: although the div containing the function is 640 px wide, you don't see an interval from -320 to 320, you see an interval from -6 to 6. How can I achieve this?
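    What the update describes is a linear mapping from a chosen mathematical interval onto the pixel interval of the canvas, applied to both axes. A minimal sketch of that mapping is shown below, in Python for brevity; the sample ranges, step size and function names are illustrative only, but the Android onDraw would do the same arithmetic with canvas.getWidth() and canvas.getHeight() before calling canvas.drawPoint.

        # Sketch: map a math interval (e.g. x in [-6, 6]) linearly onto the pixel
        # interval [0, width], and flip y because screen y grows downward.
        def make_mapper(x_min, x_max, y_min, y_max, width, height):
            def to_screen(x, y):
                sx = (x - x_min) / (x_max - x_min) * width
                sy = height - (y - y_min) / (y_max - y_min) * height
                return sx, sy
            return to_screen

        def f(x):
            return x * x

        to_screen = make_mapper(-6.0, 6.0, 0.0, 36.0, 320, 480)
        x = -6.0
        while x <= 6.0:
            sx, sy = to_screen(x, f(x))
            print(round(sx, 1), round(sy, 1))   # in onDraw: canvas.drawPoint(sx, sy, paintWhite)
            x += 0.05                           # step taken in math units, not pixels

    Because the step is taken in math units, the plotted density no longer depends on the screen size; choosing the visible interval (here [-6, 6]) is what controls the zoom.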

    Read the article

  • using apple-mobile-web-app-capable and cache.manifest issue [migrated]

    - by LocoMike
    So I have this simple HTML file: <!DOCTYPE HTML> <html manifest="cache.manifest"><head> <meta name="apple-mobile-web-app-capable" content="yes"> <meta name="apple-mobile-web-app-status-bar-style" content="black"> <title>Test</title> <meta http-equiv="content-type" content="text/html"> <meta name="HandheldFriendly" content="true"> <meta name="viewport" content="width=320; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;"> <style type="text/css"></style></head> <body marginwidth="0" marginheight="0" topmargin="0" leftmargin="0"> <h1>hello</h1> </body> </html> My cache.manifest contains just the line CACHE MANIFEST. I run this website on my local server (localhost). I load it from iPhone Safari and it works fine. I then stop the server and load it again, and it still works, because the offline cache is doing its job. However... if I save the website as a start icon on the iPhone home screen and then try to open it with the server stopped, it won't load. However... if I open it with the server running at least once (that works), then I can open it later without a problem. It looks like even though the page was cached in Safari, it is not cached in the saved app. Does anybody know how to get around this? Thank you!

    Read the article

  • OpenGL: Move camera regardless of rotation

    - by Markus
    For a 2D board game I'd like to move and rotate an orthographic camera using coordinates given in a reference system (window space), but I simply can't get it to work. The idea is that the user can drag the camera over a surface, rotate it and scale it. Rotation and scaling should always be around the center of the current viewport. The camera is set up as: gl.glMatrixMode(GL2.GL_PROJECTION); gl.glLoadIdentity(); gl.glOrtho(-width/2, width/2, -height/2, height/2, nearPlane, farPlane); where width and height are equal to the viewport's width and height, so that 1 unit is one pixel when no zoom is applied. Since these transformations usually mean (scaling and) translating the world, then rotating it, the implementation is: gl.glMatrixMode(GL2.GL_MODELVIEW); gl.glLoadIdentity(); gl.glRotatef(rotation, 0, 0, 1); // e.g. 45° gl.glTranslatef(x, y, 0); // e.g. +10 for 10px right, -2 for 2px down gl.glScalef(zoomFactor, zoomFactor, zoomFactor); // e.g. scale by 1.5 That, however, has the nasty side effect that the translation is transformed as well, i.e. it is applied in world coordinates. If I rotate by 90° and translate again, the X and Y axes are swapped. If I reorder the transformations so they read gl.glTranslatef(x, y, 0); gl.glScalef(zoomFactor, zoomFactor, zoomFactor); gl.glRotatef(rotation, 0, 0, 1); the translation is applied correctly (in reference space, so a translation along x always visually moves the camera sideways), but rotation and scaling are now performed around the origin. It shouldn't be too hard, so what is it I'm missing?
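    One common way to get both behaviours at once is to keep the camera's centre as a point in world coordinates, build the modelview as scale, then rotate, then translate by minus that centre, and convert screen-space pan deltas into world space through the inverse rotation and zoom before adding them to the centre. Rotation and zoom then always pivot on the viewport centre, while panning still follows the mouse in window space. A minimal sketch of that idea follows, written with PyOpenGL rather than JOGL and with invented names, so treat it as an illustration of the call order rather than a drop-in replacement.

        # Sketch only: PyOpenGL stands in for the JOGL calls in the question.
        import math
        from OpenGL.GL import (GL_MODELVIEW, glLoadIdentity, glMatrixMode,
                               glRotatef, glScalef, glTranslatef)

        class Camera2D:
            def __init__(self):
                self.cx = 0.0        # world point shown at the middle of the viewport
                self.cy = 0.0
                self.rotation = 0.0  # degrees, about the viewport centre
                self.zoom = 1.0

            def apply(self):
                glMatrixMode(GL_MODELVIEW)
                glLoadIdentity()
                glScalef(self.zoom, self.zoom, 1.0)
                glRotatef(self.rotation, 0.0, 0.0, 1.0)
                # Put the stored centre at the eye-space origin (the viewport centre),
                # so the rotation and zoom above pivot around it.
                glTranslatef(-self.cx, -self.cy, 0.0)

            def pan(self, dx_pixels, dy_pixels):
                # Undo the rotation and zoom to express a window-space drag in world
                # space, then move the centre the opposite way. (Flip dy first if your
                # window's y axis points down.)
                a = math.radians(self.rotation)
                wx = (math.cos(a) * dx_pixels + math.sin(a) * dy_pixels) / self.zoom
                wy = (-math.sin(a) * dx_pixels + math.cos(a) * dy_pixels) / self.zoom
                self.cx -= wx
                self.cy -= wy

    Rotating or zooming is then just a matter of changing rotation or zoom; because the centre is re-applied every frame, the pivot stays glued to the middle of the screen.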

    Read the article

  • Hosting and scaling a Facebook application in the cloud? [closed]

    - by DhruvPathak
    Possible Duplicate: How to find web hosting that meets my requirements? We would be building a Facebook application in Django (Python), but we are still not sure where to host it economically, and with good provision to scale in case the app goes viral. Some details about the app: it would be HTML based like a website, using Django as the framework. 100K pageviews a day are expected if the app goes viral. The users will not generate any media content; only some database data will be generated by them. It would be great if someone with more experience could give guidance on the following points: A) Hosting on Google App Engine or Amazon EC2 or some other cloud like Rackspace: the points in favour of App Engine were ease of deployment, cost effectiveness and easy scaling. For EC2: full control of the virtual machine, and Amazon NoSQL and RDBMS database services in case we decide to use them. B) Does backend technology affect monthly cost? E.g. would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove to be better than raw cloud management? It is not that we are attempting premature scaling; we just want to have a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • What arguments do I send a function being called by a button in python?

    - by Jared
    I have a UI, in that UI is 4 text fields and 1 int field, then I have a function that calls to another function based on what's inside of the text fields, this function has (self, *args). My function that is being called to takes five arguments and I don't know what to put in it to make it actually work with my UI because python button's send an argument of their own. I have tried self and *args, but it doesn't work. Here is my code, didn't include most of the UI code since it is self explanatory: def crBC(self, IKJoint, FKJoint, bindJoint, xQuan, switch): ''' You should have a controller with an attribute 'ikFkBlend' - The name can be changed after the script executes. Controller should contain an enum - FK/DYN(0), IK(1). Specify the IK joint, then either the dynamic or FK joint, then the bind joint. Then a quantity of joints to pass through and connect. Tested currently on 600 joints (200 x 3), executed in less than a second. Returns nothing. Please open your script editor for details. ''' import itertools # gets children joints of the selected joint chHipIK = cmds.listRelatives(IKJoint, ad = True, type = 'joint') chHipFK = cmds.listRelatives(FKJoint, ad = True, type = 'joint') chHipBind = cmds.listRelatives(bindJoint, ad = True, type = 'joint') # list is built backwards, this reverses the list chHipIK.reverse() chHipFK.reverse() chHipBind.reverse() # appends the initial joint to the list chHipIK.append(IKJoint) chHipFK.append(FKJoint) chHipBind.append(bindJoint) # puts the last joint at the start of the list because the initial joint # was added to the end chHipIK.insert(0, chHipIK.pop()) chHipFK.insert(0, chHipFK.pop()) chHipBind.insert(0, chHipBind.pop()) # pops off the remaining joints in the list the user does not wish to be blended chHipBind[xQuan:] = [] chHipIK[xQuan:] = [] chHipFK[xQuan:] = [] # goes through the bind joints, makes a blend colors for each one, connects # the switch to the blender for a, b, c in itertools.izip(chHipBind, chHipIK, chHipFK): rotBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'rotate_BC') tranBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'tran_BC') scaleBC = cmds.shadingNode('blendColors', asUtility = True, n = a + 'scale_BC') cmds.connectAttr(switch + '.ikFkSwitch', rotBC + '.blender') cmds.connectAttr(switch + '.ikFkSwitch', tranBC + '.blender') cmds.connectAttr(switch + '.ikFkSwitch', scaleBC + '.blender') # goes through the ik joints, connects to the blend colors cmds.connectAttr(b + '.rotate', rotBC + '.color1', force = True) cmds.connectAttr(b + '.translate', tranBC + '.color1', force = True) cmds.connectAttr(b + '.scale', scaleBC + '.color1', force = True) # connects FK joints to the blend colors cmds.connectAttr(c + '.rotate', rotBC + '.color2') cmds.connectAttr(c + '.translate', tranBC + '.color2') cmds.connectAttr(c + '.scale', scaleBC + '.color2') # connects blend colors to bind joints cmds.connectAttr(rotBC + '.output', a + '.rotate') cmds.connectAttr(tranBC + '.output', a + '.translate') cmds.connectAttr(scaleBC + '.output', a + '.scale') ------------------- def execCrBC(self, *args): g.crBC(cmds.textField(self.ikJBC, q = True, tx = True), cmds.textField(self.fkJBC, q = True, tx = True), cmds.textField(self.bindJBC, q = True, tx = True), cmds.intField(self.bQBC, q = True, v = True), cmds.textField(self.sCBC, q = True, tx = True)) ------------------- self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) 
cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) Here is the code causing the problem as requested: import maya.cmds as cmds import jtRigUI.createDummyRig as dum import jtRigUI.createSkeleton as sk import jtRigUI.generalUtilities as gu import jtRigUI.createLegRig as lr import jtRigUI.createArmRig as ar class RUI(dum.Dict, dum.Dummy, sk.Skel, sk.FiSkel, lr.LeanLocs, lr.LegRig, ar.ArmRig, gu.Gutils): def __init__(self, charNameUI, gScaleUI, fingButtonGrp, thumbCheckBox, spineButtonGrp, neckButtonGrp, ikJBC, fkJBC, bindJBC, bQBC, sCBC): rigUI = 'rigUI' if cmds.window(rigUI, exists = True): cmds.deleteUI(rigUI) rigUI = cmds.window(rigUI, t = 'JT Rigging UI', sizeable = False, tb = True, mnb = False, mxb = False, menuBar = True, tlb = True, nm = 5) form = cmds.formLayout() tabs = cmds.tabLayout(innerMarginWidth = 1, innerMarginHeight = 1) rigUIMenu = cmds.menu('Help', hm = True) aboutMenu = cmds.menuItem('about') cmds.popupMenu('about', button = 1) deleteUIMenu = cmds.menu('Delete', hm = True) cmds.menuItem('dummySkeleton') cmds.formLayout(form, edit = True, attachForm = ((tabs, 'top', 0), (tabs, 'left', 0), (tabs, 'bottom', 0), (tabs, 'right', 0)), w = 30) tab1 = cmds.rowColumnLayout('Dummy') #cmds.columnLayout(rowSpacing = 10) #cmds.setParent('..') cmds.frameLayout(l = 'A: Dummy Skeleton Setup', w = 400) self.charNameUI = cmds.textFieldGrp (label="Optional Character Name:", ann="Insert a name for the character or leave empty.", tx = '', w = 1) fingJUI = cmds.frameLayout(l = 'B: Number of Fingers', w = 10) cmds.text('\n', h = 5) self.fingButtonGrp = cmds.radioButtonGrp('fingRadio', p = fingJUI, l = 'Fingers: ', sl = 4, w = 1, numberOfRadioButtons = 4, labelArray4 = ['One', 'Two', 'Three', 'Four'], ct2 = ('left', 'left'), cw5 = [60,60,60,60,60]) self.thumbCheckBox = cmds.checkBoxGrp(l = 'Thumb: ', v1 = True) cmds.text('\n', h = 5) spineJUI = cmds.frameLayout(l = 'C: Number of Spine Joints') cmds.text('\n', h = 5) self.spineButtonGrp = cmds.radioButtonGrp('spineRadio', p = spineJUI, l = 'Spine Joints: ', sl = 2, w = 1, numberOfRadioButtons = 3, labelArray3 = ['Three', 'Five', 'Ten'], ct2 = ('left', 'left'), cw4 = [95,95,95,95]) cmds.text('\n', h = 5) neckJUI = cmds.frameLayout(l = 'D: Number of Neck Joints') cmds.text('\n', h = 5) self.neckButtonGrp = cmds.radioButtonGrp('neckRadio', p = neckJUI, l = 'Neck Joints: ', sl = 0, w = 1, numberOfRadioButtons = 3, labelArray3 = ['Two', 'Three', 'Four'], ct2 = ('left', 'left'), cw4 = [95,95,95,95]) cmds.text('\n', h = 5) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.frameLayout('E: Creation') cmds.text('SAVE FIRST: CAN NOT UNDO', bgc = (0.2,0.2,0.2)) cmds.button(l = '\nCreate Dummy Skeleton\n', c = self.build) # also have it make char name field grey cmds.text('Elbows and Knees must have bend.', bgc = (0.2,0.2,0.2)) cmds.columnLayout() cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab2 = cmds.rowColumnLayout('Skeleton') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Skeleton Setup') cmds.text('SAVE FIRST: CAN NOT UNDO', bgc = (0.2,0.2,0.2)) cmds.button(l = '\nConvert to Skeleton - Orient - Set LRA\n', c = self.buildSkel) self.gScaleUI = cmds.textFieldGrp (label="Scale Multiplier:", ann="Scale multipler of Character: basis for all further base controllers", tx = '1.0', w = 1, ed = False, en = False, visible = True) cmds.frameLayout('B: Manual Orientation') cmds.text('You must manually check finger, 
thumb, leg, foot orientation specifically.\nConfirm rest of joints.\nSpine: X aim, Y point backwards from spine, Z to the side.\nFingers: X is aim, Y points upwards, Z to the side - Spread on Y, curl on Z.\nFoot: Pivots on Y, rolls on Z, leans on X.') cmds.columnLayout() cmds.setParent('..') cmds.frameLayout('C: Finalize Creation of Skeleton') cmds.button(l = '\nFinalize Skeleton\n', c = self.finishS) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab3 = cmds.rowColumnLayout('Legs') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Leg Rig Setup') cmds.button(l = '\nGenerate Foot Lean Locators\n', c = self.makeLean) cmds.text('Place on either side of the foot.\nDo not rotate: Automatic orientation in place.') cmds.frameLayout(l = 'B: Rig Legs') cmds.button(l = '\nRig Legs\n', c = self.makeLegs) cmds.setParent('..') cmds.setParent('..') cmds.setParent('..') tab4 = cmds.rowColumnLayout('Arms') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'A: Arm Rig Setup') cmds.button(l = '\nA: Rig Arms\n', c = self.makeArms) cmds.setParent('..') cmds.setParent('..') tab5 = cmds.rowColumnLayout('Spine and Head') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'Spine Rig Setup') cmds.setParent('..') cmds.setParent('..') tab6 = cmds.rowColumnLayout('Stretchy IK') cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'Stretchy Setup') cmds.setParent('..') cmds.setParent('..') tab6 = cmds.rowColumnLayout('Extras') cmds.scrollLayout(saw = 600, sah = 600, cr = True) cmds.columnLayout(columnAttach = ('both', 5), rowSpacing = 10, columnWidth = 150) cmds.setParent('..') cmds.frameLayout(l = 'General Utitlities') cmds.text('\nHere are all my general utilities for various things') cmds.frameLayout(l = 'Automatic Blend Colors Creation and Connection') cmds.rowColumnLayout(nc = 5, w = 10) cmds.text('IK Joint:') cmds.text(l = '') cmds.text('FK/Dyn Joint:') cmds.text(l = '') cmds.text('Bind Joint:') self.ikJBC = cmds.textField() cmds.text(l = '') self.fkJBC = cmds.textField() cmds.text(l = '') self.bindJBC = cmds.textField() cmds.text(' \nBlend Quantity:') cmds.text(l = '') cmds.text(' \nSwitch Control:') cmds.text(l = '') cmds.text(l = '') self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) cmds.text(l = '') cmds.setParent('..') cmds.frameLayout(l = 'Make Spline IK Curve Stretch And Squash') cmds.rowColumnLayout(nc = 5, w = 10) cmds.text('Curve Name:') cmds.text(l = '') cmds.text('Setup Name:') cmds.text(l = '') cmds.text('Joint Quantity:') self.ikJBC = cmds.textField() cmds.text(l = '') self.fkJBC = cmds.textField() cmds.text(l = '') self.bindJBC = cmds.textField() cmds.text(' \nSwitch Control:') cmds.text(l = '') cmds.text(' \nGlobal Control:') cmds.text(l = '') cmds.text(l = '') self.bQBC = cmds.intField() cmds.text(l = '') self.sCBC = cmds.textField() cmds.text(l = '') cmds.button(l = 'Help Docs', c = self.crBC.__doc__) cmds.setParent('..') cmds.button(l = 'Create', c = self.execCrBC) cmds.setParent('..') cmds.showWindow(rigUI) r = RUI('charNameUI', 'gScaleUI', 'fingButtonGrp', 'thumbCheckBox', 'spineButtonGrp', 
'neckButtonGrp', 'ikJBC', 'fkJBC', 'bindJBC', 'bQBC', 'sCBC') # last modified at 6.20 pm 29th June 2011
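    The short answer to the question in the title: Maya calls a button's command with one extra argument of its own (typically False), so the callback either has to swallow it with *args, as execCrBC already does, or have its real arguments bound up front with functools.partial. A minimal, self-contained sketch of the partial pattern is below; the field names are invented and a print stands in for the real crBC. Note also that c = self.crBC.__doc__ hands the button a plain string, which Maya will try to execute rather than display, so the help button needs a callable too.

        # Sketch: binding UI field handles to a button callback with functools.partial.
        # Field names are hypothetical; cr_bc stands in for the real crBC.
        from functools import partial
        import maya.cmds as cmds

        def cr_bc(ik_joint, fk_joint, bind_joint, quantity, switch):
            """Stand-in for crBC; a docstring so the help button has something to show."""
            print(ik_joint, fk_joint, bind_joint, quantity, switch)

        def on_create(ik_field, fk_field, bind_field, qty_field, switch_field, *_):
            # *_ swallows the extra argument Maya appends when the button fires.
            # Query the widgets at click time, then call the worker function.
            cr_bc(cmds.textField(ik_field, q=True, tx=True),
                  cmds.textField(fk_field, q=True, tx=True),
                  cmds.textField(bind_field, q=True, tx=True),
                  cmds.intField(qty_field, q=True, v=True),
                  cmds.textField(switch_field, q=True, tx=True))

        win = cmds.window(title='Blend Colors')
        cmds.columnLayout(adjustableColumn=True)
        ik_field = cmds.textField()
        fk_field = cmds.textField()
        bind_field = cmds.textField()
        qty_field = cmds.intField()
        switch_field = cmds.textField()
        # partial binds the field handles now; the click supplies the trailing argument later.
        cmds.button(label='Create',
                    command=partial(on_create, ik_field, fk_field, bind_field,
                                    qty_field, switch_field))
        cmds.button(label='Help Docs',
                    command=lambda *_: cmds.confirmDialog(title='Help',
                                                          message=cr_bc.__doc__ or ''))
        cmds.showWindow(win)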

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s, with companies like Pixar creating feature-length computer animations and with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the viewpoint must be traced until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
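    The "basic console application with a few nested loops" that generates the pin matrix isn't listed in the article; a rough sketch of what it might look like is below, in Python rather than the original console app. Only the sphere/cylinder object syntax comes from the scene file above; the grid size, spacing and radii are invented, and the real generator sets each pin's Z offset from a bitmap as described next.

        # Hypothetical re-creation of the pin-matrix generator: two interleaved grids
        # of pins written in the PolyRay object syntax shown above.
        def pin(x, y, z):
            head = "object { sphere <%.3f, %.3f, %.3f>, 0.30 chrome }\n" % (x, y, z)
            shaft = ("object { cylinder <%.3f, %.3f, %.3f>, <%.3f, %.3f, %.3f>, 0.08 chrome }\n"
                     % (x, y, z, x, y, z - 4.0))
            return head + shaft

        def write_pin_board(path, rows=75, cols=75, spacing=0.14):
            with open(path, "w") as scene:
                for j in range(rows):
                    for i in range(cols):
                        x = (i - cols / 2.0) * spacing
                        y = (j - rows / 2.0) * spacing
                        scene.write(pin(x, y, 0.0))                                  # first grid
                        scene.write(pin(x + spacing / 2.0, y + spacing / 2.0, 0.0))  # offset grid

        write_pin_board("pins.pi")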
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used. Windows Kinect The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, giving .NET developers easy access to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer’s perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The costs for creating a render farm, assuming a cost of $500 per server, are: 1 server, $500; 16 servers, $8,000; 256 servers, $128,000. As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is: 1 worker role, 256 hours, $30.72; 16 worker roles, 16 hours, $30.72; 256 worker roles, 1 hour, $30.72. Using worker roles in Windows Azure results in the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components: · On-Premise: o Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images. o Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue. o Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process. o Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete. · Windows Azure: o Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment. The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in balancing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each. Completed Animation I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. · Windows Azure can provide massive compute power, on demand, in a matter of minutes. · The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. · Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. · No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. · Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. · Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. · Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started. · Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle. · If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. · Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a start-up task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. · Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
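    The billing arithmetic behind the "$30.72 whatever the instance count" figures is easy to sanity-check. A small sketch follows; the $0.12 per small-instance hour rate is not quoted in the article but is implied by its own figures ($1.92 for 16 instance-hours, $30.72 for 256 instance-hours).

        # Per-clock-hour, per-instance billing: the bill depends on instance-hours,
        # not on how many instances the work is spread across.
        import math

        RATE_PER_INSTANCE_HOUR = 30.72 / 256          # $0.12, derived from the article's figures

        def render_cost(total_compute_hours, instances):
            # Each instance is billed for every clock hour it is deployed,
            # so round its running time up to whole hours.
            hours_per_instance = math.ceil(total_compute_hours / instances)
            return instances * hours_per_instance * RATE_PER_INSTANCE_HOUR

        for n in (1, 16, 256):
            print("%3d worker roles: $%.2f" % (n, render_cost(256, n)))
        # 1, 16 and 256 roles all come out at $30.72, matching the figures above,
        # but only the 256-role run finishes inside an hour.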

    Read the article

  • 8 Things You Can Do In Android’s Developer Options

    - by Chris Hoffman
    The Developer Options menu in Android is a hidden menu with a variety of advanced options. These options are intended for developers, but many of them will be interesting to geeks. You’ll have to perform a secret handshake to enable the Developer Options menu in the Settings screen, as it’s hidden from Android users by default. Follow the simple steps to quickly enable Developer Options. Enable USB Debugging “USB debugging” sounds like an option only an Android developer would need, but it’s probably the most widely used hidden option in Android. USB debugging allows applications on your computer to interface with your Android phone over the USB connection. This is required for a variety of advanced tricks, including rooting an Android phone, unlocking it, installing a custom ROM, or even using a desktop program that captures screenshots of your Android device’s screen. You can also use ADB commands to push and pull files between your device and your computer or create and restore complete local backups of your Android device without rooting. USB debugging can be a security concern, as it gives computers you plug your device into access to your phone. You could plug your device into a malicious USB charging port, which would try to compromise you. That’s why Android forces you to agree to a prompt every time you plug your device into a new computer with USB debugging enabled. Set a Desktop Backup Password If you use the above ADB trick to create local backups of your Android device over USB, you can protect them with a password with the Set a desktop backup password option here. This password encrypts your backups to secure them, so you won’t be able to access them if you forget the password. Disable or Speed Up Animations When you move between apps and screens in Android, you’re spending some of that time looking at animations and waiting for them to go away. You can disable these animations entirely by changing the Window animation scale, Transition animation scale, and Animator duration scale options here. If you like animations but just wish they were faster, you can speed them up. On a fast phone or tablet, this can make switching between apps nearly instant. If you thought your Android phone was speedy before, just try disabling animations and you’ll be surprised how much faster it can seem. Force-Enable FXAA For OpenGL Games If you have a high-end phone or tablet with great graphics performance and you play 3D games on it, there’s a way to make those games look even better. Just go to the Developer Options screen and enable the Force 4x MSAA option. This will force Android to use 4x multisample anti-aliasing in OpenGL ES 2.0 games and other apps. This requires more graphics power and will probably drain your battery a bit faster, but it will improve image quality in some games. This is a bit like force-enabling antialiasing using the NVIDIA Control Panel on a Windows gaming PC. See How Bad Task Killers Are We’ve written before about how task killers are worse than useless on Android. If you use a task killer, you’re just slowing down your system by throwing out cached data and forcing Android to load apps from system storage whenever you open them again. Don’t believe us? Enable the Don’t keep activities option on the Developer options screen and Android will force-close every app you use as soon as you exit it. 
Enable this app and use your phone normally for a few minutes — you’ll see just how harmful throwing out all that cached data is and how much it will slow down your phone. Don’t actually use this option unless you want to see how bad it is! It will make your phone perform much more slowly — there’s a reason Google has hidden these options away from average users who might accidentally change them. Fake Your GPS Location The Allow mock locations option allows you to set fake GPS locations, tricking Android into thinking you’re at a location where you actually aren’t. Use this option along with an app like Fake GPS location and you can trick your Android device and the apps running on it into thinking you’re at locations where you actually aren’t. How would this be useful? Well, you could fake a GPS check-in at a location without actually going there or confuse your friends in a location-tracking app by seemingly teleporting around the world. Stay Awake While Charging You can use Android’s Daydream Mode to display certain apps while charging your device. If you want to force Android to display a standard Android app that hasn’t been designed for Daydream Mode, you can enable the Stay awake option here. Android will keep your device’s screen on while charging and won’t turn it off. It’s like Daydream Mode, but can support any app and allows users to interact with them. Show Always-On-Top CPU Usage You can view CPU usage data by toggling the Show CPU usage option to On. This information will appear on top of whatever app you’re using. If you’re a Linux user, the three numbers on top probably look familiar — they represent the system load average. From left to right, the numbers represent your system load over the last one, five, and fifteen minutes. This isn’t the kind of thing you’d want enabled most of the time, but it can save you from having to install third-party floating CPU apps if you want to see CPU usage information for some reason. Most of the other options here will only be useful to developers debugging their Android apps. You shouldn’t start changing options you don’t understand. If you want to undo any of these changes, you can quickly erase all your custom options by sliding the switch at the top of the screen to Off.     

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory. The problem with shared memory Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory. Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, that deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently. At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (eg. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly. A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system. A sync-free data structure Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this intel article which reuses the nodes by putting them in a second queue to send back to the producer. * In all these cases, you need to use memory barriers correctly, but these are local to a core, so don't have the same scalability problems as atomic memory operations. Performance tests I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64 bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here. 
Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6 core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment! Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores? Still to be solved. The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
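    To make the single producer, single consumer idea above concrete, here is a minimal fixed-size SPSC ring buffer, sketched in Python rather than .NET. The point is the shape of the algorithm: the head index is written only by the producer and the tail index only by the consumer, so no compare-and-swap is needed. As the article's footnote says, a real C# or C++ version still needs memory barriers when publishing and reading the indices; Python is used here purely for brevity.

        # Illustrative sketch of a fixed-capacity single-producer single-consumer queue.
        class SpscQueue:
            def __init__(self, capacity):
                # Keep one slot empty so "full" and "empty" are distinguishable.
                self._size = capacity + 1
                self._buffer = [None] * self._size
                self._head = 0   # next slot to write; advanced only by the producer
                self._tail = 0   # next slot to read; advanced only by the consumer

            def try_enqueue(self, item):            # producer thread only
                next_head = (self._head + 1) % self._size
                if next_head == self._tail:         # queue is full
                    return False
                self._buffer[self._head] = item     # write the slot...
                self._head = next_head              # ...then publish it (release barrier in C#/C++)
                return True

            def try_dequeue(self):                  # consumer thread only
                if self._tail == self._head:        # queue is empty
                    return None
                item = self._buffer[self._tail]
                self._buffer[self._tail] = None     # drop the reference so it can be reclaimed
                self._tail = (self._tail + 1) % self._size
                return item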

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for it's success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-side availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control, in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on), in PaaS the hardware and the OS is abstracted and you focus on code and data only, and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture moves to the primary factor in your decision. It's problem-first architecting, and then laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then creating documents that  let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish) and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR) then this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in. 
Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials: Sign Up for Windows Azure; Add Windows Azure to a RightScale Account; Windows Azure Virtual Machines 3-tier Deployment.
Momma Bear Level - Just the right level... ;0): Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution. Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (work in progress).
Baby Bear Level - Marketing: Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos. Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better. SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines.

    Read the article

  • In Social Relationship Management, the Spirit is Willing, but Execution is Weak

    - by Mike Stiles
    In our final talk in this series with Aberdeen’s Trip Kucera, we wanted to find out if enterprise organizations are actually doing anything about what they’re learning around the importance of communicating via social and using social listening for a deeper understanding of customers and prospects. We found out that if your brand is lagging behind, you’re not alone. Spotlight: How was Aberdeen able to find out if companies are putting their money where their mouth is when it comes to implementing social across the enterprise? Trip: One way to think about the relative challenges a business has in a given area is to look at the gap between “say” and “do.” The first of those words reveals the brand’s priorities, while the second reveals their ability to execute on those priorities. In Aberdeen’s research, we capture this by asking firms to rank the value of a set of activities from one on the low end to five on the high end. We then ask them to rank their ability to execute those same activities, again on a one to five, not effective to highly effective scale. Spotlight: And once you get their self-assessments, what is it you’re looking for? Trip: There are two things we’re looking for in this analysis. The first is we want to be able to identify the widest gaps between perception of value and execution. This suggests impediments to adoption or simply a high level of challenge, be it technical or otherwise. It may also suggest areas where we can expect future investment and innovation. Spotlight: So the biggest potential pain points surface, places where they know something is critical but also know they aren’t doing much about it. What’s the second thing you look for? Trip: The second thing we want to do is look at specific areas in which high-performing companies, the Leaders, are out-executing the Followers. This points to the business impact of these activities since Leaders are defined by a set of business performance metrics. Put another way, we’re correlating adoption of specific business competencies with performance, looking for what high-performers do differently. Spotlight: Ah ha, that tells us what steps the winners are taking that are making them winners. So what did you find out? Trip: Generally speaking, we see something of a glass curtain when it comes to the social relationship management execution gap. There isn’t a single social media activity in which more than 50% of respondents indicated effectiveness, which would be a 4 or 5 on that 1-5 scale. This despite the fact that 70% of firms indicate that generating positive social media mentions is valuable or very valuable, a 4 or 5 on our 1-5 scale. Spotlight: Well at least they get points for being honest. The verdict they’re giving themselves is that they just aren’t cutting it in these highly critical social development areas. Trip: And the widest gap is around directly engaging with customers and/or prospects on social networks, which 69% of firms rated as valuable but only 34% of companies say they are executing well. Perhaps even more interesting is that these two are interdependent since you’re most likely to generate goodwill on social through happy, engaged customers. This data also suggests that social is largely being used as a broadcast channel rather than for one-to-one engagement. As we’ve discussed previously, social is an inherently personal media. 
Spotlight: And if they’re still using it as a broadcast channel, that shows they still fail to understand the root of social and see it as just another outlet for their ads and push-messaging. That’s depressing. Trip: A second way to evaluate this data is by using Aberdeen’s performance benchmarking. The story is a bit different, but consistent in its own way. The first thing we notice is that Leaders are more effective in their execution of several key social relationship management capabilities, namely generating positive mentions and engaging with “influencers” and customers. Because Aberdeen uses a broad set of performance metrics, from website conversion to annual revenue growth, to rank the respondents as either “Leaders” (top 35% in weighted performance) or “Followers” (bottom 65% in weighted performance), we can then correlate high social effectiveness with company performance. We can also connect the specific social capabilities used by Leaders with effectiveness. We spoke about a few of those key capabilities last time and also discuss them in a new report: Social Powers Activate: Engineering Social Engagement to Win the Hidden Sales Cycle. Spotlight: What all that tells me is there are rewards for making the effort and getting it right. That’s how you become a Leader. Trip: But there’s another part of the story, which is that overall effectiveness, even among Leaders, is muted. There’s just one activity in which a majority of Leaders cite high effectiveness: the generation of positive buzz. While 80% of Leaders indicate “directly engaging with customers” through social media channels is valuable, the highest rated activity among Leaders, only 42% say they’re effective. This gap even among Leaders shows the challenges still involved in effective social relationship management. @mikestiles Photo: stock.xchng

    Read the article

  • LIBGDX "parsing error emitter" with 2 or more emitters [on hold]

    - by flow969
    I have a problem with the use of particle effect of LIBGDX with 2 or more emitters. After using ParticleEditor to create my .p file, I use it in my code BUT...when I use only 1 emitter it's fine but with more than 1, not fine ! :( Here is my error code in java console : Exception in thread "LWJGL Application" java.lang.RuntimeException: Error parsing emitter: - Delay - at com.badlogic.gdx.graphics.g2d.ParticleEmitter.load(ParticleEmitter.java:910) at com.badlogic.gdx.graphics.g2d.ParticleEmitter.<init>(ParticleEmitter.java:95) at com.badlogic.gdx.graphics.g2d.ParticleEffect.loadEmitters(ParticleEffect.java:154) at com.badlogic.gdx.graphics.g2d.ParticleEffect.load(ParticleEffect.java:138) at com.fasgame.fishtrip.android.screens.GameScreen.show(GameScreen.java:313) at com.badlogic.gdx.Game.setScreen(Game.java:61) at com.fasgame.fishtrip.android.screens.MainMenuScreen.render(MainMenuScreen.java:71) at com.badlogic.gdx.Game.render(Game.java:46) at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:206) at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:114) Caused by: java.lang.NumberFormatException: For input string: "- Count -" at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source) at sun.misc.FloatingDecimal.parseFloat(Unknown Source) at java.lang.Float.parseFloat(Unknown Source) at com.badlogic.gdx.graphics.g2d.ParticleEmitter.readFloat(ParticleEmitter.java:929) at com.badlogic.gdx.graphics.g2d.ParticleEmitter$RangedNumericValue.load(ParticleEmitter.java:1062) at com.badlogic.gdx.graphics.g2d.ParticleEmitter.load(ParticleEmitter.java:866) ... 9 more And here is my particle effect .p file : Blanc - Delay - active: false - Duration - lowMin: 3000.0 lowMax: 3000.0 - Count - min: 0 max: 200 - Emission - lowMin: 0.0 lowMax: 0.0 highMin: 250.0 highMax: 250.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Life - lowMin: 500.0 lowMax: 500.0 highMin: 500.0 highMax: 500.0 relative: false scalingCount: 3 scaling0: 1.0 scaling1: 0.47058824 scaling2: 0.0 timelineCount: 3 timeline0: 0.0 timeline1: 0.51369864 timeline2: 1.0 - Life Offset - active: false - X Offset - active: false - Y Offset - active: false - Spawn Shape - shape: point - Spawn Width - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Spawn Height - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Scale - lowMin: 0.0 lowMax: 0.0 highMin: 70.0 highMax: 70.0 relative: true scalingCount: 2 scaling0: 1.0 scaling1: 0.0 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Velocity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 30.0 highMax: 300.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Angle - active: true lowMin: 220.0 lowMax: 320.0 highMin: 220.0 highMax: 320.0 relative: false scalingCount: 2 scaling0: 0.0 scaling1: 0.98039216 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Rotation - active: false - Wind - active: false - Gravity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Tint - colorsCount: 3 colors0: 0.50980395 colors1: 0.7647059 colors2: 0.7921569 timelineCount: 1 timeline0: 0.0 - Transparency - lowMin: 0.0 lowMax: 0.0 highMin: 1.0 highMax: 1.0 relative: false scalingCount: 4 scaling0: 1.0 scaling1: 1.0 scaling2: 1.0 scaling3: 1.0 
timelineCount: 4 timeline0: 0.0 timeline1: 0.36301368 timeline2: 0.6164383 timeline3: 1.0 - Options - attached: false continuous: true aligned: false additive: true behind: false premultipliedAlpha: false pre_particle.png Bleu - Delay - active: false - Duration - lowMin: 3000.0 lowMax: 3000.0 - Count - min: 0 max: 200 - Emission - lowMin: 0.0 lowMax: 0.0 highMin: 250.0 highMax: 250.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Life - lowMin: 500.0 lowMax: 500.0 highMin: 500.0 highMax: 500.0 relative: false scalingCount: 3 scaling0: 1.0 scaling1: 0.47058824 scaling2: 0.0 timelineCount: 3 timeline0: 0.0 timeline1: 0.51369864 timeline2: 1.0 - Life Offset - active: false - X Offset - active: false - Y Offset - active: false - Spawn Shape - shape: point - Spawn Width - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Spawn Height - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Scale - lowMin: 0.0 lowMax: 0.0 highMin: 70.0 highMax: 70.0 relative: true scalingCount: 2 scaling0: 1.0 scaling1: 0.0 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Velocity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 30.0 highMax: 300.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Angle - active: true lowMin: 220.0 lowMax: 320.0 highMin: 220.0 highMax: 320.0 relative: false scalingCount: 2 scaling0: 0.0 scaling1: 0.98039216 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Rotation - active: false - Wind - active: false - Gravity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Tint - colorsCount: 3 colors0: 0.0 colors1: 0.7254902 colors2: 0.7921569 timelineCount: 1 timeline0: 0.0 - Transparency - lowMin: 0.0 lowMax: 0.0 highMin: 1.0 highMax: 1.0 relative: false scalingCount: 6 scaling0: 0.0 scaling1: 1.0 scaling2: 1.0 scaling3: 1.0 scaling4: 1.0 scaling5: 0.0 timelineCount: 6 timeline0: 0.0 timeline1: 0.047945205 timeline2: 0.34246576 timeline3: 0.6712329 timeline4: 0.94520545 timeline5: 1.0 - Options - attached: false continuous: true aligned: false additive: true behind: false premultipliedAlpha: false pre_particle.png BleuFonce - Delay - active: false - Duration - lowMin: 3000.0 lowMax: 3000.0 - Count - min: 0 max: 200 - Emission - lowMin: 0.0 lowMax: 0.0 highMin: 250.0 highMax: 250.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Life - lowMin: 500.0 lowMax: 500.0 highMin: 500.0 highMax: 500.0 relative: false scalingCount: 3 scaling0: 1.0 scaling1: 0.47058824 scaling2: 0.0 timelineCount: 3 timeline0: 0.0 timeline1: 0.51369864 timeline2: 1.0 - Life Offset - active: false - X Offset - active: false - Y Offset - active: false - Spawn Shape - shape: point - Spawn Width - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Spawn Height - lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Scale - lowMin: 0.0 lowMax: 0.0 highMin: 70.0 highMax: 70.0 relative: true scalingCount: 2 scaling0: 1.0 scaling1: 0.0 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Velocity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 30.0 highMax: 300.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 
0.0 - Angle - active: true lowMin: 220.0 lowMax: 320.0 highMin: 220.0 highMax: 320.0 relative: false scalingCount: 2 scaling0: 0.0 scaling1: 0.98039216 timelineCount: 2 timeline0: 0.0 timeline1: 1.0 - Rotation - active: false - Wind - active: false - Gravity - active: true lowMin: 0.0 lowMax: 0.0 highMin: 0.0 highMax: 0.0 relative: false scalingCount: 1 scaling0: 1.0 timelineCount: 1 timeline0: 0.0 - Tint - colorsCount: 3 colors0: 0.0 colors1: 0.7294118 colors2: 1.0 timelineCount: 1 timeline0: 0.0 - Transparency - lowMin: 0.0 lowMax: 0.0 highMin: 1.0 highMax: 1.0 relative: false scalingCount: 4 scaling0: 1.0 scaling1: 0.0 scaling2: 0.0 scaling3: 1.0 timelineCount: 4 timeline0: 0.0 timeline1: 0.001 timeline2: 0.5753425 timeline3: 0.79452056 - Options - attached: false continuous: true aligned: false additive: true behind: false premultipliedAlpha: false pre_particle.png For the "- Image Path -" missing it's normal if I let them in it doesn't work even with only 1 emitter PS : I've already updated my lib to the last release
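For reference, the snippet below shows the usual way a multi-emitter .p file saved from ParticleEditor is loaded and drawn with libGDX; the effect-file and image-directory paths are placeholders, and this is not claimed to be the fix for the parse error above. One hedged guess worth checking against the ParticleEmitter source: the loader reads image file names after each emitter block until it hits an empty line, so a missing blank line (or missing image-path entry) between emitters can make it swallow the next emitter's name and then choke on "- Delay -".

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.graphics.g2d.ParticleEffect;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;

    public class EffectExample extends ApplicationAdapter {
        private ParticleEffect effect;
        private SpriteBatch batch;

        @Override
        public void create() {
            batch = new SpriteBatch();
            effect = new ParticleEffect();
            // Second argument is the directory that contains pre_particle.png etc.
            effect.load(Gdx.files.internal("effects/myeffect.p"), Gdx.files.internal("effects"));
            effect.setPosition(240, 400);
            effect.start();
        }

        @Override
        public void render() {
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            batch.begin();
            // Draws every emitter defined in the file; delta advances the simulation.
            effect.draw(batch, Gdx.graphics.getDeltaTime());
            batch.end();
        }
    }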

    Read the article

  • XNA - Use Mouse To Rotate & Arrow Keys To Scroll A Linearly Wrapped Texture:

    - by The Thing
    Using XNA I'm working on my first, relatively simple, videogame for the PC. At the moment my game window is 1024 X 768 and I have a 'Starfield' linearly wrapped background texture 1280 X 1280 in size whose origin has been set to its center point (width / 2, height / 2). This texture is drawn onscreen using (graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight / 2) to place the origin in the center of the window. I want to be able to use the horizontal movement of the mouse to rotate my texture left or right and use the arrow keys to scroll the texture in four directions. From my own related coding experiments I have found that once I rotate the texture it no longer scrolls in the direction I want, it's as if somehow the XNA framework's 'sense of direction' has been 'rotated' along with the texture. As an example of what I've described above lets say I rotate the texture 45 degrees to the right, then pressing the up arrow key results in the texture scrolling diagonally from top-right to bottom-left. This is not what I want, regardless of the degree or direction of rotation I want my texture to scroll straight up, straight down, or to the left or right depending on which arrow key was pressed. How do I go about accomplishing this? Any help or guidance is appreciated. To finish up there are two points I'd like to clarify: [1] The reason I'm using linear wrapping on my starfield texture is that it gives a nice impression of an endless starfield. [2] Using a texture at least 1280 X 1280 in conjunction with a game window of 1024 X 768 means that at no point in it's rotation will the edges of the texture become visible. Thanks for reading..... Update # 1 - as requested by RCIX: The code below is what I was referring to earlier when I mentioned 'related coding experiments'. As you can see I am scrolling a linearly wrapped texture in the direction I've moved the mouse relative to the center of the screen. This works perfectly if I don't rotate the texture, but once I do rotate it the direction of the scrolling gets messed up for some reason. public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; int x; int y; float z = 250f; Texture2D Overlay; Texture2D RotatingBackground; Rectangle? 
sourceRectangle; Color color; float rotation; Vector2 ScreenCenter; Vector2 Origin; Vector2 scale; Vector2 Direction; SpriteEffects effects; float layerDepth; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } protected override void Initialize() { graphics.PreferredBackBufferWidth = 1024; graphics.PreferredBackBufferHeight = 768; graphics.ApplyChanges(); Direction = Vector2.Zero; IsMouseVisible = true; ScreenCenter = new Vector2(graphics.PreferredBackBufferWidth / 2, graphics.PreferredBackBufferHeight / 2); Mouse.SetPosition((int)graphics.PreferredBackBufferWidth / 2, (int)graphics.PreferredBackBufferHeight / 2); sourceRectangle = null; color = Color.White; rotation = 0.0f; scale = new Vector2(1.0f, 1.0f); effects = SpriteEffects.None; layerDepth = 1.0f; base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); Overlay = Content.Load<Texture2D>("Overlay"); RotatingBackground = Content.Load<Texture2D>("Background"); Origin = new Vector2((int)RotatingBackground.Width / 2, (int)RotatingBackground.Height / 2); } protected override void UnloadContent() { } protected override void Update(GameTime gameTime) { float timePassed = (float)gameTime.ElapsedGameTime.TotalSeconds; MouseState ms = Mouse.GetState(); Vector2 MousePosition = new Vector2(ms.X, ms.Y); Direction = ScreenCenter - MousePosition; if (Direction != Vector2.Zero) { Direction.Normalize(); } x += (int)(Direction.X * z * timePassed); y += (int)(Direction.Y * z * timePassed); //No rotation = texture scrolls as intended, With rotation = texture no longer scrolls in the direction of the mouse. My update method needs to somehow compensate for this. //rotation += 0.01f; base.Update(gameTime); } protected override void Draw(GameTime gameTime) { spriteBatch.Begin(SpriteSortMode.Deferred, null, SamplerState.LinearWrap, null, null); spriteBatch.Draw(RotatingBackground, ScreenCenter, new Rectangle(x, y, RotatingBackground.Width, RotatingBackground.Height), color, rotation, Origin, scale, effects, layerDepth); spriteBatch.Draw(Overlay, Vector2.Zero, Color.White); spriteBatch.End(); base.Draw(gameTime); } }
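One way to frame the problem above: the x/y offsets index the wrapped source rectangle, which lives in texture space, while the mouse and arrow keys describe movement in screen space; once the texture is rotated the two spaces no longer line up. A common approach is to rotate the desired screen-space delta back through the current rotation before adding it to the source-rectangle offsets. The helper below sketches that idea; it is written in Java rather than C#/XNA, the names are invented, and the sign of the angle depends on your rotation convention, so treat it as a starting point rather than a drop-in fix.

    // Hypothetical helper: convert a screen-space scroll delta into texture-space
    // coordinates by undoing the texture's current rotation (angle in radians).
    final class ScrollMath {
        static float[] toTextureSpace(float dx, float dy, float rotation) {
            float cos = (float) Math.cos(-rotation); // rotate by the negative angle
            float sin = (float) Math.sin(-rotation);
            return new float[] {
                dx * cos - dy * sin,  // texture-space x component
                dx * sin + dy * cos   // texture-space y component
            };
        }
    }

Applied to the Update method, the idea is to push Direction (or the arrow-key vector) through a transform like this before accumulating x and y, so the scroll stays screen-aligned however far the background has been rotated.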

    Read the article

  • Abstracting functionality

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/22/abstracting-functionality.aspxWhat is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality over data mindset in programming. Or actually switch back to it. Focus on functionality Functionality once was at the core of software development. Back when algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title “Algorithms + Data Structures” instead of “Data Structures + Algorithms” for a reason.) The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly because sufficient performance was hard to achieve, and only thirdly memory efficiency. But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title “Data Structures and Algorithms: An Object Oriented Approach” instead of the other way around for a reason.) Sure, this data could be embellished with functionality. But nevertheless functionality was second. When you look at (domain) object models what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects. I´m not saying this is what object orientation was invented for. But I´m saying that´s what I happen to see across many teams now some 25 years after object orientation became mainstream through C++, Delphi, and Java. But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations. To be clear, that´s not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of. What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased. No customer ever wanted data or structures. Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer. Ask a customer (or user) whether she likes the data structured this way or that way. She´ll say, “I don´t care.” But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He´ll say, “I like it” (or “I don´t like it”). Build software incrementally From this very natural focus of customers and users on functionality and its quality follows we should develop software incrementally. That´s what Agility is about. Deliver small increments quickly and often to get frequent feedback. 
That way less waste is produced, and learning can take place much easier (on the side of the customer as well as on the side of developers). An increment is some added functionality or quality of functionality.[1] So as it turns out, Agility is about functionality over whatever. But software developers’ thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It´s a clash of mindsets, of cultures. Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me, if this sounds a bit broad-brush. But you get my point.) The need for abstraction In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It´s not functionality instead of data or something. It´s just over, i.e. functionality should be thought of first. It´s a tad more important. It´s what the customer wants. That´s why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale. We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams. That´s nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the “1-Click Order” on an amazon product page for example. There are several reasons for that, I´d say. Firstly, the level of abstraction over code is negligible. It´s essentially non-existent. Drawing a flow chart or writing pseudo code or writing actual code is very, very much alike. All these tools are about control flow like code is.[2] In addition all tools are computationally complete. They are about logic which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart. And then there is no data. They are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That´s shooting yourself in the foot, as I hope you agree. Even if it´s functionality over data that does not mean “don´t think about data”. Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this. Bottom line: So far we´re unable to reason in a scalable and abstract manner about functionality. That´s why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they´ve learned to use to reason about functional solutions. Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That´s because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions. We can easily reason about functionality using mathematics and SQL. That´s great. 
Except for that they are domain specific languages. They are not general purpose. (And they don´t scale either, I´d say.) Bummer. So to be more precise we need a scalable general purpose tool on a higher than code level of abstraction not neglecting data. Enter: Flow Design. Abstracting functionality using data flows I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow. Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature. With data flow we get rid of all the limiting traits of former approaches to modeling functionality. In addition, nomen est omen, data flows include data in the functionality picture. With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it´s needed to process data coming in. That´s a crucial difference and needs some rewiring in your head to be fully appreciated.[2] Since data flows are declarative they are not the right tool to describe algorithms, though, I´d say. With them you don´t design functionality on a low level. During design data flow processing steps are black boxes. They get fleshed out during coding. Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data. Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too. The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhances, transformed. On the top level of a data flow design might be just one processing step, e.g. “execute 1-click order”. But below that are arbitrary levels of flows with smaller and smaller steps. That´s not layering as in “layered architecture”, though. Rather it´s a stratified design à la Abelson/Sussman. Refining data flows is not your grandpa´s functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote. Summary I´ve been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I´m not using Clojure or F#. And I´m not a async/parallel execution buff. Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy. Using a systematic translation approach code can mirror the data flow design. That way later on the design can easily be reproduced from the code if need be. And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that´s a story for another day. To me data flow design simply is one of the missing links of systematic lightweight software design. 
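As a toy illustration of the "translate the design into code" point, and emphatically not the author's own Flow Design notation, the sketch below mirrors a tiny nested flow in plain Java: each processing step is a function from input data to output data, and the top-level flow is nothing but the wiring of those steps. All names and the placeholder logic are invented for the example.

    import java.util.function.Function;

    // Each step transforms the data flowing through it; the flow is their composition.
    class OneClickOrderFlow {
        // Step 1: look up the price for a product id (placeholder logic).
        static final Function<String, Double> priceProduct =
                productId -> 9.99;

        // Step 2: charge the amount and yield a confirmation id (placeholder logic).
        static final Function<Double, String> chargeAndConfirm =
                total -> "order-" + total.hashCode();

        // Top level: "execute 1-click order" is just the wiring of the two steps.
        static final Function<String, String> executeOneClickOrder =
                priceProduct.andThen(chargeAndConfirm);

        public static void main(String[] args) {
            System.out.println(executeOneClickOrder.apply("book-4711"));
        }
    }

The steps stay black boxes during design and get fleshed out later, while the data they pass along stays visible in the signatures.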
[1] There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code based increments in functionality.
[2] No, I´m not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That´s my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.

    Read the article

  • How to maintain encapsulation with composition in C++?

    - by iFreilicht
    I am designing a class Master that is composed from multiple other classes, A, Base, C and D. These four classes have absolutely no use outside of Master and are meant to split up its functionality into manageable and logically divided packages. They also provide extensible functionality as in the case of Base, which can be inherited from by clients. But, how do I maintain encapsulation of Master with this design? So far, I've got two approaches, which are both far from perfect: 1. Replicate all accessors: Just write accessor-methods for all accessor-methods of all classes that Master is composed of. This leads to perfect encapsulation, because no implementation detail of Master is visible, but is extremely tedious and makes the class definition monstrous, which is exactly what the composition should prevent. Also, adding functionality to one of the composees (is that even a word?) would require to re-write all those methods in Master. An additional problem is that inheritors of Base could only alter, but not add functionality. 2. Use non-assignable, non-copyable member-accessors: Having a class accessor<T> that can not be copied, moved or assigned to, but overrides the operator-> to access an underlying shared_ptr, so that calls like Master->A()->niceFunction(); are made possible. My problem with this is that it kind of breaks encapsulation as I would now be unable to change my implementation of Master to use a different class for the functionality of niceFunction(). Still, it is the closest I've gotten without using the ugly first approach. It also fixes the inheritance issue quite nicely. A small side question would be if such a class already existed in std or boost. EDIT: Wall of code I will now post the code of the header files of the classes discussed. It may be a bit hard to understand, but I'll give my best in explaining all of it. 1. GameTree.h The foundation of it all. This basically is a doubly-linked tree, holding GameObject-instances, which we'll later get to. It also has it's own custom iterator GTIterator, but I left that out for brevity. WResult is an enum with the values SUCCESS and FAILED, but it's not really important. class GameTree { public: //Static methods for the root. Only one root is allowed to exist at a time! 
static void ConstructRoot(seed_type seed, unsigned int depth); inline static bool rootExists(){ return static_cast<bool>(rootObject_); } inline static weak_ptr<GameTree> root(){ return rootObject_; } //delta is in ms, this is used for velocity, collision and such void tick(unsigned int delta); //Interaction with the tree inline weak_ptr<GameTree> parent() const { return parent_; } inline unsigned int numChildren() const{ return static_cast<unsigned int>(children_.size()); } weak_ptr<GameTree> getChild(unsigned int index) const; template<typename GOType> weak_ptr<GameTree> addChild(seed_type seed, unsigned int depth = 9001){ GOType object{ new GOType(seed) }; return addChildObject(unique_ptr<GameTree>(new GameTree(std::move(object), depth))); } WResult moveTo(weak_ptr<GameTree> newParent); WResult erase(); //Iterators for for( : ) loop GTIterator& begin(){ return *(beginIter_ = std::move(make_unique<GTIterator>(children_.begin()))); } GTIterator& end(){ return *(endIter_ = std::move(make_unique<GTIterator>(children_.end()))); } //unloading should be used when objects are far away WResult unloadChildren(unsigned int newDepth = 0); WResult loadChildren(unsigned int newDepth = 1); inline const RenderObject& renderObject() const{ return gameObject_->renderObject(); } //Getter for the underlying GameObject (I have not tested the template version) weak_ptr<GameObject> gameObject(){ return gameObject_; } template<typename GOType> weak_ptr<GOType> gameObject(){ return dynamic_cast<weak_ptr<GOType>>(gameObject_); } weak_ptr<PhysicsObject> physicsObject() { return gameObject_->physicsObject(); } private: GameTree(const GameTree&); //copying is only allowed internally GameTree(shared_ptr<GameObject> object, unsigned int depth = 9001); //pointer to root static shared_ptr<GameTree> rootObject_; //internal management of a child weak_ptr<GameTree> addChildObject(shared_ptr<GameTree>); WResult removeChild(unsigned int index); //private members shared_ptr<GameObject> gameObject_; shared_ptr<GTIterator> beginIter_; shared_ptr<GTIterator> endIter_; //tree stuff vector<shared_ptr<GameTree>> children_; weak_ptr<GameTree> parent_; unsigned int selfIndex_; //used for deletion, this isn't necessary void initChildren(unsigned int depth); //constructs children }; 2. GameObject.h This is a bit hard to grasp, but GameObject basically works like this: When constructing a GameObject, you construct its basic attributes and a CResult-instance, which contains a vector<unique_ptr<Construction>>. The Construction-struct contains all information that is needed to construct a GameObject, which is a seed and a function-object that is applied at construction by a factory. This enables dynamic loading and unloading of GameObjects as done by GameTree. It also means that you have to define that factory if you inherit GameObject. This inheritance is also the reason why GameTree has a template-function gameObject<GOType>. GameObject can contain a RenderObject and a PhysicsObject, which we'll later get to. Anyway, here's the code. 
class GameObject; typedef unsigned long seed_type; //this declaration magic means that all GameObjectFactorys inherit from GameObjectFactory<GameObject> template<typename GOType> struct GameObjectFactory; template<> struct GameObjectFactory<GameObject>{ virtual unique_ptr<GameObject> construct(seed_type seed) const = 0; }; template<typename GOType> struct GameObjectFactory : GameObjectFactory<GameObject>{ GameObjectFactory() : GameObjectFactory<GameObject>(){} unique_ptr<GameObject> construct(seed_type seed) const{ return unique_ptr<GOType>(new GOType(seed)); } }; //same as with the factories. this is important for storing them in vectors template<typename GOType> struct Construction; template<> struct Construction<GameObject>{ virtual unique_ptr<GameObject> construct() const = 0; }; template<typename GOType> struct Construction : Construction<GameObject>{ Construction(seed_type seed, function<void(GOType*)> func = [](GOType* null){}) : Construction<GameObject>(), seed_(seed), func_(func) {} unique_ptr<GameObject> construct() const{ unique_ptr<GameObject> gameObject{ GOType::factory.construct(seed_) }; func_(dynamic_cast<GOType*>(gameObject.get())); return std::move(gameObject); } seed_type seed_; function<void(GOType*)> func_; }; typedef struct CResult { CResult() : constructions{} {} CResult(CResult && o) : constructions(std::move(o.constructions)) {} CResult& operator= (CResult& other){ if (this != &other){ for (unique_ptr<Construction<GameObject>>& child : other.constructions){ constructions.push_back(std::move(child)); } } return *this; } template<typename GOType> void push_back(seed_type seed, function<void(GOType*)> func = [](GOType* null){}){ constructions.push_back(make_unique<Construction<GOType>>(seed, func)); } vector<unique_ptr<Construction<GameObject>>> constructions; } CResult; //finally, the GameObject class GameObject { public: GameObject(seed_type seed); GameObject(const GameObject&); virtual void tick(unsigned int delta); inline Matrix4f trafoMatrix(){ return physicsObject_->transformationMatrix(); } //getter inline seed_type seed() const{ return seed_; } inline CResult& properties(){ return properties_; } inline const RenderObject& renderObject() const{ return *renderObject_; } inline weak_ptr<PhysicsObject> physicsObject() { return physicsObject_; } protected: virtual CResult construct_(seed_type seed) = 0; CResult properties_; shared_ptr<RenderObject> renderObject_; shared_ptr<PhysicsObject> physicsObject_; seed_type seed_; }; 3. PhysicsObject That's a bit easier. It is responsible for position, velocity and acceleration. It will also handle collisions in the future. It contains three Transformation objects, two of which are optional. I'm not going to include the accessors on the PhysicsObject class because I tried my first approach on it and it's just pure madness (way over 30 functions). Also missing: the named constructors that construct PhysicsObjects with different behaviour. 
class Transformation{ Vector3f translation_; Vector3f rotation_; Vector3f scaling_; public: Transformation() : translation_{ 0, 0, 0 }, rotation_{ 0, 0, 0 }, scaling_{ 1, 1, 1 } {}; Transformation(Vector3f translation, Vector3f rotation, Vector3f scaling); inline Vector3f translation(){ return translation_; } inline void translation(float x, float y, float z){ translation(Vector3f(x, y, z)); } inline void translation(Vector3f newTranslation){ translation_ = newTranslation; } inline void translate(float x, float y, float z){ translate(Vector3f(x, y, z)); } inline void translate(Vector3f summand){ translation_ += summand; } inline Vector3f rotation(){ return rotation_; } inline void rotation(float pitch, float yaw, float roll){ rotation(Vector3f(pitch, yaw, roll)); } inline void rotation(Vector3f newRotation){ rotation_ = newRotation; } inline void rotate(float pitch, float yaw, float roll){ rotate(Vector3f(pitch, yaw, roll)); } inline void rotate(Vector3f summand){ rotation_ += summand; } inline Vector3f scaling(){ return scaling_; } inline void scaling(float x, float y, float z){ scaling(Vector3f(x, y, z)); } inline void scaling(Vector3f newScaling){ scaling_ = newScaling; } inline void scale(float x, float y, float z){ scale(Vector3f(x, y, z)); } void scale(Vector3f factor){ scaling_(0) *= factor(0); scaling_(1) *= factor(1); scaling_(2) *= factor(2); } Matrix4f matrix(){ return WMatrix::Translation(translation_) * WMatrix::Rotation(rotation_) * WMatrix::Scale(scaling_); } }; class PhysicsObject; typedef void tickFunction(PhysicsObject& self, unsigned int delta); class PhysicsObject{ PhysicsObject(const Transformation& trafo) : transformation_(trafo), transformationVelocity_(nullptr), transformationAcceleration_(nullptr), tick_(nullptr) {} PhysicsObject(PhysicsObject&& other) : transformation_(other.transformation_), transformationVelocity_(std::move(other.transformationVelocity_)), transformationAcceleration_(std::move(other.transformationAcceleration_)), tick_(other.tick_) {} Transformation transformation_; unique_ptr<Transformation> transformationVelocity_; unique_ptr<Transformation> transformationAcceleration_; tickFunction* tick_; public: void tick(unsigned int delta){ tick_ ? tick_(*this, delta) : 0; } inline Matrix4f transformationMatrix(){ return transformation_.matrix(); } } 4. RenderObject RenderObject is a base class for different types of things that could be rendered, i.e. Meshes, Light Sources or Sprites. DISCLAIMER: I did not write this code, I'm working on this project with someone else. class RenderObject { public: RenderObject(float renderDistance); virtual ~RenderObject(); float renderDistance() const { return renderDistance_; } void setRenderDistance(float rD) { renderDistance_ = rD; } protected: float renderDistance_; }; struct NullRenderObject : public RenderObject{ NullRenderObject() : RenderObject(0.f){}; }; class Light : public RenderObject{ public: Light() : RenderObject(30.f){}; }; class Mesh : public RenderObject{ public: Mesh(unsigned int seed) : RenderObject(20.f) { meshID_ = 0; textureID_ = 0; if (seed == 1) meshID_ = Model::getMeshID("EM-208_heavy"); else meshID_ = Model::getMeshID("cube"); }; unsigned int getMeshID() const { return meshID_; } unsigned int getTextureID() const { return textureID_; } private: unsigned int meshID_; unsigned int textureID_; }; I guess this shows my issue quite nicely: You see a few accessors in GameObject which return weak_ptrs to access members of members, but that is not really what I want. 
Also please keep in mind that this is NOT, by any means, finished or production code! It is merely a prototype and there may be inconsistencies, unnecessary public parts of classes and such.

    Read the article

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code. Vertex shader: #version 410 core in vec3 vf_normal; in vec3 vf_bitangent; in vec3 vf_tangent; in vec2 vf_textureCoordinates; in vec3 vf_vertex; out vec3 tc_normal; out vec3 tc_bitangent; out vec3 tc_tangent; out vec2 tc_textureCoordinates; out vec3 tc_vertex; uniform mat3 vf_m_normal; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform float vf_te_inner; uniform float vf_te_outer; void main() { tc_normal = vf_normal; tc_bitangent = vf_bitangent; tc_tangent = vf_tangent; tc_textureCoordinates = vf_textureCoordinates; tc_vertex = vf_vertex; gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0); } Tessellation Control shader: #version 410 core layout (vertices = 3) out; in vec3 tc_normal[]; in vec3 tc_bitangent[]; in vec3 tc_tangent[]; in vec2 tc_textureCoordinates[]; in vec3 tc_vertex[]; out vec3 te_normal[]; out vec3 te_bitangent[]; out vec3 te_tangent[]; out vec2 te_textureCoordinates[]; out vec3 te_vertex[]; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; #define ID gl_InvocationID float getTessLevelInner(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner); } float getTessLevelOuter(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer); } void main() { te_normal[gl_InvocationID] = tc_normal[gl_InvocationID]; te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID]; te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID]; te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID]; te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID]; float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz); float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz); float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz); gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2); gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0); gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1); gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0); } Tessellation Evaluation shader: #version 410 core layout (triangles, equal_spacing, cw) in; in vec3 te_normal[]; in vec3 te_bitangent[]; in vec3 te_tangent[]; in vec2 te_textureCoordinates[]; in vec3 te_vertex[]; out vec3 g_normal; out vec3 g_bitangent; out vec4 g_patchDistance; out vec3 g_tangent; out vec2 g_textureCoordinates; out vec3 g_vertex; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; 
uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_displace; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2) { return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2; } vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2) { return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2; } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2*d*d); return d; } float getDisplacement(vec2 t0, vec2 t1, vec2 t2) { float displacement = 0.0; vec2 textureCoordinates = interpolate2D(t0, t1, t2); vec2 vector = ((t0 + t1 + t2) / 3.0); float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y)); sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0); displacement += texture(vf_t_displace, textureCoordinates).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x; return (displacement / 5.0); } void main() { g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2])); g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2])); g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y)); g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2])); g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]); float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w); d2 = amplify(d2, 50, -0.5); g_vertex += g_normal * displacement * 0.1 * d2; gl_Position = vf_m_mvp * vec4(g_vertex, 1.0); } Geometry shader: #version 410 core layout (triangles) in; layout (triangle_strip, max_vertices = 3) out; in vec3 g_normal[3]; in vec3 g_bitangent[3]; in vec4 g_patchDistance[3]; in vec3 g_tangent[3]; in vec2 g_textureCoordinates[3]; in vec3 g_vertex[3]; out vec3 f_tangent; out vec3 f_bitangent; out vec3 f_eyeDirection; out vec3 f_lightDirection; out vec3 f_normal; out vec4 f_patchDistance; out vec4 f_shadowCoordinates; out vec2 f_textureCoordinates; out vec3 f_vertex; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; void main() { int index = 0; while (index < 3) { vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]); vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent); vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent); mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); 
vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz; vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz); f_eyeDirection = TBN * eyeDirection; f_lightDirection = TBN * lightDirection; f_normal = normalize(g_normal[index]); f_patchDistance = g_patchDistance[index]; f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0); f_textureCoordinates = g_textureCoordinates[index]; f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz; gl_Position = gl_in[index].gl_Position; EmitVertex(); index ++; } EndPrimitive(); } Fragment shader: #version 410 core in vec3 f_bitangent; in vec3 f_eyeDirection; in vec3 f_lightDirection; in vec3 f_normal; in vec4 f_patchDistance; in vec4 f_shadowCoordinates; in vec3 f_tangent; in vec2 f_textureCoordinates; in vec3 f_vertex; out vec4 fragColor; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 poissonDisk[16] = vec2[]( vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725), vec2(-0.09418410, -0.92938870), vec2( 0.34495938, 0.29387760), vec2(-0.91588581, 0.45771432), vec2(-0.81544232, -0.87912464), vec2(-0.38277543, 0.27676845), vec2( 0.97484398, 0.75648379), vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420), vec2(-0.26496911, -0.41893023), vec2( 0.79197514, 0.19090188), vec2(-0.24188840, 0.99706507), vec2(-0.81409955, 0.91437590), vec2( 0.19984126, 0.78641367), vec2( 0.14383161, -0.14100790) ); float random(vec3 seed, int i) { vec4 seed4 = vec4(seed,i); float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673)); return fract(sin(dot_product) * 43758.5453); } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2.0 * d * d); return d; } void main() { vec3 lightColor = vf_l_color.xyz; float lightPower = vf_l_color.w; vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz; vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor; vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz; vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0); vec3 l = normalize(f_lightDirection); float cosTheta = clamp(dot(n, l), 0.0, 1.0); vec3 E = normalize(f_eyeDirection); vec3 R = reflect(-l, n); float cosAlpha = clamp(dot(E, R), 0.0, 1.0); float visibility = 1.0; float bias = 0.005 * tan(acos(cosTheta)); bias = clamp(bias, 0.0, 0.01); for (int i = 0; i < 4; i ++) { float shading = (0.5 / 4.0); int index = i; visibility -= shading * (1.0 - texture(vf_t_shadow, vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0, (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w))); }\n" fragColor.xyz = materialAmbientColor + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5); fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w; } The following images should be enough to give you an idea of the problem. Before moving the camera: Moving the camera just a little. Moving it to the center of the scene.

    Read the article

  • ipad video format

    - by Mike
    When you use iTunes to sync your videos with the iPhone, the videos are always saved no more than 640 pixels wide, if I am not wrong. What about the iPad? What size are the videos iTunes syncs with the iPad? 1024x768? And what if the video has dimensions below 1024x768? Will it scale up, or will it keep the video at low res and scale when you play? I am asking because I am using the MPMoviePlayerController and I need to know what resolutions to expect, so I can adjust the interface. Thanks.
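
    Whatever resolution iTunes ends up storing, the interface work described here comes down to aspect-fit math: given a source video size and a target screen, compute the largest rectangle that preserves the aspect ratio. Below is a rough sketch of that arithmetic in plain Python (illustrative only, not the iPhone SDK); the 1024x768 target is taken from the question as an assumption, not a confirmed iTunes behaviour.

        def aspect_fit(src_w, src_h, max_w=1024, max_h=768):
            # Largest scale that fits the source inside the target without cropping.
            # Cap at 1.0 to mimic "keep the video at low res and scale at playback".
            scale = min(max_w / src_w, max_h / src_h, 1.0)
            return round(src_w * scale), round(src_h * scale)

        # Example: a 640x360 synced video letterboxed on a 1024x768 screen.
        print(aspect_fit(640, 360))    # -> (640, 360) since upscaling is capped
        print(aspect_fit(1920, 1080))  # -> (1024, 576)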

    Read the article

  • Disable scrolling in an iPhone web application?

    - by Stefan Kendall
    Is there any way to completely disable web page scrolling in an iPhone web app? I've tried numerous things posted on Google, but none seem to work. Here's my current header setup: <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0; user-scalable=no;"/> <meta name="apple-mobile-web-app-capable" content="yes"/> Even the usual handler, document.body.addEventListener('touchmove', function(e){ e.preventDefault(); });, doesn't seem to work.

    Read the article

  • Center content of UIScrollView when smaller

    - by hgpc
    I have a UIImageView inside a UIScrollView which I use for zooming and scrolling. If the image/content of the scroll view is bigger than the scroll view everything works fine. However, when the image becomes smaller than the scroll view, it sticks to the top left corner of the scroll view. I would like to keep it centered, like the Photos app. Any ideas or examples about keeping the content of the UIScrollView centered when smaller? I am working with iPhone 3.0. The following code almost works. The image returns to the top left corner if I pinch it after reaching the minimum zoom level. - (void)loadView { [super loadView]; // set up main scroll view imageScrollView = [[UIScrollView alloc] initWithFrame:[[self view] bounds]]; [imageScrollView setBackgroundColor:[UIColor blackColor]]; [imageScrollView setDelegate:self]; [imageScrollView setBouncesZoom:YES]; [[self view] addSubview:imageScrollView]; UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"WeCanDoIt.png"]]; [imageView setTag:ZOOM_VIEW_TAG]; [imageScrollView setContentSize:[imageView frame].size]; [imageScrollView addSubview:imageView]; CGSize imageSize = imageView.image.size; [imageView release]; CGSize maxSize = imageScrollView.frame.size; CGFloat widthRatio = maxSize.width / imageSize.width; CGFloat heightRatio = maxSize.height / imageSize.height; CGFloat initialZoom = (widthRatio > heightRatio) ? heightRatio : widthRatio; [imageScrollView setMinimumZoomScale:initialZoom]; [imageScrollView setZoomScale:1]; float topInset = (maxSize.height - imageSize.height) / 2.0; float sideInset = (maxSize.width - imageSize.width) / 2.0; if (topInset < 0.0) topInset = 0.0; if (sideInset < 0.0) sideInset = 0.0; [imageScrollView setContentInset:UIEdgeInsetsMake(topInset, sideInset, -topInset, -sideInset)]; } - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView { return [imageScrollView viewWithTag:ZOOM_VIEW_TAG]; } /************************************** NOTE **************************************/ /* The following delegate method works around a known bug in zoomToRect:animated: */ /* In the next release after 3.0 this workaround will no longer be necessary */ /**********************************************************************************/ - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale { [scrollView setZoomScale:scale+0.01 animated:NO]; [scrollView setZoomScale:scale animated:NO]; // END Bug workaround CGSize maxSize = imageScrollView.frame.size; CGSize viewSize = view.frame.size; float topInset = (maxSize.height - viewSize.height) / 2.0; float sideInset = (maxSize.width - viewSize.width) / 2.0; if (topInset < 0.0) topInset = 0.0; if (sideInset < 0.0) sideInset = 0.0; [imageScrollView setContentInset:UIEdgeInsetsMake(topInset, sideInset, -topInset, -sideInset)]; }
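
    The inset arithmetic above is the crux: the padding on each side is half of whatever space is left over once the zoomed content is laid out, clamped at zero, and it has to be recomputed from the zoomed content size every time the zoom changes, not just in loadView. Here is a minimal, language-agnostic sketch of that calculation in Python, purely to show the math; the helper name and parameters are invented, not part of the question's code.

        def centering_insets(scroll_w, scroll_h, content_w, content_h, zoom):
            # Content size as the scroll view sees it after zooming.
            zoomed_w = content_w * zoom
            zoomed_h = content_h * zoom
            # Half the leftover space on each axis, never negative.
            side = max((scroll_w - zoomed_w) / 2.0, 0.0)
            top = max((scroll_h - zoomed_h) / 2.0, 0.0)
            # Same order as UIEdgeInsetsMake(top, left, bottom, right).
            return top, side, top, side

        # A 320x480 scroll view showing a 600x400 image at minimum zoom 0.5.
        print(centering_insets(320, 480, 600, 400, 0.5))  # -> (140.0, 10.0, 140.0, 10.0)

    Recomputing these insets in the zoom delegate, from the view's current (zoomed) frame, is what keeps the image centered after pinching below the scroll view's size.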

    Read the article

  • How to set viewport meta for iPhone that handles rotation properly?

    - by Caffeine Coma
    So I've been using: <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0;"/> to get my HTML content to display nicely on the iPhone. It works great until the user rotates the device into landscape mode, where the display remains constrained to 320px. Is there a simple way to specify a viewport that changes in response to the user changing the device orientation? Or must I resort to Javascript to handle that?

    Read the article

  • auto-scaling size with anchors - overlapping controls

    - by Shane.C
    I'm having some trouble with resizing and scaling of controls in a Windows form. I've set anchors up so that the controls stay in ratio with the form, which works great. However, perhaps I was expecting too much when I assumed that the control origin points would also scale with the form; they don't, and I'm finding my controls overlapping. Here are some screenshots. Does anyone know of an approach I can take to solve this problem? Perhaps I need to set control origins to dynamic drawing points that scale, but then do these redraw on scaling the form, or only on creation?
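
    One way to picture the "dynamic drawing points" idea mentioned above: keep each control's design-time bounds relative to the form's design-time size, and on every resize multiply both the origin and the size by the same width/height ratios, so positions scale along with sizes instead of staying pinned by anchors. A rough sketch of that proportional-layout math in Python (illustrative only; a WinForms version would do the same arithmetic in the form's Resize handler):

        def scale_bounds(original_bounds, original_form, current_form):
            """Scale (x, y, w, h) from the form's design size to its current size."""
            ox, oy, ow, oh = original_bounds
            rx = current_form[0] / original_form[0]   # width ratio
            ry = current_form[1] / original_form[1]   # height ratio
            # Scaling the origin as well as the size keeps controls from overlapping.
            return round(ox * rx), round(oy * ry), round(ow * rx), round(oh * ry)

        # A control designed at (10, 20, 100, 30) on a 400x300 form,
        # after the form grows to 800x450:
        print(scale_bounds((10, 20, 100, 30), (400, 300), (800, 450)))  # -> (20, 30, 200, 45)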

    Read the article

  • Resizing and rotating an image in Android

    - by kingrichard2005
    I'm trying to rotate and resize an image in Android. The code I have is as follows (taken from this tutorial): Bitmap originalSprite = BitmapFactory.decodeResource(getResources(), R.drawable.android); int orgHeight = originalSprite.getHeight(); int orgWidth = originalSprite.getWidth(); //Create manipulation matrix Matrix m = new Matrix(); // resize the bit map m.postScale(.25f, .25f); // rotate the Bitmap by the given angle m.postRotate(180); //Rotated bitmap Bitmap rotatedBitmap = Bitmap.createBitmap(originalSprite, 0, 0, orgWidth, orgHeight, m, true); return rotatedBitmap; The image seems to rotate just fine, but it doesn't scale like I expect it to. I've tried different values, but the image only gets more pixelized as I reduce the scale values, and remains the same size visually; whereas the goal I'm trying to achieve is to actually shrink it visually. I'm not sure what to do at this point, any help is appreciated.
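
    For what it's worth, the matrix itself does shrink the output: composing a 0.25 scale with a 180-degree rotation maps the bitmap to a quarter of its original dimensions, so createBitmap should hand back a bitmap a quarter the size. If it still appears full size on screen, whatever draws it is probably stretching it back up (a fixed-size destination rect, or density scaling in an ImageView, for example). A small Python sketch of what the scale-then-rotate matrix does to the bitmap's corners, with made-up numbers, just to make that concrete:

        import math

        def transformed_size(width, height, scale, degrees):
            # Apply scale then rotation to the corners and measure the
            # axis-aligned bounding box, which is what createBitmap allocates.
            a = math.radians(degrees)
            corners = [(0, 0), (width, 0), (0, height), (width, height)]
            xs, ys = [], []
            for x, y in corners:
                sx, sy = x * scale, y * scale                 # postScale(.25f, .25f)
                rx = sx * math.cos(a) - sy * math.sin(a)      # postRotate(180)
                ry = sx * math.sin(a) + sy * math.cos(a)
                xs.append(rx)
                ys.append(ry)
            return round(max(xs) - min(xs)), round(max(ys) - min(ys))

        # A 200x100 sprite scaled by 0.25 and rotated 180 degrees -> 50x25 output.
        print(transformed_size(200, 100, 0.25, 180))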

    Read the article

  • ipad video format

    - by Mike
    When you use iTunes to sync your videos with the iPhone, the videos are always saved no more than 640 pixels wide, if I am not wrong. What about the iPad? What size are the videos iTunes syncs with the iPad? 1024x768? And what if the video has dimensions below 1024x768? Will it scale up, or will it keep the video at low res and scale when you play? Thanks.

    Read the article

  • What's viewport mean to the jQuery guys?

    - by justSteve
    I came here intent on asking a 'how do I' type question focused on a particular UI problem I have to solve, and I started using the term 'viewport'. It seems there are lots of contexts that tag evokes. Before I outline mine I should ask: is there a generally recognized consensus on what a 'viewport' is? Is there any gallery or object browser that focuses on a UI principle where multiple frames/panes show feeds of different graphs, where each feed might be a complex grid/graph? I'm imagining something where my page may open with a half dozen different grids (tabular data) that stay updated via ajax interactions with various servers. UI elements provide controls to scale a given grid up or down with a quasi accordion slide. For example, one click on a scale up-or-down control would rescale the other viewports in the opposite direction; more clicks, more pronounced effect. So my 'how do I' question morphs into a 'how have others...' Thanks.
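
    The interaction described, where scaling one pane up pushes the others down proportionally, can be modelled as a set of relative weights that get renormalized after each click. A rough sketch of that bookkeeping in Python (the names and the growth factor are invented for illustration; a real page would map the weights onto the panes' CSS sizes):

        def rescale_panes(weights, index, factor=1.25):
            """Grow one pane's share and shrink the rest so the total stays 1.0."""
            weights = list(weights)
            weights[index] *= factor          # repeated clicks compound the effect
            total = sum(weights)
            return [w / total for w in weights]

        # Six equal panes; click "scale up" twice on pane 0.
        panes = [1 / 6] * 6
        panes = rescale_panes(panes, 0)
        panes = rescale_panes(panes, 0)
        print([round(w, 3) for w in panes])   # pane 0 grows, the rest shrink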

    Read the article

  • Global name not defined

    - by anteater7171
    I wrote a CPU monitoring program in Python. For some reason sometimes the program will run without any problem. Then other times the program won't even start because of the following error. Traceback (most recent call last): File "", line 244, in run_nodebug File "C:\Python26\CPUR1.7.pyw", line 601, in app = simpleapp_tk(None) File "C:\Python26\CPUR1.7.pyw", line 26, in init self.initialize() File "C:\Python26\CPUR1.7.pyw", line 107, in initialize self.F() File "C:\Python26\CPUR1.7.pyw", line 517, in F S2 = TL.entryVariableS.get() NameError: global name 'TL' is not defined I can't seem to find the problem, maybe someone more experienced may assist me? Here is a snippet of the part giving me trouble: (The second to last line in the snippet is what's giving me trouble) def E(self): if self.selectedM.get() =='Options...': Setup global TL TL = Tkinter.Toplevel(self) menu = Tkinter.Menu(TL) TL.config(menu=menu) filemenu = Tkinter.Menu(menu) menu.add_cascade(label="| Menu |", menu=filemenu) filemenu.add_command(label="Instruction Manual...", command=self.helpmenu) filemenu.add_command(label="About...", command=self.aboutmenu) filemenu.add_separator() filemenu.add_command(label="Exit Options", command=TL.destroy) filemenu.add_command(label="Exit", command=self.destroy) helpmenu = Tkinter.Menu(menu) menu.add_cascade(label="| Help |", menu=helpmenu) helpmenu.add_command(label="Instruction Manual...", command=self.helpmenu) helpmenu.add_separator() helpmenu.add_command(label="Quick Help...", command=self.helpmenu) Title TL.label5 = Tkinter.Label(TL,text="CPU Usage: Options",anchor="center",fg="black",bg="lightgreen",relief="ridge",borderwidth=5,font=('Arial', 18, 'bold')) TL.label5.pack(padx=15,ipadx=5) X Y scale TL.separator = Tkinter.Frame(TL,height=7, bd=1, relief='ridge', bg='grey95') TL.separator.pack(pady=5,padx=5) # TL.sclX = Tkinter.Scale(TL.separator, from_=0, to=1500, orient='horizontal', resolution=1, command=self.A) TL.sclX.grid(column=1,row=0,ipadx=27, sticky='w') TL.label1 = Tkinter.Label(TL.separator,text="X",anchor="s",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label1.grid(column=0,row=0, pady=1, sticky='S') TL.sclY = Tkinter.Scale(TL.separator, from_=0, to=1500, resolution=1, command=self.A) TL.sclY.grid(column=2,row=1,rowspan=2,sticky='e', padx=4) TL.label3 = Tkinter.Label(TL.separator,text="Y",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label3.grid(column=2,row=0, padx=10, sticky='e') TL.entryVariable2 = Tkinter.StringVar() TL.entry2 = Tkinter.Entry(TL.separator,textvariable=TL.entryVariable2, fg="grey15",bg="grey90",relief="sunken",insertbackground="black",borderwidth=5,font=('Arial', 10)) TL.entry2.grid(column=1,row=1,ipadx=20, pady=10,sticky='EW') TL.entry2.bind("<Return>", self.B) TL.label2 = Tkinter.Label(TL.separator,text="X:",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label2.grid(column=0,row=1, ipadx=4, sticky='W') TL.entryVariable1 = Tkinter.StringVar() TL.entry1 = Tkinter.Entry(TL.separator,textvariable=TL.entryVariable1, fg="grey15",bg="grey90",relief="sunken",insertbackground="black",borderwidth=5,font=('Arial', 10)) TL.entry1.grid(column=1,row=2,sticky='EW') TL.entry1.bind("<Return>", self.B) TL.label4 = Tkinter.Label(TL.separator,text="Y:", anchor="center",fg="black",bg="grey95",font=('Arial', 8 ,'bold')) TL.label4.grid(column=0,row=2, ipadx=4, sticky='W') TL.label7 = Tkinter.Label(TL.separator,text="Text Colour:",fg="black",bg="grey95",font=('Arial', 8 ,'bold'),justify='left') TL.label7.grid(column=1,row=3,
sticky='W',padx=10,ipady=10,ipadx=30) TL.selectedP = Tkinter.StringVar() TL.opt1 = Tkinter.OptionMenu(TL.separator, TL.selectedP,'Normal', 'White','Black', 'Blue', 'Steel Blue','Green','Light Green','Yellow','Orange' ,'Red',command=self.G) TL.opt1.config(fg="black",bg="grey90",activebackground="grey90",activeforeground="black", anchor="center",relief="raised",direction='right',font=('Arial', 10)) TL.opt1.grid(column=1,row=4,sticky='EW',padx=20,ipadx=20) TL.selectedP.set('Normal') TL.sclS = Tkinter.Scale(TL.separator, from_=10, to=2000, orient='horizontal', resolution=10, command=self.H) TL.sclS.grid(column=1,row=5,ipadx=27, sticky='w') TL.sclS.set(600) TL.entryVariableS = Tkinter.StringVar() TL.entryS = Tkinter.Entry(TL.separator,textvariable=TL.entryVariableS, fg="grey15",bg="grey90",relief="sunken",insertbackground="black",borderwidth=5,font=('Arial', 10)) TL.entryS.grid(column=1,row=6,ipadx=20, pady=10,sticky='EW') TL.entryS.bind("<Return>", self.I) TL.entryVariableS.set(600) # TL.resizable(False,False) TL.title('Options') geomPatt = re.compile(r"(\d+)?x?(\d+)?([+-])(\d+)([+-])(\d+)") s = self.wm_geometry() m = geomPatt.search(s) X = m.group(4) Y = m.group(6) TL.sclY.set(Y) TL.sclX.set(X) if self.selectedM.get() == 'Exit': self.destroy() def F (self): G = round(psutil.cpu_percent(), 1) G1 = str(G) + '%' self.labelVariable.set(G1) if G < 5: self.imageLabel.configure(image=self.image0) if G >= 5: self.imageLabel.configure(image=self.image5) if G >= 10: self.imageLabel.configure(image=self.image10) if G >= 15: self.imageLabel.configure(image=self.image15) if G >= 20: self.imageLabel.configure(image=self.image20) if G >= 25: self.imageLabel.configure(image=self.image25) if G >= 30: self.imageLabel.configure(image=self.image30) if G >= 35: self.imageLabel.configure(image=self.image35) if G >= 40: self.imageLabel.configure(image=self.image40) if G >= 45: self.imageLabel.configure(image=self.image45) if G >= 50: self.imageLabel.configure(image=self.image50) if G >= 55: self.imageLabel.configure(image=self.image55) if G >= 60: self.imageLabel.configure(image=self.image60) if G >= 65: self.imageLabel.configure(image=self.image65) if G >= 70: self.imageLabel.configure(image=self.image70) if G >= 75: self.imageLabel.configure(image=self.image75) if G >= 80: self.imageLabel.configure(image=self.image80) if G >= 85: self.imageLabel.configure(image=self.image85) if G >= 90: self.imageLabel.configure(image=self.image90) if 100> G >= 95: self.imageLabel.configure(image=self.image95) if G == 100: self.imageLabel.configure(image=self.image100) S2 = TL.entryVariableS.get() self.after(int(S2), self.F)
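
    The traceback points straight at the cause: F schedules itself with after() and unconditionally reads TL.entryVariableS, but TL only becomes a global once E has actually run with 'Options...' selected, so any run where F fires first dies with the NameError. Below is a standalone sketch of the failure and one way to guard it (not the original program; FakeOptionsWindow and the numbers are stand-ins for illustration):

        TL = None  # module-level default so the name always exists before E() runs

        def refresh_interval():
            # Stand-in for the lookup at the end of F(): fall back to a default
            # until the options window (and its entry variable) actually exists.
            if TL is None:
                return 600          # same default the options window starts with
            return int(TL.entryVariableS.get())

        class FakeOptionsWindow:
            """Tiny stand-in for the Toplevel built in E(), for demonstration only."""
            class _Var:
                def get(self):
                    return "250"
            entryVariableS = _Var()

        print(refresh_interval())   # 600 -- safe even though E() never ran
        TL = FakeOptionsWindow()    # roughly what E() does via `global TL`
        print(refresh_interval())   # 250 -- once the options window exists

    In the real program, the same kind of guard (or initialising TL at module level, or storing the Toplevel on self in __init__) around the S2 = TL.entryVariableS.get() line should stop the intermittent crash.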

    Read the article

  • Color Theory: How to convert Munsell HVC to RGB/HSB/HSL

    - by Ian Boyd
    I'm looking at a document that describes the standard colors used in dentistry to describe the color of a tooth. They quote hue, value and chroma values, and indicate they are from the 1905 Munsell description of color:

    The system of colour notation developed by A. H. Munsell in 1905 identifies colour in terms of three attributes: HUE, VALUE (Brightness) and CHROMA (saturation) [15]

    HUE (H): Munsell defined hue as the quality by which we distinguish one colour from another. He selected five principal colours: red, yellow, green, blue, and purple; and five intermediate colours: yellow-red, green-yellow, blue-green, purple-blue, and red-purple. These were placed around a colour circle at equal points and the colours in between these points are a mixture of the two, in favour of the nearer point/colour (see Fig 1.).

    VALUE (V): This notation indicates the lightness or darkness of a colour in relation to a neutral grey scale, which extends from absolute black (value symbol 0) to absolute white (value symbol 10). This is essentially how ‘bright’ the colour is.

    CHROMA (C): This indicates the degree of divergence of a given hue from a neutral grey of the same value. The scale of chroma extends from 0 for a neutral grey to 10, 12, 14 or farther, depending upon the strength (saturation) of the sample to be evaluated.

    There are various systems for categorising colour; the Vita system is most commonly used in dentistry. This uses the letters A, B, C and D to notate the hue (colour) of the tooth. The chroma and value are both indicated by a value from 1 to 4, A1 being lighter than A4, but A4 being more saturated than A1. If placed in order of value, i.e. brightness, the order from brightest to darkest would be: A1, B1, B2, A2, A3, D2, C1, B3, D3, D4, A3.5, B4, C2, A4, C3, C4. The exact values of Hue, Value and Chroma for each of the shades are shown below (16). So my question is, can anyone convert Munsell HVC into RGB, HSB or HSL?

    Hue    Value (Brightness)    Chroma (Saturation)
    ===    ==================    ===================
    4.5    7.80                  1.7
    2.4    7.45                  2.6
    1.3    7.40                  2.9
    1.6    7.05                  3.2
    1.6    6.70                  3.1
    5.1    7.75                  1.6
    4.3    7.50                  2.2
    2.3    7.25                  3.2
    2.4    7.00                  3.2
    4.3    7.30                  1.6
    2.8    6.90                  2.3
    2.6    6.70                  2.3
    1.6    6.30                  2.9
    3.0    7.35                  1.8
    1.8    7.10                  2.3
    3.7    7.05                  2.4

    They say that Value (Brightness) varies from 0..10, which is fine, so I take 7.05 to mean 70.5%. But what is Hue measured in? I'm used to hue being measured in degrees (0..360), but the values I see would all be red, when they should be more yellow, or brown. Finally, it says that Chroma/Saturation can range from 0..10 ...or even higher, which makes it sound like an arbitrary scale. So can anyone convert Munsell HVC to HSB or HSL, or better yet, RGB?
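
    There is no exact closed-form Munsell-to-RGB conversion; accurate results go through the published Munsell renotation data (Munsell to xyY, then xyY to sRGB), which is what dedicated colour libraries implement. As a very rough, self-contained approximation you can treat the three attributes like HSV coordinates: hue as a position on the 100-step Munsell hue circle, value/10 as brightness, and chroma over a nominal maximum as saturation. The sketch below does exactly that and nothing more; it assumes the quoted hue numbers sit on the yellow-red (YR) page of the hue circle (plausible for tooth shades, but still an assumption), and the chroma divisor is a guess.

        import colorsys

        def munsell_hvc_to_rgb_approx(hue_step, value, chroma,
                                      hue_page_start=10.0, max_chroma=14.0):
            # Place the hue on the 100-step circle; the -5 roughly aligns
            # Munsell 5R with HSV hue 0 (red). The YR page starts at step 10.
            hue_fraction = ((hue_page_start + hue_step - 5.0) % 100.0) / 100.0
            saturation = min(chroma / max_chroma, 1.0)
            brightness = min(value / 10.0, 1.0)
            r, g, b = colorsys.hsv_to_rgb(hue_fraction, saturation, brightness)
            return tuple(round(c * 255) for c in (r, g, b))

        # First row of the quoted table: H=4.5, V=7.80, C=1.7 -> a pale warm off-white.
        print(munsell_hvc_to_rgb_approx(4.5, 7.80, 1.7))

    For anything beyond a ballpark swatch, go through the renotation tables to xyY and then to sRGB rather than an HSV-style shortcut.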

    Read the article

< Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >