Search Results

Search found 22128 results on 886 pages for 'point in polygon'.


  • Rotating object around moving object/player in 2D

    - by Boston
    I am trying to implement a camera that rotates the world around the player. I have found many solutions online for rotating an object about the origin, or about an arbitrary point. The procedure seems to be to translate the pivot point to the origin, perform the rotation, then translate back and draw. I have gotten this working for rotation around the origin as well as around a fixed point. Rotation of objects around the player works too, provided the player does not move. However, if the objects are rotated around the player by some non-zero angle and the player then moves, the rotated objects move as well. I have probably done a poor job explaining this, so here's an image: http://i.imgur.com/1n63iWR.gif And here's the code for the behavior: renderx = (Ox - Px)*cos(camAngle) - (Oy - Py)*sin(camAngle) + Px; rendery = (Ox - Px)*sin(camAngle) + (Oy - Py)*cos(camAngle) + Py; Where (Ox,Oy) is the actual position of the object to be rotated and (Px,Py) is the actual position of the player. Any ideas? I am using C++ with SDL 2.0.
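    A minimal C++ sketch of rotating a point about an arbitrary pivot, the same math as the renderx/rendery lines above, can help isolate the camera logic from SDL; the Vec2 struct and rotateAround name are illustrative, not from the original post. Recomputing the rotated position every frame from the object's current world position and the player's current position keeps the object/player offset intact while the player moves.

      #include <cmath>
      #include <cstdio>

      struct Vec2 { double x, y; };

      // Rotate p by angle radians about the pivot c:
      // translate so the pivot sits at the origin, rotate, translate back.
      Vec2 rotateAround(Vec2 p, Vec2 c, double angle)
      {
          const double s = std::sin(angle), co = std::cos(angle);
          const double dx = p.x - c.x, dy = p.y - c.y;
          return { dx * co - dy * s + c.x,
                   dx * s + dy * co + c.y };
      }

      int main()
      {
          Vec2 player = { 10.0, 5.0 };
          Vec2 object = { 14.0, 5.0 };   // 4 units to the player's right
          // 90 degrees around the player: expected result (10, 9)
          Vec2 onScreen = rotateAround(object, player, 3.14159265358979 / 2.0);
          std::printf("%f %f\n", onScreen.x, onScreen.y);
          return 0;
      }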

    Read the article

  • Is this one network or two networks?

    - by colemik
    I have this simple question and really don't know the answer. Does the drawing below show one network or two networks? This is a question about the definition of a network from the OSI / TCP/IP model point of view: From one point of view, those are two L2 networks connected with a bridge. From another point of view, this is one L3 network, that can have a common L3 address space (like 10.1.1.0). PS If this question is too dumb, please move it to Superuser.

    Read the article

  • Function for building an isosurface (a sphere cut by planes)

    - by GameDevEnthusiast
    I want to build an octree over a quarter of a sphere (for debugging and testing). The octree generator relies on the AIsosurface interface to compute the density and normal at any given point in space. For example, for a full sphere the corresponding code is: // returns <0 if the point is inside the solid virtual float GetDensity( float _x, float _y, float _z ) const override { Float3 P = Float3_Set( _x, _y, _z ); Float3 v = Float3_Subtract( P, m_origin ); float l = Float3_LengthSquared( v ); float d = Float_Sqrt(l) - m_radius; return d; } // estimates the gradient at the given point virtual Float3 GetNormal( float _x, float _y, float _z ) const override { Float3 P = Float3_Set( _x, _y, _z ); float d = this->AIsosurface::GetDensity( P ); float Nx = this->GetDensity( _x + 0.001f, _y, _z ) - d; float Ny = this->GetDensity( _x, _y + 0.001f, _z ) - d; float Nz = this->GetDensity( _x, _y, _z + 0.001f ) - d; Float3 N = Float3_Normalized( Float3_Set( Nx, Ny, Nz ) ); return N; } What is a nice and fast way to compute those values when the shape is bounded by a low number of half-spaces?
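    A common way to express "a sphere cut by a few half-spaces" as a single density function is to give each plane its own signed distance, dot(P - pointOnPlane, outwardNormal), and take the maximum of those and the sphere distance: the intersection of the solids. The max is not the exact Euclidean distance near the seams, but its sign (inside/outside) is exact, which is what the octree generator samples, and the finite-difference normal estimate from the question still works on it. A hedged sketch using a plain Float3 struct rather than the library type:

      #include <algorithm>
      #include <cmath>

      struct Float3 { float x, y, z; };

      static float dot(Float3 a, Float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
      static Float3 sub(Float3 a, Float3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

      struct HalfSpace { Float3 point; Float3 normal; };   // normal points *out* of the solid

      // Signed density of the intersection "sphere cut by half-spaces":
      // negative inside, positive outside, so it plugs into the same octree generator.
      float density(Float3 p, Float3 center, float radius,
                    const HalfSpace* planes, int planeCount)
      {
          Float3 v = sub(p, center);
          float d = std::sqrt(dot(v, v)) - radius;              // sphere distance
          for (int i = 0; i < planeCount; ++i)                  // clip by each plane
              d = std::max(d, dot(sub(p, planes[i].point), planes[i].normal));
          return d;
      }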

    Read the article

  • Any tool to make git build every commit to a branch in a separate repository?

    - by Wayne
    A git tool that meets the specs below is needed. Does one already exists? If not, I will create a script and make it available on GitHub for others to use or contribute. Is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository? Not just to the latest but each one back to a certain staring point. Background: Our development environment uses a separate continuous integration server which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure the commit won't "break the build" when pushed to the CI server. Unfortunately, with auto unit tests, those build force the developer to wait 10 or 15 minutes for a build every time. To solve this we have setup a "mirror" git repository on each developer PC. So we develop in the main repository but anytime a local full build is needed. We run a couple commands in a in the mirror repository to fetch, checkout the commit we want to build, and build. It's works extremely lovely so we can continue working in the main one with the build going in parallel. There's only one main concern now. We want to make sure every single commit builds and tests fine. But we often get busy and neglect to build several fresh commits. Then if it the build fails you have to do a bisect or manually figure build each interim commit to figure out which one broke. Requirements for this tool. The tool will look at another repo, origin by default, fetch and compare all commits that are in branches to 2 lists of commits. One list must hold successfully built commits and the other lists commits that failed. It identifies any commit or commits not yet in either list and begins to build them in a loop in the order that they were committed. It stops on the first one that fails. The tool appropriately adds each commit to either the successful or failed list after it as attempted to build each one. The tool will ignore any "legacy" commits which are prior to the oldest commit in the success list. This logic makes the starting point possible in the next point. Starting Point. The tool building a specific commit so that, if successful it gets added to the success list. If it is the earliest commit in the success list, it becomes the "starting point" so that none of the commits prior to that are examined for builds. Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from it's starting point, linear without any merges. That is, it should be a tree which was built and updated entirely via rebase and fast forward commits. If it fails on one commit in a branch it will stop without building the rest that followed after that one. Instead if will just move on to another branch, if any. The tool must do these steps once by default but allow a parameter to loop with an option to set how many seconds between loops. Other tools like Hudson or CruiseControl could do more fancy scheduling options. The tool must have good defaults but allow optional control. Which repo? origin by default. Which branches? all of them by default. What tool? by default an executable file to be provided by the user named "buildtest", "buildtest.sh" "buildtest.cmd", or buildtest.exe" in the root folder of the repository. Loop delay? run once by default with option to loop after a number of seconds between iterations.
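    I do not know of an existing tool with exactly these defaults, but the core loop is small enough to sketch. The fragment below (C++ for consistency with the other snippets, though a shell or Python script is the more usual vehicle) shows the heart of it as run inside the mirror repository: fetch, list the commits newer than the last known-good one in oldest-first order, and build each in turn until one fails. The "abc123" starting commit is a placeholder, ./buildtest follows the convention described above, popen is POSIX (_popen on Windows), and the bookkeeping for the success/failed lists is left as comments.

      #include <cstdio>
      #include <cstdlib>
      #include <string>
      #include <vector>

      // Run a command and collect one line of output per entry.
      static std::vector<std::string> lines(const std::string& cmd)
      {
          std::vector<std::string> out;
          if (FILE* p = popen(cmd.c_str(), "r")) {
              char buf[256];
              while (std::fgets(buf, sizeof buf, p)) {
                  std::string s(buf);
                  while (!s.empty() && (s.back() == '\n' || s.back() == '\r')) s.pop_back();
                  if (!s.empty()) out.push_back(s);
              }
              pclose(p);
          }
          return out;
      }

      int main()
      {
          const std::string lastGood = "abc123";      // oldest commit in the "success" list (placeholder)
          const std::string branch   = "origin/master";

          std::system("git fetch origin");
          // Commits after lastGood, oldest first -- assumes a linear history, as described above.
          for (const std::string& sha : lines("git rev-list --reverse " + lastGood + ".." + branch)) {
              if (std::system(("git checkout --quiet " + sha).c_str()) != 0) return 1;
              if (std::system("./buildtest") != 0) {
                  std::printf("build failed at %s\n", sha.c_str());
                  return 1;                            // record sha in the "failed" list here
              }
              std::printf("build ok at %s\n", sha.c_str());  // record sha in the "success" list here
          }
          return 0;
      }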

    Read the article

  • How to use Boost 1.41.0 graph layout algorithms

    - by daniil-k
    Hi I have problem using boost graph layout algorithmes. boost verision 1_41_0 mingw g++ 4.4.0. So there are issues I have encountered Can you suggest me with them? The function fruchterman_reingold_force_directed_layout isn't compiled. The kamada_kawai_spring_layout compiled but program crashed. Boost documentation to layout algorithms is wrong, sample to fruchterman_reingold_force_directed_layout isn't compiled. This is my example. To use function just uncomment one. String 60, 61, 63. #include <boost/config.hpp> #include <boost/graph/adjacency_list.hpp> #include <boost/graph/graph_utility.hpp> #include <boost/graph/simple_point.hpp> #include <boost/property_map/property_map.hpp> #include <boost/graph/circle_layout.hpp> #include <boost/graph/fruchterman_reingold.hpp> #include <boost/graph/kamada_kawai_spring_layout.hpp> #include <iostream> //typedef boost::square_topology<>::point_difference_type Point; typedef boost::square_topology<>::point_type Point; struct VertexProperties { std::size_t index; Point point; }; struct EdgeProperty { EdgeProperty(const std::size_t &w):weight(w) {} double weight; }; typedef boost::adjacency_list<boost::listS, boost::listS, boost::undirectedS, VertexProperties, EdgeProperty > Graph; typedef boost::property_map<Graph, std::size_t VertexProperties::*>::type VertexIndexPropertyMap; typedef boost::property_map<Graph, Point VertexProperties::*>::type PositionMap; typedef boost::property_map<Graph, double EdgeProperty::*>::type WeightPropertyMap; typedef boost::graph_traits<Graph>::vertex_descriptor VirtexDescriptor; int main() { Graph graph; VertexIndexPropertyMap vertexIdPropertyMap = boost::get(&VertexProperties::index, graph); for (int i = 0; i < 3; ++i) { VirtexDescriptor vd = boost::add_vertex(graph); vertexIdPropertyMap[vd] = i + 2; } boost::add_edge(boost::vertex(1, graph), boost::vertex(0, graph), EdgeProperty(5), graph); boost::add_edge(boost::vertex(2, graph), boost::vertex(0, graph), EdgeProperty(5), graph); std::cout << "Vertices\n"; boost::print_vertices(graph, vertexIdPropertyMap); std::cout << "Edges\n"; boost::print_edges(graph, vertexIdPropertyMap); PositionMap positionMap = boost::get(&VertexProperties::point, graph); WeightPropertyMap weightPropertyMap = boost::get(&EdgeProperty::weight, graph); boost::circle_graph_layout(graph, positionMap, 100); // boost::fruchterman_reingold_force_directed_layout(graph, positionMap, boost::square_topology<>()); boost::kamada_kawai_spring_layout(graph, positionMap, weightPropertyMap, boost::square_topology<>(), boost::side_length<double>(10), boost::layout_tolerance<>(), 1, vertexIdPropertyMap); std::cout << "Coordinates\n"; boost::graph_traits<Graph>::vertex_iterator i, end; for (boost::tie(i, end) = boost::vertices(graph); i != end; ++i) { std::cout << "ID: (" << vertexIdPropertyMap[*i] << ") x: " << positionMap[*i][0] << " y: " << positionMap[*i][1] << "\n"; } return 0; }

    Read the article

  • Why isn't my algorithm for finding the biggest and smallest inputs working?

    - by Matt Ellen
    I have started a new job, and with it comes a new language: Ironpython. Thankfully a good language :D Before starting I got to grips with Python on the whole, but that was only a week's worth of learning. Now I'm writing actual code. I've been charged with writing an algorithm that finds the best input parameter to collect data with. The basic algorithm is (as I've been instructed): Set the input parameter to a good guess Start collecting data When data is available stop collecting find the highest point If the point before this (i.e. for the previous parameter value) was higher and the point before that was lower then we've found the max otherwise the input parameter is increased by the initial guess. goto 2 If the max is found then the min needs to be found. To do this the algorithm carries on increasing the input, but by 1/10 of the max, until the current point is greater than the previous point and the point before that is also greater. Once the min is found then the algorithm stops. Currently I have a simplified data generator outputting the sin of the input, so that I know that the min value should be PI and the max value should be PI/2 The main Python code looks like this (don't worry, this is just for my edification, I don't write real code like this): import sys sys.path.append(r"F:\Programming Source\C#\PythonHelp\PythonHelp\bin\Debug") import clr clr.AddReferenceToFile("PythonHelpClasses.dll") import PythonHelpClasses from PHCStruct import Helper from System import Math helper = Helper() def run(): b = PythonHelpClasses.Executor() a = PythonHelpClasses.HasAnEvent() b.Input = 0.0 helper.__init__() def AnEventHandler(e): b.Stop() h = helper h.lastLastVal, h.lastVal, h.currentVal = h.lastVal, h.currentVal, e.Number if h.lastLastVal < h.lastVal and h.currentVal < h.lastVal and h.NotPast90: h.NotPast90 = False h.bestInput = h.lastInput inputInc = 0.0 if h.NotPast90: inputInc = Math.PI/10.0 else: inputInc = h.bestInput/10.0 if h.lastLastVal > h.lastVal and h.currentVal > h.lastVal and h.NotPast180: h.NotPast180 = False if h.NotPast180: h.lastInput, b.Input = b.Input, b.Input + inputInc b.Start(a) else: print "Best input:", h.bestInput print "Last input:", h.lastInput b.Stop() a.AnEvent += AnEventHandler b.Start(a) PHCStruct.py: class Helper(): def __init__(self): self.currentVal = 0 self.lastVal = 0 self.lastLastVal = 0 self.NotPast90 = True self.NotPast180 = True self.bestInput = 0 self.lastInput = 0 PythonHelpClasses has two small classes I wrote in C# before I realised how to do it in Ironpython. Executor runs a delegate asynchronously while it's running member is true. The important code: public void Start(HasAnEvent hae) { running = true; RunDelegate r = new RunDelegate(hae.UpdateNumber); AsyncCallback ac = new AsyncCallback(UpdateDone); IAsyncResult ar = r.BeginInvoke(Input, ac, null); } public void Stop() { running = false; } public void UpdateDone(IAsyncResult ar) { RunDelegate r = (RunDelegate)((AsyncResult)ar).AsyncDelegate; r.EndInvoke(ar); if (running) { AsyncCallback ac = new AsyncCallback(UpdateDone); IAsyncResult ar2 = r.BeginInvoke(Input, ac, null); } } HasAnEvent has a function that generates the sin of its input and fires an event with that result as its argument. i.e.: public void UpdateNumber(double val) { AnEventArgs e = new AnEventArgs(Math.Sin(val)); System.Threading.Thread.Sleep(1000); if (null != AnEvent) { AnEvent(e); } } The sleep is in there just to slow things down a bit. 
The problem is that the algorithm does not settle on PI/2 as the best input and PI as the final input, and I can't see why. The best and final inputs are also different each time I run the program. In addition, when the algorithm terminates, the best and final inputs are printed to the screen multiple times rather than just once. Can anyone explain why?
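    The three-sample peak/trough test at the heart of the algorithm can be checked against sin() in isolation, away from the delegate/event plumbing, which makes it easier to see whether the detection logic or the threading is at fault. A rough C++ sketch of just that logic, with step sizes mirroring the PI/10 and bestInput/10 values above (illustrative only, not a port of the IronPython code):

      #include <cmath>
      #include <cstdio>

      int main()
      {
          const double PI = 3.14159265358979;
          double input = 0.0, step = PI / 10.0;
          double lastLast = 0.0, last = 0.0, current = 0.0;
          double bestInput = 0.0;
          bool pastMax = false;

          for (int i = 0; i < 1000; ++i) {
              lastLast = last; last = current; current = std::sin(input);

              if (!pastMax && i >= 2 && last > lastLast && last > current) {
                  pastMax   = true;
                  bestInput = input - step;          // the *previous* input produced the peak
                  step      = bestInput / 10.0;      // switch to the finer step
              } else if (pastMax && i >= 2 && last < lastLast && last < current) {
                  std::printf("best %.4f, final %.4f\n", bestInput, input - step);
                  return 0;                          // trough found: stop
              }
              input += step;
          }
          return 1;
      }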

    Read the article

  • Memory management in iphone cocos2d

    - by muthu
    i am iphone developer very new to this field....i am developing a ebook app in iphone using cocos2d...i use more than 150 images(i guess) the problem while turning from one page to another images get hanged randomly...... i tried this also [[TextureMgr sharedTextureMgr] removeAllTextures]; but went in vain...i guess the the problem is with the memory.....this my coding for all the pages -(id)init { if( (self=[super init] )) { self.isTouchEnabled = YES; [SimpleAudioEngine sharedEngine]; NSLog(@"b4 cover"); Sprite *bg1 = [Sprite spriteWithFile:@"a.jpg"]; bg1.anchorPoint = CGPointZero; [self addChild:bg1 z:-1]; once = TRUE; soundId = [[SimpleAudioEngine sharedEngine] playEffect:@".mp3"]; } return self; } -(void) transitionfront:(id) sender { [[SimpleAudioEngine sharedEngine] stopEffect:soundId]; soundId1 = [[SimpleAudioEngine sharedEngine] playEffect:@"page_turn.mp3"]; flip = [[Sprite spriteWithFile:@"a.jpg"] retain]; [self addChild: flip z:1]; [flip setPosition:ccp(160,240)]; Animation* animation1 = [Animation animationWithName:@"Page1" delay:0.09]; for( int i=1;i<4;i++) [animation1 addFrameWithFilename: [NSString stringWithFormat:@".jpg", i]]; id action = [Animate actionWithAnimation: animation1]; //id action = [RepeatForever actionWithAction:[Animate actionWithAnimation: animation1]]; [flip runAction:action]; [NSTimer scheduledTimerWithTimeInterval:0.3 target:self selector:@selector(moveforward) userInfo:nil repeats:NO]; } -(void) moveforward { [[SimpleAudioEngine sharedEngine] stopEffect:soundId1]; [[Director sharedDirector] replaceScene: [ [Scene node] addChild: [nextpage node] z:0] ]; } -(void) transitionback:(id) sender { [[SimpleAudioEngine sharedEngine] stopEffect:soundId]; soundId1 = [[SimpleAudioEngine sharedEngine] playEffect:@".mp3"]; flip = [[Sprite spriteWithFile:@".jpg"] retain]; [self addChild: flip z:1]; [flip setPosition:ccp(160,240)]; Animation* animation1 = [Animation animationWithName:@"Page1" delay:0.09]; for( int i=3;i>0;i--) [animation1 addFrameWithFilename: [NSString stringWithFormat:@".jpg", i]]; id action = [Animate actionWithAnimation: animation1]; //id action = [RepeatForever actionWithAction:[Animate actionWithAnimation: animation1]]; [flip runAction:action]; [NSTimer scheduledTimerWithTimeInterval:0.3 target:self selector:@selector(movebackward) userInfo:nil repeats:NO]; } -(void) movebackward{ //[[SimpleAudioEngine sharedEngine]stopEffect:@".mp3"]; [[Director sharedDirector]replaceScene:[[Scene node]addChild:[b4page node] z:0]]; } -(void) glossary :(id) sender { [[SimpleAudioEngine sharedEngine]stopEffect:soundId]; [[Director sharedDirector]replaceScene:[[Scene node]addChild:[ node] z:0]]; } -(BOOL)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint cocosTouchPoint = [touch locationInView: [touch view]]; CGPoint point = [[Director sharedDirector] convertToGL:cocosTouchPoint]; NSLog(@"pointx: %f pointy:%f", point.x, point.y); // Was a tab touched, if so, which one... 
if (CGRectContainsPoint(CGRectMake(220, 0, 100, 70), point)) { if(once) { NSLog(@"enterred page1"); [self transitionfront:nil]; once = FALSE; } } if (CGRectContainsPoint(CGRectMake(0,0,60,60), point)) { if(once) { NSLog(@"enterred cover"); [self transitionback:nil]; once = FALSE; } } if (CGRectContainsPoint(CGRectMake(100, 15, 30, 30), point)) { if(once){ [self glossary :nil]; once = FALSE; } } return kEventHandled; } -(void)playEffect:(NSString*)sound{ if(effectPlayer!=nil){ [effectPlayer release]; } NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:sound ofType:@"mp3"]]; effectPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil]; [effectPlayer setDelegate:self]; [effectPlayer play]; } -(void)stopEffect { [effectPlayer stop]; } -(void) dealloc{ [super dealloc]; } do pls help me........ do give me a exact coding this is the err..... *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFDictionary setObject:forKey:]: attempt to insert nil value (key: aesop.mp3)' 2010-05-27 10:43:09.834 abc[276:20b] Stack: ( 11674715, 2476006971, 11758651, 11758490, 5126917, 660698, 660881, 661061, 131577, 448857, 120432, 153433, 630890, 23694899, 23603228, 23630005, 47120081, 11459456, 11455560, 47114125, 47114322, 23633923, 9928, 9814 )

    Read the article

  • How to make a div (black border, overlaid on Google Maps) panel non-droppable

    - by zjm1126
    the black div is used to panel,so it can not be droppable. <!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.0//EN" "http://www.wapforum.org/DTD/xhtml-mobile10.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta name="viewport" content="width=device-width,minimum-scale=0.3,maximum-scale=5.0,user-scalable=yes"> </head> <body onload="initialize()" onunload="GUnload()"> <style type="text/css"> *{ margin:0; padding:0; } .container{ padding:10px; width:50px; height:50px; border:5px solid black; } </style> <!--<div style="width:100px;height:100px;background:blue;"> </div>--> <div id="map_canvas" style="width: 500px; height: 300px;"></div> <!-- <div class=b style="width: 20px; height: 20px;background:red;position:absolute;left:700px;top:200px;"></div> <div class=b style="width: 20px; height: 20px;background:red;position:absolute;left:700px;top:200px;"></div> <div class=b style="width: 20px; height: 20px;background:red;position:absolute;left:700px;top:200px;"></div> <div class=b style="width: 20px; height: 20px;background:red;position:absolute;left:700px;top:200px;"></div> <div class=b style="width: 20px; height: 20px;background:red;position:absolute;left:700px;top:200px;"></div> --> <script src="jquery-1.4.2.js" type="text/javascript"></script> <script src="jquery-ui-1.8rc3.custom.min.js" type="text/javascript"></script> <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=ABQIAAAA-7cuV3vqp7w6zUNiN_F4uBRi_j0U6kJrkFvY4-OX2XYmEAa76BSNz0ifabgugotzJgrxyodPDmheRA&sensor=false"type="text/javascript"></script> <script type="text/javascript"> var aFn; //********** function initialize() { if (GBrowserIsCompatible()) { //************ function a() { } a.prototype = new GControl(); a.prototype.initialize = function(map) { var container = document.createElement("div"); var a=''; for(i=0;i<5;i++){ a+='<div class=b style="width: 20px; height: 20px;background:red;position:absolute;"></div>' } $(container).addClass('container'); $(container).droppable( 'destroy' ).css('z-index','2700') $(map.getContainer()).append($(container).append(a)); return container; } a.prototype.getDefaultPosition = function() { return new GControlPosition(G_ANCHOR_TOP_LEFT, new GSize(7, 7)); } //************ var map = new GMap2(document.getElementById("map_canvas")); map.addControl(new a()); var center=new GLatLng(39.9493, 116.3975); map.setCenter(center, 13); aFn=function(x,y){ var point =new GPoint(x,y) point = map.fromContainerPixelToLatLng(point); //console.log(point.x+" "+point.y) map.addOverlay(new GMarker(point)); } $(".b").draggable({}); $("#map_canvas").droppable({ drop: function(event,ui) { //console.log(ui.offset.left+' '+ui.offset.top) aFn(ui.offset.left+10,ui.offset.top+10); ui.draggable.remove(); } }); } } //************* </script> </body> </html>

    Read the article

  • Can someone explain this color wheel code to me?

    - by user1869438
    I just started doing java and i need some help with understanding this code. I got it from a this website. This is supposed to be code for a color wheel but i don't really understand how it works, especially the final ints STEPS and SLICES. import java.awt.Color; import objectdraw.*; public class ColorWheel extends WindowController { private double brightness; private Text text; private FilledRect swatch; private Location center; private int size; private FilledRect brightnessOverlay; private static final int SLICES = 96; private static final int STEPS = 16; public void begin() { canvas.setBackground(Color.BLACK); brightness = 1.; size = Math.min(canvas.getWidth(), canvas.getHeight() - 20); center = new Location(canvas.getWidth() / 2, size / 2); for(int j = STEPS; j >= 1; j--) { int arcSize = size * j / STEPS; int x = center.getX() - arcSize / 2; int y = center.getY() - arcSize / 2; for(int i = 0; i < SLICES; i++) { Color c = Color.getHSBColor((float)i / SLICES, (float)j / STEPS, (float)brightness); new FilledArc(x, y, arcSize, arcSize, i * 360. / SLICES, 360. / SLICES + .5, c, canvas); } } swatch = new FilledRect(0, canvas.getHeight() - 20, canvas.getWidth(), 20, Color.BLACK, canvas); brightnessOverlay = new FilledRect(0, 0, canvas.getWidth(), canvas.getHeight() - 20, new Color(0, 0, 0, 0), canvas); text = new Text("", canvas.getWidth() / 2, canvas.getHeight() - 18, canvas); text.setAlignment(Text.CENTER, Text.TOP); text.setBold(true); } public void onMouseDrag(Location point) { brightness = (canvas.getHeight() - point.getY()) / (double)(canvas.getHeight()); if(brightness < 0) { brightness = 0; } else if(brightness > 1) { brightness = 1; } if(brightness < .5) { text.setColor(Color.WHITE); } else { text.setColor(Color.BLACK); } brightnessOverlay.setColor(new Color(0f, 0f, 0f, (float)(1 - brightness))); } public void onMouseMove(Location point) { double saturation = 2 * center.distanceTo(point) / size; if(saturation > 1) { text.setText(""); swatch.setColor(Color.BLACK); return; } double hue = -Math.atan2(point.getY() - center.getY(), point.getX() - center.getX()) / (2 * Math.PI); if(hue < 0) { hue += 1; } swatch.setColor(Color.getHSBColor((float)hue, (float)saturation, (float)brightness)); text.setText("Color.getHSBColor(" + Text.formatDecimal(hue, 2) + "f, " + Text.formatDecimal(saturation, 2) + "f, " + Text.formatDecimal(brightness, 2) + "f)"); } }
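    The two constants set the resolution of the wheel: SLICES is the number of angular wedges (each wedge i gets hue i/SLICES), and STEPS is the number of concentric rings (each ring j gets saturation j/STEPS, drawn from the outermost ring inward so that smaller arcs paint over larger ones). A stripped-down C++ sketch of just that double loop, with the FilledArc call replaced by a printout; in the Java version, Color.getHSBColor is what finally turns each (hue, saturation, brightness) triple into an RGB color.

      #include <cstdio>

      int main()
      {
          const int SLICES = 96;   // angular wedges   -> hue
          const int STEPS  = 16;   // concentric rings -> saturation
          const int size   = 256;  // diameter of the outermost ring
          const float brightness = 1.0f;

          for (int j = STEPS; j >= 1; --j) {          // outermost ring first
              int arcSize = size * j / STEPS;         // this ring's diameter
              for (int i = 0; i < SLICES; ++i) {
                  float hue        = (float)i / SLICES;       // 0..1 around the circle
                  float saturation = (float)j / STEPS;        // 1 at the rim, ~0 at the center
                  float startAngle = i * 360.0f / SLICES;     // wedge position in degrees
                  std::printf("ring %2d wedge %2d: hsb(%.3f, %.3f, %.1f), arc starts at %.1f deg, diameter %d px\n",
                              j, i, hue, saturation, brightness, startAngle, arcSize);
              }
          }
          return 0;
      }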

    Read the article

  • OpenGL FrameBuffer Objects weird behavior

    - by Ben Jones
    My algorithm is this: Render the scene to a FBO with shadow mapping from multiple locations Render the scene to the screen with shadow mapping ...black magic that I still have to imlement... Combine the samples from step 1 with the image from step 2 I'm trying to debug steps 1 and 2 and am coming across STRANGE behavior. My algorithm for each shadow mapped pass is: render the scene to a FBO connected to a depth array texture from the POV of each light render the scene from the viewpoint and use vertex/frag shaders to compare the depths When I run my algorithm this way: render from point to FBO render from point to screen glutSwapBuffers() The normal vectors in the screen pass appear to be incorrect (inverted possibly). I'm pretty sure that's the issue because my diffuse lighting calculation is incorrect, but the material colors are correct, and the shadows appear in the correct places. So, it seems like the only thing that could be the culprit is the normals. However if I do render from point to FBO render from point to Screen glutSwapBuffers() //wrong here render from point to Screen glutSwapBuffers() the second pass is correct. I assume there's a problem with my framebuffer calls. Can anyone see what the problem is from the log below? Its from a bugle trace grepped for 'buffer' with a few edits to make it a little more clear. Thanks! [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfeb90 - { 1 }) [INFO] trace.call: glGenFramebuffersEXT(1, 0xdfebac - { 2 }) [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1) [INFO] trace.call: glDrawBuffer(GL_NONE) [INFO] trace.call: glReadBuffer(GL_NONE) [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0) //start render to FBO [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2) [INFO] trace.call: glReadBuffer(GL_NONE) [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 2, 0) [INFO] trace.call: glFramebufferTexture2DEXT(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, 3, 0) [INFO] trace.call: glDrawBuffer(GL_COLOR_ATTACHMENT0) //bind to the FBO attached to a depth tex array for shadows [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1) [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0) [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT) //draw geometry //bind to the FBO I want the shadow mapped image rendered to [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 2) [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) //draw geometry //draw to screen pass //again shadow mapping FBO [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1) [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0) [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT) //draw geometry //bind to the screen [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0) [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) //finished, swap buffers [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002) //INCORRECT OUTPUT //second try at render to screen: [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 1) [INFO] trace.call: glFramebufferTextureLayerARB(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 1, 0, 0) [INFO] trace.call: glClear(GL_DEPTH_BUFFER_BIT) //draw geometry [INFO] trace.call: glBindFramebufferEXT(GL_FRAMEBUFFER, 0) [INFO] trace.call: glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) draw geometry [INFO] trace.call: glXSwapBuffers(0xd5fc10, 0x05800002) //correct output
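    Not a fix for the inverted-looking normals as such, but one way to rule out state leaking between the FBO and screen passes is to make every bind explicit and automatically restore the previous binding when a pass ends. A small RAII sketch using the same EXT entry points as the trace (it assumes GLEW or an equivalent loader exposes them); whether this changes the first screen pass is a guess, not a confirmed diagnosis.

      #include <GL/glew.h>

      // Re-binds the previously bound framebuffer when a pass ends, so each pass
      // starts from a known binding instead of whatever the last pass left behind.
      class ScopedFramebuffer {
      public:
          explicit ScopedFramebuffer(GLuint fbo)
          {
              glGetIntegerv(GL_FRAMEBUFFER_BINDING_EXT, &previous_);
              glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
          }
          ~ScopedFramebuffer()
          {
              glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, static_cast<GLuint>(previous_));
          }
      private:
          GLint previous_ = 0;
      };

      // usage inside the render function:
      // {
      //     ScopedFramebuffer bind(shadowFbo);   // the depth-array FBO, id 1 in the trace
      //     ... render shadow pass ...
      // }                                        // previous binding restored here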

    Read the article

  • Smooth animation using MatrixTransform?

    - by Mattias Konradsson
    I'm trying to do an Matrix animation where I both scale and transpose a canvas at the same time. The only approach I found was using a MatrixTransform and MatrixAnimationUsingKeyFrames. Since there doesnt seem to be any interpolation for matrices built in (only for path/rotate) it seems the only choice is to try and build the interpolation and DiscreteMatrixKeyFrame's yourself. I did a basic implementation of this but it isnt exactly smooth and I'm not sure if this is the best way and how to handle framerates etc. Anyone have suggestions for improvement? Here's the code: MatrixAnimationUsingKeyFrames anim = new MatrixAnimationUsingKeyFrames(); int duration = 1; anim.KeyFrames = Interpolate(new Point(0, 0), centerPoint, 1, factor,100,duration); this.matrixTransform.BeginAnimation(MatrixTransform.MatrixProperty, anim,HandoffBehavior.Compose); public MatrixKeyFrameCollection Interpolate(Point startPoint, Point endPoint, double startScale, double endScale, double framerate,double duration) { MatrixKeyFrameCollection keyframes = new MatrixKeyFrameCollection(); double steps = duration * framerate; double milliSeconds = 1000 / framerate; double timeCounter = 0; double diffX = Math.Abs(startPoint.X- endPoint.X); double xStep = diffX / steps; double diffY = Math.Abs(startPoint.Y - endPoint.Y); double yStep = diffY / steps; double diffScale= Math.Abs(startScale- endScale); double scaleStep = diffScale / steps; if (endPoint.Y < startPoint.Y) { yStep = -yStep; } if (endPoint.X < startPoint.X) { xStep = -xStep; } if (endScale < startScale) { scaleStep = -scaleStep; } Point currentPoint = new Point(); double currentScale = startScale; for (int i = 0; i < steps; i++) { keyframes.Add(new DiscreteMatrixKeyFrame(new Matrix(currentScale, 0, 0, currentScale, currentPoint.X, currentPoint.Y), KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(timeCounter)))); currentPoint.X += xStep; currentPoint.Y += yStep; currentScale += scaleStep; timeCounter += milliSeconds; } keyframes.Add(new DiscreteMatrixKeyFrame(new Matrix(endScale, 0, 0, endScale, endPoint.X, endPoint.Y), KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(0)))); return keyframes; }
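    Part of what makes hand-stepped keyframes look rough is that the values are advanced by fixed per-frame increments instead of being evaluated from normalized elapsed time, so any hiccup in the frame rate shows up as a jump and the sign handling for each component is manual. A language-neutral sketch of time-based evaluation (plain C++ here, only to show the math; in the WPF code this would replace the xStep/yStep/scaleStep accumulation, however the keyframes are ultimately produced):

      #include <cstdio>

      struct MatrixParams { double scale, offsetX, offsetY; };

      // Evaluate the animated matrix parameters at normalized time t in [0, 1].
      MatrixParams interpolate(MatrixParams start, MatrixParams end, double t)
      {
          if (t < 0.0) t = 0.0;
          if (t > 1.0) t = 1.0;
          // Optional ease-in/ease-out (smoothstep); drop this line for linear motion.
          t = t * t * (3.0 - 2.0 * t);
          return { start.scale   + (end.scale   - start.scale)   * t,
                   start.offsetX + (end.offsetX - start.offsetX) * t,
                   start.offsetY + (end.offsetY - start.offsetY) * t };
      }

      int main()
      {
          MatrixParams from { 1.0, 0.0, 0.0 }, to { 2.5, 120.0, 80.0 };
          const double durationMs = 1000.0;
          for (double elapsed = 0.0; elapsed <= durationMs; elapsed += 250.0) {
              MatrixParams m = interpolate(from, to, elapsed / durationMs);
              std::printf("t=%4.0f ms  scale=%.3f  offset=(%.1f, %.1f)\n",
                          elapsed, m.scale, m.offsetX, m.offsetY);
          }
          return 0;
      }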

    Read the article

  • MySQL: batching multiple SELECTs in .NET

    - by Amith George
    I have a situation in my application. For each x-axis point in my chart, I am plotting 5 y-axis values. To calculate each of these 5 values, I need to make 4 different queries. Ie, for each x-axis point I need to fire 20 sql queries. Now, I need to plot 40 such points in the my chart. Its resulting in a pathetic performance where it takes close to a minute to get all the data back from the database. Each of 4 different queries consists of a join between 2 tables. One has only 6 rows. The other close to 10,000. Each of the 4 queries has different WHERE clauses, so they are different queries. For each point in the x-axis, only the values for the where clauses change. I have tried combining each of the 4 queries into one big string. Basically batch the four selects. These are again batched for each y-axis value. So, for each x-axis point, I am now firing one big command that consists of 20 different select statements. Technically, I should be experiencing a big performance boost, right? Instead of hitting the db 40x5x4 = 800 times, I am now hitting it just 40 times. But instead of taking 60 seconds, it taking 50-55 seconds... not much of a help. I am using MySql 5.1, and the 6.1 version of its .Net connector. What can I do to improve the performance? Edit: One of the 4 queries is as follows: SELECT SUM(TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1))* T2.col1 / (3600 *1000)) AS TotalTime FROM Table T1 JOIN Table T2 ON T1.col3 = T2.col3 WHERE T1.col4 = 'i' AND T1.col1 >= '2009-12-25 00:00:00' AND T1.col2 <= '2009-12-26 00:00:00'; The other 3 queries are similar, only the where clause changes slightly. This set of 4 queries is fired 5 times. The first 3 times against the join of table T1 and T2, passing in different values for col4. And the next two times against the join of table T3 and T2 passing in different values for col4. These 5 values are the y-axis values for a particular x-axis point. The data returned by all these queries is the same format. so, we tried doing a UNION ALL on all these queries. No substantial difference. One strange thing, however, after indexing the foreign key on the table T1 [while it contained over a lakh records], the queries were using the index, but they had become slower. At times, the queries would take double the time to return the data.
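    Since the four queries differ only in their WHERE clauses, they can often be collapsed into one statement per x-axis point with conditional aggregation, so each point costs one round trip instead of four. A hedged sketch of the shape such a statement could take, reusing the placeholder column names from the question; the 'i'/'o' values for col4 and the @from/@to parameter names are made up for illustration.

      #include <string>

      // One statement per x-axis point: each SUM(CASE ...) column replaces one of the
      // separate queries that differed only in the col4 filter.
      const std::string query = R"sql(
          SELECT
              SUM(CASE WHEN T1.col4 = 'i'
                  THEN TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1)) * T2.col1 / (3600 * 1000) END) AS total_i,
              SUM(CASE WHEN T1.col4 = 'o'
                  THEN TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1)) * T2.col1 / (3600 * 1000) END) AS total_o
          FROM T1
          JOIN T2 ON T1.col3 = T2.col3
          WHERE T1.col1 >= @from AND T1.col2 <= @to
      )sql";

    Whether that beats the current 50-55 seconds depends on where the time actually goes; running EXPLAIN on one of the existing queries (especially after the foreign-key index change mentioned above) would show whether the join or the round trips dominate.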

    Read the article

  • Subroutine & GoTo design

    - by sub
    I have a strange question concerning subroutines: As I'm creating a minimal language and I don't want to add high-level loops like while or for I was planning on just adding gotos to keep it Turing-Complete. Now I thought, eww - gotos - I wouldn't want to program in that language if I had to use gotos so often. So I thought about adding subroutines instead. I see the difference as the following: gotos Go to (captain obvious) a previously defined point and continue executing the program from there. Leads to hardly understandable and buggy code, I think that's a fact. subroutines Similiar: You define their starting point somewhere, as you call them the program jumps there - but the subroutine can go back to the point it was called from with return. Okay. Why didn't I just add the more function-like, nice looking subroutines? Because: In order to make return work if I call subroutines from within subroutines from within other subroutines, I'd have to use a stack containing the point where the currently running subroutine came from at top. That would then mean that I would, if I create loops using the subroutines, end up with an extremely memory-eating, overflowing stack with return locations. Not good. Don't think of my subroutines as functions. They are just gotos that return to the point they were called from, they don't actually give back values like the return x; statement in nearly all today's languages. Now to my actual questions: How can I solve the above problem with the stack overflow on loops with subroutines? Do I have to add a separate goto language construct without the return option? Assembler doesn't have loops but as I have seen myJumpPoint:, jnz, jz, retn. That means to me that there must also be a stack containing all the return locations. Am I right with that? What about long running loops then? Don't they overflow the stack/eat memory then? Am I getting the retn symbol in assembler totally wrong? If yes, please explain it to me.
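    The detail that keeps the stack from growing is the difference between a plain jump, which only changes the instruction pointer, and a call, which pushes a return address. A loop built from a backward jump pushes nothing, so it can run forever; only nested calls deepen the return stack, and retn simply pops one address. That is also the answer to the assembler question: jz/jnz are jumps, so long-running loops cost no memory. A toy interpreter sketch making the distinction concrete, with a JMP instruction alongside CALL/RET:

      #include <cstdio>
      #include <vector>

      enum Op { PRINT, JMP, JZ, CALL, RET, DEC, HALT };
      struct Instr { Op op; int arg; };

      int main()
      {
          // A "subroutine" at address 5 that prints, and a loop at 0..4 that calls it
          // three times via a backward JMP. Only CALL/RET touch the return stack.
          std::vector<Instr> prog = {
              /*0*/ { CALL, 5 },   // push return address, jump to 5
              /*1*/ { DEC,  0 },   // counter--
              /*2*/ { JZ,   4 },   // counter == 0 ? jump to 4
              /*3*/ { JMP,  0 },   // plain jump: no stack growth, loop again
              /*4*/ { HALT, 0 },
              /*5*/ { PRINT, 0 },
              /*6*/ { RET,  0 },   // pop return address and continue after the CALL
          };

          std::vector<int> returnStack;   // grows only as deep as calls are nested
          int pc = 0, counter = 3;

          while (prog[pc].op != HALT) {
              Instr in = prog[pc];
              switch (in.op) {
                  case PRINT: std::printf("hello (stack depth %zu)\n", returnStack.size()); ++pc; break;
                  case JMP:   pc = in.arg; break;
                  case JZ:    pc = (counter == 0) ? in.arg : pc + 1; break;
                  case CALL:  returnStack.push_back(pc + 1); pc = in.arg; break;
                  case RET:   pc = returnStack.back(); returnStack.pop_back(); break;
                  case DEC:   --counter; ++pc; break;
                  case HALT:  break;
              }
          }
          return 0;
      }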

    Read the article

  • MapScript queryByPoint returns no results

    - by lucian.jp
    I have a dynamically generated mapfile made with c# mapscript that is defined like: MAP EXTENT 5.91828 45.63552 5.92346 45.65051 IMAGECOLOR 192 192 192 IMAGETYPE png SIZE 256 256 STATUS ON TRANSPARENT TRUE UNITS METERS NAME "GMAP_TILE" OUTPUTFORMAT NAME "png" MIMETYPE "image/png" DRIVER "GD/PNG" EXTENSION "png" IMAGEMODE "PC256" TRANSPARENT TRUE END SYMBOL NAME "circle" TYPE ELLIPSE FILLED TRUE POINTS 1 1 END END SYMBOL NAME ">" TYPE TRUETYPE ANTIALIAS TRUE CHARACTER ">" GAP -20 FONT "arial" POSITION CC END PROJECTION "proj=merc" "a=6378137" "b=6378137" "lat_ts=0.0" "lon_0=0.0" "x_0=0.0" "y_0=0" "units=m" "k=1.0" "nadgrids=@null" END LEGEND IMAGECOLOR 255 255 255 KEYSIZE 20 10 KEYSPACING 5 5 LABEL SIZE MEDIUM TYPE BITMAP BUFFER 0 COLOR 0 0 0 FORCE FALSE MINDISTANCE -1 MINFEATURESIZE -1 OFFSET 0 0 PARTIALS TRUE END POSITION LL STATUS OFF END QUERYMAP COLOR 255 255 0 SIZE -1 -1 STATUS ON STYLE HILITE END SCALEBAR ALIGN CENTER COLOR 0 0 0 IMAGECOLOR 255 255 255 INTERVALS 4 LABEL SIZE MEDIUM TYPE BITMAP BUFFER 0 COLOR 0 0 0 FORCE FALSE MINDISTANCE -1 MINFEATURESIZE -1 OFFSET 0 0 PARTIALS TRUE END POSITION LL SIZE 200 3 STATUS OFF STYLE 0 UNITS MILES END WEB IMAGEPATH "" IMAGEURL "" QUERYFORMAT text/html LEGENDFORMAT text/html BROWSEFORMAT text/html END LAYER NAME "Troncons" PROJECTION "proj=longlat" "ellps=WGS84" "datum=WGS84" END STATUS DEFAULT TEMPLATE "nofile.html" TOLERANCE 100 TOLERANCEUNITS METERS TYPE LINE UNITS METERS CLASS NAME "Troncons" STYLE ANGLE 360 COLOR 0 0 255 SIZE 5 SYMBOL "circle" WIDTH 5 END STYLE ANGLE 360 COLOR 0 0 0 SIZE 12 SYMBOL ">" WIDTH 1 END END FEATURE POINTS 5.91828 45.63552 5.91876 45.63611 5.91898 45.6364 5.91936 45.63701 5.91952 45.63731 5.91968 45.63762 5.91993 45.63825 5.92003 45.63856 5.92018 45.63919 5.92028 45.63983 5.92031 45.64014 5.92033 45.64046 5.92034 45.64077 5.92034 45.64108 5.92034 45.64171 5.92035 45.64234 5.92035 45.6428 5.92037 45.6433 5.9204 45.64394 5.92046 45.64458 5.92056 45.64522 5.92062 45.64554 5.92069 45.64586 5.92077 45.64617 5.92097 45.64679 5.92122 45.64739 5.92136 45.64769 5.92169 45.64828 5.92207 45.64886 5.92228 45.64914 5.92272 45.64969 5.92321 45.65023 5.92346 45.65051 END END END END I try to queryByPoint to retreive the index of the shape clciked near. In the code below I made a specific test function with fixed point instead of points passed by parameter so I am sure the point I use is actually part of a feature. In my case I use the first point of the only feature contained in mapfile. public string GetTronconId() { //_map is my dynamically created mapObj if (_map != null) for (int i = 0; i < _map.numlayers; i++) { layerObj layer = _map.getLayer(i); // Code never pass this point if (layer.queryByPoint(_map, new pointObj(5.91898, 45.6364, 0, 0), (int) MS_QUERY_MODE.MS_QUERY_MULTIPLE, 100) == (int) MS_RETURN_VALUE.MS_SUCCESS) { int numresults = layer.getNumResults(); if (numresults != 0) { layer.open(); for (int j = 0; j < numresults; j++) { resultCacheMemberObj resultat = layer.getResult(j); shapeObj shape = null; if (layer.getShape(shape, resultat.tileindex, resultat.shapeindex) == (int) MS_RETURN_VALUE.MS_SUCCESS) return shape.getValue(0); } } } } return null; } I have a dummy TEMPLATE set, I do not eveen have to use the tolerance since the point is derectly in a shape, but the queryByPoint keep returning me MS_FAILURE. From my searches on the web everything seem to be OK. Any idea?

    Read the article

  • Agilent E4426B signal generator locks up during multiple GPIB *SAV operations

    - by aspiehler
    I have a test fixture with an Agilent E4426B RF signal generator connected to a PC via a National Instrument Ethernet-to-GPIB bridge. My software is attempting to sanitize the instrument by presetting it and then saving the current state to all of the memory locations writable via the standard SCPI command "*SAV x,y". The loop works to a point, but eventually the instrument responds with an error and continuously displays the "L" icon on the front display and a "Remote preset" message at the bottom. At that point it won't respond to any more remote commands and I have to either cycle power or press LOCAL, then PRESET at which point it takes about 3 minutes to finish presetting. At that point the "L" icon is still present and and the next GPIB command sent to the instrument causes it to report a -113 error (undefined header) in the instrument error queue. I fired up NI spy to see what was happening, and found that the error was happening at the same point in the loop - "*SAV 6,2" in this case. From NI Spy: Send (0, 0x0017, "*SAV 6,2", 8 (0x8), NLend (0,0x01)) Process ID: 0x00000520 Thread ID: 0x00000518 ibsta:0xc168 iberr: 6 ibcntl: 2(0x2) And here's the code from the instrument driver: int CHP_E4426b::Erase() { if ((m_StatusCode = Initialize()) != GPIB_SUCCESS) // basically just sends "*RST" return m_StatusCode; m_SaveState = "*SAV %d, %d"; for (int i=0; i < 10; i++) for (int j=0; j < 100; j++) { sprintf(m_CmdString, m_SaveState, j, i); if ((m_StatusCode = Send(m_CmdString, strlen(m_CmdString))) != GPIB_SUCCESS) return m_StatusCode; } return GPIB_SUCCESS; } I tried putting a small Sleep() delay (10-20 ms) at the end of the inner loop, and to my surprise it caused the error to show up earlier rather than later. 10 ms caused the loop to error out at 44,1 and 20 ms was even sooner. I've already eliminated faulty cabling or the instrument as the culprit. This same type of sequence works without any error on a higher end signal generator, so I'm tempted to chalk this up to a bug in the instrument's firmware.
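    One low-risk experiment, not a confirmed fix, is to make each save synchronous by following every *SAV with an *OPC? query and waiting for the instrument's "1" reply before sending the next one, so the input buffer and the state-save machinery are never flooded; IEEE 488.2 defines *OPC? for exactly this kind of pacing. The sketch below uses two hypothetical callbacks standing in for whatever GPIB write/read layer the fixture already has (they are not part of any specific driver API), and assumes the query helper strips the line terminator from the reply.

      #include <cstdio>
      #include <functional>
      #include <string>

      // writeLine / queryLine are placeholders for the fixture's existing GPIB I/O.
      using WriteFn = std::function<bool(const std::string&)>;
      using QueryFn = std::function<std::string(const std::string&)>;

      bool saveAllStates(const WriteFn& writeLine, const QueryFn& queryLine)
      {
          if (!writeLine("*RST")) return false;
          if (queryLine("*OPC?") != "1") return false;        // wait for the preset to finish

          for (int reg = 0; reg < 10; ++reg) {
              for (int seq = 0; seq < 100; ++seq) {
                  char cmd[32];
                  std::snprintf(cmd, sizeof cmd, "*SAV %d,%d", seq, reg);
                  if (!writeLine(cmd)) return false;
                  // Block until the instrument reports the save is complete before
                  // sending the next one; also a good place to poll the error queue.
                  if (queryLine("*OPC?") != "1") {
                      std::printf("save failed at %s\n", cmd);
                      return false;
                  }
              }
          }
          return true;
      }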

    Read the article

  • How to drag a 'div' element onto Google Maps so that it becomes a 'marker', using jQuery

    - by zjm1126
    this is my code : <!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.0//EN" "http://www.wapforum.org/DTD/xhtml-mobile10.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no"> </head> <body onload="initialize()" onunload="GUnload()"> <style type="text/css"> </style> <div id="map_canvas" style="width: 500px; height: 300px;float:left;"></div> <div id=b style="width: 50px; height: 50px;background:red;float:left;margin-left:300px;"></div> <script src="jquery-1.4.2.js" type="text/javascript"></script> <script src="jquery-ui-1.8rc3.custom.min.js" type="text/javascript"></script> <script src="http://ditu.google.cn/maps?file=api&amp;v=2&amp;key=ABQIAAAA-7cuV3vqp7w6zUNiN_F4uBRi_j0U6kJrkFvY4-OX2XYmEAa76BSNz0ifabgugotzJgrxyodPDmheRA&sensor=false"type="text/javascript"></script> <script type="text/javascript"> //********** function initialize() { if (GBrowserIsCompatible()) { // function createMarker(point, number) { var marker = new GMarker(point); var message = ["?","?","?","??","??"]; marker.value = number; GEvent.addListener(marker, "click", function() { var myHtml = "<b>#" + number + "</b><br/>" + message[number -1]; map.openInfoWindowHtml(point, myHtml); }); return marker; } // var map = new GMap2(document.getElementById("map_canvas")); map.setCenter(new GLatLng(39.9493, 116.3975), 13); // Add 5 markers to the map at random locations var bounds = map.getBounds(); var southWest = bounds.getSouthWest(); var northEast = bounds.getNorthEast(); var lngSpan = northEast.lng() - southWest.lng(); var latSpan = northEast.lat() - southWest.lat(); for (var i = 0; i < 5; i++) { var point = new GLatLng(southWest.lat() + latSpan * Math.random(), southWest.lng() + lngSpan * Math.random()); map.addOverlay(createMarker(point, i + 1)); } } } //************* $("#b").draggable(); </script> </body> </html>

    Read the article

  • segmented reduction with scattered segments

    - by Christian Rau
    I got to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas to approach this problem. I have many points in 3-space which are assigned to a very small number of groups (each point belongs to one group), specifically 15 in this case (doesn't ever change). Now I want to compute the mean and covariance matrix of all the groups. So on the CPU it's roughly the same as: for each point p { mean[p.group] += p.pos; covariance[p.group] += p.pos * p.pos; ++count[p.group]; } for each group g { mean[g] /= count[g]; covariance[g] = covariance[g]/count[g] - mean[g]*mean[g]; } Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU, anyway). The first step is actually just a segmented reduction, but with the segments scattered around. So the first idea I came up with, was to first sort the points by their groups. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting?, atomics may not be the best idea). After that they're sorted by groups and I could possibly come up with an adaption of the segmented scan algorithms presented here. But in this special case, I got a very large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using a shared memory element per thread and a thread per point might make problems regarding per-multiprocessor resources as shared memory or registers (Ok, much more on compute capability 1.x than 2.x, but still). Due to the very small and constant number of groups I thought there might be better approaches. Maybe there are already existing ideas suited for these specific properties of such a standard problem. Or maybe my general approach isn't that bad and you got ideas for improving the individual steps, like a good sorting algorithm suited for a very small number of keys or some segmented reduction algorithm minimizing shared memory/register usage. I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter as the general concepts of GPU computing don't really differ over the major frameworks.
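    With only 15 groups, a common alternative to sorting or per-point atomics is privatization: give each work-item (or work-group) its own small array of 15 accumulators, stream over its slice of the points, and combine the tiny per-worker tables at the end. The CPU sketch below shows the accumulation and merge with plain C++ threads standing in for work-groups; an OpenCL version would keep the per-group accumulators in private or local memory and do the final combine in a second kernel or on the host, and the 9-10 values per group may force the registers-versus-local-memory trade-off noted above.

      #include <algorithm>
      #include <array>
      #include <cstddef>
      #include <thread>
      #include <vector>

      constexpr int kGroups = 15;

      struct Point { float x, y, z; int group; };
      struct Accum {                                // running sums for one group
          double sum[3]  = {0, 0, 0};               // for the mean
          double sum2[6] = {0, 0, 0, 0, 0, 0};      // xx, xy, xz, yy, yz, zz for the covariance
          long long count = 0;
      };

      int main()
      {
          std::vector<Point> points(1'000'000, Point{1.f, 2.f, 3.f, 7});
          const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
          std::vector<std::array<Accum, kGroups>> partial(workers);   // one small table per worker

          std::vector<std::thread> pool;
          for (unsigned w = 0; w < workers; ++w)
              pool.emplace_back([&, w] {
                  for (std::size_t i = w; i < points.size(); i += workers) {
                      const Point& p = points[i];
                      Accum& a = partial[w][p.group];                 // no atomics, no sorting
                      a.sum[0] += p.x;  a.sum[1] += p.y;  a.sum[2] += p.z;
                      a.sum2[0] += double(p.x)*p.x; a.sum2[1] += double(p.x)*p.y; a.sum2[2] += double(p.x)*p.z;
                      a.sum2[3] += double(p.y)*p.y; a.sum2[4] += double(p.y)*p.z; a.sum2[5] += double(p.z)*p.z;
                      ++a.count;
                  }
              });
          for (auto& t : pool) t.join();

          std::array<Accum, kGroups> total{};                         // final combine is tiny: workers x 15
          for (const auto& part : partial)
              for (int g = 0; g < kGroups; ++g) {
                  for (int k = 0; k < 3; ++k) total[g].sum[k]  += part[g].sum[k];
                  for (int k = 0; k < 6; ++k) total[g].sum2[k] += part[g].sum2[k];
                  total[g].count += part[g].count;
              }
          // mean[g] = sum/count; covariance from sum2/count - mean*mean^T, as in the question.
          return 0;
      }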

    Read the article

  • How would you implement this "WorkerChain" functionality in .NET?

    - by Dan Tao
    Sorry for the vague question title -- not sure how to encapsulate what I'm asking below succinctly. (If someone with editing privileges can think of a more descriptive title, feel free to change it.) The behavior I need is this. I am envisioning a worker class that accepts a single delegate task in its constructor (for simplicity, I would make it immutable -- no more tasks can be added after instantiation). I'll call this task T. The class should have a simple method, something like GetToWork, that will exhibit this behavior: If the worker is not currently running T, then it will start doing so right now. If the worker is currently running T, then once it is finished, it will start T again immediately. GetToWork can be called any number of times while the worker is running T; the simple rule is that, during any execution of T, if GetToWork was called at least once, T will run again upon completion (and then if GetToWork is called while T is running that time, it will repeat itself again, etc.). Now, this is pretty straightforward with a boolean switch. But this class needs to be thread-safe, by which I mean, steps 1 and 2 above need to comprise atomic operations (at least I think they do). There is an added layer of complexity. I have need of a "worker chain" class that will consist of many of these workers linked together. As soon as the first worker completes, it essentially calls GetToWork on the worker after it; meanwhile, if its own GetToWork has been called, it restarts itself as well. Logically calling GetToWork on the chain is essentially the same as calling GetToWork on the first worker in the chain (I would fully intend that the chain's workers not be publicly accessible). One way to imagine how this hypothetical "worker chain" would behave is by comparing it to a team in a relay race. Suppose there are four runners, W1 through W4, and let the chain be called C. If I call C.StartWork(), what should happen is this: If W1 is at his starting point (i.e., doing nothing), he will start running towards W2. If W1 is already running towards W2 (i.e., executing his task), then once he reaches W2, he will signal to W2 to get started, immediately return to his starting point and, since StartWork has been called, start running towards W2 again. When W1 reaches W2's starting point, he'll immediately return to his own starting point. If W2 is just sitting around, he'll start running immediately towards W3. If W2 is already off running towards W3, then W2 will simply go again once he's reached W3 and returned to his starting point. The above is probably a little convoluted and written out poorly. But hopefully you get the basic idea. Obviously, these workers will be running on their own threads. Also, I guess it's possible this functionality already exists somewhere? If that's the case, definitely let me know!
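    The single-worker behaviour can be reduced to a three-state machine: 0 = idle, 1 = running, 2 = running with a rerun requested. GetToWork either wins the idle-to-running transition and starts the loop, or bumps the state to 2; the loop runs the task again as long as the state keeps getting bumped, so any number of calls during a pass coalesce into one extra pass. A hedged C++ sketch of just that piece (class and member names are made up); for the relay-race chain, RunLoop would call the next worker's GetToWork right after task_() returns.

      #include <atomic>
      #include <functional>
      #include <thread>

      class Worker {
      public:
          explicit Worker(std::function<void()> task) : task_(std::move(task)) {}
          ~Worker() { if (thread_.joinable()) thread_.join(); }

          // If idle: start running the task. If running: make sure it runs once more
          // after the current pass (calls during a pass coalesce into one rerun).
          void GetToWork()
          {
              int s = state_.load();
              for (;;) {
                  if (s == 0) {                                   // idle -> running
                      if (state_.compare_exchange_weak(s, 1)) {
                          if (thread_.joinable()) thread_.join(); // reap the previous run
                          thread_ = std::thread([this] { RunLoop(); });
                          return;
                      }
                  } else if (s == 1) {                            // running -> running + rerun
                      if (state_.compare_exchange_weak(s, 2)) return;
                  } else {                                        // rerun already queued
                      return;
                  }
              }
          }

      private:
          void RunLoop()
          {
              for (;;) {
                  task_();
                  int s = 2;
                  if (state_.compare_exchange_strong(s, 1)) continue;  // a rerun was requested
                  s = 1;
                  if (state_.compare_exchange_strong(s, 0)) return;    // nothing pending: go idle
                  state_.store(1);                                     // a request arrived just now
              }
          }

          std::function<void()> task_;
          std::atomic<int> state_{0};   // 0 = idle, 1 = running, 2 = running + rerun requested
          std::thread thread_;
      };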

    Read the article

  • Plane bombing problem - help

    - by peiska
    I'm training code problems, and on this one I am having problems to solve it, can you give me some tips how to solve it please. The problem is something like this: Your task is to find the sequence of points on the map that the bomber is expected to travel such that it hits all vital links. A link from A to B is vital when its absence isolates completely A from B. In other words, the only way to go from A to B (or vice versa) is via that link. Notice that if we destroy for example link (d,e), it becomes impossible to go from d to e,m,l or n in any way. A vital link can be hit at any point that lies in its segment (e.g. a hit close to d is as valid as a hit close to e). Of course, only one hit is enough to neutralize a vital link. Moreover, each bomb affects an exact circle of radius R, i.e., every segment that intersects that circle is considered hit. Due to enemy counter-attack, the plane may have to retreat at any moment, so the plane should follow, at each moment, to the closest vital link possible, even if in the end the total distance grows larger. Given all coordinates (the initial position of the plane and the nodes in the map) and the range R, you have to determine the sequence of positions in which the plane has to drop bombs. This sequence should start (takeoff) and finish (landing) at the initial position. Except for the start and finish, all the other positions have to fall exactly in a segment of the map (i.e. it should correspond to a point in a non-hit vital link segment). The coordinate system used will be UTM (Universal Transverse Mercator) northing and easting, which basically corresponds to a Euclidian perspective of the world (X=Easting; Y=Northing). Input Each input file will start with three floating point numbers indicating the X0 and Y0 coordinates of the airport and the range R. The second line contains an integer, N, indicating the number of nodes in the road network graph. Then, the next N (<10000) lines will each contain a pair of floating point numbers indicating the Xi and Yi coordinates (1 No two links will ever cross with each other. Output The program will print the sequence of coordinates (pairs of floating point numbers with exactly one decimal place), each one at a line, in the order that the plane should visit (starting and ending in the airport). Sample input 1 102.3 553.9 0.2 14 342.2 832.5 596.2 638.5 479.7 991.3 720.4 874.8 744.3 1284.1 1294.6 924.2 1467.5 659.6 1802.6 659.6 1686.2 860.7 1548.6 1111.2 1834.4 1054.8 564.4 1442.8 850.1 1460.5 1294.6 1485.1 17 1 2 1 3 2 4 3 4 4 5 4 6 6 7 7 8 8 9 8 10 9 10 10 11 6 11 5 12 5 13 12 13 13 14 Sample output 1 102.3 553.9 720.4 874.8 850.1 1460.5 102.3 553.9
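    The vital links are exactly the bridges of the road graph: edges whose removal disconnects their endpoints. They can all be found in O(V + E) with one depth-first search that tracks discovery times and low-links; what remains is ordering the bomb drops by the closest-first rule and intersecting each bridge segment with the blast radius R. A sketch of just the bridge-finding step, hard-coded with the edge list from the sample input above (for that graph it should report the three links matching the three bomb positions in the sample output):

      #include <algorithm>
      #include <cstdio>
      #include <vector>

      struct Edge { int to, id; };

      std::vector<std::vector<Edge>> adj;
      std::vector<int> tin, low;
      std::vector<bool> isBridge;
      int dfsTimer = 0;

      void dfs(int u, int parentEdge)
      {
          tin[u] = low[u] = dfsTimer++;
          for (const Edge& e : adj[u]) {
              if (e.id == parentEdge) continue;          // don't go straight back along the same edge
              if (tin[e.to] != -1) {                     // back edge: can reach an ancestor
                  low[u] = std::min(low[u], tin[e.to]);
              } else {
                  dfs(e.to, e.id);
                  low[u] = std::min(low[u], low[e.to]);
                  if (low[e.to] > tin[u])                // subtree can't reach above u without this edge
                      isBridge[e.id] = true;
              }
          }
      }

      int main()
      {
          // Edges of the sample graph (1-based node ids, as in the input above).
          std::vector<std::pair<int,int>> edges = {
              {1,2},{1,3},{2,4},{3,4},{4,5},{4,6},{6,7},{7,8},{8,9},
              {8,10},{9,10},{10,11},{6,11},{5,12},{5,13},{12,13},{13,14}
          };
          int n = 14;
          adj.assign(n + 1, {});
          tin.assign(n + 1, -1);
          low.assign(n + 1, 0);
          isBridge.assign(edges.size(), false);
          for (int i = 0; i < (int)edges.size(); ++i) {
              adj[edges[i].first].push_back({edges[i].second, i});
              adj[edges[i].second].push_back({edges[i].first, i});
          }
          for (int v = 1; v <= n; ++v)
              if (tin[v] == -1) dfs(v, -1);
          for (int i = 0; i < (int)edges.size(); ++i)
              if (isBridge[i]) std::printf("vital link: %d - %d\n", edges[i].first, edges[i].second);
          return 0;
      }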

    Read the article

  • Not sure what happens to my app's objects when using NSURLSession in the background - what state is my app in?

    - by Avner Barr
    More of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple contrived example code. I have a database which holds objects - such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded in order to accurately display information to the user. It is also important to be able to upload to the server in a background task because the app can be killed at any point. for instance a simple profile picture object: @interface ProfilePicture : NSObject @property int userId; @property UIImage *profilePicture; @property BOOL successfullyUploaded; // we want to know if the image was uploaded to out server - this could also be a property that is queryable but lets assume this is attached to this object @end Now Lets say I want to upload the profile picture to a remote server - I could do something like: @implementation ProfilePictureUploader -(void)uploadProfilePicture:(ProfilePicture *)profilePicture completion:(void(^)(BOOL successInUploading))completion { NSUrlSession *uploadImageSession = ..... // code to setup uploading the image - and calling the completion handler; [uploadImageSession resume]; } @end Now somewhere else in my code I want to upload the profile picture - and if it was successful update the UI and the database that this action happened: ProfilePicture *aNewProfilePicture = ...; aNewProfilePicture.profilePicture = aImage; aNewProfilePicture.userId = 123; aNewProfilePicture.successfullyUploaded = NO; // write the change to disk [MyDatabase write:aNewProfilePicture]; // upload the image to the server ProfilePictureUploader *uploader = [ProfilePictureUploader ....]; [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) { if (successInUploading) { // persist the change to my db. aNewProfilePicture.successfullyUploaded = YES; [Mydase update:aNewProfilePicture]; // persist the change } }]; Now obviously if my app is running then this "ProfilePicture" object is successfully uploaded and all is well - the database object has its own internal workings with data structures/caches and what not. All callbacks that may exist are maintained and the app state is straightforward. But I'm not clear what happens if the app "dies" at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation- the uploading is handled by a separate process. Therefor the upload will continue and my app will be awakened at some point in the future to handle completion. But the object "aNewProfilePicture" is non existant at that point and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (For instance update the "successfullyUploaded" property for that user)? Do I need to re-work everything touching the DB or UI to correspond with the new API and work in a context free environment?

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
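Before running it, one small gap is worth pointing out: the EarthquakeService above writes to log4net (the log.Debug and log.Error calls in GetFeed), but the walkthrough never shows the logger being configured. Here is a minimal sketch of one way to initialize it from the web project's Global.asax; the use of BasicConfigurator and the Application_Start placement are my assumptions rather than part of the original sample, and a real deployment would more likely use XmlConfigurator with appenders defined in web.config.

using System;
using log4net;
using log4net.Config;

namespace EarthquakeLocator.Web
{
    public class Global : System.Web.HttpApplication
    {
        private static readonly ILog log = LogManager.GetLogger(typeof(Global));

        protected void Application_Start(object sender, EventArgs e)
        {
            // Simplest possible log4net setup: route everything to a console appender.
            BasicConfigurator.Configure();
            log.Info("EarthquakeLocator web application started.");
        }
    }
}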
Running the application plots each earthquake as a red ellipse scaled by magnitude, with a tooltip showing its details on mouse-over (screenshot omitted). That concludes this portion of our show, but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!

Additional Resources: USGS Earthquake Data Feeds; Brad Abrams shows how RIA Services and MVVM can work together

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, imagine if you could ensure that only everyone you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file. A context for each and every possible granular use case Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, lets just look at the resulting contexts for one engineering group. Q1FY2010 Restricted Internal - Engineering Group 1 - Research Q1FY2010 Restricted Internal - Engineering Group 1 - Design Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing Q1FY2010 Restricted External- Engineering Group 1 - Research Q1FY2010 Restricted External - Engineering Group 1 - Design Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing Q1FY2010 Confidential Internal - Engineering Group 1 - Research Q1FY2010 Confidential Internal - Engineering Group 1 - Design Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing Q1FY2010 Confidential External - Engineering Group 1 - Research Q1FY2010 Confidential External - Engineering Group 1 - Design Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret Internal - Engineering Group 1 - Research Q1FY2010 Top Secret Internal - Engineering Group 1 - Design Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret External - Engineering Group 1 - Research Q1FY2010 Top Secret External - Engineering Group 1 - Design Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing Now multiply the above by 6 for each engineering group, 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant...Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to reside over each year. From an end users perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. 
They may be able to print content from one but not another, be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights. Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights to researchers have to our top secret information?". In this complex model the answer is not simple, it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts does not give fine enough granularity. What makes a good context? Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information, they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which help you define a context. First ask these questions about a set of documentsWhat is the topic? Who are legitimate contributors on this topic? Who are the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one, it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business. Deciding on the use of roles in the context Once you have decided on that first initial use case and a context to create let's look at the details you need to decide upon. 
For each context, identify; Administrative rolesBusiness owner, the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are the usually the person with the most at risk when sensitive information is lost. Point of contact, the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator. Context administrator, the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person. Document related rolesContributors, the people who create and edit documents in this context. Reviewers, the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress) Readers, the people who read documents from this context. Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM. Reviewing the features and security for context roles At this point we have decided on a classification of information, understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think why are you protecting the documents in the first place? It is to prevent the loss of leaking of information to the wrong people. To control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point, with IRM you can erect many barriers to prevent access to content yet too many restrictions and authorized users will often find ways to circumvent using the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content, remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data. Act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. 
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology which simplifies the process of assigning users the correct business roles. The big print issue... Printing is often an issue of contention, users love to print but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding about whether to give print rights. Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy making it easier to trace who printed it. Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk. Print activity is audited, therefore you can monitor and react to users abusing print rights. Summary In summary it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group is secured and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introducing the complexity to deliver them at the right level. In another article i'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.
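As a footnote to the classification discussion, it can help to sketch the context model in code purely as a thinking aid. The fragment below is not Oracle IRM's API and every type and property name is invented for illustration; it simply captures the relationship the article keeps returning to, namely that a context ties a classification of documents to roles and user assignments, while the sealed documents themselves carry no rights information.

using System;
using System.Collections.Generic;

// Hypothetical model of an IRM "context" as described above.
// These types do not exist in the Oracle IRM product.
public enum ContextRole
{
    Contributor,   // creates and edits documents in the context
    Reviewer,      // reviews but is not trusted to secure to this classification
    Reader         // reads documents from the context
}

public class IrmContext
{
    public string Name { get; set; }                 // e.g. "Top Secret Engineering Research"
    public string BusinessOwner { get; set; }        // decides who may or may not see content
    public string PointOfContact { get; set; }       // handles requests for access
    public string ContextAdministrator { get; set; } // enacts the business owner's decisions

    // Users are assigned roles in the context; documents are sealed to the
    // context itself and store no user, group or rights information.
    public Dictionary<string, ContextRole> Assignments { get; private set; }

    public IrmContext()
    {
        Assignments = new Dictionary<string, ContextRole>();
    }
}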

    Read the article

  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I’ve interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language. How did you get started with programming?It started when I was about six. My dad’s a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don’t think I ever successfully completed anything! It was just faffing around. I didn’t really get anywhere with it.But then presumably you did get somewhere with it at some point.At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn’t really do much, just a couple of quick hacky things. A bit of demo animation. Took me a long time to get anywhere with programming, really.When did you feel like you did start to get somewhere?I think it was when I started doing things for someone else, which was my sister’s final year of university project. She called up my dad two days before she was due to submit, saying “We need something to display a graph!”. Dad says, “I’m too busy, go talk to your brother”. So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I’d actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I’ve ever written. It’s basically an array of pre-drawn line elements that I shifted round the screen to draw a very spikey graph.When did you decide that programming might actually be something that you wanted to do as a career?It’s not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve month internships. I applied to Red Gate, and they gave me a job as a tester. And that’s where I really started having to write code well. To a better standard that I had been up to that point.How did you find coming to Red Gate and working with other coders?I thought it was really nice. I learnt so much just from other people around. I think one of the things that’s really great is that people are just willing to help you learn. Instead of “Don’t you know that, you’re so stupid”, it’s “You can just do it this way”.If you could go back to the very start of that internship, is there something that you would tell yourself?Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I’m doing something, I think “Where did I write that bit that does that thing?”, and it’s almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if looks a bit crazy to be jumping around all over the place all of the time, it’s actually a lot more understandable.And how do you hold yourself to that?Generally, if a function’s going off my screen, it’s probably too long. That’s what I tell myself, and within the team here we have code reviews, so the guys I’m with at the moment are pretty good at pulling me up on, “Doesn’t that look like it’s getting a bit long?”. It’s more just the subjective standard of readability than anything.So you’re an advocate of code review?Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge. 
The person you’re reviewing will say “Oh, you could have done it that way”. That’s how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.Do you think there’s more we could do with them?I don’t know. We’re struggling with how to add them as part of the process without it becoming too cumbersome. We’ve experimented with a few different ways, and we’ve not found anything that just works.To get more into the nitty gritty: how do you like to debug code?The first thing is to do it in my head. I’ll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there’s anything glaringly obvious there. The next thing I’ll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.Why is the debugger the last resort?Probably because of the environments I learnt programming in. VB and early BASIC didn’t have much of a debugger, the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it’s something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you’re having to find out what your code is doing at each line, you’ve probably got a very wrong mental model of what your code’s doing, and you can find that out just as easily by inspecting a couple of values through the print statements.If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?I’d say short and well-named methods. The one thing I like to do when I’m looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something’s come from. I really hate that. I want to know if the value come from the user here or is a constant here, and if I can’t find that out, that makes code very hard to understand for me.As a tester, where do you think the split should lie between software engineers and testers?I think the split is less on areas of the code you write and more what you’re designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or it’s not worth testing that because it’s a tiny function in code that nobody’s ever actually going to see. So it’s not a split in the code, it’s a split in what you’re thinking about. Saying what code we should write, but alternatively what code we should take out.In your experience, do the software engineers tend to do much testing themselves?They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers. 
I’m the only tester on my team with three other developers, so they’ll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it’s been more equal numbers, the testers have written pretty much all of the high level tests, just because that’s the best use of resource.If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we’ve made in tests, and they can say “actually, it doesn’t work like that under the hood so you’ve missed this whole area”. It’s one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that’s a very good thing.So you think testers should be able to write production code?Yes, although given most testers skills at coding, I wouldn’t advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They’re not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we’re using our own product to test itself, so being able to make those changes where it’s not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can’t touch the code.If the software engineers are consistently writing tests at all levels, what role do you think the role of a tester is?I think on a team like that, those distinctions aren’t quite so useful. There’ll be two cases. There’s either the case where the developers think they’ve written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it’s a useful set, or the correct set for that code. Or they won’t actually be pure developers, they’ll have that mix of test ability in there.I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester job is not to create tests, it’s to validate the quality of the product, and you don’t do that just by writing tests. There’s more things you’ve got to keep in your mind. And I think when you blur the roles, you start to lose that end of the tester.So because you’re working on those features, you lose that holistic view of the whole system?Yeah, and anyone who’s worked on the feature shouldn’t be testing it. You always need to have it tested it by someone who didn’t write it. Otherwise you’re a bit too close and you assume “yes, people will only use it that way”, but the tester will come along and go “how do people use this? How would our most idiotic user use this?”. I might not test that because it might be completely irrelevant. But it’s coming in and trying to have a different set of assumptions.Are you a believer that it should all be automated if possible?Not entirely. So an automated test is always better than a manual test for the long-term, but there’s still nothing that beats a human sitting in front of the application and thinking “What could I do at this point?”. The automated test is very good but they follow that strict path, and they never check anything off the path. 
The human tester will look at things that they weren’t expecting, whereas the automated test can only ever go “Is that value correct?” in many respects, and it won’t notice that on the other side of the screen you’re showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you’re going through something manually, and you always need to go through something manually to start with anyway, otherwise you won’t know where the important bits to write your automation are.When you’re doing that manual testing, do you think it’s important to do that across the entire product, or just the bits that you’ve touched recently?I think it’s important to do it mostly on the bits you’ve touched, but you can’t ignore the rest of the product. Unless you’re dealing with a very, very self-contained bit, you’re almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, they’ll keep open and move around a bit anyway, just because they want to find something that’s broken. If we find that your path is right, we’ll go out and hunt something else.How do you think this fits into the idea of continuously deploying, so long as the tests pass?With deploying a website it’s a bit different because you can always pull it back. If you’re deploying an application to customers, when you’ve released it, it’s out there, you can’t pull it back. Someone’s going to keep it, no matter how hard you try there will be a few installations that stay around. So I’d always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don’t think you should release to everyone just on automated tests passing.You’ve already mentioned using BASIC and C# — are there any other languages that you’ve used?I’ve used a few. That’s something that has changed more recently, I’ve become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python which I actually really enjoyed using. I’ve also come across another language called Vala, which is sort of a C#-like language. It’s basically a pre-processor for C, but it has very nice syntax. I think that’s currently my favourite language.Any particular reason for trying Vala?I have a completely Linux environment at home, and I’ve been looking for a nice language, and C# just doesn’t cut it because I won’t touch Mono. So, I was looking for something like C# but that was useable in an open source environment, and Vala’s what I found. C#’s got a few features that Vala doesn’t, and Vala’s got a few features where I think “It would be awesome if C# had that”.What are some of the features that it’s missing?Extension methods. And I think that’s the only one that really bugs me. I like to use them when I’m writing C# because it makes some things really easy, especially with libraries that you can’t touch the internals of. It doesn’t have method overloading, which is sometimes annoying.Where it does win over C#?Everything is non-nullable by default, you never have to check that something’s unexpectedly null.Also, Vala has code contracts. 
This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you’re writing your method, it’s not an afterthought when you’ve got contracts from base classes given to you, you can’t change it. Which I think is a lot nicer than the way C# handles it. When are those actually checked?They’re checked both at compile and run-time. The compile-time checking isn’t very strong yet, it’s quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there’s an extra runtime check, and if you violate one of the contracts at runtime, it’s game over for your program, there’s no exception to catch, it’s just goodbye!One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can’t mentally model how those exceptions are going to come out, and you can’t even predict them based on the functions you’re calling, because if you’ve accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I’ve got no way of mentally modelling those, whereas in Vala they’re checked like Java, so you know only these exceptions can come out. You know in advance the error conditions.I think Raymond Chen on Old New Thing says “the only thing you know when you throw an exception is that you’re in an invalid state somewhere in your program, so just kill it and be done with it!”You said you’ve also learnt bits of Python. How did you find that compared to Vala and C#?Very different because of the dynamic typing. I’ve been writing a website for my own use. I’m quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picassa, but written by myself because I wanted something to learn Python with. There are some things that are really nice, I just found it really difficult to cope with the fact that I’m not quite sure what this object type that I’m passed is, I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say “well, I’m fairly sure it’s going to be something that looks like this, so I’ll use it like this”, then it’s quite nice.Any particular features that you’ve appreciated?I don’t like any particular feature, it’s just very straightforward to work with. It’s very quick to write something in, particularly as you don’t have to worry that you’ve changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?I think I might be willing to try something bigger or long term with Python. We’re currently doing an ASP.NET MVC project on C#, and I don’t like the amount of reflection. There’s a lot of magic that pulls values out, and it’s all done under the scenes. 
It’s almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you’re already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects, I would go with Python again, whereas for anything desktop, command-line or GUI-based, I’d probably go for C# or Vala, depending on what environment I’m in.It’s the fact that you can gain from the strong typing in ways that you can’t so much on the web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I’d rather use the dynamic typing than write the boilerplate.What do you think separates great programmers from everyone else?Probably design choices. Choosing to write it a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly without having to think too much about it. I think that’s probably what separates, is the speed at which they can see what’s the best path to write the program in. More Red Gater Coder interviews
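As an aside to the contracts discussion above: the feature Robin says is "starting to come in C# 4" shipped as Code Contracts in System.Diagnostics.Contracts. A minimal sketch of the C# side is below; note that, unlike Vala, full runtime enforcement depends on the separate Code Contracts binary rewriter being installed, so treat this as illustrative rather than a drop-in equivalent.

using System;
using System.Diagnostics.Contracts;

public class Account
{
    private decimal balance;

    public void Withdraw(decimal amount)
    {
        // Preconditions, checked on entry when contract checking is enabled.
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= balance);
        // Postcondition: the balance can never end up negative.
        Contract.Ensures(balance >= 0);

        balance -= amount;
    }
}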

    Read the article

  • Random Slow Response

    - by ARehman
We have an ASP.NET MVC 1.0 application running on Windows Server 2008 Standard (32-bit), Dual Core Xeon (3.0 GHz), 2 GB RAM. Most of the time the application renders a response in 3-4 seconds, but sometimes users get a very late response, with delays of 40 seconds to over a minute. It happens in the following way: a user browses a page, sits idle for 5, 10 or 15 minutes, then tries to browse the same page or some other page. There is now a chance that he will see a late response even though the app pool is still up and running. This can happen with any arbitrary page. We have tried the following and made these observations:
Moved the application to a stand-alone web server.
App Pool idle shutdown time is 60 minutes; there are no abrupt shutdowns/restarts.
CPU and memory do not spike, and there are no delays in SQL queries.
Modified the App Pool setting to run in classic mode; it didn't help.
Plugged in a custom module to log all requests which took more than 5 seconds to complete; it didn't pick up any request of interest.
Enabled 'Failed Request Tracing' to log all requests which take 20 or more seconds to complete; it didn't log anything.
Event Viewer, the HTTPERR log, W3SVC logs and WAS logs don't indicate anything; HTTPERR only has 'Timer_ConnectionIdle' entries.
There is not much traffic to the server; this can happen even if only two users are active.
Next we captured TCP/IP traffic on both the user and server end with Wireshark, and below are the details of this slowness in brief:
1. The browser sends a request for ~/User/Home/ (GET request) by setting up a receiving end point on port 'wlbs' (port 2504). I'm not sure if this could be a problem in some way: the browser didn't handshake with the server first and assumed the last connection was still open, whereas I had browsed the same page 4 minutes earlier and performed no activity with the site after that. The HTTPERR log shows a 'Timer_ConnectionIdle' entry for my last activity with the server.
2. The browser (I was using Chrome) waits for a response from the server, doesn't get one, and starts retransmitting the same request on the same end point after increasing wait intervals, e.g. after 8, 18, 29, 40, 62 and 92 seconds. All of these GET requests were received by the server, but the server didn't send any packet back to the client.
3. Since the browser saw no response on the end point it set up in point 1, it opened a new end point, 'optiwave-lm' (port 2524), did a handshake with the server and transmitted the same request again. The server received it, processed it, and returned a successful response.
What happened to the earlier 6-7 requests? Were they passed on to HTTP.SYS or not? Why did Failed Request Tracing not log anything? We haven't found any clue yet. The server had served the same page successfully just 4 minutes earlier. Looking forward to more suggestions/solutions. -- Thanks
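For anyone wanting to reproduce the "custom module to log requests which took more than 5 seconds" mentioned above, a minimal sketch is below. The class name, threshold and Trace call are hypothetical, and the module still needs to be registered under <system.webServer>/<modules> (integrated mode) or <system.web>/<httpModules> (classic mode) in web.config.

using System;
using System.Diagnostics;
using System.Web;

public class SlowRequestLoggingModule : IHttpModule
{
    private const double ThresholdSeconds = 5.0;

    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            // Start a stopwatch per request so EndRequest can measure elapsed time.
            ((HttpApplication)sender).Context.Items["RequestTimer"] = Stopwatch.StartNew();
        };

        application.EndRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            var timer = context.Items["RequestTimer"] as Stopwatch;
            if (timer != null && timer.Elapsed.TotalSeconds > ThresholdSeconds)
            {
                // Swap in any logging framework here; Trace is only a placeholder.
                Trace.WriteLine(String.Format("Slow request ({0:F1}s): {1}",
                    timer.Elapsed.TotalSeconds, context.Request.RawUrl));
            }
        };
    }

    public void Dispose() { }
}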

    Read the article

  • SQLAuthority News – Author Visit – SQL Server 2008 R2 Launch

    - by pinaldave
    June 11, 2010 was a wonderful day because I attended the very first SQL Server 2008 R2 launch event held by Microsoft in Mumbai. I traveled to Mumbai from my home town, Ahmedabad. The event was held at one of the best hotels in Mumbai, "The Leela". The SQL Server R2 launch was an evening event with a few interesting talks. SQL PASS was associated with the event as one of the partners, and its goal is to increase the community's awareness of SQL Server. I met many interesting people and had a great networking opportunity at the event. The event was kicked off with an awesome laser show and a "Welcome" video, followed by a Microsoft executive session with several interesting demos. The very first demo was about PowerPivot. I knew beforehand that there would be PowerPivot demos because it is a very popular subject; however, I was really hoping to see other interesting demos from SQL Server 2008 R2, and believe me, I was happier to see the later demos. There were demos of SQL Server Utility Control Point as well as an integration of Bing Maps with Reporting Services. I really enjoyed the interactive and informative session by Shivaram Venkatesh. He had excellent presentation skills as well as ample technical knowledge to keep the audience attentive. I particularly liked that he did not read the whole slide deck; rather, he picked one point and used it to tell the story of the entire deck. I also enjoyed my conversation with Afaq Choonawala, who is one of the "gem guys" at Microsoft. I also want to acknowledge Ashwin Kini and Mohit Panchal for their excellent support of this event. Mumbai IT Pro is a user group you can really count on for any kind of help.

    After the excellent demos and a vibrant start to the event, the whole audience was jazzed up. There were two vendor sessions right after the first session. Intel had 15 minutes to present; however, Intel's representative, who had good knowledge of the subject, had nearly 30+ slides, so he had to rush a bit to cover the whole deck. The Intel presentation was followed by another vendor presentation, from NetApp. I had heard about their tool before. After the demo did not work the first time the NetApp presenter ran it, I started to have doubts about the product. I went to the demo booth after the presentation to clarify my doubts, but I realized that the NetApp presenter and booth owner had absolutely POOR KNOWLEDGE of SQL Server and even of their own NetApp product. The NetApp people tried to misguide us, and when we argued, they started saying things that contradicted what they had said earlier. At one point in their presentation, they claimed their application does something very fast, which did not actually happen in front of the audience. They blamed SQL Server R2's DBCC CHECKDB command for their product's failed demonstration. I know that NetApp has many great products; however, this one was not presented clearly and even left a negative impression on all of us.

    Well, let us not judge the potential, fun, education and enigma of the launch event by a small glitch. The event was jam-packed and extremely well received by everybody who attended. As I said, the demos and good presentations by the Microsoft folks were really something to cheer about. Any launch event is considered successful if it achieves its goal of exciting users with cutting-edge technology, and this event left a very deep impression on me. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology Tagged: PASS, SQLPASS

    Read the article
