Search Results

Search found 438 results on 18 pages for 'plane'.

Page 14 of 18

  • Cutting Paper through Visualization and Collaboration

    - by [email protected]
    It's hard not to hear about "Going Green" these days. Many are working to be more environmentally conscious in their personal lives, and this has extended to the corporate world as well; I know I'm always looking for new ways to do so. Environmental responsibility is important at Oracle too, and we have an entire section of our website dedicated to our solutions around the Eco-Enterprise. You can check it out here: http://www.oracle.com/green/index.html Perhaps the biggest and most obvious challenge in the world of business is the fact that we use so much paper. There are many good reasons why we print today, too. For example: printing off a document, spreadsheet, or CAD design that will be reviewed and marked up while on a plane; having a printout of a facility when a field engineer performs on-site maintenance; or a multi-party design review where a number of people review a drawing in a meeting room, scribbling onto a large-scale print to provide their collaborative comments. These are just a few potential use cases, and they're valid ones. However, there's a way to turn these paper processes into digital ones. AutoVue allows you to view, mark up, and collaborate on all the data you would otherwise print; indeed, this is the core of what AutoVue does. So if we take the examples above, we could address each as follows: AutoVue lets you view the document, spreadsheet, or CAD drawing on your laptop. Even if the data is vaulted in some type of system of record (like an ECM solution) and you normally view it from there, AutoVue allows you to "Work Offline" and take the documents you need to review with you. From there, AutoVue's many annotation tools give you what you need to comment on the documents you are reviewing. The challenge with the mobile workforce is always access to information. People who perform maintenance and repair operations are often in locations with little to no Internet connectivity, but technology is coming to them in the form of laptops, tablet PCs, and other portable devices. AutoVue can address limited-bandwidth situations through its streaming technology for viewing, meaning the most up-to-date information can be pulled from the central server without large data transfers; when there is no connectivity at all, the "Work Offline" option handles it. For a design review session, AutoVue's Real-Time Collaboration capabilities let all the participants view the same document in a synchronized view, with each person able to mark up the document at the same time. Since this is done in a web-based manner, not only is it unnecessary to print the document, but you also reduce the travel needed for these sessions. Not only are trees spared, but jet fuel as well. There are many steps involved in "Going Green", but each step is a necessary one. What we do today will directly influence future generations, and we're looking to help.

    Read the article

  • Craftsmanship Tour: Day 2 Obtiva

    - by Liam McLennan
    I like Chicago. It is a great city for travellers. From the moment I got off the plane at O'Hare everything was easy. I took the train to 'the Loop' and walked around the corner to my hotel, Hotel Blake on Dearborn St. Sadly, the elevated train lines in downtown Chicago remind me of 'Shall We Dance'. Hotel Blake is excellent (except for the breakfast) and the concierge directed me to a pizza place called Lou Malnati's for Chicago-style deep-dish pizza. Lou Malnati's would be a great place to go with a group of friends. I felt strange dining there by myself, but the food and service were excellent. As usual in the United States the portion was so large that I could not finish it, but oh how I tried. Dave Hoover, who invited me to Obtiva for the day, had asked me to arrive at 9:45am. I was up early and had some time to kill, so I stopped at the Willis Tower, since it was on my way to the office. Willis Tower is 1,451 feet (442 m) tall and has an observation deck at the top. Around the observation deck are a set of acrylic boxes protruding from the side of the building. Brave souls can walk out on the perspex and look between their feet all the way down to the street. It is unnerving. Obtiva is a progressive, craftsmanship-focused software development company in downtown Chicago. Dave even wrote a book, Apprenticeship Patterns, that provides a catalogue of patterns to assist aspiring software craftsmen in achieving their goals. I spent the morning working in Obtiva's software studio, an open XP-style office that houses Obtiva's in-house development team. For lunch Dave Hoover, Corey Haines, Cory Foy and I went to a local Greek restaurant (not Dancing Zorbas). Dave, Corey and Cory are three smart and motivated guys and I found their ideas enlightening. It was especially great to chat with Corey Haines since he was the inspiration for my craftsmanship tour in the first place. After lunch I recorded a brief interview with Dave. Unfortunately, the battery in my camera went flat so I missed recording some interesting stuff. Interview with Dave Hoover In the evening Obtiva hosted an RSpec hackfest with David Chelimsky and others. This was an excellent opportunity to be around some of the very best Ruby programmers. At 10pm I went back to my hotel to get some rest before my train north the next morning.

    Read the article

  • Warm Reception By Partners at EMEA Manageability Forum

    - by Get_Specialized!
    For the EMEA Partners that were able to attend the event in Istanbul, Turkey: thank you for your attendance and feedback. As you can see, the weather kept most of us inside during the event, and at times there was even some snow. And while it may have been chilly outside, there was a warm reception from Partners who traveled from all over EMEA to hear from other Oracle Specialized Partners and subject matter experts about the opportunities and benefits of Oracle Enterprise Manager and Exadata Specialization. Here you can see David Robo, Oracle Technology Director for Manageability, kicking off the event, followed later by Patrick Rood of Oracle's Indirect Manageability business. A special thank you to all the Partner speakers, including Ron Tolido, VP and CTO of Application Services Continental Europe at Capgemini, who delivered a very innovative keynote where many in attendance learned that Black Swans do exist. And while at break, interactivity among partners continued, and it was great to see such innovative partners who had listed their achieved specializations on their business cards. Here we can see Oracle Enterprise Manager customer, Turkish Oracle User Group board member and blogger Gokhan Atil sharing his product experiences with others attending. Additionally, Christian Trieb of Paragon Data shared with other Partners what the German Oracle User Group (DOAG) was doing around manageability, along with an invitation to submit papers for their next event. Here we can see, at one of the breaks, one of the event organizers, Javier Puerta (left), Oracle Director of Partner Programs, joined by Sebastiaan Vingerhoed (middle), Oracle EE & CIS Manager for Manageability and speaker on Managing the Application Lifecycle, and Julian Dontcheff (right), Global Head of Database Management at Accenture. Below is Julian Dontcheff delivering his partner presentation on Exadata and Lifecycle Management. Just after his plane landed and a one-hour Turkish taxi ride to the event location, Julian still took the time to sit down with me and provide some extra insights on his experiences of managing the enterprise infrastructure with Oracle Enterprise Manager. Below is Mark McGill, Oracle Principal Product Manager on the Oracle Enterprise Manager product management team, presenting to Partners on how you can perform Chargeback and Metering with Oracle Enterprise Manager 12c Cloud Control. Overall, it was a great event, and an extra thank you to those OPN Specialized Partners who presented, to the Partners that attended, and to those Oracle team members who organized the event and presented.

    Read the article

  • ACORD LOMA 2010: Building Insurance Companies in the Clouds

    - by [email protected]
    Chuck Johnston, vice president of global strategy and alliances for Oracle Insurance, participated in a featured speaking session at ACORD LOMA 2010. He provides an update on his discussions with insurers at the show and after his presentation. Every year I always make a point of walking the show floor at the ACORD LOMA technology conference to visit with colleagues and competitors, and try to get a feel for which way the industry will move over the next 12 months. Insurers are looking for substance in cloud (computing), trying to mix business with pleasure (monetizing social networks), and expect differentiation through commodity (Software as a Service). The disconnect at this show is that most vendors are still struggling with creating a clear path from Facebook to customer intimacy, SaaS to core cost savings and clouds to ubiquitous presence. Vendors need to find new ways to help insurers find the real value in these potentially disruptive technologies by understanding the changes coming to the insurance business and how these new technologies impact the new insurance business. Oracle's approach to understanding the evolving insurance industry comes from a discussion with our customers in our Insurance CIO Council, where one of our customers suggested we buy an insurance company to really understand our customers. We have decided to do the next best thing and build our own model of an insurance company, Alamere Insurance, that uses the latest technologies to transform its own business. Alamere will never issue an actual policy, but it does give us a framework to consider the impacts of changes in the insurance landscape and how Oracle technology meets the challenge or needs to evolve to help our customers be successful. In preparing for my talk at the conference using Alamere as my organizing theme, I found myself reading actuarial memoranda on CSO table changes and articles on underwriting theory that really made me think about my customer's problems first and foremost, and then how Oracle technology can provide answers. As much as I prefer techno-thrillers and sci-fi novels to actuarial papers for plane reading, I got very excited about the idea of putting myself back in the customer shoes I haven't worn in a decade, and really looking at how Oracle can power the Adaptive Insurance Enterprise. Talking to customers and industry people after the session, the idea of Alamere seemed to excite people and I got a lot of suggestions as to what lines of business we should model and where we should focus first on technology uptake. One customer said to a colleague that Oracle's attempt to "share their pain" was unique among vendors. More about Alamere, and the Adaptive Insurance Enterprise next time. Chuck Johnston is vice president of global strategy and alliances for Oracle Insurance.

    Read the article

  • Collision detection - Smooth wall sliding, no bounce effect

    - by Joey
    I'm working on a basic collision detection system that provides point - OBB collision detection. I have around 200 cubes in my environment and I check (for now) each of them in turn to see if it collides. If it does, I return the colliding face's normal, save the old player position, and do some trigonometry to return a new player position for my wall sliding. edit I'll define what I mean by wall sliding: if a player walks into a vertical wall at a slight horizontal angle to the left or the right and keeps walking forward into the wall, the player should slide a little to the right/left while continually walking towards the wall until he leaves it. Thus, sliding along the wall. Everything works fine, with multiple objects as well, but I still have one problem I can't seem to figure out: smooth wall sliding. In my current implementation, sliding along the walls makes my player bounce like a madman (especially noticeable with gravity on and moving forward). I have a velocity/direction vector, a normal vector from the collided plane, and an old and new player position. First I negate the normal vector and get my new velocity vector by subtracting the inverted normal from my direction vector (which gives the vector to slide along the wall); I add this vector to my new player position and recalculate the direction vector (in case I have multiple collisions). I know I am missing some step but I can't seem to figure it out. Here is my code for the collision detection (run every frame):

    Vector direction;
    Vector newPos(camera.GetOriginX(), camera.GetOriginY(), camera.GetOriginZ());
    direction = newPos - oldPos; // direction vector

    // Check for collision with new position
    for(int i = 0; i < NUM_OBJECTS; i++)
    {
        Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z);
        if(normal != Vector::NullVector())
        {
            // Get inverse normal (direction STRAIGHT INTO wall)
            Vector invNormal = normal.Negative();
            // We know INTO wall and DIRECTION to wall. Subtract these and you get the slide direction along the wall
            Vector wallDir = direction - invNormal;
            newPos = oldPos + wallDir;
            direction = newPos - oldPos;
        }
    }

    Any help would be greatly appreciated! FIX I eventually got things up and running how they should thanks to Krazy. I'll post the updated code listing in case someone else comes upon this problem!

    for(int i = 0; i < NUM_OBJECTS; i++)
    {
        Vector normal = objects[i].CheckCollision(newPos.x, newPos.y, newPos.z, direction.x, direction.y, direction.z);
        if(normal != Vector::NullVector())
        {
            Vector invNormal = normal.Negative();
            // Scale the normal to direction's projected length along the normal's axis
            invNormal = invNormal * (direction * normal).Length();
            Vector wallDir = direction - invNormal;
            newPos = oldPos + wallDir;
            direction = newPos - oldPos;
        }
    }
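    A minimal sketch of the projection the FIX is performing, assuming a Vector type with the operators used in the question (Dot is an assumed helper, and the face normal must be unit length): instead of subtracting the raw normal, remove only the velocity component along the normal, v_slide = v - (v . n) * n, which leaves the tangential part and therefore cannot bounce.

    Vector SlideAlongWall(const Vector& velocity, const Vector& unitNormal)
    {
        // Signed speed straight into the wall; positive means moving toward it.
        float intoWall = Dot(velocity, unitNormal);
        // Subtract exactly that component: what remains is tangential to the wall,
        // so the player slides along it instead of being pushed back out.
        return velocity - unitNormal * intoWall;
    }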

    Read the article

  • Compute directional light frustum from view frustum points and light direction

    - by Fabian
    I'm working on a friend's engine project and my task is to construct a new frustum from the light direction that overlaps the view frustum and possible shadow casters. The project already has a function that creates a frustum for this, but it's way too big and includes far too many casters (shadows) which can't be seen in the view frustum. Now, the only parameters of this function are the normalized light direction vector and a view class which lets me extract the 8 view frustum points in world space. I don't have any additional info about the scene. I have read some of the related questions here but none seem to fit my problem very well, as they often just point to cascaded shadow maps. Sadly I can't use DX or OpenGL functions directly because this engine has a dedicated math library. From what I've read so far the steps are: transform the view frustum points into light space, find the min/max x and y values (or sometimes the minima and maxima of all three axes), and create an AABB using the min/max vectors. But what comes after this step? How do I transform this new AABB back to world space? What I've done so far:

    CVector3 Points[8], MinLight = CVector3(FLT_MAX), MaxLight = CVector3(-FLT_MAX);
    for(int i = 0; i < 8; ++i)
    {
        Points[i] = Points[i] * WorldToShadowMapMatrix;
        MinLight = Math::Min(Points[i], MinLight);
        MaxLight = Math::Max(Points[i], MaxLight);
    }
    AABox box(MinLight, MaxLight);

    (Note that MaxLight has to start at -FLT_MAX, not FLT_MAX, or the running maximum never updates.) I don't think this is the right way to do it. The near plane probably has to extend towards the light source to include potential shadow casters. I've read the Microsoft article about cascaded shadow maps http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307%28v=vs.85%29.aspx which also includes some sample code, but they seem to use the scene's AABB to determine the near and far planes, which I can't, since I can't access this information from the function I'm working in. Could you guys please link some example code which shows the calculation of such a frustum? Thanks in advance! Additional question: is there a way to construct a WorldToFrustum matrix that represents the above transformation?
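    A hedged sketch of the standard "light-space crop" construction. There is no need to transform the AABB back to world space: the box lives in light space, and the orthographic matrix built from it, composed with the light view matrix, is the new frustum. Matrix4, Vector3 and the LookAt/OrthoOffCenter helpers below stand in for the engine's dedicated math library; a right-handed, looking-down-negative-z convention is assumed.

    Matrix4 lightView = LookAt(Vector3(0, 0, 0), lightDir, Vector3(0, 1, 0)); // any origin works for a directional light
    Vector3 mn(FLT_MAX, FLT_MAX, FLT_MAX), mx(-FLT_MAX, -FLT_MAX, -FLT_MAX);
    for (int i = 0; i < 8; ++i)
    {
        Vector3 p = lightView.TransformPoint(viewFrustumCorners[i]); // world -> light space
        mn = Math::Min(mn, p);
        mx = Math::Max(mx, p);
    }
    // Pull the near bound back toward the light so casters that sit outside the
    // view frustum can still cast into it; casterMargin is a scene-dependent
    // guess standing in for the scene AABB this function cannot access.
    mx.z += casterMargin;
    Matrix4 lightProj = OrthoOffCenter(mn.x, mx.x, mn.y, mx.y, -mx.z, -mn.z);
    // This product also answers the additional question: it is a WorldToFrustum
    // matrix for the light's crop volume.
    Matrix4 worldToLightFrustum = lightProj * lightView;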

    Read the article

  • Arcball 3D camera - how to convert from camera to object coordinates

    - by user38873
    I have checked multiple threads before posting, but I haven't been able to figure this one out. I have been following this tutorial, but I'm not using glm; I've been implementing everything myself up until now, like lookAt etc. http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Arcball So I can rotate with the click and drag of the mouse, but when I rotate 90° around Y and then move the mouse up or down, it rotates on the wrong axis. This problem is addressed in this part of the tutorial: "An extra trick is converting the rotation axis from camera coordinates to object coordinates. It's useful when the camera and object are placed differently. For instance, if you rotate the object by 90° on the Y axis ("turn its head" to the right), then perform a vertical move with your mouse, you make a rotation on the camera X axis, but it should become a rotation on the Z axis (plane barrel roll) for the object. By converting the axis into object coordinates, the rotation will respect the fact that the user works in camera coordinates (WYSIWYG). To transform from camera to object coordinates, we take the inverse of the MV matrix (from the MVP matrix triplet)." What I have to do, according to the tutorial, is convert my axis_in_camera_coord to object coordinates so the rotation is done correctly, but I'm confused about which matrix I use to do just that. The tutorial talks about converting the axis from camera to object coordinates by using the inverse of the MV matrix, then shows these lines of code, which I haven't been able to understand:

    glm::mat3 camera2object = glm::inverse(glm::mat3(transforms[MODE_CAMERA]) * glm::mat3(mesh.object2world));
    glm::vec3 axis_in_object_coord = camera2object * axis_in_camera_coord;

    So what do I apply to my calculated axis? The inverse of what, I suppose the inverse of the model-view? So my question is: how do you transform a camera axis to an object axis? Do I apply the inverse of the lookAt matrix? My code:

    if (cur_mx != last_mx || cur_my != last_my)
    {
        va = get_arcball_vector(last_mx, last_my);
        vb = get_arcball_vector(cur_mx, cur_my);
        angle = acos(min(1.0f, dotProduct(va, vb))) * 20;
        axis_in_camera_coord = crossProduct(va, vb);
        axis.x = axis_in_camera_coord[0];
        axis.y = axis_in_camera_coord[1];
        axis.z = axis_in_camera_coord[2];
        axis.w = 1.0f;
        last_mx = cur_mx;
        last_my = cur_my;
    }
    Quaternion q = qFromAngleAxis(angle, axis);
    Matrix m;
    qGLMatrix(q, m);
    vi = mMultiply(m, vi);
    up = mMultiply(m, up);
    ViewMatrix = ogLookAt(vi.x, vi.y, vi.z, 0, 0, 0, up.x, up.y, up.z);
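    For what it's worth, a hedged sketch of that conversion without glm, using hypothetical Matrix3/Inverse/Normalize helpers in the spirit of the question's own math calls: take the upper-left 3x3 (the rotation part) of view * model, invert it, and push the camera-space axis through it before building the quaternion.

    Matrix3 mv3 = Matrix3(ViewMatrix) * Matrix3(ModelMatrix); // rotation part of the model-view
    Matrix3 camera2object = Inverse(mv3);                     // camera space -> object space
    Vector3 axisObj = Normalize(camera2object * axisInCameraCoord);
    Quaternion q = qFromAngleAxis(angle, axisObj);            // rotate the MODEL with q
    // Note: this applies when the rotation is accumulated into the model matrix;
    // the code above instead rotates the eye (vi) and up vectors, which is a
    // different (camera-centric) formulation.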

    Read the article

  • 25 reasons to attend JavaOne 2012

    - by arungupta
    The 17th JavaOne is just around the corner, less than 3 weeks away! If you are still thinking about registering for the conference, here are my top 25 reasons to attend: Biggest gathering of Java geeks in the world Latest and greatest content with 475 technical sessions/Birds-of-a-Feather/hands-on lab sessions (about 20% more than last year) Reduced number of keynotes to make room for more technical content No product pitches, an exclusive focus on technology (I can tell you that from my experience as a track lead) Sessions divided into different in-depth technical tracks so you can focus on the Java technology that most interests you Reruns of several popular sessions Expert- and practitioner-led HOLs and tutorials Rock star speakers, panelists, faculty, and instructors; meet several Java Champions and JUG leaders from all around the world Engage with speakers and discuss with fellow developers in a casual setting with lots of networking space A complete conference dedicated to Java Embedded Extensive and fast-paced hands-on University Sessions on Sunday; learn while you are at the conference. You can register for Java University only or attend it with the conference. Duke's Choice Awards recognize and celebrate the most innovative usage of the Java platform DEMOgrounds and Exhibition Hall provide extensive opportunities for networking and engagement with the biggest names in Java (dedicated hours on each day as well) Dedicated day for Java User Groups and Communities (GlassFish Community Event and NetBeans Community Day) Multiple registration packages to meet your needs Pay for 4 full conference passes and get a fifth one free Students and bloggers get a free pass Geek Bike Ride with fellow speakers and attendees in a casual setting Greenest conference on the planet Enjoy different cuisines in San Francisco, take a trip to Alcatraz or Napa Valley, or go running on the crooked street ;-) There are tons of tourist opportunities in and around San Francisco. Tons of parties during the conference: in the evening, late night, and early mornings. Don't forget the Thirsty Bear Party! Pearl Jam and Kings of Leon at the Appreciation Party Oracle Music Festival at Yerba Buena Gardens Grab the bragging rights: "I have attended JavaOne"! Learn a new skill, build new connections, conceive a new idea and push the boundaries of Java in the most important educational and networking event of the year for Java developers and enthusiasts. With so much geekgasm going on during the 5 days of JavaOne, is there a reason for you to wait? Register for the conference now! Grab your buttons, banners, and other collateral at the JavaOne Toolkit. You can also send an email to [email protected]. And reach out to us using different social media channels ... As a 13-year veteran of the conference, I can tell you this is something every Java developer must experience! I will be there, will you?

    Read the article

  • How to do geometric projection shadows?

    - by John Murdoch
    I have decided that since my game world is mostly flat I don't need better shadows than geometric projections, at least for now. The only problem is I don't even know how to do those properly; that is, how to produce a 4x4 matrix which would render shadows for my objects (project them onto a horizontal XZ plane, I guess). I would like a light source at infinity (e.g., the sun at some point in the sky) and thus a parallel projection. My current code does something that looks almost right for small flying objects, but it is actually a very crude approximation, as it doesn't project the objects onto the ground but simply moves them there (I think). It also wrongly assumes the sun is always at the zenith (projecting straight down).

    Gdx.gl20.glEnable(GL10.GL_BLEND);
    Gdx.gl20.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);

    // shells
    shellTexture.bind();
    shader.begin();
    for (ShellState state : shellStates.values())
    {
        transform.set(camera.combined);
        transform.mul(state.transform);
        shader.setUniformMatrix("u_worldView", transform);
        shader.setUniformi("u_texture", 0);
        shellMesh.render(shader, GL10.GL_TRIANGLES);
    }
    shader.end();

    // shadows
    shader.begin();
    for (ShellState state : shellStates.values())
    {
        transform.set(camera.combined);
        m4.set(state.transform);
        state.transform.getTranslation(v3);
        // TODO HACK: + 0.5f ensures the shadow appears above the ground; this is
        // a hack overall, as we are just moving the shell to the surface instead
        // of projecting it onto the surface!
        m4.translate(0, -v3.y + 0.5f, 0);
        transform.mul(m4);
        shader.setUniformMatrix("u_worldView", transform);
        shader.setUniformi("u_texture", 0);
        // TODO: make the shadow black somehow
        shellMesh.render(shader, GL10.GL_TRIANGLES);
    }
    shader.end();
    Gdx.gl.glDisable(GL10.GL_BLEND);

    So my questions are: a) What is the proper way to produce a Matrix4 to pass to OpenGL which would render the shadows for my objects? b) I am supposed to use another fragment shader for the shadows which would paint them in semi-transparent grey, correct? c) The limitation of this simplistic approach is that whenever there is some object on the ground (it is not flat) the shadows will not be drawn on it, correct? d) Do I need to add something very small to the y (up) coordinate to avoid z-fighting with ground textures? Or is the fact that they are semi-transparent enough to resolve that problem?
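    One hedged way to answer a): for a directional light with direction d (pointing from the sun toward the scene, d.y != 0), the parallel projection of a point p onto the ground plane y = 0 is p - (p.y / d.y) * d, which fits in a single 4x4 matrix. The sketch below uses a hypothetical column-vector Matrix4 with a Set(row, col, value) accessor; libGDX's Matrix4 stores values column-major, so adapt accordingly. Multiply it between the world transform and the view-projection and render with a flat, dark, semi-transparent shader for b); yes, for c), the trick only works on the flat plane; and a small y offset (or polygon offset) is still wanted for d).

    Matrix4 MakePlanarShadowMatrix(const Vector3& d)
    {
        Matrix4 m = Matrix4::Identity();
        m.Set(0, 1, -d.x / d.y); // x' = x - y * d.x / d.y
        m.Set(1, 1, 0.0f);       // y' = 0: geometry is squashed onto the plane
        m.Set(2, 1, -d.z / d.y); // z' = z - y * d.z / d.y
        return m;
    }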

    Read the article

  • Outlying DBAs

    - by steveh99999
    Read an interesting book recently, 'Outliers – the story of success' by Malcolm Gladwell. There's a good synopsis of the book here on Wikipedia. I don't want to write a detailed review of the book, but it's well worth a read. There were a couple of sections which I thought were possibly relevant to IT professionals and DBAs in particular. Firstly, 'the 10,000 hour rule': in this section Gladwell asserts that becoming a real 'elite performer' takes 10,000 hours of practice. 'Practice isn't the thing you do once you're good, it's the thing you do that makes you good.' He gives many interesting examples – the Beatles, Bill Gates etc – but I was wondering, could this be applied to DBAs? If it takes 10,000 hours to be a really elite DBA, how long does that actually take? 8 hours a day makes 1,250 days. If we assume that most DBAs work around 230 days a year, then it takes around 5 and a half years to become an elite DBA. But how much time per day does a DBA spend actually doing DBA work? Certainly it's my experience that the more experienced I get as a DBA, the less time I seem to spend actually doing DBA work – meetings, change-control meetings, project planning, liaising with other teams, appraisals etc. eat into the day. Is it more accurate to assume that a DBA spends half their time actually doing 'real' DBA work, or is that just my bad luck? So, in reality, I'd argue it can take at least 5 1/2, and more likely closer to 10, years to become an elite DBA. Why, then, do I keep receiving CVs for senior DBAs with 2-4 years of actual DBA experience? In the second section I found particularly interesting, Gladwell writes about the analysis of plane crashes and the importance of in-cockpit communications. He describes a couple of crashes involving Korean Air, where co-pilots were often deferential to pilots, and unwilling to openly criticise their more senior colleagues or point out errors when things were going badly wrong... There's a better summary of Gladwell's concepts on mitigation here, but to apply this to a DBA role: if you are a DBA and you do not agree with a decision of one of your superiors, then it's your duty as a DBA to say what you think is wrong, before it's too late... Obviously there's a fine line between constructive criticism and moaning, but a good senior DBA or manager should be able to take well-researched criticism and debate from a more junior DBA. Is this really possible?

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF 𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T 𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO 𠂊 (U+2008A) Han Character You may miss some, depending on what fonts you have installed. These characters are all outside the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad: Opera has problems editing them (deleting requires 2 presses of backspace). Notepad can't deal with them correctly (deleting requires 2 presses of backspace). File name editing in Windows dialogs is broken (deleting requires 2 presses of backspace). All Qt3 applications can't deal with them; they show two empty squares instead of one symbol. Python encodes such characters incorrectly when they are used directly: u'X' != unicode('X','utf-16') on some platforms when X is a character outside the BMP. Python 2.5's unicodedata fails to get the properties of such characters when Python is compiled with UTF-16 Unicode strings. StackOverflow seems to remove these characters from the text if they are edited directly as Unicode characters (these characters are shown using HTML Unicode escapes). WinForms TextBox may generate an invalid string when limited with MaxLength. It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... Do you think that UTF-16 should be considered harmful?
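    A short, hedged C++ illustration of the variable-length point, using only the standard surrogate ranges (nothing library-specific): code points above U+FFFF occupy two 16-bit units, so counting units is not counting characters, and a unit-oriented backspace deletes half a pair.

    #include <cstddef>
    #include <string>

    std::size_t CountCodePoints(const std::u16string& s)
    {
        std::size_t n = 0;
        for (std::size_t i = 0; i < s.size(); ++i, ++n)
            if (s[i] >= 0xD800 && s[i] <= 0xDBFF) // high surrogate...
                ++i;                              // ...its low surrogate belongs to the same code point
        return n;
    }
    // u"\U0001D11E" (MUSICAL SYMBOL G CLEF) has size() == 2 but is one code point.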

    Read the article

  • Implementing a wheeled character controller

    - by Lazlo
    I'm trying to implement Boxycraft's character controller in XNA (with Farseer), as Bryan Dysmas did (minus the jumping part, for now). My current implementation sometimes glitches between two parallel planes, and fails to climb 45-degree slopes. (YouTube videos in the links; the plane glitch is subtle.) How can I fix it? From the textual description, I seem to be doing it right. Here is my implementation (it seems like a huge wall of text, but it's easy to read; I wish I could simplify and isolate the problem more, but I can't):

    public Body TorsoBody { get; private set; }
    public PolygonShape TorsoShape { get; private set; }
    public Body LegsBody { get; private set; }
    public Shape LegsShape { get; private set; }
    public RevoluteJoint Hips { get; private set; }
    public FixedAngleJoint FixedAngleJoint { get; private set; }
    public AngleJoint AngleJoint { get; private set; }

    ...

    this.TorsoBody = BodyFactory.CreateRectangle(this.World, 1, 1.5f, 1);
    this.TorsoShape = new PolygonShape(1);
    this.TorsoShape.SetAsBox(0.5f, 0.75f);
    this.TorsoBody.CreateFixture(this.TorsoShape);
    this.TorsoBody.IsStatic = false;

    this.LegsBody = BodyFactory.CreateCircle(this.World, 0.5f, 1);
    this.LegsShape = new CircleShape(0.5f, 1);
    this.LegsBody.CreateFixture(this.LegsShape);
    this.LegsBody.Position -= 0.75f * Vector2.UnitY;
    this.LegsBody.IsStatic = false;

    this.Hips = JointFactory.CreateRevoluteJoint(this.TorsoBody, this.LegsBody, Vector2.Zero);
    this.Hips.MotorEnabled = true;
    this.AngleJoint = new AngleJoint(this.TorsoBody, this.LegsBody);
    this.FixedAngleJoint = new FixedAngleJoint(this.TorsoBody);
    this.Hips.MaxMotorTorque = float.PositiveInfinity;
    this.World.AddJoint(this.Hips);
    this.World.AddJoint(this.AngleJoint);
    this.World.AddJoint(this.FixedAngleJoint);

    ...

    public void Move(float m) // -1, 0, +1
    {
        this.Hips.MotorSpeed = 0.5f * m;
    }

    Read the article

  • Can you work for the big (Google, Microsoft, Facebook etc.) without getting too involved?

    - by Developer Art
    Having seen people talking about interviewing and working for the big companies, I keep wondering how much you are expected to actually get involved there. 1) I keep seeing folks from Google and Microsoft and others writing in forums, blogging, tweeting, speaking at conferences, and seemingly doing this on a 24/7/365 basis from their office, apartment, hotel and even plane. Are you really expected to commit that much if you come to work for them? Do they want you to think about your work while you're eating, sleeping, taking a shower, making love and so on? Can you in fact "switch off" at five and go home, forgetting everything? Perhaps you have a hobby, family life, kids, friends, personal projects? Is it the case that if you work for the big then you're expected not to have any life outside of the company? You can't develop your own projects, have your own clients and just have another life? 2) The other thing is the work contracts the big use. I've heard, for instance, that when you join Microsoft you need to provide a list of projects you're currently working on, and after that anything new you come up with during your employment automatically belongs to the company. Do all of the big do this? Can you refuse to sign a contract until such a clause is removed, or is it "take it or leave it" with the big because the legal department won't accept any change? Can you make them write the contract in a manner that keeps them away from anything you've developed in your private time? Of all the big I have only been at SAP, during my internship. Lately, while browsing through some old papers, I found my old contract, which stipulated that they owned everything I developed or invented during my employment; I would never sign that these days. On a side note, I don't think I would return to SAP, since I remember most people there were clueless and gave the impression they were simply sitting out their years waiting for retirement. But anyway, what do the other big companies put in their contracts? How far do you get involved when you go to work for the big? Or are you fully committed, body and soul? P.S. I'm not planning to join any of them, I'm just curious.

    Read the article

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on them myself, but I'd like to know if these are known methods and if they can be used in practice. Hybrid rendering Now I know that ray tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray tracing) is a well-known idea. However I had the following thought: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace exactly what you need and nothing more. Could this be fast enough for real-time effects? Rendered physics I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels along a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would be drawn in a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel, and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. AFAIK traditional OpenGL "picking" is a similar method (see the sketch below). This could be extended in a few ways: For larger (non-bullet) objects you render a larger portion of the screen. If you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works like the traditional slow-moving iterative physics test as well. You could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick. So are these techniques in use anywhere, and/or are they practical at all?
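    The 1x1-pixel test maps directly onto classic color-ID picking; a hedged sketch in plain OpenGL (drawObject is a hypothetical per-object draw call). The read-back must happen before the buffer is swapped (or against an offscreen framebuffer), and anti-aliasing must stay off or blended edge pixels will produce ids that belong to no object.

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   // id 0 is reserved for "no hit"
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDisable(GL_LIGHTING);                  // flat, exact colors: no lighting,
    glDisable(GL_TEXTURE_2D);                // no texturing, no blending
    for (int i = 0; i < numObjects; ++i)
    {
        int id = i + 1;                      // encode the object index in RGB
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        drawObject(i);
    }
    unsigned char rgb[3];
    glReadPixels(centerX, centerY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
    int hit = (rgb[0] | (rgb[1] << 8) | (rgb[2] << 16)) - 1; // -1 means nothing was hit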

    Read the article

  • Utility to store/cache all web pages and YouTube videos

    - by jonathanconway
    I found myself in the following situation. I'm travelling abroad with my laptop. I connect to a WiFi point and do a bit of browsing and play a YouTube video or two. Then I disconnect and hop on either a plane or taxi. Now I want to go back to some of the webpages I was browsing before and continue reading them, or watch some more of that YouTube video. Unfortunately it seems like none of these resources are cached, or if they are, I have no idea how to access them. Here's what I'd like: A utility that starts when my computer boots and sits in the background, silently caching all the web pages that I view. Not only that, but also the resources such as YouTube videos. Later, when I re-navigate to a site while disconnected, the browser automatically pulls the pages from my cache rather than giving me a 404 error. Or I can click an icon in the system tray and see a list of all the pages/videos in the cache and view any that I like. I'm sure Internet Explorer had a feature like this at some point, like "Offline Mode" or something. But these days it doesn't seem to work. Even when I select that option I still can't view pages that I'm certain I downloaded before. So has the utility I'm talking about been developed yet?

    Read the article

  • Which HDD brand do you trust?

    - by Shiki
    Okay, it says it's 'subjective' but I believe it's not. Basically I want to ask the community about your preference. Not really 'preference', but actual experience. For example, if you never had a problem with Western Digital, write that in an answer; or if such an answer already exists, just vote it up. And so on. (I've heard so many stories and experiences. I've only had Samsung, Maxtor, WD and Seagate HDDs. The Samsung died with bad blocks and had anomalies. The Maxtor died so fast I couldn't even really try it, and it ran really hot and loud. The Seagate is as loud as a jet plane and moderately hot. My WD (Green) is quiet, really cool and somewhat fast. That's all I have in the way of experience, so I would say Western Digital in an answer (or Hitachi. I've never had one yet, but every expert I know says I should get one, since they have had problems even with WD but Hitachi seems to be OK. My laptop came with a Hitachi HDD but I don't think that's really relevant.)) Basically I mean desktop 7200 RPM HDDs here. Notebook HDDs are OK also, but no Raptor/SCSI/server ones. Hope you get what I mean and the question won't get closed.

    Read the article

  • What is the peak theoretical WiFi G user density? [closed]

    - by Bigbio2002
    I've seen a few WiFi capacity planning questions, and this one is related, but hopefully different enough not to be closed. Also, this is related specifically to 802.11g, but a similar question could be asked for N. In order to squeeze more WiFi users into a space, the transmit power on the APs needs to be reduced and the APs moved closer together. My question is: how far can you practically take this before the network becomes unusable? There will come a point where the transmit power is so weak that nobody will actually be able to pick up a connection, or clients will constantly roam to/from APs spaced a few feet apart as they walk around. There are also only 3 non-overlapping channels to use, which is a factor to consider. After determining the peak AP density, you then multiply by users-per-AP, which should be easier to find out. After factoring all of this in and running some back-of-the-envelope calculations, I'd like to be able to get a figure like "XX users per 10 sq ft". This can be considered the physical limit of WiFi, and will keep people from asking about getting 3,000 people in a ballroom onto WiFi. Can anyone with WiFi experience chime in, or better yet, provide some calculations for a more accurate figure? Assumptions: Let's assume an ideal environment with no reflection (think of a big, square, open room, with the APs spaced out on a plane), APs placed on the ceiling so humans won't absorb the waves, and the only interference coming from the APs themselves and the client devices. As for what devices specifically, that's irrelevant for the first part of the question (AP density, so only channel and transmit power should matter). User experience: Wikipedia states that Wireless G has about 22 Mbps maximum effective throughput, or about 2.75 MB/s. For the purpose of this question, anything below 100 KB/s per user can be deemed a poor user experience. As for roaming, I'll assume the user is standing in the same place, so hopefully that will be a non-issue.
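    A rough baseline under these assumptions, as a worked example: 22 Mbps of effective throughput is about 2,750 KB/s, so a single 802.11g AP saturates at roughly 27 users at the 100 KB/s floor. With only 3 non-overlapping channels, any given spot can be served by at most 3 non-interfering APs, which caps the ceiling at around 80 concurrent users per coverage area before co-channel interference, contention overhead and roaming behavior start eating into that number.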

    Read the article

  • Run-Time Check Failure #2 - Stack around the variable 'indices' was corrupted.

    - by numerical25
    Well, I think I know what the problem is; I am just having a hard time debugging it. I am working with the DirectX API and I am trying to generate a plane along the x and z axes according to a book I have. The problem is when I am creating my indices: I think I am setting values out of the bounds of the indices array, but I am having a hard time figuring out what I did wrong. I am unfamiliar with this method of generating a plane, so it's a little difficult for me. Below is my code; pay particular attention to the indices loop.

    #include "MyGame.h"
    //#include "CubeVector.h"

    /* This code sets a projection and shows a turning cube. What has been added is the
       projection, rotation and a rasterizer to change the rasterization of the cube.
       The issue that was going on was something with the effect file which was causing
       the vertices not to be rendered correctly. */

    typedef struct
    {
        ID3D10Effect* pEffect;
        ID3D10EffectTechnique* pTechnique;

        //vertex information
        ID3D10Buffer* pVertexBuffer;
        ID3D10Buffer* pIndicesBuffer;
        ID3D10InputLayout* pVertexLayout;

        UINT numVertices;
        UINT numIndices;
    } ModelObject;

    ModelObject modelObject;

    // World Matrix
    D3DXMATRIX WorldMatrix;
    // View Matrix
    D3DXMATRIX ViewMatrix;
    // Projection Matrix
    D3DXMATRIX ProjectionMatrix;
    ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL;

    //grid information
    #define NUM_COLS 16
    #define NUM_ROWS 16
    #define CELL_WIDTH 32
    #define CELL_HEIGHT 32
    #define NUM_VERTSX (NUM_COLS + 1)
    #define NUM_VERTSY (NUM_ROWS + 1)

    bool MyGame::InitDirect3D()
    {
        if(!DX3dApp::InitDirect3D())
        {
            return false;
        }

        D3D10_RASTERIZER_DESC rastDesc;
        rastDesc.FillMode = D3D10_FILL_WIREFRAME;
        rastDesc.CullMode = D3D10_CULL_FRONT;
        rastDesc.FrontCounterClockwise = true;
        rastDesc.DepthBias = false;
        rastDesc.DepthBiasClamp = 0;
        rastDesc.SlopeScaledDepthBias = 0;
        rastDesc.DepthClipEnable = false;
        rastDesc.ScissorEnable = false;
        rastDesc.MultisampleEnable = false;
        rastDesc.AntialiasedLineEnable = false;

        ID3D10RasterizerState *g_pRasterizerState;
        mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState);
        mpD3DDevice->RSSetState(g_pRasterizerState);

        // Set up the World Matrix
        D3DXMatrixIdentity(&WorldMatrix);
        D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(0.0f, 10.0f, -20.0f), new D3DXVECTOR3(0.0f, 0.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f));

        // Set up the projection matrix
        D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f);

        if(!CreateObject())
        {
            return false;
        }

        return true;
    }

    //These are actions that take place after the clearing of the buffer and before the present
    void MyGame::GameDraw()
    {
        static float rotationAngle = 0.0f;

        // create the rotation matrix using the rotation angle
        D3DXMatrixRotationY(&WorldMatrix, rotationAngle);
        rotationAngle += (float)D3DX_PI * 0.0f;

        // Set the input layout
        mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout);

        // Set vertex buffer
        UINT stride = sizeof(VertexPos);
        UINT offset = 0;
        mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset);
        mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0);

        // Set primitive topology
        mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

        // Combine and send the final matrix to the shader
        D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix);
        pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix);

        // make sure modelObject is valid

        // Render a model object
        D3D10_TECHNIQUE_DESC techniqueDescription;
        modelObject.pTechnique->GetDesc(&techniqueDescription);

        // Loop through the technique passes
        for(UINT p=0; p < techniqueDescription.Passes; ++p)
        {
            modelObject.pTechnique->GetPassByIndex(p)->Apply(0);
            // draw the cube using all 36 vertices and 12 triangles
            mpD3DDevice->DrawIndexed(modelObject.numIndices, 0, 0);
        }
    }

    //Render actually encapsulates GameDraw, so you can call data before you actually clear the buffer or after you present data
    void MyGame::Render()
    {
        DX3dApp::Render();
    }

    bool MyGame::CreateObject()
    {
        VertexPos vertices[NUM_VERTSX * NUM_VERTSY];
        for(int z=0; z < NUM_VERTSY; ++z)
        {
            for(int x = 0; x < NUM_VERTSX; ++x)
            {
                vertices[x + z * NUM_VERTSX].pos.x = (float)x * CELL_WIDTH;
                vertices[x + z * NUM_VERTSX].pos.z = (float)z * CELL_HEIGHT;
                vertices[x + z * NUM_VERTSX].pos.y = 0.0f;
                vertices[x + z * NUM_VERTSX].color = D3DXVECTOR4(1.0, 0.0f, 0.0f, 0.0f);
            }
        }

        DWORD indices[NUM_VERTSX * NUM_VERTSY];
        int curIndex = 0;
        for(int z=0; z < NUM_ROWS; ++z)
        {
            for(int x = 0; x < NUM_COLS; ++x)
            {
                int curVertex = x + (z * NUM_VERTSX);
                indices[curIndex] = curVertex;
                indices[curIndex + 1] = curVertex + NUM_VERTSX;
                indices[curIndex + 2] = curVertex + 1;
                indices[curIndex + 3] = curVertex + 1;
                indices[curIndex + 4] = curVertex + NUM_VERTSX;
                indices[curIndex + 5] = curVertex + NUM_VERTSX + 1;
                curIndex += 6;
            }
        }

        //Create Layout
        D3D10_INPUT_ELEMENT_DESC layout[] = {
            {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0},
            {"COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0}
        };
        UINT numElements = (sizeof(layout)/sizeof(layout[0]));
        modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos);

        //Create buffer desc
        D3D10_BUFFER_DESC bufferDesc;
        bufferDesc.Usage = D3D10_USAGE_DEFAULT;
        bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices;
        bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
        bufferDesc.CPUAccessFlags = 0;
        bufferDesc.MiscFlags = 0;

        D3D10_SUBRESOURCE_DATA initData;
        initData.pSysMem = vertices;

        //Create the buffer
        HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer);
        if(FAILED(hr)) return false;

        modelObject.numIndices = sizeof(indices)/sizeof(DWORD);
        bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices;
        bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER;
        initData.pSysMem = indices;
        hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer);
        if(FAILED(hr)) return false;

        /////////////////////////////////////////////////////////////////////////////
        //Set up fx files
        LPCWSTR effectFilename = L"effect.fx";
        modelObject.pEffect = NULL;
        hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL);
        if(FAILED(hr)) return false;

        pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix();

        //Don't sweat the technique. Get it!
        LPCSTR effectTechniqueName = "Render";
        modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName);
        if(modelObject.pTechnique == NULL) return false;

        //Create Vertex layout
        D3D10_PASS_DESC passDesc;
        modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc);
        hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout);
        if(FAILED(hr)) return false;

        return true;
    }
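    A likely culprit, sketched against the grid constants in the question: the loop emits 6 indices per cell, NUM_ROWS * NUM_COLS * 6 = 1536 DWORDs, but the array is sized for the vertex count, NUM_VERTSX * NUM_VERTSY = 289, so the writes run far past the end of the stack buffer, which is exactly what the run-time check reports. Sizing the array to the index count should stop the corruption:

    DWORD indices[NUM_ROWS * NUM_COLS * 6]; // 6 indices (2 triangles) per grid cell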

    Read the article

  • Transform OpenGL coordinates to lower UIView coordinates

    - by John Qualis
    Hi, I am new to OpenGL on the iPhone. I am developing an iPhone app similar to a barcode reader, but with an extra OpenGL layer. The bottommost layer is a UIImagePickerController; then I use a UIView on top and draw a rectangle at certain coordinates on the iPhone screen. So far everything is OK. Then I try to draw an OpenGL 3-D model in that rectangle. I am able to load a 3-D model on the iPhone based on this code here: http://iphonedevelopment.blogspot.com/2008/12/start-of-wavefront-obj-file-loader.html but I am not able to transform the coordinates of the rectangle into OpenGL coordinates. I would appreciate any help. Do I need to use a matrix to translate the currentPosition of the 3-D model so it is drawn within myRect? The code is given below. Appreciate any help/pointers in this regard. John

    - (void)drawView:(GLView*)view
    {
        static GLfloat rotation = 0.0;
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glColor4f(0.0, 0.5, 1.0, 1.0);

        // The coordinates of the rectangle are myRect.x,
        // myRect.y, myRect.width, myRect.height
        // Do I need a transform matrix here?
        //glOrthof(-160.0f, 160.0f, -240.0f, 240.0f, -1.0f, 1.0f);
        [plane drawSelf];
        ....
    }

    - (void)setupView:(GLView*)view
    {
        const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
        GLfloat size;
        glEnable(GL_DEPTH_TEST);
        glMatrixMode(GL_PROJECTION);
        //glMatrixMode(GL_MODELVIEW);
        size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
        CGRect rect = view.bounds;
        glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / (rect.size.width / rect.size.height), zNear, zFar);
        glViewport(0, 0, rect.size.width, rect.size.height);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    }
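    A hedged sketch of the usual approach: rather than translating the model, point the OpenGL viewport (and scissor rectangle) at the UIView rectangle. UIKit's origin is top-left while OpenGL's is bottom-left, so the y coordinate has to be flipped; viewHeight below stands for the GL view's full height in pixels and myRect for the UIKit rectangle, both assumptions about the surrounding code. The existing glFrustumf call then only needs its aspect ratio computed from myRect instead of view.bounds.

    GLint vx = (GLint)myRect.origin.x;
    GLint vy = (GLint)(viewHeight - (myRect.origin.y + myRect.size.height)); // flip top-left to bottom-left
    glViewport(vx, vy, (GLsizei)myRect.size.width, (GLsizei)myRect.size.height);
    glEnable(GL_SCISSOR_TEST); // optional: also confine clears to the rectangle
    glScissor(vx, vy, (GLsizei)myRect.size.width, (GLsizei)myRect.size.height);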

    Read the article

  • Sample uniformly at random from an n-dimensional unit simplex.

    - by dreeves
    Sampling uniformly at random from an n-dimensional unit simplex is the fancy way to say that you want n random numbers such that they are all non-negative, they sum to one, and every possible vector of n non-negative numbers that sum to one is equally likely. In the n=2 case you want to sample uniformly from the segment of the line x+y=1 (i.e., y=1-x) that is in the positive quadrant. In the n=3 case you're sampling from the triangle-shaped part of the plane x+y+z=1 that is in the positive octant of R3. (Image from http://en.wikipedia.org/wiki/Simplex.) Note that picking n uniform random numbers and then normalizing them so they sum to one does not work: you end up with a bias towards less extreme numbers. Similarly, picking n-1 uniform random numbers and then taking the nth to be one minus their sum also introduces bias. Wikipedia gives two algorithms to do this correctly: http://en.wikipedia.org/wiki/Simplex#Random_sampling (Though the second one currently claims to only be correct in practice, not in theory. I'm hoping to clean that up or clarify it when I understand this better. I initially stuck a "WARNING: such-and-such paper claims the following is wrong" on that Wikipedia page and someone else turned it into the "works only in practice" caveat.) Finally, the question: What do you consider the best implementation of simplex sampling in Mathematica (preferably with empirical confirmation that it's correct)? Related questions http://stackoverflow.com/questions/2171074/generating-a-probability-distribution http://stackoverflow.com/questions/3007975/java-random-percentages
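    For reference, a hedged C++ rendering of the correct method (the question asks for Mathematica, but the algorithm is language-independent): draw n independent Exponential(1) variates and normalize by their sum. The result is exactly uniform on the unit simplex; it is the Dirichlet(1,...,1) distribution.

    #include <random>
    #include <vector>

    std::vector<double> SampleUnitSimplex(int n, std::mt19937& rng)
    {
        std::exponential_distribution<double> exp1(1.0);
        std::vector<double> x(n);
        double sum = 0.0;
        for (int i = 0; i < n; ++i) { x[i] = exp1(rng); sum += x[i]; } // iid Exp(1) draws
        for (int i = 0; i < n; ++i) x[i] /= sum; // non-negative, sums to one, uniform on the simplex
        return x;
    }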

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 𝕥 𝟶 𠂊 You may miss some, depending on what fonts you have installed. These characters are all outside the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad: Opera has problems editing them. Notepad can't deal with them correctly (deleting, for example). File name editing in Windows dialogs is broken. All Qt3 applications can't deal with them. StackOverflow seems to remove these characters if they are edited directly as Unicode characters, and only seems to allow them as HTML Unicode escapes. So... This was a very simple test. Do you think that UTF-16 should be considered harmful?

    Read the article

  • Fill 2D area bound by vertices in XNA

    - by oakskc
    I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill it with a color, not a file-based texture. For an example, take a rounded rectangle whose vertices are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent. Additionally, I would like to fill it with more than a single color, i.e. divide the bound area into four vertical bands and give each a different color. You don't have to provide me the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but failed miserably). This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I also want to learn what should and shouldn't be done, on top of what can and can't be done.

    Read the article

  • MATLAB intersection of 2 surfaces

    - by caglarozdag
    Hi everyone, I consider myself a beginner in MATLAB so bear with me if the answer to my question is an obvious one.

    Phi = 0:pi/100:2*pi;
    Theta = 0:pi/100:2*pi;
    [PHI,THETA] = meshgrid(Phi,Theta);
    R = (1 + cos(PHI).*cos(PHI)).*(1 + cos(THETA).*cos(THETA));
    [X,Y,Z] = sph2cart(THETA,PHI,R);
    surf(X,Y,Z); % display
    hold on;

    x1 = -4:.1:4;
    [X1,Y1] = meshgrid(x1);
    a = 1.8; b = 0; c = 3; d = 0;
    Z1 = (d - a * X1 - b * Y1)/c;
    shading flat;
    surf(X1,Y1,Z1);

    I have written this code, which plots a 3D Cartesian view of a plane intersecting a peanut-shaped object at an angle. I need to get their intersection in 2D (it is going to be the outline of a peanut, but a bit skewed, since the intersection happens at an angle), but I don't know how. Thanks

    Read the article

  • QT- QImage and multi-threading problem.

    - by umanga
    Greetings all, Please refer to the image at: http://i48.tinypic.com/316qb78.jpg We are developing an application to extract cell edges from MRC images from an electron microscope. The MRC file format stores volumetric pixel data (http://en.wikipedia.org/wiki/Voxel) and we simply use a 3D char array (char***) to load and store the data (grayscale values) from an MRC file. As shown in the image, there are 3 viewers to display the XY, YZ and ZX planes respectively. Scrollbars on the top of the viewers are used to change the image slice along an axis. Here are the steps we perform when the user changes the scrollbar position: 1) get the new scrollbar value (this is the selected slice); 2) for the relevant plane (YZ, XY or ZX), generate a slice array (char* slice;) for the selected slice by reading the 3D char array (char***); 3) create a new QImage* (Format_RGB888) and set its pixel values by reading 'slice' (using img->setPixel(x,y,c);); 4) paint this new QImage* in the paintEvent() method. We are going to execute the "edge-detection" process in a separate thread since it is an intensive process. During this process we need to draw the detected curve (a set of pixels) on top of the above QImage* (as a layer). This means we need to call drawPoint() methods from outside the GUI thread. Is QImage the best choice for this case? What is the best way to execute Qt drawing methods from another thread? thanks in advance,
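    A hedged sketch of how this is often structured in Qt: QImage is explicitly allowed as a QPainter target outside the GUI thread, so the worker can rasterize the detected curve into its own transparent overlay image and hand it back to the GUI thread (for example through a queued signal); the function and parameter names below are hypothetical.

    #include <QColor>
    #include <QImage>
    #include <QPainter>
    #include <QPoint>
    #include <QVector>

    // Runs in the worker thread; painting on a QImage here is legal in Qt.
    QImage BuildCurveOverlay(int width, int height, const QVector<QPoint>& detectedCurve)
    {
        QImage overlay(width, height, QImage::Format_ARGB32);
        overlay.fill(qRgba(0, 0, 0, 0));   // fully transparent layer
        QPainter p(&overlay);
        p.setPen(Qt::red);
        for (int i = 0; i < detectedCurve.size(); ++i)
            p.drawPoint(detectedCurve[i]); // rasterize the detected edge pixels
        p.end();
        return overlay;                    // deliver to the GUI thread, then composite in paintEvent()
    }

    The paintEvent() then draws the slice QImage and this overlay on top of it; no widget is ever painted from the worker thread.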

    Read the article

  • An implementation of Sharir's or Aurenhammer's deterministic algorithm for calculating the intersection/union of 'N' discs

    - by RGrey
    The problem of finding the intersection/union of 'N' discs/circles on a flat plane was first proposed by M. I. Shamos in his 1978 thesis: Shamos, M. I. "Computational Geometry" Ph.D. thesis, Yale Univ., New Haven, CT, 1978. Since then, in 1985, Micha Sharir presented an O(n log^2 n) time and O(n) space deterministic algorithm for the disc intersection/union problem (based on modified Voronoi diagrams): Sharir, M. Intersection and closest-pair problems for a set of planar discs. SIAM J. Comput. 14 (1985), pp. 448-468. In 1988, Franz Aurenhammer presented a more efficient O(n log n) time and O(n) space algorithm for circle intersection/union using power diagrams (generalizations of Voronoi diagrams): Aurenhammer, F. Improved algorithms for discs and balls using power diagrams. Journal of Algorithms 9 (1988), pp. 151-161. Earlier, in 1983, Paul G. Spirakis had presented an O(n^2) time deterministic algorithm, as well as an O(n) probabilistic algorithm: Spirakis, P.G. Very Fast Algorithms for the Area of the Union of Many Circles. Rep. 98, Dept. Comput. Sci., Courant Institute, New York University, 1983. I've been searching for any implementations of the algorithms above, focusing on computational geometry packages, and I haven't found anything yet. As neither appears trivial to put into practice, it would be really neat if someone could point me in the right direction!

    Read the article
