Search Results

Search found 588 results on 24 pages for 'vision'.


  • Stereo Matching - Dynamic Programming

    - by Varun
    Hi, I am supposed to implement a dynamic programming algorithm for the stereo matching problem. I have read two research papers but still haven't understood how to write my own C++ program for it. Is there a book or resource available somewhere that I can use to get an idea of how to actually start coding? Internet searches only give me journal and conference papers about dynamic programming, not how to implement the algorithm step by step. Thanks Varun
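    A minimal sketch (in Python/NumPy, my own simplification rather than any specific paper's formulation) of what scanline dynamic-programming stereo looks like in code: an absolute-difference matching cost per (pixel, disparity) pair, a linear smoothness penalty between neighbouring pixels on the same scanline, a forward pass that accumulates costs, and a backtracking pass that recovers the disparity path. Function and parameter names are illustrative.

        import numpy as np

        def scanline_dp_disparity(left, right, max_disp=32, smooth=4.0):
            """DP stereo on each scanline: cost = |L(x) - R(x-d)| plus a penalty
            proportional to the disparity change between neighbouring pixels."""
            left = left.astype(np.float32)
            right = right.astype(np.float32)
            h, w = left.shape
            disparity = np.zeros((h, w), dtype=np.int32)
            d_range = np.arange(max_disp)
            jump_cost = smooth * np.abs(d_range[:, None] - d_range[None, :])
            for y in range(h):
                # matching cost C[x, d]; huge where x - d falls outside the image
                C = np.full((w, max_disp), 1e9, dtype=np.float32)
                for d in range(max_disp):
                    C[d:, d] = np.abs(left[y, d:] - right[y, :w - d])
                # forward pass: D[x, d] = C[x, d] + min_d'(D[x-1, d'] + smooth*|d - d'|)
                D = np.zeros_like(C)
                back = np.zeros((w, max_disp), dtype=np.int32)
                D[0] = C[0]
                for x in range(1, w):
                    trans = D[x - 1][None, :] + jump_cost
                    back[x] = np.argmin(trans, axis=1)
                    D[x] = C[x] + trans[d_range, back[x]]
                # backtrack the cheapest disparity path for this scanline
                d = int(np.argmin(D[-1]))
                for x in range(w - 1, -1, -1):
                    disparity[y, x] = d
                    d = back[x, d]
            return disparity

    Running this on a rectified grayscale pair gives a rough disparity map; the published methods mostly differ in the matching cost, the occlusion handling and the penalty term, while the DP skeleton stays the same.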

    Read the article

  • Calculating rotation and translation matrices between two odometry positions for monocular linear triangulation

    - by user1298891
    Recently I've been trying to implement a system to identify and triangulate the 3D position of an object in a robotic system. The general outline of the process goes as follows: (1) identify the object using SURF matching, from a set of "training" images to the actual live feed from the camera; (2) move/rotate the robot a certain amount; (3) identify the object using SURF again in this new view. Now I have: a set of corresponding 2D points (same object from the two different views), two odometry locations (position + orientation), and camera intrinsics (focal length, principal point, etc.) since it's been calibrated beforehand, so I should be able to create the 2 projection matrices and triangulate using a basic linear triangulation method as in Hartley & Zisserman's book Multiple View Geometry, p. 312: solve the AX = 0 equation for each of the corresponding 2D points, then take the average. In practice, the triangulation only works when there's almost no change in rotation; if the robot even rotates a slight bit while moving (due to e.g. wheel slippage) then the estimate is way off. This also applies in simulation. Since I can only post two hyperlinks, here's a link to a page with images from the simulation (on the map, the red square is the simulated robot position and orientation, and the yellow square is the estimated position of the object using linear triangulation). So you can see that the estimate is thrown way off even by a little rotation, as in Position 2 on that page (that was 15 degrees; if I rotate it any more then the estimate is completely off the map), even in a simulated environment where a perfect calibration matrix is known. In a real environment when I actually move around with the robot, it's worse. There aren't any problems with obtaining point correspondences, nor with actually solving the AX = 0 equation once I compute the A matrix, so I figure it probably has to do with how I'm setting up the two camera projection matrices, specifically how I'm calculating the translation and rotation matrices from the position/orientation info I have relative to the world frame. How I'm doing that right now is: the rotation matrix is composed by creating a 1x3 matrix [0, (change in orientation angle), 0] and then converting that to a 3x3 one using OpenCV's Rodrigues function; the translation matrix is composed by rotating the two points by (start angle) degrees and then subtracting the final position from the initial position, in order to get the robot's straight and lateral movement relative to its starting orientation. This results in the first projection matrix being K [I | 0] and the second being K [R | T], with R and T calculated as described above. Is there anything I'm doing really wrong here? Or could it possibly be some other problem? Any help would be greatly appreciated.
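    For reference, here is a minimal Python/OpenCV sketch of how the two projection matrices can be assembled from planar odometry and fed to a linear triangulation. The variable names and the yaw-about-the-y-axis convention are assumptions; the one detail worth double-checking against the setup described above is that the translation column of K[R|T] is not the second camera's position itself but t = -R*C (with C the camera-2 centre expressed in camera-1 coordinates) - with a pure position difference the error grows exactly as the rotation grows, which matches the symptom described.

        import cv2
        import numpy as np

        def triangulate_from_odometry(K, pts1, pts2, yaw_rad, cam2_center):
            """Build P1 = K[I|0] and P2 = K[R|t] from planar odometry and
            triangulate matched pixel coordinates (pts1, pts2 are Nx2 arrays)."""
            # rotation about the camera y-axis (typical when the robot turns on
            # the ground plane); adjust the axis to your camera convention
            R, _ = cv2.Rodrigues(np.array([0.0, yaw_rad, 0.0]))
            # extrinsic translation: x2 ~ K (R X + t) with t = -R @ C,
            # where C is camera 2's centre in camera 1's frame
            C = np.asarray(cam2_center, dtype=np.float64).reshape(3, 1)
            t = -R @ C
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
            P2 = K @ np.hstack([R, t])
            X_h = cv2.triangulatePoints(P1, P2,
                                        np.asarray(pts1, np.float64).T,
                                        np.asarray(pts2, np.float64).T)
            return (X_h[:3] / X_h[3]).T   # Nx3 points in camera-1 coordinates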

    Read the article

  • minimum enclosing rectangle of fixed aspect ratio

    - by Ramya Narasimha
    I have an image with many rectangles at different positions and of different sizes (both overlapping and non-overlapping). I also have a non-negative score associated with each of these rectangles. My problem now is to find one larger rectangle *of a fixed (given) aspect ratio* that encloses as many of these rectangles as possible. I am looking for an algorithm to do this; if anyone has a solution, even a partial one, it would be helpful. Please note that the positions of the rectangles in the image are fixed and cannot be moved around, and there is no orientation issue as all of them are upright.
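    Where a dedicated algorithm is hard to pin down, a brute-force baseline is easy to state and useful for checking anything cleverer. The sketch below (Python; function and parameter names are illustrative) scans candidate positions of a fixed-aspect-ratio window over the image and keeps the placement that maximises the total score of the fully enclosed rectangles - swap the sum for a count if "as many rectangles as possible" is the actual objective rather than the summed score.

        from itertools import product

        def best_enclosing_rect(rects, scores, aspect, widths, img_w, img_h, step=8):
            """rects: list of (x, y, w, h); aspect = W/H of the enclosing window;
            widths: candidate window widths to try.  Returns ((x, y, W, H), score)."""
            best, best_score = None, float("-inf")
            for W in widths:
                H = W / aspect
                xs = range(0, int(img_w - W) + 1, step)
                ys = range(0, int(img_h - H) + 1, step)
                for x, y in product(xs, ys):
                    s = sum(sc for (rx, ry, rw, rh), sc in zip(rects, scores)
                            if rx >= x and ry >= y and rx + rw <= x + W and ry + rh <= y + H)
                    if s > best_score:
                        best, best_score = (x, y, W, H), s
            return best, best_score

    With n rectangles and p candidate placements this is O(n*p), which is usually fine for a few hundred rectangles; smarter sweeps only matter once that becomes a bottleneck.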

    Read the article

  • Camera and Image recognition

    - by kjh
    I recently watched a YouTube video where a guy got a camera to recognize when a Rubik's cube was held up to it, and it captured the 9-square color combination before snapping a picture of the cube and displaying the 3x3 grid on the screen of his computer. What kind of programming is this, and where would I start reading to get into this sort of thing? Specifically: controlling a camera, and getting it to pick out certain parts of an image and translate that data.
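    This is usually done with a computer-vision library such as OpenCV, which handles both the "controlling a camera" part and the pixel access. A hedged Python sketch of the two pieces the question asks about (the grid position and size are hard-coded assumptions; a real system would first detect the cube, e.g. via contours, rather than assume it is centred):

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)                       # first attached camera
        ok, frame = cap.read()                          # grab one frame
        if ok:
            h, w = frame.shape[:2]
            size = min(h, w) // 2                       # assume the cube face sits
            x0, y0 = (w - size) // 2, (h - size) // 2   # roughly in the centre
            cell = size // 3
            colours = []
            for row in range(3):
                for col in range(3):
                    patch = frame[y0 + row * cell:y0 + (row + 1) * cell,
                                  x0 + col * cell:x0 + (col + 1) * cell]
                    colours.append(patch.reshape(-1, 3).mean(axis=0))  # mean BGR
            print(np.array(colours).round())            # 9 rows, one per facelet
        cap.release()

    Classifying each mean colour into one of the six cube colours (e.g. by nearest reference colour in HSV space) then gives the 3x3 grid seen in the video.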

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images), and an unoptimised search algorithm can go through 3,000 images in 22 minutes, comparing each one against the others (3/sec). The basic overview is: you take an image, rescale it to 8x8 and then convert those pixels to HSV. The hue, saturation and value are then truncated to 4 bits and it becomes one big hex string. Comparing images basically walks along the two strings and adds up the differences it finds. If the total is below 64 then it's the same image; different images are usually around 600-800, and below 20 is extremely similar. Are there any improvements I can make to this model? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half, and put the most significant bits first, so if they fail the check then the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that yet still allow them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.
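    For comparison, here is roughly what that hashing scheme looks like written out (a Python/Pillow sketch of the algorithm described above, not the original PHP):

        from PIL import Image

        def hsv_hash(path):
            """8x8 resize -> HSV -> keep the top 4 bits of each channel -> hex string."""
            img = Image.open(path).convert("HSV").resize((8, 8))
            nibbles = []
            for h, s, v in img.getdata():
                nibbles += [h >> 4, s >> 4, v >> 4]      # truncate each channel to 4 bits
            return "".join("%x" % n for n in nibbles)    # 64 pixels * 3 = 192 hex chars

        def distance(hash_a, hash_b):
            """Sum of per-nibble differences; small means similar images."""
            return sum(abs(int(a, 16) - int(b, 16)) for a, b in zip(hash_a, hash_b))

    One easy improvement: hue is circular, so |a - b| over-penalises hues that wrap around 0; taking min(d, 16 - d) on the hue nibble (and perhaps weighting hue more heavily than value) may tighten the similar/different gap.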

    Read the article

  • Merging photo textures - (from calibrated cameras) - projected onto geometry

    - by freakTheMighty
    I am looking for papers/algorithms for merging projected textures onto geometry. To be more specific: given a set of fully calibrated cameras/photographs and geometry, how can we define a metric for choosing which photograph should be used to texture a given patch of the geometry? I can think of a few attributes one may seek to minimize, including the angle between the surface normal and the camera, the distance of the camera from the surface, as well as some parameterization of sharpness. The question is how these things get combined, and whether there are well-established existing solutions.
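    In the absence of a canonical answer, a common starting point is a per-patch, per-camera penalty that is simply a weighted sum of those attributes, with the photograph of lowest penalty winning (or the weights reused as blending weights to avoid seams). A hedged Python sketch, where the weights and the blur measure are entirely up to the user:

        import numpy as np

        def view_penalty(patch_center, patch_normal, cam_center, blur,
                         w_angle=1.0, w_dist=0.1, w_blur=0.5):
            """Smaller is better: combines (a) the angle between the surface normal
            and the direction towards the camera, (b) camera-to-surface distance and
            (c) an externally supplied sharpness/blur measure."""
            to_cam = cam_center - patch_center
            dist = np.linalg.norm(to_cam)
            cos_angle = np.clip(np.dot(patch_normal, to_cam / dist), -1.0, 1.0)
            angle = np.arccos(cos_angle)          # 0 when the camera looks head-on
            return w_angle * angle + w_dist * dist + w_blur * blur

        # per patch: best_photo = min(cameras, key=lambda c: view_penalty(p, n, c.center, c.blur))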

    Read the article

  • Detecting Markers Using OpenCV

    - by Hamza Yerlikaya
    I am trying to detect various objects containing colored markers, so a red-blue-green marker identifies object A, and a red-blue-red marker identifies object B. My problem is I can't use template matching because the objects can be rotated. Currently I am thinking about checking for each color and then finding the object by checking the distance between colors, but it seems inefficient. So my question is: is there a better way to do this?
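    One alternative worth sketching: segment each colour once with HSV thresholds, take blob centroids, and then reason about the sequence and spacing of centroids instead of matching templates, which is rotation-tolerant by construction. A Python/OpenCV sketch (the HSV ranges are placeholders to tune for the actual markers and lighting):

        import cv2
        import numpy as np

        RANGES = {"red":   ((0, 120, 70),   (10, 255, 255)),
                  "green": ((40, 80, 70),   (80, 255, 255)),
                  "blue":  ((100, 120, 70), (130, 255, 255))}

        def find_blobs(bgr):
            """Return {colour: [(cx, cy), ...]} centroids of each colour's blobs."""
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            found = {}
            for name, (lo, hi) in RANGES.items():
                mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
                # [-2] keeps this working with both the OpenCV 3 and 4 return styles
                contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)[-2]
                pts = []
                for c in contours:
                    m = cv2.moments(c)
                    if m["m00"] > 50:                    # ignore tiny specks
                        pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
                found[name] = pts
            return found

    Identifying object A vs. B then reduces to checking which colours appear and in what order along the line joining the centroids, which is only a handful of distance computations per frame.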

    Read the article

  • How do I construct a 3D model of a room from 2 stereo cameras? What is the determining factor to an

    - by yasumi
    Currently, I have extracted depth points to construct a 3D model from 2 stereo cameras. The methods I have used are OpenCV's graph-cut method and the software from http://sourceforge.net/projects/reconststereo/. However, the generated 3D models are not very accurate, which leads me to these questions: 1) What is the problem with a pixel-based method? 2) Should I change my pixel-based method to a feature-based or object-recognition-based method? Is there a best method? 3) Are there any other ways to do such reconstruction? Additionally, the depth extracted comes from only 2 images. What if I turn the camera 360 degrees to obtain a video? Looking forward to suggestions on how to combine this depth information. Thank you very much :)
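    As a concrete baseline to compare those tools against, OpenCV's semi-global block matcher plus a reprojection with the calibration's Q matrix is only a few lines; the file names and matcher parameters below are placeholders, and a rectified image pair is assumed:

        import cv2
        import numpy as np

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # rectified pair
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                        blockSize=7, P1=8 * 7 * 7, P2=32 * 7 * 7)
        disp = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> px

        # Q is the 4x4 disparity-to-depth matrix produced by stereo calibration
        # (cv2.stereoRectify); stored here in a placeholder file
        Q = np.load("Q.npy")
        points_3d = cv2.reprojectImageTo3D(disp, Q)              # HxWx3 point cloud

    For the 360-degree case, each frame pair gives one such cloud in its own camera frame; fusing them requires estimating the camera pose per frame (visual odometry / structure-from-motion) and then merging the transformed clouds, e.g. with ICP or a volumetric fusion.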

    Read the article

  • How to do motion tracking of a object using video

    - by Niroshan
    Could someone direct me to a tutorial, or guide me on how to track the motion of an object moving with 6 DOF? I am planning to use a video stream of a moving toy car, and I want to calculate the displacement and rotation angle of the toy car. I came across some research papers but couldn't find any libraries to do the job. Is there a way to do this using OpenCV, Matlab or some other freely available software? Thank you
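    If the toy car stays roughly in a plane in front of the camera, a cheap starting point is sparse optical flow plus a per-frame 2D similarity fit, which directly yields an image-plane displacement and an in-plane rotation angle; full 6-DOF pose would additionally need camera calibration and a 3D model or markers (e.g. solvePnP). A Python/OpenCV sketch with a placeholder video path:

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("toy_car.avi")            # placeholder file name
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good_old = pts[status.ravel() == 1]
            good_new = nxt[status.ravel() == 1]
            if len(good_old) >= 3:
                # 2D similarity (translation + rotation + scale) between frames
                M, _ = cv2.estimateAffinePartial2D(good_old, good_new)
                if M is not None:
                    dx, dy = M[0, 2], M[1, 2]
                    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
                    print("shift=(%.1f, %.1f) px  rotation=%.1f deg" % (dx, dy, angle))
            prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

    Features drift and disappear over time, so re-detecting them every few frames (or detecting the car first and flowing only points inside its bounding box) keeps the estimate stable.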

    Read the article

  • Track Pedestrians

    - by 2vision2
    I am using OpenCV sample code “peopledetect.cpp” to detect and track pedestrians. The code uses HoG for feature extraction and SVM for classification. Please find the reference paper used here. The camera is mounted on the wall at a height of 10 feet and 45 degree down. There is no restriction on the pedestrian movement within the frame. I want to track the detected pedestrians’ movement within the frame. The issue I am facing is pedestrians are detected only in the middle region of the frame as most of the features are not visible as soon as the pedestrian enters the frame region. I want to track each person’s movement in the entire frame region. How to do it? Is tracking required? Can anyone give any reference to blogs/codes?
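    For reference, the core of peopledetect.cpp translated into a few lines of Python, with a note on where tracking would plug in (the video path, stride and scale values are placeholders to tune for the 10-foot, 45-degree viewpoint):

        import cv2

        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        cap = cv2.VideoCapture("corridor.avi")           # placeholder video path
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # padding and a small scale step help with partially visible people
            rects, weights = hog.detectMultiScale(frame, winStride=(4, 4),
                                                  padding=(16, 16), scale=1.05)
            centroids = [(x + w // 2, y + h // 2) for (x, y, w, h) in rects]
            # tracking = associating these per-frame detections over time
            # (nearest-neighbour matching, a Kalman filter per person, etc.)
            for (cx, cy) in centroids:
                cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
            cv2.imshow("pedestrians", frame)
            if cv2.waitKey(1) == 27:                     # Esc quits
                break

    Because the default HOG model expects a mostly full-body, roughly upright view, detections near the frame edges (where people are cut off) will always be weak; the usual workaround is to initialise a track in the middle region where detection is reliable and then follow it towards the edges with optical flow or a Kalman filter, rather than trying to make the detector fire everywhere.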

    Read the article

  • Regarding Standard Oxford Format for vlfeat sift

    - by Karl
    One of my upperclassmen gave me a data set for experimenting with vlfeat's SIFT; however, her extracted SIFT data contains 5 dimensions for the frame part. Recall the vl_sift function: [F,D] = VL_SIFT(I). Each column of D is the descriptor of the corresponding frame in F. F normally contains 4 dimensions, consisting of x-coordinate, y-coordinate, scale, and orientation. So I asked her what this 5th dimension is, and she pointed me to search for the "standard Oxford format" for SIFT features. The thing is, I have tried searching for this standard Oxford format for SIFT features, but I have had no luck finding it at all. If somebody knows about this, could you please point me in the right direction?

    Read the article

  • Automatic people counting + twittering.

    - by c2h2
    I want to develop a system that accurately counts people going through a normal 1-2 m wide door, tweets whenever someone goes in or out, and reports how many people remain inside. Now, the Twitter part is easy, but people counting is difficult. There are some existing counting solutions, but they do not quite fit my needs. My idea/algorithm: should I mount an infrared camera on top of the door and monitor it constantly, dividing the camera image into a grid and calculating who has entered and left? Can you give me some suggestions and a starting point?
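    One common starting point, sketched below in Python/OpenCV under the assumption of an overhead camera and one person in the doorway at a time: background subtraction to find moving blobs, then counting a crossing whenever a blob's centroid moves across a virtual line. The thresholds and the naive association are placeholders; the tweet would be fired where the counters change.

        import cv2

        cap = cv2.VideoCapture(0)                        # overhead (or IR) camera
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        count_in = count_out = 0
        prev_centroids = []

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            line_y = frame.shape[0] // 2                 # virtual line across the doorway
            mask = subtractor.apply(frame)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                        cv2.CHAIN_APPROX_SIMPLE)[-2]
            centroids = []
            for c in contours:
                if cv2.contourArea(c) > 800:             # blob size threshold to tune
                    x, y, w, h = cv2.boundingRect(c)
                    centroids.append((x + w // 2, y + h // 2))
            for cx, cy in centroids:
                if prev_centroids:
                    px, py = min(prev_centroids,
                                 key=lambda p: abs(p[0] - cx) + abs(p[1] - cy))
                    if py < line_y <= cy:
                        count_in += 1                    # tweet "someone entered" here
                    elif py >= line_y > cy:
                        count_out += 1                   # tweet "someone left" here
            prev_centroids = centroids

    Occupancy is then count_in - count_out. An overhead mounting largely avoids the occlusion problem of two people walking side by side, which is where this naive scheme breaks down first.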

    Read the article

  • Triangulation & Direct linear transform

    - by srand
    Following Hartley/Zisserman's Multiple View Geometry, Algorithm 12: The optimal triangulation method (p. 318), I got the corresponding image points xhat1 and xhat2 (step 10). In step 11, one needs to compute the 3D point Xhat. One such method is the Direct Linear Transform (DLT), mentioned in 12.2 (p. 312) and 4.1 (p. 88). The homogeneous method (DLT), p. 312-313, states that it finds a solution as the unit singular vector corresponding to the smallest singular value of A. Thus:

        A = [xhat1(1) * P1(3,:)' - P1(1,:)' ;
             xhat1(2) * P1(3,:)' - P1(2,:)' ;
             xhat2(1) * P2(3,:)' - P2(1,:)' ;
             xhat2(2) * P2(3,:)' - P2(2,:)' ];
        [Ua Ea Va] = svd(A);
        Xhat = Va(:,end);
        plot3(Xhat(1), Xhat(2), Xhat(3), 'r.');

    However, A is a 16x1 matrix, resulting in a Va that is 1x1. What am I doing wrong (and what is the fix) in getting the 3D point? For what it's worth, sample data:

        xhat1 = 1.0e+009 * [4.9973; -0.2024; 0.0027]
        xhat2 = 1.0e+011 * [2.0729; 2.6624; 0.0098]
        P1 = [699.6674 0 392.1170 0; 0 701.6136 304.0275 0; 0 0 1.0000 0]
        P2 = 1.0e+003 * [-0.7845 0.0508 -0.1592 1.8619; -0.1379 0.7338 0.1649 0.6825; -0.0006 0.0001 0.0008 0.0010]
        A  = 1.0e+011 * [-0.0000; 0; 0.0500; 0; 0; -0.0000; -0.0020; 0; -1.3369; 0.2563; 1.5634; 2.0729; -1.7170; 0.3292; 2.0079; 2.6624]   <- my computation
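    The transposes are the thing to look at: in MATLAB each term like xhat1(1)*P1(3,:)' - P1(1,:)' is a 4x1 column, so stacking four of them with ';' gives 16x1; without the transposes each term is a 1x4 row and A is the intended 4x4. Also note that xhat1/xhat2 above are homogeneous 3-vectors, so they should be divided by their third component before being used in A. A NumPy sketch of the same DLT step, with illustrative names:

        import numpy as np

        def dlt_triangulate(x1, x2, P1, P2):
            """Homogeneous DLT: A is 4x4 (one row per constraint) and Xhat is the
            right singular vector for the smallest singular value of A."""
            x1 = np.asarray(x1, float).ravel()
            x2 = np.asarray(x2, float).ravel()
            if x1.size == 3:                   # accept homogeneous input, normalise
                x1 = x1[:2] / x1[2]
            if x2.size == 3:
                x2 = x2[:2] / x2[2]
            A = np.vstack([x1[0] * P1[2] - P1[0],      # rows, not columns: A is 4x4
                           x1[1] * P1[2] - P1[1],
                           x2[0] * P2[2] - P2[0],
                           x2[1] * P2[2] - P2[1]])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]                # Euclidean 3D point

        # X = dlt_triangulate(xhat1, xhat2, np.asarray(P1), np.asarray(P2))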

    Read the article

  • Video Reconstruction

    - by chris barber
    How does reconstruction using video compare to standard reconstruction using still images? What similarities and differences are there? Finally, what can and cannot be reconstructed using standard stereo methods?

    Read the article

  • Efficient way to calculate "vision cones" on 2D tile map?

    - by OverMachoGrande
    I'm trying to calculate which tiles a particular unit can "see" if it is facing a certain direction on a tile map (within a certain range and angle of facing). The easiest way would be to draw a certain number of tiles outward and raycast to each tile; however, I'm hoping for something slightly more efficient. A picture says a thousand words: the red dot is the unit (which is facing upwards). My goal is to calculate the yellow tiles. The green blocks are walls (walls are between tiles, and it's easy to check whether you can pass between two tiles). The blue line represents something like the "raycasting" method I was talking about, but I'd rather not have to do this. EDIT: Units can only face north/south/east/west (0, 90, 180 or 270 degrees) and the FoV is always 90 degrees, which should simplify some calculations. I'm thinking there's some sort of recursive-ish/stack-based/queue-based algorithm, but I can't quite figure it out. Thanks!
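    A queue-based approximation that avoids per-tile raycasts (hedged: this is flood-fill FOV, not true line of sight - visibility can leak around the far end of a wall, and recursive shadowcasting is the standard fix if that matters). A Python sketch; walls are modelled as a set of blocked edges between adjacent tiles, which matches "walls are between tiles":

        from collections import deque

        def cone_fov(origin, facing, radius, walls):
            """90-degree cone visibility, approximated by flood fill.
            facing: (0,-1) N, (0,1) S, (1,0) E, (-1,0) W.
            walls: set of frozenset({tile_a, tile_b}) edges that block sight."""
            fx, fy = facing
            ox, oy = origin

            def in_cone(tile):
                dx, dy = tile[0] - ox, tile[1] - oy
                forward = dx * fx + dy * fy             # distance along the facing axis
                sideways = abs(dx * fy) + abs(dy * fx)  # distance across it
                return 0 <= forward <= radius and sideways <= forward

            visible = {origin}
            queue = deque([origin])
            while queue:
                cx, cy = queue.popleft()
                for nxt in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                    edge = frozenset({(cx, cy), nxt})
                    if nxt not in visible and in_cone(nxt) and edge not in walls:
                        visible.add(nxt)
                        queue.append(nxt)
            return visible

        # example: facing north with range 5, one wall edge north-east of the unit
        # tiles = cone_fov((10, 10), (0, -1), 5, {frozenset({(11, 8), (11, 7)})})

    Each tile is pushed at most once, so this is linear in the number of tiles inside the cone with no trigonometry; if the leak-around-walls artefact is noticeable, restricting each tile's allowed "parents" to the one or two neighbours nearer the unit gets most of the way to proper shadowcasting.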

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data secured by multiple Business Units (BUs). Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that, while mapping these roles in HCM Role Provisioning Rules, fewer entries need to be made. This could facilitate maintenance for enterprises with a large number of Business Units. Note: the example below applies as well if the securing entity is Inventory Organization.
    Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany.
    Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations
    With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices.
    Figure 2. The List of Values of Business Units is limited to a single one. This is the effect of the Data Role granted to that user, as can be seen in Figure 1.
    In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups those Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View. That condition will later be used to create a data policy for our newly created Role. The BU View is a Database resource and is accessed from APM as seen in the search below.
    Figure 3. Viewing a Database Resource in APM
    The next step is to create a new condition, in which we define a SQL predicate that includes the two BUs (the ids below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected.
    Figure 4. Custom Role that inherits the Purchase Order Overview Duty
    We are now ready to create our Data Policy. In APM, we search for our newly created Role and navigate to "Find Global Policies". We query the Role we want to secure and navigate to view its global policies.
    Figure 5. The Job Role we plan on securing
    We can see that the role was not defined with a Data Policy, so we will create one that uses the condition we created earlier.
    Figure 6. Creating a New Data Policy
    In the General Information tab, we have to specify the DB Resource that the Security Policy applies to: in our case this is the BU View.
    Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by
    In the Rules tab, we make the rule applicable to multiple values of the DB Resource we selected in the previous tab. This is where we associate the condition we created against the BU View with this data policy, by entering the condition name in the Condition field.
    Figure 8. Data Policy Rule
    The last step of defining the Data Policy consists of explicitly selecting the Actions that are governed by this Data Policy. In this case, for example, we select the Actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured Data Role.
    Figure 9. Data Policy Actions
    We can now see a new Data Policy associated with our Role.
    Figure 10. Role is now secured by a Data Policy
    We now assign that new Role to the User. Of course this does not have to be done in OIM and can be done using a Provisioning Rule in HCM.
    Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role.
    Once that user accesses the Invoices Workarea, this is what they see: in the image below, the LOV of Business Unit returns the two values defined in our data policy, namely Vision Operations and Vision Germany.
    Figure 12. The List of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user, as can be seen in Figure 11.

    Read the article

  • Adventures in Scrum: Lesson 2 - For the record

    - by Martin Hinshelwood
    At SSW we have always done Agile. Recently we have started doing Scrum, and we have nearly completed our first Sprint ever using Scrum. As you probably guessed from my previous post, it looks like it is going to be a "Failed Sprint", but the Scrum Team (this includes the ScrumMaster and the Product Owner) has learned a huge amount about working in the Scrum Framework. We have been running with a "Proxy Product Owner" for the last two weeks, but a simple mistake occurred either during the "Product Planning Meeting" or the "Sprint Planning Meeting" that could have prevented this Sprint from failing. We had a heated discussion on the vision of someone not in the room, which ended with the assertion that the Product Owner would be quizzed again on their vision. This did not happen, and we ran with the Proxy Product Owner's vision for two weeks.
    Product Owner vision: Update Component A of Product A to Silverlight
    Proxy Product Owner vision: Update Product A to Silverlight
    Do you see the problem? Worse than that, as we had a lot of junior members of the Scrum Team and we are just feeling our way around how Scrum will work at SSW, I missed implementing a fundamental rule. That's right, it was me. It does not matter that I did not know about this rule; it's on the site and I should have read it. Would a police officer let you off if you did not know that a red light meant stop? I think not… But what is this amazing rule, I hear you shout? It's simple: as per our rule, I should have sent the following email:
    "Dear Proxy Product Owner, For the record, I disagree that the Product Owner wants us to 'Update Product A to Silverlight' as I still think that he wants us to 'Update Component A of Product A to Silverlight' and not the entire application. Regards, Martin" - 'For the record' - Rules to being Software Consultants - Dealing with Clients
    This email should have been copied to the entire Scrum Team, which would have included the Product Owner, who would have nipped this misunderstanding in the bud, and we would have had one less impediment. Technorati Tags: SSW,SSW Rules,SSW Standards,Scrum,Product Owner,ScrumMaster,Sprint,Sprint Planning Meeting,Product Planning Meeting

    Read the article

  • Leading Analyst Firm Positions Oracle in Leaders Quadrant for Web Content Management

    - by Christie Flanagan
    Gartner, Inc. has named Oracle a Leader in its latest "Magic Quadrant for Web Content Management." Gartner's Magic Quadrants position vendors within a particular quadrant based on their completeness of vision and their ability to execute on that vision. According to Gartner, "WCM plays an increasingly important role in business performance. It has become the central point of coordination for initiatives involving the enterprise's online presence, and these initiatives have become more sophisticated and more important to enterprises' business strategies. Thus, WCM is key for organizations wishing to execute a strategy of OCO (online channel optimization) that embraces areas such as customer experience management, e-commerce, digital marketing, multichannel marketing and website consolidation." Gartner continued, "Leaders should drive market transformation. Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of OCO. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit. Leaders can: demonstrate enterprise deployments; offer integration with other business applications and content repositories; provide a vertical-process or horizontal-solution focus." Oracle WebCenter, the engagement platform powering exceptional experiences for customers, employees and partners, connects people and information by bringing together the most complete portfolio of portal, Web experience management, content, social, and collaboration technologies into a single integrated product suite. Oracle WebCenter also provides the foundation for Oracle Fusion Middleware and Oracle Fusion Applications to deliver a next-generation user experience.  To see the latest reports, webcasts and demonstrations about Oracle's web experience management solution, Oracle WebCenter Sites, please visit our Connected Customer Experience Resource Center.

    Read the article

  • Major Analyst Report Chooses Oracle As An ECM Leader

    - by brian.dirking(at)oracle.com
    Oracle announced that Gartner, Inc. has named Oracle as a Leader in its latest "Magic Quadrant for Enterprise Content Management" in a press release issued this morning. Gartner's Magic Quadrant reports position vendors within a particular quadrant based on their completeness of vision and ability to execute. According to Gartner, "Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clearly articulated vision. In the context of ECM, they have strong channel partners, presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technology or vertical market. Leaders deliver a suite that addresses market demand for direct delivery of the majority of core components, though these are not necessarily owned by them, tightly integrated, unique or best-of-breed in each area. We place more emphasis this year on demonstrated enterprise deployments; integration with other business applications and content repositories; incorporation of Web 2.0 and XML capabilities; and vertical-process and horizontal-solution focus. Leaders should drive market transformation." "To extend content governance and best practices across the enterprise, organizations need an enterprise content management solution that delivers a broad set of functionality and is tightly integrated with business processes," said Andy MacMillan, vice president, Product Management, Oracle. "We believe that Oracle's position as a Leader in this report is recognition of the industry-leading performance, integration and scalability delivered in Oracle Enterprise Content Management Suite 11g." With Oracle Enterprise Content Management Suite 11g, Oracle offers a comprehensive, integrated and high-performance content management solution that helps organizations increase efficiency, reduce costs and improve content security. In the report, Oracle is grouped among the top three vendors for execution, and is the furthest to the right, placing Oracle as the most visionary vendor. This vision stems from Oracle's integration of content management right into key business processes, delivering content in context as people need it. Using a PeopleSoft Accounts Payable user as an example, as an employee processes an invoice, Oracle ECM Suite brings that invoice up on the screen so the processor can verify the content right in the process, improving speed and accuracy. Oracle integrates content into business processes such as Human Resources, Travel and Expense, and others, in the major enterprise applications such as PeopleSoft, JD Edwards, Siebel, and E-Business Suite. As part of Oracle's Enterprise Application Documents strategy, you can see an example of these integrations in this webinar: Managing Customer Documents and Marketing Assets in Siebel. You can also get a white paper of the ROI Embry Riddle achieved using Oracle Content Management integrated with enterprise applications. Embry Riddle moved from a point solution for content management on accounts payable to an infrastructure investment - they are now using Oracle Content Management for accounts payable with Oracle E-Business Suite, and for student on-boarding with PeopleSoft e-Campus. They continue to expand their use of Oracle Content Management to address further use cases from a core infrastructure. Oracle also shows its vision in the ability to deliver content optimized for online channels. 
Marketers can use Oracle ECM Suite to deliver digital assets and offers as part of an integrated campaign that understands website visitors and ensures that they are given the most pertinent information and offers. Oracle also provides full lifecycle management through its built-in records management. Companies are able to manage the lifecycle of content (both records and non-records) through built-in retention management. And with the integration of Oracle ECM Suite and Sun Storage Archive Manager, content can be routed to the appropriate storage media based upon content type, usage data or other business rules. This ensures that the most accessed content is instantly available, and archived content is stored on a more appropriate medium like tape. You can learn more in this webinar - Oracle Content Management and Sun Tiered Storage. If you are interested in reading more about why Oracle was chosen as a Leader, view the Gartner Magic Quadrant for Enterprise Content Management.

    Read the article
