Search Results

Search found 3186 results on 128 pages for 'david meadows'.


  • Customer Experience Management : A conversation with world experts RTD

    - by David Lefranc
    A conversation with world experts in Customer Experience Management in Rome, Italy, on Wed, June 20, 2012. It is our pleasure to share the registration link below for your chance to meet active members of the Oracle Real-Time Decisions Customer Advisory Board. Join us to hear how leading brands across the world have achieved tremendous return on investment through their Oracle Real-Time Decisions deployments, and do not miss this unique opportunity to ask them specific questions directly during our customer roundtable. Please share this information with anyone interested in real-time decision management and cross-channel predictive process optimization. http://www.oracle.com/goto/RealTimeDecisions

    Read the article

  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud). He paused, then said, "What is the cloud?" I replied, "a bunch of servers connected to the internet." Apparently he had visions of something much more magnificent. Another similar term is "big data." These marketing terms help to quickly convey topics, but they are oversimplifications open to many interpretations. At their core, those terms are shiny packages holding recycled ideas.

    I see many headlines declaring that big data changes everything, but it doesn't. Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented. But there have been a few changes to the landscape that make big data a topic of conversation:

    1. Computing power has caught up to storage volumes. It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away. CPUs are faster, solid state drives are more plentiful, and new ways to store and search data are available. My iPhone is more powerful than the computer used in the Apollo missions to the moon.

    2. Unstructured data is everywhere. The Web used to be where retailers published product information, but now users are generating the bulk of the content in the form of comments, videos, and "likes." The variety of information available to retailers is huge, and its meaning is difficult to discern.

    3. Everything is connected. Looking at a report from my router, there are no less than 20 active devices on my home network. We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away. Not only is there more data, but it's arriving at higher velocity.

    Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity. We now have more volume, more variety, and more velocity, plus different technologies to deal with them. But at heart, the objectives are still the same: informed decisions, accurate forecasts, and improved optimizations.

    So don't let the term "big data" throw you off the scent. Retailers still need to execute on the basics. But do take a fresh look at the data that's available and the new technologies to process it. The landscape will continue to change, and agile organizations will always be reevaluating their approaches. You can just add some more weapons to the arsenal.

    Read the article

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics beyond just code coverage. FxCop comes to mind as an example, though I'm sure there are others. Anything that can generate useful, reportable data and metrics would be good, whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than in the tests, for example), and so on. I'm not sure of the right way to formulate the question, since polling questions or "What's your favorite code analysis tool?" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them.

    The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all stay relatively at equilibrium, and if one starts to stray toward the negative, it's an early indication of problems with the code. In the age-old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.
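    As a concrete example of wiring one of these tools into the build, a first pass might just shell out to FxCop's command-line runner and archive the XML report for trending. This is a sketch only: the paths, assembly name, and report location are hypothetical, and it assumes FxCop 10's FxCopCmd.exe is installed on the build server.

        REM Run FxCop analysis and emit an XML report the CI server can archive
        "C:\Program Files (x86)\Microsoft FxCop 10.0\FxCopCmd.exe" ^
            /file:MyProject\bin\Release\MyProject.dll ^
            /out:reports\fxcop-results.xml ^
            /summary

    Each build then contributes one data point per metric, which is exactly what the dashboard-over-time idea needs.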

    Read the article

  • ASP.NET Membership Password Hash -- .NET 3.5 to .NET 4 Upgrade Surprise!

    - by David Hoerster
    I'm in the process of evaluating how my team will upgrade our product from .NET 3.5 SP1 to .NET 4. I expected the upgrade to be pretty smooth, with very few, if any, upgrade issues. To my delight, the upgrade wizard said that everything upgraded without a problem. I thought I was home free, until I decided to build and run the application. A big problem was staring me in the face: I couldn't log on.

    Our product is using a custom ASP.NET Membership Provider; essentially it's a modified SqlMembershipProvider with some additional properties. My login was failing during the OnAuthenticate event handler of my ASP.NET Login control, right where it was calling my provider's ValidateUser method. After a little digging, it turns out that the password hash the membership provider was computing for comparison against the stored password hash in the membership database tables was different. I compared the password hash from the .NET 4 code line, and it was a different generated hash than in my .NET 3.5 code line. (Tip: when upgrading, always keep a valid debug copy of your app handy in case you have to step through a lot of code.) So it was a strange situation, but at least I knew what the problem was. Now the question was, "Why is it happening?"

    It turns out that a breaking change in .NET 4 is that the default hash algorithm changed to SHA256. Hey, that's great: a stronger hashing algorithm. But what do I do with all the hashed passwords in my database that were created using SHA1? Well, you can make two quick changes to your app's web.config and everything will be OK. Basically, you need to override the default HashAlgorithmType property of your membership provider. Here are the two places to do that:

    1. At the beginning of your <system.web> element, add the following <machineKey> element:

        <system.web>
          <machineKey validation="SHA1" />
          ...
        </system.web>

    2. On your <membership> element under <system.web>, add the following hashAlgorithmType attribute:

        <system.web>
          <membership defaultProvider="myMembership" hashAlgorithmType="SHA1">
          ...
        </system.web>

    After that, you should be good to go! Hope this helps.

    Read the article

  • Oracle Magazine - OWB 11gR2 and Heterogeneous Databases

    - by David Allan
    There's a nice article titled 'Oracle Warehouse Builder 11g Release 2 and Heterogeneous Databases' by Oracle ACE Director and Rittman Mead Consulting co-founder Mark Rittman in the May/June 2010 issue of Oracle Magazine, covering the heterogeneous database support in OWB 11gR2: http://www.oracle.com/technology/oramag/oracle/10-may/o30bi.html Big thanks to Mark for this write-up. There is an Oracle white paper on the support here, and for examples of this extensibility you can go to the OWB blog archive, where there are quite a few posts. I would recommend the following posts from the archive: architecture overview, bulk file loading, MySQL open connectivity, and MySQL bulk extract, amongst others.

    Read the article

  • CRM vs VRM

    - by David Dorf
    In a previous post, I discussed the potential power of combining social, interest, and location graphs in order to personalize marketing and shopping experiences for consumers. Marketing companies have been trying to collect detailed information for that very purpose, a large majority of which comes from tracking people on the internet. But their approaches stem from the one-way nature of traditional advertising. With TV, radio, and magazines there is no opportunity to truly connect to customers, which has trained marketing companies to [covertly] collect data and segment customers into easily identifiable groups. To a large extent, we think of this as CRM.

    But what if we turned this viewpoint upside-down to accommodate the two-way nature of social media? The notion of marketing as conversations was the basis for the Cluetrain, an early attempt at drawing attention to the fact that customers are actually unique humans. A more practical implementation is Project VRM, which is a reverse CRM of sorts. Instead of vendors managing their relationships with customers, customers manage their relationships with vendors.

    Your shopping experience is not really controlled by you; rather, it's controlled by the retailer and advertisers. And unfortunately, they typically don't give you a say in the matter. Yes, they might tailor the content for "female, age 25-35, interested in shoes," but that's not really the essence of you, is it? A better approach is to let consumers volunteer information about themselves. And why wouldn't they, if it means a better, more relevant shopping experience? I'd gladly list out my likes and dislikes in exchange for getting rid of all those annoying cookies on my hard drive.

    I really like the diagram from Beyond SocialCRM, as it captures the differences between CRM and VRM. The closest thing to VRM I can find is Buyosphere, a start-up that allows consumers to track their shopping history across many vendors, then share it appropriately. Also, Amazon does a pretty good job allowing its customers to edit their profile, which includes everything you've ever purchased from Amazon. You can mark items as gifts, or explicitly exclude them from its recommendation engine. This is a win-win for both the consumer and the retailer.

    So here is my plea to retailers: instead of trying to infer my interests from snapshots of my day, please just ask me. We'll both have a better experience in the long run.

    Read the article

  • How to change Eclipse font sizes

    - by David M. Karr
    I'm trying to reduce the font sizes used in Eclipse. I've read several notes describing how to do this, but none of them have made a difference. Obviously, changing it in Eclipse's preferences doesn't do it. The common answers about using "Appearance - Fonts" don't work, because there is no "Fonts" tab; I believe I saw one person say that the "Fonts" tab isn't supposed to be there anymore. The next suggestion was to install MyUnity and change the font settings there. That changed the fonts used in other apps, like gnome-terminal and window headers, but it still has had no effect on Eclipse.
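    One approach that often works for GTK2-based Eclipse builds on Ubuntu (a hedged sketch; the font name, size, and theme path are placeholders for your own setup) is to launch Eclipse with a custom GTK resource file that overrides the UI font:

        # ~/.gtkrc-eclipse: a GTK2 resource file carrying a smaller UI font
        gtk-font-name = "Sans 8"

        # Launch Eclipse with the custom file appended to the theme's own gtkrc
        GTK2_RC_FILES=/usr/share/themes/Ambiance/gtk-2.0/gtkrc:$HOME/.gtkrc-eclipse eclipse

    Because the override applies only to the Eclipse process, other applications keep the fonts set via MyUnity or the system settings.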

    Read the article

  • The Apple Passbook

    - by David Dorf
    In a previous job I worked on smart card systems. Our vision was to replace the physical wallet with a chip card that contained stored value, credit cards, and loyalty cards. The technology was up to the task, but the business model never worked out. When all those things go onto a single card, who owns the card and maintains the applications? Each bank wanted its own card with branding, so instead of consolidating lots of cards onto one, we ended up with the same number of cards, just more expensive chip cards. The Costanza wallet would not die.

    More recently I've been able to move lots of these cards into iOS apps using products like CardStar, TripIt, and Fandango. I guess moving from physical to digital is progress, but still no consolidation. But this week Apple announced its Passbook, an iOS feature that consolidates boarding passes, loyalty cards, and movie tickets. Another step in the right direction.

    We've been waiting for Apple to announce an NFC solution to take advantage of the 400 million credit cards it stores in iTunes for its customers. Perhaps Passbook is the first step in that direction. It wouldn't take much to add credit cards to Passbook, then enable secure transfer of the track data using an NFC-equipped iPhone. I've got to think this has to be part of the larger vision, but of course Apple is very secretive. I think the steps will be loyalty, coupons, and then payment as Passbook evolves. Retailers should keep an eye on Apple, and expect these things to happen in the Apple stores first.

    Read the article

  • Keeping the Screen Aspect Ratio While Staying Centered

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN'* on how to make a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and played it on screen. Here's the blueberry image example, and it's perfectly rounded. When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit; in the snapshot you can see the blueberry is almost, but not quite, perfectly rounded. Now, when I tried the suggested code for the aspect ratio, the perfect circle was retained, but another problem occurred: I was expecting the view to be centered, but instead it is offset to the right, leaving half the screen black. Here is my code using the suggested screen aspect ratio approach.

    Class fields:

        // Ingredients needed for screen aspect ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO =
                ((float) VIRTUAL_WIDTH) / ((float) VIRTUAL_HEIGHT);

        private Camera Mother_Camera;
        private Rectangle Viewport;

    render():

        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);

        // Resetting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y,
                (int) Viewport.width, (int) Viewport.height);

        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

    show():

        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

    Was this code meant to fix the screen aspect ratio and proportions, or is it statically dependent on the actual device's width and height?

    *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
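    For reference, the centering in the post linked above comes from a resize() method that computes a letterboxed viewport; if that step is missing, or the crop offsets are dropped, the view ends up pinned to one side exactly as described. Below is a sketch of that computation adapted to the field names above (an assumption about the poster's setup, not their actual code):

        @Override
        public void resize(int width, int height) {
            // Compare the window's aspect ratio to the virtual one
            float aspectRatio = (float) width / (float) height;
            float scale = 1f;
            Vector2 crop = new Vector2(0f, 0f);

            if (aspectRatio > ASPECT_RATIO) {
                // Window too wide: scale to height, center horizontally
                scale = (float) height / (float) VIRTUAL_HEIGHT;
                crop.x = (width - VIRTUAL_WIDTH * scale) / 2f;
            } else if (aspectRatio < ASPECT_RATIO) {
                // Window too tall: scale to width, center vertically
                scale = (float) width / (float) VIRTUAL_WIDTH;
                crop.y = (height - VIRTUAL_HEIGHT * scale) / 2f;
            } else {
                scale = (float) width / (float) VIRTUAL_WIDTH;
            }

            // The centered viewport rectangle consumed by render()
            Viewport = new Rectangle(crop.x, crop.y,
                    VIRTUAL_WIDTH * scale, VIRTUAL_HEIGHT * scale);
        }

    If crop.x and crop.y stay at zero (or glViewport is given the wrong origin), the virtual screen is drawn against one edge and the remaining space shows as a black band.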

    Read the article

  • Is there a way to disallow crawling over https only in robots.txt?

    - by David Wilkins
    I just realized that Bingbot is crawling my company's website's pages over https. Bing already crawls the site over http, so this seems frivolous. Is there a way to specify Disallow: / for https only? According to Wikipedia, each protocol has its own robots.txt, and according to Google's robots.txt specification, the robots.txt applies to both http and https. I don't want to Disallow: / for Bing entirely, just over https.
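    One common workaround, assuming the site runs on Apache with mod_rewrite (the file name robots_https.txt is illustrative), is to serve a different robots file to https requests:

        # In the site config or .htaccess: hand https crawlers a separate file
        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^robots\.txt$ /robots_https.txt [L]

    And robots_https.txt would then block everything for all user agents:

        User-agent: *
        Disallow: /

    This leaves the http robots.txt untouched, so Bing keeps crawling over http while being told to stay away from the https URLs.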

    Read the article

  • Attend COLLABORATE 11 Virtually

    - by david.stokes(at)oracle.com
    Stay connected to one of the leading Oracle training and educational events, COLLABORATE 11 - IOUG Forum. Join virtually by attending Plug-In to Orlando for just $299. Oracle and other leading industry experts will present over 40 hours of live presentations on topics such as Database, Development, Business Intelligence, Security, Data Warehousing and more. For a full list of scheduled Plug-in sessions, click here. Register now and enter the priority code PC07 to claim your group license.

    Read the article

  • OWB 11gR2 – Parallel DML and Query

    - by David Allan
    A quick post illustrating conventional (non-direct-path) parallel inserts and queries using OWB, following on from some recent posts from Jean-Pierre and Randolf on this topic. The mapping configuration properties are where you define these hints in OWB. Taking JP's simple illustration, the parallel query hints in OWB are defined on the 'Extraction hint' property of the source, and the parallel DML hints are defined on the 'Loading hint' property of the target table operator. If we then generate the code, you can see the intermediate code generated below… Finally… remember the parallel-enabled session for this all to fly… Anyway, hope this helps join a few dots….
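    As a rough sketch of what that generated code boils down to (the table names and parallel degrees here are illustrative, not OWB's literal output), a conventional-path parallel insert-select looks like:

        -- Parallel DML must be enabled at the session level for the insert to run in parallel
        ALTER SESSION ENABLE PARALLEL DML;

        -- The 'Loading hint' lands on the INSERT, the 'Extraction hint' on the SELECT
        INSERT /*+ PARALLEL(tgt, 4) */ INTO target_table tgt
        SELECT /*+ PARALLEL(src, 4) */ *
          FROM source_table src;

        COMMIT;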

    Read the article

  • Secure login for a game that is open source

    - by David Park
    I am making a game which I will be open sourcing. It's a simple arcade-like game, but it requires a network connection because it is meant to be played with other people. The thing I am worrying about is how I can be sure that the client is the one that I put out for the end user to play with. Kind of like sv_pure for Team Fortress 2. I was thinking of different ways to combat this, such as the server requesting the client's version or even its MD5 hash, but people with simple Java knowledge could just force a method to always return what the server wants.

    Read the article

  • A good example project to 'prove' my skills [closed]

    - by David Archer
    I've been a commercial programmer for about 3 years now, but all of my commercial work is based on PHP (with CakePHP, WordPress and Wildfire) and ASP.NET (on C#, including MVC 3, Umbraco and Kentico), as well as plenty of HTML/CSS/jQuery examples to show. A future employer has asked me to show my Ruby on Rails potential. I've done Ruby on Rails before for fun, but nothing worthy of commercial showing. What I'd like to know from a group of programmers is: what would be a good 'portfolio demo' piece for you? What have you seen in the past that impressed you? What are you looking for? For Ruby lead developers specifically, what sort of things are you looking to see in the code? Cheers!

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together. The model covers maturity levels across five key areas: profiling data sources, defining a data strategy, defining a data consolidation plan, data maintenance, and data utilization.

    Profile data sources: Profiling data sources involves taking an inventory of all data sources across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.

    Define data strategy: A data strategy requires an understanding of data usage. Given that usage, various data governance requirements need to be developed. This includes data controls and security rules, as well as data structure and usage policies.

    Define data consolidation strategy: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed, and synchronization policies also need to be developed.

    Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.

    Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data, and ensuring security rules are in place and enforced, are crucial aspects of full no-risk data utilization.

    For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve its ultimate goals.

    Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, siloed structures with limited integration; and gaps in automation.

    Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well.

    Best Practice is a serious MDM maturity level, characterized by process automation improvements. The scope is enterprise-wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation.

    Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM, and is leveraged in business process orchestration.

    Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity, with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It's free, so go ahead and download it and use it as you see fit.

    Read the article

  • MacOSX VirtualHost: "You don't have permission to access / on this server" error

    - by David Casillas
    The Apache installation on Mac OS X is running OK. I have tried to create a VirtualHost called test.local, but as soon as I uncomment the line Include /private/etc/apache2/extra/httpd-vhosts.conf in /private/etc/apache2/httpd.conf and try to access the test.local virtual host, I get the error "You don't have permission to access / on this server". The VirtualHost configuration in /private/etc/apache2/extra/httpd-vhosts.conf is:

        <VirtualHost *:80>
            ServerName test.local
            DocumentRoot "/Users/username/Sites/Test/public"
            <Directory "/Users/username/Sites/Test/public">
                Options Indexes FollowSymLinks Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I have also included the virtual host in the hosts file:

        127.0.0.1 test.local

    Read the article

  • authorize.net SIM PCI compliance

    - by David
    Does anyone know if authorize.net's SIM rids you of having to be PCI compliant? The payment form is hosted on authorize.net's site and they're processing the payment. I know you can do a relay response, which basically puts some of the transaction details in a URL that goes back to your website (to display a receipt). I'm not sure what information gets put into the URL, though, and I'm wondering if that makes you have to become PCI compliant.

    Read the article

  • Can't (re)install VLC (removed by update, again)

    - by David matthews
    I use VLC a lot, and when 2.0 came out, Ubuntu did not update to that version; the repo still had the older version even months later. So I added the daily PPA (http://ppa.launchpad.net/videolan/stable-daily/ubuntu) and that worked for a while. A few months later I received a 'distribution upgrade', and when I installed it, it removed VLC. When I tried to reinstall, it gave me a bunch of unmet dependencies, so I disabled the source, ran apt-get update, and tried to install the older VLC; that did not work either. I eventually found a web page that helped me get it working, and I was also able to get the 'stable daily' working too. But last night I got another 'distro upgrade' and it uninstalled VLC again. When I try to reinstall from the daily PPA I get:

        The following packages have unmet dependencies:
         vlc : Depends: fonts-freefont-ttf but it is not installable
               Depends: vlc-nox (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
               Recommends: vlc-plugin-notify (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
               Recommends: vlc-plugin-pulse (= 2.0.3+git20121005+r392-0~r42~precise1) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    And from the default source:

         vlc : Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
               Recommends: vlc-plugin-notify (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
         vlc-plugin-pulse : Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but it is not going to be installed
               Depends: libvlccore5 (>= 2.0.0) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    (And yes, I ran apt-get update after turning off the daily PPA.) Any ideas? (Ubuntu 12.04, 64-bit)
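    A common way out of this kind of PPA version mismatch (offered as a suggestion, and assuming the ppa-purge package is available) is to downgrade everything from the daily PPA back to the Ubuntu archive versions, then reinstall:

        sudo apt-get install ppa-purge
        # Downgrades all packages installed from the PPA to the archive versions
        sudo ppa-purge ppa:videolan/stable-daily
        sudo apt-get update
        sudo apt-get install vlc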

    Read the article

  • Checking for cross-site scripting vulnerabilities in Perl web applications

    - by David Scholefield
    I'm putting together some notes for a dev team on how to write secure Perl code, especially taking into account the current OWASP top 10 web application vulnerabilities. For cross-site scripting I've included information on ensuring that all output to the browser is checked and escaped where necessary, but I'm looking for more automated mechanisms that would mean a developer doesn't have to think about every output statement and, potentially, miss one. Perl's taint mode sounds like it should help because it distrusts all user input, but it doesn't complain when tainted data is output to the browser. Apart from checking all output statements individually (probably by calling a generic sanitizing function), does anyone have any ideas on how Perl can help with this via existing libraries or techniques?
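    As a starting point for that generic sanitizing function, here is a minimal sketch using the HTML::Entities module (print_html is a placeholder name, not an existing convention):

        use strict;
        use warnings;
        use HTML::Entities qw(encode_entities);

        # Escape anything user-supplied before it reaches the browser
        sub print_html {
            my ($text) = @_;
            print encode_entities($text);
        }

        my $untrusted = q{<script>alert(1)</script>};
        print_html($untrusted);    # prints &lt;script&gt;alert(1)&lt;/script&gt;

    Funneling all output through one such function at least reduces the problem to auditing for stray print statements, which a Perl::Critic policy could be written to flag.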

    Read the article

  • Auto Save and Auto Load Game onto the Device's Storage: Concept Question

    - by David Dimalanta
    I'm trying to make a simple app that tests saving and loading state. Is it a good idea to make an app that auto-saves and auto-loads without any prompting, so that a new user can close it and pick up where they left off another day? I tried making a moving sprite that starts at the center. When I close and re-open the app, the sprite goes back to the center instead of the last coordinate where it landed (e.g., at the top). The sequence of saving and loading I want goes like this:

    1. I open the app.
    2. The sprite starts at the center. The app displays the sprite's coordinates plus the number of times the sprite has moved.
    3. I exit the app, which automatically saves the state without notice.
    4. When I re-open it, it automatically loads the state, retaining the number of times the sprite moved, its coordinates, and where it landed.

    These steps are similar (allowing that mine is a sprite-movement test app) to the sequence used to save and load levels and records in Jewel Stackers for Android. And, by default, if there is no SD card in a tablet or phone running Android, does it automatically save/load to internal storage or to the APK file itself? Is an auto-save and auto-load feature also useful for protecting and fetching information (i.e. fastest time, the sprite's last coordinates, etc.)?
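    In libGDX, a minimal way to get this auto-save/auto-load behavior is the Preferences API, which on Android persists to app-private internal storage (no SD card required, and nothing is written into the APK). A sketch, where the preference file name, keys, and the sprite/moveCount fields are placeholders:

        // Save state when the app is backgrounded; pause() also runs before the app is destroyed
        @Override
        public void pause() {
            Preferences prefs = Gdx.app.getPreferences("sprite-state");
            prefs.putFloat("x", sprite.getX());
            prefs.putFloat("y", sprite.getY());
            prefs.putInteger("moves", moveCount);
            prefs.flush();   // nothing is persisted until flush() is called
        }

        // Restore state on startup, falling back to the center when no save exists
        @Override
        public void show() {
            Preferences prefs = Gdx.app.getPreferences("sprite-state");
            sprite.setPosition(
                    prefs.getFloat("x", Gdx.graphics.getWidth() / 2f),
                    prefs.getFloat("y", Gdx.graphics.getHeight() / 2f));
            moveCount = prefs.getInteger("moves", 0);
        }

    The same calls work on the desktop backend (where preferences land in a file under the user's home directory), so the save/load logic stays platform-independent.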

    Read the article

  • JavaOne - Java SE Embedded Booth - Freescale Technologies

    - by David Clack
    Hi All, I've been working with Freescale this year on both the Power Architecture (PPC) and ARM solutions for testing Java SE Embedded. We will have a special Freescale demo case, which I had built, in the booth at JavaOne: the Freescale i.MX28, i.MX53 and i.MX6 demos, plus the P1025 Tower Power Architecture demo.

    Freescale i.MX ARM
    Freescale Power Architecture

    This year we became a sponsor of the Freescale Technology Forum shows in San Antonio, TX; Beijing, China; and Bangalore, India. FTF Japan is at the end of October in Tokyo. It's really exciting to see what is being developed in the M2M and IoT space on Freescale technologies; lots of products use Freescale chips with Java that we don't even really know about, like the original Amazon Kindle. If you are registered at JavaOne, you can come over to the Java Embedded @ JavaOne event for $100. Come see us in booth 5605. See you there. Dave

    Read the article

  • Unity 3d (Using Blender) - anime/manga/cel-shaded style characters

    - by David Archer
    I'm making a game using Blender for 3D models and Unity for the game engine. I'm just wondering if anyone knows of any links to pages that give a tutorial on Japanese anime-style 3D modelling, texturing and shading in Blender. I'm actually looking to create a cel-shaded look eventually (read: Okami/Jet Set Radio style), and I'm kind of stuck with the design stuff. I'm not a Blender expert by any means, and I'm still kind of new to the design side of things (I'm a programmer by trade), so please don't vote me down too hard. I've tried googling, but there doesn't seem to be much in the way of what I'm after. The only things I've found really are a plugin for Blender called Freestyle and the ToonShader shading tool. If there are any good tutorials or anything, I'm really happy to sit through them; I just want to learn :) Thanks for any help :)

    Read the article

  • 3D Ball Physics Theory: collision response on ground and against walls?

    - by David
    I'm really struggling to get a strong grasp on how I should handle collision response in a game engine I'm building around a 3D ball physics concept. Think Monkey Ball as an example of the type of gameplay. I am currently using a sphere-to-sphere broad phase, then AABB-to-OBB testing (the final test I am using right now checks whether one of the 8 OBB points crosses the planes of the object it is testing against). This seems to work pretty well, and I am getting back the plane that the object is colliding against (with a point on the plane, the plane's normal, and the exact point of intersection).

    I've tried what feels like dozens of different high-level strategies for handling these collisions, without any real success. I think my biggest problem is understanding how to handle collisions against walls in the x-y axes (left/right, front/back), which I want to have elasticity, and against the ground (z-axis), where I want an elastic reaction if the ball drops down, but then for it to eventually settle and be kept "on the ground" (not go into the ground, but also not continue bouncing).

    For physics modeling and movement, I am using an Euler-based setup, with each object maintaining a position (and a destination position prior to collision detection), a velocity (which is added onto the position to determine the destination position), and an acceleration (which I use to store any player input being applied to the ball, as well as gravity in the z coordinate). Starting from when I detect a collision, what is a good way to approach the response to get the expected behavior in all cases? Thanks in advance to anyone taking the time to assist... I am grateful for any pointers, and happy to post any additional info or code if it is useful.

    UPDATE

    Based on Steve H's and eBusiness' responses below, I have adapted my collision response to something that makes a lot more sense now. It was close to right before, but I didn't have all the right pieces together at the right time! I have one problem left to solve: whatever is causing the floor collision to trigger every frame. Here's the collision response code I have now for the ball; afterwards I'll describe the last bit I'm still struggling to understand.

        // If we are moving in the direction of the plane (against the normal)...
        if (m_velocity.dot(intersection.plane.normal) <= 0.0f)
        {
            float dampeningForce = 1.8f;  // eventually create this value based on mass and acceleration

            // Calculate the projection velocity
            PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal);
            m_velocity -= actingVelocity * dampeningForce;
        }

        // Clamp z-velocity to zero if we are within a certain threshold
        // -- NOTE: this was an experimental idea I had to solve the "jitter" bug I'll describe below
        float diff = 0.2f - abs(m_velocity.z);
        if (diff > 0.0f && diff <= 0.2f)
        {
            m_velocity.z = 0.0f;
        }

        // Take this object to its new destination position based on...
        // -- our pre-collision position + vector to the collision point + our new velocity
        //    after collision * time remaining after the collision to finish the movement
        m_destPosition = m_position + intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt);

    The above snippet runs after a collision is detected on the ball (collider) with a collidee (the floor in this case). With a dampening force of 1.8f, the ball's reflected "upward" velocity will eventually be overcome by gravity, so the ball will essentially be stuck on the floor. THIS is the problem I have now: the collision code runs every frame, since the ball's z-velocity is constantly pushing it into a collision with the floor below it. The ball is not technically stuck (I can still move it around), but the movement is really goofy because the velocity and position keep getting affected adversely by the snippet above. I experimented with clamping the z-velocity to zero when it was "close to zero", but this didn't do what I expected... probably because the very next frame the ball gets a new gravity acceleration applied to its velocity regardless (which I think is good, right?). Collisions with walls are as they used to be and work very well. It's just this last bit of "stickiness" to deal with. The camera also constantly jitters up and down by extremely small fractions when the ball is "at rest". I'll keep playing with it... I like puzzles like this, especially when I think I'm close. Any final ideas on what I could be doing wrong here?

    UPDATE 2

    Good news: I discovered I should be subtracting intersection.diff from m_position (the position prior to the collision). intersection.diff is my calculation of the difference in the vector from position to destPosition, from the intersection point to the position. In this case, adding it was causing my ball to always go "up" just a little bit, causing the jitter. By subtracting it, moving the velocity.z clamp above the dot-product test, and changing that test from <= 0 to < 0, I now have the following:

        // Clamp z-velocity to zero if we are within a certain threshold
        float diff = 0.2f - abs(m_velocity.z);
        if (diff > 0.0f && diff <= 0.2f)
        {
            m_velocity.z = 0.0f;
        }

        // If we are moving in the direction of the plane (against the normal)...
        float dotprod = m_velocity.dot(intersection.plane.normal);
        if (dotprod < 0.0f)
        {
            float dampeningForce = 1.8f;  // eventually create this value based on mass and acceleration?

            // Calculate the projection velocity
            PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal);
            m_velocity -= actingVelocity * dampeningForce;
        }

        // Take this object to its new destination position based on...
        // -- our pre-collision position + vector to the collision point + our new velocity
        //    after collision * time remaining after the collision to finish the movement
        m_destPosition = m_position - intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt);
        UpdateWorldMatrix(m_destWorldMatrix, m_destOBB, m_destPosition, false);

    This is MUCH better. No jitter, and the ball now "rests" on the floor while still bouncing off the floor and walls. The ONLY thing left is that the ball is now virtually "stuck". It can move, but at a much slower rate, likely because the else branch of my dot-product test only lets the ball move at a rate multiplied by tRemaining... I think this is a better solution than I had previously, but still somehow not the right idea. BTW, I'm trying to journal my progress through this problem for anyone else in a similar situation; hopefully it will serve as some help, as many similar posts have for me over the years.

    Read the article

  • Graduate Life in Oracle by Ramakrishna Nalabothula

    - by david.talamelli
    Preparation for the BIG interview: I prepared in both the technical and logical aspects to face the Oracle interview. I had to cover almost all the main areas in technical topics and many types of problems in logical areas. We attended mock interviews and written tests at our college, and browsed websites and communities. Having started such rigorous preparation before Oracle visited the college, it was possible for me to make it onto the list for final selection. I put in a big effort to reach this position and I am very happy to achieve this.

    Why I chose Oracle: Oracle is one of the best technology providers and has a large customer base. I think it is not an easy job to offer services to that many customers, so the company needs young and dynamic people, and I wanted to be one among them. This gave me spirit and led me to walk into Oracle. I am working on different technologies and learning something new in the field. Having many customers is challenging Oracle, and my work is challenging me. I am confident that customers will never lose their faith, either in me or in Oracle.

    Learning at Oracle: The style of learning is good and never resembles a classroom session in a college. It is always fun to learn here. There is no exam to track the performance, and there is enough time to learn things completely. There is no concept of stiff competition. Peers help through KT (Knowledge Transfers), and there are good resources in Oracle to learn from; people are always there to direct you to them. There are lots of opportunities for web learning too!

    My Work Area: My team gives me a great opportunity to offer service across the entire product. There were no situations where I got tense about my work, targets, or deliverables. I work with a global team and my manager is based in the UK. I have a lot of freedom and flexibility; I use the work-from-home option in case of any disturbances in the city or due to personal problems. I have a weekly meeting with my manager and use Instant Messenger for status updates. My manager plans very well to give me enough time to complete tasks, and I have good coordination from the team towards our deadlines. My work has also brought me close to many people across various technology and product teams, and I am glad to have made many friends across Oracle. I am enjoying my time and work here. I cover all the major activities in the team, and I am thankful to everyone from the development and quality assurance teams for having high confidence in me and assigning me such big responsibilities. My primary tasks are maintaining environments that are very unstable at times; this really requires a lot of time and effort to trace the root causes. I am working on, and still learning, all these areas. The happiest thing is that I got chances to travel to the USA and the UK for training and to support a few customer demo projects. I got to explore two countries, and thanks to Oracle's policies I was sponsored to visit the places around them. I am very thankful for all of this, and for the cooperation of my colleagues.

    Fun at Work: Oracle has an employee club that conducts games and events. I have had opportunities to participate in competitions and tournaments, both inside Oracle and inter-corporate, for all games. I thank Oracle for providing me all these opportunities, and I would like to extend my thanks to senior management for their confidence in me. I also thank the Oracle HR recruiting team for selecting me into Oracle and giving me this opportunity to share my experience and feelings.
Ramakrishna Nalabothula

    Read the article

  • Turn your laptop into a wireless Access Point with Windows 7!

    - by David Nudelman
    Windows 7 offers a very cool feature that lets you share any wired or wireless network connection (hotel, cable, 3G, UMTS, EDGE, WiFi, RJ45, Ethernet, etc.) by turning your own laptop into a wireless AP (Access Point) that relays traffic for devices not directly connected to the internet. Just enter these two commands at an elevated command prompt (right-click on CMD.EXE, Run as administrator):

        netsh wlan set hostednetwork mode=allow ssid=YOURFRIENDLYSSID key=SOMEPASSWORD
        netsh wlan start hostednetwork

    At this point, if Internet Connection Sharing (ICS) is set up, anyone can connect to your SoftAP (if they know the password, of course) and the traffic will be sent through whatever adapter you want. You can actually bridge it across an entirely different adapter... or the same one on a different WiFi LAN. A GUI to set this up can be downloaded for free here: http://www.connectify.me/
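    Two related hosted-network subcommands are worth keeping at hand (these are standard netsh wlan commands):

        REM Check the hosted network's state and the number of connected clients
        netsh wlan show hostednetwork

        REM Tear the access point down when you are done
        netsh wlan stop hostednetwork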

    Read the article
