Search Results

Search found 2736 results on 110 pages for 'generating'.


  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use for a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what StubHub achieved with Oracle R Enterprise from the Oracle Advanced Analytics option to Oracle Database, and read more on their story here.

    Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that requires close proximity to where the transactional activity occurs. The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as ensuring zero downtime (a website needs to be up 24x7). This also involves working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life-cycle from data collection, to model building, model assessment, and then outcome assessment and feedback to the model building process again. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff that need to handle the potentially fraudulent transactions.

    As just described, advanced analytics projects can be "complicated" from just a human perspective. The extent to which software can simplify the interactions among users and systems will increase the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value means the difference between a successful project and just a nice research report. By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models become a reality. Meanwhile, data scientists are also able to explore leading-edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.

    Read the article

  • How to implement an email unsubscribe system for a site with many kinds of emails?

    - by Mike Liu
    I'm working on a website that features many different types of emails. Users have accounts, and when logged in they have access to a settings page that they can use to customize what types of emails they receive. However, I'd like to also give users an easy way to unsubscribe directly in the emails they receive. I've looked into List-Unsubscribe headers as well as creating some type of one-click link that would unsubscribe a user from that type of email without requiring login or further action. The latter would probably require me to break convention and make changes to the database in response to a GET on the link. However, am I incorrect in thinking that either of these would require me to generate and permanently store a unique identifier in my database for every email I ever send, really complicating email delivery? Without that, I'm not sure how I would be able to uniquely identify a user and a type of email in order to change their email preferences, and this identifier would need to be stored forever as a user could have an email sitting in their inbox for a long time before they decide to act on it.

    Alternatively, I was considering having a no-login page for managing email preferences. In contrast to the above, where I would need one of these identifiers for each email, this would only need one identifier per user, with no generation or other action required on sending an email.

    All of these raise security issues, and they could potentially be used by people to tamper with others' email preferences. This could be mitigated somewhat by ensuring that the identifier is really difficult to guess. For the once-per-user identifier approach, I was considering generating the identifier by passing a user's ID through some type of encryption algorithm; is this a sound approach? For the per-email identifiers, perhaps I could use a user's ID appended to the time. However, even this would not eliminate the problem entirely, as this would really just be security through obscurity: anyone with the URL could tamper, and in the end the main defense would have to be that most people aren't so bored as to tamper with other people's email preferences. Are there any other alternatives I've missed, or issues or solutions with these that anyone can provide insight on? What are best practices in this area?
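    For illustration, one way to get a per-email unsubscribe link without storing anything per email is to derive a signed token from data you already have (user ID and email type) and verify the signature when the link is clicked. A rough Python sketch of that idea, assuming a server-side secret and hypothetical names; note that a plain HMAC token like this never expires and cannot be revoked without changing the secret:

        import hashlib
        import hmac

        SECRET_KEY = b"replace-with-a-long-random-secret"  # hypothetical server-side secret

        def make_unsubscribe_token(user_id, email_type):
            """Derive a verifiable token; nothing extra is stored per email."""
            message = f"{user_id}:{email_type}".encode("utf-8")
            digest = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
            return f"{user_id}:{email_type}:{digest}"

        def verify_unsubscribe_token(token):
            """Return (user_id, email_type) if the signature checks out, else None."""
            try:
                user_id, email_type, digest = token.rsplit(":", 2)
            except ValueError:
                return None
            message = f"{user_id}:{email_type}".encode("utf-8")
            expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
            return (user_id, email_type) if hmac.compare_digest(digest, expected) else None

        # Usage: embed the token in the unsubscribe link of every outgoing email
        token = make_unsubscribe_token(42, "newsletter")
        link = "https://example.com/unsubscribe?token=" + token

    The same scheme covers the once-per-user variant by signing only the user ID, which avoids handing out a raw (or merely obfuscated) ID that someone could guess or enumerate.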

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation with Input Parameters

    - by Stuart Brierley
    As previously noted in my post on Schema Generation using the Community ODBC Adapter, I ran into a problem when trying to generate a schema to represent a MySQL stored procedure that had input parameters. After a bit of investigation and a few dead ends I managed to figure out a way around this issue - detailed below are both the problem and the solution in case you ever run into this yourself.

    The Problem

    Imagine a stored procedure that is coded as follows in MySQL:

        StuTest(in DStr varchar(80))
        BEGIN
          Declare GRNID int;
          Select grn_id into GRNID from grn_header where distribution_number = DStr;
          Select GRNID;
        END

    This is quite a simple stored procedure but can be used to illustrate the issue with parameters quite nicely. When generating the schema using the Add Generated Items wizard, I tried selecting "Stored Procedure" and then, in the Statement Information window, typing the stored procedure name: StuTest. Pressing Generate then gives the following error: "Incorrect Number of arguments for Procedure StuTest; expected 1, got 0". If you attempt to supply a value for the parameter you end up with a schema that will only ever supply the parameter value that you specify. For example, supplying StuTest('123') will always call the procedure with a parameter value of 123.

    The Solution

    I tried contacting Two Connect about this, but their experience of testing the adapter with MySQL was limited. After looking through the code for the ODBC adapter myself and trying a few things out, I was eventually able to use the ODBC adapter to call a test stored procedure using a two-way send port. In the generate schema wizard, instead of selecting Stored Procedure I had to choose SQL Script instead, detailing the following script:

        Call StuTest(@InputParameter)

    By default this would create a request schema with an attribute called InputParameter, with a SQL type of NVarChar(1). In most cases this is not going to be correct for the stored procedure being called. To change the type from the default that is applied you need to select the "Override default query processing" check box when specifying the script in the wizard. This then opens the BizTalk ODBC Override window, which lets you change the properties of the parameters and also test out the query script. Once I had done this I was then able to generate the correct schema, which included an attribute representing the parameter. By deploying the schema assembly I was then able to try the ODBC adapter out on a two-way send port. When supplied with an appropriate message instance (for the generated request schema) this send port successfully returned the expected response.
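    As an aside, when debugging this kind of issue it can help to confirm that the stored procedure itself behaves as expected outside BizTalk. A minimal Python sketch using the mysql-connector-python driver (connection details are placeholders for your environment):

        import mysql.connector  # pip install mysql-connector-python

        # Placeholder connection details - adjust for your environment
        conn = mysql.connector.connect(host="localhost", user="test",
                                       password="test", database="testdb")
        cursor = conn.cursor()

        # Call StuTest with a distribution number and print the GRN id it selects
        cursor.callproc("StuTest", ["123"])
        for result in cursor.stored_results():
            print(result.fetchall())

        cursor.close()
        conn.close()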

    Read the article

  • Do I have to worry about "error: superfluous RAID member"?

    - by 0xC0000022L
    When running update-grub on the newly installed Ubuntu 12.04 with an older software RAID (md), I get: error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). Generating grub.cfg ... error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). Found linux image: /boot/vmlinuz-3.2.0-24-generic Found initrd image: /boot/initrd.img-3.2.0-24-generic error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). Found linux image: /boot/vmlinuz-3.2.0-23-generic Found initrd image: /boot/initrd.img-3.2.0-23-generic error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). Found memtest86+ image: /boot/memtest86+.bin error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). error: superfluous RAID member (5 found). Found Debian GNU/Linux (5.0.9) on /dev/sdb1 Found Debian GNU/Linux (5.0.9) on /dev/sdc1 done I would be less worried if the message would say warning: ..., but since it says error: ... I'm wondering what the problem is. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md2 : active raid1 sdc1[1] sdb1[0] 48829440 blocks [2/2] [UU] md3 : active raid1 sdc2[1] sdb2[0] 263739008 blocks [2/2] [UU] md1 : active raid5 sdg1[3] sdf1[2] sde1[1] sdh1[0] sdi1[4] sdd1[5](S) 1250274304 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU] unused devices: <none> Do I have to worry or is this harmless? btw: disregard the mentioning of Debian 5.0.9, that was the previously installed system and is going to be overwritten. It's on /dev/md2 actually.

    Read the article

  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud). He paused, then said, "what is the cloud?" I replied, "a bunch of servers connected to the internet." Apparently he had visions of something much more magnificent. Another similar term is "big data." These marketing terms help to quickly convey topics but are oversimplifications that are open to many interpretations. At their core, those terms are shiny packages holding recycled ideas. I see many headlines declaring big data changes everything, but it doesn't. Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented. But there have been a few changes to the landscape that make big data a topic of conversation:

    1. Computing power has caught up to storage volumes. It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away. CPUs are faster, solid-state drives are more plentiful, and new ways to store and search data are available. My iPhone has more power than the computer used in the Apollo mission to the moon.

    2. Unstructured data is everywhere. The Web used to be where retailers published product information, but now users are generating the bulk of the content in the form of comments, videos, and "likes." The variety of information available to retailers is huge, and its meaning is difficult to discern.

    3. Everything is connected. Looking at a report from my router, there are no fewer than 20 active devices on my home network. We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away. Not only is there more data, but it's arriving at higher velocity.

    Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity. We now have more volume, more variety, and more velocity, and different technologies to deal with them. But at the heart, the objectives are still the same: informed decisions, accurate forecasts, and improved optimizations. So don't let the term "big data" throw you off the scent. Retailers still need to execute on the basics. But do take a fresh look at the data that's available and the new technologies to process it. The landscape will continue to change, and agile organizations will always be reevaluating their approaches. You can just add some more weapons to the arsenal.

    Read the article

  • Development-led security vs administration-led security in a software product?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections on a piece of software, though they could very well be managed at an environmental level (i.e., the operating system would take care of it). Where would you say you draw the line, and what elements do you factor into your decision?

    Concrete Examples

    User Management is the OS's responsibility
    Not exactly meant as a security feature, but in a similar case, Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.

    Disabling Web-Form Fields
    A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. And this has now been introduced into the upcoming HTML5 standard. For browsers that do not listen to this attribute, strange hacks* are offered, like generating unique IDs and names for fields to keep them from being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences). In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or the sign that you need a more secure (and dedicated) environment / account to use these applications.

    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.

    Questions

    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?
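    As an aside, the "unique IDs and names" workaround mentioned above usually boils down to renaming sensitive fields on every render and remembering the alias server-side. A rough Python sketch of that hack, with hypothetical names, mainly to show why it pollutes the auto-fill cache rather than actually protecting anything:

        import secrets

        def render_password_field(session):
            # A new field name on every render, so the browser never "recognizes" the field
            alias = "pw_" + secrets.token_hex(8)
            session["password_field_alias"] = alias
            return '<input type="password" name="%s">' % alias

        def read_password(session, post_data):
            # Map the submitted alias back to the real field on the server
            return post_data.get(session["password_field_alias"])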

    Read the article

  • Hardware and Software Working Together - What Does LJE say?

    - by Stephen Slade
    IDG News Service - Oracle CEO Larry Ellison said Oracle will continue to bet on selling high-end custom hardware for its software products, even amidst a growing trend toward roomfuls of cheap, generic servers. "You have to be in the hardware business and the software business, to get the best possible system," he said during a keynote speech at Oracle's OpenWorld conference in Tokyo. "We believe it's the right idea, we believe it's the next generation of computing, we believe all the pieces have to fit together." Ellison, as he has often done in the past, repeatedly referred to Apple as his "favorite example" of such tight integration. He was a close friend of Apple's co-founder Steve Jobs and previously served on Apple's board of directors. He said sales of Oracle's advanced servers were booming and generating around a billion dollars a year in revenue for the company, which has until recent years focused almost exclusively on its software offerings. With the explosion of popular online services and the increasing number of mobile devices that access them, demand is high for databases that can quickly respond to high numbers of relatively simple queries. While Oracle is pitching its expensive, finely-tuned machines to meet this requirement, Internet behemoths like Google, Facebook and Microsoft increasingly rely on armies of low-cost, easily replaceable servers. Ellison emphasized the high specifications of Oracle's servers, which come packed with multiple terabytes of RAM and flash-based storage for speed. Such machines are superior to large server farms, he said, because they require far less electricity and floor space, and are also cost competitive. When asked about whether purchasing such products would lock customers in to expensive hardware from Oracle, he promised that the company's software would always run on "multiple hardware sources." Ellison, who spoke from Kyoto, Japan's ancient capital, was shown live online via webcast. The Oracle founder has a fondness for Japanese architecture and is staying in his large garden residence in the city.

    Source: Ellison: Hardware-software integration key, Apple is best example. Oracle's founder and CEO reaffirmed his commitment to custom hardware for its software products. LINK to Computerworld article, Apr 5, 2012: http://www.computerworld.com/s/article/9225858/Ellison_Hardware_software_integration_key_Apple_is_best_example?source=CTWNLE_nlt_entsoft_2012-04-09&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+computerworld%2Fs%2Ffeed%2Ftopic%2F173+%28Computerworld+Databases+News%29#disqus_thread

    Read the article

  • Salt River Project Identifies US$500,000 in Cost Reduction Opportunities Through Unified IT Portfolio Management

    - by Melissa Centurio Lopes
    Salt River Project (SRP) includes two entities serving the Phoenix area: the Salt River Project Agricultural Improvement and Power District and the Salt River Valley Water Users’ Association. The SRP district operates various power plants and generating stations to provide electricity to nearly 956,000 retail customers. The SRP association maintains an extensive system of reservoirs, wells, and irrigation laterals to deliver nearly 1 million acre-feet of water annually. Salt River Project implemented Oracle’s Primavera Portfolio Management to unify management of its extensive IT portfolio, including essential utility systems, like work and asset management, as well as programming frameworks and development tools. With the system, SRP discovered almost US$500,000 in cost-reduction opportunities by identifying redundant or low use software, including 150 applications that are close to being unsupported. The company retired 10 applications in the last year and upgraded 34 systems. SRP also identified preferred technologies and ensured that more than 90% of applications are based on standard technologies—reducing procurement costs, simplifying maintenance support, and lowering total cost of ownership. Solutions: Provided approximately 70 users in the IT support group with detailed insight into the product lifecycle of each piece of IT infrastructure and software in the entire portfolio Discovered almost US$500,000 in cost reduction opportunities by identifying redundant or low use software that could be eliminated or migrated to alternative solutions Identified approximately 150 applications that are close to being unsupported and prioritized them to begin modernization Click here to view more Oracle Primavera Portfolio Management solutions for SRP. Why Oracle Salt River Project chose Oracle’s Primavera Portfolio Management after evaluating it against four other solutions. “Oracle’s Primavera Portfolio Management offered the most functionality to support our diverse needs,” said Eileen Ahles, IT portfolio manager, Salt River Project. Read the complete customer success story Access a list of all Primavera customer success stories

    Read the article

  • What are the tradeoffs for using 'partial view models'?

    - by Kenny Evitt
    I've become aware of an itch due to some non-DRY code pertaining to view model classes in an (ASP.NET) MVC web application and I'm thinking of scratching my itch by organizing code in various 'partial view model' classes. By partial-view-model, I'm referring to a class like a view model class in an analogous way to how partial views are like views, i.e. a way to encapsulate common info and behavior. To strengthen the 'analogy', and to aid in visually organizing the code in my IDE, I was thinking of naming the partial-view-model classes with a _ prefix, e.g. _ParentItemViewModel. As a slightly more concrete example of why I'm thinking along these lines, imagine that I have a domain-model-entity class ParentItem and the user-friendly descriptive text that identifies these items to users is complex enough that I'd like to encapsulate that code in a method in a _ParentItemViewModel class, for which I can then include an object or a collection of objects of that class in all the view model classes for all the views that need to include a reference to a parent item, e.g. ChildItemViewModel can have a ParentItem property of the _ParentItemViewModel class type, so that in my ChildItemView view, I can use @Model.ParentItem.UserFriendlyDescription as desired, like breadcrumbs, links, etc. Edited 2014-02-06 09:56 -05 As a second example, imagine that I have entity classes SomeKindOfBatch, SomeKindOfBatchDetail, and SomeKindOfBatchDetailEvent, and a view model class and at least one view for each of those entities. Also, the example application covers a lot more than just some-kind-of-batches, so that it wouldn't really be useful or sensible to include info about a specific some-kind-of-batch in all of the project view model classes. But, like the above example, I have some code, say for generating a string for identifying a some-kind-of-batch in a user-friendly way, and I'd like to be able to use that in several views, say as breadcrumb text or text for a link. As a third example, I'll describe another pattern I'm currently using. I have a Contact entity class, but it's a fat class, with dozens of properties, and at least a dozen references to other fat classes. However, a lot of view model classes need properties for referencing a specific contact and most of those need other properties for collections of contacts, e.g. possible contacts to be referenced for some kind of relationship. Most of these view model classes only need a small fraction of all of the available contact info, basically just an ID and some kind of user-friendly description (i.e. a friendly name). It seems to be pretty useful to have a 'partial view model' class for contacts that all of these other view model classes can use. Maybe I'm just misunderstanding 'view model class' – I understand a view model class as always corresponding to a view. But maybe I'm assuming too much.

    Read the article

  • Best practice for marking a bug as resolved in Bugzilla ?

    - by Vincent B.
    I am wondering what is the best way to handle the situation of marking a bug as resolved and providing a version of the component/product in which this fix can be found.

    Context

    For a project I am working on, we are using Bugzilla for issue tracking, and we have the following: A product "A" with a version number like vA.B.C.D. This product "A" has the following components: Component "C1" with a version number like vA.B.C.D, Component "C2" with a version number like vA.B.C.D, Component "C3" with a version number like vA.B.C.D. Internally we keep track of which component versions have been used to generate the product A version vA.B.C.D. Example: Product "A" version v1.0.0.0 has been produced from component "C1" v1.0.0.3, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. And Product "A" version v1.0.1.0 has been produced from component "C1" v1.0.0.4, component "C2" v1.3.0.0 and component "C3" v2.1.3.5. Each component is an SVN repository. The person in charge of generating the product "A" only has access to the different components' tags folders in SVN, and not the trunk of each component repository.

    Problem

    Now the problem is the following: when a bug is found in the product "A", and the bug is related to Component "C1", the version of product "A" is chosen (e.g. v1.0.0.0), and this version allows the developer to know which version of component "C1" has the bug (here it will be v1.0.0.3). A bug report is created. Now let's say that the developer responsible for component "C1" corrects the bug; then, when the bug seems to be fixed and after some tests and validation, the developer generates a new tag for component "C1", with the version v1.0.0.4. At this time, the developer of component "C1" needs to update the bug report, but what is the best thing to do: Mark the bug as resolved/fixed and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component"? Keep the bug as assigned and add a comment saying "This bug has been fixed in tag v1.0.0.4 of the C1 component; update this bug's status to resolved for the next version of the product that will be generated with the newest version (v1.0.0.4 of C1)"? Or another possible way to deal with this problem? Right now the problem is that when a product component CX is fixed, it is not certain in which future version of the product A it will be included, so for me it is not possible to say in which version of the product it will be solved, but it is possible to say in which version of the component CX it has been solved. So when do we need to mark a bug as solved: when the product A version includes the fixed version of CX, or only when the CX component has been fixed? Thanks for your personal feedback and ideas about this!

    Read the article

  • Is it a bug or a task when something doesn't work, yet, in development process

    - by Patkos Csaba
    We usually have this dilemma in our team. Sometimes, in order to implement a task or a story, we find out that the system must be in a specific state. For example, a specific system configuration has to be made beforehand. The task / story can be completed and it is working as specified on it with the proper configuration in place. Note that the configuration is not directly related to the task. Next, we have to create a new ... ??? ... something for the process of generating that configuration file. This is where the problems appear. Some say that it is a bug, others say it is a task or an extra feature. So, where is the limit between bugs and tasks in the development phase? Should we even consider something a bug if all the tasks are working as stated in their definitions? Can a thing be considered a bug because one compares it to the current (unstable) state of the system?

    Short example: A feature requires configuring a communication service for a specific operation. In the process of the implementation the team discovers that the service requires the hostnames of the peers to be resolvable to an IP address. The team adds the hostnames to the DNS server (or hosts files) and continues implementing the required feature. After the initial feature is working, a question is raised: should the sysadmin configure the DNS or hosts file, or should our application do it automatically? An automatic solution is possible. So a decision is made to implement it. ... here start the discussions ... is this a bug or an extra feature / task?

    PS: I know that I mixed feature / task / story in the question. It is intentional. I am interested in separating bugs from the rest. It doesn't matter what the rest means in a particular case.

    Read the article

  • How do I apply skeletal animation from a .x (DirectX) file?

    - by Byte56
    Using the .x format to export a model from Blender, I can load a mesh, armature and animation. I have no problems generating the mesh and viewing models in game. Additionally, I have animations and the armature properly loaded into appropriate data structures. My problem is properly applying the animation to the models. I have the framework for applying the models and the code for selecting animations and stepping through frames. From what I understand, the AnimationKeys inside the AnimationSet supply the transformations to transform the bind pose to the pose in the animated frame. As a small example:

        Animation {
          {Armature_001_Bone}
          AnimationKey {
            2;   //Position
            121; //number of frames
            0;3; 0.000000, 0.000000, 0.000000;;,
            1;3; 0.000000, 0.000000, 0.005524;;,
            2;3; 0.000000, 0.000000, 0.022217;;,
            ...
          }
          AnimationKey {
            0;   //Quaternion Rotation
            121;
            0;4; -0.707107, 0.707107, 0.000000, 0.000000;;,
            1;4; -0.697332, 0.697332, 0.015710, 0.015710;;,
            2;4; -0.684805, 0.684805, 0.035442, 0.035442;;,
            ...
          }
          AnimationKey {
            1;   //Scale
            121;
            0;3; 1.000000, 1.000000, 1.000000;;,
            1;3; 1.000000, 1.000000, 1.000000;;,
            2;3; 1.000000, 1.000000, 1.000000;;,
            ...
          }
        }

    So, to apply frame 2, I would take the position, rotation and scale from frame 2, create a transformation matrix (call it Transform_A) from them, and apply that matrix to the vertices controlled by Armature_001_Bone at their weights. So I'd stuff Transform_A into my shader and transform the vertex. Something like:

        vertexPos = vertexPos * bones[ int(bfs_BoneIndices.x) ] * bfs_BoneWeights.x;

    Where bfs_BoneIndices and bfs_BoneWeights are values specific to the current vertex. When loading in the mesh vertices, I transform them by the rootTransform and the meshTransform. This ensures they're oriented and scaled correctly for viewing the bind pose. The problem is when I create that transformation matrix (using the position, rotation and scale from the animation), it doesn't properly transform the vertex. There's likely more to it than just using the animation data. I also tried applying the bone transform hierarchies, still no dice. Basically I end up with some twisted models. It should also be noted that I'm working in OpenGL, so any matrix transposes that might need to be applied should be considered. What data do I need and how do I combine it for applying .x animations to models?
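    For reference, the pieces that usually have to be folded together here are: each AnimationKey frame is the bone's local transform relative to its parent (composed as translation * rotation * scale), the locals are chained down the skeleton hierarchy to get a world matrix per bone, and the matrix actually sent to the shader is that world matrix multiplied by the bone's inverse bind-pose ("offset") matrix from the SkinWeights data — the frame matrix on its own will twist the mesh. A rough Python/NumPy sketch of that bookkeeping (the layout of bones and frame is hypothetical; the same math carries over to C++, provided the row/column-vector convention matches the vertexPos * matrix order used in the shader):

        import numpy as np

        def quat_to_mat(w, x, y, z):
            """4x4 rotation matrix from a unit quaternion (.x keys are typically stored w, x, y, z)."""
            return np.array([
                [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w),     0],
                [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w),     0],
                [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y), 0],
                [0, 0, 0, 1]], dtype=np.float32)

        def local_transform(position, quaternion, scale):
            """Compose one frame's keys into the bone's local (parent-relative) matrix."""
            t = np.identity(4, dtype=np.float32)
            t[:3, 3] = position                              # translation
            r = quat_to_mat(*quaternion)                     # rotation
            s = np.diag([scale[0], scale[1], scale[2], 1.0]).astype(np.float32)
            return t @ r @ s                                 # scale, then rotate, then translate

        def skinning_matrices(bones, frame):
            """bones: list of {'parent': index or None, 'inverse_bind': 4x4}, parents listed first.
            frame: per-bone (position, quaternion, scale) tuples for the current frame."""
            world = [None] * len(bones)
            skin = [None] * len(bones)
            for i, bone in enumerate(bones):
                local = local_transform(*frame[i])
                world[i] = local if bone["parent"] is None else world[bone["parent"]] @ local
                # Final matrix for the shader: bind-pose vertex -> animated position
                skin[i] = world[i] @ bone["inverse_bind"]
            return skin

    This is a sketch under those assumptions, not a drop-in fix; in particular, verify the quaternion component order and axis conventions produced by the Blender exporter before trusting the rotation keys.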

    Read the article

  • Map and fill texture using PBO (OpenGL 3.3)

    - by NtscCobalt
    I'm learning OpenGL 3.3 trying to do the following (as it is done in D3D)... Create Texture of Width, Height, Pixel Format Map texture memory Loop write pixels Unmap texture memory Set Texture Render Right now though it renders as if the entire texture is black. I can't find a reliable source for information on how to do this though. Almost every tutorial I've found just uses glTexSubImage2D and passes a pointer to memory. Here is basically what my code does... (In this case it is generating an 1-byte Alpha Only texture but it is rendering it as the red channel for debugging) GLuint pixelBufferID; glGenBuffers(1, &pixelBufferID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); glBufferData(GL_PIXEL_UNPACK_BUFFER, 512 * 512 * 1, nullptr, GL_STREAM_DRAW); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); GLuint textureID; glGenTextures(1, &textureID); glBindTexture(GL_TEXTURE_2D, textureID); glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr); glBindTexture(GL_TEXTURE_2D, 0); glBindTexture(GL_TEXTURE_2D, textureID); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBufferID); void *Memory = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY); // Memory copied here, I know this is valid because it is the same loop as in my working D3D version glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER); glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); And then here is the render loop. // This chunk left in for completeness glUseProgram(glProgramId); glBindVertexArray(glVertexArrayId); glBindBuffer(GL_ARRAY_BUFFER, glVertexBufferId); glEnableVertexAttribArray(0); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 20, 0); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 20, 12); GLuint transformLocationID = glGetUniformLocation(3, 'transform'); glUniformMatrix4fv(transformLocationID , 1, true, somematrix) // Not sure if this is all I need to do glBindTexture(GL_TEXTURE_2D, pTex->glTextureId); GLuint textureLocationID = glGetUniformLocation(glProgramId, "texture"); glUniform1i(textureLocationID, 0); glDrawArrays(GL_TRIANGLES, Offset*3, Triangles*3); Vertex Shader #version 330 core in vec3 Position; in vec2 TexCoords; out vec2 TexOut; uniform mat4 transform; void main() { TexOut = TexCoords; gl_Position = vec4(Position, 1.0) * transform; } Pixel Shader #version 330 core uniform sampler2D texture; in vec2 TexCoords; out vec4 fragColor; void main() { // Output color fragColor.r = texture2D(texture, TexCoords).r; fragColor.g = 0.0f; fragColor.b = 0.0f; fragColor.a = 1.0; }
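    One step that does not appear in the code above is the copy from the pixel buffer into the texture: mapping, writing, and unmapping a GL_PIXEL_UNPACK_BUFFER only fills the buffer object, and the texture is not updated until glTexSubImage2D (or glTexImage2D) is called while that PBO is bound, with the data argument interpreted as a byte offset into the buffer rather than a client pointer. A sketch of that unpack step, shown with PyOpenGL for brevity (the calls map one-to-one to C):

        import ctypes
        from OpenGL.GL import (GL_PIXEL_UNPACK_BUFFER, GL_RED, GL_TEXTURE_2D,
                               GL_UNSIGNED_BYTE, glBindBuffer, glBindTexture,
                               glTexSubImage2D)

        def copy_pbo_to_texture(texture_id, pbo_id, width=512, height=512):
            """Transfer previously written PBO contents into the texture."""
            glBindTexture(GL_TEXTURE_2D, texture_id)
            glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_id)
            # With a PBO bound, the last argument is an offset into the buffer (0 here).
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                            GL_RED, GL_UNSIGNED_BYTE, ctypes.c_void_p(0))
            glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0)

    It is also worth checking the texture's minification filter: with no mipmaps and the default GL_NEAREST_MIPMAP_LINEAR setting, the texture is incomplete and will sample as black regardless of its contents.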

    Read the article

  • Cool Tools You Can Use: Validation Templates for PeopleSoft Contracts Processes

    - by Mark Rosenberg
    This is the first in a series of postings we’ll be making under the heading of Cool Tools You Can Use. Our PeopleSoft product management team identified the need for this series after reflecting on the many conversations we have each year with our PeopleSoft community members. During these conversations, we were discovering that customers and implementation partners were often not aware that solutions exist to the problems they were trying to address and that the solutions were readily available at no additional charge. Thus, the Cool Tools You Can Use series will describe the business challenge we’ve heard, the PeopleSoft solution to the challenge, and how you can learn more about the solution so that everyone can be sure to make full use of what PeopleSoft applications have to offer. The first cool tool we’ll look at is the Validation Template for PeopleSoft Contracts Process Requests, which was first released in December 2013 as part of PeopleSoft Contracts 9.2 Update Image 4. The business issue our customers highlighted to us is the need to tightly control but easily configure and manage the scope of data that any user can process when initiating a process. Control of each user’s span of impact is essential to reducing billing reconciliation issues, passing span of authority audits, and reducing (or even eliminating) the frequency of unexpected process results.  Setting Up the Validation Template for a PeopleSoft Contracts Process With the validation template, organizations can easily and quickly ensure the software restricts the scope of transactions a user can affect and gives organizations the confidence to know that business processes are being governed effectively. Additionally, this control of PeopleSoft Contracts process requests can be applied and easily maintained and adjusted from a web browser thereby enabling analysts to administer the rules without having to engage software developers to customize the software. During the field validation template setup, an analyst specifies the combinations of fields that must contain values when a user tries to setup a run control and initiate a PeopleSoft Contracts process from a process request page. For example, for the Process Limits component, an organization could require that users enter a valid combination of values for the business unit, contract, and contract type fields or a value in the contract administrator field. Until the user enters a valid combination of entries on the process request page, he cannot launch the process. With the validation template activated for process request pages, organizations can be confident that PeopleSoft Contracts users will not accidentally begin generating invoices or triggering other revenue management processes for transactions beyond their scope of authority. To learn more about the Validation Template, please review the Defining Validation Templates section of the PeopleSoft Contracts PeopleBooks. 

    Read the article

  • SOA Community Newsletter September 2012

    - by JuergenKress
    Dear SOA partner community member, Are you ready for Oracle Open World 2012? If you are planning to attend, make sure that you prepare your trip to San Francisco. If you could not make it, watch the keynotes live on-demand. You can also plan and decide to visit the SOA, Cloud and Service Technology Symposium 2012 and meet Tim Hall and Demed Lher from our product management team in London. As an Oracle partner you will get a 50% discount on the conference pass; please use the code DJMXZ370 to avail of your discount. The BPM Solution Catalogue is now live, so make sure you use the process examples and contribute your processes. SOA Proactive support is the best resource to support your SOA implementations. To administer your SOA systems, Enterprise Manager Cloud Control 12c is the best tool; you can now attend the free on-demand training. EM12c, Real User Experience Insight 12R1 gives you all the details, so check out our new demo. The BPM11g demo for Oracle E-Business Suite has become available. A wonderful SOA demo case is the Fusion Order Demo; Antony Reynolds posted an article on how to update it on SOA Suite PS5. If you do use Coherence, e.g. for SOA Suite, check out the extension from our partner CloudTran. In this edition you will also find articles from: Automatically Disable Proxy Service to avoid overloading OSB by Jian Liang & Storing SCA Metadata in the Oracle Metadata Services Repository by Nicolás Fonnegra Martinez and Markus Lohn & Exploring MDS Explorer by Mark Nelson & Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL by Rajesh Raheja & Oracle Service Bus duplicate message check using Coherence by Jan van Zoggel & Installing Oracle SOA Suite10g on Oracle Enterprise Linux by Lonneke Dikmans & Generating an EJB SDO Service Interface for Oracle SOA Suite by Edwin Biemond. Jürgen Kress Oracle SOA & BPM Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/soanewsSeptember2012 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Samba fails to install

    - by jschoen
    I am running XBMC, which is built around Ubuntu 10.04. It does not come with samba pre-installed, and I need to share some media with a couple other boxes. I followed the Think Geek directions found here. I had it all set up a couple days ago, and thought I was in the clear. I rebooted this evening and when it came back up Samba was not started. I determined this by trying to access the samba shares, and it would return that there was an error connecting to the server. I can ssh into it, so I know it is connected. In my infinite wisdom, I figured I just messed something up and would just uninstall and reinstall. So I did: sudo apt-get purge samba and sudo apt-get purge smbfs. Then I tried to follow the tutorial above again. What I get after running sudo apt-get install samba smbfs is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages: openbsd-inetd inet-superserver smbldap-tools ldb-tools ufw smbclient
        The following NEW packages will be installed: samba smbfs
        0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
        Need to get 0B/8,131kB of archives.
        After this operation, 22.6MB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package samba.
        (Reading database ... 57098 files and directories currently installed.)
        Unpacking samba (from .../samba_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb)...
        Selecting previously deselected package smbfs.
        Unpacking smbfs (from .../smbfs_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb) ...
        Processing triggers for ureadahead ...
        Setting up samba (2:3.4.7~dfsg-1ubuntu3.2) ...
        Generating /etc/default/samba...
        update-alternatives: using /usr/bin/smbstatus.samba3 to provide /usr/bin/smbstatus (smbstatus) in auto mode.
        smbd start/running, process 2963
        **start: Job failed to start**
        Setting up smbfs (2:3.4.7~dfsg-1ubuntu3.2) ...

    The bold is my own emphasis. So I am not sure what I messed up here, or how to get back to where it was. Though I am pretty sure I made it worse than it is. I found where the logs are located, /var/logs, and found this line that seems to be the culprit:

        Jan 29 11:59:34 XBMCLive smbd[2806]: error opening config file

    So it seems to not create the configuration files. Is there a way to get samba to try to recreate them again?

    Read the article

  • Oracle went back to school !....

    - by Cristina Ciocoiu
    I am Georgiana, Contracts Manager for Oracle University and Advanced Customer Services in Romania. I started working for Oracle 4 years ago as a Contracts Specialist. Two years ago I became a manager of a team of 9 Contracts Specialists. On a sunny day in March some members of my team visited the students of the Academy of Economic Studies, accompanied by Recruitment colleagues. This was part of a new initiative to raise awareness of career opportunities at Oracle. We spent approximately 2 hours illustrating and explaining different aspects of the day-to-day activities of an Oracle Contracts Specialist to the future graduates of the Academy.

    Role Play
    Since a role play is worth 1000 job descriptions, the audience witnessed an entertaining performance on the contracting process from the phase of the negotiation with the customer to actual signing of the contract. The main focus was on the role of the Contracts Specialist liaising with all the groups involved and ensuring that the contract is compliant with Oracle policies while generating the expected revenue. However, the team took other roles as well, i.e. Sales Representative, Customer, Business Approver and Lawyer, to demonstrate their role in the process. As each of these roles only has a small slice of the big pie, it is vital to understand what happens before and after you come on stage as a Contracts Specialist.

    Contracts Specialist
    Being a Contracts Specialist goes beyond simply knowing what policies apply; it means understanding Oracle’s core business model, understanding customers’ requests and addressing them in the most effective way. The job also involves connecting smaller teams that are often geographically dispersed across multiple regions so that they become a bigger, stronger and more successful team. You are the expert in this key position who can facilitate the closing of a deal or stop it from happening if the risk is too high. The role play provided insights on both.

    Why I love this job
    Events of this kind are sometimes just as useful for the “recruiters” as for the “recruits”. For me, as a presenter, it was an excellent opportunity to think about the many reasons why I love what I do in the Contracts department every day and to share this with the students. I wanted to explain to the audience, who are still considering education and career possibilities, that what we do in Contracts DOES make a difference. You have the power to achieve targets that you did not think reachable before. Working in the dynamic Oracle environment shapes you as a person and there is a lot to take away from this experience. Looking back to my years in the Academy (I graduated from the Academy myself), I wish I could have listened to more people talking about their great jobs and about how I could get there. If those were Oracle people I might have been writing this article sooner. If you are interested in joining the Contracts team please click here for more information or contact lavinia.protopopescu-AT-oracle-DOT-com. You can find all openings in Romania via http://campus.oracle.com

    Read the article

  • Stuff I learned at Innovate 2011

    - by David Dorf
    After returning from the NRF Innovate 2011 conference, I picked up a few nuggets I thought I'd share here. These thoughts are a bit random, but I hope they're useful nonetheless.

    Kevin Kelly opened the conference with six verbs that represent the future. They were Screening, Interacting, Sharing, Accessing, Flowing, and Generating. It struck me that these are all ways in which we merge the digital and physical worlds. The internet of things continues to gain momentum.

    Some buzzwords: deal economy, subscription commerce, discovery (instead of search), curation. That last one, curation, came up over and over. Retailers, especially those in fashion, are finding value in helping their customers organize and present their own collections. Social media has made sharing such collections easy, and mobile lets them take those ideas into the stores. Mannequins are becoming less relevant.

    I heard from both HauteLook and Gilt Groupe (flash sale retailers) that a large percentage of their visits come from mobile devices, and most of those are iOS devices. I find it interesting that even though Android has passed iPhone in units shipped (and will eventually pass iOS as a whole), it's still the Apple crowd that leads the way.

    RadioShack mentioned their Holiday Heroes campaign was very successful. They asked their Foursquare users to check in at a gym, coffee shop, and transportation hub as part of being a hero. For this feat, customers were awarded a special badge that was worth 20% off at their next store visit. They claim a 3.5x increase in ticket size vs. regular check-in customers, and a 5x increase vs. those that don't check in at all. I also learned of RadioShack's #28 campaign, which is apparently one of the largest Twitter trends ever. Their partnership with LIVESTRONG has gotten them followers, impressions, and credit for supporting the fight against cancer.

    The guys at Invodo showed the importance of video to e-commerce. They gave compelling examples of how video can show customers the value of products better than just words.

    The highlight of the show was Guy Kawasaki's talk on innovation, which was not only informative but also peppered with humor and personality. Back in the early days of the internet boom, Guy turned down the CEO position at Yahoo! because the commute was too long. By his calculation, that was a $2B mistake.

    There are other good accounts of the conference at the NRF Blog.

    Read the article

  • TalkTalk Business Succeeds with Engaging Conversations on an Eloqua platform

    - by Richard Lefebvre
    “Everybody from online, CRM and data, channel management, communications, and content had to pull together to deliver one campaign and Eloqua was the glue that held it all together.” says Paul Higgins, Marketing Director, TalkTalk Business (UK, telco) in this 2'23 video. Challenges Nurturing multiple sales channels with very diverse customers. Generating leads qualified to a very high standard. Engaging a fatigued and apathetic target audience. Positioning the brand as industry thought leaders. Solutions “What’s Your Business Grade?” campaign Eloqua automated email nurture strategy Eloqua Partner Network – Stein IAS Results ROI of 20:1. 40% uplift in sales opportunities in the smaller end with a 25% reduction in costs. 20–25% increase in sales qualified leads for mid-market, corporate, and enterprise customer sets. Open rate highs of 61.84% and click to open highs of 15.97%.

    Read the article

  • What Precalculus knowledge is required before learning Discrete Math Computer Science topics?

    - by Ein Doofus
    Below I've listed the chapters from a Precalculus book as well as the author-recommended Computer Science chapters from a Discrete Mathematics book. Although these chapters are from two specific books on these subjects, I believe the topics are generally the same between any Precalc or Discrete Math book. What Precalculus topics should one know before starting these Discrete Math Computer Science topics?

    Discrete Mathematics CS Chapters: 1.1 Propositional Logic 1.2 Propositional Equivalences 1.3 Predicates and Quantifiers 1.4 Nested Quantifiers 1.5 Rules of Inference 1.6 Introduction to Proofs 1.7 Proof Methods and Strategy 2.1 Sets 2.2 Set Operations 2.3 Functions 2.4 Sequences and Summations 3.1 Algorithms 3.2 The Growth of Functions 3.3 Complexity of Algorithms 3.4 The Integers and Division 3.5 Primes and Greatest Common Divisors 3.6 Integers and Algorithms 3.8 Matrices 4.1 Mathematical Induction 4.2 Strong Induction and Well-Ordering 4.3 Recursive Definitions and Structural Induction 4.4 Recursive Algorithms 4.5 Program Correctness 5.1 The Basics of Counting 5.2 The Pigeonhole Principle 5.3 Permutations and Combinations 5.6 Generating Permutations and Combinations 6.1 An Introduction to Discrete Probability 6.4 Expected Value and Variance 7.1 Recurrence Relations 7.3 Divide-and-Conquer Algorithms and Recurrence Relations 7.5 Inclusion-Exclusion 8.1 Relations and Their Properties 8.2 n-ary Relations and Their Applications 8.3 Representing Relations 8.5 Equivalence Relations 9.1 Graphs and Graph Models 9.2 Graph Terminology and Special Types of Graphs 9.3 Representing Graphs and Graph Isomorphism 9.4 Connectivity 9.5 Euler and Hamilton Paths 10.1 Introduction to Trees 10.2 Application of Trees 10.3 Tree Traversal 11.1 Boolean Functions 11.2 Representing Boolean Functions 11.3 Logic Gates 11.4 Minimization of Circuits 12.1 Language and Grammars 12.2 Finite-State Machines with Output 12.3 Finite-State Machines with No Output 12.4 Language Recognition 12.5 Turing Machines

    Precalculus Chapters: R.1 The Real-Number System R.2 Integer Exponents, Scientific Notation, and Order of Operations R.3 Addition, Subtraction, and Multiplication of Polynomials R.4 Factoring R.5 Rational Expressions R.6 Radical Notation and Rational Exponents R.7 The Basics of Equation Solving 1.1 Functions, Graphs, Graphers 1.2 Linear Functions, Slope, and Applications 1.3 Modeling: Data Analysis, Curve Fitting, and Linear Regression 1.4 More on Functions 1.5 Symmetry and Transformations 1.6 Variation and Applications 1.7 Distance, Midpoints, and Circles 2.1 Zeros of Linear Functions and Models 2.2 The Complex Numbers 2.3 Zeros of Quadratic Functions and Models 2.4 Analyzing Graphs of Quadratic Functions 2.5 Modeling: Data Analysis, Curve Fitting, and Quadratic Regression 2.6 Zeros and More Equation Solving 2.7 Solving Inequalities 3.1 Polynomial Functions and Modeling 3.2 Polynomial Division; The Remainder and Factor Theorems 3.3 Theorems about Zeros of Polynomial Functions 3.4 Rational Functions 3.5 Polynomial and Rational Inequalities 4.1 Composite and Inverse Functions 4.2 Exponential Functions and Graphs 4.3 Logarithmic Functions and Graphs 4.4 Properties of Logarithmic Functions 4.5 Solving Exponential and Logarithmic Equations 4.6 Applications and Models: Growth and Decay 5.1 Systems of Equations in Two Variables 5.2 Systems of Equations in Three Variables 5.3 Matrices and Systems of Equations 5.4 Matrix Operations 5.5 Inverses of Matrices 5.6 Systems of Inequalities and Linear Programming 5.7 Partial Fractions 6.1 The Parabola 6.2 The Circle and Ellipse 6.3 The Hyperbola 6.4 Nonlinear Systems of Equations

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file has changed between PS4 and PS5. In PS4, the configuration file was XML based and in PS5, it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set. PS4 - Configuration File The sample configuration file for PS4 is shown below. The configuration file contains information about the following items: Directory for incoming and outgoing files for the host running SOA Suite Healthcare Polling Interval for the directory External Endpoint Logical Names External Endpoint Server Host Name and Ports Message throughput to be simulated for generating outbound messages Documents to be handled by different endpoints A copy of this file can be downloaded from here. PS5 - Configuration File The corresponding sample configuration file for PS5 is shown below. The configuration file contains similar information about the sample scenario but is not in XML format. It has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here. Simulator Configuration Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME. The following extract is taken from a working sample shell script to set the environment: Also, as a part of setting the environment, template jndi.properties and logging.properties can be generated by using the following ant command: ant -f ant-b2bsimulator-util.xml b2bsimulator-prop Sample jndi.properties and logging.properties are shown below and can be modified, as needed. The jndi.properties contains information about connectivity to the local Weblogic Managed Server instance and the logging.properties file controls the amount of logging that can be generated from the running simulator process. Simulator Usage - Start and Stop The command syntax to launch the simulator via ant is the same in PS4 and PS5. Only the appropriate configuration file has to be supplied as the command-line argument, for example: ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml" This will start the simulator and will keep running to provide an active external endpoint for SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example: ant -f ant-b2bsimulator-util.xml b2bsimulatorstop

    Read the article

  • Another Custom Property Locator: a Library of Books

    - by Cindy McMullen
    Introduction
    The previous post gave an introduction to custom property locators and showed how to create one using JDeveloper.  This post continues on the custom locator theme, with a slightly more complex locator: a library of books.  It demonstrates using the DAO pattern to delegate data access from the Locator, which is likely how many actual backing stores will integrate with the Locator.  You can imagine that, rather than a library of books, the data store might be a user database of sorts; the same sort of pattern would apply. This post uses the BookLocator example originally shown in the WebCenter documentation, but it has been updated: the source code reflects the final Property APIs, it includes the steps for generating the namespace and property definition files via JDeveloper, and it details usage of the PropertyService APIs.

    Getting Started
    If you're new to JDeveloper, you might want to check out this tutorial.  There is also the "Jump-Start to using Personalization" blog post that you might find useful.  Otherwise, if you're already familiar with both, you can skip those tutorials and jump right in. Download the BookLocator.zip file (which has been updated from the original post) and unzip it to a new directory.  Start JDeveloper, navigate to the BookLocator.jws file, and open it.  It should look something like this: The Properties Namespace file contains the property definitions and property set definitions you define.  It is explained in more detail in the Namespace documentation.  Although this example doesn't show it, the property set definitions have the ability to reference multiple locators per property.  This can be done by right-clicking on the 'Locator Info' box.  Configure the contents of the Locator Map by editing locators and mapping them to available property names in the property set definition.

    Compiling, deploying, and running your locator
    The rest of the steps in this tutorial basically follow those in the previous blog on custom locators, and won't be repeated here.  A scenario to invoke your locator is included with the sample app: see BookProperties.scenarios_diagram above.

    Summary
    This post demonstrates a simple library of books accessed by the BookPropertyLocator via the DAO layer.  This is a useful pattern for more realistic property retrievals, such as a backing user store.  It also points out the possibility of retrieving properties from multiple locators, which would be quite handy for retrieving user attributes from multiple sources.

    Read the article

  • Syncing client and server CRUD operations using json and php

    - by Justin
    I'm working on some code to sync the state of models between client (a JavaScript application) and server. Often I end up writing redundant code to track the client and server objects so I can map the client-supplied data to the server models. Below is some code I am thinking about implementing to help. What I don't like about the code below is that this method won't handle nested relationships very well; I would have to create multiple object trackers. One workaround is, for each server model after creating or loading, to simply do $model->clientId = $clientId; IMO this is a nasty hack and I want to avoid it. Adding a setClientId method to all my model objects would be another way to make it less hacky, but this seems like overkill to me. Really, clientIds are only useful for inserting/updating data in some scenarios. I could go with a decorator pattern, but auto-generating a proxy class seems a bit involved. I could use a generic proxy class that uses a __call function to allow the original object data to be accessed, but this seems wrong too. Any thoughts or comments?

    $clientData = '[{"name": "Bob", "action": "update", "id": 1, "clientId": 200},
                    {"name": "Susan", "action": "create", "clientId": 131}]';
    $jsonObjs = json_decode($clientData);

    $objectTracker = new ObjectTracker();
    $objectTracker->trackClientObjs($jsonObjs);

    $query = $this->em->createQuery("SELECT x FROM Application_Model_User x WHERE x.id IN (:ids)");
    $query->setParameter("ids", $objectTracker->getClientSpecifiedServerIds());
    $models = $query->getResult();

    // Apply client data to server models
    foreach ($models as $model) {
        $clientModel = $objectTracker->getClientJsonObj($model->getId());
        // ...
    }

    // Create new models and persist
    foreach ($objectTracker->getNewClientObjs() as $newClientObj) {
        $model = new Application_Model_User();
        // ...
        $em->persist($model);
        $objectTracker->trackServerObj($model);
    }
    $em->flush();

    $resourceResponse = $objectTracker->createResourceResponse();
    // Id mappings will be an associative array representing server id resources with the client side id.
    // This method doesn't seem too flexible if we want to return additional data with each resource...
    // Would have to modify the returned data structure; seems like tight coupling...
    // Example return value:
    // [{clientId: 200, id: 1}, {clientId: 131, id: 33}]

    Read the article

  • What is the best way to implement collision detection using Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game where the track is generated from a curve. As said above, the track is generated, but not infinite. The track for one level could fit in memory with no problem and will contain a reasonably small number of triangles. For collisions, I would like to use the Bullet physics engine, and I want to know the best way to handle collisions with the track efficiently. NOTE: The track will be stored as a static rigid body (mass = 0). The player will be represented by a sphere shape for collisions. Here are some possibilities I have in mind:

    1. Create one rigid body, then put all triangles of the track (except non-collidable stuff) into it. Result: 1 body with many triangles (e.g., 30000 triangles).
    2. Split the track into several sections (e.g., 10 sections). Then, for each section, create a rigid body and put the corresponding triangles in it. Result: a small number of bodies, each with a relatively small number of triangles (e.g., 1500 triangles per section).
    3. Split the track into many sub-sections (e.g., 1200 sections), where one subsection = a very small step when generating the curve. Again, for each sub-section, create a body and put its triangles in it. Result: many bodies with a very small number of triangles each (e.g., 20 triangles). Advantage: it would be possible to attach "extra data" to each of the subsections that could be used when handling collisions.
    4. Same as 2, but only put sections N and N+1 in the physics engine (where N = the current section the player is in). When the player reaches section N+1, unload section N and load section N+2, and so on... Issue: harder to implement, and there are problems if the player suddenly "jumps" from one section to another (e.g., the player flies away from section N and falls onto section N+4 that was underneath: no collision is handled and the player will fall into the void).
    5. Same as 4, but with many sub-sections. Issues: since subsections are very small, new bodies would constantly be added to and removed from the physics engine at runtime. The chances of the player accidentally skipping some sections and falling into the void are higher than in 4.
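
    Purely as an illustrative sketch of option 1 (the helper name, the world pointer, and the trackTriangleVertices container below are assumptions, not from the original post), building the static track body with Bullet's standard triangle-mesh classes would look roughly like this:

    #include <btBulletDynamicsCommon.h>
    #include <vector>

    // Hypothetical triangle soup produced by the curve-based track generator:
    // three consecutive vertices per triangle.
    extern std::vector<btVector3> trackTriangleVertices;

    btRigidBody* createStaticTrackBody(btDiscreteDynamicsWorld* world)
    {
        // Fill a btTriangleMesh with the track triangles.
        // Note: the shape does not take ownership, so the mesh must stay alive
        // as long as the shape is in use.
        btTriangleMesh* mesh = new btTriangleMesh();
        for (size_t i = 0; i + 2 < trackTriangleVertices.size(); i += 3) {
            mesh->addTriangle(trackTriangleVertices[i],
                              trackTriangleVertices[i + 1],
                              trackTriangleVertices[i + 2]);
        }

        // Static triangle-mesh shape; Bullet builds an internal BVH over the
        // triangles, so even one large body is culled hierarchically rather
        // than tested triangle-by-triangle against the player sphere.
        btBvhTriangleMeshShape* shape =
            new btBvhTriangleMeshShape(mesh, true /* useQuantizedAabbCompression */);

        // Mass 0 => static body, as stated above; no local inertia needed.
        btDefaultMotionState* motionState =
            new btDefaultMotionState(btTransform::getIdentity());
        btRigidBody::btRigidBodyConstructionInfo info(0.0f, motionState, shape,
                                                      btVector3(0, 0, 0));
        btRigidBody* body = new btRigidBody(info);

        world->addRigidBody(body);
        return body;
    }

    For options 2-5, the same kind of helper would simply be called once per section, with the resulting bodies added or removed via world->addRigidBody() / world->removeRigidBody() as sections are loaded and unloaded.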

    Read the article
