Search Results

Search found 3262 results on 131 pages for 'david clarke'.


  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts. There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as you go along. Many SQL developers I've encountered will store these in a folder, prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment. The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed. Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.
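
    As a rough illustration of that first approach, the "batch file that ensures the scripts are run in the defined order" can be as small as the sketch below. It is shown in Java/JDBC purely for illustration; the folder name, the connection string, and the assumption that each .sql file holds a single batch (no GO separators) are placeholders rather than anything prescribed by SQL Compare or SQL Source Control.

        import java.nio.file.*;
        import java.sql.*;
        import java.util.*;
        import java.util.stream.*;

        // Sketch only: apply numerically prefixed upgrade scripts (001_..., 002_..., ...)
        // in order against a target database. Connection details are placeholders.
        public class UpgradeRunner {
            public static void main(String[] args) throws Exception {
                Path scriptFolder = Paths.get("upgrade-scripts");
                List<Path> scripts;
                try (Stream<Path> files = Files.list(scriptFolder)) {
                    scripts = files.filter(p -> p.toString().endsWith(".sql"))
                                   .sorted() // Path is Comparable, so same-folder files sort by file name
                                   .collect(Collectors.toList());
                }
                try (Connection con = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=Target;integratedSecurity=true")) {
                    for (Path script : scripts) {
                        String sql = new String(Files.readAllBytes(script));
                        try (Statement st = con.createStatement()) {
                            st.execute(sql); // assumes one batch per file
                        }
                        System.out.println("Applied " + script.getFileName());
                    }
                }
            }
        }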

    Read the article

  • Recommended method for XML level loading in XNA

    - by David Saltares Márquez
    I want to use Blender as my level designer tool for an XNA game. Using an existing plugin, I can export my levels to DotScene format which is basically an xml file like this one: <scene formatVersion="1.0.0"> <nodes> <node name="scene-staircase.001"> <position x="10.500000" y="1.400000" z="-9.600000"/> <quaternion x="0.000000" y="0.000000" z="-0.000000" w="1.000000"/> <scale x="1.000000" y="1.000000" z="1.000000"/> <entity name="scene-staircase.001" meshFile="staircase.mesh"/> </node> <node name="Lamp.003"> <position x="11.024290" y="5.903862" z="9.658987"/> <quaternion x="-0.284166" y="0.726942" z="0.342034" w="0.523275"/> <scale x="1.000000" y="1.000000" z="1.000000"/> <light name="Spot.003" type="point"> <colourDiffuse r="0.400000" g="0.154618" b="0.145180"/> <colourSpecular r="0.400000" g="0.154618" b="0.145180"/> <lightAttenuation range="5000.0" constant="1.000000" linear="0.033333" quadratic="0.000000"/> </light> </node> ... </nodes> </scene> Using naming conventions I could easily parse the file and load the correspondent in game content. I am new to XNA and I have seen that there are several methods to load XML files into a game like serializing and deserializing. Which one would you recommend?

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) – Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kind of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm that day users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system here that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted.  They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody that should have noticed the pattern in the DBCC output did notice.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances and some investigation should be done and a plan of action laid out or you could end up making things worse[3].  DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from this database into a new one.  This is a fifteen year old application with about seven hundred tables.  There are TRIGGERS everywhere not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business which means we could have a “do-over” of yesterday but that’s not a very palatable option either. All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up.  
Working on a copy of the production database, we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up.   The allow data loss option will delete the bad rows.  This isn’t too horrible as we have all of those rows minus the text fields from our earlier export.  Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.  We got lucky in that all of the affected rows were old and in the end we didn’t lose anything.  :O  All of the row counts along the way worked out and it looks like we dodged a major bullet here. We’ve heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy.  This thing is only a couple of months old.  Grrr…. They dispatched a technician that night to come and update it.  That explains why RAID didn’t save us. All-in-all this could have been a lot worse.  Given the root cause here, they basically won the lottery in not losing anything. Here are a few links to some helpful posts on the SQL Server Engine blog.  I love the title of the first one: Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear? CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series) Ta da! Emergency mode repair (we didn’t have to resort to this one thank goodness)   Dave Just because I can…   [1] Names have been changed to protect the guilty. [2] I'm Batman. [3] And if I'm the coolest head in the room, you've got even bigger problems...

    Read the article

  • How do you convert many files from .xlsx to .xls?

    - by David Oneill
    What is a way to convert a batch of .xlsx files to .xls format? I would prefer it to be a command-line solution, but anything is better than opening each manually, and manually saving in the new format. ~~Edit~~ So is there a way to get around that error? errored: Leaking python objects bridged to UNO for reason pyuno runtime is not initialized, (the pyuno.bootstrap needs to be called before using any uno classes) python: tpp.c:63: __pthread_tpp_change_priority: Assertion `new_prio == -1 || (new_prio >= __sched_fifo_min_prio && new_prio <= __sched_fifo_max_prio)' failed. Aborted
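
    Not a command-line answer, but for completeness here is one programmatic route, sketched with Apache POI in Java. It assumes POI 4.x is on the classpath and that the sheets hold plain text, numbers and booleans; formulas, styles, merged regions and anything beyond the old .xls limit of 65,536 rows are simply not carried over. Class and path names are illustrative only.

        import org.apache.poi.hssf.usermodel.HSSFWorkbook;
        import org.apache.poi.ss.usermodel.*;
        import org.apache.poi.xssf.usermodel.XSSFWorkbook;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;

        // Sketch only: copy the raw cell values of each sheet from an .xlsx into a new .xls file.
        public class XlsxToXls {
            public static void main(String[] args) throws Exception {
                try (Workbook in = new XSSFWorkbook(new FileInputStream(args[0]));
                     Workbook out = new HSSFWorkbook();
                     FileOutputStream os = new FileOutputStream(args[1])) {
                    for (int i = 0; i < in.getNumberOfSheets(); i++) {
                        Sheet src = in.getSheetAt(i);
                        Sheet dst = out.createSheet(src.getSheetName());
                        for (Row srcRow : src) {
                            Row dstRow = dst.createRow(srcRow.getRowNum());
                            for (Cell srcCell : srcRow) {
                                Cell dstCell = dstRow.createCell(srcCell.getColumnIndex());
                                if (srcCell.getCellType() == CellType.NUMERIC) {
                                    dstCell.setCellValue(srcCell.getNumericCellValue());
                                } else if (srcCell.getCellType() == CellType.BOOLEAN) {
                                    dstCell.setCellValue(srcCell.getBooleanCellValue());
                                } else {
                                    dstCell.setCellValue(srcCell.toString()); // strings, formulas-as-text, etc.
                                }
                            }
                        }
                    }
                    out.write(os);
                }
            }
        }

    Run it per file (java XlsxToXls input.xlsx output.xls) and loop over the batch from the shell.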

    Read the article

  • JQuery and the multiple date selector

    - by David Carter
    Overview I recently needed to build a web page that would allow a user to capture some information and, most importantly, select multiple dates. This functionality was core to the application and hence had to be easy and quick to do. This is a public facing website so it had to be intuitive and very responsive. On the face of it, it didn't seem too hard; I know enough jQuery to know what it is capable of, and I was pretty sure that there would be some plugins that would help speed things along the way. I'm using ASP.Net MVC for this project as I really like the control that it gives you over the generated HTML and JavaScript. After years of Web Forms development it makes me feel like a web developer again and puts a smile on my face, and that can only be a good thing!   The Calendar The first item that I needed on this page was a calendar and I wanted the ability to:
    - have the calendar be always visible
    - select/deselect multiple dates at the same time
    - bind to the select/deselect event so that I could update a separate listing of the selected dates
    - allow the user to move to another month and still have the calendar remember any dates in the previous month
    I was hoping that there was a jQuery plugin that would meet my requirements and luckily there was! The jQuery datepicker does everything I want and there is quite a bit of documentation on how to use it. It makes use of a JavaScript date library, date.js, which I had not come across before but which has a number of very useful date utilities that I have used elsewhere in the project. As you can see from the image there still needs to be some styling done! But there will be plenty of time for that later. The calendar clearly shows which dates the user has selected in red, and I also make use of an unordered list to show the selected dates so the user can always clearly see what has been selected even if they move to another month on the calendar. The JavaScript code that is responsible for listening to events on the calendar and synchronising the list looks as follows:
    <script type="text/javascript">
        $(function () {
            $('.datepicker').datePicker({ inline: true, selectMultiple: true })
            .bind(
                'dateSelected',
                function (e, selectedDate, $td, state) {
                    var dateInMillisecs = selectedDate.valueOf();
                    if (state) { //adding a date
                        var newDate = new Date(selectedDate);
                        //insert the new item into the correct place in the list
                        var listitems = $('#dateList').children('li').get();
                        var liToAdd = "<li id='" + dateInMillisecs + "' >" + newDate.toString('ddd dd MMM yyyy') + "</li>";
                        var targetIndex = -1;
                        for (var i = 0; i < listitems.length; i++) {
                            if (dateInMillisecs <= listitems[i].id) {
                                targetIndex = i;
                                break;
                            }
                        }
                        if (targetIndex < 0) {
                            $('#dateList').append(liToAdd);
                        }
                        else {
                            $($('#dateList').children("li")[targetIndex]).before(liToAdd);
                        }
                    }
                    else {//removing a date
                        $('ul #' + dateInMillisecs).remove();
                    }
                }
            )
        });
    When a date is selected on the calendar, a function is called with a number of parameters passed to it.
The ones I am particularly interested in are selectedDate and state. State tells me whether the user has selected or deselected the date passed in the selectedDate parameter. The <ul> that I am using to show the dates has an id of dateList and this is what I will be adding and removing <li> items from. To make things a little more logical for the user I decided that the dates should be sorted in chronological order; this means that each time a new date is selected it needs to be placed in the correct position in the list. One way to do this would be just to append a new <li> to the list and then sort the whole list. However, the approach I took was to get an array of all the items in the list, var listitems = $('#dateList').children('li').get();, and then check the value of each item in the array against my new date, and as soon as I found the case where the new date was less than the current item, remember that position in the list as this is where I would insert it later. To make this work easily I decided to store a numeric representation of each date in the list in the id attribute of each <li> element. Fortunately JavaScript natively stores dates as the number of milliseconds since 1 Jan 1970: var dateInMillisecs = selectedDate.valueOf(); Please note that this is the value of the date in UTC! I always like to store dates in UTC as I learnt a long time ago that it saves a lot of refactoring at a later date... When I convert the dates back to their original form on the server I will need the UTC offset that was used when calculating the dates; this, and how to actually serialise the dates and get them posted back, will be the subject of another post.

    Read the article

  • ODI 11g – How to Load Using Partition Exchange

    - by David Allan
    Here we will look at how to load large volumes of data efficiently into the Oracle database using a mixture of CTAS and partition exchange loading. The example we will leverage was posted by Mark Rittman a couple of years back on Interval Partitioning; you can find that posting here. The best thing about ODI is that you can encapsulate all those ‘how to’ blog posts and scripts into templates that can be reused – the templates are of course Knowledge Modules. The interface design to mimic Mark's posting is shown below. The IKM I have constructed performs a simple series of steps: a CTAS to create the stage table to use in the exchange, then locking the partition (to ensure it exists; it will be created if it doesn’t), then exchanging the partition into the target table. You can find the IKM Oracle PEL.xml file here. The IKM performs the following steps and is meant to illustrate what can be done. When you use the IKM in an interface you configure the options for hints (for parallelism levels etc.), initial extent size, next extent size and the partition variable. The KM has an option where the name of the partition can be passed in, so if you know the name of the partition then set the variable to the name; if you have interval partitioning you probably don’t know the name, so you can use the FOR clause. In my example I set the variable to use the date value of the source data: FOR (TO_DATE(''01-FEB-2010'',''dd-MON-yyyy'')) Using a variable lets me invoke the scenario many times, loading different partitions of the same target table. Below you can see where this is defined within ODI; I had to double single-quote the strings since this is placed inside the execute immediate tasks in the KM. Note also this example interface uses the LKM Oracle to Oracle (datapump), so this illustration uses a lot of the high-performing Oracle database capabilities – it uses Data Pump to unload, then a CreateTableAsSelect (CTAS) is executed on the external table built on top of the Data Pump export. This table is then exchanged into the target. The IKM and illustrations above are using ODI 11.1.1.6, which was needed to get around some bugs in earlier releases with how the variable is handled...as far as I remember.

    Read the article

  • Adobe Flash Player fails

    - by David Cole
    Using Ubuntu 11.10, the Firefox error message says "A plugin is needed to display this content: Adobe Flash Player Installer", so I install it. Then it says "Installed - restart Firefox". I restart Firefox and the same error message appears. This problem doesn't happen with Windows 7 (IE, Chrome & Firefox are fine) or with my previous version of Ubuntu. The problem occurs when I access CallOfRoma.com. Thank you.

    Read the article

  • What is the maximum number of characters in the utm_content param in GA?

    - by David Parks
    For example, we want to differentiate people who followed our daily product email blast. I could use the product ID for utm_content, but it would be easier to read if I used the SEO-friendly URL path, such as http://www.oursite.com/products/really-great-new-product, giving a tagged URL like: https://www.frugg.com/?utm_source=a&utm_medium=b&utm_term=c&utm_content=Can-I-use-a-really-long-content-tag-like-this-one-or-is-this-going-to-break-something&utm_campaign=d

    Read the article

  • How does landscape calculate memory usage?

    - by David Planella
    I'm trying to debug an OOM situation in an Ubuntu 12.04 server, and looking at the Memory graphs in Landscape, I noticed that there wasn't any serious memory usage spike. Then I looked at the output of the free command and I wasn't quite sure how both memory usage results relate to each other. Here's landscape's output on the server:
    $ landscape-sysinfo
      System load:          0.0
      Processes:            93
      Usage of /:           5.6% of 19.48GB
      Users logged in:      1
      Memory usage:         26%
      IP address for eth0:  -
      Swap usage:           2%
    Then I ran the free command and I get:
    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:           486        381        105          0          4        165
    -/+ buffers/cache:        212        274
    Swap:          255          7        248
    I can understand the 2% swap usage, but where does the 26% memory usage come from?

    Read the article

  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development; both its benefits and limitations. To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context. Test - An item or action which measures something in some quantifiable form. Driven - The primary motivation or focus of a series of activities (process). Development - All phases of a software project/product from concept through delivery. The above are very simple definitions that result in the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product." There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the (conventional) education process that most of us grew up on. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization and not understanding of the subject matter. The result of this is that many people graduated with high scores but without "quality and correctness" in their ability to utilize the subject matter (of course, the flip side is true where certain people DID understand the material but were not very good at taking this type of test). Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language and tools, the remainder of this post will utilize Microsoft Visual Studio and Team Foundation Server (TFS) for examples. It should be realized that everyone does at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid) that drives the ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but really only address a small aspect of the overall effort. To see TDD in a bigger view, let's identify the various activities that are part of the Software Development LifeCycle. These are going to be presented in a Waterfall style for simplicity, but each item also occurs within Iterative methodologies such as Agile/Scrum. The key ones here are: Requirements Gathering, Architecture, Design, Implementation, Quality Assurance. Can each of these items be subjected to a process which establishes metrics (quantified metrics) that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best they can verify that a local aspect (e.g. a Class/Method) of implementation matches the (test writer's perspective of) the appropriate design document. So what can we do? For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and for providing clear information on what items need to be addressed (along with the appropriate time to address them - in varying levels of detail). Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.
Returning for a moment to our "education example", one must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student simply provide a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the "workflow") uses a completely invalid approach, yet still comes up with the right number. The "Wrong Process"/"Right Answer" is probably the single biggest problem in software development. Even very simple items can suffer from this. As an example, consider the following code for a "straight line" calculation....Is it correct? (for Integral Points)         int Solve(int m, int b, int x) { return m * x + b; }   Most people would respond "Yes". But let's take the question one step further... Is it correct for all possible values of m,b,x??? (no fair if you cheated by being focused on the bolded text!)  Without additional information regarding constraints on "the possible values of m,b,x" the answer must be NO; there is the risk of overflow/wraparound that will produce an incorrect result! To properly answer this question (i.e. Test the Code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is "Correct" and has "Quality". Yet, how many of us (myself included) have written such code without even thinking about it? In many cases we (think we) "know" what the bounds are, and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, etc. This leads directly to the types of system failures that plague so many projects. This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply: each item should be tested, and the test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item. We also add concepts that expand the scope and alter the style by recognizing that there are many things besides "lines of code" that benefit from testing (measuring/evaluating in a formal way), and that Correctness and Quality cannot be solely measured by "correct results". In future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas....
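
    To make the overflow point concrete, here is a small hedged illustration. It is written in Java only because that is convenient for a self-contained sketch; in C# the checked keyword (or an explicit bounds test derived from the requirements) plays the same role as Math.multiplyExact/addExact below. The class and method names are purely illustrative.

        // Sketch only: the unguarded calculation silently wraps around, while the
        // guarded variant turns the missing bounding condition into a visible failure.
        public class StraightLine {

            static int solve(int m, int b, int x) {
                return m * x + b; // same calculation as above; wraps on overflow
            }

            static int solveChecked(int m, int b, int x) {
                return Math.addExact(Math.multiplyExact(m, x), b); // throws ArithmeticException on overflow
            }

            public static void main(String[] args) {
                System.out.println(solve(2, 1, 10));                // 21 - fine
                System.out.println(solve(Integer.MAX_VALUE, 0, 2)); // -2 - a plausible-looking wrong answer
                try {
                    solveChecked(Integer.MAX_VALUE, 0, 2);
                } catch (ArithmeticException e) {
                    System.out.println("Overflow detected: " + e.getMessage());
                }
            }
        }

    The point is not the arithmetic but the missing bounding conditions: until the requirement states the valid ranges of m, b and x, neither version can be declared "Correct".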

    Read the article

  • Adding objects to the environment at timed intervals

    - by david
    I am using an ArrayList to handle objects and at each interval of 120 frames, I am adding a new object of the same type at a random location along the z-axis of 60. The problem is, it doesn't add just 1. It depends on how many are in the list. If I kill the Fox before the time interval when one is supposed to spawn comes, then no Fox will be spawned. If I don't kill any foxes, it grows exponentially. I only want one Fox to be added every 120 frames. This problem never happened before when I created new ones and added them to the environment. Any insights? Here is my code: /**** FOX CLASS ****/ import env3d.EnvObject; import java.util.ArrayList; public class Fox extends Creature { private int frame = 0; public Fox(double x, double y, double z) { super(x, y, z); // Must use the mutator as the fields have private access // in the parent class setTexture("models/fox/fox.png"); setModel("models/fox/fox.obj"); setScale(1.4); } public void move(ArrayList<Creature> creatures, ArrayList<Creature> dead_creatures, ArrayList<Creature> new_creatures) { frame++; setX(getX()-0.2); setRotateY(270); if (frame > 120) { Fox f = new Fox(60, 1, (int)(Math.random()*28)+1); new_creatures.add(f); frame = 0; } for (Creature c : creatures) { if (this.distance(c) < this.getScale()+c.getScale() && c instanceof Tux) { dead_creatures.add(c); } } for (Creature c : creatures) { if (c.getX() < 1 && c instanceof Fox) { dead_creatures.add(c); } } } } import env3d.Env; import java.util.ArrayList; import org.lwjgl.input.Keyboard; /** * A predator and prey simulation. Fox is the predator and Tux is the prey. */ public class Game { private Env env; private boolean finished; private ArrayList<Creature> creatures; private KingTux king; private Snowball ball; private int tuxcounter; private int kills; /** * Constructor for the Game class. It sets up the foxes and tuxes. */ public Game() { // we use a separate ArrayList to keep track of each animal. // our room is 50 x 50. creatures = new ArrayList<Creature>(); for (int i = 0; i < 10; i++) { creatures.add(new Tux((int)(Math.random()*10)+1, 1, (int)(Math.random()*28)+1)); } for (int i = 0; i < 1; i++) { creatures.add(new Fox(60, 1, (int)(Math.random()*28)+1)); } king = new KingTux(25, 1, 35); ball = new Snowball(-400, -400, -400); } /** * Play the game */ public void play() { finished = false; // Create the new environment. Must be done in the same // method as the game loop env = new Env(); // Make the room 50 x 50. env.setRoom(new Room()); // Add all the animals into to the environment for display for (Creature c : creatures) { env.addObject(c); } for (Creature c : creatures) { if (c instanceof Tux) { tuxcounter++; } } env.addObject(king); env.addObject(ball); // Sets up the camera env.setCameraXYZ(30, 50, 55); env.setCameraPitch(-63); // Turn off the default controls env.setDefaultControl(false); // A list to keep track of dead tuxes. ArrayList<Creature> dead_creatures = new ArrayList<Creature>(); ArrayList<Creature> new_creatures = new ArrayList<Creature>(); // The main game loop while (!finished) { if (env.getKey() == 1 || tuxcounter == 0) { finished = true; } env.setDisplayStr("Tuxes: " + tuxcounter, 15, 0); env.setDisplayStr("Kills: " + kills, 140, 0); processInput(); ball.move(); king.check(); // Move each fox and tux. 
for (Creature c : creatures) { c.move(creatures, dead_creatures, new_creatures); } for (Creature c : creatures) { if (c.distance(ball) < c.getScale()+ball.getScale() && c instanceof Fox) { dead_creatures.add(c); ball.setX(-400); ball.setY(-400); ball.setZ(-400); kills++; } } // Clean up of the dead tuxes. for (Creature c : dead_creatures) { if (c instanceof Tux) { tuxcounter--; } env.removeObject(c); creatures.remove(c); } for (Creature c : new_creatures) { creatures.add(c); env.addObject(c); } // we clear the ArrayList for the next loop. We could create a new one // every loop but that would be very inefficient. dead_creatures.clear(); new_creatures.clear(); // Update display env.advanceOneFrame(); } // Just a little clean up env.exit(); } private void processInput() { int keyDown = env.getKeyDown(); int key = env.getKey(); if (keyDown == 203) { king.setX(king.getX()-1); } else if (keyDown == 205) { king.setX(king.getX()+1); } if (ball.getX() <= -400 && key == Keyboard.KEY_S) { ball.setX(king.getX()); ball.setY(king.getY()); ball.setZ(king.getZ()); } } /** * Main method to launch the program. */ public static void main(String args[]) { (new Game()).play(); } } /**** CREATURE CLASS ****/ /* (Parent class to Tux, Fox, and KingTux) */ import env3d.EnvObject; import java.util.ArrayList; abstract public class Creature extends EnvObject { private int frame; private double rand; /** * Constructor for objects of class Creature */ public Creature(double x, double y, double z) { setX(x); setY(y); setZ(z); setScale(1); rand = Math.random(); } private void randomGenerator() { rand = Math.random(); } public void move(ArrayList<Creature> creatures, ArrayList<Creature> dead_creatures, ArrayList<Creature> new_creatures) { frame++; if (frame > 12) { randomGenerator(); frame = 0; } // if (rand < 0.25) { // setX(getX()+0.3); // setRotateY(90); // } else if (rand < 0.5) { // setX(getX()-0.3); // setRotateY(270); // } else if (rand < 0.75) { // setZ(getZ()+0.3); // setRotateY(0); // } else if (rand < 1) { // setZ(getZ()-0.3); // setRotateY(180); // } if (rand < 0.5) { setRotateY(getRotateY()-7); } else if (rand < 1) { setRotateY(getRotateY()+7); } setX(getX()+Math.sin(Math.toRadians(getRotateY()))*0.5); setZ(getZ()+Math.cos(Math.toRadians(getRotateY()))*0.5); if (getX() < getScale()) setX(getScale()); if (getX() > 50-getScale()) setX(50 - getScale()); if (getZ() < getScale()) setZ(getScale()); if (getZ() > 50-getScale()) setZ(50 - getScale()); // The move method now handles collision detection if (this instanceof Fox) { for (Creature c : creatures) { if (c.distance(this) < c.getScale()+this.getScale() && c instanceof Tux) { dead_creatures.add(c); } } } } } The rest of the classes are a bit trivial to this specific problem.

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files -- one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder but I can't quite get it to work for my needs as I detailed in a different question on SO. Some possibilities I have considered are: (1) Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is sometimes people don't name their content logically and I think the title of an article better reflects the content that will be in each text file. (2) Construct a white list of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is the result could be just as unreadable as the former solution because presumably the content creators went through a similar process when placing the files on their server. (3) Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it. So my question here is, what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
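
    For reference, here is a minimal sketch of the second option (whitelist plus substitutions), in Java since that is the stack Jersey implies. Every rule in it - the allowed character set, the dash substitution, the length cap and the fallback name - is an assumption to tune, not a recommendation.

        import java.text.Normalizer;

        // Sketch only: keep letters, digits, underscore and dash; collapse everything else to a dash.
        public class FileNames {

            static String toFileName(String title) {
                String s = Normalizer.normalize(title, Normalizer.Form.NFKD) // split accents from letters...
                        .replaceAll("\\p{M}", "");                           // ...then drop the accent marks
                s = s.replaceAll("[^A-Za-z0-9_-]+", "-")                     // whitelist; substitute the rest
                     .replaceAll("^-+|-+$", "");                             // trim leading/trailing dashes
                if (s.length() > 100) {                                      // arbitrary cap for very long titles
                    s = s.substring(0, 100);
                }
                return s.isEmpty() ? "untitled" : s;
            }

            public static void main(String[] args) {
                System.out.println(toFileName("Écrire du code : pourquoi pas ?")); // Ecrire-du-code-pourquoi-pas
                System.out.println(toFileName("C# in Depth (2nd ed.)"));           // C-in-Depth-2nd-ed
            }
        }

    As the question notes, the output can still be cryptic for titles that are mostly punctuation, which is why the generic-naming fallback (option 3) is worth keeping in reserve.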

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series.  Read Part 1 and Part 2 first. Right-Time Marketing Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn’t translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud approach, keep costs at bay.
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast,” but you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message; this time it’s a personalized offer for 10% off ASICS good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.

    Read the article

  • ODI 12c - Loading Files into Oracle, community post from ToadWorld

    - by David Allan
    There's a complete soup to nuts post from Deepak Vohra on the Oracle community pages of ToadWorld on loading a fixed length file into the Oracle database. This post is interesting on a few fronts; firstly, this is the out of the box experience, no specialized KMs, so just basic integration from getting the software installed to running a mapping. Also it demonstrates fixed length file integration, including how to use the ODI UI to define the fields and pertinent properties.  Check the blog post out below.... http://www.toadworld.com/platforms/oracle/w/wiki/10935.loading-text-file-data-into-oracle-database-12c-with-oracle-data-integrator-12c.aspx Hopefully you also find this useful; many thanks to Deepak for sharing his experiences. You could take this example further and illustrate how to load into Oracle using the LKM File to Oracle via External table knowledge module, which will perform much better and also leverage such things as using wildcards for loading many files into the 12c database.

    Read the article

  • Where to Perform Authentication in REST API Server?

    - by David V
    I am working on a set of REST APIs that needs to be secured so that only authenticated calls will be performed. There will be multiple web apps to service these APIs. Is there a best-practice approach as to where the authentication should occur? I have thought of two possible places. Have each web app perform the authentication by using a shared authentication service. This seems to be in line with tools like Spring Security, which is configured at the web app level. Protect each web app with a "gateway" for security. In this approach, the web app never receives unauthenticated calls. This seems to be the approach of Apache HTTP Server Authentication. With this approach, would you use Apache or nginx to protect it, or something else in between Apache/nginx and your web app? For additional reference, the authentication is similar to services like AWS that have a non-secret identifier combined with a shared secret key. I am also considering using HMAC. Also, we are writing the web services in Java using Spring. Update: To clarify, each request needs to be authenticated with the identifier and secret key. This is similar to how AWS REST requests work.
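
    For reference, since the AWS-style identifier-plus-shared-secret scheme and HMAC are mentioned above, here is a minimal sketch of request signing in plain Java (javax.crypto). The canonical string format, header names and key handling are assumptions for illustration and fit either architecture: whichever tier authenticates (each web app via the shared service, or the gateway) looks up the secret by the public identifier and recomputes the signature.

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        // Sketch only: the client signs a canonical string with the shared secret;
        // the server recomputes the HMAC and compares it to the value sent.
        public class RequestSigner {

            static String sign(String secret, String method, String path, String timestamp) throws Exception {
                String canonical = method + "\n" + path + "\n" + timestamp; // assumed canonical form
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
                return Base64.getEncoder().encodeToString(mac.doFinal(canonical.getBytes(StandardCharsets.UTF_8)));
            }

            public static void main(String[] args) throws Exception {
                String accessKeyId = "client-123";   // public identifier, sent in the clear
                String secret = "not-a-real-secret"; // shared secret, never sent on the wire
                String signature = sign(secret, "GET", "/api/orders/42", "2013-06-01T12:00:00Z");
                // The client would then send headers such as:
                //   Authorization: HMAC client-123:<signature>
                //   X-Timestamp: 2013-06-01T12:00:00Z
                System.out.println(accessKeyId + ":" + signature);
            }
        }

    Comparing the received and recomputed signatures (plus a timestamp freshness check) works the same whether it lives in a filter inside each web app or in a gateway in front of them; the sketch deliberately leaves that placement question open.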

    Read the article

  • 10.04 drops to '(initramfs)' prompt on boot

    - by David Yenor
    I'm not sure what to do to solve the problem, I received this error upon boot. mount: mounting /dev/disk/by-uuid/f60e3ce2-0237-45bb-bf07-581d0090cbc7 on /root failed: Invalid argument mount: mounting /dev on /root/dev failed: No such file or directory mount: mounting /sys on /root/sys failed: No such file or directory mount: mounting /proc on /root/proc failed: No such file or directory Target filesystem doesn't have /sbin/init. No init found. Try passing init= bootarg. BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) _

    Read the article

  • Analytics: Can I set a goal on multiple events?

    - by David Parks
    We have a popup dialogue that requests the user's email address or Facebook login. The page behind the popup loads, so a page view is counted. We want to measure:
    - How many users ignored the popup completely
    - How many users engaged the popup, but didn't complete the process (we trigger an event when the user performs actions defined as "engaging")
    - How many users completed the popup
    Bounce rates aren't telling because some users won't receive the popup. We are basically triggering the events "PopupDisplayed", "PopupEngaged" and "PopupComplete", with labels to differentiate between email and Facebook. But I don't think I can set goals to count "users who received 'PopupDisplayed' AND 'PopupComplete'" events, so that I can count how many users both saw the popup and completed it.

    Read the article

  • Chalk Talk, Glenn Block – Leith, Edinburgh 12th March 2011

    - by David Christiansen
    Exciting news. I am proud to announce that Glenn Block from Microsoft will be coming all the way from Seattle to Scotland on the 12th March to talk to you! Glenn is a PM on the WCF team working on Microsoft’s future HTTP and REST stack and has been involved in some pretty exciting and ground-breaking Microsoft development mind-shifts in recent times. Don’t miss the chance to hear him speak and ask him questions. Brief history of Glenn: Prior to WCF he was a PM on the new Managed Extensibility Framework in .NET 4.0. Glenn has a breadth of experience both inside and outside Microsoft developing software solutions for ISVs and the enterprise. Glenn has also been very active in involving folks from the community in the development of software at Microsoft. This has included shipping several products under open source licenses, as well as assisting other teams looking to do so. Glenn is also a frequent speaker at local and international events and user groups.  When he's not working and playing with technology, he spends his time with his wife and daughter either at their home in Seattle or at one of the local coffee shops. Glenn Block on the web: mvcConf 2 - Glenn Block: Take some REST with WCF (Feb 2011); @gblock on twitter; My Technobabble - Glenn's Blog. Sponsored by Storm ID, an award-winning full service digital agency in Edinburgh.

    Read the article

  • Calculated Columns in Entity Framework Code First Migrations

    - by David Paquette
    I had a couple people ask me about calculated properties / columns in Entity Framework this week.  The question was, is there a way to specify a property in my C# class that is the result of some calculation involving 2 properties of the same class.  For example, in my database, I store a FirstName and a LastName column and I would like a FullName property that is computed from the FirstName and LastName columns.  My initial answer was: 1: public string FullName 2: { 3: get { return string.Format("{0} {1}", FirstName, LastName); } 4: } Of course, this works fine, but this does not give us the ability to write queries using the FullName property.  For example, this query: 1: var users = context.Users.Where(u => u.FullName.Contains("anan")); Would result in the following NotSupportedException: The specified type member 'FullName' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported. It turns out there is a way to support this type of behavior with Entity Framework Code First Migrations by making use of Computed Columns in SQL Server.  While there is no native support for computed columns in Code First Migrations, we can manually configure our migration to use computed columns. Let’s start by defining our C# classes and DbContext: 1: public class UserProfile 2: { 3: public int Id { get; set; } 4: 5: public string FirstName { get; set; } 6: public string LastName { get; set; } 7: 8: [DatabaseGenerated(DatabaseGeneratedOption.Computed)] 9: public string FullName { get; private set; } 10: } 11: 12: public class UserContext : DbContext 13: { 14: public DbSet<UserProfile> Users { get; set; } 15: } The DatabaseGenerated attribute is needed on our FullName property.  This is a hint to let Entity Framework Code First know that the database will be computing this property for us. Next, we need to run 2 commands in the Package Manager Console.  First, run Enable-Migrations to enable Code First Migrations for the UserContext.  Next, run Add-Migration Initial to create an initial migration.  This will create a migration that creates the UserProfile table with 3 columns: FirstName, LastName, and FullName.  This is where we need to make a small change.  Instead of allowing Code First Migrations to create the FullName property, we will manually add that column as a computed column. 1: public partial class Initial : DbMigration 2: { 3: public override void Up() 4: { 5: CreateTable( 6: "dbo.UserProfiles", 7: c => new 8: { 9: Id = c.Int(nullable: false, identity: true), 10: FirstName = c.String(), 11: LastName = c.String(), 12: //FullName = c.String(), 13: }) 14: .PrimaryKey(t => t.Id); 15: Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName"); 16: } 17: 18: 19: public override void Down() 20: { 21: DropTable("dbo.UserProfiles"); 22: } 23: } Finally, run the Update-Database command.  Now we can query for Users using the FullName property and that query will be executed on the database server.  However, we encounter another potential problem. Since the FullName property is calculated by the database, it will get out of sync on the object side as soon as we make a change to the FirstName or LastName property.  
Luckily, we can have the best of both worlds here by also adding the calculation back to the getter on the FullName property: 1: [DatabaseGenerated(DatabaseGeneratedOption.Computed)] 2: public string FullName 3: { 4: get { return FirstName + " " + LastName; } 5: private set 6: { 7: //Just need this here to trick EF 8: } 9: } Now we can both query for Users using the FullName property and we also won’t need to worry about the FullName property being out of sync with the FirstName and LastName properties.  When we run this code: 1: using(UserContext context = new UserContext()) 2: { 3: UserProfile userProfile = new UserProfile {FirstName = "Chanandler", LastName = "Bong"}; 4: 5: Console.WriteLine("Before saving: " + userProfile.FullName); 6: 7: context.Users.Add(userProfile); 8: context.SaveChanges(); 9:  10: Console.WriteLine("After saving: " + userProfile.FullName); 11:  12: UserProfile chanandler = context.Users.First(u => u.FullName == "Chanandler Bong"); 13: Console.WriteLine("After reading: " + chanandler.FullName); 14:  15: chanandler.FirstName = "Chandler"; 16: chanandler.LastName = "Bing"; 17:  18: Console.WriteLine("After changing: " + chanandler.FullName); 19:  20: } We get this output: It took a bit of work, but finally Chandler’s TV Guide can be delivered to the right person. The obvious downside to this implementation is that the FullName calculation is duplicated in the database and in the UserProfile class. This sample was written using Visual Studio 2012 and Entity Framework 5. Download the source code here.

    Read the article

  • Going Inside the Store

    - by David Dorf
    Location was the first "killer-tech" for smartphones, and innovators have found several ways to use it. For retail, apps exist to find nearby stores, provide coupons, and give directions to the front door. But once you enter the store, location-finding ceases to work. That's because your location is usually found by finding GPS satellites in the sky, and the store's roof blocks the signal. But it won't take technology long to solve that problem. The first problem to solve is a lack of indoor maps. Navteq and others provide very accurate maps of the outdoors, enabling navigation for cars and pedestrians. Micello is building a business creating digital maps of indoor locations like malls, convention centers, and office buildings. They have over 500 live maps, including maps of IKEA stores. They claim it took them only four hours to create a map of the Stanford Shopping Center in Palo Alto with its 1.4 million square feet and 140 retail stores. And within stores, retailers are producing more accurate plan-o-grams. I'm always impressed watching demos of our space planning from AVT. It uses CAD software to allow you to walk the virtual store and see products on the shelves. The second problem is being able to determine location inside the store so it can be overlaid on the map. There are several goals for this endeavor. Your smartphone might direct you straight to particular products, it might summon a sales associate to your location for immediate assistance, and it might send you coupons based on the aisle you're viewing. Companies like Nearbuy, ZuluTime, and Skyhook are working to master indoor location using a combination of GPS signals, WiFi, and cell tower positioning to calculate a location. (Skyhook calls this WPS, as depicted in the chart.) Today they can usually hit 10-meter accuracy, but that number is improving all the time. When it gets inside 3 meters, some of the goals mentioned earlier will be in easy reach. I for one can't wait until the time my iPhone leads me directly to the sprinkler heads in Lowes and Home Depot.

    Read the article

  • Do backlinks to blocked content add value?

    - by David Fisher
    We've been debating the following SEO question at our office: If you block bot access to a page either via robots.txt or on-page noindex metadata, does that negate the value of any backlinks to that page? We have a client who wants to block some event booking form pages from being indexed as each booking form page has a unique URL parameter and the pages are "clogging up" the Google index; however lots of websites link to those booking form pages and we wouldn't want to lose the value of those links. Any opinions welcomed.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business, and the computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into that schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organizational goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data. Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business, then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove the requirements for GSS; the figure "GSS Requirements Gathering" illustrates Oracle's unique position that enabled the creation of the Global Single Schema. GSS defines all the key business entities and attributes, including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation
    E-business is about maximizing operational efficiency.
    At the highest level, these 'operations' span all that you do as an organization. The figure "Enterprise Business Processes" illustrates some of these high-level business processes. Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place:
    - The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    - The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    - The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.
    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema
    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.
    - Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    - The schema is multi-organization enabled, so data manipulation can be controlled along organizational boundaries.
    - It uses variable-byte Unicode to support over 31 languages.
    - The schema encodes flexible date and flexible address formats for easy localization.
    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. It identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model; their presence expresses fundamental business rules for the interaction between business entities. The figure "Interconnected Business Entities" illustrates some of these connections.
    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • Post Crosstalk 2012

    - by David Dorf
    This year the Oracle Retail users conference, Crosstalk, had a 20% increase in attendees, driven by both new customers and those acquired via Endeca. As the product assets of Oracle have grown, so has the completeness of the solution set. This year was marked by the breadth of omni-channel stories. Rose Spicer and her marketing team always strive for an equal balance of retailer presentations, networking opportunities, and unique experiences, and this year was no exception. We had 41 different retailers from China, Russia, South Africa, Brazil, Chile, the US, Canada and the UK sharing their insights with one another. In all there were 251 executives from 120 iconic brands such as Daphne, Kohl's, Morrisons, Abercrombie & Fitch, Hot Topic, Talbots, Petco, Deckers, Sportmaster, Mr. Price, Falabella, and Disney, to name a few.

    From a product perspective, there were a few new developments from Oracle Retail:
    - Endeca's search engine has been integrated into the ATG commerce platform.
    - The latest Retail Analytics application, Oracle Retail Customer Analytics, is generally available.
    - Oracle Retail previewed a new fully-integrated mobile POS.

    But the real benefit of attending Crosstalk was hearing about the experiences of retailers and partners. Here are a few interesting facts I picked up:
    - At Kohl's, the most popular website accessed by customers within their stores is Facebook. With all the buzz about showrooming, I was really expecting it to be Amazon.
    - Daphne, a Chinese shoe retailer, is opening 3 new stores per day. Being located near the factories allows them to have a very agile supply chain as well.
    - Disney Stores have increased sales by 25% at stores upgraded to include Mobile POS. They continue to lead the pack with excellent customer experiences.
    - Quicksilver reported that 1 in 5 visits to their website comes from a tablet. More evidence that tablets are replacing traditional PCs in households.
    - By tagging shoes with RFID, Saks is able to ensure all shoe models are on display. If a model is not being displayed, it has no chance of being sold.

    Additionally, there were awards, store tours on Michigan Avenue, fireworks at Navy Pier, and the Oracle Retail house band, Bolo313, performing at Soldier Field. Speaking of which, a few retailers got on stage and jammed with the band; a possible rival to Rock & Roll Retail? You can always find the latest info from us at the Retail Rack. The next events on tap are the Partner Summit followed by OpenWorld.

    Read the article

  • Custom keyboard shortcut to launch a terminal and run a command in Unity

    - by David Weinraub
    I know this should be the simplest thing, but I'm coming up empty. ;-( I would like to create a keyboard shortcut, Ctrl+Alt+P, that opens a terminal window and runs a ping command: ping -c 4 somefixeddomain.com [Useful for quickly checking whether my internet connection is actually working.] I have attempted to do this (in Unity, Ubuntu 11.10) using Settings > Keyboard > Custom Shortcuts, filling in all the obvious stuff, but no luck. All ideas welcome.
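    A minimal sketch of what could go in the Command field of that custom shortcut, assuming gnome-terminal (the default terminal in Ubuntu 11.10); the trailing read is only there to keep the window open until the output has been seen:

        # assumption: gnome-terminal is installed; -x runs the rest of the line inside the terminal
        gnome-terminal -x bash -c "ping -c 4 somefixeddomain.com; read -p 'Press Enter to close'"

    With that saved in the custom shortcut dialog and Ctrl+Alt+P bound to it, pressing the shortcut should open a terminal and run the ping.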

    Read the article
