Search Results

Search found 4298 results on 172 pages for 'david thomas garcia'.


  • How do I resize tables in Visio 2010?

    - by Thomas
    Create a Database Model Diagram, then reverse engineer a database (Database tab, Reverse Engineer). Once the diagram is created, how do you resize the tables? I've tried: enabling Developer mode, choosing Protection, then choosing None. When I do that, I'm given the impression that I should be able to resize a given table, but I cannot actually do it. I've also tried enabling Developer mode, right-clicking on a table, choosing Show ShapeSheet, and setting all Lock values in the Protection section to 0.

    Read the article

  • What is the maximum number of characters in the utm_content param in GA?

    - by David Parks
    For example, we want to differentiate people who followed our daily product email blast. I could use the product ID for utm_content, but it would be easier to read if we used the SEO-friendly URL path, such as: http://www.oursite.com/products/really-great-new-product

        https://www.frugg.com/?utm_source=a&utm_medium=b&utm_term=c&utm_content=Can-I-use-a-really-long-content-tag-like-this-one-or-is-this-going-to-break-something&utm_campaign=d

    Read the article

  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development, both its benefits and limitations.

    To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context:

    - Test: an item or action which measures something in some quantifiable form.
    - Driven: the primary motivation or focus of a series of activities (process).
    - Development: all phases of a software project/product from concept through delivery.

    These are very simple definitions that result in the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product."

    There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the (conventional) education process that most of us grew up on. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization and not understanding of the subject matter. The result is that many people graduated with high scores but without "quality and correctness" in their ability to utilize the subject matter (of course, the flip side is true where certain people DID understand the material but were not very good at taking this type of test).

    Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language and tools, the remainder of this post will use Microsoft Visual Studio and Team Foundation Server (TFS) for examples.

    It should be realized that everyone does at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid?) that drives the ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but they really only address a small aspect of the overall effort.

    To see TDD in a bigger view, let's identify the various activities that are part of the Software Development LifeCycle. These are presented in a Waterfall style for simplicity, but each item also occurs within iterative methodologies such as Agile/Scrum. The key ones here are:

    - Requirements Gathering
    - Architecture
    - Design
    - Implementation
    - Quality Assurance

    Can each of these items be subjected to a process which establishes quantified metrics that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best they can verify that a local aspect (e.g. a Class/Method) of the implementation matches the (test writer's perspective of) the appropriate design document.

    So what can we do? For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and to providing clear information on what items need to be addressed, along with the appropriate time to address them, in varying levels of detail. Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.

    Returning for a moment to our "education example", one must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple-choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student simply provide a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the "workflow") uses a completely invalid approach, yet still comes up with the right number.

    The "Wrong Process"/"Right Answer" combination is probably the single biggest problem in software development. Even very simple items can suffer from this. As an example, consider the following code for a "straight line" calculation... Is it correct? (for integral points)

        int Solve(int m, int b, int x) { return m * x + b; }

    Most people would respond "Yes". But let's take the question one step further... Is it correct for all possible values of m, b, x? (no fair if you cheated by being focused on the bolded text!) Without additional information regarding constraints on "the possible values of m, b, x", the answer must be NO: there is the risk of overflow/wraparound that will produce an incorrect result!

    To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is "Correct" and has "Quality". Yet how many of us (myself included) have written such code without even thinking about it? In many cases we (think we) "know" what the bounds are, and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, and so on. This leads directly to the types of system failures that plague so many projects.

    This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply: each item should be tested, and the test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item. We also add concepts that expand the scope and alter the style by recognizing:

    - There are many things besides "lines of code" that benefit from testing (measuring/evaluating in a formal way).
    - Correctness and Quality cannot be solely measured by "correct results".

    In future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas...
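
    A quick sketch (mine, in Java rather than the C#-style snippet above; the same wraparound applies) makes the overflow point concrete: the "obviously correct" code passes a casual test yet fails at the edge of the input domain, while a checked variant at least fails loudly.

        public class StraightLine {

            // The naive version: silently wraps around on integer overflow.
            static int solve(int m, int b, int x) {
                return m * x + b;
            }

            // A checked variant: Math.multiplyExact/addExact throw
            // ArithmeticException instead of producing a wrong answer.
            static int solveChecked(int m, int b, int x) {
                return Math.addExact(Math.multiplyExact(m, x), b);
            }

            public static void main(String[] args) {
                System.out.println(solve(2, 3, 4));                 // 11 -- looks correct
                System.out.println(solve(Integer.MAX_VALUE, 0, 2)); // -2 -- wrapped around
                System.out.println(solveChecked(Integer.MAX_VALUE, 0, 2)); // throws
            }
        }

    Whether wrapping, throwing, or widening to long is "correct" is exactly the kind of bounding-condition question that must be traced back to the requirements.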

    Read the article

  • Adding objects to the environment at timed intervals

    - by david
    I am using an ArrayList to handle objects, and at each interval of 120 frames I am adding a new object of the same type at a random location along the z-axis, at 60. The problem is that it doesn't add just one; how many get added depends on how many are already in the list. If I kill the Fox before the time interval when one is supposed to spawn comes, then no Fox will be spawned. If I don't kill any foxes, the population grows exponentially. I only want one Fox to be added every 120 frames. This problem never happened before when I created new ones and added them to the environment. Any insights? Here is my code:

        /**** FOX CLASS ****/
        import env3d.EnvObject;
        import java.util.ArrayList;

        public class Fox extends Creature {
            private int frame = 0;

            public Fox(double x, double y, double z) {
                super(x, y, z);
                // Must use the mutator as the fields have private access
                // in the parent class
                setTexture("models/fox/fox.png");
                setModel("models/fox/fox.obj");
                setScale(1.4);
            }

            public void move(ArrayList<Creature> creatures, ArrayList<Creature> dead_creatures, ArrayList<Creature> new_creatures) {
                frame++;
                setX(getX()-0.2);
                setRotateY(270);
                if (frame > 120) {
                    Fox f = new Fox(60, 1, (int)(Math.random()*28)+1);
                    new_creatures.add(f);
                    frame = 0;
                }
                for (Creature c : creatures) {
                    if (this.distance(c) < this.getScale()+c.getScale() && c instanceof Tux) {
                        dead_creatures.add(c);
                    }
                }
                for (Creature c : creatures) {
                    if (c.getX() < 1 && c instanceof Fox) {
                        dead_creatures.add(c);
                    }
                }
            }
        }

        import env3d.Env;
        import java.util.ArrayList;
        import org.lwjgl.input.Keyboard;

        /**
         * A predator and prey simulation. Fox is the predator and Tux is the prey.
         */
        public class Game {
            private Env env;
            private boolean finished;
            private ArrayList<Creature> creatures;
            private KingTux king;
            private Snowball ball;
            private int tuxcounter;
            private int kills;

            /**
             * Constructor for the Game class. It sets up the foxes and tuxes.
             */
            public Game() {
                // we use a separate ArrayList to keep track of each animal.
                // our room is 50 x 50.
                creatures = new ArrayList<Creature>();
                for (int i = 0; i < 10; i++) {
                    creatures.add(new Tux((int)(Math.random()*10)+1, 1, (int)(Math.random()*28)+1));
                }
                for (int i = 0; i < 1; i++) {
                    creatures.add(new Fox(60, 1, (int)(Math.random()*28)+1));
                }
                king = new KingTux(25, 1, 35);
                ball = new Snowball(-400, -400, -400);
            }

            /**
             * Play the game
             */
            public void play() {
                finished = false;

                // Create the new environment. Must be done in the same
                // method as the game loop
                env = new Env();

                // Make the room 50 x 50.
                env.setRoom(new Room());

                // Add all the animals into the environment for display
                for (Creature c : creatures) {
                    env.addObject(c);
                }
                for (Creature c : creatures) {
                    if (c instanceof Tux) {
                        tuxcounter++;
                    }
                }
                env.addObject(king);
                env.addObject(ball);

                // Sets up the camera
                env.setCameraXYZ(30, 50, 55);
                env.setCameraPitch(-63);

                // Turn off the default controls
                env.setDefaultControl(false);

                // A list to keep track of dead tuxes.
                ArrayList<Creature> dead_creatures = new ArrayList<Creature>();
                ArrayList<Creature> new_creatures = new ArrayList<Creature>();

                // The main game loop
                while (!finished) {
                    if (env.getKey() == 1 || tuxcounter == 0) {
                        finished = true;
                    }
                    env.setDisplayStr("Tuxes: " + tuxcounter, 15, 0);
                    env.setDisplayStr("Kills: " + kills, 140, 0);
                    processInput();
                    ball.move();
                    king.check();

                    // Move each fox and tux.
                    for (Creature c : creatures) {
                        c.move(creatures, dead_creatures, new_creatures);
                    }
                    for (Creature c : creatures) {
                        if (c.distance(ball) < c.getScale()+ball.getScale() && c instanceof Fox) {
                            dead_creatures.add(c);
                            ball.setX(-400);
                            ball.setY(-400);
                            ball.setZ(-400);
                            kills++;
                        }
                    }

                    // Clean up of the dead tuxes.
                    for (Creature c : dead_creatures) {
                        if (c instanceof Tux) {
                            tuxcounter--;
                        }
                        env.removeObject(c);
                        creatures.remove(c);
                    }
                    for (Creature c : new_creatures) {
                        creatures.add(c);
                        env.addObject(c);
                    }

                    // we clear the ArrayList for the next loop. We could create a new one
                    // every loop but that would be very inefficient.
                    dead_creatures.clear();
                    new_creatures.clear();

                    // Update display
                    env.advanceOneFrame();
                }

                // Just a little clean up
                env.exit();
            }

            private void processInput() {
                int keyDown = env.getKeyDown();
                int key = env.getKey();
                if (keyDown == 203) {
                    king.setX(king.getX()-1);
                } else if (keyDown == 205) {
                    king.setX(king.getX()+1);
                }
                if (ball.getX() <= -400 && key == Keyboard.KEY_S) {
                    ball.setX(king.getX());
                    ball.setY(king.getY());
                    ball.setZ(king.getZ());
                }
            }

            /**
             * Main method to launch the program.
             */
            public static void main(String args[]) {
                (new Game()).play();
            }
        }

        /**** CREATURE CLASS ****/
        /* (Parent class to Tux, Fox, and KingTux) */
        import env3d.EnvObject;
        import java.util.ArrayList;

        abstract public class Creature extends EnvObject {
            private int frame;
            private double rand;

            /**
             * Constructor for objects of class Creature
             */
            public Creature(double x, double y, double z) {
                setX(x);
                setY(y);
                setZ(z);
                setScale(1);
                rand = Math.random();
            }

            private void randomGenerator() {
                rand = Math.random();
            }

            public void move(ArrayList<Creature> creatures, ArrayList<Creature> dead_creatures, ArrayList<Creature> new_creatures) {
                frame++;
                if (frame > 12) {
                    randomGenerator();
                    frame = 0;
                }
                // if (rand < 0.25) {
                //     setX(getX()+0.3);
                //     setRotateY(90);
                // } else if (rand < 0.5) {
                //     setX(getX()-0.3);
                //     setRotateY(270);
                // } else if (rand < 0.75) {
                //     setZ(getZ()+0.3);
                //     setRotateY(0);
                // } else if (rand < 1) {
                //     setZ(getZ()-0.3);
                //     setRotateY(180);
                // }
                if (rand < 0.5) {
                    setRotateY(getRotateY()-7);
                } else if (rand < 1) {
                    setRotateY(getRotateY()+7);
                }
                setX(getX()+Math.sin(Math.toRadians(getRotateY()))*0.5);
                setZ(getZ()+Math.cos(Math.toRadians(getRotateY()))*0.5);

                if (getX() < getScale()) setX(getScale());
                if (getX() > 50-getScale()) setX(50 - getScale());
                if (getZ() < getScale()) setZ(getScale());
                if (getZ() > 50-getScale()) setZ(50 - getScale());

                // The move method now handles collision detection
                if (this instanceof Fox) {
                    for (Creature c : creatures) {
                        if (c.distance(this) < c.getScale()+this.getScale() && c instanceof Tux) {
                            dead_creatures.add(c);
                        }
                    }
                }
            }
        }

    The rest of the classes are not really relevant to this specific problem.
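
    The growth is exponential because every Fox runs its own frame counter inside move(), so each 120-frame tick adds one new fox per living fox, and each new fox then starts its own counter. A minimal sketch (not the poster's code) isolates the difference between a per-fox timer and a single timer owned by the game loop:

        import java.util.ArrayList;
        import java.util.List;

        public class SpawnDemo {
            public static void main(String[] args) {
                // Buggy version: one spawn timer per fox, as in Fox.move().
                List<Integer> perFoxTimers = new ArrayList<>();
                perFoxTimers.add(0); // start with one fox

                // Fixed version: a single timer owned by the game loop.
                int globalTimer = 0;
                int globalCount = 1;

                for (int frame = 1; frame <= 600; frame++) {
                    List<Integer> spawned = new ArrayList<>();
                    for (int i = 0; i < perFoxTimers.size(); i++) {
                        int t = perFoxTimers.get(i) + 1;
                        if (t > 120) { spawned.add(0); t = 0; } // every fox spawns one
                        perFoxTimers.set(i, t);
                    }
                    perFoxTimers.addAll(spawned); // population doubles each interval

                    if (++globalTimer > 120) { globalCount++; globalTimer = 0; }

                    if (frame % 120 == 0) {
                        System.out.printf("frame %d: per-fox timers -> %d foxes, global timer -> %d foxes%n",
                                frame, perFoxTimers.size(), globalCount);
                    }
                }
            }
        }

    Moving the frame counter and the new Fox(...) call out of Fox.move() and into the Game loop (adding directly to creatures and env there) yields one fox per interval regardless of how many foxes are alive. It also explains the "no fox spawns" case: killing the only fox removes the only timer.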

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files, one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are:

    - Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is that sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file.
    - Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server.
    - Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it.

    So my question here is: what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fits my constraints?
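
    For what it's worth, here is a minimal sketch of the whitelist approach (the second option above) in Java; the exact character whitelist and the hyphen substitution are illustrative assumptions, not a recommendation:

        import java.text.Normalizer;

        public class FileNames {

            public static String toSafeFileName(String title) {
                // Decompose accented characters (é -> e + combining mark),
                // then strip the combining marks.
                String ascii = Normalizer.normalize(title, Normalizer.Form.NFD)
                                         .replaceAll("\\p{M}+", "");
                // Replace anything outside the whitelist with a hyphen,
                // collapse runs of hyphens, and trim them from the ends.
                return ascii.replaceAll("[^A-Za-z0-9._-]+", "-")
                            .replaceAll("-{2,}", "-")
                            .replaceAll("^-|-$", "");
            }

            public static void main(String[] args) {
                // Prints: Cafe-culture-a-year-in-review
                System.out.println(toSafeFileName("Café culture: a year in review!"));
            }
        }

    The readability concern still stands, but because the substitutions run over the title rather than the server's file name, the result tends to track the article's subject.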

    Read the article

  • Bridge virtual machines out WLAN interface

    - by Thomas
    It seems that my WLAN card's (Intel 5100 AGN) firmware doesn't allow "spoofing" MAC addresses. This has the side effect of destroying the capability to bridge out my virtual machines on that interface. Apparently this is a common thing with WLAN cards. I can see the incoming traffic just fine in my virtual machines, but their DHCP queries don't get bridged out of the WLAN card. It works perfectly well when using the wired Ethernet port. Is there a workaround for this? MAC-NAT or something? I don't want to route my virtual machines out to the Internet, because I don't want my host OS to even have an IP address. I'm using Linux and KVM for virtualization.

    Read the article

  • How to test web application performance from other continent?

    - by Thomas Einwaller
    We are hosting our web application http://timr.com on a server located in Germany. The server handles a high load of traffic very well, and everything works as desired in terms of performance and load times. However, we sometimes get complaints from our overseas users (US, South America) that they experience slow page loading times. What would be the best way to test the performance of a web application "as if you were on another continent"? I want to make sure that the distance between the server and the user is not a problem.

    Read the article

  • Chalk Talk, Glenn Block – Leith, Edinburgh 12th March 2011

    - by David Christiansen
    Exciting news! I am proud to announce that Glenn Block from Microsoft will be coming all the way from Seattle to Scotland on the 12th March to talk to you! Glenn is a PM on the WCF team working on Microsoft's future HTTP and REST stack, and he has been involved in some pretty exciting and ground-breaking Microsoft development mind-shifts in recent times. Don't miss the chance to hear him speak and ask him questions.

    Brief history of Glenn: Prior to WCF he was a PM on the new Managed Extensibility Framework in .NET 4.0. Glenn has a breadth of experience both inside and outside Microsoft developing software solutions for ISVs and the enterprise. Glenn has also been very active in involving folks from the community in the development of software at Microsoft. This has included shipping several products under open source licenses, as well as assisting other teams looking to do so. Glenn is also a frequent speaker at local and international events and user groups. When he's not working and playing with technology, he spends his time with his wife and daughter, either at their home in Seattle or at one of the local coffee shops.

    Glenn Block on the web:

    - mvcConf 2 - Glenn Block: Take some REST with WCF (Feb 2011)
    - @gblock on twitter
    - My Technobabble - Glenn's Blog

    Sponsored by Storm ID, an award-winning full-service digital agency in Edinburgh.

    Read the article

  • 10.04 drops to '(initramfs)' prompt on boot

    - by David Yenor
    I'm not sure what to do to solve the problem; I received this error upon boot:

        mount: mounting /dev/disk/by-uuid/f60e3ce2-0237-45bb-bf07-581d0090cbc7 on /root failed: Invalid argument
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target filesystem doesn't have /sbin/init.
        No init found. Try passing init= bootarg.

        BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) _

    Read the article

  • Where to Perform Authentication in REST API Server?

    - by David V
    I am working on a set of REST APIs that needs to be secured so that only authenticated calls will be performed. There will be multiple web apps to service these APIs. Is there a best-practice approach as to where the authentication should occur? I have thought of two possible places. Have each web app perform the authentication by using a shared authentication service. This seems to be in line with tools like Spring Security, which is configured at the web app level. Protect each web app with a "gateway" for security. In this approach, the web app never receives unauthenticated calls. This seems to be the approach of Apache HTTP Server Authentication. With this approach, would you use Apache or nginx to protect it, or something else in between Apache/nginx and your web app? For additional reference, the authentication is similar to services like AWS that have a non-secret identifier combined with a shared secret key. I am also considering using HMAC. Also, we are writing the web services in Java using Spring. Update: To clarify, each request needs to be authenticated with the identifier and secret key. This is similar to how AWS REST requests work.
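
    Since HMAC with an AWS-style identifier/secret pair is on the table, here is a minimal sketch of the signing step in Java; the canonical-request format, header layout, and class names are illustrative assumptions, not a specific framework's API:

        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class RequestSigner {

            // Computes an HMAC-SHA256 signature over a canonical string
            // derived from the request (method, path, timestamp, ...).
            public static String sign(String secretKey, String canonicalRequest) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
                byte[] raw = mac.doFinal(canonicalRequest.getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(raw);
            }

            public static void main(String[] args) throws Exception {
                // The client sends its public identifier plus the signature;
                // the server looks up the secret by identifier, recomputes the
                // signature, and compares, so the secret never crosses the wire.
                String canonical = "GET\n/api/orders\n2012-06-01T12:00:00Z";
                String signature = sign("shared-secret", canonical);
                System.out.println("Authorization: HMAC my-access-id:" + signature);
            }
        }

    Wherever the check lives, in each web app via a shared service or in a gateway in front, including a timestamp in the signed string and rejecting stale requests guards against replays.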

    Read the article

  • ODI 12c - Loading Files into Oracle, community post from ToadWorld

    - by David Allan
    There's a complete soup-to-nuts post from Deepak Vohra on the Oracle community pages of ToadWorld on loading a fixed-length file into the Oracle database. This post is interesting on a few fronts: firstly, it's the out-of-the-box experience, no specialized KMs, just basic integration from getting the software installed to running a mapping. It also demonstrates fixed-length file integration, including how to use the ODI UI to define the fields and pertinent properties. Check the blog post out below:

        http://www.toadworld.com/platforms/oracle/w/wiki/10935.loading-text-file-data-into-oracle-database-12c-with-oracle-data-integrator-12c.aspx

    Hopefully you also find this useful; many thanks to Deepak for sharing his experiences. You could take this example further and illustrate how to load into Oracle using the LKM File to Oracle via External Table knowledge module, which will perform much better and also leverage such things as using wildcards for loading many files into the 12c database.

    Read the article

  • Analytics: Can I set a goal on multiple events?

    - by David Parks
    We have a popup dialogue that requests the user's email address or Facebook login. The page behind the popup loads, so a page view is counted. We want to measure:

    - How many users ignored the popup completely
    - How many users engaged the popup but didn't complete the process (we trigger an event when the user performs actions defined as "engaging")
    - How many users completed the popup

    Bounce rates aren't telling because some users won't receive the popup. We are basically triggering events "PopupDisplayed", "PopupEngaged", and "PopupComplete", with labels to differentiate between email and Facebook. But I don't think I can set goals to count "users who received 'PopupDisplayed' AND 'PopupComplete'" events, so that I can count how many users both saw the popup and completed it.

    Read the article

  • Sonicwall 3500 (Creating VLANs for multiple SSIDs)

    - by Thomas Chia
    I have an issue here. Here is my setup: X0 - STAFF network | X2 - AGENT network | X4 - Wireless AP (for multiple SSIDs). Okay, basically I'm going to set up the wireless AP to broadcast 2 SSIDs, STAFF-WIFI and AGENT-WIFI. When connected to STAFF-WIFI, users will be connected to the STAFF network, while when connected to AGENT-WIFI, users will be connected to the AGENT network. Can someone explain in detail how to go about setting up this configuration? Thanks!

    Read the article

  • Calculated Columns in Entity Framework Code First Migrations

    - by David Paquette
    I had a couple of people ask me about calculated properties / columns in Entity Framework this week. The question was: is there a way to specify a property in my C# class that is the result of some calculation involving 2 properties of the same class? For example, in my database I store a FirstName and a LastName column, and I would like a FullName property that is computed from the FirstName and LastName columns. My initial answer was:

        public string FullName
        {
            get { return string.Format("{0} {1}", FirstName, LastName); }
        }

    Of course this works fine, but it does not give us the ability to write queries using the FullName property. For example, this query:

        var users = context.Users.Where(u => u.FullName.Contains("anan"));

    Would result in the following NotSupportedException:

        The specified type member 'FullName' is not supported in LINQ to Entities.
        Only initializers, entity members, and entity navigation properties are supported.

    It turns out there is a way to support this type of behavior with Entity Framework Code First Migrations by making use of Computed Columns in SQL Server. While there is no native support for computed columns in Code First Migrations, we can manually configure our migration to use computed columns. Let's start by defining our C# classes and DbContext:

        public class UserProfile
        {
            public int Id { get; set; }

            public string FirstName { get; set; }
            public string LastName { get; set; }

            [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
            public string FullName { get; private set; }
        }

        public class UserContext : DbContext
        {
            public DbSet<UserProfile> Users { get; set; }
        }

    The DatabaseGenerated attribute is needed on our FullName property. This is a hint to let Entity Framework Code First know that the database will be computing this property for us. Next, we need to run 2 commands in the Package Manager Console. First, run Enable-Migrations to enable Code First Migrations for the UserContext. Next, run Add-Migration Initial to create an initial migration. This will create a migration that creates the UserProfile table with 3 columns: FirstName, LastName, and FullName. This is where we need to make a small change. Instead of allowing Code First Migrations to create the FullName column, we will manually add that column as a computed column.

        public partial class Initial : DbMigration
        {
            public override void Up()
            {
                CreateTable(
                    "dbo.UserProfiles",
                    c => new
                        {
                            Id = c.Int(nullable: false, identity: true),
                            FirstName = c.String(),
                            LastName = c.String(),
                            //FullName = c.String(),
                        })
                    .PrimaryKey(t => t.Id);

                Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName");
            }

            public override void Down()
            {
                DropTable("dbo.UserProfiles");
            }
        }

    Finally, run the Update-Database command. Now we can query for Users using the FullName property, and that query will be executed on the database server. However, we encounter another potential problem: since the FullName property is calculated by the database, it will get out of sync on the object side as soon as we make a change to the FirstName or LastName property.

    Luckily, we can have the best of both worlds here by also adding the calculation back to the getter on the FullName property:

        [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
        public string FullName
        {
            get { return FirstName + " " + LastName; }
            private set
            {
                // Just need this here to trick EF
            }
        }

    Now we can query for Users using the FullName property, and we also won't need to worry about the FullName property being out of sync with the FirstName and LastName properties. When we run this code:

        using (UserContext context = new UserContext())
        {
            UserProfile userProfile = new UserProfile { FirstName = "Chanandler", LastName = "Bong" };

            Console.WriteLine("Before saving: " + userProfile.FullName);

            context.Users.Add(userProfile);
            context.SaveChanges();

            Console.WriteLine("After saving: " + userProfile.FullName);

            UserProfile chanandler = context.Users.First(u => u.FullName == "Chanandler Bong");
            Console.WriteLine("After reading: " + chanandler.FullName);

            chanandler.FirstName = "Chandler";
            chanandler.LastName = "Bing";

            Console.WriteLine("After changing: " + chanandler.FullName);
        }

    the FullName stays correct at every step. It took a bit of work, but finally Chandler's TV Guide can be delivered to the right person. The obvious downside to this implementation is that the FullName calculation is duplicated in the database and in the UserProfile class.

    This sample was written using Visual Studio 2012 and Entity Framework 5. Download the source code here.

    Read the article

  • Going Inside the Store

    - by David Dorf
    Location was the first "killer-tech" for smartphones, and innovators have found several ways to use it. For retail, apps exist to find nearby stores, provide coupons, and give directions to the front door. But once you enter the store, location-finding ceases to work. That's because your location is usually found by finding GPS satellites in the sky, and the store's roof blocks the signal. But it won't take technology long to solve that problem.

    The first problem to solve is a lack of indoor maps. Navteq and others provide very accurate maps of the outdoors, enabling navigation for cars and pedestrians. Micello is building a business creating digital maps of indoor locations like malls, convention centers, and office buildings. They have over 500 live maps, including maps of IKEA stores. They claim it took them only four hours to create a map of the Stanford Shopping Center in Palo Alto, with its 1.4 million square feet and 140 retail stores. And within stores, retailers are producing more accurate plan-o-grams. I'm always impressed watching demos of our space planning from AVT. It uses CAD software to allow you to walk the virtual store and see products on the shelves.

    The second problem is being able to determine location inside the store so it can be overlayed on the map. There are several goals for this endeavor. Your smartphone might direct you straight to particular products, it might summon a sales associate to your location for immediate assistance, and it might send you coupons based on the aisle you're viewing. Companies like Nearbuy, ZuluTime, and Skyhook are working to master indoor location using a combination of GPS signals, WiFi, and cell tower positioning to calculate a location. (Skyhook calls this WPS, as depicted in the chart.) Today they can usually hit 10 meters accuracy, but that number is improving all the time. When it gets inside 3 meters, some of the goals mentioned earlier will be within easy reach. I for one can't wait until the time my iPhone leads me directly to the sprinkler heads in Lowes and Home Depot.

    Read the article

  • Post Crosstalk 2012

    - by David Dorf
    This year the Oracle Retail users conference, Crosstalk, had a 20% increase in attendees, which was driven by both new customers and those acquired via Endeca. As the product assets of Oracle have grown, so has the completeness of the solution set. This year was marked by the breadth of omni-channel stories. Rose Spicer and her marketing team always strive for an equal balance of retailer presentations, networking opportunities, and unique experiences; this year was no exception. We had 41 different retailers from China, Russia, South Africa, Brazil, Chile, the US, Canada and the UK sharing their insights with one another. In all there were 251 executives from 120 iconic brands such as Daphne, Kohl's, Morrisons, Abercrombie & Fitch, Hot Topic, Talbots, Petco, Deckers, Sportmaster, Mr. Price, Falabella, and Disney, to name a few. From a product perspective, there were a few new developments from Oracle Retail:

    - Endeca's search engine has been integrated into the ATG commerce platform.
    - The latest Retail Analytics application, Oracle Retail Customer Analytics, is generally available.
    - Oracle Retail previewed a new fully-integrated mobile POS.

    But the real benefit of attending Crosstalk was hearing about the experiences of retailers and partners. Here are a few interesting facts I picked up:

    - At Kohl's, the most popular website accessed by customers within their stores is Facebook. With all the buzz about showrooming, I was really expecting it to be Amazon.
    - Daphne, a Chinese shoe retailer, is opening 3 new stores per day. Being located near the factories allows them to have a very agile supply chain as well.
    - Disney Stores have increased sales by 25% at stores upgraded to include Mobile POS. They continue to lead the pack with excellent customer experiences.
    - Quicksilver reported that 1 in 5 visits to their website comes from a tablet. More evidence that tablets are replacing traditional PCs in households.
    - By tagging shoes with RFID, Saks is able to ensure all shoe models are on display. If a model is not being displayed, it has no chance of being sold.

    Additionally, there were awards, store tours on Michigan Avenue, fireworks at Navy Pier, and the Oracle Retail house band, Bolo313, performing at Soldier Field. Speaking of which, a few retailers got on stage and jammed with the band -- possible rival to Rock & Roll Retail? You can always find the latest info from us at the Retail Rack. The next events on tap are the Partner Summit followed by OpenWorld.

    Read the article

  • Do backlinks to blocked content add value?

    - by David Fisher
    We've been debating the following SEO question at our office: if you block bot access to a page, either via robots.txt or on-page noindex metadata, does that negate the value of any backlinks to that page? We have a client who wants to block some event booking form pages from being indexed, as each booking form page has a unique URL parameter and the pages are "clogging up" the Google index; however, lots of websites link to those booking form pages and we wouldn't want to lose the value of those links. Any opinions welcomed.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data.

    Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized.

    The success of all business processes depends on the availability of accurate master data. Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business, then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments.

    Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema.

    [Figure: GSS Requirements Gathering]

    GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    EBusiness is about maximizing operational efficiency. At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.

    [Figure: Enterprise Business Processes]

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and financial processes associated with all this procuring, manufacturing, and selling activity.

    Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place:

    - The same accurate and timely customer data will be provided to all your operational applications from the call center to the point of sale.
    - The same accurate and timely supplier data will be provided to all your operational applications from supply chain planning to procurement.
    - The same accurate and timely product information will be available to all your operational applications from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    - Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    - The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries.
    - It uses variable byte Unicode to support over 31 languages.
    - The schema encodes flexible date and flexible address formats for easy localizations.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.

    [Figure: Interconnected Business Entities]

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database:

        SELECT t.TABLE_NAME,
               ISNULL(pk_ccu.COLUMN_NAME,'') PK,
               ISNULL(fk_ccu.COLUMN_NAME,'') FK
        FROM INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc
               ON pk_tc.TABLE_NAME = t.TABLE_NAME
              AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu
               ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc
               ON fk_tc.TABLE_NAME = t.TABLE_NAME
              AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu
               ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME

    Usually this runs in a couple seconds, but on one server running SQL Server 2000, it is taking over four minutes to run. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/

    I then updated the statistics on all of the tables that were mentioned in the execution plan:

        update statistics sysobjects
        update statistics syscolumns
        update statistics systypes
        update statistics master..spt_values
        update statistics sysreferences

    But that didn't help. The index tuning wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?

    Read the article

  • Oracle MDM at the MDM Summit in San Francisco

    - by David Butler
    Oracle is sponsoring the Product MDM track at this year’s MDM & Data Governance San Francisco Summit. Sachin Patel, Director of Product Strategy, Product Hub Applications, at Oracle will present the keynote: Product Master Data Management for Today’s Enterprise. Here’s the abstract: Today businesses struggle to boost operational efficiency and meet new product launch deadlines due to poor and cumbersome administrative processes. One of the primary reasons enterprises are unable to achieve cohesion is due to various domain silos and fragmented product data. This adversely affects business performance including, but not limited to, excess inventories, under-leveraged procurement spend, downstream invoicing or order errors and lost sales opportunities. In this session, you will learn the key elements and business processes that are required for you to master an enterprise product record. Additionally you will gain insights into how to improve the accuracy of your data and deliver reliable and consistent product information across your enterprise. This provides a high level of confidence that business managers can achieve their goals. In this session, you will understand how adopting a Master Data Management strategy for product information can help your enterprise change course towards a more profitable, competitive and successful business. Cisco Systems will join Sachin and cover their experiences, lessons learned and best practices. If you are in the Bay Area and interested in mastering your product data for the benefit of multiple applications, business processes and analytical systems, please join us at the Hyatt, Fisherman’s Wharf this Thursday, June 30th.

    Read the article

  • Custom keyboard shortcut to launch a terminal and run a command in Unity

    - by David Weinraub
    I know this should be the simplest thing, but I'm coming up empty. ;-( I would like to create a keyboard shortcut, Ctrl-Alt-P, that opens a terminal window and runs a ping command: ping -c 4 somefixeddomain.com [Useful for quickly checking whether my internet connection is actually working.] I have attempted to do this (in Unity, Ubuntu v11.10) using Settings > Keyboard > Custom Shortcuts, filling in all the obvious stuff, but no luck. All ideas welcome.

    Read the article

  • Good window management grid keyboard shortcuts on keyboards without a numeric keypad

    - by Bryce Thomas
    I like to use Winsplit Revolution to position open windows in a specific place on my screen in a grid-like fashion. One of the things I like about Winsplit Revolution is that the default keyboard shortcuts use the physical layout of the numeric keypad as a mnemonic for where each key positions a window (e.g. Ctrl + Alt + 7 positions window in top left hand corner because 7 is in top left hand corner and Ctrl + Alt + 3 positions window in bottom right hand corner because 3 is in bottom right hand corner). I am looking to get a laptop (Macbook Pro) whose keyboard does not feature a numeric keypad. Can anyone suggest a set of keyboard shortcuts on such a machine that provides a similar mnemonic to aid in remembering what each shortcut does, rather than a simple arbitrary assignment of shortcuts? To be clear, I am not interested in specific window management software, just suggestions for keyboard shortcuts that are easy to remember.

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control; alternatively, it can be run on a regular schedule. This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were identified at this stage, the testers would simply not 'waste their time' and touch the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue.

    How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:

    1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.

    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.

    Read the article

  • Pause cues in Windows Video?

    - by thomas
    Is there a way to make pause cues in Windows Media Video, like sprites in QuickTime? I want to be able to run a .wmv file in Windows Media Player that stops automatically on a text; then I click, and the film starts again, goes on to the next text and stops, and so on.

    Read the article
