Search Results

Search found 8367 results on 335 pages for 'temporal difference'.


  • Silverlight Cream for June 15, 2010 - 2 -- #883

    - by Dave Campbell
    In this Issue: Vibor Cipan, Chris Klug, Pete Brown, Kirupa, and Xianzhong Zhu.

    Shoutouts (thought I gave up on them, didn't you?):

      - Jesse Liberty has the companion video to his WP7 OData post up: New Video: Master/Detail in WinPhone 7 with oData
      - Michael Scherotter, who made the first Ball Watch SL1 app back in the day, has a Virtual Event: Creating an Entry for the BALL Watch Silverlight Contest... sounds like the thing to do if you want in on this :)
      - Even if you don't speak Portuguese, you can check this out: MSN Brazil Uses Silverlight to Showcase the 2010 FIFA World Cup South Africa
      - Erik Mork and crew have their latest up: This Week in Silverlight – Teched and Quizes
      - Michael Klucher has a post up to give you some relief if you're having Trouble Installing the Windows Phone Developer Tools
      - Portuguese above and now French... Jeremy Alles has a post up about [WP7] Windows Phone 7 challenge for french readers !

    Just a note, not that it makes any difference, but Adam Kinney turned @SilverlightNews over to me today. I am the only one that has ever posted on it, but still, having it all to myself feels special :)

    From SilverlightCream.com:

      - Silverlight 4 tutorial: HOW TO use PathListBox and Sample Data: Crank up that new version of Blend and follow along with Vibor Cipan's PathListBox tutorial... oh, and sample data too.
      - Cool INotifyPropertyChanged implementation: Chris Klug shows off some INotifyPropertyChanged goodness he is not implementing, and credits a blog by Manuel Felicio for some inspiration. Check out that post as well... I've tagged his blog... I needed *another* one :)
      - Silverlight Tip: Using LINQ to Select the Largest Available Webcam Resolution: With no Silverlight Tip of the Day today, Pete Brown stepped up with this tip for finding the largest available webcam resolution using LINQ... and read the comment from Rene as well.
      - Creating a Master-Detail UI in Blend: Kirupa has a very nice Master/Detail UI post up with backgrounder info and the code for the project. There's a running example in the post for you to get an idea of what you're learning.
      - Get started with Farseer Physics 2.1.3 in Silverlight 3: Xianzhong Zhu has a Silverlight 3 tutorial up for Farseer Physics 2.1.3... might track for Silverlight 4, but hey, WP7 is kinda/sorta Silverlight 3, right?... lots of code and external links.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10


  • Silverlight Cream for March 05, 2011 -- #1053

    - by Dave Campbell
    In this all-submittal (while I was at MVP11) Issue: Michael Washington (-2-), goldytech, JFo, Andrea Boschin, Jonathan Marbutt, Gregor Biswanger, Michael Wolf, and Peter Kuhn.

    Above the Fold:
      Silverlight: "A Simple Bindable CheckboxList Control" Jonathan Marbutt
      WP7: "Struggles with the Panorama Control" JFo
      LightSwitch: "HTML (including HTML 5) and LightSwitch at the same time?" Michael Washington

    From SilverlightCream.com:

      - LightSwitch vs HTML 5? In his first post-MVP11 post, Michael Washington takes on HTML5 with a LightSwitch discussion. Good discussion follows in the comments also.
      - HTML (including HTML 5) and LightSwitch at the same time? Michael Washington's 2nd post is a great tutorial on creating a re-usable business layer with LightSwitch... all good stuff, and look for more from Michael as LightSwitch matures.
      - How to add Computed Properties in WCF RIA Services on client: goldytech has a new post up about providing real-time solutions to client-side calculations with WCF RIA services.
      - Struggles with the Panorama Control: JFo details a problem she had with the Panorama control on WP7... detailing 4 problems she had and her solutions... well thought-out explanations too... a definite good read... and another blogger to add to my list!
      - Windows Phone 7 - Part #7: Understanding Push Notifications: Andrea Boschin has part 7 of his WP7 series up at SilverlightShow, concentrating on Push Notifications this time out... great explanation of push notifications in this tutorial from the service and phone side, with a working sample to boot.
      - A Simple Bindable CheckboxList Control: Jonathan Marbutt took a completely different direction than most and created his own Bindable CheckboxList by starting with ContentControl rather than a ListBox as most do... pretty cool, and all the source.
      - Own routed events in Silverlight: I met Gregor Biswanger at the MVP Summit and asked him to send me his blog run through Microsoft Translator... here's a great post on routed events he did back in November... and a discussion of his CallMethodAction Behavior... which looks like another good post subject!
      - Creating a Silverlight Out-of-Browser Splash Screen: Michael Wolf has a post up discussing OOB splash screens... I like his "White screen of Awesome" definition... I'm very familiar with that :) ... check out his solution for getting around that white screen, and lots of external links too.
      - XNA for Silverlight developers: Part 5 - Input (touch + gestures): Peter Kuhn has Part 5 in his tutorial series on XNA for Silverlight devs up at SilverlightShow... this time covering touch and gestures... how to enable and read gestures, and the difference between Silverlight and XNA in the touch department.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10


  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use as a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what StubHub achieved with Oracle R Enterprise, part of the Oracle Advanced Analytics option to Oracle Database, and read more on their story here.

    Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that requires close proximity to where the transactional activity occurs. The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as for ensuring zero downtime (a website needs to be up 24x7) and for working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life-cycle, from data collection to model building, model assessment, and then outcome assessment and feedback into the model building process again. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff who need to handle the potentially fraudulent transactions.

    As just described, advanced analytics projects can be "complicated" from just a human perspective. The extent to which software can simplify the interactions among users and systems will increase the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value means the difference between a successful project and just a nice research report. By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models become a reality. Meanwhile, data scientists are also able to explore leading-edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.


  • It's called College.

    - by jeffreyabecker
    Today I saw yet another 'GUID vs int as your primary key' article. Like most of the ones I've read, this was filled with technical misrepresentations and outright fallacies. Chef's famous line that "There's a time and a place for everything children" applies here. GUIDs have distinct advantages and disadvantages which should be considered when choosing a data type for the primary key.

    Fallacy 1: "It's easier"
    An integer data type (tinyint, smallint, int, bigint) is a better artificial key than a GUID because it's easier to remember. I'm a firm believer that your artificial primary keys should be opaque gibberish. PKs are an implementation detail which should never be exposed to the user or relied on for business logic. If you want things to come back in an order, add an ORDER BY clause and SortOrder fields. If you want a human-usable look-up, add a business key with a unique constraint. If you want to know what order things were inserted into a table, add a timestamp.

    Fallacy 2: "Size Matters"
    For many applications, the size of the artificial primary key is going to be irrelevant. The particular article which kicked this post off stated repeatedly that joining against an int has better performance than joining against a GUID. In computer science the performance of your algorithm is always a function of the number of data points. This still holds true for databases. Unless your table is very large, the performance difference between an int and a GUID probably isn't going to be measurable, let alone noticeable. My personal experience is that performance becomes an issue when you start having billions of rows in the table. At that point you should probably start looking to move from int to bigint, so the effective space/performance gain isn't as much as you'd think.

    GUID Advantages:
      - Insert-ability / Mergeability: You can reliably insert GUIDs into tables without key collisions.
      - Database Independence: Saving entities to the database often requires knowing ids. With identity-based ids, the id must be selected back after every insert. GUIDs can be generated application-side, allowing much faster inserts.

    GUID Disadvantages:
      - Generatability: You can calculate the next id for an integer PK pretty easily in your head, but you will need a program to generate GUIDs. Solution: "SELECT TOP 100 newid() FROM sysobjects"
      - Fragmentation: Most GUID generation algorithms generate pseudo-random GUIDs. This can cause inserts into the middle of your clustered index. Solutions: add a default of newsequentialid(), or use GuidComb in NHibernate (a sketch of the comb idea follows below).
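
    For illustration, here is a minimal C# sketch of the comb idea mentioned above, modeled on the classic Jimmy Nilsson COMB layout that NHibernate's GuidComb strategy also uses. The method name NewCombGuid is mine, not an NHibernate API; treat this as a sketch, not production code.

        using System;

        static class CombGuid
        {
            // Overwrites the last 6 bytes of a random GUID with a timestamp so
            // that successive values sort roughly in generation order, keeping
            // inserts near the end of a clustered index.
            public static Guid NewCombGuid()
            {
                byte[] guidArray = Guid.NewGuid().ToByteArray();

                DateTime baseDate = new DateTime(1900, 1, 1);
                DateTime now = DateTime.UtcNow;

                // Days since the base date, and milliseconds since midnight
                // (divided by 3.33 to fit SQL Server datetime resolution).
                byte[] daysArray = BitConverter.GetBytes((now - baseDate).Days);
                byte[] msecsArray = BitConverter.GetBytes(
                    (long)(now.TimeOfDay.TotalMilliseconds / 3.333333));

                // Reverse to big-endian so the values sort correctly in the
                // byte positions SQL Server compares first.
                Array.Reverse(daysArray);
                Array.Reverse(msecsArray);

                Array.Copy(daysArray, daysArray.Length - 2, guidArray, 10, 2);
                Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, 12, 4);

                return new Guid(guidArray);
            }
        }

    Generated this way, values from the same process come out mostly ascending, which avoids the random page splits described under "Fragmentation" while staying unique across machines.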


  • If unexpected database changes cause you problems – we can help!

    - by Chris Smith
    Have you ever been surprised by an unexpected difference between your database environments? Have you ever found that your Staging database is not the same as your Production database, even though it was the same the week before? Has an emergency hotfix suddenly appeared in Production over the weekend without your knowledge? Has your client secretly added a couple of indices to their local version of the database to aid performance? Worse still, has a developer ever accidentally run a SQL script against the wrong database without noticing their mistake? If you’ve answered “Yes” to any of the above questions, then you’ve suffered from ‘drift’. Database drift is where the state of a database (schema, particularly) has moved away from its expected or official state over time. The upshot is that the database is in an unknown or poorly-understood state. Even if these unexpected changes are not destructive, drift can be a big problem when it’s time to release a new version of the database. A deployment to a target database in an unexpected state can error and fail, potentially delaying a vital, time-sensitive update. A big issue with drift is that it can be hard to spot and it can be even harder to determine its provenance. So, before you can deal with an issue caused by drift, you’ll need to know exactly what change has been made, who made it, when they made it and why they made it. Those questions can take a lot of effort to answer. Then you actually need to decide what to do. Do you roll back the change because it was bad? Retrospectively apply it to the Staging environment because it is a required change? Or script the change into version control to get it back in line with your process? Red Gate’s Database Delivery Team have been talking to DBAs, database consultants and database developers to explore the problem of drift. We’ve started to get a really good idea of how big a problem it can be and what database professionals need to know and do in order to deal with it. It’s fair to say, we’re pretty excited at the prospect of creating a tool that will really help, and we’ve got some great feedback on our initial ideas. We’re now well underway with the development of our new drift-spotting product – SQL Lighthouse – and we hope to have a beta release out towards the end of July. What we really need is your help to shape the product into a great tool. So, if database drift is a problem that you’d like help solving and you are interested in finding out more about our product, join our mailing list to register your interest in trying out the beta release. Subscribe to our mailing list


  • Getting developers and support to work together

    - by Matt Watson
    Agile development has ushered in the norm of rapid iterations and change within products. One of the biggest challenges for agile development is educating the rest of the company. At my last company our biggest challenge was trying to continually train 100 employees in our customer support and training departments. It's easy to write release notes and email them to everyone. But for complex software products, release notes usually don't have enough detail. You really have to educate your employees on the WHO, WHAT, WHERE, WHY, WHEN of every item. If you don't do this, you end up with customer service people who know less about your product than your users do. Ever call a company and feel like you know more about their product than their customer service people do? Yeah. I'm talking about that problem.

      WHO does the change affect?
      WHAT was the actual change?
      WHERE do I find the change in the product?
      WHY was the change made? (It's hard to support something if you don't know why it was done.)
      WHEN will the change be released?

    One thing I want to stress is the importance of the WHY something was done. For customer support people to be really good at their job, they need to understand the product and how people use it. Knowing how to enable a feature is one thing. Knowing why someone would want to enable it is a whole different thing, and it is the difference in good customer service. Another challenge is getting support people to better test and document potential bugs before escalating them to development. Trying to fix bugs without examples is always fun... NOT. They might as well say "The sky is falling, please fix it!" We need to over-train the support staff about product changes and continually stress how they document and test potential product bugs. You also have to train the sales staff and the marketing team. Then there is updating sales materials, your website, product documentation and other items that are always out of date. Every product release causes this vicious circle of trying to educate the rest of the company about the changes. Do we need to record a simple video explaining the changes and email it to everyone? Maybe we should use a simple online training type app to help with this problem. Ultimately the struggle is taking the time to do the training, but it is time well spent. It may save you a lot of time answering questions and fixing bugs later. How do we efficiently transfer key product knowledge from developers and product owners to the rest of the company? How have you solved these issues at your company?


  • How to Use Your Android Phone as a Modem; No Rooting Required

    - by Jason Fitzpatrick
    If your cellular provider’s mobile hotspot/tethering plans are too pricey, skip them and tether your phone to your computer without inflating your monthly bill. Read on to see how you can score free mobile internet. We recently received a letter from a How-To Geek reader, requesting help linking their Android phone to their laptop to avoid the highway robbery their cellular provider was insisting upon:

    Dear How-To Geek, I recently found out that my cellphone company charges $30 a month to use your smartphone as a data modem. That’s an outrageous price when I already pay an extra $15 a month charge just because they insist that because I have a smartphone I need a data plan, because I’ll be using so much more data than other users. They expect me to pay what amounts to a $45 a month premium just to do some occasional surfing and email checking from the comfort of my laptop instead of the much smaller smartphone screen! Surely there is a workaround? I’m running Windows 7 on my laptop and I have an Android phone running Android OS 2.2. Help! Sincerely, No Double Dipping!

    Well, Double Dipping, this is a sentiment we can strongly relate to, as many of us on staff are in a similar situation. It’s absurd that so many companies charge you to use the data connection on the phone you’re already paying for. There is no difference in bandwidth usage if you stream Pandora to your phone or to your laptop, for example. Fortunately we have a solution for you. It’s not free-as-in-beer, but it only costs $16 which, over the first year of use alone, will save you $344. Let’s get started!


  • How far is too far?

    - by David Dorf
    Previously I've talked about Safeway's personalized pricing as well as Target's use of analytics to learn about customers.  Then last week I read about Orbitz tailoring their hotel offers based on the browser used.  (Orbitz claims that Mac users are 40% more likely than PC users to book four- or five-star hotels.)  So just how far is too far when tailoring the retail experience? When most consumers read about these types of tactics, they tend to feel violated, as if someone was reading their personal diary.  Nobody wants to be tricked into buying things.  Walking into a grocery store and seeing crates of apples stacked high looks enticing, but the crates are just for display and the apples may be over a year old.  Even though it's much cheaper to print markdown tags, many retailers manually write the price tags because consumers think the deal is better if the price is hand-written. The technology already exists to personalize prices and experiences for consumers.  People get upset thinking they paid more for something than a neighbor, but it already happens all the time with cars, flights, and the use of loyalty programs and coupons. There are many variables at play for any purchase.  The only difference is that the customer segments are getting smaller, sometimes reaching a size of one. There are two ways to look at this.  Retailers have always manipulated the environment to get consumers to buy more -- or -- Retailers are getting better at tuning the shopping experience for consumers.  I choose the latter, and so do most consumers by spending their money in the stores they like.  Consumers like to see fresh flowers at the entrance to the grocery store, and they like to see specials scrawled on chalkboards. The key is making sure that consumers benefit from the experience as well.  I'm willing to give up some personal information in exchange for discounts and more relevant marketing, and the next generation of shoppers is even less concerned about privacy.  Retailers need to use all the tools available to differentiate their offers and connect with their customers. So if Orbitz wants to put three-star hotels at the top of the list for me because I'm using a PC, that's fine by me.


  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath

    Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution.

    Reference Data Management
    As multi-national companies with a presence in multiple industry categories, market segments, and geographies, their ability to proactively manage changes and harness them to align their front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships as well as mappings, represents a key component of the broader agenda for enabling flexibility and agility without sacrificing enterprise-level consistency, regulatory compliance and control.

    Financial Transformation
    Periodically, companies find that processes implemented a decade or more ago no longer mirror the way of doing business and seek to proactively transform how they operate their business and underlying processes. Financial transformation often begins with the redesign of one’s chart of accounts. The ability to model and redesign one’s chart of accounts collaboratively, quickly validate against historical transaction bases and secure business buy-in across multiple line-of-business stakeholders, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications while piloting the in-flight transformation, can mean the difference between controlled success and project failure.

    Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle OpenWorld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle’s Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of its chart of accounts for corporate reporting. Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at OpenWorld on Thursday, October 4, 2012 at 12:45pm in the Ballroom of the InterContinental Hotel to interact with our esteemed speakers first hand.


  • SQL SERVER – Resetting Identity Values for All Tables

    - by pinaldave
    Sometimes an email requesting help generates more questions than the motivation to answer them. Let us go over one such example. I have converted the complete email conversation to chat format for easy consumption. I almost got a headache after around 20 email exchanges. I am sure you can read it and feel my pain.

    DBA: “I deleted all of the data from my database and now it contains table structure only. However, when I tried to insert new data in my tables I noticed that my identity values start from the same number where they actually were before I deleted the data.”
    Pinal: “How did you delete the data?”
    DBA: “Running DELETE in a loop.”
    Pinal: “What was the need for that?”
    DBA: “It was my development server and I needed to repopulate the database.”
    Pinal: “Oh, so why did you not use TRUNCATE, which would have reset the identity of your table to the original value when the data got deleted? This will work only if you want your database to reset to the original value. If you want to set any other value this may not work.”
    DBA: (silence for 2 days)
    DBA: “I did not realize it. Meanwhile I regenerated every table’s schema and dropped the table and re-created it.”
    Pinal: “Oh no, that would be an extremely long and incorrect way. Very bad solution.”
    DBA: “I understand. Should I just take a backup of the database before I insert the data, and when I need it, use the original backup to restore the database? This way I will have identity beginning with 1.”
    Pinal: “This is going totally downhill. It is wrong to do so on multiple levels. Did you even read my earlier email about TRUNCATE?”
    DBA: “Yeah. I found it in the spam folder.”
    Pinal: (I decided to stay silent)
    DBA: (After 2 days) “Can you provide me a script to reseed identity for all of my tables to value 1 without asking further questions?”
    Pinal:

        USE YourDatabase;
        EXEC sp_MSForEachTable '
        IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
        DBCC CHECKIDENT (''?'', RESEED, 1)'
        GO

    Our conversation ended here. If you have directly jumped to this statement, I encourage you to read the conversation one time. There is a difference between reseeding an identity value to 1 and reseeding it to its original value – I will write another blog post on this subject in the future.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
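
    As an aside, if you need to run the same reseed from application code (say, a development-reset utility), a minimal C# sketch using ADO.NET might look like the following; the connection string and database name are placeholders, and the T-SQL is the script from the conversation above:

        using System;
        using System.Data.SqlClient;

        class ReseedAllIdentities
        {
            // The same sp_MSForEachTable script from the post, embedded verbatim.
            const string ReseedSql = @"
        EXEC sp_MSForEachTable '
        IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
        DBCC CHECKIDENT (''?'', RESEED, 1)'";

            static void Main()
            {
                // Placeholder connection string: point it at the development database.
                using (var conn = new SqlConnection(
                    "Server=.;Database=YourDatabase;Integrated Security=true"))
                using (var cmd = new SqlCommand(ReseedSql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
                Console.WriteLine("Identity columns reseeded.");
            }
        }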


  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild, SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal. After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works. I'd suggest using it. It gives you a quick leg up. The concepts are pretty straightforward. There is a series of CLR commands that you use to configure a test and the test assertions. In between, you're calling TSQL, either calls to your structure, queries, or stored procedures. They already have the one thing that I always found wanting in database tests: a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused. One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.) which added lots of time and effort to setting up the tests, making testing more difficult, and therefore, less useful. Functionally, this is pretty similar to the Visual Studio tests and TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage & run the tests from makes a huge difference. Oh, and don't worry. You can still run these tests directly from TSQL too, so automation has not gone away. I'm still thinking about how I'd use this in a dev environment where I also had source control to fret about. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts & videos. Watch this space. Try the tool.


  • Oracle Tutor: *** CAUTION to Word .docx Users ***

    - by [email protected]
    Microsoft released a security update KB969604 for Office 2007 (around June 2009). This update causes document variables within Word docx files to be scrambled. This update might still be pushed out via Office 2007 updates. DO NOT save files as docx using MS Office 2007 until you apply the MS hotfix #970942, available here. If you are using Windows XP with Office 2003 or Office 2000 and have installed an older Office 2007 compatibility pack, documents saved as docx may also end up with scrambled document variables. Installing the 2007 compatibility pack published on 1/6/2010 (version 4) will prevent the document variables from becoming corrupt. Those on Windows 2000 may not be able to install the latest compatibility pack, or the compatibility pack may not function properly. This situation will hopefully be rectified in the coming months.

    What is a document variable?
    Document variables store data inside the document, invisible to the user. The Tutor software uses them when converting the document to HTML and when creating the flowchart, just to name a couple of uses.

    How will you know if a document's variables are scrambled?
    The difficulty in diagnosing the issue is that the symptoms can take myriad forms. There isn't a single error message or a single feature that one can point to and say, "test for the problem by doing this." The best clue about the error is seeing any kind of string in an error message that has garbage characters, question marks, xml code snippets, or just nonsense, such as "Language ?????????????xlr;lwlerkjl could not be found." It is also possible to see the corrupted data in the footers of the Word docs. And, just because the footers look correct does not mean that the document variables are not corrupted. The corruption problem does not occur in every document variable in the document, just some of them. Often it is less than a quarter of them.

    What is the difference between docx files and doc files?
    Office 2007 uses Office Open XML formats with .docx and .docm filename extensions.
      - Docx is an Office Open XML word document.
      - Docm is a macro-enabled Office Open XML document.
    This means the file structure behind the scenes is quite different from the binary file formats used prior to Office 2007, such as .doc, .dot, .xls, and .ppt.

    Solution Summary:
      - For Windows XP and Word 2007: install the hotfix, or save files as *.doc
      - For Windows XP and Word 2000 or 2003: install the latest compatibility pack, or save files as *.doc
      - For Windows 2000 with Word 2000 or 2003: do not use any compatibility pack; save files as *.doc

    Emily Chorba
    Principal Product Manager for Oracle Tutor
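
    For readers wondering what document variables look like in code, here is a minimal C# sketch using the Word interop assemblies. The file path and the variable name TutorVersion are made-up examples, and this illustrates the concept only; it is not part of the Tutor product:

        using System;
        using Word = Microsoft.Office.Interop.Word;

        class DocVariableDump
        {
            static void Main()
            {
                var app = new Word.Application();
                Word.Document doc = app.Documents.Open(@"C:\temp\example.docx");

                // List every document variable; scrambled variables show up
                // here as garbage characters or stray XML fragments.
                foreach (Word.Variable v in doc.Variables)
                {
                    Console.WriteLine("{0} = {1}", v.Name, v.Value);
                }

                // Add a variable -- data stored in the file, invisible to the user.
                object value = "1.0";                         // hypothetical value
                doc.Variables.Add("TutorVersion", ref value); // hypothetical name

                doc.Save();
                doc.Close();
                app.Quit();
            }
        }

    Enumerating Variables like this is also a quick way to spot corruption: scrambled values print as the kind of garbage strings quoted above.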


  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "minecraft mode" (1 block per point in the density map), varying the size of the cube based on the value in my density map (floats in the range 0 to 1), I get something like this: I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see how marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what I think it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values. I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that given a very low isovalue the accuracy level of the surface being rendered should increase, so in short "fewer 45 degree slopes, more rolling hills" type mesh output. However this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of a fix?

    EDIT: A little more detail on what I am seeing when I "marching cube" the data. OK, so firstly, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about this later). Then look at the shaping of the island: it's too... square. From the voxels rendered as boxes you get the impression there's a clean, soft, gradual hill, and yet in the image there are sharp falling edges even in the most central areas where the gradient in the first image looks the most smooth. The data is "regenerated" each time I run this, so no 2 islands come out the same, and it's purely random so not based on noise, but still, how can it look so smooth in 1 image and so not smooth in the other?
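
    For reference, the "move vertex along the edge" step is usually a small linear interpolation: the vertex is placed where the density crosses the isolevel along each cube edge. A minimal C# sketch, assuming an XNA-style Vector3 (the class and method names are mine):

        using System;
        using Microsoft.Xna.Framework;

        static class MarchingCubesMath
        {
            // Slide the vertex along the edge p1->p2 to the point where the
            // density field crosses the isolevel.
            public static Vector3 VertexInterp(float isolevel,
                                               Vector3 p1, Vector3 p2,
                                               float v1, float v2)
            {
                const float eps = 1e-5f;
                if (Math.Abs(isolevel - v1) < eps) return p1; // surface passes through p1
                if (Math.Abs(isolevel - v2) < eps) return p2; // surface passes through p2
                if (Math.Abs(v1 - v2) < eps) return p1;       // degenerate edge

                float t = (isolevel - v1) / (v2 - v1);        // 0..1 along the edge
                return p1 + t * (p2 - p1);
            }
        }

    This also suggests why boolean and float inputs can look identical: if the sampled densities are effectively only 0 and 1, t comes out the same constant for every crossing edge, so every vertex lands at the same fraction of its edge and the mesh stays blocky. Smooth output needs genuinely graded density values on both sides of the surface.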


  • Using lookahead assertions in regular expressions

    - by Greg Jackson
    I use regular expressions on a daily basis, as my daily work is 90% in Perl (legacy codebase, but that's a different issue). Despite this, I still find lookahead and lookbehind to be terribly confusing and often unreadable. Right now, if I were to get a code review with a lookahead or lookbehind, I would immediately send it back to see if the problem can be solved by using multiple regular expressions or a different approach. The following are the main reasons I tend not to like them:

      - They can be terribly unreadable. Lookahead assertions, for example, start from the beginning of the string no matter where they are placed. That, among other things, can cause some very "interesting" and non-obvious behaviors.
      - It used to be the case that many languages didn't support lookahead/lookbehind (or supported them as "experimental features"). This isn't the case quite as much, but there's still always the question as to how well it's supported.
      - Quite frankly, they feel like a dirty hack. Regexps often already are, but they can also be quite elegant, and have gained widespread acceptance.
      - I've gotten by without any need for them at all... sometimes I think that they're extraneous.

    Now, I'll freely admit that especially the last two reasons aren't really good ones, but I felt that I should enumerate what goes through my mind when I see one. I'm more than willing to change my mind about them, but I feel that they violate some of my core tenets of programming, including:

      - Code should be as readable as possible without sacrificing functionality -- this may include doing something in a less efficient but clearer way, as long as the difference is negligible or unimportant to the application as a whole.
      - Code should be maintainable -- if another programmer comes along to fix my code, non-obvious behavior can hide bugs or make functional code appear buggy (see readability).
      - "The right tool for the right job" -- I'm sure you can come up with contrived examples that could use lookahead, but I've never come across something that really needs them in my real-world development work. Is there anything that they're really the best tool for, as opposed to, say, multiple regexps (or, alternatively, are they the best tool for most cases they're used for today)?

    My question is this: Is it good practice to use lookahead/lookbehind in regular expressions, or are they simply a hack that has found its way into modern production code? I'd be perfectly happy to be convinced that I'm wrong about this, and simple examples are useful for illustration, but by themselves won't be enough to convince me.
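
    For what it's worth, the textbook case where lookarounds genuinely are the best tool is checking several overlapping conditions in one pass. A small C# sketch (the patterns are made-up illustrations):

        using System;
        using System.Text.RegularExpressions;

        class LookaroundDemo
        {
            static void Main()
            {
                // Password rule: at least one digit AND one uppercase letter,
                // 8+ characters. Each lookahead tests a condition from the
                // current position without consuming characters, so the
                // conditions can overlap -- hard to express as one plain regex.
                var strong = new Regex(@"^(?=.*\d)(?=.*[A-Z]).{8,}$");
                Console.WriteLine(strong.IsMatch("Passw0rdX")); // True
                Console.WriteLine(strong.IsMatch("password1")); // False (no uppercase)

                // Negative lookbehind: "port" only when not preceded by "trans".
                var port = new Regex(@"(?<!trans)port");
                Console.WriteLine(port.IsMatch("transport")); // False
                Console.WriteLine(port.IsMatch("seaport"));   // True
            }
        }

    One detail worth noting: a lookaround is evaluated at the current match position, not from the beginning of the string; the anchored ^(?=...) form above only starts at the beginning because the ^ puts it there. The alternative to the password pattern is either several separate regex tests (often the more readable choice, as the post suggests) or one unreadable combinatorial pattern.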


  • What type of interview questions should you ask for "legacy" programmers?

    - by Marcus Swope
    We have recently been receiving lots of applicants for our open developer positions from people who I like to refer to as "legacy" programmers. I don't like the term "old" because it seems a little prejudiced (especially to HR!) and it doesn't accurately reflect what I mean. We are a company that does primarily .NET development using TDD in an Agile environment, we use Git as a source control system, we make heavy use of OSS tools and projects and we contribute to them as well, and we have a strong bias towards adhering to strong Object-Oriented principles, SOLID, etc, etc, etc...

    Now, the normal list of questions that we ask doesn't really seem to apply to applicants that are fresh out of school, nor does it seem to apply to these "legacy" programmers. Here is how I (loosely) define a "legacy" programmer:

      - Spent a significant amount of their career working almost exclusively with Assembly/Machine Languages.
      - Primary accomplishments include work done with TANDEM systems.
      - Has extensive experience with technologies like FoxPro and ColdFusion.

    It's not that we somehow think that what we do is "better" than what they do; on the contrary, we respect these types of applicants and we are scared that we may be missing a good candidate. It is just very difficult to get a good read on someone who is essentially speaking a different language than you. To someone like this, it seems a little strange to ask a question like: What is the difference between an abstract class and an interface? Because I would think that they would almost never know the answer or even what I'm talking about. However, I don't want to eliminate someone who could be a very good candidate in their own right and could be able to eventually learn the stuff that we do. But I also don't want to just ask a bunch of behavioral questions, because I want to know about their technical background as well. Am I being too naive? Should "legacy" programmers like this already know about things like TDD, source control strategies, and best practices for object-oriented programming? If not, what questions should we ask to get a good representation of whether or not they are still able to learn them and keep up in our fast-paced environment?

    EDIT: I'm not concerned with whether or not applicants that meet these criteria are in general capable or incapable, as I have already stated that I believe that they can be 100% capable. I am more interested in figuring out how to evaluate their talents, as I am having a hard time figuring out how to determine if they are an A+ "legacy" programmer or if they are a D- "legacy" programmer. I've worked with both.
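
    As an aside, the interview question quoted above has a compact answer in code. A minimal C# sketch (the type names are invented for illustration): an interface declares a contract only, while an abstract class can mix a contract with state and shared implementation, and a class may implement many interfaces but inherit only one base class.

        using System;

        public interface IShape               // pure contract: no state, no code
        {
            double Area();
        }

        public abstract class Shape : IShape  // contract plus state and shared code
        {
            public string Name { get; protected set; }

            public abstract double Area();    // derived classes must override

            public override string ToString() // shared behavior lives here
            {
                return Name + ": " + Area();
            }
        }

        public class Circle : Shape
        {
            private readonly double radius;

            public Circle(double radius)
            {
                this.radius = radius;
                Name = "Circle";
            }

            public override double Area()
            {
                return Math.PI * radius * radius;
            }
        }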


  • Is throwing an error in unpredictable subclass-specific circumstances a violation of LSP?

    - by Motti Strom
    Say I wanted to create a Java List<String> (see spec) implementation that uses a complex subsystem, such as a database or file system, for its store, so that it becomes a simple persistent collection rather than a basic in-memory one. (We're limiting it specifically to a List of Strings for the purposes of discussion, but it could be extended to automatically de-/serialise any object, with some help. We can also provide persistent Sets, Maps and so on in this way too.) So here's a skeleton implementation:

        class DbBackedList implements List<String> {
            private DbBackedList() {}

            /** Returns a list, possibly non-empty */
            public static List<String> getList() { return new DbBackedList(); }

            public String get(int index) {
                return Db.getTable().getRow(index).asString(); // may throw DbExceptions!
            }

            // add(String), add(int, String), etc. ...
        }

    My problem lies with the fact that the underlying DB API may encounter connection errors whose exceptions are not specified in the List interface. My problem is whether this violates Liskov's Substitution Principle (LSP). Bob Martin actually gives an example of a PersistentSet in his paper on LSP that violates LSP. The difference is that his newly-specified Exception there is determined by the inserted value and so is strengthening the precondition. In my case the connection/read error is unpredictable and due to external factors, and so is not technically a new precondition, merely an error of circumstance, perhaps like OutOfMemoryError which can occur even when unspecified. In normal circumstances, the new Error/Exception might never be thrown. (The caller could catch it if it is aware of the possibility, just as a memory-restricted Java program might specifically catch OOME.) Is this therefore a valid argument for throwing an extra error, and can I still claim to be a valid java.util.List (or pick your SDK/language/collection in general) and not in violation of LSP? If this does indeed violate LSP and is thus not practically usable, I have provided two less-palatable alternative solutions as answers that you can comment on; see below.

    Footnote: Use Cases
    In the simplest case, the goal is to provide a familiar interface for cases when (say) a database is just being used as a persistent list, and to allow regular List operations such as search, subList and iteration. Another, more adventurous, use-case is as a slot-in replacement for libraries that work with basic Lists, e.g. if we have a third-party task queue that usually works with a plain List:

        new TaskWorkQueue(new ArrayList<String>()).start()

    which is susceptible to losing all its queue in the event of a crash. If we just replace this with:

        new TaskWorkQueue(new DbBackedList()).start()

    we get instant persistence and the ability to share the tasks amongst more than one machine. In either case, we could either handle connection/read exceptions that are thrown, perhaps retrying the connection/read first, or allow them to throw and crash the program (e.g. if we can't change the TaskWorkQueue code).


  • Retrieving Custom Attributes Using Reflection

    - by Scott Dorman
    The .NET Framework allows you to easily add metadata to your classes by using attributes. These attributes can be ones that the .NET Framework already provides, of which there are over 300, or you can create your own. Using reflection, the ways to retrieve the custom attributes of a type are:

    System.Reflection.MemberInfo
        public abstract object[] GetCustomAttributes(bool inherit);
        public abstract object[] GetCustomAttributes(Type attributeType, bool inherit);
        public abstract bool IsDefined(Type attributeType, bool inherit);

    System.Attribute
        public static Attribute[] GetCustomAttributes(MemberInfo member, bool inherit);
        public static bool IsDefined(MemberInfo element, Type attributeType, bool inherit);

    If you take the following simple class hierarchy:

        public abstract class BaseClass {
            private bool result;

            [DefaultValue(false)]
            public virtual bool SimpleProperty {
                get { return this.result; }
                set { this.result = value; }
            }
        }

        public class DerivedClass : BaseClass {
            public override bool SimpleProperty {
                get { return true; }
                set { base.SimpleProperty = value; }
            }
        }

    Given a PropertyInfo object (which is derived from MemberInfo, and represents a property in reflection), you might expect that these methods would return the same result. Unfortunately, that isn't the case. The MemberInfo methods strictly reflect the metadata definitions, ignoring the inherit parameter and not searching the inheritance chain when used with a PropertyInfo, EventInfo, or ParameterInfo object. They also return all custom attribute instances, including those that don't inherit from System.Attribute. The Attribute methods are closer to the implied behavior of the language (and probably closer to what you would naturally expect). They do respect the inherit parameter for PropertyInfo, EventInfo, and ParameterInfo objects and search the implied inheritance chain defined by the associated methods (in this case, the property accessors). These methods also only return custom attributes that inherit from System.Attribute. This is a fairly subtle difference that can produce very unexpected results if you aren't careful. For example, to retrieve the custom attributes defined on SimpleProperty, you could use code similar to this:

        PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty");
        var attributeList1 = info.GetCustomAttributes(typeof(DefaultValueAttribute), true);
        var attributeList2 = Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);

    The attributeList1 array will be empty, while the attributeList2 array will contain the attribute instance, as expected.

    Technorati Tags: Reflection, Custom Attributes, PropertyInfo


  • Executable execution path: does it depend on where the executable is called from?

    - by Valkea
    As I'm still a new Linux user, I keep discovering behaviours I'm unable to tell are "normal" or not. I searched the Internet, but as I can't really find an answer, I guess it's time to ask here. A few weeks ago I installed a small game called "Machinarium" and I played it... but a few days later when I wanted to continue my game I was unable to make the game start correctly. And as I didn't have the time to search, I gave up. But yesterday, as I was working on a program of mine, I had exactly the same behaviour. So I searched a bit and I discovered that when using Nautilus with the "List view", I was able to run the program (i.e. the program does find the sound, image etc. resources) when I was literally "inside" the executable folder, but unable when I was in a parent folder and expanding it to the executable folder to run it. To illustrate the behaviour, here are two screen shots. It doesn't work if the executable is double-clicked from here. It does work if the executable is double-clicked from here. This is indeed the same "place", but the Nautilus view is slightly different, as the current folder is not the same, and it seems to make a difference for the program. Furthermore, when I create a menu item via System Settings/Main Menu to the executable, it behaves just as if the executable can't find the resources (that's why I was unable to play Machinarium the second time, as I had created a menu short-cut after my first game). So I asked my program to generate a text file at its root when running, and I started to launch it from different "parent" folders to see where the text file would be generated. Each time, the file was generated in the top folder of the current Nautilus view. I expected to see it appear in the same folder as the executable (well, not exactly, as I was guessing what was happening, but before all this I would have expected to see it appear in the exe folder). Can anyone explain why it works like this (I guess it's normal)? How am I supposed to solve this when creating programs (should I detect the executable path in my C++ code, or should I organize the resource files another way than on Windows)?
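
    For what it's worth, the behaviour described above is the classic working-directory problem: the process inherits its current directory from wherever Nautilus (or the menu launcher) started it, and relative resource paths resolve against that directory rather than against the executable's location. The usual fix is to resolve resources against the executable's own directory (on Linux, a C++ program can read its path from /proc/self/exe). A minimal sketch of the idea, shown here in C# with a hypothetical resource path:

        using System;
        using System.IO;

        class ResourceLocator
        {
            static void Main()
            {
                // Current directory: depends entirely on how we were launched.
                Console.WriteLine("cwd: " + Directory.GetCurrentDirectory());

                // Executable directory: stable regardless of the launcher.
                string exeDir = AppDomain.CurrentDomain.BaseDirectory;
                Console.WriteLine("exe: " + exeDir);

                // Resolve resources against the executable, not the cwd.
                // "sounds/intro.wav" is a made-up example path.
                string soundPath = Path.Combine(exeDir, "sounds", "intro.wav");
                Console.WriteLine("loading: " + soundPath);
            }
        }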


  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing", was very interesting in regards to the direction most of the technical world is going. We all have been inundated with reasons to utilize cloud computing: price, availability, or even scalability; but what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications. The overall point of the article was pretty simple. It all boiled down to this: hosting "things" in the cloud (email, documents, etc...) is interpreted differently under the law regarding constitutional search and seizure than, say, a document or item that is kept in physical form at a business or home. If you physically have it stored somewhere, someone would have to get a warrant to search for it or seize it, but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item, then they will usually give access to the information. Obviously this is a big difference in interpretation of the law and the constitution due to technology. So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the cloud. In one example, this upcoming version gracefully lends itself to multi-tenancy, so that online or "cloud" hosting would be possible by service providers. Another aspect of the upcoming version is that it has updated its ability to store content outside of the database and in a cheaper, commoditized storage facility. This is called Remote Blob Storage (or RBS), which is the next evolution of External Blob Storage (or EBS). With this new functionality that businesses might look forward to, it is extremely important for them to understand that they might be opening themselves up to laws that do not require a warrant to search or seize their information stored in the cloud. It will be interesting to see how this all plays out in the next few months. Usually the laws change slowly in comparison to technology, so it might be a while until we see if it is actually constitutional to treat someone's content in the cloud differently than it would be in their possession; however, until some type of parity happens or more concrete laws address the differences, be very careful about what you put in the cloud. Michael


  • JavaScript: Code Folding

    - by Petr
    Today I would like to mention code folding in the new JavaScript editor support, which is available in the continual builds from our server. It's a basic feature, but it was mentioned in a comment under the mentioned post. With the new support you can fold comments and every code block between { and }; the old support allowed only methods to be folded. The difference is shown below. In the picture, on the left side is the current folding and on the right side the new one.

    The code folding can be switched off in the Editor Options (Tools main menu -> Options -> Editor category -> General Tab). In this dialog you can also define which folds should be collapsed by default when you open a file. These options more closely fit Java editor needs, but you can see in the next picture how the options are mapped for JavaScript code. The Method option folds all functions in the code. Other code blocks are folded via the option Tags and Other Code Blocks. The documentation comments (starting with /**) are folded via Javadoc Comments, and when you check Initial Comment, all comments that start with /* are folded by default.

    The new JavaScript editor also supports custom folds. To add your custom fold, type in two special comments as shown in this example:

        // <editor-fold>
        Your code goes here...
        // </editor-fold>

    You can define the default description of a collapsed fold by adding a "desc" attribute:

        // <editor-fold desc="This is my super secret genius code.">
        Your code goes here...
        // </editor-fold>

    You can set a fold to be collapsed by default by adding a "defaultstate" attribute:

        // <editor-fold defaultstate="collapsed">
        Your code goes here...
        // </editor-fold>

    There is a code template that helps with writing custom fold comments. The abbreviation for the template is fcom. As I wrote, the new JS support is available in the continual builds. Go here for more info.


  • Tales of a corrupt SQL log

    - by guybarrette
    Warning: I’m a simple dev, not an all-powerful DBA with godly powers. This morning, one of my sites was down and DNN reported a problem with the database.  A quick series of tests revealed that the culprit was a corrupted log file. Easy fix, I said: I have daily backups, so it’s just a matter of restoring a good copy of the database and log files.  Well, I found out that’s not exactly true.  You see, for this database, I have daily file backups, and these are not database backups created by SQL Server. So I restored a set of files from a couple of days ago, stopped the SQL service, copied the files over the bad ones, and restarted the service, only to find out that SQL doesn’t like when you do that.  It suspects something fishy and marks the database as suspect.  A database marked as suspect can’t be accessed at all.  So now what? I searched throughout the tubes of the InterWeb and found that you can restore from a corrupted log file by creating a new database with the same name as the defective one, then copying the restored database file (the one with data) over the newly created one.  Sweet!  But you still end up with SQL marking the database as suspect, although at least the newly created log is OK.  Well, not quite: it’s not corrupted, but the lack of data makes it not OK for SQL, so you need to rebuild the log.  How can you do that when SQL blocks any action on the database?  First, you need to change the database status from suspect to emergency.  Then you need to set the database for single access only.  After that, you need to repair the log with DBCC and do the DBA dance.  If you dance long enough, SQL should repair the log file.  Now you need to set the access back to multi user.  Here’s the T-SQL script:

        use master
        GO
        EXEC sp_resetstatus 'MyDatabase'
        ALTER DATABASE MyDatabase SET EMERGENCY
        ALTER DATABASE MyDatabase SET SINGLE_USER
        DBCC CHECKDB ('MyDatabase')
        ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        DBCC CHECKDB ('MyDatabase', REPAIR_ALLOW_DATA_LOSS)
        ALTER DATABASE MyDatabase SET MULTI_USER

    So I guess it would have been a lot easier to restore a SQL backup.  I can’t really say, but the InterWeb seems to say so.  Anyway, lessons learned:

      - Vive la différence: file backups are different from SQL backups.
      - Don’t touch me: SQL doesn’t like when you restore a file over a corrupted one.
      - The more the merrier: you should do both SQL and file backups.
      - WTF?: The InterWeb provides you with dozens of ways to deal with the problem, but many are SQL 2000 or SQL 2005 only, many are confusing, and many are written in strange dialects only DBAs understand.


  • Help with this optimization

    - by Milo
    Here is what I do: I have bitmaps which I draw into another bitmap. The coordinates are from the center of the bitmap, thus on a 256 by 256 bitmap, an object at 0.0,0.0 would be drawn at 128,128 on the bitmap. I also found the furthest extent and made the bitmap size 2 times the extent. So if the furthest extent is 200,200 pixels, then the bitmap's size is 400,400. Unfortunately this is a bit inefficient. If a bitmap needs to be drawn at 500,500 and the other one at 300,300, then the target bitmap only needs to be 200,200 in size. I cannot seem to find a correct way to draw in the components correctly with a reduced size. I figure out the target bitmap size like this:

        float AvatarComposite::getFloatWidth(float& remainder) const
        {
            float widest = 0.0f;
            float widestNeg = 0.0f;
            for(size_t i = 0; i < m_components.size(); ++i)
            {
                if(m_components[i].getSprite() == NULL)
                {
                    continue;
                }
                float w = m_components[i].getX() +
                    (((m_components[i].getSprite()->getWidth() / 2.0f) *
                      m_components[i].getScale()) / getWidthToFloat());
                float wn = m_components[i].getX() -
                    (((m_components[i].getSprite()->getWidth() / 2.0f) *
                      m_components[i].getScale()) / getWidthToFloat());
                if(w > widest)
                {
                    widest = w;
                }
                if(wn > widest)
                {
                    widest = wn;
                }
                if(w < widestNeg)
                {
                    widestNeg = w;
                }
                if(wn < widestNeg)
                {
                    widestNeg = wn;
                }
            }
            remainder = (2 * widest) - (widest - widestNeg);
            return widest - widestNeg;
        }

    And here is how I position and draw the bitmaps:

        int dw = m_components[i].getSprite()->getWidth() * m_components[i].getScale();
        int dh = m_components[i].getSprite()->getHeight() * m_components[i].getScale();
        int cx = (getWidth() + (m_remainderX * getWidthToFloat())) / 2;
        int cy = (getHeight() + (m_remainderY * getHeightToFloat())) / 2;
        cx -= m_remainderX * getWidthToFloat();
        cy -= m_remainderY * getHeightToFloat();
        int dx = cx + (m_components[i].getX() * getWidthToFloat()) - (dw / 2);
        int dy = cy + (m_components[i].getY() * getHeightToFloat()) - (dh / 2);
        g->drawScaledSprite(m_components[i].getSprite(), 0.0f, 0.0f,
            m_components[i].getSprite()->getWidth(),
            m_components[i].getSprite()->getHeight(), dx, dy, dw, dh, 0);

    I basically store the difference between the original 2 * longest extent bitmap and the new optimized one, then I translate by that much, which I would think would cause me to draw correctly, but then some of the components look cut off. Any insight would help. Thanks

    Read the article

  • The value of money

    - by ambreesh
    A dictionary definition of money is "any circulating medium of exchange, including coins, paper money, and demand deposits". If you ask an economist for a definition of money, you will be introduced to terms like M1, M2, M3, all of which denote tangible assets - currency, and anything that is liquid enough to be used as currency; checks, stamps and now mobile minutes being examples. The macroeconomic theory of money is fascinating - the effect of money supply on exchange rates and interest rates, the concept of the "money multiplier" (if I deposit $10 into a bank, the bank will likely loan $8 of it to someone else, who will then give it to someone else in exchange for goods and services, who will then likely deposit it again, which will result in the bank loaning it again, and so on - making that $10 of money supply worth a lot more ($10+$8+$x+...)). But all this depends on money supply - in other words, money that is printed by the mint. The Treasury Department spends a lot of time figuring out how much money to print; there is a lot being written about QE2 nowadays, which is intended to increase the money supply. Money is used to purchase goods and services, and yes, it is saved too, but that is so one can purchase goods and services later. Completely unrelated, there is a sea change occurring in the web world, dominated by, I believe, Facebook. With 500M active users and growing, FB has the ability to introduce a "money supply" which is completely unrelated to today's "money". Using today's money, a FB user can buy a certain number of FB$s, and then use the FB$s within FB to purchase goods and services - with the money multiplier kicking in. I remember talking with a colleague about this a few years ago: the true way to monetize the web is to introduce an alternative system to the existing one, and FB has the ability to do just that. There is enough momentum, enough mass, for FB to start to monetize its user base. And completely screw up the economists at the Treasury, not to mention disintermediating the banks completely. The only other ubiquitous asset is mobile minutes. People exchanging mobile minutes for tangible goods and services happens today; the big difference, however, is the demographic. While Safaricom offers this ability in Kenya today, FB has the 15-40 year old middle-class user as its user. And the next generation is growing up with FB as a standard channel for communicating with their peers. Virtual flowers when going in for the kill? If your target is an avid FB user, why not? It certainly is a lot more green - no pun intended!
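
    As a quick worked version of that multiplier arithmetic (an illustration using the post's own $10 deposit and $8 re-loan, i.e. an 80% re-lending rate), the total money created is a geometric series:

        \[
        10 + 10(0.8) + 10(0.8)^2 + \cdots = \frac{10}{1 - 0.8} = \$50
        \]

    So under those assumptions, a single $10 deposit supports up to $50 of money supply in the limit.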

    Read the article

  • links for 2011-02-21

    - by Bob Rhubart
    - Calling all enterprise architects | Enterprise architecture - InfoWorld: Nominations are now open for the 2011 InfoWorld Enterprise Architecture Award, honoring companies whose enterprise architecture initiatives made a difference. (tags: ping.fm)
    - Red Tape, Part II : OTN Garage: "How do you back up all of that storage? Tape: really fast tape. And, lots of it. This creates a whole variety of very interesting challenges today, elevating the topic to - at the very least - glamorous, but I think it qualifies as being downright hot!" - Kemer Thomson (tags: oracle entarch datastorage)
    - The Buttso Blathers: Using Secure Config Files with the WebLogic Maven Plugin: "WebLogic Server has long had a mechanism to provide a more secure way of connecting to the Administration Server from client utilities such that the username and password do not need to be specified and therefore can't be seen from the process list or command shell history." (tags: oracle weblogic)
    - World-class EA | Open Group Blog: "World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers value for societal value." - Mick Adams (tags: enterprisearchitecture entarch opengroup)
    - Enterprise Process Maps: A Process Picture worth a Million Words (Telecommunications Architecture Corner): "Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment..." - Raul Goycoolea (tags: oracle otn telecommunications businessprocess entarch bpm)
    - Andrejus Baranovskis's Blog: WebCenter PS3 Customization Manager - Long Awaited Feature for MDS: Oracle ACE Director Andrejus Baranovski shares "really great news for those of you who are working on MDS personalization and customization support in Oracle Fusion Middleware applications." (tags: oracle otn oracleace webcenter enterprise2.0)
    - Oracle WebCenter: Common User Experience Architecture (Oracle Enterprise 2.0 Blog): Kellsey Ruppel describes "how the new release of Oracle WebCenter delivers a Common User Experience Architecture." (tags: oracle otn webcenter enterprise2.0)
    - Java / Oracle SOA blog: Do your SOA deployments & configuration with AIA: Oracle ACE Edwin Biemond illustrates the use of the SOA Suite / FMW deployment framework, "one of the Application Integration Architecture (AIA) hidden gems." (tags: oracle oracleace soa otn fusionmiddleware)
    - Enterprise Software Development with Java: Clustering Stateful Session Beans with GlassFish 3.1: Oracle ACE Director Markus Eisele describes what he did "to get a Stateful Session Bean failover scenario working with two instances on one node." (tags: oracle otn oracleace glassfish)
    - Enhanced REST Support in Oracle Service Bus 11gR1 (SOA Thinker): Jeff Davies illustrates how to re-implement the REST-ful Products services using query strings for passing parameter information. (tags: oracle otn soa REST)

    Read the article

  • Customer won't decide, how to deal?

    - by Crazy Eddie
    I write software that involves the use of measured quantities, many input by the user, most displayed, that are fed into calculation models to simulate various physical thing-a-majigs. We have created a data type that allows us to associate a numeric value with a unit; we call these "quantities" (big duh). Quantities and units are unique to a dimension; you can't attach kilograms to a length, for example. Math on quantities does automatic unit conversion to SI, and the type is dimension-safe (you can't assign a weight to a pressure, for example). Custom UI components have been developed that display the value and its unit and/or allow the user to edit them. Dimensionless quantities, having no units, are a single, custom case implemented within the system. There's a set of related quantities such that our target audience apparently uses them interchangeably. The quantities are used in special units that embed the conversion factors for the related quantity dimensions... in other words, when using these units, converting from one to another simply involves multiplying the value by 1 to the dimensional difference. However, conversion to/from the calculation system (SI) still involves these factors. One of these related quantities is a dimensionless one that represents a ratio. I simply can't get the "customer" to recognize the necessity of distinguishing these values and their use. They've picked one and want to use it everywhere, customizing the way we deal with it in special places. In this case they've picked one of the dimensions that has a unit... BUT, they don't want there to be a unit (GRR!!!). This of course is causing us to implement special overrides for our UI elements and such. That of course is often forgotten, and worse... after a couple of months everyone forgets why it was necessary and why we're using this dimensional value, calling it the wrong thing and disabling the unit. I could just ignore the "customer" and implement the type as the dimensionless quantity, which makes the most sense. However, that leaves the team responsible for figuring it out when they've given us a formula using one of the other quantities. We have to not only figure out that it's happening, we have to decide what to do. This isn't a trivial deal. The other option is just to say to hell with it, do it the customer's way, and let it waste continued time and effort because it's just downright confusing as hell. However, I can't count the number of times someone has said, "Why is this being done this way? It makes no sense at all," and the team goes off the deep end trying to figure it out. What would you do? Currently I'm still attempting to convince them that even if they use the terms interchangeably, we at the least can't do that within the product discussion. Don't have high hopes though.
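
    For readers unfamiliar with the pattern, here is a minimal sketch of a dimension-safe quantity type like the one described; the exponent encoding, aliases, and values are illustrative assumptions, not the poster's actual implementation:

        #include <iostream>

        // Minimal sketch of a dimension-safe quantity: the dimension lives in
        // the type as exponents of (mass, length, time), so mixing dimensions
        // fails to compile instead of failing at runtime.
        template <int M, int L, int T>
        struct Quantity
        {
            double value;  // always stored in SI units
            explicit Quantity(double v) : value(v) {}
        };

        // Multiplying quantities adds the dimension exponents
        // (e.g. length * length = area).
        template <int M1, int L1, int T1, int M2, int L2, int T2>
        Quantity<M1 + M2, L1 + L2, T1 + T2>
        operator*(Quantity<M1, L1, T1> a, Quantity<M2, L2, T2> b)
        {
            return Quantity<M1 + M2, L1 + L2, T1 + T2>(a.value * b.value);
        }

        using Mass          = Quantity<1, 0, 0>;
        using Length        = Quantity<0, 1, 0>;
        using Area          = Quantity<0, 2, 0>;
        using Dimensionless = Quantity<0, 0, 0>;

        int main()
        {
            Length width(2.0);              // 2 m
            Area   floor = width * width;   // exponents add: 4 m^2
            // Mass oops = width;           // would not compile: wrong dimension
            std::cout << floor.value << " square meters\n";
            return 0;
        }

    With the exponents in the type, the "related quantities" problem above becomes visible at compile time: a ratio (Dimensionless) and a unit-bearing quantity simply aren't the same type.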

    Read the article

< Previous Page | 240 241 242 243 244 245 246 247 248 249 250 251  | Next Page >